One of the areas we've put a lot of effort into lately is AI at the edge. With very little power consumption, typically five watts, particularly with hardware such as Google's Coral chip, we can now do massive inference at the edge. We can convert any dumb sensor, like a camera or a microphone, into a smart device: instead of shuffling all the data it records up to the cloud, the data is interpreted locally and we send events up to the cloud instead.
I'll give you a use case we're working on now: recognizing birds around windmills. Instead of sending all the camera data up to the cloud and then having someone look for the bird in the picture, we can use a well-trained TensorFlow model to recognize what we're looking for. Instead of sending the data, or all the video, we can send, "There is a bird coming." Remember that we're building edge units that can also control actuators, so we can take countermeasures, maybe sound, maybe lights, whatever, to chase the bird away. What we're trying to prevent is birds hitting the windmill and dying. That's very expensive for the operators of the windmill and really bad for the birds, of course.
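To make that pattern concrete, here is a minimal sketch of the local inference loop: a camera frame is classified on a Coral Edge TPU with a TensorFlow Lite model, and only a small event is sent upward. The model file name, the class index, the confidence threshold, and the publish_event and trigger_countermeasures helpers are illustrative assumptions, not the actual Pratexo implementation.

```python
# Minimal sketch: classify a frame locally on a Coral Edge TPU and emit an
# event instead of streaming video. Model name, class index, threshold, and
# the two helper functions are hypothetical placeholders.

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="bird_detector_edgetpu.tflite",                   # hypothetical model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],   # Coral Edge TPU delegate
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect_bird(frame: np.ndarray) -> float:
    """Run one camera frame (already in the model's shape/dtype) and return the 'bird' score."""
    interpreter.set_tensor(input_details[0]["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return float(scores[0])  # assume index 0 is the 'bird' class

def publish_event(event: dict) -> None:
    """Placeholder: send a small JSON event up to the cloud (e.g. over MQTT)."""
    print("EVENT:", event)

def trigger_countermeasures() -> None:
    """Placeholder: drive local actuators (sound, lights) to scare the bird away."""
    print("Activating deterrents")

def on_frame(frame: np.ndarray) -> None:
    score = detect_bird(frame)
    if score > 0.8:                       # confidence threshold, tuned per site
        publish_event({"type": "bird_detected", "score": score})
        trigger_countermeasures()         # act locally, no cloud round-trip needed
```

The point of the sketch is the shape of the data flow: raw frames stay on the device, and only the event (and the local actuator command) leaves it.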
Another example, in the power industry: we're using microphones to listen to transformers. We could send all that noise up to the cloud, but why would we? We know what we're listening for: sparks. When sparks are there, we know the transformer needs maintenance. Instead of sending all the sound, which would be terabytes of data over time, all we do is say, "This transformer seems to have a problem. We believe there are sparks. Here is a sample of a recording. Service technician, can you please verify that?"
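A rough illustration of that idea, not Pratexo's actual detector: flag spark-like transients by the energy in a high-frequency band, then send only an event plus a short clip. The sample rate, band, threshold, and helper names are all assumptions.

```python
# Sketch: detect spark-like acoustic signatures at the edge and upload only
# an event and a short sample, not the continuous audio stream.

import numpy as np

SAMPLE_RATE = 48_000          # Hz, assumed microphone sampling rate
SPARK_BAND = (8_000, 20_000)  # Hz, assumed band where spark energy shows up
THRESHOLD = 0.05              # fraction of total energy; tuned per installation

def spark_score(chunk: np.ndarray) -> float:
    """Fraction of the chunk's spectral energy that falls in the spark band."""
    spectrum = np.abs(np.fft.rfft(chunk)) ** 2
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / SAMPLE_RATE)
    band = (freqs >= SPARK_BAND[0]) & (freqs <= SPARK_BAND[1])
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-12))

def save_sample_for_technician(chunk: np.ndarray) -> None:
    """Keep roughly one second of audio for the service technician to verify."""
    np.save("spark_sample.npy", chunk[:SAMPLE_RATE])

def on_audio_chunk(chunk: np.ndarray, transformer_id: str) -> None:
    score = spark_score(chunk)
    if score > THRESHOLD:
        event = {
            "type": "possible_sparks",
            "transformer": transformer_id,
            "score": round(score, 3),
        }
        print("EVENT:", event)              # placeholder for the cloud publish
        save_sample_for_technician(chunk)   # attach a short recording, not the full stream
```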
Not only does this save data, it also makes a self-monitoring transformer practical. Before we put the solution in place, the power company would have to send a service technician out to the transformer station. This was in Norway, where it could take hours to hike in to these transformer stations just to check whether something was wrong. Now, instead, they get a message saying, "Hey, there's something wrong."
You can imagine: any camera can now be converted into a smart sensor, any microphone, any high-data-rate sensor, whether it's an EKG taking readings. If we're allowed to, we can look at the EKG and say, "You have a problem," or "This was clear."
But one of the biggest challenges we found is: how do you support the whole work process around AI at the edge? One of the things we've done at Pratexo is run the edge nodes in different modes. We can run in a mode where we do nothing but record: we record sensor data and push it up to the cloud. Why? Because the best learning environments, right now, are in the cloud.
As you probably know, training a model requires a lot more compute power. We could train at the edge, but in general it's more practical to push the data from the edge up to the cloud. Once the data is in the cloud, we use the vendor's training environment, Google's, for instance. Then we train the model while our edge devices are still recording. At some point you're happy with the model and you want to test how well it works.
You push the model back out to the edge. We're able to push that model to our compute units or to third-party compute units. For instance, we're working with Axis, whose cameras have a Coral chip in them, so we can push the model to those. Meanwhile, we can still record. We can be in a hybrid mode where we infer events from the data but are still recording and still optimizing. We push the data up again, you retrain the model or produce a more optimal version of it, and we push it out again.
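A simplified sketch of that mode switching is below. The mode names, the transition from recording to hybrid on model deployment, and the upload/publish helpers are illustrative assumptions, not Pratexo's actual API.

```python
# Sketch of the record / hybrid / infer workflow: record raw data for cloud
# training, then infer locally once a model has been pushed to the node.

from enum import Enum, auto

class NodeMode(Enum):
    RECORD = auto()   # stream raw sensor data to the cloud for training
    HYBRID = auto()   # infer locally AND keep uploading data to refine the model
    INFER = auto()    # inference only; send events, keep raw data at the edge

class EdgeNode:
    def __init__(self, model=None, mode=NodeMode.RECORD):
        self.model = model
        self.mode = mode

    def deploy_model(self, model) -> None:
        """Receive a (re)trained model pushed down from the cloud."""
        self.model = model
        if self.mode is NodeMode.RECORD:
            self.mode = NodeMode.HYBRID    # start inferring while still recording

    def handle_sample(self, sample) -> None:
        if self.mode in (NodeMode.RECORD, NodeMode.HYBRID):
            upload_raw_sample(sample)       # hypothetical: push data up for training
        if self.mode in (NodeMode.HYBRID, NodeMode.INFER) and self.model:
            event = self.model.infer(sample)
            if event is not None:
                publish_event(event)        # hypothetical: send only the event

def upload_raw_sample(sample) -> None:
    print("uploading raw sample for training")

def publish_event(event) -> None:
    print("EVENT:", event)
```

The loop then repeats: data uploaded in hybrid mode feeds the next round of training, and deploy_model delivers the improved version back to the node.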
This whole workflow, from recording to learning to deploying the model, turned out to be a lot trickier than we first thought, but it's one of the things we set out to solve.
I think it's worth mentioning, when we're discussing what's coming next, the concept of swarm computing. This is going to be very important in the IoT space. Very often, the machines out in the field will form ad-hoc swarms, maybe because they're geographically close together, maybe because they start collaborating on something. Our research shows that we can also solve this at the network layer, at the very bottom layer, so the machines can hook up together and start collaborating in a very dynamic way. I'm really excited about that because there are so many of our use cases where there would be a tremendous advantage to having a set of machines collaborate, but it would be different machines depending on context.
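As a rough illustration of the idea, and not how Pratexo implements swarms, nodes on the same network segment could announce themselves over UDP broadcast and collect nearby peers to collaborate with. The port number and message format are arbitrary choices for this sketch.

```python
# Sketch only: ad-hoc peer discovery on a local network segment via UDP
# broadcast, so geographically close machines can find each other dynamically.

import json
import socket
import time

PORT = 50000  # arbitrary port chosen for this sketch

def announce(node_id: str) -> None:
    """Broadcast this node's presence to the local segment."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    message = json.dumps({"node": node_id, "ts": time.time()}).encode()
    sock.sendto(message, ("255.255.255.255", PORT))
    sock.close()

def listen_for_peers(timeout: float = 5.0) -> set:
    """Collect the IDs of peers announcing themselves nearby within the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    sock.settimeout(timeout)
    peers = set()
    try:
        while True:
            data, _ = sock.recvfrom(1024)
            peers.add(json.loads(data)["node"])
    except socket.timeout:
        pass
    finally:
        sock.close()
    return peers
```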