It takes a lot of processing power to navigate the roads safely. Think about all the decisions you make on your way to work each morning—decisions that are so routine you barely realize you’re making them. Now, imagine trying to program algorithms for those decisions, not to mention the countless others you’re capable of making should the need arise, and you start to get an idea of the magnitude of the challenge.
In most cases, cloud computing offers the best solution to the problem of massive processing requirements. For autonomous vehicles, though, every millisecond counts. Google, Amazon, and the other big cloud providers rely on gargantuan server farms, with a handful of centralized facilities handling the computing needs of users across an entire country. These datacenters can do some serious crunching, but when the information has to travel from a vehicle to a distant facility and back again, the distance is problematic, if not fatal.
And that’s leaving aside lapses in connectivity. When you’re barreling down the road at 65 miles per hour with a two-ton load—or, for that matter, if you’re going 10 miles an hour in a Tesla sedan—you can’t afford to wait while your navigation system reconnects.
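To put rough numbers on “every millisecond counts,” here’s a back-of-the-envelope sketch. The speeds and round-trip times are illustrative assumptions, not measurements of any real system:

```python
# Back-of-the-envelope: how far does a vehicle travel while waiting on the network?
# The speeds and round-trip times below are illustrative assumptions.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_during_round_trip(speed_mph: float, round_trip_ms: float) -> float:
    """Meters traveled while a request goes out to a server and comes back."""
    return speed_mph * MPH_TO_MPS * (round_trip_ms / 1000.0)

# A truck at 65 mph with a 100 ms round trip to a distant datacenter:
print(f"{distance_during_round_trip(65, 100):.2f} m")  # ~2.91 m of blind travel
# The same truck with a 5 ms hop to nearby edge hardware:
print(f"{distance_during_round_trip(65, 5):.2f} m")    # ~0.15 m
```

Nearly three meters is half a car length covered before the answer even arrives, which is why the rest of this story is about moving the computation closer.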
Okay, so the answer must be huge servers housed somewhere in the vehicle, right? Well, onboard computers run into all the old problems the cloud was originally created to solve. Servers tend to be large, heavy, delicate, and energy-draining. And the more processing power they have, the worse all these liabilities become.
The new solution that’s gaining currency is what engineers are calling edge computing. In the usual cloud paradigm, the center of a computing network is, unsurprisingly, the datacenter. This is where all your data is stored and where all your processing takes place.
In some cases, the edge of the network is a server located closer to where the data is collected and used. Some service providers are opting to build smaller datacenters and disperse them across their regions of coverage, creating so-called edge networks. The model is similar to the way cell phone carriers build widely dispersed towers to reach all their customers as quickly and efficiently as possible.
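One simplified way to picture how a client attaches to such a network: probe a few candidate nodes and use the lowest-latency one. Real providers typically steer traffic with DNS or anycast routing, and the endpoints below are hypothetical, so treat this as a sketch of the idea rather than any carrier’s actual mechanism:

```python
import socket
import time

# Hypothetical edge endpoints; real deployments resolve these via DNS or anycast.
EDGE_NODES = ["edge-east.example.com", "edge-central.example.com", "edge-west.example.com"]

def probe_latency(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Rough round-trip estimate: time a TCP handshake to the node."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")  # unreachable nodes sort last

def nearest_edge_node(nodes: list[str]) -> str:
    """Pick the node that answered fastest."""
    return min(nodes, key=probe_latency)

print(nearest_edge_node(EDGE_NODES))
```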
But edge computing can also mean that processing takes place inside the devices gathering the data. We’re all familiar with the minicomputers we carry around in our pockets. But everything from refrigerators to home security systems to assembly-line machines on factory floors is now connected to the internet as part of what’s known as the Internet of Things (IoT).
What this means for the autonomous vehicle industry is that some of the decision-making will be handled by onboard computers. But some of the processing will be pushed even farther toward the edge, to smart devices and sensors. Cameras, for instance, will be programmed to assign images to categories, a first step toward deciding on a response to the situation being captured by the lenses.
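As a minimal sketch of that first stage, a smart camera might run a lightweight classifier and only escalate ambiguous frames to the onboard computer. The category set, confidence threshold, and escalation helper here are all invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "pedestrian", "vehicle", "clear_road"
    confidence: float  # 0.0-1.0 score from the on-camera model

LABELS = ["pedestrian", "vehicle", "clear_road"]  # invented category set
ESCALATE_BELOW = 0.80  # illustrative confidence threshold, not a real spec

def classify_frame(frame: bytes) -> Detection:
    """Stand-in for a lightweight on-camera model (e.g., a quantized CNN)."""
    return Detection(random.choice(LABELS), random.random())

def handle_frame(frame: bytes) -> str:
    det = classify_frame(frame)
    if det.confidence >= ESCALATE_BELOW:
        return det.label  # confident: the camera resolves this frame itself
    # Ambiguous frame: push it one tier up, to the vehicle's onboard computer.
    return escalate_to_onboard(frame, det)

def escalate_to_onboard(frame: bytes, det: Detection) -> str:
    """Hypothetical hand-off to the next tier of processing."""
    return f"escalated ({det.label} @ {det.confidence:.2f})"

print(handle_frame(b"raw-camera-bytes"))
```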
Sounds cool, right? But how close is this stuff to market? Well, both Microsoft, with its Azure IoT Edge, and Amazon, with AWS IoT Greengrass, are already offering versions of this technology.
So, essentially, edge computing means that data is processed in stages. The most basic computations can be made on the devices or sensors gathering the data. Then there may be another stage in which an onboard server takes on higher-order categorization or goes further in the decision-making process. Next, the data may be sent to small, regional datacenters. And, finally, the data may be transmitted to one of those large, centralized facilities for longer-term storage and updates to decision-making strategies.
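Sketched as a routing rule, that staging might look like the following. The tier names, latency budgets, and the rule itself are assumptions for illustration, not anyone’s production design:

```python
from enum import Enum

class Tier(Enum):
    SENSOR = "on-sensor"           # basic computations on the device itself
    ONBOARD = "onboard server"     # higher-order categorization in the vehicle
    REGIONAL = "regional edge DC"  # small, nearby edge datacenter
    CENTRAL = "central cloud"      # long-term storage, strategy updates

def route(deadline_ms: float, needs_history: bool) -> Tier:
    """Illustrative rule: the tighter the deadline, the closer the tier."""
    if deadline_ms < 10:
        return Tier.SENSOR
    if deadline_ms < 100:
        return Tier.ONBOARD
    if not needs_history:
        return Tier.REGIONAL
    return Tier.CENTRAL

print(route(5, False))    # Tier.SENSOR  -- e.g., flagging an obstacle in-frame
print(route(50, False))   # Tier.ONBOARD -- e.g., fusing sensors into a maneuver
print(route(500, True))   # Tier.CENTRAL -- e.g., updating decision-making strategies
```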
The precise details of where the processing occurs and which servers handle which aspects of a vehicle’s response to its situation are what engineers, software developers, and device manufacturers are working out now. But the basic idea is that real-time responsiveness is best achieved by pushing moment-by-moment decisions closer and closer to the situation being responded to.
A good analogy is to human reflexes. There are some situations your body needs to react to immediately. If you touch a hot surface, for instance, you don’t have time to wait for the neural signal to travel from the nerves in your hand, up your arm, into your spine, up to your brain, and all the way back. You need to make a very simple decision to pull your hand away as quickly as possible. The human body accomplishes this by activating a reflex as soon as the pain signal reaches the spine. But theoretically the reflexive response could be activated by the nerves in the hands as well.
In the same way, an autonomous truck may identify a dangerous road condition, like a patch of ice, and rather than sending the information to some far-off datacenter and waiting for instructions, its onboard computer decides to delay executing a lane change it had planned. The data from the incident and the response will eventually reach the datacenter for evaluation, but by then the danger will long since have been evaded.
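Here’s a minimal sketch of that reflex pattern, with the ice event, friction threshold, and telemetry queue all invented for illustration: act locally first, report upstream on a slower schedule.

```python
import queue
import threading
import time

telemetry = queue.Queue()  # incident data to upload when convenient

planned_maneuver = {"action": "lane_change", "deferred": False}

def on_ice_detected(friction_estimate: float) -> None:
    """Reflex path: runs entirely on the onboard computer, no network involved."""
    if friction_estimate < 0.3:  # illustrative slipperiness threshold
        planned_maneuver["deferred"] = True  # act first...
        telemetry.put({"event": "ice", "friction": friction_estimate,
                       "response": "lane_change_deferred", "t": time.time()})

def uploader() -> None:
    """Slow path: ships incident data to the datacenter for later evaluation."""
    while True:
        record = telemetry.get()
        print(f"uploading to datacenter: {record}")  # stand-in for a network call
        telemetry.task_done()

threading.Thread(target=uploader, daemon=True).start()
on_ice_detected(friction_estimate=0.2)  # the reflex fires locally, instantly
telemetry.join()                        # the upload happens on its own schedule
```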
Will edge computing pave the way for fully autonomous vehicles anytime soon? There are still plenty of kinks to work out, but advances are coming faster than anyone anticipated even a few years ago.