HPC | Manufacturing

22 March 2018

Deep learning for autonomous cars

Written by Nick Dale

Nick is Senior Director at Verne Global and leads our work across HPC and specifically its implementation within discrete and process manufacturing.

Since Sebastian Thrun and his team used machine learning to win the DARPA Grand Challenge in 2005, machine learning and deep learning have been an integral part of developing autonomous vehicle technology. Today, rapid advances in deep learning continue to drive the fantastic growth of the autonomous vehicle industry, fuelled both by a vibrant start-up scene and by growing investment from established automakers eager to win and maintain a leadership position in the market. Despite great progress, how best to employ deep learning in the field of vehicle autonomy is still a complex question.

The first and most popular approach to deep learning in autonomous vehicles is the traditional “robotics” or “mediated perception” approach: computer vision, LiDAR and other components that rely on deep learning parse a scene into relevant objects (or “features”), such as pedestrians or other cars. The information from these disparate systems is then assembled by the car’s processing unit, where driving decisions are made according to a system of pre-programmed rules. Companies such as Uber and Google’s Waymo have pursued this “traditional” approach, using deep learning in the computer vision and sensor systems to perform tasks like locating lanes and pedestrians, recognising and understanding road signs, and monitoring the vehicle’s blind spots.
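To make that division of labour concrete, here is a minimal Python sketch of a mediated-perception pipeline. The detector and the rule thresholds are hypothetical illustrations, not taken from Uber’s or Waymo’s actual systems: a learned model produces labelled objects, and hand-written rules turn them into a driving action.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "car", "stop_sign"
    distance_m: float  # estimated distance from the vehicle
    in_path: bool      # whether the object lies in our planned path

def perceive(camera_frame, lidar_sweep) -> List[Detection]:
    """Deep-learning stage. In a real stack this is a trained model fusing
    camera and LiDAR data; here a canned result stands in for it."""
    return [Detection(label="pedestrian", distance_m=18.0, in_path=True)]

def decide(detections: List[Detection], speed_mps: float) -> str:
    """Rules stage: hand-written logic maps perceived objects to a
    driving action. No learning happens here."""
    for d in detections:
        if d.label == "pedestrian" and d.in_path and d.distance_m < 30:
            return "brake"
        if d.label == "stop_sign" and d.distance_m < 50:
            return "brake"
    return "cruise"

action = decide(perceive(camera_frame=None, lidar_sweep=None), speed_mps=12.0)
# -> "brake": the rules stage reacts to the perceived pedestrian

The key property is that the two stages are separable: engineers can inspect and tune the rules in decide() without retraining the perception model.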

Although this method has been widely adopted, it’s not without its limitations. For example, sensor input may be degraded when rain or mist collects on sensor lenses, reducing the quality of the data returned to the system. LiDAR systems in particular may be vulnerable to poor weather, sometimes falsely perceiving rain or snow as approaching objects to be avoided. Black ice and other road conditions further complicate the safe operation of these systems.
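One common mitigation for weather noise, sketched below in Python/NumPy as a generic illustration rather than any vendor’s actual pipeline, is statistical outlier removal: isolated LiDAR returns with few close neighbours are more likely to be raindrops or snowflakes than solid obstacles, so they can be filtered out before object detection runs.

import numpy as np

def remove_sparse_returns(points: np.ndarray, radius: float = 0.5,
                          min_neighbours: int = 3) -> np.ndarray:
    """Drop LiDAR returns with fewer than min_neighbours other points
    within radius metres. Brute-force O(n^2), fine for a sketch; real
    systems would use a spatial index such as a k-d tree.

    points: (N, 3) array of x, y, z coordinates in metres.
    """
    diffs = points[:, None, :] - points[None, :, :]      # (N, N, 3)
    dists = np.linalg.norm(diffs, axis=-1)               # (N, N)
    neighbour_counts = (dists < radius).sum(axis=1) - 1  # exclude self
    return points[neighbour_counts >= min_neighbours]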

Meanwhile, other companies are approaching vehicle autonomy from a purely deep learning perspective. One of these companies, Drive.ai, has been particularly successful with this deep-learning-first approach. Instead of a pre-programmed, rules-based approach, Drive.ai is training a deep learning system to devise its own decision-making capability based on the scenarios it encounters. By encouraging vehicles to learn on their own, the team at Drive.ai hopes to develop a system that can handle the “edge cases” of autonomous driving — the unexpected experiences that make driving unpredictable and often stymie pre-programmed intelligence systems. In 2017, Drive.ai raised $50 million (and then another $15 million), added renowned machine learning researcher Andrew Ng to its board, and signed a partnership deal with Lyft to start testing self-driving cars alongside the ride-sharing service. Other companies working on similar technologies include Hungarian firm AImotive (which recently opened an office in Mountain View) and deep learning leader Nvidia, which is bringing its accrued expertise to bear in its PilotNet solution.
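Nvidia has published the broad shape of PilotNet (Bojarski et al., 2016): a convolutional network that maps a single camera frame directly to a steering command, with no hand-written driving rules in between. The PyTorch sketch below follows the layer sizes described in that paper, but it is an illustration of the idea rather than Nvidia’s production code:

import torch
import torch.nn as nn

class PilotNetSketch(nn.Module):
    """PilotNet-style network: a 66x200 YUV camera frame in, a steering
    command out. Layer sizes follow Bojarski et al. (2016)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # predicted steering command
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x / 127.5 - 1.0  # normalise pixel values to [-1, 1]
        return self.regressor(self.features(x))

# One forward pass on a dummy frame:
steering = PilotNetSketch()(torch.zeros(1, 3, 66, 200))

Training such a network then amounts to regressing recorded human steering angles against the camera frames that produced them, with no rules to write at all.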

Proponents of the “deep-learning-centric” model consider the endeavour something like teaching a teenager to drive, though the challenges are probably rather greater than that. More importantly, this deep-learning-first model presents the “black box” problem. With the intelligence of these systems residing deep within a convolutional neural network, understanding and adjusting their behaviour can be a difficult, opaque process for developers. In a situation where human lives are at stake, this opacity and lack of control can be very hard to accept.
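Developers do have partial remedies. Gradient-based saliency maps, for example, highlight which input pixels most influenced a prediction; Nvidia’s VisualBackProp work pursues a similar goal for PilotNet. The Python sketch below shows the generic gradient technique, applicable to any image-to-steering model such as the PilotNet-style sketch above:

import torch

def steering_saliency(model: torch.nn.Module,
                      frame: torch.Tensor) -> torch.Tensor:
    """Per-pixel saliency: the magnitude of the gradient of the predicted
    steering command with respect to each input pixel. Bright regions are
    those the network's decision is most sensitive to."""
    frame = frame.clone().requires_grad_(True)
    model(frame).sum().backward()  # scalar output for a batch of one
    return frame.grad.abs().max(dim=1).values  # collapse colour channels

# saliency = steering_saliency(PilotNetSketch(), torch.rand(1, 3, 66, 200))

Such maps do not fully open the black box, but they at least let engineers check whether the network is attending to lane markings and vehicles rather than, say, the sky.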

Though the majority of companies are following one of these two approaches, other methods of providing vehicle autonomy exist as well. An MIT spin-off called ISEE, operating out of the Boston lab space The Engine, is eschewing both of the models mentioned above in favour of developing an intelligent “common sense” for cars, based on deep-learning-enabled observation of human experience and interaction. By developing algorithms to study how humans behave with each other and with the physical world around them, the company hopes to develop a model that can anticipate and improvise in reaction to the subtler aspects of driving, based on its accumulated knowledge of human social nuance.

It remains to be seen exactly how deep learning will find its optimal implementation in vehicle autonomy. Is it just a tool for vision and perception, or should it provide the cognitive “heart and soul” of our vehicles as well? While the answer to this fundamental debate is still unclear, the benefits of autonomous vehicle technology become more apparent by the day. Not only can autonomous vehicles help keep the roads safer, they will also help the disabled regain their independence and self-confidence, and may even make safer, more efficient, and cheaper-to-operate ships a reality. Verne Global, as a provider of cost-effective, 100% renewable HPC computing power, including our new hpcDIRECT solution, greatly values its role in making the age of vehicle autonomy a safe and satisfying reality by delivering HPC and deep learning capabilities to our clients worldwide.

