Restoring mobility to the disabled
Innovations aimed at creating and marketing autonomous cars are advancing every field of Artificial Intelligence. Even before autonomous cars reach the mass market, these innovations are benefiting disabled people through the development of autonomous wheelchairs for people with reduced mobility.
In 2012, the first public demonstration of the autonomous car featured a blind man going shopping in the first Google Car prototype. Steve Mahan, the “driver,” found it incredible to travel alone for the first time since becoming visually impaired.
In 2016, researchers at MIT succeeded in adapting an autonomous-driving kit designed for cars to a wheelchair for people with reduced mobility. In 2017, this autonomous wheelchair was tested in hospitals in Singapore.
Panasonic also tested autonomous wheelchairs at Tokyo airport to facilitate the transportation of passengers with reduced mobility.
In July 2020, the University of Pittsburgh launched a national study on independent transportation accessibility for people with disabilities after receiving a $1 million grant from the U.S. Department of Transportation.
No autonomous car without AI
Google, Nvidia, Baidu, Uber, and Tesla, the companies that have invested in the field of autonomous cars, are also champions of Artificial Intelligence. This is not surprising, since autonomous driving relies on Artificial Intelligence to replace the driver’s vision and judgment.
AI can identify objects on the street and determine how the car should behave in different situations. In a series of short videos, Nvidia shows how Artificial Intelligence helps autonomous vehicles better perceive intersection structures, optimize night-time pedestrian detection, and anticipate the routes of other vehicles.
More powerful algorithms: DeepVO, YOLO, DQN, PoseNet
To implement autonomous driving, Artificial Intelligence must fulfill four tasks: localization and mapping, scene understanding, route planning, and driver behavior analysis. Researchers are developing increasingly powerful algorithms to accomplish these tasks simultaneously.
Localization and mapping (where am I?)
SLAM (Simultaneous Localization and Mapping) models are those by which a robot or vehicle builds a map of its environment and uses that map to navigate and deduce its position at any time. DeepVO is an end-to-end deep learning model that performs this task more effectively than traditional SLAM pipelines. It is based on CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks).
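Whatever model produces them, the frame-to-frame motion estimates must be chained together to recover the vehicle’s trajectory. The toy sketch below (plain Python, illustrative only; DeepVO itself predicts these relative motions with a CNN+RNN) shows that integration step for 2D poses:

```python
import math

def integrate_poses(relative_motions, start=(0.0, 0.0, 0.0)):
    """Chain frame-to-frame motion estimates (dx, dy, dtheta), expressed
    in the vehicle's local frame, into a global 2D trajectory. Visual
    odometry models such as DeepVO predict the relative motions; the
    integration step below is common to all of them."""
    x, y, theta = start
    trajectory = [start]
    for dx, dy, dtheta in relative_motions:
        # Rotate the local displacement into the global frame, then translate.
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta += dtheta
        trajectory.append((x, y, theta))
    return trajectory

# Drive 1 m forward, turn 90° left, drive 1 m forward again.
path = integrate_poses([(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)])
print(path[-1])  # ends near (1.0, 1.0), heading 90°
```

Because each step is relative, small per-frame errors accumulate over time; this drift is precisely what the mapping side of SLAM is there to correct.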
Scene understanding (who and where are the others?)
The models used for this task perform object detection/recognition and image segmentation. YOLO (You Only Look Once), a real-time object detection algorithm, is an efficient detector widely used in the development of self-driving cars.
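After the network’s forward pass, YOLO-style detectors post-process the raw predictions: low-confidence boxes are discarded and overlapping duplicates are merged by non-maximum suppression. Below is a minimal sketch of that post-processing step; the box format, class names, and thresholds are illustrative assumptions, not YOLO’s exact implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(detections, conf_thresh=0.5, iou_thresh=0.5):
    """detections: list of (box, confidence, class_name).
    Keep the highest-confidence boxes and drop overlapping duplicates
    of the same class, as done after a YOLO forward pass."""
    kept = []
    candidates = sorted(
        (d for d in detections if d[1] >= conf_thresh),
        key=lambda d: d[1], reverse=True)
    for box, conf, cls in candidates:
        if all(iou(box, k[0]) < iou_thresh for k in kept if k[2] == cls):
            kept.append((box, conf, cls))
    return kept

raw = [((100, 100, 200, 200), 0.9, "pedestrian"),
       ((105, 98, 198, 205), 0.6, "pedestrian"),  # duplicate of the first
       ((400, 150, 520, 260), 0.8, "car"),
       ((10, 10, 50, 50), 0.3, "car")]            # below confidence threshold
print(non_max_suppression(raw))  # two boxes survive
```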
Route planning (how to get from point A to point B?)
To accomplish this task, researchers use reinforcement learning algorithms such as Deep Q-Networks (DQN). Instead of learning from a set of labeled or unlabeled data, as in supervised or unsupervised learning, the algorithm learns from its errors through a reward system.
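The idea can be illustrated with tabular Q-learning, the simple ancestor of DQN: an agent learns a route on a small grid by updating value estimates from rewards. The grid world, rewards, and hyperparameters below are illustrative assumptions; DQN keeps the same update rule but replaces the Q table with a neural network.

```python
import random

def train_route_planner(grid_w=4, grid_h=4, goal=(3, 3),
                        episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy grid world: the agent learns to reach
    `goal` from any cell via trial, error, and reward."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down
    q = {}  # (state, action) -> estimated value

    def step(s, a):
        # Move while staying inside the grid bounds.
        return (min(max(s[0] + a[0], 0), grid_w - 1),
                min(max(s[1] + a[1], 0), grid_h - 1))

    for _ in range(episodes):
        s = (random.randrange(grid_w), random.randrange(grid_h))
        for _ in range(100):                      # cap episode length
            if s == goal:
                break
            if random.random() < eps:             # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q.get((s, act), 0.0))
            s2 = step(s, a)
            reward = 1.0 if s2 == goal else -0.01  # small step penalty
            best_next = max(q.get((s2, act), 0.0) for act in actions)
            # Learn from the error between prediction and reward signal.
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
            s = s2
    return q

def greedy_route(q, start, goal, grid_w=4, grid_h=4, max_steps=20):
    """Follow the learned values greedily from `start` to `goal`."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    s, route = start, [start]
    while s != goal and len(route) <= max_steps:
        a = max(actions, key=lambda act: q.get((s, act), 0.0))
        s = (min(max(s[0] + a[0], 0), grid_w - 1),
             min(max(s[1] + a[1], 0), grid_h - 1))
        route.append(s)
    return route

random.seed(0)  # reproducible training run
q = train_route_planner()
print(greedy_route(q, (0, 0), (3, 3)))
```

A real planner works over a road network with continuous dynamics rather than a 4x4 grid, but the learning loop — act, observe reward, correct the value estimate — is the same.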
Driver behavior (blinking, position behind the wheel, driver mood, etc.)
These behavioral and cognitive analyses are carried out with algorithms such as PoseNet, which makes it possible to monitor the driver’s posture and attention while driving.
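As an illustration of this kind of analysis, blink detection is often implemented with the eye aspect ratio (EAR), computed from six eye landmarks returned by a keypoint model. The landmark coordinates below are made up for the example, and PoseNet itself returns coarse body keypoints rather than detailed eye contours, so treat this as a hypothetical sketch of the technique rather than the exact method:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in the
    classic EAR formulation (p1 left corner, p2-p3 upper lid,
    p4 right corner, p5-p6 lower lid). The ratio drops toward 0
    when the eye closes, which signals a blink."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]      # hypothetical landmarks
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye))    # high ratio: eye open
print(eye_aspect_ratio(closed_eye))  # low ratio: likely a blink
```

In practice, a drowsiness monitor thresholds this ratio over consecutive video frames: a sustained low EAR means the eyes have stayed closed too long.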
A large-scale deployment of AI
The autonomous car is one of the areas where AI is deployed at large scale in production. Tesla’s approach to autonomous driving is based not on LIDAR or HD mapping but entirely on Computer Vision and Machine Learning: the real-time analysis of images collected by the cars’ cameras, using convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
Andrej Karpathy, director of AI at Tesla, explains that his team has developed 48 neural networks that make 1,000 distinct predictions, and that training them takes 70,000 GPU hours. Tesla uses PyTorch, an open-source machine learning framework developed by Facebook that accelerates the path from research prototyping to production deployment. Tesla’s engineers ended up building their own FSD (Full Self-Driving) computer to gain efficiency and save money.
To go further on autonomous cars
MIT offered a specialized course on self-driving cars in 2018 and 2019, led by Lex Fridman, who is also an expert in Deep Learning. These courses are freely available on YouTube.
Lex Fridman also produces an AI podcast that features the leading figures of the AI field in the United States. As part of his podcast, he has interviewed many key actors in autonomous driving:
- Kyle Vogt: Cruise Automation | Podcast #14
- Elon Musk: Tesla Autopilot | Podcast #18
- Chris Urmson: Self-Driving Cars at Aurora, Google, CMU, and DARPA | Podcast #28
- George Hotz: Comma.ai, OpenPilot, and Autonomous Vehicles | Podcast #31 & Podcast #132
- Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Podcast #59
To learn more about the Tesla Autonomous Car