Autonomous Vehicles: The Long, Bumpy Road Ahead

Despite some tragic setbacks, the development of autonomous vehicles (AVs) is accelerating.

In March 2018, one of Uber’s self-driving test vehicles struck and killed a pedestrian, prompting an immediate suspension of the program. The crash also stoked public fear about giving up control of their vehicles, and AV programs at companies such as Apple, Waymo, BMW, and Mercedes were waylaid as a result. The future of self-driving vehicles looked uncertain.

However, research has continued. In just the last few months, researchers have successfully integrated AVs’ neural networks with physics-based models, developed augmented simulations, and analyzed how AVs and urban environments must co-evolve.

All of these developments inch us toward the necessary and inevitable destination: safe, level 5, fully autonomous vehicles.

How Do Autonomous Vehicles Work?

Each manufacturer has a different design, but the two basic components of autonomous vehicles are the sensors and the software.

The main ranging sensor is either radar or lidar, each of which comes with benefits and drawbacks. Both work by bouncing electromagnetic (EM) waves off the surroundings and timing the reflections, which lets the vehicle determine the distance, size, speed, and direction of objects around it.
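
The underlying ranging arithmetic is simple. Here is a minimal sketch of the time-of-flight calculation both sensor types rely on (the 400 ns echo time is just an example):

```python
# Minimal sketch: time-of-flight ranging, the principle behind both radar and lidar.
C = 299_792_458  # speed of light in m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a target, given the round-trip time of an EM pulse."""
    return C * round_trip_seconds / 2  # halved because the pulse travels out and back

# An echo arriving 400 nanoseconds after emission puts the target about 60 m away.
print(range_from_time_of_flight(400e-9))  # ~59.96
```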

Radar is simpler and cheaper. Because it uses low-frequency EM waves, it has a longer range and passes easily through bad weather such as fog, rain, and snow. However, those low-frequency waves, because of their longer wavelength, cannot capture the small details that would be useful to an AV.

Lidar, on the other hand, uses much higher EM frequencies. This lets it detect fine details of surrounding objects, enough to build accurate 3D models. However, these higher frequencies are scattered by bad weather, and lidar units are more expensive.

Other sensors include high-resolution cameras, ultrasonic sensors, altimeters, high-precision GPS, tachometers, and gyroscopes, among others.

All of the information collected by these sensors feeds into extraordinarily complex machine-learning algorithms that build a comprehensive picture of the situation, predict its future state, plot a safe course through it, and then adjust the vehicle’s behavior based on the results.
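
Conceptually, that loop might look like the sketch below; every name, number, and data shape here is a hypothetical stand-in, not any manufacturer’s actual software stack:

```python
# Hypothetical sense-predict-plan-act loop; all names and numbers are
# illustrative stand-ins, not a real manufacturer's software stack.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float   # gap between the AV and the object, in meters
    closing_mps: float  # closing speed, in meters per second

def perceive() -> list[TrackedObject]:
    # A real system fuses radar/lidar/camera data; we fake a single detection.
    return [TrackedObject(distance_m=40.0, closing_mps=12.0)]

def predict(obj: TrackedObject, horizon_s: float) -> float:
    # Constant-velocity forecast of the gap after horizon_s seconds.
    return obj.distance_m - obj.closing_mps * horizon_s

def plan(objects: list[TrackedObject]) -> str:
    # Brake if any object is forecast to be within 20 m two seconds from now.
    if any(predict(o, horizon_s=2.0) < 20.0 for o in objects):
        return "brake"
    return "maintain_speed"

def act(command: str) -> None:
    print(f"actuating: {command}")

# One iteration of the loop a real AV would run many times per second.
act(plan(perceive()))  # prints "actuating: brake" (40 m - 12 m/s * 2 s = 16 m)
```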

A commonly used algorithm is the Scale-Invariant Feature Transform (SIFT). It takes an image of a new object, extracts its key points, and compares them against a database of key points from already-understood images, allowing it to identify the type of object. For example, if the sensors detect a stop sign, the algorithm will register its eight corners as key points, cross-check them against similar images in the database, and conclude that the image shows a stop sign. This works regardless of the image’s angle or scale, and even on warped or damaged objects.
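
For illustration, OpenCV exposes SIFT directly. The sketch below extracts and matches key points between two images (the filenames are placeholders):

```python
# SIFT keypoint matching with OpenCV (pip install opencv-python);
# 'stop_sign.jpg' and 'reference.jpg' are placeholder filenames.
import cv2

query = cv2.imread("stop_sign.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_q, desc_q = sift.detectAndCompute(query, None)      # keypoints + 128-dim descriptors
kp_r, desc_r = sift.detectAndCompute(reference, None)

# Compare descriptors and keep only clearly-best matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(desc_q, desc_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")  # many matches suggest the same object
```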

You Only Look Once (YOLO) is an algorithm that analyzes an entire image in a single pass, breaks it down into its constituent parts, identifies the relevant ones based on pre-trained specifications for certain objects, places them inside bounding boxes, and then predicts how those objects are going to move. This lets an AV anticipate how a given situation will evolve.
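
To make this concrete, here is a short sketch using the Ultralytics package, one popular open-source YOLO implementation (not the original paper’s code); the image filename is a placeholder:

```python
# Sketch using the Ultralytics wrapper (pip install ultralytics); the
# pretrained yolov8n.pt weights download automatically, and 'street.jpg'
# is a placeholder image.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small, pretrained general-purpose detector
results = model("street.jpg")  # run detection on one frame

for box in results[0].boxes:
    name = model.names[int(box.cls)]       # e.g. "car", "person", "stop sign"
    conf = float(box.conf)                 # detection confidence, 0-1
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding-box corners in pixels
    print(f"{name} ({conf:.2f}) at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```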

Furthermore, the Histogram of Oriented Gradients (HOG) looks at how, and in which direction, the intensity of each section of an image changes, then converts this information into raw data that can be computed on and classified. This method is computationally cheap and has proven useful for identifying humans and small animals.
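
OpenCV bundles a HOG descriptor with a pretrained pedestrian detector, which makes the technique easy to try (again, the filename is a placeholder):

```python
# HOG pedestrian detection using OpenCV's bundled detector;
# 'sidewalk.jpg' is a placeholder filename.
import cv2

img = cv2.imread("sidewalk.jpg")

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Slide a detection window over the image at multiple scales;
# each hit is an (x, y, w, h) box around a likely person.
boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
for (x, y, w, h), score in zip(boxes, weights):
    print(f"person at ({x},{y}), size {w}x{h}, score {float(score):.2f}")
```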

Other common algorithms include TextonBoost, which analyzes images based on their shape, appearance, and context, and AdaBoost, which boosts overall performance by combining many weak classifiers so that the strengths of each offset the weaknesses of the others.
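
As a generic illustration of the boosting idea (on synthetic data, not AV imagery), scikit-learn’s AdaBoostClassifier combines many shallow decision trees into one stronger classifier:

```python
# AdaBoost with scikit-learn on synthetic data: many shallow decision trees,
# each new tree weighted toward the examples the previous ones got wrong.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```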

While these algorithms are fairly sophisticated, they are still not enough to get us to level 5, where human intervention can be completely eliminated.

New Progress

A team from Stanford University successfully integrated physics-based models with neural networks, aiming to create algorithms that can take vehicles up to their theoretical friction limits. Physics-based models have the benefit of being mathematically rigorous, but they can only approximate reality, and they cannot exploit the myriad data points collected by an AV’s sensors. Neural networks, on the other hand, can incorporate the sensor data, but they lack the models’ mathematical foundation. The team’s neural network takes the physics-based model’s outputs, along with past states, as its inputs to optimize its performance, thus getting the best of both.
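
As a loose sketch of the hybrid idea, and explicitly not the Stanford team’s actual architecture, a small network can take a physics model’s next-state prediction plus recent measured states and learn only the residual correction the physics misses:

```python
# Loose sketch of a physics + neural-network hybrid (NOT the Stanford team's
# actual architecture): the network sees the physics model's next-state guess
# plus recent measured states, and learns only a residual correction.
# All dimensions here are invented for illustration.
import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    def __init__(self, state_dim: int = 4, history: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + state_dim * history, 64),
            nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, physics_pred, past_states):
        x = torch.cat([physics_pred, past_states.flatten(1)], dim=1)
        return physics_pred + self.net(x)  # physics guess + learned residual

model = HybridDynamics()
physics_pred = torch.randn(8, 4)    # batch of physics-model predictions
past_states = torch.randn(8, 3, 4)  # batch of 3-step state histories
print(model(physics_pred, past_states).shape)  # torch.Size([8, 4])
```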

Furthermore, researchers from the US and China have developed augmented simulations to help test AVs. An essential part of AV development is creating accurate virtual driving scenarios in which to test the vehicles’ many systems. Until now, their accuracy has been limited, mainly by computer processing speed and the limitations of computer graphics. This team’s Augmented Autonomous Driving Simulation (AADS) captures real-world imagery with high-resolution cameras, lidar, and other sensors, then annotates it with the necessary data, such as object positions and traffic movement. The end product is a realistic, scalable simulation that can be used to train the neural networks and help work out their kinks.

Also, Fabio Duarte, a researcher from MIT, has been studying how urban environments and AVs must co-evolve. For example, he explains that cities and companies will need to provide AVs with large amounts of data, including traffic flows, smartphone data, data from other cars, and much more, for them to navigate the urban landscape successfully.

He also argues that cities will need to be redesigned once AVs become widespread, pointing to altered commuting patterns, a reduced need for parking spaces and garages, and changes in real estate prices, to name a few.

Unanswered Questions

While the progress above gets us closer to level 5 AVs, some questions still need answering. For example, who is liable for deaths and accidents? Prosecutors determined that Uber was not criminally liable for the death caused by its self-driving car last year, but the company can still be sued in civil court, and some experts have noted that Uber employees may face charges as well.

Also, what happens when an autonomous vehicle encounters something unexpected? The algorithms can only handle what they have been programmed for or have learned on their own. If, for example, they come across an unfamiliar animal, the AV will neither recognize it nor be able to predict its movements. In 2017, Volvo’s AV famously had trouble with kangaroos, whose hopping confused its ground-based distance estimation.

Moreover, what about hacking? Will AVs be cost-effective? Will the public ever trust AVs enough to allow them on the road en masse?

Even without answers to these and many other questions, virtually every major car company in the world is developing autonomous vehicles. However, actual wide-scale adoption is still at least a decade away.

If you enjoyed the article, please consider donating!
