By Dr. Lance B. Eliot, the AI Insider
The self-driving car was gradually coming up to a red light at a very busy intersection. Intending to make a right turn, the self-driving car came to a proper stop at the pedestrian walkway and waited until the way was clear. Sensing no particular hindrance, the AI instructed the self-driving car to roll forward and begin the turn.
Suddenly, a small child darted off the sidewalk and directly into the path of the self-driving car. With no warning that the child was going to do this, the self-driving car had little time to react. Upon sensing the movement of the child and having the radar and LIDAR detect that the child was in the way, the AI determined that the only chance of not hitting the child would be to slam on the brakes. Doing so would cause the occupants of the self-driving car to be shaken and possibly suffer neck injuries, but it seemed worth it to avoid hitting the small child. The self-driving car braked forcibly, but meanwhile the small child veered even closer and the car struck the child. The child was hit, and the occupants of the self-driving car were also injured. A disaster!
Could this happen? Absolutely. Did it happen? Not in this case, since this was in fact a simulation of a self-driving car and involved trying to teach the AI to be aware of driving situations and what can happen in them. With a computer-based simulation, it is possible to run millions of miles of scenarios for self-driving cars and help the Machine Learning grasp what to do in real-world situations. Google is well-known for having their simulators “teach” their self-driving car software about driving aspects. Nearly all of the prominent self-driving car makers make use of simulations for improving their self-driving car tactics and strategies. Simulators allow us to teach self-driving cars and do so without fear. There is no fear that the self-driving car is going to hurt or kill someone, since it is all taking place in a simulated environment. It is like the movie “The Matrix” for the training of self-driving cars. But, of course, whatever the self-driving car learns during the simulations can ultimately cause harm, if it is learning things that won’t properly translate into safe actions while in the real-world.
A simulation for a self-driving car has to be realistic, otherwise it will lead the AI down a primrose path. If the simulator does not abide by the laws of physics, for example, and allows a self-driving car to go from 60 to 0 miles per hour in a nanosecond, this is not what happens in the real-world. As much as possible, the simulation has to be equivalent to the real-world. You can’t drive up onto the sidewalk willy-nilly to avoid hitting another car. Well, actually, in the real-world you could possibly do so, but you’d need to be sure that going onto the sidewalk was prudent, physically viable, and the only available last-resort option. Some aspects of the real-world are pretty obvious constraints, such as not having the self-driving car sprout wings and fly out of a gnarly situation (I am hoping that becomes possible at some far-off future time!), or instantaneously reverse direction, or leap over a tall building.
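As a rough illustration, a simulator's physics layer can enforce such constraints explicitly. The sketch below (the `step_speed` helper and the ~0.9 g braking limit are invented for illustration, not taken from any actual simulator) caps how quickly a simulated car can shed speed, so that a 60-to-0 stop cannot happen in a single instant:

```python
# Hypothetical sketch: clamp a simulated car's deceleration to a
# physically plausible limit, so the simulator cannot go from 60 mph
# to 0 in a nanosecond. The ~0.9 g figure is an assumed peak braking
# deceleration for a passenger car on dry pavement.

MAX_DECEL_MPS2 = 0.9 * 9.81  # assumed braking limit, about 0.9 g

def step_speed(current_mps: float, requested_mps: float, dt: float) -> float:
    """Advance the car's speed by one time step, honoring the braking limit."""
    max_drop = MAX_DECEL_MPS2 * dt
    if requested_mps < current_mps - max_drop:
        return current_mps - max_drop  # physics caps how fast we can slow down
    return requested_mps

# 60 mph is about 26.8 m/s; even when a full stop is requested, one
# 0.1-second step can only shed about 0.88 m/s of speed.
speed = 26.8
speed = step_speed(speed, 0.0, dt=0.1)
print(round(speed, 2))  # prints 25.92
```

Any simulated maneuver that asks for more deceleration than the limit gets clamped, which is the kind of bound a game engine often skips.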
I mention this need for real-world constraints in a simulation since sometimes a self-driving car simulator is based on a car driving game which doesn’t especially care about real-world aspects. Many of the car driving games are loosely based on the real-world, but are not “burdened” by having to strictly abide by it. A game that allows the car to hit other cars and bounce off them, or hit pedestrians without caring that those pedestrians are injured or killed, would not be an appropriate simulator for machine learning purposes. Be extremely careful about using any game-playing car simulator when training your self-driving car using machine learning. The machine learning won’t realize that real-world constraints are missing, and so the AI will ascertain that aspects such as hitting others or flying are all reasonable options to undertake. Keep in mind that the machine learning is looking for patterns and trends, and it will find them in the most subtle of ways. If the simulator doesn’t bound it with real-world constraints, it is going to potentially find those “outs” and make use of them whenever it wishes.
At our Cybernetics Self-Driving Car lab, we are using simulators extensively and have some handy tips and insights for others that are desirous of using such simulators.
First, as mentioned, make sure the simulator is based on real-world constraints such as the physics of what happens in the real-world. Objects need to abide by the laws of motion, and need to take into account environmental conditions such as the impacts of rain, snow, and the like. A car reacts differently on a rain-soaked road than it does on a dry road. Some simulators don’t consider this aspect, and so when the self-driving car is trying to learn driving techniques it will falsely think it can hit the brakes and have the same stopping distance no matter what the surface of the road is like. This is not the real-world, and it will cause the machine learning to be messed up once in the real-world. Imagine a self-driving car urgently applying the brakes on an icy road after having falsely learned, via simulations that pretend all roads consist of pristine asphalt in dry and perfect conditions, that the car can stop within a certain distance.
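To see why the road surface matters so much, consider the classic locked-wheel stopping-distance formula, d = v² / (2·μ·g). The sketch below uses assumed, illustrative friction coefficients (not measured data) to show how dramatically the stopping distance grows on wet or icy roads:

```python
# Hypothetical sketch: stopping distance d = v^2 / (2 * mu * g) for a
# locked-wheel stop. The friction coefficients are assumed illustrative
# values, not measurements.
G = 9.81  # gravitational acceleration, m/s^2

FRICTION = {  # assumed coefficients of friction per road surface
    "dry asphalt": 0.8,
    "wet asphalt": 0.5,
    "ice": 0.15,
}

def stopping_distance(speed_mps: float, surface: str) -> float:
    """Distance needed to brake from speed_mps to zero on the given surface."""
    mu = FRICTION[surface]
    return speed_mps ** 2 / (2 * mu * G)

v = 26.8  # roughly 60 mph
for surface in FRICTION:
    print(f"{surface}: {stopping_distance(v, surface):.0f} m")
```

At roughly 60 mph, the dry-asphalt stop of about 46 meters balloons to well over 200 meters on ice, which is exactly the difference a simulator that assumes pristine asphalt will never teach.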
Next, the simulation should have a wide variety of typical driving situations so that the machine learning can properly generalize and find patterns. If the simulator is solely about driving on the open highway, this won’t do much good for the machine learning, since it will only know about open-highway circumstances. It won’t know what to do in inner-city or suburban driving conditions. You need to make sure that the simulator covers a myriad of the typical driving situations. Not only is the overall environment crucial, but it also needs to include aspects such as other cars on the roadway.
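One simple way to get that variety is to sample each training run from a weighted mix of scenario types rather than a single road type. The categories and weights below are invented purely for illustration:

```python
# Hypothetical sketch: draw training scenarios from a weighted mix so
# the machine learning sees many driving environments, not just one.
# The scenario names and weights are invented for illustration.
import random

SCENARIOS = {
    "open highway": 0.3,
    "inner city": 0.3,
    "suburban streets": 0.2,
    "parking lot": 0.1,
    "construction zone": 0.1,
}

def sample_scenario(rng: random.Random) -> str:
    """Pick one scenario type according to the assumed weights."""
    names = list(SCENARIOS)
    weights = [SCENARIOS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in SCENARIOS}
for _ in range(10_000):
    counts[sample_scenario(rng)] += 1
print(counts)  # each scenario appears roughly in proportion to its weight
```

The point is simply that the training distribution is an explicit design choice; a simulator that only ever serves up the open highway has silently set all the other weights to zero.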
I’ve seen one simulator that only has the self-driving car on the road. There aren’t any other cars. The machine learning will opt to potentially use any lane of the road that it wishes to do so. It can change lanes without having to detect and worry about other cars in those lanes. It can go any speed it wants, since there aren’t cars ahead of it. It can brake suddenly and not worry about any cars behind it that might smash into it. This kind of artificial world is maybe interesting for some research purposes, but it won’t help a self-driving car that is destined to be on the real-world roads.
Another simulator that I’ve seen is one that is skewed in hidden ways. The self-driving car is given the highest priority in the simulator. Thus, if the self-driving car decides to change lanes, and if other cars are next to it, lo-and-behold those other cars let the self-driving car change lanes and don’t try to prevent it or get in the way. How many times per day do you try to change lanes and the car next to you always politely lets you do so? Much of the time, another car is going to cut you off, either intentionally since they don’t want to let you into their lane, or unintentionally because they didn’t even see that you were trying to change lanes. Other cars on the road of the real-world are at times mean and cruel and those car drivers are determined to mess you up. This is what a self-driving car needs to learn about.
Besides having typical scenarios, the simulator needs to be populated by real-world entities. There need to be pedestrians, and they need to be realistic such as darting onto the road when they shouldn’t be doing so. There need to be bicyclists and they at times should veer into traffic or make illegal moves. There need to be motorcyclists that drive right next to your car and weave in and out of traffic. As much as possible, the simulator needs to have the same entities as the real-world. Plus, those entities cannot be programmed to be perfect and generous toward the self-driving car. Instead, they need to be programmed to be selfish, inattentive, and have a willingness to do what is wrong as much as they might have a willingness to do what is right.
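A hedged sketch of how such imperfect behavior might be parameterized: each simulated road user gets attentiveness, courtesy, and rule-breaking knobs (all names and values here are invented for illustration), so that yielding to the self-driving car becomes the exception rather than the rule for some agents:

```python
# Hypothetical sketch: give simulated road users imperfect, varied
# behavior instead of always politely yielding to the self-driving car.
# The class, parameters, and values are invented for illustration.
import random
from dataclasses import dataclass

@dataclass
class AgentProfile:
    attentiveness: float   # 0..1, chance the agent notices the self-driving car
    courtesy: float        # 0..1, chance it yields when it does notice
    rule_breaking: float   # 0..1, chance of an illegal move (jaywalk, weave)

def will_yield(profile: AgentProfile, rng: random.Random) -> bool:
    """An agent yields only if it both notices the car and chooses to be polite."""
    noticed = rng.random() < profile.attentiveness
    return noticed and rng.random() < profile.courtesy

rng = random.Random(42)
aggressive_driver = AgentProfile(attentiveness=0.9, courtesy=0.2, rule_breaking=0.3)
distracted_pedestrian = AgentProfile(attentiveness=0.4, courtesy=0.9, rule_breaking=0.5)

# Out of 1,000 lane-change attempts, the aggressive driver yields only
# roughly 0.9 * 0.2 * 1000, i.e. somewhere near 180 times.
yields = sum(will_yield(aggressive_driver, rng) for _ in range(1000))
print(yields)
```

Populating the simulated world with a spread of such profiles, rather than a single always-courteous one, is what forces the machine learning to cope with being cut off.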
A really good simulator will also have atypical situations too. During my weekly commute on the freeway, I have seen a motorcyclist get hit about once every two weeks or so. They often straddle the lanes and a car makes a lane change and plows into the motorcyclist. Most of the time, the motorcyclist is able to get up and continue, but sometimes they don’t. From a driving conditions perspective, when these incidents occur, the traffic patterns all change as the other cars are jockeying to get out of the way of running over the downed motorcyclist. Also, some cars will stop on the freeway to block other traffic and protect the motorcyclist. Some drivers will pull over to get out of their cars and then offer assistance to the motorcyclist. There are now “pedestrians” essentially on the freeway, which normally a simulator would not include. And so on.
Another aspect of a simulator is whether it provides sensory data to the self-driving car. A real-world self-driving car has sensors such as radar, cameras, and LIDAR (see my column about LIDAR). Those sensors are crucial to what the self-driving car is able to detect about the real-world. In most simulators, the simulator is not feeding sensory data to the self-driving car; instead, it is providing other info that is specialized and not part of what sensors would do. The problem here is that the self-driving car is then not relying on real-world sensor info, and instead is being fed transformed and ready-made info about the traffic and the scene. Again, this is a far cry from what the real-world consists of. Some defend these approaches by saying that the sensor fusion of the self-driving car is a different layer and that the simulator operates above that layer, which I concede, but this is not the same as what actually happens in the real-world.
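The difference can be sketched as follows: instead of handing the AI ground-truth object positions, the simulator passes everything through a crude “sensor” model with noise and a finite range. The noise level and range below are invented for illustration:

```python
# Hypothetical sketch: contrast ground-truth positions with a noisy,
# range-limited "sensor" reading, closer to what real LIDAR provides.
# The max range and noise level are assumed illustrative values.
import random

def lidar_reading(true_distance_m: float, rng: random.Random,
                  max_range_m: float = 100.0, noise_std_m: float = 0.05):
    """Return a noisy distance measurement, or None if beyond sensor range."""
    if true_distance_m > max_range_m:
        return None  # a real sensor simply does not see it
    return true_distance_m + rng.gauss(0.0, noise_std_m)

rng = random.Random(7)
# Ground truth says a pedestrian is exactly 30.0 m away...
print(lidar_reading(30.0, rng))    # ...the "sensor" reports something near 30.0
print(lidar_reading(150.0, rng))   # None: out of range, unlike ground truth
```

Training against this kind of imperfect, bounded input, rather than a perfect scene description, is much closer to what the self-driving car will face once hooked up to real sensors.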
As an analogy, suppose you were trying to figure out how to use your eyes, your nose, your ears, in order to understand the world around you. Your brain is cognitively trying to comprehend the world, and takes as input the sensory info coming from your eyes, your nose, your ears. If I were to say that I can just plug directly into your brain, and bypass your eyes, nose, and ears, would your mind be learning in the same manner as if it were using your actual senses? Though this might seem like a philosophical debate, it is actually more pronounced as there is definitely a mind-body connection that shapes how we learn and what we learn. A machine learning approach that “learns” without making use of the sensory data that it will be getting in the real-world is gaining a false and misleading perception of the real-world. It will struggle mightily once connected to real-world sensors that are detecting real-world objects and actions.
If you are interested in trying out some of the simulators that exist for training of self-driving cars, here are some notable ones that you might want to use:
Open Source Simulators.
Udacity is making available their self-driving car simulator via open source. It was constructed using the free game making tool Unity. You can use the existing tracks and also add new tracks. Go to the Udacity web site or to GitHub and you’ll find the simulator available there.
DeepTraffic was made at MIT and provides an ability to simulate highway traffic. It is a means to get your feet wet in terms of neural networks and traffic conditions.
Game Adapted Simulator.
Grand Theft Auto V (GTA V) has been enhanced via OpenAI and the DeepDrive project. Using OpenAI’s Python-based Universe toolkit, the enhanced GTA V provides a 3D world for exploring live action and the behavior of people.
Please make sure to keep in mind my aforementioned caveats when using any of these self-driving car simulators. Watch out for non-real-world aspects. Watch out for hidden assumptions embedded deep within the simulator. Watch out for narrow scenarios rather than having a wide diversity of scenarios. Watch out for lack of behavioral aspects and the assumption that other drivers and other people will all be accommodating and polite. Etc.
Real-world testing of self-driving cars is vital to their development. Taking self-driving cars onto the road and having them experience the real-world is crucial to their maturation. This real-world experience can be aided by the use of simulations. You can run lots of simulations that produce tons of data, and do so at a fraction of the cost of taking the self-driving car out onto real roads. It is also obviously safer to use a simulator, while being on the real roads can be dangerous. Plus, being on the real roads can be a public relations nightmare for self-driving car makers: whenever a self-driving car makes a mistake, it can be captured on video and posted for all the world to see. With a simulator, “mistakes” can be discovered before they happen in the real-world.
That being said, a simulator is not a silver bullet for training the machine learning of self-driving cars. Simulators can only provide so much aid to the AI development for a self-driving car. There needs to be a balanced coupling between training a self-driving car in the laboratory via a simulator and simultaneously doing real-world driving. The collection and analysis of real-world driving data helps to make simulators more practical and usable, but it still does not obviate the learning that comes from real-world driving itself. Let’s make sure our simulators are aiming self-driving cars in the right direction, thereby making them more agile and ready for the real-world. Drive safely out there.
This content is original to AI Trends.