By Lance Eliot, the AI Trends Insider
What is common sense?
There was a story in the news the other day about a man who got his arm stuck in a toilet while trying to reach in and fish out the smartphone he had dropped into the privy. Some immediately said he had no common sense. What person with any common sense would reach down so far that their arm would get stuck?
Well, I suppose we could consider whether something like this has ever been taught in school. Probably not (I don’t recall in K-12 ever being told to not put an arm down inside the privy). How was he supposed to know that he should not take such action? Perhaps knowing not to do this is actually uncommon knowledge and so why do we assume it falls within the realm of common sense?
I am guessing that you likely believe it farfetched to think that a grown man would not realize the potential for getting his arm stuck in such a situation, and that in spite of the topic never being covered in classes, it just stands to reason that trying to reach down too far is bound to be a bad idea. Plus, of course, there's the yuck factor involved too.
Let’s try a different angle on the topic of common sense.
Suppose you go to an obscure country and go hiking. You see a very pretty flower. You decide to go ahead and touch the flower, and even take some of the flowers with you. A few minutes after handling the flower, your hands and arms begin to show signs of a rash. Within an hour, you are dealing with itching hives all over most of your body. What happened? You opted to grab hold of a toxic flower. Locals see you and immediately know what you’ve done. They shake their heads and think that there’s another tourist lacking in common sense!
Was this awareness about grabbing the flower indeed a common-sense matter? Some would say that they had no idea that such a flower was toxic and so it isn't right to characterize it as common sense. But, if that's the case, what about the man who jammed his arm into the privy?
Defining what we mean by common sense can be somewhat problematic. For example, common sense can be culturally based, as in the instance of the toxic flower, whereby those who lived in that culture and region all knew about the flower, while you as a tourist did not. Common sense can also vary over time: there are things that people a hundred years ago would have said were common sense, and yet today we might not know them at all.
Can common sense be learned? Some say that it is not learned explicitly and that you just somehow gain it implicitly. For little children, we teach them to not put their hand on a hot stove. You could assert that this is a form of explicitly learned common sense. You can’t normally lift an entire car by yourself, but do we explicitly teach children this? Do we have children go to a car and try to lift it? Not usually. Instead, the child learns that heavy objects cannot be lifted by their own strength alone, and they realize that a car is a heavy object. You might say that in this case the common sense about lifting a car is based on implicit learning.
Learning about things involves generalizing from what has been learned. The case of the child who learned about heavy objects and then generalized that this applies to cars too showcases that we don't need to learn about everything in the world per se; we can learn something and then apply it to other circumstances. This, though, can go awry. There is a famous case of a child who was scared by a white-furred dog, and the child then became fearful of any animal that was white in color. Was this the proper kind of generalization? No. It was a generalization based on a faulty kind of thinking.
Does intelligence include common sense?
If we say that someone is highly intelligent, do we also simultaneously mean that they have a hefty dose of common sense? I am sure that you likely know some geniuses who at times appear to exhibit very little common sense. Indeed, our society appears to accept the idea that someone we consider to have a very high IQ is likely to actually lack common sense (you might think of Sheldon in The Big Bang Theory as a portrayal of this). It's as though we believe that their minds are so occupied with the tough stuff that those geniuses have little mental space for, and little regard for, the more mundane things that we pile into the rubric of common sense.
Even those high-IQ people, though, still have some amount of common sense, since they appear to operate sufficiently in the real world and know things that we might consider to be common sense. For example, we all seem to know that you can't be in two places at the same time. That's just common sense, most would say, and I would claim that even the highest-IQ person would know this too. In other words, we might joke that a certain person has "no common sense," but we are really exaggerating by using the word "no"; they do indeed have some amount of common sense. Their common sense might be spotty, and on occasion they get themselves into a bad spot, including reaching too far into a toilet, but nonetheless they have a modicum of common sense.
We might then agree that intelligence does include common sense. A person can be low or high in intelligence, and low or high in common sense. Whichever way it goes, there is going to be both intelligence and common sense. We can quibble about whether intelligence includes common sense, or whether they are colleagues of each other and neither is subsumed by the other. Anyway, let's go with the notion that with intelligence there also comes some amount of common sense.
AI is Lacking in Common Sense
For purposes of developing Artificial Intelligence (AI), we need to figure out what is meant by "intelligence," since we are trying to develop automated systems that do the same thing. As such, if common sense is in fact integral to intelligence, presumably when creating an AI system we would also intend to have it embody common sense. Without embodying common sense, we would be developing something that is less than what we would consider intelligent, and it would lack what might be considered a vital piece of the puzzle.
Right now, most AI systems lack any semblance of common sense.
Indeed, in the AI field there is regular AI and there is AGI, which stands for Artificial General Intelligence. AGI is considered the kind of AI that includes common-sense reasoning. To date, AI systems have been devoted to specialized tasks and have not had to contain common sense. Some say that these are weak forms of AI, and that a strong form of AI would embody common sense. Do we need common sense in AI to do specialized tasks? There are those who say you don't, since the task is whittled down to something that requires only specialized knowledge and no common sense.
What does this have to do with AI self-driving cars?
At the Cybernetic Self-Driving Car Institute, we are developing AI systems for self-driving cars and also exploring the inclusion of common sense reasoning.
The existing AI systems for self-driving cars don't have common-sense reasoning. Instead, they are systems devoted to the task of driving, and it is claimed that driving does not need common sense. There is controversy over that claim. Is it that driving does not need common sense, or is it that, since we haven't reached the point of being able to truly develop artificial common sense, we are simply OK with saying it isn't needed for the task of driving? You be the judge.
Thus, we have these options:
- Common sense is not needed at all for AI self-driving cars
- Common sense would be a nice-to-have for AI self-driving cars but isn’t required
- Common sense is a necessity for AI self-driving cars
If you believe that common sense is not needed at all for AI self-driving cars, you are akin to many AI developers who would say the same thing. If you are in the boat of saying that it would be a nice-to-have, I believe that many in the AI field would say sure, it would be nice to have, but they would also say it would be nice to be king of the world, and so there are lots of things that would be nice to have. Why worry about something that's just nice to have, they would say.
If you believe that common sense is a necessity for AI self-driving cars, you might be concerned that there are few efforts afoot to embody common sense into the AI for self-driving cars. Some think that, without a breakthrough in common-sense reasoning, we won't be able to achieve true self-driving cars, ones at Level 5, which are intended to drive a car as a human would, unaided by a human driver. Those who think this are relatively few, and others in AI would scoff at them. No need to wait for common-sense reasoning to be perfected, they say; let's just keep plowing forward on driving as a specialized task that does not require common sense.
How Much Common Sense Is Required To Drive?
Suppose though that common-sense reasoning is the secret ingredient that makes AI self-driving cars truly possible. You might argue that 90% of the task of driving does not need common sense and that it's only a paltry 10% that does. Excuse me, but if we have self-driving cars on the road that are supposed to be true self-driving cars, and they are missing 10% of what they need to know, I don't think we'd be satisfied with the end result. It implies that the AI of the self-driving cars won't be able to fully perform the task at hand. I realize some might say it's more like 99% specialized knowledge and maybe 1% common sense, but even that is enough to raise your eyebrows. If we ultimately replace all 200+ million conventional cars in the United States alone with AI self-driving cars, are you willing to deal with the 1% of the time when they are unable to properly perform the driving task in certain circumstances?
For those of you who recall AI efforts in the 1980s, you might remember the big hullabaloo that occurred about the need to achieve common-sense reasoning. A consortium known as MCC, consisting of some of the biggest tech firms of the era, poured a ton of money into seeking common-sense reasoning. Those of you who lived through that period will remember the common-sense engine Cyc, which got government funding and private funding. The idea was to codify all the simple truths of life, incorporating thousands upon thousands of common-sense rules. Believe it or not, the Cyc effort still continues today (thanks to the tenacity of Doug Lenat), with its rules now numbering in the millions, and there have been efforts to try to commercialize it. There are other similar kinds of efforts, such as the laudable work being done at AI2, the Allen Institute for AI, a bold effort by Paul Allen.
Seasoned AI developers and researchers are bound to say that they thought we left common-sense reasoning efforts behind long ago. They are forgotten relics today, they say. Those efforts appeared to be an errand for fool's gold, and at the time many were irked that monies flowed to something that seemed an insurmountable task. There were suggestions that it would take 350 human-years to achieve a common-sense reasoning capability. It takes some guts to persist on an endeavor that won't have a substantial payoff for that length of time.
Over time, there have been heated arguments about whether it makes sense to try to codify common sense into individual rules. You can't be in more than one place at a time. You can't lift a heavy car. You shouldn't put your arm deep down into a privy. It would seem that we would have a nearly endless list of such rules. How can you capture all of them? Suppose there aren't just millions of such rules, but billions or more? Furthermore, as mentioned earlier, common sense changes over time, and thus whatever you consider to be common sense now might become outdated or need to be supplemented with newer common sense. The scope seems endless, and we can't even figure out where its boundaries lie.
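To make the by-hand codification concrete, here is a minimal sketch (my own illustration in Python, not how Cyc or any real system is actually implemented) of what a hand-entered rule base might look like. Each rule pairs a condition over a situation with the advice it implies, using the article's own examples:

```python
# A toy hand-codified common-sense rule base. Each rule pairs a
# condition over a situation (a dict of facts) with the advice it
# implies. Purely illustrative: real efforts like Cyc use a far
# richer logical representation than flat Python predicates.

RULES = [
    (lambda s: s.get("object_weight_kg", 0) > 100,
     "too heavy to lift by yourself"),
    (lambda s: s.get("reaching_into") == "toilet",
     "do not reach in far enough to get stuck"),
    (lambda s: s.get("locations_needed", 1) > 1,
     "you cannot be in two places at the same time"),
]

def common_sense_advice(situation):
    """Return the advice of every rule whose condition matches."""
    return [advice for cond, advice in RULES if cond(situation)]
```

The sketch also makes the scaling objection vivid: every new truth of everyday life would require yet another hand-entered entry in `RULES`, and the list has no natural stopping point.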
In fact, some would say to forget about trying to do things the brute-force way. Rather than trying to find all of these individual rules and codify them into a system, we might instead use machine learning to ferret out the aspects of common sense. If you use an artificial neural network to find patterns in data, presumably it will pick up the nature of the common-sense reasoning that is otherwise hidden within the data. Maybe children don't actually learn common sense via individual rules; instead, their minds see the world around them and, via neuronal patterning, come to gain common sense. It could be that we falsely turn these into individual rules simply because it's easier for us to explain what logically seems to be occurring, but in fact that might be a misleading representation of what is actually happening in the mind.
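The contrast with the rule-based approach can be sketched with a deliberately tiny learner. It is never told "you cannot lift a car"; it sees labeled examples and infers a boundary, then generalizes to an object it has never encountered. The data and the midpoint-threshold rule are my own stand-in for the neural patterning described above, not a claim about how children or neural networks actually learn:

```python
# Implicit learning from examples, rather than a hand-entered rule.
# From (weight_kg, was_liftable) pairs, place a decision boundary
# midway between the heaviest lifted object and the lightest
# unlifted one, then generalize to unseen objects.

def learn_lift_threshold(examples):
    """Infer a liftability weight threshold from labeled experience."""
    heaviest_lifted = max(kg for kg, lifted in examples if lifted)
    lightest_unlifted = min(kg for kg, lifted in examples if not lifted)
    return (heaviest_lifted + lightest_unlifted) / 2

experience = [(2, True), (10, True), (25, True),      # things lifted
              (80, False), (300, False), (900, False)]  # things not lifted
threshold = learn_lift_threshold(experience)

def can_lift(weight_kg):
    return weight_kg < threshold

# A car (~1500 kg) never appeared in the experience, yet the learned
# threshold generalizes: can_lift(1500) is False.
```

The same sketch hints at how generalization goes awry: a learner fed unrepresentative examples (the one white-furred dog) will place its boundary in the wrong place just as confidently.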
If you believe in the notion that common sense arises as a matter of course in the dense fabric of neuronal activity, you would likely be even more dubious about efforts to create artificial common sense by hand through the entry of simplistic rules. Those who believe in the by-hand creation say that we just need more ways to do the by-hand codification. Perhaps we should crowdsource the by-hand approach. Let's get all 7.6 billion people on this planet to help enter the common-sense rules. Imagine how quickly you might be able to arrive at a common-sense reasoning system. This, though, obviously has numerous logistical issues and technological issues, and would seem a bit farfetched as a viable approach.
You might be thinking about systems like Google's Knowledge Graph or Microsoft's Satori and wondering if they are a sign of our reaching a common-sense reasoning capability. Though those kinds of efforts are encouraging, they are not really what most would describe as an effort toward capturing full common-sense reasoning. I assure you, trying to achieve common-sense reasoning, whether via the by-hand approach or via the machine learning approach, is a really tough problem. That you can bet on. There's no magic bullet anywhere in sight on this.
Given all this discussion about common-sense reasoning, where, though, does it arise in the driving task? Maybe those who say we don't need it are right. Perhaps there isn't any common-sense reasoning involved in driving a car. If so, you can drop this topic from the AI self-driving car field and instead consider it an interesting curiosity for the rest of AI.
A few years ago, I went to an event that had only a dirt parking lot, and so I parked my car on the dirt. The event itself took place in an indoor venue. I went into the venue, enjoyed the event, and came out to my car a few hours later. During the event, rain had poured down. The dirt had become a muddy mess. I could not get my car out of the parking lot; it was stuck in the mud. When I had gone into the venue, I had seen ominous rain clouds. You might say that common sense would have warned me not to park on the dirt, since I could have reasoned that when it rains, dirt turns to mud, and that cars don't normally drive well in mud.
Do we expect an AI self-driving car to have this kind of common-sense reasoning? Should an AI self-driving car be able to consider where you park the self-driving car, whether or not it is on dirt, whether or not rain is forecasted, whether the self-driving car could get stuck if the rain turned the dirt to mud, and so on? Some would say it is crazy to expect an AI self-driving car to figure this kind of thing out.
Let’s try something else. I was driving my car down a neighborhood street that I had not been on before. As I was driving, I noticed that there were three young boys perched on a rooftop of a home along the street. They seemed to be hiding, yet I could see their heads peering over the roofline. They seemed to be staring at the cars going down the street. I decided to make a U-turn, since the situation was odd. As I did so, a car ahead of me approached where the house was. The boys tossed water balloons at the car. This could have created an accident, and everyone was lucky that nothing bad happened per se.
Was it common sense that led me to make the U-turn? Would we expect an AI self-driving car to have that same kind of common sense? You might say that this circumstance is not part of the driving task per se, and that it’s an oddball and thus not fair to suggest that an AI system would need to ascertain something like this.
Let’s try this. I was driving on the freeway and up ahead I saw a pick-up truck that had a bunch of debris sitting in the bed of the truck. The debris was not covered up and was just sitting there, subject to the wind. I used what I believe to be common sense and figured that at some point that debris might fly out of the bed of the truck. I didn’t want to be behind the truck when it might happen. So, I moved over to the next lane. Sure enough, moments later, I saw the truck hit a pothole in the freeway, which bumped the truck enough that some of the debris spilled onto the freeway. The car directly behind the truck swerved wildly to avoid the debris, which then led to other cars all swerving madly.
Presumably, an AI self-driving car would be like those other drivers and would have reacted only once the debris hit the roadway. Would we expect an AI self-driving car to have deduced that debris in the bed of a truck might come loose and fly out of the truck? Is that an obscure notion? I don't think it is, and I assert it is something that we'd want an AI system to be cognizant of.
One way for the AI to have anticipated this would be if it had experienced it before. Suppose the AI self-driving car was caught in that circumstance; hopefully it would subsequently update itself to anticipate such occurrences in the future. Furthermore, if the AI self-driving car is connected to a cloud being used by the auto maker, this instance could go into the cloud, and at some point all the other AI self-driving cars from that auto maker might benefit from this learned aspect via an OTA (Over-The-Air) update.
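That fleet-learning loop can be sketched in miniature: one car records a novel incident, the auto maker's cloud aggregates it, and another car receives it via an OTA update. The class and method names here are hypothetical, invented for illustration; no real self-driving stack is being described:

```python
# A minimal sketch of the fleet-learning loop: a car experiences an
# incident, uploads it to the auto maker's cloud, and the cloud later
# pushes it to other cars via an over-the-air (OTA) update.

class FleetCloud:
    def __init__(self):
        self.learned_incidents = []

    def upload(self, incident):
        self.learned_incidents.append(incident)

    def ota_update(self, car):
        # Push any incidents the car has not yet seen.
        for incident in self.learned_incidents:
            if incident not in car.known_incidents:
                car.known_incidents.append(incident)

class SelfDrivingCar:
    def __init__(self):
        self.known_incidents = []

    def experience(self, incident, cloud):
        # Learn locally, then share with the fleet via the cloud.
        self.known_incidents.append(incident)
        cloud.upload(incident)
```

The point of the sketch is simply that one car's experience, once in the cloud, becomes every car's anticipation.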
Of course, we also need to consider that whatever is learned from such an instance is not overly generalized. Remember the case of the child who was frightened by a white-furred dog and became fearful of any white-colored animal? Would the AI of the self-driving cars "learn" that whenever a truck with a bed of debris is detected, debris will fall off the truck? In this case, the spill was due to hitting the pothole. Also, there wasn't any cover or netting over the debris. We need to consider how the AI would use common-sense reasoning to generalize this into something practical and not become overly paranoid about all trucks hauling debris.
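One way to picture the difference between an overgeneralized lesson and a properly conditioned one is to score risk on the features that actually mattered in the incident, rather than flagging every debris-hauling truck. The feature names and weights below are my own illustration, not taken from any real self-driving system:

```python
# Conditioning the learned lesson on the contributing features
# (uncovered load, rough road ahead) instead of the overly broad rule
# "any truck carrying debris will shed it." Weights are illustrative.

def debris_risk(truck):
    """Score 0..1 for the risk that a lead truck sheds its load."""
    risk = 0.0
    if truck.get("has_debris"):
        risk += 0.3
        if not truck.get("load_covered", True):
            risk += 0.4   # the missing cover/netting was a key factor
        if truck.get("rough_road_ahead"):
            risk += 0.3   # the pothole, not the truck itself, shook it loose
    return min(risk, 1.0)

def should_change_lanes(truck, threshold=0.6):
    return debris_risk(truck) >= threshold
```

Under this sketch, a truck with a covered, netted load on smooth pavement scores well below the lane-change threshold, while the uncovered load heading toward a pothole scores at the maximum, which is the practical generalization the paragraph calls for.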
Notice that I mentioned that if the AI self-driving car had experienced this kind of circumstance before, it might have explicitly learned what to do. There are implicit learning aspects too. Do we need to wait until AI self-driving cars have experienced the myriad of such circumstances before they are ready to spot these situations? Maybe we could include common-sense reasoning in the AI self-driving car, allowing it to anticipate situations that it has not necessarily learned explicitly. If we are dependent upon the AI doing only explicit learning, how many millions upon millions of miles of driving on our public roadways will be needed before the AI learns all that it needs to know? And, during that time, are we going to be vulnerable to the AI not having the needed common sense, and thus being a less safe driver than we want it to be?
Common sense is not an easy matter. We all take it for granted. Embodying common-sense reasoning in AI self-driving cars is an open issue and quite a difficult one to solve. Though some will disagree about whether it is a nice-to-have or a necessity, it is safest to suggest it is beyond the realm of not needed and squarely in the realm of desirable.
This content is originally posted on AI Trends.