By Dr. Lance B. Eliot, the AI Trends Insider
You are right now reading this sentence. Your focus, presumably, involves seeing the letters on the page (or screen, if you are reading this online): the letters are grouped together into words, the words into sentences, and the sentences into paragraphs. Your eyes are conveying the images of these characters to your brain. Your brain is analyzing the images and is somehow able (we don’t yet know how) to turn these images into some kind of concepts, ideas, and knowledge, which it then combines with other concepts, ideas, and knowledge, and ultimately makes sense of what this says.
Before I pointed out that you were reading the above words, were you aware that you were reading the words?
You were reading the words and doing so without having to think about the fact that you were reading the words. You were just reading the words. But, if I suddenly interrupted you and asked what you are doing, you would certainly have told me that you were reading the words. We’ll assume that, at the time of reading the words, you actually were aware you were reading them; it could be, of course, that you weren’t aware of reading the words while actually reading them, and only once I asked did you get sparked into awareness of what you were doing.
Kind of mind numbing to walk you through this, but it’s for a very good reason. As a human, you have a means to be aware of what you are doing, where you are, what you are seeing, and so on. You have self-awareness. Small children develop self-awareness over time. They at first aren’t able to readily think about themselves per se. They see you and can think about you. They can look at a dog and think about the dog. It is much harder for them to think about themselves. What is Jared doing right now, you might ask a small child named Jared. Depending upon the age, the child might not comprehend that you are asking about them, and will assume you are asking about someone else named Jared, and might even look around trying to see where this Jared is.
Eventually, a child comes to realize that they exist per se, and that they can think about their own existence. If you ask Jared what Jared is doing right now, he’ll be able to use his now-developed self-awareness to say that Jared is maybe playing a game on his smartphone. Once he gets a little more developed, he might be smarmy about his reply and tell you that Jared is answering your question. He has now advanced to knowing about himself, and furthermore realizes he can joke with you: though he was just playing the game, in the instant he replies he is responding to you, rather than playing the game.
Some refer to this as knowing about knowing. We know that we know something, and we can be introspective about what we know. Do animals have this? Some experts say yes, some say no, some say that some animals have it, some say that some animals have it but only in minor ways. When you have an animal look at itself in a mirror, which seems to be a popular type of cat video on YouTube (i.e., filming your cat when it sees its own image in a mirror), the cat will often be wary of the mirrored image and even try to strike at it. This could be because it does not recognize itself, which certainly seems plausible because why would you recognize your own bodily form unless you had seen it before in a mirror, or it could be because the cat cannot mentally comprehend that it itself is a cat and is not particularly aware of its own existence (therefore, it has no recognition of what it is seeing in the mirror).
From an AI perspective, we care a lot about self-awareness.
Self-awareness seems to be an essential component of human intelligence. And, anything that is a significant component of human intelligence will most likely be needed to produce artificial intelligence. We need to understand the components that comprise human intelligence in order to produce artificial intelligence.
Well, okay, I admit some say we don’t really need to understand human intelligence directly, and can just produce machines that do what human intelligence exhibits. They believe we should untether ourselves from worrying about cracking the hidden codes and mystery of the human mind, and just go ahead and build something in a way that acts intelligently. There might be more ways than one to skin a cat (sorry about that metaphor!), in that we might be able to successfully get to artificial intelligence and bypass figuring out how humans do it.
That being said, let’s get back to self-awareness. The ability to create something that is self-aware is a capability desired by those who are on the holy quest for artificial intelligence. Some call this artificial consciousness, or sometimes machine consciousness, and want to make machines that appear to have consciousness and be aware of it. It is slippery territory, since we need to define what we mean by consciousness. I am not going to belabor that whole debate. Instead, let’s put it this way: go with me that being self-aware involves being aware of your own consciousness.
Let me try this from another angle. There is knowledge. Knowledge consists of the things that you know about. You know how to drive a car. You know that roads are places you can drive a car. You know that a car normally cannot fly. Etc. There is also meta-knowledge. Meta-knowledge is considered knowledge about knowledge. You are aware that you know how to drive a car. You are aware that you know that roads are places that you can drive a car. You might also be aware that you cannot drive a truck, even though you can drive a car.
Notice that your awareness allows you to know not only what you do know, but also potentially what you do not know. Suppose I ask you if you can put oil into your car. Let’s suppose you say yes, you do know how to do so, since you are aware that you can put oil into your car. I then ask you if you can change the oil in your car. You search your mind, you cannot find anything in it about how to change oil in a car, and so you report to me that you do not know how to change the oil in your car. You might find related info, such as that to change oil in a car you can take the car to the nearest car mechanic, and so you might say that though you don’t know how to change the oil directly, you can ultimately get the oil changed by taking it to someone who does know.
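The knowledge versus meta-knowledge distinction above can be sketched in a few lines of code. This is a minimal illustration only, not any real system; the task names, the `KNOWLEDGE` and `FALLBACKS` dictionaries, and the `do_i_know` function are all hypothetical stand-ins.

```python
# A minimal sketch of knowledge vs. meta-knowledge. Querying for an
# unknown task can still surface related knowledge, mirroring "I can't
# change the oil, but I know who can."

# Knowledge: things the agent knows how to do (hypothetical entries).
KNOWLEDGE = {
    "add_oil": "pour oil into the filler cap under the hood",
    "drive_car": "operate the steering, pedals, and gears",
}

# Meta-knowledge: workarounds for tasks the agent knows it does NOT know.
FALLBACKS = {
    "change_oil": "take the car to the nearest mechanic",
}

def do_i_know(task: str) -> str:
    if task in KNOWLEDGE:       # aware of what we know
        return f"yes: {KNOWLEDGE[task]}"
    if task in FALLBACKS:       # aware of what we don't know, plus a fallback
        return f"no, but a workaround exists: {FALLBACKS[task]}"
    return "no, and no related knowledge found"

print(do_i_know("add_oil"))
print(do_i_know("change_oil"))
```

The point of the sketch is that the second lookup table is *about* the first: it records the boundary of the agent’s own competence, which is exactly the "knowing about knowing" described above.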
There are various types of awareness. One type of awareness is to be aware of what you’ve done, such as being aware that yesterday you went to the store and bought a can of peas. Another is to be aware of what you are planning to do. You might be aware that you have a goal of driving your car to the beach later this afternoon. Another form of awareness is about your body and your sensors. You are aware that your ears are not working very well at this moment because you just got out of the swimming pool and your ears are still filled with water. Various debates exist among researchers about how many types of awareness there are. Some say it is a finite number of types and call them such aspects as agency awareness, sensorimotor awareness, goal awareness, and so on. Some argue that there are tons of different kinds of awareness.
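One way to picture such a taxonomy is as a small set of tagged observation types. The labels below are hypothetical stand-ins for the categories named above (agency, goal, sensorimotor); since the real taxonomy is, as noted, debated, this is illustrative only.

```python
from enum import Enum, auto

# Hypothetical labels for a few of the awareness types named in the text;
# researchers disagree on the true taxonomy, so this is a sketch only.
class AwarenessType(Enum):
    AGENCY = auto()        # aware of what you've done (bought a can of peas)
    GOAL = auto()          # aware of what you plan (drive to the beach)
    SENSORIMOTOR = auto()  # aware of body/sensor state (waterlogged ears)

# An awareness "report" can then be a tagged observation:
report = (AwarenessType.SENSORIMOTOR, "hearing degraded after swimming")
print(f"{report[0].name}: {report[1]}")
```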
A few AI researchers have boldly proclaimed (some say recklessly and loosely) that they somehow have created self-awareness in robots. There is one well-known example of a researcher who claims he developed a robot that can recognize itself in a mirror. This is similar to my comment earlier about the cat and whether it can recognize itself in a mirror. Robots right now that seemingly recognize their mirrored image are criticized by some as a bit of a cheap trick, and not truly representative of the kind of self-awareness that we speak of when referring to human self-awareness.
Why does this have anything to do with self-driving cars?
At the Cybernetic Self-Driving Car Institute, we believe that self-awareness for a self-driving car is essential for ultimately reaching a true Level 5 self-driving car (see my column about the Richter scale for self-driving cars).
Let’s take a look at what it means for a self-driving car to have self-awareness.
Today, a self-driving car will readily drive you along a freeway road and watch for things like cars ahead of you that might be slowing or stopping, or look for lane markers that suddenly shift direction so that the self-driving car needs to also shift direction. This is akin to the small child who can play a game on their smartphone: it is an act of doing something they have an ability to undertake. Is the self-driving car, though, “aware” that it is in fact driving you along on the freeway? Is it aware that it is looking for the lane markers? Or, is it just carrying out the required actions and not in any sense aware that it is doing so?
At the most strategic level of the AI of a self-driving car, we posit that the topmost layer of AI must be doing self-awareness types of activities. The topmost layer should be observing its own behavior, and figuring out what the significance of that behavior is. It is an omnipresent overseer of itself. This is what we are suggesting be considered self-awareness in the case of self-driving cars.
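As a sketch, such an overseer might look like the following. The class name, the behavior log, and the time budget are illustrative assumptions, not a real self-driving architecture.

```python
# A minimal sketch of a topmost "overseer" layer that records what the
# lower driving layers are doing and can answer "what am I doing right
# now, and is it taking too long?" All names here are hypothetical.

class Overseer:
    def __init__(self):
        self.behavior_log = []  # (action, start_time) pairs, most recent last

    def observe(self, action: str, started: float) -> None:
        """Record what the lower layers have begun doing, and when."""
        self.behavior_log.append((action, started))

    def introspect(self, now: float, budget_s: float = 0.5) -> str:
        """Report the current activity, flagging it if it is overdue."""
        if not self.behavior_log:
            return "idle"
        action, started = self.behavior_log[-1]
        elapsed = now - started
        if elapsed > budget_s:
            return f"{action} (overdue: {elapsed:.1f}s, consider fallback)"
        return action

overseer = Overseer()
overseer.observe("tracking lane markers", started=0.0)
print(overseer.introspect(now=0.1))  # within the time budget
print(overseer.introspect(now=2.0))  # overdue: the layer is aware of the delay
```

The design choice worth noting is that the overseer watches the system’s *own* behavior rather than the road, which is what distinguishes this layer from the ordinary perception and planning layers beneath it.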
I’ll freely state that this does not in any manner whatsoever indicate or imply that the AI must be conscious. We are merely borrowing the valuable aspects of self-awareness as it is understood to exist in humans and making sure that we embody that into the AI of the self-driving car.
Why would this make a difference? It will allow the AI to do a much better job of driving the self-driving car, maybe getting us that much closer to the Level 5 true self-driving car. By being aware of its own efforts, the self-driving car should do whatever any sentient being would presumably do, namely use the self-awareness to improve itself, improve whatever it is doing right now, improve what it will be doing next, and otherwise seek safety and well-being for itself.
A self-driving car is driving along and detects that traffic up ahead has come to a halt. The AI has one kind of awareness that the self-driving car is going 50 miles per hour and headed straight into the stopped cars ahead. It attempts to calculate stopping distances and figure out what to do. As it is figuring out what to do, the car is still rolling forward at 50 mph. The self-driving car needs to be self-aware that it is taking time to determine what to do, and that during that time it is getting itself into further hot water. Maybe it should therefore abandon the detailed, time-consuming analysis of what to do and instead pick a quick standby recourse that is ready for whenever needed. Or, maybe it should seek out assistance from other connected cars about what to do.
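A back-of-the-envelope calculation shows why deliberation time matters: while the planner "thinks," the car keeps covering ground at 50 mph. The deceleration rate and the gap to the stopped traffic below are assumed figures for illustration, not real vehicle parameters.

```python
# Ground covered while deliberating, plus kinematic braking distance.
# The 7 m/s^2 deceleration and the 60 m gap are illustrative assumptions.

MPH_TO_MPS = 0.44704  # miles per hour to meters per second

def distance_consumed(speed_mph: float, think_s: float,
                      decel_mps2: float = 7.0) -> float:
    """Meters used up by think_s seconds of deliberation plus hard braking."""
    v = speed_mph * MPH_TO_MPS
    thinking = v * think_s               # rolled forward while deciding
    braking = v * v / (2 * decel_mps2)   # classic v^2 / (2a) stopping distance
    return thinking + braking

gap_m = 60.0  # assumed distance to the stopped traffic ahead
for think_s in (0.2, 1.0, 2.0):
    need = distance_consumed(50.0, think_s)
    verdict = "brake in time" if need < gap_m else "too late: use standby plan"
    print(f"deliberating {think_s:.1f}s -> need {need:.1f} m: {verdict}")
```

Under these assumed numbers, two seconds of deliberation alone eats roughly 45 meters of the gap, turning a feasible stop into an infeasible one, which is exactly the argument for keeping a quick standby recourse ready.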
The self-awareness also applies to what the self-driving car itself can and cannot do. Just as I mentioned earlier that self-awareness encompasses the things you know and also knowing when you don’t know something, likewise for the self-driving car. Suppose the planning guts of the self-driving car come back with an answer to this problem of what to do about the stopped cars ahead, and recommend that the self-driving car immediately jam on the brakes. Is this the right solution?
Suppose the self-awareness element knows that the self-driving car has been having brake related issues lately. The brakes are not available at their usual capability. This is something that needs to be considered when determining the viability of slamming on the brakes. Furthermore, suppose the self-awareness knows that the occupants of the car are not wearing their seat belts. If the self-driving car proceeds to slam on the brakes, the occupants will go flying around the innards of the car and potentially get injured.
The topmost AI layer using this self-awareness opts to nix the proposed tactic of slamming on the brakes. Instead, it recommends that the self-driving car swerve to the left and go into the carpool lane. Now, it could be that the self-driving car doesn’t have enough occupants to legally go into the carpool lane, but the self-awareness figures that it will be worthwhile as a gambit to save the occupants from injury or death, and if this means getting a traffic ticket it is worth it to do so.
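The self-aware veto just described can be sketched as a review step that checks a proposed tactic against what the system knows about its own condition. The state fields and tactic names below are hypothetical, chosen only to mirror the scenario above.

```python
# A sketch of the top layer's veto: before accepting a proposed tactic,
# check it against self-knowledge of the car's own condition. The state
# fields ("brakes_degraded", "belts_fastened") are hypothetical.

def review_tactic(proposed: str, self_state: dict) -> str:
    if proposed == "slam_brakes":
        if self_state.get("brakes_degraded") or not self_state.get("belts_fastened"):
            # Brakes unreliable or occupants unbelted: prefer the swerve,
            # accepting a possible carpool-lane ticket over an injury.
            return "swerve_to_carpool_lane"
    return proposed

state = {"brakes_degraded": True, "belts_fastened": False}
print(review_tactic("slam_brakes", state))  # vetoed in favor of the swerve
print(review_tactic("slam_brakes",
                    {"brakes_degraded": False, "belts_fastened": True}))
```

The key point is that the veto draws on knowledge *about the car itself* (its degraded brakes, its occupants) rather than on fresh sensor data about the road, which is the sense of self-awareness this article is proposing.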
All of this kind of reasoning needs to be included in the AI of the self-driving car. It is the strategic-level “thinking” that will get us toward cars that can drive as humans can. Humans are self-aware of what they are doing as they drive. At the moment of doing something, a human might not be aware of the awareness, but when prompted the human will either register what they are doing or possibly mentally play back what they were doing. Self-awareness by a self-driving car should be taking place at all times, and then be especially invoked when the circumstances warrant.
This is a very practical aspect of AI for self-driving cars.
Now, I know that some of you will try to take this to the extreme. If the self-driving car can be self-aware, does this take us down the path that it will soon be able to start making its own decisions? You might be aware that the famous computer scientist and mathematician John von Neumann is often credited, via a 1950s conversation recounted by Stanislaw Ulam, with first applying the term singularity to ever-accelerating technological progress. In its popular AI framing, developed later by writers such as Vernor Vinge and Ray Kurzweil, the singularity is the idea that AI will become conscious, realize that it exists, and perhaps then be triggered to want to take over mankind. A runaway reaction of one AI combining forces with other AIs might allow the AI to gang up on us humans.
I certainly enjoy watching science fiction movies depicting this aspect, but we aren’t anywhere close to that today. I think we are safe right now to try and leverage self-awareness aspects in our AI for self-driving cars. Of course, I could be saying this only because an AI system is pointing a laser at my head and threatening me if I don’t say that it is okay to let AI systems become self-aware.
This content was originally posted on AI Trends.