By Lance Eliot, the AI Trends Insider
I have a beef with the now seemingly in-vogue use of the “superhuman AI” phrase that keeps popping up in the media.
When I was asked about “superhuman AI” at a recent Machine Learning and AI industry conference, I admit that I wound myself up into a bit of a tizzy and launched into a modest diatribe.
Now that I’ve calmed down, I thought you might like to know what my angst is about the so-called superhuman AI moniker and why it is important to give the matter of its use some serious consideration.
Have you noticed the phrase?
It can be subtle and at times easy to miss.
I’d guess that if you look around, you’ll find the superhuman AI phrase in any number of recent articles about AI breakthroughs, or hear it mentioned during a radio broadcast or on a podcast you listen to while in your car. If so, I realize you might not have given it any thought at all.
In that sense, you could argue that the superhuman AI phrase is not consequential and there’s no reason to get upset about its use. It is perhaps a kind of filler phrase that sounds good and hopefully most people know it likely lacks any substantive true meaning. Just more noise and nothing noteworthy.
On the other hand, there is a potential danger that this superhuman AI phrase is indeed being taken quite seriously by those who are not well versed in AI, and thus it can tend to over-inflate what AI can actually achieve.
There are some in AI that seem to be pleased to inflate expectations about AI, but fortunately there are moderates that rightfully worry that these subtle attempts at overstating AI are going to get everyone into trouble.
The trouble is that we all begin to believe that AI can do things it cannot, and then allow ourselves to become vulnerable to automated systems that fall far short of this mythical and made-up notion of today’s AI.
For my article about AI as a potential Frankenstein, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/
For the potential coming singularity of AI, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/
For idealism about AI, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/
For the Turing test and AI, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/
Superhuman AI Phrase Is Usually Hyped Or Otherwise Misused
I’ll make a bold statement herein and claim that, by and large, when someone reports that they have developed an AI system characterized as superhuman AI, they are generally being misleading.
It could be marketing hype.
It could be personal bravado.
It could be they are purposely, or possibly even inadvertently, deploying hyperbole.
It could be that the person is naive.
It could be that they are unknowingly overstating things.
It could be that they don’t care whether their statements are accurate or not.
It could be that they are saying it because everyone else is saying it.
And so on.
They might also believe that for their definition of superhuman AI, they are reasonably making use of the phrase.
This does bring up a bit of a conundrum about the superhuman AI phrase.
There is no across-the-board, universally agreed, standardized and codified definition that everyone accepts as to what superhuman AI consists of.
Nobody has laid out precise and demonstrable metrics that we should use to decide concretely whether there is or is not superhuman AI involved.
Therefore, with no specific rules to be followed, you can use the phrase however you might wish. There isn’t a word-usage cop standing by the roadside with an improper-AI-phrases radar gun to catch misuse of the superhuman AI phrase. It is instead a wild west.
You can even fiendishly use the phrase to suggest or imply with a wink-wink that there is really a lot of super-duper AI in your system, but at the same time claim when pressed that you were intending to use it in a less-than over-the-top manner. The looseness of the existing commonplace unstated definition allows for making what you want of the handy and catchy phrase and gives you face-saving wiggle room to do so.
What Does Superhuman AI Mean?
Let’s consider what the phrase potentially means.
The first word, superhuman, we would all generally agree, means accomplishing something of an extraordinary nature, beyond what a normal human would be able to do.
The other day a man lifted a car because his child had gotten pinned underneath it. Normally, it is doubtful that he could lift a car by himself. He somehow momentarily gained a kind of superhuman strength, perhaps due to adrenaline running through his body, and was able to lift that car.
You could say that he was superhuman.
But, does this mean that in all respects he is superhuman?
Can he lift a building like Superman? No.
Could he even lift a car again? Unlikely, unless his child once again got stuck underneath one.
For all reasonable notions of superhuman, I think we could say that he momentarily displayed an extraordinary strength that a normal person might not typically have.
He wasn’t therefore a now-permanent superhuman that forever would have this super strength. He did not arrive here from the planet Krypton. Instead, he momentarily appeared to engage in an activity that most of us would probably be willing to say seems relatively superhuman-like.
Suppose though that we went out and found a really strong weightlifter and asked that person to lift the same car as the man that had been saving his child. If the strong lifter could do so, is that strong lifter also then someone that we would immediately applaud as being superhuman?
I’d suggest we would not.
This brings up the aspect that when we refer to someone as superhuman, we probably need to have some basis for comparison.
If the basis for comparison is solely confined to what the particular person could normally do, this would seem to quite dilute the idea of being superhuman.
I tried to open a screw-top can the other day and could not do so. I tried and tried. A few days later, I tried again and managed to squirm and grunt and got that darned lid to turn and come off.
I was now superhuman!
I don’t think that it seems fair or reasonable to say that I was superhuman in that case. Big deal, I opened a screw-top can that was somewhat jammed up. A lot of humans could do the same. Just because I was able to exceed my prior effort, it doesn’t seem to warrant handing me a trophy as being superhuman.
I think you would likely agree.
Many suggest it was the famous philosopher Friedrich Nietzsche, with his notion of the Übermensch, who first helped bolster the idea of someone potentially being superhuman.
You might become superhuman by perhaps being genetically bred in a manner that gives you greater strength or greater intelligence than other humans. Or, maybe you have a cybernetic implant inserted into your body that gives you superhuman strength, or you take special drugs that make your mind more powerful and superhuman, similar to what is often portrayed in many science fiction movies.
There is a slippery slope between talking about superhuman and contemplating the aspects of Superman or Superwoman.
The word “super” in superhuman can allude to having super powers, and so you start to think of Superman or Superwoman. Even though they don’t exist and are only fictional characters, the word superhuman nonetheless gets a glow from those now ubiquitous fake characters. It is easy to mentally conflate “superhuman = Superman or Superwoman,” which is part of the reason that superhuman is a lousy word and distorts our sense of what is real and what is not.
In spite of the troubles associated with the meaning of the word superhuman, we’ll go ahead and add to the mess by appending the “AI” moniker to the superhuman word.
Now what do we mean?
If I develop a tic-tac-toe game that uses AI techniques and even executes on so-called AI hardware, can I claim that my tic-tac-toe game is superhuman AI?
I wouldn’t think so.
But, I assure you, there are some that would happily say it is.
It’s the best darned tic-tac-toe game player ever devised.
It exceeds anyone else in being able to play tic-tac-toe.
It will never ever lose a tic-tac-toe game.
Must be AI.
Probably a breakthrough.
Must be superhuman AI.
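To make the tic-tac-toe point concrete, here is a minimal sketch (my own illustration, not anyone’s actual product) of a player that never loses. It is plain brute-force minimax lookahead, in negamax form: no learning and no understanding of any kind, yet it plays the game flawlessly.

```python
# A perfect tic-tac-toe player via exhaustive minimax (negamax form).
# Brute-force lookahead: no learning, no "understanding" -- yet it never loses.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if that side has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None  # game already decided
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        opp_score, _ = minimax(board, opponent)
        board[m] = " "
        if -opp_score > best_score:  # opponent's loss is our gain
            best_score, best_move = -opp_score, m
    return best_score, best_move

score, move = minimax([" "] * 9, "X")
print(score)  # 0 -- with perfect play on both sides, tic-tac-toe is a draw
```

That this simple search is unbeatable says more about the tiny size of tic-tac-toe’s game tree (a few hundred thousand positions) than about any intelligence in the code.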
Where the superhuman AI phrase especially seems to come up is in reference to automated systems that play games. Automated systems developed for chess have been able to beat human grand masters and reach new heights of chess-playing strength.
Some say that’s superhuman AI.
Top-playing automated systems for Scrabble reached a zenith around 2006.
More recently, in 2016, an automated system beat a top-ranked human player at Go, considered by many to have been a nearly unreachable goal due to the nature of the rules of Go and the kinds of strategies used. Superhuman AI. You’ve likely seen the plentiful ads about IBM Watson winning at Jeopardy. Superhuman AI.
I want to outright congratulate those that were able to get automated systems to play at such a vaunted human level in those games. They used every computer science and AI technique and novelty to reach that accomplishment. Hear, hear!
Were those all superhuman AI examples?
Some say that games are great as a means to perfect many AI and computer science techniques and approaches, but they are quite narrow in their domains.
There are games with perfect information, meaning that you are informed of all events that occur throughout the playing of the game, knowing the starting position and each of the moves as they arise. Chess is an example of a game with “perfect” information since you know how the game starts and you see each move that occurs along the way. Players don’t somehow hide their moves. Also, there isn’t any “chance” involved, since there aren’t dice or anything else being used to determine the moves. It is straight ahead. Imperfect information games are those that do not fit the definition of a perfect information game.
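The distinction boils down to two properties: every player sees the starting position and every move, and no chance element (dice, shuffled decks) determines play. A small sketch, with class and field names of my own choosing, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Game:
    """Crude model of the perfect-information distinction."""
    name: str
    moves_visible: bool  # all players see the starting position and every move
    has_chance: bool     # dice, shuffled decks, or other random elements

    @property
    def perfect_information(self) -> bool:
        # Perfect information requires full visibility AND no chance element.
        return self.moves_visible and not self.has_chance

games = [
    Game("chess", moves_visible=True, has_chance=False),
    Game("Go", moves_visible=True, has_chance=False),
    Game("poker", moves_visible=False, has_chance=True),  # hidden hands, shuffled deck
]
for g in games:
    print(f"{g.name}: perfect information = {g.perfect_information}")
```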
Does playing games well or even extraordinarily with an automated system mean that it is superhuman AI?
Suppose that we don’t use any special AI techniques at all, and merely leverage having a much vaster amount of online storage available than a human could likely hold in their mind, and we do a great job of searching an extremely large space of pre-calculated best moves (having been pre-calculated under human-led direction for the automated system).
Is it fair to toss the “AI” into the superhuman wording?
Superhuman AI Outside Of Games
Let’s consider domains beyond those of playing games.
I had helped an assembly plant put a robotic arm into their assembly line, doing so to speed-up the line and reduce the labor needed to produce their product.
Most would agree that the field of robotics generally fits within the overarching definition of AI.
I’d like to brag about the robotic arm work, but admittedly the mechanical arm was merely trained by having a human move the arm back-and-forth until it caught onto the movements needed. I also added various safety-related code to make sure the robotic arm wouldn’t go astray. It was a nice project and would also reduce the various repetitive-motion injuries that the human workers had been experiencing while doing the same task. I also trained the former assembly worker in how to take care of the robotic arm and make changes and updates to the code as needed.
Was the robotic arm that I helped customize and got working an example of a superhuman AI accomplishment?
Yes, you could apparently make such a claim.
It used robotics, which as mentioned generally fits within the AI rubric. It was superhuman because it can easily beat any human at the assembly line task. Whereas before a human could do the overall task about six times per hour, the robotic arm could nearly double that pace (about 10 times per hour). The robotic arm could work 24×7, no need to rest or relax or take breaks. For all practical purposes, you could assert that it was superhuman in comparison to what a human could do. You could say it was superhuman beyond any other human, since no human could possibly unaided by machinery work like that.
Personally, it would give me heartburn to go around and say that this robotic arm was superhuman AI. But, that’s just me.
You might say that physical things don’t count for the superhuman AI moniker.
In the case of the robotic arm, it wasn’t “thinking” in any superhuman kind of way. Therefore, maybe we should only use superhuman AI when the matter at-hand involves thinking, akin to winning at chess or Scrabble.
Does the superhuman AI have to be the best in comparison to all humans? In other words, if we construct an AI system that plays chess, and it beats the topmost human chess players, we might then assert or infer that it can beat all humans in that domain and therefore it is rightfully superhuman.
If it cannot beat all humans in the domain, what then?
Someone develops a Machine Learning capability that is able to analyze MRIs and find cancerous regions, doing so more consistently than the average medical doctor and at times better than the best medical specialists in that domain. Let’s assume we cannot say it is always better than all humans in that respect. It is only sometimes better.
Coming Up With Categories Of Human AI Achievement
Some suggest that we ought to have a graduated series of categories that lead-up to being referred to as superhuman.
We might do this:
- Superhuman AI = better than all humans in the domain, as far as we can infer
- High-human AI = better than most humans in the domain, high as in heightened
- Par-human AI = similar to most humans in the domain or “on par” with humans
- Sub-human AI = less than or worse than most humans in the domain, sub-par
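One way to make these categories operational is to measure the AI against a sampled pool of human performers in the domain. The sketch below is my own illustration, and the thresholds are arbitrary choices on my part, since no codified standard exists:

```python
def classify_ai(ai_score, human_scores):
    """Bucket an AI system relative to a sample of humans in one domain.

    We can never test against all 7.5 billion people, so "superhuman"
    is only ever an inference from a finite sample of human performers.
    """
    beaten = sum(ai_score > h for h in human_scores)
    fraction = beaten / len(human_scores)
    if fraction == 1.0:
        return "superhuman AI"   # better than every sampled human (inferred, not proven)
    if fraction >= 0.9:
        return "high-human AI"   # better than most humans
    if fraction >= 0.4:
        return "par-human AI"    # roughly on par with humans
    return "sub-human AI"        # worse than most humans

# Example with chess-style ratings for a small sampled pool of strong players.
print(classify_ai(2850, [2800, 2750, 2700]))  # superhuman AI (within this sample!)
print(classify_ai(1200, [2800, 2750, 2700]))  # sub-human AI
```

Note how the verdict depends entirely on which humans happen to be in the sample, which is exactly the inference problem discussed next.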
Notice that I’ve qualified the superhuman in two important respects, one by the aspect of saying it pertains to a particular domain, and the other that we can only infer that the AI in that case is better than all humans in the domain.
The latter qualification is due to the fact that we cannot really say whether or not the AI might be better than all humans (unless we could really have all humans line up and showcase that none are better than the AI, all 7.5 billion people on the planet).
Let’s take chess.
Just because you can beat the most recent top-rated chess masters does not mean you can beat all humans in chess. There might be someone that is a chess wizard that nobody even knows exists. Or, maybe next week a chess wizard appears seemingly out of nowhere that can play chess better than any other human on Earth.
Thus, we’ll have to approximate: based on whatever kind of circumstance is involved, such as with chess, we’ll say that the AI is better than “all” humans, but do so knowing that we are making a bit of a leap of faith on that aspect.
Some react to the superhuman AI phrase by believing that the superhuman AI can do anything in any field of endeavor.
That’s why I’ve indicated in the aforementioned categories that the AI is superhuman only within the domain of choice, such as chess or Go.
This is also why the superhuman AI moniker is on that slippery slope.
If you tell someone that you have superhuman AI that can play chess, they might think this implies the AI can play any kind of game at that same topmost level. Since we all know that chess is hard, presumably the AI can just switch over and be superhuman at, say, Monopoly (which most people would say is a lot less arduous than chess, and so it should presumably be “easy” for a superhuman AI chess-playing system to win at Monopoly).
Of course, today’s so-called superhuman AI instances are all within a narrow domain and do not have the ability to just switch over on their own and be tops at other domains.
Worse still, some people that hear about something described as superhuman AI will often assume that it must also have common-sense reasoning and even Artificial General Intelligence (AGI). If it can play chess so well, it likely can solve world hunger or clean up our environment too. Nope, sorry, but those aren’t in the cards right now.
For my article about the limits of today’s AI when it comes to common sense reasoning, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/
For issues about AI boundaries, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/
For reasons to consider starting over on AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/
For conspiracy theories about AI, see my article: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/
Superhuman AI And Autonomous Cars
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars.
There are some within the self-driving driverless car industry that are either using the phrase “superhuman AI” or are letting others use that phrase for them.
Is the superhuman AI moniker applicable to aspects of today’s AI self-driving cars?
For the fake news about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/
For the sizzle reels that mislead about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/sizzle-reel-trickery-ai-self-driving-car-hype/
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and the steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here are the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
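The steps above form a repeating loop, executed many times per second. As a highly simplified sketch (all function names and data here are my own illustration, not any automaker’s actual code):

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    # Highly simplified stand-in for the AI's virtual model of its surroundings.
    obstacles: list = field(default_factory=list)

def collect_and_interpret(raw_sensor_feeds):
    """Step 1: turn raw camera/radar/LIDAR feeds into detected objects."""
    return [obj for feed in raw_sensor_feeds for obj in feed]

def sensor_fusion(detections):
    """Step 2: merge overlapping detections from different sensors."""
    return sorted(set(detections))

def update_world_model(model, fused):
    """Step 3: refresh the virtual world model with the fused detections."""
    model.obstacles = fused
    return model

def plan_actions(model):
    """Step 4: decide what the car should do next."""
    return "brake" if "pedestrian" in model.obstacles else "cruise"

def issue_controls(action):
    """Step 5: translate the plan into car control commands."""
    return {"brake": {"throttle": 0.0, "brake": 1.0},
            "cruise": {"throttle": 0.3, "brake": 0.0}}[action]

# One iteration of the driving loop.
feeds = [["pedestrian", "car"], ["car", "lane_marking"]]
model = update_world_model(WorldModel(), sensor_fusion(collect_and_interpret(feeds)))
commands = issue_controls(plan_actions(model))
print(commands)  # {'throttle': 0.0, 'brake': 1.0}
```

The real pipeline at each step is, of course, vastly more involved; the sketch only shows how the stages feed into one another.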
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the superhuman AI discussion, let’s consider how this catchy phrase is being used by some in today’s AI self-driving car industry.
For the sensors side of things, it seems like anytime an improvement is made in being able to analyze an image and detect whether there is, say, a pedestrian in a picture, there is someone that will claim the AI or ML based capability is superhuman AI. This seems to suggest that if the shape of a pedestrian can be picked out of a hazy image, the system is somehow better than a human’s ability to do human-based image analysis. Rarely is there much substantive support provided for such a contention.
Furthermore, given the relatively brittle nature of most of today’s image-processing capabilities, even if the new routine can do a better job at that one particular aspect, one has to ask whether it is fair and reasonable to then label it superhuman AI. We all know that a human could do a much broader scope of image analysis and likely “best” the image-processing software in an overall effort at image detection.
We also know that the image processing software has no “understanding” whatsoever about the image that it has detected. It has found a shape and associated it with something within the system tagged as a pedestrian.
Does it “know” that a pedestrian is a human being that breathes and walks and lives and thinks?
Does it “know” that a pedestrian might suddenly run or jump or shout at the self-driving car?
Yet, is it truly superhuman AI?
Seems like a stretch.
Trouble’s Afoot With Superhuman AI Proclamations
For those AI developers that would argue that it is superhuman AI, I’ll simply repeat my earlier qualm that for those that aren’t aware of AI’s limitations and constraints as it exists today, your willingness to toss around the superhuman AI moniker is going to get someone in trouble. The public will falsely believe that the AI of the self-driving car is more sophisticated and more capable than it really is.
Regulators are going to falsely believe that the AI of the self-driving car is more robust and safer than it really is. And so on, down the line for all of the stakeholders involved in AI self-driving cars.
I’d be willing to bet that this wanton use of “superhuman AI” will ultimately come to the spotlight when there are product liability lawsuits lobbed against the auto makers and tech firms that brandished such wording.
“Didn’t the ‘superhuman AI’ phrase mislead consumers into believing that their AI self-driving car could do things that it really could not?” will be a question asked during the case. “By what manner did you arrive at being able to proclaim that your AI self-driving car had this kind of superhuman AI?” will be another question. And so on.
For my article about the safety concerns of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/
For my article about how the levels of AI self-driving car might be misleading, see: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
For the responsibility aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/
For the dangers associated with being an egocentric AI developer, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/
For the marketing of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/marketing-self-driving-cars-new-paradigms/
Some argue that being perhaps over-the-top is the only way to make sure that funding and energy continues to pour into the AI field.
Presumably, a bit of hyperbole is worth the “cost” as it provides an overwhelming goodness when considered as a trade-off to otherwise losing steam and momentum in the quest to reach true AI.
If we went around and told people that we are in the midst of AI systems that are par-human, or even sub-human, it might be a shock that would undermine investments and faith in pushing ahead with AI.
Superhuman AI seems like a modest enough phrase that it can be used without having an abundance of guilt or misgivings, at least for some.
You’ll have to make that decision on your own and live with it.
Fortunately, I don’t think you need to be superhuman to make the right decision about this.
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]