By Lance Eliot, the AI Trends Insider
AI of today is considered narrow and brittle, and not at all near what any of us might reasonably agree is a true sense of intelligence. Today’s AI is exciting and helping to advance the role and capabilities of computers, but do not mistake this advancement for becoming sentient. Being able to beat top chess masters or Go players with today’s computing capabilities is not a sign of true overall computer-based intelligence. With faster processors, machine learning, and lots of data, we still don’t have our finger on what it means to achieve agency. In a sense, we are still on the path of computational empowerment, and the grand question is whether somehow, someway, someday, perhaps via a startling breakthrough, we might tip over into true intelligence. It’s the essential tipping point.
Are you the person that’s going to have some flash of insight that gets us across the chasm between contemporary narrow AI and over into AGI (Artificial General Intelligence)?
I hope so! I’m pulling for you to be the one. AGI is the nirvana that most AI researchers are aiming to reach. AGI would be a system that can exhibit the kind of intelligence that you see even in a young child, combining aspects of common sense reasoning with overall reasoning and with whatever else we want to ascribe to intelligent behavior. I would say that those of us who have been slaving away at AI for so many years are hoping that others will come along, stand on our shoulders, and get us to the next step in AI evolution.
See my article about common sense reasoning in AI: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/
See my article about the Turing Test and AI: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/
There are some that believe you’d be better off forgetting what has already been done in AI and starting with a blank sheet. Maybe the ways we’ve devised so far are really a dead-end. You might try mightily to get to the next step, and it won’t ever happen because you are mired in what was done in the past. Thus, some say that you should shove aside the AI of today and think anew. I say if that will get us to AGI, go for it. It seems a bit over-the-top, and I would think that even if what has been done so far is the wrong path, we could still learn from it and head in another direction. But anyway, if today’s AI is too much of an anchor, then I am with you to drop it cold.
See my article about starting over with AI: https://aitrends.com/ai-insider/starting-over-on-ai-and-self-driving-cars/
Suppose we get to AGI, then what happens?
Some would say that we would have true AI. We would presumably have robots that actually can do the intelligent things that people do. It’s an open debate whether robots could physically do what humans do at that juncture. In other words, maybe we aren’t yet able to perfect the physical mechanisms of robots and so we have otherwise been able to imbue them with mental intelligence but not yet physically built them to be like humans. Some believe that there is a tie between us humans in terms of our bodies and our mental processes, such that a robot won’t be able to be as intelligent as a human unless it has a “body” akin to a human body. Others eschew this connection and say that you can have any kind of robot you want, having no body per se or some other kind of “body” and that it just won’t matter – the intelligence is a different beast altogether.
See my article about brainjacking and AI: https://aitrends.com/selfdrivingcars/brainjacking-self-driving-cars-mind-matter/
So, let’s say for the moment we do reach AGI, and maybe it is in a robot or maybe not, since we aren’t sure whether the body aspects matter for reaching AGI. Does anything come after AGI, or is AGI the final end-point of artificial intelligence?
Artificial Superintelligence, a Step Beyond AGI
Well, some would say that there might be a step beyond AGI, namely ASI (Artificial Superintelligence). ASI would exceed human intelligence. AGI arrives at what we consider everyday intelligence, while ASI takes us beyond Einstein and beyond any kind of human intelligence we’ve ever known. We’re not even sure what this ASI would consist of. That makes sense, though, because we are trapped by our own human intelligence. Maybe our human intelligence lacks the imagination that would enable us to envision what superintelligence consists of.
Let’s just say that the ASI is like merging together all intelligence of everyone and it combines and synergizes. All rolled into one. And that whatever embodies this superintelligence will have that kind of mental capability. Does this mean that ASI includes an ability to read minds and have telepathy? That seems like a kind of cheating in that it goes beyond what seems like intelligence as we know it, but, hey, maybe you do get to read minds once you go beyond AGI. Who knows?
Take a look at Figure 1.
There is AI as we know it today, from which we’ll hopefully ultimately get to AGI, though maybe that requires taking some kind of alternative path that we don’t even know about today. It could be that AGI is the end-all. There might not be anything beyond AGI. Maybe we’d all be happy to have computers that have the same mental capabilities as us. But, it could also be that there’s something beyond us, and the AGI might become ASI.
How would the AGI become ASI?
If humans got today’s AI to become AGI, maybe we can push on AGI and get it to evolve into ASI. Or, maybe we aren’t smart enough to do that. Maybe the AGI will somehow be smart enough, even though it presumably is only just as smart as us. On the other hand, maybe the AGI, when running on computers all around the globe all the time, can figure things out that we humans cannot.
And so there are some that believe the AGI will undergo an intelligence explosion, as it were, and feed upon itself to become ASI. There will be a kind of runaway reaction of self-improvement by the AGI. Think of this like a nuclear reactor that goes “critical” and sets off an incredible mental chain reaction. Some would say that this could happen entirely out of human hands. We presumably don’t start it, other than having been the ones that got things to the AGI stage, and we cannot stop it once it gets underway. There aren’t any nuclear rods to pull out to slow down or stop the reactive matter.
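The chain-reaction metaphor above can be sketched as a back-of-the-envelope toy model (purely illustrative; the starting capability, growth rate, and step count are arbitrary assumptions, not predictions about any real AI system). The point is just the compounding feedback: every gain in capability raises the rate of further gains.

```python
# Toy model of a runaway self-improvement loop (illustrative only).
# The numbers here are arbitrary assumptions, not claims about real AI.

def intelligence_explosion(capability=1.0, improvement_rate=0.10, steps=50):
    """Each step, the system improves itself in proportion to its
    current capability -- the compounding feedback behind the
    'chain reaction' metaphor."""
    history = [capability]
    for _ in range(steps):
        capability += improvement_rate * capability  # self-amplifying step
        history.append(capability)
    return history

history = intelligence_explosion()
print(round(history[-1], 1))  # compounding: 1.1 ** 50, roughly 117.4
```

Even a modest 10% gain per step compounds to a roughly 117-fold increase after 50 steps, which is why the runaway scenario is framed as something that could outpace any human reaction time.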
I know that there are plenty of science fiction movies that depict this kind of thing. In some plots, the humans are unable to prevent or stop this from happening. In other cases, they manage to pull the plug just in time, or maybe take hammers and bats to the computers to disrupt them during the chain reaction. Or, a clever human happens to have a USB stick containing a computer virus that they manage to plug into their home PC and it infects the ASI everywhere, halting it in its tracks. By the way, so that you can sleep at night, rest assured I keep such a USB stick next to my bed at night, just in case.
ASI Emerges to Singularity
Anyway, this act of ASI emerging is often referred to as the singularity.
Will we reach a point of singularity? It’s hard to say. It would seem like we need to first get to AGI. For those of you that are overly ambitious, you might say that you are going to skip AGI and go directly to ASI and get us to singularity. Good luck on that.
Let’s assume that the singularity happens, then what? Is there something that comes after ASI? So far, it seems like most predictions are that either the ASI opts to enslave or kill off mankind, or the ASI embraces humans and helps save and extend mankind. It seems like most people are thinking that the singularity will perceive us humans as nothing more than bugs or cockroaches, and so a vote today would probably produce the doomsday or “sad face” scenario as the most likely. If it’s OK with you, I’ll side with the glass-half-full group, and I’ll vote that the singularity likes us and has intelligence that can see the better side of things, so this is the “smiley face” or uplifting scenario.
With man’s inhumanity to man, I realize it may seem hard to believe that the singularity will consider us worthy. But, hey, maybe it will give the singularity something fun and challenging to do: it will seek to shape up mankind, and do so without actually enslaving us. That’s a pretty good mental problem to try and solve, you must admit. Wouldn’t a superintelligence want, and maybe even need, tough problems to solve?
Take a look next at Figure 2.
Will there be any substantial time between achieving AGI and reaching ASI? It could be that the intelligence explosion takes weeks or months, maybe even years. It might sneak up on us. Or, it might occur slowly, and we’ll know it is happening, and maybe we won’t mind since the ASI looks pretty good along the way. Are we lulling ourselves into getting smacked when the actual singularity happens? You pick the outcome, smiley face or sad face.
There are some that suggest the AGI achievement will instantaneously produce the intelligence explosion, and we’ll have ASI in some nanoseconds or picoseconds. Bam, we got AGI, but then we got ASI, all in the blink of an eye. If you are the sad face outcome person, this is a bad thing, and crossing the bridge to AGI was not where we should have gone. If you are the smiley face outcome person, this is a good thing, and we didn’t need to wait around to achieve ASI. The singularity happened and we didn’t even see how it occurred, it just did.
That’s the Big Bang Singularity.
You could argue that there won’t ever be a singularity. Instead, we’ll reach AGI and that’s it. There is no more there. The AGI will be like us. Maybe it can make more of us, and we can make more of it. But we can’t get the AGI to become ASI, nor can the AGI get itself to become ASI. We’ve reached the pinnacle of intelligence, whether human-based or computer-based in form and capability. I know this seems kind of disappointing, since we are always wanting to reach for the stars and find new conquests. But, hey, if you are the sad face outcome person, then you are relieved to think that the AGI is it. No worries about the possibly nefarious singularity.
I’m not sure we’ll be able to put together a proof that the singularity cannot ever occur. In that sense, there will still be an urgent and ongoing quest ahead. I envision teams of AGI and humans, working arm in arm, trying in vain to find the vaunted ASI. It could go on forever.
That’s the Not Singularity. Maybe we can refer to it as waiting for Godot.
Let’s now consider whether we can reach AGI. I believe in my heart we can, but admittedly there’s not much right now that seems to argue for it. As mentioned before, the AI of today only goes so far. If you put more processors into it, will that spark it to intelligence? If you have massive amounts of computer memory, will that do it? Even if we go to quantum computing, is that really the spark, or just the same kind of algorithms and computations that we are using today to do our narrow AI? It might make narrow AI really good, but will it bump us up to AGI?
There’s the Not AGI.
I’ll refer to it as the false hope. We might have a false hope that we can dramatically shift today’s AI into becoming AGI. For those of you that believe the Big Bang Singularity, it’s a relief to believe in the Not AGI, if you also believe that there’s a sad face outcome after reaching singularity. Much of this anguish has been expressed in other ways, and perhaps the famous book and movie about Frankenstein covers some of this ground.
See my article about Frankenstein and AI: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/
What does all of this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI for self-driving cars. We assert that there won’t be true AI self-driving cars until we at least master common sense reasoning for AI, and perhaps not until we also reach AGI.
I know those are fighting words. Allow me to explain.
There are various levels of self-driving cars. At Level 5, it is considered a true self-driving car, one that is driven by the AI and for which no human driver is needed. The AI can drive the car as a human can and does not require human assistance. Self-driving cars at less than Level 5 require a human driver. This means that the human and the AI co-share the driving task. This, though, has inherent problems and can create potentially deadly results.
See my article about the levels of AI self-driving cars: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
See my article that’s a framework for AI self-driving cars: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
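To make the levels concrete, here is a minimal sketch. The one-line summaries are my loose paraphrases for illustration, not the official SAE J3016 definitions; the key point from above is simply that anything below Level 5 still requires a human driver.

```python
# Simplified sketch of the driving-automation levels discussed above.
# The summaries are loose paraphrases for illustration, not the
# official SAE J3016 wording.

DRIVING_LEVELS = {
    0: "No automation: the human does all of the driving",
    1: "Driver assistance: automation aids steering or speed, not both",
    2: "Partial automation: automation steers and accelerates, human monitors",
    3: "Conditional automation: the human must take over when requested",
    4: "High automation: no human needed, but only within a limited domain",
    5: "Full automation: the AI drives anywhere a human could, unaided",
}

def human_driver_required(level: int) -> bool:
    """Per the point above: below Level 5, a human driver is required
    and co-shares the driving task with the AI."""
    return level < 5

print(human_driver_required(3))  # True
print(human_driver_required(5))  # False
```

The co-sharing problem mentioned above lives in that `level < 5` band: the human and the AI each hold part of the driving task, and the hand-off between them is where the deadly results can arise.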
Some believe that the path to a Level 5 self-driving car is by first achieving a Level 4, or maybe even achieving a Level 3 and jumping up to a Level 5. Others believe that you can skip the lower levels and aim to directly reach a Level 5. Whichever way we get there, the question is whether, with the AI we know today, we can reach a true Level 5 self-driving car.
You might have heard on the news that we seem to already have true Level 5 self-driving cars. I assert that’s a debatable claim. If you have a so-called Level 5 AI self-driving car that needs to be geo-fenced, that requires detailed mapping of its surroundings in advance, and that, when it encounters an “abnormal” driving situation, can only stop the car and pull to the side of the road, well, I don’t know about you, but that’s not what I’m aiming for as a true Level 5 self-driving car.
Some say you don’t need common sense reasoning to be able to drive a car. I’m not so sure that’s a valid claim. Some say that the driving task is a very narrow task, like say playing chess or playing the game Go. I’m not so sure that’s a valid claim either. It seems to me that if we want a self-driving car that drives the way a human driver does, the AI needs to be able to exhibit the same kind of intelligent behavior that a human driver does.
It is my postulation that we need more AI than we have today, a kind of AI approaching AGI, in order to fulfill the goal of having a true Level 5 self-driving car. Without those kinds of advances, I’d say we’ll be mired in a “5-ish” level self-driving car, one that we would all reasonably agree is not a true Level 5. This doesn’t mean that the 5-ish level self-driving car won’t be useful; it certainly can be. We can get a lot of mileage out of it (sorry about the pun!), it just won’t be the same as a human-driven car.
Now, if you are the sad face outcome person, you might say that we should be satisfied if we can get to the 5-ish level self-driving car, because maybe if we push to AGI in order to get to a true Level 5 self-driving car, we end up with an AGI that bursts into ASI and all of mankind is destroyed. That’s a rather haunting viewpoint and might deter some AI developers, but probably not many. I’d say that most AI developers want to either reach AGI, or reach a true Level 5 self-driving car, or both.
Indeed, you could see it as:
- It could be that our efforts to achieve a true Level 5 self-driving car will be the driving force that gets us to AGI.
- Or, it could be that we will otherwise discover AGI and then apply it to the area of AI self-driving cars and thus achieve the true AI self-driving car.
If we all somehow break through to the singularity, what happens with AI self-driving cars?
The sad face scenario is that the singularity takes over all of our AI self-driving cars and uses them in one fell swoop to try and kill us all off. In that sense, we have provided a mechanism for our own self-destruction by building and fielding AI self-driving cars, just making things easier for the ASI that decides we’ve got to go. Oops.
I’d prefer to end the discussion by focusing on the smiley face scenario.
The singularity, if it occurs, realizes that we need our AI self-driving cars and improves them in ways that we couldn’t possibly achieve. These become Level 6 self-driving cars, or maybe Level 10 or Level 100. The singularity or ASI opts to benefit all of mankind, and AI self-driving cars are just one of many ways in which it opts to do so. That’s the ASI that I’m hoping for.
Here’s then my final vote on all this. Hold onto your hats. We will indeed collectively achieve AGI and it will be an essential aspect for also achieving true Level 5 self-driving cars. There won’t be the Big Bang Singularity and instead it will occur over a somewhat lengthy period of time. During that singularity emergence, we’ll become at peace with the ASI and the world will be a better place for it.
If that’s too much smiley face for you, sorry about that, but I’m a glass-half-full kind of person. Or, am I just saying this because I am worried about Roko’s Basilisk?
Copyright 2018 Dr. Lance Eliot
This content is originally posted on AI Trends.