Key Equation for Predicting Year-to-Prevalence for AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

We all enjoy a good equation. How many times have you quoted or seen Einstein’s famous equation about matter and energy? Many times, I would wager. I’m assuming most of you likely memorized ad nauseam the Pythagorean Theorem of a-squared plus b-squared equals c-squared. Like it or not, the Pythagorean equation is a crucial building block for mathematics and infuses geometry and calculus (you likely haven’t used it in a while, and my bringing it up might trigger either pleasant memories of your math classes or cause you undue angst, oops).

One of the most famous probabilistic formulas is the celebrated Drake equation, which was devised by Frank Drake in the 1960s to help stir discussion and debate about the odds that there is life elsewhere in our galaxy and that we might be able to communicate with it. Some of you might be aware of the SETI (Search for Extraterrestrial Intelligence) program that actively is scanning for any sign from another planet that someone or something out there might exist. We here on earth are using various electronic and computer-based means to detect signals from outer space.

A particularly controversial element of such efforts is whether we should be undertaking a passive search or a more active search for intelligent life that might exist elsewhere. A passive search involves simply trying to catch hold of any signals and quietly take note, here on earth, that perhaps something is out there. An active search consists of sending out a signal to let the beyond-our-planet listeners know that we are here, in hopes of sparking a response. Of course, generating a response might be good or bad for us, and the renowned Stephen Hawking forewarned that we might stir up a hornet’s nest that will ultimately cause our own destruction and demise.

There are some that say there’s little or no chance of any intelligence existing out there in our galaxy. Indeed, they would say that the chances are so low that it is a waste of time and attention to be searching for it. Use our resources for other more worldly and sensible pursuits, they would assert. Plus, if by some bizarre chance there is something out there, the active search method is crazily dangerous, and thus at the very least let’s stop any attempts to prod or poke the unseen and unheard-from beast.

I realize that to some this seems like speaking from both sides of the mouth: on the one hand, they say there’s essentially no chance of anything being out there, yet they also fret about poking it into awareness of us. They would counter-argue that it is merely prudent not to do something unwise, even if the odds of the unwise act bearing fruit are near to nil.

What is the chance of there being intelligent life somewhere in our galaxy other than here on earth, you might be pondering?

As an aside, there are usually wisecracks about the assumption that we here are intelligent life, and you can try to make that joke if you like, it’s mildly funny, I suppose, and you can also seek to debate whether we are “intelligent” at all and maybe there are other life forms out there that are a zillion times more intelligent than us, etc.

I’m not going to entertain those debates herein and merely lay claim that we are intelligent and that yes there might be other intelligent life forms, including ones that might be well-beyond us in terms of some kind of super-intelligence. The existence of such super-intelligent beings, if there are any, would not mean that we aren’t intelligent at all; it would instead just temper our self-inflated belief into realizing we are of a lesser intelligence (and yet still retain the classification of being intelligent), I suggest.

Back to the question about the odds of intelligent life beyond us.

Let’s first agree that we’re primarily interested in intelligent life, meaning that if there is some kind of primitive life oozing someplace and for which it or they cannot communicate in any modern means, we’ll set those aside as being unworthy for the moment of trying to find. Sure, we’d keenly like to know that there is something percolating, though this is a lot less interesting overall than finding something already up-and-running that exhibits intelligence as we think today of the notion of being intelligent.

Presumably, an intelligent life form would be emitting various kinds of electromagnetic radiation, doing so as we indeed do here on earth. That intelligence might not be emitting the radiation for purposes of letting others know that they exist, and might simply be making the emissions as a natural act of how they live, similar to how we watch TV and use our cellphones (I assume most of us do so for our own benefit, and not due to hopes of signaling to other life forms that we exist).

Astronomer Frank Drake had been using a large-scale radio astronomy device in West Virginia in the late 1950s to scan for radio waves reaching our planet from other star systems. His project was somewhat initiated due to ongoing debates at the time about whether or not there could be life anywhere other than earth. Some said the idea of life elsewhere was ludicrous. Some suggested that even if there was life elsewhere, it might be early in its development and therefore not yet sophisticated enough to communicate, either by intent or by happenstance.

100 Million Worlds Likely Can Sustain Life

There were numbers floating around among scientists and especially astronomers that there might be 100 million worlds in the universe that could sustain life as we know it. How was the 100 million number derived? It was based on the belief that there might be on the order of 100 million million million suns (that’s 10^20), that perhaps one in a thousand of those suns had planets revolving around it, and that successive one-in-a-thousand cuts would further narrow things to planets at a suitable distance, of sufficient size, and composed of the aspects needed to foster life. If you multiply that out, you arrive at the handy number of 100 million planets that in-theory could have life on them.

Frank Drake opted to put together a small conference of those keenly interested in the serious pursuit of intelligent life and hoped to get vigorous discussion going. In preparing for the conference, he decided to jot down a means to predict the odds of there being intelligent life in our galaxy. Using the same kind of logic that had been used to create the 100 million planets number for the universe, he thought it might be handy to write down the factors and craft an equation that all could see and chat about. The equation was intended for shaping debates and not as an attempt to arrive at some kind of magical equation such as Einstein’s famous E = mc^2.

The equation that Frank presented has since become famously known as the Drake equation, giving due credit to his having derived it. Over the years, many have pointed out that the equation fails to include a number of other factors that should presumably be included. That’s fine, and it was not Frank’s assertion that his equation was the end-all be-all. Today there are a slew of variants; many have added more factors, while some have altered his stated factors. You might say it is a living equation in that it continues to foster debate and continues to generate other formulas that could be better (or worse) than his original stipulation.

Frank Drake’s equation consists of trying to arrive at a number N, which would purport to be the number of civilizations within our galaxy that might exhibit intelligence and for which it might be possible to communicate with them.

You can quibble somewhat: suppose there are intelligent life forms that are hiding and purposely not wanting to communicate with us, or suppose there are multiple intelligent life forms on any given planet, and does that count as one or as several, and so on. Generally, the number N is going to be large enough that we can set aside those rounding-error kinds of exceptions and just contemplate the magnitude of the number N itself.

Here’s the Drake equation:  N = R-star x Factor p x Factor ne x Factor l x Factor i x Factor c x Factor L

Essentially, you multiply together seven key factors and that gets you to the number N. Each of the factors is logically sensible in terms of what you would expect to consider when making this kind of an estimate. The factors tend to build upon each other, in the manner of slicing up a pie incrementally until you get to the final slice.

This is reminiscent of the earlier indicated 100 million number. You might recall it was derived by first considering the whole pie, namely how many total planets might there be in the entire universe (as based on the number of suns and how many planets they might have). Then, the pie was sliced-up by estimating how many of those planets might sustain life.

Drake’s equation does the same thing and takes the estimate deeper by slicing further to how many of those life forms might arise to intelligent beings, and how many of those might arise to intelligent beings that emit some kind of signal that we could detect such as maybe watching TV or using their smartphones (or whatever).

Let’s consider each of the factors in the Drake equation.

  •         The R-star is an estimate of the average rate of star formation within our galaxy.
  •         The Factor p is the fraction of those R number of stars that would likely have planets.
  •         The Factor ne is the estimated average number of those planets that could support life.
  •         The Factor l is the estimated fraction of those planets that actually do develop life, as based on the planets supporting the emergence of life forms.
  •         The Factor i is an estimate of how many of those planets that are able to develop life then produce intelligent life (which we’ll refer to as civilizations).
  •         The Factor c is the estimated fraction of those that have intelligent life and make use of some kind of technology that produces emissions that we could detect.
  •         The Factor L is the estimated length of time in years that the intelligent life that is emitting such emissions does so.
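To make the multiplication concrete, here is a minimal sketch of the equation in Python; the plug-in values are purely illustrative placeholders of mine, not estimates endorsed by Drake or anyone else.

```python
# Minimal sketch of the Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All plug-in values below are illustrative placeholders only.

def drake_n(r_star, f_p, n_e, f_l, f_i, f_c, l_years):
    """Multiply the seven factors to estimate N, the number of
    communicative civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * l_years

n = drake_n(
    r_star=1.0,      # average rate of star formation per year in our galaxy
    f_p=0.5,         # fraction of those stars that have planets
    n_e=2.0,         # average planets per such star that could support life
    f_l=1.0,         # fraction of those planets that actually develop life
    f_i=0.01,        # fraction of life-bearing planets producing intelligence
    f_c=0.01,        # fraction of civilizations emitting detectable signals
    l_years=10_000,  # years such a civilization keeps emitting
)
print(n)  # ~1.0 with these illustrative values
```

Swapping in your own estimates for the seven arguments is the entire exercise; the equation itself stays a plain product.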

I hope you can see that the Drake equation is actually quite simple and readily digested. I am not denigrating the formula by saying so. In fact, I applaud the equation for its ease of comprehension.

Had the formula been arcane, I doubt that it would have gained such widespread interest and popularity.

I’ll also mention that it is interesting that it has seven factors, rather than say a dozen or perhaps two dozen or more. When you consider the other kinds of factors you might want to include, this equation could easily grow to be a long list of factors. The beauty of having just seven factors is that the equation is kept to a core set that is again readily understood. Plus, it conforms to the equally famous indication in cognitive psychology that we humans prefer things that are about seven items, plus or minus two, which was identified by George Miller in his well-known 1956 paper in the journal Psychological Review.

Multiply the Factors to Get to N

Another fascinating aspect is that the factors are all multiplied together. Again, this suggests simplicity. If the factors involved doing complex transformations and using say square roots or a multitude of additions and divisions, it would be difficult to readily calculate and would be confusing to the naked eye. Instead, you’ve got a series of straightforward factors and they are multiplied together to arrive at the sought number N.

The factors appear to be simple and the equation appears to be simple, which makes it ideal for being used and discussed. Meanwhile, let’s all agree that coming up with the numbers that go into those factors is a bit more challenging. The numbers that you plug into the factors are going to be estimates. Those estimates are going to potentially spur tremendous debate.

In fact, most of the debates about Drake’s equation are not about the equation per se, but instead about the estimates that one might plug into the factors of the equation. That there is disagreement about the estimates is not especially unnerving, nor does it somehow undermine the equation itself. You need a lot of science to come up with the estimates. There can be bona fide disagreement about how you come up with the estimates and whether they are any good or not.

What was the N that Drake and his colleagues came up with in 1961, based on his formula? Well, the group decided a range would be the more prudent way to express N, and they generally arrived at a value of between 1,000 and 100,000,000.

You might be puzzled at the range, since it obviously seems like a rather large range. Here on earth, if I told you that the number of cows on a farm ranged from one thousand to perhaps one-hundred million, you’d think I had gone nuts. One-thousand is a pretty small number, while one-hundred million cows is a humungous number (imagine how much milk you could produce!).

In defense of the estimated range, you could say that they came up with a number greater than zero and that it is also less than some truly large number such as estimates into the billions. Of course, some would claim that the “correct” number is so close to zero that it might as well be considered zero (these are the claimants that say there is no other intelligent life out there that is in a posture to somehow communicate with us).

People have played quite a bit with the Drake equation and its estimates. I mentioned earlier that you could propose to include additional factors, or modify existing factors, or drop-out some of the factors. Likewise, you can do all sorts of variants on how to arrive at the estimated values that plug into the factors. Numerous studies that use Monte Carlo models and simulate varying estimates have suggested that the N can vary rather wildly.
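That kind of Monte Carlo exploration is easy to reproduce. The sketch below draws each factor log-uniformly from wide ranges (the ranges are my own illustrative choices, not established scientific estimates) and shows how wildly N can swing:

```python
import math
import random

# Monte Carlo sketch of the Drake equation: each run draws the uncertain
# factors log-uniformly from wide, purely illustrative ranges and computes N.

def log_uniform(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_n():
    r_star = log_uniform(1, 10)       # star formation rate per year
    f_p = log_uniform(0.1, 1.0)       # fraction of stars with planets
    n_e = log_uniform(0.1, 5.0)       # life-capable planets per such star
    f_l = log_uniform(0.001, 1.0)     # fraction that develop life
    f_i = log_uniform(0.001, 1.0)     # fraction that develop intelligence
    f_c = log_uniform(0.01, 1.0)      # fraction emitting detectable signals
    l_years = log_uniform(100, 1e8)   # years of detectable emissions
    return r_star * f_p * n_e * f_l * f_i * f_c * l_years

random.seed(42)
samples = sorted(sample_n() for _ in range(10_000))
print(f"min: {samples[0]:.2g}  median: {samples[5000]:.2g}  max: {samples[-1]:.2g}")
```

Even with these made-up ranges, the smallest and largest simulated N differ by many orders of magnitude, which echoes why the original 1,000-to-100,000,000 range should not be so surprising.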

I’d say that the Drake equation was immensely successful in being able to get your arms around the question of whether or not there is intelligent life out there that we might be able to communicate with. Regardless of how “good” the equation is, and regardless of how hard or wildly differing the estimates of the factors are, there is nonetheless a healthy ongoing dialogue on the topic.

If we didn’t have a Drake equation it would make things immensely difficult to have a discussion on the topic at-hand about intelligent life elsewhere. Everyone would be waving their arms and not be able to be specific to the topic. Overall, the Drake equation highlights the value of having a kind of anchor, around which discussion can grow and mature.

Not having an anchor tends to foment discourse that is obtuse and wandering when debating complex matters, particularly when there are heated and divergent views.

Having an anchor is like planting a tree, and you can then watch as additional discourse grows around it.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One of the most outspoken national and worldwide debates involves when AI self-driving cars will be “here” in terms of being available for use. This question continually arises at conferences and by those within this field, along with it being asked by the general public, and by regulators, and by many other stakeholders.

I propose that we derive a kind of Drake equation to aid in the debate. Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
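The steps above form a repeating processing loop. A highly schematic sketch follows; every function, class, and value in it is a hypothetical placeholder for illustration, not any actual system’s API:

```python
# Highly schematic sketch of the five-step AI driving task loop listed
# above. Every name here is a hypothetical placeholder, not a real API.

class WorldModel:
    """Minimal stand-in for the virtual world model."""
    def __init__(self):
        self.obstacles = []

    def update(self, fused):
        # Step 3: virtual world model updating
        self.obstacles = fused["obstacles"]

def fuse(readings):
    # Step 2: sensor fusion -- merge overlapping sensor reports into one view
    obstacles = set()
    for report in readings.values():
        obstacles.update(report)
    return {"obstacles": sorted(obstacles)}

def plan_actions(world_model):
    # Step 4: AI action planning -- brake if anything is in the path
    return ["brake"] if world_model.obstacles else ["cruise"]

def driving_cycle(readings, world_model):
    fused = fuse(readings)            # steps 1-2: collect & fuse sensor data
    world_model.update(fused)         # step 3: update the virtual world model
    plan = plan_actions(world_model)  # step 4: plan the next actions
    return plan                       # step 5: commands to issue to the car

wm = WorldModel()
print(driving_cycle({"camera": ["pedestrian"], "radar": []}, wm))  # ['brake']
print(driving_cycle({"camera": [], "radar": []}, wm))              # ['cruise']
```

The point of the sketch is only the shape of the loop: sense, fuse, model, plan, act, over and over, many times per second.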

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Characteristics of the Prediction Equation

Returning to the topic of predicting the advent of AI self-driving cars, let’s consider the characteristics of an equation that could aid in such an important endeavor.

First, consider the matter of who makes predictions about the advent of AI self-driving cars.

There are various technologists that offer their opinions about AI self-driving cars and proffer when we’ll see those vehicles on our streets and byways. One upside of technologists is that they hopefully are versed in the technology and able to judge how AI and autonomous capabilities are progressing. Yet not all such tech-related prognosticators are particularly versed in the specifics of AI self-driving cars; some are merely guessing from afar at what is happening, which I at times find to be an overreach on their part, since they are actually ill-informed on the matter (worse still, others that are not in the high-tech arena assume that these misinformed prophets do know what they are talking about!).

They should stick to their knitting.

Let’s also though consider that those technologists truly versed in AI self-driving cars and making predictions might do so without any tangible rhyme or reason. Sometimes they make a “gut” or instinct proclamation. Sometimes they are so enamored by the allure of AI self-driving cars that they are overly optimistic and make predictions based on emotional excitement more so than serious thought.

Throughout the history of technology, we’ve certainly seen quite a number of rather overly optimistic predictions that did not come to fruition in the time frame offered. It’s an easy trap to fall into. There’s the classic 80/20 rule that the first 80% of something is easy or easier to get accomplished and the last 20% is arduous. Heady technologists will often experience the first 80% and extrapolate that the remaining 20% will proceed at the same pace. This often is not the case. It’s the last-mile problem of getting the hardest parts done at the end of the journey.

We also need to clarify that technology in the case of AI self-driving cars can cut both ways, presenting capabilities to achieve autonomy, but also inhibiting autonomy due to the lack of as-yet known approaches, techniques, and computing tools. As such, I find it useful to consider technological advances and how they are formulating, while also considering technological obstacles, both those already known and those as-yet unknown that will be discovered further down the road.

Many technologists that make predictions often do not include other seemingly non-tech related factors that can mightily impact the pace of technology. This is a delinquency of omission, one might say.

What kind of factors will affect the advent of AI self-driving cars?

For Now, Investment and Regulation Favors AI Self-Driving Car Development

There are economic factors that will either encourage spending on the development of AI self-driving cars or might dampen and undermine such spending if pulled away and used for other purposes. I’ve stated many times in my writings and presentations that one of the reasons that we’re seeing the rapid progress of AI self-driving cars is due to the money. Yes, follow the money, as they say. Prior to the monies now flowing into the development of AI self-driving cars, there wasn’t much being spent on it, other than dribbles and drabs, often in the form of research grants for university labs.

Another key factor is society and societal acceptance or resistance to the advent of AI self-driving cars. There could be a tough choice to be made about the progress for AI self-driving cars in terms of unleashing them onto our public roadways, and yet at the same time having them get involved into deadly car accidents. Will society accept the idea that to make progress there is a need to put AI self-driving cars onto our streets and yet those AI systems and self-driving cars might produce injuries or fatalities while still being tested and polished? Maybe yes, maybe not.

It is also crucial to consider the regulatory environment and how it can impact the advent of AI self-driving cars. Currently, regulations about using AI self-driving cars on our public roadways are relatively loose and encourage this budding innovation. If regulators are suddenly pressured to do something about AI self-driving cars, such as when deaths or injuries arise while the self-driving cars are on the roads, it could quickly swing toward a tightened regulatory setting.

For my article about the grand convergence that has led to AI self-driving cars, see:

For the societal threshold versus no-threshold considerations, see my article:

For public attitudes about self-driving cars, see my article:

For my article identifying the Top 10 near-term predictions about self-driving cars, see:

For federal regulations related to AI self-driving cars, see my article:

Cannot Rely Solely on a Technologist Perspective

I trust that you are now convinced that any equation trying to predict the advent of AI self-driving cars should not rely on a technologist perspective alone. We would want to include an economic perspective, a societal perspective, and a regulatory perspective too. This mixture of perspectives will hopefully avoid our getting caught unawares or blindsided by relying on a single factor.

Each of these factors is not necessarily independent of the other. In fact, the odds are they will tend to swing in the same direction together, though at times on a delayed basis.

For example, suppose the AI of an AI self-driving car is insufficient and makes a computing decision that produces a dramatic and headline catching fatality while driving on a freeway. This could turn public opinion sour. If the public gets extremely sour, regulators are likely to be pushed into or opt to volunteer to set things straight, doing so by making regulations more restrictive on AI self-driving cars. If the regulators get more restrictive, and if public opinion is negative, the auto makers and tech firms might pull back from pouring monies and resources into developing AI self-driving cars.

In exploring the set of factors, you might argue that each factor can end-up being a proponent for and propelling forward AI self-driving cars, or each factor can be an opponent that tends to cause resistance or a dampening to the advent of AI self-driving cars. This is a push-pull kind of tension. It would be vital to encompass this tension in the factors that are used in an equation for making such predictions.

Besides the core factors, there are other matters to be considered.

When someone says that AI self-driving cars are nearly here, I often ask what they mean by “AI self-driving cars” in terms of the levels of capabilities. They might be referring to say Level 3, which in my book is not what I consider a true AI self-driving car. They might even be referring to a Level 4, which I concede is closer to a true AI self-driving car, but I still argue it is not the AI self-driving car level that most people informally are thinking about. To me, Level 5 is the true AI self-driving car.

So, for the purposes of the equation, let’s assume that we are trying to predict the advent of true AI self-driving cars at the Level 5 of the accepted standard.

I’d like to also mention that we need to agree on what it means to have an advent of something. If you could make one AI self-driving car at a Level 5, have you achieved reaching an “advent” of that item? No, I don’t think so. Though you might have done a great job and arrived at a Level 5 instance, until we have some semblance of those AI self-driving cars driving around, it seems questionable to say that there is an advent of them.

How many then is an advent? If there are dozens of true AI self-driving cars traveling on our streets, would that be an advent, or do we need more like hundreds, or maybe thousands? What is the number at which we would have a prevalence of true AI self-driving cars?

There are various ways to measure the prevalence. I’m going to keep things simplified and suggest that we use as a measure the percentage of cars in-use at the time. We will gradually see a switchover to AI self-driving cars, as mentioned earlier, and this will see the retirement of conventional cars and the rising population of AI self-driving cars.

Of the total population of all cars, we might agree that once a certain percentage becomes true AI self-driving cars, we have reached a prevalence. Assuming you are willing to go along with that premise, we can then debate whether it is 1%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, or possibly even 100% before you would refer to it as a prevalence (I used rounded numbers by the tens, but it of course could be any number from 1% to 100%).

Let’s Set 20% As Meaning Prevalent

I’m going to use 20% for now. Why? In various areas of study, 20% is often used to indicate a prevalence. This comes from environmental and often biological areas of research. It seems like a large enough percentage that it isn’t trivial, and yet not so large that it seems somewhat impossible to reach. For those of you that were thinking more along the lines of a majority, such as reaching 51% or something like it, I certainly understand your viewpoint. Likewise, for those of you that were thinking of 90% or maybe 99%, I grasp that too. In any case, for now, I’m going to use 20%.

In the case of the existing number of conventional cars in the United States, which as mentioned is around 250 million, an advent of AI self-driving cars would at 20% be a rather daunting 50 million such cars. That’s a daunting number because think about how long it would likely take to reach that number. In other words, even if AI self-driving cars were immediately ready tomorrow, it would take a while to produce that many self-driving cars, along with a while for those AI self-driving cars to be purchased and put into use.

I’ve previously predicted that once we do achieve true AI self-driving cars, there is likely going to be a rather rapid adoption rate. I say this because those true AI self-driving cars are going to be money makers. When there is money to be made, the demand will go through the roof. This is not just fleets of cars, but as I’ve argued there will be an entire cottage industry of individual consumers that will buy AI self-driving cars to leverage those vehicles as both personal use and for making money.

For my article about the invasive curve and AI self-driving cars, see:

For the upcoming mobility-as-an-economy aspects, see my article:

For the affordability of AI self-driving cars, see my article:

For my article about the economic elements of induced demand, see:

I’ve now laid the groundwork for making an equation that can be used to predict the advent of AI self-driving cars.

The last piece to the puzzle is coming up with a base of when the advent might be reached. By using a base, you can then multiply it by the various factors and see whether the resulting N is the same as, larger than, or smaller than the strawman base. I’ll refer to the base as B-star.

Take a look at Figure 1.

We are trying to solve for N. There is the base B-star which is then multiplied by eight factors.

For definitional purposes:

  •         N is the number of Years-to-Prevalence (YTP), using a plug-in PV (Prevalence) value of 20%
  •         B-star is the base number of years, which is then adjusted by each of the factors

The key factors consist of:

  •         Factor TA: Technological Advancements, a fractional amount, estimated
  •         Factor TO: Technological Obstacles, a fractional amount, estimated
  •         Factor EP: Economic Payoff, a fractional amount, estimated
  •         Factor ED: Economic Drain, a fractional amount, estimated
  •         Factor SF: Societal Favoritism, a fractional amount, estimated
  •         Factor SO: Societal Opposition, a fractional amount, estimated
  •         Factor RE: Regulatory Enablement, a fractional amount, estimated
  •         Factor RR: Regulatory Restrictions, a fractional amount, estimated

The equation consists of this:

  •         N (PV of 20%) = B-star x TA x TO x EP x ED x SF x SO x RE x RR

As per my earlier discussion, there are four overarching factors involving technology, economic, societal, and regulatory matters. For each, there is a push-pull: each factor can be construed as an element that will foster and push along the advent of AI self-driving cars, and there is also a companion factor that is the pull that yanks away from the advent of AI self-driving cars. That’s four key factors, doubled to account for the push-pull effect, arriving at 8 key factors.
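Here is a minimal sketch of the equation in code. I am assuming, for concreteness, that a factor below 1.0 pulls the prevalence date in while a factor above 1.0 pushes it out; the plug-in values are purely illustrative placeholders of mine.

```python
# Sketch of the Years-to-Prevalence equation described above:
#   N (at PV of 20%) = B* x TA x TO x EP x ED x SF x SO x RE x RR
# Interpretation assumed here: a factor below 1.0 accelerates the advent
# (shrinks N); a factor above 1.0 delays it (grows N). All values are
# illustrative placeholders, not actual estimates.

def years_to_prevalence(b_star, ta, to, ep, ed, sf, so, re, rr):
    return b_star * ta * to * ep * ed * sf * so * re * rr

optimist = years_to_prevalence(
    15,              # B*: base number of years (a commonly floated figure)
    ta=0.8, to=1.1,  # tech advancements pull in; obstacles push out a bit
    ep=0.9, ed=1.0,  # strong economic payoff; little economic drain
    sf=0.9, so=1.0,  # society favorable; little opposition
    rr=1.0, re=0.9,  # regulators enabling; few restrictions
)
print(round(optimist, 1))  # 9.6 years with these illustrative values
```

A pessimist’s run would simply flip the factors above 1.0, stretching N well beyond the base of 15 years.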

Similar to the earlier remarks about the usability of equations, I’ve kept this equation to 9 elements, which is on par with the popular “magical number seven, plus or minus two” rule-of-thumb. The factors and the equation itself are readily comprehensible.

It is not intended to be the end-all be-all. It is intended to provide a kind of anchor around which discussion and debate can take place. Without having an anchor, arguments and discussions on this matter are often vacuous and roundabout.

Take a look at Figure 2.

As shown, I’ve populated a spreadsheet to make use of the equation.

First, I’ve opted to show what might happen if you were to consider a solo-factor perspective. For example, you might take solely a technologist’s perspective, and so only those factors are populated (the remaining factors are treated as implicitly 1.0 and drop out of the multiplication, rather than being set to zero, which would of course wipe out the calculations). Likewise, I show a solo-economic perspective, a solo-societal perspective, and a solo-regulatory perspective.

In the last two rows of the spreadsheet, I provide a full mix.

I’ve also opted to show an optimist’s viewpoint and a pessimist’s viewpoint, doing so for each of the solo-factor instances and for the full mix instance. This is in keeping with trying to arrive at a range of values, rather than a singleton value. The optimistic view and the pessimistic view provide an estimated lower bound and an estimated upper bound, respectively.

For the base B-star, the question arises as to what number to use. Since many pundits seem to be floating the number of 15 years, I’ve used that in this illustrative example. We could instead use 5, 10, 20, 25, or 30 years, all of which have been bandied around in the media. Presumably, whichever base you choose, the factors should ultimately “correct” it toward whatever the “actual” prediction turns out to be.
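The spreadsheet logic can be sketched in code: any factor you don’t supply defaults to 1.0, so it drops out of the product rather than zeroing it. The optimist/pessimist factor values below are illustrative stand-ins of my own, not the figures from Figure 2:

```python
B_STAR = 15  # base years, per the commonly floated 15-year figure

def ytp(factors, b_star=B_STAR):
    """Multiply the base by whichever factors are supplied;
    omitted factors implicitly equal 1.0 (they drop out)."""
    n = b_star
    for value in factors.values():
        n *= value
    return n

# Solo-technology perspective, optimist vs. pessimist (illustrative values).
optimist  = ytp({"TA": 0.7,  "TO": 1.05})  # strong advances, mild obstacles
pessimist = ytp({"TA": 0.95, "TO": 1.4})   # slow advances, major obstacles
print(optimist, pessimist)  # lower bound vs. upper bound, in years
```

The same function handles the full-mix rows of the spreadsheet: just pass all eight factors in one dictionary.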

We are in the midst of carrying out a Delphi method approach to arrive at substantive lower and upper bounds. The Delphi method is a well-established forecasting method, often referred to as ETE (Estimate-Talk-Estimate). In this case, a set of experts in the field of AI self-driving cars have been canvassed to participate in a series of Delphi rounds. With each round, the selected experts can see the indications of the other experts and adjust their own estimates as they deem appropriate to do so.

Though the Delphi method is generally held in high regard, it can be criticized for its potential to induce groupthink and can at times be weakened by excessive consensus. Nonetheless, it is instructive, and another means to spark useful discussion about the topic.
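As a toy sketch of the Estimate-Talk-Estimate loop, consider experts who, after each “talk” phase, shift partway toward the group median. The pull-toward-the-median rule is a simplistic stand-in of my own for real expert discussion, purely to illustrate how the spread of estimates narrows over the rounds:

```python
import statistics

def delphi_rounds(estimates, rounds=3, pull=0.5):
    """Toy ETE loop: after each round, every expert moves a fraction
    ('pull') of the way toward the group median estimate."""
    for _ in range(rounds):
        median = statistics.median(estimates)
        estimates = [e + pull * (median - e) for e in estimates]
    return estimates

# Initial expert estimates of N, in years (illustrative).
final = delphi_rounds([5, 10, 15, 20, 30])
print(min(final), max(final))  # → 13.75 16.875 (spread narrows toward 15)
```

A real Delphi exercise would of course involve reasoned adjustments rather than a mechanical rule, but the narrowing lower and upper bounds mirror what the rounds are meant to produce.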


When will there be an advent of AI self-driving cars? Some answer this question by amorphous hunches. Via the use of the equation proposed herein, it is hoped that a more tangible and structured discussion and debate can take place.

You might not like the factors used, or you might want to add more factors, but either way this equation at least gets the tree planted. From these roots will hopefully spring a more sophisticated undertaking on the advent question.

Some critics of AI self-driving cars have said that we’ll never have a true AI self-driving car. If that’s the case, I guess the number for N is either zero (which we’ll define as meaning it will never happen) or perhaps infinite. I suppose I’m more optimistic and would assert that there is a number for N, one that is neither zero nor infinite, and more akin to a value less than one hundred, likely less than fifty.

I’ve already stated that Gen Z will be the generation that determines the advent of AI self-driving cars, which I still believe to be the case.

For my article about my predictions and Gen Z, see:

Equations, we love them, and we at times hate them (such as when memorizing them for tests or quizzes). Take a look at my proposed equation and see what you think. Plug in some values. Mull over what might occur in the future. Though not a crystal ball, it is a kind of playbook for how to think about the future and the emergence of true AI self-driving cars.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.