GoTo Fail and AI Brittleness: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

I’m guessing that you’ve likely heard or read the famous tale of the Dutch boy that plugged a hole in a leaking dam via his finger and was able to save the entire country by doing so. I used to read this fictional story to my children when they were quite young. They delighted in my reading of it, often asking me to read it over and over.

One aspect that puzzled my young children was how a hole so small that it could be plugged by a finger could potentially jeopardize the integrity of the entire dam. Rather astute of them to ask. I read them the story to impart a lesson of life that I had myself learned over the years, namely that sometimes the weakest link in the chain can undermine an entire system, and incredibly too the weakest link can be relatively small and surprisingly catastrophic in spite of its size.

I guess that’s maybe two lessons rolled into one.

The first part is that the weakest link in a chain can become broken or severed and thus the whole chain no longer exists as a continuous chain.

By saying it is the weakest link, we’re not necessarily saying its size, and it could be a link of the same size as the rest of the chain. It could be even a larger link or perhaps even the largest link of the chain. Or, it could be a smaller link or possibly the smallest sized link of the chain. The point being that by size alone, it is not of necessity the basis for why the link might be the weakest. There could be a myriad of other reasons why the link is subject to being considered “the weakest” and for which size might or might not particularly matter.

Another perhaps obvious corollary regarding the weakest link aspect is that it is just one link involved. That’s what catches our attention and underlies the surprise about the notion. We might not be quite so taken aback if a multitude of links broke and therefore the chain itself came into ruin.

The second part of the lesson learned involves the cascading impact and how severe it can be as a consequence of the weakest link giving way.

In the case of the tiny hole in the dam, presumably the water could rush through that hole and the build-up of pressure would tend to crack and undermine the dam at that initial weakest point. As the water pushes and pushes to get through, the finger-sized hole is bound to grow and grow in size, until inexorably the hole becomes a gap, and the gap then becomes a breach, and the breach then leads to the entire dam crumbling and being overtaken by the madly and punishingly flowing water.

If you are not convinced that a single weakest link could undermine a much larger overall system, I’d like to regale you with the now-famous account of the so-called “goto fail goto fail” saga that played out in February 2014. This is a true story.

The crux of the story is that one line of code, a single “Go To” statement in a software routine, led to the undermining of a vital aspect of computer security regarding Apple related devices.

I assert that the one line of code is the equivalent of a tiny finger-sized hole in a dam. Via that one hole, a torrent of security gaffes could have flowed. At the time, and still to this day, there have been reverberations over how this single “Go To” statement could have been so significant.

For those outside of the computer field, it seemed shocking. What, one line of code can be that crucial? For those within the computer field, there was for some a sense of embarrassment, namely that the incident laid bare the brittleness of computer programs and software, along with being an eye opener to the nature of software development.

I realize that there were pundits that said it was freakish and a one-of-a-kind, but at the time I concurred with those that said this is actually just the tip of the iceberg. Little do most people know or understand how software is often built on a house of cards. Depending upon how much actual care and attention you devote to your software efforts, which can be costly in terms of time, labor, and resources needed, you can make it hard to have a weakest link or you can make it relatively easy to have a weakest link.

All told, you cannot assume that all software developers and all software development efforts are undertaking the harder route of trying to either prevent weakest links or at least catch the weakest link when it breaks. As such, as you walk and talk today, and are either interacting with various computer systems or reliant upon those computer systems, you have no immediate way to know whether there is or is not a weakest link ready to be encountered.

In the case of the “Go to” line of code that I’m about to show you, it turns out that the inadvertent use of a somewhat errant “Go to” statement created an unreachable part of the program, which is often referred to as an area of code known as dead code. It is dead code because it will never be brought to life, in the sense that it will never be executed during the course of the program being run.

Why would you have any dead code in your program? Normally, you would not. A programmer ought to be making sure that their code is reachable in one manner or another. Having code that is unreachable is essentially unwise since it is sitting in the program but won’t ever do anything. Furthermore, it can be quite confusing to any other programmer that comes along to take a look at the code.

There are times at which a programmer might purposely put dead code into their program and have in mind that at some future time they will come back to the code and change things so that the dead code then becomes reachable. It is a placeholder.

Another possibility is that the code was earlier being used, and for some reason the programmer decided they no longer wanted it to be executed, so they purposely put it into a spot where it became dead code, or routed the execution around the code so that it would no longer be reachable and thus be dead code. They might for the moment want to keep the code inside the program, just in case they decide to encompass it again later on.

Generally, dead code is a human programmer consideration: if a programmer has purposely included dead code, it raises questions about why it is there and what it is for, since it won’t be executed.

There is a strong possibility that the programmer goofed-up and didn’t intend to have dead code. Our inspection of the code won’t immediately tell us whether the programmer put the dead code there for a purposeful reason, or they might have accidentally formulated a circumstance of dead code and not even realize they did so. That’s going to be bad because the programmer presumably assumed that the dead code would get executed at some juncture while the program was running, but it won’t.
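
To make the idea of dead code concrete, here is a tiny C sketch of my own (the function name and logic are invented purely for illustration, not taken from the Apple code): an extra, unconditional goto makes the statement between it and the label unreachable.

```c
/* A contrived sketch -- the name check_value and its logic are invented
   for illustration. The duplicated, unconditional goto always jumps,
   so the assignment below it is dead code that can never execute. */
int check_value(int value)
{
    int err = 0;

    if (value < 0)
        goto fail;
        goto fail;   /* unconditional: always jumps, whatever value holds */

    err = 1;         /* dead code: this line can never execute */

fail:
    return err;
}
```

Whatever argument you pass, the function returns 0, because the one reachable path to the assignment has been cut off by the stray goto.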

Infamous Dead Code Example

You are now ready to see the infamous code (it’s an excerpt, the entire program is available as open source online at many code repositories).

Here it is:

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,       /* plaintext */
                       dataToSignLen,    /* plaintext length */
                       signature,
                       signatureLen);
    if (err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

Observe that there appear to be five IF statements, one after another. Each of the IF statements seems to be somewhat the same, namely each tests a condition and if the condition is true then the code is going to jump to the label of “fail” that is further down in the code.

All of this would otherwise not be especially worth discussing, except for the fact that there is a “goto fail” hidden amongst that set of a series of five IF statements.

It is actually on its own and not part of any of those IF statements. It is sitting in there, among those IF statements, and will be executed unconditionally, meaning that once it is reached, the program will do as instructed and jump to the label “fail” that appears further down in the code.

Can you see the extra “goto fail” that has found its way into that series of IF statements?

It might take a bit of an eagle eye for you to spot it. In case you don’t readily see it, I’ll include the excerpt again here and show you just the few statements I want you to focus on for now:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

What you have in a more abstract way is these three statements:

    IF (condition) goto fail;
    goto fail;
    IF (condition) goto fail;

There is an IF statement, the first of those above three lines, that has its own indication of jumping to the label “fail” when the assessed condition is true.

Immediately after that IF statement, there is a statement that says “goto fail” and it is all on its own, that’s the second line of the three lines.

The IF statement that follows that “goto fail” which is on its own, the third line, won’t ever be executed.

Why? Because the “goto fail” in front of it will branch away and the sad and lonely IF statement won’t get executed.

In fact, all of the lines of code following that “goto fail” are going to be skipped during execution. They are in essence unreachable code. They are dead code. By the indentation, it becomes somewhat harder to discern that the unconditional GO TO statement exists within the sequence of those IF statements.

One line of code, a seemingly extraneous GO TO statement, placed in a manner that creates a chunk of unreachable code. This is the weakest link in this chain. And it creates a lot of trouble.
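
To make the consequence concrete, here is a hypothetical miniature of the flawed control flow, written as a runnable C sketch. None of these names (hash_step, verify_signature, broken_verify) come from the real Secure Transport code; they are invented stand-ins with the same shape of flaw:

```c
/* Hypothetical miniature of the flawed flow: the stand-in hash steps
   always succeed; only verify_signature can actually reject anything. */
int hash_step(int which)      { (void)which; return 0; }
int verify_signature(int sig) { return (sig == 42) ? 0 : -1; }

int broken_verify(int sig)
{
    int err;

    if ((err = hash_step(1)) != 0)
        goto fail;
    if ((err = hash_step(2)) != 0)
        goto fail;
        goto fail;   /* the stray line: jumps straight past the check below */

    if ((err = verify_signature(sig)) != 0)   /* dead code: never runs */
        goto fail;

fail:
    return err;      /* 0 means "verified" -- even for a bogus signature */
}
```

Even though verify_signature by itself would reject a bogus signature, broken_verify reports success for it, because err still holds the 0 left over from the last hash step when the stray goto jumps to the label.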

By the way, most people tend to refer to this as the “goto fail goto fail” because it has two such statements together. There were T-shirts, bumper stickers, coffee mugs, and the like, all quickly put into the marketplace at the time of this incident, allowing the populace to relish the matter and showcase what it was about. Some of the versions said “goto fail; goto fail;” and included the proper semi-colons while others omitted the semi-colons.

What was the overall purpose of this program, you might be wondering?

It was an essential part of the software that does security verification for various Apple devices such as the iPhone, iPad, etc.

You might be aware that when you try to access a web site, there is a kind of handshake that allows a secure connection to be potentially established. The standard used for this is referred to as SSL/TLS, or Secure Sockets Layer / Transport Layer Security.

When your device tries to connect with a web site and SSL/TLS is being used, the device starts to make the connection, the web site presents a cryptographic certificate for verification purposes, and your device then tries to verify that the certificate is genuine (along with other validations that occur).

In the excerpt that I’ve shown you, you are looking at the software that would be sitting in your Apple device and trying to undertake that SSL/TLS verification.

Unfortunately, regrettably, the dead code is quite important to the act of validating the SSL/TLS certificate and other factors. Essentially, by bypassing an important part of the code, this program is going to falsely report that the certificate is OK, under circumstances when it is not.

You might find of interest this official vendor declaration about the code when it was initially realized what was happening, and a quick fix was put in place: “Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps.”

Basically, you could potentially exploit the bug by tricking a device that was connecting to a web site and place yourself into the middle, doing so to surreptitiously watch and read the traffic going back-and-forth, grabbing up private info which you might use for nefarious purposes. This is commonly known as the Man-in-the-Middle security attack (MITM).

I’ve now provided you with an example of a hole in the dam. It is a seemingly small hole, yet it undermined a much larger dam. Among a lengthy chain of things that need to occur for the security aspects of SSL/TLS, this one weak link undermined a lot of it. I do want to make sure that you know that it was not completely undermined since some parts of the code were working as intended and it was this particular slice that had the issue.

There are an estimated 2,000 lines of code in this one program. Out of those 2,000 lines of code, one line, the infamous extra “goto fail,” caused the overall program to falter in terms of what it was intended to achieve. That means that only 0.05% of the code was “wrong” and yet it undermined the entire program.

Some would describe this as an exemplar of being brittle.

Presumably, we don’t want most things in our lives to be brittle. We want them to be robust. We want them to be resilient. The placement of just one line of code in the wrong spot and then undermining a significant overall intent is seemingly not something we would agree to be properly robust or resilient.

Fortunately, this instance did not seem to lead to any known security breaches, and no lives were lost. Imagine though that this were to happen inside a real-time system that is controlling a robotic arm in a manufacturing plant. Suppose the code worked most of the time, but on a rare occasion it reached a spot of this same kind of unconditional GO TO, and perhaps jumped past code that checks to make sure that a human is not in the way of the moving robotic arm. By bypassing that verification code, the consequences could be dire.

For the story of the Dutch boy that plugged the hole in the dam, we are never told how the hole got there in the first place. It is a mystery, though most people that read the story just take it at face value that there was a hole.

I’d like to take a moment and speculate about the infamous GO TO of the “goto fail” and see if we can learn any additional lessons by doing so, including possibly how it got there.

Nobody seems to know how it actually happened; well, I’m sure someone that was involved in the code does know (they aren’t saying).

Anyway, let’s start with the theories that I think are most entertaining but seem farfetched, in my opinion.

One theory is that it was purposely planted into the code, doing so at the request of someone such as perhaps the NSA.

It’s a nifty theory because you can couple with it that the use of the single GO TO statement makes the matter seem as though it was an innocent mistake. What better way to plant a backdoor and yet if it is later discovered you can say that it was merely an accident all along. Sweet!

Of course, the conspiracy theorists say that’s what they want us to think, namely that it was just a pure accident. Sorry, I’m not buying into the conspiracy theory on this. Yes, I realize it means that maybe I’ve been bamboozled.

For conspiracy theories in the AI field, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

Another theory is that the programmer or programmers (we don’t know for sure if it was one programmer, and so maybe it was several that got together on this), opted to plant the GO TO statement and keep it in their back pocket. This is the kind of thing you might try to sell on the dark web. There are a slew of zero-day exploits that untoward hackers trade and sell, so why not do the same with this?

Once again, this seems to almost make sense because the beauty is that the hole is based on just one GO TO statement. This might provide plausible deniability if the code is tracked to whomever put the GO TO statement in there.

For my article about security backdoor holes, see: https://www.aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

For my article about stealing of software code aspects, see: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

For aspects of reverse engineering code, see my article: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

I’m going to vote against this purposeful hacking theory. I realize that I might be falling for someone’s scam and they are laughing all the way to the bank about it. I don’t think so.

In any case, now let’s dispense with those theories and move toward something that I think has a much higher chance of approaching what really did happen.

‘Mistakenly Done’ Theories

First, we’ll divide the remaining options into something that was mistakenly done versus something intentionally done.

I’ll cover the “mistakenly done” theories first.

You are a harried programmer. You are churning out gobs of code.

While writing those IF statements, you accidentally fat finger an extra “goto fail” into the code. At the time, you’ve indented it and so it appears to be in the right spot. By mistake, you have placed that line into your code. It becomes part of the landscape of the code.

That’s one theory about the mistaken-basis angle.

Another theory is that the programmer had intended to put another IF statement into that segment of the code and had typed the “goto fail” portion, but then somehow got distracted or interrupted and neglected to put the first part, the IF statement part itself, into the code.

Yet another variation is that there was an IF statement there, but the programmer for some reason opted to delete it, but when the programmer did the delete, they mistakenly did not remove the “goto fail” which would have been easy to miss because it was on the next physical line.

We can also play with the idea that there might have been multiple programmers involved.

Suppose one programmer wrote part of that portion with the IF statements, and another programmer was also working on the code, using another instance, and when the two instances got merged together, the merging led to the extra GO TO statement.

On a similar front, there is a bunch of IF statements earlier in the code. Maybe those IF statements were copied and used for this set of IF statements, and when the programmer or programmers were cleaning up the copied IF statements, they inadvertently added the unconditional GO TO statement.

Let’s shift our attention to the “intentional” theories of how the line got in there.

The programmer was writing the code and after having written those series of IF statements, took another look and thought they had forgotten to put a “goto fail” for the IF statement that precedes the now known to be wrong GO TO statement. In their mind, they thought they were putting in the line because it needed to go there.

Or, maybe the programmer had been doing some testing of the code. While doing testing, the programmer opted to temporarily put the GO TO into the series of IF statements, wanting to momentarily short circuit the rest of the routine. This was handy at the time. Unfortunately, the programmer forgot to remove it later on.

Or, another programmer was inspecting the code. Being rushed or distracted, the programmer thought that a GO TO ought to be in the mix of those IF statements. We know now that this isn’t a logical thing to do, but perhaps at the time, in the mind of the programmer, it was conceived that the GO TO was going to have some other positive effect, and so they put it into the code.

Programmers are human beings. They make mistakes. They can have one thing in mind about the code, and yet the code might actually end-up doing something other than what they thought.

Some people were quick to judge that the programmer must have been a rookie to have let this happen. I’m not so sure that we can make such a judgment. I’ve known and managed many programmers and software engineers that were topnotch, seasoned with many years of complex systems projects, and yet they too made mistakes, at first being insistent to the extreme that they must be right, and having reluctant chagrin afterward when proven to be wrong.

This then takes us to another perspective, namely if any of those aforementioned theories about the mistaken action or the intentional action are true, how come it wasn’t caught?

Typically, many software teams do code reviews. This might involve merely having another developer eyeball your code, or it might be more exhaustive and involve you walking them through it, including each trying to prove or disprove that the code is proper and complete.

Would this error have been caught by a code review? Maybe yes, maybe no.

This is somewhat insidious because it is only one line, and it was indented to fall into line with the other lines, helping to mask it or at least camouflage it by appearing to be nicely woven into the code.

Suppose the code review was surface level and involved simply eyeballing the code. That kind of code review could easily miss catching this GO TO statement issue.

Suppose it was noticed during code review, but it was put to the side for a future look-see, and then because the programmers were doing a thousand things at once, oops it got left in the code. That’s another real possibility.

For my article about burned out developers, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For the egocentric aspects of programmers, see my article: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For my article about the dangers of groupthink and developers, see: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

You also need to consider the human aspects of trust and belief in the skills of the other programmers involved in a programming team.

Suppose the programmer that wrote this code was considered topnotch. Time after time, their code was flawless. On this particular occasion, when it came to doing a code review, it was a slimmer code review because of the trust placed in that programmer.

When managing software engineers, they sometimes will get huffy at me when I have them do code reviews. There are some that will say they are professionals and don’t need a code review, or that if there is a code review it should be quick and light because of how good they are. I respect their skill sets but try to point out that any of us can have something mar our work.

One aspect that is very hard to get across involves the notion of egoless coding and code reviews. The notion is that you try to separate the person from the code, so that critiquing the code does not become an attack on that person. Otherwise, no one wants to do these code reviews, since they can spiral downward into a hatred fest. What can happen is that the code reviews become an unseemly quagmire of accusations and anger, spilling out based not only on the code but perhaps due to other personal animosity too.

Besides code reviews, one could say that this GO TO statement should have been found during testing of the code.

Certainly, it would seem that at the unit level of testing, you could have set up a test suite of cases that fed into this routine, and you would have discovered that sometimes the verification was passing when it should not have been. Perhaps the unit testing was done in a shallow way.
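
A negative test case along those lines can be sketched in a few lines of C. To keep this self-contained, I use a hypothetical stand-in routine that reproduces the same flaw (none of these names are from the real code); the test deliberately feeds a bad input and demands rejection:

```c
/* Hypothetical stand-in for the flawed routine, for test illustration only:
   the stray unconditional goto skips the real check, just as in the bug. */
int verify(int input_is_good)
{
    int err = 0;

    if (err != 0)
        goto fail;
        goto fail;                            /* the flaw under test */

    if ((err = input_is_good ? 0 : -1) != 0)  /* skipped entirely */
        goto fail;

fail:
    return err;
}

/* A negative-case unit test: a deliberately-bad input MUST be rejected,
   i.e. verify(0) should return nonzero. Returns the count of failures. */
int run_negative_tests(void)
{
    int failures = 0;
    if (verify(0) == 0)   /* bad input was accepted: the bug is exposed */
        failures++;
    return failures;
}
```

Run against the flawed routine, this test suite reports one failure, which is exactly the kind of signal that was apparently missing at the time.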

We might also wonder what happened at doing a system test.

Normally, you put together the various units or multiple pieces and do a test across the whole system or subsystem. If they did so, how did this get missed? Again, it could be that the test cases used at the system level did not encompass anything that ultimately rolled down into this particular routine and would have showcased the erroneous result.

You might wonder how the compiler itself missed this aspect. Some compilers can do a kind of static analysis trying to find things that might be awry, such as dead code. Apparently, at the time, there was speculation that the compiler could have helped, but it had options that were either confusing to use, or when used were often mistaken in what they found.

We can take a different perspective and question how the code itself is written and structured overall.

One aspect that is often done but should typically be reconsidered is that the “err” value that gets used in this routine and sent back to the rest of the software was initially set to Okay, and only once something untoward is found does it get set to a Not Okay signal. This meant that when the verification code was skipped, the flag was defaulting to everything being Okay.

One might argue that this is the opposite of the right way to do things. Maybe you ought to assume that the verification is Not Okay, and the routine has to essentially go through all the verifications to set the value to Okay. In this manner, if somehow the routine short circuits early, at least the verification is stated as Not Okay. This would seem like a safer default in such a case.
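A fail-closed version of that idea can be sketched as follows (the names here are invented for the example, not taken from the real code): err starts as Not Okay, and only flips to Okay after every check has actually run.

```c
/* Sketch of the fail-closed alternative: err defaults to NOT okay and is
   set to okay only after every check has executed. Names are invented. */
enum { VERIFY_OK = 0, VERIFY_FAILED = -1 };

int check_one(void) { return 0; }   /* stand-in checks that succeed */
int check_two(void) { return 0; }

int safer_verify(void)
{
    int err = VERIFY_FAILED;        /* default: not verified */

    if (check_one() != 0)
        goto fail;
        goto fail;                  /* the same stray goto... */

    if (check_two() != 0)           /* ...still skips this check... */
        goto fail;

    err = VERIFY_OK;                /* ...but also skips the "all clear" */

fail:
    return err;                     /* so the routine reports failure */
}
```

Notice that even with the very same stray goto in place, the routine now reports failure rather than success, which is the safer of the two wrong-ish outcomes.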

Another aspect would be the use of curly braces or brackets. Remember that I had earlier stated you can use those on an IF statement. Besides their use for grouping multiple statements on an IF, they also serve as a visual indicator for a human programmer of the start and end of the body of statements. Some believe that if the programmer had used the curly braces, the odds are that the extra “goto fail” would have stuck out like a sore thumb.

We can also question the use of the multiple IF’s in a series. This is often done by programmers, and it is a kind of easy (some say sloppy or lazy) way to do things, but there are other programming techniques and constructs that can be used instead.

Ongoing Debate on Dangers of GO TO Statements

There are some that have attacked the use of the GO TO statements throughout the code passage. You might be aware that there has been an ongoing debate about the “dangers” of using GO TO statements. Some have said it is a construct that should be banned entirely. Perhaps the debate was most vividly started when Edsger Dijkstra had his letter published in the Communications of the ACM in March of 1968. The debate about the merits versus the downsides of the GO TO have continued since then.

You could restructure this code to eliminate the GO TO statements, in which case, the extra GO TO would never have gotten into the mix, presumably.
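As one hedged illustration of such a restructuring (again with invented names, and simplified by omitting the cleanup that the real routine performs at its label), each step runs only if everything before it succeeded, and the final check cannot be hopped over by a stray jump:

```c
/* A goto-free restructuring sketch (names invented): the chain of checks
   is expressed as guarded assignments rather than jumps to a label. */
int step_one(void)           { return 0; }
int step_two(void)           { return 0; }
int final_check(int is_good) { return is_good ? 0 : -1; }

int goto_free_verify(int is_good)
{
    int err = step_one();
    if (err == 0)
        err = step_two();
    if (err == 0)
        err = final_check(is_good);  /* always reached when prior steps pass */
    return err;                      /* cleanup would go just before this */
}
```

This is just one of several possible restructurings; the trade-off is that the single cleanup point provided by the fail label has to be handled some other way, such as freeing buffers right before the final return.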

Another aspect involves the notion that the “goto fail” is repeated in the offending portion, which some would say should actually have made it stand out visually. Would your eye tend to catch the same line of code repeated twice like this, especially a somewhat naked GO TO statement? Apparently, it did not. Some say the compiler should have issued a warning about a seemingly repeated line, even if it wasn’t set to detect dead code.

You might also point out that this code doesn’t seem to have much built-in self-checking going on. You can write your code to “just get the job done” and it then provides its result. Another approach involves adding additional layers of code that do various double-checks. If that had been built into this code, maybe it would have detected that the verification was not being done to a full extent, and whatever error handling should take place would then have gotten invoked.

In the software field, we often speak of the smell of a piece of code. Code-smell means that the code might be poorly written or suspect in one manner or another, and upon taking a sniff or a whiff of it (by looking at the code), one might detect a foul odor, possibly even a stench.

Software developers also refer to technical debt. This means that when you write somewhat foul code, you’re creating a kind of debt that will someday come due. It’s like taking out a loan, and eventually the loan will need to be paid back. Bad code will almost always boomerang and eventually come back to haunt. I try to impart among my software developers that we ought to be creating technical credit, meaning that we’ve structured and written the code for future ease of maintenance and growth. We have planted the seed for this, even if at the time that we developed the code we didn’t necessarily need to do so.

As a long-time programmer and software engineer, I am admittedly sympathetic to the plight of fellow software developers. It is always easy to do second guessing.

For those that want to dump the matter onto the shoulders of the programmer that did this particular work of the “goto fail” issue, we can do so, but I think we need to have a context associated with it.

Suppose the programmer was hampered and not provided with sufficient tools to do their work. Suppose the manager was pushing the programmer to just get the work done. Suppose the schedule was unrealistic and shortcuts were taken. It takes a village to develop software. If the village is not of the right culture and approach, you are going to get software that matches to that culture.

I am not letting individual developers off-the-hook. I am saying though that it is hard to go against the grain of your manager, your team, your company culture, if it isn’t allowing you to do the kind of robust and resilient programming that you think ought to be done. It is hard to be the one that is trying to turn the tide.

At the same time, I also want to point out that sometimes there are developers that aren’t versed in how to make their software robust or resilient. They have done software development and it seems to work out okay for them. They might not know of other ways to get the job done.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. The auto makers and tech firms doing likewise are hopefully doing the right thing in terms of how they are developing their software, meaning that they need to recognize the dangers of the brittleness of the AI systems they are crafting.

Brittleness of the AI for an AI self-driving car is quite serious. If the AI encounters a weak link, imagine if it happens when the self-driving car is doing 65 miles per hour on a crowded freeway. Lives are at stake. This AI is a real-time system involving multi-ton cars that can quickly and in a deadly manner determine the life or death of humans.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

Returning to the topic of the “goto fail” and AI brittleness, we all need to realize that a single such line of code could upset the AI self-driving car apple cart, so to speak.

In theory, the AI systems of AI self-driving cars should have numerous checks-and-balances. The chances of any single line of code causing havoc should be extremely low. There should be fail-safe capabilities. The testing should be extremely extensive and exhaustive. Simulations should be used to help ferret out such anomalies before the code ever gets into a self-driving car running on the roadways. And so on.

That’s the theory of it.

The real world is different. Many of these AI systems incorporate tons of third-party code, along with other packages and open source components. The AI developers tasked with developing the AI of the self-driving cars are likely assuming that those other bodies of code are already well-tested and will work as intended.

Maybe yes, maybe no.

There is such tremendous pressure to get AI self-driving cars onto the streets, pushed by the relentless idea that whoever is first will somehow win this moonshot race, that there is likely a substantial amount of corner-cutting in terms of code reviews, the tools being used, and the like.

I realize that some will say that this is yet another reason to rely upon Machine Learning and Deep Learning. Rather than writing code, you presumably can base your AI system for a self-driving car on the use of packages that can emit a large-scale artificial neural network and let that be the core of your AI for the driving task.

At this time, the AI stack for self-driving cars is still primarily of a more traditional nature, and the Machine Learning and Deep Learning is mainly used for selected elements, most notably the sensor data analyses. The rest of the AI is done the old-fashioned way, for which the single flawed line of code, the weak link, remains a real possibility.

I don’t want to leave you with the impression that Machine Learning and Deep Learning are somehow a silver bullet in this matter. They are not.

The packages used for the Machine Learning and Deep Learning could certainly have their own weaknesses in them. The resultant runnable neural network might be flawed due to some flaw within the Machine Learning or Deep Learning code itself. The executable might be flawed. We already know that the neural network itself can be “flawed” in that you can do various sensory trickery to fool some of the large-scale neural networks being constructed.

For the crucial aspect of safety and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For my Top 10 predictions of what is going to happen soon with AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For the fail-safe aspects that are needed in AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/

Conclusion

The Dutch boy stopped the dam from breaking by plugging the hole with his finger. Heroic! We can all rejoice in the tale. It provides us with the realization that sometimes small things can have a big impact. There is the lesson that the weakest link, this seemingly inconsequential hole, could lead to much greater ruin.

How many of today’s budding AI self-driving cars are right now underway with a hole somewhere deep inside them, waiting to become the spigot that regrettably causes the rest of the AI system to go awry and produce a terrible result? Nobody knows.

How much effort are the auto makers and tech firms putting toward finding the hole or holes beforehand?

How many are putting in place error handling and error processing so that once a hole arises during actual use, after deployment, the AI will be able to recognize the hole and deal with it safely?

I hope that the tale of the Dutch boy will spark attention to this matter. I tried to showcase how this can happen in the real world by making use of the infamous “goto fail goto fail” incident. It is a fitting choice for this purpose since it is easily understood and has been widely discussed in public. There is no need to search far and wide for some seemingly arcane example that most would try to write off as inconsequential.

There is a huge body of water sitting at the dam, which we’ll say is the public and their already nervous qualms about AI self-driving cars. If even one hole opens up in that dam, I assure you the water is going to gush through it, and we’ll likely see a tsunami of regulation and backlash against the advent of AI self-driving cars. I don’t want that. I hope the rest of you don’t want that. Let’s make sure to put in place the appropriate efforts to seek out the weakest links in our AI systems, finding them before they find us, so we can keep them from destroying the whole dam.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

For readers interested in a more detailed version of this piece, you can contact Dr. Eliot at ai.selfdriving.car@gmail.com, or it can be found in his book “Spurring AI Self-Driving Cars” available on Amazon.