Noble Cause Corruption and AI: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Here’s an age-old question for you: do the ends justify the means?

Some trace the origins of this thorny question to the Latin collection Heroides, reportedly written by Ovid (Publius Ovidius Naso), which contains the phrase “Exitus acta probat,” loosely translated as the outcome proves, or justifies, the deeds.

Most of us probably believe it came from the works of Machiavelli, and the question certainly fits his famous treatise The Prince. Indeed, one would assume that Machiavelli would pose it not as a question but as an assertion, since his writings are about being conniving. About the closest quote one can find in The Prince is this: “Moreover, in the actions of men, and most of all of Princes, where there is no tribunal to which we can appeal, we look to results.” It’s a bit of a stretch to say that this excerpt matches the precise notion of the ends justifying the means; perhaps it is saying that those who hold the keys will make the rules and can therefore claim their aims are justified.

I am assuming that some of you would decry the entire notion that the ends justify the means. The implication of the notion is that you can do any darned untoward thing you want to do, as long as the ends that you are targeting are somehow noble.

I’d mildly object to the assumption that the ends being sought are of necessity noble; it could be that the person arguing in favor of the ends justifying the means wants to convince themselves, or us, that the end is noble. I am saying that it might not really be. It could be a charade to make the means seem acceptable.

Let’s test the logic involved. Suppose I claim I will make you into a better person by beating you silly, doing so under the auspices that what doesn’t kill you makes you stronger (we keep hearing that one tossed around these days!). After you’ve been badly injured and permanently lost the use of your limbs and body, I declare that you are better off now and should be thankful for my actions. It is hard to conjure up a circumstance wherein, after completely gutting you and barely keeping you alive, my actions have benefited you (we could try to craft some possibilities, but I dare say they would all be pretty flimsy).

What sometimes happens is that people intending to do bad things will cleverly mask their upcoming bad deeds by wrapping them in a seemingly noble targeted ending. This generally allows them to get away with the bad things, since others will believe in, and at times rally in favor of, those bad things, having bought into the noble ending that will presumably someday be reached.

Sometimes there are people that don’t intend to do bad things, but fall into the pit of doing bad things, along the way to achieving what seems like a noble ending. Of course, it could also be that the noble ending was a facade all along.

The problem with all of this ends-and-means stuff is that you might not know what is true versus what is blarney.

Maybe the ends hoped for are true and good, and the means are good too. Or, the ends hoped for are true and good, while the means are rotten. Or, the ends are terrible but made to seem good, while the means are good. And then there’s the fourth version: the ends are terrible but made to seem true and good, while the means are rotten as well.

Yikes, it can be confusing.

Those of you who remember the Dirty Harry movies will recall the now-classic story of a cop who will do whatever it takes to get the criminal, illustrating this posture of the ends justifying the means. Get the bad guys at any cost. Break the law. Be as sneaky and dirty as might be necessary to win. We see this theme repeatedly in modern-day TV shows and films. The movie Taken is another such example (spoiler alert!), wherein we cheer for the father trying to save his daughter, regardless of not just how many crooks need to first die, but even how many innocent bystanders might end up as collateral damage. It’s all okay, as long as the noble end, in this case saving his daughter and expunging the dastardly criminals that kidnapped her, remains the focus.

Can you remain a “good guy” and still break laws and possibly injure or kill innocents, assuming that your goal is something considered good?

This can be a difficult line of justification. One can try it, though: the master kidnapper in Taken is so horrible, and would possibly continue on his horrific crime spree, that getting him killed is worth it when you balance the lives lost while trying to get him against the grand total number of lives he might have injured or killed had he remained alive. A body-count kind of rationalization.

Here’s a more everyday example for you.

When I was a university professor, one of my classes involved making use of a simulation in which you pretended to run a business. Teams of students would each be assigned a business in the simulation and had to make executive decisions about it. Weekly, the simulation would indicate how the businesses were faring, doing so by simulating the businesses trying to sell their products and competing against each other.

The grades in the class were greatly determined by how well each team’s business fared in the simulation. The higher your business ranked at the end of the simulation, the higher the grade that went to your team. As you can imagine, the students competed fiercely as they played the simulation. Members of each team typically pledged to their fellow team members that they would not speak to anyone else about their team strategy, for fear that it might get leaked to one of the other teams.

One of the teams decided to use a slightly different approach to win the simulation. They hacked it. By doing so, they were able to control the simulation results. They were careful to make the simulation still seem to be working as expected. Cleverly, one might say, some weeks their team was not even in the top ranking. The hack was aimed at getting them into pole position at the very end, while hopefully not arousing suspicion beforehand.

Turns out that their hack was discovered.

They claimed that their hack was justified. It was a means to an end. The end was to perform the best you could in the simulation. They found a means to do so. Nobody had said they could not do a hack. Yes, it was assumed that everyone would be using the simulation as intended, but there was not a specific declaration that you could not rig the simulation.

These students went even further and pointed out that in real-life, while in industry, there are all kinds of espionage taking place of one company spying on another company. In a sense, they were actually adding a real-world element to the simulation. This made the experience more powerful for everyone involved, they asserted.

What do you think, did their ends justify the means?

Another spoiler alert, the college did not believe it did in this case.

Noble Cause Corruption Explained

There is a phrase for those who believe they have a noble end and yet diverge from proper means to reach it, namely the “noble cause corruption” phenomenon.

What happens is that someone pursuing an end they think is noble can become corrupted in the pursuit of that noble end. This can include carrying out unlawful acts, immoral acts, and whatever else might be needed to reach the desired end.

In the news these days there is a colossal example in business of a presumed noble cause corruption case. It is the case of Theranos. If you read any business-related news, you likely already know some aspects about the case. I’ll go ahead and provide a quick recap of the major points. This is all well-documented in many big-time media outlets, and especially in an exposé written by John Carreyrou of the Wall Street Journal and later further elaborated in his book entitled “Bad Blood: Secrets and Lies in a Silicon Valley Startup.”

A Stanford University dropout, Elizabeth Holmes, at the age of 19, started a biotech firm named Theranos, and did so with the stated goal of being able to do a multitude of blood diagnostic tests via the use of a tiny drop or so of blood, using a single finger-prick device to get the blood. This seemed nearly impossible since you usually need to collect a much greater quantity of blood to do a multitude of tests, plus the kind of blood you get from a finger-prick is not as rich as you can get from using a conventional needle-in-the-arm to draw blood.

Her claims of being able to achieve these “ends” were a bold proclamation. It would change the world. Imagine how much easier and cheaper it could be to do blood tests. People would no longer need to fear taking a blood test. It could be done easily and just about anywhere. The blood-testing marketplace would be utterly disrupted and transformed. She went on a kind of public relations campaign for her company and the noble cause, holding the banner high, seeking to raise money and attention for her efforts.

Elizabeth became very practiced at presenting herself to the media. She combined a sense of humility with an indication of strength and confidence. She kept hammering away at the ends and would shrewdly divert attention away from the means. She made the covers of some of the biggest media magazines. The story was like something out of a fairy tale. A young female entrepreneur, seeking to make the world a better place, and the media ate it up completely.

She got hundreds of millions of dollars in backing from some heavyweight investors, though few at the time seemed to realize that these investors were not biotech savvy. This likely helped the subterfuge. It is said that whenever anyone of biotech merit started to ask probing questions, Elizabeth and Theranos managed to avoid giving sufficient answers.

It turns out that the claimed technology did not exist and did not work as claimed. What makes the story especially notable is that Theranos did a deal with Walgreens and began actually performing the service for real people in selected cities in the United States. Sadly, many of the blood tests done turned out to be wrong. Indeed, over a million blood tests had to be voided and redone. For any of you that happened to have taken one of those blood tests, it is unimaginable how you must feel, now knowing that what the blood test reported at the time was false or at best misleading. This impacted real people’s lives in real ways.

Elizabeth and Theranos were charged with massive fraud by the SEC, formally filed on March 14, 2018. Criminal conspiracy and wire fraud charges were also leveled by the U.S. Attorney for the Northern District of California on June 15, 2018. Theranos ceased operations on August 31, 2018.

Some say that Elizabeth was a true believer in her cause and perchance got caught up in not being able to achieve the ends she desired. The means were not so good, but the targeted end was noble. Interviews with her defense attorneys reveal that they are taking the stance that if she had not been so rudely and inappropriately interrupted in her quest, which they contend shortchanged the time needed to perfect the technology, the ends would have been reached. It will be interesting to see how that “justification” plays out in the courtroom.

There are some that say that it was all a scam from day one. Those critics say that it must have been known by the founder and core team that what was being proposed was the equivalent of a perpetual motion machine.

No one in the media, nor among the investors, seemed to want to tell the emperor that maybe he wasn’t wearing any clothes. Critics say it was easier, and sold more newspapers and magazines, if the headlines were that this was a miracle come true; plus, the investors were portrayed as lucky to have gotten in, while other investors could only look on from afar, jealous of not having had a chance to lay money on the line for Theranos.

There are lots of other fascinating details about the case. It truly is akin to a great fiction novel or movie script. One key aspect that helped it all unravel was that the grandson of one of the company’s prominent board members got hired by the firm and discovered what was really going on; the firm then attempted to suppress him, and he took a lot of guff accordingly. Now he is an unsung hero.

What can the Theranos case tell us?

The bigger the noble end, the easier it likely is to justify the means. The more, too, that the means can get out of hand without causing much of a ruckus, because you just come back to the ends and everyone starts smiling again.

Let’s switch the domain of focus and consider whether this kind of noble cause corruption can happen in the field of Artificial Intelligence (AI).

Yes, of course it can.

There are subtle ways this can arise, and other more apparent ways in which it can arise.

We’ve recently seen concerns voiced toward the major tech and social media firms about their seeming lack of attention to the privacy issues of AI systems, and how they have either misled the public about the data being recorded, sold that data, or otherwise not been quite as careful about data privacy as many assumed they were.

For more about privacy and AI, see my article:

Critics argue that there was an undertone of noble cause corruption involved in these cases.

If you are trying to bring AI-advanced social media and new tech to the world at large, well, something is going to break along the way, or so say the tech firms. It is a noble end. The means maybe get a bit jumbled along the way, but that’s okay, since the end is really good. In fact, as we all know, the tech industry relishes the credo that you need to break things to make progress, and if you aren’t breaking enough things, fast enough, you aren’t doing enough.

Simply stated, the mantra has been “move fast and break things.” It seems we can subliminally append that the ends justify the means.

Let’s take a look at another facet of this approach in AI.

Pursuing the Noble AI Quest At Any Cost

There are some critics who worry that we’ll have massive unemployment among human workers due to the emergence of AI systems. There are some AI devotees who say that’s not their problem. They are merely technologists trying to advance AI. Take it for what it’s worth, they retort. Some liken this attitude to a kind of noble cause corruption.

Those on the noble AI quest might argue that creating true AI, an artificial form of intelligence, is such a vaunted and noble end that there is little need to be concerned about what might happen along the way, via whatever means possible, to reach it.

Some critics at times refer to the emerging AI systems as a kind of Frankenstein problem. This is akin to the ends justifying the means. There are some too that are worried about a singularity and a backlash by the sentient AI toward all of humanity, though that’s a bit farfetched in comparison to where AI really is today.

For my article about AI as a Frankenstein, see:

For my article about the coming potential AI singularity, see:

For the aspects of super-intelligence AI, see my article:

For starting over with AI, see my article:

For my article about the AI Turing Test, see:

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are some industry critics that are concerned that there is a chance for some of the auto makers and tech firms to fall into the noble cause corruption basket as it applies to AI self-driving cars.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Much of the commentary applies to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
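To make those steps concrete, here is a minimal sketch of the driving task as a processing pipeline. This is purely illustrative: the function names and stub bodies are my own assumptions mirroring the five stages listed above, not any actual auto maker's implementation.

```python
# Hypothetical sketch of the AI driving task pipeline described above.
# Each function stands in for a stage; the bodies are placeholder stubs.

def collect_and_interpret_sensors(raw_sensor_feeds):
    # Stage 1: interpret raw camera/radar/LIDAR feeds into detections.
    return [{"type": feed["kind"], "detection": feed["reading"]}
            for feed in raw_sensor_feeds]

def fuse_sensors(interpretations):
    # Stage 2: reconcile possibly conflicting detections into one view.
    return {"fused_detections": interpretations}

def update_virtual_world_model(world_model, fused):
    # Stage 3: fold the fused view into the car's model of its surroundings.
    world_model["latest"] = fused
    return world_model

def plan_actions(world_model):
    # Stage 4: decide what the car should do next (stub: always cruise).
    return ["maintain_speed"]

def issue_car_controls(actions):
    # Stage 5: translate planned actions into control commands.
    return [{"command": action} for action in actions]

def driving_cycle(raw_sensor_feeds, world_model):
    # One pass through all five stages.
    interpretations = collect_and_interpret_sensors(raw_sensor_feeds)
    fused = fuse_sensors(interpretations)
    world_model = update_virtual_world_model(world_model, fused)
    actions = plan_actions(world_model)
    return issue_car_controls(actions)

commands = driving_cycle([{"kind": "camera", "reading": "stop_sign"}], {})
```

In a real system each stage would be a substantial subsystem running continuously in a tight loop; the sketch only conveys how the stages feed one another.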

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the matter of the noble cause corruption and how it might apply to the AI self-driving car industry, let’s consider some of the ways in which this might happen.

Suppose an AI developer is under-the-gun to get a Machine Learning (ML) or Deep Learning (DL) system to work that will be able to analyze visual images and find posted street signs in the images. For example, using a convolutional neural network to try and detect a stop sign or a speed limit sign. The AI developer amasses thousands of images that are used to train the deep or large-scale neural network. Feeding those images into the budding AI system, the AI developer tweaks it to try and ensure that it is able to spot the various posted street signs.
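For readers unfamiliar with the term, the core building block of a convolutional neural network is the convolution operation, which slides a small filter over an image to produce a feature map. The toy example below (a made-up 5×5 "image" and a hand-picked vertical-edge filter, with no training involved) illustrates just that one operation:

```python
# Toy 5x5 grayscale "image": bright on the left, dark on the right.
image = [
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 0],
    [9, 9, 0, 0, 0],
]

# A simple 3x3 vertical-edge filter (illustrative, not learned weights).
kernel = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]

def convolve(image, kernel):
    # Valid (no-padding) 2D convolution; output is 3x3 for these sizes.
    k = len(kernel)
    size = len(image) - k + 1
    out = []
    for row in range(size):
        out_row = []
        for col in range(size):
            total = sum(image[row + i][col + j] * kernel[i][j]
                        for i in range(k) for j in range(k))
            out_row.append(total)
        out.append(out_row)
    return out

feature_map = convolve(image, kernel)
```

In an actual CNN the filter weights are learned from the training images rather than hand-picked, and many such filters are stacked in layers, but the sliding-window arithmetic is the same.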

As I’ve stated in my writings and presentations at conferences, oftentimes these ML or DL systems are quite brittle. This brittleness means that there will be circumstances in which a visual image captured while an AI self-driving car is underway might not be properly examined by the ML or DL that’s been implemented and placed into the on-board AI system.

The sensor data interpretation might state that there isn’t a stop sign in the image, even though there really is one there, known as a false negative. Or, the sensor data interpretation might state that there is a stop sign in the image, even though there isn’t one there, known as a false positive. These false indications can have a daunting and scary impact on the AI’s efforts to drive the self-driving car.
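As a concrete illustration of these two error types, here is a tiny sketch that tallies false negatives and false positives for a hypothetical stop-sign detector (the ground-truth labels and predictions are made-up values):

```python
# Made-up ground truth vs. detector output for six images.
# True means "a stop sign is present" (ground truth) or "detected" (prediction).
ground_truth = [True,  True,  False, False, True,  False]
predictions  = [True,  False, False, True,  True,  False]

# False negative: a real stop sign that the detector missed.
false_negatives = sum(1 for truth, pred in zip(ground_truth, predictions)
                      if truth and not pred)

# False positive: the detector "sees" a stop sign that isn't there.
false_positives = sum(1 for truth, pred in zip(ground_truth, predictions)
                      if not truth and pred)

# With these made-up values there is one error of each type.
```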

Imagine that you are driving along and for whatever reason fail to see a stop sign, running right through it without any hesitation. I’ve seen this happen a few times during my years of driving. It takes your breath away when you see it happen. The odds are that the driver might plow into someone or something and injure or kill someone; it is very frightening. You look on in amazement and cannot believe what you just saw, and if by luck no one gets hurt, you think it a miracle that nothing adverse occurred.

The other case, of falsely believing a stop sign exists when it does not, can also potentially create a car crash or similar adverse event. If a car suddenly and seemingly inexplicably comes to a stop, there is a solid chance that a car behind might ram into it. I suppose if you had to choose between a car that doesn’t stop at a real stop sign and a car that stops at an imaginary stop sign, you’d feel “better” about the stop at the imaginary sign, though it all depends upon the specifics of the traffic situation at the moment.

For more about convolutional neural networks, see my article:

For one-shot Machine Learning, see my article:

For my article about ensemble Machine Learning, see:

For the importance of probabilities in AI systems, see my article:

Pinched for Time to Fully Test the Convolutional Neural Network

The AI developer that is crafting the convolutional neural network is pinched for time in terms of being able to fully test the system and has not yet vetted ways to avoid the false positives and false negatives. The AI developer was given a deadline and told that the latest iteration of the ML or DL needs to be pushed into the on-board self-driving car system right away. This can be done via OTA (Over-The-Air) electronic updating with the AI self-driving car.

This AI developer believes earnestly and with all their heart in the importance of AI self-driving cars. It is a noble end to ultimately be able to achieve true AI self-driving cars, because it is believed that AI self-driving cars will save lives. People being killed daily in human-driven car crashes are needlessly dying, since if we had AI self-driving cars there would not be such deaths, or so the pundits say (I refer to this as the “zero fatalities, zero chance” myth).

It is also a noble cause because of the mobility that will be spread throughout the world. People that do not have access to a car and getting around will be able to simply summon an AI self-driving car and be on their way. Some refer to this as the democratization of mobility.

There are other stated noble cause outcomes for the advent of AI self-driving cars and I won’t go into all of them here. Generally, it is rather well-publicized that there are claimed noble ends to be had.

The AI developer has to choose whether to proceed with the release of the convolutional neural network into the active on-board AI system, even though the AI developer knows it is not ready for prime time. The AI developer is faced with the urgency of a deadline and has been told that failing to download the latest version will hold up progress on the budding AI self-driving car being trial fielded.

What should the AI developer do?

The target end is a noble one. Being the inhibitor of reaching the noble end, well, that’s a tough pill to swallow. In this case, the AI developer decides it is best to proceed with something, and not hold up the bus, so to speak, and opts to go ahead and let loose the not-yet-ready convolutional neural network. Accordingly, the AI developer makes it download-ready and pushes it along.

Noble cause corruption.

For my article about the dangers of groupthink and AI developers, see:

For the importance of internal naysayer AI developers, see my article:

For my article about what happens when AI developers get burned out, see:

For how the egos of AI developers can get in their own way, see:

The AI developer felt compelled by the noble cause to proceed with something they knew wasn’t ready and felt that the means was ultimately justifiable by the highly desirable ends. And though I’ve mentioned the instance of a visual image analyzer that fell under this spell, you should enlarge the scope and realize that any of the numerous AI subsystems could be equally pushed along and yet not be appropriately ready.

It could be the sensor elements involving the cameras, the radar, the ultrasonic, the LIDAR, and so on. This can also apply to the sensor fusion portion of the AI system. It could readily apply to the virtual world model updating portion. There is an equal chance that the same fate might befall the AI action planning portion, and likewise could happen with the car controls commands subsystem.

The advent of AI self-driving cars carries such a tremendous notion of noble cause that some are tempted to justify otherwise untoward actions to try and make sure that AI self-driving cars come to fruition. If you are creating an AI system that does something more mundane, such as helping you play a video game or aiding you in shopping for groceries, these are not nearly as noble causes.

AI self-driving cars have the drop-the-microphone noble cause. These are AI systems that are about saving lives. These are the AI systems about changing the world and making lives better.

There aren’t many AI systems that can claim that kind of double-whammy.

As mentioned earlier, the greater the noble end, the greater the chances of being slippery about the means.

For my article about zero fatalities is a zero chance, see:

For my article about OTA, see:

For my article about safety and AI self-driving cars, see:

For more about mobility and especially the elderly, see my article:


There is a clear and present danger that the alluring noble ends of reaching a true AI self-driving car can be corruptive toward the efforts involved in developing and fielding AI self-driving cars.

AI developers involved in AI self-driving car efforts are not necessarily plotting evil deeds (some conspiracy theorists believe they are), and instead can simply find themselves confronted with seemingly tough decisions about the work they are doing. Perhaps having to decide whether their decisions are justifiable as balanced against the desired ends.

For conspiracy theories about AI systems, see my article:

For the need to be transparent with AI systems, see my article:

For more about ethics and AI self-driving cars, see:

I hope that AI developers and AI managers, along with all of those working at the various auto makers and tech firms that are devising AI self-driving cars, will take a moment to reflect upon whether there are any noble cause corruptive aspects involved in the efforts at their firm. If so, it is important to take the first step of recognizing the noble cause corruption phenomenon. Without realizing that the phenomenon has taken hold, you are less likely to be able to confront it.

Consider carefully the ends justifying the means, and make sure that you don’t fall into the trap of believing that any means is acceptable as long as the goal of producing a true AI self-driving car is reached. I could translate that into Latin but having it in English seems sufficient.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.