Bug Bounties and AI Systems: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Bounty hunter needed to find a copper pot that went missing from a small shop. Reward for recovery of the copper pot will be 65 bronze coins. So read a notice posted during the Roman Empire in the city of Pompeii. We don’t know today whether any bounty hunter found the copper pot and claimed the bronze coins, but we do know that bounty hunting dates back to at least the time of the Romans.

In more modern times, you might be aware that in the 1980s there were some notable bounties offered to find bugs in off-the-shelf software packages, and in the 1990s Netscape notably offered a bounty for finding bugs in its web browser. Google and Facebook each launched bug bounty programs, in 2010 and 2013 respectively, and in 2016 even the U.S. Department of Defense (DoD) got into the act with a “Hack the Pentagon” bounty effort (note that the publicly focused bounty was for bugs found in various DoD-related websites and not in defense mission-critical systems).

According to statistics published by the entity HackerOne, the monies paid out in 2017 toward bug bounty discoveries totaled nearly $12 million, and for 2018 the total sized up to be more than $30 million. For bugs that are considered substantive issues by a software maker, the usual everyday bounty is around $2,000 per bug (once it is confirmed that the bug exists). Bounties, though, are decided in the eye of the beholder, in the sense that whoever is offering the bounty might go lower or higher, and in some cases there have been bounties in the six-figure range, typically around $250,000 or so.

In the news last week was a bug discovered in Apple’s FaceTime video-chat feature, which allowed you to attempt a multi-party video-chat and then snoop on those you were trying to connect to, even though they had not actually connected and might be unaware that you could hear and possibly even see them. What makes this particular find notable is that the discoverer was a 14-year-old who wasn’t any kind of super-hacker or big-time programmer (he’s in high school and was trying to set up a multi-party video-chat with his friends about playing Fortnite).

As an everyday user, he perchance happened upon this bug. He then informed his mother. She earnestly tried contacting Apple, hopeful of earning a bug bounty (along with wanting to warn others about the snooping dangers), and discovered that it can be harder to report a suspected bug than you might think. She was informed that only those registered in the Apple developer program can report a bug. She then dutifully registered as a developer, and yet still apparently had an arduous path to get Apple’s attention.

This approach of purposely having a somewhat bureaucratic gate-keeping stopgap can make sense because there is a trade-off: making it easy to report a bug helps a firm learn that a potential bug exists, but that same ease could encourage lots of false claims, and it takes precious time and resources for a company to assess each bug claim. Sometimes a software maker has an arduous path simply due to not having thought through the processes involved, while sometimes the barrier is intentionally high.

Some are puzzled that any firm would want to offer a bounty to find bugs in their software.

On the surface, this seems like “you are asking for it” kind of a strategy. If you let the world know that you welcome those that might try to find holes in your software, it seems tantamount to telling burglars to go ahead and try to break into your house. Even if you already believe that you’ve got a pretty good burglar alarm system and that no one should be able to get into your secured home, imagine asking and indeed pleading with burglars to all descend upon your place of residence and see if they can crack into it. Oh, the troubles we weave for ourselves.

Those that favor bounty hunting for software bugs are prone to saying that it makes sense to offer such programs. Rather than trying to pretend that there aren’t any holes in your system, why not encourage holes to be found, doing so in a “controlled” manner? Without such a bounty effort, you could just hope and pray that by random chance no one will find a hole; if instead you offer a bounty and reward those that find a hole, you get a chance to shore up the hole on your own and prevent others from secretly finding it at some later point in time.

Well-known firms such as Starbucks, GitHub, AirBnB, American Express, Goldman Sachs, and others have opted to use the bounty hunting approach. Generally, a firm wishing to do so will put in place a Vulnerability Disclosure Policy (VDP). The VDP indicates how the bugs are to be found and reported to the firm, along with how the reward or bounty will be provided to the hunter. Usually, the VDP will require that the hunter end up signing a Non-Disclosure Agreement (NDA) such that they won’t reveal to others what they found.

The notion of using an NDA with the bounty hunters has some controversy. Though it perhaps makes sense for the company offering the bounty to want to keep mum about the exposures found, it is also said to stifle overall awareness of such bugs. Presumably, if software bugs are allowed to be talked about, it would potentially aid the safety of other systems at other firms, which could then shore up their exposures. There are some bounty hunters that won’t sign an NDA, partly due to a desire for public disclosure and partly due to trying to keep their own identity hidden. Keep in mind too that the NDA aspect usually doesn’t arise until after the hunter claims to have found a bug, rather than being required beforehand.

Some VDPs stipulate that the NDA lasts only for a limited time period, allowing the firm to first find a solution to the apparent hole and afterward permit wider disclosure about it. Once the hole has been plugged, the firm then loosens the NDA so that the rest of the world can know about the bug. The typical time-to-resolution for bounty-hunted bugs is around 15-20 days when a firm wants to plug the hole right away, while in other cases it might stretch out to 60-80 days. As for paying the bounty hunter, the so-called time-to-pay, once the hole has been verified as actually existing, tends to be within about 15-20 days for the smaller instances and around 50-60 days for the larger instances.

White Hat Hackers Try to Do Some Kind of Good

Who are these bounty hunters? They are often referred to as white hat hackers. A white hat hacker is the phrase used for “hackers” that are trying to do some kind of good. We normally think of hackers as cybersecurity thieves that hack their way into systems to steal and plunder. Those are usually considered black hat hackers. Consider that hacking is akin to the days of the Old West, wherein the good gunslingers wore white hats and the evil ones wore black hats (well, that’s what TV and movies suggest).

For anyone that knows much about hacking, such as trying to break into a system, it is somewhat frustrating that the mass media will often confuse true hacking with marginal hacking. If someone uses a social engineering technique to get your password, perhaps calling you on the phone and claiming to be with tech support and asking you for your password, few “genuine” hackers would consider that to be a form of hacking. The culprit merely tricked someone into giving up their password.

If instead the culprit had used some kind of password cracking program that they had written, or if they found some exploitable bug in the password entry program, it would give them more credence as a hacker. It used to be that most of the true hacking was done by hard-core programmers that knew the inner sanctum aspects of various operating systems and other software. Lately, just about anyone can either use social engineering or purchase via the dark web various cracking programs that need only be run. These less bona fide hackers often have very few computer skills and sometimes don’t even know how to write a line of code.

This brings us to the topic of what kinds of software bugs the bounty efforts are looking for. Generally, the bounty program excludes things like social engineering. It’s more about having identified an actual bug in the system. The bounty hunter normally has to be relatively clever and try all sorts of potential exploits to find a hole. It can be a laborious process. There is no guarantee that the bounty hunter will find any holes. This doesn’t mean that there aren’t any holes; it just means that the bounty hunter couldn’t find them.

A firm might feel better about its software if dozens or perhaps hundreds or thousands of bounty hunters have tried to find software bugs and have not been able to do so. Again, this is not any kind of proof that no such bugs exist. But if this multitude of efforts does not bring forth a bug, it would seem to suggest that the bugs are either not there or very hard to find. This might imply that someone of a dishonorable nature who comes along later on, having nothing to do with the bounty effort, will also be unlikely to find any bugs.

What if a bounty hunter finds a bug but decides not to tell the firm? That’s the classic conundrum.

If the firm provides a “safe harbor” protection via their VDP, meaning that they will not try to go after the bounty hunter for finding a bug, and if the firm offers enough of a monetary incentive, the bounty hunter is hopefully swayed toward reporting the bug to the firm.

On the other hand, the bounty hunter might be both a white hat and a black hat kind of hacker, such that if the bug is an exposure that could be exploited to steal or plunder, the value of the bounty might be insufficient and so the hunter keeps the bug under wraps.

The bounty hunter that keeps the bug secret in hopes of later utilizing it for some nefarious act also becomes potentially exposed to adverse legal repercussions, either by being sued by the firm if they act upon the bug or possibly even facing criminal charges. And the bounty hunter has to wonder whether some other bounty hunter might find the same bug, in which case the other bounty hunter will potentially claim the prize over them.

Often, for bounty efforts, more than one bounty hunter finds the same bug. The firm that is undertaking the bounty effort needs to figure out which of the bug reports are duplicative. They also need to figure out which bounty hunter should get the credit for having found the bug. In many cases, the bounty hunters use some kind of reporting system set up by the firm to indicate the bugs being found, and as a result the logging keeps track of which bounty hunter first reported the bug.
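As a concrete illustration, here is a minimal Python sketch of how such a logging system might deduplicate submissions and credit the first reporter. The fingerprinting scheme and field names are my own illustrative assumptions, not any particular bounty platform’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class BugReport:
    hunter: str
    component: str        # e.g., "login-service"
    weakness: str         # e.g., "buffer-overflow"
    submitted_at: datetime

@dataclass
class BountyLog:
    # Maps a bug "fingerprint" to the first report that claimed it.
    first_reports: dict = field(default_factory=dict)

    def submit(self, report: BugReport) -> bool:
        """Record a report; return True only if it is the first for this bug."""
        key = (report.component, report.weakness)  # naive duplicate fingerprint
        earlier = self.first_reports.get(key)
        if earlier is None or report.submitted_at < earlier.submitted_at:
            self.first_reports[key] = report
            return True
        return False  # duplicate; credit stays with the earlier reporter
```

In practice, deciding that two reports describe the same underlying bug is far fuzzier than a tuple comparison, which is part of why firms still need skilled triage staff.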

I’ve worked with companies that thought doing a bug bounty effort would be a “fun” and publicity-worthy activity. I pointed out to them that beyond the aforementioned dangers of doing such an effort, it also often produces lots of false reports. In essence, there are bounty hunters that are desperate to try and win some of the bounty, and so they will log all sorts of wild things that are not bugs at all.

In the days of the Old West, suppose you offered a reward for the capture of Billy the Kid (a famous outlaw). If you did so and did not include a picture of what Billy looked like, imagine the number of bounty hunters that might drag into the sheriff’s office someone that they hoped or thought was Billy the Kid. You might get inundated with false Billys. This is bad since you’d presumably need to look at each one, asking probing questions, to ascertain whether the person was really Billy or not.

The same is the case for scrutinizing the bounty hunter submissions. There will be a lot of “noise” in the reported bugs, in the sense that many of the claimed bugs don’t exist, and the bounty hunter just thought they found one.

Unfortunately, determining which of the reported bugs are valid and which are not takes a lot of laborious effort by your highly skilled software engineers. It means that they will be taken away from whatever else they should be doing. I mention this because there is a substantive cost involved in assessing the bugs, and many firms don’t account for that cost when they decide to run one of these bounty efforts. They naively seem to think that only bona fide bugs will be reported. Not so.

If you are pondering what kind of bugs might be found, you can take a look at the Common Vulnerability Scoring System (CVSS) to see how bugs are rated as low, medium, high, or critical, along with seeing examples of such bugs. One example that is easy to describe is CVE-2009-0658, the Adobe Acrobat buffer overflow vulnerability (which has since been fixed).

Essentially, if you tried to open a PDF document that contained a malformed picture (one likely purposely malformed), it would cause an overflow in the Adobe software buffer and allow a remote attacker to then execute code on your system. This would be especially attractive to the interloper if you happened to have system privileges on your machine, and thus by opening the devious PDF in your Adobe Reader you would have opened up Pandora’s box. Based on a combination of metrics including the attack complexity, user interaction required, and so on, it earned a CVSS v2 base score of 9.3.
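To make the severity labeling concrete, here is a small Python sketch that maps a base score to its qualitative band using the commonly published CVSS v3.x ranges (the older CVSS v2 scale, under which CVE-2009-0658 received its 9.3, topped out at “high” because v2 has no “critical” band):

```python
def cvss_v3_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity band."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "none"
    if base_score <= 3.9:
        return "low"
    if base_score <= 6.9:
        return "medium"
    if base_score <= 8.9:
        return "high"
    return "critical"

# A 9.3 lands in "critical" under the v3.x bands; CVSS v2 rated the same
# score as "high" because v2's scale had no "critical" label.
print(cvss_v3_severity(9.3))  # critical
```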

In some cases, the firm doing the bounty program will make it open to the public. Anyone that wants to have at it, please do so. These are usually time-bounded. The firm will declare that the bounty program starts, say, a month from now and will last for 60 days. This helps spark interest and get those bounty hunters looking. There are also open-ended bounty programs, wherein a firm will at any time welcome a bounty hunter offering a proposed found bug.

During the days of the Old West, this kind of open call would often bring forth vigilantes and bounty hunters that had no idea what they were doing. It was a free for all. As such, some of the software bug bounty programs are at times public but still restricted in some fashion. For example, you might need to officially register with the bounty effort and provide some kind of evidence of your credentials.

There are also private-oriented bounty efforts. In the private instances, the firm will tend to seek out specific known white hat hackers and arrange for them to get access to the software that is going to be put through the wringer. This hopefully also reduces the chances of a black hat hacker getting involved.

Debate ensues in leadership circles about whether it is better to use a bounty approach or to instead hire a bug-finding firm to do the work. There are plenty of firms that will do security threat analyses and perform the same kind of work that bounty hunters would do. You can establish an hourly rate or a set fixed price for them to assess your systems and try to find bugs. They can then work hand-in-hand with your software team, and it is all done as a rather confidential matter.

Some would argue that hiring a firm cannot possibly match the economics of bounty hunting. In other words, there might be hundreds of bounty hunters spending gobs and gobs of hours trying to find bugs. One of the bounty hunters finds a bona fide bug and you pay that person, say, $1,500. If you had been paying specialists to search for bugs, it might have cost you $15,000 or maybe $150,000 to have found that same bug. Thus, in theory, the bounty approach is a cheaper way to find bugs (maybe!).

Whether an Internal Team Should Do Bounty Hunting Is a Matter of Debate

Some would even argue that your own internal software team should be doing the bounty hunting. I’ve had some lengthy discussions about whether to offer a “bonus” to any member of the team that finds a bug, which can unfortunately also produce counter-productive behavior. In one firm, the team members were planting bugs to be able to get bonuses when they found the bugs. This is not in the spirit of such an effort and there are ways to try and avoid getting into such an awkward and untoward predicament.

One argument against using your own team to find bugs is that they are too familiar with the software to readily find the bugs. They wrote the software and so might make all sorts of assumptions that would blind them to the bugs. By using outsiders, you get people trying all kinds of wild tricks to find bugs. They don’t know where the bugs are. They use their outsider lack of awareness to try all avenues, and don’t assume that you must have done various testing and safeguards. The counter-argument is that you can simply divide your own developers into a blue team, a red team, and sometimes a purple team, and thus gain a somewhat similar sense of outsider assessment.

There are bounty hunters that are interested in selling their find to the highest bidder. If the bounty provided by a firm does not seem sufficient, the hunter with a found bug could be tempted to find someone else willing to pay more. There is a black-market for the purchase of bugs, a marketplace somewhat readily found on the so-called Dark Web (these are parts of the Internet known for notorious or nefarious activity). It could be that an entity or agent that is up to no good might purchase a bug that seems useful for their untoward needs. Or, it could be a computer security firm that wants to showcase to its customers the kind of bugs it can find and so rummages around trying to buy up interesting or notable bug finds.

As per the case of the 14-year-old who discovered the FaceTime video-chat bug, a bounty hunter does not necessarily need to be a true hunter at all. Someone that by accident happened to discover a bug might momentarily become a kind of bounty hunter. Let this be an eye-opener that it sometimes pays to be on the watch for bugs in software. You might not be going out of your way to find bugs, and yet if you land upon one, it could possibly pay off.

That being said, the effort to get a firm to pay you for the bug can be painfully slow and the firm might not ever opt to pay you, even if they have a bona fide bug bounty program in place. I would not suggest you quit your day job to become a software bounty hunter bent on making a fortune by finding bugs. There might be gold in them thar hills, but you will likely starve before you can find enough to make a living and put food on your table.

For my article about bugs in AI systems, see: https://www.aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For my article about reverse engineering AI software, see: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For code obfuscation and AI systems, see my article: https://www.aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

For the dangers of back-doors in AI systems, see my article: https://www.aitrends.com/selfdrivingcars/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Besides our own efforts to find and eliminate any potential bugs, we also are able to aid other tech firms and auto makers by being private “bounty hunters” when requested, focusing on specifically AI self-driving car systems.

A macroscopic question, though, is whether the auto makers and tech firms should use bounty hunter efforts at all.

Similar to my earlier points, you might at first say that of course the auto makers and tech firms that are making AI self-driving cars should not undertake public-oriented bounty hunter programs. Why would they allow hackers to try and find bugs in AI self-driving car systems? Isn’t this tantamount to having your home examined closely by burglars? In fact, it’s scarier than that. It’s like having an entire neighborhood of homes closely examined by burglars, and they might not just be interested in your jewels and money but may be a threat to your personal safety too.

When you consider that AI self-driving cars are life-or-death systems, meaning that an AI self-driving car can go careening off the road and kill the human occupants or humans nearby, it would seem like the last thing you would want to do is invite potential black hat hackers to find holes.

For my article about safety and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

The counter-argument is that if the auto makers or tech firms don’t do a bounty-type program, they may end up putting onto the roads an AI self-driving car that has unknown bugs, holes which the black hat hackers will ultimately find anyway. And once those holes are found and exploited, the results could be life-and-death for those using the AI self-driving cars and those nearby.

I’d like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
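As a rough sketch of that distinction, one might encode the levels like this in Python; the names and the human-driver rule below follow the simplified description above, not the full SAE taxonomy:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Simplified self-driving car levels (loosely following the SAE scale)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5   # the "true" Level 5: no human driver expected at all

def requires_human_driver(level: AutonomyLevel) -> bool:
    # Per the description above: below Level 5, a human driver must be
    # present and ready at all times to perform the driving task.
    return level < AutonomyLevel.FULL_AUTOMATION
```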

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal loop sketch follows the list):

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance
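Here is that minimal loop sketch in Python. Everything in it is an illustrative stub of my own, not an actual self-driving stack; it simply shows how the five stages chain together on each processing cycle.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class WorldModel:
    # 3. The virtual world model: here, just a list of detected obstacles.
    obstacles: List[str] = field(default_factory=list)

    def update(self, detections: List[str]) -> None:
        self.obstacles = detections

def collect_and_interpret(raw_frames: List[Dict]) -> List[List[str]]:
    # 1. Sensor data collection and interpretation (stubbed).
    return [frame.get("detections", []) for frame in raw_frames]

def fuse(per_sensor: List[List[str]]) -> List[str]:
    # 2. Sensor fusion: merge overlapping detections across sensors.
    return sorted({d for detections in per_sensor for d in detections})

def plan_actions(model: WorldModel) -> List[str]:
    # 4. AI action planning (stubbed): brake if anything is detected ahead.
    return ["BRAKE"] if model.obstacles else ["MAINTAIN_SPEED"]

def driving_cycle(raw_frames: List[Dict], model: WorldModel) -> List[str]:
    model.update(fuse(collect_and_interpret(raw_frames)))
    # 5. Car controls command issuance: a real stack would hand these
    # commands to the actuators; here we simply return them.
    return plan_actions(model)

# Example: two sensors, one of which detects a pedestrian.
print(driving_cycle([{"detections": ["pedestrian"]}, {}], WorldModel()))  # ['BRAKE']
```

A bug anywhere along this chain, from misinterpreting sensor data to issuing the wrong control command, is exactly the kind of hole a bounty hunter would be probing for.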

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Some say that it would be dubious and actually dangerous for the auto makers and tech firms to consider doing a public oriented bounty program for finding bugs in AI self-driving cars. If those entities want to do a private oriented bounty program, involving carefully selected white hat hackers, it would seem more reasonable given the nature of the life-and-death systems involved.

Run a Private Bounty Program, Hire a Firm, Handle Internally – All Options

It then falls on the heads of the auto maker or tech firm to decide whether a private bounty program is best, or whether to instead hire a firm to do the equivalent, or whether to try some kind of internal bounty effort. The presumption is that the auto maker or tech firm needs to decide what will most likely reduce the chances of bugs existing in the AI self-driving car systems. In fact, the auto maker or tech firm might try all of those avenues, under the notion that given the critical nature of such systems, the more the merrier in terms of finding bugs.

There are some that believe the auto makers and tech firms might not take seriously the need to find bugs, and thus the assertion is made that regulations should be adopted accordingly. Perhaps the auto makers and tech firms should be forced by regulatory laws to undertake some kind of bounty effort to find and eliminate bugs. This is open to debate, and for some it is a bit of an overreach on the auto makers and tech firms. It is likely, though, that if AI self-driving cars appear to be exhibiting bugs once they are on our streets, regulatory oversight will begin to appear.

For federal regulations and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For local regulatory aspects, see my article: https://www.aitrends.com/selfdrivingcars/savvy-self-driving-car-regulators-spotlight-assemblyman-marc-berman-need-goldilocks-set-legal-provisions-ones-just-right/

For my article about the rise of public shaming of AI self-driving cars, see: https://www.aitrends.com/ai-insider/public-shaming-of-ai-systems-the-case-of-ai-self-driving-cars/

For my article covering my Top 10 predictions about AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

One view is that there’s no need to do a large-scale casting call for finding bugs.

Instead, the AI self-driving cars themselves will be able to presumably report when they have a bug and let the auto maker or tech firm know via Over The Air (OTA) processing. The OTA is a feature for most AI self-driving cars that allows the auto maker or tech firm to collect data from an AI self-driving car, via electronic communication such as over the Internet, and then also be able to push data and programs into the AI self-driving car.

It is assumed that the auto makers and tech firms will dutifully and rapidly send out updates via OTA to their AI self-driving cars, shoring up any bugs that are found. Though this is supposed to be the case, there will still be a time delay between when the bugs are discovered and then a bug patch or update is prepared for use. There will be another time delay between when those patches get pushed out and when the AI self-driving cars involved are able to download and install the patch.

I mention these elapsed time periods because some pundits seem to suggest that if a bug is found on a Monday morning at 8 a.m., by 8:01 a.m. the bug will have been fixed and the fix sent to the AI self-driving car. Not hardly. The auto maker or tech firm will need to first determine whether the bug is really a bug, and if so, what is causing it. They will need to find a means to plug or overcome the bug. They will need to test this plug and make sure it doesn’t adversely harm something else in the system. Etc.

Even once the patch is ready, sending it to the AI self-driving cars will take time. Plus, most of the AI self-driving cars are only able to do updates via the OTA when the AI self-driving car is not in motion and in essence parked and not otherwise being active. If you are using an AI self-driving car for a ridesharing service, the odds are that you’ll be running it as much as you can, nearly 24×7. Thus, trying to get the OTA patch will not be as instantaneous as it might seem.
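As a minimal sketch of that parked-only constraint, consider the Python fragment below. The state names, hash check, and install placeholder are all my own illustrative assumptions; a real OTA pipeline involves signed images, staged rollouts, and rollback support.

```python
import hashlib

def install(patch: bytes) -> None:
    # Placeholder for the real staged install, activation, and reboot steps.
    print(f"Installing {len(patch)}-byte patch...")

def try_apply_ota_patch(vehicle_state: str, patch: bytes, expected_sha256: str) -> bool:
    """Apply an OTA patch only when the vehicle is safely parked."""
    if vehicle_state != "parked":
        # Defer while driving; a near-24x7 ridesharing car may rarely be idle,
        # which is one source of the update lag discussed above.
        return False
    if hashlib.sha256(patch).hexdigest() != expected_sha256:
        # Corrupted or tampered download; never install unverified code
        # into a life-or-death real-time system.
        return False
    install(patch)
    return True
```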

We also need to consider the severity of the bug. If the bug is so severe that it causes the AI to lose control of the car, such as if the AI freezes up, you are looking at the potential of an AI self-driving car that rams into a wall, slams into another car, or rolls over and off the road. The point is that you cannot think of this as finding bugs in, say, a word processing package or a spreadsheet package. These are bugs in a real-time system, one that holds in the balance the lives of humans.

For aspects about OTA, see my article: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For my article about the non-stop use of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

For my article about the Internet of Things and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/internet-of-things-iot-and-ai-self-driving-cars/

For my article about the robot freezing problem and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

For those of you that pay attention to the automotive field, you likely already know that General Motors (GM) was one of the first auto makers to formally put in place a VDP, doing so in 2016. For their public bounty efforts, the focus has tended to be the infotainment systems on-board their cars or other supply chain related systems and aspects.

Overall, it has been reported that GM from 2016 to the present has been able to resolve over 700 vulnerabilities, doing so in coordination with over 500 bounty hunters and hackers. Under the GM umbrella, this effort includes Buick, Cadillac, Chevrolet, and GMC. Currently, an estimated seven of the Top 50 auto makers have some kind of bounty program.

This overarching focus to-date, though, is different from dealing with the innermost AI aspects of the self-driving car capabilities. Recently, GM announced that they would be digging deeper via the use of a private bounty program. Apparently, they have chosen a select group of perhaps ten or fewer white hat hackers that had earlier participated in the VDP and will now be getting a closer look into the inner sanctum.

I’ve had AI developers ask me if they can possibly “get rich” by being a bounty hunter on AI self-driving cars. I wish that I could say yes, but the answer is likely no. It might seem like an exciting effort, being a bounty hunter wandering the hills looking for a suspect. It’s not as easy as it seems. The odds of finding a bug are likely not so high, and how much you’d get paid is a key question too.

Consider too that you would need access to the AI self-driving car and its systems to even look for a bug. Right now, there aren’t true AI self-driving cars that are readily and openly available on our roadways. Instead, the auto makers and tech firms are carefully watching over the AI self-driving cars that are on the public roadways. About the only means for you to get access would be to become a white hat hacker that gets invited into a private bounty hunter program for an auto maker or tech firm.

For the advent of ridesharing and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For debunking myths about AI self-driving cars as an economic commodity, see my article: https://www.aitrends.com/selfdrivingcars/economic-commodity-debate-the-case-of-ai-self-driving-cars/

For the need of fail-safe AI, see my article: https://www.aitrends.com/ai-insider/fail-safe-ai-and-self-driving-cars/

For the debugging of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

Conclusion

When the outlaw Jesse James was sought during the Old West, a “Wanted” poster was printed that offered a bounty of $5,000 for his capture (stating “dead or alive”). It was a rather massive sum of money at the time. One of his own gang members opted to shoot Jesse dead and collect the reward. I suppose that shows how effective a bounty can be.

Bounty programs have existed since at least the time of the Romans and thus we might surmise that they do work, having successfully endured as a practice over all of these years. For AI self-driving cars, I hope you will ponder carefully whether the use of a bounty program is worthwhile or not. The key overall aspect is that we don’t want AI self-driving cars on our roadways that have bugs. I’ll put up a Wanted poster right now for that goal.

Copyright 2018 Dr. Lance Eliot

Follow Lance on Twitter @LanceEliot

This content is originally posted on AI Trends.