By Lance Eliot, the AI Trends Insider
An anomaly is something considered out of the ordinary and often used to describe things or events that seem to be peculiar, rare, abnormal, or at times are otherwise difficult to even classify.
Sometimes an anomaly is unwanted and can be bothersome when performing a task, while in other instances an anomaly might shed new insight that no one previously gave due attention to. It can be hard to know whether an anomaly will be ultimately seen as desirable versus undesirable.
In the late 1800s, Wilhelm Roentgen was working in his lab on an experiment to make electrons zip through open air. After repeated trials with various cathode rays, he noticed that a barium platinocyanide-coated screen at the edge of his table became fluorescent.
This was an oddity. It was peculiar. He could have shrugged it off as an anomaly that was not worthy of further attention. Turns out that Wilhelm opted to study this aspect and it led him to the discovery of X-rays and X-ray beams.
There were other researchers doing similar work at the time of his discovery, but it was his willingness to entertain the anomaly and give it its due that earns him credit as the discoverer of X-rays, or what some in-the-know refer to as Roentgen rays. You’ve perhaps seen in the history books his first formal X-ray image, consisting of his wife’s hand and depicting her finger bones and her wedding ring. One must say that she was quite brave to participate in the experiment, particularly since the nature of these electromagnetic waves and the hazards involved were not yet well understood.
Wilhelm’s anomaly provides an example of a situation in which detecting and acting upon an anomaly paid off. Sometimes an anomaly can be a fluke that provides no added value to what is being examined or studied.
It could be random noise that happened to be encountered when you were doing something else and thus had no true bearing on the phenomena that you were studying. If you then pursue the anomaly to try and figure out whether it has merit or not, you might be wasting valuable attention and resources on something that has little or no benefit in the end. You are likely at first hopeful the anomaly will be a Eureka kind of moment, but often it turns out to be something mundane such as noise or a transient issue that was then self-corrected.
When I used to teach university classes on statistics and AI, I would cover the various “exclusion” techniques that could be used to deal with suspected anomalies. One obvious approach is to simply discard the anomaly. This though can create issues since it can leave a somewhat unexplained hole or gap in your research.
Another approach to dealing with an anomaly in your data involves Winsorizing it.
The Winsorizing Technique
Winsorizing is a statistical technique in which you substitute for the anomaly a value drawn from the nearest other data that is considered not an anomaly (referred to as non-suspected data). But this can be a questionable practice, since it implies that you actually obtained “real data” that further supported your other data, when instead you essentially made up or manufactured data to your liking. The same can be said for any other method that replaces the actual data with concocted data.
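As a rough illustration, here is a minimal winsorizing sketch in Python; the percentile cutoffs and the nearest-rank indexing are my own simplifications for demonstration, not a standard routine:

```python
# Illustrative sketch of winsorizing: values beyond chosen percentiles are
# clamped to the nearest "non-suspected" value rather than discarded.
def winsorize(values, lower_pct=0.05, upper_pct=0.95):
    """Clamp extreme values to the lower/upper percentile values (nearest rank)."""
    ordered = sorted(values)
    n = len(ordered)
    lo = ordered[int(lower_pct * (n - 1))]   # lowest value considered acceptable
    hi = ordered[int(upper_pct * (n - 1))]   # highest value considered acceptable
    return [min(max(v, lo), hi) for v in values]

data = [9, 10, 11, 10, 9, 250]   # 250 looks like an anomaly
print(winsorize(data))           # the 250 gets clamped to its nearest neighbor
```

Libraries such as SciPy offer a ready-made winsorize routine with more careful percentile handling; the point here is only that the extreme value is replaced by its nearest non-suspected neighbor, which is exactly why critics view it as manufacturing data.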
One criticism of scientific studies, especially those in the medical domain, is that at times the scientists performing a life-critical study will opt to toss out an anomaly that appears in their research.
If you are trying to show that a new drug will save lives and prevent some dastardly malady from spreading, it can be tempting to disregard anomalies that might arise. By tossing out the anomaly or hiding it via a form of substitution, you could be inadvertently concealing something very telling. Perhaps the drug only works in certain situations, and the anomaly could have revealed those crucial boundary conditions.
This is also why there is an ongoing clamor for researchers to share their data when they publish the results of their work. By posting their data, they allow other researchers to examine it and perhaps treat any anomalies differently than the original researchers did. This might reveal “censored” aspects of the study and thus open new questions about the conclusions of the research. In years past, it was difficult to share data, but nowadays with the Internet there is not much of an excuse that it is arduous to do so.
For my article about irreproducible aspects and the importance of data, see: https://aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/
For the need to have transparency in algorithms and analyses, see my article: https://aitrends.com/selfdrivingcars/algorithmic-transparency-self-driving-cars-call-action/
For my article about the dangers of groupthink, see: https://aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/
For the sharing of Machine Learning data, see my article: https://aitrends.com/machine-learning/machine-learning-data-self-driving-cars-shared-proprietary/
You don’t have to be a scientist to encounter anomalies. We experience anomalies in our daily lives. At times, you might not notice the anomaly, while in other cases you notice it but write it off as a fluke. In other cases, you might divert your attention to the anomaly, though this can be a good or bad idea to shift your focus, depending upon the nature and value of the anomaly.
Driving Journey And An Anomaly
As an example, I was on a lengthy driving journey the other day, using a major highway. For hours on end, the traffic situation was relatively predictable and monotonous. It was a two-lane road in the northbound direction, passing through the central region of California considered our state’s agricultural belt. Regular cars would drive in the leftmost lane or “fast lane,” and the lumbering trucks filled with various agricultural products, such as oranges, onions, and so on, kept to the slower rightmost lane.
If a lumbering truck was going excessively slow, the other trucks behind it would try to go around the slowpoke truck and do so by briefly getting into the fast lane. Regular cars in the fast lane hated to have this happen. It meant that the fast lane, which was moving at 80+ miles per hour, would now need to slow down to allow a 55 miles per hour truck to proceed into the fast lane. Cars would either pretend to ignore the turn blinkers of the trucks that were trying to signal they wanted into the fast lane, or the drivers of some cars would blatantly prevent the trucks from getting into the fast lane by not allowing any gaps between the faster moving cars.
On the occasions that I opted to let trucks in front of me, I could see via my rear-view mirror the pained expressions of the drivers in the cars just behind me. They were exasperated that I was allowing the snail-paced trucks into the fast lane. How rude of me! Those other drivers then rode on my bumper, expressing their irritation as though it would somehow pressure the trucks ahead of me to move along faster in the fast lane. Such is the civility of our roadways.
In any case, this was a routine matter and happened from time-to-time.
Most of the time, nearly all the time, the trucks were in the slow lane. I got used to passing truck after truck, all of them seemingly at a standstill in the slow lane, though that was just a perception created by the rapidly moving fast lane versus the much slower slow lane.
Towards the end of my drive, I saw up ahead that the trucks were in the fast lane. I figured that it was likely several trucks trying to pass a slower truck that must be turtle-like hampering the slow lane. I partially got into the slow lane to see what truck was causing the others to switch into the fast lane. To my surprise, there wasn’t any truck at all up ahead in the slow lane.
This was curious.
The trucks always loyally stayed glued to the slow lane, unless there was a need to pass another truck, in which case they quickly got into the fast lane, went around the slower truck, and quickly got back into the slow lane. But, inexplicably, the trucks were all in the fast lane. They were still driving at the slower speed of 55 mph, and yet all of them had decided to get into the fast lane.
What should I have done? I could just stay in the fast lane, cruising at the slower 55 mph, and follow the lead of the trucks. Or I could switch entirely into the slow lane and zip ahead of the lengthy line of trucks in the fast lane. This is the reverse of what you might normally expect, in that usually you zip past via the fast lane, but if the trucks wanted to hog the fast lane, it seemed like they were nearly begging me to go ahead and use the slow lane (at least that’s what I would have explained to a highway patrol officer who might have later stopped me for speeding past the trucks in the slow lane!).
I am sure that if there were other cars behind me at that point of the journey, they certainly would have been willing to use the slow lane for that purpose. Likely car after car would have come up to me, while I was lumbering in the fast lane behind the lumbering line of trucks, and then have gotten disturbed at the trucks being in the fast lane. Those car drivers would have probably gotten pissed off at the situation and then realized they could just skirt around the whole mess by using the slow lane. At that point, the slow lane would have become the fast lane, and the fast lane would have become the slow lane.
World turned upside down.
I wondered whether those other car drivers would even take a moment to ponder why all the trucks were in the fast lane. I’d bet that many of the drivers would not have given any thought to it. On these lengthy highway journeys, it seems like there are drivers that are just trying to maximize their speed and whatever seemingly legal or quasi-legal way to do so is fine with them. Not much thought involved. Almost a monkey-see and monkey-do kind of driving strategy. If there is a way to go faster, do so.
Well, I decided I would just stay in the “fast” lane and see how this matter evolves.
After an agonizing five to ten minutes of being part of the lumbering herd (a distance of about 8 miles), it finally became apparent what was going on. Up ahead there was an accident in the slow lane, marked off with cones and flares. I am assuming that the truck drivers either spread the word among themselves or that the accident had somehow gotten marked onto a GPS mapping system, though mine did not seem to know about it. This was a rather remote location, so it was unlikely that anyone had been helping to mark an accident that had only recently occurred.
The trucks had wisely gotten into the fast lane, in-advance of coming upon the accident scene. This was a good move since it avoided having to jockey and slow down at the accident scene itself. Instead, the trucks kept their regular 55 mph and were able to scoot past the accident. Once they had gotten reasonably past the accident scene, the trucks began to move back into the slow lane. World order was restored once again.
Let’s revisit the story.
Why would I claim that this was an example of an anomaly?
Because the trucks were normally in the slow lane and only briefly got into the fast lane. If I had been collecting data during my driving journey, a plot of it would have shown that 99% of the time the trucks were in the slow lane and maybe 1% of the time they used the fast lane (for passing purposes). The use of the fast lane to traverse the accident scene fell into that 1% of the time, but I could logically discern that the trucks weren’t passing each other, which was their customary reason for being in the fast lane.
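To make that reasoning concrete, here is a hypothetical sketch of such a rarity check; the threshold, the lane history, and the "passing" flag are my own illustrative assumptions, not anything from an actual traffic system:

```python
from collections import Counter

# Hypothetical sketch: flag an observation as a suspected anomaly when it is
# rare in the historical data AND the usual explanation for it does not apply.
history = ["slow"] * 99 + ["fast"]      # trucks were in the slow lane ~99% of the time
freq = Counter(history)

def is_suspected_anomaly(lane, passing, history_freq, rare_threshold=0.05):
    total = sum(history_freq.values())
    rate = history_freq[lane] / total
    # Rare behavior with no known cause (e.g., trucks not passing) merits attention.
    return rate < rare_threshold and not passing

# Trucks in the fast lane, yet not passing anyone: rare AND unexplained.
print(is_suspected_anomaly("fast", passing=False, history_freq=freq))
```

The key point is the second condition: the fast-lane observation alone sits inside the ordinary 1%, and only the absence of the customary explanation elevates it to a genuine anomaly.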
AI Self-Driving Autonomous Cars Aspect
Suppose that I was an AI system that had been driving my car. Would the AI have been able to discern that this seemed indeed to be an anomaly?
On the one hand, you might say that no, the AI would have not been able to do so. The trucks were legally in the fast lane, and they had been using the fast lane from time-to-time, so nothing about this would at first glance seem odd or untoward. The AI would presumably not especially care that those were trucks ahead of it rather than regular cars. Sure, the traffic speed had slowed down but if the AI is doing a pied piper kind of approach of regulating its speed by the traffic ahead of it, the AI would just slow down the car and match the speeds of the trucks. No big deal.
Furthermore, the AI would want to drive “legally” (presumably), and so the idea of switching into the slow lane to pass the trucks would not likely have been something that the automaker or tech firm had even included in the AI action plans for driving the car. Though using the slow lane for that purpose is not strictly illegal per se, many would consider it an improper or inappropriate driving tactic, and so many AI systems would not consider it. I’ve debunked such ideas and have called for and predicted that AI for self-driving cars will need to be more flexible and not so narrow-minded, as it were.
For my article about the dangers of the pied piper approach, see: https://aitrends.com/selfdrivingcars/pied-piper-approach-car-following-self-driving-cars/
For various human foibles of driving, see my article: https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/
For the importance of AI systems doing defensive driving, see my article: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
For the debunking of illegal driving by AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/illegal-driving-self-driving-cars/
Overall, it would be likely that the AI of a self-driving car would probably not notice anything particularly unusual going on and would have simply stayed in the fast lane and followed along with the traffic. This is unfortunate in that it could be important for the AI to be watching for and possibly acting upon anomalies that it might encounter.
In my example of the trucks, you could argue that it makes no difference whether the AI was able to detect the anomaly in the traffic situation. Yes, luckily, in this particular circumstance, the AI-driven car staying in the fast lane was fine, and probably the more appropriate response to the anomaly. But the fact that the AI didn’t even realize an anomaly had occurred, and in a sense just blindly kept driving, is the part that might not work out so well on other occasions.
Here’s another example that might better illustrate the matter.
I was on the freeway the other day and the traffic was light and moving along rather quickly (a rarity in and of itself here in traffic-snarled Southern California). I noticed up ahead that a man was walking along the edge of the freeway.
Allow me to explain that most of our freeways here are relatively well blocked off from any pedestrians getting onto the freeway. There tend to be fences and brick walls that separate the freeways from any nearby homes, businesses, and so on. You would usually need to walk up an on-ramp or exit ramp of the freeway to physically be able to walk along the freeway. There are signs at the on-ramps and exit ramps that clearly state not to walk onto the freeway.
The only time that you would normally see a person walking along the freeway would most likely be if their car broke down. They might then be walking to the nearest ramp, so they could get off the freeway. But, this doesn’t happen very often either since there are numerous specially dedicated phone boxes on the freeway that stranded people can use to call for assistance. Plus, our freeways are so frequently being cruised by police and tow trucks that the odds are you won’t be stranded at your broken-down car for very long. And, it is generally publicized by the highway patrol that you should stay with your vehicle and not walk away from it (you can get a ticket for abandoning your car on the freeway).
Thus, the moment I saw a man walking on the freeway, I looked at the side of the freeway to see if a car had broken down. I had not seen one so far, and looking up ahead I could not see one there either. This man, even if he was walking away from a broken-down car, appeared to be quite a lengthy distance from it. I right away doubted that this was a case of a broken-down car.
When I first noticed the walking man, I was driving in the slow lane, which meant that I would shortly pass within just a few feet of this curiously out-of-place pedestrian. I would have gone past him at around 70 miles per hour, which is about 100 feet per second. I decided that the whole thing smelled fishy, and my innate Spidey sense was tingling.
I moved over into the leftmost lane of the freeway, trying to create as much separation as possible between me and the walking man for when I would zip past him. I also kept my eye on him. I was on alert. My mind, my hands, and my feet were ready in case anything untoward suddenly arose and I needed to maneuver my car rapidly.
I had in mind that the walking man might not be content with walking along the side of the freeway. Perhaps he might opt to suddenly dart into traffic. Or maybe he might throw an object into traffic. Who knows? I realize you might be sympathetic to the walking man and think that maybe I was being a bit paranoid, but as I’ve tried to explain, it is highly unusual to see a person walking along the freeway, especially when there is no clear-cut indication of why he is doing so.
I would say his presence was an anomaly.
Should I have just ignored what I considered to be an anomaly? I opted to give the anomaly some credence. I took action by moving over to the fast lane and by keeping my eye on the matter. This action was not especially risky or odd, since I could have readily been in the fast lane anyway. It’s not as though I suddenly slammed on my brakes or took any rash action. I took a subtle form of action that was intended to be a defensive form of driving and that would provide me with a lessened risk of exposure and a greater number of options if I needed to take other more pronounced actions.
What would AI do?
With today’s AI, the odds are that the AI would likely have detected the walking man. The odds are that the detection would have led to the walking man being marked as such in the virtual world model that would be used by the AI to grasp the nature of the surroundings of the driving environment. The AI would certainly already be generally programmed for detecting and monitoring the movement of pedestrians.
Would the AI though have taken any action? Perhaps not. The pedestrian did not appear to be a threat to the AI self-driving car. He wasn’t running into the lanes. He wasn’t making wild motions. There was nothing obvious about any dangers associated with the pedestrian. If you didn’t know any better, you would have classified the walking man as you would any person that might be walking on the sidewalk on any street that you might be driving on. In that sense, this seemed perfectly normal. At least it might seem so on the surface and without any deeper kind of assessment or analysis.
For the pedestrian aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/avoiding-pedestrian-roadkill-self-driving-cars/
For my forensic analysis of the Uber crash in Arizona, see: https://aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/
For accidents and AI self-driving cars, see my article: https://aitrends.com/ai-insider/accidents-contagion-and-ai-self-driving-cars/
What made the hair stand up on the back of my neck was the notion that a pedestrian was in a place and at a time where he should not have been, or at least that would rarely ever occur. And I had already tried to determine whether this was a “normal” occurrence by looking for a disabled car (normal in the sense that, from time to time, though rarely, there might be a person walking on the freeway due to a broken-down car), but none seemed to be anywhere nearby.
And so we now reach the crux of my theme, namely, as a human driver, I would classify this walking man as an anomaly. And, I would then consider whether to give merit to the anomaly or shrug it off.
Here was my thinking:
- If I shrugged it off, I would presumably continue unabated and pretty much ignore the anomaly.
- If I thought the anomaly had merit, I would investigate further, hoping to ascertain the validity of the anomaly. If the anomaly seemed to have sufficient validity, I would then decide upon whether my course of action should be altered knowing that I seem to have a genuine anomaly in-hand.
I assert that any well-qualified AI should be able to do the same, especially the AI of self-driving cars, which involves life-and-death matters; indeed, an anomaly can determine the fate of the humans in the self-driving car or nearby it.
AI For Autonomous Cars Needs To Cope With Anomalies
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving driverless autonomous cars. One important aspect of the AI is its capability to identify, detect, interpret, analyze, and determine a course of action related to anomalies.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Much of the commentary applies to self-driving cars at less than Level 5 too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here are the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
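The steps above can be sketched as a simple processing loop. All of the names below are hypothetical stand-ins for illustration, not any automaker's actual API, and the fusion rule is deliberately toy-simple:

```python
# Illustrative skeleton of the driving-task steps listed above.

def sensor_fusion(readings):
    # Toy fusion rule: keep any object reported by at least two sensors.
    seen = {}
    for objects in readings.values():
        for obj in objects:
            seen[obj] = seen.get(obj, 0) + 1
    return {obj for obj, count in seen.items() if count >= 2}

def drive_cycle(readings, world_model, plan_action, issue_commands):
    fused = sensor_fusion(readings)     # steps 1-2: interpret + fuse sensor data
    world_model |= fused                # step 3: update the virtual world model
    action = plan_action(world_model)   # step 4: AI action planning
    issue_commands(action)              # step 5: car controls command issuance
    return world_model, action

# One cycle: the camera and radar agree a truck is ahead, so slow down.
readings = {"camera": ["truck"], "radar": ["truck"], "lidar": []}
model, action = drive_cycle(
    readings,
    world_model=set(),
    plan_action=lambda m: "slow_down" if "truck" in m else "cruise",
    issue_commands=lambda a: None,
)
print(model, action)
```

In a real system each of these stages is a large subsystem running many times per second; the sketch is only meant to show how data flows from sensors through the world model to the controls.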
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the topic of anomalies, the AI of a self-driving car has to be able to properly identify that an anomaly potentially exists, and so the first part of anomaly handling deals with detection.
The sensors of the self-driving car will likely already have various programs that examine the collected sensory data to try to find patterns. These include visual processing routines that handle the data collected via the cameras, encompassing both video and still images. There is software that does likewise for the radar, the ultrasonic sensors, the LIDAR (if so equipped), and so on.
Many of these pattern-matching algorithms for examining the sensory data were likely trained via Machine Learning (ML). This gets us to the first area of concern about anomaly detection by a self-driving car. If the Machine Learning training data was scrubbed and contained no anomalies, the appearance of an anomaly out of the blue during actual use of the system might go completely unnoticed. The sensory data interpretation programs might just shrug off the outlier data and consider it part of the noise and other transients that one is going to get when using sensors.
That’s a tough aspect to overcome, namely, trying to figure out what is the usual kind of noise and transient data versus something that is a genuine anomaly worth considering. Suppose the AI was trained on all sorts of traffic signs, and then in the real-world a traffic sign that was not used in training is detected. The AI might opt to conclude that the traffic sign is not a traffic sign since it is outside the pattern of what constitutes a traffic sign.
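One hedged way to cope with this, sketched below, is to refuse to force every detection into a known class: if even the best match is low-confidence, flag the input as a possible anomaly for further analysis. The sign labels, raw scores, and threshold here are all illustrative assumptions, not real model outputs:

```python
import math

# Hypothetical sketch: instead of concluding the unfamiliar sign "is not a
# traffic sign," flag low-confidence matches as possible anomalies.
KNOWN_SIGNS = ["stop", "yield", "speed_limit"]

def classify_or_flag(scores, threshold=0.6):
    """scores: raw model scores per known sign; returns a label or a flag."""
    exps = [math.exp(s) for s in scores]          # softmax over the raw scores
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "possible_anomaly"                 # unfamiliar sign: give it its due
    return KNOWN_SIGNS[best]

print(classify_or_flag([4.0, 0.1, 0.2]))   # confidently matches a known sign
print(classify_or_flag([0.5, 0.4, 0.6]))   # nothing matches well: flagged
```

A simple confidence threshold like this is crude compared with proper out-of-distribution detection methods, but it illustrates the principle: the system reserves an explicit "I don't recognize this" outcome rather than silently binning the anomaly into the nearest known pattern.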
I experienced this the other day when there was a hand-written sign that a roadway crew had put up to forewarn about a hole or divot in the street up ahead. They tried to make it look like a regular traffic sign, but it was obvious to the human eye that it was a quickly crafted ad hoc sign.
What would the AI do about it?
I would guess that the sensors would certainly have detected the presence of the sign. But, after trying to match it to the ones it had learned from before, the odds are that it would be classified as just any kind of sign and not given its due relative to the roadway and traffic situation (in contrast, during political elections there are tons of signs put up all around town, none of which have anything to do with traffic, and thus it makes sense that a self-driving car would opt to ignore those signs).
The sensor data interpretation needs to be robust enough to give anomalies some attention, but at the same time, if the anomaly is not relevant, there is the issue of consuming on-board processing cycles trying to ferret out the merits of the anomaly, which could perhaps starve some other crucial driving process. It is like a chess match in which you must decide how many levels deep, called ply, to carry your analysis. The deeper you consider the moves ahead in chess, the better the odds of making a good move now, but it also chews up time and attention, which might be needed for other purposes (not so in a chess match, I realize, but certainly so when driving a car).
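The ply-depth trade-off can be expressed as an "anytime" analysis loop that deepens only while a per-cycle time budget remains, always keeping the best assessment found so far. The budget, depth cap, and placeholder scoring below are all illustrative assumptions:

```python
import time

# Hypothetical sketch of budgeting anomaly analysis like chess ply: each
# pass deepens the analysis, but only while the time budget allows.
def analyze_anomaly(evidence, budget_seconds=0.01, max_depth=8):
    deadline = time.monotonic() + budget_seconds
    best_assessment, depth = None, 0
    while depth < max_depth and time.monotonic() < deadline:
        depth += 1
        # One "ply" of analysis; a real system would refine the evidence here.
        best_assessment = sum(evidence) / len(evidence) * depth  # placeholder work
    return best_assessment, depth

assessment, depth_reached = analyze_anomaly([0.2, 0.9, 0.4])
print(depth_reached)   # may stop short of max_depth if the budget runs out
```

The design point is that the loop can be interrupted at any deadline and still return a usable (if shallower) answer, so pondering an anomaly never starves a more urgent driving process.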
For my article about the cognitive timing aspects, see: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For more about Machine Learning and AI self-driving cars, see my article: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/
For making sense of road signs, see my article: https://aitrends.com/selfdrivingcars/making-ai-sense-of-road-signs/
For roadway debris analyses, see my article: https://aitrends.com/selfdrivingcars/roadway-debris-cognition-self-driving-cars/
Overall, there are some anomalies that do not genuinely exist even though the data or other indications suggest they do, and pursuing them is like going down a rabbit hole.
There are other anomalies that have a genuine origin and so warrant pursuit. One means of gaining an indication of whether the anomaly has legs, so to speak, involves doing a kind of cross-triangulation on the anomaly.
In the case of sensor fusion, when the various sensory devices have provided their interpretations, it is up to the sensor fusion portion of the AI to aid in figuring out what might be a bona fide anomaly versus what might not be. By comparing the results of interpretations from each of the different sensors, the sensor fusion has the unenviable task of trying to figure out the real truth of what is surrounding the AI self-driving car.
Suppose the cameras have detected a shadowy image of something at the side of the road. The image is so hazy that it is not readily possible to classify it as a pedestrian versus, say, a fire hydrant or a street post (or maybe it is a false reading of some kind). Meanwhile, suppose the radar has picked up a somewhat stronger set of signals and can present a more shaped outline of the object. And let’s suppose the LIDAR has done the same in terms of providing a clearer shape. By triangulating across the multiple sensors, the sensor fusion might be able to discern that the object does exist and is not just noise, and furthermore that it is a pedestrian and not just an inanimate object.
The sensor fusion then passes this along to the virtual world model portion of the AI system. Within the virtual world model, there is now a numeric marker placed at the position of the suspected object in the overall model, and it is furthermore categorized with a probability that it is a pedestrian. The AI action planning program now examines the virtual world model to figure out what action, if any, the driving of the car should undertake given this news that there might be a pedestrian at the side of the road.
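A simplistic way to sketch that cross-triangulation is a noisy-OR combination: assuming (unrealistically) that the sensors err independently, the object fails to exist only if every sensor's detection is wrong. The probabilities below are illustrative, not from any actual sensor suite:

```python
# Hypothetical noisy-OR fusion of per-sensor detection confidences. Assuming
# (simplistically) independent sensors, the chance that the object does NOT
# exist is the product of each sensor's miss probability.
def fuse_detections(sensor_probs):
    """sensor_probs: each sensor's probability that the object is a pedestrian."""
    p_all_wrong = 1.0
    for p in sensor_probs:
        p_all_wrong *= (1.0 - p)
    return 1.0 - p_all_wrong

# Hazy camera, somewhat clearer radar, clearer LIDAR return:
p_pedestrian = fuse_detections([0.3, 0.6, 0.7])
print(round(p_pedestrian, 3))
```

The fused probability is what would become the numeric marker placed into the virtual world model: even though no single sensor is sure, the agreement among them pushes the combined confidence well above any individual reading. Real sensor fusion uses far more careful models (sensors are not independent, and their error modes differ), but the triangulation intuition is the same.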
For the use of probabilistic reasoning for AI self-driving cars, see my article: https://aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/
For the need for resiliency in self-driving car AI, see my article: https://aitrends.com/ai-insider/self-adapting-resiliency-for-ai-self-driving-cars/
For the use of Occam’s razor, see my article: https://aitrends.com/ai-insider/occams-razor-ai-machine-learning-self-driving-cars-zebra/
Trickiness Of Coping With Anomalies
Here’s the really tricky part that many AI systems are not yet considering.
It is somewhat easy to consider the role of anomalies at the sensor data analysis aspects. The same can be said about detecting anomalies at the sensor fusion portion. It gets more complex once you are considering the virtual world model and the AI action planning portions.
Let’s use my example about the walking man on the freeway.
I’m relatively confident that the AI self-driving car would be able to detect the walking man and determine that the object is a pedestrian. Sure, there could be issues trying to make this determination and it would depend on factors such as whether there is line-of-sight to the walking man for the sensors, and whether there is any weather that might be disrupting the sensor data such as rain or snow, etc.
Once the walking man gets placed into the virtual world model, would the AI realize that a walking pedestrian on the freeway is unusual? Would it be able to also extend that line of consideration and then look for other clues that might confirm the validity of the walking man being there, such as looking for a disabled car?
I’d dare say that most AI systems for self-driving cars are unlikely, at this time, to have that kind of anomaly-seeking mindset.
In case you want to argue that the walking man was another example of no-harm no-foul, in the sense that an AI system would have suffered no consequence for failing to become concerned about him (similar to my story about the trucks that got into the fast lane), I saved the end of the story about the walking man.
After I passed him, doing my 100 feet per second speed, and at a distance of about two lanes (let’s say about 15-20 feet from him), he subsequently ran out into traffic. Many of the cars coming up were moving so fast that one of them ended up striking him (I heard about it on the news, I didn’t see it happen directly). This happened in the slow lane.
I know that there are some AI pundits who will claim that had the AI self-driving car been in the slow lane it would not have hit the walking man because it would have miraculously made an evasive maneuver. I don’t think it makes any sense to say that in this circumstance. Physics prevents avoiding someone who suddenly darts in front of a car that is going 100 feet per second. You are just not going to be able to brake fast enough to avoid hitting that person.
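A back-of-the-envelope calculation bears this out. The reaction-time and deceleration figures below are rough assumptions (roughly one second of reaction and hard braking at about 25 ft/s², a bit under 0.8g on dry pavement), not measured values for any particular vehicle.

```python
def stopping_distance_ft(speed_fps, reaction_s=1.0, decel_fps2=25.0):
    """Estimate total stopping distance in feet.

    Distance covered during the reaction time (v * t) plus the
    braking distance from kinematics (v^2 / (2 * a)).
    """
    return speed_fps * reaction_s + speed_fps ** 2 / (2.0 * decel_fps2)

# At 100 ft/s (about 68 mph): 100 ft reacting + 200 ft braking.
d = stopping_distance_ft(100.0)  # 300.0 feet
```

Even granting an AI a near-zero reaction time only removes the first term; the braking distance alone is still around 200 feet, far more than the 15-20 feet of separation in the story.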
Where would you swerve to? Into other lanes of traffic? Or, maybe into the ditch next to the freeway, but perhaps kill the occupants of the car?
There are even some AI developers and AI pundits who would say that if a human was stupid enough to run into traffic, the person gets what they deserve. That is an even dumber thing to say. Suppose the car driver had swerved into the ditch and died, thus keeping the walking man alive. Is that a “deserved” death in the estimation of this notion that you get your just deserts? I think not.
For the ethics dilemmas facing AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For the potential need for ethics review boards, see my article: https://aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/
For idealism by some AI developers, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/
For egocentric AI developer mindsets, see my article: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/
Those same pundits might also argue that the walking man should not have gotten onto the freeway to begin with.
As mentioned earlier, there are fences and brick walls that border the freeway. Yes, it is possible to climb over those walls. Should we put up barbed wire and maybe gun posts, making it seemingly impossible to get onto the freeway (a kind of modern-day Berlin Wall), doing so because the AI is insufficient to figure out when a pedestrian is present and should be avoided?
I think not.
Those of us developing AI self-driving cars should be aiming to have the AI take the right kinds of actions, such as the action that I took, which I believe was a sound course of action. I had moved over into lanes away from the walking man and kept alert as to what the walking man was doing. There were other actions that could possibly have been taken, such as trying to block and slow down traffic, or calling 911, but in any case, all of those actions rely on the realization that there was an anomaly afoot.
Robust AI for self-driving cars needs to give credence to anomalies. The AI needs to be overtly seeking out anomalies and giving them their due. This cannot, though, be done in a wanton fashion.
There is only so much processing and bandwidth that the AI on-board system can undertake and do so on a timely basis. The AI needs to be watching out for false positives and not take action that is otherwise unwarranted and might carry its own risks. Nor should the AI be taken in by false negatives. Anomalies, love them or hate them, but either way you need to deal with them.
That’s the rub.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]