Boeing 737 MAX 8 and Lessons for AI: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

The Boeing 737 MAX 8 aircraft has been in the news recently, sadly as a result of a fatal crash that occurred on March 10, 2019 involving Ethiopian Airlines flight #302. News reports suggest that another fatal crash of the Boeing 737 MAX 8, Lion Air flight #610 on October 29, 2018, might be similar to the March 10, 2019 crash in how it took place. It is noteworthy to point out that the Lion Air crash is still under investigation, possibly with a final report being released later this year, and the Ethiopian Airlines crash investigation is just now starting (at the time of this writing).

At this stage of understanding about the crashes, I’d like to consider whether we can tentatively identify aspects of the matter that could be instructive toward the design, development, testing, and fielding of Artificial Intelligence (AI) systems.

Though the Boeing 737 MAX 8 does not include elements that might be considered in the AI bailiwick per se, it seems relatively apparent that systems underlying the aircraft could be likened to how advanced automation is utilized. Perhaps the Boeing 737 MAX 8 incidents can reveal vital and relevant characteristics that can be valuable insights for AI systems, especially AI systems of a real-time nature.

A modern-day aircraft is outfitted with a variety of complex automated systems that need to operate on a real-time basis. During the course of a flight, starting even when the aircraft is on the ground and getting ready for flight, there are a myriad of systems that must each play a part in the motion and safety of the plane. Furthermore, these systems are at times either under the control of the human pilots or are in a sense co-sharing the flying operations with the human pilots. The Human Machine Interface (HMI) is a key matter to the co-sharing arrangement.

I’m going to concentrate my relevancy depiction on a particular type of real-time AI system, namely AI self-driving cars.

Please though do not assume that the insights or lessons mentioned herein are only applicable to AI self-driving cars. I would assert that the points made are equally important for other real-time AI systems, such as robots that are working in a factory or warehouse, and of course other AI autonomous vehicles such as drones and submersibles. You can even take out of the equation the real-time aspects and consider that these points still would readily apply to AI systems that are considered less-than real-time in their activities.

One overarching aspect that I’d like to put clearly onto the table is that this discussion is not about the actual legal underpinnings of the Boeing 737 MAX 8 aircraft and the crashes. I am not trying to solve the question of what happened in those crashes. I am not trying to analyze the details of the Boeing 737 MAX 8. Those kinds of analyses are still underway, conducted by experts who are versed in the particulars of airplanes and who are closely examining the incidents. That’s not what this is about herein.

I am going to instead try to surface out of the various media reporting the semblance of what some seem to believe might have taken place. Those media guesses might be right, they might be wrong. Time will tell. What I want to do is see whether we can turn the murkiness into something that might provide helpful tips and suggestions of what can or might someday or already is happening in AI systems.

I realize that some of you might argue that it is premature to be “unpacking” the incidents. Shouldn’t we wait until the final reports are released? Again, I am not wanting to make assertions about what did or did not actually happen. Among the many and varied theories and postulations, I believe there is a richness of insights that can be right now applied to how we are approaching the design, development, testing, and fielding of AI systems. I’d also claim that time is of the essence, meaning that it would behoove those AI efforts already underway to be thinking about the points I’ll be bringing up.

Allow me to fervently clarify that the points I’ll raise are not dependent on how the investigations bear out about the Boeing 737 MAX 8 incidents. Instead, my points are at a level of abstraction that they are useful for AI systems efforts, regardless of what the final reporting says about the flight crashes. That being said, it could very well be that the flight crash investigations uncover other and additional useful points, all of which could further be applied to how we think about and approach AI systems.

As you read herein the brief recap about the flight crashes and the aircraft, allow yourself the latitude that we don’t yet know what really happened. Therefore, the discussion is by-and-large of a tentative nature.

New facts are likely to emerge. Viewpoints might change over time. In any case, I’ll try to repeatedly state that the aspects being described are tentative and you should refrain from judging those aspects, allowing your mind to focus on how the points can be used for enhancing AI systems. Even something that turns out to not have been true in the flight crashes can nonetheless still present a possibility of something that could have happened, and for which we can leverage that understanding to the advantage of AI systems adoption.

So, do not trample on this discussion because you find something amiss about a characterization of the aircraft and/or the incident. Look past any such transgression. Consider whether the points surfaced can be helpful to AI developers and to those organizations embarking upon crafting AI systems. That’s what this is about.

For those of you that are particularly interested in the Boeing 737 MAX 8 coverage in the media, here are a few handy examples:

Bloomberg news: https://www.bloomberg.com/news/articles/2019-03-17/black-box-shows-similarities-between-lion-and-ethiopian-crashes

Seattle Times news: https://www.seattletimes.com/business/boeing-aerospace/failed-certification-faa-missed-safety-issues-in-the-737-max-system-implicated-in-the-lion-air-crash/

LA Times news: https://www.latimes.com/business/la-fi-boeing-faa-warnings-20190317-story.html

Wall Street Journal news: https://www.wsj.com/articles/faas-737-max-approval-is-probed-11552868400

Background About the Boeing 737 MAX 8

The Boeing 737 was first flown in the late 1960s and spawned a multitude of variants over the years, including in the 1990s the Boeing 737 NG (Next Generation) series. Considered the best-selling aircraft for commercial flight, the Boeing 737 surpassed 10,000 units sold last year. It is a twin-jet, relatively narrow-body aircraft intended for a flight range of short to medium distances. The successor to the NG series is the Boeing 737 MAX series.

As part of the family of Boeing 737s, the MAX series is based on the prior 737 designs and was purposely re-engined by Boeing, along with changes made to the aerodynamics and the airframe, in order to make key improvements including a lower fuel burn rate and other aspects that would make the plane more efficient and give it a longer range than its prior versions. The Boeing board of directors gave the initial approval to proceed with the Boeing 737 MAX series in August 2011.

Per many news reports, there were discussions within Boeing about whether to start anew and craft a brand-new design for the Boeing 737 MAX series or whether to continue and retrofit the prior design. The decision was made to retrofit. Of the changes made, perhaps the most notable consisted of mounting the engines further forward and higher than had been done for prior models. This design change tended to give the plane an upward pitching tendency. The MAX was more prone to this than prior versions, as a result of the more powerful engines being used (having greater thrust capacity) and their higher, more forward mounting position on the aircraft.

As to a possibility of the Boeing 737 MAX entering into a potential stall during flight due to this retrofitted approach, particularly doing so in a situation where the flaps are retracted and at low-speed and with a nose-up condition, the retrofit design added a new system called the MCAS (Maneuvering Characteristics Augmentation System).

The MCAS is essentially software that receives sensor data and then based on the readings will attempt to trim down the nose in an effort to avoid having the plane get into a dangerous nose-up stall during flight. This is considered a stall prevention system.

The primary sensor used by the MCAS is an AOA (Angle of Attack) sensor, a hardware device mounted on the plane that transmits data within the plane, including feeding the data to the MCAS system. In many respects, the AOA is a relatively simple kind of sensor, and variants of AOAs in terms of brands, models, and designs exist on most modern-day airplanes. This is to point out that there is nothing unusual per se about the use of AOA sensors; it is a common practice to use them.

Algorithms used in the MCAS were intended to try and ascertain whether the plane might be in a dangerous condition as based on the AOA data being reported and in conjunction with the airspeed and altitude. If the MCAS software calculated what was considered a dangerous condition, the MCAS would then activate to fly the plane so that the nose would be brought downward to try and obviate the dangerous upward-nose potential-stall condition.

The MCAS was devised such that it would automatically activate to fly the plane based on the AOA readings and based on its own calculations about a potentially dangerous condition. This activation occurs without notifying the human pilot and is considered an automatic engagement.

Note that the human pilot does not overtly act to engage the MCAS per se, instead the MCAS is essentially always on and detecting whether it should engage or not (unless the human pilot opts to entirely turn it off).

During a MCAS engagement, if a human pilot tries to trim the plane and uses a switch on the yoke to do so, the MCAS becomes temporarily disengaged. In a sense, the human pilot and the MCAS automated system are co-sharing the flight controls. This is an important point since the MCAS is still considered active and ready to re-engage on its own.

A human pilot can entirely disengage the MCAS and turn it off, if the human pilot believes that turning off the MCAS activation is warranted. It is not difficult to turn off the MCAS, though it presumably would rarely if ever be turned off and might be considered an extraordinary and seldom action that would be undertaken by a pilot. Since the MCAS is considered an essential element of the plane, turning off the MCAS would be a serious act and not be done without presumably the human pilot considering the tradeoffs in doing so.

In the case of the Lion Air crash, one theory is that shortly after taking off the MCAS might have attempted to push down the nose and that the human pilots were simultaneously trying to pull-up the nose, perhaps being unaware that the MCAS was trying to push down the nose. This appears to account for a roller coaster up-and-down effort that the plane seemed to experience. Some have pointed out that a human pilot might believe they have a stabilizer trim issue, referred to as a runaway stabilizer or runaway trim, and misconstrue a situation in which the MCAS is engaged and acting on the stabilizer trim.

Speculation based on that theory is that the human pilot did not realize they were in a sense fighting with the MCAS to control the plane, and had the human pilot realized what was actually happening, it would have been relatively easy to have turned off the MCAS and taken over control of the plane, no longer being in a co-sharing mode. There have been documented cases of other pilots turning off the MCAS when they believed that it was fighting against their efforts to control the Boeing 737 MAX 8.

One aspect that according to news reports is somewhat murky involves the AOA sensors in the case of the Lion Air incident. Some suggest that there was only one AOA sensor on the airplane and that it fed faulty data to the MCAS, leading the MCAS to push the nose down, even though apparently or presumably a nose down effort was not actually warranted. Other reports say that there were two AOA sensors, one on the Captain’s side of the plane and one on the other side, and that the AOA on the Captain’s side generated faulty readings while the one on the other side was generating proper readings, and that the MCAS apparently ignored the properly functioning AOA and instead accepted the faulty readings coming from the Captain’s side.

There are documented cases of AOA sensors at times becoming faulty. One aspect too is that environmental conditions can impact the AOA sensor. If there is build-up of water or ice on the AOA sensor, it can impact the sensor. Keep in mind that there are a variety of AOA sensors in terms of brands and models, thus, not all AOA sensors are necessarily going to have the same capabilities and limitations.

The first commercial flights of the Boeing 737 MAX 8 took place in May 2017. There are other models of the Boeing 737 MAX series, both existing and envisioned, including the MAX 7, the MAX 8, the MAX 9, etc. The Lion Air incident, which occurred in October 2018, was the first fatal incident involving the Boeing 737 MAX series.

There are a slew of other aspects about the Boeing 737 MAX 8 and the incidents, and if interested you can readily find such information online. The recap that I’ve provided does not cover all facets — I have focused on key elements that I’d like to next discuss with regard to AI systems.

Shifting Hats to AI Self-Driving Cars Topic

Let’s shift hats for a moment and discuss some background about AI self-driving cars. Once I’ve done so, I’ll then dovetail together the insights that might be gleaned about the Boeing 737 MAX 8 aspects and how this can potentially be useful when designing, building, testing, and fielding AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As such, we are quite interested in whatever lessons can be learned from other advanced automation development efforts and seek to apply those lessons to our efforts, and I’m sure that the auto makers and tech firms also developing AI self-driving car systems are keenly interested too.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal pipeline sketch follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
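
To make those steps a bit more concrete, here is a minimal, purely illustrative sketch of how such a per-cycle processing loop might be organized. The class and component names are hypothetical placeholders of my own, not the architecture of any particular auto maker or tech firm.

```python
# Illustrative skeleton of the per-cycle AI driving loop described above.
# All names (sensors, world_model, planner, controls) are hypothetical placeholders.

class SelfDrivingPipeline:
    def __init__(self, sensors, world_model, planner, controls):
        self.sensors = sensors          # cameras, radar, LIDAR, ultrasonic, etc.
        self.world_model = world_model  # virtual model of the surroundings
        self.planner = planner          # AI action planning component
        self.controls = controls        # interface to steering/brakes/throttle

    def run_cycle(self):
        # 1. Sensor data collection and interpretation
        raw_readings = self.sensors.collect()
        interpretations = self.sensors.interpret(raw_readings)

        # 2. Sensor fusion: reconcile the per-sensor interpretations
        fused_view = self.sensors.fuse(interpretations)

        # 3. Virtual world model updating
        self.world_model.update(fused_view)

        # 4. AI action planning
        plan = self.planner.plan(self.world_model)

        # 5. Car controls command issuance
        self.controls.issue(plan.commands())
```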

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the matter of the Boeing 737 MAX 8, let’s consider some potential insights that can be gleaned from what the news has been reporting.

Here’s a list of the points I’m going to cover:

  • Retrofit versus start anew
  • Single sensor versus multiple sensors reliance
  • Sensor fusion calculations
  • Human Machine Interface (HMI) designs
  • Education/training of human operators
  • Cognitive dissonance and Theory of Mind
  • Testing of complex systems
  • Firms and their development teams
  • Safety considerations for advanced systems

I’ll cover each of the points, doing so by first reminding you of my recap about the Boeing 737 MAX 8 as it relates to the point being made, and then shifting into a focus on AI systems and especially AI self-driving cars for that point. I’ve opted to number the points to make them easier to refer to as a sequence of points, but the sequence number does not denote any kind of priority of one point being more or less important than another. They are all worthy points.

Take a look at Figure 1.

Key Point #1: Retrofit versus start anew

Recall that the Boeing 737 MAX 8 is a retrofit of prior designs of the Boeing 737. Some have suggested that the “problem” being solved by the MCAS is a problem that should never have existed at all, namely that rather than creating an issue by adding the more powerful engines and putting them further forward and higher, perhaps the plane ought to have been redesigned entirely anew. Those that make this suggestion are then assuming that the stall prevention capability of the MCAS would not have been needed, which then would have not been built into the planes, which then would never have led to a human pilot essentially co-sharing and battling with it to fly the plane.

Don’t know. Might there have been a need for an MCAS anyway? In any case, let’s not get mired in that aspect about the Boeing 737 MAX 8 herein.

Instead, think about AI systems and the question of whether to retrofit an existing AI system or start anew.

You might be tempted to believe that AI self-driving cars are so new that they are entirely a new design anyway. This is not quite correct. There are some AI self-driving car efforts that have built upon prior designs and are continually “retrofitting” a prior design, doing so by extending, enhancing, and otherwise leveraging the prior foundation.

This makes sense in that starting from scratch is going to be quite an endeavor. If you have something that already seems to work, and if you can adjust it to make it better, you would likely be able to do so at a lower cost and at a faster pace of development.

One consideration is whether the prior design might have issues that you are not aware of and are perhaps carrying those into the retrofitted version. That’s not good.

Another consideration is whether the effort to retrofit requires changes that introduce new problems that were not previously in the prior design. This emphasizes that the retrofit changes are not necessarily always of an upbeat nature. You can make alterations that lead to new issues, which then require you to presumably craft new solutions, and those new solutions are “new” and therefore not already well-tested via prior designs.

I routinely forewarn AI self-driving car auto makers and tech firms to be cautious as they continue to build upon prior designs. It is not necessarily pain free.

For my article about the reverse engineering of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/reverse-engineering-and-ai-self-driving-cars/

For why groupthink among AI developers can be bad, see my article: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For how egocentric AI developers can make untoward decisions, see: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For the unlikely advent of kits for AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/kits-and-ai-self-driving-cars/

Key Point #2: Single sensor versus multiple sensors reliance

For the Boeing 737 MAX 8, I’ve mentioned that the AOA (Angle of Attack) sensors play a crucial role in the MCAS system. It’s not entirely clear whether just one AOA sensor or two were involved in the matter, but in any case, it seems that the AOA is the only type of sensor involved for that particular purpose, though presumably other sensors, such as those registering the altitude and airspeed of the plane, also feed data into the MCAS.

Let’s though assume for the moment that the AOA is the only sensor for what it does on the plane, namely ascertaining the angle of attack of the plane. Go with me on this assumption, though I don’t know for sure if it is true.

The reason I bring up this aspect is that if you have an advanced system that is dependent upon only one kind of sensor to provide a crucial indication of the physical aspects of the system, you might be painting yourself into an uncomfortable corner. In the case of AI self-driving cars, suppose that we used only cameras for detecting the surroundings of the self-driving car. It means that the rest of the AI self-driving car system is solely dependent upon whether the cameras are working properly and whether the vision processing system is working correctly.

If we add to the AI self-driving car another capability, such as radar sensors, we now have a means to double-check the cameras. We could add another capability such as LIDAR, and we’d have a triple check involved. We could add ultrasonic sensors too. And so on.

Now, we must realize that the more sensors you add, the more the cost goes up, along with the complexity of the system rising too.

For each added sensor type, you need to craft an entire capability around it, including where to position the sensors, how to connect them into the rest of the system, and having the software that can collect the sensor data and interpret it. There is added weight to the self-driving car, there is added power consumption, there is more heat generated by the sensors, etc. Also, the amount of computer processing required goes up, including the number of processors, the memory needed, and the like.

You cannot just start including more sensors because you think it will be handy to have them on the self-driving car. Each added sensor involves a lot of added effort and costs. There is an ROI (Return on Investment) involved in making such decisions. I’ve questioned many times in my writings and presentations whether Elon Musk and Tesla’s decision to not use LIDAR is going to ultimately backfire on them, and even Elon Musk himself has said it might.

I’d like to then use the AOA matter as a wake-up call about the kinds of sensors that the auto makers and tech firms are putting onto their AI self-driving cars. Do you have a type of sensor for which no other sensor can obtain something similar? If so, are you ready to handle the possibility that if the sensor goes bad, your AI system is going to be blind about what is happening, or perhaps worse still will get faulty readings?

This does bring up another handy point, specifically how to cope with a sensor that is being faulty.

The AI system cannot assume that a sensor is always going to be working properly. The “easiest” kind of problem is when the sensor fails entirely, and the AI system gets no readings from it at all. I say this is easiest in that the AI can then pretty much make a reasonable assumption that the sensor is dead and no longer to be relied upon. This doesn’t mean that handling the self-driving car is “easy”; it only means that at least the AI knows the sensor is not working.

The tricky part is when a sensor becomes faulty but has not entirely failed. This is a scary gray area. The AI might not realize that the sensor is faulty and therefore assume that everything the sensor is reporting must be correct and accurate.

Suppose a camera is having problems and it is occasionally ghosting images, meaning that an image sent to the AI system has shown perhaps cars that aren’t really there or pedestrians that aren’t really there. This could be disastrous. The rest of the AI might suddenly jam on the brakes to avoid a pedestrian, someone that’s not actually there in front of the self-driving car. Or, maybe the self-driving car is unable to detect a pedestrian in the street because the camera is faulting and sending images that have omissions.

The sensor and the AI system must have a means to try and ascertain whether the sensor is faulting or not. It could be that the sensor itself is having a physical issue, maybe by wear-and-tear or maybe it was hit or bumped by some other matter such as the self-driving car nudging another car. Another strong possibility for most sensors is the chance of it getting covered up by dirt, mud, snow, and other environmental aspects. The sensor itself is still functioning but it cannot get solid readings due to the obstruction.

AI self-driving car makers need to be thoughtfully and carefully considering how their sensors operate and what they can do to detect faulty conditions, along with either trying to correct for the faulty readings or at least inform and alert the rest of the AI system that faultiness is happening. This is serious stuff. Unfortunately, sometimes it is given short shrift.
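
As a purely illustrative sketch of the kind of fault detection being urged here, consider a simple health check that distinguishes a dead sensor from an intermittently faulting one. The states, thresholds, and data layout are assumptions for the example, not values from any production self-driving car system.

```python
import time

# Hypothetical health states for a single sensor feed.
HEALTHY, DEGRADED, DEAD = "healthy", "degraded", "dead"

def assess_sensor_health(readings, now, stale_after_s=0.5, max_dropout_ratio=0.2):
    """Classify a sensor from its recent (timestamp, value, valid_flag) readings.

    A totally silent or stale sensor is the "easy" case (DEAD); intermittent
    invalid frames are the trickier gray area (DEGRADED). Thresholds here are
    illustrative only.
    """
    if not readings or now - readings[-1][0] > stale_after_s:
        return DEAD  # no data at all, or the data has gone stale

    invalid = sum(1 for _, _, ok in readings if not ok)
    if invalid / len(readings) > max_dropout_ratio:
        return DEGRADED  # still reporting, but too many suspect frames

    return HEALTHY

# Example usage with fabricated readings (newest reading last):
now = time.time()
recent = [(now - 0.1 * (9 - i), 42.0, i % 4 != 0) for i in range(10)]
print(assess_sensor_health(recent, now))  # DEGRADED: 3 of 10 frames are invalid
```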

For the dangers of myopic use of sensors on AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/cyclops-approach-ai-self-driving-cars-myopic/

For the use of LIDAR, see my article: https://www.aitrends.com/selfdrivingcars/lidar-secret-sauce-self-driving-cars/

For my article about the crossing of the Rubicon and sensors issues, see: https://www.aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For what happens when sensors go bad, see my article: https://www.aitrends.com/selfdrivingcars/going-blind-sensors-fail-self-driving-cars/

Key Point #3: Sensor fusion calculations

As mentioned earlier, one theory was that the Boeing 737 MAX 8 in the Lion Air incident had two AOA sensors and one of the sensors was faulting, while the other sensor was still good, and yet the MCAS supposedly opted to ignore the good sensor and instead rely upon the faulty one.

In the case of AI self-driving cars, an important aspect involves undertaking a kind of sensor fusion to figure out a larger overall notion of what is happening with the self-driving car. The sensor fusion subsystem needs to collect together the sensory data or perhaps the sensory interpretations from the myriad of sensors and try to reconcile them. Doing so is handy because each type of sensor might be seeing the world from a particular viewpoint, and by “triangulating” the various sensors, the AI system can derive a more holistic understanding of the traffic around the self-driving car.

Would it be possible for an AI self-driving car to opt to rely upon a faulting sensor and simultaneously ignore or downplay a fully functioning sensor? Yes, absolutely, it could happen.

It all depends upon how the sensor fusion was designed and developed to work. If the AI developers thought that the forward camera is more reliable overall than the forward radar, they might have developed the software such that it tends to weight the camera more so than the radar. This can mean that when the sensor fusion is trying to decide which sensor to choose as providing the right indication at the time, it might default to the camera, rather than the radar, even if the camera is in a faulting mode.

Perhaps the sensor fusion is unaware that the camera is faulting, and so it gives the benefit of the doubt to the camera. Or, maybe the sensor fusion realizes the camera is faulting, but it has been setup to nonetheless choose the camera over the radar, rightfully or wrongly. The decisions made by the AI developers are going to pretty much determine what happens during the sensor fusion. If the design is not fully baked, or if the design was not implemented as intended, you can definitely end-up with situations that seem oddball from a logical perspective.

This point highlights the importance of designing the sensor fusion in a manner that best leverages the myriad of sensors, along with having extensive error checking and correcting, along with being able to deal with good and bad sensors. This includes the troublesome and at times hard to figure out intermittent faulting of a sensor.
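
To illustrate the point, here is a hedged sketch of a fault-aware weighted fusion of two sensors estimating the same quantity. The weights, health labels, and numbers are invented for the example; the takeaway is that if the health information is ignored, a hard-coded preference (say, for the camera) can let a faulting sensor dominate.

```python
def fuse_estimates(estimates, health, base_weights):
    """Combine per-sensor estimates of the same quantity (e.g., distance to
    the lead vehicle, in meters) into one value.

    estimates:    {"camera": 24.0, "radar": 31.5, ...}
    health:       {"camera": "degraded", "radar": "healthy", ...}
    base_weights: designer-chosen trust per sensor, e.g. {"camera": 0.6, "radar": 0.4}

    A fault-aware fusion scales down (or zeroes out) the weight of a suspect
    sensor. If the health information were ignored, the camera's higher base
    weight would let its faulty reading dominate. All numbers are illustrative.
    """
    health_factor = {"healthy": 1.0, "degraded": 0.2, "dead": 0.0}
    weighted, total_weight = 0.0, 0.0
    for sensor, value in estimates.items():
        w = base_weights.get(sensor, 0.0) * health_factor.get(health.get(sensor, "dead"), 0.0)
        weighted += w * value
        total_weight += w
    if total_weight == 0.0:
        raise RuntimeError("No trustworthy sensor available for this quantity")
    return weighted / total_weight

# Example: the camera is faulting low while the radar is correct.
print(fuse_estimates(
    {"camera": 24.0, "radar": 31.5},
    {"camera": "degraded", "radar": "healthy"},
    {"camera": 0.6, "radar": 0.4},
))  # ~29.8 m, pulled toward the healthy radar reading
```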

For my article about sensor fusion, see: https://www.aitrends.com/selfdrivingcars/sensor-fusion-self-driving-cars/

For the IMU and other sensors, see my article: https://www.aitrends.com/selfdrivingcars/proprioceptive-inertial-measurement-units-imu-self-driving-cars/

For newer kinds of sensors, see my article: https://www.aitrends.com/ai-insider/olfactory-e-nose-sensors-and-ai-self-driving-cars/

For my article about how Deep Learning can be used, see: https://www.aitrends.com/ai-insider/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

Key Point #4: Human Machine Interface (HMI) designs

According to the news reports, the MCAS is always automatically activated and trying to figure out whether it should engage in co-sharing the flight controls. It seems that some pilots of the aircraft might not realize this is the case. Perhaps some are unaware of the MCAS, or maybe some are aware of the MCAS but believe that it will only engage at their piloting directive to do so.

Besides this always-on aspect, perhaps there are some human pilots that don’t know how to turn off the feature, or they might have once known and have forgotten how to do so. Or, maybe while in the midst of a crisis, they aren’t considering whether the MCAS could be erroneously fighting them and therefore it doesn’t occur to them to disengage it entirely.

They might also during a crisis be trying to consider a wide variety of possibilities of what is happening to the plane. From a hindsight viewpoint, maybe it is easy to isolate the MCAS and for someone to say that it was the culprit, but in the midst of a moment when the plane is fighting against you, your mental effort is devoted to trying to right the plane, along with seeking reasons for why the plane is having troubles. There is a potential large mental search space that the human pilot has to analyze, and yet this is happening in real-time with obvious serious and life-or-death consequences involved.

What makes this seemingly even more subtle in the case of the MCAS is that it apparently will temporarily disengage when the pilot uses the yoke switch, but the MCAS will then re-engage when it calculates that there is need to do so. A human pilot might at first believe that they’ve disengaged entirely the MCAS, when all that’s happened is that it has temporarily disengaged. When the MCAS re-engages, the human pilot could be baffled as to why the control is once again having troubles.

Combine this on-and-off kind of automatic action with the throes of dealing with the plane in a crisis mode. You’ve got a confluence of factors that can begin to overwhelm the human pilot. It can be difficult for them to sort out what is actually taking place. They meanwhile will continue to do what seems the proper course of action, bringing up the nose. Ironically, this is seemingly likely to get the MCAS to once again step into the co-sharing and try to push down the nose.
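
To show the general trap in miniature, here is an illustrative state machine for a co-sharing automation. To be clear, this is not the actual MCAS design; it simply captures the distinction, described above, between a manual action that only pauses the automation and a separate deliberate action that turns it off entirely.

```python
# A generic, illustrative automation state machine -- NOT the actual MCAS logic.
# It captures the trap described above: a manual correction only pauses the
# automation, which can later re-engage on its own, while a separate,
# deliberate cutoff action is required to switch it off entirely.

class CoSharingAutomation:
    def __init__(self):
        self.enabled = True    # "armed": allowed to engage on its own
        self.engaged = False   # currently commanding the vehicle/aircraft

    def sensor_update(self, condition_looks_dangerous):
        # Automatic engagement: no operator action or notification required.
        if self.enabled and condition_looks_dangerous:
            self.engaged = True

    def operator_manual_correction(self):
        # A manual correction only temporarily disengages the automation...
        self.engaged = False
        # ...it remains enabled and may re-engage on the next sensor_update().

    def operator_full_cutoff(self):
        # Only a deliberate, separate action truly turns it off.
        self.enabled = False
        self.engaged = False
```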

I’d like to do a quick thought experiment on this.

Imagine a car with two sets of steering wheels and pedals. We’ll put those driving controls in the front seats of the car. Let’s also place a barrier between the driver’s seat and the second driver that we’ll say is just to the right of the normal position for a driver. The barrier is sizable and masks the actions of the other driver.

The driver in the normal driving position is asked to drive the car. They do so. Suppose they drive it a lot, so much that after a while they kind of forget that a second driver is sitting next to them (hidden from view by the barrier).

At one point, the car starts to get into trouble and appears to be sliding out of the lane. The second driver, the one that has been silent and not doing anything so far, other than watching the road, decides they need to step into the driving effort and correct the sliding aspects. The first driver, having gotten used to driving the car themselves, and having no overt awareness that the second driver is now going to operate the controls, believes they are the only driver of the car.

The two drivers begin fighting with each other in terms of working the driving controls, yet neither of them seems to realize that the other driver is doing so. They are seemingly working in isolation of each other, though they both have their “hands” on the controls.

You might exclaim that the second driver should be telling the first driver that they are now working the driving controls. Hey you, over there on the other side of the barrier, I’m trying to keep you from sliding out of the lane, might be a handy thing to say. If there is no particular communication taking place between the two, they might not realize how they are each countering the other, and possibly making the situation worse and worse in doing so.

I’ve many times exhorted that in the case of AI self-driving cars we are heading into untoward territory as the AI gets more advanced and yet does not entirely drive the car itself. In the case of Level 3 self-driving cars, there is going to be a struggle of the human driver and the AI system in terms of co-sharing the driving task. In some ways, my thought experiment highlights what can happen.

That’s why some AI self-driving car makers are trying to jump past Level 3 and go straight to Level 4 and Level 5. Others are determined to proceed with Level 3. It’s going to be a question of whether human drivers fully grasp what they are supposed to do versus what the AI system is supposed to do.

Will the human driver understand what the Level 3 capabilities are? Will the human driver know that the AI is trying to drive the car? Will the AI realize when the human opts to drive the car? Will the AI realize that a human driver is actually ready and able to drive the car? When a crisis moment arises, such as the AI driving the car at 60 miles per hour and suddenly determining that it has reached a point where the human driver ought to take over the controls, this is a dicey proposition. Is the human driver prepared to do so, and do they know why the AI has determined it is time to have the human drive the car?

Much of this centers on the Human Machine Interface (HMI) aspects. When you are co-sharing the driving, both parties have to be properly and timely informed about what the other party is doing, wants to do, or wants the other party to do. For a car, this might be done via indicators that light up on the dashboard, or maybe the AI system speaks to the driver.

This though is not a straightforward aspect to arrange for all circumstances. For example, if the AI speaks to the driver and explains that the driver needs to take over the wheel, imagine how long it takes for the speaking to occur, along with the driver having to make sure they are listening, and that they heard what the AI said, and that they comprehend what the AI said. This then also requires time for the human to consider what action they should take, and then take that action. This is precious time when there is a crisis moment and driving decisions need to be quickly made and enacted.
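
As a rough worked example of why that time matters, here is a back-of-the-envelope calculation of how far a car travels during a spoken handover. Every duration below is an assumption chosen for illustration, not a measured human-factors figure.

```python
# Back-of-the-envelope handover budget. Every duration is an assumed,
# illustrative value, not a measured human-factors number.

speed_mph = 60.0
speed_m_per_s = speed_mph * 1609.34 / 3600.0   # ~26.8 m/s

handover_steps_s = {
    "AI speaks the takeover request": 2.0,
    "driver notices and listens": 1.5,
    "driver comprehends the situation": 1.5,
    "driver decides what to do": 1.0,
    "driver acts on the controls": 1.0,
}

total_s = sum(handover_steps_s.values())
print(f"Total handover time: {total_s:.1f} s")
print(f"Distance covered at 60 mph: {total_s * speed_m_per_s:.0f} m")
# With these assumed numbers, roughly 7 seconds pass and the car travels on
# the order of 180+ meters before the human is actually driving.
```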

For my article about the dangers of Level 3, see: https://www.aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the bifurcation of autonomy, see my article: https://www.aitrends.com/selfdrivingcars/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For my article about the cognition timing elements, see: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

For the analysis of the Uber incident, see my article: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/

Key Point #5: Education/training of human operators

One question that is being asked about the Boeing 737 MAX 8 situation involves how much education or training should be provided to the human pilots, in particular related to the MCAS, and overall how the human pilots were or are to be made aware of the MCAS facets.

In the case of AI self-driving cars, one obvious difference between driving a car and flying a plane is that the airplane pilots are working in a professional capacity, while a human driving a car is generally doing so in a more informal manner (I’ll exclude for the moment professional drivers such as race car drivers, taxi drivers, shuttle drivers, etc.).

Commercial airline pilots are governed by all kinds of rules about education, training, number of hours flying, certification, re-certification, and the like. I’m not going to dig further into the MCAS education and training aspects, and so let’s just consider what kind of education or training you might have for dealing with an advanced automation that is co-sharing the driving task with you.

For today’s everyday licensed driver of a car, I think we can all agree that they get a somewhat minimal amount of education and training about driving a car. This though seems to have worked out relatively okay, since most drivers most of the time seem to be able to sufficiently operate a normal car.

Part of the reason that we have been able to keep the amount of education and training relatively low for driving a car is because of the amazing simplicity of driving a conventional car. You need to know how to operate the brakes, the accelerator, the steering wheel, and how to put the car into gear. The rest of the driving task is about ascertaining where you are driving and then performing the tactical aspects of driving, such as speeding up, slowing down, and steering in one direction or another.

When you get a car, there is usually an owner’s manual that indicates the specifics of that brand and model of car. Still, for a conventional car, there isn’t that much new to deal with. The pedals are still in the same places, the steering wheel is still the steering wheel. Switching from one gear to another often differs from one car brand to another, yet it doesn’t take much to figure this out.

I know many drivers that have no idea how to engage their cruise control. They’ve never used it on their car. They don’t care to use it. I know many drivers that aren’t exactly sure how their Anti-lock Braking System (ABS) works, but most of the time it won’t matter that they don’t know, since it usually automatically works for you.

As the Level 3 self-driving cars begin to appear in the marketplace, one rather looming question will be to what extent should human drivers be educated or trained about what the Level 3 does. In the case of the Tesla models, generally considered a Level 2, we’ve had drivers that seemed to think they can fall asleep at the wheel when the AutoPilot is engaged. That’s not the case. They are still considered the responsible driver of the car.

Things are going to get dicey with the Level 3 systems and the human drivers. They are co-sharing the driving task. Should the human driver of a Level 3 car be required to take a certain amount of education or training on how to operate that Level 3 car? If so, how will this education or training take place? Some pundits say that it can be just done by the salesperson that sells the car, but I think we’d all be a bit suspect about the thoroughness of that kind of training effort.

I’ve predicted that we will be soon seeing lawsuits against auto makers that might opt to either offer no training for their Level 3 cars, or scant training, or training that is construed as optional and so the human driver later on claims they did not realize the importance of it. Things are going to get messy.

For why an airplane autopilot system is unlike AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/airplane-autopilot-systems-self-driving-car-ai/

For my Top 10 predictions of what’s going to happen with AI self-driving cars in this year, see: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For the use of human aided training for AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/human-aided-training-deep-reinforcement-learning-ai-self-driving-cars/

For my article about the foibles of human drivers, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/

Key Point #6: Cognitive dissonance and Theory of Mind

A human operator of a device or system needs to have in their mind a mental model of what the device or system can and cannot do. If the human operator does not mentally know what the other party can or cannot do, it will make for a rather poor effort of collaboration.

You’ve likely seen this in human-to-human relationships, whereby you might not have a clear picture in your mind of the other person’s capabilities, and therefore it is hard for the two of you to work together in a properly functional manner. The other day I went bike riding with a colleague. I am used to vigorous bike rides, but I didn’t know if he was too. If I had suddenly started riding like the wind, it could have left him behind, along with his becoming confused about what we were doing.

Having a mental picture of the other person’s capabilities is often referred to as the Theory of Mind. What is your understanding of the other person’s way of thinking? In the case of flying a plane, the question is whether you comprehend what the automation of the plane can and cannot do, along with when it will do so. The same can be said about a car, namely that the human driver needs to understand what a car can and cannot do, and when it will do so.

If there is a mental gap between the understanding of the human operator and the device or system they are operating, it creates a situation of cognitive dissonance. The human operator is likely to fail to take the appropriate actions since they misunderstand what the automation is or has done.

For the MCAS, it would seem that perhaps some of the human pilots might have had an inadequate understanding of the Theory of Mind about what the MCAS was and does. This might have created situations of cognitive dissonance. As such, the human pilot would be unable to gauge what to do about the automation, and how to work with it.

Human drivers in even conventional cars can have the same lack of a Theory of Mind about the car and its operations. In the case of having ABS brakes, you are not supposed to pump those brakes when trying to come to a stop; doing so actually tends to work against your attempt to stop the car quickly. Some human drivers are used to cars that don’t have ABS, and in those cars you might indeed pump the brakes, but not with ABS. I dare say many human drivers are in a state of cognitive dissonance about the use of their ABS brakes.

The same kind of cognitive dissonance will be more pronounced with Level 3 cars. Human drivers have a greater hurdle and burden of learning what the Theory of Mind is of their Level 3 cars, and the odds are those human drivers will be unaware of or confused about those features. A potential recipe for disaster.

For my article about accident contagions, see: https://www.aitrends.com/selfdrivingcars/accidents-contagion-and-ai-self-driving-cars/

For rear-end accidents, see my article: https://www.aitrends.com/ai-insider/rear-end-collisions-and-ai-self-driving-cars-plus-apple-lexus-incident/

For the secrets of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

Key Point #7: Testing of complex systems

There is an ongoing discussion in the media about how the MCAS was tested. I’m not going to venture into the details about that aspect. In any case, it does spark the question of how to test advanced automation systems.

Let’s suppose an advanced automation system is tested to make sure that it seems to work as devised. Maybe you do simulations of it. Maybe you do tests in a wind tunnel in the case of avionics systems, or for an AI self-driving car you take it to a proving ground or closed track.

If the tests are solely about whether the system does what was expected, it might pass with flying colors. Did the tests though include what will happen when something goes awry?

Suppose a sensor becomes faulty, what happens then? I’ve actually had engineers tell me that there was nothing in the specification about a sensor becoming faulty, so they didn’t develop anything to handle that aspect, and therefore it made no sense to test for a faulty sensor, since they could already tell you the system was neither designed nor programmed to deal with it.
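
As a sketch of the kind of failure-injection test being argued for, here is a pytest-style example written against the hypothetical fuse_estimates function sketched under Key Point #3 (the module name is likewise made up). The point is that the test suite should contain “what if the sensor goes bad” cases, not only happy-path cases.

```python
import pytest

# Hypothetical import: assumes the fuse_estimates sketch from Key Point #3
# lives in a module named fusion (purely illustrative).
from fusion import fuse_estimates

def test_fusion_ignores_dead_sensor():
    # Inject an outright camera failure and check that only the healthy
    # radar contributes to the fused estimate.
    result = fuse_estimates(
        {"camera": 0.0, "radar": 31.5},
        {"camera": "dead", "radar": "healthy"},
        {"camera": 0.6, "radar": 0.4},
    )
    assert result == pytest.approx(31.5)

def test_fusion_refuses_to_guess_with_no_healthy_sensors():
    # With no trustworthy sensor at all, the fusion should fail loudly
    # rather than silently return a number.
    with pytest.raises(RuntimeError):
        fuse_estimates(
            {"camera": 0.0},
            {"camera": "dead"},
            {"camera": 0.6},
        )
```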

Another kind of test involves the HMI aspects and the human operator.

If the advanced automation is supposed to work hand-in-hand with a human operator, you ought to have tests to see if that really is working out as anticipated. One gaffe that I’ve often seen involves training the human operator and then immediately doing a test of the system with that human operator. That’s handy, but what about a week later when the human operator has forgotten some of the training? Also, what about a human operator that received little or no training? I’ve had engineers tell me that they don’t test for that condition since they are told beforehand that all of the human operators will always have the needed training.

For the brittleness of AI systems, see my article: https://www.aitrends.com/selfdrivingcars/goto-fail-and-ai-brittleness-the-case-of-ai-self-driving-cars/

For the Turing Test and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For my article about simulations and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/simulations-self-driving-cars-machine-learning-without-fear/

For the use of proving grounds, see: https://www.aitrends.com/selfdrivingcars/proving-grounds-ai-self-driving-cars/

Key Point #8: Firms and development teams

Usually, advanced automation systems are designed, developed, tested, and fielded as part of large teams and within overall organizations that shape how these work efforts will be undertaken.

Crucial decisions about the nature of the design are not usually made by one person alone. It is a group effort. There can be compromises along the way. There can be miscommunication about what the design is or will do. The same can happen during the development. And the same can happen during the testing. And the same can happen during the fielding.

My point is that it can be easy to fall into the mental trap of focusing only on the technology itself, whether it is a plane or a self-driving car. You need to also consider the wider context of how the artifact came to be. Was the effort a well-informed and thoughtful approach, or did the approach itself lend toward incorporating problems or issues into the resultant outcome?

For the burnout of AI developers, see my article: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For my article about the rock stars of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/hiring-and-managing-ai-rockstars-the-case-of-ai-self-driving-cars/

For the dangers of noble cause corruption in firms, see: https://www.aitrends.com/selfdrivingcars/noble-cause-corruption-and-ai-the-case-of-ai-self-driving-cars/

Key Point #9: Safety considerations for advanced systems

The safety record of today’s airplanes is really quite remarkable when you think about it. This has not happened by chance. There is a tremendous emphasis on flight safety. It gets baked into every step of the design, development, testing, and fielding of an airplane, along with its daily operation. In spite of that top-of-mind attention to safety, things can still at times go awry.

In the case of AI self-driving cars, I’d suggest that things are not as safety conscious as yet and we need to push further along on becoming more safety aware. I’ve urged the auto makers and tech firms to put in place a Chief Safety Officer, charged with making sure that safety is a key focus in everything that happens when designing, building, testing, and fielding an AI self-driving car. There are numerous steps to be baked into AI self-driving cars that will increase their safety, without which, I’ve prophesied, we’ll see things go south and the AI self-driving car dream might be delayed or dashed.

The role of the Chief Safety Officer in AI self-driving cars is vital: https://www.aitrends.com/selfdrivingcars/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/

For safety about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Conclusion

I’ve touched upon some of the aspects that seem to be arising from the Boeing 737 MAX 8 matters that have been in the news recently. My goal was not to figure out the deadly incidents. My intent and hope were that we could glean some useful points and cast those into the burgeoning field of AI self-driving cars. Given how immature the field of AI self-driving cars is today in comparison to the maturity of the aircraft industry, there’s a lot to be learned and reapplied.

Let’s keep things safe out there.    

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.