Receptivity Of AI Autonomous Cars – How YIMBY Could Flip to NIMBY

Communities eager to have AI self-driving cars on their streets are the YIMBYs (Yes In My Backyard); those opposed are the NIMBYs (Not In My Backyard). Beware of the instant flip from YIMBY to NIMBY. (GETTY IMAGES)

By Lance Eliot, the AI Trends Insider


You’ve likely heard or seen the acronym before.

Not In My Backyard (NIMBY).

That’s what some people say when there is something they believe to be untoward that is potentially going to be situated near to them.

It can be used in a rather literal sense, such as when a shelter is going to be put in place next door to your home and you object to the shelter being located in that particular spot. Or, it could be that the shelter will be somewhere in your neighborhood and you likewise believe it should not be located that close to where you live.

Keep in mind that you might actually welcome the notion of having shelters and have no overarching objection to those facilities. Your complaint is that it is being located in a place that you believe is inappropriate. You might have compelling reasons for your belief. Maybe there are schoolchildren in your neighborhood and you are worried about their safety as it relates to having a nearby shelter. Etc.

Oftentimes, people will indicate they favor something overall, and it is the specific placement that concerns them.

A frequent retort to that involves the suggestion that the person does not want to bear having the aspect near to them but seems willing to have someone else have to deal with the matter.

When respondents fill in a poll or survey and say they are in favor of something, their response can change dramatically if the question indicates that the something will be in their backyard, so to speak.

Is it being two-faced or perhaps hypocritical to indicate that you favor something but not in your own backyard?

That’s the claim that some make against those that say yes to something and yet refuse to have it near to them.

This charge, though, at times seems a bit over-the-top. Suppose the person genuinely believes that the matter should be dealt with and has other bona fide suggestions about its placement. Suppose they advocated that the shelter be built in a more accommodating area; in that case, it is not just that the person doesn't want it in their own backyard, and there could be logical reasons to place it someplace else.

This is not to say that there aren’t some that are indeed perhaps being two-faced at times.

It could be that there is not a particularly valid justification to refuse having the matter located near to them.

They may fabricate a reason for their viewpoint on the matter.

They might even opt to avoid offering any rationale per se and instead just flatly insist that they don’t want the aspect located near to them.

The debate about locating something can be a mixture of rational discussion and heated emotion.

You might have some proponents of locating a matter in your neighborhood that have impeccably logical reasons to do so, and you might oppose it on purely emotional terms.

Or, maybe the proponents are the ones with the emotionally laden basis and you are the one with the rationally logical reasons against it.

Most times, it is likely that both sides have a combined mixture, namely they will vacillate between offering logical reasons and emotional responses, and it can be hard to separate those two elements when having a discussion or debate on the matter.

Consider the rise of nuclear power plants.

When nuclear power plants were first devised, there were many locales that welcomed having one built in their vicinity. It was considered by some to be a modernist element of society. It offered the creation of jobs in the locale where it was being placed. It was promoted as reducing energy costs and would therefore provide monetary savings to those that tapped into the power generated. And so on.

Eventually, there were various issues that arose about nuclear power plants and it became a kind of pariah to have one in your locale.

Nuclear power plants became the butt of jokes about how poorly run they were and the dangers they created. Some locales that had one were desperate to try to close down that nuclear power plant. Other locations that were approached to have a nuclear power plant built in their vicinity were quick to say NIMBY.

This illustrates too that the NIMBY is not confined to just being located next door to you or in your neighborhood and can include a much larger geographic factor.

The NIMBY might be that you don’t want something to be placed in your city, or maybe not in your county, or perhaps not in your state, and it could be that you enlarge the scope to not being in your country. I don’t want the Widget factory anywhere in the United States, you might contend, whereby the Widget factory is something you consider so untoward that you don’t want it within the borders of your country.

Another example would be nuclear waste.

Some people might say that there should not be nuclear waste stored anywhere in their country.

Others might be Okay with storing it someplace in their country, but not anywhere physically close to them.

The item involved does not necessarily need to be stationary.

In the case of nuclear waste, suppose there is a train that takes the nuclear waste from point A to point B.

During transit, the train is going to come through your city. Even though it might only be inside the confines of your city for presumably a short period of time, you might still have a NIMBY. You are maybe worried that the radiation could harm locals, or perhaps the train might derail and create a nuclear waste disaster in your town.

If we took a poll of the members of your town, it might not necessarily show that everyone feels the same way about the matter. There are likely to be some people that will be quite strongly voiced about the matter. They might be advocates in favor of the aspect and be outspoken for it, while there might be others that are on the opposite side and outspoken in opposition to it. Likely there will be some people that are on-the-fence and say they are open to learning more about it. There are bound to be some people that claim they don’t care either way and don’t want to get immersed in it at all.

I mention this to clarify that the NIMBY can be the viewpoint of just one person or it can be a group of people.

Furthermore, the NIMBY might not be unanimous among a group of people, depending upon how we choose the group.

If we ask just those that are in favor of the matter, presumably the vote would be unanimous in favor of it. If we ask the entire town, we’d perhaps have a segment that was the group in favor of it, we’d have the segment that was vitriolically opposed, we’d have the on-the-fence segment, and we’d have the don’t care segment.

All of these perceptions can change over time.

Consider Amazon’s efforts to find a place to put their second headquarters, often referred to as the HQ2. In some communities, there were advocates relishing that the HQ2 be placed in their city or town. There were some members of that city or town that were likely opposed, and others in the middle, and some that said they didn’t care either way.

Any of those positions on the matter were at times fluid and apt to change.

For some locales, they at first welcomed the HQ2 as a potential newcomer, and then later changed their minds and opposed it.

That’s how these things often go.

There is not a fixed-in-stone perspective and instead it will at times shift or transform as to whether people favor or disfavor a matter.

Now that I’ve covered NIMBY, time to mention its counterpart.

Don’t Neglect The YIMBY


You might not have seen or heard YIMBY, and it certainly is less well-known than NIMBY.

YIMBY means Yes In My Backyard.

It is considered the counterpart to the NIMBY.

On one side of the location issue you might have those saying keep the matter out of their locale, those are the NIMBYs, and you might have equally strong advocates insisting the matter should be in the locale, the YIMBYs.

Note that YIMBY is not the only acronym used to refer to those that want something in their backyard, but it seems to be on its way to becoming the most prevalent.

How do people form their NIMBY or YIMBY perspectives?

Some people might carefully research a matter, studying it quite closely. They might seek the viewpoints of established experts on the matter. Their NIMBY or YIMBY stance might be based on gobs of rationale, and they can rattle off twenty reasons for their position.

There are others that might have heard second-hand about the topic or know only marginally about it, but they too might form NIMBY or YIMBY positions, albeit not as well articulated and supported. Some might go with their gut. Some might figure that if Joe or Samantha is NIMBY or YIMBY, and if they believe in that person, they might simply form their own viewpoint based on a sense of "trust" in that other person's position.

The information that either supports the NIMBY or the YIMBY positions can be widely available and highly accurate. Or, it can be sparsely available and at times riddled with flaws. There can be disinformation that distorts the matter and creates confusion about what is “true” versus what is “false” regarding the matter. There can be a lack of information on the topic, which can create a vacuum into which fake information can arise.

You can become saturated with information that seems completely sensible and valid for both sides of the NIMBY and the YIMBY debate, but it is voluminous, maybe highly technical and somewhat unreadable or not readily digested. It can be overwhelming. You might not know how to sort out what seems viable in the morass.

The difficulty can be further exacerbated by the classic dueling-experts.

This involves having one seemingly fully qualified expert that offers a viewpoint that seems to completely support the NIMBY, and yet have another equally fully-qualified expert that takes the side that seems to support the YIMBY.

How are you to decide when two renowned experts are diametrically opposed to each other's claims or rationale?

Local leaders will often have a stake in the matter too.

Perhaps the item to be potentially located in your town or city is considered good for the town or city, and a local leader therefore believes it will have tremendous benefits for the community. They might believe this in their heart and soul. They might also see this as a means to extend or expand their local leadership aims. Of course, there are local leaders that might be on the opposite side too, namely they believe earnestly in their heart and soul that the matter would be damaging or undermining to the community.

In that sense, there is at times a kind of ROI (Return on Investment) analysis that often occurs. This could involve trying to identify all the benefits, perhaps trying to quantify them in terms of how it might improve lives or make money or have other desirable outcomes. This then might be balanced by the potential costs. Perhaps the matter poses risks for the community. Maybe it would require an expenditure of monies and there is concern that the matter won’t pencil out as a profitable choice.

The cost-benefit analysis could be extensively undertaken and involve lots of surveys and notable experts that weigh-in on the matter. Sometimes there is no particular overt cost-benefit analysis undertaken and the matter is viewed in disconnected bits and pieces. There might be conjecture rather than solid analysis. That's not to suggest that a cost-benefit analysis could not also be spiked or biased, which could indeed happen. Doing a comprehensive cost-benefit analysis can be costly in itself and thus the effort to do so needs to be considered accordingly.
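The rough tally that such an analysis boils down to can be sketched in a few lines. All of the categories and dollar figures below are purely illustrative assumptions, not real community data; the point is simply that quantified benefits are weighed against quantified costs to see whether the matter "pencils out":

```python
# Hypothetical community cost-benefit sketch; every category and
# figure here is an illustrative assumption, not real data.

def net_benefit(benefits: dict, costs: dict) -> float:
    """Return total estimated benefits minus total estimated costs."""
    return sum(benefits.values()) - sum(costs.values())

# Assumed annualized benefits to the community (in dollars).
benefits = {"new_jobs": 1_200_000, "tax_revenue": 450_000, "tourism": 150_000}

# Assumed annualized costs and risk set-asides (in dollars).
costs = {"road_upgrades": 600_000, "permitting": 50_000, "risk_reserve": 400_000}

print(net_benefit(benefits, costs))  # 750000 -> the matter "pencils out"
```

Of course, as noted above, the hard part is not the arithmetic but arriving at credible, unbiased estimates to plug into it.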

AI Autonomous Cars And Local Receptivity

What does this have to do with AI self-driving driverless autonomous cars?

There are some communities that are eager to have AI self-driving cars get underway on their public streets, those are the YIMBYs.

There are other communities that aren't yet in the mindset of the YIMBY, and vary from being NIMBYs to perhaps the let's wait-and-see types.

There are some communities that aren’t even in the game as yet, so to speak, in that so far, there’s not been any auto maker or tech firm with AI self-driving cars that has approached that community about their interest in allowing AI self-driving cars on their roadways.

For my article about the public perception of AI self-driving cars, see:

Let’s consider the various aspects about this YIMBY versus NIMBY in terms of the advent of AI autonomous cars.

I’d like to first clarify and introduce the notion that there are varying levels of AI autonomous cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars below Level 4 and Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I've repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 or Level 4 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
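The steps above can be sketched as a simplified processing loop. Every function name and data shape here is a hypothetical placeholder for illustration only, not any automaker's actual architecture:

```python
# Illustrative sketch of the canonical AI driving-task pipeline listed
# above; all functions are hypothetical stand-ins.

def collect_and_interpret():
    # Stand-in for camera/radar/LIDAR data collection and interpretation.
    return {"camera": ["pedestrian"], "radar": ["vehicle_ahead"]}

def sensor_fusion(readings):
    # Merge per-sensor detections into one consolidated set.
    detections = set()
    for objects in readings.values():
        detections.update(objects)
    return detections

def update_world_model(model, detections):
    # Refresh what the car currently believes surrounds it.
    model["nearby"] = detections
    return model

def plan_action(model):
    # Trivial placeholder policy: slow down if anything is nearby.
    return "slow_down" if model["nearby"] else "maintain_speed"

def issue_controls(action):
    # Stand-in for throttle/brake/steering command issuance.
    return f"command: {action}"

world_model = {"nearby": set()}
readings = collect_and_interpret()
world_model = update_world_model(world_model, sensor_fusion(readings))
print(issue_controls(plan_action(world_model)))  # command: slow_down
```

In a real system these stages run continuously, many times per second, rather than once through as in this sketch.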

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of YIMBY versus NIMBY and AI self-driving cars, I’d like to take a look at the present and likely future state of this matter.

YIMBY To The Rescue

First, right now, there is much more of a YIMBY than a NIMBY when it comes to AI self-driving cars.

There is an exciting allure about AI self-driving cars. It is exhilarating and offers great promise.

There is a kind of prestige associated with this new technology.

Most things associated with AI are hot right now, and this applies to self-driving cars too.

A community might be eager to try out these futuristic AI self-driving cars.

We’ve seen plenty of sci-fi movies about how the world will eventually have self-driving cars, and why not be one of the first communities to have them.

Your community might become known globally for being the first, or at least one of the first, as a showcase for the advent of AI self-driving cars.

Imagine if your community had been Kitty Hawk, the birthplace of man-made flight.

In one sense, it is a somewhat easy decision to make in terms of YIMBY, because the investment by the community is minimal and the community's viewpoint remains highly flexible and can quickly be changed.

Unlike putting in a factory that might require a fixed asset and lots of local monies, the agreement to allow AI self-driving cars can be done nearly without spending a dime by the community.

No special facilities are needed, no costly investments to be made locally.

The auto maker or tech firm might need to establish a base in the community to house the AI self-driving cars, along with the AI developers and those maintaining the self-driving cars. This is a relatively small investment and offers some advantages to the community such as jobs and taxes to be paid, but overall it is not likely going to be much of either one. The odds are that the automaker or tech firm will bring into the community the needed skilled specialists and not much local hiring is likely.

Having the AI self-driving cars in the community might be an attraction for other purposes.

Perhaps the community can become more well-known, if it today is more of a sleeper kind of locale that not many know about.

Or, maybe it is known for tourism but not for industry. The advent of AI self-driving cars in that locale might create a perception to others that the locale is well-suited for industry. It now has the hottest technology in terms of AI and self-driving cars. It might be seen by other tech firms as a place to also locate their businesses, regardless of whether being into AI self-driving cars or not.

It is a prestige factor that can have a multiplier effect on a community.

Having AI self-driving cars might spur the community in other indirect ways.

Perhaps the educational system becomes inspired and it rallies teachers, administrators, and students to be interested in tech and STEM (Science, Technology, Engineering and Mathematics). Businesses might opt to invest in tech, having nothing to do with AI self-driving cars per se and more akin to becoming avid tech adopters.

Local leaders might be elated to have AI self-driving cars in their community as it shows a measure of positive outlook and progressiveness. No Luddites here, they might say. This could attract other investments into the community by businesses that see the local leadership as supportive of new innovations. An influx of new residents might arise as they perceive the locale to be betting on the future rather than mired in the past.

There are also the potential benefits from a new form of ridesharing.

Communities that are first to adopt AI self-driving cars might enjoy the presumed benefits of the increased mobility that AI self-driving cars promise. Pundits believe that we are heading towards a mobility-as-a-service kind of economy, which perhaps these early-adopting communities will experience sooner than other communities.

For my article about ridesharing and AI self-driving cars, see:

For my article about the non-stop use of AI self-driving cars, see:

For driving controls aspects, see my article:

For the aspects of autonomy and self-driving cars, see my article:

NIMBY Is A Possibility Too

We’ve so far discussed the basis for the YIMBY perspective for a local community that opts to either invite in AI self-driving cars or that is approached about allowing for AI self-driving cars in that locale.

What about the NIMBY perspective?

Some might assert that the existing driving regulations don’t allow for AI self-driving cars and stand pat that the law is the law.

For communities that are willing to change their driving laws to allow for AI self-driving cars, which might also involve aspects of the state driving laws and federal regulations, this could be somewhat of a cost to undertake.

The cost would also possibly involve "political" capital, in that the push to put in place laws more conducive to AI self-driving cars might be seen by some as wrong or ill-conceived, and could later harm local leaders or cause their ouster, whether by votes against them or by otherwise not welcoming their ongoing tenure.

There is also the specter of class action lawsuits against AI self-driving cars and the automakers and tech firms, which might dampen enthusiasm for AI self-driving cars depending upon the outcomes, and if so it could undermine those local leaders that had earlier been proponents of self-driving cars.

For state and local laws about AI self-driving cars, see my article:

For federal regulations about AI self-driving cars, see my article:

For class action lawsuits about AI self-driving cars, see my article:

For the zero fatalities myth, see my article:

For safety aspects of AI self-driving cars, see my article:

There is a risk to the community that an AI self-driving car might cause or be involved in a car crash, or perhaps strike a pedestrian, or run over a dog, or in some manner get entangled in a matter that causes injury or harm or property damage, such as running into a wall or a light post.

There is ongoing debate about the nature and severity of this kind of risk.

One of the most notable examples was the Uber self-driving car incident that occurred in Tempe, Arizona, which I've extensively discussed and analyzed. The matter involved a self-driving car that ran into a pedestrian walking a bike across the street, doing so at nighttime and not in a marked crosswalk. Uber opted to temporarily suspend their trials and performed an internally sponsored review.

For my analysis of the Uber incident in Tempe, see my article:

For my subsequent analysis of the Uber incident, see my article:

For the dangers of relying upon back-up drivers, see my article:

For idealism about AI self-driving cars, see my article:

What’s Going To Happen

Building trust and faith about AI self-driving cars is a matter that takes a lot of time and attention to undertake.

It is like filling up a pool or tub and the water takes quite a while to pour into it. Meanwhile, losing trust and faith can happen very quickly, almost like pulling out the plug and the water suddenly drains out.

I’ve predicted that early adopting communities will be quick to change their minds about AI self-driving car adoption if the instances of injury, death, or damages should arise.

This change of heart and mind can occur in an instant, particularly if the incident is severe enough.

Mitigating factors about why or how the incident occurred might help to soften the blow to the YIMBY, but it will certainly take off the glow and likely cast suspicion. A tight leash will be the potential consequence, such that if another such matter arises, even one less severe, it could cause the YIMBY to flip over to a NIMBY.

There is also the aspect of portraying AI systems, such as AI self-driving cars, as an overall danger to society. Some might liken an AI self-driving car to a kind of Frankenstein, suggesting it is a monster that needs to be caged or curtailed. Some are worried that once we've opened the door to AI self-driving cars, it will become a widespread takeover of our freedom and liberty, and who knows where the rampant AI will stop, if ever. These conspiracy theorists are on the lookout for the smallest signs of such a potential.

There are those that believe AI is headed towards a singularity.

Perhaps AI will develop into a sentient being, which some assume will squash humans like a bug. There is also the infamous paperclip AI dilemma, namely that a so-called super-intelligent AI system might try to maximize an aspect such as making paper clips, doing so at the cost of inadvertently destroying the rest of mankind in its unbridled quest to make paper clips.

For conspiracy theories about AI, see my article:

For the Frankenstein aspects of AI, see my article:

For my article about the paperclip takeover by AI, see:

For my article about the singularity, see:

Another possibility for the NIMBY is the potential for the loss of jobs.

This might seem counter-intuitive, since the advent of ready mobility via AI self-driving cars is considered by some to be a sure sign of boosting a local economy and providing more jobs. The other side of the coin is that transportation jobs are potentially going to dry up, at least in terms of the driving of vehicles. Presumably, there would be no more human-driven ridesharing, and so the Uber and Lyft of today that provide income for human drivers will no longer be doing so (AI systems will instead be driving the ridesharing self-driving cars).

This lack of the need for human drivers could extend to buses, trucks, and other forms of transportation. All of those human drivers could be out of work. The counter-argument is that there might be other newer jobs that arise to replace those human driver jobs, especially if the volume of transport rises. In other words, there might be so much more transportation taking place that it might open avenues for other kinds of jobs.

For my article about the future of jobs, see:

For my article about impacts on personal rapid transit, see:

How does the NIMBY or YIMBY of AI self-driving cars compare to other kinds of “backyard” disputes?

The adoption of AI self-driving cars into a community is unlike the placement of a nuclear power plant, since the nuclear power plant is a relatively permanent kind of measure and has other massive consequences if something goes severely haywire. In theory, a community that discovers the advent of AI self-driving cars to be a danger to their community can readily stop or boot-out the AI self-driving car adoption, doing so at relatively small cost and effort (though there could be follow-up lawsuits asserting that the community made an unwise choice to begin with and is partially or fully to blame for the incident).

Some communities have embraced a closed track or proving grounds approach that can be used for the development and testing of AI self-driving cars.

This is a quite different kind of decision about AI self-driving cars in comparison to allowing AI self-driving cars to roam the public roadways in a community. A proving ground involves the setting aside of a relatively permanent piece of land and having fixed improvements built onto that land. In a manner of speaking, this might be more akin to the HQ2 example, though obviously on a much smaller scale. The risk factor is low in terms of danger to the community since the testing of the AI self-driving cars would presumably be primarily tethered to the closed track.

For my article about the closed tracks for AI self-driving car testing, see:

AI self-driving cars as a roaming element in a community are more akin to a transitory backyard admittance.

It would seem unlikely to have the same kinds of sustaining benefits that, say, an HQ2 might provide, nor the large-scale dangers of a nuclear power plant. For AI self-driving cars, the benefits are relatively low for the community in that kind of comparison, while at the same time, the investment by the community is also quite low. Overall, it is somewhat easy to start and somewhat easy to stop the advent of roaming AI self-driving cars in a community.

The risks to the community obviously involve the potential for serious harm if an AI self-driving car does something untoward, though presumably confined to one incident (after which, the community would likely halt or unwind the arrangement).

Are the members of the community willing to accept that kind of risk?

It is likely hard for them to know what the risk level is.

For example, the AI self-driving cars might be used in a confined geo-space. In that case, if something untoward does occur, a haywire AI self-driving car is presumably only going to affect that geographical area of the community. The use of back-up human drivers can try to reduce the risks of the AI self-driving car getting involved in an incident, though this does not eliminate the risks, and I've spoken and written extensively about the false assumptions regarding back-up human drivers as a fail-safe (nor will using remote operators provide any heightened reduction of such risk).
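Such a confined geo-space is typically enforced via geofencing. A minimal sketch of the idea, using a made-up rectangular zone of latitude/longitude limits (the coordinates are purely illustrative, not any actual trial zone), might look like:

```python
# Minimal geofence sketch: the permitted operating zone is a
# hypothetical rectangular bounding box; coordinates are illustrative.

ZONE = {"lat_min": 33.40, "lat_max": 33.46,
        "lon_min": -112.00, "lon_max": -111.90}

def inside_zone(lat: float, lon: float, zone: dict = ZONE) -> bool:
    """True if the vehicle's reported position lies within the zone."""
    return (zone["lat_min"] <= lat <= zone["lat_max"]
            and zone["lon_min"] <= lon <= zone["lon_max"])

print(inside_zone(33.43, -111.95))  # True  -> within the trial area
print(inside_zone(33.50, -111.95))  # False -> outside, trip an alert
```

Real deployments would use polygonal boundaries and continuous GPS checks, but the confinement principle is the same: the self-driving car is only permitted to operate inside the designated area.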

For the dangers of relying upon back-up drivers, see my article:

For the drawbacks of remote operators, see my article:

For the crossing of the Rubicon about AI self-driving cars, see my article:

For the invasive curve about AI self-driving cars, see my article:

For the concerns about fake news about AI self-driving cars, see my article:


For a community considering allowing AI self-driving cars to roam their streets, it is quite a toss-up right now as to whether to be in the YIMBY or the NIMBY camp.

We are still in the early days of AI self-driving car adoption.

Their use on public streets is generally being kept relatively constrained by the automakers and tech firms. This, though, will likely begin to widen and expand as the developers become more confident about the safety of their AI self-driving cars. Doing so will likely increase the chances of something untoward happening.

One bad apple in the barrel can spoil the entire barrel. This suggests that if an AI self-driving car causes serious injury or death, the public could become distrustful of all AI self-driving cars, regardless of who the automaker or tech firm might be. It could be a broad-stroke casting of aspersions across all AI self-driving cars.

An admonishment voiced by some pundits in the AI self-driving car camp is that society has to be willing to weigh the potential for injury or harm via AI self-driving cars against the daily injury and harm from human drivers.

This is a rationalist's position: you should be willing to accept some amount of injury or death via AI self-driving cars if it will in the end reduce the number of injuries and deaths caused by human-driven cars. Though this might be the case, such logic is rather hard to stomach from an emotional perspective. People accept the notion that human drivers cause injury and harm, that this is bad, and that something should be done about it, but offering a solution that will itself produce injury or harm, even if less so than human drivers, involves a strict adherence to numbers logic that is nearly unimaginable to most.

The media is a factor in this matter too.

For now, the media has been relatively supportive of the advent of AI self-driving cars, typically offering gee-whiz kinds of coverage. That being said, the media loves the man-bites-dog story. The media will readily turn against AI self-driving cars if an incident occurs that gets the media riled up. There is nothing the media tends to relish more than a love-them-or-hate-them kind of situation, which will surely attract eyeballs. Don't be surprised if the media swings overnight from AI self-driving cars being the do-all, best thing since sliced bread to being the worst malady ever made by mankind.



Probably the most predictable aspect is that we’ll see communities vacillating from one posture or another when it comes to the initial tryout of AI self-driving cars, and only time will tell which backyard, if any, will be putting out a “Welcome Here” sign or putting up a “Stay Out” sign instead.

Copyright 2020 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]