Beneficial Offshoots and Spinoffs of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

What did we get by landing on the moon? Snarky answers are that we got Tang, the oh-so-delicious artificially flavored orange drink mix, and that we brought back to earth about 50 pounds of rocks and dirt in the Apollo 11 mission alone. Just recently it was discovered that over the course of our moon landings, we collected from the moon a rock that was actually originally from earth. Yes, the rock started out here and, per various hypotheses, was hurled to the moon many eons ago; we happened to find it now and bring it home. It is considered one of the earth’s oldest rocks. Welcome home, wayward earth rock.

Seriously, it is hard to imagine that anyone would genuinely claim that the only benefits from the moon landing consisted of a drink mix and some rocks. I realize that most reasonable people would agree that going to the moon was an incredible human feat. It demonstrated an ability to seek and achieve what at the time was considered a nearly impossible task. The world was focused on something truly inspirational.

Assuming that you tend to agree with those sentiments and acknowledge the otherwise touted benefits of the moon landing efforts, we can certainly then engage in a dialogue about what else came from the monumental undertaking. Let’s conduct this dialogue on the basis that we already concede the huge overall benefits and then consider what other offshoots or spinoffs might also be attributable to the moon travels.

You’d be relatively safe to argue that the moon effort spurred advances in electronics and in computing. Hardware advances in miniaturization and in the development of specialized chips and processors can be linked to the moon pursuits. Software advances in new programming languages and in the development of real-time mission-critical systems made great strides. Over the decade that involved preparing for the moon and undertaking a series of shorter trips, a lot happened in the progress of computers and in electronics.

I suppose cynics might claim that those advances might have happened even if there wasn’t a race to get to the moon. Certainly, it seems plausible that many of the advances would likely have taken place on their own merits, though whether they would have occurred with the same urgency, the same pace, the same intensity, seems quite dubious. Galvanizing attention on an overall goal that forced along the movement of electronics and computers seems likely to have sparked and pushed forward those offshoots and spinoffs more so than if they were merely acting on everyday economic pursuits.

I don’t think we have a time machine that would allow us to somehow replay the era and pretend that there was an alternative of not going for the moon, and then see how things fared.

We have satellites today that are essential to our lives, which you could argue came as an offshoot of the moon effort. Some suggest that microwaves and the everyday microwave oven in which we warm up our day-old burritos can be attributed to the moon research. The list of moon-effort spurred items is rather lengthy and at times perhaps puzzling, such as freeze-dried food (that makes sense for the space missions), cordless tools (hint, they needed portable drills during the moonwalks), scratch-resistant lenses (for the space helmets), invisible braces (transparent ceramic materials used in the spacecraft), and so on.

The overarching theme is that sometimes when you are doing one thing, there can be various offshoots and spinoffs, offering a twofer. Your mainstay is your core attention. Meanwhile, additional benefits might arise. Whether you realize there are potential offshoots is another question. Sometimes, an inventor or innovation maker might not even realize that there is a possibility of spinoffs or offshoots. They are so ingrained in their core effort that they see nothing other than the core itself.

Good for them, unless of course there is a benefit that they are missing out on. If you have a twofer in your hands, it can be a shame to not leverage the secondary offspring that comes from the original core. Leveraging it might not hamper or undermine the original core; not leveraging it is simply a lost opportunity. It can be an unrealized opportunity, and you need to try and decide whether that opportunity is worthy of getting attention or not.

I’ll revisit this twofer notion in a moment.

Let’s shift our attention right now to another kind of moonshot, namely the efforts to achieve AI self-driving cars. I’ve repeatedly stated in my writings and presentations that getting to a true AI self-driving car is very hard. Very, very hard. Tim Cook, the CEO of Apple, has been famously quoted as saying that indeed an AI self-driving car is like a moonshot. The odds of success are ambiguous, and it is not a sure thing by any stretch of the imagination.

For my article about AI self-driving cars as a moonshot, see:

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. In addition, we are identifying offshoots and spinoffs, doing so along with auto makers and other tech firms in this niche.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For the grand convergence that has led to the AI self-driving car efforts, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of these comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
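The five stages above can be sketched as a single processing loop. The following is a minimal, purely illustrative sketch; all function names and the stand-in logic (averaging distance estimates as "fusion," a simple brake-or-cruise planner) are invented for this example and bear no resemblance to any production self-driving stack:

```python
# Illustrative sketch of the five-stage AI driving loop described above.
# All names are hypothetical; a real stack is vastly more complex.

def collect_sensor_data(sensors):
    # Stage 1: gather raw readings from each sensor (camera, radar, LIDAR...)
    return {name: read for name, read in ((n, s()) for n, s in sensors.items())}

def fuse(readings):
    # Stage 2: reconcile overlapping readings into one coherent estimate.
    # Averaging numeric distance estimates is a stand-in for real sensor fusion.
    values = list(readings.values())
    return sum(values) / len(values)

def update_world_model(world, fused_distance):
    # Stage 3: refresh the virtual world model with the fused estimate.
    world["nearest_obstacle_m"] = fused_distance
    return world

def plan_action(world, safe_gap_m=10.0):
    # Stage 4: decide what the car should do next.
    return "brake" if world["nearest_obstacle_m"] < safe_gap_m else "cruise"

def issue_controls(action):
    # Stage 5: translate the plan into car control commands.
    return {"brake": {"throttle": 0.0, "brake": 1.0},
            "cruise": {"throttle": 0.3, "brake": 0.0}}[action]

# One pass through the loop with stubbed sensors reporting distances in meters:
sensors = {"camera": lambda: 8.0, "radar": lambda: 9.0, "lidar": lambda: 8.5}
world = update_world_model({}, fuse(collect_sensor_data(sensors)))
command = issue_controls(plan_action(world))
print(command)  # the fused 8.5 m estimate is under the 10 m gap, so the car brakes
```

The point of the sketch is merely the shape of the pipeline: each stage consumes the prior stage's output, and the cycle repeats many times per second in an actual vehicle.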

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the Top 10 predictions regarding AI self-driving cars, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of beneficial offshoots or spinoffs, let’s consider how the AI self-driving moonshot-like efforts have a twofer built-in.

Take a look at Figure 1.

As shown, reading the diagram from left to right, there is an indication that there can be hardware related offshoots of AI self-driving car efforts, there can be software related offshoots, and there can be “transformative” offshoots.

The transformative offshoots consist of taking some innovation that originated outside of AI self-driving cars, which an AI self-driving car maker then utilized and transformed into something new or novel; the transformed variant carries its own offshoot potential and can be transplanted back into the non-self-driving-car realm.

Let’s begin with a real-world example of an offshoot, in this case one that happens to be hardware related.

There was some eye-catching “offshoot” news recently in the AI self-driving car industry. In particular, Waymo, the Google/Alphabet autonomous vehicle entity, announced it would aim to sell or license its LIDAR sensor technology to third parties, albeit only if those third parties agree not to use the technology for AI self-driving car efforts.

For me, this is a loud bang of a starting gun that has gone off to highlight that the race for an AI self-driving car has also got lots of room for offshoots and spinoffs. I’m not suggesting this is the first time that any contender in the AI self-driving car space has ventured beyond their self-driving car pursuits. I am just emphasizing that having the 500-pound gorilla in the AI self-driving car arena make such an announcement is something worthwhile to notice. The clatter emanating from this will inexorably echo and reverberate, and we’ll see more of this soon by many others.

Those of you that are versed in LIDAR are already likely familiar with the path that Waymo opted to go, and I’ll take a moment to bring everyone else up-to-speed.

LIDAR (Light Detection and Ranging) is considered by many to be an essential type of sensor. You wouldn’t use LIDAR as the sole sensor on an AI self-driving car; instead you would have it act in a complementary manner with, say, cameras, conventional radar, ultrasonic sensors, and so on.

There are many that believe the use of LIDAR is crucial to achieving Level 5, while there are some, most notably Elon Musk and Tesla, asserting otherwise, and thus Teslas aren’t outfitted with LIDAR. Musk, though, has acknowledged that he might be off-base about LIDAR, and we’ll all have to wait and see whether his instincts were on target.

For my article about LIDAR, see:

For the crossing of the Rubicon about LIDAR, see my article:

For my article on the dangers of myopic sensor usage, see:

For when sensors go bad, see my article:

In the case of Waymo, they are staunch believers in LIDAR. The use of LIDAR is essential to their AI self-driving aims. As such, they’ve been pursuing LIDAR since the early days of their initial formation and forays. Early on, they decided that using off-the-shelf LIDARs from vendors was not their preference, and so they opted to make their own LIDARs.

Some pundits have congratulated Waymo for taking their own path on LIDAR, allowing Waymo to presumably control and determine what the LIDAR does and how they will make use of it. Given that LIDAR is going to be key to their AI self-driving cars, and if someday they are able to succeed with a true Level 5, and if this translates into potentially millions upon millions of outfitted self-driving cars, their owning the LIDAR puts them into the driver’s seat, if you know what I mean.

This is an ongoing debate within many of the AI self-driving car making firms. The debate is as follows.

If you use a sensor like LIDAR and you become dependent upon a vendor to provide it, you might be doing so right now to get those prototype AI self-driving cars underway, but what happens if your AI self-driving car efforts really take off into the stratosphere? Suppose everyone wants your AI self-driving car and you are able to grab a market share of many millions of such cars. Meanwhile, the maker of the LIDAR is along for the ride.

You could say that a rising tide lifts all boats, meaning that if your AI self-driving car succeeds then the LIDAR maker is looking at a big payday as well. That seems healthy for all parties.

The downside could be that the LIDAR maker becomes the tail wagging the dog. Suppose the LIDAR maker opts to go in some other direction that the AI self-driving car maker isn’t keen on, or otherwise there are disputes. The AI self-driving car maker is likely to have so enmeshed the LIDAR particulars into their code that trying to somehow unplug it and plug in some other LIDAR system is not going to be easy. In fact, it could be costly and painful, creating great disruption just as your AI self-driving car is perhaps on the verge of greatness.

On the other hand, some pundits say you’d be crazy as an AI self-driving car maker to devote your attention and precious resources to reinventing the wheel by making your own LIDAR. The number of LIDAR makers is rapidly increasing, and it seems that for each new dawn there is another LIDAR startup someplace. It’s hot.

Furthermore, the underlying technology of LIDAR is advancing at an astounding pace. Trying to keep up is daunting. Again, if your mainstay is the AI self-driving car aspects overall, focusing your attention on one particular aspect, the LIDAR, would seem to some as a distraction or worse. It could be worse in that you might not keep the same pace as other LIDAR makers and so end up with something less than the best that the market can provide.

Those same pundits would argue that if the AI self-driving car maker is making their own LIDAR, they are likely going to find themselves somewhat hoist by their own petard. Are they going to be able to advance their LIDAR at the same pace as the marketplace? Will they “compromise” on something lesser, wanting to stick with their own efforts? Might they get eclipsed, and yet already be mired in having done their own thing?

It can be a conundrum.

Part of this involves historical momentum too.

In the case of Waymo, they ventured into their own LIDAR at a time when arguably the number of LIDAR options was few. You could make the case that they by necessity chose to take the bull by the horns. Whether they would need to do so today, well, that’s a different question. And, once they started down the trajectory of their own LIDAR, one might argue that they had set down a path that would be hard not to continue with. It’s much the same as getting into bed with a particular LIDAR maker and being stuck with what you have, though in this case it is of your own invention.

For Waymo, they’ve made their stand and it consists of having their own customized LIDAR. One of those models is known as the Laser Bear Honeycomb. It is considered a perimeter LIDAR sensor, typically used to sense around the bumper of a self-driving car. This is just one kind of LIDAR sensor and not the whole kit and caboodle that Waymo has in their tech arsenal.

Read the Simon Verghese LIDAR Post

The Laser Bear Honeycomb is considered a 3D LIDAR and has a Field of View (FOV) of 95 degrees on the vertical and 360 degrees on the horizontal. It also uses the multi-return per pulse capabilities that more robust LIDAR units now have, meaning that there is a chance of detecting objects in a more detailed fashion than with the more simplistic singular returns. The unit also allows for a minimum range of zero, allowing detection of objects immediately in front of the sensor, versus some LIDARs that require a minimum distance before returns can be detected. If you’d like to see more details about the Laser Bear Honeycomb, take a look at the post by Simon Verghese, head of the Waymo LIDAR team.
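To make those specs concrete, here is a hedged sketch of how software might filter raw returns from a perimeter sensor with a zero minimum range and multi-return pulses. The numbers mirror the Honeycomb specs quoted above (95-degree vertical FOV, multiple returns per pulse, zero minimum range); the data layout and function names are entirely invented for illustration:

```python
# Hypothetical filtering of multi-return LIDAR pulses. The FOV and
# minimum-range figures come from the specs in the text; everything
# else (data format, names) is an invented illustration.

def in_field_of_view(elevation_deg, vertical_fov_deg=95.0):
    # A 95-degree vertical FOV, assumed here to be centered on the horizon.
    return abs(elevation_deg) <= vertical_fov_deg / 2

def usable_returns(pulses, min_range_m=0.0):
    """Each pulse may carry several returns (the multi-return capability).
    Keep every return at or beyond the minimum range and inside the FOV.
    A zero minimum range means an object touching the sensor still registers."""
    kept = []
    for pulse in pulses:
        for rng, elevation in pulse["returns"]:
            if rng >= min_range_m and in_field_of_view(elevation):
                kept.append((rng, elevation))
    return kept

# Two pulses: the first pierces foliage and then hits a wall (two returns);
# the second grazes an object right at the sensor housing plus a stray return
# at 60 degrees elevation, which falls outside the ±47.5-degree FOV.
pulses = [
    {"returns": [(4.2, 10.0), (7.8, 10.0)]},
    {"returns": [(0.0, 0.0), (3.0, 60.0)]},
]
print(usable_returns(pulses))  # → [(4.2, 10.0), (7.8, 10.0), (0.0, 0.0)]
```

Note how the zero-range return survives the filter; a LIDAR with a nonzero minimum range would have discarded it, which is exactly the difference the spec calls out.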

I’m not going to delve into how their LIDAR compares to other marketplace options. That’s not the focus or theme herein.

The reason I’ve brought up the Waymo LIDAR and the announcement is due to the offshoot or spinoff notion. You might have noticed that I mentioned earlier that Waymo is restricting who can potentially purchase or license the LIDAR from them. They are excluding uses of the LIDAR for AI self-driving cars.

Why, you might be wondering? Simply stated, they aren’t willing to hand over their own “secret sauce” to other AI self-driving car makers. If they did, one could argue that they would be essentially undermining their own efforts by arming competitors with the same armaments that they themselves possess. It might be likened to handing your special ICBM to someone else that can then use it to get to your level and possibly even surpass you.

One could also argue that they are perhaps better off not having other AI self-driving car makers use their LIDAR since they might get bogged down into dealing with those other AI self-driving car makers. In essence, suppose that they sold the LIDAR to the AI self-driving car maker X, and X began to toy with it and wanted to ascertain more deeply how it works and what it can do. In the act of doing so, would Waymo inadvertently spill the beans on other salient aspects of their own AI self-driving car efforts? It would be a possibility, and a dangerous and undesirable slippery slope.

Okay, so we have a major AI self-driving car making entity that is willing to provide, as an offshoot or spinoff, their own proprietary LIDAR, as long as it is used for anything but AI self-driving cars (there might be other restrictions they’ll eventually land on, depending on what third parties approach them about possible usage). Right now, Waymo is indicating that they anticipate the LIDAR could be used in diverse ways, such as in robotics, physical security, agricultural applications, and so on.

Those seem plausible and it will be interesting to watch and see what takes hold.

As an aside, I was asked about the announcement while I was speaking at an industry conference and the question was about the money side of this matter. Specifically, the question was whether Waymo needed the money or revenue to keep in business and therefore was now “desperately” seeking to leverage its technology. I held back my laughter. It is hard to imagine Waymo doing this because they are running low on cash and figured that by selling LIDARs they could keep the lights on and still provide those green tea Frappuccinos to the staff.

Though selling or licensing their LIDAR wouldn’t likely bring much dough in the near term, it would nonetheless showcase the inherent value of the technology and IP that Waymo has been developing. Google, or more properly Alphabet, is reportedly feeding one billion dollars per year into Waymo, according to industry estimates. That’s not a lot when you consider that Alphabet is sitting on $100B in cash (as of the end of 2018), but it is essentially a pay-now R&D bet on a bright future for Waymo, and there won’t be any substantive revenue for many years to come, until AI self-driving cars are readied for large-scale deployment. Meanwhile, Alphabet is rumored to be gently gauging whether there are potential relevant investors that might want a stake in Waymo, such as a major auto maker. Doing so can help offset the cash burn and provide marketplace support that Waymo is a worthwhile bet, plus aid in suggesting the kind of valuation that Waymo embodies.

As mentioned earlier, sometimes a firm will have a twofer and realize the secondary or offshoot value that the core innovation has. The realization of a twofer means that you can put your toe into the water and see what happens. If the secondary or offshoot potential begins to seem viable and the marketplace laps it up, great. If the marketplace doesn’t seem to be able to find a use for it, well, you now know, and meanwhile you’ve been continuing to use it for your own core efforts.

You could even claim that it is a bit of rolling the dice without taking too much risk. Suppose that someone else discovers a fantastic way to use your tech. It becomes a huge smashing success in some other endeavor, one that you might not have ever considered on your own. Indeed, it could become its own tail wagging the dog, meaning that it somehow surpasses your own use of it and the true mission of the innovation is leveraged in a completely different way.

That doesn’t happen very often. But it’s a roll of the dice and likely worth seeing how the roll comes out.

There’s another angle on this too. It could be that while floating out the innovation to the marketplace, you end up getting feedback that you would otherwise have been unlikely to get on your own. Regardless of how good your own internal team might be, there are chances that others mulling over your innovation, those newcomers external to your own resources, might come up with fresh ideas that could further burnish its value.

One could argue the counterpunch: suppose the marketplace finds flaws or blemishes that you had not identified, or that you identified and perhaps downplayed internally. Wouldn’t that undermine your innovation? I’d say no. If an innovation is still early in its life cycle, you likely would want to know about any such issues, hopefully surfacing and correcting them before you get too much further along. The later that you discover such gaffes, the worse it usually will be, in terms of cost, time, and other factors.

I’m shifting away now from the Waymo announcement and want to cover other facets of offshoots and spinoffs overall, along with identifying other kinds of such aspects that might occur in the AI self-driving car arena.

Per my overall framework mentioned earlier, there could be offshoots in any of the realms of an AI self-driving car, encompassing the sensors, the sensor fusion, the virtual world model updating, the AI action planning, the car controls commands issuance, as well as the areas of the strategic AI, the self-aware AI, etc.

AI Self-Driving Car Sensors Most Ripe for an Offshoot

The sensor aspects are the ripest for an offshoot. If you are making a sensor that you devised specifically for AI self-driving cars, the odds are high that such a sensor can be used in other ways and by other means. The most obvious would be in other kinds of Autonomous Vehicles (AVs), such as using your sensor in an autonomous drone or an autonomous submersible vehicle.

Using your sensor in other, closely related AVs is not much of a stretch, admittedly. Presumably, those are uses that already jump out at you.

A more pronounced stretch would be to consider using your sensors in something other than a vehicle. Move your mindset away from vehicles and consider how else might the sensor be used. Could it be an Internet of Things (IoT) device that might be used in the workplace? Or maybe in the home? There is no doubt that the IoT marketplace is enormous and growing, so perhaps you can re-apply your sensor into that space.

For my article about the rise of IoT, see:

For aspects about changes coming via 5G, see my article:

For the dangers of groupthink, see my article:

For the egocentric views of some AI developers, see my article:

One of the frequent difficulties in brainstorming about other uses of your own internally developed innovation is that you might fall into a groupthink trap. If everyone on your team was brought to the table to develop a sensor for purposes of Y, they are likely steeped in the matter of Y. It’s all they think about. It’s what they know best.

Trying to get them to go outside the box of Y is not usually readily done. In fact, sometimes they can be forceful about staying inside the box. This makes sense since they know the specific requirements that they built the thing for. When you try to suggest it might be used for Z or Q instead, it can generate acrimonious replies about the ten or twenty reasons why it cannot be used for those other purposes.

They might be right, they might be wrong.

You need to ferret out whether in fact trying to use the innovation for other purposes might be inappropriate, or whether it is just a hesitation based on an anchoring to what the team already knows. This can be difficult to discern. Trying to shoehorn an innovation into other uses might not be productive, and worse still might be untoward.

I’ve worked with some top tech leaders that were constantly coming up with new (and often wild) ideas about how they could repurpose their innovation. They’d be eating a meal and come up with another idea. They’d be on the phone and suddenly come up with an idea. They were like miniature idea generating factories.

At times this was handy and provided opportunity for adapting the innovation to some other notable use. In other cases, it was as though the innovation were a Swiss Army knife that could be used in a thousand ways, when the reality was that it was simply a toothpick and did not have any of the other tools, lacking a can opener, a knife, a screwdriver, and so on. I’m not saying that they could not have ultimately adapted the innovation, only that the distance was greater than the top leaders imagined.

Sometimes bringing an innovation to the marketplace can be a fresh dose of reality to a top leader. Within the firm, perhaps it is hard for the staff to pushback on wild ideas. They don’t want to be pigeonholed as a naysayer. By allowing the innovation to touch into the market, it will be the marketplace that provides the needed feedback. This can get top leaders to listen and pay attention when they otherwise might have been hesitant to do so.

The other side of that coin is that sometimes the internal AI developers are so burned out that they cannot imagine taking on something new with their innovation. If you are pouring your heart and soul into a sensor for an AI self-driving car, and you are exhausted in doing so, even if there is a glimmer of promise for the sensor in some other ways, you cannot cope with the added effort that will undoubtedly fall onto your shoulders. Thus, you might subliminally nix the new use, somewhat due to basic survival instincts.

For my article about internal naysayers, see:

For more about AI developers that are burnt out, see my article:

For my article about noble cause corruption, see:

For the rise of startups in the AI self-driving car arena, see my article:

Besides sensors, there is a slew of other hardware that has the potential for being used beyond the realm of AI self-driving cars. There are specialized processors, GPUs, FPGAs, and the like, all of which can be applied to other fields of endeavor.

I realize that many of those hardware advances were already being made for other fields and then were re-applied into the AI self-driving car niche. I’m not suggesting they were necessarily made initially for AI self-driving cars. In some cases, something made for another purpose has been brought into the realm of AI self-driving cars. Once it has been so transformed, it can potentially take on a new life: not only satisfying the needs of AI self-driving cars, but the augmented hardware can in turn serve other outside purposes that are now opened up, which perhaps weren’t open prior to the augmentation for AI self-driving car needs.

My description about the hardware aspects can be readily applicable to the software aspects.

If you develop a simulation for AI self-driving cars, based on crafting a new way of doing simulations, it could be that you can re-apply that capability to other areas. Perhaps the simulation of an AI self-driving car driving in a traffic situation can be readily re-applied to simulating the efforts of a warehouse and the movement of goods within the warehouse. Again, there are some simulation packages that already had that purpose for warehousing, and they were re-applied into AI self-driving cars, but there are some simulations that were built solely focused on AI self-driving cars that I would say could be re-adapted for other uses.
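One way such re-application can work in practice is if the simulation core is written against abstract agents rather than cars specifically, so only the domain-specific agent behavior changes between traffic and warehouse uses. The following is a hypothetical sketch of that design idea; none of the names or structure reflect any actual simulation package:

```python
# Hypothetical sketch: a domain-agnostic stepping engine. The same core
# loop advances self-driving cars in traffic or forklifts in a warehouse;
# only the agent subclasses differ. All names are invented for illustration.

class Agent:
    def __init__(self, position_m, speed_mps):
        self.position_m = position_m
        self.speed_mps = speed_mps

    def step(self, dt_s):
        # Advance the agent along a one-dimensional track for simplicity.
        self.position_m += self.speed_mps * dt_s

class CarInTraffic(Agent):
    pass  # a real version would add lane keeping, signaling, etc.

class WarehouseForklift(Agent):
    pass  # a real version would add aisle routing, pallet pickup, etc.

def run_simulation(agents, dt_s=1.0, steps=3):
    # The core loop never mentions cars or warehouses: that is the point.
    for _ in range(steps):
        for agent in agents:
            agent.step(dt_s)
    return [round(a.position_m, 2) for a in agents]

# The identical engine drives both domains:
print(run_simulation([CarInTraffic(0.0, 15.0), WarehouseForklift(0.0, 2.0)]))
# → [45.0, 6.0]
```

The design choice being illustrated is separation of the stepping engine from the agent models; a simulation built this way from the start is far easier to re-aim at a new domain than one with car logic baked into its core loop.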

Think about the entire software stack associated with AI self-driving cars. If you are an AI self-driving car maker, and if you have developed various tools and capabilities within that stack, you might be sitting on a potential goldmine of something that you could provide to the marketplace.

You’d need to decide whether or not you want other competing AI self-driving car makers to be able to use your new-to-the-market software. Is it something that provides you with a competitive edge? Would it reveal too much about your secret sauce?

We’ve of course seen some of the AI self-driving car makers opt to not only bring an offshoot into the marketplace but even make it available as open source.

Uber’s Autonomous Visualization System (AVS) Released as Open Source

For example, at the Autonomous Vehicle (AV) 2019 conference, I had a chance to chat with Hugh Reynolds, Head of Simulation for the Advanced Technologies Group (ATG) of Uber. After having used a number of simulation packages, they developed an internal capability that they decided recently to share with the industry.

He and his team have released an open source version of their Autonomous Visualization System (AVS). It consists of an element known as XVIZ, a spec that deals with managing generated AI self-driving car data, and includes streetscape.gl, which provides a means to build web apps that leverage data based on the XVIZ formats. You can find these tools on GitHub.

There are already other AI self-driving car makers that have indicated they’ll likely be making use of the capability. Since it is open source, this reduces the qualms of those other AI self-driving car makers about getting locked into something that another maker might otherwise control. Making it open source might seem odd to some, but it isn’t purely altruism; the odds are that this will ultimately also help Uber by spurring an ecosystem around the simulation and boosting it in ways that Uber itself might not have had the time for or considered doing.
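To give a flavor of what a spec like XVIZ is for, here is a deliberately simplified, XVIZ-inspired sketch. This is not the actual XVIZ schema (consult the project on GitHub for that); it only conveys the general idea of packaging timestamped, named streams of self-driving car data so that a separate web app can visualize them:

```python
import json

# Simplified, XVIZ-inspired message builder. The real spec defines a much
# richer structure (metadata, state updates, geometric primitives); this
# sketch only captures the timestamped-streams idea. Names are invented.

def build_update(timestamp_s, streams):
    # Bundle a snapshot of named data streams at one moment in time.
    return {"timestamp": timestamp_s, "streams": streams}

message = build_update(
    1049.0,
    {
        "/vehicle/velocity": 12.3,           # meters per second
        "/lidar/points": [[1.0, 2.0, 0.5]],  # x, y, z coordinates
    },
)

# Serialize for transport to a browser-based visualization client.
print(json.dumps(message, sort_keys=True))
```

The value of standardizing such a format is exactly the ecosystem effect described above: any team emitting messages in the agreed shape can reuse anyone else's visualization tooling.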

In the combination of both software and hardware, we’ve seen that the Machine Learning and Deep Learning aspects are also spurring offshoots. For AI self-driving cars, one of the most significant elements is the use of deep artificial neural networks, especially in the analysis and interpretation of sensor data. There are software tools and hardware capabilities of Machine Learning and Deep Learning that have been forged within the AI self-driving car space that are gradually coming onto the market for use in other domains.

For plasticity of Deep Learning, see my article:

For ensemble Machine Learning, see my article:

For my article about one-shot Machine Learning, see:

For my article about benchmarks and Machine Learning, see:

Suppose that while the engineers and scientists were developing the innovations and high-tech needed to get to the moon, they had opted to pursue offshoots or spinoffs right away?

I ask the question because it brings up an important consideration about offshoots and spinoffs. What is the right timing for having an offshoot or spinoff?

Timing the Offshoot or Spinoff

Imagine if the high-tech moonshot workers of the 1960s, rather than focusing on how to control the space capsule and land it on the moon, had instead turned their attention to making microwave ovens for the home. Maybe we would not have gotten to the moon. Or maybe it would have taken ten more years to get there.

The point is that if you take on an offshoot or spinoff, you risk failing to stick to your knitting. You may be biting off more than you can chew. The core can end up playing second fiddle to the offshoot, which might not have been your plan, yet you fell into it, slowly, inexorably, like quicksand.

It is easy to do. Sometimes the offshoot gets all the glory. The core use is already well-accepted within the firm. Most take it for granted. The excitement about seeing your hardware or software applied to a new domain is rather intoxicating. Top leaders can readily get caught up in the allure and begin to inadvertently drain resources and attention away from the core use.

Advances for the core use begin to get pushed aside or delayed. Maybe the quality of the updates or revisions starts to lessen. The new use saps the energy and willpower that got the core to where it is. Sure, the new use might be promising, but meanwhile the sacrifices can undermine the core overall.

I caution top leaders to make sure they have their ducks in a row before they decide to forge some kind of offshoot or spinoff. Are they ready to do so? How much of their existing resources will get pulled away? Will they give as much attention to the core as to the offshoot, or will they subconsciously starve the core? These are all important matters to discuss.

The timing question is a tough one to balance. You want to bring out the offshoot while the core is still considered new and worthy. If you wait too long and the core has already been eclipsed by substitutes in the market, you've missed your window of opportunity. The timing needs to hit the vaunted Goldilocks zone: not too early, not too late, just right, as they say.

Another consideration is whether an innovation created by an internally focused team is ready to become a business within a business. When selling or licensing your innovation to other firms, you suddenly have a whole new enchilada to deal with, meaning you need to provide service to that customer or set of customers. Is your internal team prepared to deal with external entities that want support or otherwise require a services capability your team never had to provide before?


There are some doom-and-gloom pundits who say we will never achieve true AI self-driving cars. We are all on a fool's errand, they contend. I disagree with their assessment, but I like to point out that even if they are right, which I doubt, the push toward AI self-driving cars is creating numerous benefits that I assert would not likely exist otherwise.

In essence, I am claiming that the race toward true AI self-driving cars has other benefits beyond whether we actually are able to achieve true AI self-driving cars.

One obvious benefit is that conventional cars are getting more automation. As much as that seems good, I’ve also cautioned that we need to be leery of automating non-self-driving cars to the degree that humans get lulled or fooled into believing the AI can do more than it really can.

For my article about the dangers of human reliance on Level 3, see:

For the driving controls debate, see my article:

If you are intrigued by AI conspiracies, read this article of mine:

For my article about why I claim AI self-driving cars won’t be an economic commodity, see:

AI self-driving cars are an exciting notion, one that has energized the field of AI and helped move it out of the backrooms of university labs and into the sunshine. As a former university professor, I still maintain my roots at numerous universities, and I've seen first-hand how AI self-driving cars are "driving" faculty and students into areas of AI that I believe would not have gotten as much attention otherwise.

Society as a whole has been energized into discussing topics about transportation that I believe would not have been as active or headline-catching were it not for the AI self-driving car efforts. Regulators are weighing the advent of AI self-driving cars, which also raises the topic of mobility and how our society can do more to increase it.

In short, similar to the real moonshot, I'd argue that the advent of AI self-driving cars has become a motivator. It has inspired attention not just to AI self-driving cars but to far more, including societal, business, economic, and regulatory aspects. This inspiration sparks innovators, dreamers, engineers, scientists, economists, and all of the myriad stakeholders that AI self-driving cars touch upon.

Whether you will grant me that the race toward AI self-driving cars has produced those aspects or not, at least perhaps we can agree that the advances made in AI, along with hardware and software, are having and will likely continue to have a profound spillover effect. The number of offshoots and spinoffs will gradually increase, and I predict you’ll see that the AI self-driving car pursuit produces more than you might have anticipated.

I don't think we'll look back and say that all we got was Tang; instead, we'll be saying that without the AI self-driving car pursuit we wouldn't have the amazing advances we'll be relishing in the future. Admittedly, there won't be moon rocks to look at, but it will still be good, mark my words.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.