Genius Shortage Hampers Solving AI Unsolved Problems: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Is there a genius shortage that is impeding the progress of AI?

This is a pointed question that keeps coming up in the hallways of AI conferences and that people are whispering about. Sure, there have been some impressive efforts in newer AI systems suggesting we are making solid progress, but these are not the breakthrough-like improvements that would rocket AI ahead and overcome some of the as-yet-unsolved thorny problems in AI.

One AI developer took umbrage at the assertion that there is a genius shortage, insisting that he is a genius and that the question itself seemed to undercut his prowess. I politely noted that the question does not say there aren’t any geniuses, only that there seems to be a shortage of them. I suppose then that if he wants to believe he is a genius, he can do so, and the question still stands.

Some people react by saying it is a blatantly stupid question. What does being a genius have to do with progress in AI? Do we need to have an Einstein of AI, or a Darwin of AI, or a Leonardo da Vinci of AI, in order to push further ahead on AI? Where does it say that the only means to progress in a field of endeavor is when you have a genius that happens to be in that field?

In essence, you might make the argument that by and large the progress in most fields of endeavor has been made by “less than geniuses” who did the hard work and took the painstaking efforts to make progress. Lots of really smart people can perhaps do the work of those unicorn geniuses. Historians would likely indicate that those of a genius nature are few and far between, and you’d be hard pressed to attribute substantive progress in most endeavors primarily to those geniuses alone.

This also brings up the elephant in the room, namely what exactly is a genius and how would we know one when we see one. Einstein today is regarded as a genius, yet during his day there were others who thought he was unorthodox and even wrong in his viewpoints and would not have labeled him a genius. The same can be said about Darwin and most of those now labeled as the great geniuses of all time.

What Is Genius Anyway

Some say that genius is in the eye of the beholder.

You might see someone do something and remark that the person is a genius, yet others might smirk that the person was not at all a genius and that you were fooled, misled, or simply mistaken in assuming the person was a genius. You might be tempted to use IQ as a measure of genius and suggest that once you have a sufficiently high IQ you are ergo a genius.

I don’t think the IQ test is the most reliable way to attest to whether someone is a genius. There are undoubtedly many people with top IQs who never manifest anything we would place in the genius category. I think we usually reserve the genius moniker for someone who accomplishes something of a magnitude that we ascribe as being genius level. As such, merely possessing a high IQ is not a sufficient means of joining the genius club. It probably helps you knock on the door of the club, but you need to do something with it.

The AI developer who was perturbed at the notion that he might not be considered a genius brings up another facet of the matter. You might be a genius and yet nobody knows it, or at least nobody knows about you. You might be working in the backrooms and have not yet done anything that has caught the world’s attention. Or, the things you are doing now might someday have an incredible impact, yet during your lifetime you remain relatively unknown and unheralded, similar to what has sometimes been the case for some of the world’s notable geniuses in history.

At the AI lab you work at, right now, look around, and there might be a genius to your left or your right, on their way toward AI genius breakthroughs now or in a few years (or, if you prefer, look in the mirror!). These budding geniuses might be like the caterpillar that will someday emerge as a butterfly, allowing their inner genius to make its way out and astound the world of AI by solving seemingly insolvable problems.

There is also the matter of genius as a sustained trait versus a transitory or eureka kind of flash of brilliance.

Some falsely assume that the famous E=mc² was a sudden flash for Einstein; you might want to read the voluminous accounts of how he made his way toward the now famous equation over many years of effort. It is said that Edison made thousands of attempts at perfecting the light bulb, which is somewhat of a mischaracterization, but in any case, he demonstrably did not wake up one morning with the solution in his mind, out of the blue, as a genius flash.

Hindsight and the writing of history can at times bolster the case for someone being considered a genius. We might then fall into the trap of assuming that the genius was the sole genesis of an amazing insight that others never had. In fact, many times the alleged insight was one that others also had at the time, and through a variety of confluences the one person now famous as a genius gets the glory, though many others were doing similar work at the time.

I’ve dragged you through the muddied waters so far about what exactly is this genius that some are saying we don’t have enough of. Let’s set aside the difficulty of defining this kind of genius, for the moment, and concentrate on whether there is a shortage of them.

If you are going to claim that there is a genius shortage, it implies that there is some magical number or desired threshold of geniuses that we hope to attain. Presumably, there is some count of geniuses that you have in mind to be reached, and you are concerned that we don’t have enough of them.

Economic Supply and Demand of Geniuses

How many geniuses do we need in AI?

You could ask the same question of any other field of inquiry. How many geniuses does physics need to make abundant progress in physics? How many geniuses does chemistry need? How many geniuses does biology need? And so on?

You can also ask the same of arenas outside of science and engineering. How many geniuses are needed in music, and is there a shortage of them? What about art? What about any of the fine arts? Maybe we don’t have enough geniuses anywhere, in any field, and all fields are being held back because of it.

On the other side of the coin, do we have an abundance of geniuses?

If we had too many of them, I suppose we’d know. Perhaps there would be incredible breakthroughs in all fields at all times. This might happen like popcorn kernels that are popping, the breakthroughs would be sizzling and there would be no denying that we have a plethora of geniuses.

It would seem that we probably don’t have an abundance of geniuses, which seems perhaps obvious, and we cannot say for sure that we have a shortage, though it is a somewhat compelling argument that we might, or must, have a shortage if things aren’t progressing any faster than they are.

A recent research paper weighs in on the genius shortage debate by trying to model the level of societal genius in an economic manner.

Research by Seth Benzell and Erik Brynjolfsson at MIT provides an interesting look at the so-called G factor, an economic parameter associated with genius. Their study entitled “Digital Abundance and Scarce Genius: Implications for Wages, Interest Rates, and Growth” examines genius as a limiting factor in economic growth. They point out that though the advent of our digital world has allowed labor and capital to become more abundant, we are still limited due to the inelastically supplied complement of human genius. For their paper, see: http://ide.mit.edu/sites/default/files/publications/Digital%20Abundance%20and%20Scarce%20Genius%20for%20shortened%20abstract.pdf

You might be pondering how we as a society can perhaps make more geniuses. If there is a shortage of them, it would seem logical to try and make more of them.

Some would argue that genius is in your blood, it’s a DNA thing. You either have that secret sauce of intrinsic genius within you, or you do not. Others would say that it is something we can foster in people, perhaps by the right kind of training or education. It could also be a mixture, namely that you might need to have some innate genius for which it blossoms because of the right kind of training or education. This is the classic nature versus nurture debate.

By considering the matter overall as an economic one, it becomes a modeling exercise about an imbalance of supply and demand for geniuses, an otherwise scarce commodity.

We have a demand for more geniuses, seemingly, and our supply is too low. It’s time to cultivate those latent geniuses. Find them, spur them on.

Making Geniuses Via AI Is Another Path

There’s another path that some AI developers are hoping for.

Maybe we can craft AI that has genius, so that we won’t necessarily need as many genius people, or at least not a lot more of them, to make up for the short supply of genius. Use digital technology to make geniuses, either by the AI itself being a genius, or, if that cannot be readily done, by having the AI at least be an aid that boosts humans into becoming geniuses.

It is like digging a ditch. If we give people a shovel, it is an aid that augments their ability to dig the ditch, and therefore they can more readily do the needed digging. Better yet, replace the shovel with a digging machine that digs the hole without a human needing to touch a shovel at all. For AI, either make AI that has genius capabilities and let it do the needed genius work, or provide AI that gets humans into the genius realm that those humans otherwise could not likely have achieved.

For those of you that are into AI, let’s face it, crafting an AI system that has genius is rather unlikely right now, though I realize you might try to argue that examples such as a top-level chess playing AI system exhibit “genius” in chess, or similar exemplars. I don’t think we’re reasonably talking about that same kind of limited-domain “genius” as being the equivalent of human genius. Nor does it have the fluidity, plasticity, and other characteristics that we can reasonably ascribe to human genius.

I’m sure that I’ll get some flak email about this point. Some will say that a human genius in physics only has genius typically with respect to physics, and therefore that’s also a limited domain. And so on. I’m not going to try and address all of the back-and-forth herein, and just say that it seems a stretch to say that AI of today has genius.

There are many AI-related initiatives that hope to spark genius-level performance in humans. These are often AI systems that try to get a human to think creatively, perhaps prodding the human to think outside the box. These AI systems even at times try to get a debate going with the human, forcing the human to mull over a topic in ways they might not have previously. In spite of such AI, we would likely all agree, I assume, that the AI is not genuinely carrying on a discussion or debate in the way a human could, at least not in the sense of the “intelligence” of humans.

On the matter of the potential for AI-system geniuses, some worried humans are concerned that this might take us down the path of an AI singularity or a super-intelligence, under which we as humans might then become its slaves. Or, the AI might decide to wipe us out entirely. Is this merely conspiracy kind of talk? Or is there merit to the dangers? Something worthy of further debate.

For my article about AI as Frankenstein, see: https://www.aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

For my article about the AI singularity, see: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For aspects of super-intelligence and the paperclip problem, see my article: https://www.aitrends.com/selfdrivingcars/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For my article about the Turing Test and AI, see: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For AI conspiracy theories, see my article: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

AI Self-Driving Cars and the Genius Shortage Topic

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One looming question for the auto makers and tech firms is whether or not there is a need for geniuses to make the advent of true AI self-driving cars become a reality.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Much of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal code sketch follows the list):

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance
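
To make these stages concrete, here is a minimal, hypothetical Python sketch of how one cycle of such a processing loop might be organized. All of the function and class names are illustrative placeholders of my own, not any particular auto maker’s or tech firm’s implementation, and each stage is stubbed down to a toy behavior.

```python
# A minimal, hypothetical sketch of one cycle of the AI driving task.
# All names here are illustrative placeholders, not a real system's API.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)

    def update(self, fused_scene):
        # Stage 3: replace the tracked obstacles with the latest fused view.
        self.obstacles = fused_scene


def collect_and_interpret(raw_sensor_frames):
    # Stage 1: turn raw readings into labeled detections (stubbed).
    return [{"sensor": name, "objects": frame.get("objects", [])}
            for name, frame in raw_sensor_frames.items()]


def fuse(detections):
    # Stage 2: merge per-sensor detections into one coherent obstacle list.
    fused = []
    for det in detections:
        fused.extend(det["objects"])
    return fused


def plan_actions(world_model):
    # Stage 4: a toy planner that slows down if anything is detected ahead.
    return {"throttle": 0.0 if world_model.obstacles else 0.3, "steering": 0.0}


def issue_controls(plan):
    # Stage 5: hand the chosen commands to the actuators (stubbed as a print).
    print(f"commands issued: {plan}")


def driving_cycle(raw_sensor_frames, world_model):
    detections = collect_and_interpret(raw_sensor_frames)  # stage 1
    fused_scene = fuse(detections)                         # stage 2
    world_model.update(fused_scene)                        # stage 3
    plan = plan_actions(world_model)                       # stage 4
    issue_controls(plan)                                   # stage 5


if __name__ == "__main__":
    frames = {"camera": {"objects": ["pedestrian"]}, "radar": {"objects": []}}
    driving_cycle(frames, WorldModel())
```

In a real system each of these stages is an enormous subsystem in its own right, which is exactly where the call for “genius” level improvements comes in.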

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are over 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Genius Shortage and Impact on AI Problems

Returning to the topic of a genius shortage in AI, let’s consider the nature of AI problems that need to be solved and whether we’ll need genius thinkers to solve those problems.

Also, let’s cast this into an applied realm by considering how the pace and advent of true Level 5 self-driving cars might be constrained or delayed if there aren’t those geniuses involved in self-driving car efforts.

AI of today lacks common-sense reasoning. This severely limits the ways in which we might make use of AI. Humans seem to know that the sky is blue, the road is flat, and other common-sense elements, all of which are essential to their thinking efforts. Though there are bold efforts to try and incorporate common-sense reasoning into AI, it’s a long way from arriving at anything close to what humans have.

Some also liken this to Artificial General Intelligence (AGI), namely having a type of AI that applies across domains and is not focused or fixated on a particular domain. The AI efforts to-date are primarily narrow in their scope. You might have an AI system that looks for cancer in MRI slides, yet that same AI system does nothing else beyond that narrow task. It would be handy to have AI that could work across many domains and be flexible and fluid in doing so, which humans generally are able to do.

For driving a car, there is an ongoing debate about whether or not the AI needs to have AGI. Human drivers do have AGI, therefore if the AI for a self-driving car is trying to drive like a human, presumably the AI needs to have AGI. Others claim that the driving task is narrow, and therefore there isn’t a need to have AGI for the self-driving car driving task.

What about common-sense reasoning? Humans use common sense reasoning when they drive a car. You look around the car and can see that there is a dog chasing a cat, and you reason that the dog will continue to chase the cat and might end-up in the street, in front of your car. You know that the dog is not likely going to stop chasing the cat and probably doesn’t realize the dangers of going into the street. Same might be said of the cat.

Does an AI self-driving car need to have common-sense reasoning, similar to the kind of common sense that humans have? Some say that the AI does not need overall common sense and that it can be programmed sufficiently to have similar qualities. They would also argue that with the use of Machine Learning and Deep Learning, the AI will, by osmosis, end up with a variant of common sense through the pattern matching baked into it.
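
To illustrate what “programmed sufficiently” might look like in its simplest form, here is a hypothetical hand-coded rule in the spirit of the dog-and-cat scenario described earlier. The schema and function name are made up for illustration; the point of the sketch is that every such rule has to be anticipated and written in advance, which is exactly what genuine common-sense reasoning would not require.

```python
# Hypothetical hand-coded "common sense" rule: if a chasing animal is
# headed toward the roadway, assume it may enter the road and slow down.
def assess_animal_risk(animal):
    # animal: dict of observed attributes (purely illustrative schema).
    chasing = animal.get("is_chasing_something", False)
    heading_to_road = animal.get("heading_toward_road", False)
    if chasing and heading_to_road:
        return "slow_down"   # the animal likely won't stop at the curb
    return "monitor"


if __name__ == "__main__":
    dog = {"is_chasing_something": True, "heading_toward_road": True}
    print(assess_animal_risk(dog))  # -> "slow_down"
```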

Others worry that AI self-driving cars are not going to have a semblance of common sense, and as a result the AI will get the self-driving car into untoward predicaments. This will likely then lead to injuries or deaths. The injuries or deaths will cause the public and the regulators to want to slow down the pace of AI self-driving car development and fielding. The whole lack of common-sense reasoning might doom the advent of AI self-driving cars to a much longer and slower evolution, some believe.

If we had more geniuses in AI, would we by now have solved the AGI problem?

If we had more geniuses in AI, would we by now have solved the common-sense reasoning problem?

I don’t know how we can answer the question, since of course if you define genius as someone that would have solved those AI problems, the answer is that yes, those problems would be solved by now if we had them around.

We also don’t know that there is a magic formula that a genius would discover to then solve those AI problems. It certainly seems unlikely that a magic wand will produce a solution. In that sense, it would seem that this is the workhorse kind of genius needed, rather than the instantaneous flash of genius, though it is hard to say because there might be a magic wand and we cannot envision it as yet.

I’ve written and spoken about the ravenous desire by tech firms and auto makers for top AI talent. These AI rockstars embody the hopes and dreams of those firms to find new solutions, faster solutions, more effective solutions. Depending upon what you consider genius, those AI rockstars might be that diamond in the rough.

For my article about AI rockstars, see: https://www.aitrends.com/selfdrivingcars/hiring-and-managing-ai-rockstars-the-case-of-ai-self-driving-cars/

For the common-sense reasoning issues in AI, see my article: https://www.aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For Deep Learning, AGI, and plasticity, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

More Open Problems in AI That Genius Can Tackle

I’ve so far named two open problems in AI, the need to develop common-sense reasoning, and the desire to have Artificial General Intelligence (AGI). Those are two of the biggies, but there are more such problems hounding us.

Another AI open problem involves the topic of learning. Today’s Machine Learning and Deep Learning is actually shallow when compared to human learning.

How can we get Machine Learning or Deep Learning that can do one-shot learning, whereby after experiencing only one or a few examples the AI is able to generalize and learn about a topic or matter? It’s a tough problem to solve.
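
As a rough illustration of the flavor of approach researchers explore, here is a hypothetical sketch of a nearest-prototype classifier in the spirit of prototypical networks: given a single labeled example per class, new items are classified by their distance to those one-shot prototypes in an embedding space. The embedding function below is a trivial placeholder of my own; a real system would learn the embedding, which is the hard part.

```python
# Hypothetical one-shot classification sketch in the spirit of
# prototypical networks: one labeled example per class serves as a
# prototype, and new inputs are assigned to the nearest prototype.
import numpy as np


def embed(x):
    # Placeholder embedding; a real system would use a learned network.
    return np.asarray(x, dtype=float)


def build_prototypes(support_set):
    # support_set: {class_label: single_example_features}
    return {label: embed(example) for label, example in support_set.items()}


def classify(prototypes, query):
    q = embed(query)
    # Pick the class whose prototype is closest in the embedding space.
    return min(prototypes, key=lambda label: np.linalg.norm(prototypes[label] - q))


if __name__ == "__main__":
    support = {"stop_sign": [1.0, 0.1], "yield_sign": [0.1, 1.0]}  # one example each
    prototypes = build_prototypes(support)
    print(classify(prototypes, [0.9, 0.2]))  # -> "stop_sign"
```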

There is the open problem of getting AI to learn-to-learn. We cannot keep building AI systems for which we, the humans, have constructed in advance the scaffolding of what the AI will learn. Presumably, we want to get the AI to be able to learn about how it learns, and then use that insight to do a better job of learning on its own.

Some even suggest that we need to have the AI begin as a kind of child-AI, and let it grow over time, similar to how humans start as babies and become children and become adults. Perhaps that’s the only way toward getting AI that is more robust.

For one-shot Machine Learning aspects, see my article: https://www.aitrends.com/selfdrivingcars/seeking-one-shot-machine-learning-the-case-of-ai-self-driving-cars/

For federated Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For my article about Deep Learning and ensembles, see: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For my article about the AI Machine-Child approach, see: https://www.aitrends.com/selfdrivingcars/ai-machine-child-deep-learning-the-case-of-ai-self-driving-cars/

Object recognition is another open problem in AI.

Today’s AI systems that do object recognition are not doing the same kind of “recognition” that humans do.

When a human sees a dog running toward the street, the human uses the shape and movement of the dog to figure out that it is a dog, and correspondingly has a lot of knowledge about what dogs are and what they do. AI systems are merely labeling the image as likely containing a thing called a dog, and don’t have any semblance of what a dog actually is.

Autonomous navigation is another open problem in AI, including desired advances in SLAM (Simultaneous Localization and Mapping).

Most AI self-driving cars rely upon a GPS system to enable them to navigate, in addition to object and scene recognition. Suppose, though, that we didn’t have GPS available or it conked out; what would the AI self-driving car do then? A human driver can drive a car without GPS, and the AI of a self-driving car ought to be able to do the same.
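
As a toy illustration of one fallback idea, here is a hypothetical dead-reckoning sketch: the vehicle integrates its own speed and heading over time to estimate its position when GPS is unavailable. Real GPS-denied navigation, such as full SLAM, is far more involved; this sketch, with made-up names and data, is only meant to make the problem concrete.

```python
# Hypothetical dead-reckoning sketch: estimate position from speed and
# heading alone, as a crude fallback when GPS is unavailable.
import math


def dead_reckon(start_xy, samples):
    # samples: list of (speed_m_per_s, heading_radians, dt_seconds)
    x, y = start_xy
    for speed, heading, dt in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y


if __name__ == "__main__":
    # Drive east at 10 m/s for 5 s, then north at 10 m/s for 5 s.
    path = [(10.0, 0.0, 1.0)] * 5 + [(10.0, math.pi / 2, 1.0)] * 5
    print(dead_reckon((0.0, 0.0), path))  # roughly (50.0, 50.0)
```

Dead reckoning drifts quickly as small errors accumulate, which is part of why robust GPS-denied navigation remains an open problem.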

Theory of Mind is another interesting and important AI problem.

Humans that interact with other humans will tend toward having a Theory of Mind about the other person, being able to guess what the other person might be thinking about, or what the person might do in a given situation. The AI systems of today do not have much if any of a Theory of Mind embodiment. They aren’t “thinking” about the thinking of others. Likewise, for humans, a human interacting with an AI system is unlikely to be able to discern what is in “the mind” of the AI system, which can lead to some dangerous kinds of dissonance.

In the case of AI self-driving cars, when human drivers are driving a car, they are usually anticipating the actions of other drivers. You watch the car ahead of you and can guess that based on their driving behavior, they are a timid driver and therefore you can anticipate other potential driving moves the person will make. Few AI self-driving car efforts are seeking as yet to embody this kind of capability.
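
As a hypothetical sketch of the kind of anticipation being described, here is a toy heuristic that labels an observed driver as timid or aggressive from a few behavioral cues and then biases the expectation of their next move accordingly. The thresholds and field names are invented for illustration; a genuine Theory-of-Mind capability would go far deeper than this.

```python
# Hypothetical sketch: classify an observed driver's style from simple
# cues and anticipate their likely next move. Purely illustrative.
from dataclasses import dataclass


@dataclass
class ObservedDriver:
    avg_following_gap_s: float   # average time gap kept to the car ahead
    hard_brake_events: int       # abrupt braking seen over the last minute
    lane_changes_per_min: float


def classify_style(driver):
    if driver.avg_following_gap_s > 2.5 and driver.lane_changes_per_min < 0.5:
        return "timid"
    if driver.hard_brake_events > 2 or driver.lane_changes_per_min > 2.0:
        return "aggressive"
    return "typical"


def anticipate_next_move(style):
    # Bias our expectation of the other driver's behavior by inferred style.
    return {
        "timid": "likely to brake early and yield at merges",
        "aggressive": "may cut in or brake late; keep extra buffer",
        "typical": "assume standard lane-keeping behavior",
    }[style]


if __name__ == "__main__":
    other = ObservedDriver(avg_following_gap_s=3.2, hard_brake_events=0,
                           lane_changes_per_min=0.2)
    print(anticipate_next_move(classify_style(other)))  # -> timid driver advice
```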

For why object pose is a difficult issue in AI, see my article: https://www.aitrends.com/ai-insider/machine-learning-ultra-brittleness-and-object-orientation-poses-the-case-of-ai-self-driving-cars/

For how Theory of Mind might be a factor in the Boeing 737 incidents, see: https://www.aitrends.com/selfdrivingcars/boeing-737-max-8-and-lessons-for-ai-the-case-of-ai-self-driving-cars/

For more about SLAM, see my article: https://www.aitrends.com/selfdrivingcars/simultaneous-localization-mapping-slam-ai-self-driving-cars/

For safety aspects of AI self-driving cars, see: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/

Across the Spectrum of AI Self-Driving Car Elements

I’m not going to enumerate all of the open problems in AI, but I’ve provided some of the more notable ones and hopefully provided you with an indication of where there might be value in adding a “genius” to try and solve those problems.

If you have exceptional AI developers that are gifted, and they are tackling those problems, can you consider them to be geniuses? If they are not quickly solving these arduous problems, does that imply they must not be geniuses? This is all quite a quagmire.

Let’s focus instead on the areas in which we need some super thinking to solve, and I’ll focus on the AI self-driving cars realm.

All of the open AI problems I’ve already mentioned herein are obviously applicable to the AI self-driving cars field, and so keep that in mind. Also, I’d dare say that any AI problem in the AI self-driving cars field is likely applicable to AI overall. I mention this to highlight that you’d be hard pressed to claim that there is something applicable only to AI self-driving cars and could not ultimately be carried over into other AI areas.

Using my framework about AI self-driving cars, let’s consider each of the major elements.

The sensors of an AI self-driving car are the key to sensing what is around the self-driving car. Maybe there are new kinds of sensors that nobody has yet even invented or considered, for which a “genius” might come out of the woodwork and create.

Or, maybe the sensors we have today can be vastly improved. Perhaps there are new ways to develop cameras that will radically improve the images or video captured by the sensors on an AI self-driving car. Maybe there are breakthroughs to be had in radar, ultrasonic sensors, LIDAR, and so on. This also would include not just the hardware aspects, but also the software that does the object recognition and interpretation.

We might be lucky to have a “genius” that can vastly improve sensor fusion. The capability of cohesively bringing together the sensory data and making sense of it, well, it’s a tough problem. Similarly, the use of virtual world models could use a “genius” to make them more powerful and capable. The same can be said of the AI action planning portion of an AI self-driving car, and likewise for the car controls commands issuance.

There is also the need for AI self-driving cars to be self-aware. A human driver knows that they are driving a car. The human presumably keeps tabs on themselves, realizing when they are getting sleepy or impaired. We need the AI in an AI self-driving car to have a similar kind of self-awareness. It might take a “genius” to get us there.

For AI that’s self-aware, see my article: https://www.aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/

For Multi-Sensor Data Fusion (MSDF), see my article: https://www.aitrends.com/selfdrivingcars/multi-sensor-data-fusion-msdf-and-ai-the-case-of-ai-self-driving-cars/

For my article about new kinds of sensors such as olfactory, see: https://www.aitrends.com/selfdrivingcars/olfactory-e-nose-sensors-and-ai-self-driving-cars/

For my Top 10 predictions about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

Conclusion

Do we have a shortage of geniuses in AI? Besides the aspect that you could presumably say the same thing about nearly all other areas of study, let’s just say that if we had more geniuses it might be helpful.

Notice that I say it might be helpful, rather than categorically saying it would absolutely be helpful. We don’t know that the geniuses would necessarily be ones that would help us make progress. Suppose there are geniuses that are devious and opt to take us down a bad path? Or maybe there are geniuses that are trying to do their best, and yet waste our attention on something that won’t pay off. Who knows?

We also tend to think of geniuses as solitary actors. The image we tend to have is of someone with a strange or off-putting personality who works alone, toiling away, coming up with miraculous new ideas and inventions.

This kind of stereotype tends to belie the reality of “geniuses” who work with others in teams, and might not necessarily produce something new themselves, instead being an inspiration or guide to others who do. There can even be a team of geniuses, though we often assume they won’t get along and will all be pushing and pulling at each other to showcase who the real genius is.

If you are a genius, please jump in and help out on solving these thorny AI problems.

If you are not a genius, maybe you can become one, so please make a go of it.

If you are trying to help someone else to become a genius, try not to go too far since making a genius is not a sure thing.

If you are an AI developer seeking to craft AI-genius, good luck to you and aim to ensure it won’t wipe out humanity.

There is a famous quote by philosopher Arthur Schopenhauer, “Talent hits the target no one else can hit; genius hits the target that no one else can see” (in his book, “The World as Will and Representation”), which is worth ruminating on.

Maybe I’ve not even listed the solutions that a genius will come up with, presumably being able to see problems and solutions that the rest of us aren’t even yet able to discern. Go for it, geniuses, let’s see what you can do.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.