AI Machine-Child Deep Learning: The Case of AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Did you play with blocks when you were a child? I’m guessing you probably did, or at least you have seen children playing with blocks. For children, it can be an enjoyable pastime and seemingly keep them busy. For parents, the blocks are a quick and easy diversion for occupying a child and the parents generally assume that the child will somehow be better off as a result of playing with the blocks.

If you study cognitive development, you likely know that block playing can be a significant means of formulating various key cognitive skills in children. In addition to the cognition aspects, the physical manipulation of the blocks will tend to aid the maturation of various body agility and coordination skills. There’s also the commingling of the mind and the body in the sense that the child is not solely gaining cognitively and not solely gaining in physical movement but gaining in a synergistic way of the mind and the body working together.

There’s a lot going on with the stacking of those blocks!

When my children were quite young, I watched in fascination as they played with blocks. It was as though you could see the gears turning in their heads as they would examine the blocks. Here’s what I guessed was happening. What’s this, a block that is all blue in color? This other block is all red in color. That’s interesting and worthwhile to note as a difference. Say, this blue block is bigger than the red block. That’s another interesting difference. I wonder if I can do anything with these blocks? I’ll grab one and see if I can throw it. Oops, now I don’t have the block near me anymore – make mental note, don’t toss away a block because it won’t be within your grasp anymore. And so on.

Consider for a moment how much a child can learn by simply playing with a small set of blocks.

They learn about the physics of objects, such as whether they are solid or not, whether an object is rigid or squishy enough to be compressed, heavy or lightweight to hold, graspable or too large to grab, etc. They learn about the colors of the blocks. Are they the same color, are they different colors, does the color have anything to do with the other properties of the block?

They learn about what they can do with blocks. I can try to put one on top of the other. That was fun, the red block is now stacked above the blue block. Oh no, the red block fell off the blue block. Why did that happen? Apparently, the red block was so much bigger than the blue block that the blue block could not serve as a pedestal for the larger block. Can I put the smaller block on top of the larger block? Can I shake the two blocks once I’ve stacked them and will they stay stacked?

What was most exciting for me was to then observe my kids as they eventually began to play with the blocks without necessarily needing to touch or handle the blocks to do so. It is one thing to reach out and handle a block; it’s another to represent it in your mind and be able to “handle” it in a virtual sense. I would ask them to pretend in their minds that they were placing the blue block on top of the red block, and then ask them whether the blue block would be able to stay there or whether it would fall over.

That’s a lot of thinking.

You also need to “disconnect” your motor skills for a moment to try to mull over the matter in your mind. As you know, whenever you are sleeping you seem to be able to disconnect your motor skills and not physically act out your dreams. Well, a child has to learn that they can imagine something in their minds and yet not necessarily need to engage their body in those thoughts. It takes a bit of doing to realize those are two different things, one being thoughts, the other being physical manifestations.

When they became more proficient in simple block-like tasks, I would increase the level of cognition needed. I might tell them that there are now three blocks, even though they only see two in front of them. This third block is orange in color. It is the same size and shape as the red block. I might then ask them whether I could stack the imaginary orange block on top of the blue block.

That’s another leap in mental ability.

This involves not actually having a physical representation in front of you. You need to mentally conjure the notion of a block. I would do this at first with the other blocks present, making it a bit easier because they could look directly at the existing blocks and use those as a means to envision an imaginary block. I would later on take away the other blocks and do the same kind of pretending. This means they need to hold in their minds the prior images of the blocks that they actually saw, along with now imagining additional blocks that they cannot see and have never seen.

You might start off by giving a child blocks that are the same shape, weight, size, color, and so on. After they play with those, and presumably learn some initial aspects, you might switch things up and put out blocks that are different in size and weight. Then, use ones that differ in shape and color. Then, put markings on the blocks, such as an X and an O. Then, use stick-figure drawings of a cat, a dog, and other animals. Then, put letters of the alphabet on the blocks.

The notion is that you are progressively graduating the child to more complex elements. You don’t necessarily need to explain this to the child. You just give them blocks with increasingly complicated matters and have them figure things out. The variety also keeps the child engaged. If the child has the same blocks over and over, the odds are that eventually the joy of playing with the blocks will wane. By changing up the aspects of the blocks, it gets the child reengaged. It also presumably stimulates the nature of their learning and expands what they are learning.

You might want to keep this in mind the next time that you buy a set of blocks for your close friend’s young child. You are not merely buying blocks for the child, you are providing a learning experience. It could be that those blocks that you gave to that child were the key ingredient to the child later going to a top tier college and graduating summa cum laude. Well, perhaps that’s a bit of an overstatement about the learning power of those blocks, but you get the idea.

Professor’s Homework Was for Students to Play with Blocks

When I was a university professor teaching Computer Science (CS) classes and particularly for my AI classes, I would assign AI development programming homework and projects that involved the stacking and use of blocks. This is a handy way to have the CS students learn about the principles of AI.

At first, the programming assignment consisted of merely receiving simple commands about make-believe blocks, and the program would need to respond with the state of the blocks after pretending to perform the commands. A command might be to pick up a red block and stack it on top of a blue block. The program would then need to report the state of the blocks. In this case, the answer would be that the red block is now on top of the blue block, and the blue block is sitting on the table.
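
For readers who like to see the mechanics, here is a minimal sketch of that first assignment, rendered in Python for readability (the actual assignments were written in LISP or Prolog, as noted shortly); the class and method names are my own illustrative choices, not the original assignment code.

```python
# Minimal sketch of the first blocks-world assignment; names are illustrative.

class BlocksWorld:
    def __init__(self, blocks):
        # Every block starts on the table; 'on' maps each block to its support.
        self.on = {b: "table" for b in blocks}

    def is_clear(self, block):
        # A block is clear if nothing is stacked on top of it.
        return block not in self.on.values()

    def stack(self, top, bottom):
        # Carry out "pick up <top> and stack it on <bottom>", if legal.
        if not (self.is_clear(top) and self.is_clear(bottom)):
            raise ValueError("both blocks must be clear to stack")
        self.on[top] = bottom

    def report(self):
        # Report the state of the blocks after performing the commands.
        return [f"{b} is on {s}" for b, s in self.on.items()]

world = BlocksWorld(["red", "blue"])
world.stack("red", "blue")
print(world.report())  # ['red is on blue', 'blue is on table']
```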

Once the students got the hang of that aspect, the next step was to increase the “cognition” required by their programs. Their software would need to deal with many blocks and a wide variety of blocks. This would increase the set of commands and increase the complexity of keeping track of the state of the blocks.

I then opted to have them extend their “cognition”-only programs to deal with a real world of actual blocks. A room with a robotic arm had a set of blocks on a table. They had to give commands to the robotic arm about moving the blocks. They also had a camera that provided visual images of the blocks. The students had to write code that would take in the images, use those images to figure out where the blocks were, and determine what instructions needed to be sent to the robotic arm to touch and move the blocks.
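
The students’ robotic-arm programs amounted to a perception-to-actuation loop. Below is a hypothetical sketch of that loop; the vision routine and the arm interface are made-up stand-ins for illustration, not a real robotics library.

```python
# Hypothetical sketch of the students' perception-to-actuation loop. The
# vision routine and arm interface are made-up stand-ins, not a real library.

def locate_blocks(image):
    # Stand-in for the vision code that scanned the camera image to find
    # each block; here we just return fixed (x, y) table coordinates.
    return {"red": (0.10, 0.25), "blue": (0.40, 0.25)}

class RobotArm:
    # Placeholder arm interface; a real arm would expose similar verbs.
    def move_to(self, x, y): print(f"arm -> ({x}, {y})")
    def grasp(self): print("arm: grasp")
    def release(self): print("arm: release")

def stack_with_arm(arm, image, top, bottom):
    positions = locate_blocks(image)   # perception: where is everything?
    arm.move_to(*positions[top])       # go to the block being moved
    arm.grasp()
    arm.move_to(*positions[bottom])    # carry it over to the destination
    arm.release()

stack_with_arm(RobotArm(), image=None, top="red", bottom="blue")
```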

I’d usually have them write this in the AI languages of choice at the time, such as LISP or Prolog, and at first not allow them to use any open source libraries, forcing them to write nearly everything from scratch. I figured it was good for them to know the details. After we got done with those aspects, I’d then allow them to use the open source libraries, which you can imagine came as a great relief for some, while others preferred to cling to their own code. That’s another handy lesson for them too.

They would then add a Natural Language Processing (NLP) capability to their budding AI program. They were to assume that someone wanting to play with the blocks could enter narrative text that their program had to interpret. This was harder than forcing the user to enter strict commands. Instead of entering the command “put the red block on top of the blue block,” I could enter something like “take that one block and put it on top of the other block.” Notice that I did not mention the block colors. Their NLP would need to ascertain that the verbiage was ambiguous and therefore ask the user for more clarity.
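
As a toy illustration of that ambiguity check, here is a Python sketch; the parsing is deliberately crude (splitting on the word “on” and matching color names), far simpler than what the students actually built.

```python
# Toy ambiguity check: if the user's phrasing does not pin down unique
# blocks, ask for clarification rather than guessing.

KNOWN_BLOCKS = ["red", "blue", "orange"]

def resolve_block(phrase):
    # Which known block does this phrase explicitly name?
    matches = [b for b in KNOWN_BLOCKS if b in phrase.lower()]
    return matches[0] if len(matches) == 1 else None  # ambiguous otherwise

def interpret(utterance):
    # Expect commands shaped like "<something> on <something>".
    if " on " not in utterance:
        return "Please tell me which block goes on which."
    left, right = utterance.split(" on ", 1)
    top, bottom = resolve_block(left), resolve_block(right)
    if top is None or bottom is None:
        return "Which blocks do you mean? Please name their colors."
    return f"OK: stacking the {top} block on the {bottom} block."

print(interpret("put the red block on top of the blue block"))
# -> OK: stacking the red block on the blue block.
print(interpret("take that one block and put it on the other block"))
# -> Which blocks do you mean? Please name their colors.
```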

I’d start this with the narrative being in written form. After that was done, I’d then have them allow the user to enter their dialogue both in writing and via spoken speech. This got the students to deal with NLP in both modes. The ante was increased too by having the user be able to give a meandering dialogue that wasn’t necessarily directly about the blocks. I might carry on a dialogue in which I first talk about how my day is going, and then mention something about stacking a block. The NLP had to be able to parse the dialogue and figure out what was useful and what was not, at least for the task at hand.

All told, this was a handy way to introduce the students to various AI techniques and approaches.

The sad thing is that all of the work was essentially unusable toward other kinds of AI problems. Yes, the students themselves had learned key AI aspects and could write new code for a different type of AI problem, and they could even potentially reuse aspects of what they had put together for the blocks world. But what they could not do was say to their blocks world, hey you, I want you to now learn something entirely new.

Wouldn’t it be great if you could develop an AI system that was a learning one, which could go beyond whatever particular domain aspect you crafted it for, and it would be able to learn something else entirely, leveraging what it already knew?

Let’s return to the story about the children playing with blocks. I think we would all agree that the children are not merely learning about blocks. If they were “learning” like most of today’s AI programs, they would only henceforth be able to use their blocks learnings to play with more blocks.

When my children went into our backyard to play, I noticed that they right away took various toys in the yard like a tricycle and a rocking horse and put one on top of the other. When I asked them about this, they reported that these were like the blocks. You could stack them on top of each other. I asked them to look at the houses in our neighborhood. Could those be stacked on top of each other too, I asked? Yes, they said, though they also noted that it would be a rather arduous thing to do and that stacking them was not likely.

In essence, they learned not just about blocks, but also the nature of objects and the characteristics of objects, along with how to cope with objects.

Regrettably, the AI programs that my students wrote were pretty much one-time specific to the blocks domain. Those programs could not be unleashed to learn about other everyday things and leverage what they now “knew” about blocks.

This is one of the greatest issues and qualms about today’s AI.

By-and-large, most AI development is being done as a tailoring to a particular domain and a specific problem in-hand. It makes them narrow. It makes them brittle. They lack any kind of common-sense reasoning. They are unable to extend themselves to other areas, even areas of a related nature.

Most of today’s AI systems are each a one trick pony.

For my article about AI and common sense reasoning issues, see: https://www.aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For my article about so-called super-intelligent AI, see: https://www.aitrends.com/ai-insider/super-intelligent-ai-paperclip-maximizer-conundrum-and-ai-self-driving-cars/

For the dangers of irreproducibility in AI, see my article: https://www.aitrends.com/selfdrivingcars/irreproducibility-and-ai-self-driving-cars/

For aspects about ascertaining AI and the Turing Test, see my article: https://www.aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

Today’s AI systems cannot apply themselves to other domains, nor can they be expected to learn what to do there on their own.

Even the vaunted chess playing programs are pretty much dedicated to playing chess. They do not on their own have a capability to be presented with a different kind of game and reapply what they “know” about chess to the other game. It requires a significant amount of human AI-developer effort to rejigger such an AI system from one domain to another.

My children certainly were able to readily leverage their learnings from one domain into another. For example, I started them with checkers. When I next got them to start playing chess, they already understood aspects such as the placing of game pieces on squares and the moving of pieces from one square to another, which they had learned from playing checkers. They knew all kinds of tactics and strategies of playing checkers, which they could reapply to chess. I’m not saying checkers and chess are the same. I am saying that they learned about board game playing and could leverage it to learn a completely different board game.

And so we are currently facing a situation in AI of having to develop each new AI application and do so by significant and prolonged manual intervention by a human AI developer. There are some pretty wild projections that to develop all the different kinds of AI apps that people seem to want, you’d need to enlist millions upon millions of AI developers. That’s just not practical.

For my article about AI developer burnout, see: https://www.aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/

For the dangers of AI developer groupthink, see my article: https://www.aitrends.com/selfdrivingcars/groupthink-dilemmas-for-developing-ai-self-driving-cars/

For aspects about AI developers and internal naysaying, see my article: https://www.aitrends.com/ai-insider/internal-naysayers-and-ai-self-driving-cars/

For the concerns about normalization of deviance in AI, see my article: https://www.aitrends.com/selfdrivingcars/normalization-of-deviance-endangers-ai-self-driving-cars/

It also seems to me to be a kind of tossing in the towel if you are merely going to hire AI developers, one after another, and try to create a globe that is filled with AI developers (heaven forbid!). One would hope that we aren’t going to just be making AI systems that each are their own separate island. The larger vision would seem to be that we’d want AI systems that can learn on their own. In this manner, there aren’t armies of AI developers needed.

Learning Challenge is the Bigger Challenge in AI

In fact, there are some AI purists that suggest we are all right now being distracted by writing these one-off AI systems. Sure, it is fun, and you can make money, and you are solving an “immediate” problem that someone has stated. But the purists are worried that we are not confronting the bigger challenge, the learning challenge.

How can we make AI systems that can learn, and do so far beyond whatever particular learning aspects we started them with? That’s what our focus should be, these purists insist.

Maybe we ought to be focusing on making an AI system that is like a child. This AI system begins with the rudiments that a human child has in terms of being able to learn. We then somehow mature that child and get it to become more like an adult in terms of cognitive capability. We could then presumably apply this adult-like AI to various domains.

Let’s consider that you want to create an AI system for medical diagnosis purposes. Today, you would likely study what a trained and proficient medical specialist does when diagnosing something. You would then try to pattern your AI around that kind of cognition. You might gather up thousands of images of, say, cancer scans and use Machine Learning or Deep Learning to pattern match on those images. The resulting AI system appears to be able to do a “better” diagnosis than the human medical specialist, perhaps being more consistent in detecting cancers and so on.
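
As a flavor of what that pattern matching amounts to, here is a minimal sketch using scikit-learn, with random arrays standing in for real labeled scans; an actual diagnostic system would train a deep network on thousands of genuine images.

```python
# Minimal sketch of supervised pattern matching; random arrays stand in
# for real scan images, and the labels are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # placeholder "image" feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder benign/malignant labels

# Fit on the first 800 "scans", evaluate on the held-out 200.
model = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```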

Your aim was seemingly to do the task as proficiently as an adult human or more so. Some would say you went after the wrong target. You might be better off having started with a “child” kind of AI system that you could mature and graduate toward doing this adult-like task.

This is the crux of the AI machine-child deep learning notion.

It is believed by some that we need to first figure out how to create a machine-child-like capability, which we could then use as a basis for shaping and reshaping toward other tasks that we want performed. By leaping past this machine-child stage, you are never likely going to end up with anything other than an “adult” single-domain system that cannot be sufficiently leveraged toward other domains.

Now, I realize that some might recoil in horror. What, you want to replicate human intelligence by creating child-like AI? It seems like a science fiction novel. Humans create AI in a child-like manner. Humans then mess up and the machine-child becomes a malcontent AI adult that decides to turn on its “parents” and kill all of humanity. Yes, we all know about the dire predictions of the coming singularity.

I don’t believe that’s an apt way to portray this. If you’ve already bought into the idea that we are trying to create some kind of adult-like AI, what makes it so strange to instead focus on a child-like AI that can be progressed towards an adult-like AI? It would seem that you would at least be consistent and object to the adult-like AI, if you also were opposed to the machine-child idea.

For my article about the potential of AI as a Frankenstein, see: https://www.aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

For the predictions about the singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the idea that maybe AI should start over, see my article: https://www.aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

For egocentric viewpoints of AI developers, see my article: https://www.aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For my article about conspiracy theories and AI, see: https://www.aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that some AI purists are asking about is whether the auto makers and tech firms are taking the right tack in developing AI for self-driving cars, and whether the AI community ought to instead be taking a concerted AI machine-child approach.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level-5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a skeletal code sketch of this loop follows the list):

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance

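Here is the skeletal code sketch promised above; every stage is a trivial placeholder standing in for a substantial subsystem, and only the ordering of the five steps is the point.

```python
# Skeletal sketch of one pass through the driving task; each stage is a
# trivial placeholder for a substantial subsystem.

def collect_and_interpret(raw_readings):   # step 1: sensor data + interpretation
    return {"obstacles": raw_readings}

def sensor_fusion(percepts):               # step 2: reconcile overlapping sensors
    return percepts

def update_world_model(model, fused):      # step 3: virtual world model updating
    model.update(fused)
    return model

def plan_actions(model):                   # step 4: AI action planning
    return ["slow down"] if model["obstacles"] else ["maintain speed"]

def issue_controls(plan):                  # step 5: car controls command issuance
    print("commands:", plan)

world_model = {}
for frame in [["car ahead"], []]:          # two pretend sensor frames
    fused = sensor_fusion(collect_and_interpret(frame))
    world_model = update_world_model(world_model, fused)
    issue_controls(plan_actions(world_model))
```
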
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the AI machine-child notion, let’s consider how AI self-driving cars are being developed and whether the AI purists have a sensible point that the AI community is currently off-target regarding what should be taking place in this realm of AI.

Let’s start by considering how humans learn to drive a car.

Drive Legally at Age 14 in South Dakota

In most jurisdictions, the youngest that you can begin driving a car is around 16 to 17 years of age. There are some rare exceptions such as South Dakota allowing a driver at the age of 14. The basis generally for using the mid-teen to late-teens as a threshold point is the belief that the human has to be mentally and physically mature enough to take on the rather somber and serious nature of the driving task.

Your arms and legs need to reach the pedals and the steering wheel, and you need sufficient command over your body and limbs to appropriately work the driving controls. You need to have the cognitive capability to perform the driving task, which includes being able to detect the roadway surroundings, assess what the traffic conditions are, make reasoned decisions about the driving maneuvers, and carry out your driving action plan. You need to be responsible and take charge of the car. You need to know the laws about driving and be able to perform the driving task as abiding generally by those laws.

Could an even younger person possibly drive a car? Sure, you could likely drive a car at perhaps the age of ten. On farms, it was not unusual to put a young person to work driving a tractor. Of course, you can contend that driving a vehicle on a farm is not quite the same complexity as driving on a crowded freeway or in a packed inner-city location. In any case, there is nothing that necessarily precludes the possibility of being able to drive at a younger age. It all depends on the mental maturation and the physical maturation.

You could potentially remove or mitigate the physical requirements of being able to drive. With today’s voice command systems, you don’t necessarily need to use pedals for braking and accelerating and could instead give voice instructions to the vehicle. You don’t necessarily need a steering wheel and the use of arms and hands, since you could use a facial tracking system and aim the car by the use of your head or eyes. You might suggest that these alternatives are not better than the traditional physical controls and that the usual physical controls are tried and true, which makes sense, but the point is simply that we could find a means to accommodate a human driver who has not reached a particular physical size so that they could still drive a car.

For brainjacking to drive a car, see my article: https://www.aitrends.com/selfdrivingcars/brainjacking-self-driving-cars-mind-matter/

For the use of ensemble Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/

For federated Machine Learning, see my article: https://www.aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For my article on the benchmarks of Machine Learning capabilities, see: https://www.aitrends.com/selfdrivingcars/machine-learning-benchmarks-and-ai-self-driving-cars/

The cognitive aspects are likely not so easily overcome in terms of being able to accommodate younger and younger drivers. Yes, with the advent of AI self-driving cars, there is less that the human driver needs to undertake in terms of the driving task. But do we want to potentially have a child serve as the “last resort” human driver that is supposed to be ready to take the controls if the AI is unable to perform the driving task? I doubt that we would want this.

So perhaps we’d all settle on the notion that having a human driver start driving in their mid-teens is about right. We could try to push to an earlier age, though this seems like it heightens the risks of untoward aspects during the driving task.

How does a human learn to drive a car?

I remember that with my children, they began by taking a class in how to drive. The class consisted of classroom work wherein they learned about the rules of the road and the laws that govern driving. They then got into a car and drove with a driving instructor, along with times that I went with them and coached or mentored them as they were learning to drive. The driving was initially in relatively safe areas such as an empty mall parking lot. After that was mastered, the next step was a quiet neighborhood with little traffic, then streets with a semblance of traffic, then a harried freeway, and so on.

How are we getting AI systems to be able to drive a car?

It is rather unlike the way in which we get a human to learn to drive a car. The AI system is developed as though it is an adult driver, and we then test it to see if it can perform as such. There is not particularly a learning curve per se that the AI itself has to go through. Yes, I realize that Machine Learning (ML) and Deep Learning (DL) are undertaken, but they are used mainly for the capability of detecting the surroundings of the self-driving car, such as whether there are cars nearby or pedestrians in the street. The ML and DL are not similarly focused on learning the rules of the road and the laws of driving and other “cognitive” elements of driving a car; instead, those tend to be baked into the AI system by the AI developers.
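
That division of labor can be caricatured in a few lines of Python; both functions below are made-up stand-ins for illustration, with the perception side presumed to be learned from data and the driving rules hand-coded by developers.

```python
# Caricature of the split: perception is learned, the "cognition" is coded.

def learned_detector(camera_frame):
    # Stand-in for an ML/DL model trained to spot cars and pedestrians.
    return {"pedestrian_ahead": "pedestrian" in camera_frame}

def baked_in_driving_rules(percepts, speed_mph):
    # Hand-coded rules of the road: nothing here was learned by the AI,
    # it was programmed in directly by the developers.
    if percepts["pedestrian_ahead"]:
        return "brake"
    if speed_mph > 65:
        return "ease off accelerator"   # hard-coded speed-limit rule
    return "maintain"

percepts = learned_detector("pedestrian crossing ahead")
print(baked_in_driving_rules(percepts, speed_mph=40))  # -> brake
```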

Here now is the point made by some AI purists that pertains to this matter.

They would say that we should be trying to develop an AI system that has the capacity to learn, in the equivalent fashion somehow of what a human teenager does, and we should then use that foundation to essentially teach the machine-child to be able to drive a car. This would be equivalent to the human teenager learning to drive a car.

Notice that I am purposely saying “equivalent” because I want to separate the notion of the AI being the exact equal to a human versus it being of some equivalent nature. I don’t want to get us stuck in this discussion on whether the AI is using the same biological kinds of cognitive mechanisms as a human. Some say that we’ll never get a “thinking machine” unless we can precisely replicate the human brain in automation, while others contend that we don’t necessarily need to crack the code of the brain and can instead construct something that is equivalent. I’m not going to get us bogged down in that debate herein and so please go along with my saying the word “equivalent” in this discussion.

You could suggest that we are currently doing a top-down approach to constructing the AI for self-driving cars and that this alternative is a bottom-up approach. In this bottom-up approach, you focus on creating a systems environment that has a capacity to learn, and you then put it toward learning the task at hand, which in this case is driving a car.

Would we be better off going in that direction as a means to achieve an AI system that can drive a car nearly as well as a human can?

Turtle vs. Hare Approach to AI System Progress

It’s hard to say. I think it is considered a much longer path because we don’t yet know how to construct the kind of open-ended learning AI system that could do this. In the classic race of the turtle versus the hare, the top-down approach is the hare, getting out of the gate right away and showing progress, while the bottom-up approach is more like the turtle, plodding along slowly.

There are some that assert that we aren’t going to be able to achieve true Level 5 AI self-driving cars and we’ll eventually hit the limits of this top-down approach. At that point, the world will be asking what happened. How come the vaunted true Level 5 was not achieved? If you were to say that it was because we started at the wrong place, it could be a bit disturbing.

The AI purists would say that the glamour of today’s progress on AI self-driving cars is regrettably merely reinforcing a top-down approach that has no successful end. They would say this progress is a tease, confusing us all into not doing the truly hard work of aiming at the bottom-up approach. Not only won’t we get to the true Level 5, but the AI field overall will be hampered and will not have made as much progress because we avoided the machine-child path.

For the boundaries aspects of AI self-driving cars, see my article: https://www.aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the bifurcation of autonomy, see my article: https://www.aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For the arguing machines approach, see my article: https://www.aitrends.com/features/ai-arguing-machines-and-ai-self-driving-cars/

For my article about safety and AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Here’s another twist on this topic that you might find of interest.

Maybe the AI purists are right and we need to focus on the AI as a learning system, crafting a machine-child, for which we then advance and progress and mature it into various kinds of adult-like AI systems.

If they are indeed right about this, what is the lowest “age” machine-child AI system that we should be trying to develop?

For the moment, in terms of driving a car, I suggested that we’d aim at a teenager machine-child cognitive level. That seems to fit with the cognitive maturation of when humans learn to drive a car. Perhaps though the teenage cognitive level is too old. We might need to aim at a younger cognitive maturity age.

We can revisit the blocks world. I had mentioned that children use blocks at a very young age and that the act of doing so is much more than simply playing with blocks. These quite young children are learning all kinds of aspects about the world overall. They are also learning to learn. They must gauge what they don’t know and how they will learn it. This is a dovetailing of learning something while also learning about how to learn something.

Does it seem plausible for AI developers to construct an AI system that magically is at the teenage years of cognitive capability, or do we need to aim at a much younger age for the machine-child that we want to build? The human teenage cognitive skillset already includes the learnings of having played with blocks as a child. It could be that we can’t leap past that when artificially creating such an AI system. This block playing as a baby or infant could be integral to being able to ultimately produce a machine-child that has the teenage cognitive capabilities.

I know it seems farfetched to consider that you might need to start at the baby or infant level and begin by having an AI system that plays with blocks. From blocks to driving a car? That seems not so related.

I’ll offer a variant on the blocks world and see if it helps. Most children are likely to have had a tricycle or something similar when they were very young. You might have grown up riding a Big Wheel, which is a famous kind of tricycle that is today listed in the National Toy Hall of Fame. For those of you that are nostalgic about the Big Wheel, you’ll be pleased to know that there is an annual celebration in San Francisco on Easter Sunday of people riding Big Wheels. It is a BYOBW, Bring Your Own Big Wheel event!

Anyway, I bring up the topic of tricycles to offer something that might seem closer to the nature of driving a car.

I noticed that when my children were very young and rode their tricycles, they would at first bump into things as they rode the contraption and had to get used to being in motion while using the vehicle. They quickly figured out that they needed to steer the tricycle and avoid objects. They realized there were stationary objects that had to be avoided. They soon realized there were moving objects that had to be avoided, including other tricycle riders and “pedestrians” such as parents walking around as the children were riding. They had to mentally calculate the speeds, directions, and distances of other objects and relate them to their own speed, direction, and efforts of “driving” the tricycle.

The kids also learned that they at times needed to quickly hit the brakes on the tricycle to avoid hitting things or to be able to make other kinds of maneuvers. They realized or learned that they could accelerate via pumping their legs on the tricycle pedals, and the use of acceleration was vital to how they traversed an area. They made a mental map of the area in which they were riding and would try to optimize how to get around the backyard area and then out to the front yard area.

Does this sound familiar in terms of the kinds of cognitive and physical skills needed for being able to later on drive a car? I’d say so.

I know it is still a jump to go from riding a tricycle to being able to drive a car, but the point is that if you weren’t sure how learning about blocks was related to driving a car, hopefully you can see that riding a tricycle is very much related to being able to drive a car. The tricycle riding leads to a set of cognitive and physical skills that can serve as a handy base when later on learning to drive a car.

Per the AI purists, we might need to focus on developing AI systems at the infant or baby cognitive level and get those AI systems to mature forward from that starting point.

When I mention this notion at AI conferences, there is usually someone that will say that this could lead to a kind of absurdity of logic. I seem to be suggesting that the teenage years are too late, so we need to aim at infants or toddlers. But maybe that’s too late and we need to aim at a baby or even a newborn. But maybe that’s too late and we need to aim at conception.

At that juncture of the logic, we seem to have hit a wall, in that it maybe no longer makes sense to keep going earlier and earlier in the life cycle. And if we can readily claim that jumping into the life cycle at any point will deny us the earlier learnings, it would seem that we have no choice but to start at the very beginning and cannot merely pick up the mantle at a later point such as a baby or infant.

Others would say that this is a reduction to absurdity and that we can get onto the life-cycle merry-go-round at a point where it is already spinning and still be fine. We don’t need to reduce this to some zero point.

Let’s pretend that we agree to shift attention of the AI community toward developing an AI machine-child system. We hope this will get us more robust adult-like AI systems. We especially hope that it will get us a true Level 5 AI self-driving car system, wherein we are using the AI machine-child to have it gradually become the equivalent of a licensed human driver.

There are other aspects about childhood of humans that we need to wonder whether they are essential to progressing the AI machine-child toward machine-adulthood.

For example, there is a period of time when a child will undergo so-called childhood amnesia. Usually around the age of 7, your memories of your younger days begin to rapidly erode. You seem to retain key learnings, but specifics of particular dates, events, and other aspects are gradually lost. No one yet knows why this takes place in humans.

One theory is that your brain is undergoing a radical restructuring and reorganization, which it cognitively needs to do to get ready for further advancements. It is perhaps akin to a house that was fine when you only had a few people living in it, but when you get toward twenty people the house needs to be overhauled. You need to knock out some walls and make space for what’s going to come next. Maybe that’s what happens in the brain of a toddler or young child.

Others say that you maybe don’t lose any of your memories at all. They are all still there in your noggin. Perhaps the brain has merely put many of those thoughts under lock-and-key. The assumption is that you don’t need them active and they can be archived. This is why at some later point in life you might suddenly have a flashback to a younger age, doing so because the lock-and-key was opened for that particular filed-away item.

In any case, if we build ourselves an AI machine-child that is at a young age of say 3 or 4 years old, cognitively equated to a human, and if we progress the machine-child forward, will we eventually need it to undergo the childhood amnesia that humans seem to encounter?

Perhaps the AI machine-child won’t need to do so. Or, maybe we don’t do so and then the AI machine-child gets stuck and cannot get past the age of 7 in terms of cognitive maturation. Some would say you need to do likewise with the AI machine-child as you would with a human child, while others say that we don’t necessarily need to replicate the same aspects as a human child and that we are carrying the metaphor or analogy of the machine-child too far.

The aspects of how to progress forward or mature the AI machine-child gets us into the same kind of bog. For example, a child does not just sit in a classroom all day and night and learn things. They wander around. They sleep. They eat. They daydream. They get angry. They get happy. The question arises whether those are all inextricably bound into the cognitive development.

If all of these other experiences are integral to the cognitive development, we then are faced with quite a dilemma about the AI machine-child.

Whereas we might have assumed we could build this AI cognitive machine and mature it purely in a cognitive way, perhaps we need to have it experience all of these other life related experiences to get the cognitive progression that we want. I’m sure you’ve seen science fiction movies whereby they decide that they need to raise the AI robot as though it is a human child, aiming to give it the same kinds of human values and experiences that we have as humans.

Would we build the AI machine-child and then need to treat it like a foster child and adopt it into a human family? If so, this also implies that it would take years to progress the AI machine-child, since it is presumably taking the same life path as a human. Are we willing to wait years upon years for the AI machine-child to gradually develop into an adult-like AI?

I think you can see why few AI developers are pursuing this path, especially as it relates to AI self-driving cars. Imagine that you go to the head of a major automotive firm and try to explain that rather than building an AI system today that will drive self-driving cars tomorrow, you are instead proposing to develop an AI machine-child which, after maturing it for say the next 15 years, might be able to act as a teenager, at which point you can train it to drive a car.

Boom, drop the mic. That’s what would happen. You’d get a startled look and then probably get summarily booted out of the executive suite.

For those of you intrigued by the AI machine-child approach, I’m guessing you might have already been noodling on another aspect of the matter, namely, whether there is any top-end limit to the cognitive maturing of the AI machine-child.

In essence, maybe we could keep cognitively maturing the AI machine-child and it would surpass human cognitive limits. It would just keep learning and learning and learning. This takes us to the super-intelligence AI debate. This also takes us into the debate about whether we are going to reach a point of singularity. Of course, you could try to argue that if this machine-child is somehow the equivalent of humans, perhaps it does have an end-limit, as humans seem to, and the machine-child will eventually reach a point of dementia.

For my views about the singularity, see my article: https://www.aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For AI fail-safe concerns, see my article: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/

For qualms about motivational AI, see my article: https://www.aitrends.com/selfdrivingcars/motivational-ai-bounded-irrationality-self-driving-cars/

For the crossing of the Rubicon and AI, see my article: https://www.aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

Conclusion

I hope that when you next see a child playing with blocks or riding a tricycle, you will admire all of the hidden learning and cognitive maturation taking place right in front of you, though it might not be evident per se since you cannot peek into their brains. Will we only be able to ultimately achieve true AI if we can replicate this same life-cycle of cognitive maturation?

If you believe that we are currently on an AI path to a dead-end, you might find of value the AI machine-child approach. In a sense, we might need to take two steps backward to go five steps forward. The steps forward at this time are maybe going to hit a brick wall. Instead, the AI machine-child might be the means to get past those barriers.

The topic of the AI machine-child often gets chuckles from people, and they toss off the topic as a crazy sci-fi kind of notion. They might be right. Or they might be wrong. It’s not as simple as waving a hand and claiming that the notion has no merits. Even if you don’t buy into the notion entirely, there are bits and pieces of it that might be applied to our AI approaches of today.

Excuse me for a moment, as I hear the AI machine-child crying in the crib, and I want to get over to it before it gets itself into a robotic tizzy. I’m working on a future self-driving car driver, and I need to make sure it grows up ready to hit the road.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.