Why AlphaGo Is Not AI

What is AI and what is not AI is, to some extent, a matter of definition. There is no denying that AlphaGo, the Go-playing artificial intelligence designed by Google DeepMind that recently beat world champion Lee Sedol, and similar deep learning approaches have managed to solve quite hard computational problems in recent years. But will they get us to full AI, in the sense of an artificial general intelligence, or AGI, machine? Not quite, and here is why.

One of the key issues when building an AGI is that it will have to make sense of the world for itself, to develop its own internal meaning for everything it will encounter, hear, say, and do. If it fails to do this, you end up with today’s AI programs, where all the meaning is actually provided by the designer of the application: the AI basically doesn’t understand what is going on and has only a narrow domain of expertise. A small sketch after this paragraph illustrates the point.
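
To make that claim concrete, here is a minimal, hypothetical sketch (not AlphaGo’s or any real system’s code): a typical supervised model only ever maps numbers to an output index, and it is the human designer, through a hard-coded label table, who decides what that index “means.” The model itself never touches that interpretation.

```python
import random

# Designer-chosen label set: the mapping from output index to meaning
# lives here, outside the model, written by a human. (Hypothetical labels.)
LABELS = ["cat", "dog", "play_move_d4"]

def toy_model(features):
    """Stand-in for a trained network: returns one score per output index.
    It only sees numbers; it has no access to what any index denotes."""
    random.seed(sum(features))  # deterministic toy scores for the example
    return [random.random() for _ in LABELS]

def predict(features):
    scores = toy_model(features)
    best_index = max(range(len(scores)), key=scores.__getitem__)
    # The "meaning" appears only at this line, supplied by the designer's table,
    # not derived by the system from its own experience of the world.
    return LABELS[best_index]

if __name__ == "__main__":
    print(predict([0.2, 0.7, 0.1]))
```

In this toy setup, swapping the strings in LABELS changes what the system “says” without changing anything the system has learned, which is exactly the sense in which the meaning belongs to the designer rather than to the program.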

The problem of meaning is perhaps the most fundamental problem of AI, and it remains unsolved today. One of the first to articulate it was cognitive scientist Stevan Harnad, in his 1990 paper “The Symbol Grounding Problem.” Even if you don’t believe we are explicitly manipulating symbols, which is indeed questionable, the problem remains: the grounding of whatever representation exists inside the system in the real world outside.

Read the source article at IEEE Spectrum