Google has developed some impressive mobile applications that perform image and speech recognition. Google also recently announced an open source Natural Language Understanding (NLU) system called SyntaxNet. This NLU system is built on Google's TensorFlow, an open source neural network framework, and Google reports an overall 90 percent accuracy rate with it. That is a significant advance over the state of the art from just ten years ago, when much of natural language processing amounted to part-of-speech tagging — labeling words as verbs, nouns, and so on.
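To make concrete what part-of-speech tagging involves, here is a minimal toy tagger in Python. It uses a tiny hand-built lexicon with a crude suffix fallback — purely illustrative, and nothing like the neural approach SyntaxNet actually takes:

```python
# Toy part-of-speech tagger: look each word up in a small
# hand-built lexicon, falling back to a crude suffix rule.
# The lexicon and rules here are illustrative only.

LEXICON = {
    "the": "DET",
    "a": "DET",
    "dog": "NOUN",
    "pizza": "NOUN",
    "engineer": "NOUN",
    "orders": "VERB",
    "eats": "VERB",
    "quickly": "ADV",
}

def tag(sentence):
    """Return (word, tag) pairs for a whitespace-tokenized sentence."""
    tags = []
    for word in sentence.lower().split():
        if word in LEXICON:
            tags.append((word, LEXICON[word]))
        elif word.endswith("ly"):
            tags.append((word, "ADV"))   # crude suffix heuristic
        else:
            tags.append((word, "NOUN"))  # default guess
    return tags

print(tag("The engineer orders pizza quickly"))
# [('the', 'DET'), ('engineer', 'NOUN'), ('orders', 'VERB'),
#  ('pizza', 'NOUN'), ('quickly', 'ADV')]
```

Real taggers replace the lexicon and suffix rules with statistical or neural models trained on annotated corpora, which is how systems like SyntaxNet handle words and constructions they have never seen before.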
Understanding user intent
Having a system that understands natural language, with all of the ambiguity that goes with it (especially in English), and still achieves a 90 percent accuracy rate is impressive to say the least. However, this is not really AI or cognitive computing, despite what the pundits write in the press. The ability to interpret user intent is a whole different ballgame — one that separates cognitive computing solutions from statistical reasoning ones.
A couple of companies have shown impressive results with understanding user intent. Siri creators Dag Kittlaus and Adam Cheyer of Viv Labs recently demonstrated an AI personal assistant called Viv. In the demo, the engineers ordered pizza through Viv: the assistant understood their location and their pizza preferences, and placed an order with the closest pizzeria that could fill it. This was done without a Google search, a single phone call, or any typing at all — and without downloading an app. That is a remarkable feat in itself.