Guide to cognitive computing: An interview with IBM Watson solutions architect Chris Ackerson


Solutions architects are the experts on our team at understanding and implementing Watson technology. They have developed this expertise by providing technical support to our partners through a variety of channels. Through that work, they have formed a deep understanding of, and point of view on, not only the Watson APIs but also the cognitive landscape at large. I interviewed solutions architect Chris Ackerson for his thoughts on Watson and cognitive computing, as well as his specific tips and resources.

Where do you see the Watson APIs growing in 2016 and beyond?

The Watson Developer Cloud launched back in 2014 with a single service, the QA API. Since then we've expanded to more than 30 services, including APIs for natural language processing, computer vision, and speech recognition, among other capabilities. In addition, we decomposed the QA API into independent functional APIs, including dialog, natural language classification, retrieve and rank, and document conversion, greatly increasing the flexibility developers have in building conversational applications.
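To make that concrete, here is a minimal sketch of calling one of those decomposed services, the Natural Language Classifier, using the watson-developer-cloud Python SDK of that era. The username, password, and classifier ID are placeholders you would obtain from your own service instance, and older SDK versions return a plain dict rather than a response wrapper.

import json

from watson_developer_cloud import NaturalLanguageClassifierV1

# Placeholder service credentials (hypothetical values from your own
# Watson Developer Cloud service instance).
classifier_service = NaturalLanguageClassifierV1(
    username='YOUR_SERVICE_USERNAME',
    password='YOUR_SERVICE_PASSWORD')

# 'YOUR_CLASSIFIER_ID' refers to a classifier you have already trained
# on labeled example texts.
result = classifier_service.classify('YOUR_CLASSIFIER_ID',
                                     'How hot will it be today?')

# The response maps the input text to the trained classes, each with a
# confidence score.
print(json.dumps(result, indent=2))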

The net effect is that the number of use-case patterns developers are experimenting with has exploded. We continue to bring new APIs to the platform (we just released emotion modeling and an adaptable visual recognition service), but 2016 has brought an enhanced focus on identifying repeatable use-case patterns and building accelerators that help developers stand up applications quickly. Examples of these accelerators can be found in the official Application Starter Kits hosted on the Watson Developer Cloud, in the enhanced tooling for building conversational apps and domain annotators that will be released this year, and in the ever-growing set of open source tools and sample apps available in the Watson Developer Cloud and Cognitive Catalyst git repositories.

 
