Slowly but surely, artificial intelligence (AI) is advancing. Where it stands in comparison to human intelligence is difficult to say because people excel across a broad set of intellectual tasks, while AI tends to be narrowly focused. But machines keep getting better at tasks like sifting through vast quantities of data, understanding natural language, and recognizing objects in images.
Over the weekend, scientists from a Silicon Valley company called MetaMind described their efforts to advance the state of the art in a research paper, “Dynamic Memory Networks for Visual and Textual Question Answering.”
The paper explains improvements to the memory and input modules of a system called a dynamic memory network (DMN), a general architecture for answering questions about text and images. A DMN might, for example, parse a story in which a person moves through several rooms and drops an object in one of them. Asked where the object is, the DMN can infer the correct answer from the story.
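To make the task concrete, here is a toy, hand-coded illustration of the kind of reasoning involved (the story, the `locate` helper, and its rules are invented for this sketch; a DMN learns this behavior from data rather than following explicit rules like these):

```python
# Invented example story, in the style of room/object question-answering tasks.
story = [
    "John went to the hallway.",
    "John picked up the football.",
    "John went to the garden.",
    "John dropped the football.",
]

def locate(obj, story):
    """Hand-coded tracker: remember the last room each person entered;
    when the object is dropped, it stays in that room."""
    location = {}   # person -> current room
    holder = None   # who is carrying the object
    place = None    # where the object ended up
    for sentence in story:
        words = sentence.rstrip(".").split()
        person = words[0]
        if "went" in words:
            location[person] = words[-1]
        elif "picked" in words and obj in words:
            holder = person
        elif "dropped" in words and holder == person:
            place = location.get(person)
            holder = None
    return place

print(locate("football", story))  # → garden
```

Answering the question requires chaining two facts (who last held the football, and which room that person was in when it was dropped), which is exactly the multi-step inference the DMN's memory module performs.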
MetaMind’s researchers have devised a way for their model to learn the facts it needs to reason without those facts being labeled during training. Their approach adds an input component that reads text in both directions, rather than in a single pass from beginning to end.
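A minimal sketch of the bidirectional idea, in plain Python with an invented toy recurrence (not MetaMind's actual module): a forward pass gives each position a summary of what came before it, a backward pass a summary of what comes after, so combining the two lets every sentence's representation draw on context from both directions.

```python
def forward_pass(xs, step):
    """Run a simple recurrence left to right, recording each hidden state."""
    h, out = 0.0, []
    for x in xs:
        h = step(h, x)
        out.append(h)
    return out

def bidirectional(xs, step):
    """Combine a forward pass with a backward pass over the same sequence."""
    fwd = forward_pass(xs, step)
    bwd = forward_pass(xs[::-1], step)[::-1]
    # Each position now sees both its past (fwd) and its future (bwd).
    return [f + b for f, b in zip(fwd, bwd)]

# Toy "recurrence" standing in for a learned recurrent unit:
# an exponential moving average of the inputs.
step = lambda h, x: 0.5 * h + 0.5 * x

xs = [1.0, 0.0, 0.0, 1.0]
print(bidirectional(xs, step))  # → [1.0625, 0.375, 0.375, 1.0625]
```

Note the symmetric output: the two endpoint positions score alike because each borrows context from the other end of the sequence, which a single forward-only pass cannot do.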