Defining Intelligence: An Overview of Machine Learning, Beyond the Hype and Into the Methods and Applications


By Maryanna Saenko, Analyst, and Evan Kodra, Data Scientist, Lux Research

Artificial intelligence (AI), whether for cars, drones, medical diagnostics, or Internet searches, inspires as much excitement as it does fear. AI is also one of the most over-hyped phrases of the 21st century. Today, we have implemented only the bare minimum of AI research's capabilities to make our machines and systems smarter, faster, and more efficient. However, implementing AI techniques in today's products and processes comes with a host of challenges. First among them is setting realistic expectations for how well any given AI system will work, whether it's for object recognition, language processing, or movie recommendation.

  • AI has endless implications for businesses across every industry, from health care to automotive to chemicals and materials manufacturing, and already enables Internet searches, financial forecasting, weather prediction, traffic routing, and airline pricing, among thousands of other applications.
  • Futurists tout both the virtues of AI and the apocalyptic consequences of the technology. Unfortunately, both views rest on serious misconceptions of what AI is and how it is evolving. We must overcome these flawed perceptions or risk investment in, partnership with, and implementation of technologies that overpromise and underdeliver.

Lux Research provides a framework for navigating the AI landscape using a Sankey diagram visualization that breaks the space down into three key areas: Applications, Domains, and Methods.

[Figure: Sankey diagram mapping AI applications to domains to methods to root disciplines, panels A-C (Lux Research)]

The visualization maps applications to domains, domains to methods, and methods to their root disciplines. Methods are the technical approaches that computer and data scientists apply to solve machine learning challenges. Domains are, in essence, fields of study within AI. Applications are the complex tasks that computers must complete to successfully execute higher-level functions.
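To make the three-level mapping concrete, below is a minimal sketch, not drawn from the Lux report, of how such a landscape could be encoded as a Sankey diagram with the open-source plotly library; the node labels and link weights are illustrative assumptions.

```python
# Minimal sketch (not Lux Research's actual code) of a three-level
# application -> domain -> method -> discipline Sankey mapping.
# Nodes, links, and weights below are illustrative assumptions.
import plotly.graph_objects as go

labels = [
    "Affective computing",             # 0: application
    "Computer vision", "NLP",          # 1-2: domains
    "Deep learning", "Regression",     # 3-4: methods
    "Machine learning", "Statistics",  # 5-6: disciplines
]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20),
    link=dict(
        source=[0, 0, 1, 2, 3, 4, 4],  # flows: application -> domain,
        target=[1, 2, 3, 3, 5, 5, 6],  # domain -> method -> discipline
        value=[3, 2, 3, 2, 4, 1, 2],   # widths ~ relative importance
    ),
))
fig.write_html("ai_landscape_sankey.html")
```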

  • Understanding AI methodologies and domains based on their core concepts yields insight into the hype versus the reality of some of the most commonly cited techniques. For example, deep learning approaches can be extremely useful but also have their limitations: deep learning works well where there are massive, well-labeled data sets, and can bring huge advances in areas like speech, voice, and object recognition (a minimal illustration follows this list).
  • There is no singular solution that will enable “intelligent machines,” but rather, AI will continue to grow as a combination of techniques and approaches used to solve discrete problems.
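As a deliberately small illustration of the first bullet, the sketch below trains a modest neural network on scikit-learn's built-in, well-labeled digits images. It is a stand-in for the far larger labeled corpora and deeper architectures that production systems use, not an example from the report.

```python
# Minimal sketch: a small neural network on a well-labeled image dataset.
# scikit-learn's 1,797-image digits set stands in for the far larger
# labeled corpora that production deep learning systems actually require.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digit images, labeled
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Two hidden layers; real "deep" vision models are far larger (CNNs, etc.).
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                    random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```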

Successfully applying AI techniques requires understanding the characteristics and constraints of the available data. Not all AI methods suit the same types of problems: some techniques work best on huge data sets where the data is all of one type, whereas others are better when the input variables span a range of data types, such as images, text, and sound.

  • No one set of methods or tools can easily create successful applications. These applications are solvable, but they generally require a clever combination of methods (see the sketch after this list).
  • Domain-specific solutions will remain better bets than generalized solutions for decades to come. The success of AI techniques for specific applications has as much to do with the available data as with the mathematical methods, which makes domain-specific solutions possible but generalized solutions extremely difficult, if not impossible.
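To show what such a combination of methods might look like in practice, here is a hedged sketch with hypothetical column names and toy data: one scikit-learn pipeline that handles mixed input types by chaining TF-IDF features for text with standardized numeric features, feeding a single classifier.

```python
# Minimal sketch of "a clever combination of methods": one pipeline that
# handles mixed input types (free text plus numeric features). The column
# names and toy data here are hypothetical, purely for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "review_text": ["great product", "arrived broken",
                    "works as described", "terrible support"],
    "price": [19.99, 5.49, 12.00, 7.25],
    "label": [1, 0, 1, 0],
})

pre = ColumnTransformer([
    ("text", TfidfVectorizer(), "review_text"),  # text -> sparse vectors
    ("num", StandardScaler(), ["price"]),        # numeric -> z-scores
])
model = Pipeline([("features", pre), ("clf", LogisticRegression())])
model.fit(df[["review_text", "price"]], df["label"])
print(model.predict(df[["review_text", "price"]]))
```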

The figure above shows the landscape of AI through the relationships between these different levels. It is important to note that the figure is representative, not exhaustive. In some cases, the relationships are better thought of as a Venn diagram: for example, taken literally, the figure implies that statistics, data mining, and machine learning are independent, when in reality the three disciplines overlap substantially. Deep learning could also fairly be considered just another type of regression. For clarity, however, we have attempted to disentangle the important components of the landscape and make them as orthogonal as possible.

In panel A, AI applications are shown on the left; their relationships with the domains on the right are illustrated via connections. For example, affective computing applications predominantly utilize computer vision (e.g., facial expression recognition), natural language processing (e.g., semantic analysis of text), and computer audition (e.g., inferring emotion from voice). The size of each connection portrays the relative importance of each domain to each application area.

Panel B maps domains to the AI methods and techniques that they harness. For example, computer vision has been using deep learning successfully, especially in the past several years; previously, computer vision relied more heavily on classical machine learning techniques, such as random forests and support vector machines. Images and videos can, in total, comprise massive but somewhat redundant data sets; hence, in some cases, dimensionality reduction serves to compress that information in the process of making predictive inferences. Pattern similarity methods can infer similarities among different sets of images or videos (both ideas are sketched below).
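Below is a minimal sketch of those two ideas, using scikit-learn's small digits data set as an illustrative stand-in for a real image collection: PCA compresses each image, and cosine similarity then scores pattern similarity in the compressed space.

```python
# Minimal sketch: PCA compresses redundant image data into a few
# dimensions; cosine similarity then scores pattern similarity
# between images in the compressed space.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

X, _ = load_digits(return_X_y=True)        # 1,797 images x 64 pixels
Z = PCA(n_components=16).fit_transform(X)  # 64 -> 16 dimensions

# Similarity of the first image to the next five, in the reduced space.
print(cosine_similarity(Z[:1], Z[1:6]))
```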

Finally, panel C shows the connections between methods and techniques and their root disciplines. For example, deep learning is, in essence, a class of methods that came from the machine learning community. Classic regression techniques are also claimed by machine learning but, from a historical perspective, are more heavily rooted in statistics (a closed-form example appears below).
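For readers who want that statistical workhorse in closed form, here is a minimal sketch of ordinary least squares fit on synthetic data; the coefficients and data are invented purely for illustration.

```python
# Minimal sketch of the statistical workhorse behind "classic regression":
# ordinary least squares, i.e. the least-squares solution to X beta = y.
# The data here is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=100)  # true intercept 2, slope 0.5

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares coefficients
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}")
```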

Visit Lux Research for more details on the report titled, “Defining Intelligence — An Overview of Machine Learning, Beyond the Hype and into the Methods and Applications.”