The Cognitive Intersect of Human and Artificial Intelligence – Symbiotic Nature of AI and Neuroscience

Source: geralt/pixabay

Neuroscience and artificial intelligence (AI) are two very different scientific disciplines. Neuroscience traces back to ancient civilizations, while AI is a decidedly modern phenomenon. Neuroscience branches from biology, whereas AI branches from computer science. At a cursory glance, it would seem that a science of living systems would have little in common with one that springs from inanimate machines wholly created by humans. Yet discoveries in one field may result in breakthroughs in the other: the two fields share a significant problem, and significant future opportunities.

The origins of modern neuroscience are rooted in ancient human civilizations. One of the earliest known descriptions of the brain’s structure and of neurosurgery dates back to 3000–2500 B.C., and survives largely due to the efforts of the American Egyptologist Edwin Smith. In 1862 Smith purchased an ancient scroll in Luxor, Egypt. In 1906 Smith’s daughter gave the scroll to the New York Historical Society, which later asked James H. Breasted to translate it; his translation was published in 1930. The Edwin Smith Surgical Papyrus is an Egyptian neuroscience handbook from circa 1700 B.C. that summarizes a 3000–2500 B.C. Egyptian treatise describing the brain’s external surfaces, cerebrospinal fluid, intracranial pulsations, the meninges, the cranial sutures, surgical stitching, brain injuries, and more.

In contrast, the roots of artificial intelligence sit squarely in the middle of the twentieth century. American computer scientist John McCarthy is credited with coining the term “artificial intelligence” in a 1955 proposal for a summer research project that he co-authored with Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. The field of artificial intelligence was subsequently launched at a 1956 conference held at Dartmouth College.

The history of artificial intelligence is a modern one. In 1969 Marvin Minsky and Seymour Papert published the book Perceptrons: An Introduction to Computational Geometry, which analyzed the limits of simple single-layer perceptrons and speculated about the learning power of networks with more layers. During the 1970s and 1980s, AI machine learning lay relatively dormant. In 1986 David E. Rumelhart, Geoffrey Hinton, and Ronald J. Williams published “Learning representations by back-propagating errors,” which showed how neural networks consisting of more than two layers could be trained via backpropagation.
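
To make that idea concrete, here is a minimal sketch of backpropagation in Python, assuming only the numpy library; the tiny XOR task, the two-layer shape, and the learning rate are illustrative choices, not details from the 1986 paper.

```python
# A minimal backpropagation sketch (illustrative only; not the 1986 paper's
# exact formulation). A two-layer network learns XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: activations flow layer by layer toward the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through each layer.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer

    # Gradient-descent updates (learning rate 0.5, chosen arbitrarily).
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```

The same error-propagation rule extends to arbitrarily many layers, which is what made the technique consequential for deeper networks.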

From the 1980s to the early 2000s, the graphics processing unit (GPU) evolved from a gaming component into a general-purpose computing device, enabling parallel processing for faster computation. In the 1990s, the internet spawned entirely new industries, such as cloud-computing-based Software-as-a-Service (SaaS). These trends made computing faster, cheaper, and more powerful.

In the 2000s, big data sets emerged with the rise and proliferation of internet-based social media sites. Training deep learning models requires large data sets, and the emergence of big data accelerated machine learning. In 2012, a major milestone in AI deep learning was achieved when Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton trained a deep convolutional neural network with 60 million parameters, 650,000 neurons, and five convolutional layers to classify 1.2 million high-resolution images into 1,000 different classes. The team made AI history by demonstrating backpropagation in a GPU implementation at that scale of complexity. Since then, there has been a worldwide gold rush to deploy state-of-the-art deep learning techniques across nearly all industries and sectors.
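
To give a sense of what such a model looks like in code, here is a minimal sketch of a much smaller convolutional classifier, assuming the PyTorch library; the layer widths and input size are illustrative assumptions, nowhere near the 60-million-parameter 2012 network.

```python
# A tiny convolutional classifier (illustrative only; not the 2012
# architecture). Convolutional layers extract image features; a final
# fully connected layer maps them to 1,000 class scores.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB in, 16 filters out
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # for 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyConvNet()
logits = model(torch.randn(1, 3, 224, 224))  # one fake 224x224 RGB image
print(logits.shape)  # torch.Size([1, 1000])
```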

The future opportunities that neuroscience and AI offer are significant. Global spending on cognitive and AI systems is expected to reach $57.6 billion by 2021, according to IDC estimates. The current AI renaissance, driven largely by deep learning, is a global movement with worldwide investment from corporations, universities, and governments. The global neuroscience market is projected to reach $30.8 billion by 2020, according to figures from Grand View Research. Venture capitalists, angel investors, and pharmaceutical companies are making significant investments in neuroscience startups.

Today’s wellspring of global commercial, financial, and geopolitical investment in artificial intelligence owes a debt, in some part, to the human brain. Deep learning, a subset of AI machine learning, pays homage to the biological brain’s structure. Deep neural networks (DNNs) consist of two or more “neural” processing layers made up of artificial neurons (nodes). A DNN has an input layer, an output layer, and layers in between; the more artificial neural layers, the deeper the network.
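
As a concrete illustration of that layered structure, here is a minimal numpy sketch of a forward pass through a DNN; the layer widths and the ReLU activation are illustrative assumptions, not a prescribed architecture.

```python
# Illustrative sketch of the input -> hidden -> output structure described
# above; layer widths are arbitrary, and ReLU stands in for the "neurons."
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 16, 4]  # input layer, three hidden layers, output layer

# One weight matrix and one bias vector per connection between adjacent layers.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each layer's output becomes the next layer's input; adding more
    # entries to layer_sizes makes the network "deeper."
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, x @ W + b)  # ReLU activation at each node
    return x

print(forward(rng.normal(size=8)))  # a 4-dimensional output-layer response
```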

The human brain and its associated functions are complex. Neuroscientists do not know many of the exact mechanisms by which the human brain works. For example, scientists do not know exactly how general anesthesia acts on the brain at the neurological level, or why we sleep or dream.

Similarly, computer scientists do not know exactly how deep learning arrives at its conclusions, owing to that same complexity. An artificial neural network may have billions or more parameters based on the intricate connections between its nodes; the exact decision path is a black box.
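
A back-of-the-envelope calculation shows how quickly those parameter counts grow; the fully connected layer sizes below are invented purely for illustration.

```python
# Count trainable parameters in a small fully connected stack (illustrative
# sizes only). Each pair of adjacent layers contributes weights plus biases.
layer_sizes = [1000, 4096, 4096, 1000]

params = sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))
print(f"{params:,} trainable parameters")  # 24,978,408 for this modest stack
```

Even this modest four-layer stack has roughly 25 million adjustable values, and no single one of them carries a human-readable meaning, which is why tracing an exact decision path through a large network is intractable.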

Read the source article in Psychology Today.