In February 2018, Dean of the School of Engineering Anantha Chandrakasan, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) director Daniela Rus, and James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, participated in an “AMA” (ask me anything) session on Reddit. DiCarlo, the Peter de Florez (1938) Professor of Neuroscience, described his research goal in the AMA as follows: “to reverse engineer the brain mechanisms that underlie human visual intelligence, such as our ability to recognize objects on a desk, words on a page, or the faces of loved ones.” He tackled several Reddit users’ questions related to this goal, including the following:
U/DOPU: There’s often great confidence (at MIT, particularly) placed on the idea that a better understanding of the brain and the mind that runs on it could very well lead to better computer algorithms and to general, strong AI. Two instances where this idea does seem to have had moderate success are perhaps reinforcement learning (somewhat inspired by VTA error-signaling neurons) and deep neural nets (somewhat inspired by [the] visual cortex). And yet so many other computational systems that exhibit astounding amounts of problem-solving capabilities seem to draw very little inspiration from nervous systems—Wolfram Alpha, for example, or Boston Dynamics’ robots, or a lot of the expert systems AI work done a few decades ago. What is the argument for going about attempting to figure out human intelligence such that we can use it on machines, besides the couple [of] examples I listed above? Wouldn’t it perhaps be easier to just focus on building general AI? In other words, why the focus on human intelligence?
DICARLO: Great question! You correctly point out that not all areas of progress in AI-related systems have been driven by detailed knowledge of the brain and the mind (although many of these are at least brain-inspired).
So one way to phrase your question is this: What is the most efficient path to discover human-level AI? Path 1: Have engineers work on their own to see how far they can get. Path 2: Have engineers work with guidance from the brain and the mind. No one knows the answer to the question of which path is faster.
However, the recent successes (esp[ecially] reinforcement learning and deep CNNs [convolutional neural networks]/deep learning) have shown that Path 2 can deliver a very impressive return. The human brain has had millions of years to develop its capabilities—while our engineers could probably work faster than evolution, it might take many, many years to find processing strategies that work as well as the brain in some aspects of intelligence. Thus, Path 1 may be very, very long. So why not take a huge shortcut and look to the brain and the mind (Path 2)?
Also note that the above considerations are only about the question of paths to AI. But Path 2 has additional human benefits as well. An engineering description of the brain will not only allow us to build better machines. It will also allow us to see new ways to repair, educate, and perhaps even augment our own minds!