At the March 1 symposium launching the MIT Intelligence Quest, a panel discussion explored the social implications of artificial intelligence, ranging from unequal access to technology and its benefits to the threats of surveillance and false news. Among the groups at MIT actively working to examine and address such issues are the School of Humanities, Arts, and Social Sciences (SHASS) and the MIT Media Lab. Their leaders shared thoughts on why these concerns, though not as new as one might think, are newly urgent.
Melissa Nobles, Kenan Sahin Dean of SHASS; professor of political science: “What are the social, economic, political, artistic, ethical, and spiritual consequences of trying to make what happens in our minds happen in a machine? Who does this machine answer to? How do we ensure that the results of our efforts act as moral agents in society? Answering these questions responsibly means first backing up a little. What does it really mean to think? What is intelligence, anyway? Philosophers, social scientists, and artists have been grappling with these questions for centuries, but today they are being asked in a different context. We are on the verge of incorporating incredibly sophisticated tools for autonomy, prediction, analysis, and sensing into devices and environments that are as intimate to our daily experiences as our own clothing. These questions, in other words, are moving very rapidly out of a theoretical or speculative domain. They are headed directly into our lives and how we live them.”
Joi Ito, director, MIT Media Lab: “At the Media Lab we use the term ‘extended intelligence’ rather than ‘artificial intelligence.’ Some of the problems of automation aren’t actually new. MIT mathematician Norbert Wiener, in his book The Human Use of Human Beings, calls institutions ‘machines of flesh and blood.’ The idea is that any bureaucracy is a form of automation. If you look at the markets, they have certain evolutionary systems that cause injustices and harm, and we have trouble regulating and controlling them. So, I think there are some new problems, but there are also a lot of good old-fashioned problems related to complex, self-adaptive systems that are evolving in an uncontrolled and harmful way. People like [environmental scientist] Donella Meadows and [MIT professor emeritus and inventor of system dynamics] Jay Forrester [SM ’45] modeled these complex systems and were trying to suggest how we might intervene. There are some really interesting new problems, such as the reinforcing of biases by algorithms, but the fact is that these reinforced biases exist because of those old automated systems. And now we have booster rockets on those systems that make them even harder to control.”