“We have this assumption that algorithms created by computers or artificial intelligence are superior to the human mind,” says Renée Gosline, senior lecturer and research scientist in marketing at the MIT Sloan School of Management. But as we increasingly lean on machines to help us make decisions—a phenomenon that Gosline, whose research focuses on how technology affects our perceptions and behavior, detailed in a recent TEDx talk in Boston on “The Outsourced Mind”—it’s important to examine that assumption. “We may not realize that because the algorithm was programmed by humans or learns from humans, it is prone to human bias,” she says. Spectrum spoke with Gosline about why individuals and companies should be concerned about this dynamic, and what can be done to fix it.

How can outsourcing our decisions to computers lead to amplified bias? Why is this a problem?

RG: We often assume that machines are rational, objective, and impartial, but that thinking could lead us down the wrong path. One example is in the images that pop up if you search for “female beauty” or “three black teenagers.”

In both cases, you’ll see results that show clear racial and gender bias. These results show that machines have learned a particular idea of what a beautiful woman looks like, and that has implications for the billion-dollar beauty industry as well as for health behaviors. In addition, chatbots are being used more and more for things like therapy, financial advising, and disease management.

If you have a bot that is making assumptions about whether or not you are a good candidate for a loan based on your demographic information, that could be problematic, especially if that bot has learned discriminatory behaviors.

Bias doesn’t necessarily refer to racial or gender discrimination; it simply means that a nonrational assumption is steering the decision in a direction that may not be objectively the most beneficial. Every company, whether or not it is concerned with social inequality, should worry about cognitive bias in machine decision-making because it can lead to bad decisions.
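The dynamic Gosline describes can be made concrete with a small sketch. The data and the counting "model" below are entirely hypothetical, not drawn from any real lending system: a model that simply learns per-group approval rates from past decisions will faithfully reproduce whatever disparity those decisions contained, even when the applicants themselves are equally qualified.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified, approved).
# Both groups are equally qualified, but group B was historically
# approved far less often -- biased labels on comparable applicants.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, _qualified, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

model = train(history)
# Equally qualified applicants now receive different predicted approval
# odds, purely because the model inherited the historical disparity.
print(model["A"])  # 0.75
print(model["B"])  # 0.25
```

Nothing in the training step is malicious; the bias enters through the labels the machine learns from, which is exactly why a bot that "learned discriminatory behaviors" is problematic.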

What can we do to prevent human bias from crossing over into machines?

RG: We know from behavioral science and behavioral economics that the most effective way to combat cognitive biases is to make the unconscious conscious. That means making people aware of heuristics that are flavored with erroneous or biased assumptions, and bringing them to the conscious level, so that people see they are jumping to conclusions that may be problematic.

It’s also important to promote inclusivity among the engineers, the leaders, and the strategists who are creating the technologies of the future. By having different perspectives present when decisions are made and architectures are created, you’re far less likely to produce something that is tone-deaf, or, at worst, harmful in terms of leading people down a biased road.

I teach Behavioral Economics and Behavioral Science, and I try to impress on my students that (a) technology is prone to the same biases that humans are, and so (b) it is important, as leaders, engineers, and businesspeople, that when they go out in the world and create the next great thing, they are conscious about unpacking and dismantling their own cognitive biases, lest they end up spreading them through whatever kind of business or algorithm they put forth in the world.

Why is this so important now?

RG: Now is when this is all being built, so this is the best time for us to be thinking about these issues as we construct this new world. Technology presents a tremendous opportunity to be an equalizing force. We just need to be mindful that there is a potential dark side to letting the machine take the lead.
