Back in 2016, when Reed Kopp began pursuing his PhD in the Department of Aeronautics and Astronautics (AeroAstro), he was staring down the barrel of years of mind-numbing data analysis. His project focused on designing and characterizing next-generation composite nanomaterials intended to make spacecraft and aircraft stronger, stiffer, and lighter. Since the materials are prone to failure due to hard-to-predict cracking, Kopp needed to study in detail where and why such cracking emerges.
He brought his samples to synchrotron light source facilities, where he bombarded them with X-rays to take intricate 3-D images of the interior of the materials as they experienced stress. The experiments yielded a total of 30 terabytes of microcomputed tomography (μCT) scan data that Kopp then sifted through by hand, image by image, to characterize the weak points and damage-propagation trends that caused the composite to fail.
“For a given scan,” says Kopp, “that could take about 15 hours of manual labor where we’re staring at a computer screen the entire time.” It’s time-consuming: the 2016 data set took well over a year to analyze. Kopp says it’s also impossible to be fully objective; repeated analyses of the same material turn up different vulnerabilities.
This is exactly the kind of problem that would benefit from automation. And it is where the MIT Quest for Intelligence stepped into the picture.
A team of computer engineers within the Quest works with scholars across MIT to provide AI tools that accelerate research. Kopp and his supervisor, AeroAstro professor Brian Wardle SM ’95, PhD ’98, leapt at the chance to partner with the Quest, initially providing 70,000 μCT images that had already been manually classified to train and validate an AI model. This model has since successfully located damage in scans 10 times the size of the training scans, saving hundreds of hours of manual labor.
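In spirit, this is standard supervised learning: a model is trained on manually labeled scans, then applied to unlabeled ones. The sketch below illustrates only that idea, not the Quest team's actual pipeline; the synthetic "patches," the hand-rolled features, and the simple logistic-regression classifier are all stand-ins for real μCT data and a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(n, damaged):
    """Synthetic 8x8 'scan patches': damaged ones carry a bright, crack-like streak."""
    patches = rng.normal(0.5, 0.1, size=(n, 8, 8))
    if damaged:
        patches[:, 3:5, :] += 0.8  # horizontal streak mimicking a crack
    return patches

def features(patches):
    """Crude per-patch features: mean, spread, and peak intensity."""
    flat = patches.reshape(len(patches), -1)
    return np.stack([flat.mean(1), flat.std(1), flat.max(1)], axis=1)

# Labeled training set (a tiny stand-in for the 70,000 manually classified images)
X = np.vstack([features(make_patches(200, True)), features(make_patches(200, False))])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Fit a logistic-regression classifier by gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted damage probability
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

def predict(patches):
    """Flag patches whose predicted damage probability exceeds 0.5."""
    p = 1 / (1 + np.exp(-(features(patches) @ w + b)))
    return p > 0.5

print("damaged patches flagged:", predict(make_patches(3, True)))
```

Once trained, the same `predict` step can sweep over scans far larger than the training set, which is where the hundreds of hours of savings come from.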
Automating a task like this simply wasn’t possible before, making the approach revolutionary for the field of materials science. “To teach an AI model to think like we do, but do it faster and with larger amounts of data, is huge,” says Kopp.
Launched two years ago, the MIT Quest for Intelligence—now part of the MIT Stephen A. Schwarzman College of Computing—aims to bring the MIT community together to answer two monumental questions: How does human intelligence work, and how can human intelligence be reverse-engineered to build smarter machines that will benefit the world? Simply put, “if we know more about how we reason and how we learn to speak, translate, and make decisions,” says Quest executive director Aude Oliva, principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, “we could gain the insights needed to really advance AI.”
Researchers from countless fields are generating tsunamis of data that cannot be processed and understood using traditional techniques. According to Oliva, these circumstances make it thrilling to find ways for AI to assist.
When the Quest agrees to take on a project, there’s usually some function or nugget of code that humans just don’t know how to articulate or write, according to Josh Joseph SM ’08, PhD ’14, chief intelligence architect for the Quest. “So we use the tools in machine learning to determine what that function should be,” he says. By collaborating with researchers across an array of disciplines, Joseph and his team also get to discover whether AI techniques touted in the literature actually work in the real world or if they disintegrate on impact with, he explains, “all the grit and nuance and noise that can show up when you apply these tools to real-world problems.”
The Quest is currently supporting close to 100 research teams. Their projects span an extraordinary breadth of topics. Inside 500 Technology Square in Cambridge, for instance, postdoctoral researcher Amin Espah Borujeni works in the lab of Christopher Voigt, the Daniel I.C. Wang Professor of Advanced Biotechnology in the Department of Biological Engineering. Espah Borujeni is reprogramming cells to do things that (to borrow a line from the old Tropicana Twister ad) “Mother Nature never intended”—but ones that would be rather useful to humans, such as treating disease and developing alternatives to fertilizers.
The challenge is that introducing genetic manipulations or designing genetic circuits can hamper cell growth or even kill the cells. Up until now, it hasn’t been possible to predict such outcomes, let alone prevent them, because the thousands of genes inside each cell are part of a complex network of interactions that’s hard to model without huge data sets. But this too-many-variables-insufficient-data problem is exactly what the Probabilistic Computing Project, located across the street in the Department of Brain and Cognitive Sciences, has been puzzling through for the last decade. The Quest brought Espah Borujeni and these researchers together.
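To see why probabilistic methods suit this small-data regime, consider a toy example, entirely hypothetical and far simpler than the Probabilistic Computing Project's actual models: Bayesian inference over a single growth rate from just three measurements, computed with a simple grid posterior.

```python
import math

# Three (made-up) relative growth measurements for an engineered cell line
observations = [0.62, 0.55, 0.70]
grid = [i / 100 for i in range(1, 100)]    # candidate growth rates

def likelihood(rate, obs, sigma=0.1):
    """Gaussian measurement model around a candidate rate."""
    return math.exp(-((obs - rate) ** 2) / (2 * sigma ** 2))

# Uniform prior; posterior is proportional to the product of likelihoods
posterior = [math.prod(likelihood(r, o) for o in observations) for r in grid]
total = sum(posterior)
posterior = [p / total for p in posterior]

# The posterior summarizes both a best estimate and its uncertainty,
# even with only three data points
mean_rate = sum(r * p for r, p in zip(grid, posterior))
print(f"posterior mean growth rate: {mean_rate:.2f}")
```

The point is the shape of the output, not the numbers: instead of a single brittle estimate, the model returns a full distribution, so downstream decisions can account for how little data went in.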
The collaboration “is one of the first times we’re working closely with a partner to solve an actual scientific problem,” says Vikash Mansinghka ’04, MNG ’09, PhD ’09, head of the Probabilistic Computing Project and a principal research scientist.
Espah Borujeni is excited by the hope that the model they’re codeveloping will push forward the limits of synthetic biology. “I can see a real future there,” he says.
There are other examples as well. The MIT Libraries have boxes stuffed with letters, memos, and documents that need to be reviewed, cataloged, and tagged by hand. Katherine Gallagher, an AI software engineer with the Quest, has worked with a team of undergraduate researchers to begin using machine learning to find an automated way to tackle this task. Their first prototype uses an image classifier to categorize documents and an image-to-text converter to extract information.
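A pipeline like the one Gallagher's team prototyped can be thought of as two stages: classify each scanned page, then extract catalog metadata from its text. The sketch below illustrates only that shape; the keyword-based classifier and the regex extractor are crude stand-ins for the team's trained image classifier and image-to-text converter.

```python
import re

# Hypothetical document types and keyword cues; a real system would use
# a trained image classifier rather than keyword voting over OCR text.
CATEGORIES = {
    "letter": ("dear", "sincerely"),
    "memo": ("memorandum", "subject:"),
}

def classify(text):
    """Stand-in for the image classifier: vote on keywords in extracted text."""
    t = text.lower()
    scores = {cat: sum(k in t for k in keys) for cat, keys in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] else "unknown"

def extract_tags(text):
    """Pull simple catalog metadata, here just four-digit years."""
    years = re.findall(r"\b(19\d{2}|20\d{2})\b", text)
    return {"years": sorted(set(years))}

# One synthetic 'page' of OCR output, routed through both stages
page = "MEMORANDUM\nSubject: Budget review, 1964"
record = {"category": classify(page), **extract_tags(page)}
print(record)
```

Each scanned box would yield one such record per page, turning hand-tagging into a review-and-correct task rather than a from-scratch one.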
Gallagher is also collaborating with Massachusetts General Hospital to identify viable livers for transplant from biopsy images. Of the 14,000 people who require such a transplant each year, only 8,000 receive one. An additional subset of fatty livers may in fact be viable, but they are often discarded at hospitals where physicians have not been trained to review the tissue samples properly. Using computer vision and machine learning to review these samples could help save people’s lives, according to Gallagher.
The Quest is knitting people together across MIT, connecting them in a rapidly expanding network agnostic to academic boundaries. Of this interdisciplinary fabric Gallagher says, “I think there’s nearly unbounded potential for new discoveries and solutions.”