Since joining MIT’s faculty nearly 40 years ago, I have witnessed a fascinating range of moments that shifted our trajectory in some fundamental way. Those I admired most strengthened our whole campus community while simultaneously extending its impact far from home.
With Project Athena in 1983, MIT revolutionized its own teaching and research environment with ubiquitous computing—and ended up driving technological advances with global impact, from the X Window System to Kerberos authentication. When women faculty in the School of Science documented serious inequities in their resources and lab space compared to their male colleagues, MIT made it public and changed course, setting a remarkable example for all of higher education and beyond. And with each of its major initiatives in digital learning, from MIT OpenCourseWare to MITx and edX, MIT has set out to reach a broader educational audience—and changed the game.
As you’ll see in this issue of Spectrum, these big familiar examples speak to the same culture of bold experimentation that continually drives new thinking and new trajectories at MIT, from economics to architecture, management to materials science, music to mechanical engineering.
Looking ahead, the fall opening of the MIT Stephen A. Schwarzman College of Computing represents another thrilling shift in the Institute’s trajectory—the most significant restructuring of MIT in nearly 70 years. By tapping into the power of computing to advance diverse fields of study and enriching computing with insights from disciplines across MIT, the college will play a vital role in MIT’s work to invent the future—and continue to make a better world.
By the time you read this letter, a task force of faculty, students, and staff will have submitted ideas for the college’s design, from organizational structure to faculty appointments to the social implications of computing. And in August, the college’s inaugural dean, Daniel Huttenlocher SM ’84, PhD ’88, will arrive on campus and begin turning ideas into action.
Once again, we are charting a new path while staying true to our guiding mission.
Learn-to-Sail classes are offered free to all members of the MIT community, with more than 2,400 people participating every season.
“The foundation of the program is teaching basic skills, and there is a hardy core of volunteers who enjoy sharing their passion with newcomers almost every evening of the week,” says Franny Charles, MIT’s long-time sailing master.
MIT’s recreational sailors can also obtain a sailing card and use the fleet; more than 3,000 cards are issued annually. As a result, the MIT Jack Wood Sailing Pavilion on the Charles River is a hub of activity all summer long. The facility is open seven days a week from April 1 through November 15.
A variety of sailboat types allows students to challenge themselves. MIT has more than 100 boats, including:
31 Tech dinghies
24 Club Flying Juniors (used by the racing team)
6 420s (also a racing boat)
6 gaff-rigged catboats
6 Laser-class Olympic boats
1 foiling catamaran
1 foiling Moth-class sailboat
1 museum-quality, 51-foot Herreshoff sloop, built in 1902 and suitable for sailing around Massachusetts Bay
In 2004, Professor Edward “Ted” Adelson was focused on a successful career studying human and artificial vision.
Then he had children.
“I thought I’d be fascinated watching them discover the world through sight,” says Adelson, the John J. and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences at MIT. “But what I actually found most fascinating was how they explored the world through touch.”
That fascination led Adelson to invent a touch-based technology: a sort of artificial finger consisting of a gel-based skin covering an internal camera. The device could chart surface topographies through physical contact—creating something like sight through touch. That technology is now the lifeblood of GelSight, the startup that Adelson founded in 2011 along with two MIT colleagues.
Originally “a solution in search of a problem,” as Adelson describes it, GelSight now produces bench-based and handheld sensors deployed for quality control in industries such as aerospace and consumer electronics. The company is also pursuing other commercial applications. Based not far from MIT in Waltham, Massachusetts, GelSight is closing its second round of financing and appears poised for profitability.
On the surface, GelSight’s story reads like another MIT cradle-to-corporation fairy tale. But the team’s odyssey from concept to company was filled with complex passages the founders were ill-prepared to navigate.
“We were academics,” Adelson recalls. “We had this technology and thought it would be easy to transform it into a profitable enterprise. We learned very quickly that the technical invention is the easiest part for people like us. Developing a product and building a company is way harder. That requires the effort and expertise of many smart people who must work a very long time. Fortunately, we had great connections available to us through the MIT community. The resources we were able to tap into at MIT were essential in creating and sustaining GelSight.”
Cofounders meet on campus
Adelson’s first collaborator in developing the underlying technology was Kimo Johnson, who joined his laboratory as a postdoc in 2008. “Ted had invented this material that could make very precise measurements in 3-D,” says Johnson, CEO and cofounder of GelSight. “We published several papers on the technology as academics tend to do. But we also made videos and posted them on YouTube. The response was amazing. My inbox was flooded with emails asking about potential applications. That was when we realized we should form a company.”
As a first step, Johnson collaborated with students in a course called iTeams at the MIT Sloan School of Management to draft a hypothetical business plan built around GelSight technology. The plan proposed a potential application in the inspection of helicopter blades. That exercise helped him understand how valuable a handheld device that employed GelSight technology could be to professionals who inspect and repair critical surfaces. “This was another piece of information that encouraged us to move forward,” says Johnson.
Adelson and Johnson met GelSight’s third cofounder in 2010 at an on-campus seminar on imaging and computer vision. János Rohály, a former MIT research scientist, had founded Brontes Technologies in 2004. That startup, which applied computer vision in dentistry, was acquired in 2006 by 3M. It was the incarnation of every MIT startup’s dream.
“After my talk, Ted and Kimo introduced themselves and told me about GelSight,” recalls Rohály, who is now CTO of GelSight. “I was captivated by their technology and invited them to make a presentation to my colleagues at Brontes. A little later I realized I was losing sleep fantasizing about their technology. In 2011, when they formed the company, they reached out to me. I had the entrepreneurial experience and the knowledge of the MIT network that could help them. And I joined the team.”
Tapping MIT’s broad network
With Rohály on board, the GelSight team turned to MIT’s teeming startup network for help plotting its next crucial steps. “MIT sits in the middle of the Boston-area startup ecosystem,” says Adelson. “This ecosystem is populated with technologists, investors, business people, lawyers, and other professionals. Together they form a vibrant group of people who are constantly networking, sharing ideas, and encouraging each other. That energy and activity is critical to launch a startup company. It was for us.”
The MIT ecosystem delivered in a big way for GelSight. The company’s founding trio received consistent support and encouragement from the MIT Venture Mentoring Service (VMS), which provided business advice, financial guidance, and introductions to potential manufacturing partners, customers, and investors. (VMS will be celebrating its 20th anniversary in 2020.)
“Neither Ted nor I had the slightest business experience,” says Johnson. “At the Venture Mentoring Service, we could rely on seasoned entrepreneurs who were ready to share their experience and expertise with us. There are so many challenges a fledgling company faces. Negotiating contracts, for example. It takes an experienced entrepreneur to know where to make concessions and where to push back. We got that and much more from the Venture Mentoring Service. In the early days, they almost served as a board of directors for us.”
GelSight got another big boost when their technology was featured in a 2011 MIT News article. “That article generated an enormous amount of interest,” says Johnson. “There are so many subscribers across so many industries. And MIT News gets copied on so many technical news sites. In fact, it was that article that connected us to a person in business development, who in turn connected us to our biggest consumer electronics customer.”
GelSight’s founders also made critical connections through the MIT Deshpande Center for Technological Innovation and the MIT Technology Licensing Office. The MIT Industrial Liaison Program put the young company in touch with a series of potential customers, including Boeing. “In our first years, we essentially bootstrapped the company, selling benchtop systems to customers in industries including cosmetics, abrasives, and aerospace,” says Johnson. “These were mostly connections we’d made through MIT. And they were enough to keep us going and slowly growing.”
Unlike many startups, which seek rapid growth and an early sale, GelSight has plotted a more gradual growth curve. In 2014, thanks to a connection made through the MIT network, the company received an inquiry from a China-based smartphone manufacturer. That company had a slew of complex measurement problems it hoped to solve with GelSight’s capacity to measure surface topography. That sale—GelSight’s first large-volume order—changed both the company’s manufacturing practices and its focus.
“Up until that point, we’d been selling single systems to R&D laboratories,” says Johnson. “This sale showed us that our real value would be in quality control. We shifted toward process development and systems for mass production and inspection.” Buoyed by the China sale, GelSight held its first round of financing in 2015. Capital infusions came from Omega Funds—a Boston-based venture capital firm that specializes in biotechnology and medical device companies—and Ping Fu, a technology innovator and investor Rohály knew from his days at Brontes. Both Fu and Omega Funds managing director Richard Lim sit on GelSight’s board of directors.
Rohály credits MIT for much of the success in GelSight’s first round of financing. “MIT gives you a tremendous boost when you approach people,” says Rohály. “Just the name alone. This is true not only in technology circles, but also in business circles. Especially with investors. If you are from MIT or have technology invented at MIT, people are interested in seeing that technology.”
He also credits MIT and its ecosystem for sustaining the GelSight enterprise through all phases of its development. “There is a can-do attitude among MIT people that I have rarely seen elsewhere,” he says. “They can attend to any problem at any level and have the confidence in their ability to solve it. Too many times, in other venues, I’ve seen people stumble before problems because they don’t trust their ability to solve them. That doesn’t exist at MIT. When there’s a problem, [MIT people] say great, let’s start working on it.”
In the past few years, GelSight has hit several important milestones. In 2017, the company successfully deployed its technologies at mass production and inspection facilities. The following year, GelSight was selected to provide surface inspection technology for the manufacturing operations of a top aerospace company.
This too has helped GelSight gain credibility with investors. “Until recently, investors would ask us whether people would actually buy our products,” says Johnson. “Now, when we have major companies selecting our technology to inspect their flagship products, that’s validation.”
The cofounders say that in 2019, GelSight plans to step off the brakes and hit the gas. Over the past few years, the company has spent significant time and resources resolving scientific questions about the technology to ensure it can be produced on a broader scale. Now GelSight is working to close its second round of financing. This new capital will enable the company to ramp up manufacturing and accelerate its business plan.
“We’ve been extremely attentive to managing cash flow and operations,” says Rohály. “And we’ve found a nice sweet spot in aerospace and electronics. We’re also continuing to push for customers in new spaces. The amazing thing is that 90 percent of our current customers come from inbound interest, from customers reaching out to us and asking us to solve their problems.”
The GelSight team still seeks advice from partners in MIT’s entrepreneurial ecosystem. But now the company’s leaders also offer insight and advice to other MIT inventors seeking to bring laboratory creations to market. “We’re very much a part of the broad MIT network,” says Johnson. “We’ve learned firsthand how much can be gained by experienced professionals sharing their knowledge within a larger community. Now we’re in a position to give back to the community that has helped us so much.”
Meeting the growing energy needs of our technological age while addressing global climate change is a daunting undertaking. That’s why the MIT Energy Initiative (MITEI) continually draws together wide swaths of the Institute’s intellectual, organizational, and policy resources to take on the challenge.
“MITEI has a mission of bringing together science, innovation, and policy to transform the world’s energy systems,” says Robert C. Armstrong, MITEI director and the Chevron Professor of Chemical Engineering. “Our goal is to reach across campus to get as many different disciplines as appropriate to work together and tackle these complex problems.” MITEI works with nearly 35 percent of the MIT faculty on its three major objectives: research, education, and public outreach.
Among its most visible projects is its series of “Future of…” studies, comprehensive multidisciplinary research reports that explore paths to meeting future energy demands under carbon dioxide emissions constraints. To date, MITEI has produced “Future of…” studies on energy sources such as solar, natural gas, coal, and geothermal, and on vital parts of the energy infrastructure, including the electric power grid and the nuclear fuel cycle.
The latest report is The Future of Nuclear Energy in a Carbon-Constrained World. This title neatly sums up the study’s major point, which as study co-chair Jacopo Buongiorno PhD ’01, TEPCO Professor and associate head of the Department of Nuclear Science and Engineering, explains, is that “nuclear can and should play a big role in decarbonizing the power sector.”
The study points out that reaching this goal will require not just technical innovations, such as new reactor designs, but also updated policy and business models, regulations, and construction techniques.
Changing nuclear landscape
In some ways, the new study harkens back to The Future of Nuclear Power, a report released in 2003—even before MITEI was formally established in 2006 by MIT’s then-president Susan Hockfield, professor of neuroscience. However, Buongiorno says, “The landscape for energy and nuclear in particular has changed dramatically since 2003.”
Vast new natural gas resources have been tapped, and attention to climate change and the need for decarbonization have increased. The nuclear industry was hit hard by both the 2008 economic crisis and the 2011 nuclear accident in Fukushima, Japan.
Furthermore, emerging technologies continue to increase the value of nuclear energy in terms of decarbonization. Fourth-generation reactor designs are more efficient and more accident-tolerant; today’s small modular reactors offer more flexibility and versatility than traditional large-scale nuclear plants. “If you sum these all up, we thought that it was a good time to take a fresh look at the prospects of nuclear,” says Buongiorno.
Buongiorno points out, “We looked not just at electricity, but at the other energy applications of nuclear systems, for example, heat for industry or production of synthetic fuels or hydrogen—essentially a way to penetrate markets that are not traditional for nuclear. Nuclear traditionally has been used for power. But the idea here is to go after also the massive carbon emissions that are outside the electricity sector.”
Such ideas capitalize on the fundamental function of a nuclear reactor: creating heat. Typically, plants create steam to turn turbines that generate electricity, but heat itself can also drive industrial processes—a concept made even more attractive by the higher operating temperatures available with advanced reactor designs. In addition to exploring new technology, however, Buongiorno and his colleagues also examined the policy and economic issues that have stalled the growth of nuclear power. Their analysis shows that trying to meet energy needs solely through renewable sources will raise the cost of decarbonization while slowing its progress. The study makes the case that, ultimately, nuclear is an important avenue to a low-carbon future.
MITEI’s “Future of…” reports have been well received, with impacts that reach beyond the expected audience of government policy makers and energy industry wonks. This latest effort has been no exception. The report was released in September 2018 to what Buongiorno calls “an overwhelming reaction—the amount of attention exceeded my wildest expectations.” Following the initial rollout of the study, Buongiorno and his colleagues embarked on what amounted to an “almost nonstop world tour” to present their findings—traveling from London, Paris, and Brussels to India, China, Japan, and Korea. The executive summary has been translated into six languages, and the entire report was translated into Chinese. Such a globetrotting presentation, gathering reaction and feedback from scientists and policy makers around the world, highlights another difference between this study and the 2003 report. “That study had focused primarily on the United States and North America, with implications for the rest of the world,” Armstrong says. “This most recent study has, by design, taken a global approach.”
Despite all the positive reaction to the latest MITEI effort, Buongiorno admits that there’s a difference between people paying attention and taking action. “It’s hard to assess whether this is going to have a real impact,” he observes. “Will people actually implement our recommendations or not? That remains to be seen.” However, some short-term impacts are already evident.
“We’ve been invited to states in the US to talk about the value of the existing nuclear fleet,” Buongiorno says, noting that the study provides useful information for decision makers charged with determining whether plants should be shut down or kept operating when their licenses expire.
Given the pattern set by previous MITEI efforts, it’s also a safe bet that its long-term influence will be significant. Armstrong cites the 2011 Future of Natural Gas study as an example. “I think that one was particularly impactful,” he says. “The report pointed out the likely possibilities that shale gas could remake the gas business in North America; it could revitalize the chemical industry by providing low-cost feedstocks; it could provide substantial new jobs in the natural gas sector; and it could potentially reshape the global gas business. We’ve actually seen that come to pass.
“We also pointed out that it had the potential at low cost, which we were projecting, to contribute significantly to meet the challenge of climate change. And that’s also come to pass.”
Unexpected and unconventional recommendations such as these are something of a hallmark of the “Future of… ” studies, many of which have inspired new ways of thinking about old questions.
John Parsons, an economist at the MIT Sloan School of Management and co-chair of the nuclear study, points out that the new report, for example, contradicts the common belief that the main driver of cost for nuclear plants is the reactor itself and related systems. Actually, he explains, “The large cost of the power plant is in the civil engineering around the reactor, big civil structures, and in particular things like the containment building and the basemat, as well as the site preparation.” He adds, “We identified ways to reduce these costs.”
That sort of insight likely comes more naturally to an economist than a nuclear engineer—which is exactly why MITEI takes an interdisciplinary approach to energy research.
“In order to inform policy makers and thought leaders about the big challenges in meeting climate change and still providing more energy, we need to get all of those disciplines together,” Armstrong says.
Armstrong believes that such a multidisciplinary effort is particularly at home at MIT. “That’s part of the culture here, developed over many, many years. The faculty have a substantial trust and admiration for one another’s capabilities and are happy to work together on these kinds of joint projects. It’s hard to replicate that in other places,” he says.
MITEI’s work stands apart for other reasons too, according to Parsons. “There are three things. Number one is the attention to cutting-edge technological change. Number two is a lack of a bias toward one technology or another. Number three is a hard-nosed economic attitude. We’re not sunny optimists,” he says.

The next “Future of… ” study is already well underway, focused on energy storage. “As we get more and more renewables in the electricity system, it becomes more apparent that there are substantial challenges from intermittency that are intrinsic to solar and wind,” Armstrong explains.
The newest project emerged in part as a result of the 2015 Future of Solar Energy study. Says Armstrong, “One of the major conclusion areas was that we needed to prepare for large penetration of solar by developing appropriate storage technology.” Following the successful pattern set by previous MITEI studies, the project brings specialists in different storage technologies together with experts in policy. The goal of the study, which Armstrong anticipates will take another two years, is to “help the public and policy makers understand what we need by way of storage technology to have a carbon-free world.”
Whatever the results, chances are good the next report will reflect the attitude common to all the “Future of…” studies. As Parsons describes it: “We’re sure there is some way to make the world better, but you really have to prove that you can do [something] with whatever technology and whatever economic paradigm you’re proposing.” It’s a combination of being both visionary and yet completely practical, he says.
Mark Wolverton is a 2016–17 MIT Knight Science Journalism Fellow.
In a typical supermarket, all of the fresh food—fruits and vegetables, meats, dairy, and bread—line the perimeter of the store. The expansive middle, meanwhile, features aisle after aisle of processed foods. “It’s all of these crazy crackers and chips, and stuff that didn’t exist before,” says Deborah Fitzgerald, the Leverett Howell and William King Cutten Professor of the History of Technology in the Program in Science, Technology, and Society at MIT. “I started wondering where it all came from. There had to be a driving force that made people think this was a great idea.”
Acclaimed for her book Every Farm a Factory (Yale University Press, 2003), in which she explored the history of agricultural industrialization in the United States, Fitzgerald went on to spend nine years as Kenan Sahin Dean of MIT’s School of Humanities, Arts, and Social Sciences. Now she is at work on a new book examining the origins of America’s current supply of food products.
The story begins with World War II, when the country mobilized to feed 6 million soldiers stationed abroad in 23 different climatic regions. “All of the food they ate was made in America and shipped to wherever they were,” Fitzgerald says. At the center was the US Army Quartermaster Corps, which exerted a profound, yet understudied, effect on the trajectory of our nation’s agricultural system. “It was an amazing operation that has been written about very little.”
The military solved its provisioning problems by working with food companies to create food described as “time-insensitive”—bland, processed meals that could withstand the rigors of overseas shipping and be carried by soldiers into battlefields anywhere. That meant heavily processed and preserved foods like cans of beef stew and chili con carne, “meat bars” (made of compressed and dehydrated meat), biscuits, cookies, and candy that could give soldiers energy and nutrients in a hurry.
To accommodate this rapid shift, the Quartermaster Corps requisitioned massive quantities of produce from distinct areas—fruits from California, dairy from Wisconsin and New York, grains from the Midwest—consolidating industries geographically. “Before the war, farmers grew a little bit of everything, but that became less realistic as the war developed,” Fitzgerald says. “This big national crisis turned around the way agriculture was done.”
After the war, those changes stuck. Midwest farmers, for example, suddenly found themselves with bumper crops of wheat and corn with no obvious civilian market. Food companies stepped in, creating new products to utilize the surplus. “They had to turn it into something—so welcome, Doritos!” Fitzgerald says. To make these processed foods more palatable to civilians, the companies tapped new technologies in coloring and spray-on flavor to create an amazing variety of foods—a trend that continues to fill the middle aisles of supermarkets today. Fitzgerald says the story of 20th-century processed food is an intriguing lens through which to view the history of technology generally—especially in a place like MIT that has so much faith in the positive potential of technology.
“People tend to think that all of the things we are consuming were developed for a reason, and that’s because they are better,” says Fitzgerald, who has written an article on the history of processed foods for a forthcoming issue of Osiris, an annual journal dedicated to the history of science. “I want people to see the links between their experience and the larger cultural context,” she says, explaining that she has found change comes most often in response to specific cultural and economic realities. “It’s almost never because it was intrinsically better.”
Is there life on Mars? The question is still open, though efforts to answer it have run the gamut from the 1908 book Mars as the Abode of Life, in which astronomer Percival Lowell made his case for a lost Martian civilization, to the more than a dozen NASA missions that have explored the Red Planet.
Now, a team from MIT and Harvard is developing an instrument that could quickly provide convincing evidence of life on Mars, either at present or in its not-too-distant past.
Gary Ruvkun, a molecular biologist at Harvard Medical School and Massachusetts General Hospital (MGH), started thinking in the early 1990s about sending a robot to Mars that would look for DNA using polymerase chain reaction (PCR) technology. PCR is sensitive enough, in principle, to detect even a single genome. Furthermore, the detection of long, complex DNA molecules would strongly suggest biological origins—a prospect that could be verified by more detailed measurements.
Ruvkun and biologist Michael Finney PhD ’86 discussed the idea at a December 2000 Christmas party, and word subsequently reached Claude Canizares, MIT’s Bruno Rossi Professor of Physics. Canizares mentioned the idea to Maria Zuber, a planetary scientist who now serves as MIT’s vice president for research, noting that it sounded “kind of crazy.” But Zuber was intrigued. She soon contacted Ruvkun, telling him, “I want to work with you.” Thus the Search for Extraterrestrial Genomes (SETG) project was launched, with Ruvkun and Zuber as principal investigators.
A key advantage of their strategy, explains Zuber, the E. A. Griswold Professor of Geophysics, “is that if you’re looking for DNA-based life, you know exactly what to look for. So that ought to be one of the first things you do when searching for life beyond Earth.”
MIT research scientist Christopher Carr ’99, SM ’01, ScD ’05, SETG’s science principal investigator, agrees with this reasoning for starting with “life as we know it” before undertaking a more general search for the unknown. “If you lose your keys in a parking lot at night,” he says, “it makes sense to look under the streetlights first if you think you might have dropped them there.” But there are other arguments to be made for the approach.
All known life forms are based on DNA and RNA, polymeric molecules that are capable of storing information. The basic ingredients for these polymers, and for life in general, can be found throughout our solar system.
What’s more, Earth and Mars have exchanged surface and subsurface rocks: roughly 4 billion years ago, during the Late Heavy Bombardment period that followed the formation of the planets, countless meteoroids shot from one nascent planet to the other. A significant fraction of those objects, moreover, did not experience sterilizing heat during launch or atmospheric entry. Thanks to all this material exchange, Carr says, “if there’s life on Mars, there’s a good chance it’s related to us”—meaning it would be based on DNA or RNA, exactly the molecules he and his colleagues hope to detect.
The SETG team is assembling and testing an autonomous device hardy enough to perform in situ DNA and RNA sequencing on the surface of Mars, or in other extraterrestrial venues, working from samples delivered by a rover’s robotic arm. Their plan has evolved from PCR to single-molecule DNA sequencing—and the device currently favored is Oxford Nanopore Technologies’ MinION.
About the size of a granola bar and weighing just a few ounces, MinION has sequenced entire genomes while proving itself in numerous harsh environments, including on board the International Space Station and under water. Although other researchers carried out the space station testing, SETG researchers have operated the sequencer successfully in volcanic craters in the Argentinean Andes and on Devon Island in the Canadian High Arctic, a location that’s served as a Mars analog for scientists since around 2000.
In May 2018, Zuber and MIT postdoc Noelle Bryan took the MinION onto a reduced-gravity aircraft (the “Vomit Comet”), where sequencing reads were obtained under both zero-gravity and Mars-gravity (0.376 g) conditions.
In further tests, Carr used a vacuum chamber in the SETG MGH lab to simulate Mars temperatures and pressures. So far, the sequencing technique has performed well under those conditions too. The SETG team has demonstrated all steps of the process, and they’re currently working to produce a fully automated “end-to-end-validated instrument” that can operate under Mars-like conditions. Reaching this critical step, Carr says, “would give us confidence that this could become a flight-ready instrument.”
“We won’t be ready for the Mars 2020 Rover mission,” adds Zuber, but that will not be their last chance, as launch opportunities for reaching the Red Planet come every two years. After some technical progress on their end, plus luck in their bidding to get into space, the SETG researchers just might realize the “crazy” vision Ruvkun conceived more than a quarter century ago.
Steve Nadis is a 1997–98 MIT Knight Science Journalism Fellow.
Rising nearly 300 feet from the ground, the Cecil and Ida Green Building, aka Building 54, stands out as not only the tallest building on MIT’s campus but also (until recently) the tallest building in Cambridge, Massachusetts. Yet it’s not obvious from the outside what actually goes on within this imposing 55-year-old structure designed by the late I.M. Pei ’40.
People on campus tours often hear about the annual pumpkin drop or about instances when students have commandeered the Green Building’s LED-equipped windows to play giant games of Tetris. But not everyone learns about the groundbreaking work carried out inside—such as the development of chaos theory, seismic tomography, numerical weather prediction, climate modeling, and far-reaching NASA missions.
This is the headquarters of MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and plans are now underway to give Building 54 a major facelift, including a new LEED-certified addition that will offer a window into the important work taking place inside.
The $60 million upgrade will allow construction of an Earth and Environment Pavilion designed to be a vital center for environmental and climate research on MIT’s campus. With assistance from the Institute and generous private donors—including John H. Carlson; George Elbaum ’59, SM ’63, PhD ’67; Fred A. Middleton Jr. ’71; Neil Pappalardo ’64; and Shell—EAPS recently passed the midway point on its $30 million fundraising campaign for the new pavilion and other improvements to the Green Building, such as a renovated lecture hall (54-100) to be renamed the Shell Auditorium.
The project will yield about 12,000 square feet of additional space, providing new meeting places, classrooms, and study areas. The enlarged and revamped Green Building is expected to help EAPS attract and retain top faculty and students. But the more ambitious objective is to enhance the research undertaken within the department by co-locating EAPS and the MIT-Woods Hole Oceanographic Institution Joint Program with the MIT Environmental Solutions Initiative, affording greater opportunities for interaction and the cross-pollination of ideas.
“The future is not something to be predicted, but to be made,” MIT professor of digital media Nick Montfort writes in The Future (The MIT Press, 2017), a book that examines concepts of the future through the work of writers, artists, inventors, and designers. In Chapter 6, “Pre-Invention of the Web,” Montfort reveals how visionary work by Vannevar Bush, MIT’s first dean of engineering; MIT Professor Tim Berners-Lee; and two other pioneers, Douglas Engelbart and Ted Nelson, came together to shape the World Wide Web. This excerpt centers on Berners-Lee’s contributions.
Probably even more familiar to us today than the Interstate Highway Network, which was formed, post-Futurama, beginning in the 1950s, is our World Wide Web, a global information system that is now accessible instantly not only at workstations and notebook computers, but also on phones. This system carries a tremendous number of commercial interactions along with an unprecedented store of information, and it also has a recognized inventor. Tim Berners-Lee proposed this system early in 1989 and implemented enough of the system to load the first Web page later that year. He did have support from others on the project, including Robert Cailliau, but Berners-Lee’s work and vision were at the core of the Web, and he is its first author.
The World Wide Web (and the futuremaking work that preceded it) holds several important lessons for future-makers. As is particularly clear in considering Douglas Engelbart’s work and his predecessor hypertext system, an effective vision of the future is one that is engaged with society and builds on personal experience. Engelbart’s vision, like Ted Nelson’s concept of hypertext, involved higher-level concepts connected to specific, concrete ideas and examples. An effective vision is one that can scale up to widespread use and to new types of use, for instance, by groups of collaborators.
Such a vision can draw on utopian modes of thinking and description, and can be exhibited directly as well as described and discussed in writing. And as for the Web itself, related to and in contrast to Vannevar Bush’s early system and some of [Ted] Nelson’s rich concepts of hypertext, this system took root because it was simple enough to be adopted, and because it was open and available to everyone.
Berners-Lee dedicated the Web to everyone in the world, asking for no royalties, filing for no patents, and ensuring that Web technologies would be unencumbered and free for anyone to use. Instead of becoming a monopolistic system limited to those in wealthy countries with financial resources, the Web—even if aspects of it present problems at times—has, as advertised, become remarkably worldwide and open to all sorts of businesses, universities, organizations, and individuals….
Berners-Lee and his collaborators didn’t make up every concept that is the foundation of the Web—they were aware, directly and indirectly, of existing hypertext ideas. The success of the World Wide Web is surely due to two specific factors beyond determination and cleverness:
First, the Web is a simple system, much less powerful than Nelson would like. Not only does it lack built-in support for specific types of hypertext such as stretchtext, it also doesn’t even have two-way links. A central registry could provide for such links, as well as transclusion [Ed. note: an advanced form of hypertextual quotation] with appropriate payments for authors. But the Web doesn’t require any central authority—or, at least, it requires only the hierarchical aspects of the underlying Internet that were already there. The Web would be much less useful without the ultimately centralized Domain Name System (DNS) that resolves verbal names such as “mit.edu” into numeric addresses. But this system was developed in the 1980s, and predates the Web. Once you can convert your domain names into addresses, your requests only need to route through the Internet to locate a Web server and retrieve information from it. A person who wants to set up a new Web server can just set one up without any interaction with a central registry. In the worst case, dealing with a central authority just means the equivalent of registering a new domain.
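The name-to-address step described above is easy to observe from any machine. Below is a minimal Python sketch; the function name `resolve` is my own, and the example queries `localhost` so it runs without a network connection, though a public domain such as mit.edu resolves through DNS in exactly the same way:

```python
import socket

# Resolve a host name to its numeric addresses, as happens for every
# Web request before the browser can contact the server.
def resolve(name):
    # getaddrinfo consults the system resolver (hosts file, then DNS)
    infos = socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
    # each entry's sockaddr tuple begins with the IP address string
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally; swapping in "mit.edu" would send the
# same kind of query out to the DNS hierarchy.
print(resolve("localhost"))
```

Once the address comes back, the browser's request needs no further central coordination, which is the decentralization the passage describes.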
Letting people know…that the new server is there is helpful, of course, and in the 1990s a new type of business emerged to help people locate Web resources—including hand-made directories (Yahoo!, Open Directory) and search engines (AltaVista, Google). Such services work to patch up the decentralized Web and allow the discovery of Web resources that would otherwise be obscure. But the Web didn’t need to have them in place at the very beginning.
They could, and did, grow up afterward. The Web, as it first existed, was a very simple hypertext system. It didn’t attempt to solve every problem with an elaborate initial design.

Second, the standards of the Web were offered to everyone rather than being restricted by patents or copyrights. Berners-Lee insisted that the Web not be encumbered, and there are concrete reasons this may have helped the system to succeed. For instance, one of the Web’s early competitors, Gopher, offered generally similar ways to traverse hypertext resources online and began gaining traction in 1991. Gopher was more limited in some ways, because of its strongly hierarchical format, but also offered some features that the early Web lacked. While it was not the only factor that led the Web to prevail, Gopher was dealt a blow in early 1993, when its owner, the University of Minnesota, said that it would charge to license its Gopher server, the dominant one. The choice in the early 1990s between a clearly free and open technology and one that might face further restrictions helped to make one of them—the Web—look like a better choice.
…[P]ioneer Ted Nelson isn’t a full-on fan of the World Wide Web, even though this famous system has broadened access to some forms of hypertext. He writes, “Trying to fix HTML is like trying to graft arms and legs onto hamburger… EMBEDDED MARKUP IS A CANCER.” He continues, “HTML is precisely what we were trying to PREVENT—ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.” To a reader who doesn’t know about Nelson’s contributions to hypertext and computing, this may seem like pure negativity; to one who knows just a little about the history, it may seem like sour grapes. I tend to think that this perspective comes from a different view of what the future could have been. That envisioned future has particular virtues, but it was also complex, more difficult to implement, and required a centralized system for rights management.
On the one hand, a wider array of features didn’t mean, by itself, that Nelson’s system was better. On the other hand, the Web, however successful it has been, is not beyond critique.
As far as future-making is concerned, these are the two clear lessons from the early success of the Web:

The right level of simplicity/complexity is important, even if it means removing some of the features of a vision, and of a systematic future, that other future-makers really love. A vision has to be understood and accepted, and one that is too complex to understand or implement has little chance.

Openness, an ability to be shared, and freedom to study and build on a system are really important to whether or not people choose to adopt and further develop new ideas and systems.
Researchers measure the suitability of machine learning in the workplace
Researchers studying how machine learning will impact future employment have found that, in most occupations, humans and machines will need to work together because they provide different skills. Illustration: Niki Hinkle
How will automation affect employment? Which tasks could be enhanced by machine learning and which might render human labor obsolete? Erik Brynjolfsson PhD ’91, director of the MIT Initiative on the Digital Economy, postdoctoral researcher Daniel Rock SM ’16, PhD ’19, and Tom Mitchell at Carnegie Mellon University have found some answers to these questions using a suitability for machine learning (SML) rubric.
While the media might foretell a future overtaken by robots with vast swaths of the workforce displaced from their jobs, that won’t happen anytime soon, Brynjolfsson says. The team’s rubric serves as a guide to which jobs or occupations could be reorganized—not eliminated. “We’re very far from what researchers call artificial general intelligence, where AI can do the full spectrum of things that humans can do, like the Hollywood robots HAL or the Terminator. There’s almost no occupation where machine learning just runs the table and can do everything,” says Brynjolfsson, the Schussel Family Professor of Management Science.
The team started by working with a group of machine learning experts to create a 23-question rubric that could differentiate between tasks that were suitable for machine learning versus those that weren’t. They then applied the rubric to the O*NET OnLine data set, a resource that covers 964 occupations mapped to 2,069 direct work activities shared across occupations. One by one, they applied the questions to each job—Does this task require complex, abstract reasoning? Does it require wide-ranging conversational interaction?—and then used a human intelligence task crowdsourcing platform to score each job based on its suitability for machine learning.
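The scoring step can be pictured with a toy computation. The Python sketch below is purely illustrative: the rating scale, the example task ratings, and the simple averaging are invented placeholders, not the authors’ actual rubric or O*NET data:

```python
# Toy sketch of rubric-based SML scoring: each task gets per-question
# ratings (hypothetical values on a 1-5 suitability scale); a task's
# score is its mean rating, and an occupation's score is the mean
# over its tasks.
def task_sml(ratings):
    return sum(ratings) / len(ratings)

def occupation_sml(task_ratings):
    return sum(task_sml(r) for r in task_ratings) / len(task_ratings)

# Hypothetical occupation with three tasks, each rated on three
# rubric questions (these numbers are made up for illustration)
radiologist_tasks = [
    [4.5, 4.0, 4.2],  # reading medical images: high suitability
    [1.5, 2.0, 1.0],  # counseling patients: low suitability
    [2.5, 3.0, 2.0],  # coordinating care with other doctors
]
print(round(occupation_sml(radiologist_tasks), 2))  # → 2.74
```

The point of aggregating per task rather than per occupation is visible even in this toy version: one highly automatable task does not push the whole occupation's score high.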
“This rubric gives a map of where human labor will be more valuable, versus where machines will be increasingly able to do things,” Brynjolfsson says.
In most cases, Brynjolfsson says that machine learning will only affect parts of jobs. This means the task of leaders will be to redesign and reengineer jobs, not simply eliminate them. He calls this the “reinvention” of jobs.
“Most occupations consist of a couple of dozen or more distinct tasks. For instance, there are 27 distinct tasks that a radiologist needs to do. One of them is reading medical images. But there are other things that they have to do, like counsel patients, coordinate care with other doctors, and so forth,” he explains.
So, while a robot might be able to read images, could it deliver a compassionate diagnosis? Probably not.
In that vein, low-SML tasks often involve “empathy, human relations, persuasion, teamwork, care, and comforting,” says Brynjolfsson. “We’re deeply wired to connect to other humans, so compared to machines, we have a comparative advantage in connecting to other humans.”
Humans are also better at creativity than robots (for now), as well as at tasks that involve manual dexterity. Hence, jobs such as massage therapy and archaeology have low SML scores, whereas mechanical drafters and credit authorizers—jobs that require repetition and routinization—yield higher ones.
He says that the rubric will help leaders decide how to retrain their workforce and determine which skills to invest in heavily. Ultimately, he says, organizations and employees who achieve a symbiotic relationship with machines will thrive.
“People who will be most successful will be [those] who can leverage machine learning systems by combining human and machine insights,” Brynjolfsson says.
For instance, a computer might help a radiologist scan images more quickly, leaving time for the doctor to see more patients. The sooner society realizes this kind of benefit, the better, Brynjolfsson says. He points to electricity in the 1890s and the early 1900s. A game-changing innovation to be sure—yet the resulting productivity surge didn’t happen until the 1920s. Why? Nobody knew how to reorganize their workforce to embrace the budding technology.
The same holds true with reorganization today.
“How do we change our business processes, how do we change our skills, how do we change the products and services we deliver to take advantage of this?” he asks. “That’s harder. That requires a lot of creativity and entrepreneurship. But I hope it will happen faster than 30 years this time around. In fact, I’m sure it will.”
Alula Hunsen ’21 still remembers the moment his academic trajectory at MIT changed. He was taking the final exam for a differential equations class at the end of his first year, furiously working through problem after problem, when he had a realization. “It was a really hard exam, but I was really enjoying myself,” he recalls. “I was just so confused as to what was happening because I had never engaged with anything in that way.”
Hunsen arrived at MIT with a plan to major in bioengineering, a choice that felt obvious having grown up with parents who were organic chemists, and after having enjoyed advanced biology in high school. “I felt like that was the area where I could best succeed,” he explains.
However, Hunsen, who is supported by a scholarship from the Thomas A. Pappas Charitable Foundation, found himself struggling to connect with the content in his introductory biology and chemistry classes at MIT. “I understood what was happening, but I didn’t understand how we build up to the level at which they were teaching the subject, so I felt really detached from the material,” he says.
In Hunsen’s introductory math class, however, he was immediately attracted to the stepwise manner in which the material built from established principles to interesting abstractions. “I found myself being challenged in a way that I really appreciated,” he recalls.
Still, Hunsen felt intimidated by the prospect of switching his major to math—that is, until the next semester when he took a differential equations class with Bjorn Poonen, the Claude Shannon Professor of Mathematics, whom Hunsen describes as a math legend. By the time finals rolled around that spring, Hunsen was sold. “I got over my fear of math by realizing that I could take it a step at a time, and that I didn’t need to do that major in any way but the way that I wanted to do it,” he explains. “I kind of removed the artificial pressure I put on myself and just went for it.”
Now Hunsen is considering another adjustment to his trajectory: a double major in math and economics, which would allow him to continue engaging with the aspects of math he likes, while also applying math to real-world situations. “I enjoy the abstractness of math, but economics has given me a framework for understanding what’s going on in the world around me. I can immediately see what I would do with an economics degree,” Hunsen says. After MIT, Hunsen envisions pursuing economics in an academic or a government policy setting.
Hunsen’s desire to understand topics from the ground up also extends to articles he writes for MIT’s student-run magazine Infinite. A recent story explored the relationship between music and fashion: “I wanted to build the history and the background of how black music has influenced streetwear, and how that has existed for the length of streetwear and black music’s existence,” Hunsen says. He has also published opinion pieces about social justice issues such as prison reform in The Tech.
In his free time, Hunsen can often be found falling into deep reading rabbit holes online. “It’s fairly random—I follow a bunch of news media sites on social media, and whatever they post, I’ll follow that to the article and fall into a hole from there,” he says. For example, Hunsen recently parsed Ta-Nehisi Coates’s “The Case for Reparations” in the Atlantic, using the article’s citations to find books and papers on sociology and African-American studies.
What motivates Hunsen to keep exploring new paths? “On some level it’s just as simple as doing what I like to do and knowing that I’m going to be able to continue doing it even more.”