Why do some companies retain happy employees, while others find their ranks thinned by burnout and absenteeism? Is it possible for companies to boost their bottom line while also boosting employee satisfaction?
Zeynep Ton, adjunct associate professor of operations management at the MIT Sloan School of Management, answered these questions in her landmark 2014 book: The Good Jobs Strategy: How the Smartest Companies Invest in Employees to Lower Costs and Boost Profits (Amazon Publishing/New Harvest). Now, she works with a growing number of companies to implement the Good Jobs Strategy through MIT Sloan and the nonprofit Good Jobs Institute. Spectrum asked her to describe the results.
What problem has your research solved?
ZT: A widespread assumption is that people are just a cost, and that companies should work to minimize that cost. But I’ve found that underinvestment in people leads to operational and customer service problems, which lead to lower sales, which lead to shrinking budgets. This vicious cycle is costly for investors. It hurts customers. It is downright brutal on workers, from their wages to their schedules to their treatment and dignity. Everyone loses. There has to be a better way.
What sector could benefit the most from your work?
ZT: My research has been in retail, but the framework is applicable to other settings. Quest Diagnostics recently applied it in its call centers. If we want to fix income inequality and increase median wages, we need to transform low-wage service jobs first: retail sales, cashiers, food workers. Their median wages are at poverty level, their schedules are unpredictable, and employees often lack meaning or purpose.
How do we fix the issue?
ZT: When I examined successful companies, like Mercadona or Costco, I found that they created an entirely different, human-centered operating system to make their employees productive. There are four components to my strategy. First, offer less. Think of how many kinds of toothpaste a typical store offers. How long does it take someone to shelve them all and be knowledgeable about each type? Offering less can reduce costs and increase customer and employee satisfaction. Second, standardize and empower, instead of adopting a culture where rules come from the top down. More people will follow company standards if they feel like they’re involved in creating them and empowered to improve them. And how many times have you heard, “Sorry, we can’t help you, this is against our policy”? A smart company knows to empower employees to provide customer service at their discretion. Third, cross-train.
Many retailers manage variability in customer traffic and needs by changing the number of employees, which creates unpredictable work schedules. Instead of changing quantity, change what employees can do. Take Mercadona: If you’re standing in line with nobody to help you, you don’t hear, “Sorry, that’s not my department.” Chances are, any employee you see can leave the soup aisle to come help. Finally, operate with slack.
Many retailers cut corners by understaffing. Model retailers overstaff, building in slack—instead of being so busy coping with issues caused by understaffing, employees can spend time looking for ways to improve and innovate. The combination of these choices with investment in people and strong values is what simultaneously produces great outcomes for workers, customers, and investors.
Why is your approach different from others?
ZT: It appeals to people’s heads but also their hearts. I recently asked the co-CEO of a retail chain based in Washington state, “This requires a huge transformation. Why take it on?” He said, “I wanted to create an organization that would stay around, but there’s also a moral argument: Why on Earth not do this?”
It’s been so rewarding to work with these organizations. It’s inspiring to see how excited people are, because the way that they change not only will make their companies more successful, but will also affect the lives of people—vulnerable people. My mission is to improve the lives of low-wage workers in a way that benefits customers and companies. It has to benefit companies, or it’s not going to be sustainable.
Developing this capacity in machine learning could better equip it for human interaction and a host of medical applications
Tommi Jaakkola addresses a packed auditorium during the popular course he co-teaches with three other faculty members, 6.036 Introduction to Machine Learning. Photo: Lillie Paquette
The hottest ticket on the MIT campus this fall isn’t a seminar on virtual reality, a talent-scouting hackathon, or a robotics demonstration. It’s an undergraduate class in the Department of Electrical Engineering and Computer Science (EECS) whose humble course catalog label, 6.036, belies its surging popularity. The subject, Introduction to Machine Learning, was first offered in 2013 and now attracts hundreds more students than can fit into a 500-seat lecture hall. In addition to enrolling droves of EECS students, 6.036 brings in registrants from nearly every discipline MIT offers, from architecture to management. The irresistible draw? A chance to get a jump on the most powerful driver of technology innovation since Moore’s Law.
If artificial intelligence is the rocket ship to which tech giants like Google, Apple, and Facebook have strapped their fortunes, machine learning (ML) is the rocket fuel, and boundaries between the two are increasingly blurred. While machine-learning techniques are intricate and various, the discipline differs from traditional computer programming in one foundational way: instead of writing out in advance all the rules that govern a piece of software’s behavior, machine learning attempts to equip computers with a means of inferring those rules automatically from the various inputs and outputs they encounter. Consider email spam filters (one of the earliest ML applications): it would be impossible to predict every possible instance of what counts as spam. So instead, spam filters learn directly from the data (and from the labels on those data that users provide), making the application more flexible, more automated, and more effective over time.
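To see the contrast in miniature, consider the sketch below. It assumes the scikit-learn library and a four-message toy dataset invented for illustration; rather than hand-writing spam rules, it lets a model infer them from labeled examples.

```python
# A minimal sketch of learning rules from data instead of writing them by hand.
# Assumes scikit-learn; the tiny "dataset" here is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Win a free prize now",           # spam
    "Lowest price, click here",       # spam
    "Meeting moved to 3pm",           # not spam
    "Draft of the quarterly report",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()           # turn raw text into word-count features
features = vectorizer.fit_transform(emails)

model = MultinomialNB()                  # a simple probabilistic classifier
model.fit(features, labels)              # infer the "rules" from the data

# The model now generalizes to messages it has never seen.
print(model.predict(vectorizer.transform(["Claim your free prize"])))  # likely [1]
```

Nothing in this program states that “free” or “prize” signals spam; the model inferred that from the examples, which is exactly the shift in approach the course explores.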
Building on strong underpinnings in computer science, optimization, and mathematics with a recent wave of new faculty hires, MIT has amassed “very good expertise in the foundations, theory, algorithms, and some applications of machine learning,” says Stefanie Jegelka, the X-Window Consortium Career Development Assistant Professor in EECS—herself one of those hires (she joined MIT in 2015). She adds that “the research activity here, and in the Boston area as a whole, fosters the kind of interdisciplinary research that increases the impact of machine learning” on applications like robotics, computer vision, and health care. But because machine learning has recently experienced an explosion in effectiveness, distilling reality from hype can be difficult. “Paradoxically, the public tends to both underestimate and overestimate machine learning capabilities today,” says Tommi Jaakkola PhD ’97, Thomas M. Siebel Professor of Electrical Engineering and Computer Science. “In the context of narrowly defined tasks such as image analysis and game playing, the potential of machine learning already exceeds public perception. But in open-ended tasks requiring flexible common-sense reasoning, or pulling together and combining disparate sources of information in a novel way, the imagined capabilities may reach somewhat beyond where we actually are.” The Introduction to Machine Learning course—which Jaakkola teaches alongside three other instructors—appeals to students as a means of accessing the ground truth of the field as a whole.
Machine learning and medicine
One of those 6.036 co-teachers is Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, who has an acutely personal interest in figuring out the big questions in machine learning. In 2014, she was diagnosed with breast cancer, and a portion of her current work is focused on making ML more relevant to medicine, where it might someday unlock advances in cancer diagnosis and personalized treatment. Since so much medical data is stored in the form of written records and other text-based documents, Barzilay’s background in natural-language processing (NLP)—a computer-science discipline that focuses on engineering software that can interpret written and spoken language—provided a toehold for applying machine-learning techniques to medical problems. It also drives Barzilay’s concern about the “interpretability” of ML models when used in medical diagnoses. “Today the majority of the machine learning field focuses on making accurate predictions,” Barzilay explains. “But for medical predictions, accuracy isn’t enough—you really need to understand why the software is making a recommendation, and you need to make that process transparent. That means generating rationales that humans can inspect and understand.”
The rise of big data has been instrumental in driving advances in machine learning, because the software models—especially ones relying on a technique called deep learning—“are very data-hungry,” Barzilay says. For companies like Google or Facebook that serve more than a billion users, this appetite is easily sated. In the medical field, a similar surfeit of data exists. But because medical records are not standardized, ML models—which must be “trained” to recognize relevant features in data by feeding them thousands of hand-labeled examples—run into problems that social network recommendation engines and email spam filters don’t.
“Let’s say you are working with diabetes, and you labeled text-based diagnosis data for one hospital in great detail,” Barzilay explains. “Now, you go to another hospital and you still want to be able to predict the disease. Most likely, the performance of the machine-learning model will be degraded because they put the same kinds of data in a different format.” That means re-training the model again and again—an expensive and impractical prospect, especially for life-or-death medical matters. The key question, says Barzilay, is how to develop models that can transfer their initial learning to new data sets with much less “supervision,” as it’s technically termed, while retaining their predictive powers.
A new research paper by Barzilay, Jaakkola, and Barzilay’s PhD student, Yuan Zhang, offers the beginnings of an answer. They use a machine-learning technique called adversarial training, in which two systems learn about the same data by pursuing competing goals. In Barzilay and her collaborators’ work, one ML system learns how to classify data according to labeled examples—for example, patient pathology reports noting evidence of cancer. Meanwhile, another ML system learns how to discriminate between the cancer labels and another kind of evidence that may also be present in the reports, albeit less extensively labeled—say, evidence of lymphatic disease. By working in concert, the two ML systems teach each other how to correctly classify evidence of cancer and lymphatic disease, despite the dearth of training data for the latter. Since individual medical records almost always encode many different aspects of a patient’s health, such a model could offer a powerful way of automating disease detection.
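The general pattern is easier to see in code. The sketch below, which assumes PyTorch and synthetic stand-in data, illustrates adversarial training in its generic, domain-adversarial form: two networks with competing objectives trained over a shared representation. It is a simplified illustration of the pattern only, not the authors’ actual architecture or data.

```python
# A generic, simplified sketch of adversarial training, assuming PyTorch.
# Data, dimensions, and labels are synthetic stand-ins, not the authors' model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "reports": 100 feature vectors with a well-labeled task (cancer)
# and a second, sparsely labeled task (lymphatic disease).
x = torch.randn(100, 20)
y_cancer = (x[:, 0] > 0).long()
y_lymph = (x[:, 1] > 0).long()

encoder = nn.Sequential(nn.Linear(20, 16), nn.ReLU())  # shared representation
classifier = nn.Linear(16, 2)                          # learns the cancer labels
adversary = nn.Linear(16, 2)                           # competes over the representation

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    h = encoder(x)

    # Adversary update: get better at reading the second label off the encoding.
    adv_loss = loss_fn(adversary(h.detach()), y_lymph)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Main update: classify cancer well while fooling the adversary, which
    # drives the encoder toward features that do not simply leak the second
    # label (the standard adversarial objective in transfer settings).
    main_loss = loss_fn(classifier(h), y_cancer) - loss_fn(adversary(h), y_lymph)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```

The tug-of-war between the two objectives is what lets such systems share what they learn across label types with far less supervision than training each task from scratch.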
Reverse-engineering intelligence
Addressing the “transfer” problem in machine learning from another angle is Josh Tenenbaum PhD ’99, an MIT professor of brain and cognitive sciences affiliated with the multi-institutional Center for Brains, Minds, and Machines (CBMM). “When you think of machine learning these days, you think of big data,” he says, “but when we talk about learning in school, it’s really generalization that we prize. How can you figure out how to take something you’ve learned from one instance and generalize it to situations that aren’t quite like the ones you’ve been in before?”
Tenenbaum’s Computational Cognitive Science group uses computational models of learning to investigate how the human mind pulls off this feat. “We’re reverse-engineering intelligence—which means doing science using the tools of engineering,” he says. “We feel that if we’ve understood something important about how the mind and the brain work in engineering terms, then we should be able to put that into a machine and have it exhibit more human-like intelligence.”
An area of machine learning called Bayesian program learning (BPL) has captured the interest of Tenenbaum and his collaborators as a means for implementing this more human-like learning capability in computers. Based on Bayesian statistics—a branch of mathematics dedicated to making precise inferences from limited evidence—BPL has been shown to enable a computer, after just one training example, to learn how to write unfamiliar letterforms (such as “A,” or a Chinese logogram) more accurately than a human can. The research, done by Tenenbaum in collaboration with his former student Brenden Lake PhD ’14 as part of Lake’s MIT dissertation, made headlines in the popular press last year.
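The arithmetic at the heart of that approach fits in a few lines. The toy update below, in plain Python, is a hypothetical illustration of Bayes’ rule only; the hypotheses and probabilities are invented and bear no relation to the actual BPL models.

```python
# Toy Bayesian update: a single observation reweights prior beliefs.
# Hypotheses and numbers are invented for illustration.
prior = {"stroke order A": 0.5, "stroke order B": 0.5}

# How likely the one observed pen trajectory is under each hypothesis.
likelihood = {"stroke order A": 0.9, "stroke order B": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # roughly {'stroke order A': 0.9, 'stroke order B': 0.1}
```

One observation is enough to shift belief sharply when the hypothesis space is structured in advance, which is the intuition behind one-shot learning of letterforms.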
Computers capable of this kind of one-shot learning—using models that more closely correspond to how our own minds work—would create a powerful complement to the capabilities already exhibited by deep-learning software, whose artificial reasoning can be inscrutable to human users. Tenenbaum’s collaborations with MIT neuroscientist and CBMM investigator Rebecca Saxe PhD ’03 focus on illuminating how social intelligence manifests in human beings, with the aim of implementing it in computers—relying on some of the same Bayesian mathematical frameworks that power one-shot machine learning. “We want to build machines that humans can interact with the way we interact with other people, machines that support ‘the 3 T’s,’” Tenenbaum says: “We can talk to them, teach them, and trust them.”
Long before machine learning reaches that apex, the discipline must coalesce from its current state—a cluster of multidisciplinary ad-hoc investigations and successes within narrow domains—into “a really systematic understanding of how to take these advances and put them in the hands of less sophisticated companies who don’t have armies of PhDs at their beck and call,” says EECS professor Sam Madden ’99, MNG ’99. As co-director (with Jaakkola) of the Systems That Learn initiative at the MIT Computer Science and Artificial Intelligence Laboratory, Madden hopes to achieve that very end—turning machine learning into a broadly understood computing infrastructure that anyone can leverage.
“I like to make an analogy with computer programming,” Madden says. “Today, using machine learning is like writing code in assembly language—very technical and low level. What I want is for it to be more like using Microsoft Excel. You don’t need a computer science degree to be able to use Excel effectively to do data analysis. It would be really cool if we could package up machine learning in a similar way.” Until then, MIT course 6.036 will likely continue on its current enrollment trend: standing room only.
In late 2015, Provost Martin A. Schmidt SM ’83, PhD ’88 asked the then-new director of the MIT Libraries, Chris Bourg, to lead a task force on the Future of Libraries, consisting of 30 representatives from MIT’s faculty, student body, and staff. A year later, the task force released its preliminary report. Spectrum asked Bourg to discuss how the MIT Libraries will evolve to advance the creation, dissemination, and preservation of knowledge.
How would you describe the primary value of libraries in tackling complex, unanswered questions?
CB: To my mind, and this is the philosophy I have brought to the MIT Libraries, the primary value that libraries provide to scholars and researchers is not answers, but rather the tools, resources, collections, expertise, and space to productively explore a range of questions and to be inspired to ask new questions.
There is a popular quote by author Neil Gaiman: “Google can bring you back 100,000 answers. A librarian can bring you back the right one.” I don’t think that’s quite right—in many cases, Google is quite good at bringing back answers to factual questions; but libraries and librarians are much better at helping scholars and researchers develop strategies for tackling complex questions that require interdisciplinary approaches. We connect people to ideas and knowledge that they didn’t know they were looking for—which is often how breakthroughs on big questions and hard problems happen.
Is the role of MIT’s research libraries changing in the 21st century, or just their tools and platforms?
CB: I think the fundamental role of research libraries will always be to provide enduring, abundant, equitable, and meaningful access to knowledge. Certainly, the tools and platforms for doing that will continue to evolve, as the forms by which scholars express, consume, and analyze knowledge move from static, physical forms to dynamic, interactive, networked digital forms.
In today’s environment, for example, providing access to knowledge includes having a licensed drone pilot on the MIT Libraries staff, who accompanies an EAPS [Earth, Atmospheric and Planetary Sciences] class on a research trip to Death Valley to obtain 3-D images of terrain the students could not access on foot. Another change is that modern research libraries must ensure that our collections are accessible not just to human readers, but also to text- and data-mining applications, algorithms, and machine-learning tools. And at the MIT Libraries, we are responsive to our community’s desire to interact with our content in more active, innovative, and participatory ways—through annotation, mashups, and other creative uses and reuses. This is what we mean in the Future of Libraries report when we call on MIT and the world to “hack the library.”
The report recommends an ambitious increase in the digitizing of analog collections. What are some of the priorities?
CB: As we ramp up our digitization efforts, we are looking to prioritize collections that are distinct to MIT, and that are likely to have greatest value to the scholarly community. Because we have high-quality digital versions of MIT theses only from 2008 forward, the MIT Theses Collection is exactly such an example. But with over 109,000 theses published by MIT graduates from 1868 to 2007, we will have to prioritize within this collection as well. We also have a sizable collection of MIT-generated technical reports that are not available online. Both of these collections represent some of MIT’s unique contributions to scholarship, and digitizing them is the best way to increase their accessibility and impact.
What trends are you seeing in terms of how researchers on campus—as well as off-campus and non-MIT-affiliated researchers—are engaging with the MIT Libraries?
CB: What is interesting is that while we have seen an expansion of the ways in which researchers engage with the libraries in online environments, the role the libraries play in providing space for quiet contemplative work has remained very important to faculty, students, alumni, and other researchers. Researchers also are increasingly coming to the libraries to learn about and use new kinds of tools and techniques for engaging in digital scholarship. We are seeing a shift from the library as a place for consuming knowledge, to a place to both consume and to create new knowledge.
In addition, the MIT Libraries are a highly sought-after space for a variety of scholarly events: book talks, student orientation events, panel discussions on current event topics. So while the MIT Libraries are working hard to make our collections and services broadly available online, we also have to continually update our physical spaces to ensure they support the full range of research, teaching, and learning needs of our community. Finally, scholars and students alike are also increasingly looking to the libraries as partners in research and development, especially in creating new tools and technologies to reinvent scholarly communication and improve learning and research.
In the late 16th century, William Lee, a humble parish priest in Calverton, England, invented a machine to relieve his mother and sister from the drudgery of knitting. His revolutionary “stocking frame” vastly reduced the time to knit textiles. Why, then, was it not widely adopted until two centuries later?
Daron Acemoglu, MIT’s Elizabeth and James Killian Professor of Economics, credits the winds of political change. One of the 10 most cited economists in the world, Acemoglu says the ability to empower revolutionary innovation “goes back to the institutional fabric of society.” A prolific contributor to his field, Acemoglu has written on topics ranging from the persistent wage gap between higher-skill and lower-skill workers to the cascading effects of economic shocks within particular industrial sectors. His area of deepest exploration has been the relationship between political systems and economic growth—including the ways governments shape the course of innovation.
In his recent, influential book Why Nations Fail: The Origins of Power, Prosperity, and Poverty (co-authored with James Robinson), Acemoglu uses Reverend Lee as a case in point. With the help of his local representative to Parliament, Lee got an audience with Queen Elizabeth to ask for a patent. She refused. He got the same reaction from her successor, James I, and from the king of France. Acemoglu says their reasons were the same: fear that all the people knitting clothing by hand would be put out of work, would rebel, and would destabilize the political system. Beholden to the small wealthy class that controlled banking and industry, the rulers opted to uphold the status quo.
Sweeping innovation requires an “inclusive” government, says Acemoglu—one that considers everyone’s needs and abilities, not just society’s elite. “If we only enable a small fabric of society to become leaders or scientists, it’s like having our limbs tied to our bodies,” he says.
Lee died penniless. But his invention survived, was refined by others, and helped spur one of the main industries of England’s Industrial Revolution some 200 years later. Political upheaval began in England in 1688, when both houses of Parliament united against James II, forcing him to abdicate the throne. The English Bill of Rights then gave more power to Parliament, especially to the House of Commons. By the late 1700s, commoners were allowed to take out patents, borrow money, and start companies, and the Industrial Revolution began.
Of course, Queen Elizabeth was also right. The Industrial Revolution did create instability. It destroyed many existing institutions and created horrible working conditions for factory workers. It sparked the rebellion of Luddites as well as demands for better working conditions.
As Acemoglu and Robinson write in Why Nations Fail:
Economic growth and technological change are accompanied by what the great economist Joseph Schumpeter called creative destruction. They replace the old with the new. New sectors attract resources from old ones. New firms take business away from established ones. New technologies make existing skills and machines obsolete . . . .

How do governments shape the course of innovation?
For innovation to flourish, says Acemoglu, political leadership must deal with, rather than prevent, such destruction. Inclusive governments and economic systems help people through the destructive years and empower them to participate in the new economy. “It can only work in a democracy, where there’s participation by the public,” he adds. “Most of the important decisions are in our hands.”
Acemoglu notes that countries governed by dictatorships or elected leaders who cater primarily to an elite ruling class tend to be “extractive,” not inclusive. They may increase economic growth in the short term by extracting resources, such as oil or cheap labor, and moving most of the wealth to the elite. But that can last only as long as the resources are available—or willing.
Furthermore, he cautions that even within democracies, voters must stay vigilant to ensure their societies remain inclusive, and therefore hospitable to innovation. Political candidates may succumb to wealthy interests that fund elections, while government propaganda and biased news organizations may conceal political agendas. “There’s not any guarantee,” Acemoglu says, “that democracy is self-correcting.”
Richard Brandt is a 1991–92 MIT Knight Science Journalism Fellow.
“Education and learning are fundamental to a strong society and economy… Enabling individuals to do their very best and reach their full potential, whatever their background, is a key priority for Community Jameel and the world. That is exactly why we are establishing the Abdul Latif Jameel World Education Lab with MIT.” —Fady Mohammed Jameel, president of Community Jameel International
“Through J-WEL, we will forge new and long-lasting collaborations as we learn, share, and train together, using the assets developed at MIT as well as by leveraging the community convened by J-WEL.” —Sanjay Sarma, MIT Vice President for Open Learning
The Abdul Latif Jameel World Education Lab (J-WEL), cofounded in May 2017 with Community Jameel, has the grand goal of sparking a renaissance in education. Here’s how:
Transforming 21st-century education is a goal MIT is already working on…
Steered by executive director M.S. Vijay Kumar, MIT’s associate dean of digital learning, J-WEL is an anchor entity within MIT’s open education and learning initiatives. These include the MIT Integrated Learning Initiative (MITili), devoted to the science of learning; pK–12 efforts to improve STEM teaching in primary and secondary schools; and digitally focused endeavors—such as MITx, OpenCourseWare, and the Digital Learning Lab—overseen by the William A. M. Burden Professor of Physics and recently appointed dean for digital learning Krishna Rajagopal. MIT is engaged in several collaborative initiatives that seek to improve teaching by providing flexible instructional tools and professional development opportunities for educators, including the Tata-MIT Connected Learning Initiative (CLIx), which advances educational quality and access through technology in underserved Indian schools; the Woodrow Wilson Academy, an MIT collaboration out of the Teaching Systems Lab; Fly-By-Wire, led by AeroAstro professor Karen Willcox; and MIT-Educator, which focuses on curriculum design and pedagogy for higher educators. Also among the Institute’s global education bona fides are the popular, kid-friendly coding platforms Scratch and App Inventor. MIT has a long history of collaborative formation of new universities, such as Singapore University of Technology and Design and Brazil’s Instituto Tecnológico de Aeronáutica. And on the policy front, the School Effectiveness and Inequality Initiative, with its rigorous economics research, has contributed to educational reforms.
…but it’s not a goal MIT aims to reach alone.
J-WEL will create a locus of ongoing engagement for universities, foundations, corporations, and schools from the US and around the world to define and address their own specific goals for their regions’ needs. Members of J-WEL will work with MIT resources through J-WEL Weeks, signature events held twice a year on campus, as well as J-WEL Exchanges providing deeper dives into specific aspects of education. Continued interaction with MIT faculty and staff, as well as online modules, webinars, and research briefs, will build a community of global colleagues. Kumar also notes that beyond its philanthropic support, “Community Jameel is making important contributions to J-WEL by identifying target needs, and in identifying other initiatives and agencies engaging in this space with whom we might collaborate and cooperate.” Kumar emphasizes that J-WEL will address opportunities for improving education in both developing and developed countries, including in the US: “These kinds of needs are everywhere.”
Learning is a lifelong process.
The lab will concentrate on three levels of education: pK–12, higher education, and workplace learning. On the pK–12 track, Angela Belcher (James Mason Crafts Professor of Biological Engineering and Materials Science) and Eric Klopfer (director of the Scheller Teacher Education Program and the Education Arcade) serve as faculty directors; Hazel Sive, professor of biology and member of the Whitehead Institute for Biomedical Research, is J-WEL’s director of higher education; and George Westerman, of the MIT Sloan Initiative on the Digital Economy, serves as the workplace learning lead. Among the many challenges facing these sectors, Klopfer and Belcher are eager to address a “global epidemic in science education” that favors shallow memorization, while Sive is interested in fostering problem-solving cultures and curricula relevant to students’ lives and careers. Westerman shares additional concerns about rapid shifts in the employment landscapes of both emerging economies and developed nations. “As automation reshapes the corporate workforce, the challenge goes beyond training,” he points out. “Companies, and people, need to understand what the worker of the future will be doing.” Meanwhile, for learners at all stages of life, in many parts of the world, basic access to education is a critical issue. Empowering underserved populations—such as displaced populations, and girls and women worldwide—is a guiding focus of J-WEL.
There’s more than one avenue for transforming education.
J-WEL members may choose to concentrate on new teaching methods, digital tools, curriculum design, institution formation, capacity building (including teacher training), or even on nationwide educational reforms. On the higher education track, Sive envisions members coming to MIT to “explore, redesign, and reform” their educational systems. She offers the example of MIT-Educator, now in its test phase, in which visiting participants from Tunisia have designed new courses from the ground up, informed by case studies presented by MIT faculty. In the pK–12 arena, improving tools for assessment is one way to help shift the conversation around learning. Klopfer points to research by MITili director and neuroscientist John Gabrieli PhD ’87, the Grover Hermann Professor in Health Sciences and Technology, on how to measure executive function, an important set of mental skills for young learners to develop. “Until the assessment regimen shows the value in such things,” Klopfer says, “schools are going to be reluctant to put much emphasis on them.”
J-lab rigor will be in full force.
J-WEL’s founding is consistent with a focus by Community Jameel and its chairman Mohammed Jameel ’78 on collaborating with MIT to create a better future. The Abdul Latif Jameel Poverty Action Lab (J-PAL), established in 2003, seeks answers to poverty in a changing world. The Abdul Latif Jameel World Water and Food Security Lab (J-WAFS), established in 2014, addresses water and food scarcity issues. Both emphasize rigorous research and measurable, systemic change over time, and J-WEL will take a similar approach to education. The randomized evaluations in which J-PAL specializes are of particular interest to Kumar: “The initiatives we launch can’t just be based on an idea; they have to be founded on research and early, meaningful, quantitative and qualitative evidence.”
“Quality at scale” must be a consistent theme.
“Through connecting with MIT, J-WEL members will articulate their goals of engagement. Figuring out what resources we can bring to address those goals in a scalable way will be very important for J-WEL,” Kumar says. Belcher suggests: “Think of J-WEL as a hub whose ideas and curriculum and technologies can be distributed to all parts of the world.” Yet, as Klopfer observes, “Some of the same issues may ultimately affect kids in Detroit and in Mumbai, but they manifest themselves in different ways.” For J-WEL to maximize its impact, Klopfer suggests making connections between underlying causes “to help disparate geographic entities to simultaneously think about solutions, so we are not solving one-off problems, but rather dealing with a network of related issues.”
Education at MIT will be clarified and strengthened as a result.
“The mind-stretching way we empower our students with problem-solving skills at MIT is not how I was educated [in South Africa],” says Sive, “and not how many students in universities in the world are educated.” J-WEL will offer a chance for MIT faculty not only to articulate and celebrate their most successful educational practices, but to apply for funding to scale those up for global application in collaboration with colleagues around the world. MIT students will also have the opportunity to become J-WEL ambassadors, highlighting their own learning experiences. J-WEL is a way for MIT to share what it does best, and to analyze what it can do even better in service of global education.
Unexpected outbreaks of viruses like Zika and Ebola may seize headlines, but there is a less exotic and even more formidable threat on the horizon. “A post-antibiotic era,” says the World Health Organization, “in which common infections and minor injuries can kill—far from being an apocalyptic fantasy—is instead a very real possibility for the 21st century.”
Microbial pathogens, including the kinds of bacteria and fungi we come in contact with every day, are designed by evolution to play cat and mouse with a host’s immune system. Driven by the excessive and often unnecessary use of antibiotics—whether in animal feedstocks or to treat human infection—these nimble organisms are mutating at an accelerating pace, some capable of foiling even last-resort medications. “It’s very scary,” says Elizabeth Nolan PhD ’06, an associate professor in the Department of Chemistry, whose research on infectious disease is aimed at the problem of antibiotic resistance. “There are more and more drug-resistant strains of bacteria being found, strains that can travel around the world wherever humans go.”
In the US alone, antibiotic-resistant superbugs currently cause 2 million cases of illness and 23,000 deaths a year, according to the Centers for Disease Control. A recent British government assessment projects an astonishing 10 million annual deaths globally from superbugs by the year 2050.
Given these stakes, says Nolan, “We really need people to think outside the box.” And that is precisely what she and a contingent of fellow MIT scientists have set out to do. Deploying the latest technologies and working within and across diverse fields in science and engineering, these researchers are developing new tactics in the battle against superbugs.
Starve them out
Nolan’s chosen strategy centers on metals essential to an organism’s survival. “Humans have three to five grams of iron inside our bodies, which is critically important for our health,” she says. “Many kinds of bacteria also need this iron, but it’s hard for them to find it.” During infection, microbes and hosts compete for iron and other metals, and this contest has provided Nolan with ideas for new therapies. In a series of studies, she has investigated the metal-acquisition systems in such pathogenic bacteria as Escherichia coli and Salmonella. Inside the infected host, these bacteria fabricate molecules called siderophores, which are set loose in the environment outside of cells.
“Siderophores scavenge iron from the host, and deliver it to the bacterial cell,” says Nolan. The human immune system fights back through a metal-withholding response, which includes unleashing proteins that can capture certain iron-bearing siderophores. In short, as Nolan puts it, “There’s a total battle for nutrient metals going on. The question is whether the host outcompetes the microbe, or vice versa.” To give an edge to the host, Nolan has been exploring several strategies. One involves tethering antibacterial cargo to siderophores and unleashing them against specific pathogens. Another, in partnership with researchers at the University of California, Irvine, is designed to boost the immune system’s metal-withholding response by generating siderophore-capturing antibodies in the host. In early laboratory tests of this method, Nolan and her partners successfully inhibited the growth of Salmonella. “We are really excited about the possibility of immunizing against bacterial infections,” she says.
Nolan sees great potential in fundamental research aimed at revealing the structural and functional properties of the human immune system’s metal responses. In one recent study, for instance, she discovered that calprotectin, an abundant, metal-sequestering human protein that is present at sites of infection, has uniquely versatile properties that allow it to seize whatever metal an infectious microbe requires for its survival. This is the kind of discovery that might someday generate a new antibiotic therapy. It is another reason why Nolan is confident, she says, that “deciphering the pathways used by organisms and hosts for sequestering nutrient metals will lead to new insights for preventing and treating disease.”
Chart their defenses
With the help of the latest technologies, it is now possible to map microbial behavior in the finest detail.
“We couldn’t easily explore drug resistance before, but CRISPR technology makes it much easier for us to manipulate the genome,” says Gerald Fink, another researcher working in this area, who is the Margaret and Herman Sokol Professor in Biomedical Research at the Whitehead Institute, and American Cancer Society Professor of Genetics at MIT. Fink is using the popular DNA-editing technology CRISPR-Cas9 to unravel antifungal resistance in the human pathogen Candida albicans.
Why study fungi? Unlike bacteria, whose toxins damage host cells, they often do harm simply by growing in the wrong places—so while C. albicans “normally lives in our gut happily and harmlessly,” says Fink, it can prove deadly if it moves elsewhere (by way of catheter or prosthesis, for example). Fungi, like bacteria, can also develop resistance to antibiotics. Masters of disguise, they evolve mechanisms to evade detection by altering the composition of their cell membranes. Fink notes that there is huge natural variation in resistance among fungi. “Bacteria and fungi have been here for hundreds of millions of years, and there’s no game they haven’t played,” he adds. “We’re just trying to keep one step ahead.”
Fink previously created a working model of harmless baker’s yeast to serve, in his words, as “a paradigm for all higher cells.” Now he is working to create a comparable paradigm with C. albicans, a fungal pathogen whose invasive behavior can range from superficial skin infections to life-threatening systemic infections. Using CRISPR, Fink’s lab is systematically snipping out genes to determine which ones help C. albicans live outside of the gut and also survive immune system defenses. Fink’s lab has already found a number of genes promoting C. albicans’s drug resistance, and hopes that as the entire genome is decoded, “we can know what the enemy looks like and think about designing new antibiotics.”

CRISPR-based tools have also begun to revolutionize the detection of infection. A new method that uses a modified genome editing enzyme, Cas13a, comes from James Collins, MIT’s Termeer Professor of Medical Engineering and Science and a member of the Broad Institute at MIT and Harvard.
Collins has collaborated with Broad colleague Feng Zhang, James and Patricia Poitras Professor in Neuroscience, and others to develop a highly sensitive diagnostic platform they named SHERLOCK (for “specific high sensitivity enzymatic reporter unlocking”). Using chemicals and biomolecules freeze-dried on a piece of paper, SHERLOCK not only identifies a bacterial pathogen quickly from just a few strands of DNA, but also determines whether that microbe is resistant to certain antibiotics and susceptible to others. At a cost of 61 cents per test, SHERLOCK—which can also detect cancer mutations and viruses such as Zika—is cheap and durable enough for any clinical setting, including those in developing countries. “It’s a platform with transformative power,” Collins says.
Send in your best agents
In addition to his work in diagnostics, Collins is taking direct aim at bacterial defenses, fabricating what he calls “next-generation antimicrobial agents” that could overcome antibiotic resistance.
A founder of the new field of synthetic biology, Collins has spent much of the past decade developing intricate biomolecular models of bacterial cells that shed light on their metabolic state—how they produce and consume energy, and what conditions promote or stymie growth. He has taken a special interest in “persisters,” strains of bacteria that deviously go dormant in the presence of antibiotics, leading to the kind of chronic infections plaguing tuberculosis and cystic fibrosis patients.
Recently, Collins and colleague Graham Walker, American Cancer Society Professor of Biology at MIT, have worked out how bacterial metabolism determines whether an antibiotic “will kill the bug, stop it from growing, or make it more resistant,” says Collins. What’s more, their labs have engineered a way to manipulate bacterial metabolites, substances produced by bacteria to regulate their own development, to make these pathogens vulnerable to antibiotics.
Metabolic tuning could resensitize previously antibiotic-resistant strains, a significant prospect given that up to 50% of all the antibiotics prescribed in the US are not needed or are not optimally effective as prescribed. “This has largely been overlooked by the drug discovery community and the clinical community, but we think it’s a gold mine that can be harnessed to boost our existing arsenal of antibiotics,” Collins says.
Collins’s group is also designing new weapons to attack pathogens. “We’re looking to engineer and enhance bacteriophages, naturally occurring viruses that go after bacteria, to make them more effective.” In one venture, they have endowed bacteriophages with enzymes that break up biofilms, the gooey matrix produced by bacterial pathogens that often kicks off infections in artificial joints, implants, and pacemakers.
These new tools will be arriving not a moment too soon, he says. “Nature is remarkably clever, and the next pandemic is coming, and it could be a bacterial pathogen,” he says. “I hope we will be in a good position to address it.”
Try diplomacy
While she shares this hope, Katharina Ribbeck, associate professor of biological engineering, takes a radically different view of the problem: “We need to step out of the arms race and instead form an alliance with problematic microbes,” says Ribbeck. “But how?”
In a word: mucus. Ribbeck sees myriad opportunities for coping with problematic pathogens by exploiting this primitive product of the immune system. Trillions of microbes, many benign and performing vital functions, live inside mucus, which lines the intestinal tract, lungs, mouth, nose, and other orifices in humans, coating some 2,000 square feet of internal surface area.
“Somehow our mucus keeps microbes in check, whether they are beneficial or serious pathogens,” says Ribbeck. “Mucus doesn’t kill them, like antibiotics, but it tames them.” In recent studies of two types of Streptococcus bacteria found in saliva, one associated with cavities and the other with healthy oral conditions, Ribbeck gained insight into how the infrastructure of mucus keeps the two types in balance. “Preventing certain microbes from teaming up and surrounding themselves with a protective biofilm is at the core of mucus function,” she explains. “In this state, they can’t dominate as easily, and are more vulnerable to the immune system.”
Now Ribbeck seeks to leverage “biochemical motifs” of mucus to achieve a new repertoire of responses to microbial pathogens. She has discovered mucus components that can be used to suppress and dissolve the dangerous biofilms built by pathogenic bacteria, and she believes that natural and engineered polymers offer a way of dislodging tenacious pathogens, thereby preventing infections and empowering the immune system and antibiotics to perform better.
“We could apply our synthetic dressing on real wounds and on mucosal surface infections such as those in the digestive tract, mouth, or lungs, allowing both antibiotics and the immune system better access to subdue harmful microbes,” she says. “This could solve some of the most vexing problems related to resistance.”
Ribbeck’s strategy depends on strengthening the body’s beneficial microbes, striking a balance with pathogenic strains and encouraging a diverse microbiome. “Microbes don’t necessarily want to harm us; they just want a safe place to eat and divide,” she says. “By harnessing mucus to help us with microbes, we can domesticate them and find better ways to protect ourselves.”
A whiff of feline will send mice running for cover. That makes sense, given the propensity cats have for eating mice, but, in fact, it’s more than sensible: the behavior is built into the mouse genome. “Even a laboratory mouse that has never been exposed to cats will respond to their scent in fear,” says Gloria Choi, the Samuel A. Goldblith Career Development Professor of Brain and Cognitive Sciences, and an investigator at the McGovern Institute for Brain Research at MIT.
Most smells, however, have no intrinsic meaning to an animal. The connection has to be learned. This learning process intrigues Choi and has inspired her to study how mice link particular smells to specific behaviors. In her most recent work, she discovered that oxytocin, known as the “love hormone” because of its role in forming mother-child bonds, also plays a role in binding smells with social behaviors, such as mating or frightening off an intruder.
Early on in her career, Choi decided to study the senses because they act as gateways into the brain and make an ideal starting point for asking questions about how the brain works. Happenstance led her to join an olfaction lab, working with Richard Axel at Columbia University. Using mice, Axel had discovered the receptors in neurons that detect odors. “Smell is the primary sense mice use to interact with the world,” says Choi.
Choi stuck with olfaction because smell has deep meaning for humans. A familiar scent can be transcendent, taking one back in time or to faraway places. “It’s very personal,” says Choi, “and very experiential.”
To unlock the secrets of smell’s power to recall memories and guide action, Choi used optogenetics, a technique that uses laser light to activate neurons selectively. Such artificial stimulation allowed her to simulate the sense of smell precisely so that she could study the neural circuits that respond to odor detection. In early experiments she did with Axel, this technique allowed her to zero in on a brain region called the piriform cortex as the seat of learning about a smell.
In more recent work, she and her team trained mice to associate a particular smell with a reproductively receptive female to illustrate a positive social interaction. To illustrate an aversive one, they associated a different smell with an aggressive male intruder. The team used both genetic and pharmacological techniques to manipulate the neurons involved in learning these associations to tease apart the neural circuitry.
Choi had hypothesized that special signaling molecules would drive learned associations between smell and behavior. When looking at the possible molecular candidates, she decided to zoom in on oxytocin based on previous research suggesting its strong role in directing social behavior. The research revealed that oxytocin is required for learning social associations, but not for other types of behaviors, such as craving food or feeling stressed. Choi speculates that an array of molecules, oxytocin included, may form a molecular code that governs different types of behavioral responses to smell.
To search for that molecular code, Choi is using single-cell RNA sequencing, a technique that reveals all of the molecules at work inside a cell. This technique will allow her to form a list of candidate molecules and the receptors that detect them to study in relation to learning. “If this mechanism exists, perhaps it exists not only for learning with smell, but also for learning with any other sense,” says Choi.
This work on neural signaling is dovetailing with Choi’s other work on immune signaling in the brain. Immune molecules may also influence behaviors linked to smell. For instance, immune signals present during illness can damp the association between smell and the desire to eat. “We want to understand how the immune system modulates the brain,” says Choi. “This is one of the most exciting emerging fields in neuroscience.”
In March 2018, if all goes as planned, a SpaceX Falcon 9 rocket will send an instrument designed and fabricated at MIT, the Transiting Exoplanet Survey Satellite (TESS), into high Earth orbit. From an altitude of about 400,000 kilometers, TESS will conduct an all-sky survey that brings a new perspective to the search for planets beyond our solar system.
On the fourth floor of MIT’s Building 37, a dozen computers working in parallel will process data relayed from the orbital satellite, and about 30 MIT scientists and engineers will pore over it for clues about our astronomical neighborhood. “MIT has the overall technical and science responsibility for the mission,” says TESS principal investigator George Ricker ’66, PhD ’71, a senior research scientist at the MIT Kavli Institute for Astrophysics and Space Research. Fellow Kavli scientist Roland Vanderspek PhD ’86 serves as deputy PI. Altogether about 300 researchers from more than a dozen universities and other institutes are taking part in this NASA Explorer mission. TESS’s unique capabilities should enable it to pick out small, relatively nearby planets (within a few hundred light-years of Earth) that offer at least some of the conditions deemed necessary for life. The ambitious endeavor could give humanity its best shot yet at understanding just how unusual our planet is, or isn’t, in the grand scheme of things.
To fulfill its scientific objectives, TESS will rely on the “transit method,” looking for periodic dips in a star’s brightness that could be caused by an orbiting planet passing in front of it. “If you know the star’s size, you can figure out the planet’s size from the percentage of light that’s being blocked,” Ricker explains.
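To make the geometry concrete, here is a back-of-the-envelope version of that calculation in Python. The star size and transit depth are illustrative numbers, not TESS data: the depth of the dip is roughly (R_planet / R_star)², so the planet’s radius follows from the square root of the depth.

```python
# Back-of-the-envelope transit photometry, with illustrative numbers.
# depth ~ (R_planet / R_star)^2  =>  R_planet = R_star * sqrt(depth)
import math

r_star_km = 696_000   # a Sun-sized star, radius in kilometers
depth = 0.0001        # a 0.01% dip in the star's brightness during transit

r_planet_km = r_star_km * math.sqrt(depth)
print(f"Implied planet radius: {r_planet_km:.0f} km")  # ~6,960 km, close to Earth's ~6,371 km
```

A hundredth-of-a-percent dip against a Sun-sized star thus implies a roughly Earth-sized planet, hence the emphasis on camera stability and sensitivity described below.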
That task falls to the satellite’s four CCD cameras—precise photometers fabricated at MIT Lincoln Lab and on campus under the watchful eye of instrument manager Greg Berthiaume ’86. Vanderspek, meanwhile, is responsible for achieving the requisite stability and sensitivity of those cameras, charged with surveying 90% of the southern sky in TESS’s first year of operation and a similar portion of the northern sky a year later. Ricker estimates this will turn up approximately 3,000 “transit signals,” or instances of temporary dimming, of which 1,700 might be confirmed as planets by ground observations in the third year.
Leading the TESS science team are Harvard-Smithsonian astronomer David Latham ’61 and Sara Seager, MIT’s Class of 1941 Professor of Planetary Science and Professor of Physics, as well as an AeroAstro faculty member. Joined by new MIT physics assistant professor Ian Crossfield, they will select 100 candidates for follow-up observations, whittling that down to a list of at least 50 confirmed small, roughly Earth-sized planets, with a subset orbiting within the host star’s “habitable zone” at a distance where surface water could exist in liquid form. The Hubble Space Telescope and successors like the James Webb Space Telescope would then train their sights on the exoplanetary “Top 50,” as would large ground-based telescopes. “We’ll definitely want to look at their atmospheres to check for the presence of gases—such as oxygen, water vapor, and methane—that may be associated with life,” says Seager.
TESS will complement NASA’s Kepler mission, which has already discovered more than 2,000 confirmed exoplanets within a small patch of sky. With its broader sky coverage, TESS can find planets about 10 times closer, circling much brighter stars, making it easier to determine their mass, density, composition, and other properties.
TESS joins other exoplanetary research underway at MIT. Julien de Wit PhD ’14, a postdoc in the Department of Earth, Atmospheric and Planetary Sciences who recently accepted an offer to join the MIT faculty, is part of an international team that spotted seven Earth-sized planets orbiting the star TRAPPIST-1. Seager, meanwhile, is the principal investigator of ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics), a low-cost mission that’s set to launch a cereal box-sized satellite this year in the hopes of finding the best Earth analog yet. Says Seager, “We’re following any leads we can to learn more about exoplanets.”
At the same time, perhaps, sentient creatures on one of those worlds might be going through a similar exercise, trying to learn about Earth and its curious inhabitants.
Steve Nadis is a 1997–98 MIT Knight Science Journalism Fellow.
Architect William O’Brien Jr. opens archetypal forms to new interpretations
Like the African masks that inspired it, the Mask House’s slatted front façade serves as a physical and spiritual threshold, marking the border to a solemn dimension, and promising refuge to those who enter it. IMAGE: WOJR
At first glance, WOJR Organization for Architecture looks like any other design office in Cambridge—a single ground-floor room sandwiched between two lopsided storefronts set on a minor city thoroughfare. Inside, three associates toil at workstations at a long table. Renderings, mockups, and a library adorn the walls around them.
But a closer look reveals a practice that is like few others, a teeming space where past, present, and future eddy in silence. At one workstation, a young associate charts the locations of Narragansett Indian burial sites on Rhode Island’s Block Island, to avoid building atop the relics. A polished stainless-steel arch—a scale model for a competition—hovers near the edge of the communal table, portal to an unseen but imminent universe.
On the side wall, perched on a bookshelf, an African mask scowls like a stern sentinel, flanked by photographs of the anthropomorphic mask sculptures WOJR prepared for a Spring 2017 exhibition in Switzerland. The masks are abstractions from forms and volumes in WOJR’s Mask House, a contemplative home and retreat created for a client in upstate New York.
“We believe that the making of architecture is the making of artifacts,” says William O’Brien Jr., eponymous founder and principal of WOJR, and faculty member since 2009 in the MIT Department of Architecture, where, among other duties, he coordinates the first semester studio in the Master of Architecture program. “They are both objects that are imbued with meaning that the viewer then unpacks. We want to bridge the gap between inception and perception, to imagine how an object or form will be interpreted at the same moment we’re making it.”
More than an act of will, architecture for O’Brien is an act of cultural expression—an inquiry into the archetypal forms he finds in prehistoric Iceland or baroque Rome, as well as into the meanings present and future viewers will glean from them. J. Meejin Yoon, head of the Department of Architecture at MIT, characterizes O’Brien’s work in terms of “elucidation, refinement, and crystallization.” In her view, “He bridges formal and material concerns of the discipline with representations that test, balance, and literally draw original ways forward with great mastery.”
While his work is firmly rooted in history, O’Brien brings to it a vital digital fluency. He entered the profession at a pivotal time, just as it had begun to fully digest the digital design technologies that had so radically transformed—some might even say hijacked—it. “We’re in a post-digital moment in architecture,” he explains. “For designers of my generation, and even more so in the next generation, we are increasingly facile with the range of digital methodologies. We’re no longer entertained, in the same way as in the late ’90s and early 2000s, by the formal novelties that can be produced by complex digital processes. Instead, we can go back to thinking conceptually, and use these technologies to support that thinking.”
Studies in perception
Born and raised in Stow, Massachusetts, O’Brien was an undergraduate at Hobart College hovering between career options in music and design before he opted to pursue a master’s in architecture at the Graduate School of Design at Harvard University. In 2010, he was a finalist for the MoMA PS1 Young Architects Program and a winner of the Design Biennial Boston Award. In 2012, he received a Rome Prize Fellowship in architecture at the American Academy in Rome. The following year, Wallpaper* named him one of the world’s top-20 emerging architects, and Architectural Record gave him its Design Vanguard Award.
O’Brien has earned his accolades by seeking out unusual situations and challenges that test and expand his vision. In a house design for a site in Durango, Colorado, he took a very familiar postwar form—the A-frame house—and repeated it in an idiosyncratic, asymmetrical chain. When two brothers asked for twin houses for their land in upstate New York, he drew on a mathematical principle called minimal dissections to create a square house and a hexagonal house made up of the same parts. His modular approach to questions of form—examining, adapting, and assembling old shapes in new ways—could also create new possibilities and processes for builders.
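The mathematics behind those twin houses can be made concrete. By the Wallace–Bolyai–Gerwien theorem, any two polygons of equal area can be cut into a finite common set of pieces; a minimal dissection asks for the fewest such pieces. Before the square and hexagonal houses can share identical parts, then, their footprints must enclose the same area. As a minimal sketch of that condition (with dimensions chosen for illustration, not taken from WOJR’s plans), a square of side s and a regular hexagon of side a match areas when:

\[
  s^{2} \;=\; \frac{3\sqrt{3}}{2}\,a^{2}
  \qquad\Longrightarrow\qquad
  s \;=\; a\,\sqrt{\tfrac{3\sqrt{3}}{2}} \;\approx\; 1.61\,a .
\]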
Some of his designs become buildings. Some of them don’t. One of his most ambitious designs, however—the suggestive and secluded Mask House—is slated for construction by 2018. Set in Ithaca, New York, the 587-square-foot domicile was commissioned by a filmmaker whose younger brother drowned in a nearby lake. It is an almost otherworldly response to the client’s desire for contemplation and sanctuary. One approaches the stilt-supported home on a dark metal gangway. Perched above a hillside, its façade is concealed behind a broad, slatted wall—a barrier that marks the passage between inside and outside, between man and nature, even between life and the hereafter. The interior is dominated by a large central room with light-colored wooden paneling, broad wall-sized windows, and a conical metal fireplace and chimney—an aesthetic that speaks in potent silences. A small sleeping nook is sculpted out of the far wall.
In 2015, O’Brien was invited to display his work by BALTSprojects, a gallery in Zurich, Switzerland, that specializes in art and architectural exhibitions. His first impulse was to display drawings and renderings from the Mask House. Instead, he and his colleagues decided to build masks—three-dimensional sculptures in metal, wood, and marble, all inspired by the forms and volumes of the Mask House. The sculptures are studies in perception—a distillation of the experience one might feel approaching, entering, and living in the Mask House. Yet they are also discrete objects that exist and communicate on their own. “I wanted to explore the domain between architecture and art,” says O’Brien. “To create objects that say something about architecture but that aren’t architecture in themselves, conceptual pieces that can get us thinking about architecture in a new way.” The show, which closed this past May, is scheduled to travel to several European venues next year.
While he never pursued music as a profession, it continues to inform his research and practice. Both, for O’Brien, are inquiries into form. “I was fascinated by music theory,” he says. “I loved learning about the different musical formats, from classical to contemporary, and about how musicians deviated from those formats—while still acknowledging them—to create something new.”
It would not be accurate to call O’Brien a traditionalist or an iconoclast. Architecture for him is an infinite discussion on the continuum between past and future, maker and viewer, idea and matter. And in this discussion, he thinks, the architect can best serve as moderator. “I don’t believe that architects create meaning,” he says. “I think we need to envision how our creations will be received. And, whenever possible, to reduce the possibility that these creations will be wildly misinterpreted, whether today or tomorrow. We’re so deeply steeped in history. We don’t have to reinvent the wheel. Our task is to take something that we think we know, and then by subtle alterations and processes of defamiliarization, make something new from the known.”
Innovation and entrepreneurship are not one and the same, although aspiring innovators often think of them that way. They are told to get an idea and a team and to build a show-and-tell for potential investors. Luis Perez-Breva PhD ’07 describes another approach in Innovating: A Doer’s Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong (The MIT Press, 2017).
A serial entrepreneur, Perez-Breva has honed this approach during his decade at MIT as originator and lead instructor of the Innovation Teams Program jointly operated by the School of Engineering and MIT Sloan School of Management. In his book, he shows that to start innovating does not require an earth-shattering idea. All it takes is a hunch, which you then give the structure of a problem. As Perez-Breva writes, “Innovations accrue their novelty as you innovate. They are more easily deemed innovations in hindsight than at their beginnings. In hindsight they can be judged by how they ultimately empower others—a community—to achieve new things.”
In this excerpt from Chapter 2, Perez-Breva discusses how to get started on solving a big problem that initially may feel out of reach.
Making the problem tangible
The problem you are proposing to solve very likely lives at a scale far beyond your immediate resources. So, you need to find an easier or more accessible version of your innovation problem to solve. Put another way, you need to bring the problem down to a scale that matches the resources you have for understanding it.
For a mathematical problem, a figure would help you turn something otherwise abstract into something tangible that helps your intellect and senses work together. Blueprints or models serve the same purpose for practical and engineering problems. For entrepreneurship and innovation, you might use a slide deck. But that can help you only so much; a lot remains abstract. If you’ve brought your problem to a resource-friendly scale, though, there are other things you can do to realize the same value that figures offer in solving mathematical problems. You can physically prototype any aspect of your problem, or, as [Hungarian mathematician George] Pólya puts it, you can build a prototype that “assumes the condition of the problem satisfied in all of its parts.” That, of course, might include a gizmo, but it can also include an organization, distribution, marketing, manufacturing, and so on. At that scale, you don’t have to limit yourself to prototyping form alone. You should strive to prototype function.
So, you can generalize problem solving to innovating if you work on bringing the problem first to a resource-friendly scale, and work at that scale to make the problem tangible, prototyping all aspects of your eventual solution—and, in so doing, implicitly outlining all areas in which innovations may be required. After that, your task will be to understand your problem at one scale and work toward scaling up successive demonstrations of the problem.
The final problem you’ll solve will likely differ substantially from the problem you thought you were solving at first. That is because your first expression of the problem was, with high probability, ill informed if not outright wrong. That’s all right; the purpose of your first hunch was to get you started. You’ll discover how wrong as your innovation prototype evolves toward scale.
Scale
The objective of bringing a problem to a different scale is to enable quick and tangible experimentation on the aspects of the problem most critical to moving forward. It is also helpful to begin separating the nature of the problem from the magnitude of the impact to which one aspires.
There are many ways to work on the scale of a problem. A common one is to change the size of the community—for instance, “Let’s start with five people and then ramp up to twenty.” But there are other, more effective ways to bring a problem to a scale more amenable to quick experimentation—for instance, introducing assumptions to extract the most knowledge and impact from the resources at hand.
Let me give you two examples of the interplay between resources and scale.
In class, a group was interested in devising a system to detect infectious diseases quickly. To realize their initial vision, they would have needed a lab with biosafety level 2 or 3. The stage their hunch was at, though, did not justify the investment in resources and skills required to access and use such a facility.
They could have stopped at that. Instead, they introduced an assumption of scale: a strawberry is a bacterium. The effect of this particular assumption of scale was (a) because of how easy it is to extract DNA from a strawberry, they could forgo dealing with the complexity of establishing what constitutes a good enough sample of bacterial DNA for their experiments; (b) they didn’t need the extra resources of a biosafety lab right away; and (c) getting out of the straitjacket of thinking in terms of the biosafety lab freed them to think about a simple device with which to experiment on the problem. After that, it took only a week to bring together the parts and knowledge they needed and come up with a plan for how to move forward. They were able to articulate their plan by building on conversations with industry experts and a demonstration of a small working device.
More generally, working on the scale of the problem also introduced a very convenient change to the sequence of proofs of concept. Once their device was ready, the knowledge they would need next would transcend their assumption of scale. At that point, they would need access to the specialized lab only to test for that specific knowledge, and they could either contract out the testing or rent the lab whenever they were ready to move forward.
In a different setting, during a lecture in which I challenged students to think about how to prototype their ideas (form and function) all in one day, a team complained that their idea could not be prototyped. They were thinking about a pill that would emit a signal when dissolved in the stomach and help measure patient compliance. Their main concerns were miniaturizing the electronics to fit in a pill, any regulatory unknowns that might exist, and what a safe signal strength would be.
On that occasion, the answer to the question of scale was to assume a much bigger human. In other words, miniaturizing electronics and inserting them in a pill were considerations for down the road—considerations that might indeed require significant innovations. At that day’s stage of their thinking, though, the team needed to characterize the problem. Coating the necessary electronics in cereal, choosing a container of the right size to simulate a stomach proportional to the size of the pill, coating the simulated stomach with material of a density similar to that of a human body, and then developing a number of test scenarios would at least give their questions the next level of specificity they would need to outline all the manufacturing steps, regulatory measurements, and reasoning about the mechanisms by which they could actually measure compliance.
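The logic of that scale assumption reduces to one line of arithmetic: multiply every length by the same factor, so the geometry of pill and stomach stays faithful. As a minimal sketch (the dimensions here are illustrative assumptions, not figures from the class), a real pill of about 2 cm tested with a 10 cm mock-up implies a scale factor of 5, so a stomach of roughly 15 cm would be simulated by a container of about 75 cm:

\[
  k \;=\; \frac{L_{\text{mock pill}}}{L_{\text{pill}}} \;=\; \frac{10\ \text{cm}}{2\ \text{cm}} \;=\; 5,
  \qquad
  D_{\text{container}} \;=\; k \cdot D_{\text{stomach}} \;\approx\; 5 \times 15\ \text{cm} \;=\; 75\ \text{cm}.
\]

Keeping that ratio fixed lets the team explore questions of form and behavior right away, such as how the coated electronics settle and dissolve, while true-scale questions such as safe signal strength are deferred to a later proof of concept.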