If we know one thing about solar energy’s promise, it’s that sun-powered systems make the most sense in sunny climes. But is that the whole story? No, it’s not, according to a recent report by two MIT researchers. Say you put a 1,000-watt solar system in an Arizona desert, and the same system in colder and cloudier Ohio. The researchers note that the Arizona system would yield 25 percent more kilowatt-hours of electricity than the Ohio one — no surprise there. But because the coal typically burned in Ohio has much higher sulfur levels than Arizona’s, the environmental payoffs from the more northerly solar installation would be more than three times those of its Arizona cousin. The researchers, graduate students Michael Adams and Katherine Martin of MIT’s Engineering Systems Division, made the finding as part of their ongoing analysis of solar power’s environmental impacts. Stephen Connors, director of MIT’s Analysis Group for Regional Energy Alternatives and a staff member of the Laboratory for Energy and the Environment (LFEE), is overseeing the project. LFEE research engineer Edward Kern, a solar expert, is also part of the team.
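The arithmetic behind the comparison can be sketched in a few lines. Only the 25 percent yield gap and the roughly threefold payoff ratio come from the article; the annual output and the SO2 emission factors below are hypothetical placeholders, chosen simply to be consistent with those two ratios.

```python
# Illustrative sketch of the Arizona-vs-Ohio solar payoff comparison.
# All specific numbers are assumptions, not figures from the study.

def avoided_so2(annual_kwh, so2_kg_per_kwh):
    """SO2 emissions avoided when solar output displaces coal-fired power."""
    return annual_kwh * so2_kg_per_kwh

kwh_arizona = 1750.0           # hypothetical annual output of a 1 kW system
kwh_ohio = kwh_arizona / 1.25  # Arizona yields 25% more (per the article)

so2_arizona = 0.002   # kg SO2 per kWh, low-sulfur coal (illustrative)
so2_ohio = 0.0075     # kg SO2 per kWh, high-sulfur coal (illustrative)

payoff_az = avoided_so2(kwh_arizona, so2_arizona)
payoff_oh = avoided_so2(kwh_ohio, so2_ohio)

# Despite the lower Ohio yield, the dirtier displaced coal dominates:
print(payoff_oh / payoff_az)  # → 3.0
```

The point the sketch makes is that the payoff ratio is (Ohio yield / Arizona yield) × (Ohio emission factor / Arizona emission factor): a sufficiently dirty displaced grid outweighs a sunnier climate.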
Crystalline structures that look like “tiny beach balls or blueberries” are among key pieces of evidence from the current Mars expedition that water once flowed on that planet’s surface. So says MIT’s John Grotzinger, who spent the most recent semester planning the rover tours of the Red Planet for the National Aeronautics and Space Administration. But why do images of the minuscule spheres build the case for a once water-rich Mars? Volcanic or meteoric events could produce the same structures, says Grotzinger. But because the spheres found by the rover Opportunity are widely dispersed, the water explanation works better. And there’s other supporting evidence, too: disk-shaped indentations in rock, about a third of an inch across, indicating that a sulfate mineral, possibly gypsum, had at some point dissolved in water; the presence of another mineral, jarosite, that essentially won’t come into being without water; and surface rock formations with structures specifically suggesting that water once flowed over them. “Our observations together make a strong case that water was widely present on the Mars surface” millions of years ago, says Grotzinger, a professor of earth, atmospheric, and planetary sciences.
LOW-TAR NOT LOW ENOUGH
An exhaustive study of the cancer-related advantages of low-tar cigarettes has reached a disquieting conclusion: there basically aren’t any. The study, led by MIT economist Jeffrey Harris, reviewed the records of nearly 1 million male and female smokers to look for evidence of differences in lung cancer death rates linked to the tar levels of the cigarettes these individuals favored. The researchers’ key finding is that smokers of very-low-tar (7 milligrams or less) and low-tar (8-14 milligrams) cigarettes were as likely to contract and succumb to lung cancer as those who smoked mid-tar (15-21 milligram) products. Only smokers who chose unfiltered high-tar (22 or more milligrams) cigarettes faced a higher risk. Meanwhile, all four groups had drastically greater chances of dying from lung cancer than nonsmokers. “I would suggest that we rethink our current system of rating products according to machine-measured tar deliveries,” says Harris, who is a practicing physician as well as a professor of economics and a member of the Harvard-MIT Division of Health Sciences and Technology. Involved in the study besides Harris were Michael Thun and Jeanne Calle of the American Cancer Society.
FACES IN CONTEXT
Among the things we do better than the most advanced computers is spotting a face even when it’s much too far away for us to make out features like noses or eyes. MIT neuroscientist Pawan Sinha cites the example of someone well back in a pack of marathoners headed your way. Our brains, he notes, figure out that the “diffuse blob” on top of a faraway runner’s body is in fact a face — a capacity that computer designers have yet to achieve with their vision systems. The reason we can make such judgments, obviously, is the context: if there’s a body attached, the blob must be a face. But now Sinha and his co-workers, graduate students David Cox and Ethan Myers, are unraveling just how our brains accomplish this feat. Using a sophisticated technology called functional magnetic resonance imaging, which reveals what’s going on in the brain, the researchers had volunteers look at a series of images: facial close-ups; distant, blurry faces that were in the “right” place — at the top of the body, say; and blurry faces that were either without context or in the wrong place, such as between the knees. They then checked to see which images caused a face-specific area in the brain to light up, indicating heightened activity in that region. Result? Only the clearly defined faces and the blurry ones in proper contexts caused responses. Sinha, an assistant professor of brain and cognitive sciences, says the results show that “the neural circuitry in the brain can use context to compensate for extreme levels of image degradation.” He adds that the finding could have clinical applications, among them providing new ways to identify conditions like autism, in which people have trouble integrating different types of information. The work was done as part of a major collaborative project between MIT and Boston’s Children’s Hospital.