In an era when “alternative facts” and “fake news” are political catchphrases, surveys are a crucial snapshot of public opinion. Like voting, surveys provide a megaphone to the electorate.
“In democracies, where the citizens have a voice in the direction of politics, we obviously want to know: What does the public think?” says Adam Berinsky, the Mitsui Professor of Political Science. A specialist in measuring public opinion, Berinsky designs surveys that strive for impartiality and accuracy.
He says well-designed surveys accurately capture public opinions and reveal how firmly the public holds them.
“In survey design, there are two questions we need to ask. The first is, ‘Whom do we interview?’ The second is, ‘What questions do we ask?’” he explains.
The “who” question has grown complicated. A generation ago, surveys were conducted via old-fashioned telephone, with pollsters cold-calling people. He dubs the 1980s the “golden age” of polling, when almost everyone had a landline, and actually answered it. These days, fewer than 10% of people agree to be interviewed, he says, thanks to caller ID on cell phones and other changing technologies.
For that reason, Berinsky’s surveys are typically conducted online. A good survey, he says, is “concise, clear, intelligible.” He aims for efficiency, taking no more time than a typical old-school phone call.
“Going into the survey, I know about how many questions I can ask people in 10 to 15 minutes, which still gives me time to ask them 30 to 40 questions,” he says.
Of course, if you wanted a 100% accurate measure of public opinion, you would have to survey the entire population. That is impractical in a nation of 330 million people, so good surveys rely on random sampling instead.
“Think of it as a doctor taking a blood test. They can’t drain your whole blood supply, so they take a sample. If your cholesterol is too high in that sample, chances are it is throughout your bloodstream. That’s the beauty of the survey: we can learn about the whole American public without having to talk to everyone,” he explains.
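The blood-test logic can be made concrete in a short simulation. This is a minimal sketch, not Berinsky's own procedure: the population, its size, and the 60% opinion rate are invented for illustration, but it shows how a simple random sample of 1,000 recovers a population-wide figure without contacting everyone.

```python
import random

random.seed(42)

# Hypothetical population: 330,000 simulated citizens, 60% of whom
# hold a given opinion (1 = agree, 0 = disagree). We know the true
# rate only because we constructed the population ourselves.
population = [1] * 198_000 + [0] * 132_000

# Simple random sample: every citizen has an equal chance of selection.
sample = random.sample(population, 1_000)

estimate = sum(sample) / len(sample)
print(f"true rate: 0.600, sample estimate: {estimate:.3f}")
```

The estimate lands close to the true 60% even though only about 0.3% of the simulated population was "interviewed," which is exactly the point of the analogy.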
To ensure an appropriately broad swath of the population is surveyed, pollsters use a practice called multistage sampling, which divides a population into clusters. In this system, a canvasser might select 10 states, then choose 10 towns within those states, and, finally, identify 10 neighborhoods within those towns for polling. Such samples are the gold standard of the field.
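The 10-states, 10-towns, 10-neighborhoods scheme described above can be sketched as nested random draws. The sampling frame here is entirely made up (generic state, town, and neighborhood labels), so this illustrates only the structure of multistage selection, not any real polling frame.

```python
import random

random.seed(0)

# Hypothetical sampling frame: states contain towns, towns contain
# neighborhoods. All names are invented placeholders.
states = [f"State-{i}" for i in range(50)]
towns = {s: [f"{s}/Town-{j}" for j in range(40)] for s in states}
neighborhoods = {t: [f"{t}/NBHD-{k}" for k in range(25)]
                 for town_list in towns.values() for t in town_list}

# Stage 1: draw 10 states; Stage 2: 10 towns within each state;
# Stage 3: 10 neighborhoods within each town.
chosen = []
for state in random.sample(states, 10):
    for town in random.sample(towns[state], 10):
        chosen.extend(random.sample(neighborhoods[town], 10))

print(len(chosen))  # 10 * 10 * 10 = 1,000 neighborhoods
```

Clustering like this keeps fieldwork practical: instead of scattering interviews across every town in the country, the canvassing effort concentrates in a manageable set of randomly chosen areas.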
In addition, Berinsky says, simple random samples have two important properties: each individual is chosen for inclusion in the sample by chance, and each member of the population has an equal chance of being included in the sample.
For national surveys, Berinsky typically aims to survey 1,000 respondents. “That sample size is adequate to describe national public opinion with reasonable certainty,” he says. Since this swath is so broad, though, he always aims for understandable questions, with clear response options asked in logical order.
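Why 1,000 respondents counts as "reasonable certainty" can be seen from the standard margin-of-error formula for a proportion. This is a textbook calculation, not one the article spells out; it uses the conservative assumption p = 0.5, which maximizes the variance.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1_000, 2_000):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
```

At n = 1,000 the margin is about ±3.1 percentage points, and doubling the sample to 2,000 only narrows it to about ±2.2, which is why national polls rarely pay for much larger samples.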
Where should voters go for reliable survey data? Berinsky says that while many news organizations are accused of bias, their surveys are generally reliable and agenda-free. “These are people who really want to get the polls right; they win when they call the outcome correctly and get the story right,” he says. As for the next election cycle? The jury is still out.
“In the last 20 years, it has become harder and harder to conduct polls in advance of elections. In the last couple election cycles, for example, for whatever reason, Democrats were more willing to talk to pollsters than Republicans. Our polls, therefore, have tended to overestimate support for Democratic candidates,” he says. “This is something we need to keep in mind when we think about polls going forward. Good polls measure public opinion writ large. That is always the goal.”