Assoc. Prof. Josh Dyck’s election analysis and views on polling have been featured in The New York Times, Time, Reuters, ESPN the Magazine, the Globe and Mail and countless other media outlets. A member of the Political Science Department faculty, Dyck is also co-director of the university’s Center for Public Opinion, where he leads national polls on topics ranging from sports-related concussions to federal elections.
For the past two years, he has also served as pollster for Boston’s WHDH-TV, an NBC affiliate. He recently talked to the UMass Lowell Alumni Magazine about the seemingly inscrutable business of voter polling. For the full interview, check out the fall issue of the magazine.
Q: How has election polling changed with cellphones, the web and other technology?
JOSH DYCK: As you’d probably guess, 20 or 30 years ago everyone was doing their polling by phone. People were only just starting to use devices like caller ID and voicemail, so they still didn’t have much of a way to avoid answering the phone. As a result, response rates were higher than they are today, and polling results were generally more accurate. Since then, the world has gotten a lot more complicated. Most people today use cellphones, and response rates on them are much lower; even on landlines, rates have fallen over the years. Overall, the typical response rate we’re getting today is between 10 and 15 percent. So achieving good samples is harder than it used to be, and more expensive.
Q: How do you compensate for this?
JD: One way some polls lower cost is by getting rid of live interviewers, using automated machines instead — robo-callers — and doing shorter polls. Another method is to conduct your polls online. The gold standard, though, is still to use live interviewers, and to do the poll by phone — both cellphone and landline.
Q: How do you get the cell numbers you call, since they’re not publicly listed?
JD: Actually, they are. Pollsters get them the same way marketers do — people put their information on everything these days, and companies collect that data, which they sell.
Q: How do you choose whom you call?
JD: One way is just to use what we call random-digit dialing (RDD), where you’re not controlling whom you call. The other way, for election polling, is to make calls from registered-voter lists.
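The RDD idea can be illustrated with a minimal sketch (the area code and exchange below are hypothetical; real RDD samples from working telephone blocks rather than a single fixed one):

```python
import random

# A minimal illustration of random-digit dialing (RDD), with a hypothetical
# area code and exchange: the last four digits are drawn at random, so the
# sample can reach unlisted numbers, not just those on a registered list.

def rdd_number(area_code="978", exchange="555", rng=random):
    """Generate one random phone number within a fixed area code/exchange."""
    last_four = "".join(str(rng.randint(0, 9)) for _ in range(4))
    return f"({area_code}) {exchange}-{last_four}"

random.seed(7)
numbers = [rdd_number() for _ in range(3)]
print(numbers)
```

Because the digits are random rather than drawn from a list, nothing in the sampling frame depends on whether a person has ever registered or voted before.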
Q: Which method does the Center use?
JD: We use random-digit dialing on pretty much all our calls. The problem with relying on voter lists is that you tend to miss the effect of “shocks” — shocking events, shocks to the system — like Donald Trump and Bernie Sanders this year during the primaries: both of them were bringing out people who weren’t typical voters, some of whom had never voted before. So of course they didn’t show up on the lists the pollsters were using. That can create a lot of uncertainty, which is why we prefer RDD.
Q: How do you know that the people you’re reaching are a representative sample?
JD: They’re probably not. For example, you’re likely to get a disproportionate number of folks over 65 in your sample pool. And not enough in the 18-29 group. And we expect that. So we allow for it by using what we call “post-weighting” — we weight our responses to reflect the groups that might be under-represented.
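As a rough sketch of how post-weighting works (all shares and responses below are hypothetical), each respondent in an under-represented group is counted more heavily, and each in an over-represented group less:

```python
# Hypothetical population shares by age group.
population_share = {"18-29": 0.20, "30-64": 0.55, "65+": 0.25}

# Each response: (age group, 1 if the respondent supports candidate A, else 0).
sample = [
    ("18-29", 1),
    ("30-64", 1), ("30-64", 0), ("30-64", 0), ("30-64", 1),
    ("65+", 0), ("65+", 1), ("65+", 0), ("65+", 0), ("65+", 1),
]

# Share of the sample that falls in each group.
counts = {}
for group, _ in sample:
    counts[group] = counts.get(group, 0) + 1
sample_share = {g: n / len(sample) for g, n in counts.items()}

# Post-weight: population share divided by sample share. The under-represented
# 18-29 group gets a weight above 1; the over-represented 65+ group, below 1.
weights = {g: population_share[g] / sample_share[g] for g in counts}

unweighted = sum(r for _, r in sample) / len(sample)
weighted = sum(weights[g] * r for g, r in sample) / sum(weights[g] for g, _ in sample)
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

Here the single 18-29 respondent counts twice as much as a proportionally represented one would, pulling the weighted estimate toward what a representative sample would likely have shown.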
Q: What if response rates continue to drop? Is there a point at which you’ll have to change your approach?
JD: Probably. In 10 years, say, if the rates drop to 2 percent, or 5 percent — to a point at which polling by phone is no longer viable — pretty much everyone will be doing their polls online. We’re not there yet, but it’s approaching. It’s something we have to continually assess. This is a field, definitely, that changes with technology.
Q: Do you ever think the polls themselves might be self-fulfilling? If one candidate, for instance, is shown to be far ahead, might people just go with him or her, regardless of their own preference?
JD: There’s not much evidence to support that. If anything, the opposite seems to be true: in the last week or two before an election, the polls generally tighten. If poll results were benefiting the front-runner, as you suggest, you’d expect the opposite. I think a lot of people miss one real value of polling: not necessarily always to tell us who’s going to win, but to tell us when an election is close. When that happens, as it often does, the polls can actually put upward pressure on turnout.
Q: In your view, over the last several election cycles, what polls have been best and worst at predicting outcome?
JD: I’m going to give you a real non-answer answer: there’s no single poll that’s as good as the overall polling average. Statistics tell us this: the average of all the well-conducted polls will almost always hit the nail on the head. You can get those averages at RealClearPolitics.com or the Huffington Post.
As for the weaker polls, I’m not going to name any names, but generally the least reliable ones are those using automated voice responses as opposed to live interviewers.
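The statistical point about averaging can be sketched with a toy simulation (all numbers hypothetical): several independent polls scatter around the true support level, and their average lands closer to it than the typical single poll.

```python
import random

# A sketch of why the polling average tends to beat any single poll (numbers
# hypothetical): simulate ten independent polls of 1,000 voters each, drawn
# around an assumed "true" support level, and look at their average.

random.seed(42)
TRUE_SUPPORT = 0.52   # assumed true vote share
POLLS, N = 10, 1000

def one_poll():
    """One simulated poll: sample N voters, return the observed share."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

polls = [one_poll() for _ in range(POLLS)]
average = sum(polls) / len(polls)

print(f"individual polls: {[round(p, 3) for p in polls]}")
print(f"average of polls: {average:.4f}")
```

Averaging shrinks random sampling error, which is why aggregate sites track the mean of many polls rather than any one house’s numbers.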