Declining response rates for polls mean we must rely on the shrinking minority of Americans who agree to be interviewed to represent the broader public. Josh Clinton finds that Democrats were more likely to agree to be interviewed than Republicans or independents in 2020. Common corrections could not compensate, as the partisans who do respond aren’t representative of those who don’t. Amnon Cavari finds that the people who refuse to participate in polls are less educated and less interested in politics. This means our measures of polarization overestimate partisan differences by speaking only to the highly engaged. We rely on public opinion surveys, but small response biases can paint a misleading picture.

Guests: Josh Clinton, Vanderbilt University; Amnon Cavari, Reichman University

Studies: “Reluctant Republicans, Eager Democrats?” and “Survey Nonresponse and Mass Polarization.”

Transcript

Matt Grossmann: Are polls misrepresenting Americans? This week on the Science of Politics for the Niskanen Center, I’m Matt Grossmann. Polling is ubiquitous, helping us predict elections and understand American public opinion. But declining response rates mean we’re reliant on the shrinking minority of Americans who are interviewed to represent the broader public. If they’re unrepresentative, we may be getting a misleading picture. New evidence suggests that because not all Americans are equally willing to participate, the polls may get elections wrong and make the country look more polarized than it is.

Today, I talk with Josh Clinton of Vanderbilt University about his new Public Opinion Quarterly article with John Lapinski and Marc Trussler, Reluctant Republicans, Eager Democrats. They find that in 2020, Democrats were more likely to agree to be interviewed than Republicans and especially independents. Common corrections were not able to compensate, as the partisans who do respond aren’t representative of those who don’t.

He says that contributed to the big 2020 polling miss. I also talked to Amnon Cavari of Reichman University about his American Political Science Review article with Guy Freedman, Survey Nonresponse and Mass Polarization. They find that response rates have gone way down over time, not because we can’t reach people, but because they refuse to participate. Those who are interviewed are more educated and more interested in politics. On several issues, this means our measures of polarization overestimate our differences. They both say we can no longer assume that those who do respond are representative of those who don’t. Clinton began by explaining the 2020 polling miss.

Josh Clinton: The big finding was that, as many people know, the pre-election polls in 2020 had the largest amount of polling error in 30 years. All right? And at the same time, there’s more polling being done now than ever before. And so one difficulty is, what’s the source of that error? What actually happened? And the big difficulty is that we often try to diagnose polling errors by looking at who responds. But that’s really hard to do without some information about who doesn’t respond. Most of the time all we know is who’s responding, and trying to figure out the counterfactual about who’s not talking to pollsters is really, really hard. And so what we were able to do is use voter registration data that was being used to conduct polling to directly compare the characteristics of people who were responding to polls to those who were not.

And so we can also use demographic information and information collected in voter files, and in so doing, we show that there’s a pretty large difference in terms of the people who were cooperating with human interviewers versus those who were not. In particular, among the 10 states that we looked at, Democrats, or people who were identified as Democrats by the voter file, were three percentage points more likely than Republicans to cooperate with pollsters and six percentage points more likely to cooperate than independents.

Now, whether that’s because Democrats were overeager or Republicans were especially reluctant, that’s hard, indeed impossible, for us to determine, because all we can really figure out is what the difference is. But even so, that suggests that there’s something going on there. And we show that when you correct for that by making the assumption that Democrats, Republicans, and independents all cooperate equally, then in a lot of states we’re able to reduce the amount of polling error by an average of about four points. That’s computed by looking at the difference between what the polls said the margin was between Biden and Trump and what the certified vote was. And so that margin shrinks by four percentage points, but importantly, it doesn’t shrink everywhere. In particular, in critical swing states like Wisconsin and Pennsylvania, there was still a substantial amount of polling error even after we equalized cooperation, which suggests that there’s something else going on there, or that we’re lacking good enough data about the partisanship of voters to correct that imbalance.
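For readers who want to see the mechanics, here is a minimal sketch, not the authors’ code, of the kind of correction Clinton describes: down-weight the party groups that cooperated at higher rates so that all parties are treated as if they cooperated equally. Every number below is hypothetical, chosen only to echo the three- and six-point cooperation gaps he mentions.

```python
# Minimal sketch (not the authors' code) of equalizing cooperation across parties.
# All numbers are hypothetical, echoing the 3- and 6-point cooperation gaps above.

coop_rate = {"D": 0.40, "R": 0.37, "I": 0.34}          # share of contacted people who complete the interview
respondent_share = {"D": 0.40, "R": 0.35, "I": 0.25}   # party mix of the people who actually responded
margin_by_party = {"D": 0.88, "R": -0.85, "I": 0.02}   # Biden-minus-Trump margin within each group

# Weight each respondent group by the inverse of its cooperation rate, which
# recovers the party mix we would have seen if everyone cooperated equally.
raw = {p: respondent_share[p] / coop_rate[p] for p in coop_rate}
total = sum(raw.values())
equalized_share = {p: w / total for p, w in raw.items()}

observed = sum(respondent_share[p] * margin_by_party[p] for p in coop_rate)
corrected = sum(equalized_share[p] * margin_by_party[p] for p in coop_rate)
print(f"observed margin {observed:+.3f}, after equalizing cooperation {corrected:+.3f}")
```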

Matt Grossmann: Cavari finds that who responds to polls is inflating measured polarization in some issue areas.

Amnon Cavari: So the main finding is an association between the decline of response rates in phone-based probability samples and conventional measures of polarization. Mainly, as response rates decline, reaching today about 6 to 8 percent in nonacademic probability polling, samples increasingly overrepresent people who are more engaged in politics. We further suggest that the select group of engaged people who respond to surveys do not accurately represent the political divide among Americans. Mainly, we show that engagement bias elevates measures of polarization on some topics (economy, energy, immigration), downplays them on other issues (mainly foreign affairs), and has no effect on party-owned issues such as civil rights and social welfare.

The main takeaway is about mass polarization among Americans, which we argue is one of the most heated topics in political science today, especially in American politics. We show that by relying on probability surveys, the primary tool for assessing polarization, we are mismeasuring polarization. While we agree that the United States has become more polarized in the last few decades, we suggest that we may be failing to assess correctly the scale and scope of mass polarization. In a way, this may be fueling the growing divide we currently find in American politics. Those who are more engaged in politics voice an opinion that is strongly aligned with their party affiliation and therefore generate a perceived polarization of Americans.

This perceived polarization pushes and justifies the elite divide we find in our representative institutions. This is further strengthened, of course, by the electoral process in the United States, through partial turnout in general and primary elections, which gives more weight to the more engaged, active, and participatory Americans. Again, those are the ones who are more polarized.

Matt Grossmann: Cavari says it illustrates a broader problem with modern surveys.

Amnon Cavari: So a second takeaway is about survey data more generally. While we do not rule out the use of survey data, which we use often in our own research, we suggest that scholars should exercise great caution in using survey data and generalizing from them. Social scientists should ask themselves if there is an association between possible biases in survey data and their measured outcome. This is especially true when probability samples end up with extremely low participation rates. The evidence here is really clear: about one in 15 Americans responds to commercial surveys today.

It is not clear that we can confidently generalize from that person to the other 14, because we cannot confidently state that those who respond are not systematically different from those who do not respond. On some measures, we may be able to rely on such data, but on others, we may not. The onus is on the researcher to justify any generalization they make from such data. And this is a steep hill to climb, because one of our main problems in assessing the effect of non-response is the fact that we lack information about those who do not respond. We simply do not know what their political attitudes are.

Matt Grossmann: Clinton agrees it might be the tip of the iceberg for problems in survey research.

Josh Clinton: So much of what we know about politics and political science nowadays is based on polling. But if the underlying data that we’re analyzing is missing key parts of the population, then that’s really troubling. As political scientists, as policy makers, as people who are interested in the ability to use polls to make statements about accountability and representation, that’s a meta problem that’s really, really concerning. We’ve got great statistical analysis, we can do whizbang things with machine learning and all this other crazy stuff. But at the end of the day, if the data we’re trying to analyze is somehow systematically different, if the people who are talking to us and taking surveys about politics are different from those who are not, then the whole edifice of what we’re doing, of what we’re able to learn, is somewhat compromised.

Matt Grossmann: They both looked at survey cooperation.

Josh Clinton: Cooperation rate is a technical term that basically means: given that we know a human interviewer is talking to an actual person, not a dial tone, not a business, but an actual human being, what percentage of those people cooperate, which is to say agree to take the survey, versus those who hang up the phone? So what that means is that among the people who are actually human beings being talked to, Democrats were three percentage points more likely than Republicans to actually take the survey rather than just hang up, and six percentage points more likely than independents to actually take the survey. Now, overall, the ability to get people on the phone in general is just an unmitigated disaster. I mean, that’s a technical term. I think right now response rates for polling are in the low single digits, like 5 or 6 percent if you’re lucky, which means that of every 100 human beings you’re trying to contact, five of them are willing to talk to you.

Which right off the bat shows the difficulties involved with polling and leaves you wondering, “Well, who are these five people? Are they really normal?” By definition, they’re not, because only five out of 100 are talking to you, but that is the challenge that polling faces. And so what we’re showing is that even among those already low response rates, there’s this difference between Democrats, Republicans, and independents, based on their political persuasions, or at least correlated with their political persuasions, in who’s willing to complete that interview in the 2020 presidential election surveys that were done.

Amnon Cavari: Today people are automatically called, and they can decide to pick up the phone or not. So if they pick up the phone, that’s one stage, but if they do not pick up the phone, we fail to contact them. We don’t know a lot about them. And likely it is because of different mechanisms that are not connected sufficiently with anything that we can explain. For example, it could be because of caller ID features or screening abilities that some put in place and some do not. So we don’t know exactly why they fail to come up in our samples. Cooperation rates are something different. With cooperation rates, we have those who were already contacted; they actually picked up the phone, they’re ready to talk to us. And then we ask them questions about politics, and they say, “No, we do not want to answer your survey.”

Those are people I can actually start talking about. Those are people who purposefully made a decision not to talk to me about politics, which I can assume is connected to personal preferences, knowledge, and interest in politics, which we examine as well in the article. As such, we can expect that if the declining response rate is attributed more to cooperation rates rather than to contact rates, it could be associated with measured attitudes. And that’s exactly what we examine in this paper: to what extent is the declining response rate attributed to contact versus cooperation? That’s one thing. And then whether each one of them explains the change in polarization.
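To make the distinction concrete, here is a simplified sketch of the three rates Cavari is separating. These are not AAPOR’s exact formulas, and all of the call dispositions below are hypothetical.

```python
def simple_rates(eligible_dialed, contacted, completed):
    """Illustrative survey dispositions: how contact, cooperation, and response rates relate."""
    contact_rate = contacted / eligible_dialed       # did we reach a person at all?
    cooperation_rate = completed / contacted         # of those reached, who agreed to talk?
    response_rate = completed / eligible_dialed      # the headline number, contact x cooperation
    return contact_rate, cooperation_rate, response_rate

# hypothetical call sheet: 10,000 eligible numbers, 1,500 people reached, 600 interviews
contact, cooperation, response = simple_rates(10_000, 1_500, 600)
print(f"contact {contact:.0%}, cooperation {cooperation:.0%}, response {response:.0%}")
# -> contact 15%, cooperation 40%, response 6%
```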

Matt Grossmann: Clinton’s analysis grew out of his work on errors in the 2016 and 2020 elections.

Josh Clinton: I mean, I’ve been doing election work at NBC News since 2010, and I’m a senior election analyst there now. And so I remember sitting through the 2020 election watching the results coming in and just saying, what in the world is going on? Am I having a flashback to 2016? Because the polls were telling a very different story than what the actual votes, slow as they were to be counted because of the pandemic, were actually telling. And so this uncertainty about what we can learn from pre-election polls, because they seemed to be off, was an issue that was eating away at me. And I was a member of the 2016 task force by the American Association for Public Opinion Research that looked into the polling in 2016 and said, “Well, there’s also an error there.”

And so there were some claims that maybe if we weight by education, the difference between education levels, that would fix things, but it was quite clear that was not the case. And so, for better or for worse, I agreed to be the chair of the 2020 task force. That made election night particularly perplexing, because I realized that I had just taken on a huge burden of having to come up with some fix, or at least to diagnose what happened. And so this paper is an offshoot of some of the work that we did there.

Matt Grossmann: They got information comparing who agreed to participate from the voter file.

Josh Clinton: And so what we did is we used the voter file, but as you note, that’s a bit of a danger zone, because the amount of information and the quality of that information varies tremendously across different states. There are all these commercial voter files, like TargetSmart, which is the one we use, or L2 or Catalist, and they all have their special algorithms to try to figure out what the partisanship of voters is. In some places you record that when you register to vote, so it’s state-sourced: you self-report what your party is. But even then that can be goofball, right? Because in a state like Tennessee, where there’s only one game in town, maybe you’ll see Democrats who register as Republicans, because the most interesting races are on the Republican side, not the Democratic side. So even then, that’s a measure with error. So then the question is, well, how do you figure out what someone’s partisanship is?

And that becomes difficult. And so you can look at basically what primary elections they voted in in the past, what the party registration was, or, as a lot of people also do, look at what the precinct-level voting behavior was, and assume that you’re the same partisanship as whatever the modal vote is in your particular geographic location. But of course, that’s an ecological inference that has a tremendous amount of error. And so this is one of those measures that I think is not great, except compared to all the alternatives.

And so I think there are huge caveats to that, and I think you see the impact of that. When we did the adjustment for states like Wisconsin and Pennsylvania, it didn’t make a big difference. And why? Because the ability to identify the partisanship of respondents and non-respondents in those states using the available demographic information or the available voting information is really, really limited. And so if I’m trying to make an adjustment based on bad data, news flash: it’s not going to work too well. And that’s exactly what happened. And so I think this is a real limitation, that we don’t know precisely who’s responding and who’s not responding. And that makes things complicated.

Matt Grossmann: The exit poll they looked at was just a pre-election survey.

Josh Clinton: So when I say exit poll, it was called an exit poll, but really these were like normal pre-election polls. These are polls that were being done because of the pandemic, which changed everything; again, maybe that was also partially responsible for the polling error, and we can’t know how it generalizes. But what the National Election Pool was doing, in addition to the in-person exit polling on election day, was also regular human-interviewer polls, just like any other pre-election polling, calling off registered voter lists. And so in that sense, it closely resembled a lot of the high-quality pre-election polling that was being done.

Matt Grossmann: They also confirmed well known biases in response beyond partisanship, but some are easier to correct than others.

Josh Clinton: At the end of the day, all we were trying to figure out was, if you talk to a human being, what fraction of them are willing to complete your survey? And as you note, we found partisanship differences, but there are also differences by race, gender, and age; in particular, non-white respondents were less likely to take a survey, females were slightly more likely to cooperate with a human interviewer, and older people were more likely to cooperate with the survey. And so these highlight the general problem of polling, which is that the people you talk to, and the people who are willing to talk to you in particular, are not necessarily a random sample of the electorate or the population. In particular, there are well-known biases according to race, gender, or age, and, we’re saying, maybe even partisanship, that affect who responds.

And so this is why one of the critical tasks of pollsters is to try to figure out, “How do I adjust? How do I weight the data to make sure that the data I’m analyzing is reflective of the population I care about?” In terms of race, gender, and age, as you know, these are well-known differences in terms of people’s participation in surveys. And so this is why pollsters weight. They traditionally weight on race, gender, age, and increasingly now education after 2016, because we know there’s a difference in the likelihood that people respond. And what we show is that, in addition to that, even if you control for those differences, there seems to be an additional difference in cooperation willingness between Democrats, Republicans, and independents that also seems potentially relevant for diagnosing survey response. But how we account for that is a little bit more complicated than the straightforward demographic information.
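As a rough illustration of the weighting Clinton describes, here is a minimal sketch with hypothetical shares. Real pollsters rake over several margins at once (race, gender, age, education), but the per-category logic is the same.

```python
# Hypothetical population benchmarks (e.g. from the Census or a voter file)
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Hypothetical unweighted sample: older people over-respond, younger people under-respond
sample_share = {"18-34": 0.18, "35-64": 0.47, "65+": 0.35}

# Each respondent in a group gets weight = population share / sample share
weights = {g: round(population_share[g] / sample_share[g], 2) for g in population_share}
print(weights)  # {'18-34': 1.67, '35-64': 1.06, '65+': 0.57}

# The catch Clinton points to: if cooperation also differs by partisanship
# *within* these cells, demographic weighting alone cannot fix it, because
# Democrats and Republicans of the same age get the same weight.
```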

Matt Grossmann: Election polling misses are one thing, but are we getting a broader misperception? Cavari is connecting two broad trends. We know that polarization is increasing at the same time as survey response declines.

Amnon Cavari: So looking at data over time, in both the ANES data and the Pew data that we examined, we see two very clear trends. Over time we see an increasing divide between the parties. We call that polarization, Democrats versus Republicans. We see that very clearly over time; we talk about it in a lot of different… we see a lot of work on that in political science. So that’s one thing. At the same time, we also see a decline in response rates; fewer Americans are willing to respond to surveys. So the question is, when we talk about increasing polarization, could it be that the reason for that increase is that the people we measured 20 years ago are not the same people we’re measuring today?

Maybe we’re not comparing apples to apples. Twenty years ago, there was a certain set of people answering the surveys. Today fewer people are answering the surveys, and those people are systematically different. In what way? They’re more engaged, they’re more educated, they’re more involved in politics, they’re more interested in politics, which means that they may be more opinionated. Being more opinionated, it could be that they are actually more divided over policy, and therefore our measure of polarization actually goes up.

Matt Grossmann: They try to distinguish the effect of cooperation from a simple time trend.

Amnon Cavari: The concern was, okay, so we have a decline in response rates. In the past it was high; in the 1990s we are talking about near 50 percent. But since the early 2000s, we see a rapid decline from about 30 percent to less than 10 percent. So the question is, first of all, what is driving the decline? That’s one thing. But also, what is driving the effect on polarization? We know that at the same time we see increasing polarization of Americans, but to what extent is this actually correlated? We find a very strong negative correlation: as survey response declines, measures of polarization increase. We had to disentangle that, both in terms of the kind of nonresponse but also by including an indicator of time, which we did in this article. And we show that, having controlled for change over time, we still see that declining unit response, declining response rates, especially cooperation, leads to increasing measures of polarization.
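A rough sketch of the kind of check Cavari describes, using simulated placeholder data rather than the authors’ actual surveys: measure the partisan gap in each survey, then regress it on the survey’s response rate while controlling for a linear time trend.

```python
import numpy as np

rng = np.random.default_rng(0)
n_surveys = 200
year = np.linspace(1990, 2020, n_surveys)

# simulated response rates falling from roughly 50% toward single digits
response_rate = np.clip(0.5 - 0.015 * (year - 1990) + rng.normal(0, 0.02, n_surveys), 0.05, None)

# simulated polarization: |mean Democratic position - mean Republican position| per survey
polarization = 0.2 + 0.01 * (year - 1990) - 0.4 * response_rate + rng.normal(0, 0.03, n_surveys)

# OLS of polarization on response rate plus a time trend
X = np.column_stack([np.ones(n_surveys), response_rate, year - 1990])
beta, *_ = np.linalg.lstsq(X, polarization, rcond=None)
print(f"coefficient on response rate, holding the time trend fixed: {beta[1]:.2f}")
# a negative coefficient means lower response rates go with larger measured partisan gaps
```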

Matt Grossmann: They find differences across issues based on whether the more informed are more or less polarized than the less informed.

Amnon Cavari: So we divided the issues into three kinds. First is the idea of party-owned issues. Those are issues where we should expect strong division between the parties, for example abortion or other human rights issues, or social welfare. We know that Democrats and Republicans take very different views on those. So that’s one kind of issue. But there are other issues, such as performance and domestic issues: the economy, immigration, energy. On those issues, we should not expect as strong a divide. So the argument is, we see an increasing divide, but maybe that is explained to some extent by the declining unit response rates.

And the last one is foreign policy. Foreign policy, again, is a performance issue, but we have a different thinking about that issue. On foreign policy, most Americans are little informed, and because they’re little informed about foreign policy, they’re more likely to follow elite cues, and since elite cues are more divided, they’re more likely to be polarized. But those who are more engaged, more involved, more knowledgeable about politics, we know that they are actually listening to other kinds of cues. Mainly, they’re looking at nonpartisan professional cues, and they are actually not likely to revert to their partisan cues; they are more likely to hold structured attitudes informed by more information.

They’re actually able to shy away from the political divisions. So those are the three kinds of topics that we’re looking at. We break the data into those three types of issues. And we find that on party-owned issues, declining unit response has no effect whatsoever on polarization; it is not at work there. But on domestic and performance issues, we see that declining unit response increases polarization, and on foreign policy it is exactly the reverse: declining unit response decreases polarization. What happens is that on foreign policy, we have the more educated, more knowledgeable respondents, who are less polarized on foreign policy.

Matt Grossmann: Cavari says that pre-election polling could be better than issue polling, but only if we can adjust.

Amnon Cavari: Our argument in our article is that survey data with low response rates can be representative as long as the sample bias is not correlated with the measure of interest. We argue that such a correlation exists for measures of polarization. Mainly, people who are answering surveys are more engaged and opinionated, and therefore more polarized than those who are not responding. But for pre-election polling, this is not necessarily the case, or at least was not viewed as necessarily the case. A thorough study by Jennings and Wlezien, published in Nature Human Behaviour in 2018, shows that while response rates are declining in most countries, and they examine about 45 countries, the frequency of pre-election surveys and weighting abilities allow us to rely on pre-election polling to predict election outcomes accurately. They show that the bias in surveys is not correlated with the measured outcome, which is vote choice. It is also useful that in elections, we are interested in those who are more likely to vote.

Indeed, far more than 7 percent of the population turns out to vote, but this small category of respondents may be more representative of the actual voters. So that is what we had in the background when we wrote the 2018 piece. The Clinton et al. paper shows that unit response in the 2020 election was correlated with partisanship. People who were answering the surveys were more enthusiastically Democrats. Their finding actually concurs with our main takeaway: survey samples may provide an accurate representation of the population as long as the outcome variable is not correlated with any bias in the sample.

If response rates are correlated with vote choice, then any generalization will be biased. And because we do not have sufficient information about those who do not respond, any post-survey adjustment may not correct for that bias, and it actually can increase that bias. The Clinton et al. paper actually shows that when they apply weights, they get rid of some of the bias, but they’re not able to remove the rest of it.

Matt Grossmann: Clinton says pre-election polling still has uses despite the difficulties.

Josh Clinton: And I think there is a role for pre-election polling. First of all, it does give us some sense of which races are close or not, and some notion of what the electorate is going to look like. And I think that’s important. In a democracy, having some expectations about what an election could look like is really important for the peaceful transfer of power. You don’t want election results to come out of nowhere and for there to be partisan maneuvering, and increasingly nowadays we’re seeing claims about the election that no one expected this outcome. So I think pre-election polling helps give the populace an idea of what’s likely to happen, to help situate themselves and provide some context for whether change is likely to occur or not.

So I think it’s useful for helping to tell the story about how the country is going to transfer from one party to another party or keep the same party. It gets people in the right mind space for interpreting the outcome of an election. And I know everyone focuses on the horse race, and I think that’s not necessarily healthy, but the part of pre-election polling that I think is healthy is, again, the ideas about what voters are talking about, what they care about. What do they think about particular policies? Why are they casting their ballots? I know that’s not as attractive to talk about as who’s up and who’s down, because it’s a more nuanced story, but I think it’s a more foundational story and goes more to the heart of what democracy is.

An election just says who wins and who loses. But think about one story of what’s going on in 2020: well, did the Democrats overreach? Did they over-interpret what the voters wanted? Well, how would you know? The only way you would ever know is not by looking at election returns, because that’s just binary; it just said Biden versus Trump. The only way you would ever know what the voters want, expect, or are thinking about in terms of their concerns is by looking at what they actually tell you.

Matt Grossmann: He says there are differences between election and issue polling, though both may be off.

Josh Clinton: So on the one hand, one thing that distinguishes pre-election polling from other polling, which makes it harder, is that pre-election polling has this added degree of difficulty. In a normal poll, so we do some polling here at Vanderbilt for the State of Tennessee, what benefits us is that we know what the population of registered voters in the state looks like. We know the partisanship, we know the demographics. And so we can actually make sure that our poll, as best we can, and assuming there’s not within-party variation, which there very well could be, closely resembles what the state looks like. But in pre-election polling you don’t have that, because it depends upon who’s going to vote. So think about the lead-up to 2020. What was the expectation about how much support President Trump was actually going to turn out? Given his historically low approval numbers, did people think there was going to be record turnout on the Republican side?

Given that he was at historically low levels of approval, which we’d never seen before in contemporary history, was there an expectation that Trump turnout would be so much higher in the midst of an ongoing pandemic? In terms of who was going to vote, I’d say no. Some maybe did, but in general, I don’t think we had a great sense of what was going to happen, and that’s the problem with pre-election polling: it requires you to have an assessment of who’s actually going to vote, and that’s something that’s unknown. That differentiates it from normal polling, so that’s an added level of difficulty. The second thing is, I think we expect different things from the different types of polls, and that also increases the level of difficulty.

For pre-election polling, you’re trying to nail the horse race, and being off by one or two or three percentage points makes a world of difference. But for issue polling, well, so we’re off by three or four points. The level of precision that you expect from an issue poll is much coarser: you just want to know, does a majority support this policy or not? And so even the way we think about polling, in terms of what we’re trying to learn, is qualitatively different in ways that affect the accuracy and the usefulness of that particular polling. So not only is it harder to do pre-election polling, because you need to know what the electorate is going to look like, but also the expectations about how precise you have to be are harder and higher, because now you’re trying to analyze a two or three percentage point difference, which could matter a great deal in an election that’s highly polarized. Whereas with issue polling, generally people are just saying, “Look, how much support is there? Is there a lot, or is there a little?”

The demands on that quantification are much lower. And so I think some of the issues that we identify in pre-election polling almost surely affect issue polling, to be clear. But I think the magnitude of those issues is probably less because, A, we know what the population is in a state, and secondly, we don’t have the same level of expectation about the quantification. We don’t really care whether support for a particular policy is 67 percent or 63 percent.

Matt Grossmann: Cavari argues that issue polling can be good at illustrating the level of support, but maybe not for the gap between partisans.

Amnon Cavari: We concur, although time has passed, we’re talking about almost 20 years since that study, that issue polling is not necessarily biased. In issue polling we usually want to assess overall support for a policy. It may be sufficient to rely on the extremes, assuming that those who do not respond are somewhere in the middle but with some directional preferences. I don’t need everyone to respond; I just need sufficient and representative representation of the two or more sides. Our concern is when we try to measure the extent of the divide, taking the two extreme groups and saying that they represent the divide among all Americans. The focus there is on the divide, but our sample incorrectly measures the divide. So I don’t think that we should throw away the whole idea of surveys, especially for issue polling. But there are things that we just can’t measure, and the divide is one of them. Clinton et al. show that it may not be accurate to use pre-election surveys for voting preferences, but that does not necessarily mean that everything in polling is broken.

Matt Grossmann: Unfortunately, most media polls are even more unrepresentative.

Amnon Cavari: The lower the response rate, the higher the bias we can find. And our results are consistent: in the Pew surveys, we show the decline from 25 percent to 6 percent, and with the ANES data, the decline from highs of 70 percent to the 30s, and then back up to about 40 to 50 percent. So we see it at different levels of response rates. We actually argue, and we show through education, that once you get to around a 5 percent response rate, you start to not be able to say that this is a representative sample of the population. It may be representative on gender, maybe it is representative on race, maybe on party or on other measures, but education shows that it actually is not.

And beyond education, we need to question it, and the media have to account for that. When they are reporting a survey in which one in 20 or one in 30 people are willing to pick up the phone and answer a survey, then they need to do a lot to persuade us that this can be generalized to the entire population.

Matt Grossmann: Polarization is real, but it is exaggerated in some areas.

Amnon Cavari: The evidence that we have of polarization is real. Our argument is not that polarization does not exist, but only that it is somewhat exaggerated. We actually find that response rates are not associated with polarization on party-owned issues. We also show that elite polarization, measured using congressional actions, is positively correlated with mass polarization on some issues. Polarization is a feature of U.S. politics today at the elite and mass levels. We only argue that our measures of mass polarization may be exaggerated, at least on some topics. We see that strongly on immigration, perhaps the strongest evidence is on immigration, but also on the economy and on energy. For such an important feature of American politics, polarization, I think that we need to get it right, to try to find a way to measure it correctly, rather than assume that our measures are giving us the proper or accurate scale and scope of the phenomenon.

Matt Grossmann: But survey response bias may inflate our estimates of affective polarization as well.

Amnon Cavari: But focusing specifically on affective polarization, I think there is a strong reason to believe that it should have an effect. Conventional measures of affective polarization are based on survey data: how one sees members of the other political group and of one’s own political group. We should expect that those most engaged and opinionated would also be more opinionated about members of the out-group, at least in relation to themselves. In a different study, in a different political setting of course, I find that people who identify as moderates and with center parties show weaker affective polarization.

So I assume that by sampling strong partisans, we may be exaggerating the strength of affective polarization in the United States. We are not considering the moderates, because we are losing them, but even more problematic, we are not considering the moderate partisans, which I think are even more important. Those are not people who are saying, “We are in the middle.” Those are people who are saying, “Yeah, I am a Democrat or a Republican, or I’m a liberal or a conservative, but my views are not as extreme.” We are losing them. Those are important people when we try to assess affective polarization.

Matt Grossmann: And change our views of each political party.

Amnon Cavari: So if we’re taking surveys of Democrats or of Republicans, we are looking at those who see themselves as strong Democrats or strong partisans, and presumably the results will reflect those who take the harsher, stronger views, but we won’t see the spectrum of views within each party. And this will lead us, first of all, to believe that the party is more homogeneous, while it may not be as homogeneous as it seems. We don’t know. So that’s the problem that we have: we don’t necessarily know.

Currently, we assume that they are more homogeneous. That’s the way we treat them in the media and in our research. They may not be regular or ordinary Democrats, ordinary Republicans. Some of them are taking very strong views on issues, but some of them are not. And that is true for opinions or policy preferences, evaluations of candidates, and also voting. As long as we don’t have information about those relative moderates, we are losing information about them. And we’re not only losing information about them; we’re losing our ability to assess what those parties are. What do they stand for? What is the mix of views that we have in those parties?

Matt Grossmann: Weighting may not help, but transparency and increasing response rates could.

Amnon Cavari: We simply do not know the views of people who do not respond. Any post-survey weights assume something which we just don’t know. In pre-election polling, which is the problem Clinton et al. assessed, we can use some measures of previous voting, like voting in the districts, or other things. With attitudes we simply do not know; we have no source of information about their opinions. So all of those solutions of post-survey weights may drive us to even stronger biases or more problems, and this is something that we should be very wary of doing. Having said that, I am still convinced that this is the best way to measure public opinion. And we want to measure public opinion. As people who are interested in politics and people who are interested in studying politics, we need to know what people want, think, and do politically.

So we need to find a way to do that. I am not suggesting that we avoid polling; we should embrace polling and find ways to make it better. So some of that will be, first of all, to be as transparent as we can. And by the way, I should note that the reason we use Pew data is because they are as professional and transparent as they are. They actually provide all the information that I needed for the paper, everything from response rates, cooperation rates, contact rates; everything is available there, which is remarkable. That’s why we were able to use that data.

I expect all other pollsters to present that data, so we’ll be able to evaluate the size of the error that is possible there. So that’s one thing. But beyond that, pollsters should make strong attempts to increase response rates, and they have ways to do that. It may be more expensive. It may be more troublesome. It may be slower. They won’t be able to do a survey and give us a result immediately the day after, telling us Americans’ attitudes about a speech the president made one day ago. It’ll be slower, it’ll be more expensive, but it will probably be more accurate, and we can rely on that.

Matt Grossmann: Clinton is now looking at day-to-day variation in responses and at change over time in responsiveness in panels.

Josh Clinton: So one thing that we’ve done, for example, is we’ve gotten some survey data that’s pretty high frequency, thousands of interviews every single month, even 1,000 interviews a day. Nowadays there are so many survey companies and more people are being surveyed; it’s actually an insane amount of public opinion research that’s being done. But it’s interesting that if you just take this mass of people who are taking these surveys, look at who’s responding on any given day, and weight those respondents to be nationally representative, so you hold fixed their demographics, you still get a tremendous amount of partisan variation day to day that’s not being driven by demographics. And so one thing we’re trying to figure out is whether the fluctuations we see in the partisanship of who’s responding correspond to the political environment.

So do we see that there’s this enthusiasm gap? When your team is winning, are you more likely to tell a pollster about it, and vice versa? Which is a different story than just saying that Democrats are more eager and Republicans are less eager, and maybe a more nuanced story that depends upon the political travails of the particular party you’re associated with. The other thing I’ve been looking at is the difficulty we have with non-response: we don’t know who the non-responders are, and so you can’t really compare. We were also limited in what we could do because we had partisanship, but as we talked about, that measure is really, really crude, particularly in states like Pennsylvania and Wisconsin. But there are other ways to think about who’s responding and who’s not responding that may give us a little bit of a handle on this.

And so there are other ways of doing surveys instead of just randomly interviewing a bunch of different people at different times. We can also look at panels. There are a couple of panels that exist where we recruit people at the beginning and then ask them to take surveys over time. And so that has the benefit, conditional on responding, which is important, and conditional on agreeing to join the panel, which isn’t everybody, and some of the biases that we talked about may affect that. But once they join the panel, we often ask them a whole bunch of questions about trust, about their partisanship, and all these different aspects. And what we’ve been looking at is whether we can use some of these long-running panels by Pew or by the Voter Study Group.

To figure out when people stop taking the survey, basically who attrits, who stops participating, and what kinds of opinions predict that. So are the highly interested people more likely to continue participating? Are people who voted for Trump in 2016 more likely to drop off? Are people who are more trusting of institutions more likely to continue taking it? And that gives us more flexibility, because the counterfactual there is people who stop taking the survey, and we know what their information was when they were first recruited in 2016. So we’ve been looking at some of that. And then the final thing, not that there’s too much going on here, but the final thing is that a lot of this depends upon survey respondents. But on some level, we don’t really care if there’s non-response. I probably should have said this first: we don’t really care if there’s non-response if the people who don’t respond don’t vote.
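A hedged sketch of the panel-attrition check Clinton describes: with wave-one covariates in hand, you can ask which of them predict later dropout. Everything below is simulated placeholder data, not the actual Pew or Voter Study Group panels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# simulated wave-one covariates for panelists recruited in 2016
interest = rng.integers(1, 5, n)      # 1 = low, 4 = high political interest
trump_2016 = rng.integers(0, 2, n)    # voted for Trump in 2016
trust = rng.normal(0, 1, n)           # trust in institutions (standardized)

# simulated attrition: less interested, less trusting panelists drop out more often
logit = -0.5 - 0.3 * interest + 0.2 * trump_2016 - 0.4 * trust
dropped_out = rng.random(n) < 1 / (1 + np.exp(-logit))

# which wave-one traits predict who stops taking the survey?
model = LogisticRegression().fit(np.column_stack([interest, trump_2016, trust]), dropped_out)
print(dict(zip(["interest", "trump_2016", "trust"], model.coef_[0].round(2))))
```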

Matt Grossmann: Cavari agrees that online panels could help us investigate, but they won’t solve the problem.

Amnon Cavari: We need to understand how those panels were created and what attempts were made to get people into those samples. There are companies today that are trying to create more representative, more randomly selected samples for their online panels. So comparing that all the time to a gold standard is, I think, problematic in and of itself. I’m not sure we solve the problems with online samples; maybe they are even strengthened, but at least we are aware of those problems. And I think that is an important feature of any research that we do, especially on survey data: we know what the source of the error is.

Matt Grossmann: There’s a lot more to learn. The Science of Politics is available biweekly from the Niskanen Center and is part of the Democracy Group network. I’m your host, Matt Grossmann. If you liked this discussion, you should check out our previous related episodes. I think you’ll like “Interpreting the Early Results of the 2020 Election,” “The Hyper-Involved Versus the Disengaged,” “How Political Values and Social Influence Drive Polarization,” “Is Demographic and Geographic Polarization Overstated?” and “The Role of Political Science in American Life.” Thanks to Josh Clinton and Amnon Cavari for joining me. Please check out Reluctant Republicans, Eager Democrats and Survey Nonresponse and Mass Polarization, and then listen in next time.

Photo credit: iStock