What can the 2020 election teach us about polling and politics? On the afternoon after Election Day, Matt Grossmann hosts the first-ever live edition of the Science of Politics podcast with G. Elliott Morris, data journalist at The Economist, to discuss where exactly the models went wrong (and what they got right). Together, they review early results, compare them to the polls and models, and start thinking about how the results should revise our theories and models of American voting and elections.
Matt Grossmann: Welcome to the first live edition of The Science of Politics podcast. I am Matt Grossmann and I’m speaking on behalf of the Niskanen Center. Joining me today is G. Elliott Morris, who is the pride of the Texas Government Department and the data journalist at The Economist. And we are speaking to you the day after the election on only a few hours of sleep, so forgive that. But it’s an exciting day to talk about polling and modeling election outcomes and how everything fared. So Elliott, I think the big question from last night is how we should be evaluating what looks like a pretty sizable polling error, maybe about three points on average across swing states, and how we should be thinking about how the models and the polls performed given that.
G. Elliott Morris: Well, you’re right. The polls seem to have missed quite a lot of Trump’s support, especially in heavily white working-class counties, but also, in a bit of a shakeup of the 2016 error, in areas with lots of Hispanics and even in African-American-heavy precincts. The only real place where the polls looked like they were hitting their targets is in suburban areas with blocs of white college-educated voters. It looks like states like Georgia and Texas are pretty close to what we expected, at least in the presidential race. The upset in Florida is certainly the most notable polling error, but the polls were also wrong in the Midwest by quite a lot, almost enough to put the models on the wrong side of 50/50.
Matt Grossmann: So how are you thinking about it? I mean, the night before the election, I think you had a 96% chance that Biden was going to win, which he still looks like he will, and a 70-or-so percent chance that the Dems were going to take the Senate, which it doesn’t look like they will. So how should we evaluate the model, and modeling and polling in general, given those results?
G. Elliott Morris: Let’s just keep our focus on the presidency for now. My colleague actually is the one who did our Senate model, so let’s deal with that a bit later. Our presidential model looks to have the result inside its confidence interval. We gave a 95% chance of Biden winning more than 249 electoral votes. And it looks like he’ll get 290, or maybe 306 if he can pull out a win in Pennsylvania. So that’s inside our range of predictions, which is good.
One way to read the model is to say, “Oh, we’re super confident in a Biden victory”; we gave him a 97% chance. The other way to think about it is that our model told us that Biden would probably win the election even under the case of a large systematic bias in the polls, perhaps even larger than in 2016, and that looks to be exactly what’s happening. That’s almost a cop-out answer, but probabilistically, we said Joe Biden’s lead is so large that even two times the polling error from 2016 wouldn’t be enough to help Trump. And it looks like that’s how it’s going to shape up.
Matt Grossmann: So part of what you and everyone else have to do is figure out what the distribution of polling errors looks like. And we only have a certain number of elections, but we do have a few now where there has been a substantial polling error. And it looks, at least from the early results, like this is in the same direction as 2016 and 2018, in the same types of states and areas. What can we learn from that? Is there a broader directional problem with polling, or should we be less confident in our models, given those errors?
G. Elliott Morris: I think yes to both of your questions. Here’s the really troubling thing. If you took the polling error in each state in 2016 and tried to predict the 2020 error, you’d explain nearly half of the variance in the polling error. That’s a correlation of almost 0.7. That’s really high, and it’s troubling for pollsters because it points to them having systematic problems that they haven’t solved since 2016.
In fact, the problems seem to be a bit worse this time, with polling error on average close to two points on the Democratic two-party vote share, versus one point or one and a half points in 2016. And that’s, again, really pretty troubling if you’re a pollster, especially if we’ve spent the last four years trying to reckon with the fact that polls weren’t sampling enough non-college white voters, or were missing Trump supporters in other ways. I mean, if polls have started weighting by education, clearly that’s not the answer anymore. So they have a big [inaudible 00:05:55] out of them.
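Morris’s variance claim can be sketched with a toy calculation. The state-level errors below are invented for illustration, not real polling data; the point is just that “explaining nearly half the variance” corresponds to a correlation of about sqrt(0.5) ≈ 0.7:

```python
# Hypothetical state-level polling errors (points of Democratic two-party
# margin; positive = polls overstated the Democrat). Invented numbers that
# mimic the pattern described, not real data.
err_2016 = [1.2, 4.5, 6.8, 2.1, 0.4, 5.9, 3.3, 7.1, 1.8, 4.0]
err_2020 = [2.0, 5.1, 6.0, 3.4, 1.9, 6.8, 2.9, 7.9, 3.1, 4.2]

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(err_2016, err_2020)
# r squared is the share of 2020 error variance "explained" by 2016 error.
print(f"r = {r:.2f}, variance explained (r^2) = {r * r:.2f}")
```

If r² is near 0.5, r is near 0.71, which is why a correlation “close to 0.7” and “half the variance” describe the same relationship.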
Matt Grossmann: And how about in telling us where to focus? I mean, I guess a baseline I would use is just a uniform swing. So if we just thought that the Democrats were going to do five points better nationally, and they really only did two or three points better, it seems like we would get a pretty similar distribution of results to what we got this time. How much, I guess, did adding in that pre-election polling to that general national trend help us figure out which states we should be focused on?
G. Elliott Morris: In terms of the polling error, one way to think about it is what a model based on the fundamentals and a uniform national swing would have predicted. And our fundamentals had Biden winning by five points, which would have translated to a bare electoral college majority. That seems like what we’re going to get. And so from an added-information point of view, that means the polls aren’t really telling us much about electoral politics that we can’t figure out based on these theories of the electorate, which I guess causes some reconsideration of how we are parameterizing these forecasting models.
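The uniform-swing baseline Grossmann describes can be sketched in a few lines. The 2016 margins below are approximate, and the 2020 national margin is an assumed input, so this is an illustration of the idea rather than a reconstruction of any real model:

```python
# Uniform-swing baseline: shift every state's 2016 Democratic margin by the
# change in the national margin and see who wins. Margins are Dem minus Rep,
# in percentage points; 2016 values approximate.
margins_2016 = {
    "Wisconsin": -0.8, "Michigan": -0.2, "Pennsylvania": -0.7,
    "Florida": -1.2, "Arizona": -3.5, "Georgia": -5.1, "Texas": -9.0,
}

national_2016 = 2.1  # Clinton's approximate national popular-vote margin
national_2020 = 4.5  # hypothetical Biden margin, e.g. from a fundamentals model

swing = national_2020 - national_2016
for state, m in margins_2016.items():
    projected = m + swing
    print(f"{state:>12}: {projected:+.1f} ({'Dem' if projected > 0 else 'Rep'})")
```

With these assumed inputs, a roughly 2.4-point national swing flips the narrow Midwestern states while leaving redder states like Texas in the Republican column, which is the kind of “bare electoral college majority” picture Morris describes.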
Matt Grossmann: Did we get a sense of Arizona was more likely than Iowa or anything? I mean, was there any kind of sense in which having that polling data told us how things would be different from 2020, rather than just, they’d move nationally?
G. Elliott Morris: Different from 2016?
Matt Grossmann: Yeah.
G. Elliott Morris: Right. Again, this is really troubling, because the polls said that there should be a shift toward Biden in pretty big numbers among working-class white voters, non-college-educated whites in the Midwest, and that hasn’t materialized. That’s the same exact error that we got in 2016. Now, it’s not entirely uniform, because the Florida error is playing on some sort of undersampling of Trump-leaning Hispanics. Maybe it’s a sort of Latin American population problem, with the polls not sampling enough of them, or maybe there’s something else going on here. Maybe the polls are missing low-social-trust voters, or some other theory. It looks to be a pretty uniformly bad performance for the polls. Again, the average bias, not absolute error. The average bias was two and a half points. That’s happening everywhere.
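The distinction Morris draws between average (signed) bias and absolute error can be made concrete with a small sketch; the per-state errors here are invented for illustration:

```python
# "Average bias" is the signed mean of the errors (do they all lean one
# way?), while mean absolute error ignores direction. Invented numbers:
# polls minus results, in points of Democratic margin.
errors = [2.5, 3.1, -0.4, 2.8, 1.9, 3.5, 2.2, -1.0]

bias = sum(errors) / len(errors)                 # signed: errors can cancel
mae = sum(abs(e) for e in errors) / len(errors)  # unsigned: typical miss size

print(f"average bias = {bias:+.2f}, mean absolute error = {mae:.2f}")
```

A large positive bias, close in size to the absolute error, is what makes a polling miss “systematic”: the individual misses mostly point the same direction instead of cancelling out.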
Matt Grossmann: So we were supposed to have a reckoning in the polling industry after 2016. And we had a bit of one, but we went back to fairly similar processes. People are again predicting another reckoning in the industry. I know you’re working on a book about the history and state of polling. Where do you think we are? Is this a crisis? Is there a sense that there will be a lot of changes by the time we get to 2024 or 2022? Or is this the same kind of situation where people are doing the best they can each cycle?
G. Elliott Morris: Well, that’s a great question. You’d have to ask a pollster what their proposal for changing the methods is. Clearly what they’ve done over the past four years hasn’t worked, so they need to think about new solutions for polling Americans, or at least for trying to weight their data properly, or figure out what exactly is going on to cause an undersampling of Trump voters even when you have correct demographic, or even political, benchmarks. In the case of some polls that weighted by past vote, or sampled voters based off the voter file, even those overestimated Biden’s vote share. Now, we do need to wait for some results. It’s still early. We’re still in this sort of twilight-zone territory with quantifying the polling error. [crosstalk 00:10:02].
Matt Grossmann: What are we looking for that would change this? If, say, Biden is just doing fabulously in California and ends up much closer to the eight-point national margin, would that change how we think about it? Or what do we need to know more about the polling error that we’ll learn in the next few days?
G. Elliott Morris: I’m not so sure about the California thing. I mean, if national polls are good, that doesn’t really matter on its own, because we need state polls to be good too. And lots of those national pollsters who were doing state polls evidently still have problems, even the big media pollsters. But I should note, our election model had Biden winning nationally by about eight points, whereas the national polling averages themselves had him closer to 9 or 10, I think. So there could be something about the population of pollsters conducting state polls that have sort of figured out how to pick up on the Trump voters in a way the national polls aren’t. But I wouldn’t be looking to the California numbers to benchmark.
What we really need to know: we need final counts in the Midwest. We need to know if it’s a bare margin for Biden in Wisconsin, Michigan and Pennsylvania, or if it’s going to end up closer to 2 points. Now, Wisconsin is sort of final, so it looks like it’ll be one point unless a recanvass or recount turns up a bunch of extra voters. But I don’t think it’s going to change the overall story here. I think we know enough at this point to know the overall trend, which is that we’re seeing an undersampling of Trump supporters even if you control for demographics. That’s going to be a problem no matter how things shape up.
Matt Grossmann: Well, not necessarily specific to Trump voters. I know you said you don’t do the Senate model, but the polling miss was pretty widespread: Senate, House, state legislative districts, all across. Does that point us toward explanations about the samples versus the way people answer questions? Are there any hints from, I guess, the pattern being a kind of uniform undercounting of Republican support?
G. Elliott Morris: So are there some hints at better technologies? Again, it looks like the error happened across poll modes and poll populations. The only bright spot, really, is that lots of the polling firms that said they were discovering shy Trump voters, or were finding undercounting problems ahead of time, look to have stumbled upon something. Now, whether or not their method of calling people or adjusting their samples is the right way to do it, there’s clearly some truth to the argument that the traditional pollsters are doing something wrong. So maybe we do need to move away from live-caller or low-response-rate phone surveys, which, I should add, lots of traditional phone-calling firms are doing. The Pew Research Center does most of its work online now.
But even the online polling panels, like [inaudible 00:13:19], who say they get representative samples based off of selecting demographic and political characteristics of the respondents, seem to have missed something here. It’s unclear if it’s because of a late shift or something else, but there was an issue. I don’t want to be too doomsday-ish about it, but it’s where we are.
Matt Grossmann: One of the early stories, some people think it’s being overplayed, but I don’t, is that we’re seeing some real racial depolarization. That is, white voters seem to be moving more in the Democratic direction than minority voters, and Latino voters, at least in Florida and in South Texas, seem to have moved a bit toward Trump. Now, there were some hints of that in the pre-election polling, if you use the same surveys, comparing 2016 to 2020. But I just wanted to get your comment on whether there was, beyond just the undersampling of white working-class voters… not undersampling, the miss on the move that they would make toward Biden. What should we conclude so far about this miss, or was it a miss on Hispanic polling?
G. Elliott Morris: It’s unclear to me whether it was a miss. On average in Florida, we had Biden up a point or two. But lots of the Hispanic cross-tabs of the polls themselves, which, of course, our model doesn’t see, it only sees the top lines, but those cross-tabs indicated a shift toward Trump. So they were at least hitting the right direction of the shift.
One more thing: the polls also picked up on a pretty big shift toward Democrats among college-educated whites, but it seems like the error is coming in the miscalibration of Trump’s support among the working-class white group. If the polls are broadly right on suburban or college-educated white movement toward Biden over the past four years, and they’re close to right, at the very least nationwide, and probably even in Florida too, about a swing toward Trump among Hispanics, then really the best explanation for the error is an overestimate of Biden’s vote share among white non-college-educated voters.
Matt Grossmann: And the polls did see high turnout, and we did get high turnout. But of course, it’s always been difficult to try to poll who is a likely voter. This year we had an additional complication of this very big partisan split in early versus day-of voting. Is there anything to conclude so far about that? Does that just increase uncertainty in the polling? Does it change the dispersion across the polls? How big of a source of error was that potentially?
G. Elliott Morris: Yeah, there are two things that I’ve been thinking about as far as the increase in the early vote goes. The first is, it makes our likely-voter filters harder to calibrate. Right? Historically, people have maybe been okay at gauging their likelihood to vote in person, but maybe they’ve overstated their likelihood to cast ballots by mail or early, and maybe there’s a partisan component to that. We’ll have to do some more digging, some real research, into how different pollsters are constructing their likely-voter filters to really know if certain strategies are worse than others in a pandemic election year.
But the other thing to note is that there are other externalities with polling error that come from increased mail-ballot usage, besides the actual methodological component. That is, there are a bunch of mail ballots that just aren’t being counted in Georgia right now. And if they are disproportionately Biden ballots, that would push the results away from the polls. Right? And that could be happening in other states too. [crosstalk 00:01:36].
Matt Grossmann: You wrote a pre-election article foreseeing that, looking at the rejection rate, and the rejection rate includes ballots that came in after the deadline. So is that something that you see happening? There are reports, and conspiracy theories, but some reasonable reports, about the mail being slow and some of those ballots not making it. So did that prediction come true?
G. Elliott Morris: Yeah, to the extent we predicted it, sure. We found that in the more diverse states, especially those that aren’t equipped to deal with mail-in or early voting, there would be a bias toward the Democrats. I’m not sure to what extent that applies to Georgia, which just seems not to be counting its ballots, or delivering its mail on time. But it could explain one point of the two- or three-point miss in the polls, or perhaps the three- or four-point miss. But it’s certainly not the safe ticket out that the pollsters are probably hoping for.
Matt Grossmann: Now, a lot of people, yesterday and in the weeks leading up to the election, were looking at this early-voting data and at modeling from the voter file. Did we learn anything from that process about the likelihood that you’re going to get different information out of that than out of the polls? Was it a mistake to make predictions based on party registration and turnout rates in that early-voting data?
G. Elliott Morris: Yeah, so I’m torn. Historically, the early vote tells us nothing. Historically, the trends in registration tell us nothing about an election. It could just be the case that this time the registration trends in Florida, for example, just so happened to point in the right direction, but it’s not a safe bet for the future. So I can’t really say how much we learned from that. The early-vote trends, I think, if anything, pointed in the wrong direction, or at least… maybe this was just a by-product of how people were interpreting the numbers.
But in Texas, the early voting returns were a source of massive enthusiasm for Joe Biden and for other Democrats, right, when the pollsters had it right in saying that it was a likely Trump victory. So maybe there’s also some room for calibration here. Right? We’re talking about a one to two standard deviation polling error. That’s still not like a cataclysmic failure of the polling industry. It’s concerning because the errors are pointing in the same direction as they were last time, which hints that there’s sort of troublingly little progress being made to address these problems.
We’re probably still at the point where polls are better for handicapping elections than other sources of information.
Matt Grossmann: Yeah. I was just going to say, if we’d had the same size polling error in the other direction, there probably wouldn’t be quite as many calls for fundamental change.
G. Elliott Morris: Right.
Matt Grossmann: Yeah. So I guess one source of the problem in the early vote is, if people tell you that they’ve already voted, how do you weigh that versus people who say they’re going to show up on election day? Another is, we just had such a partisan split in early voting. One theory I had is that maybe, if you were a Democrat and you hadn’t voted early in a state where you had lots of opportunity to do that, that was a worse sign than it was getting credit for, so Republicans who said they were voting on election day were more believable.
G. Elliott Morris: Yeah. It’s hard to calibrate that without knowing the results from Pennsylvania in my mind, but that certainly seems like the most likely explanation.
Matt Grossmann: I made a list before the election of some things that might explain the polling error if Trump were to win. But I think we can still go through a similar kind of list. We’ve already talked about one of them, which is late-rejected or unreturned mail ballots. We talked about them not getting there. I guess, what about this: how big of a problem was it for Democrats that a lot of ballots were requested, and many of those that were requested were not returned?
G. Elliott Morris: Well, the only data that I’ve really looked at is not the return rate but the rejection rate. So that’s troubling, because there are new partisan and racial imbalances in rejected ballots this year, because so many Democrats voted by mail. So [crosstalk 00:22:39].
Matt Grossmann: And that was big enough to matter in a state like North Carolina, maybe.
G. Elliott Morris: Right. Our modeling focused on the demographic correlates of absentee or early voting, or mail-in voting in the case of North Carolina, but different in the cases of other states. And yeah, we found that younger, less white, and poorer voters were more likely to cast their ballots by mail. They are disproportionately more likely to vote for Democrats, so if there are higher rejection rates, it’s going to hurt them more. But we only found, again, that that was the source of a one- or two-point bias at most in the worst-affected states.
And those worst-affected states are sort of Democratic anyway, because they have so many younger, non-white, and poorer voters in them who tend to vote for Democrats. One of the other analyses we conducted, which was entirely separate, was about the problem that we’re referencing now in regard to Georgia, which is what happens if the ballots never arrive. Not the rate at which they are rejected after they get there, but if there are problems with the USPS or something. That could have a much larger effect, closer to six points or something.
Matt Grossmann: So will we learn… I mean, we have Pennsylvania and North Carolina, where ballots can come in after election day, and some states, like Michigan and I guess Georgia, where they cannot. So would a whole bunch of new votes coming in after election day tell us that it was a big problem, or would it just tell us that people follow the rules in their state?
G. Elliott Morris: Yeah. I would be really interested to see an analysis across states of how many ballots were sent out and didn’t make it back by election day. I imagine we can get some information by looking at states like Pennsylvania and North Carolina that allow you to return the ballot after. But I think we would really need individual state-level information to really know. But right, I mean, we found that there could be a six-percentage-point bias on the margin if rejection or non-return rates approached five to 10%. You know, crazier things have happened this year, and that’s as large as the polling error, so that’s something to look for. I guess one point we’re making is that it might not be safe to assume that the polling error was entirely the pollsters’ fault. That might just be worth being clear about. [crosstalk 00:09:22].
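The size of the bias Morris describes depends on how Democratic the mail vote is and how many ballots go missing. A back-of-envelope version, with every input assumed for illustration rather than taken from the Economist analysis:

```python
# Back-of-envelope: if mail ballots are a given share of the vote, lean
# heavily Democratic, and some fraction never arrive or are rejected, how
# much does the final margin shift away from the polls? All inputs assumed.
mail_share = 0.40      # assumed share of votes cast by mail
dem_share_mail = 0.70  # assumed Democratic share of the mail vote
loss_rate = 0.08       # assumed fraction of mail ballots lost or rejected

# Ballots lost from each side, per 100 intended votes.
lost = mail_share * loss_rate
dem_lost = lost * dem_share_mail
rep_lost = lost * (1 - dem_share_mail)

# Approximate shift in the two-party margin, in percentage points
# (ignoring the small rescaling from the shrunken total).
margin_shift = 100 * (dem_lost - rep_lost)
print(f"margin shift of about {margin_shift:.1f} points against Democrats")
```

With these assumptions the shift is a bit over a point; pushing the loss rate and mail share toward their plausible extremes is what gets you toward the multi-point effects discussed above.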
Matt Grossmann: Yeah, and even in 2000, when we just had in-person voting, part of the explanation ended up being the butterfly ballot in Florida. So yeah, if you have someone intending to vote for the Democratic candidate and their vote doesn’t actually materialize, they may have told the pollster correctly how they were voting.
One other side of the turnout part is the possibility of a Republican turnout surge. There are some conservatives interpreting the election as a repeat of 2004. Obviously not necessarily in the re-election of the president, but in the idea that these missing white voters came out, that a lot of people who did not vote in 2016 showed up and looked more like 2016 Trump voters. How big of a source of error would you think that was, and is there any way to determine that or correct for it?
G. Elliott Morris: Yeah, because here’s the thing. Most of the pre-election polls showed that most of the aggregate change in Biden’s vote share versus Hillary Clinton’s was from vote switchers. Right? So I’m thinking maybe they just reached a whole lot of Trump supporters who were willing to tell a pollster that they changed their mind, and not a lot who were willing to say they were still supporting the president, which I guess fits with these theories that they would be skeptical of admitting their support. Right?
I raised this possibility online a few weeks ago. The question is how explainable that is by our models. Right? Is this a normal source of polling error, or is this something new, exclusive to our era of elite membership in the Democratic party, or a feeling of alienation among Republicans? And there’s an identity component here too, of course.
Matt Grossmann: As you mentioned, David Shor, I don’t know if you credited Shor, but he has this theory that it’s about untrusting survey-takers. That it’s not just low education; it’s people who don’t trust other people, who don’t trust social institutions, who don’t respond to polls. So that wouldn’t be a shy Trump voter in the sense that they’re lying to pollsters; it’s about whether you reach those people at all. And if we’re talking about surveys where the phone pollsters are getting 4% response rates, it seems like a pretty big potential source of error, I guess. How much do you buy that, and is there any way to correct for it?
G. Elliott Morris: Yeah. Well, first off, I was hoping you were going to raise this again. I didn’t mean to not credit Shor upfront, so, hey, if you’re out there. I think this is one of the likelier explanations. And my contribution to this is, again, putting it in terms of our models. Our fundamentals forecast, the prediction that we would make for the national popular vote based on economic growth and the president’s approval rating, was Biden +4, not [inaudible 00:12:31] or Biden +9. That at least attributes some evidence to this idea that the polls were undersampling Trump supporters or otherwise missing support for him, even conditioning on the national environment.
Shor’s proposed explanation for this is certainly appealing. I would really love to see an election model that took this into account quantitatively, and I imagine he has work on this. So that would, I think, certainly be worthwhile for us to see in the public sphere. If we can adjust our polls ahead of time, that’s good for handicapping, but it also might help us… I mean, the point of forecasting is not just to handicap. Right? It will help us calibrate a lot of our expectations, at least journalistically, and academically as well. So the only thing I can say is, I think it’s a great theory and I wish I knew more about it, but I haven’t seen a whole lot of work.
Matt Grossmann: Is it a self-fulfilling prophecy though, where we now have people who don’t trust polls very much? They might be more likely to be Republicans or Trump supporters at this point. They might be less likely to respond to polls next time as well, that the educational divide in the parties seems to be continuing. So are there reasons to expect that this is going to get worse before it gets better?
G. Elliott Morris: I don’t see any reason to think that it’s going to fix itself, so yes.
Matt Grossmann: The other popular explanation, as you know, but one that hasn’t generated a lot of academic support, is that there are shy Trump voters, which I’m using to mean people who are talking to a survey interviewer but do not report that they are supporting Donald Trump. So-
G. Elliott Morris: Actively lie to the interviewer?
Matt Grossmann: Lie, or say that they’re undecided when they’re really supporting Trump. Maybe they’re lying to themselves too, I don’t know. There are several versions of the theory, but to distinguish it from a sampling issue: this is, you have them on the phone, but you didn’t get the correct report. So I guess, what is the state [inaudible 00:30:48] in regard to that, and is there any reason to revisit it in this election?
G. Elliott Morris: I think the research has pretty well debunked the idea that people actively lie on the phone about their political support. We have a whole lot of evidence that shows that non-response is related to candidate support, and so it’s much more likely that they’re just not answering the phone than that they feel some sort of social desirability bias against supporting Trump and actively lie to someone. Right? The good news is, if that’s true, you can fix the weighting and methodology issues, though you might have to invent some new source of polling.
We’re on the verge of maybe wanting to incorporate a lot of new methods, or be more experimental. Right? And most pollsters are doing that. If it’s a methodological issue, you can fix it with those new methods. But if people are fundamentally just predisposed to lying about their support for Trump, there’s not a whole lot you can do about that. So I’m a bit optimistic. Maybe I’m a bit naive on this too, but it seems like the research is pointing more toward the methods fix than the lying.
Matt Grossmann: So some of the research using the tools we usually use for social desirability bias, like list experiments or prompts that try to lessen social desirability bias, doesn’t seem to produce much. As you know, there are some Republican pollsters who want to move toward things like asking who your neighbors are supporting rather than who you’re supporting. Is it time to analyze those other kinds of questions, where we’re not even asking people who they’re supporting, or is that unlikely to yield fruit?
G. Elliott Morris: Maybe we do need new methods for detecting social desirability bias and detecting support for Trump. It’s always troubled me that the evidence against social desirability bias in polls comes from polls, or from online list experiments or split-form or split-method surveys. Maybe the other way to do it is long-term recontact surveys, where you ask the same person the same question over and over again until they give you the straight answer. But that’s super costly. I’m not a survey methodologist, so I don’t have the perfect answer here. But look, I think pollsters are going to be as inclusive as possible in finding an answer. So perhaps it’s something we’re not even considering now.
Matt Grossmann: So one other common explanation is just late breakers. I think you posted that the exit poll, at least, said that people who claim to have decided in the last week went for Trump overwhelmingly, which was the same pattern as 2016. So that might not be a source of polling error, but it’s a source of forecasting error from the polls, I guess. To what extent should we credit that, that there were undecideds or third-party supporters, that they were real but didn’t decide until the last minute? And if so, is there anything to do about that?
G. Elliott Morris: The late-deciding bloc, as you mentioned a second ago, could be one of the places where Trump supporters were parking themselves in the surveys until election day. But it’s worth noting that the late-deciding bloc this year looks to be 4% of the electorate. All of those voters would have had to vote for Trump to explain the error, and it was much closer to a 50/40 or 60/45 split than that. One thing I’m hesitant about, though, is letting pollsters off the hook for this. The 4% undecided rate, like I said, couldn’t explain all the error nationwide, certainly not in Florida. I don’t think that we should be comforted by that answer. And after 2016, it’s so fashionable to rail against the polls that eventually you want to be contrarian and just say, “Oh, the methods are working. We can adjust the methods. This is a tractable problem with polls.” And that’s true. But this year it really does look like there are long-standing methods problems with the polls too. It’s not just that people make up their minds late.
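Morris’s arithmetic on late deciders can be checked directly. With a bloc of 4% of the electorate, even a unanimous break for Trump moves the two-party margin only four points, and a more realistic lopsided break moves it under one point (the function and its inputs are illustrative):

```python
# How much can a late-deciding bloc move the final two-party margin?
late_share = 0.04  # late deciders as a share of the electorate (exit poll figure)

def late_margin_shift(trump_share_of_late):
    """Points added to Trump's margin, relative to the bloc splitting 50/50."""
    return 100 * late_share * (2 * trump_share_of_late - 1)

print(late_margin_shift(1.00))  # unanimous break for Trump
print(late_margin_shift(0.60))  # a 60/40 break
```

The unanimous case yields a four-point shift, the ceiling of what late deciders could explain, while a 60/40 break yields under a point, far short of the observed miss, which is why Morris argues the bloc can’t carry the whole explanation.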
Matt Grossmann: So one of the reasons that we do this comparison of the polls to the outcomes… on the one hand, it’s not a great intellectual exercise in terms of theoretical import to figure out whether asking someone beforehand who they’re going to vote for predicts who they vote for. But the reason we spend a lot of time on it is that it’s our potential ground truth about whether polls work. So to what extent is that a good way of evaluating survey research? Should we extrapolate from this a broader problem with surveys and public opinion data that we use in a non-electoral context?
G. Elliott Morris: Yeah, so there are two answers here. The first is that public opinion polls match government benchmarks across the board, stuff like "Do you own a microwave in your home?", where we have administrative data. Polls are able to replicate that across methods. That's reassuring, and it points to your implication here that using elections to calibrate polls is not the best way to measure how worthwhile they are, how accurate they might be. Again, this year it's really hard to figure out if there's an issue with our likely voter screens or if there's a broader issue with sampling voters.
Now it's probably a mixture of the two, but only one of those issues matters if you're asking all Americans whether they support a policy or how they feel about an upcoming war, stuff that policymakers really need information on. So I guess I have a bit more faith in their ability outside the electoral context. Certainly that's been historically true, but I'm really looking forward to a harsh AAPOR reconciliation with the pollsters themselves. If this is a systematic issue with the polls, the public opinion organization those pollsters belong to should try its hardest to really suss out the truth here. So I'm hoping we get something pretty hard-hitting and reflective this time around.
Matt Grossmann: So it’s hard to remember, but one of the initial critiques of the aggregators and the modelers was that we were going to deter pollsters. That doesn’t seem to have been the problem this year. We had quite a bit of state polling. How do you feel about your role in that? Is this a good thing to encourage? Do we have too much polling at this point? Are we not learning anything from each additional one? Should we be paying less attention rather than more?
G. Elliott Morris: Well, we certainly have too many polls that aren't accurately measuring the races, right? Look, you mentioned I'm writing this book about how polls can help influence democratic outcomes, so I have a firm belief that more data is probably better. It allows for asking different types of questions. It allows for averaging opinion, which is typically better than having one poll on one question. So I will take as much data as I can get. And again, if these are just issues for horse-race handicapping, then that's not super concerning to me as someone who just wants to be able to tell policymakers what opinion in their district or the country says.
But in the 24 hours after an election, I think the question that's going to be on our minds is, "Do we really need all these pollsters benchmarking the horse race if they're not going to increase our probabilistic accuracy or even just our opinion accuracy?" One of the reasons why I thought polls would be better this year is because we had so many national high-quality pollsters sampling opinion at the state level. That hasn't turned out to be true. So maybe there will also be a [inaudible 00:39:55] with the role of polling and of horse-race coverage if these errors really do keep persisting. And like I said, they've matched the pattern of the 2016 error really well. So there does seem to be something wrong here. It's not just two random polling errors.
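[Editor's note: Morris's point that averaging is "typically better than having one poll" can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not any pollster's actual method, and it only models independent noise; the correlated errors discussed above do not average away.]

```python
import random

random.seed(0)

TRUE_MARGIN = 4.0   # hypothetical "true" Dem margin, in points
POLL_SD = 3.0       # assumed per-poll error (sampling plus design effects)

def fake_poll():
    """One poll: the true margin plus independent noise."""
    return random.gauss(TRUE_MARGIN, POLL_SD)

polls = [fake_poll() for _ in range(20)]
average = sum(polls) / len(polls)

# The mean of 20 independent polls has standard error POLL_SD / sqrt(20),
# roughly 0.67 points, so it sits much closer to the true margin than a
# typical single poll with its 3-point error.
print(f"single poll: {polls[0]:+.1f}, average of 20: {average:+.1f}")
```

The catch, as the conversation keeps emphasizing, is that a shared industry-wide bias shifts every poll the same way, so averaging many polls shrinks the random noise but leaves that systematic error fully intact.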
Matt Grossmann: So you've been working with several political scientists on the forecast, and you know how it's developed within the discipline. You mentioned that the fundamentals-based models will probably turn out to be pretty accurate. The average of those reported in PS, which were all over the place individually, seems like it'll be pretty accurate, as it was before. And we now have a few elections where you might argue the state-poll-based alternatives haven't fared so well. So in 2000, there was one. In 2016, there was one. And this year there was one. And so [crosstalk 00:40:55]
G. Elliott Morris: And in 2012, depending on how you measure the economy. Our fundamentals model in 2012 was also better than the polls. So sorry, you can finish your question.
Matt Grossmann: No, so yeah. So what should we conclude from that? Certainly there are people within political science who say that we should go back to our traditional forecasting, that it's pretty good, that we shouldn't overstate our ability to address electoral college versus popular vote splits, those kinds of things. [crosstalk 00:09:29]
G. Elliott Morris: I don't agree with the latter bit. In the election forecasting context, if we're going to be making models to explain outcomes, not just behavior, we have to do it at the state level; that's just the institution we have. But certainly I agree with the overall critique. If our job is to explain politics and polls aren't a useful tool for that, then there should be a reconsideration of using them as a tool. Now again, I don't think we're quite at the point where polling is going to be, in the future, worse than the replacement-level fundamentals models. There could be some sort of Trump- or modern-GOP-specific reason why the polls are biased against the Republicans. Like [inaudible 00:42:20] says, like we were hinting at earlier, if there's an interaction between low social trust and party support, then you'll get the error.
But if that fades away, if we have more class-oriented politics instead of race-oriented politics, maybe that would change and the errors wouldn't be so correlated with non-response. So I'm not ready to write off the polls, in summary. But one thing that we said when we published our election model was, "Don't doubt the fundamentals." And we got hammered for this. There were allegations that we were overfitting our fundamentals models, that we were trying to take into account too many variables or training them in the wrong way, but they are more accurate than the polls this year, and I think that's worth emphasizing after the fact.
Matt Grossmann: So there was also discussion about correlations across states during this cycle. Did we learn anything about that from the outcomes? I know last night there was the controversy over the New York Times' needles trying to draw on Florida data to predict Georgia and North Carolina. So we got a mini version of that. On the other hand, we ended up with pretty close to uniform swing from last time, and to the extent that it wasn't, it was pretty explainable. So we didn't get these crazy tail outcomes. So I guess what can we say about that debate about the correlations across states in polling error?
G. Elliott Morris: So yeah, it's worth orienting around that national error. Again, on average, polls are off by around two points on vote share right now, if we project the county-level trends in the data we have onto the rest of the counties. That leaves the residual errors in Florida and in the Midwest looking particularly large compared to the nation as a whole. So it seems like we're having, again, these errors that are canceling themselves out, where the polls maybe underestimated suburban shifts away from the Republican party, but also underestimated shifts toward the Republican party among rural white working-class and Hispanic voters. And what that means in the Midwest, or maybe I should say in the Northern battlegrounds, since someone from Pennsylvania would not like to be called a Midwesterner, is that we're seeing pretty uniform underperformance, closer to six or seven points on margin for the Democrats in Ohio, Iowa, Michigan, Wisconsin. Slightly less in Pennsylvania, depending on how the counting happens there.
And I guess if you're training a fixed-effects model to predict how good the polls are, it seems like you're also going to have a residual in Texas, which certainly underperformed our model's expectation of a six-point margin… Actually, sorry, I misspoke. For Biden, Texas would be about a two-point overperformance of the uniform swing. And we'd probably have the same thing in Arizona and Nevada. So there could be this correlated Southwestern swing going on in the other direction. But again, we should wait for final results before really tallying up those errors. Our direction should be broadly right, but there could be some nuance we're missing.
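[Editor's note: the bookkeeping Morris is doing out loud here, projecting a uniform national swing onto each state's 2016 margin and then looking at how far the actual result lands from that baseline, can be sketched like this. All margins and the national swing are illustrative placeholders, not final 2020 numbers.]

```python
# Illustrative 2016 Dem-minus-Rep margins (points) and hypothetical 2020 results.
margins_2016 = {"Wisconsin": -0.8, "Ohio": -8.1, "Texas": -9.0, "Arizona": -3.5}
results_2020 = {"Wisconsin": 0.6, "Ohio": -8.0, "Texas": -5.6, "Arizona": 0.3}

# Hypothetical national swing from 2016 to 2020, in points toward the Democrats.
national_swing = 2.4

for state, m16 in margins_2016.items():
    expected = m16 + national_swing          # uniform-swing baseline
    residual = results_2020[state] - expected
    print(f"{state}: expected {expected:+.1f}, "
          f"actual {results_2020[state]:+.1f}, residual {residual:+.1f}")
```

A positive residual means the state overperformed the uniform swing for the Democrats (the Texas/Arizona point above); a negative one means underperformance (the Midwest point). Clusters of same-signed residuals in neighboring states are what correlated regional error looks like.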
Matt Grossmann: So someone listening live asked us to comment on Trafalgar. So without being too specific, maybe just address this issue that we have pollsters that got it right, but they would have gotten it right if they had just moved their polls three points toward the Republicans on average. They didn't release a lot of their methods, but they claim to be addressing many of the things that we've been talking about: social desirability bias, sampling. So how should we evaluate those claims, and how do you expect that we will? Is he just going to get a lot more business no matter what?
G. Elliott Morris: Look, I watched a segment on CNN where Michael Smerconish told Robert Cahaly, the CEO of Trafalgar Group, "Either Biden's going to underperform on Tuesday and you're going to be the next genius of politics, or you're the dumbest pollster I've ever met." Not in those exact words. And I'll say, it looks like people are probably going to give him credit for the former. But in terms of actually evaluating the claim, there are some things about Trafalgar's data that really point not toward genius, but toward getting lucky with the numbers. The best indication is that the cross tabs are not actually reflecting the voter results.
So Trump winning 40% of voters under 30, as Trafalgar Group had it, is not happening in the exit polls. Trump winning 30% or 40% of black voters is not happening in the exit polls; it's closer to 8 or 10 points. So if the numbers under the hood aren't right, I think that points to something, I don't want to say nefarious, but it points to a different explanation for the top line of the poll. And I'd just say Nate Silver is probably right when he says it's not a good sign if a poll is saying the same thing every single time, if you can guess what it says before it comes out.
That's sort of a hint that something's going on here. That cross-tab information is also a dead giveaway that they're probably just leaning into or biasing their data toward Trump-favorable predictions. And hey, it's paid off for them these past couple of times. So maybe it's a good business strategy, but it's probably not good public opinion research.
Matt Grossmann: So, I guess I want to give you a chance to address the more skeptical question, the more skeptical reaction, which is, "Hey, you said there was only a 4% chance of Trump winning." It doesn't look like that after today, I guess. How would you respond to that?
G. Elliott Morris: It doesn't look like that's going to happen. So, I think, again, it's useful not to think about the forecast as just a probabilistic number. The reason we attach the number is that it's the best summary of the distribution we're giving out, but it can be misleading. The better way to think about our forecast, at least this year, is that we said Joe Biden would win the election even in the case of a severe polling error. That's probably what we're going to end up with, given that he's secured Wisconsin and will very likely secure Michigan by the end of the day. And hey, he could even get Georgia, which the polls had him favored in. That would put him closer to, I guess, 317 or so electoral votes, 320 maybe. That's getting back up closer to the center of the distribution. An average of 340 electoral votes is not that far off.
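[Editor's note: the distinction Morris draws here, that a forecast is a distribution of outcomes and the headline probability is only one summary of it, can be illustrated with a toy simulation. Every number below is invented for illustration; this is not The Economist's actual model.]

```python
import random

random.seed(42)

SIMS = 20_000

def one_simulation():
    """Toy model: Dem electoral votes centered on 340 with a wide spread
    that stands in for correlated polling error across states."""
    ev = random.gauss(340, 45)
    return max(0, min(538, round(ev)))

outcomes = [one_simulation() for _ in range(SIMS)]
win_prob = sum(ev >= 270 for ev in outcomes) / SIMS
narrow_win = sum(270 <= ev < 310 for ev in outcomes) / SIMS

# The single headline number (win_prob) hides the fact that a meaningful
# share of the wins in the distribution are narrow ones, far below the
# 340-vote average: a "win despite a severe polling error" scenario.
print(f"P(win): {win_prob:.2f}, P(narrow win): {narrow_win:.2f}")
```

Reading only the headline probability throws away the shape of the distribution, which is exactly the information that tells you a narrow win was always a live scenario.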
So, I think it's easy to hate on the forecasters after they get Florida so wrong, or at least… Let me back up. Earlier on election night, it was easy to hate on pollsters because it looked, not only based on Florida but also based on things like the New York Times needle on Georgia and North Carolina, like they were headed for a pretty bad six- or seven-point miss across the board. That's not really what we're looking at now. We're looking at Democrats meeting the expectations in their polls in, again, suburban states, with underperformance in the Midwest that's not going to end up being consequential, and then this very weird, sort of Miami-Dade-specific event in Florida. That's something the forecasting models take into account. It's well within our uncertainty interval, our range of predictions.
So if anything, the forecasting models added context this year that you would have missed if you were just inhaling polling data day after day after day. And they pointed to a win for Biden nationally and really good margins in the swing states. My first methodological idea to kick around with our model is to look at better extrapolating trends over the final couple of weeks. There was a noticeable tightening in the polls over the past couple of weeks that slowed down in the last couple of days, very similar to what happened in 2016, when there was a sharp trend toward Clinton and then the tiny [crosstalk 00:51:21]-
Matt Grossmann: We blamed that on the original Comey letter and the second Comey letter at the time, but maybe it’s worth a look back at that.
G. Elliott Morris: Right. So maybe it's time to… Right. Now that we have two examples of this happening, it could be a pattern to keep our eye on. But yeah, the forecasts are going to get a lot of hate for not having the exact combination right, but without them, I think we would have been worse off. It's not clear to me that the alternative is better than having them.
Matt Grossmann: So as you know, I usually have political science researchers talking about new articles on this podcast, and you have an ongoing relationship with political science. And there's a broader connection between data journalism and political science: the day before the election and the day of the election are, in some ways, the height of people's interest in things that might look like political science. So I guess, to what extent are we harnessing that well to teach broader lessons about American politics?
G. Elliott Morris: Well, I think conversations like this are a really good example of doing that. We're having this conversation about polling, and to some extent about political science methods and voter behavior, the day after the election, which we maybe wouldn't have otherwise. So I think it's incumbent on us to have conversations like this, to really improve the public understanding of political science. Also, I should give a lot of credit to 538 here, because they bring on people like you and tons of other political scientists to write articles and takes, even live on election night. That's a direct example of forecasters elevating the profile of political science. Now, sometimes it doesn't pay off. 2016 is a pretty good example of political science models, or at least some high-profile political science models, getting the election very wrong, and not only because they were forecasting the popular vote and not the electoral college last time around.
So it's not always a fruitful relationship, but at least the political scientists I talk to think that the very high-profile public forecasting symposiums and other forms of public scholarship reflect well on the political science academy by elevating its profile. But hey, I'm not a political scientist, so I'll leave that up to you and your peers.
Matt Grossmann: But you're a big reader. So I mean, what should we be… We've been talking about how pollsters should react and how forecasters should react, but what are the big lessons out there that you want to understand, that this election seems to be bringing to light?
G. Elliott Morris: So in terms of broader behavioral lessons, and I'll probably blog about this, there are two things that I'm thinking about. One, it looks like there's further geographic polarization in 2020, which goes against my expectations. Again, this could just be because of the polls, but the idea that the swing against Hillary Clinton, the swing from Obama to Trump, was driven largely by Clinton's personal characteristics might not be a good explanation, given the further swing toward Trump in rural areas. So that's worth dwelling on, I think. And then it's also worth dwelling on the Republican party's future with minoritarian politics, especially of the ethnic variety. If Donald Trump can win 30% or 40% of the vote in Miami-Dade County with the type of campaign, the type of rhetoric, that he's had this year, that probably prompts a lot of reconsideration of how we thought he would be punished for that.
And if suburban white people are the only ones who have enough of a problem with some of the rhetoric from the Trump campaign to punish him at the polls, we probably also have to reconsider the future of the party, if they can just keep holding on to the Senate and pushing minoritarian policy views and that sort of thing.
Matt Grossmann: So if the cottage industry after 2016 was about racial resentment and sexism as drivers of the vote, I guess, to what extent was that a mistake? And do you see a new one coming out of this election?
G. Elliott Morris: So it's unclear to me if it was a mistake… Sorry, let me back up. So I wrote this piece a few months ago saying racism isn't as large a factor in support for Trump in 2020 as it was in 2016, and neither is sexism. Now, the areas with the highest concentration of sexist and racist voters, at least where it matters in the Midwest, do seem to have shifted toward the Democrat, like those polls would suggest. But if our inferences about how these traits play into voter behavior are based on polls, and polls aren't accurately representing the electoral environment, maybe we also need to think about whether or not they're accurately capturing voter psychology. I mean, we might be approaching conspiratorial territory here, thinking that branch of academia is misfiring. But if the polls aren't right at forecasting an election, it might be worth dwelling on whether or not they're right at explaining the behavior in that election too.
So one final point on this, to go back to what you said: if the cottage industry after 2020 is about geographic polarization, I think the coronavirus might have a lot to do with it. To speak bluntly, there were terroristic threats to multiple governors this year from rural America because they didn't want the cities and the government treading on their rights, however silly some elites might think those objections are, or however silly they might be in reality too. If the Republican party can continue to capitalize on that anti-elite sentiment, even when Democrats run a candidate like Joe Biden, who tries to cater to those voters in his rhetoric and in the air he carries himself with, then… Well, two things. Again, it's worth considering how the Republican party continues to thrive in our geographic system of representation. But it also makes me really worry about partisan antipathy starting to intensify along its geographic… I've been up since 3:00 in the morning. Its geographic component too.
Matt Grossmann: But let's end there, because obviously we did have the president last night complaining that the counting needed to stop where he was ahead and continue to the end where he was behind. I guess, how worried should we be? On the one hand, it looks like the result is more likely to go against Trump than not. But there are threats to democracy associated with how elections are interpreted and how elections are fought afterwards. So, where are we on that, and how worried should we be?
G. Elliott Morris: Well, this goes back to something we were talking about with the red shift earlier in the conversation. Your listeners will know the political science pretty well on this, but the threat of violence against your partisan opponents when you don't get what you want out of your government is at least at a recent high. I guess we don't really have [inaudible 00:59:49] too historical a view of polling on tolerating violence against Democrats if they win the election, or against Republicans, if you're a Democrat and maybe the two-party popular vote goes a different way than the electoral college. So again, it's at least a modern high. That's concerning, especially if you have a president who stokes those divisions for his own personal gain.
If what we're predicting happens and Joe Biden wins the presidency, I'm probably more concerned about those threats than I would be if Donald Trump were still in power placating those voters. Now there are some downstream effects; he might lose a little bit of his microphone, his bully pulpit, if he's not the president. But it's not conspiratorial to worry about these things. Now I am getting out over my skis with the political science, though, so I'll hand it back to you.
Matt Grossmann: So, we will continue learning about the election in the days ahead. I want to encourage everyone to subscribe to, well, The Economist overall. I'm a decades-long subscriber. But also to the Checks and Balance newsletter that Elliott puts together. And we'll look out for the book on polling. Does it have a name yet?
G. Elliott Morris: It’s called No Margin for Error, tentatively.
Matt Grossmann: All right. We'll see if No Margin for Error sticks by next year. Thanks for joining us on light sleep. And everyone else, we'll see you next time on The Science of Politics. Thanks so much.
G. Elliott Morris: Pleasure.