We have overestimated the influence of partisan misinformation during political campaigns. But that doesn’t mean we’re well-informed. Americans know little about important public policy issues and they “know” things that aren’t so. Emily Thorson finds that Americans concoct information about current policy to match what they think they know. It’s not that they are fed misinformation but that the media report little about the details of current policy, leaving voters to make up the facts. Correcting this misinformation about existing policy can make a difference and help Americans evaluate new proposals for policy change.

Guests: Emily Thorson, Syracuse University
Studies: The Invented State


Matt Grossmann: The impact of policy misinformation, this week on the Science of Politics. For the Niskanen Center, I’m Matt Grossmann. Research suggests that we’ve overestimated the influence of partisan misinformation during political campaigns, but that doesn’t mean Americans know what they should about key issues and debates. The public not only knows little about important public policy issues like social security, immigration, and the national debt, but also “knows” things that aren’t so. So this week I talked to Emily Thorson of Syracuse University about her new Oxford book, The Invented State. She finds that Americans concoct information about current policy to match what they think they know.

They’re not necessarily fed misinformation. In fact, the media report little about the details of current policy, but people often assume important details that aren’t true, like that China holds our national debt and can repossess our property, or that refugees don’t face any background checks. Correcting this misinformation can make a difference, and it might be easier than trying to intervene in partisan debates about policy proposals. Thorson thinks we should at least get on the same page about what current policy entails, a necessary step to evaluate new proposals for change. I think you’ll enjoy our conversation. So tell us about your new book, The Invented State. What did you find?

Emily Thorson: Well, the book really focuses on misperceptions about what the government actually does. So misperceptions about how public policy actually works. I find that misperceptions about what the government does are really widespread. Many of them are held in equal amounts by both Democrats and Republicans. So I’m talking about misperceptions like the belief that TANF benefits are time-unlimited, which is not the case. It’s temporary; “temporary” is in the name of the program. Or the misperception that refugees do not actually need to undergo background checks before admission. These types of misperceptions are really widely held, and I argue that they are widely held not necessarily because people are encountering misinformation on social media or on television, but because information about how public policy actually works just isn’t really available. The media doesn’t tend to cover current policy nearly as much as it covers policy conflict and policy outcomes. And so the basic information about existing policy just doesn’t get out there.

And what that means is that people kind of try to figure it out on their own. They engage in inductive reasoning to try to figure out how social security works, how refugee admission works, and often they get it wrong. But the good news is that these misperceptions are really correctable. People process the corrections, they understand them, and they even recall them up to a month later. And correcting these types of misperceptions also seems to have downstream effects on attitudes. So when people find out how many of these public policies actually work, they become more supportive of them.

Matt Grossmann: So this seems quite a bit different than the dominant discourse about misinformation, especially around the 2016 election and social media. So how did you get to your focus and why does it differ from the dominant concerns about misinformation?

Emily Thorson: Well, I started studying misinformation and misperceptions over a decade ago around the birther movement, and that kind of got me interested in it. And one of the things that became pretty clear the more I studied misinformation was that most misperceptions… and generally when I say misperceptions, I mean false beliefs inside people’s heads, versus misinformation, which is false information out there in the world. When we think about highly politicized misinformation of the type we think about around the 2016 election, so fake news headlines like the Pope endorsed Donald Trump, the reason we think that these are concerning, the reason that there’s so much foundation money poured into correcting these, the reason that there’s so much academic research on it, is because we have this real normative concern that if people consume this false information, it will change their attitudes and behaviors so that they act differently than they would if they were fully informed.

And the problem with that supposition is that we don’t have a lot of good empirical evidence that that is the case. What actually seems to be the case is that when it comes to these highly politicized misperceptions like Obama’s birthplace, it’s not that people are walking around thinking, “I’m not so sure about this Obama guy. Oh my goodness, he was born outside the United States. I guess I won’t support him.” Rather, what happens is that they walk around disliking Obama strongly and therefore are both more likely to be exposed to this misinformation in the first place and substantially more likely to believe it.

And so the causal arrow runs from attitudes to misperceptions, not from misperceptions to attitudes. And so when I started conceptualizing this book, I really wanted to take a step back, and instead of starting from what misinformation is out there in the world and then looking at what effects it has, I wanted to think, well, what misperceptions are inside people’s heads? And in particular, what misperceptions are inside people’s heads that might really have these negative downstream consequences, that might threaten democratic competence, that might threaten their ability to act in ways that are consistent with their underlying values and attitudes? And policy misperceptions seemed like a good place to start, partially because policy is less politicized when it comes to just sort of basic public policy. With some exceptions, it’s not as much of a hot button partisan issue.

Matt Grossmann: So how do you go about thinking about how much people know about current public policy? I know you differ from the usual kind of political information scales that ask people about how many Supreme Court justices are there or who’s the Speaker of the House. But it’s hard to think about what the baseline should be in terms of what people know about public policy. So how do you go about it? And I guess how sure are you that people know less than they think they know?

Emily Thorson: I chose to take sort of a bottom-up approach. I mean, often when scholars study political information, we kind of start from, “Well, what should people know? What do we as political scientists or we as journalists or we as Pew researchers think that people should know?” And sometimes this is civic knowledge like how many justices are on the Supreme Court? Sometimes maybe it’s knowledge about how the government works, but we start from what should people know? And then now let’s quiz them on this. Instead, I took this sort of bottom-up approach where I asked people first, “What issues matter to you?” So I’m thinking about policy issues, what matters to you? And a lot of people said things much as in the Gallup most important problem questions like the economy or healthcare. And so then given what is important to them, so I already know that this is connected to their underlying values, I said, “All right, well why don’t you explain the US healthcare policy to me. Give me the facts about it.”

And so I actually asked them in interview context to explain how they thought policies worked, what they thought the government actually was doing in regards to immigration or in regards to social security. I use prompts like, “Explain it to me like you would explain it to a 4th grader,” to get them to, instead of giving me their opinions, give me sort of their factual beliefs about public policy. And the limitation of doing this in an interview context is that you have a much smaller sample. And so I followed this up with large-scale representative surveys. So I used those interviews to develop a first set of misperceptions, so false beliefs that seemed to come up again and again in these interviews and then followed up the interviews with a survey of a different population to ask these more traditional factual questions about public policy.

I’ll note that just in general, most factual questions are not about public policy. Indeed, honestly, even opinion questions about public policy are kind of shockingly rare when you go into a [inaudible 00:08:49] or looking at things. We have some good tracking information, for instance, on the ACA, but most existing public policies get passed and then everyone stops talking about them: survey researchers stop talking about them, political scientists stop talking about them, journalists stop talking about them. But guess what? They’re still there and people are still using them and people are still forming opinions around them, but with very little factual information to go on.

Matt Grossmann: So you said, I think people inductively reason to come up with some of this information, I might put it less charitably, is people make up information that they don’t know about policy. And I think one of the interesting bits there is your theory that people don’t necessarily need to hear it from anywhere. It could just kind of follow from their opinion. So kind of explain what you think is going on in people’s minds when they’re reasoning about policy and come up with some of this misinformation.

Emily Thorson: Well, I think when people are trying to figure out something… so I’ll give a specific example. One of the ways that I try to investigate what’s going on inside people’s heads when they’re engaging in this inductive reasoning is by using open-ended questions. So often in political science, we use open-ended questions to ask about attitudes. Perhaps the most well-known or widely used open-ended question is the one on the ANES asking, “What do you like and dislike about the Republican and Democratic Party?” I, however, took a different approach and asked about their factual beliefs or factual understanding. So I asked them, for example, why do you think social security is facing financial difficulties? And then they answered why they thought that it was. And I will say the plurality of people got it right. They said, “Well, demographic shifts mean that it’s harder for people to keep contributing money to social security.”

But about a third of people answered basically that someone did something wrong and that is why social security is in trouble. Either the folks managing it did a bad job, they diverted the funds to somewhere else, they used it to buy gold-plated toilets for Congress, or fraudsters are using social security benefits they shouldn’t, and that’s why it’s in trouble. So about a third of people said this, and this is actually really consistent with a known cognitive bias, which is that humans are kind of wired to believe that things are caused by intentional action. So if we know that social security is in trouble, our first instinct is to say, “Well, this must be someone’s fault; someone must have done something to make this happen,” as opposed to, “Well, there are larger demographic shifts that have led to this.” And so when people are reaching for explanations, the intentional action comes more easily.

To give another example that’s maybe a little bit more accessible: more than half of the folks I surveyed thought that China owned the majority of US debt. And this is not true. At the time I wrote the book, it was somewhere around 10%. But it’s pretty clear to me, and I think really intuitive, why they might think this. Most of the US debt is owned by institutions or governments within the United States. So it’s not external, it’s internal, but that doesn’t really fit the concept of debt that we have firsthand experience with.

If I owe credit card debt, I owe it to an external actor, for example the credit card company or the mortgage lender. And so it makes sense that debt has to be owed to something external if you make the analogy of the national debt with household debt, which several smart people have argued we shouldn’t be making, but it’s a really natural one for people to make. And so they kind of engage in this inductive reasoning. “I know we have a lot of debt. Debt has to be owed to someone external to us. China. I hear a lot about China, probably it’s owed to them.” It is a logical chain of reasoning. It just happens to be wrong. And this, I think, is what explains a lot of these misperceptions: people are engaging in faulty inductive reasoning.

Matt Grossmann: The interesting thing was they often went further than that. There might’ve been someone who heard that China owned some US debt from somewhere, but it would be surprising to learn that they had actually heard that China could repossess US properties. But some of them took that additional step. So it seems like they might be hearing their opinion for the first time in telling it to you.

Emily Thorson: Absolutely. And I think that that is something that I kind of get at in the conclusion a little bit, which is that increasingly over the course of this book, I became convinced that facts are not things we carry around in our head. They’re things we construct in the exact same way that we construct attitudes. And it doesn’t really make that much sense to be thinking about facts as that substantively different from attitudes. There are some that we are very sure of, right? I am pretty sure about my age and height, but then a lot of them are just somewhere in the middle and they often are kind of constructed in response to survey questions to some degree.

Matt Grossmann: So you try to go about the puzzle of why people might not know much about current policy by looking at media stories. So tell us about that analysis and why you think the media doesn’t cover current policy details more?

Emily Thorson: Often when we talk about media coverage of policy, the narrative is, well, they don’t cover policy because they’re so focused on covering elections and the game frame and strategy coverage. I think that that’s all true and reasonable, but it’s also the case that when we think about policy coverage, we just tend to lump it all together into one giant category. But there are a lot of different types of policy coverage. You can be covering policy that Congress is currently debating.

You can be covering policy from sort of an opinion perspective: “I think that this is a good policy.” You can also be giving descriptive information about existing policy. And so I did some content analyses looking at when the media covers policy. I looked at how the media covered Medicare and immigration policy, restricting just to articles that were about policy.

So they had the word Medicare in the headline or they had the words immigration policy in the headline or lead. And so I’m already eliminating all of the game frame strategy, Republican versus Democrat ones, just focusing on policy and just looking at those, I wanted to know whether the majority of articles included information about existing policy, basic facts about here’s what Medicare is or here’s what US immigration policy actually looks like.

I found that less than half of them actually included any information about existing policy. It was slightly more common to include information about policy outcomes. So what is the effect of immigration policy? What is the effect of Medicare, which is reasonable? This is also good information for the media to include. So I’m not necessarily critiquing them for including what they do include, but it is notable that if you are someone like most of us who weren’t around when Medicare was passed, then it would be really useful to be able to open up a newspaper article, read an article about Medicare that has just four or five sentences about what the program actually is.

Matt Grossmann: I think you also found that they sometimes cover policy proposals without actually covering the current policy, which I guess might make it easier for politicians to propose current policy as if it’s a new thing.

Emily Thorson: Exactly. And just from a basic standpoint, in order to evaluate a policy proposal, you ideally want to be able to weigh it against the status quo. That seems like it would be really useful information to have, and often it’s simply not included.

Matt Grossmann: So you mentioned that in the campaign context, a lot of misinformation actually comes after someone has made up their mind between the two candidates. So how do we know that that’s not the main way that misinformation develops in public policy as well? That really this misinformation isn’t a cause of anything, it’s just after the fact justification?

Emily Thorson: I think sometimes it probably is. I want to preface it by saying that I think that there are certainly times when people form policy misperceptions based on their own partisan identity. For example, Democrats are much more likely to hold the misperception that the US spends more money on the military than on healthcare. And I suspect part of this is because they like healthcare and don’t like the military. So some of these misperceptions are at least partly informed by partisan predispositions.

But to get to your question, the way that we know that a misperception or a piece of misinformation has a causal effect on attitudes is through experiments. And we have two options when we’re thinking about running these experiments. One is to randomly assign some people to get a piece of misinformation and some people not to, and then you compare their attitudes afterwards.

The second, which is a little bit more ethical, is to assign some people to get corrections and some people not to, and then compare their attitudes afterwards. I will note that I did this in a separate paper in the BJPS with some co-authors where over the course of the 2020 election, we gave people misinformation and corrections that were very politicized. They were all about kind of hot button political issues, about Kamala Harris, about Joe Biden, about Donald Trump.

 We found just almost no effect on attitudes either of the misinformation itself or of the corrective information. People’s attitudes on these political figures just didn’t shift based on one additional piece of misinformation or one additional correction, which I think is not necessarily a bad thing. Should your entire opinion of Joe Biden be changing based on one piece of information? Maybe not. But the story is a little bit different when it comes to these policy misperceptions.

So what I do is I randomly assign some people to receive corrective information about policies and some people not to. What I find is that receiving this corrective information about existing policies changes attitudes as compared to the control group. So when you tell people how public policy actually works, their attitudes change. And this is not necessarily always the case when it comes to the highly partisan politicized misinformation.

Matt Grossmann: So these are mostly survey experiments where you’re correcting people’s misperceptions then asking them pretty much at the same time how their opinions have changed. How much do you think you would expect that kind of analysis to really tell you something about a real world situation in which someone was to say, encounter this in media coverage? How much would you expect the same findings to travel?

Emily Thorson: That’s always a good question. I think one reason why I sometimes refer to these policy misperceptions as low-hanging fruit from a journalistic perspective is that they are so correctable, and I can’t emphasize enough how correctable these misperceptions are. The fact that in a survey experiment I can tell people the correct information and then they remember it an entire month later is really impressive: it has this kind of long-lasting effect from just getting one piece of information.

I think one of the reasons for this is that there is this kind of information vacuum around these policy issues. When you give people this corrective information, it’s not competing with a lot of other information. In contrast, if you give people another piece of information about Joe Biden, it’s competing with a really saturated information environment. So these policy issues are areas where people have relatively low information and where their attitudes are less tied up with their partisan identity.

And this makes it a really good avenue for attitude change, for effective attitude change. And so the answer is, I don’t know. If I followed up with these people a month later, would it still have an impact? Maybe. But I am optimistic about what could happen if journalists, for example, committed to consistently including basic descriptive information about public policy in their coverage.

 So here’s the rule. Every time you talk about social security, you need to include these four boilerplate sentences about how social security works. Yes, I do think that it could have actual effects on how people think about social security, their support for it, their beliefs about what does and does not need to change about the policy.

Matt Grossmann: So part of the effects that you get are, as you say, from the fact that this is outside of a pure partisan context, a lot different than a campaign. But is that just partly the kind of examples that we’ve chosen, or a general fact about policy misinformation? We might think about things like there are death panels in Obamacare or Joe Biden is for open borders, which are claims about current policy but are just much closer to the kind of campaign misinformation that we think about, and therefore might just be more partisan and less changeable.

Emily Thorson: Yes, absolutely. I think that the closer you get to these hot button partisan issues, even when they are policy… There’s not something magical about policy that makes it nonpartisan. It’s just that politicians, we have a hundred policies that affect Americans’ everyday lives. Politicians only regularly talk about four of them. That’s just the way that kind of things work these days. And so if you choose some of those four, yeah, it’s going to be a lot tougher to shift attitudes on those.

People are much more locked in when it comes to those policies. For example, abortion. It’s going to be tougher for any given piece of information to change attitudes on abortion just because people have such strong beliefs already. I will say Brendan Nyhan and his co-authors have a project looking at the effect of information about existing policy in the realm of election administration, and they find that giving people descriptive information about how elections actually are run and how they work is pretty effective at increasing trust in elections, even more so than interventions that use co-partisan actors.

So it’s more effective if you’re a Republican to hear, “Hey, here’s how elections actually work,” than it is to hear from a fellow Republican saying, “I think elections are great.” Right? Again, giving people information about how policy actually works seems to have this impact on their attitudes that other types of messages don’t.

Matt Grossmann: So I want to go into some of your examples in more detail. We’ve talked a little bit about the national debt, but this is, I think one of my least popular lectures just trying to explain to students the national debt. So I want to get into that a little bit more. How do people think that that works and why does their lack of real understanding matter?

Emily Thorson: Well, we sort of talked about this before, but if people think that the national debt is owed to entities outside the United States, this has a couple of potential implications. One, it makes it a much bigger problem. If suddenly the US is walking around owing most of its national debt to China, or Japan, or Spain, then this gives these other countries a lot of leverage over us. People are like, “Oh my God, if I owed all of my entire house to the bank, then they could just repossess my house at any time.”

So it changes how important they think that the issue is. It moves it up in salience. And it also means then in political rhetoric, if Donald Trump or Joe Biden says, “I pledge to cut the national debt,” people say, “Oh, thank goodness he’s doing that because now China can’t come and take my house from me.” And so it changes what they think the consequences of the policy are, and it changes how important they think that it is. Versus if people think, “Well, the national debt is just owed to entities within the United States, then we can just keep on keeping on. It’s fine. Nothing terrible is going to happen.” So I think it does change the consequences, people’s beliefs about consequences when you change beliefs about how the national debt actually works.

Matt Grossmann: But I sympathize with some of your respondents here, even trying to explain it myself. Let’s say they do have a general perception, as I think most economists do, that there will be a problem at some point or there are risks associated with it. Is part of what people are doing just sort of reflecting that? “Yes, it is a problem of some kind. I’m not sure how it would actually be resolved.” And I guess if so, then it might not necessarily have that same impact if they were able to explain it, but explain it in a way that it was still a problem.

Emily Thorson: Yeah. I don’t think that the idea is that once people learn that China doesn’t own most of the national debt, they’re going to say, “Oh, it’s no problem at all.” But it might just reduce it, right? If we have the problem scale from zero to 10, maybe it takes it from an eight to a six, which is a lot.

Matt Grossmann: So how about social security? Another one that a lot of people misperceive. How do people think it works compared to how it does work, and why does that matter?

Emily Thorson: There are a couple of misperceptions around social security. One is about the way social security is funded: many people believe that you basically contribute money to a savings account while you are working, and then when you retire, that money is paid back to you. And I think that the origin of that misperception seems very clear to me, and maybe to anyone who lived through Al Gore’s presidential campaign. When you talk to people about a “lockbox,” that sounds like it is a place where you put your savings. And again, like the national debt, it makes clear how important these metaphors are and how they can have downstream effects that even the people who put them out there didn’t intend. I don’t think it was Al Gore’s intention to sort of mislead the American public about how Social Security works, but when people don’t have a lot of other information, then these metaphors can play a really big role in how they form their factual beliefs. So that’s one misperception that I found.

Another is this one about why Social Security is in trouble. Is Social Security in trouble because, as one respondent put it, not enough people are having babies? Or is it in trouble because people are fraudulently claiming disability checks, because the US government is using the money to put gold toilets on Air Force One, because somebody mismanaged the money in a way that took it away from Social Security and gave it to the military? So is it because of demographic shifts, larger structural factors, or is it because of people doing bad things? And about a third of people believe that it’s the latter, that it’s people doing bad things. And this has to do with our general tendency to make causal inferences about intentionality: we tend to think it’s somebody’s fault.

I mean, my children do this all the time. If they can’t find their Kindle, it’s not like, “Oh, I must have left my Kindle somewhere.” It’s, “You moved my Kindle. Somebody must have moved this. Somebody must have moved my stuff.” So I think it rings true to me that people tend to make these types of inferences.

Matt Grossmann: So if I was explaining this to someone, my first instinct would be to go to the beginning and to say, well, when we set it up, we gave money out to the existing old population that had not contributed to Social Security originally. And that sort of explains both of the misperceptions: why your contributions are not in a lockbox and why it might matter that we’re having demographic changes over time. But that might presuppose more logic, I guess, than people are going through here. So to what extent are even the people who do know thinking about the history of the program, or is it just a lot more surface level than that?

Emily Thorson: I think it’s a lot more surface level. I think that, and again, it’s really hard for me to say, when I am asking people in an open-ended context, why is Social Security facing financial difficulties? And they type something in their answer box, for how many of those people are they typing a belief that they have carried around in their heads with them for 2, 5, 10 years? And for how many people are they thinking about this for the very first time because I am asking it? I can’t tell you that. I don’t know. In the same way that we don’t know when we ask attitudinal open-ended questions or closed-ended questions. In any survey context we cannot know whether beliefs or attitudes are constructed on the spot or whether they already exist.

That being said, I do think that it tends to be not based on a deep consideration of the historical context of Social Security, but rather just thinking, well, Social Security, well, I guess I definitely see some money being taken out of my paycheck every week. Where is that going? Maybe it’s like going into a bank account and then later I can get the… So it’s this sort of surface level reasoning that gets them there rather than a deeper historical context.

Matt Grossmann: So President Biden recently announced some changes to refugee policy, so it’s a good time to talk about your findings there. What do people know and not know about refugee policy and how much difference does that make?

Emily Thorson: So this is an experiment that I did, and the paper version of this is co-authored with Lamis Abdelaaty and it’s in the APSR. And what we were interested in was the relative effect of giving people information about existing policy versus information about policy outcomes. Because most often when researchers try to do corrective interventions about policy, they tend to focus on policy outcomes. So we’ll say, “Hey, we really want you to like the ACA, so we’re going to tell you about the reduction in the uninsured that comes because of the ACA. We really want you to be more supportive of refugee policy, so we’re going to tell you that most refugees don’t commit crimes and that most refugees…” I mean, we’re going to correct these outcome misperceptions. Versus telling people here is how refugee policy actually works: refugees need to be vetted, they go through a number of background checks, here’s the legal definition of a refugee, and they’re resettled by these organizations, et cetera.

So again, these are aspects of refugee policy that have been in place for decades and decades. These are not novel aspects of refugee policy. And so we randomly assign people to see either information about current policy, about existing policy, how refugee policy actually works, or information about policy outcomes: what percentage of refugees are criminals, how many are using welfare benefits, et cetera. And the answers to both are very, very tiny, clearly. So the answers make refugees look good.

And what we found was that giving people information about existing policy increased their support for refugee-friendly policies and for refugees more generally a lot more than giving people outcome information and definitely more than the control condition where people got no information. So just describing to people how the refugee admission policy works was extremely effective in improving attitudes towards refugees.

Matt Grossmann: But applying it to the current context, it seems like every choice we would make about how to inform people or what to inform people about would be politicized, might be more likely to move them in one direction than another. So about the current policy related to Biden’s announcement, we could tell them things about how similar or different President Trump’s policies were. We could tell them things about how long people stay in the US while they’re being adjudicated. We could tell them about where they’re from, about their claims. It just seems like, do we know that providing information in general is going to increase people’s support or have we just selected some pieces that are favorable for the policy?

Emily Thorson: This is a really great question and one of the reasons why in almost every study in this book I try to do it across multiple policy areas. So the refugee paper is an exception, but in most of the studies in this book I’ll look at four different policy areas to make sure that the effects of the corrective intervention, or whatever other mechanism I’m trying to understand, are consistent across all of them. Because I think it is also very plausible that sometimes you describe to people how a policy works and they say, “Oh my God, that’s a horrible policy. I hate that policy.” And as you say, this also depends on what information about the policy you give them.

So I’m sure that either you or I could design an intervention that gave people true information about refugee policy that made them dislike the policy. And so yes, of course, what information you choose is always going to shape the attitudinal effects. I would argue that the informational interventions that I provide in the book read almost like Wikipedia summaries. They’re not really me picking and choosing only things that make policies look good. They tend to be just the very basic, here-is-how-this-policy-works, intro-textbook type of descriptions. And those, at least in my experience, tend to make people like policies slightly more. Which makes some intuitive sense to me, because most policies got there based on a lot of tweaking and figuring out how to make the policy appeal to the widest constituency possible.

That’s how policies get passed. They get passed partially because they managed to get either some Democrats and some Republicans on board, or all Democrats and one Republican, depending on the policy. So you craft this policy specifically for the purpose of it being liked and popular. And so it does make sense to me that when you tell people how these policies work, they like them and they’re relatively popular, usually. Although, as always, the government is not in the habit of passing deeply, deeply unpopular policies; we leave that to the Supreme Court.

Matt Grossmann: So social welfare is another area you look at where I think there’s a little bit more historical recognition that misinformation about prior policies might have played a role in welfare reform. So thinking about how long people were on a program, what the recipients were like of a social welfare program, whether getting a job resulted in you losing your benefits, all those kinds of things in those debates were mentioned as things people might or might not know about the policies. So how should we update that given your findings?

Emily Thorson: I think that in some ways you could write a book about any one of these policies. You could write The Invented State: Welfare Policy and have an entire book that just delved into that. And this is one of the things that was so tricky about writing this, right? The inductive processes that lead to these misperceptions are different for each of these policy areas. The misperceptions themselves are different. The partisan distribution of the misperceptions is different. I often think about the invented state, when it comes to different policy areas, as having different degrees of what Larry Bartels calls partisan surmise: the idea that we make partisan-based inferences. Some of our inferences have a lot of partisan surmise in them, and some have only a little bit. And so different policy areas differ in their amount of partisan surmise.

The amount that elite rhetoric played a role in this, the amount that inductive reasoning did, the amount that lived experience did. If you are talking about social welfare programs, people who have actual personal experience with these programs are going to have different misperceptions than those who don’t. So as much as I try to make a general argument about policy misperceptions, it’s also tricky because they do differ. So with what you’re talking about in terms of the welfare state, I think it’s likely that elite rhetoric played a much bigger role, when you think about the Clinton-era debates over welfare and how welfare was framed during that time. And I think it would be naive to think that that didn’t have an impact on people’s factual beliefs.

But again, if I were writing The Invented State: Welfare Policy, then I would be really interested to look at how misperceptions varied by age. Do people who were around for that and who remember it have a different set of misperceptions than people who were born post-Clinton? That would be a really interesting thing to look at and might give us some leverage on how much this elite discourse directly contributed to some of these misperceptions.

Matt Grossmann: So you mentioned some direct implications for journalists, that they should dedicate more paragraphs to context and to explaining current policy. What about for policy advocates or political practitioners? Do they already know this? Do they already surface those facts that are most important to influence opinions, or what can they learn from this?

Emily Thorson: So you’re thinking about interest groups, for example. I mean, again, it depends on the policy. But certainly, if I were running a group that was advocating for refugees, then one lesson I might take from this is: wow, I should be pushing out descriptive information about how refugee policy works, as opposed to maybe just pushing out inspirational stories about refugees, right? Because there are a lot of things that we might intuit would improve attitudes towards refugees. If I tell somebody a really inspiring story about a refugee who went to Harvard and is now a biochemist, that’s going to make people like refugees more, which it might, right? But it is also the case that if you tell people, here are the six steps that refugees need to go through before they’re admitted to the United States, that also makes people more supportive of refugee-friendly policies. And so I think adding information about existing policy to your list of possible informational interventions would be one lesson that policy advocates could take from this.

Matt Grossmann: So that would be if they are taking it as lessons for their communication strategy. But it seems like people might also be taking policy design implications from this kind of information. I’m thinking about, for example, the regular pattern of people trying to design social welfare programs or other kinds of incentives as tax incentives rather than direct provision, as stemming from people basically designing around people’s misperceptions that these things are very different and thus you could propose the same policy or very similar policy through a different mechanism. Not thinking you could change anyone’s current views of how it works, but maybe take advantage of them.

Emily Thorson: No, I think that makes a lot of sense, right? I mean, we could even imagine that if the government, or anybody, wanted to start correcting people’s misperceptions about how Social Security works, then you could make changes to people’s pay stubs, right? What do people’s pay stubs look like? They say Social Security on there, and maybe there’s another little sentence underneath it that says, this goes to fund current recipients of Social Security, right? There you’ve corrected a misperception about Social Security. So yes, I think that there are lots of ways that you can design policy, in small ways and in large ways, that increase, I guess, transparency for people about how these policies actually work, whether that’s how they’re funded or how they’re executed, et cetera.

Matt Grossmann: So you’ve told us some stories and given some evidence about how basic policy information might increase support for programs, but we also have some interesting, well-established patterns that I wanted to get your comment on. One is that often the point when a proposal is introduced is actually the height of its popularity in Congress, and as it’s debated and people learn more about it, its support goes down. And it also seems to be the case that when policies are passed, that might satisfy some concern and reduce the desire for additional policies. So I’m thinking about, for example, the Obamacare story, where it basically is proposed, it gets enacted, and over that time its popularity drops quite a bit, and then it doesn’t rise in popularity again until there’s an effort to repeal it. So I guess, how is people’s information about policy playing into this pattern?

Emily Thorson: I mean, this is all surmise, right? So I’ll preface it by saying this is surmise; I don’t have actual data on this. But it makes sense to me that one of the things that came along with the effort to repeal it is information, right? So when you’re writing an article that says, “Hey, Republicans are saying they’re going to repeal Obamacare,” not every article, but many articles will also say, “As a reminder, here’s what Obamacare actually does.” People are like, “Oh yeah, I get it now. That’s what it does. Yeah, that stuff seems good. Let’s keep that,” right? And I’m not saying that the policy threat mechanism, which I think other scholars have proposed, isn’t also working. I think both of those things can be working at the same time. But anytime there’s an opportunity, anytime that the media have an incentive to cover policy, then one unintended effect is that people will learn more about that policy, and sometimes it’s because it’s going to be repealed, sometimes it’s because it’s being debated, et cetera.

Matt Grossmann: So some policy issues both have a lot of misinformation and are very tied to people’s core beliefs about society and about politics. And immigration, I think, is an example where there’s a lot of misinformation about current policy, and yet we might be skeptical that correcting that misinformation would have a big effect, because people have pretty core attitudes that are hard to change about just whether we should have more or less. So yeah, I guess, how should we think about the role of information in that kind of scenario, where the beliefs are probably stable and real but the misinformation is still high?

Emily Thorson: So I think a good general rule of thumb, when you’re thinking about whether a piece of information or misinformation will have a causal effect on attitudes, is this: the chances for any given piece of information or misinformation to have an effect are highest when attitudes are weak and when pre-existing information is low. So on issues that people don’t know much about and have weak attitudes toward, any additional piece of information or misinformation is going to have a larger effect. And that’s one of the reasons why these kinds of policy areas are low-hanging fruit. Immigration doesn’t quite fit that, because people tend to have slightly more information about immigration, not as much as some issues, I would say, but more. And they tend to have stronger attitudes, and stronger attitudes tend to coexist with more politicized issues.

And because immigration has been such a politicized issue, it’s an issue where people have more information and stronger attitudes that are tied to their partisan identity. So I think what this means is that we should potentially be slightly less worried about misinformation, because any given piece of misinformation out there in the world, a fake news article on Facebook, is probably not going to sway people a whole bunch, because they’re pretty locked in to either yes, immigrants are good, or no, immigrants are bad.

It also means that the kind of corrective interventions I’m talking about, information about current policy, is likely going to be less effective than with an issue like the national debt. Will it be ineffective? If I had to register a hypothesis, I would probably say yes, we could move the needle on immigration attitudes with information about existing policy. I’m not certain of that, it’s an empirical question, but given that I was able to move the needle on refugees, I don’t think it’s implausible that I could move the needle on immigration. Do I think I could move the needle on abortion? Probably not. That’s a lot more locked in.

Matt Grossmann: Let me ask you a little bit more pointedly, because it seems like immigration is a good example where the actual view that we think people have, from correlating their opinions and trying to dig deep into what changes them, is something like: they don’t want the foreign-born population to increase too much, they don’t want America to racially change too much. And yet that is very different from what they say if asked to give their immigration opinion. And they will often use facts as rationalizations, but they’re often about economic effects of immigration and other factors that they think are a more socially acceptable explanation for their view. So I guess I’m just wondering how much information is playing that role more than-

Emily Thorson: Yeah, I feel like for people whose opposition to immigration is rooted in racism, you’re not going to solve that with policy information. But I think there are also a number of people out there whose opposition to immigration is rooted in, well, not necessarily high-minded beliefs, but beliefs about economic competition, and is a little bit less just a product of racism. And with those people, I can imagine policy information might move the needle a little bit.

Matt Grossmann: So we’ve talked about how your analysis is quite different from the dominant literature on misinformation that is about its use in campaigns, and obviously we’re in the middle of 2024. So I wanted to give you a chance to kind of relate back to that literature. Do we know, given your findings about policy information, how should we think about this kind of dominant concern about misinformation in campaigns?

Emily Thorson: I think people overestimate the potential impact of misinformation in campaigns for a couple of reasons. One, most people don’t see much misinformation. Most people don’t see much political information, period, especially on social media. I think when you’re talking about [inaudible 00:51:08] Fox News, that’s a slightly different matter. So most people do not see misinformation, and to the extent that they do, it tends to be misinformation that reinforces their already existing views. So if you are seeing misinformation about Donald Trump, like false claims about Donald Trump on Facebook, chances are very high that you are a liberal.

And so you already have those kind of locked-in views about Donald Trump. In addition, as I just referred to, most of the time when it comes to presidential elections, people’s attitudes are very strong and stable, and they’re not going to be swayed by one fake news headline or one piece of misinformation. I think misinformation gets more dangerous when it gets picked up by mainstream media organizations, or when political candidates are really effective at making it a centerpiece of their campaign. So if you can just hammer one piece of misinformation over and over and over again, then it can potentially start to have an impact. I don’t think that misinformation should necessarily be our primary concern in the 2024 presidential campaign.

Matt Grossmann: So what’s next for you in this agenda, anything you want to add about what you’re working on now?

Emily Thorson: Well, I guess related to this question of misinformation and the effects of misinformation, I just published a perspectives piece in Nature with some co-authors, making the argument that the public discourse about misinformation gets a lot wrong. There’s an assumption of mass exposure, that many people see lots of misinformation, and the empirical evidence doesn’t really support that. And we also push people to conceptualize the problem of misinformation a little bit differently.

So rather than thinking about these questions of mass exposure, think about the people on the fringe who are opting into news sources or places where they see this kind of harmful misinformation. We know from descriptive research that misinformation consumption tends to be really concentrated among a small group of people who opt into this kind of information. And so figuring out who those people are, what’s driving them, and what the potential harmful effects are is potentially a better use of our time and energy than just focusing on trying to count how many pieces of fake news people saw on Facebook during the election, given that the latter is unlikely to have a big effect on attitudes or behaviors.

So that is one big project. A second is a Cambridge Elements book that just came out, looking at how news coverage of misinformation shapes attitudes. As I just said, misinformation is maybe a smaller problem than we intuitively imagine it to be, but the media certainly do not cover it as if it’s a small problem. They cover it as if it’s an enormous, huge problem. And so what are the effects of consuming this kind of, one might call it, moral panic coverage of misinformation on things like political trust and trust in media? I find that it doesn’t seem to have an effect on political trust, but exposure to news coverage of misinformation actually increases trust in mainstream media, especially print media.

Matt Grossmann: There’s a lot more to learn. The Science of Politics is available bi-weekly from the Niskanen Center, and I’m your host, Matt Grossmann. If you liked this discussion, here are the episodes you should check out next, all linked on our website: when information about candidates persuades voters; how the public moves to the right when policy moves left; did Facebook really polarize and misinform the 2016 electorate; how news and social media shape American voters; and how media coverage of Congress limits policymaking. Thanks to Emily Thorson for joining me. Please check out The Invented State and then listen in next time.