Do our social media feeds polarize us, with algorithms that lure us into echo chambers and trap us with viral political content and misinformation? Andy Guess is part of four new papers that suggest these claims are overblown. The big social science collaboration with Meta found that reducing exposure to content shared by those who agree with you politically does not change political attitudes. Neither does reducing reshared content or replacing algorithmic feeds with reverse chronological feeds. Some conservative Facebook users are in a bubble, but we may not be able to blame the algorithm for our polarization.

Guests: Andy Guess, Princeton University

Studies:
"Like-minded sources on Facebook are prevalent but not polarizing"
"Reshares on social media amplify political news but do not detectably affect beliefs or opinions"
"How do social media feed algorithms affect attitudes and behavior in an election campaign?"
"Asymmetric ideological segregation in exposure to political news on Facebook"

Transcript

Matt Grossmann: Do algorithms polarize us? This week on The Science of Politics, for the Niskanen Center, I’m Matt Grossmann. Our social media feeds polarize us by luring us into echo chambers through algorithms that show us what we want to hear, trapping us with viral political content, and even spreading misinformation. At least that’s the common story. How true is it? Social scientists are now collaborating with social media companies to find out, and four new papers from the collaboration just came out in Science and Nature, all using data from Facebook to test these claims. This week I talked to Andy Guess at Princeton University, who was a co-author on all of the papers and a lead author on two.

The first finds that reducing exposure to content shared by those who agree with you politically doesn’t change your political attitudes. The second adds that reducing reshared content doesn’t either. In the third paper, they replaced algorithmic feeds with reverse chronological feeds, and again, attitudes didn’t change. But in the fourth paper, they find that some conservative Facebook users are in a media bubble. So some people are in real echo chambers, but they seem to have selected into them. We may not be able to blame the algorithm for our polarization. I think you’ll enjoy our conversation. So let’s start with your new Nature article that investigates the possibility of echo chambers on Facebook and some potential causes for them. What were the main findings and takeaways?

Andy Guess: So on this paper, the team was able to do something really unique, which is combine a massive descriptive analysis of Facebook platform data and then follow that up with an experiment that actually changed what some people saw on their feeds. And so on the descriptive part, what was really interesting was the ability to measure the share of content from what we’re calling like-minded sources. What like-minded sources refers to in this paper are friends, pages, or groups that are predicted to be politically congruent or congenial. So to be clear, this is not talking about, say, web publishers or news publishers or domains; it’s actually talking about content that is shared by users, pages, or groups that are predicted to be politically similar to a user.

And so for the first time, what we get is an estimate of the share of people’s feeds that actually comes from like-minded or, on the other side, cross-cutting sources. So to give a sense of some of the findings here, the median user sees just over half of their content from these like-minded sources. But there’s one important caveat to this, which is that when we’re talking about like-minded, that’s largely content that we wouldn’t typically think of as political news. And so when we look at content that was specifically classified as being political news and information, that represented a pretty small share of the content, less than 7%. So in this paper, when we’re talking about exposure to content from like-minded sources, it’s a range of content that’s being shared by friends, pages, or groups onto your feeds, which are themselves predicted to be ideologically similar.

And then maybe the third descriptive point there is that about a fifth of Facebook users are indeed getting a large share of their feed exposures from these like-minded sources. So if you want to think about people whom we would colloquially describe as deep in an echo chamber: about 20.6% of Facebook users are estimated to get over 75% of their exposures from like-minded sources, as opposed to either cross-cutting sources or neither. And so that just puts some numbers to the intuition some people have about the extent to which people are in echo chambers on Facebook specifically and what that distribution looks like. So then there’s an experiment that is conducted on platform with a subset of users.

And so having established some of these basic descriptive facts, then the question is, well, what if we could reduce the share of content from like-minded sources that people are seeing on their feeds? This is something that is often either implicitly or explicitly suggested in discussions about how to reform or improve social platforms. And so this is one of the great benefits of this collaboration: we were actually able to do that, or do a version of that. In this experiment, a random subset of consenting users were given versions of the feed that down-ranked content from like-minded sources by about a third. So essentially these people in this experimental treatment group were seeing a lower share of content from like-minded sources than they otherwise would have.

And this experiment lasted for three months, basically running from the end of September through the 2020 election and afterwards. And what this did was increase exposure to content from users, pages, and groups with different political leanings. And so we could track what the effects of that were on various outcomes. There were eight different outcomes that were focused on in the paper, including affective polarization, ideological extremity, candidate evaluations, and belief in false claims. And in all of those we were not able to find statistically significant effects. So in other words, we didn’t detect any changes on those outcomes as a result of this intervention.
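To make the intervention concrete, here is a minimal Python sketch of what down-ranking content from like-minded sources by about a third could look like. The Post fields, the penalty parameter, and the downrank_like_minded function are illustrative assumptions for this sketch only; the actual experiment was implemented inside Meta's production feed ranking system, whose mechanics the paper does not describe at this level of detail.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    rank_score: float   # score assigned by a ranking model (hypothetical field)
    like_minded: bool   # source predicted to be politically congruent with the viewer


def downrank_like_minded(posts: list[Post], penalty: float = 1 / 3) -> list[Post]:
    """Re-order a candidate feed so posts from like-minded sources are demoted.

    Illustrative only: the published experiment reduced like-minded exposures
    by roughly a third inside Meta's production ranker, not with this code.
    """
    rescored = [
        (post.rank_score * (1 - penalty) if post.like_minded else post.rank_score, post)
        for post in posts
    ]
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return [post for _, post in rescored]
```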

Matt Grossmann: So we’re going to be talking through four papers today that were published in short order and that had large co-author lists. I want to give you a chance to talk about your role in these projects and how they came together.

Andy Guess: Sure. So like you said, there’s four papers that came out simultaneously. It’s a result of a huge collaboration and there’s actually more papers to come, probably at least a dozen more. I was a part of the lead author team for two of the papers that came out along with Jenn Pan and Neil Malhotra and Pablo Barbera who’s on the Meta side. And then I’m a co-author on all of these papers, but I do want to give a shout out to all of the lead authors on these two. So on the like-minded paper that we were just discussing that was published in Nature, the lead authors are Brendan Nyhan, Jaime Settle, Emily Thorson, and Magdalena Wojcieszak, along with Pablo Barbera again. And then on the segregation paper, which we will talk about, the two academic leads on that are Sandra Gonzalez-Bailon, and David Lazer.

And then Joshua Tucker and Talia Stroud are the PIs of this entire collaboration, without whom none of this would’ve happened. I really want to give a shout-out to them as well as innumerable incredible Meta research scientists and engineers who worked tirelessly with all of us across many of these papers. So needless to say, I’m going to give you my understanding of what we found on these papers. I can’t be sure that all of my many illustrious co-authors would have the exact same interpretation of the many results that are reported in these papers. I’ll do my best to give as accurate a representation as I can, but I just want to say that this is mostly coming from me and these are my interpretations.

Matt Grossmann: Let’s stick with the like-minded sources paper for a second, because it does seem that the null results here on the experiment are getting some attention as undercutting some common claims about people going down rabbit holes or somehow getting into bubbles as a result of the way that the feed is showing them like-minded content. So is it true that they’re in contrast with that common view? And is there any way to resuscitate the view that it is the Facebook feed that is causing this?

Andy Guess: Well, there are definitely a number of caveats that I think are worth mentioning. The experiments that we ran as a part of this project occurred over a three-month period. Now, in my corner of social science, that’s a longer experiment than I’ve ever been privileged to be a part of, and I think it’s on the far longer end of experiments that you’ll find in social science, especially in the area of political communication. But I can also see why someone might say, well, three months is great, but Facebook has been around since 2004 and some people have been on these platforms for more than a decade, and we’re changing the experience of some people for three months at the end of this very long period in which people’s attitudes and opinions and experiences may have been shaped by the platforms.

And so that’s one caveat that you might want to include, and that might potentially contextualize some of the findings, in particular some of the null findings that we have on these outcomes. I think another really important thing to keep in mind, and I alluded to this with the point about the share of content that is actually political in people’s feeds, is that more generally, social media consumption itself is just a fraction of most people’s overall information diets, once you take into account information that people are getting offline through television, podcasts, friends, and conversations with colleagues or coworkers.

So while we were able to I think conduct some pretty strong interventions on these platforms in ways that were unprecedented, it’s important to keep in mind that even if you have the biggest and most powerful lever that you could imagine on one big platform, that’s still just a fraction of what people are encountering and the kind of information that people are engaging with across the totality of their lives. And so I think that’s also something to keep in mind when we think about what are the kinds of reasonable effect sizes that we might expect from experiments like this.

Matt Grossmann: So you also published an experiment where you stopped showing reshared content on people’s Facebook feeds. This addresses another common claim that you hear, which is that the problem is the virality of a few posts that might polarize people. But again, you found that it did change what people saw, but not their attitudes. So how would you interpret that?

Andy Guess: We were able to suppress reshared content from the feeds of people who were participating in this separate experiment that you just referred to. And like you said, that does result in quite a few meaningful changes to people’s experiences on the platform as well as their engagement. One of the big headline findings that popped out immediately to us was that usage of the platform goes down. If you just think about overall time spent, that goes down as a result of suppressing reshared content from people’s feeds. You also get some changes that might be expected: when you see less reshared content on your feeds, you’re on average getting less political content, and you’re also getting less content from untrustworthy sources, right?

So if you think that reshares, and more generally content that goes viral, are a vector for misinformation, then this is one piece of evidence consistent with that. Then when we look at the kinds of effects that we might see on individuals’ attitudes or behaviors and other kinds of outcomes that we measure using our surveys, we again see this general pattern of limited effects. So it’s very difficult for us to distinguish any kinds of effects from zero, I would say with one really interesting exception, which is that within the sample we see that when we’re removing reshared content from people’s feeds, people’s levels of news knowledge on average decrease. So basically that means that people’s ability to correctly identify, in a survey, events in the news that had happened in the past week actually degrades; it gets worse.

So that might be surprising, especially if you think that reshared content, and more broadly speaking virality, is spreading links to low-quality news, things that might be misleading, misinformation, et cetera. But I think what we did is uncover a really interesting and counterintuitive nuance here, which is that, A, most of the news about politics that people see in their Facebook feeds comes from reshares, full stop. When you take that reshared content out of people’s feeds, that means they are seeing less virality-prone and potentially misleading clickbait, but it also means they’re seeing less content from trustworthy sources as well.

And since content from trustworthy sources is even more prevalent among reshares, on net people are actually being exposed to less accurate information. And so that’s being reflected in people’s performance on these knowledge questions in the surveys.

Matt Grossmann: So in the third experiment, you changed the order of items on people’s feeds to a reverse chronological feed. This is of course what people have been asking for with Threads, the competitor to Twitter that Meta unveiled, and the algorithmic feed is something that is sometimes blamed for the kinds of content people are exposed to and the polarization that might result. But again, you found that this did not seem to change people’s attitudes. So what were the changes, and why didn’t they affect attitudes?

Andy Guess: It’s really interesting. When we were developing these studies, so early 2020, over the summer, we wanted to do an experiment that could implement some sort of change to the feed ranking algorithms on Facebook and Instagram, and reverse chronological ordering was something that quickly came to mind for a lot of us. I think one big reason why is that it’s a really convenient and easy-to-characterize baseline. It’s a very simple rule. People intuitively understand it. It’s also a feed ranking system that predates the engagement-based algorithms that are used today. Facebook and Twitter started out using these kinds of feeds, and it would give us a relatively straightforward comparison with the status quo personalized feed ranking algorithm.

And so there was a practical reason for choosing that. And then as we were developing the studies and actually going into the field and collecting data, it started to become much more of an explicit policy proposal that we saw coming from a number of corners, from civil society and policy advocates of various kinds. So this is an area where the scientific and practical reasons for choosing this intervention started to align with some of the policy discussion, and that’s nice when it happens. And so when we look at what we actually found when we did this experiment: again, we’re taking consenting users, a subset of whom are randomly assigned to get this reverse chronological version of the feed, like the most recent feed on Facebook or Instagram, instead of the default algorithmic feed. We immediately see a bunch of changes in the platform data.

So first, as with the reshares experiment, we see that use of the platform goes down, so time spent goes down. Second, we see that engagement of various kinds with content also goes down. Third, we see that content from friends and groups becomes a greater share of people’s feeds in the chronological version, or content from friends or mutual follows in the case of Instagram. You could probably imagine why that might be the case. So then moving on to different types of content that people are seeing in their feeds, here things get really interesting. One finding, and this relates to the conversation about the like-minded paper: when we switch people to the reverse chronological feed, the proportion of content that people see on average on their feeds from like-minded sources actually goes down.

So implying that there’s something about the algorithmic feed that is promoting content from like-minded sources in one way or another. Also, the proportion of the feed from moderate or mixed sources goes up in the reverse chronological feed relative to the algorithmic feed. When we look at political content or political news, we see that it goes up as a share of people’s feeds on average in the chronological version of the feed. So that’s also interesting, because it suggests that the default algorithmic feed is somewhat suppressing, or at least not encouraging, political content on average for people. And then another one that people might be interested in is that the proportion of people’s feeds from untrustworthy sources goes up; it almost doubles. And so that’s an interesting finding as well.

Some of these are from a very low base, so the proportion of people’s feeds at baseline from untrustworthy sources is less than 3%. The proportion of people’s feeds that are political news, it’s about 6%. So some of these categories are pretty low. But the direction of those changes might be somewhat surprising. And then when we move down to effects on individual attitudes and knowledge and self-reported behaviors, we’re again getting this general picture of zero or null results and effect sizes that we can’t really distinguish from zero, with a couple of exceptions, but that’s the general picture. As for why that’s the case, I think there are some similar reasons to the like-minded paper.

So we have a three-month intervention, we’re making some pretty big changes in terms of the order in which content is being presented and even which content people are seeing on their feeds. But there’s a lot of different changes happening simultaneously. And I think the net effect of all of those were difficult to predict. And this is happening in the context of larger information environments that by definition we were not able to affect in the course of our studies.
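As a rough illustration of the contrast Guess describes between the default engagement-based ranking and the reverse chronological baseline, here is a minimal Python sketch. The Post fields and the single predicted_engagement score are simplifying assumptions made for this sketch; a production ranking system combines many more signals. The point is only that the chronological rule is trivially simple to state, which is part of what makes it a clean experimental baseline.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    post_id: str
    created_at: datetime
    predicted_engagement: float   # stand-in for the many signals a production ranker combines


def algorithmic_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ordering: show first whatever the model predicts you will interact with.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def reverse_chronological_feed(posts: list[Post]) -> list[Post]:
    # The experimental baseline: newest content first, with no personalized ordering.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```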

Matt Grossmann: So as you’ve mentioned, some of the changes that you made reduced engagement on the platform, and I don’t think any of them increased engagement. So that suggests that the algorithm is working as intended at least from the point of view of the company and maybe the users. Is that because we’re wrong about what we want to see when we’re making these proposals? Is it because we’re trying to go against human nature, we know it, but we want to stop it? And does it suggest that to the extent that there’s a problem here that it’s more about us than about the platform?

Andy Guess: That’s a really interesting question. First, I think it’s really hard to disentangle the user from the algorithm. User behavior is always going to take place in its technosocial context. And so everything is intertwined here: user behavior is occurring in the context of algorithmic selection and vice versa. With these studies, what we’re able to do is change one thing, hold everything else constant, and observe the differences in user behavior, but that only gives us one slice of a very complex picture. The other thing I would say is that, yes, I think there’s a perspective here in which we’re observing revealed preferences that may or may not be the same as the stated preferences that people might tell you, say, in a survey or in open responses.

But my understanding actually is that at least Meta, in some of their more recent updates to their algorithm, does take into account user surveys, which ask about longer-term satisfaction with the platform. And I’m sure that other platforms do something similar. I think the reason is that there has been a lot of criticism that these ranking algorithms, and perhaps recommendation algorithms on other platforms, are so strongly fine-tuned to people’s short-term and immediate satisfaction, possibly at the expense of longer-term health. And so what it might look like if we were to weight longer-term and more general satisfaction with a platform more heavily than they currently do, I think it’s hard to say, and I think that’s something that could be answered with experiments that we didn’t conduct and that I hope could be conducted in the future.

So you’re right, we did see decreases in platform use and engagement across the experiments that we ran in these studies that were published so far. But to me that doesn’t rule out the possibility that there are other changes to algorithmic systems that could improve things from a normative perspective, perhaps even from a subjective perspective of users without having such a negative effect on engagement. I think that’s something that has yet to be fully explored.

Matt Grossmann: So of course we didn’t run an experiment where we compared social media to any other kind of media, but as a casual observer, it does seem that the studies of TV effects, both the Fox News studies and some international studies as well, have tended to find larger effects on individual attitudes and behavior. I asked my own Twitter followers in a very unscientific poll why they thought that was, and the plurality answer was that the TV effects are likely to be much larger, but the second one was that the social media studies were bad. So hopefully that one [inaudible]. But how would you characterize the state of that research and that seeming difference, if you see it?

Andy Guess: That was my prior understanding of the state of the literature going into these studies that basically effects of television on political outcomes seem to be at least somewhat stronger than the effects of social media. We do have to keep in mind that the number of studies that credibly estimate social media effects is still pretty low even after these that were just published. And so I think there’s a lot to learn and the literature is still developing. But I really wouldn’t strongly contradict that characterization that it seems that TV effects are stronger than social media effects on a certain range of political outcomes that have been studied.

Another interesting point to keep in mind here, though, is that if you just look at the raw amount of time on average that people spend on TV versus social media in the US, people on average still spend way more time watching TV, and that means it’s still really important. And so again, I think the findings from these studies on the effects of Facebook and Instagram should still be taken in the context of a wider information ecosystem that includes things like television, radio, and of course podcasts.

Matt Grossmann: Another finding that you emphasized that could explain the difference is that very little of people’s feeds is usually about politics. The bubbles that you were investigating here were mostly between the left and the right, but of course there’s also potentially a political bubble that includes most of us and doesn’t include anyone else. Anything to say about that storyline? Is it maybe reinforced by your findings?

Andy Guess: One of the clearest findings I think that comes out of all of these studies, and it’s just a descriptive one, is how relatively unimportant politics is if you just look at the share of content that people see and engage with. I think that’s very sobering and an important reminder to those of us who think, talk, and write about politics that, for a lot of people, politics does not constitute the vast majority of how they experience social media. I’ll just point out one, I think, very vivid illustration of that from the chronological feed experiment that we ran. At baseline, in the control group, something like 6% of people’s feeds is anything approaching political news, and maybe 13 to 14% is political in one way or another, according to the classifiers that we used.

Moving people to the reverse chronological feed, so not taking people’s past engagement into account, actually increases the share of political content and political news that people see. That suggests that when platforms design algorithms that predict what they think you want to see, you actually end up seeing less politics. Obviously there’s a group of us that I think is getting more politics because that’s what the algorithm would predict we want, but we’re just a minority of people on these platforms.

Matt Grossmann: So in the final article that was more descriptive, you found that conservative users on Facebook were more in a bubble of news content than users on the left, and that that included a lot more posts that were labeled misinformation. So does that kind of finding revive the conventional story and just say that it needs to take account of this more specific subset of users, and if so, if it’s not being created by the algorithm, how was that bubble created and reinforced?

Andy Guess: Well, there’s one aspect of that that I think doesn’t really contradict the previous understanding at all, which is that there are asymmetries in the consumption of and engagement with untrustworthy content online. That’s what my previous work with a number of co-authors, including Brendan Nyhan, who’s a co-author on these studies, has found: regardless of what you think about the overall level of consumption of untrustworthy content, these asymmetries are very clear in the data. We found this in 2016 as well, that there was a huge difference in the share of untrustworthy content in the online information diets of Trump supporters versus Clinton supporters.

And so I think the evidence in the segregation paper that was just published is consistent with that. So what’s different though is until now we really haven’t had a good view of what’s happening on the platform itself. We haven’t really been able to look inside the black box and get a sense of what people are actually seeing on their feeds. That was always the big missing piece in research on exposure to and engagement with untrustworthy content. I think this paper really gives us unprecedented evidence on that and I think unlocks a piece of the puzzle.

Matt Grossmann: So given that you have been involved in that past research, did this update your views at all about social media responsibility for 2016 and that whole debate about misinformation? And if you could also just address the conservative complaint, which is that the mainstream media is disproportionately composed of liberals as might be the people who report misinformation on Facebook. So is this misinformation really misinformation?

Andy Guess: Well, on the first question, these are studies that we conducted in 2020, and we know that between the 2016 election and the period when we conducted these studies, a lot changed in the ways that platforms, including Facebook, so Meta, dealt with so-called integrity issues, involving untrustworthy content, misinformation, hate speech, et cetera. A lot of protections and safeguards were put in place that I think reduced exposure to a lot of content that was much more widespread in 2016. So we’re getting a window into what people’s experiences were like after these measures were put into place. And for that reason, and for others as well, I’m not sure that this gives us any reason to second-guess or revise our understanding of what happened in 2016.

So the general takeaway, and this is my gloss, is that consumption of untrustworthy content online was generally very low. It was very low as a share of people’s online information diets. However, there was a subset of people with strongly and mostly conservative information diets that was consuming a lot of online misinformation. And so that’s consistent with the segregation finding in the new paper, where you do see that a huge amount of misinformation is being consumed by that subset of people. However, that’s not consistent with a narrative in which rampant online misinformation swayed people’s vote choices, where you had people who were undecided between one candidate or the other, and they encountered a bunch of fake news that pushed them towards Trump in 2016. So I think that’s the general takeaway.

I think nothing that we found in the most recent paper, if you ask me, really changes that understanding. I think one thing that I have personally updated my views on is the extent to which there is political segregation in general on Facebook. So untrustworthy fake news aside, the extent to which the kinds of news that liberals are seeing and that conservatives are seeing is pretty different. That seems to be much more pronounced, at least in 2020 on Facebook, than I think prior evidence, including some of my own work, has suggested. And again, this is a case where better data and being able to observe things that we couldn’t observe before have really given us a fuller picture of what’s happening.

In particular, with affordances and features of the Facebook platform that don’t exist on other platforms or just on the web. So one of the big findings of the segregation paper is that pages and groups are driving a good chunk of the political segregation, and those are distinctly Facebook-native features. To me, that’s a really important aspect of this, and it adds to our understanding of what’s potentially driving these patterns.

Matt Grossmann: And back to the conservative explanation real quick. They would say that the mainstream media is at least center-left, and so what you’ve uncovered is that conservatives are just a minority of the media and they’re off more by themselves. And to the extent that it’s considered misinformation, part of it is just that, again, the labeling authorities are disproportionately from the center-left.

Andy Guess: So in terms of how we labeled content that was considered untrustworthy on the platform, we were going primarily by the third-party fact-checking partnerships that Meta had already set up and that they use for their own integrity efforts. And so I know that a lot of the criticisms focus on the role of fact-checkers, fact-checkers being a part of the mainstream media, and whether there are any biases in the ways that fact-checkers determine the veracity of particular claims. So what I would say to that is that there have been a number of studies independent of these papers that we’re talking about, that have tried to look at whether if you give members of the lay public the same fact-checking tasks as the professionals, do they come up generally with the same answers as to which claims are true and which claims are false.

And generally speaking, it seems like you can get politically balanced members of the public to move towards the same kinds of conclusions about which stories are reliable and which stories aren’t. And so, A, I think that gives us more confidence that fact-checking isn’t just an arm of a political movement, and B, I think that gives some optimism about the ability to crowdsource some of these efforts so that there’s less reliance on relatively small groups of professionals, and to add some transparency to that process as well.

Matt Grossmann: So tell us about working with social media companies. This was pretty direct. I know that there’s been a mixed history of success and failure with that in the social sciences. So to what extent did that affect what you were able to do, and is this likely to be the start of better joint research?

Andy Guess: To me, this was really the holy grail of research on social media and politics. I think many of us were advocating one way or another for the ability to not only work with platform data, so going beyond custom bespoke experiments and survey data. So working with real platform data, but also being able to work with the platforms themselves to make changes to people’s platform experiences and then see what the results of those changes are. I think that’s been really crucial for the success and the validity of our studies, because while I think we can learn a lot by doing research in the lab, in artificial environments where we can really control every aspect of what study participants see and do, there is a crucial element of ecological validity that has always been missing.

If you think about the things that we think really matter for people’s experiences on social media, it’s things like the social context, what your friends are sharing, people’s awareness of the audience for the things that they post and share. And those things are very, very difficult to replicate in an artificial research setting. And so being able to do these on-platform experiments, in which we can observe not only platform behavior but also other individual-level characteristics of these users, including survey responses, is really unprecedented, and I think a way forward for doing research not only on social media and politics but also, you could imagine, any number of other incredibly important topics like mental health and wellbeing, et cetera.

So for me, this opened the door to all sorts of research that is only beginning to answer some really important questions that researchers and the public have. In terms of working with the platforms, for me it’s been great. The way that the project was set up was done with a lot of foresight, and I think it maximized the intellectual freedom that we, the academic partners, had to design the studies and ask the questions that we thought were important. It also protected the privacy of users and users’ ability to consent to participation in the study and to the sharing of data. And so I think we’ve landed on a model that enables research on these important questions in a way that addresses a lot of the concerns that I think the public has had about research in this area.

Matt Grossmann: So your papers will appear alongside a lot of chemistry and biology papers in Science and Nature, and they’re not the first social media papers to be published there, but obviously if you just read the social science papers that were published in Science and Nature, you’d get a very unrepresentative cast of social science generally. So why is it that the major journals are so interested in social media research compared to other areas of social science and how different is working with them than social science journals?

Andy Guess: That’s an interesting question. I was looking at the issues of Science that were published since our papers came out, and actually the current cover of Science shows a picture of wildfires in Australia. I think that illustrates what these journals are interested in, which is timely issues of public concern, in this example climate change, and the extent to which those intersect with cutting-edge research regardless of the field. Social media is, I think, one of many topics that fall into that category. And so if you look within political science or communication or economics, the disciplines that are represented by the authors on these papers, this is obviously not a representative subset.

But if you look at the kinds of timely topics of public concern that these large interdisciplinary journals are interested in, then I think social media and elections fall squarely into that.

Matt Grossmann: Anything different people should know about working with the journals or the review process that changes what we all see in the end?

Andy Guess: Well, there are a few, I think, obvious differences. The way you write articles for these journals is very different from the journals that we normally publish in, in political science. They’re much shorter. A lot of the details on methods and measurement go into a very long appendix that many readers will never read. So that’s one big difference. In terms of the review process, what’s interesting is that you are very likely to encounter reviewers who are from different fields. I don’t know for sure, but it seems likely that at least a few of our reviewers were from very different disciplines, coming from fields with different methodological approaches, assumptions, et cetera.

And so that leads to, I think, a more unpredictable and challenging review process, but one that is also more fulfilling and ultimately more rigorous, because you’re getting very close reads and critical feedback from many different angles, angles that you don’t usually get when you just stay within your discipline. So I found the review process to be incredibly rigorous, certainly the most rigorous that I’ve experienced thus far. I think that’s one of the under-the-hood differences. And then in terms of what people see, I think the articles just read and feel different, and that’s because there’s a bigger and more interdisciplinary audience than we’re typically used to when we’re publishing in political science.

Matt Grossmann: So you’re also publishing the work at a time when there are increased calls for regulation of social media. I happen to be at Chautauqua this week, where there’s a lot of discussion of free speech, so people self-selected to attend that week, and yet when one of the speakers asked about social media regulation, there was a lot more support for it than for other kinds of potential restrictions on or infringements of free speech. So to what extent are we too quick to jump on the platforms? Why is it that we tend to separate that as an issue in terms of speech protections, thinking, well, the platforms serve this societal role in the spread of misinformation? Or maybe that’s just partisanship or a bias that individuals have. I wanted to get your thoughts on that.

Andy Guess: Social media regulation is an interesting area because it still seems to me like a lot of people across the political spectrum support some regulation or some policy change or another. The question is whether we can get some consensus or at least some overlap on what something feasible might be. It’s not so much one side wants to do something and the other side doesn’t want to do anything. And so that’s already a different dynamic. And so I think the fact that a lot of people seem at least willing to want to do something, means that there’s a search for easy to understand fixes that everyone can get on board with and that perhaps don’t have obvious partisan ramifications. And recently that has led to discussions about algorithms.

And there are some good reasons for this, of course: algorithms of the kind that we’re studying are opaque to people. They are proprietary. People don’t fully understand how they work. And as we’re seeing, they have effects that are unpredictable and also perhaps counterintuitive. One of the takeaways that I think it’s important to draw from these papers is that you can try to implement something relatively straightforward, like replacing the algorithmic feed ranking system with a reverse chronological version, but there are going to be a number of different effects on people’s experiences of the platform that are difficult to characterize or predict, and there are going to be unintended consequences.

And so to the extent that there were hopes that there might be a silver bullet, like we could turn this dial and all of the polarizing content and the misinformation and the hate speech and everything that we were worried about would all go down at the same time, I think we’re showing that things are a bit more complicated, in the sense that effects sometimes move in opposite directions. There are trade-offs in terms of whether you prefer to prioritize, say, reducing untrustworthy content versus other kinds, and there are just unintended consequences. And so I think what we’re showing is that if there are hopes that there would be a simple solution that everyone could get on board with, that might not be the case, and it might be less the case the more we learn about the effects of these systems.

Matt Grossmann: So of course we’re always trying to learn about a general topic, in this case, the effects of social media, but we always have a specific domain that we’re able to study. And for you it’s almost all Facebook in 2020 with a little bit of Instagram as well. So those are a lot larger than some of the social media studies that we sometimes try to generalize from to other kinds of platforms. But it does raise the question of how widely these kinds of findings could apply. On the one hand we might always just want a caveat that we studied them here in this place at this time.

But on the other, it certainly seems that a lot of these debates we’ve been having, especially about Twitter since Elon Musk took over, but also about some of the new competitors, might be a little overblown in terms of how much these changes about the algorithm are really going to change the content of the platform versus just who’s the user base for the platform and what are they interested in. What do you think?

Andy Guess: I’m always a little hesitant to generalize from findings on one platform to another or from one time to another. I guess I would use this as an opportunity to make a related point, which is that we didn’t study all the plausible channels for political influence. We’re primarily looking at effects of some algorithmic changes or feature changes on these platforms on individuals’ behavior and engagement and some individual-level outcomes like attitudes, opinions, knowledge, and self-reported behaviors. What we’re not looking at, for example, are the effects of these systems on the behavior of political elites and leaders and how that might affect the incentives or behavior of other actors in the political system or in other elements of the media ecosystem.

I think that’s what makes it a little difficult to extrapolate from, say, Facebook or Instagram to Twitter. Twitter has lower penetration than Facebook, with less than a quarter of US adults using it, according to the latest Pew data that I’ve seen, but there’s a huge overrepresentation of political elites, journalists, and media elites on Twitter, and historically they’ve often used it to learn from each other. There’s a diffusion of ideas, catchphrases, strategies, perceptions, a sense of what’s important and what isn’t, et cetera. And so I think it’s really hard to know whether the changes on the platform formerly known as Twitter over the past eight to 10 months have had an effect on those kinds of mechanisms.

But I think if you think that it had an effect, it would be those kinds of mechanisms that we really weren’t testing at all in these studies.

Matt Grossmann: So that’s related to a question I had after reading all of this, was if all of this is in some sense a downgrade or a continuation of a downgrade that research has put on the effects of social media on polarization and these other measures of attitudes and behavior, what should be upgraded? What else is out there that maybe people are paying less attention to?

Andy Guess: Definitely the mechanism that I just referred to, so effects on elites; I think that’s really important. Another one that I’ll mention is going to be explored in forthcoming papers from this collaboration; some of the outstanding questions are going to be addressed in other papers that will come out at some point. But I would say one really important outstanding question involves focusing on groups that might be small as a share of the user population of these platforms, but large in absolute numbers. So when you’re studying social media, you’re in a very interesting space where you could say, only 10% or 20% of users see X, Y, Z, or do X, Y, Z.

But when you’re talking about a platform that has more than 200 million monthly active users, even 5% adds up to a lot of people. Research that focuses on relevant subgroups, their particular dynamics, and the effects of different platform features and algorithms on those subgroups, I think that’s a really important piece of the story here.

Matt Grossmann: Is that like how everybody who votes in a congressional primary could actually be a pretty small share of the people who vote in elections?

Andy Guess: Exactly. And if you think about the traditional sample size for a research study in political science, it’s maybe a thousand or a few thousand people. So these have typically been groups that are really hard to study with any precision using the standard research design toolbox. Given the kinds of sample sizes that we’re operating with, which are an order of magnitude larger or more, we do have some opportunity to look at some of these more important subgroups and to be able to say a little bit more about whether social media effects that might not matter for the average user might actually be more consequential for other kinds of users. And so I think there are many questions of that kind, which are really important for future research.

Matt Grossmann: There’s a lot more to learn. The Science of Politics is available bi-weekly from the Niskanen Center. I’m your host, Matt Grossmann. If you liked this discussion, here are the episodes I would recommend next, all linked on our website. Did Facebook really polarize and misinform the 2016 electorate? How online media polarizes and encourages voters. How news and social media shape American voters. How misperceptions and online norms drive cancel culture, and how Fox News Channel spreads its message and persuades viewers. Thanks to Andy Guess for joining me. Please check out “Like-minded sources on Facebook are prevalent but not polarizing,” and “How do social media feed algorithms affect attitudes and behavior in an election campaign?” And then listen in next time.