August 20, 2025

Making AI policy: Are we falling behind or rushing in?

As the next AI cycle begins, state and national governments are trying to keep up. And AI policy now matters for energy, health, education, foreign, and economic development policy as well. What can we learn from the early AI legislation? Chinnu Parinandi finds that partisan alignments and institutional capacity shape where and how consumer protection versus economic development AI policies appear in the states. Heonuk Ha finds an AI boom in congressional legislation with key thematic clusters—from innovation and security to data governance and healthcare, finding Democrats to be more active and more positive in promoting AI. They both say it is early days with a complex effort ahead.

Niskanen Center – The Science of Politics · Making AI policy: Are we falling behind or rushing in?

Transcript

Matt Grossmann: Making AI policy, are we falling behind or rushing in? This week on The Science of Politics, for the Niskanen Center, I’m Matt Grossmann. As the next artificial intelligence upgrade cycle begins, the states and Washington are trying to keep up. Congress just considered a moratorium on state regulation, but elected to keep the door open. And now AI policy is becoming energy, health, education, foreign, and economic development policy all in one. What can we learn from the early efforts to make AI policy?

This week, I talked to Chinnu Parinandi of the University of Colorado about his co-authored Business and Politics article, Investigating the Politics and Content of US State Artificial Intelligence Legislation. He finds that partisan alignments and institutional capacity shape where and how consumer protection rather than economic development policies appear in the states. Turning to the national level, I also talked to Heonuk Ha of the University of Michigan about his AI & Society article, When AI Meets AI. Heonuk identifies key thematic clusters, from innovation and security to data governance and healthcare, in AI bills in Congress, finding Democrats to be more active and more positive in promoting AI. I think you’ll find these conversations timely, starting with Chinnu. So tell us about the major findings and takeaways from your work on state artificial intelligence legislation. What’d you find?

Chinnu Parinandi: Great. So what we were interested in was this idea that this is a domain where technology and technological development is outpacing regulation and policymaking, and we wanted to look at what was going on in the US states with respect to developing a nascent policy and regulatory climate related to the issue of AI. And what we found were a couple of things. So there’s this economic cyclicality to the adoption of AI policy. Looking at that time series, states at least from 2018 to 2022 are less likely to adopt AI policy when inflation goes up and when unemployment goes up, which suggests that lawmakers are paying attention to optics and don’t want to be blamed for policies that the public might think are displacing jobs or worsening economic conditions. The other thing is that there’s a proclivity among states with unified Democratic government to adopt AI policy. That fits in line with what we know about regulation generally, which is that Democrats tend to be more interested in creating regulations to establish bounds of behavior in industry, and Republicans are less optimistic about doing that.

Matt Grossmann: So walk us through how you identified the AI bills and what they included, and talk about the limitations of what you’ve been able to do so far.

Chinnu Parinandi: Yeah, so great question. I think I’m going to focus on limitations at the risk of overselling our results. So AI obviously is a vastly burgeoning industry and field, and our study is already dated because it ended in 2022 and we’re light years ahead of that now. What we did was we looked through a database maintained by the National Conference of State Legislatures of policies that the states had adopted and introduced related to AI. We used that as our initial list of policies and we were able to identify 181 bills across a multitude of states related to AI, and then we used LegiScan to identify the roll call votes on those bills, to identify how voting patterns look at the level of the legislator. And so we were able to identify the bill, the bill number, the state, whether the bill originated in the lower versus upper chamber of a state’s legislature. And that ended up being the backbone of our data that we analyzed.

What we also were able to do, drilling down on adoption and looking at the adoption of consumer protection-oriented AI legislation versus non-consumer protection-oriented legislation, was a close reading of all of these bills to identify whether a bill mentioned, in its summary or in the legislation as a whole, protections or safeguarding the public with respect to AI technology, or maintaining equality of opportunity. Again, we didn’t use a language model, we just used human coders and readers. If a bill met either of those two conditions, it was classified as being consumer protection-oriented. If it didn’t meet those conditions, it was classified as non-consumer protection-oriented. And then we were able to conduct a supplemental analysis to identify that basically Democrats are more interested in adopting consumer protection-oriented AI bills.

There’s not a clear party trend for the non-consumer protection-oriented one, which means that both political parties are adopting bills that are basically geared towards economic development or using AI as a vehicle to claim economic development. In case you’re interested in concrete examples of what these look like, a consumer protection-oriented AI bill is going to be one that bans an algorithm from throwing out applicants from employment websites based on what their names look like or a consumer protection-oriented bill is going to be one that prevents an algorithm or website from denying a home loan to someone based on what they put as their address on the loan application. A non-consumer protection-oriented bill is going to be one that looks at creating a commission to give investment to Amazon for building a data center in the State of Michigan. And so that’s kind of walking through the process of classification and identification of AI legislation.

Matt Grossmann: So you did identify some differences by party, but it doesn’t seem like there’s a huge polarization yet in AI policy. Is that accurate? And is there any other, I guess, source of differentiation via chamber or region or anything else that stood out as to how this is different?

Chinnu Parinandi: So, again, awesome question. I actually think this is an area where both parties are in on it. There are slight differences in terms of the application of where government regulation should occur. So California has adopted these bans on using AI for discriminatory policymaking. And I can speak outside of this paper and talk about a lot of the work I’m doing now, which is linking the energy and AI sectors together with respect to data center construction, AI and the explosion of AI is one of the fastest growing areas of our economy and I think that politicians on both sides of the aisle are viewing this as an area of investment. And so you’re actually seeing a lot more hesitancy among Democrats to really regulate in this area and tie the hands of what industry can do. They’re letting industry kind of create the parameters of the debate.

For example, Josh Shapiro, the governor of Pennsylvania, has talked about potentially leaving PJM. In energy, there are these wholesale markets that the federal government regulates where energy is traded, and that’s distinct from a retail market. And you have a number of states, not named Texas, that are part of these multistate wholesale markets. Shapiro is trying to establish Pennsylvania as a hub of the data center economy and is worried that if Pennsylvania is part of PJM then the state does not have control over the supply of energy to match the requirements of data centers. And so this is an area where there’s a bipartisan push to go all in on letting industry tell them how the AI sector should be regulated.

Matt Grossmann: So on the one hand you were able to study kind of the beginning of this policy area and that might be related to some of your other work on policy diffusion, but on the other hand we haven’t necessarily seen the results of all of that yet. But what are the early signs of sort of who is a policy entrepreneur here? Are there networks by which policy might be moving from a few early adopter states to later adopter states and kind of what’s driving this process?

Chinnu Parinandi: The interesting thing with the paper that I’m talking about here is our lag variables. In social science we use spatial lag variables to identify evidence of diffusion. It’s the best available solution, but it’s not a great solution, because it’s possible that actors could be working at the same time on something and adopt it simultaneously, and we would code it in the lag, but that would not be correct. But the lags, the spatial lag variables, are not significant in this paper. So we didn’t find evidence of the classic forms of diffusion that exist in so many other laws and so many other areas of state politics. But that doesn’t mean that diffusion isn’t happening.
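For readers unfamiliar with the spatial lag variables Parinandi mentions, a minimal sketch follows. The states, neighbor lists, and adoption data here are invented for illustration, not from the paper; the basic idea is that each state-year observation gets a covariate measuring the share of its neighbors that had already adopted.

```python
# Sketch of a spatial lag variable as commonly used in diffusion studies:
# for each state-year, the share of neighboring states that had adopted
# an AI policy in the *previous* year. All data below is hypothetical.

neighbors = {
    "CO": ["WY", "NE", "KS", "OK", "NM", "AZ", "UT"],
    "UT": ["ID", "WY", "CO", "NM", "AZ", "NV"],
}

# adopted[year] = set of states with an AI policy on the books that year
adopted = {2021: {"UT"}, 2022: {"UT", "AZ"}}

def spatial_lag(state: str, year: int) -> float:
    """Share of `state`'s neighbors that had adopted by the prior year."""
    nbrs = neighbors[state]
    prior = adopted.get(year - 1, set())
    return sum(n in prior for n in nbrs) / len(nbrs)

print(round(spatial_lag("CO", 2022), 3))  # 1 of 7 neighbors (UT) -> 0.143
```

The simultaneity problem Parinandi describes is visible in the lag structure: two states adopting in the same year contribute nothing to each other's lag, and a state copying a neighbor within the same year would be miscoded.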

I think what’s happening in this area is that consumer advocacy groups have been the entrepreneurs in the states that adopted consumer protection-related laws. I think major industrial outfits, including your big tech players have been the entrepreneurs on the other side in terms of the non-consumer protection-oriented laws. They’re really the ones that are leading the discussion in terms of how states are trying to compete with each other to attract investment. And so I think you’ll start to see more classic diffusion in the years ahead, but we’re still not at the point where states are really paying attention to what other states are doing and acting on it in the form of passing copycat laws, etc.

Matt Grossmann: So we said the benefit is that you’re able to see the beginning of the process, but one downside is that your data end, I think, right at the public introduction of ChatGPT. Since then there might have been quite a bit of action, and I know the federal government just considered, in fact the House passed, a federal preemption of all state AI regulation, so there does seem to be a lot of activity since. So what can you tell us about what’s been happening since your paper, what might’ve been previewed by it, and what you think is new?

Chinnu Parinandi: So what’s happened since the paper, and I’ll segment it in two different phases. The first phase is since the end of the paper, before the federal government’s recent involvement, and the second segment is post federal government involvement. So in the era since the paper ended and the federal government’s involvement, you saw a lot of state-driven competition to lure AI-related investment less on the algorithm protection side other than the area of privacy with respect to deep fakes, with respect to protecting people in terms of if they’re doxed and things like that, AI-related applications that are malicious. There have been attempts to do that at the state level, but in terms of broader protections and areas that we might see them, like for example in a world where we get genetic testing and health insurance companies can discriminate based on if I have a higher likelihood of developing cancer based on the results of a genetic test and then I get denied insurance, you would expect maybe to see more regulation on that end, but there really hasn’t been that type of regulation.

So you’ve seen more state-level attempts to lure industry from big players in AI and data centers. What you’ve seen since the federal government’s involvement, which has really been driven by the Trump administration and Trump’s return to the White House, is a federal government attempt to really push the construction of an AI-driven economy led by industry. And so data centers, many of which are crypto-based data centers, the emergence of Northern Virginia as the data center capital of the United States. I think we’re also seeing the emergence of almost a state-driven capitalism mindset, where you have a group of industries and the White House trying to develop some type of national-level AI policy. That’s what I expect.

In terms of what it’s going to look like, most of my work is on energy and electricity, and this may be something where we get some clues from the early 20th century, when major investor-owned utilities actually lobbied state governments to take action with them as major players to write regulations for how electricity, which then was a newfound technology, should be used, how it should be billed, etc. So I think that we could be at the beginning of that kind of national-level policy, and maybe the creation of a national-level regulatory agency like FERC that manages these sorts of things, but I don’t know how far we are on that road yet.

Matt Grossmann: So the bill that would have preempted this regulation kind of has in mind an industry view that government doesn’t know enough to regulate yet that we have pretty low institutional capacity and expertise here. Were there signs of that in some of the early legislation bills that might not make much sense a year later or is there a coherent set of problems that people have identified that they’re ready to take a stab at regulating?

Chinnu Parinandi: So I’m going to say in this answer that both parts of your question actually have evidence. So there are a coherent set of, and I’ve mentioned a few of them that have popped up in legislation, and I should add that our paper looked at bills that were never adopted. And so there were bills proposed that looked at things like me being denied health insurance because of a genetic profile or being denied auto insurance because of sort of an address or other kinds of things that show up in an algorithm. So there are problems with respect to consumer protection that the legislation was trying to address.

I think that what happened though was that regulators operate better, as you know, across almost every single domain when they have information and when they’re able to project and see what the implications are of a lack of regulation or improper regulation, and when they’re able to know what the public wants and what the entities being regulated want. And I think the difficulty with AI that I have seen in this legislation both in my paper as well as other work on AI, is that it’s really difficult for regulators, and actually lawmakers across the aisle really, to know what the public wants because that’s who they ostensibly represent. It’s difficult for them to know what the public wants. They kind of know what the entities being regulated want but only so much. And so it’s difficult for them to develop a coherent regulatory regime having super incomplete information.

Matt Grossmann: So we are seeing quite a bit of activity, but it still seems divorced from some of the public conversation, which sometimes takes extremes like that we’re facing a possibility of human extinction or mass unemployment. So to what extent is this just being treated as a fairly normal policy problem versus some kind of massive risk? And if so, what explains that?

Chinnu Parinandi: I think it’s being treated as a fairly normal policy problem and I think it’s because the big players currently are really the Amazons of the world, the big AI-related outfits who see this as a profit opportunity and I think it is a profit opportunity for them. I think the issues about the social impacts, and I think this is a great comment that you’re making because we’re dealing with deaths of despair and conversations about the causes of that and whether greater economic dislocation can cause greater social human capital upheaval.

I think those conversations have been pushed to the side. I think that as we move from a world where traditionally economic dislocation has been a blue-collar problem and not a white-collar, upper middle class kind of problem, and I think as we move from a world where economic dislocation increasingly occurs in occupations that historically have been exempt from that dislocation, we’re going to start to have these conversations. And I think that because a lot of the people with the potential of being dislocated have greater political capital and greater economic means, I think that these sorts of conversations could happen sooner rather than later, although I’m loath to predict when they’re going to happen.

Matt Grossmann: So, as you mentioned, you study other areas of policy that also have a lot to do with technology, like energy and health policy. What do you see as different and similar here in the ways that policymakers are approaching this, and the way that they deal with what they don’t know as well as what they know?

Chinnu Parinandi: I think one of the biggest differences is the degree to which there’s a lack of a strong regulatory environment in AI. And what I mean by that is that even in the early days of energy, for example, like in the early 20th century, there were early attempts by individual municipalities for example, to regulate the behavior of firms. And that led obviously to some undesirable outcomes where elected politicians would basically try to lower electricity prices to pander to voters and then firms couldn’t make capital improvements on infrastructure, and that led to suboptimal provision.

And so then you had firms approach state governments and then you had the birth of state public utilities commissions. So in the energy area you had very early on regulation so that industry kind of knew the bounds and parameters with which it had to work with the public. With health obviously because of various restrictions through law on how health information can be shared, to what extent health information can be shared because of the political salience of health and healthcare, that’s always been an area with much greater regulation.

Matt Grossmann: This is also obviously a fast-moving area where we might expect policy to be responding to current or past problems, but new versions and new uses are arising too quickly for policy to keep up. So are there signs of that so far, and what are the historical parallels that might be useful?

Chinnu Parinandi: So I think this is an area where technology is moving much more quickly than policy, so I think that there are historical parallels. The historical parallels are really from actually renewable energy, especially 25 years ago when there was an explosion in the growth of renewable energy technology globally and also in the United States. What happened in that case was that you had things like microturbines and offshore wind and a dramatic increase in… These ideas had been around for a long time, but the technology finally caught up where you could implement the idea. And so what you had there was that government recognized the emergence of these technologies and worked with firms and also investor-owned utilities that you would think would be opposed. But I think there was a recognition that you had to have the major players on board and give them a seat at the renewable energy table.

And there were regulations written in the form of RPS, renewable portfolio standards, that allowed IOUs to have a future in renewables and not basically lose their entire operating model as fossil fuels went down as a share of total energy production. And I think ultimately in that sector, in that domain, you eventually had an ebbing: technology is still increasing, but policymakers knew more over time and caught up in a way with the technological increase. I think that’s a model that could work here. One question I have about AI, and this is actually an open question, is that it’s much more concentrated in terms of the number of industry players, a much more concentrated and smaller number of players than even renewable energy. And that presents benefits and challenges. One of the benefits is that theoretically coordination should be easy, but one of the challenges is that if all of the players are on the same page, it’s going to be hard to convince them to do something they don’t want to do.

Matt Grossmann: If you had to I guess make the best case for what states could be doing here, what is the sort of best case scenario for there really will be laboratories of innovation here, there’ll be different models that we need to try, and then make the contrary case that we’re likely to not be able to take advantage of state differences?

Chinnu Parinandi: I think that one area where you could see laboratories of innovation is that there still is tremendous economic diversity across the states. So, Michigan for example, is still a big manufacturing player and this goes at all levels of manufacturing from advanced design. The example of EVs and autonomous vehicles is that, well, Silicon Valley knows how to do the programming, but Detroit still knows how to implement the programming on a vehicle. To the extent that AI is becoming interwoven in every industry, I expect to see state variation in terms of how AI is being implemented based on each state’s economic profile.

So in Colorado for example, we’re doing lots of projections about how weather volatility or weather variability is changing the risk calculation of ski resorts and how ski resorts can sort of be all weather resorts. And so I could imagine in Colorado that AI, and the integration of AI, would sort of help in terms of developing what those simulations look like and help in terms of targeting tourists, which is a huge part of Colorado’s economy, to these all weather resorts. Whereas in Michigan, I’d expect some of the more advanced manufacturing applications to be reflected in policy. And so I see some of the benefits being state-level sort of specialization, developing much more robust AI policy regimes as a whole.

I think some of the dangers are what I talked about, which is the case of you need a federal response when states are competing with each other and then you’re having forever chemicals dumped across state lines and things like that. And so having a federal response to manage the excesses of state competition is something that I think would be desirable in this area.

Matt Grossmann: So this is obviously a new area of research, what should we be doing that we aren’t already doing? And is there anything from your history of seeing how energy policy has been looked at in the political sphere that might be useful in thinking about what mistakes we might make early on in considering a new policy area and how we should research it going forward?

Chinnu Parinandi: Great question. So I’m going to give you the kind of political science answer in terms of what scholars should be doing and people that are researching this kind of stuff. We’re not trained as engineers and we’re not trained in some of the more technical aspects of dealing with these areas, but I think it’s very important for us to make a good faith attempt to understand the technical aspects of the policies that we study. One of the issues is that if you fail to understand those and you lump different policies together, then it’s very easy to lose track of what the regulators are dealing with. When you study something like regulation, regulators are put in a really difficult spot because they have to manage industry and public expectations. They have to manage different principals, like Congress and the president for example, and they also have to deal with highly technical domains where the people with the information advantage are the entities being regulated.

And so I think one of the difficult things for observers and analysts is that if you don’t understand the technical components, then it’s easy to get lost in terms of figuring out where the regulatory game is going. And so that’s the advice that I give to political scientists generally that are getting interested in this. The other piece of advice is a follow-up, which is that because of the integration of the economy, so I’ll take energy because that’s the area that I do the most work on. So forever in energy, we used to pretend, and we didn’t totally pretend, it was kind of true, there used to be a transportation sector and there used to be an electricity sector and they were different. So the electricity sector was when you went and flipped your switch on in your house, transportation sector was when you drove your car to work to the grocery store.

What’s happened with electric vehicles is that those two are integrated now. So the transportation sector and the residential sector to some extent are blurring and that’s creating all sorts of challenges for both sectors in terms of how they’re set up structurally and how they’re set up in terms of regulation. I think here, for those political scientists and social scientists studying things like AI and things like political economy, sectors are converging and blurring together. And so it’s really important as a field that we understand that we have to learn about more than one sector, we have to learn about the intersection of different sectors because regulators are no longer working in a static world where they’re regulating one sector at a time. They’re in a dynamic world where they’re moving between different sectors simultaneously.

Matt Grossmann: And anything we didn’t get to that you wanted to include or anything you want to tout about what’s next?

Chinnu Parinandi: Yeah, I’m really excited to say what’s next. So I’m doing some work where I’m surveying legislators about their priorities and their impressions of what the public wants with respect to AI in a handful of states. This is a foray into qualitative, Richard Fenno-style interviewing, which I’ve never done before and I’m very excited to do because I want to get a lot of great open-ended stuff. And so I’m going to report on that in hopefully the months ahead. And then a further downstream piece of work is looking at the political economy of data center location in the US states, as states are competing with each other with respect to where to site data centers, and looking at how the resources of states in terms of water, energy mixes, and cooling ability are influencing the siting. So those are the two things in the future that I’m really excited about.

Matt Grossmann: So there’s a lot of action in the states. Where does federal policy fit in? I asked Heonuk Ha about his analysis of bills on AI in Congress. So tell us about the findings from your new article, When AI Meets AI.

Heonuk Ha: So I think the motivation for this research really comes from the fact that AI, like many other emerging technologies, brings both tremendous opportunities and serious challenges. The US Congress is the central player in shaping policy, and one way we can understand its interests and priorities is just by looking at the bills the members proposed. In the lawmaking literature, technology policy has often been seen as less partisan and ideologically less divided. People have written papers about other policy areas like public health, education, or defense, but I didn’t see much literature dealing with AI or emerging technology.

So, I mean, that was my motivation. My goal here was to see what aspects lawmakers are focusing on, and I wanted to have a clear picture of how congressional action can shape the direction of AI policy in the years ahead. That was the motivation for this project. And overall, the main findings and takeaways are that I found both differences and overlap in how Congress is approaching AI bills. The House and Senate share some core priorities, but the House tends to be more positive in tone, while the Senate is a bit more cautious. And Democrats and Republicans, along party lines, often emphasize different aspects of AI in the topic analysis.

Democrats, I think, lean more toward public welfare issues, while Republicans lean more toward security and risk. But both recognize AI is important, and the Democratic Party is more positive than the Republican Party in terms of the ratio. And even with the differences, I think there is bipartisan agreement that AI needs some level of regulation, especially when it comes to data privacy and misuse. So that’s my motivation and my findings and takeaways.

Matt Grossmann: So without getting too far into the weeds, walk us through the methods a little bit. How did you identify these bills and divide them based on topic and sentiment, and what are kind of the strengths of that approach, and any weaknesses?

Heonuk Ha: So, I mean, let’s begin in a simple way. When people think of AI, they may picture smart machines: tools that process information, learn from data, and make predictions or decisions. But AI policy is about more than just the technology itself; it’s about the rules, incentives, and guidelines that shape how AI is developed and used. These policies can take many different forms: laws, executive orders, agency regulations, and official guidelines. For defining and categorizing AI bills, I mainly followed criteria from the Brennan Center for Justice, and I used their data set along with congress.gov. Their guidelines include things like setting limits on AI use, requiring evaluations, adding transparency rules, creating oversight rules, protecting consumers, setting data rules, and directing agencies to use or study AI.

So this is how I selected the 204 AI bills between 2021 and ’24. To find the main themes I used machine learning, which is why the title is When AI Meets AI. I used a machine learning technique for topic modeling, LDA, or latent Dirichlet allocation. Just think of it as sorting a big stack of bills into piles based on the words they use. Each pile, each topic, has its own signature words. This gave me five main themes. I’m going to go into more detail on each theme later, but briefly: technology and innovation, government operations, content regulation, research and education, and data governance and healthcare. Those are the five big themes the machine selected. For sentiment, I used a dictionary where words are labeled positive or negative to see the overall tone of each bill, and then I aggregated it at the bill level.

So I guess positive sentiment means that a bill frames AI as an opportunity, something that can help our society, while negative sentiment reflects more of the concerns: risks, harms, or even ethical issues. And neutral bills are in between, recognizing both sides. I think the one strength of my approach is that I could analyze a large volume of bills efficiently. I dealt with 204 bills. By using the short summaries, usually 50 to 100 words each, I did the data pre-processing work, so I could process and compare many pieces of legislation at once, and machine learning made it possible to spot patterns and themes across the entire set.

I think that’s a very strong point of my paper, but there are some limitations, of course. Because I used the summaries instead of the full bills, I might miss some important details or framing; that’s probably one limitation. Topic modeling can also be subjective: the machine generates the set of words for each topic, but the label and interpretation are up to me, so subjectivity can be an issue. And sentiment analysis can miss complicated or subtle context by focusing only on positive or negative words. However, even with these challenges, I see an increasing number of papers using these methods, and I hope future studies continue to improve on them.

Matt Grossmann: So we’re studying a relatively short period here, but quite a bit has happened in that period and you saw a surge in AI bills in the most recent period. So what is the big trend over time, and was it like the introduction, the public introduction of ChatGPT that might’ve driven this recent surge?

Heonuk Ha: So I first tried to find AI bills prior to 2021, but there were not that many. I think because of ChatGPT in 2022, the numbers surged a lot. My paper starts with brief descriptives about the number of bills proposed each year. Between 2021 and ’24, 204 AI bills were proposed: 113 came from the House and 91 from the Senate, and roughly 70% were sponsored by Democrats and only 30% by Republicans. In terms of geography, California led with 30 bills, but AI bills are coming from all across the country; even legislators from small states proposed a high number of AI bills.

Matt Grossmann: So you mentioned that you found Democrats were more interested in this and their bills were more positive about AI. That may surprise some people, but you also said that there’s quite a few similarities across the parties. So what would be the signs that we are starting to see some polarization around this issue versus just kind of different interests?

Heonuk Ha: So I found some interesting patterns with respect to the chambers and party lines. In terms of the chambers, both the House and Senate focus on topics like government operations, content regulation, and technology and innovation. These were really big topics for both chambers, but the House tends to frame AI more positively than the Senate. The Senate was more cautious and balanced. That might be because senators have longer terms and broader constituencies and often take a slower, more deliberative approach, so that’s probably why there is a difference in tone between the two chambers.

When it comes to the parties, this is a little more interesting to me. Democrats showed more interest in public welfare topics like education and human-centered AI; Republicans leaned more toward national security, defense, and risk mitigation. Democrats also tended to be more positive overall. The previous literature on other policy areas has found that the Democratic Party favors regulation, but in the case of AI, many Democratic members are from urban constituencies, and when you think about the groups of people they support, the pattern makes sense. That’s probably why the Democratic Party is more positive than the Republican Party, focusing on the benefits and opportunities.

Matt Grossmann: So let’s talk a little bit about more of the substance of some of these bills. What are they actually trying to do? What do these categories entail in terms of what the bills would actually kind of pull off? And are these just sort of starting point bills, things like let’s have someone study something versus here’s a formal regulation that we’re now ready to propose.

Heonuk Ha: Before I talk about this: with this topic modeling method, I can choose how many topics I want to dig into, and I chose five following the previous research. Too many topics gets too fragmented, and too few is too broad and not parsimonious, so I tried a couple of different numbers, but in my model, five worked really well. So I want to go over each of the topics and what it means. First is technology and innovation, which of course we can expect. These bills are about pushing the boundaries of what AI can do: think of things like setting national standards, funding big technology programs, and making sure the US stays ahead in innovation, because it’s directly tied to a big economic market.

Second is government operations and content regulation. This is about how federal agencies use AI and how we manage AI-generated content, because there are side effects like disinformation and deepfakes. It’s a mix of improving government efficiency while also keeping an eye on what’s created and shared. The third one is research and education, which supports AI R&D, trains the next generation of workers, and keeps the country safe; it’s about building the knowledge base.

The fourth one is data and digital governance. With AI, a big concern is data privacy, and this topic addresses that issue: privacy, digital platforms, and data protection. It focuses on how AI handles personal data, how algorithms are held accountable, and how oversight bodies can keep things in check. And the fifth one is healthcare and human services, which is really about the way AI can be used directly: it covers AI’s role in improving health and social services, making healthcare more efficient, personalizing treatment, and ensuring that AI benefits people directly and maybe more equally. Together, these five themes show that Congress is looking at AI from multiple angles: innovation, governance, education, data, and public welfare.

Matt Grossmann: So you’re also doing some work on comparing bureaucratic influence in AI policy and analyzing the quality of AI governance across nations. So what are you finding so far there?

Heonuk Ha: So thank you for bringing up my other project. That’s my AI policy diffusion study, covering countries’ AI adoption between 2017 and 2023. My interest in policy diffusion actually goes back to my very first American politics class with Professor Chuck Shipan, and it has been on my mind ever since. In this study I used a data set from the Stanford AI Index, which tracks whether each country adopted an AI initiative in each year. AI laws are the product of complicated calculations and dynamics among political actors, but creating strategic AI initiatives is largely bureaucratic work and responsibility, so I used a Cox proportional hazards model to examine how agency professionalism, political interference, and impartiality influence the timing of AI policy adoption, using bureaucracy data from the Freedom data set.

The results were really interesting: professionalism and impartiality both have a positive association with the likelihood of AI adoption, while political interference has a negative association. I’m also very interested in the societal effects of AI adoption, especially in areas like government effectiveness, e-government, and e-participation; these are really big topics in public administration. In a separate project with Brian Min at Wayne State University, we look at how AI adoption affects these three areas, and we found positive associations across the board. Both papers are currently under review, so fingers crossed.

Matt Grossmann: So you’ve studied some legislative proposals and some executive branch policies. It sounds like the executive branch is much more comprehensive. Are there any other differences that you think are coming to the fore in the bills versus the executive branch policy?

Heonuk Ha: So legislators and the executive branch approach AI policy from slightly different angles, and those differences really shape the policy landscape. Congress tends to be slower, more deliberative, and more process-driven, which gives a chance for broader debate about AI’s risks and benefits. Bills often reflect negotiation and compromise, so there are more veto points and potential delays. That’s why legislation is harder to pass, but it has stronger binding power.

On the other hand, the executive branch can move faster. Presidents and federal agencies can use tools like executive orders, agency rules, or guidance documents to set priorities and create standards, so they can sometimes jump in on emerging issues before Congress has a chance to act.

In today’s polarized political climate, presidents are more likely to rely on executive orders, which can give them a kind of first-mover advantage even in shaping AI policy. So executive action can be more targeted and flexible, adjusting to new developments as they happen. In the best case, these two approaches work together: Congress setting the broad, long-term framework and the executive branch filling in the details and reacting in real time. But given the current political situation, they can also clash when political priorities differ.

Matt Grossmann: So we also often have a challenge in policymaking where we’re fighting the last war or we’re trying to regulate a problem that developed in recent years. This seems to be an issue area where we’ll face that more and more because the technology is evolving so quickly. So how much did you see that in your research so far that we are regulating or trying to take advantage of the technologies that are no longer evident by the time we might get around to actually passing policy?

Heonuk Ha: I think this is a real risk, yes. AI technology is evolving so fast, especially over the past five years, that it’s really hard to catch problems in time, and designing and implementing the right policy is challenging because it’s a very complex process with so many voices involved. But it’s not completely hopeless. Sometimes, actually quite often, technology solves its own problems and the market solves its own problems. The AI market is quite competitive now, and consumers are very sensitive to quality.

Issues like hallucinations or high error rates often get fixed in the next generation of models, or by other smart people coming up with better solutions and ideas. And we’re not starting from zero: there are existing laws and policies covering related issues like data protection, antitrust and monopoly regulation, consumer protection, and product liability. Those frameworks can be applied to AI; we just need to tailor them with more focus on AI. So it’s definitely challenging and risky, but I think we have some ways and measures.

Matt Grossmann: And what about for policy makers? Are there any practical lessons so far for advocacy groups or legislators or administrators that are working in this area?

Heonuk Ha: Yeah. This is fairly brief research, so I’m not sure it supports really big implications, but one takeaway is that AI policy is not a single-issue space. The topic model shows that congressional attention is spread across multiple domains, which means legislators and advocacy groups need to be clear about which dimension they’re trying to influence, because the stakeholders, committees, and policy tools differ for each. Another critical takeaway is that the passage rate, the success rate for AI bills, is very low, so legislators may need to be more strategic in building coalitions.

Because so many proposals cluster around similar topics, consolidating efforts into well-crafted, bipartisan bills can actually increase the chance of success, and in the legislative process, success is what matters. Another lesson is about tone and framing. My sentiment analysis suggests that bills frame AI more positively than negatively, so Congress also sees real opportunity in AI, and I think that can attract broader bipartisan support for AI use across areas. For advocacy groups, this means they can identify shared priorities and frame them in an inclusive way, helping connect legislators across party and chamber lines.

Matt Grossmann: Anything else we didn’t get to that you wanted to include or anything you want to tout about what’s next?

Heonuk Ha: So I have many research ideas on AI, but I want to do them one by one; that’s the recommendation from my advisor. Political science and public administration offer a wealth of well-developed theory, and I see tremendous opportunities to apply it in the AI policy space. I think it’s still a blue ocean. First of all, as I mentioned before, I’m working to analyze executive orders and agency rules on AI in the US context; that’s going to be a paper. I also want to highlight three ongoing projects. First, at the global level of AI research, measurement is a really interesting and developing issue. There are so many different aspects of AI governance, and we want to measure how well each country is doing on each part.

Thankfully, several institutes and scholars have developed various measurements and indicators on aspects of AI governance like regulation, ethics, innovation, and infrastructure. So I’m studying global AI policy indicators and measurements with Dan Schiff at Purdue, examining how countries compare across different dimensions of AI readiness. While organizations like Stanford have developed important metrics, their measures differ a lot in method, the years they cover, and the countries they cover. Our work synthesizes these measures into composite scores using factor analysis, so we can provide a more comprehensive and comparable picture of global AI readiness that can be used in later global AI research.
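To give a flavor of what combining heterogeneous country indicators into a composite score involves, here is a minimal sketch. The project Heonuk describes uses a multivariate method; this simpler version just standardizes each indicator and averages, and the indicator names and numbers are entirely made up.

```python
from statistics import mean, pstdev

# Hypothetical country scores on three AI-governance indicators,
# each reported on its own arbitrary scale.
indicators = {
    "regulation":     {"A": 7.0,  "B": 5.0,  "C": 3.0},
    "ethics":         {"A": 60.0, "B": 80.0, "C": 40.0},
    "infrastructure": {"A": 0.9,  "B": 0.5,  "C": 0.4},
}

def composite_scores(indicators):
    """Z-score each indicator across countries, then average per country."""
    countries = next(iter(indicators.values())).keys()
    z = {c: [] for c in countries}
    for scores in indicators.values():
        mu, sd = mean(scores.values()), pstdev(scores.values())
        for c, v in scores.items():
            z[c].append((v - mu) / sd)  # put all indicators on one scale
    return {c: mean(vals) for c, vals in z.items()}

scores = composite_scores(indicators)
print(sorted(scores, key=scores.get, reverse=True))  # ranking, best first
```

Standardizing first is what lets a 0-to-1 infrastructure index and a 0-to-100 ethics index contribute comparably, which is the core difficulty Heonuk points to when measures differ in method and coverage.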

Also, in collaboration with the Detroit Metro Area Communities Study at the University of Michigan, we’re conducting a survey to understand Detroiters’ perceptions of AI use in the city. Surveying citizens is a big part of AI research, so we explore which values they prioritize, like equality, accountability, or other key values, their openness to adopting AI in different policy domains, and their concerns about privacy. So this is the survey work I’m doing with DMACS in Michigan.

And third, this is a long-term, local-level project with Kaylyn Schiff at Purdue and some graduate students: we’re building a data set on cities’ use of AI for the top 100 cities over the past six or seven years. Surprisingly, not only big cities but also small cities have adopted various types of AI in the past couple of years, so we are collecting a data set of which city adopted which use and policy in which year. We also collect data on city CIOs and CFOs, their professionalism, education, and previous work experience, because we want to analyze how city leadership capacity shapes AI adoption. It’s a long-term project, and there is some qualitative research on cities’ use of AI, but comparing 100 cities can be really interesting. It’s a fun area to study and I’m still learning a lot every day.

Matt Grossmann: There’s a lot more to learn. The Science of Politics is available bi-weekly from the Niskanen Center, and I’m your host, Matt Grossmann. If you like this discussion, here are the episodes you should check out next. They’re all linked on our website. When and Where Can Climate Policy Succeed? How Bureaucrats Make Good Policy? Do Democrats and Republicans Get Different Results? Do Congressional Committees Still Make Policy? And Policymakers Follow Informed Expertise. Thanks to Chinnu Parinandi and Heonuk Ha for joining me. Please check out Investigating the Politics and Content of US State Artificial Intelligence Legislation, and When AI Meets AI, and then listen in next time.