In an age of hyper-partisan politics, one thing most people seem to agree on is that artificial intelligence (AI) needs to be regulated. A recent Morning Consult poll revealed that a significant majority of Americans believe there should be both national and international regulations on the research and development of AI.

In a survey of 2,200 American adults, 73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents. …

Respondents to the poll also largely backed the idea of international regulations on AI, with 69 percent of Democrats and 71 percent of Republicans supporting the idea. A smaller majority (62 percent) of independents said they believe international rules should be implemented governing the technology.

There’s probably a lot baked into these results. Autonomous vehicles, privacy invasions, and labor dislocation are likely the leading concerns among those polled, but it would be interesting, and illuminating, to understand precisely what it is about AI that is driving people’s fears. Part of the problem might be the inherently inscrutable nature of AI decision-making. As Will Knight pointed out in a recent article for MIT Technology Review, “[t]his raises mind-boggling questions.”

As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

Perhaps part of the concern identified in the Morning Consult poll is what Knight identifies here. Because, as Knight points out, “machine-learning technology is inherently more opaque, even to computer scientists, than a hand-coded system,” it’s difficult to afford machines the same level of trust we grant our fellow humans. While there are certainly avenues for calming these and other emerging fears of advanced AI (such as getting algorithms to attempt to explain the reasoning behind their decisions), for the time being we need to recognize that we’re in the midst of rising public anxiety over AI developments.

The fears over AI, while still at a very early stage, are an expected feature of the “techno-panics” associated with emerging technologies. As described in a paper by Daniel Castro and Alan McQuinn of the Information Technology and Innovation Foundation, these panics are part of a broader “privacy panic cycle.” These cycles, they argue, result from people’s tendency to fear the unknown, especially rapid and sudden technological changes that threaten to upend the status quo.

The privacy panic cycle describes the recurring pattern of privacy fears that appear following the introduction of a new technology. It charts how perceived privacy fears about a technology grow rapidly at the beginning, but eventually decline over time.

The cycle is composed of four stages: trusted beginnings, rising panic, deflating fears, and moving on. So where are we with fears over AI? Given the cross-ideological concerns, Mercatus senior research fellow Adam Thierer placed us, in the accompanying graphic, squarely in the “rising panic” stage of the current AI hysteria.

[Graphic: Adam Thierer’s privacy panic cycle applied to AI, showing the current “rising panic” stage]

Unfortunately, we haven’t yet reached peak AI hysteria. That boiling-over point, however, is probably coming sooner than many people expect, and legislators, regulators, researchers, and the techno-optimists who tout this technology’s benefits should be prepared. Although the anti-AI frenzy will inevitably pass, there will be some bumps along the way. To ease the transition through this phase of the panic cycle, legislators and regulators can start by simply avoiding apocalyptic, doomsaying language when discussing the technology. They should also refrain from reflexively responding to these passing concerns with overly burdensome rules and restrictions. We’re at a key point in AI’s development, and avoiding prescriptive mandates and technology-specific regulations is more important than ever, especially amid the public’s creeping fears.

As people become acclimated to the many benefits AI is poised to offer, they’ll incorporate it into daily life more and more. As with most technologies, once AI is fully assimilated into our lives, the how-could-we-ever-have-done-without-it mentality will sink in, just as it did with the camera, RFID chips, and the smartphone. Until then, expect plenty of wariness and uncertainty from across the ideological divide.