Whether you realize it or not, we’re already speaking to artificial intelligence (AI). Siri and Cortana turn our mobile devices into virtual assistants, and Amazon Echo serves as a home-based gateway to online music streaming, Wikipedia inquiries, online shopping, and more. AI is already all around us; we just stop thinking of it as AI once it becomes part of daily life. Now the trick is getting AI to speak to us.

Understanding the particular decisions algorithms make remains difficult; AI is still something of a “black box” problem for researchers. Inputs and outputs are readily observable, but the operations of the algorithms themselves are far harder to discern. This is particularly true of AI research built on neural networks: artificial structures that attempt to mimic some basic functions of the human brain through a complex web of “neurons,” each of which activates upon recognizing a defined input. Although some disagree, the current reality is that these systems are so complex that their decision-making is largely inscrutable, even to the researchers who study AI. (This video from Google is a great high-level presentation of some of the important highlights in AI, from the uncertainty around ideal approaches to research and development to how AI is already all around us.)
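To make the black-box point concrete, here is a minimal sketch in Python (the network, weights, and inputs are illustrative toys, not any production system): a tiny feed-forward network whose inputs and outputs are trivially observable, while the weight matrices doing the actual work carry no human-readable meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small weight matrices stand in for a trained network. The model's
# "reasoning" lives entirely in these numbers, which have no
# human-readable meaning on their own.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def predict(x):
    hidden = np.maximum(0.0, x @ W1)                 # each hidden "neuron" activates (ReLU)
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)[0]))  # squash to a 0-1 "decision"

x = np.array([0.2, -1.3, 0.7, 0.05])
print("input: ", x)           # easy to observe
print("output:", predict(x))  # easy to observe
print("W1 row:", W1[0])       # inscrutable: weights, not reasons
```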

But maybe the key to solving this dilemma doesn’t lie in algorithmic transparency, as some have suggested. Instead, if we can get AI to explain why a particular decision was made, rather than merely observe what the decisions were, we can start to get a better glimpse into how to improve algorithmic functionality.
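As a toy illustration of eliciting a “why” from an otherwise opaque model, the sketch below probes a black-box function by nudging each input and watching how the output moves. This is a crude sensitivity analysis under assumed names, not any specific explainability technique or product:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def black_box(x):
    """Stand-in for an opaque model: we see inputs and outputs, nothing else."""
    hidden = np.maximum(0.0, x @ W1)
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)[0]))

def explain(x, eps=1e-3):
    """Ask a crude 'why': how much does the output move when each input wiggles?"""
    base = black_box(x)
    influences = {}
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += eps
        influences[f"feature_{i}"] = (black_box(nudged) - base) / eps
    return influences

x = np.array([0.2, -1.3, 0.7, 0.05])
print("decision:", black_box(x))
for name, influence in explain(x).items():
    print(f"{name} pushed the decision by {influence:+.4f} per unit")
```

Even this crude probe turns a bare output into a rough account of which inputs mattered, which is the kind of feedback loop the analogy above points toward.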

Consider the example of a small child wreaking havoc at home. A time-out or a strongly worded scolding can halt the rampage, but over time the behavior is unlikely to subside if the underlying cause of the temper tantrums goes unaddressed. To identify that cause, the parent needs to understand why the child is upset, and the most effective way to do that, and thereby apply the most appropriate corrective action, is to speak with the child about his or her impetus for raining ruination upon the home.

Just as speaking to the child can reveal motivations, so too can speaking to AI help diagnose and treat undesirable outcomes. Language, after all, is already at the heart of AI development. This type of feedback can have immense benefits for improving AI, while allaying the public’s fears over potentially improper decision-making.

Communicating with AI won’t be confined to backend developers capable of parsing code and structuring complex decision matrices. Companies like Drive.ai are already looking into ways of communicating the intended actions of driverless cars to pedestrians and non-autonomous motorists. How they accomplish that task (which is more about projecting the appropriate information in a digestible format than detailing the reasons behind a particular action) is of paramount importance for easing the minds of people interacting with robots and autonomous systems. The difficulty lies in creating a shared language that comes across as intuitive and easy to understand, a difficulty that will likely be compounded if regulators heed calls for algorithmic transparency in a knee-jerk technopanic.
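At its simplest, that kind of intent signaling amounts to a mapping from internal planner states to plain-language messages on an external display. The sketch below is hypothetical (the state names and phrasings are mine, not Drive.ai’s actual interface):

```python
from enum import Enum, auto

class Intent(Enum):
    YIELDING = auto()
    WAITING_FOR_PEDESTRIAN = auto()
    PROCEEDING = auto()
    PULLING_OVER = auto()

# Hypothetical mapping from internal planner states to the plain-language
# signals a pedestrian might read on an external display.
MESSAGES = {
    Intent.YIELDING: "Slowing down -- you may cross.",
    Intent.WAITING_FOR_PEDESTRIAN: "Waiting for you to cross.",
    Intent.PROCEEDING: "About to move -- please wait.",
    Intent.PULLING_OVER: "Pulling over to the curb.",
}

def display_message(intent: Intent) -> str:
    # Fail toward caution: show a safe default rather than nothing.
    return MESSAGES.get(intent, "Please use caution.")

print(display_message(Intent.WAITING_FOR_PEDESTRIAN))
```

The hard part isn’t the mapping itself but choosing states and phrasings that people across languages and contexts read the same way.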

As such, regulators should avoid preemptive regulation aimed at unsubstantiated harm scenarios, and agencies should forbear from meddling too intrusively in these early stages of this technological revolution. A recent report from Stanford University concurs, arguing:

Rather than “more” or “stricter” regulation, policies should be designed to encourage helpful innovation, generate and transfer expertise, and foster broad corporate and civic responsibility for addressing critical societal issues raised by these technologies. In the long term, AI will enable new wealth creation that will require social debate on how the economic fruits of AI technologies should be shared.

I tend to agree. In fact, many of the policy recommendations offered by the Stanford report closely echo the arguments I made in a recent comment filing on AI to the Office of Science and Technology Policy. The short version: (1) we shouldn’t speak of AI in hyperbolic, doomsday terms, and (2) regulators should embrace restraint. It’s also worth bearing in mind that many of the problems that materialize down the road may be better addressed by industry. Indeed, the tech sector is already taking potential concerns to heart: some of the biggest companies in this space, including IBM and Alphabet, are coming together to hash out a framework for a self-policing consortium that will address the thorny ethical concerns associated with AI development.

Between industry-led standards, companies getting out ahead of potential problems, a regulatory policy of forbearance, and continued technical developments that shed light on the technology’s decision-making, AI policy is well positioned to weather a multitude of potential future problems. Fearmongering over Skynet and utopian dreams of a post-scarcity society are the extreme positions in these debates. While I’d prefer something closer to the latter, the more likely reality is that we’ll muddle through a lot of meddlesome issues before actualizing some of the big benefits this technology has to offer.

Unlike Apple co-founder Steve Wozniak, I don’t think these developments mean “the future is scary and very bad for people.” When looking at AI’s potential to disrupt the world, it’s easy to see the downsides, and easier still to dismiss the potential for good entirely. It’s not the future that people find terrifying, but the unknown. Any steps we can take to reduce the uncertainty surrounding an AI’s decisions can go much further than simply advancing the underlying technology: they can help assuage some of the broader public fears that could otherwise drive bad knee-jerk public policies.