It should probably come as no surprise that artificial intelligence was absent from the first presidential debate.

AI hasn’t made much of a splash in this election cycle, and the subject matter is notoriously confusing. Even so, given some of the headline-grabbing developments in autonomous vehicle technology, one might expect a greater focus on this issue — especially as ongoing advancements herald significant changes to American life.

Uber is now deploying autonomous vehicles on the roads of Pittsburgh. Tesla’s Autopilot has been implicated in a recent spate of roadway deaths. Prosecutors are now using algorithms to assess the likelihood of recidivism for inmates up for parole.

Even leaving aside near- and medium-term watershed developments like autonomous vehicles, artificial intelligence technology is already all around us. Siri, Cortana, Amazon Echo, Google search and a host of other narrowly tailored AIs are part of daily life.

As this technology continues advancing, privacy, consumer protection, safety and ethical considerations will become more pressing areas of focus for lawmakers and regulators. The calls for “something to be done” will only grow stronger as the social implications of AI come into the limelight.

As Sam Harris noted in a recent TED Talk, one solution for addressing these problems may be an AI “Manhattan Project” that seeks “to understand how to avoid an arms race and to build [the technology] in such a way that it aligns with our vision.”

Luckily, various firms and nonprofits are already starting to take the ethical and safety concerns associated with AI seriously — from Elon Musk’s OpenAI initiative to the recently announced Partnership on AI to Benefit People and Society.

But these developments are merely harbingers of even more radical changes on the horizon. Government responses could run the gamut from benign statements of administrative policy to, as some have argued, the creation of new federal regulatory agencies to manage this technology.

For entrepreneurs preparing to make big investments in AI development, there are a number of broad issues to take note of.

First, be aware of ongoing discussions surrounding issues such as “consumer harm” and “consumer protection.”

The Federal Trade Commission, for example, embraces a discretionary model of regulating issues related to consumer protection.

However, what precisely constitutes a “harm” to consumers remains unclear, governed largely by the agency’s statutory authority to police “unfair or deceptive acts or practices.” If an algorithm fails to deliver on its promise to consumers, one could easily imagine a startup facing fines or enforcement action over privacy or user-agreement violations.

Second, pre-market approval requirements for deploying AI, while not yet written into law, may not be far off.

For example, the National Highway Traffic Safety Administration recently released its guidelines on the development and deployment of autonomous vehicle technology. Although NHTSA’s mandate is limited to regulating safety on American roadways, the agency has proposed additional authorities that would grant it pre-market approval of AV technology, as well as post-sale regulation of software changes.

That means any AV technology — that is, the AI software that automates the vehicle — would likely have to undergo an NHTSA-approved vetting process before being deployed. Even after the vehicles hit the roadways, their software updates would be subject to close “surveillance.”

Finally, startups should keep an eye on the ongoing workshops being sponsored by the Office of Science and Technology Policy.

While it is unlikely that any administration policy stemming from OSTP fora will culminate in an executive action or order on AI, policy recommendations that mirror the NHTSA guidelines could serve as a stalking horse for regulatory action by the next administration.

In short, if you’re an innovator working in AI research or development, don’t sweat the regulatory landscape too much right now. For the time being, the most likely source of concern would be an FTC enforcement action, but even then consumer “harm” from AI is a very fuzzy issue.

It’s notoriously difficult to assess the “black box” operations of algorithms, but existing regulatory authorities are likely sufficient to deal with any consumer protection issues that arise in the near future.

Nonetheless, when designing new AI systems and building businesses to house those innovations, entrepreneurs should be aware of the potential regulatory fault lines that litter this nascent ecosystem.

For now, permission to innovate doesn’t require prior approval. Whether the next administration pursues more heavy-handed regulation of AI is still up in the air.

Op-ed by Ryan Hagemann; originally published in 1776