It looks like Congress is finally jumping into the autonomous vehicle (AV) game. A number of representatives recently announced plans to get to work on AV legislation. Unfortunately, as a recent Wired article noted: “Everything is racing ahead of a regulatory structure ill-equipped to usher in this change.” As a result, Congress has its work cut out for it in the coming months.

In order to effectively address the concerns associated with AVs, legislators need to consider three issues (among others) when crafting legislation: (1) liability, (2) privacy, and (3) federal regulations.

Rethinking Liability

Liability is one of the recurring issues with AVs. Since the “driver” is a computer, who is to blame when an AV fails to operate appropriately and crashes?

In a working paper titled “The Reasonable Computer: Disrupting the Paradigm of Tort Liability,” Ryan Abbott discusses the need to embrace negligence standards with regard to liability for automated systems. The reasons why negligence may be preferable to the current strict liability regime, he argues, are as follows:

First, it is both a consumer and business friendly solution. For consumers, while it would make it more difficult to recover for accidents, it would reduce the risk of accidents. Most consumers would probably prefer to avoid harm rather than to improve their odds of receiving compensation. For businesses, it would lower costs associated with liability (which may also result in lower consumer prices). Second, shifting to negligence would not require government funding, additional regulatory burdens on industry, or new administrative responsibilities. Third, it is an incremental solution that relies on existing mechanisms for distributing liability, and it builds upon the established common law. There may be less risk that shifting to negligence would produce unexpected outcomes than more radical solutions. Fourth, for the above reasons, shifting to negligence is a politically feasible solution.

He argues that once it is shown “by a preponderance of evidence that a computer tortfeasor is safer on average than a person” it makes sense to shift from a regime of strict liability to negligence. He notes:

The inquiry should also focus on whether automation reduces, or is expected to reduce, overall accidents, not whether it did in fact reduce accidents in a specific instance. Thus, if Tesla can prove that it is more likely than not its self-driving cars are safer than human drivers, this should be sufficient to shift to negligence even in a case where a particular substitution of a human driver with a self-driving car resulted in more accidents. Better that there should be fewer accidents in total even if one self-driving car ends up particularly accident prone.

While the issue of strict liability versus negligence gets tricky vis-à-vis automated operations, the key insight is the need for some objective measure of what constitutes a safe automaton. In general, I think he gets the sentiment correct here. And it would appear, according to a recent after-accident report conducted by NHTSA’s Office of Defects Investigation, that Tesla’s autosteering technology is in fact making the vehicle significantly safer. While the current level of accident reduction (40 percent, according to the report) may not be evidence enough to justify shifting to a negligence standard for AVs operating on the roadways, some baseline will undoubtedly be needed to establish what constitutes “safer than the average person” for an AV.

Here, Congress can set the tone. To do so, however, legislators will need to abstain from defaulting to the 1-fatality-per-100-million-miles yardstick of automotive safety. Under that formula, AVs would need to log many hundreds of millions of miles before they could be considered safe enough for American roadways. Alexander Hars, writing for Inventivio, suggests a better way:

we should not just use accident statistics to compare the reliability of human drivers and self-driving cars, we should compare their driving behavior on the frequency of entering safety-critical situations.

In fact, the issue of assessing what constitutes a “safe” vehicle is even more complex; so complex, Hars argues, that more elaborate models are needed to capture the many factors at play in setting AV safety benchmarks. He goes on:

Chance (or luck) may play a larger role in human driving than we like to think. That more accidents don’t happen might not be so much due to the great capabilities of human drivers but to the relatively low frequency of difficult, unexpected situations. … Thus we need much more refined models of driving risk.

Any legislation that hopes to ensure AV safety, smooth deployment and adoption, and market certainty will require establishing some baseline level of safety for AVs. That, as both Abbott and Hars discuss, is the necessary precondition for moving from the status quo to a more autonomous future. That baseline should not be predicated on the existing failure-rate estimation using vehicle miles traveled.
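To make the scale of the problem concrete, here is a rough back-of-envelope sketch. It is my own illustration, not DOT’s methodology or Hars’s model: it treats fatalities and safety-critical situations as rare events arriving at a constant per-mile rate, uses a 95 percent confidence threshold, and plugs in a purely hypothetical rate for safety-critical situations. The only point is the orders-of-magnitude gap between the two possible baselines.

```python
# Back-of-envelope sketch (illustrative assumptions only; not DOT's or Hars's
# actual methodology): how many incident-free miles must an AV fleet log
# before we can say, with 95 percent confidence, that it is at least as safe
# as a given benchmark rate? Treats rare events as a Poisson process.

import math

def miles_for_confidence(benchmark_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Miles that must be driven with zero observed events before an event
    rate as bad as the benchmark can be rejected at the given confidence
    (the zero-event Poisson bound)."""
    return -math.log(1 - confidence) / benchmark_rate_per_mile

# The fatality yardstick cited above: roughly 1 fatality per 100 million miles.
fatality_rate = 1 / 100_000_000
print(f"Fatality-based baseline: {miles_for_confidence(fatality_rate):,.0f} miles")
# => roughly 300 million fatality-free miles

# Hars's alternative compares fleets on safety-critical situations, which are
# far more frequent. The rate below is a placeholder, not a measured figure.
critical_event_rate = 1 / 100_000  # hypothetical: one such situation per 100,000 miles
print(f"Safety-critical-event baseline: {miles_for_confidence(critical_event_rate):,.0f} miles")
# => roughly 300 thousand miles
```

Under those assumptions, the fatality-based yardstick demands on the order of 300 million incident-free miles, while a baseline built on far more frequent safety-critical events could be evaluated after a few hundred thousand. The specific numbers are illustrative; the takeaway is that the choice of baseline metric determines whether the evidentiary bar is reachable at all.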

Privacy on the Roadways

I’ve written extensively on the many competing equities at play with privacy. That debate is particularly pronounced in the emerging technology space, given the breadth of data collection and connectivity these technologies involve. Last November, the Niskanen Center submitted comments to the Federal Trade Commission (FTC) on the privacy implications of commercial drones, arguing for regulatory abstention from drone-specific privacy rules. Those same arguments apply equally well to AVs.

Privacy risks are often trotted out as major policy concerns associated with the deployment of autonomous vehicles. However, legislators should be wary of enacting legislation that attempts to establish federal standards for privacy issues. Here, federalism and the common law are better equipped to handle the complexities. Where consumer harms do materialize—actual, not merely hypothetical, harms—sufficient authority is already vested in the FTC and other federal regulators to address those issues in an ex post fashion.

In short, Congress should abstain from attempting to codify statutory privacy requirements for autonomous vehicles. Instead, it should leave those issues to be worked out by the courts.

Regulatory Forbearance

Elaine Chao, the new Secretary of Transportation, recently announced that the Department of Transportation (DOT) would begin reviewing the AV guidelines released last September. Given the signals that Congress intends to legislate on this very issue, it’s an auspicious moment for AV regulation. Luckily, the Department has plenty of guidance on how best to proceed.

In the wake of the National Highway Traffic Safety Administration’s (NHTSA) call for comments on the Federal Automated Vehicles Policy, a number of respondents opposed the Agency’s proposed new authorities. The pre-market approval authority came under particularly heated fire, with numerous organizations and industry firms arguing vehemently against it. The Niskanen Center concurred, while also arguing that the Agency did not, contrary to its own assessment, possess an “authority to regulate the safety of software changes provided by manufacturers after a vehicle’s first sale to a consumer”—the post-sale authority to regulate software changes, as described by the policy document.

As I discussed in a previous post, autonomous vehicles and other emerging technologies are posing unique conundrums for regulatory agencies. Part of the problem is Congressional leaders’ all-too-usual willingness to pass the puck to the bureaucratic state. If legislators are serious about legislating on this issue, they have to resist the temptation to cede any further leadership to DOT. Any bill that does so will simply put what little progress NHTSA and DOT have made in jeopardy.

Conclusion

To sum up, any AV legislation that passes from Congress to the President’s desk should:

  1. Establish a clear and well-defined baseline for what constitutes a “safe” AV, preferably one not based on the failure-rate estimation framework currently employed by DOT.
  2. Refrain from including any privacy provisions that are technology-specific.
  3. Avoid deferring leadership on the issue to DOT. For too long, Congress has deferred to federal agencies to sort out policy questions. Legislators should take this opportunity to lift the burden of policymaking from DOT and assert Congress’s primacy over the AV issue.

Additionally, whatever legislation ends up being promulgated should be predicated on a sound and empirically informed cost-benefit analysis. Otherwise, we run the risk of adding further delays to the deployment of what is bound to be a life-saving technology. Congressional leaders shouldn’t want to bear that responsibility, and that means they need to craft this legislation properly from the get-go.