A recent poll suggests the American public is wary of letting their cars do the driving for them.
As of a week ago, only 23 percent of Americans felt comfortable with the idea of riding in an autonomous vehicle. Although 42 percent indicated "they might in the future," that still leaves a large share of the public deeply suspicious of the new technology. With numerous companies rolling out "live testing" of driverless cars across the country, should regulators intervene?
Short of completely banning the new technology, however, is there any role for safety regulators? To answer that, it's worth broadening the question a bit. How, if at all, can regulators effectively regulate emerging technologies that have no historical precedent? From artificial intelligence and autonomous vehicles to CRISPR gene editing technology and the Internet of Things (IoT), regulators seem to be besieged on all sides by a mind-boggling rate of technological progress. What, if anything, is to be done?
To begin, it's important to recognize that delays in rolling out new technologies can have negative effects on consumer welfare. By forestalling the entry of emerging technologies into the marketplace, regulators can harm the public just as much as, or perhaps more than, they would by inaction.
Autonomous vehicles are a perfect example of this effect. Every day of delayed deployment results in more fatalities and injuries on American roadways that might otherwise have been prevented. Of course, this is an unproven technology, and we are only now seeing empirical evidence suggesting that even in its early stages it may be safer than human drivers. See, for example, the National Highway Traffic Safety Administration's (NHTSA) after-action report on last year's Tesla crash, which concludes, among other things, that "Tesla vehicles crash rate dropped by almost 40 percent after Autosteer installation."
Nor is this dynamic uniquely the purview of public safety regulators. It has been suggested that impediments to market entry for new telecommunications services, for example, have cost consumers on the order of billions of dollars annually. Even regulations governing non-safety-critical technologies can have profoundly negative effects on Americans, even if only as a hit to their pocketbooks.
There are many reasons regulatory rulemaking processes unintentionally result in consumer harms, from methodologically specious cost-benefit analyses to institutional dynamics that favor slower action. What is clear, however, is that regulators are severely limited in their ability to keep pace with progress in the digital era. That needs to change if America is going to continue reaping the benefits of innovation.
To that end, what follows is a new framework—a proto-manifesto—for regulators and others who confront the difficulties of appropriately tailoring new rules for this new digital frontier. How we regulate the technologies of the future will have an immense impact on the well-being of individuals not just in the United States, but around the world.
Addressing the need for flexibility and expediency, while balancing public concerns, is of utmost urgency in the emerging technologies space. If regulators are to meet the challenge of keeping up with the speed of technological progress, a good first step would be embracing an official policy that can better equip them for that task. Luckily, those principles have already been written. In fact, they’ve been around since the days of the Clinton Administration.
One Framework to Rule Them All
The National Telecommunications and Information Administration (NTIA) recently released a green paper on the IoT. In it, the Agency details its proposed next steps and states it will:
Continue to foster an enabling environment for IoT technology to grow and thrive, allow the private sector to lead, and promote technology-neutral standards and consensus-based multistakeholder approaches to policy making at local, tribal, state, federal, and international levels on issues ranging from U.S. security and competitiveness to cybersecurity, privacy, intellectual property, the free flow of information, digital inclusion, interoperability, and stability related to the IoT.
NTIA goes on to reaffirm its commitment to an important policy document that helped guide the development of the early Internet: The Framework for Global Electronic Commerce. The Agency mentions the Framework throughout the green paper, and notes its continued support for the general approach laid out by the document:
Dating back at least to the 1997 Framework for Global Electronic Commerce, the U.S. Government has been operating under the principle that the private sector should lead in digital technology advancement. Even where collective action is necessary, the U.S. Government has encouraged multistakeholder approaches and private sector coordination and leadership where possible. When governmental involvement is needed, it should support and enforce a predictable, minimalist, consistent, and simple legal environment for commerce.
These general principles focused on policy prescriptions that would allow the Internet, unencumbered by ex ante regulations, to flourish. The same principles can be equally effective in promoting the development of other emerging technologies. Trial-and-error experimentation drives innovation and technological progress, and the best environments for such experiments are those unencumbered by unnecessarily prescriptive rules. Fostering that level of dynamism requires recognizing the value of market-friendly environments in promoting high levels of innovation. Balancing the demands of market innovation against the concerns of public interest regulators is the genius of the Clinton Administration's Framework.
By permitting the private sector to lead, government regulators help to set the stage for the emergence of new markets. Regulators accept their role as the back-end risk mitigators and overseers of ongoing developments. In this capacity, those same regulators can fulfill their obligation to protect public safety and consumer confidence without stifling the dynamic progress that market forces excel at encouraging.
In order to see the benefits of this approach moving forward, regulators will need to apply these principles to the regulatory rulemaking process in a more consistent fashion. Luckily, we already have a sense for how this process can work: permit the “private sector to lead” in crafting standards, and permit regulators and other stakeholders to comment on the results, conditioned on a timely and transparent review process.
It may seem counterintuitive to suggest that industry ought to set its own best practices and standards for technologies that may affect public safety and security. However, the trouble with imposing an ex ante standard on a new technological innovation is that there's no way of knowing whether it's the right standard to achieve a particular end, and regulators have even less knowledge of what may or may not be appropriate than the technologists, engineers, and businesses investing in these new products and services.
As a result, the best means of achieving public interest goals has to begin where the knowledge resides: with the innovators, entrepreneurs, and industry firms actually involved in the creation and testing of new technologies. A recent report from Securing America’s Future Energy (SAFE) makes this point abundantly clear:
When an emerging technology is safety critical it is imperative for industry to proactively formalize safety assurance strategies, not just for the sake of the public, but also to protect the nascent technology against regulations that, while well-intentioned, do not reflect a complete understanding of the technology.
It goes on to note that "industry is ultimately accountable for educating regulators and customers on the state of [autonomous vehicle] technology, and the steps being taken to ensure it is deployed in a safe and responsible fashion." If industry fails to do so, burdensome regulations are likely inevitable. It's for these reasons that the SAFE report concludes by suggesting that governments ought to "retain a flexible framework to allow for these technologies to mature and contribute to public safety."
Similarly, the Online Trust Alliance (OTA) recently released the second version of its best practices framework governing the IoT. OTA's IoT Trust Framework, a multi-tiered certification process, offers a set of best practices that can help guide manufacturers of IoT products and services. More importantly, it can help correct natural information asymmetries on the consumer side of the market. By publicly communicating which products meet certain baseline standards of security, it leaves the public better informed in its decisions and strengthens consumer trust in the market.
Ensuring that consumers are appropriately informed about the risks they face with new technologies is the best mechanism for securing public acceptance, while ameliorating the concerns of regulators. So long as industry takes the lead on sharing information with the public, regulators will be less burdened by the hue and cry to "do something." Industry should take the lead in determining which best practices and standards strike the best balance between promoting innovation, protecting public safety, and engendering trust in the market. Regulators and other stakeholders are then in a better position to address any outstanding concerns in a multistakeholder forum that focuses on promoting discussion and compromise.
Although the multistakeholder process has its drawbacks—mainly, lengthy timeframes compounded by inconsistent processes and uncertain goals—it has also contributed to the development of a relatively flexible approach to regulating new technologies. Unpredictability still abounds, but the willingness of agencies to avoid unduly prescriptive and precautionary rules is a net benefit for many of these technologies.
For example, the Obama Administration issued an executive order charging NTIA with overseeing a multistakeholder process for crafting voluntary privacy best practices governing the use of commercial unmanned aerial systems (UAS). Although in this case the UAS industry wasn't technically leading on standards-creation, the multistakeholder process itself did yield a consensus-based set of standards that necessarily involved compromise between privacy advocates, industry groups, and other interested stakeholders. NTIA, to its credit, largely acted as a facilitator for conversation, not a prescriptive regulator. Consequently, the resulting best practices showcased the best of democratic negotiations while avoiding the need for onerous regulations.
Of course, there are also times when regulators will need to act to protect public safety. A perfect example is the previously mentioned NHTSA after-action report on the Tesla Autopilot accident. Instead of preemptively banning Autopilot, or throwing up roadblocks to forestall its deployment, NHTSA merely observed developments as they occurred and investigated the resulting crash to determine whether intervention was necessary or appropriate. It determined that it was not.
Indeed, had the Agency banned the technology in advance, we never would have acquired the data showing that Tesla's technology actually contributed, on net, to reduced crash rates. Luckily, NHTSA and the Department of Transportation seem to be embracing a mentality similar to NTIA's, promoting the use of industry-led standards and multistakeholder engagements in developing policies for autonomous vehicles.
In short, multistakeholderism has become, and will likely continue to be, an important component of the regulatory rulemaking process for emerging technologies. Ensuring consistent processes, transparency, and clear and accelerated timelines for such engagements, however, will be key to ensuring that innovation isn’t hamstrung by unnecessarily complicated and lengthy bureaucratic timetables. In those respects, there is still much that can be done to improve upon the multistakeholder process.
Towards a New Manifesto
Just as the Department of Commerce has embraced the Framework for Global Electronic Commerce as its regulatory lodestar, so too should other federal agencies. Accepting these basic tenets can then guide agency rulemaking towards a recognition of the value in pushing industry to develop standards. These in turn can then be reviewed by regulators and other stakeholders, alleviating public concerns and achieving regulatory aims without overly burdening the federal bureaucracy. In essence, the process flow best suited to emerging technology regulatory rulemaking—one that appeases regulators, industry, and the public—proceeds as follows:
1. Governance of new, untried and untested technologies should begin with industry issuing standards and best practices.
2. A multistakeholder review process—facilitated but not dictated by the appropriate federal agency—should follow, with clear process guidelines and objective goals and deliverables. This process should in no way be predicated on a presumption of regulatory action, but should merely serve as a forum for discussion.
3. Public comments should be sought throughout the process.
4. During this time, firms should be granted default approval to continue operating.
5. Regulators should observe and respond to ongoing developments, proposing new rules only if a risk-based assessment warrants further action.
Forbearance and humility are important traits for regulators to exhibit. However, merely calling for a "hands-off" approach to regulating new technologies tends to ring hollow. Regulators have institutional mandates to abide by, and asking them to stand by and simply do nothing doesn't sell in the court of public opinion. In fact, it doesn't sell among many technologists and Silicon Valley types, either. As Garrett Johnson and Greg Ferenstein noted in a recent piece discussing the unique political perspectives of Silicon Valley: "Innovation happens too quickly for the lumbering legislative process to keep up, but in some cases technologies may perform better in a new regulatory framework. Or regulatory certainty may be better than regulatory uncertainty."
We should push agencies to embrace a new manifesto for addressing emerging technologies—a positive, market-friendly agenda that can help guide better decisions, while freeing those institutions to permit trial-and-error experimentation in the marketplace. What we need now, more urgently than ever, is a bottom-up approach to regulatory action that ensures innovation can flourish while minimizing the cries for preemptive rules. Otherwise, we run the risk of forestalling progress in favor of perpetuating the status quo.