On Tuesday, the Niskanen Center submitted comments to the National Science Foundation in response to the question of “whether the [2016 National Artificial Intelligence Research and Development Strategic Plan] should be revised and, if so, the ways in which it may be improved.” We believe it should, and our comments offer two recommendations that could significantly strengthen the strategy.

First, we argue that the National AI Strategy would be substantially improved by a new strategic goal: to “Review and identify administrative rules, regulations, and policies that may hinder the research, deployment, and adoption of AI systems within the federal government and private sector.” Given that the original 2016 strategy was crafted under the auspices of the Office of Science and Technology Policy (OSTP), this is a natural addition. As we note in the comments:

Under its originating statute, OSTP is charged with “provid[ing] the President with periodic reviews of Federal Statutes and administrative regulations of the various departments and agencies which affect research and development activities, both internally and in relation to the private sector, or which may interfere with desirable technological innovation, together with recommendations for their elimination, reform, or updating as appropriate.” This provision is no accident; indeed, it was echoed as a priority for OSTP’s predecessor, the President’s Committee on Science and Technology.

Our second recommendation focuses on the need to expand the scope of Strategy 3 (“Understand and address the ethical, legal, and societal implications of AI”) to include consideration of how “ethically-neutral design standards would compare to a framework that prioritizes a static set of ethical value judgments.” In particular, we argue that Strategy 3 should avoid attempting to “solve age-old epistemic and metaphysical questions about ‘justice and fairness’” and should instead focus on more technically feasible means of addressing particularized, identifiable consumer harms that result from the deployment of automated decision-making systems. To ensure that the widest range of voices is heard in this undertaking, our comments also emphasize that Strategy 3 should put this recommendation into practice by convening a multistakeholder process under the purview of OSTP.

We then conclude by noting:

While it is not possible to accurately diagnose all potential future problems that may result from the widespread use of AI, preemptively addressing the rules that may hinder this technology’s deployment is well within the ability of the government. OSTP must help lead this effort, as detailed under its originating statute, by pinpointing those areas of the CFR and other regulations that should be updated to help realize the benefits AI can offer the American people.

From the executive summary:

The 2016 National Artificial Intelligence Research and Development Strategic Plan was a commendable step forward in crafting a national strategy for research and development in artificial intelligence. Given the intense investments being made by China and other countries, it is imperative that the United States not only match but surpass these efforts. To achieve a vision of continued American leadership in technological progress and innovation, the National AI Strategy requires some updates to maximize the effectiveness of these investments. These comments will argue in favor of two such updates.

First, the National AI Strategy should include an additional strategic aim that focuses on how best to remove burdensome regulatory barriers that present unnecessary obstacles to ongoing research, testing, and commercial deployment of new artificial intelligence systems and technologies. Second, Strategy 3 should be expanded to include consideration of the trade-offs associated with incorporating ethical frameworks into the architectural design of artificial intelligence systems. In particular, we recommend explicitly broadening the strategic aim to consider outcomes-based governance strategies that correct specific, identifiable harms that may result from the application of automated decision-making systems.

Read the full comments here.