Fears surrounding artificial intelligence (AI) and autonomous systems often stem from what happens when the software is confronted with a no-win ethical dilemma. For example, does an autonomous vehicle prioritize the safety of its passengers, or the lives of innocent pedestrians on the sidewalk? This is the classic trolley problem, and it distracts from more realistic concerns about AI, such as managing bias and developing transparent systems. Fixating on trolley problem scenarios also erodes public trust in AI and autonomous systems.

While ethics should be considered when building AI systems, it's not the job of ethicists to solve the problems inherent in the development of new AI technologies. As the title of a recent Wired article put it: when it comes to autonomous vehicles, lawyers, not ethicists, will solve the trolley problem.

New advances in AI and autonomous systems are challenging our definitions of knowledge and raising complicated ethical questions about the use of AI in high-stakes decision making. From determining creditworthiness and automating medical diagnosis to dispensing justice, AI systems are already all around us. As the ubiquity of AI dawns on public consciousness, concerns are being raised about how an AI system arrives at a decision. The hysteria surrounding the perceived ethical dilemmas of AI, among policymakers and consumers alike, has the potential to stymie the evolution of new technologies that will dramatically improve our lives, while also clearing a pathway for other countries to take the lead in this new age of machine intelligence.

Reaping the vast economic and social benefits AI has to offer will depend on building public trust in these systems. Humans tend to trust other humans more than they do machines or algorithms, and trust in these systems dissolves when we witness them fail or make mistakes. How will we engender the same trust in AI as we do in human-to-human interactions?

Easing anxieties over AI will be the responsibility of industry leaders, AI experts, and policymakers. Focusing on the net positive benefits of introducing AI into a wide range of applications will go a long way toward building public trust. Baking in sensible ethical standards will also aid this trust-building effort.

When it comes to building ethical standards into AI systems, we should promote consensus-based standards derived from a diverse group of experts and interested parties. Examples of this convergence can already be observed.

One area of AI, robotics, was the first to develop guidelines that call out the need to address ethics during the development process. The British Standards Institution published a report in 2016 detailing "guidelines for the identification of potential ethical harm" when building robots. In December 2016, the Institute of Electrical and Electronics Engineers released, and requested comments on, an AI-ethics-focused document called Ethically Aligned Design. The document "encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies." The report is the result of a 108-member group consisting of thought leaders in technology, AI, and ethics.

Tech companies including Alphabet, Amazon, IBM, Facebook, and Microsoft are reportedly coming together "to ensure that [AI] research is focused on benefiting people, not hurting them." It's clear that the community of researchers and technical experts focused on resolving these issues recognizes the need to consider the ethical impacts of AI on society. They take it seriously, and they're working day in and day out to make sure their work is done for the general betterment of humanity.

As new developments in AI reshape everyday tasks and challenge the traditional way of conducting business, there is growing emphasis on building systems with ethical considerations in mind. However, ethicists should not be tasked with subjecting AI advancements to vague philosophical litmus tests that have little bearing on real-world scenarios. Instead, increasing awareness among the AI research and development community of the need to follow consensus-based ethical standards will help ensure new technologies are designed with social and economic impacts in mind, and help foster trust between AI systems and the general public.

The possibilities of AI are endless. From the invention of the wheel to the Internet, technology has changed the way we live our lives for the better. Sometimes, technological advancements have unintended consequences that necessitate intervention. However, we shouldn’t allow the fear of the unknown to prevent us from reaping the benefits of new technology. With respect to AI, there will be challenges and unintended consequences, but it is important to recognize that these challenges and potential pitfalls are already being met with solutions from the community developing these new technologies.