
Support for AI Regulation in Europe and the U.S. Grows

January 29, 2020 by Luke James

Google CEO Sundar Pichai has called for a sensible approach to the regulation of AI, an area he believes is too important for governments and regulatory bodies to ignore given major developments in health tech and self-driving cars, among other things.

In 2018, Google pledged that it would not use AI in applications related to weapons, surveillance that violates international norms, or ways that go against human rights. 

Now, writing in the Financial Times, Pichai, who heads both Google and its parent company Alphabet, has noted that although AI has massive potential, it also carries considerable dangers, including the malicious use of computer-generated clips designed to look real, known as ‘deepfakes’.

Regulation of AI Widely Supported

Pichai’s comments, published in mid-January around the same time the European Commission announced it is considering a five-year ban on facial recognition, have prompted several industry leaders, including Maria Axente, head of responsible AI at PricewaterhouseCoopers (PwC), to voice their support for the regulation of AI.

Speaking to BBC News in the UK, Axente said, "The question is how can it be done in a way that doesn't kill innovation, as well as continue to balance the benefits of AI with the risks it poses, as AI becomes more embedded in our lives?"

"Regulation and self-regulation, via a code of ethics and an ethics board, might not be enough to do that."

Pichai noted that there is an important role for governments to play in the regulation of AI, and that now is a good time to start to achieve “international alignment” as the United States and the European Union begin drawing up their own approaches. 

He added that “Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities” and went on to say that such regulation could build on existing standards such as Europe’s famous (or infamous!) General Data Protection Regulation (GDPR).


A European Framework on the ‘Rules’ for AI

While policymakers worldwide are looking at ways to tackle the risks associated with AI and its future development, the EU can be considered a front-runner given its current and ongoing efforts.

Those efforts began in January 2017, when the European Parliament drew up a code of ethics for robotics engineers and tasked the Commission with considering the creation of a European agency for robotics and AI that would be responsible for providing the technical, ethical, and regulatory expertise needed in an AI-driven environment.

The Commission followed up in 2018 by adopting a communication to promote AI development in Europe, and in 2019 it published a coordinated plan on AI, endorsed by the Council of the European Union, to align the national AI strategies of EU Member States.

In April 2019, the Commission’s Ethics Guidelines for Trustworthy AI were published. Although not binding, the guidelines offer advice on how to facilitate the secure development of ethical AI systems in the EU, the core principle being that the EU must take a ‘human-centric’ approach that aligns with European values.

With new European Commission President Ursula von der Leyen announcing that the Commission will soon put forward further legislative proposals to achieve a coordinated European approach to AI, and with policymakers worldwide looking at how to tackle the technology’s risks in response to growing demand for government oversight, one thing is clear: the world is starting to wake up to the impact that unregulated AI could have. That can only be a good thing.
