The laws of Artificial Intelligence: are there any?

10 November 2016

University of Waikato Law lecturer Sean Goltz says we need to be cautious with the growing use of AI.

What was once considered science fiction has become an ordinary part of our lives. Artificial Intelligence (AI) appears in the products we buy and the services we use, making things easier and more efficient. Nothing wrong with that, right?

University of Waikato Law lecturer Sean Goltz says we need to be cautious with the growing use of AI. He’s researching ways to develop technology-driven legal tools to regulate the harmful effects of technology, as well as harness technology for the benefit of the law.

While it hasn’t reached I, Robot status, experts remain wary of the possible harm AI could cause.

Theoretical physicist Stephen Hawking wrote that in the short term, AI’s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. One example illustrating Hawking’s concern is the autonomous killing machines currently being developed by more than 50 nations. According to the US Government Accountability Office, in 2012 76 countries had some form of drone and 16 countries possessed armed drones. This makes a war fought by killer robots no longer a hypothetical concept.

Some commentators suggest the international community should join together to regulate all kinds of autonomous systems, perhaps by drafting a new international agreement to ban killer robots.

Another suggestion comes from the scientists working to develop safe and beneficial intelligent technologies. AI scientist Stephen Omohundro has argued that sufficiently advanced AI systems of any design will exhibit a number of basic “drives”, and that designers must account for these to prevent the unintended and harmful behaviours that can emerge from poorly designed real-world machine learning systems.

However, it’s not all doom and gloom when it comes to AI systems, says Mr Goltz.

AI and the law

Mr Goltz says AI can have a practical application in the field of law. AI-assisted compliance systems are being designed to make government or corporate compliance regimes more effective and efficient by automating the compliance-checking tasks.

The AI legal application that has undoubtedly received the most public attention is ROSS, a system supported by IBM’s Watson division (the platform that uses natural language processing and machine learning to derive insights from large amounts of unstructured data). ROSS has been described as the world’s first artificially intelligent attorney. It was designed to read and understand language, search through the entire body of law (including case law and secondary sources) when asked questions, and then generate responses, providing references and citations to back up its conclusions. ROSS also learns from experience, gaining speed and knowledge the more you interact with it.

One of the co-founders of the ROSS team describes it as "the best legal researcher available". Even without being made available to the public or presented in any public demo, ROSS has become a symbol of legal AI technology.

Another new application, developed by Mr Goltz, aspires to be the world’s largest search engine of legislation and the “policy advisor of the future”. The platform is supported by Microsoft and Google and makes extensive use of both companies’ machine translation to offer laws from China, Mexico and Spain, among many others, in English.

Mr Goltz also plans to develop a system that can identify fines within legislation by training a model (software), and to develop a compliance system that will be able to identify risks.
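To give a flavour of the idea, the toy sketch below flags fine-like clauses in legislative text with a hand-written pattern. It is illustrative only: the pattern, function name and sample sentence are invented for this example, and a system like the one Mr Goltz describes would learn such patterns from labelled legislation rather than hard-code them.

```python
import re

# Illustrative stand-in for a trained model: a single hand-written
# pattern matching clauses like "a fine not exceeding $10,000".
FINE_PATTERN = re.compile(
    r"(fine|penalty)\s+(?:not\s+exceeding\s+|of\s+)?\$?([\d,]+)",
    re.IGNORECASE,
)

def find_fines(section_text):
    """Return (keyword, amount) pairs for fine-like clauses in a section."""
    return [(m.group(1).lower(), m.group(2))
            for m in FINE_PATTERN.finditer(section_text)]

sample = ("A person who contravenes this section commits an offence and is "
          "liable on conviction to a fine not exceeding $10,000.")
print(find_fines(sample))  # [('fine', '10,000')]
```

A trained classifier would generalise far beyond what any fixed pattern can capture, which is precisely why the fines would be identified by training a model rather than by rules like this one.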

Read the full version of Mr Goltz’s article.