AI Challenges: Liability

This article is part of the series "How AI challenges humanity". Here we'll look at how AI-powered systems are likely to affect how we think about liability, accountability and responsibility. As always, first through a pessimistic lens and then through an optimistic one.

Threats

Questions of liability mainly arise in two forms: Was the machine used in a malicious or careless way? And: Was the machine built to perform the task as expected? In the mechanical world these questions are answered routinely; that's why we have driving tests for drivers, checklists in cockpits and standards for mechanical parts.

When software takes over some cognitive tasks, a new line of interaction between humans and machines develops. Imagine an autonomous car entering a parking lot across a patch of dirt. It isn't confident that it's safe to proceed, but instead of asking the passengers to take over control, it asks them whether it's safe to go. One of the passengers selects "Yes", meaning: "Yes, it's safe to enter the parking lot. Just go around the camouflage-wearing man at the entrance like you normally would." The car, however, wanted to know whether it's safe to go straight ahead, and so it drives right into the man.

These interactions need to be designed carefully. Waymo, the company that grew out of Google's self-driving car project, shows visualizations of the car's view of the world on screens inside the cabin, so that passengers can anticipate the car's next actions and feel safe.

Defibrillators have failed to save lives because their user interfaces, for legal reasons, could not tell users assertively enough what to do next.

As the cognitive tasks machines take over become more complex, so does the interface between the machine and the person. Designers alone cannot solve these problems.

Opportunities

The questions we ask about the liability of autonomous systems have always existed, but they were never fully answered as long as humans did the tasks. Is one driving test really enough to let people drive cars? Did the physician make the right decision?

Lawmakers now have an interesting opportunity: together with the public and the industry, they can define tests that autonomous systems have to pass in order to be approved. Until now, nobody ever found out whether a radiologist's error was due to fatigue, bad lighting, lack of training or something else. Medical meta-studies criticize the lack of rigorous data collection after surgery, even in clinical trials.

In an automated setting, the mistake is added to the test set. Wherever possible, no future system will ever make the same error again.
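To make this concrete, here is a minimal sketch of what such a regression suite could look like. Everything in it is hypothetical: the `classify` stub stands in for the real model, and the recorded cases are invented.

```python
# A minimal sketch of a regression test set: every past mistake becomes a
# permanent test case that future versions must pass before deployment.

def classify(image_id: str) -> str:
    """Hypothetical stand-in for the real model; returns a predicted label."""
    return {"scan-001": "benign", "scan-002": "malignant"}.get(image_id, "unknown")

# Each past failure is recorded with its input and the correct answer.
REGRESSION_CASES = [
    {"input": "scan-001", "expected": "benign"},     # missed by an earlier version
    {"input": "scan-002", "expected": "malignant"},  # missed by an earlier version
]

def run_regression_suite() -> bool:
    """Return True only if every previously seen mistake is now handled."""
    failures = [c for c in REGRESSION_CASES if classify(c["input"]) != c["expected"]]
    for case in failures:
        print(f"regression: {case['input']} -> expected {case['expected']}")
    return not failures

if __name__ == "__main__":
    assert run_regression_suite(), "a past mistake has resurfaced"
```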

Automatic logging can document and communicate how a system arrives at a conclusion. A machine translator, for example, could explain why it chose a particular word or grammatical structure.
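In the spirit of the translator example, here is a sketch of what such a decision log could look like. The `choose_translation` helper and its candidate scores are invented for illustration.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("decisions")

def choose_translation(word, candidates):
    """Pick the highest-scoring candidate and log why it was chosen."""
    best = max(candidates, key=candidates.get)
    log.info(json.dumps({
        "event": "translation_choice",
        "source_word": word,
        "chosen": best,
        "score": candidates[best],
        "alternatives": candidates,  # everything the system considered
    }))
    return best

# Example: hypothetical scores a translator might assign to candidates
# for the ambiguous German word "Bank".
choose_translation("Bank", {"bank": 0.71, "bench": 0.22, "shore": 0.07})
```

A structured log like this gives investigators something concrete to examine after a failure.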

This will change the nature of insurance. Systems that can demonstrate their error rates on millions of test examples, or across millions of hours in simulation, are easier to insure.
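A back-of-the-envelope sketch of why scale matters for insurers: with a normal approximation, the confidence interval around a measured error rate tightens as the test set grows. All numbers below are made up.

```python
import math

def error_rate_interval(errors, trials, z=1.96):
    """95% normal-approximation confidence interval for an observed error rate."""
    p = errors / trials
    margin = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - margin), p + margin

# The same observed 1% error rate, measured on ever larger test sets:
for n in (1_000, 100_000, 10_000_000):
    p, lo, hi = error_rate_interval(errors=n // 100, trials=n)
    print(f"n={n:>10,}: {p:.2%} (95% CI {lo:.3%} to {hi:.3%})")
```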

Ways Forward

Societies need to have honest discussions about acceptable error rates. Humans make plenty of mistakes, so expecting 100% performance from machines is unrealistic and dangerous.

Deploying a cancer-detection system that makes fewer mistakes than humans will save lives. But public perception will focus on the spectacular single case, not the yearly statistic. This can currently be witnessed in the coverage of accidents involving cars with autonomous features.

Regulators and the public must embrace the uncertainty of probabilistic software. As long as it performs better than humans or than the previous version, deploying the software is the better choice. Whenever the software fails, it must record and document the error.
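A minimal sketch of such a deployment gate, with loudly stated assumptions: the error rates and the `should_deploy` and `record_failure` helpers are all hypothetical, and a real gate would compare rates with a statistical test rather than raw point estimates.

```python
def should_deploy(candidate_error_rate, baseline_error_rate):
    """Deploy only if the candidate makes fewer errors than the current baseline."""
    return candidate_error_rate < baseline_error_rate

def record_failure(case_id, expected, got, logfile="failures.log"):
    """Append a failed case so it can later join the regression test set."""
    with open(logfile, "a") as f:
        f.write(f"{case_id}\texpected={expected}\tgot={got}\n")

# Example: a candidate at 0.8% errors against a baseline (human or software) at 1.1%.
if should_deploy(candidate_error_rate=0.008, baseline_error_rate=0.011):
    print("deploy: candidate beats the baseline")

# In production, every failure gets recorded for the next regression suite.
record_failure("scan-003", expected="malignant", got="benign")
```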

It has never been easier to make sure a mistake is never made again. Public datasets and simulation environments will help in that endeavor.
