
The Curious Case of AI and Legal Liability

AI is no longer a futuristic ideal from sci-fi movies. It's here, and it's affecting the way we do business. Have you considered the legal implications of the fact that the future is now?

By Andrew Taylor

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur South Africa, an international franchise of Entrepreneur Media.


While we couldn't hope to provide an exhaustive analysis of the circumstances in which the use of artificial intelligence could result in legal liability, this article is intended to provoke some thought about the way in which we integrate AI into our lives and, significantly, into our businesses.

While South Africa lags behind the Western world in technology adoption and diffusion, AI liability is without question a matter that warrants a good measure of foresight. This was made particularly relevant when one of Uber's autonomous vehicles killed a pedestrian in the US earlier this year.

Nowhere are the concerns around the intersection of artificial intelligence and legal liability more applicable than in cases like these. However, it bears mentioning that we have long been living with software systems that, to a greater or lesser degree, have artificially subsumed the role of humans in a given process.

Consider the mid-1980s case of the Therac-25, a Canadian-designed radiation therapy machine whose software errors caused six patients to receive massive overdoses of radiation, several of them fatal.

Moving forward

Conversely, the use of modern artificial intelligence and software processes to assist humans in their endeavours has yielded untold gains in efficiency and efficacy across innumerable areas of application.

Indeed, the uncertainty surrounding liability in our overly litigious society has likely hindered the development and commercialisation of many potentially revolutionary AI solutions, for fear of the liability that could ensue from their use. Little doubt, then, that sci-fi has done little to aid the cause of the AI evangelists. How, then, do we attribute liability to AI?

The problem with conventional criminal and civil liability is that it relies, in large measure, on the application of objective standards. Criminal liability in South Africa specifically requires a voluntary act (or omission) by a human being. Applying this standard to AI means that criminal liability cannot ensue for an AI system.

Naturally, there are other forms of liability, but this, at its core, calls for a re-examination of what constitutes conduct for the purposes of criminal liability. This does not even begin to touch on the hurdles encountered in establishing "fault" on the part of the AI.

Governing AI

The answer lies in the detail of the rationalisation of the decision-making process of the particular application of AI. Perhaps, if we are able to tease out the way in which the AI arrived at its decision, as opposed to a black-box approach that examines only the result, then we are making some strides towards ascertaining whether liability should arise in a given circumstance.
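To make that distinction concrete, here is a minimal sketch, assuming Python with scikit-learn and a hypothetical insurance-style risk model, of the difference between a black-box verdict and a decision process that can be interrogated after the fact:

```python
# Minimal sketch: an interpretable model whose reasoning can be inspected,
# versus simply reading off its verdict. Features and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per applicant: [age, prior_claims]
X = [[25, 0], [40, 2], [55, 1], [30, 3], [60, 0]]
y = [0, 1, 0, 1, 0]  # 0 = low risk, 1 = high risk

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The black-box view: only the result is visible.
print(model.predict([[35, 2]]))

# The inspectable view: the actual rules the model applied, which is
# the kind of record a court or regulator could later examine.
print(export_text(model, feature_names=["age", "prior_claims"]))
```

The printed rules, rather than the bare prediction, are what would give a court something to reason about when asking how a decision was reached.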

What is clear is that we need to have a framework in place for the promulgation of appropriate laws that would govern the proverbial Skynet and determine when liability should arise.

The European Union has made some progress in this regard, having called for an EU-wide legislative framework that will govern the ethical development and deployment of AI, and the establishment of liability for actors, including robots.

It may sound far removed from your day-to-day business, but this may impact your business sooner than you think. From chatbots that enter into contracts, insurance AI that quantifies your risk profile and premium, and legal AI that assesses your cases against historical case law, to AI that helps judges avoid inherent biases and mete out appropriate sentences, the future is very much here.

From the leading edge

South Africa has an opportunity to lead the regulation of this new frontier and avoid the all-too-familiar picture of legislation trailing in the dust of technology. This requires a regulatory approach in which various formulations of product, design and programming liability can be negotiated by informed stakeholders to cater for these new forms of technology and the situations where they go awry, and to more accurately reflect the ethics and concerns of our society.

It is undoubtedly a tricky and murky road, where no system is error-free and the wrongfulness of AI is a hard sell, but it is nevertheless one that must be explored. In the interim, companies need to ensure that sound corporate governance is practised in all decisions that involve AI, that identified risks are recorded, and that execution and implementation are carefully managed.

Andrew Taylor

Managing Partner: Henley Estates

Andrew Taylor is a managing partner at Henley Estates, part of the Henley & Partners Group, a global leader in citizenship-by-investment programmes, with offices in South Africa.