Updated 19 Sep 2019


The curious case of AI and legal liability

AI is no longer a futuristic ideal from sci-fi movies. It’s here, and it’s affecting the way we do business. Have you considered the legal implications of the fact that the future is now?


Andrew Taylor, Entrepreneur, 09 June 2018



This article cannot hope to provide an exhaustive analysis of the circumstances in which the use of artificial intelligence could result in legal liability. Its intention, rather, is to provoke some thought about the way in which we integrate AI into our lives and, significantly, into our businesses.

While South Africa lags behind the Western world in technology adoption and diffusion, the question of AI liability is without doubt one that warrants a good measure of foresight. This was made particularly relevant when Uber’s autonomous vehicle killed a pedestrian in the US earlier this year.

Nowhere are these concerns around the intersection of artificial intelligence and legal liability more applicable than in autonomous vehicles. However, it bears mentioning that we have long been living with software systems which, to a greater or lesser degree, have artificially subsumed the role of humans in a given process.

Consider the mid-1980s case of Therac-25, a Canadian-designed radiation therapy machine whose software faults caused six patients to receive massive, in some cases fatal, overdoses of radiation.

Moving forward

Conversely, the use of modern artificial intelligence and software processes to assist humans in their endeavours has yielded untold gains in efficiency and efficacy across innumerable areas of application.

Related: Digital transformation: 3 easy steps to success

Indeed, the uncertainty surrounding liability in our overly litigious society has likely hindered the development and commercialisation of many AI solutions that could have been revolutionary, for fear of the liability that could ensue from their use. There is little doubt, then, that sci-fi has done little to aid the cause of the AI evangelists. How, then, do we attribute liability to AI?

The problem with conventional criminal and civil liability is that it relies, in large measure, on the application of objective standards. Criminal liability in South Africa specifically calls for a voluntary act (or omission) of a human being. Applied to AI, this standard means that criminal liability cannot ensue for an AI system. Naturally, there are other forms of liability, but this, at its core, calls for a re-examination of what constitutes conduct for the purposes of criminal liability. And this does not even begin to touch on the hurdles encountered in establishing ‘fault’ on the part of the AI.

Governing AI

The answer lies in the detail: the rationalisation of the decision-making process of the particular application of AI. If we are able to tease out the way in which the AI arrived at a decision, as opposed to a black-box approach that examines only the result, then we are making strides towards ascertaining whether liability should arise in a given circumstance.

What is clear is that we need a framework in place for the promulgation of appropriate laws that would govern the proverbial Skynet and determine when liability should arise. The European Union has made some progress in this regard, having called for an EU-wide legislative framework to govern the ethical development and deployment of AI and to establish liability for actors, including robots.

It may sound far removed from your day-to-day business, but this may affect you sooner than you think. From chatbots that enter into contracts, insurance AI that quantifies your risk profile and premium, and legal AI that assesses your cases against historical case law, to AI that helps judges avoid inherent biases and mete out appropriate sentences, the future is very much here.

Related: A self-service legal portal for entrepreneurs

From the leading edge

South Africa has an opportunity to lead the regulation of this new frontier and prevent the all-too-familiar lag of legislation trailing in the dust of technology. It requires a regulatory approach in which various formulations of product, design and programming liability can be negotiated by informed stakeholders, catering for these new forms of technology and the situations where they go awry, and more accurately reflecting the ethics and concerns of our society.

It is undoubtedly a tricky and murky road: no system is error-free, and the wrongfulness of AI is a hard sell. Nevertheless, it is one that must be explored. In the interim, companies need to ensure that sound corporate governance is practised in all decisions involving AI, to record the risks identified, and to carefully manage their execution and implementation.


Copyright is owned by Entrepreneur Media SA and/or Entrepreneur Media Inc. All rights reserved.


About the author


Andrew Taylor, Entrepreneur


