Evolution or revolution? The future of regulation and liability for artificial intelligence
The A.Ideas series presents opinion and views around artificial intelligence emerging from discussions at the OECD Conference "AI: Intelligent Machines, Smart Policies".
This article is part of the Forum Network series on Digitalisation
New technologies invariably encourage debate over whether existing regulation is sufficient, or if change is required. Artificial intelligence (AI) is no different in this sense, but it arguably presents a unique challenge, as it replicates human agency and judgement.
AI has the potential to affect almost every aspect of work and society, wherever there is opportunity for human thought and decision-making to be replicated and surpassed by machine-learning and automation technologies. These technologies could bring huge benefits to society. They also bring risks.
As the deployment and commercialisation of AI becomes more prevalent, more pervasive, and more ‘disruptive’ in society, concerns naturally increase about the risks that the new technology may bring. Questions are raised as to whether existing regulatory and liability regimes are adequate. On the one hand, there are concerns that if the existing legal and regulatory structures are inadequate to deal with the challenges of new technologies, citizens will be exposed to unacceptable risks, and will be left uncompensated for damages they suffer as a result. On the other hand, it is recognised that inappropriately targeted, or unnecessarily intrusive legal and regulatory intervention could serve to stifle the development of beneficial technologies.
Questions of responsibility and liability for the safety and security of AI are on the radar of policy makers around the world. The European Parliament recently called for EU-wide legislation to regulate the rise of robots, including an ethical framework for their development and deployment and the establishment of liability for the actions of robots, including autonomous vehicles. In the UK, the House of Lords Select Committee on Artificial Intelligence is currently taking evidence on a wide range of issues, including liability and regulation.
Is modification of the existing rules and frameworks sufficient to deal with this new technology, or should an entirely new system of rules for AI be introduced?
Current regulation and liability
In jurisdictions around the world, there is a spectrum of laws and regulation that applies to the safety of products and, to a lesser extent, to services. Products and services that pose little or no safety risk may be subject to self-regulation, or to non-mandatory industry standards or codes of practice. As the safety risk increases, the level of regulatory oversight and mandatory rulemaking tends to increase.
Historically, there has been a wide divergence around the world in the approach taken to the management of consumer safety from a legal and a regulatory perspective. In some regions, the general tendency has been to protect the safety of consumers by imposing requirements that must be met as a condition precedent to products being marketed, with clear responsibilities placed on the parties considered responsible for safety compliance. Regulatory and criminal sanctions are imposed where those requirements are breached by the person responsible. This approach is typical in, for example, Europe. Other jurisdictions, at least historically, have tended to take a somewhat “lighter” touch to aspects of pre-market regulation, relying instead on liability rules and litigation systems to provide the incentive to avoid placing consumers at risk. The United States is the most notable example of this. In recent years, however, these historic distinctions have become blurred: some jurisdictions have placed greater emphasis on supporting litigation systems and consumer claims (eg the EU), whilst in others (eg the US) regulation has started to take on a more “European” flavour.
These policy approaches are important, because they shape what the policy response to the development and deployment of AI technologies should be, and the approach taken to managing any new risks arising from those technologies.
We need to articulate clearly what, if anything, about AI cannot adequately be dealt with under existing systems in order to protect citizens from unwarranted risk. It is important to resist the temptation to focus too much on hypothetical risk scenarios, and to allow academic debate around those scenarios to dictate policy without sufficient connection to real-world risks, and without sufficient recognition that history has often shown legal regimes to be remarkably flexible in dealing with new challenges. To the extent that gaps or vulnerabilities exist, an approach needs to be identified that adequately addresses the issue, without unduly stifling innovation or creating unintended sources of inequity.
Evolution or revolution?
As a general proposition, liability and regulatory regimes can operate effectively to protect the public if they can place legal responsibility on a party that has the ability to have reasonable influence over the safety of the activity. It does not necessarily mean that the party has to be able to have absolute control over safety, or even that it has any level of direct control. It can be sufficient if the responsible party has the ability to pass on the risk to a third party who does have reasonable influence over safety.
The deployment of AI technologies certainly has the potential to make the protection of safety more complicated, but it doesn’t necessarily follow that new systems of liability and regulation are needed.
The reality is that as the use of AI becomes more prevalent in an increasing array of applications, the solutions that are needed to deal with liability and responsibility issues will vary. In the world of autonomous machines, the management of safety, including the systems for allocation of responsibilities, will differ from application to application. The rules applicable to responsibility for autonomous passenger vehicles will be different to those which determine responsibility for the safety of autonomous systems in factories. The use of AI to aid in the design of large construction projects will raise issues of liability and responsibility that are different to those arising when AI is used to aid the diagnosis of a patient’s illness and prescribe medication.
Without doubt, there will be aspects of the deployment of AI technologies that will call for a regulatory response to help clarify or determine questions of responsibility and liability, and to help ensure a high level of protection from unacceptable risks in the community. But this is not an area where a broad brush will be the right tool.
The greatest risk is that the speed of technological development will lead policy makers to rush to regulate and legislate, creating regimes that stifle good innovation in the short term and prove unfit for purpose in the long term.
GOT A FEW MORE MINUTES?
OECD work on artificial intelligence http://www.oecd.org/going-digital/ai/
ABOUT THE AUTHOR
Rod Freeman is an international products lawyer. He is a partner in the London office of Cooley, one of the world’s leading tech-focused law firms. Rod is routinely ranked as a leading product liability lawyer in the major legal directories, where he is described as “the outstanding go-to specialist for product liability matters” and “without a shadow of a doubt the most knowledgeable product liability lawyer in the European space”. He works alongside the world's leading tech and innovative companies to help them solve their product-related issues, protect their reputations, defend their products and build innovation-based success.