No Room for Complacency: Making ethical artificial intelligence a reality

Banner image: Shutterstock/Lukassek


This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address the COVID-19 crisis, discussing and developing solutions now and for the future. The series aims to foster a fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge; opinions expressed do not necessarily represent the views of the OECD.

Join the Forum Network for free using your email or social media accounts to share your own stories, ideas and expertise in the comments.


In April 2018, the House of Lords AI Select Committee I chaired produced its report AI in the UK: Ready, Willing and Able?, a special enquiry into the United Kingdom’s artificial intelligence (AI) strategy and the opportunities and risks afforded by it. It made a number of key recommendations that we have now followed up with a short supplementary report, AI in the UK: No Room for Complacency, which examines the progress made by the UK Government, drawing on interviews with government ministers, regulators and other key players in the AI field. 

Since the publication of our original report, investment in, and focus on, the United Kingdom's approach to artificial intelligence have grown significantly. In 2015, the United Kingdom saw GBP 245 million invested in AI. By 2018, this had increased to over GBP 760 million. In 2019, it was GBP 1.3 billion.


Artificial intelligence has been deployed in the United Kingdom in a range of fields, from agriculture and healthcare to financial services, customer service, retail and logistics. It is being used to help tackle the COVID-19 pandemic, and it also underpins facial recognition technology, deepfakes and other ethically challenging applications.

Our conclusion is that the UK Government has done well to establish a range of bodies to advise it on AI over the long term. However, we caution against complacency.

There are many bodies outside the framework of government that are to a greater or lesser extent involved in an advisory role: the AI Council, the Centre for Data Ethics and Innovation, the Ada Lovelace Institute and the Alan Turing Institute.

Co-ordination between the various bodies involved with the development of AI, including the various regulators, is essential. The UK Government needs to better co-ordinate its AI policy and the use of data and technology by national and local government.

A Cabinet Committee must be created; its first task should be to commission and approve a five-year strategy for AI. This strategy should prepare society to take advantage of AI, rather than feel it is being taken advantage of.


In our original report, we proposed a number of overarching principles providing the foundation for an ethical standard of AI for industry, government, developers and consumers. Since then, a clear consensus has emerged that ethical AI is the only sustainable way forward.

The United Kingdom is a signatory of the OECD Recommendation on AI, which embodies five principles for responsible stewardship of trustworthy AI, and of the G20's non-binding principles on AI. This demonstrates the United Kingdom's commitment to collaborate on the development and use of ethical AI, but it is yet to take on a leading role.

The time has come for the UK Government to move from deciding what the ethics are to instilling them in the development and deployment of AI systems. We say that our government must lead the way on making ethical AI a reality. To not do so would be to waste the progress it has made to date, and to squander the opportunities AI presents for everyone in the United Kingdom.

We call for the Centre for Data Ethics and Innovation to establish and publish national standards for the ethical development and deployment of AI. These standards should consist of two frameworks: one for the ethical development of AI, including issues of prejudice and bias; and the other for the ethical use of AI by policymakers and businesses. 

However, we have concluded that the challenges posed by the development and deployment of AI cannot necessarily be tackled by cross-cutting regulation. Users and policymakers need a better understanding of risk, and of how it can be assessed and mitigated in the context in which AI is applied; our sector-specific regulators are therefore best placed to identify gaps in regulation.


AI will become embedded in everything we do. As regards skills, government inertia is a major concern. The COVID-19 pandemic has thrown these issues into sharp relief. As and when the pandemic recedes, and the UK Government addresses its economic impact, the nature of work will have changed and there will be a need for different jobs and skills.

This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. 

The Government needs to take steps to bring the United Kingdom's digital skills up to speed, and to ensure that people have the opportunity to reskill and retrain so they can adapt to the evolving labour market that AI will bring about.

It is clear that the pace, scale and ambition of government action does not match the challenge facing many people working in the United Kingdom. It will be imperative for the Government to move much more swiftly. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.

The question remains whether the United Kingdom is still an attractive place to learn about and work in AI. Our ability to attract and retain top AI research talent is of paramount importance, and it would therefore be hugely unfortunate if the United Kingdom took a step back, with the result that top researchers became less willing to come here.

The UK Government must ensure that changes to the immigration rules promote—rather than obstruct—the study, research, and development of AI.

Find out more about the OECD's work on AI



Lord Tim Clement-Jones

Former Chair of House of Lords Select Committee on AI / Co-Chair of APPG on AI, House of Lords, United Kingdom

Lord Clement-Jones was made CBE for political services in 1988 and a life peer in 1998. He is the Liberal Democrat House of Lords spokesperson for Digital. He is former Chair of the House of Lords Select Committee on AI, which sat from 2017 to 2018; Co-Chair of the All-Party Parliamentary Group ("APPG") on AI; a founding member of the OECD Parliamentary Group on AI; and a member of the Council of Europe's Ad-hoc Committee on AI ("CAHAI"). He is a former member of the House of Lords Select Committees on Communications and the Built Environment, and a current member of the House of Lords Select Committee on Risk Assessment and Risk Planning. He is Deputy-Chair of the APPG on China and Vice-Chair of the APPGs on 'The Future of Work' and 'Digital Regulation and Responsibility'. He is a Consultant of DLA Piper, where previous positions include London Managing Partner, Head of UK Government Affairs and Co-Chair of Global Government Relations. He is Chair of Ombudsman Services Limited, the not-for-profit, independent ombudsman providing dispute resolution for the communications, energy and parking industries. He is Chair of the Council of Queen Mary University of London; Chair of the Advisory Council of the Institute for Ethical AI in Education; and Senior Fellow of the Atlantic Council's GeoTech Center.