Values, Ethics & Collective Intelligence in the Age of AI

This OECD Forum 2019 background note was used to prepare speakers on the panel Values, Ethics & Collective Intelligence in the Age of AI, which took place at the OECD headquarters from 15:00-16:30 on Monday, 20 May. 

From the people we meet to the roads we take, a growing array of human decisions is being shaped, if not determined, by our digital devices. And from our ability to secure a loan to our chances of landing a job, these algorithmic decisions affect our lives in critical ways. In the face of this seemingly irreversible rise of Artificial Intelligence (AI), the OECD – driven by its mandate to promote Better Policies for Better Lives – seeks to define rules of the game for the common good. Yet how are we to ensure that AI becomes “less Artificial, and more Intelligent”?

To be sure, AI brings huge potential benefits. By offering tremendous efficiency gains and expanding the frontiers of the possible, these technologies can deliver better services, foster innovation and increase human welfare in remarkable ways. Through their ability to learn from vast datasets, they can complement human abilities by managing complexity, improving decision-making and helping us address key global challenges, from better diagnosis and treatment of disease to new ways of meeting the Sustainable Development Goals (SDGs) and tackling climate change. Like steam power and electricity in their eras, AI can be regarded as a general-purpose technology with the potential to bring significant improvements to many areas of our economies and societies.

Nevertheless, the benefits of AI are matched by significant risks. Indeed, Artificial Intelligence can be used for the best, or for the worst. Facial recognition technologies can help identify and arrest criminals, but they are increasingly used to target, profile and surveil millions of unaware individuals. Nor are malevolent intentions a necessary condition for detrimental outcomes. Learning from all-too-human datasets, AI-driven decisions can replicate and amplify societal biases – thereby exacerbating discrimination, racism and inequality in all their forms – beneath a false air of algorithmic objectivity.

It is therefore crucial to guide the development of AI technologies and their uses, and to ensure that key actors do not operate in a vacuum. Private actors set up ethics committees, but may be confronted with difficult decisions when ethics collides with financial incentives. Researchers must seek the approval of institutional review boards, yet the reference frameworks of those boards may no longer be adequate to anticipate the potential outcomes of advances in data science.

Societies as a whole must take these decisions. If machines are to take on a growing set of tasks and responsibilities traditionally performed by humans, their performance cannot be evaluated through the sole prism of efficiency. Whether taken by humans or algorithms, decisions entail trade-offs: just as the distribution of limited resources requires prioritisation, criteria such as fairness, safety, transparency and liability must be brought into the equation. To do so, engineers will need to work alongside ethicists and social scientists, who are best suited to assess the moral and societal implications of these innovations.

Yet the very definition of what constitutes good performance ultimately rests on common agreement about the ends we wish to attain. And that is where the ethical conundrum begins: societal and cultural preferences vary. Autonomous vehicles are set to hit the road, but the largest survey of its kind, Moral Machine, revealed that when faced with an inevitable accident, people across the globe disagree over who should bear the brunt. This is further complicated by the fact that technological innovation does not stop at national borders.

In the end, developing trustworthy AI may primarily require us to make good use of our own collective intelligence. Initiatives such as Moral Machine, or the Finnish “1 percent” AI scheme – which aims to equip citizens with the information they need to weigh in on how the country should invest in and regulate AI – provide important pathways to do so. Likewise, international co-operation will be paramount: in its absence, firms and states alike will be pressured into an ethical race to the bottom by competitors not bound by the same moral standards.

We shape our tools, but they also shape us in return. Increasingly, AI technologies are teaching us to jump through hoops, one CAPTCHA at a time. This trend is in urgent need of reversal. If we are to let these technologies exercise ever-growing responsibilities, we must ensure that they can be trusted to act in ways that we understand and agree with. Human values and ethics have always been up for debate; yet they may never have been more needed than in the age of AI.


Questions for the panel's consideration include:

  • How can we effectively encourage the cross-disciplinary, cross-sector collaboration that is key to the development of trustworthy AI?
  • Beyond embracing an interdisciplinary approach, can stakeholders – citizens included – harness their collective intelligence to shape the development of AI?
  • Is it possible to ensure that AI is developed and used for the good of all, despite the fact that different societies and cultures have very different conceptions of what "good" even means?
  • What are the advantages and limitations of different approaches to guiding the development and use of AI, from hard regulation through to informal norm setting?

OECD Forum 2019: World in EMotion

Banner image: McDobbie Hu on Unsplash
