The Other Half of the Truth: Staying human in an algorithmic world

Jun 07, 2019

This article is part of the Forum Network series on Digitalisation and reflects on discussions at OECD Forum 2019.

The hype and hope around machine learning and artificial intelligence (AI) have increased dramatically in recent years. Intelligent systems are being deployed in a vast range of sectors, from health, finance and criminal justice to insurance and advertising. Every major "big tech" company is investing heavily in these technologies. But what is the source of all this hype?

More accurate, efficient, fairer and consistent decision-making usually tops the list of "big advantages". Unlike human decision-makers, their algorithmic counterparts do not tire, are not moody and cannot be bribed or persuaded. AI, the argument goes, will be objective in its decisions and treat people equally; it can help us make just decisions by taking the fallible human element out of the equation.

Photo by Wesley Tingey on Unsplash

While this sounds promising, it is only half of the truth. We have long been aware that biased and discriminatory decision-making is one of the biggest challenges in AI. Algorithms learn from historical data and thus also learn from our past. The unfairness and injustice of our world is reflected in the data fed into these algorithms. Historical data in criminal justice, recruitment and financial services reflect the past, including the bad decisions we have collectively made. This means that AI can replicate our biases, reinforcing existing stereotypes and perhaps creating new ones.


While this is neither new nor surprising, what is actually astonishing is one of the typical reactions to these observations: “Yes, of course, but this has always been the case. Humans are biased and so are algorithms. Humans can be bad decision-makers and so can AI. Why do we expect more of algorithms than we expect of people?”

"But this has always been the case" is one of the least ambitious sentences in human history. The argument uses human decision-making as the benchmark against which algorithmic decision-making should be evaluated. It implies that if technology is not exacerbating the current situation, it is not problematic. As long as the status quo is upheld, innovation is deemed uncontroversial, even though we know that human decision-making is often deeply flawed.

Photo by Joanjo Pavon on Unsplash

Many troubling things have “always been the case” until we decided that it is time to change them. At certain points in history it has “always been the case” that women were not allowed to own property or work, that people were not allowed to vote because of the colour of their skin and that certain sexual orientations were (and sometimes still are) deemed “illegal”. Just because things have “always been the case” does not mean that they are ethical, socially acceptable or aspirational.

The value of innovation lies in improving society and fostering human flourishing. Innovation should aim to increase equality, equal opportunity and inclusion. It should bring us closer together, not set us further apart.


Of course, it is clear that technological fixes alone cannot solve societal problems. However, technology can be one part of the solution, and denying that AI can play any role in increasing fairness is just as problematic as overstating what it can do. Closing this door is defeatist and regressive, whether the refusal stems from the technical challenges ahead or simply from human bias. Just because this task is challenging and multifaceted does not mean that the path should be abandoned. If we deny AI a potentially important role in this quest, we also shirk the ethical responsibility to seek solutions.

And that is exactly what we, as the designers, purchasers, users and regulators of AI, must do: seek solutions. Thanks to a vibrant field of research on fairness in AI, we are constantly improving and finding new ways to detect and mitigate bias in algorithmic systems. The importance of this research cannot be overstated. Our aspiration and research trajectory should be to devise algorithms that make better decisions in the future. An algorithmic world gives us a chance to learn about ourselves and our blind spots, for better or worse. We must seize this opportunity to build a future where prejudice and discrimination "will never be the case".

OECD Principles on Artificial Intelligence

Continue the conversation and help us co-create the agenda

All of the discussions you have on the Forum Network inform our thinking for the OECD Forum event each year – join to respond to Sandra's article and comment to help us co-create the agenda.

Related Topics

Artificial Intelligence | OECD Forum 2019

Find out more about OECD Forum 2019: World in EMotion

Banner image: Simon Wijers on Unsplash

Sandra Wachter

Associate Professor and Senior Research Fellow in Data Ethics, AI, Robotics and Internet Regulation, University of Oxford

1 Comment

Asanga U. Ranasinghe 23 days ago

I'm really surprised to hear that people actually respond to AI bias with "this has always been the case". That is very disturbing.


Technology should definitely assist humanity and help it flourish, exactly as the author of this article says.