AI regulation in Europe? What consumers have to say


Artificial intelligence is already present, in one way or another, in many products and services we use on a daily basis. AI is being used to decide whether consumers can take out insurance, it runs in the background of self-driving cars, smart home assistants depend on it, and it powers chatbots for all types of companies.

It is fair to say that it will transform society and consumers' lives. Because of the sometimes drastic and all-embracing impact that AI will probably have on our markets and society, policy makers have started weighing whether and how to regulate it. For us, the starting point of the regulatory debate is two-fold: to examine whether consumer rights can be applied effectively to AI, and to better understand consumers' awareness, concerns and expectations of it.

Together with our member organisations, we therefore carried out a cross-country consumer survey in November and December 2019 in Belgium, Denmark, France, Germany, Italy, Poland, Portugal, Spain and Sweden. The one-sentence summary of the results would be: consumers see the potential of artificial intelligence but raise serious concerns. Indeed, the potential is certainly there: 91% of respondents perceive AI to be useful, for instance to predict traffic accidents (91%), health problems (87%) or financial problems (81%). The flipside, however, is that when asked about the added value of AI services they have encountered, respondents do not give high marks: 45%, for instance, say that AI in home virtual assistants delivers no added value at all.

In its White Paper from February of this year, the European Commission identified two indispensable elements to help AI thrive: excellence and trust. When we look at the trust part, the results of our survey are not reassuring. In Belgium, Italy, Portugal and Spain, most people (64%) agree or strongly agree that companies are using AI to manipulate consumer decisions. In France, Denmark, Germany, Poland and Sweden, 52% of respondents (strongly) agree. [1]

AI is transforming the relationship between consumers and companies, a relationship in which the trader has the upper hand. Traditionally, consumer law has been established to even out this imbalance. This has, for example, resulted in information obligations or the establishment of a list of prohibited unfair commercial practices. The crux of AI, though, is that the product or service consumers are confronted with changes as the algorithm adapts. Likewise, a trader might not be able to predict what its self-learning chatbot might recommend to a consumer (for instance, in terms of an insurance product) or what a self-driving car or a robot might do in a certain situation.

It is essential, therefore, that a thorough risk assessment is carried out before an AI product is placed on the market, and that lifelong monitoring and risk management procedures are put in place. Human oversight is essential, as is the ability of authorities to carry out compliance checks.

Also on the Forum Network: Automation Shouldn’t Always be Automatic: Making Artificial Intelligence Work for Workers and the World by Daron Acemoglu, Elizabeth and James Killian Professor of Economics, MIT

We believe our entire legal framework to protect consumers needs to adapt to this changing environment. There is a need for legal obligations to regulate the development and use of AI to ensure it respects EU consumer rights as well as fundamental rights and values. One such legal obligation would be to give consumers the choice to reject decisions taken about them based on AI. And consumers agree. A clear majority of respondents think that they should have the right to say “no” to automated decision-making. The numbers are as high as 78% in Italy and Portugal and 80% in Spain.

The European Commission will propose rules on AI in 2021. However, a fierce debate is being waged about what type of AI should be regulated and what obligations these rules should introduce, or whether rules are needed at all. The latter is the favourite option of those who claim that otherwise EU companies might fall behind the US and China. In this context, it is good to know what consumers think. Our survey has brought to light that in Belgium, Italy, Portugal and Spain, more than half of respondents (51%) disagree or strongly disagree that current regulation is adequate to efficiently regulate AI. In all nine countries, less than 20% feel that current rules can adequately protect them from the potential harm AI poses.

Soon policy makers in the EU will have to decide how they want to regulate and steer the deployment and use of AI in Europe. There is a lot at stake — for companies, consumers and society at large. This is the right moment for the European Commission, MEPs and Member States to listen to consumers’ expectations.

But in an ever more globalised and digitalised world, European consumers' well-being depends not only on the EU's own policies and regulations but also on international standards and co-operation. It is therefore very welcome that, already in 2019, the OECD issued its recommendations and principles for responsible stewardship of trustworthy AI, which require, inter alia, that all AI actors respect the rule of law, human rights and democratic values, such as freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness and social justice.

The OECD’s principles and the EU’s upcoming legal framework for AI will hopefully act as lodestars to establish fair and human-centred AI policies around the world.

[1] For some of the questions, respondents were asked to express whether they “strongly agreed”, “agreed”, “disagreed” or “strongly disagreed” with certain statements. For these questions, respondents in Denmark, France, Germany, Poland and Sweden also had the possibility to answer with “no opinion”. This option was not available to respondents of the postal questionnaire in Belgium, Italy, Portugal and Spain.
