This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address the COVID-19 crisis, discussing and developing solutions now and for the future. The series aims to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge; opinions expressed do not necessarily represent the views of the OECD.
Join the Forum Network for free using your email or social media accounts to share your own stories, ideas and expertise in the comments.
Originally delivered as an intervention at the OECD Parliamentary Group on AI meeting, which took place on 2 December 2020.
There will be no artificial intelligence (AI) revolution if we are not able to earn our citizens’ trust. To earn it, I’d like to share some suggestions that come from my time as Minister in charge of public sector modernisation in Portugal, and which now help me in discussions on AI as a Member of the European Parliament.
First of all, we should start using AI applications that have a positive impact. Why not use public services to lead the way? Health care, education, communication with tax authorities and social services—they can all be improved using AI.
In Portugal there are many examples of what we are doing in partnership between AI researchers and our public services. For example, we are using employment data to detect patterns of long-term unemployment and flag people at risk, so that we can offer them personalised training. Another project, “Decoding the Legislation—Consumer Rights Law”, aims to develop natural-language processing algorithms that allow citizens to search and interpret consumer rights legislation through plain-language queries. By the end of the project, we expect to have a functional prototype where citizens can ask questions online about Consumer Rights Law and obtain answers automatically. Lastly, using daily data collected by the Gambling Inspection and Regulation Authority, we can analyse users’ online activity to detect pathological behaviours associated with addiction.
To control the risks associated with these projects, it is very important to promote a culture of experimentation in the public sector when developing new services. So, in 2017 we created LabX precisely with this goal in mind, not only for tech-based solutions but also for other disruptive innovations. Our experience can be replicated anywhere in the world, both to experiment with the use of AI in public services and to test its regulation through regulatory sandboxes.
While on the topic of trust, we must ensure citizens are able to understand the gist of what we are doing. We need to work on digital skills and literacy, and AI must be developed with the close involvement of civil society organisations.
In Portugal we have launched a programme called Incode, which covers all of these goals, from digital inclusion to high-level research. It is structured around five pillars — digital inclusion, education, upskilling and reskilling of public and private sector workers, specialisation, and research on AI — which reinforce one another. For instance, high-level research on AI helps to improve digital education, literacy, and training, which in turn raises awareness of the importance of investing in research and attracts talent.
A specific example of trust through inclusion is a programme we run that promotes “Women in Tech” careers. Beyond literacy and trust, this also ensures greater diversity, leading to better and more inclusive products.
Borrowing a metaphor from the clothing industry, we do not want to be just a code factory: we want to be an active part of the process of designing and developing new digital products and services. This requires citizens who have digital skills and who trust technology. Good policy choices can make a difference.