This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address the COVID-19 crisis, discussing and developing solutions now and for the future. Aiming to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge, opinions expressed do not necessarily represent the views of the OECD.
Originally delivered as an intervention at the OECD Parliamentary Group on AI meeting, which took place on 2 December 2020.
The COVID-19 pandemic continues to affect our lives, with direct implications for our health as well as indirect social and economic effects shaping the well-being of societies. These testing times have changed the focus of all policy-making discussions. We continue to learn by experience the immense value of good health, as well as the importance of a publicly provided and universally accessible healthcare system. We are reminded of our right to health, both individually and collectively. We are reminded of the necessity of a rights-based system and of taking a holistic approach.
On the other hand, as humanity scrambles to find a solution to the pandemic, we are also reminded of the critical role technology could play in dealing with this multifaceted crisis. The pandemic has revealed the need to have a truly “smart healthcare system”, which learns to optimise the capacity of healthcare services, anticipate disruptive events and properly manage medical supplies, personnel and patient flows.
Artificial intelligence (AI) has already lent a hand in this direction: it has been used to track the spread of the disease; manage hospital capacity; identify high-risk patients; develop drugs and vaccines; and, more recently, detect new variants of the coronavirus.
“Testing and tracing” apps have been among the most visible and publicly debated uses of AI in healthcare. These apps have put the spotlight on striking a balance between AI’s potential to serve the individual and collective interest in health, and the protection of our individual rights. On the one hand, these apps are framed as important tools to control the spread of the virus, contributing to the collective interest and providing valuable information for designing strategies to exit lockdown. On the other hand, without appropriate regulatory frameworks placed firmly within democratic governance structures, they risk personal data either being commercialised for profit or being abused by state authorities. In either case, AI risks enabling human rights violations.
Health and/or privacy? Substitutes or complements…
Overall, the COVID-19 crisis has shown with stark clarity both the advantages and the dangers of artificial intelligence in healthcare; indeed, we stand at a critical juncture. Will surveillance for the sake of health purposes be governed through citizen empowerment, or will it lead to a totalitarian shift? Will co-operation, solidarity and multilateralism rise to the task or will isolationist reflexes sharpen? Both questions are relevant in any discussion of AI and healthcare. The former relates to the regulatory framework and democratic governance structures for the protection of human rights. The latter relates to whether AI in healthcare services will be driven by collective interest or by individual profit-seeking objectives.
Clearly, health and privacy can never be substitutes for each other; their complementarity is undeniable. Any regulatory framework has to take this into account, providing for both health and privacy rather than forcing a choice between them. Only then will public trust in AI be enshrined, only then will technology serve the welfare and well-being of societies.
There are several existing frameworks developed by the Council of Europe, among others, that serve as a strong base for the checks and balances necessary for such a regulatory framework. These include the European Social Charter, which clearly states the “right to health” in Article 11; the Oviedo Convention; and Convention 108+, which ensures the protection of personal data and privacy, to name a few. However, there is a need for a “dedicated legal instrument” governing AI, one that takes into account its specific characteristics and that is preferably binding and global in reach. The global pandemic has reminded us of how important multilateralism and collaboration are in dealing with global-scale events. As such, in a recent Recommendation (Rec 2185 (2020)), for which I was the rapporteur, the Parliamentary Assembly of the Council of Europe (PACE) called for such an instrument, one that would lay down benchmarks in areas such as privacy, confidentiality, data safety, informed consent and liability.
The need for a strong regulatory framework…
These are all areas that bear significant human rights violation risks as well as significant health risks, and require embedding checks and balances within our regulatory frameworks. Both the formulation of this regulatory framework, and its content, must be based on principles of inclusivity, accountability and human rights.
The intelligence inherent in AI comes from the ability of machines to learn, where data and algorithms are an integral part of this process. The outcome of AI depends on the quality of data and algorithms. Data should be unbiased, the ownership of data should be clearly defined in the legal framework and algorithms should be transparent enough to identify the liability of stakeholders. The responsibilities of all stakeholders need to be clearly delineated in order to prevent damage, and to repair or compensate for harm in the worst-case scenario. A proper regulatory framework would ensure all of these properties for data, algorithms and indeed the whole of the AI process.
Furthermore, the framework regarding AI should be based on multi-stakeholder accountability, independent oversight, and adequate evaluation of socio-economic and human rights impacts. As such, governance structures that allow for sovereign evaluation, screening and certification of AI applications for health care services at the national level are critical to ensuring their safety and compatibility with rights. These processes should be handled by independent authorities that are neither politically nor commercially driven.
Such public certification of AI applications would not only build public trust, but would also allow users to give truly informed consent. Rather than only receiving information about an AI application, users would be empowered to understand the implications of their decision before they “click and consent”. As such, public certification could be a source of concise information, alongside other mechanisms, that ensures informed consent of patients or health care professionals who will be the end users of AI.
The pandemic has also been a clear reminder of the critical role of the public sector, especially in health care services. To date, the private sector has piloted most of the research and development of AI applications for health care while the public sector has been in the back seat. Therefore, we are reminded of the urgency of fostering strong public sector involvement. There is a need to resist the commercial capture of public health through AI. For this reason, we should encourage national public health-care authorities to adopt a strategic approach, co-ordinating digitalisation policies, research and investment, as well as the management and exploitation of personal data. Seeking a healthy balance between individual, business and public interests will ensure the protection of our right to health.
For the efficiency and safety of AI-driven health applications, we need to ensure that technology never fully replaces humans, and that all health care decisions are validated by adequately trained professionals. Indeed, AI is only as good as the data, human capital and expertise of the interdisciplinary team involved: teams that bring together people who understand biomedicine, biology and population models; people who know infections and virology; and people who understand computing, all working in unison.
Any regulatory framework should ensure these multifaceted quality certifications: regarding the composition of the teams, the nature of the data and the operation of the algorithms. Had a trusted and well-defined regulatory framework already been in place, AI might have had a much larger positive impact on the management of this pandemic, and the public’s concerns regarding the misuse and abuse of data by states or the private sector would have been mitigated. This experience points to the need to speed up the work, both optimising solutions for the current pandemic and preparing for such events in the future. The COVID-19 outbreak has shed light on the most critical aspects of this much-needed regulatory framework. Such a framework should define the respective liabilities of all stakeholders. It should put in place the necessary conditions and guarantees so that, while working towards the collective interest, we also protect human rights. It should ensure that data and algorithm quality are guaranteed to prevent deepening existing inequalities, and that technology for monitoring and tracking is only used temporarily, not as a permanent fixture.
This is a call to speed up the formulation of a legal and regulatory framework that can keep pace with technological development. We need a new era of citizen empowerment in health data, built on a foundation of mutual support and global solidarity. Only then will we not only overcome this pandemic but also be ready to tackle the next one.