Some modest proposals for the governance of artificial intelligence

Regulation needs to limit risks and enable opportunities. Testing products before they go to market has worked well in pharma: why shouldn't it be possible in digital tech? Image: “Artificial Intelligence & AI & Machine Learning” by mikemacmarketing, licensed under CC BY 2.0.
The Forum Network is a space for experts and thought leaders, from around the world and all parts of society, to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.

What should be done to address the risks emerging from the development of new artificial intelligence while seizing the opportunities it offers? It must be said that other crises, such as climate change, social inequity, and biodiversity destruction, appear far more serious. But humans need to make decisions together to address those crises. And if artificial intelligence can help, it must be nurtured and guided in the right direction.

So what to do about the risks of artificial intelligence?

Risks and Opportunities

1. Define risks. There is currently no artificial intelligence capable of making autonomous decisions and acting independently of humans. But in human hands, existing technology is already very powerful. And dangerous. Current digital intelligence is capable of extraordinary results, helping humans with diagnostics, graphics, and translation. But the risks are just as significant.

- Disinformation. Already today, artificial intelligence can be used to produce misleading content that appears perfectly legitimate but contains falsehoods. The volume of fakes will inexorably multiply.

- Cybersecurity. Connected to the previous point is a noticeable increase in phishing attacks and in the production of malware. Seemingly interesting newsletters containing dangerous links may arrive in our inboxes. Scams, in turn, will multiply inexorably.

- Online hate. Artificial intelligence will allow the exploitation of online disputes to deepen, analysing the causes of conflict and producing increasingly precise attack strategies. Here, too, the main effect could be quantitative.

- Privacy. According to some observers, current artificial intelligence can be used to circumvent many of the existing barriers protecting personal data.

- Hallucinations and biases. It is well documented that current artificial intelligence systems used as "oracles" can provide wrong, invented, or distorted information. The flaw may, of course, prove insuperable. But their apparent eloquence can mislead, inducing too much trust in the answers of language models.

- Automated weapons. Problem-solving artificial intelligence applied to weapons of all kinds can be launched into enemy territory with an objective and allowed to operate without limits in order to maximise damage to opponents.

- Social control. Companies, states, and criminal organisations can develop systems that enable very widespread forms of social control. And it is perfectly possible for this to occur without due transparency.

2. Work carefully on opportunities. For now, language models seem particularly useful when they are trained on well-controlled and well-defined datasets. In this way they become extraordinarily useful: errors are naturally limited and productivity increases. Maximum effectiveness is achieved when humans use properly trained artificial intelligence in a conscious way. If this happens in expanding industries, it becomes a win-win situation; otherwise, the effects are redistributive. But there are many expanding sectors, and those related to building new sustainable production systems in particular are likely to grow in a sustained way for a long time.

3. Propose solutions. The European Commission is in the process of producing an AI Act that could set a precedent for the world. But the programme remains highly complex, and the trade-offs it will have to make are significant. Some proposals need to be analysed and discussed; I collect a few of them at the end of this post. But first, two further clarifications are needed.

4. Maintain proportions. General artificial intelligence that exceeds human capabilities - capable of forms of reasoning that surprise humans, of acting independently of human orders to reach goals, and even of developing goals of its own - is still a long way off. Conceptually, it is not impossible to achieve, given recent developments; science in this area is making great strides. But what we are observing and testing right now is nothing of the sort.

5. Know the context. Those who develop these technologies today operate in a largely unregulated context, moving ambiguously between science, non-profit, and for-profit enterprise, with knowledge that is rather difficult for most people to interpret. Very generous remuneration and strikingly unfair working conditions lead to flawed decisions on the design of innovations and on the correction of errors or defects. The incentives shaping design choices thus produce a mismatch between the enormity of investment directed toward increasing the power of artificial intelligence, and thus its risks, and the minimal effort devoted to aligning artificial intelligence with the values of social sustainability, and thus the pursuit of opportunities. Meanwhile, those who should regulate these activities operate in a rather slow, unfocused, and not necessarily competent environment, and regulators' incentives may be influenced by lobbying and by interests other than those of the citizens for whom they are supposed to work. Start-ups, non-profits, independent scientists, and producers of open-source solutions appear to be struggling in the face of the speed and depth of innovation from the few super-technology producers that have come forward in recent times. Proposals should take all this into account.

Proposals

1. Every AI product should have a label that allows anyone to know its origin. Watermarking is one such practice. Each label can contain a record of the chain of steps that led to the product's creation, as in the sketch below.
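To make the idea concrete, here is a minimal sketch, in Python, of what such a label could look like: a hash-chained record of production steps, where each entry commits to the previous one so the chain cannot be silently rewritten. The format, field names, and organisation names (ExampleLab, ExampleApp) are illustrative assumptions, not an existing standard such as C2PA.

```python
# A minimal sketch of a machine-readable provenance label for an AI product.
# The hash chain means each step commits to the one before it, so the record
# cannot be silently rewritten. Field names are illustrative assumptions,
# not an existing labelling standard.
import hashlib
import json
from datetime import datetime, timezone


def add_step(chain: list[dict], actor: str, action: str) -> None:
    """Append a production step, chained to the hash of the previous step."""
    step = {
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else "",
    }
    payload = json.dumps(step, sort_keys=True).encode()
    step["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(step)


label: list[dict] = []
add_step(label, "ExampleLab", "trained base model on curated dataset v1.2")
add_step(label, "ExampleLab", "fine-tuned model for translation")
add_step(label, "ExampleApp", "generated output delivered to end user")

print(json.dumps(label, indent=2))  # the record anyone can inspect
```

A real scheme would add cryptographic signatures so each actor's claim can be authenticated, but the chain-of-steps idea is the same.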

2. Every new artificial intelligence product must pass tests following a process agreed upon with dedicated and independent public agencies. The purpose of this procedure is to prevent technologies that are unnecessarily dangerous from coming to market. In pharmaceuticals, this practice has led to excellent results from the standpoint of innovation while minimising risks.

3. The testing procedure should offer a fast track for products presented as dedicated to performing certain precise functions, provided they demonstrate that they have been trained on controlled, qualitatively valid datasets (data in a standard format, so they can be verified by independent third parties, as in the sketch below). In such cases, the testing phases can be expected to be relatively quick.
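To illustrate what "a standard format verifiable by independent third parties" could mean in practice, here is a minimal sketch of a checksum-based audit, assuming the producer publishes a manifest listing each training file and its hash. The manifest layout and file names are hypothetical.

```python
# A minimal sketch of third-party dataset verification. It assumes the
# producer publishes a manifest in a standard format listing every file
# and its SHA-256 checksum; the manifest layout here is hypothetical.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading in 1 MiB blocks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()


def verify_dataset(manifest_path: Path, data_dir: Path) -> bool:
    """Check every file listed in the manifest against its published checksum."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for entry in manifest["files"]:
        if sha256_of(data_dir / entry["name"]) != entry["sha256"]:
            print(f"MISMATCH: {entry['name']}")
            ok = False
    return ok


# An independent auditor would run, for example:
# verify_dataset(Path("manifest.json"), Path("dataset/"))
```

Checksums only establish that the audited data matches what was declared; judging the quality of that data remains the agencies' task.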

4. Testing procedures for generalist technologies - non-transparent in their operation and in the data used to train them, geared toward generating unpredictable solutions, and potentially capable of acting autonomously from humans - will have to keep them in the research dimension for a long time, probably never letting them leave it, or only after very careful testing. The social, cultural, and ecological damage generated by technologies that claim to be geared toward general artificial intelligence will be the responsibility of their producers.

5. Data on the operation of these technologies must be made accessible, according to publicly agreed protocols, to independent scientists, who in turn must process it into forms freely understandable by the public.

6. The public sector needs to commit to increasing investment in education and literacy across the entire artificial intelligence pipeline.

The topic is so complex that one cannot avoid the conclusion that much more work is needed to get to the bottom of it with genuinely useful proposals. Proposals need to be concrete, and they should solve the problem of containing risks without limiting opportunities.

-----------------

Links

  • “We must slow down the race to God-like AI”, FT
  • AI Act: a step closer to the first rules on Artificial Intelligence, EuroParl
  • Test GPT-4, OpenAI
  • State of AI 2023, Stanford


