The world is moving so fast that tomorrow seems like yesterday. Nowhere are the changes coming faster than in the science of artificial intelligence. Applications that sounded like science fiction only a decade ago are now part of our daily lives.
A radical transformation of our societies
We encounter AI applications multiple times every day. “Learning algorithms” greet us on our digital devices, influence what we buy and guide the news we watch and read, and sometimes even write the articles. Soon driverless cars and trucks will flood our streets and highways.
On a far larger scale, in the coming years AI will radically transform the future of work, the way in which economies function and probably the manner in which we fight our wars.
The promise of AI is tremendous. Machine learning in particular allows us to process data at unprecedented scales. We can see patterns, detect problems earlier and allocate resources more efficiently.
For instance, consider the field of medicine. Researchers found that AI algorithms combined with data from pathologists lowered the error rate for detecting breast cancer from 3.5% to 0.5%, potentially saving untold numbers of lives.
The potent combination of big data and AI could help avert famines, track the spread of diseases and identify migration trends. Analysing the explosion of information in today’s digital world enables businesses to predict customer behaviour, improve productivity and spot emerging economic and social trends.
AI, however, is not a silver bullet. There is a danger of overpromising. There is an equal danger in hyping the fear factor surrounding AI. Both grow out of the difficulty most of us have comprehending not just the potential of AI but how the science behind it works.
There is no stopping the evolution and rise of artificial intelligence. That shouldn’t even be the goal. Rather, policy makers, regulators, business leaders, AI researchers and the public should be asking what sort of framework is needed to promote the ethical development of artificial intelligence and safeguard against potential abuses.
The complexity of artificial intelligence may well require that safeguards be integrated into the design process; otherwise they will never keep up with the speed of development. That means industry has a vital role to play in creating guidelines that are both fair and legally sound.
But the implications of AI are so widespread that there must be a robust and inclusive global conversation. Issues like accountability, transparency, privacy and preventing discrimination are too big to be left to commercial enterprises, no matter how well-meaning they might be.
An international dialogue has been initiated
We need a wide debate, and fortunately the discussion is under way. Industry groups, pioneering businesses, academics and think tanks are studying the issues from every angle. Progressive governments are also recognising the need for pro-active leadership.
For instance, the G20 and G7 have both taken up studies of the digital transformation, including the impact of AI. Japan has emerged as a strong voice calling for an international dialogue with all stakeholders.
At the G7 ministerial on Information and Communications Technology in April 2016 in Takamatsu, Japan, the Japanese government drew attention to the need for high-level principles and safeguards addressing the benefits and risks of AI networks. The same issues were on the table last March in Tokyo at the Artificial Intelligence Network Society international forum.
Most recently, the G7 Declaration mandated the OECD to help ensure that an open, updated, informed and engaged dialogue with relevant stakeholders raises awareness of the need for a human-centric approach to AI and channels efforts towards socially beneficial AI.
Here at the OECD, the potential of artificial intelligence is a key component of the two-year “Going Digital” project and a focus of our latest Digital Economy Outlook. We are using the full resources of the organisation to help create policies that will maximise the opportunities of the digital transformation and minimise the risks.
As part of that exercise, the OECD will hold a conference, “AI: Intelligent Machines, Smart Policies,” in Paris on October 26 and 27. The goal is both to inform our future work at the OECD and to initiate a broad engagement with the policy, research, academic and public sectors.
No one wants to stifle the vital research and creativity that will help fulfil the promise of AI. No one wants to handicap research at the cutting edge of this technology.
But we should not proceed without considering the values that should guide AI design. We should not proceed without trying to reach a global consensus on the type of institutional framework necessary to promote and control artificial intelligence as it moves deeper into the social and economic mainstream.
The challenge is imminent. If we wait too long, tomorrow will be yesterday.
ABOUT THE AUTHOR
Douglas Frantz is OECD Deputy Secretary-General and Project Leader for the OECD’s Going Digital project. He actively contributes to the strategic direction of the OECD’s development agenda, and oversees the Organisation's global relations portfolio, as well as the activities on science, technology and innovation.