Shaping human-centered artificial intelligence

The A.Ideas series presents opinions and views on artificial intelligence emerging from discussions at the OECD Conference "AI: Intelligent Machines, Smart Policies". David A. Heiner and Dr. M-H. Carolyn Nguyen explain Microsoft's vision for AI: to amplify human ingenuity with intelligent technology.

This article is part of the Forum Network series on Digitalisation and Trust.

Amplify human ingenuity with intelligent technology

Thimi is a tiny village in Nepal, one of the world’s poorest countries. Most of the people who live there subsist on marginal lands, relying on livestock to earn a living. Rajesh Ghimire and his wife, Sharadha, are one such family. They worked hard to build up a modest herd of 45 cows, goats and buffaloes. About seven years ago, a heat wave struck the region, triggering an outbreak of anthrax among their animals. About half died. Rajesh and Sharadha are still trying to recover economically. Recently, their 20-year-old niece, Melisha, saw an opportunity: she and her friends developed a mobile application (app) that uses artificial intelligence (AI) to notify village farmers, like her aunt and uncle, when their animals face a health risk. Melisha’s work is especially impressive because very few girls study science in Nepal and very few women work in tech. The app analyses real-time information from sensors strapped to the animals, monitoring their vital signs and activity (such as temperature and step count). Although the project is still in its infancy, the app has already identified anthrax in one cow early enough to prevent a wider outbreak on a farm.
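
The article does not describe how Melisha’s app works internally, but the basic idea of screening streaming sensor readings for unusual values can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the temperature values, window size and threshold are invented, and a real system would rely on a trained model rather than a simple statistical rule.

    import statistics

    # Purely illustrative: screen a stream of body-temperature readings (in °C)
    # from a collar sensor and flag an animal whose latest reading deviates
    # sharply from its own recent baseline. All values here are invented.

    def flag_abnormal_reading(readings, window=12, z_threshold=3.0):
        """Return True if the newest reading is far outside the recent baseline."""
        if len(readings) < window + 1:
            return False  # not enough history to judge yet
        baseline = readings[-(window + 1):-1]
        mean = statistics.mean(baseline)
        spread = statistics.pstdev(baseline) or 0.1  # avoid division by zero
        z_score = (readings[-1] - mean) / spread
        return z_score > z_threshold

    # Example: a stable baseline around 38.6 °C followed by a sudden spike
    temperatures = [38.6, 38.5, 38.7, 38.6, 38.5, 38.6,
                    38.7, 38.6, 38.5, 38.6, 38.7, 38.6, 40.4]
    if flag_abnormal_reading(temperatures):
        print("Alert: abnormal temperature - notify the farmer")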

“AI could double annual economic growth rates by 2035, and boost labour productivity by up to 40%”.

This is but one example of Microsoft’s vision for AI: to amplify human ingenuity with intelligent technology. Microsoft is working toward shaping “human-centered” AI—where human intelligence is augmented through advances in computer vision, speech recognition, natural language processing and machine learning. AI developed in this manner can help people achieve more in nearly any field of human endeavor.

The promise of AI is that the knowledge gained from applying analytics and machine learning to today’s wealth of available data will enhance decision-making and lead to better outcomes. Recent research by Accenture estimated that AI could double annual economic growth rates by 2035 and boost labour productivity by up to 40%. Today’s AI technology can already save thousands of lives and improve the performance of systems across all sectors. In healthcare, for example, AI can reduce hospital readmissions, enhance the quality of care for managing chronic disorders, and catch preventable medical errors (the third leading cause of death in the US) by recognising deviations from best clinical practices.

Microsoft is investing heavily in the research and development (R&D) of AI technologies to realise our vision. For example, in August our researchers announced an AI system that can recognise words as they are spoken more accurately than most people—an industry milestone. These capabilities can be integrated into accessible applications such as Seeing AI, a free Microsoft mobile app that helps people who are visually impaired engage more fully in professional and social contexts. Its features include recognising friends, describing people and their emotions, identifying products from store barcodes, and reading restaurant menus aloud. The biological computation group at our research lab in Cambridge is collaborating with biologists, radiologists and other medical experts to use advanced computational methods to understand the behaviour of, and interaction between, cells, which will help to “debug” cancer in a person and provide personalised treatment. AI can also help us better address environmental concerns. Microsoft recently launched AI for Earth—a new initiative dedicated to sustainability challenges, including agriculture, water, biodiversity and climate change. We will invest up to $2 million in qualified initiatives and offer non-governmental organisations and other groups access to AI tools, services and technical support.

We are enabling broad access to AI capabilities through a variety of programmes: Microsoft Cognitive Services, the Microsoft Cognitive Toolkit, the Bot Framework, and Azure Machine Learning. These technologies enable software developers, enterprises and others to apply advanced AI techniques in building their own solutions. We have also open-sourced the Microsoft Cognitive Toolkit, a set of capabilities used by Microsoft engineers to achieve breakthroughs in AI, and are contributing data sets to advance the development of machine comprehension of text.
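
To give a sense of what building on these services looks like, here is a minimal sketch of a call to the image-description capability of the Computer Vision service in Microsoft Cognitive Services, the kind of caption generation behind apps like Seeing AI. The region, API version, subscription key and sample URL below are placeholders chosen for illustration, not details taken from this article.

    import requests

    # Minimal illustrative call to the Cognitive Services Computer Vision API.
    # The region, API version and key below are placeholders, not real values.
    SUBSCRIPTION_KEY = "<your-cognitive-services-key>"
    ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/vision/v2.0/analyze"

    def describe_image(image_url):
        """Request a natural-language caption for an image at a public URL."""
        response = requests.post(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
            params={"visualFeatures": "Description"},
            json={"url": image_url},
        )
        response.raise_for_status()
        captions = response.json()["description"]["captions"]
        return captions[0]["text"] if captions else "No caption returned"

    if __name__ == "__main__":
        print(describe_image("https://example.com/photo.jpg"))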

How can we build trust in AI?

Like other technological advances over the past 200 years, AI will raise a number of new societal challenges, and it is imperative that we address them proactively so that everyone can benefit from the innovations AI offers. A human-centered approach can only be realised if relevant stakeholders from business, government, civil society and the research community collaborate on a common value framework. Microsoft’s CEO, Satya Nadella, offered some thoughts to start this dialogue last year. We believe that AI should: be designed to assist humanity; be transparent; maximise efficiencies without destroying human dignity; respect privacy; have algorithmic accountability so that humans can undo unintended harm; and guard against bias. Complementing these values, empathy, education, creativity, judgement and accountability should guide all those developing, deploying and using these technologies.


As AI plays an increasing role in people’s lives online and offline, appropriate design, economic and social choices are essential for creating technologies that individuals and society at large deem trustworthy. The computational power and learning capabilities of machines must be coupled with the sensitivity and emotional intelligence of humans. We believe the starting point for building trust in AI is to take a human-centered approach, with system designs grounded in wide-reaching values: systems must operate safely and reliably; be fair and treat everyone with dignity and respect; be secure and protect privacy; be inclusive by benefitting and empowering everyone; be transparent and provide explanations of key elements; and be accountable for how they operate.

A common vision, with shared principles, will enable all stakeholders involved to shape the future of AI. A first step was taken in September 2016, when Amazon, DeepMind/Google, Facebook, IBM and Microsoft came together to form the Partnership on AI (PAI) “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.” Since then, a range of companies, civil society organisations, researchers, and others from the US, Europe and Asia have joined. 


As AI is still at a nascent stage of development, open dialogue between government, business, civil society and academic researchers is essential to shaping its evolution and realising its potential benefits. Working together, we can identify and prioritise issues of societal importance as AI evolves, enable the sharing of best practices and motivate further R&D as new issues emerge. We believe that policy discussions should aim to promote the broad development and deployment of AI across sectors, as well as continued innovation, encouraging human-centered outcomes. We believe policymakers should:

  • Continue to convene dialogue among government, business, researchers, civil society and other interested stakeholders on how AI can be shaped to maximise its potential and mitigate its risks, including the adoption of practical guiding principles for human-centered AI;
  • Stimulate the development and deployment of AI across all sectors and businesses of all sizes, including the application of AI to public and societal challenges, such as empowering underserved communities and people with disabilities, and the adoption of AI in the public sector;
  • Develop privacy laws with a view toward enabling the benefits of AI while preserving privacy;
  • Invest in skills development initiatives for people at all stages of the job continuum;
  • Encourage the sharing and promulgation of best practices in the development and deployment of human-centered AI, through industry-led organisations such as PAI;
  • Fund short- and long-term multi-disciplinary R&D in human-centered AI technologies and in how AI can provide insights into its potential socioeconomic impact;
  • Develop shared public data sets and environments for AI training and testing, to enable broader experimentation and comparison of potential solutions, including approaches to addressing ethical concerns.

Human-centered AI can help create a better world, and it is important to continue working together to actively realise this future. An inclusive approach that values multi-disciplinary and multi-stakeholder contributions and actions can foster an open and collaborative model of policy development that would best equip society to address these rapidly evolving technologies.


ABOUT THE AUTHORS

David A. Heiner is Strategic Policy Advisor at Microsoft Corporation. Dave focuses on the societal implications of the rapid development of computer technology, particularly artificial intelligence. For many years Dave served as a Vice President and Deputy General Counsel at Microsoft, with responsibility at various times for privacy, competition law, telecommunications law, accessibility law, human rights, online safety and the company’s work with international standard-setting organizations. 

Dr. M-H. Carolyn Nguyen is a Director of Technology Policy at Microsoft, focused on policy issues related to internet governance, the digital economy, and artificial intelligence. Her past activities have included policy initiatives on open/big data, machine learning, data ethics, the internet of things, intelligent systems, privacy, and their effects on existing social, economic, and policy frameworks. She has worked at the intersection of these disciplines, developing an evidence base to encourage a more holistic approach to policy. 
