French President Emmanuel Macron on #ArtificialIntelligence

“#AI will raise a lot of issues in #ethics, in #politics, it will question our #democracy and our collective preferences.” #healthcare #mobility Must-read interview of #EmmanuelMacron on #ArtificialIntelligence in #Wired Magazine by Editor-in-Chief Nicholas Thompson

On Thursday, Emmanuel Macron, the president of France, gave a speech laying out a new national strategy for artificial intelligence in his country. The French government will spend €1.5 billion ($1.85 billion) over five years to support research in the field, encourage startups, and collect data that can be used, and shared, by engineers. The goal is to start catching up to the US and China and to make sure the smartest minds in AI—hello Yann LeCun—choose Paris over Palo Alto.

Directly after his talk, he gave an exclusive and extensive interview, entirely in English, to WIRED Editor-in-Chief Nicholas Thompson about the topic and why he has come to care so passionately about it.

Nicholas Thompson: First off, thank you for letting me speak with you. It was refreshing to see a national leader talk about an issue like this in such depth and complexity. To get started, let me ask you an easy one. You and your team spoke to hundreds of people while preparing for this. What was the example of how AI works that struck you the most and that made you think, ‘Ok, this is going to be really, really important’?

Emmanuel Macron: Probably in healthcare—where you have this personalized and preventive medicine and treatment. We had some innovations that I saw several times in medicine to predict, via better analysis, the diseases you may have in the future and prevent them or better treat you. A few years ago, I went to CES. I was very impressed by some of these companies. I had with me some French companies, but I discovered US, Israeli and other companies operating in the same field. Innovation that artificial intelligence brings into healthcare systems can totally change things: with new ways to treat people, to prevent various diseases, and a way—not to replace the doctors—but to reduce the potential risk.

The second field is probably mobility: we have some great French companies and also a lot of US companies performing in this sector. Autonomous driving impresses me a lot. I think these two sectors, I would say, healthcare and mobility, really struck me as promising. It’s impossible when you are looking at these companies, not to say, Wow, something is changing drastically and what you thought was for the next decade, is in fact now. There is a huge acceleration.

NT: It seems you’re doing this partly because it is clearly in France’s national interest to be strong in AI. But it also seemed in the speech that you feel like there are French or European values that can help shape the development of AI? Is that correct, and what are those values?

EM: I think artificial intelligence will disrupt all the different business models and it’s the next disruption to come. So I want to be part of it. Otherwise I will just be subjected to this disruption without creating jobs in this country. So that’s where we are. And there is a huge acceleration and as always the winner takes all in this field. So that’s why my first objective in terms of education, training, research, and the creation of startups is to streamline a lot of things, to have the adaptable systems, the adapted financing, the adapted regulations, in order to build champions here and to attract the existing champions.


But you’re right at the same time: AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as the US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.

NT: So the stakes here in your mind aren’t just French economic growth; they’re the whole value system that will be incorporated into this transformative technology the world over. And you want to make sure that the values you have, your country has, your continent has, are involved in that?

EM: Sure, exactly. I want to create an advantage for my country in artificial intelligence, directly. And that’s why we have these announcements made by Facebook, Google, Samsung, IBM, DeepMind, Fujitsu, who chose Paris to create AI labs and research centers: this is very important to me. Second, I want my country to be part of the revolution that AI will trigger in mobility, energy, defense, finance, healthcare and so on. Because it will create value as well. Third, I want AI to be totally federalized. Why? Because AI is about disruption and dealing with impacts of disruption. For instance, this kind of disruption can destroy a lot of jobs in some sectors and create a need to retrain people. But AI could also be one of the solutions to better train these people and help them to find new jobs, which is good for my country, and very important.

I want my country to be the place where this new perspective on AI is built, on the basis of interdisciplinarity: this means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line. I think privacy has been a hidden debate for a long time in the US. Now, it emerged because of the Facebook issue. Security was also a hidden debate of autonomous driving. Now, because we’ve had this issue with Uber, it rises to the surface. So if you don't want to block innovation, it is better to frame it by design within ethical and philosophical boundaries. And I think we are very well equipped to do it, on top of developing the business in my country.

But I think as well that AI could totally jeopardize democracy. For instance, we are using artificial intelligence to organize access to universities for our students. That puts a lot of responsibility on an algorithm. A lot of people see it as a black box; they don’t understand how the student selection process happens. But the day they start to understand that this relies on an algorithm, this algorithm has a specific responsibility. If you want, precisely, to structure this debate, you have to create the conditions of fairness of the algorithm and of its full transparency. I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm. I have to be able to tell French citizens, “OK, I encouraged this innovation because it will allow you to get access to new services, it will improve your lives—that’s a good innovation for you.” I have to guarantee there is no bias in terms of gender, age, or other individual characteristics, except if this is the one I decided on behalf of them or in front of them. This is a huge issue that needs to be addressed. If you don’t deal with it from the very beginning, if you don’t consider it as important as developing innovation, you will miss something, and at a point in time it will block everything. Because people will eventually reject this innovation.
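As an illustration of the kind of bias audit such transparency would make possible, here is a minimal sketch (hypothetical group labels and data, not the actual French admissions algorithm) that compares acceptance rates across applicant groups and flags large gaps for human review:

```python
# Illustrative only: a simple demographic-parity check of the kind an
# external audit of a selection algorithm might run. Groups and data
# below are hypothetical.
from collections import defaultdict

def acceptance_rates(decisions):
    """decisions: iterable of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in acceptance rate between any two groups."""
    rates = acceptance_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (applicant group, was_accepted)
sample = [("A", True), ("A", False), ("A", True),
          ("B", False), ("B", False), ("B", True)]

print(acceptance_rates(sample))  # roughly {'A': 0.67, 'B': 0.33}
print(parity_gap(sample))        # roughly 0.33 -> large gap, flag for human review
```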

NT: So the steps you’re taking to guarantee that are that all of the algorithms developed by the French government will be open, and that algorithms developed by any company getting money from the French government will also be required to be open?

EM: Yes.

NT: And is there a third step you’re doing to help guarantee this transparency?

EM: We will increase the collective pressure to make these algorithms transparent. We will open data from government and publicly funded projects, we will open access to these data, and we will favor and incentivize private players to make their algorithms totally public and transparent. Obviously some of them will say, there is a commercial value in my algorithm, I don’t want to make it transparent. But I think we need a fair discussion between service providers and consumers, who are also citizens and will say: “I have to better understand your algorithm and be sure that this is trustworthy.” The power of the consumer society is so strong that it gets people to accept to provide a lot of personal information in order to get access to services largely driven by artificial intelligence on their apps, laptops and so on. But at some point, as citizens, people will say, “I want to be sure that all of this personal data is not used against me, but used ethically, and that everything is monitored. I want to understand what is behind this algorithm that plays a role in my life.” And I’m sure that a lot of startups or labs or initiatives that emerge in the future will reach out to their customers and say, “I allow you to better understand the algorithm we use and its bias or non-bias.” I’m quite sure that’s one of the next waves coming in AI. I think it will increase the pressure on private players. These new apps or sites will be able to tell people: “OK, you can go to this company or this app because we cross-checked everything for you. It’s safe.” Or, on the contrary: “If you go to this website or this app or this research model, it’s not OK. I have no guarantee. I was not able to check or access the right information about the algorithm.”

NT: When you talk about how AI will transform democracy, do you imagine a day where you make decisions based on recommendations from AI-based algorithms, where there’s a system that tells you what a labor reform should be and you say, “OK?”

EM: At this point, I think it could help you. But it will never replace the way you decide. When you make a decision, it’s the result of a series of cross-checks. AI can help you because sometimes, when you pass a reform, you’re not totally clear about the potential effects, direct or indirect, and you can have hesitations. So it can help you to make the right decision. An algorithm is relevant for this part of the equation. For instance, on economic and social reforms, to have a clearer view about direct and indirect measurable effects. But on top of it, when you take a political decision, you need to have a part of personal judgment. That’s the quality of the decision maker, and artificial intelligence will never replace that. And there is a thing that AI could never replace, which is accountability and responsibility. Because it is his decision and he will be held accountable for it, a political leader could never say, “OK, I’m sorry, this decision was bad because it was a decision of an algorithm.”

NT: Let’s get back to disruption for a second. You’ve talked a lot about transportation, you talked about it in your speech. AI is going to massively disrupt transportation, and it’s going to make a lot of people lose their jobs as we go to driverless cars. It will create new jobs, but this is already an area where people in France have been protesting. There were railroad strikes this weekend, there were trucker strikes this fall. Aren’t you taking a huge risk by aligning yourself with a force that is going to disrupt an industry that has already been protesting like crazy?

EM: Look, I think in this country—and in a lot of countries—you have a tradition of controversy. I launched a series of reforms that a lot of people thought impossible to be conducted in France. So, I'm absolutely sure it's possible, when you explain to people, when you have the energy and determination, to pass such reforms. I’m certainly not reluctant to do so and I’m certainly not, I would say, upset or threatened by dealing with artificial intelligence and convincing my people of its rightful implementation. As consumers, they are already big fans of artificial intelligence. And big fans of innovative solutions. All the tech guys can tell you that the French market is a very good market. People love technology here. I think that’s why the overall philosophy I have stuck to from the very beginning of my mandate is to say: blocking changes and being focused on protecting jobs is not the right answer. It’s the people you need to protect. You do so by giving them opportunities and by training and retraining them again to get new jobs. Don’t block the change because it’s coming and people will accept it. But try to be at the forefront of change to better understand it and deal with it. Change can destroy jobs in the very short run, but create new ones in other sectors at the same time.

For me, one of the key issues of artificial intelligence is that it will probably reduce the most replicable and straining human activities. And naturally you will raise a whole range of other opportunities for people with low, middle and high qualifications. The big risk for our society is to increase opportunities only for very highly qualified people and, in a way, very low-qualified workers. It is especially necessary to monitor the qualification of the middle class, because they can be the most disrupted. If I take your examples, that would encompass taxi drivers, people working in the industry, or people working in highly repetitive tasks. So you have to train them either to change their sector of activity or to increase their qualification to work with a machine. We will need people working with machines.

For I do not believe that autonomous vehicles will exist without any drivers at all. For me, that’s pure imagination. You already have fully automated programs to drive planes. Therefore we technically could have planes with no pilots. But you still have two pilots in every plane. Even if almost everything is automated. That’s because you need to have responsibility, precisely. So what we will reduce with autonomous cars is the number of risks. What you will reduce is how painful it is to be a driver for a long period of time; but you will need people to make the critical choice at critical moments for autonomous vehicles. I’m almost sure about that. So AI will change the practice but it will not kill transportation jobs in many cases.

Bottom line, my point is: I can convince my country about change precisely because I embrace it. My role is not to block this change, but to be able to train or retrain people for them to get opportunities in this new world.

NT: Got it. I want to ask you a military question. I know that the UN has had discussions on restrictions on lethal autonomous weapons. Do you think machines—artificial intelligence machines—can ever be trusted to make decisions to kill without human intervention?

EM: I’m dead against that. Because I think you always need responsibility and assertion of responsibility. And technically speaking, you can have in some situations, some automation which will be possible. But automation or machines put in a situation precisely to do that would create an absence of responsibility. Which, for me, is a critical issue. So that’s absolutely impossible. That’s why you always need a human check. And in certain ways, a human gateway. At a point of time, the machine can prepare everything, can reduce uncertainties, can reduce until nil the uncertainties and that’s an improvement which is impossible without it, but at a point of time, the go or no-go decision should be a human decision because you need somebody to be responsible for it.

NT: Let me ask you about the national competition in artificial intelligence. Elon Musk tweeted some months ago: “Competition for AI superiority at national level most likely cause of World War 3 in my opinion.” Do you think Musk is overstating it? Or do you think that this is going to get very intense, particularly between the United States and China?

EM: I think it will become very intense. I will not be so pessimistic, because I think that the core basis of artificial intelligence is research. And research is global. And I think this artificial intelligence deals with cooperation and competition, permanently. So you need an open world and a lot of cooperation if you want to be competitive. And at a point of time, in some issues, you need competition. But I think you will have to rethink a sort of sovereignty. I addressed that in my speech today. Artificial intelligence is a global innovation scheme in which you have private big players and one government with a lot of data—China. My goal is to recreate a European sovereignty in AI, as I told you at the beginning of this discussion, especially on regulation. You will have sovereignty battles to regulate, with countries trying to defend their collective choices. You will have a trade and innovation fight precisely as you have in different sectors. But I don't believe that it will go to the extreme extents Elon Musk talks about, because I think if you want to progress, there is a huge advantage in an open innovation model.

https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy 

