The Cyberspace Arms Race: Artificial intelligence and cyber security

Every day we witness remarkable progress in the field of artificial intelligence but we have only scratched the surface of what it could be used for – or against. Dudu Mimran, Chief Technology Officer of the Deutsche Telekom Innovation Lab, Israel, explores the impact of AI on cyber security and how policy makers should react to this potentially unique technological revolution.

This article is part of the Forum Network series on Digitalisation & International Co-operation

A double-edged sword

AI has already entered the cyber security arena through defence tools, with most applications in the fields of malicious behaviour detection and incident prioritisation. AI-driven automation and decision making have the potential to handle a significant portion of the tedious tasks performed today, freeing employees to focus on responsibilities that require the touch of a human expert. Yet a large, industry-wide challenge for which AI can be a game changer is the scarcity of skills: ISACA predicts that there will be a shortage of two million cyber security professionals by 2019.
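To make the prioritisation idea concrete, here is a minimal, purely illustrative sketch of alert triage. The feature names and weights below are invented for illustration; a real product would learn such scores from labelled incident data rather than hand-picked weights.

```python
# Hypothetical, hand-picked weights for triaging security alerts;
# a deployed system would learn these from labelled incident data.
WEIGHTS = {"severity": 0.5, "asset_criticality": 0.3, "anomaly_score": 0.2}

def priority(alert):
    """Combine normalised 0-1 features into a single triage score."""
    return sum(WEIGHTS[name] * alert[name] for name in WEIGHTS)

def triage(alerts):
    """Sort alerts so analysts see the riskiest first."""
    return sorted(alerts, key=priority, reverse=True)

alerts = [
    {"id": "a1", "severity": 0.2, "asset_criticality": 0.9, "anomaly_score": 0.1},
    {"id": "a2", "severity": 0.9, "asset_criticality": 0.8, "anomaly_score": 0.7},
    {"id": "a3", "severity": 0.5, "asset_criticality": 0.1, "anomaly_score": 0.4},
]
ranked = [a["id"] for a in triage(alerts)]
```

However the score is produced, the shape of the task is the same: rank incoming alerts so that a scarce human analyst spends time on the riskiest ones first.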

[Figure: the 2016 cybersecurity skills gap]


Further, the prospect of cyber attacks powered by AI grows ever more likely. In the world of artificial intelligence the necessary tools, algorithms and expertise are widely accessible, and cyber attackers will not refrain from abusing them to make their intrusions more intelligent and faster. Such attacks will be fast, agile and of a magnitude we have not yet experienced: a new genre of AI-based defence tools will have to emerge.

So is artificial intelligence worrying?

The underlying theme in many AI-related discussions is fear, a very natural reaction to a transformative technology that has played a role in many science fiction movies. This fear can be broken down into two parts: the fear of change, which is inevitable as AI is indeed going to transform many areas in our lives; and the more primal fear linked to the potential emergence of soulless machines aiming to annihilate civilisation. I see these threats – or opportunities – staged over four phases: the short, medium, long and really long term.

The short term

Primary concerns can be found in the area of hyper-personalisation, that is, algorithms getting to know us better than we know ourselves. Large amounts of personal data distributed across different vendors and residing on their central systems not only pose a risk to our privacy, but also create greenfield opportunities for attackers to abuse and exploit us in unimaginable ways. It is already possible to build an intelligent machine that, through microtargeting, profiles citizens according to their demographic, behavioural and psychological attributes in order to deliver personalised messages. Disguised as adverts, the content and design can be automatically adapted to each person with the goal of changing the public state of mind, posing a serious threat to democracy.

The medium term

Whether it involves mental or physical effort, whatever can be automated eventually will be, disrupting many industries and dramatically impacting society. In many ways this will improve our lives, for example through faster and more accurate disease detection; but these changes will also have challenging side effects, notably by increasing economic inequalities that disproportionately hurt those who are already disadvantaged. Automation will widen the gap between knowledge workers and manual labourers and further intensify new inequalities based on access to information. It is quite difficult to predict whether the impact on some industries will be short-term, with workers flowing to other sectors, or whether it will cause overall instability. This should be studied further by each industry expecting disruption.

The long term

We will see more and more intelligent machines with the power to impact human life and death, and threats will be driven by humans who hack the logic of such systems; examples include an autonomous vehicle that could run someone over, or an intelligent medicine dispenser that could kill a patient. This problem is severe because adding controls into the AI itself cannot protect us from the malicious intentions of attackers. Society may design AI for good, but its abuse for other purposes could lead to a situation where the technology eventually spirals out of everyone's control.

The really long term

This threat still belongs to the realm of science fiction but it describes a case where machines turn against humanity, having gained the power to cause harm to ensure their self-preservation. From a technological point of view such an event could already happen today if we were to decide to put our fate solely into the hands of a malicious algorithm.

Source: Dudu Mimran

What policy makers should do to protect society

Areas of impact

  • Private data: a fundamental intervention that can become a protective measure for citizens is the way private data is collected, verified and, most importantly, used. Without data most AI systems cannot operate, and this can be used as an anchor of control.
  • Cybercrime: another area of intervention should be the way cybercrime is kept in check by law enforcement, especially with regard to the pieces of the law enforcement puzzle that are currently missing. For example, attribution is a field of cyber security that suffers from under-investment because it lacks commercial viability. Centralised investment is required to build the foundations of attribution technology into future digital infrastructures. Blockchain, as a trusted distributed ledger, could be the perfect tool to provide evidence of decisions taken by machines that can stand up in court. An interesting example in this field is Operation Serenata de Amor, a grassroots, open-source project built to fight corruption in Brazil by analysing public expenses and looking for anomalies using AI.
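A toy version of that anomaly-hunting idea can be sketched with nothing more than summary statistics. The figures below are fabricated, and real projects such as Serenata de Amor use far richer models and features, but the principle of flagging claims that deviate sharply from the norm is the same.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations above the
    mean: a crude stand-in for the richer models such projects use."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [x for x in amounts if sigma > 0 and (x - mu) / sigma > threshold]

# Fabricated reimbursement claims: mostly routine, plus one implausible one.
claims = [120.0, 95.0, 110.0, 130.0, 105.0, 98.0, 5000.0]
suspicious = flag_anomalies(claims)
```

Real expense auditing would control for category, supplier and season; the point is only that publicly available data plus simple statistics already surface leads for human review.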



  • Collaborative research: there are areas in the cyber world where investment would be in the public interest and that also call for multilateral research. One fascinating avenue of inquiry could be how to use AI in regulation itself, and especially in the enforcement of regulation. Another is building accountability into AI in order to record the decisions taken by algorithms; however, while documentation of these decisions should reside in the public domain, the confidentiality of vendors' intellectual property should be maintained.
  • Central design: a significant paradigm shift that policy makers need to take into account is the strategic move from centralised systems to distributed technologies, as the latter present significantly fewer vulnerabilities. It is necessary to study, and eventually create, a roadmap of centralised systems that should be transformed into distributed ones.
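The accountability idea above, recording algorithmic decisions so that they can later serve as evidence, can be illustrated with a minimal hash chain. This is only a sketch of the tamper-evidence property; a real blockchain adds replication and consensus across mutually distrusting parties, which a single Python list obviously does not provide, and the record fields here are invented for illustration.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append_decision(chain, decision):
    """Append a decision record whose hash covers the previous record,
    so any later edit to history breaks the chain."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"decision": decision, "prev": prev}, sort_keys=True)
    chain.append({"decision": decision, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; return False if any record was altered."""
    prev = GENESIS
    for rec in chain:
        payload = json.dumps({"decision": rec["decision"], "prev": prev},
                             sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append_decision(log, {"system": "av-17", "action": "emergency_brake"})
append_decision(log, {"system": "av-17", "action": "lane_change"})
ok_before = verify(log)
log[0]["decision"]["action"] = "accelerate"  # tamper with history
ok_after = verify(log)
```

Because each record's hash covers its predecessor, rewriting an old decision invalidates every hash after it, which is exactly the property that would let such a log stand up as evidence.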

Challenges for policy makers

  • Today AI advancement is considered a competitive frontier among countries, which leads to many developments being kept secret. This path leads to a loss of control, especially with regard to the potential future abuse of these technologies beyond their original purposes. This competitive dynamic creates a serious challenge for society as a whole: it is not clear why weapons are treated more strictly than advanced information technology that could eventually cause more harm.
  • Our privacy is abused by market forces that push for profit optimisation to the detriment of consumer protection. Policy makers must balance these conflicting forces.
  • People across the world are different in many ways, while AI is a universal language. The desire to set global ethical rules inherently conflicts with national preferences.
  • The questions linked to the ownership and accountability of algorithms in a world where they can create damage are open ones, with many diverse opinions. This is all the more complicated as online platforms are global, while regulations tend to be local.
  • It is unclear what alternatives exist, beyond the idea of a basic income, for the millions who will not be part of the knowledge ecosystem, as it is clear that not every person who loses a job will find a new one. Pre-emptive thinking should be conducted to prevent turbulence in disrupted industries.

A discussion on the limitations of the power of policy makers is required in the world of technology and AI. In a very similar fashion to written words, coding and software have become a new way to express oneself, and aspirations to control or regulate this new language are destined to fail. Policy makers should understand these new boundaries and the new responsibilities they dictate.


OECD Conference on Artificial Intelligence, "AI: Intelligent Machines, Smart Policies", Paris, 26-27 October 2017

OECD work on information security and privacy

Dudu Mimran's blog


Dudu Mimran is the CTO of Telekom Innovation Laboratories in Israel and CTO of the Cyber Security Research Center at Ben-Gurion University.

OECD Going Digital project

Leslie Brissett, over 5 years ago:

The impact of AI seems to be that the humans who create it do not wish to face the "dark side" of being human. Our capacity to hate and destroy, if not acknowledged, will manifest in surprising ways. If the role of quantum entanglement is to be believed, simply the idea of a thing creates the thing. We need to support those developing AI and its code to work on their own psyche, so that it is not projected into the technology they develop.

As Einstein is quoted as saying, "An idea never leaves its source".

Dudu Mimran, over 5 years ago:

Thanks for the comment, Leslie. Indeed, people create technology and, I think, inevitably project themselves into their creations; most of the time this serves as a source of value, but on the flip side it can be abused to do harm.