This extract is adapted from Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. Copyright (c) 2021 by Kate Crawford. Used with permission of the publisher, Yale University Press. All rights reserved.
The Forum Network is a space for experts and thought leaders—from around the world and all parts of society—to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.
Let’s ask the deceptively simple question, What is artificial intelligence? If you ask someone in the street, they might mention Apple’s Siri, Amazon’s cloud service, Tesla’s cars, or Google’s search algorithm. If you ask experts in deep learning, they might give you a technical response about how neural nets are organized into dozens of layers that receive labelled data, are assigned weights and thresholds, and can classify data in ways that cannot yet be fully explained.1 […] In one of the most popular textbooks on the subject, Stuart Russell and Peter Norvig state that AI is the attempt to understand and build intelligent entities. “Intelligence is concerned mainly with rational action,” they claim. “Ideally, an intelligent agent takes the best possible action in a situation.”2
Each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed. If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon. If AI systems are seen as more reliable or rational than any human expert, able to take the “best possible action,” then it suggests that they should be trusted to make high-stakes decisions in health, education, and criminal justice. When specific algorithmic techniques are the sole focus, it suggests that only continual technical progress matters, with no consideration of the computational cost of those approaches and their far-reaching impacts on a planet under strain.
In contrast, I argue [in my book Atlas of AI] that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power. […]
Once we connect AI within these broader structures and social systems, we can escape the notion that artificial intelligence is a purely technical domain. At a fundamental level, AI is technical and social practices, institutions and infrastructures, politics and culture. Computational reason and embodied work are deeply interlinked: AI systems both reflect and produce social relations and understandings of the world. […] To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.
Seeing AI Like an Atlas
How can an atlas help us to understand how artificial intelligence is made? […] Perhaps my favorite account of how a cartographic approach can be helpful comes from the physicist and technology critic Ursula Franklin: “Maps represent purposeful endeavors: they are meant to be useful, to assist the traveler and bridge the gap between the known and the as yet unknown; they are testaments of collective knowledge and insight.”3 Maps, at their best, offer us a compendium of open pathways—shared ways of knowing—that can be mixed and combined to make new interconnections. But there are also maps of domination, those national maps where territory is carved along the fault lines of power: from the direct interventions of drawing borders across contested spaces to revealing the colonial paths of empires. By invoking an atlas, I’m suggesting that we need new ways to understand the empires of artificial intelligence.
We need a theory of AI that accounts for the states and corporations that drive and dominate it, the extractive mining that leaves an imprint on the planet, the mass capture of data, and the profoundly unequal and increasingly exploitative labor practices that sustain it. These are the shifting tectonics of power in AI. A topographical approach offers different perspectives and scales, beyond the abstract promises of artificial intelligence or the latest machine learning models. The aim is to understand AI in a wider context by walking through the many different landscapes of computation and seeing how they connect.4
There’s another way in which atlases are relevant here. The field of AI is explicitly attempting to capture the planet in a computationally legible form. This is not a metaphor so much as the industry’s direct ambition. The AI industry is making and normalizing its own proprietary maps, as a centralized God’s-eye view of human movement, communication, and labor. Some AI scientists have stated their desire to capture the world and to supersede other forms of knowing. […] One of the founders of artificial intelligence and early experimenter in facial recognition, Woody Bledsoe, put it most bluntly: “in the long run, AI is the only science.”5 This is a desire not to create an atlas of the world but to be the atlas—the dominant way of seeing.
This colonizing impulse centralizes power in the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently political activity. […] Just as there are many ways to make an atlas, so there are many possible futures for how AI will be used in the world. The expanding reach of AI systems may seem inevitable, but this is contestable and incomplete. The underlying visions of the AI field do not come into being autonomously but instead have been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a small and homogenous group of people, based in a handful of cities, working in an industry that is currently the wealthiest in the world. Like medieval European mappae mundi, which illustrated religious and classical concepts as much as coordinates, the maps made by the AI industry are political interventions, as opposed to neutral reflections of the world.
[My] book is made against the spirit of colonial mapping logics, and it embraces different stories, locations, and knowledge bases to better understand the role of AI in the world. […] [It] is not a story about code and algorithms or the latest thinking in computer vision or natural language processing or reinforcement learning. Many other books do that. Neither is it an ethnographic account of a single community and the effects of AI on their experience of work or housing or medicine—although we certainly need more of those. Instead, this is an expanded view of artificial intelligence as an extractive industry. […]
Extraction, Power, and Politics
Artificial intelligence, then, is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it’s also a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet. All these things are part of what artificial intelligence is—a two-word phrase onto which is mapped a complex set of expectations, ideologies, desires, and fears. AI can seem like a spectral force—as disembodied computation—but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood.
It’s important for us to contend with these many aspects of artificial intelligence—its malleability, its messiness, and its spatial and temporal reach. The promiscuity of AI as a term, its openness to being reconfigured, also means that it can be put to use in a range of ways: it can refer to everything from consumer devices like the Amazon Echo to nameless back-end processing systems, from narrow technical papers to the biggest industrial companies in the world. But this has its usefulness, too. The breadth of the term “artificial intelligence” gives us license to consider all these elements and how they are deeply imbricated: from the politics of intelligence to the mass harvesting of data; from the industrial concentration of the tech sector to geopolitical military power; from the deracinated environment to ongoing forms of discrimination. The task is to remain sensitive to the terrain and to watch the shifting and plastic meanings of the term “artificial intelligence”—like a container into which various things are placed and then removed—because that, too, is part of the story.
Simply put, artificial intelligence is now a player in the shaping of knowledge, communication, and power. These reconfigurations are occurring at the level of epistemology, principles of justice, social organization, political expression, culture, understandings of human bodies, subjectivities, and identities: what we are and what we can be. But we can go further. Artificial intelligence, in the process of remapping and intervening in the world, is politics by other means—although rarely acknowledged as such. These politics are driven by the Great Houses of AI, which consist of the half-dozen or so companies that dominate large-scale planetary computation.
Many social institutions are now influenced by these tools and methods, which shape what they value and how decisions are made while creating a complex series of downstream effects. The intensification of technocratic power has been under way for a long time, but the process has now accelerated. In part this is due to the concentration of industrial capital at a time of economic austerity and outsourcing, including the defunding of social welfare systems and institutions that once acted as a check on market power. This is why we must contend with AI as a political, economic, cultural, and scientific force. Alondra Nelson, Thuy Linh Tu, and Alicia Headlam Hines observe, “Contests around technology are always linked to larger struggles for economic mobility, political maneuvering, and community building.”6
We are at a critical juncture, one that requires us to ask hard questions about the way AI is produced and adopted. We need to ask: What is AI? What forms of politics does it propagate? Whose interests does it serve, and who bears the greatest risk of harm? And where should the use of AI be constrained?
These questions will not have easy answers. But neither is this an irresolvable situation or a point of no return—dystopian forms of thinking can paralyze us from taking action and prevent urgently needed interventions.7 As Ursula Franklin writes, “The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.”8
[My] book argues that addressing the foundational problems of AI and planetary computation requires connecting issues of power and justice: from epistemology to labor rights, resource extraction to data protections, racial inequity to climate change. To do that, we need to expand our understanding of what is under way in the empires of AI, to see what is at stake, and to make better collective decisions about what should come next.
Find out more about Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence, by Kate Crawford (published in April 2021, © Yale University Press)
1. See, as one of many examples, Poggio et al., “Why and When Can Deep—but Not Shallow—Networks Avoid the Curse of Dimensionality.”
2. Russell and Norvig, Artificial Intelligence, 30.
3. Franklin and Swenarchuk, Ursula Franklin Reader, Prelude.
4. For an account of the practices of data colonization, see “Colonized by Data”; and Mbembé, Critique of Black Reason.
5. Bledsoe quoted in McCorduck, Machines Who Think, 136.
6. Nelson, Tu, and Hines, “Introduction,” 5.
7. Danowski and de Castro, Ends of the World.
8. Franklin, Real World of Technology, 5.
And find out more about the OECD's work on Artificial Intelligence