Rewiring the Firm: Algorithmic management and the future of work

Jeremias Adams-Prassl offers an in-depth analysis of how automation and artificial intelligence are reshaping labour markets.

Banner image: Shutterstock/Zenzen
The Forum Network is a space for experts and thought leaders—from around the world and all parts of society— to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.


Debates about automation and the future of work, and the rise of artificial intelligence in particular, have long been couched in terms of job loss and technological unemployment. These are important challenges—not least when we think about the role of labour market regulation in creating incentive structures that drive such automation. Upon closer inspection, however, they are not borne out in the numbers; indeed, “Employment grew in all OECD countries over the period 2012-2019.”

The absence of widespread unemployment, however, does not mean that technological change is not bringing about a fundamental reorganisation of labour markets. The last decade has seen a rapid “wiring [of] the labour market”, notably in the rise of the “gig economy” and platform work. More recently, an even more fundamental shift is underway, driven in particular by the remarkable renaissance of artificial intelligence (AI) research and deployment.

Perhaps the starkest illustration is the rise of algorithmic management, sometimes also known as “people analytics” or “big data HR”—a consequence of automation that I have previously termed the “(re-)wiring of the firm”. As the costs of data collection and processing continue to fall, employers are increasingly able to deploy technology to monitor—and control—the workplace to a hitherto unimaginable degree: a return of Frederick Winslow Taylor’s infamous theories of “scientific management”, but without the significant transaction costs that previously placed a natural limit on monitoring employees.

What started with specific management tasks in the gig economy has today spread across the world of work: we are witnessing explosive growth both in the range of management functions that can be automated and in the scope of workplaces where these technologies are deployed. As Wolfie Christl, author of one of the largest and most comprehensive studies of algorithmic control and surveillance practices at work to date, concludes: “In many areas of the world of work, the processing of personal data…has become virtually ubiquitous.”

There are clear benefits to deploying AI systems at work—especially in limited contexts and for specific purposes, such as occupational safety and health or in tackling the COVID-19 pandemic.

At the same time, however, a rapidly growing body of empirical evidence suggests that caution is required, for example where algorithmic systems are used to specifically target individual workers who might agitate for higher wages or organise and support unionisation. Similarly alarming is the problem of algorithmic discrimination: the extensively documented ability of artificial intelligence tools to produce discriminatory outcomes across a range of protected characteristics, driven both by data that reflect existing patterns of disadvantage and by processing techniques that lead to novel forms of discrimination.

How should we respond? Domestic and international regulators are increasingly becoming aware of the need to develop targeted regulation of algorithmic management—both in the gig economy and in workplaces across the socio-economic spectrum. The European Union’s recent proposal for an AI Act explicitly recognises the deployment of artificial intelligence systems for algorithmic management tasks (both in hiring and in subsequent managerial activity) as a “high-risk context”—without, however, providing much by way of substantive protection.

Targeted, sector-specific approaches are more promising: the recent ley rider (“Rider Law”) in Spain amends existing provisions on the information rights of worker representatives to cover all “parameters, rules and instructions on which the algorithms or artificial intelligence systems are based [that] may affect working conditions, access and maintenance of employment, including profiling”. Similarly, the European Commission’s recent proposal for a Directive to improve working conditions in platform work includes a series of provisions specifically targeted at algorithmic management, covering both monitoring and automated decision-making systems, as well as rights to explanation and human review of significant decisions.

In the long term, the experience of AI at work might provide useful lessons for regulating fast-moving technological development more broadly—not least given the important role of social dialogue, one of the most flexible and responsive modes of regulation, in providing the level playing field required to support innovation and protect fundamental rights at work.





Check out the OECD's Future of Work initiative, which examines the impact of megatrends such as digitalisation and globalisation on the labour market.
