AI and employment from a trade union perspective

Veronica Nilsson explains that trade unions are deeply concerned about the impact of AI on both the quantity and quality of jobs, as well as the ethical issues surrounding its uses.
The Forum Network is a space for experts and thought leaders—from around the world and all parts of society— to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.


Last March, over 1,000 technology researchers and leaders issued an open letter calling for an immediate, six-month moratorium on the training of the most advanced AI systems, warning of the “profound risks” these systems pose “to society and humanity”.

Whether you agree or not, this highlights the speed at which technology is progressing, and the profound unease that many people feel about the uncontrolled deployment of AI.  

Trade unions are deeply concerned about the impact of AI on both the quantity and quality of jobs, as well as the ethical issues surrounding its uses.

AI has the potential to create jobs in developing and installing AI systems, but it also has the potential to destroy jobs by replacing humans with automated tasks. Recent advances have expanded both the range of skills susceptible to automation and the range of jobs vulnerable to displacement. Whereas earlier automating technologies tended to take over routine tasks performed by lower-skilled workers, AI has made its most significant progress in recent years in performing non-routine, cognitive tasks. As a result, the OECD has found that the workers most exposed to recent advances in AI are in high-skilled occupations, such as managers, science and engineering professionals, and legal, social and cultural professionals.


While AI has the potential to improve job satisfaction by automating repetitive tasks, it may also make work more precarious. By automating tasks that previously required extensive human know-how, AI could lead to a significant de-skilling of workers, enabling greater use of temporary contracts and the hiring of less qualified, lower-paid workers.


Algorithmic management – or the use of AI in people management – can increase work pace and expose workers to increased physical safety and mental health risks. The extensive monitoring and surveillance of workers enabled by AI also raise serious concerns for workers’ rights to privacy. 

In 2021, the European Trade Union Confederation (ETUC) reported on at least eleven websites offering companies software that gives them “total control over employees’ computers” or similar. The use of such surveillance software grew during COVID lockdowns and is very likely to breach the EU’s General Data Protection Regulation. Just as worrying are reports of AI being used for union-busting by identifying signs of workers organising.

AI also risks affecting work opportunities. It is increasingly being used in recruitment processes, from identifying prospective candidates to final interviews. Bias can be built into AI systems through the choice of parameters, the choice of data, and through biases in the data itself, which may be incomplete, incorrect or outdated, and may reflect historical discrimination. The lack of transparency in algorithmic decision-making makes discrimination harder to detect, and therefore very difficult for workers to challenge under non-discrimination law.

To address these concerns about the impact of AI on the quantity and quality of work, and on equality of opportunity, trade unions propose some basic principles that governments, organisations and companies should apply in the development and use of AI, through regulation and softer instruments. These include:

  • Humans in control. AI is a tool that should assist humans, not take responsibility away from them. At the very least, the use of AI should protect human dignity and uphold human rights. Its use should be transparent to workers, service users and consumers; organisations should be accountable for decisions made by AI; and people should have access to, and control over, the data that AI collects about them;

  • Information, consultation, and negotiation. Workers should be informed and consulted about the use of AI and should have a right to collective bargaining with employers about its impact in the workplace;

  • Just Transition. The idea of a socially fair transition in which no one is left behind is usually applied to the shift to the low carbon economy but should apply equally to the digital revolution. There is a tendency among policy-makers to see Just Transition as largely a question of skills, but just as important is governance: social dialogue to manage change and channel investment to create new jobs and better employment services where they are most needed. Nonetheless, investment in education and skills training remains crucial. Young people in school and higher education need to be equipped with the skills to work with AI, and people in employment need training to use it. Older workers, and less digitally literate workers, must not be left behind;

  • Fairness. Inequality is higher in OECD countries than it has been for half a century. AI, owned by some of the world’s largest corporations, seems unlikely to change that, and may well exacerbate inequalities without regulation, active management of change, and stronger redistributive policies (including better social protection, fair taxation – especially of multinational enterprises – and strengthened collective bargaining). Governments are already facing serious social and political consequences for failing to break the cycle in which successive crises – the financial crisis, the COVID pandemic and the cost-of-living crisis – have widened inequality. They simply cannot contemplate walking into climate and digital revolutions that may produce the same result: benefitting the rich while further disadvantaging everyone else.

As the IMF warned last month, it is crucial that we do not repeat the mistakes of past waves of technological change by waiting until problems are already entrenched before taking action. The future of AI is in society’s hands: governments must act now to take charge of its trajectory.

TUAC believes that the OECD should play a critical role in shaping the rules and policies that are needed. The OECD’s role in AI should not only be to help ensure AI “drives innovation” and respects “human rights”, as the recent Ministerial Council Meeting declared, but also to ensure that AI leads to greater prosperity in inclusive and sustainable ways.






Comment from Oli, 2 months ago:

The impact of AI cannot be ascertained without accurate data gathering. Irrespective of one's position on the value of AI, we need reliable data on changes in the economic system that can be attributed to AI.

Once the data is available, we can determine whether there have been significant changes in employment or unemployment as a result of AI intervention in the production cycle.

Considering that the human economic system is essentially demand-driven, aggregate demand for certain services will drop with any significant rise in unemployment. Basically, AI bots pay neither taxes nor union dues, so society will either restructure or be left with a zero-sum game.