The Forum Network is a space for experts and thought leaders—from around the world and all parts of society— to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.
Discussions on Artificial Intelligence (AI) in the workplace often focus on the future. Speculation about the displacement of human labour by algorithms and robots has pervaded public discourse for decades. Yet as AI penetrates workplaces, substantial changes have already taken place. Discrimination in AI-driven recruitment, which has recently drawn significant attention from the European public, is just the tip of the iceberg.
The use of AI is affecting long-established norms and practices that have historically protected workers from exploitation and occupational hazards, norms on which the structure of the workplace has long rested. Like corrosion slowly eating away at metal and weakening its integrity, software-driven management gradually erodes them. Because AI is often portrayed as a mysterious black box, companies take advantage of the confusion and obscurity that typically surround emerging technologies and use it as an excuse to dodge the rules.
Maximum physical workload provides a clear example. Excessive lifting poses an obvious health risk and overexertion is among the top causes of disabling injuries. That is why workload standards have been established. These norms emerged out of workers’ struggles in the manufacturing industry, gradually expanded to other sectors, and eventually made their way into national laws. Labour codes proudly house them. But the question remains: do the companies using AI for work management still adhere to these norms in practice?
Under Polish law, employers are required to provide employees with a set of work rules. Traditionally, these were a simple enumeration of what was permitted and what was not, including the expected physical workload and maximum volume limits. Workers knew how many packages they had to lift and how that affected their wages; distributing tasks was the boss’s responsibility. With the advent of AI-driven software, however, these workplaces have been significantly disrupted in recent years. Today, employees no longer receive tasks from a human supervisor: their work is supervised and assessed by software. The result is a disturbing situation in which employees do not know the performance metrics against which they are assessed, or the rules they are expected to follow. The outcome is predictable: fearing lower pay or dismissal, workers take on excessive workloads, often to the detriment of their health.
This problem was first raised by a group of Amazon workers from Poznań after one of their colleagues died suddenly while performing warehouse duties. The union suspected overexertion, but its attempts to obtain precise information about performance norms were blocked: the workers were told the data could not be released because the work was overseen by a software system whose inner workings remained a mystery.
When the parliamentary committee discussed this incident with trade unionists and labour inspectors, it found that it was not an isolated case. New business practices are spreading rapidly: information that was previously accessible to employees is now treated as a business secret, and AI is invoked as an excuse to restrict access to data. As a result, employees are left in the dark about key aspects of their work and are often unable to hold their employers accountable.
Undoubtedly, comprehending the intricacies of a convolutional neural network would be a non-trivial task. However, the “AI in business” category is incredibly broad, encompassing a wide range of technologies, many of which are relatively unsophisticated. To put things into perspective: decisions on bonuses or dismissals are frequently delegated to a random forest classifier, which is essentially an ensemble of simple decision trees. Its decisions can be understood with knowledge of the input data and the parameters it applies. That is why it is critical to grant workers access to their personal data as well as the metrics used to evaluate them.
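To make that claim concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn’s RandomForestClassifier. The feature names, data and labels are invented for the example and do not describe any real workplace system; the point is only that, once the input metrics and model parameters are available, the rules such a classifier applies can be read out in plain text.

```python
# A hypothetical "performance" model of the kind described above, built with
# scikit-learn's RandomForestClassifier. All feature names and data are
# invented for illustration; this is not any employer's actual system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)

# Invented per-worker metrics an employer might feed into such a model.
feature_names = ["packages_per_hour", "idle_minutes", "error_rate"]
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)  # toy label: "meets target" or not

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Given access to the inputs and parameters, the model is inspectable:
# which metrics drive its decisions...
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: weight {importance:.2f}")

# ...and the exact thresholds each tree in the ensemble applies, dumped as
# plain-text rules (here, the first tree).
print(export_text(model.estimators_[0], feature_names=feature_names))
```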
The legislation we have recently drafted offers a pragmatic remedy to this problem. It revises the Polish Trade Union Act to keep pace with the evolving technological landscape. The right of union members to access information on current performance standards at their workplace has been widely recognized as a fundamental labour right in Polish labour law; however, the current legislation fails to address the new forms of information that have emerged over the past decade. Our amendment rectifies this.
We drew some inspiration from the measures recently implemented in Spain, while taking a slightly more limited approach that fits the structure of the Polish Trade Union Act. The Act lists the information that an employer is obliged to provide at the request of a trade union. We propose to expand this catalogue with a new item: if a computer programme makes decisions related to employees’ wages or working conditions, the employer would be required to provide the trade union with the technical details of how the programme operates, including the underlying rules, parameters and instructions the software relies on to make its decisions.
The parliamentary committee has drafted and approved the proposal, which has received wide support from trade unions and industry experts, as well as broad cross-party backing - a noteworthy achievement in a politically polarized environment. The proposal has now entered the consultation stage. Despite vocal opposition from some American corporations, we remain optimistic about its adoption. The ongoing debate has highlighted the need for new, effective tools to safeguard workers’ safety - and basic human rights.
As software-driven management becomes increasingly prevalent, failing to adapt our labour laws to this profound change may lead to a decline of labour standards in practice: harming workers, further eroding the European social model, and potentially setting us back to a state reminiscent of the 19th century. This is a risk we should not take.
Banner image: Shutterstock/BluePlanetStudio