How AI Might Change Our Jobs & What We Can Do About It

AI performance is now indistinguishable from that of humans in certain fields. These advances, along with the falling costs of developing and deploying AI technologies, indicate that OECD economies are on the brink of an AI revolution that will profoundly transform the workplace.

Register for the OECD Forum virtual event: How AI Might Change Our Jobs & What We Can Do About It!


The 2023 edition of the OECD Employment Outlook includes a special focus on the impact of artificial intelligence (AI) on the labour market. While the adoption of AI remains relatively low, rapid progress, falling costs and the increasing availability of workers with AI skills indicate that OECD economies might be on the brink of an AI revolution.

AI, including the latest generative AI models, will significantly increase the range of work that can be automated beyond just routine tasks, with nearly every sector and occupation likely to be affected. The speed of this development is unprecedented.

While there is no evidence yet that AI has affected overall employment, companies adopting AI say they are doing so to improve productivity and reduce staff costs, and three in five workers in the manufacturing and finance sectors worry about losing their job to AI in the next 10 years.

More on the Forum Network: AI and Employment from a Trade Union Perspective by Veronica Nilsson, General Secretary, Trade Union Advisory Committee to the OECD (TUAC)

Veronica Nilsson explains that trade unions are deeply concerned about the impact of AI on both the quantity and quality of jobs, as well as the ethical issues surrounding its uses.

The OECD has found that the workers most exposed to recent advances in AI are in high-skilled occupations, such as managers, science and engineering professionals, and legal, social, and cultural professionals. Trade unions fear that by automating tasks previously requiring extensive human know-how, AI could lead to a significant devaluing of workers, enabling a higher use of temporary contracts and the hiring of less qualified, and lower-paid, workers.

On the whole, the early evidence suggests that high-skilled workers are likely to benefit from AI because it makes them more productive, while low-skilled workers still face the highest risk of automation. Without the right safeguards and policies in place, AI risks further increasing disparities in the labour market.

It is not all bad news. AI can have a positive impact on the workplace by reducing tedious and dangerous tasks, such as data entry, appointment scheduling, taking meeting minutes, analysing legal documents and routine paperwork. Yet even here there are risks. By automating simple tasks, AI may leave workers with a more intense, higher-paced work environment. In addition, so-called algorithmic management (the use of AI in people management) may change the way work is monitored, posing risks to privacy, autonomy, job quality and satisfaction.

Designing AI-Driven Recruitment for More Inclusive Hiring

One particular area where AI is beginning to be used is hiring and job-seeking. Natural language processing that improves job descriptions, algorithms that target vacancies at the right candidates, vision and voice recognition software that carries out automated interviews, and chatbots that take over much of the standard communication with candidates are just a few of the technologies that have emerged in recent years.

Hundreds, if not thousands, of companies and developers are selling these types of products, touting potential benefits such as lower costs, faster recruitment and larger candidate pools. But AI also risks making the hiring process, especially the initial filtering rounds, very impersonal: initial video interviews analysed for eye movement, gestures and even hesitations, with little or no personal feedback, and the potential for bias and discrimination against women, minorities, and people who use a hearing aid or a wheelchair, or who cannot respond quickly enough, or in the expected way, because of other disabilities.

Some claim that AI-driven recruitment makes the process fairer, producing a more diverse workforce and including candidates who might otherwise be overlooked because of health issues, gaps in employment, a lack of officially recognised qualifications, or a degree from a less well-known university. But the lack of transparency in algorithmic decision-making might make bias and discrimination more difficult to detect.
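Even when the algorithm itself is opaque, one common way to audit hiring outcomes is to compare selection rates across candidate groups. A minimal sketch, assuming purely hypothetical screening figures; the 0.8 threshold comes from the US EEOC "four-fifths rule" of thumb, not from this article:

```python
def adverse_impact_ratio(selected, total, selected_ref, total_ref):
    """Ratio of a group's selection rate to a reference group's rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 is often
    treated as a warning sign of adverse impact worth investigating.
    """
    rate = selected / total
    rate_ref = selected_ref / total_ref
    return rate / rate_ref

# Hypothetical outcomes: 30 of 200 women vs 50 of 200 men pass AI screening.
ratio = adverse_impact_ratio(30, 200, 50, 200)
print(round(ratio, 2))  # 0.6 -> below 0.8, flagging possible adverse impact
```

An outcome audit like this cannot explain *why* a model discriminates, but it makes disparities visible without requiring access to the algorithm's internals.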


Ensuring All of Us Have Access to the Skills to Work with AI

While AI has not so far resulted in job loss at an aggregate level, it is already changing the content and nature of jobs, and its impact on tasks is quickly changing skills needs. The OECD has been tracking how well systems like ChatGPT fare on tasks from the Programme for International Student Assessment (PISA), the global yardstick of educational success that over eighty countries use to assess the knowledge and skills of 15-year-olds. The comparison shows rapid advances in AI capabilities relative to humans: in March 2022, ChatGPT could answer 28% of a set of PISA mathematics tasks; by March 2023, GPT answered 46% of the tasks successfully. In science, the corresponding percentages were 65% and 85%.

While companies using AI say they provide AI training, a lack of skills remains a major barrier to adoption, and investment in education and skills training is crucial. Workers who have been trained to work with AI are far more positive about the impact of AI on performance and working conditions than workers who have not been trained. Young people in school and higher education need to be equipped with the skills to work with AI, and people already in the workforce need training to use it. Older workers, and less digitally literate workers, must not be left behind.

Public policies will need to ensure that employers invest in training and that formal education is revisited so that it provides the skills needed. While AI itself may help improve the design, targeting and delivery of training, notably by providing tailored training solutions at scale, its use in training could also exacerbate inequalities and perpetuate human biases.

The rapid pace of AI development underscores the need for policy action to reap the benefits AI can bring to the workplace while addressing risks for workers’ fundamental rights and well-being.

Governments, international organisations and regulators must provide a framework for how to work with AI. This includes setting standards, enforcing appropriate regulations or guidelines, and promoting proper oversight of these new technologies. The OECD has played a pioneering role in this area by developing the OECD AI Principles for responsible stewardship of trustworthy AI, adopted in May 2019 by OECD member countries – forming the basis also for the G20 AI Principles – and since then adopted by Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore and Ukraine.

Existing legislation – e.g., on discrimination, data protection or workers’ rights to organise – could be an important foundation, but it is still uncertain to what extent it can be applied to AI because relevant case law remains limited.

Collective bargaining and social dialogue have an important role to play as well in supporting workers and businesses in the AI transition. Yet, the specific characteristics of AI and the way it is implemented, such as its rapid speed of diffusion, its ability to learn and the greater power imbalance it can create, put further pressure on labour relations.




Please register here for the discussion on 20 July at 14:30-16:00 CEST, 13:30-15:00 BST, 08:30-10:00 EDT with the following speakers:  

 

Welcome & Moderation:                                       

  • Monique El-Faizy, Paris-based Journalist & Author

Presentation of the OECD Employment Outlook:

  • Mark Keese, Head of Skills & Employability Division, Directorate for Employment, Labour and Social Affairs, OECD

Panel Discussion

  • Brando Benifei, Member of the European Parliament, Group of the Progressive Alliance of Socialists and Democrats; Rapporteur on the EU AI Act; Member of the OECD Global Parliamentary Network, and the OECD Parliamentary Group on AI

  • Soumitra Dutta, Peter Moores Dean and Professor of Management, Saïd Business School, University of Oxford, and a Fellow of Balliol College, Oxford

  • Stefan Gran, Senior Advisor, European Trade Union Confederation (ETUC)

  • Sean Hinton, Founder and CEO of SkyHive Technologies

  • Susan Scott-Parker, Chief Executive, Business Disability International; Founder, Disability Ethical AI Alliance


David H. Deans, 3 months ago

Willemien, you said, "One particular area where AI is beginning to be used is in hiring and job-seeking." I'm encouraged by the 'potential' benefits of AI in enterprise recruitment situations. For example, I'm hopeful that artificial intelligence can be trained to avoid the inherent bias of human corporate recruiters. While the current shift to valuing candidate skills over other qualifications is a positive advancement for hiring decisions, I do wonder how this trend will evolve over time.

Ganesha, 3 months ago

That's an interesting topic. I don't know... what should copywriters and 2D artists do?

Martin Owino Omedo, 2 months ago

Such an exciting read. While debates continue around the impact of AI in different spheres, there is a need to be strategic about how we respond to every possible outcome. If it is negative, how do we negate the effects, and if positive, how do we juice the benefits?

For example, in recruitment, how do we ensure that we are not institutionalising inequality through an algorithm matrix whose decision-making rationale is not well understood? How do we audit such systems?