The Forum Network is a space for experts and thought leaders—from around the world and all parts of society—to discuss and develop solutions now and for the future. It aims to foster the fruitful exchange of expertise and perspectives across fields, and opinions expressed do not necessarily represent the views of the OECD.
Just imagine:
- You lose your dream job because your stammer caused you to go 15 seconds over the 3 minutes permitted for the video interview - and the algorithm automatically discards your application.
- You have a facial disfigurement from an acid burn, but the AI tool doesn’t recognise your face as real.
- Your visual impairment makes eye contact tricky, but you can't find any way to request that the video assessment disregard the way your eyes ‘dance’.
- Your ADHD means you ‘fidget’ in front of your screen and the surveillance tech assumes you aren’t working –or accuses you of cheating.
- You usually lip-read at interviews, but this robot interviewer is lipless.
- You have used a wheelchair since you were four, but the virtual reality test drops you walking into an ancient tomb to assess your problem-solving skills. You struggle to even imagine standing up (!), never mind doing so while solving complex puzzles.
- And how will you know if your personality profile, produced by scanning everything you have ever put online, tells the recruiter you belong to a Parkinson’s Disease self-help network? Is that why your application got nowhere? And if you could take someone to court, who would it be?
AI recruitment tools have become the first line of defence against high-volume online hiring. A recruiter’s priority is to discard as many applicants as possible, as quickly and as cheaply as possible, to narrow down to the talent deemed worthy of consideration by human beings. And an increasingly controversial multi-billion dollar industry stands ready to help.
Brilliantly presented research by BR, the German Public Broadcaster, reveals that a candidate’s Behavioural Personality Profile, produced after a one-minute Retorio video interview, changed significantly depending on her appearance. She lost 10 points just by putting on glasses; she gained 20 points by putting on a head scarf. (Retorio says German recruiters find head scarves appealing – so her scores went up).
And what if that camera were to spot your hearing aid, wheelchair, or arthritic hands? Would you score as more or less agreeable, neurotic, or conscientious? No one knows – and that’s a problem. BR didn’t test for ‘disability indicators’. Neither has the developer, nor the German corporations that use it.
Thankfully, those influencing responsible AI have begun to address race and gender bias, but the world’s 1.3 billion people with disabilities are still so excluded from this debate that no one has even noticed they aren’t there. Unless the unintended consequences of AI-powered HR technology are urgently addressed, hundreds of millions of people worldwide face lifetimes of economic and societal exclusion.
Neither the AI creators nor their HR customers seem to understand disability discrimination
Creators often claim they have removed human bias by dropping their AI tool into a standardised recruitment process that treats everyone the same. However, standard processes are by definition inherently discriminatory – recruiters are obliged to make reasonable adjustments at every stage of the process if they want to employ disabled people fairly and on an equal basis. We treat people differently to treat them fairly.
This is not just about the data, which, let's face it, is always ‘disability biased’. Biased data, while deeply problematic, is different from the concrete reality of associated discriminatory behaviours, policies, and procedures, such as refusing to adapt an automated talent acquisition process so that a job seeker with a disability can be accurately assessed. And as recent research from NYU has pointed out, too often the science shaping these assessments is not, in fact, valid for anyone.
What we have here is a ‘market failure’: neither the HR buyers nor their tech suppliers understand disability discrimination. Neither party seems to know how to design a recruitment process that is both barrier-free for people with similar access needs (e.g., accessible game controls) and flexible enough for individuals who need things done differently so they can demonstrate their potential (e.g., bypassing psychometric tests that have not been validated for autistic people when assessing autistic candidates).
AI creators are not legally obliged to prove their products are ‘safe’ for disadvantaged job seekers. But regulators are catching up. The U.S. Equal Employment Opportunity Commission (EEOC) has published its first guidance which, while still exploratory in nature, does indicate that AI-facilitated disability discrimination is now on its agenda. And the European Disability Forum hopes that emerging EU guidance and standards on AI accountability will also protect the human rights of people with disabilities.
Interestingly, a leading HR tech developer, HireVue, recently argued that it is the employer that should be held responsible if a candidate alleges discrimination after HireVue data was used to justify the decision not to hire them. It’s not every day that a supplier sends such a ‘buyer beware’ alert to potential customers. If employers responded by requiring all their HR tech suppliers to prove they have taken the necessary steps to mitigate potential harm to disadvantaged job seekers – and the associated legal and reputational risks to their brands – it could serve as a compelling reminder of the impact of AI-powered HR technology on disability discrimination.
However, the seriously big question remains: How do we bring the human rights of persons with disabilities into the world view of those influencing this global ethical AI debate? There is no easy answer.
But surely an important first step is to stop the unhelpful waffle about ‘inclusion’ and bring the conversation back to some ‘Disability Equality Basics’. We need a much broader consensus that equality and inclusion are not possible…
- when you can't ask for an interview to be extended because you have a slight speech impairment;
- when you aren’t told how the employer plans to assess you and therefore cannot ask for the accommodations you require;
- when you can't complete the application form using a screen reader;
- when you have an intellectual disability and can do the job, but the automated system can't and won’t simplify the wording of the interview questions;
- when the employer insists you take psychometric tests that have not been validated for sign language users speaking English as a second language;
- when the AI CV screening tool discards your application because it has never heard of Loyola College.
Disability – intrinsic as it is to the human condition – ‘matters’ perhaps more than ever in the age of AI. We need to start using our imaginations and challenge AI creators to develop tools designed explicitly to protect the world’s 1.3 billion people with disabilities from the harms of AI-powered HR technology: it’s past time for ‘poachers to turn gamekeepers’.