8 min read

AI Bias #6: Recruitment

Published on
March 26, 2024
Written by
Leon Gibson, Business Development

An organisation is, by definition, a group of people with a particular purpose. When you strip back a company, large or small, the people it recruits are critical in determining its success.

Companies are turning to Artificial Intelligence to optimise their recruitment activities, from application screening to interview analysis. But what risks does this pose to candidates' rights, and to the outcomes for the recruiting company?

Why AI? Unconscious biases can make humans poor recruiters.

When it comes to recruiting, we as humans tend to be drawn toward those with whom we share similarities in behaviours, work processes and communication styles. This is called unconscious bias: when you identify these commonalities, you tend to exaggerate them and miscategorise people.

This creates an issue, as the personalities of your staff form the foundations of your company culture, which is in turn central to achieving your goals. You need to get the balance right: employees need to be a good culture fit, but they also need room to be individuals, creating a diverse culture that can deliver innovation and performance.

This is one reason companies have been turning to AI in their application processes: left unchecked, these biases can manifest as discrimination.

Second to this is the cost associated with both the recruitment process and staff turnover; losing an employee within a short period is immensely resource-draining and unsettling for a team.

The use case for AI-driven recruitment technology is, in principle, clear. As such, the market for AI technology in recruitment is projected to reach a value of $981.74bn by 2029, growing at a CAGR of 6.8%.

However, it hasn’t all been plain sailing for the industry thus far…

Amazon’s AI recruitment tool that didn’t like women

Throughout Amazon’s rise to success, they have harnessed the power of automation; from warehouses to delivery routing, they have been market leaders. In 2014, their machine learning team turned to the hiring process, building a tool to screen candidates' CVs and surface the best candidates for particular roles. This was to be critical as the company shaped up to more than triple its global headcount.

Come 2015, it became clear the system was not fairly assessing candidates, for technical roles in particular. It heavily favoured male candidates because Amazon had trained the model on the company's previous 10 years of hiring activity, a period in which, as is widely known, women were underrepresented in Science, Technology, Engineering and Math (STEM) roles.

In short, Amazon was training its model with historic data that was biased. It’s reported that tweaks were made to keyword identification to neutralise terms like “women’s” or “female”, but adjusting these sensitive attributes failed to work, something our CTO Nathan discussed in another blog: Can I be sexist without knowing your gender?

In the case of Amazon, Reuters reported that the model also picked up on other words: male candidates tended to use terms like “executed” or “captured” in their applications, and the model learned to favour them.
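This is the essence of proxy bias, and it is easy to reproduce. Below is a minimal sketch in Python, using entirely synthetic data and illustrative feature names (not Amazon's actual model or features): the sensitive attribute is dropped before training, yet the model rediscovers the bias through a correlated proxy.

```python
# A minimal sketch of proxy bias: even after dropping the sensitive
# attribute (gender), a model trained on historically biased labels
# still discriminates through correlated proxy features. All data and
# feature names here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historic hiring" data: gender correlates with a proxy
# feature (e.g. a gendered keyword appearing on a CV).
gender = rng.integers(0, 2, n)               # 0 = male, 1 = female
proxy = (gender + rng.normal(0, 0.3, n) > 0.5).astype(float)
skill = rng.normal(0, 1, n)

# Biased historic labels: past hiring depended on skill AND gender.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 0.5, n) > 1).astype(int)

# "Fairness through unawareness": train without the gender column.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The proxy still carries the gender signal, so predictions differ by group.
preds = model.predict(X)
print("Predicted hire rate (male):  ", preds[gender == 0].mean())
print("Predicted hire rate (female):", preds[gender == 1].mean())
```

Despite never seeing the gender column, the model predicts a markedly higher hire rate for the male group, because the proxy feature carries the gender signal.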

CV screening wasn’t accurate, so why expect biometric analysis to be effective and fair?

Where Amazon had AI sifting through applications, other companies have gone one step further, harnessing facial recognition, among other technologies, to conduct video assessments of candidates.

One of these companies is HireVue. They collect video responses to a standard set of questions, then filter the responses using AI-powered biometric assessments. In 2019, HireVue indicated that a candidate's facial expressions carried a weighting of 10-30% in this filtering, with verbal analysis making up the remainder.
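HireVue's actual scoring pipeline is proprietary, but as a rough, hypothetical sketch of how such a weighted assessment could combine signals (the weights and placeholder scores below are assumptions, not HireVue's method):

```python
# Hypothetical sketch of a weighted video-interview score, assuming a
# facial-expression weight in the reported 10-30% range. The component
# scores are placeholders; HireVue's real pipeline is proprietary.
def interview_score(facial_score: float, verbal_score: float,
                    facial_weight: float = 0.2) -> float:
    assert 0.1 <= facial_weight <= 0.3, "reported facial weighting range"
    return facial_weight * facial_score + (1 - facial_weight) * verbal_score

# A candidate scoring 0.4 on facial analysis and 0.8 on verbal analysis:
print(interview_score(0.4, 0.8))  # 0.72
```

Even at the low end of that weighting, an unreliable facial signal moves candidates' scores, and therefore who gets filtered out.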

This resulted in EPIC, a non-profit privacy campaign group, filing a complaint against them for failing to comply with baseline standards for AI decision-making, accusing the company of evaluating applicants based on their appearance with an opaque, proprietary algorithm. The full details can be found here.

The lack of explainability, and the risk of discrimination given the nature of the software, is evident. It’s well documented that facial recognition technology has not performed well across diverse users, often misidentifying, or failing to identify, those from minority groups. It’s also worth noting that candidates were often unaware of the nature of the assessment and the biometric analysis being conducted, which raised a number of GDPR concerns.

We need to find fairness in algorithms before allowing them to make decisions

HireVue have since removed the facial analytics aspect of their product, and Amazon have only begun using a very watered-down version of their tool to clear duplicate applications. It goes to show how algorithms continue to be utilised when it is clear that there is no guarantee of fairness. We need to consider the externalities that arise when AI is applied to new spaces, particularly where humans are involved.
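Benchmarking fairness doesn't need to be complicated to begin. As a minimal sketch, assuming you have a model's shortlisting decisions alongside candidates' group labels, here is the disparate impact ratio, the measure behind the "four-fifths rule" used in US employment guidance:

```python
# A minimal fairness benchmark: the disparate impact ratio, i.e. the
# selection rate of the least-favoured group divided by that of the
# most-favoured group. US guidance flags ratios below 0.8 (the
# "four-fifths rule"). Decisions and groups below are placeholders.
import numpy as np

def disparate_impact(decisions: np.ndarray, groups: np.ndarray) -> float:
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values())

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = shortlisted
groups = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])
print(f"Disparate impact ratio: {disparate_impact(decisions, groups):.2f}")
```

A ratio below 0.8 is the conventional red flag. Real audits go further, checking error rates per subgroup, but even this simple check can surface skew before a tool is put in front of candidates.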

We need to ensure that we can explain these decisions, benchmark their fairness, and understand how we are serving our diverse society. Fairgen hopes to further this mission through the use of synthetic data, ensuring fair representation of protected subgroups, and society as a whole, wherever AI is being implemented.