Technical

AI Bias #4: Human Resource Management

August 14, 2022

By Leon Gibson

Managing people is critical to an organisation's success: from large corporations to start-ups and social enterprises, people make the difference. Human Resource Management (HRM) is incredibly complex and challenging for humans, so as organisations implement Artificial Intelligence to better manage people, what are the risks, and how might biases result in undesirable outcomes?

The growing use of algorithms in Human Resource Management

Algorithms are becoming more prevalent across key functions of HR departments, such as recruitment, succession planning, compensation and performance management. Whilst the primary application across these areas is currently analytics, we have begun to see, and will continue to see, AI-driven decision-making and prompts come to the fore.

Data suggests that globally, corporations will increase spending on AI from $85.3bn in 2021 to more than $204bn by 2025, a compound annual growth rate of 24.5%. 
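
For the curious, the arithmetic behind that growth rate is simple compounding. A quick sketch (using $204bn as the 2025 figure, which the source gives only as a lower bound):

```python
# Compound annual growth rate (CAGR) behind the spending figures above.
start_spend = 85.3   # global corporate AI spending in $bn, 2021
end_spend = 204.0    # projected spending in $bn, 2025 (a lower bound)
years = 2025 - 2021

cagr = (end_spend / start_spend) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 24.4%, in line with the ~24.5% cited
```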

In this blog, we’re going to touch on a variety of ways AI is being implemented within HR departments post-recruitment to highlight how biases can pose a risk to organisations and their employees if left unchecked. We’ll circle back on talent acquisition and recruitment with another post! 

Predicting which employees may quit

Since as early as 2015, it has been well documented that organisations such as Credit Suisse and IBM have used algorithms to identify employees likely to quit their jobs, even providing feedback on why this might be the case. Management could then approach these cases and seek to retain the staff through various methods of incentivisation or motivation, and these predictions also factored into succession planning. Credit Suisse reported savings of up to $70m per year from its programme, while IBM saw a return of $300m alongside a 25% reduction in turnover for critical roles.
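
Neither company has published the details of its model, so as a minimal sketch: attrition predictors of this kind are typically framed as a binary classifier over historical HR records. The file and feature names below are hypothetical, not Credit Suisse's or IBM's.

```python
# Minimal attrition-prediction sketch: a binary classifier trained on
# historical HR records. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr_records.csv")
features = ["tenure_years", "salary_percentile", "months_since_promotion",
            "weekly_overtime_hours", "manager_changes"]
X, y = df[features], df["quit_within_12_months"]  # label built from past leavers

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score current staff: this probability is what ends up feeding retention
# offers, succession plans and, implicitly, promotion decisions.
df["quit_risk"] = model.predict_proba(X)[:, 1]
```

Note that the label itself is built from past leavers, which is exactly where the biases discussed below can creep in.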

The use of this data is concerning, as an employee could be flagged by an algorithm as likely to quit when in fact they are not. It is reasonable to expect that such a flag would weigh on an organisation's decisions about promotions and responsibilities. Imagine your career progression being hampered because an algorithm had assessed you to be less committed to the company than your colleague.

This is a huge risk, as organisations are limited by (1) historical data on employee habits that may not be representative, whether through bias or misinterpretation, and (2) the amount of data they can access, which is capped by the size of the organisation.
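
That second limitation is easy to underestimate. A small simulation with made-up numbers shows how noisy any per-group estimate becomes when a subgroup contributes only a handful of records:

```python
# How sample size affects the reliability of a per-group estimate.
import numpy as np

rng = np.random.default_rng(0)
true_quit_rate = 0.10  # assume 10% of this group quit in a given year

for n in (2000, 50):  # a large majority group vs a small subgroup
    estimates = rng.binomial(n, true_quit_rate, size=10_000) / n
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n={n:>4}: 95% of measured rates fall in {lo:.1%}-{hi:.1%}")
# n=2000: roughly 8.7%-11.3%; n=50: anywhere from ~2% to ~18%.
```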

Algorithmic dismissals 

The use of AI to aid or nudge internal decision-making is one thing, but there are a growing number of cases where algorithms have been making the final decision on employee dismissals.

In one example, Estée Lauder was forced to settle with employees who had been dismissed on the basis of an AI interview tool. The employees were asked several questions through an online video portal, after which the software analysed their responses and determined who should be fired, without providing any further explanation. When asked to defend the AI's decision, Estée Lauder could point only to the 15,000 data points analysed in its modelling and was unable to explain how the tool reached its conclusions.

Something similar occurred when Uber dismissed drivers in the UK and Portugal after its facial verification software repeatedly failed to verify their identities. The real issue here is that the dismissals came with no human oversight, and Uber went as far as issuing letters of complaint to Transport for London (TfL), leaving the drivers unable to obtain the licence needed to carry passengers for any other operator.

Uber was using Microsoft facial recognition software, one of a number of such tools to have faced controversy of its own. An MIT study found that these technologies had an error rate of 0.8% for light-skinned men but 34.7% for dark-skinned women. This landmark Uber case focused on the risk of false negatives, particularly for ethnic minorities, and the court ruled that the lack of human oversight constituted profiling and therefore breached EU and UK GDPR laws.
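
A disparity like that is straightforward to surface if a verification system's outcomes are actually audited. A hypothetical sketch, assuming a log of attempts (the file and column names are assumptions, not Uber's or Microsoft's data):

```python
# Per-group false-negative audit for a face verification system.
import pandas as pd

log = pd.read_csv("verification_attempts.csv")
# assumed columns: group, genuine_match (ground truth: it really was the
# account holder), verified (the system's decision)

genuine = log[log["genuine_match"]]  # attempts by the real driver
fnr = 1 - genuine.groupby("group")["verified"].mean()
print(fnr.sort_values(ascending=False))
# A gap on the order of 34.7% vs 0.8% means the same honest driver faces
# wildly different odds of being locked out, then dismissed, by skin tone.
```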

We’re at a key stage of AI integration for the industry

Each of these use cases highlights AI being used for decision-making in varied contexts within the HR function, across several industries. The issue is that these decisions carry very serious, personal implications for employees whilst offering little to no explanation and varied levels of human oversight.

Whether the AI is making final decisions or simply nudging and prompting management, it can skew a decision to promote or dismiss a member of staff with little justification. The risk is that biases in the data used to train models can result in discrimination against protected groups, owing to a lack of data and poor historical representation.
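
One widely used screen for this kind of skew is the "four-fifths rule" from US employment-selection guidelines: if any group's rate of favourable outcomes falls below 80% of the best-off group's rate, the decisions warrant investigation. A minimal sketch over a hypothetical decision log:

```python
# Four-fifths (80%) rule check over a model's decision log.
# The file and column names are hypothetical.
import pandas as pd

decisions = pd.read_csv("model_decisions.csv")
# assumed columns: protected_group, favourable (True = retained/promoted)

rates = decisions.groupby("protected_group")["favourable"].mean()
ratios = rates / rates.max()
print(ratios)

flagged = ratios[ratios < 0.8]
if not flagged.empty:
    print("Potential adverse impact against:", list(flagged.index))
```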

Accuracy in outputs is one aspect, but reasonable and fair use of these tools in decision-making is critical if their adoption is to be a force for good. Even setting aside the debate on the accuracy and efficacy of facial recognition technology, we must consider how these systems are applied to decision-making. It is apparent that these applications, especially where humans and their complexities are involved, represent another step up in difficulty for Artificial Intelligence.

We need to ensure that as we deploy AI across industries, it is done thoughtfully and in a way that supports us in building a fairer society. That also means ensuring the data we train models on is representative of society and provides the necessary protections for minorities and subgroups, so they are fairly represented in the underlying data and, as a result, can be treated fairly.
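
Representativeness, at least, can be measured before a model is ever trained. A minimal sketch, with the reference shares standing in for real census or workforce figures:

```python
# Compare the training data's demographic mix against a reference
# population. The file and reference shares below are assumptions.
import pandas as pd

train_share = (pd.read_csv("training_data.csv")["group"]
               .value_counts(normalize=True))
reference = pd.Series({"group_a": 0.48, "group_b": 0.34, "group_c": 0.18})

gap = (train_share - reference).abs()
print(gap[gap > 0.05])  # groups mis-represented by more than 5 points
```

Checks like this are no substitute for human oversight, but they are a cheap first line of defence.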