AI Bias #5: Legal

Leon Gibson
3 min read
December 30, 2022
Overview
In previous posts we have considered the impact of biased AI in contexts from healthcare to insurance, where discriminatory decisions carry serious consequences. However, biased decisions in the legal sector could pose the biggest threat to an individual's liberties of any of the areas we've looked at.

Growing Applications of AI in the Legal setting

AI can be used in a variety of ways within the legal setting, from automating the sorting and organisation of case files at law firms to reviewing and screening contracts. The market for AI software in the legal industry is expected to grow from $448m in 2022 to $2.6bn by 2027, a compound annual growth rate of 29.17%.

As with other sectors, the initial applications of AI usually have limited scope and a degree of human oversight, which helps avoid incorrect or discriminatory decisions and their associated negative impacts. If a file is missing, a human can relocate it, and a contract screening tool flags discrepancies for review rather than acting on them. Put simply, the impact of a poor algorithmic decision should be weighed against the likelihood of the tool making an error.
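One way to frame that trade-off is as an expected-harm calculation: how often the tool gets a decision wrong, multiplied by how much damage each wrong decision does. The sketch below is purely illustrative, with made-up error rates and cost figures, but it shows why the same error rate can be tolerable for document sorting and unacceptable for decisions about people.

```python
# Illustrative only: weighing the expected harm of automated errors.
# All error rates, harm figures and case counts are hypothetical.

def expected_harm(error_rate: float, harm_per_error: float, cases: int) -> float:
    """Expected total harm if every case is decided automatically."""
    return error_rate * harm_per_error * cases

# A low-stakes task (misfiled document) vs a high-stakes one (wrongful investigation)
misfiled = expected_harm(error_rate=0.05, harm_per_error=10, cases=10_000)      # cheap to fix
wrongful = expected_harm(error_rate=0.05, harm_per_error=50_000, cases=10_000)  # life-altering

print(f"Expected harm, document sorting: {misfiled:,.0f}")
print(f"Expected harm, investigation decisions: {wrongful:,.0f}")
# The same error rate justifies full automation in one setting
# and demands human oversight in the other.
```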


However, applications of AI and machine learning are already directly shaping decision-making, as we'll see below. The difficulty is that even where the final decision still lies with a judge or a case manager, departing from it means explicitly overriding the algorithm's suggestion, a pressure to defer that is known as automation bias.

These decisions can have catastrophic consequences; even if they are later overturned on appeal, the distress and damage caused could be enough to ruin a life.




Allegheny County's algorithm triggering false neglect investigations

The Associated Press published an investigative piece highlighting the role of an algorithm in determining whether a family is investigated for child neglect. Child welfare officials in Allegheny County employed the tool to process case reviews more quickly. The problem?

CMU researchers found that between 2016 and 2018 the scores suggested that 32.5% of Black children should be subject to a “mandatory” investigation, whilst only 20.8% of white children were flagged for the same treatment in similar circumstances.
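One simple way to quantify a gap like this is to compare the two flag rates directly. The snippet below just does that arithmetic on the figures reported above; the "four-fifths" comparison at the end borrows a rule of thumb from US employment-discrimination guidance, which is not the standard applied to child-welfare screening, so treat it purely as a familiar yardstick.

```python
# Comparing the reported "mandatory investigation" flag rates.
# The four-fifths comparison is borrowed from US employment guidance
# purely as a familiar yardstick, not as the applicable legal test here.

black_flag_rate = 0.325   # Black children flagged for mandatory investigation
white_flag_rate = 0.208   # white children flagged in similar circumstances

relative_rate = black_flag_rate / white_flag_rate
impact_ratio = white_flag_rate / black_flag_rate

print(f"Black children flagged {relative_rate:.2f}x as often")  # ~1.56x
print(f"Impact ratio: {impact_ratio:.2f}")                      # ~0.64, well below the 0.8 threshold
```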

We've spoken in a number of our blogs about the impact that historically discriminatory training data has on an algorithm's outputs. The US Department of Health and Human Services itself has identified racial disparities at "nearly every major decision-making point" of the child welfare system.

In Allegheny County, officials disagreed with the tool's assessment in about a third of the cases it flagged. Child welfare officers still had the final say on whether to proceed with an investigation but, as mentioned above, overriding the algorithm is a difficult call for staff members already under immense pressure.

Of course, instances where neglect is evident require an investigation, but the impact of a falsely flagged investigation can be equally devastating. Investigations are often invasive, requiring input from workplaces and educators, and can damage what is likely already a difficult set of family circumstances, ultimately leading to worse outcomes for vulnerable children.

 

Biased Reoffender Risk Assessments

Across the US, dozens of risk assessment tools feed into sentencing decisions through complex black-box algorithms. These are designed to estimate a defendant's likelihood of re-offending; however, a ProPublica report found that the accuracy of these scores was not much greater than a coin flip, at 61%. In addition to this lack of predictive power, the tool it examined falsely flagged Black defendants as future criminals at almost twice the rate of white defendants, a disparity that held even when other factors were taken into account.
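The disparity ProPublica documented is easy to miss if you only look at headline accuracy: two groups can be scored with similar overall accuracy while one group absorbs far more of the false positives. The sketch below illustrates that effect with made-up confusion-matrix counts; the numbers are not the COMPAS data, just an example of how the pattern arises.

```python
# Hypothetical confusion-matrix counts (not the COMPAS data):
# identical overall accuracy, very different false positive rates.

def rates(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_positive_rate = fp / (fp + tn)   # non-reoffenders wrongly labelled high risk
    return accuracy, false_positive_rate

groups = {
    "Group A": (320, 200, 350, 130),   # errors skew towards wrongly flagging people
    "Group B": (220, 100, 450, 230),   # errors skew towards underestimating risk
}

for name, counts in groups.items():
    acc, fpr = rates(*counts)
    print(f"{name}: accuracy={acc:.0%}, false positive rate={fpr:.0%}")
# Both groups: 67% accuracy. Group A's false positive rate (36%) is
# double Group B's (18%), so the same "accurate" tool harms one group more.
```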

Federal advice mandates multi-level approaches to sentencing, including consideration of risk assessments such as those described above; the tool in this instance was provided by a private company, Northpointe.

It goes on to highlight the case of Paul Zilly, a Black defendant charged with the theft of a lawnmower and several gardening tools. He had agreed to a one-year sentence and follow-up supervision as part of a plea deal. The judge presiding over his case threw out the deal struck by the prosecution and defence, sentencing Zilly to two years in state prison followed by three years of supervision, because his score had suggested he was at high risk of violent re-offending.

The whole report is worth a read; you'll find it here. But even from the extracts we've highlighted, it is clear that there is discrimination in the algorithms themselves, and also automation bias in the humans making the final decision, as in Zilly's case.

Fair AI in the Legal space

These cases highlight the biases and other factors that stand in the way of fair decisions in a legal setting. It's crucial that humans and AI work in tandem to deliver decisions that are explainable and consistent.

We believe that machine learning, when applied properly, can help address discrimination and variance in decision-making across the judicial system.
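To make that concrete, the sketch below shows the sort of audit we mean: given a log of decisions, compare adverse-outcome rates across demographic groups and across individual decision-makers. The data, names and numbers are entirely hypothetical, and a real audit would need far more care around confounders and sample sizes.

```python
# Minimal audit sketch over a hypothetical decision log:
# the same tally surfaces possible discrimination (by group)
# and inconsistency (by decision-maker).

from collections import defaultdict

decisions = [
    # (decision_maker, group, outcome)  1 = adverse decision, 0 = not
    ("judge_a", "group_1", 1), ("judge_a", "group_2", 0),
    ("judge_b", "group_1", 1), ("judge_b", "group_2", 1),
    ("judge_a", "group_1", 0), ("judge_b", "group_2", 0),
]

def adverse_rate_by(key_index: int) -> dict:
    totals, adverse = defaultdict(int), defaultdict(int)
    for record in decisions:
        key = record[key_index]
        totals[key] += 1
        adverse[key] += record[2]
    return {k: adverse[k] / totals[k] for k in totals}

print("Adverse rate by group:         ", adverse_rate_by(1))  # flags possible discrimination
print("Adverse rate by decision-maker:", adverse_rate_by(0))  # flags inconsistency between judges
```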

That being said, when left unchecked these tools can have devastating consequences, and we need to ensure that models are free from discrimination. We're excited to be tackling the hurdles society faces as AI is integrated into our lives; some of our recent research has been incredibly promising, and we cannot wait to share it.
