Technical

AI Bias #2: Insurance

July 27, 2022

By

Leon Gibson

We want to ensure AI is free from bias to make the future fairer. Insurance is a service that is mandatory in many instances, and while the industry legitimately discriminates when pricing risk, it should be free from bias. That is difficult when both the AI and the underlying data are biased.

AI Upholds Historical Insurance Biases

We want to highlight how historical biases in the data used to train insurance algorithms can result in discriminatory decisions: customers overcharged on the basis of factors such as race, geographic location and sex, while others are undercharged relative to the risk they pose. The practice of redlining, in which postcodes/zip codes were graded from low to high risk based on their ethnic makeup, historically drove these disparities. On the surface the practice has been eradicated, but legislators have identified “Hi-Tech Redlining”, where AI upholds and even exaggerates these discriminatory trends.

Removing bias from AI in insurance is critical: the insurance analytics market is expected to reach $39 billion by 2030, while insurer head-counts are projected to drop by 70-90% over the same period. That means more AI decision-making with less human oversight, so the time to identify and remove biases in these models is now.

What this means in practice 

We're now seeing legislation introduced that enables regulators to hold insurers responsible for the decisions their AI makes, with a focus on stamping out hidden biases. Consumer awareness has also shifted: disparities in policy offerings are being publicised, and in some instances new insurers have entered the market focused on serving customers who have historically faced discrimination. Amid all this, insurers may be mis-pricing their policies, leaving them over-exposed to risk in some circumstances or losing customers over the pricing of their premiums, which ultimately damages their bottom line.

Zip Code Bias in Chicago, Illinois 

A ProPublica report identified drivers with similar driving records, ages and genders, yet found stark differences in pricing based on zip code. In East Garfield Park, Chicago, a minority neighbourhood, one driver pays a premium of $753 a year. Within this zip code, the average payout by insurers per policy was $91.57, meaning premiums in the area were roughly 8x what Illinois insurers had paid out in the three years to 2017.

Across the city, in a predominantly white neighbourhood, the same Geico policy has a base rate of $376 a year, half what residents of East Garfield Park pay. The most alarming thing? Insurers there paid out an average of $104.45 per policy over the same period, meaning premiums are only around 4x the average payout.
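To make the disparity concrete, the ratios quoted above can be checked with a few lines of arithmetic. A minimal Python sketch, using only the figures from the ProPublica example (the group labels are ours):

```python
# Back-of-the-envelope check of the premium-to-payout ratios quoted above.
# Figures come from the ProPublica example; labels are ours.

neighbourhoods = {
    "East Garfield Park (minority)": (753.00, 91.57),
    "Comparison area (predominantly white)": (376.00, 104.45),
}

for name, (premium, payout) in neighbourhoods.items():
    print(f"{name}: ${premium:.0f} premium is "
          f"{premium / payout:.1f}x the ${payout:.2f} average payout")

# East Garfield Park (minority): $753 premium is 8.2x the $91.57 average payout
# Comparison area (predominantly white): $376 premium is 3.6x the $104.45 average payout
```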

Looking back to the 1930s, East Garfield Park was a community built for factory workers, populated by minorities, and coloured red for “hazardous” on the era's redlining maps. It was redlined over 80 years ago, yet here an algorithm is making decisions based on historic data fuelled by that discrimination.

Digital Redlining in Memphis

The DOJ recently settled with Trustmark, a bank with $13 billion in assets, which was fined $9m and will now be required to open offices in areas of Memphis it had underserved. The settlement resulted from “digital redlining”, whereby the bank's decision-making software had failed to appropriately distribute services to Black and Hispanic communities.

Addressing bias in AI is the right thing to do, for business and society

Bias in AI decision-making is affecting how insurers operate: they are pricing risk incorrectly and, from an ethical standpoint, failing to act in the best interests of society. The requirement for transparency in decisions is growing, and legislation now supports regulators in prosecuting algorithmic bias and discrimination; firms will be fined and will face reputational damage if they cannot justify the decisions of their models.

We all bear responsibility for finding fairness in society. Discrimination imposed by previous generations is not in line with today's societal values, yet the lasting impact of those practices remains in parts of our society to this day. AI presents an opportunity to make progress and deliver fairness, but left unaddressed it can uphold and exaggerate discriminatory practices. We want to harness AI to build a society we can be proud of.

What Fairgen does

Fairgen's aim is to use synthetic data to provide more accurate datasets and eradicate biased treatment.

  1. We improve diagnostic accuracy: we simulate accurate synthetic datasets to train AI to reach fair & transparent decisions.
  2. We better reflect the overall population: our rebalancing tool generates synthetic data without unfair advantages to any particular group or demographic (see the sketch after this list).
  3. We provide empirical & theoretical proof of the effectiveness of our approach through extensive accuracy and fairness benchmarking.
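As a rough illustration of the rebalancing idea in point 2, the sketch below simply upsamples under-represented groups to parity using pandas. Fairgen's actual tool generates new synthetic records rather than duplicating real rows, and the column names here ("group", "premium") are hypothetical:

```python
# A minimal sketch of rebalancing, assuming a tabular dataset with a "group"
# column: every group is upsampled to the size of the largest one. This is a
# stand-in for synthetic generation; it only shows where rebalancing sits in
# the pipeline.
import pandas as pd

def rebalance(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Upsample every group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

# A skewed toy dataset: 900 majority rows, 100 minority rows.
df = pd.DataFrame({
    "group":   ["majority"] * 900 + ["minority"] * 100,
    "premium": [400.0] * 900 + [750.0] * 100,
})

balanced = rebalance(df, "group")
print(balanced["group"].value_counts())  # 900 rows for each group
```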

The insurance industry has offered some of the starkest examples of discriminatory practice in society, and even as we move to eradicate discrimination, its lasting impact remains in the data. When decision-making AI prices a policy, it is skewed heavily by biased data, resulting in higher premiums for minorities and other protected groups compared with their counterparts. Insurance is mandatory in many instances, so consumers should be protected from biased decision-making; simply removing sensitive attributes from the data does not solve the issue, as footprints of those attributes remain.
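To see why deleting the sensitive column is not enough, consider a toy model that never sees group membership but prices purely by zip code. Because zip code is a strong proxy for group in this invented data (all figures are made up for illustration), the premium gap survives the removal of the sensitive attribute:

```python
# Toy demonstration that dropping a sensitive attribute leaves a footprint:
# zip code acts as a strong proxy for group membership, so a model trained
# without the group column still reproduces the premium gap.
import random

random.seed(0)

rows = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    home = "60612" if group == "A" else "60657"          # usual zip per group
    other = "60657" if group == "A" else "60612"
    zip_code = home if random.random() < 0.9 else other  # 90% proxy strength
    base = 750 if group == "A" else 380                  # historically biased premiums
    rows.append((group, zip_code, base + random.gauss(0, 20)))

# "Train" the simplest possible model: average premium per zip code,
# with the group column removed entirely.
totals = {}
for _, z, premium in rows:
    s, n = totals.get(z, (0.0, 0))
    totals[z] = (s + premium, n + 1)
model = {z: s / n for z, (s, n) in totals.items()}

# The model never saw the group label, yet its predictions differ by group.
for g in ("A", "B"):
    preds = [model[z] for grp, z, _ in rows if grp == g]
    print(f"group {g}: mean predicted premium ${sum(preds) / len(preds):.0f}")
```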

Fairgen's synthetic data solution bolsters datasets to provide fair outcomes in decision-making AI, allowing decisions to be made free from discrimination and in the best interests of all parties involved.

Sources

Navigating Fair Lending and Redlining Under the Biden Administration

Zurich Artificial Discrimination

Rocky Road Ahead for Insurers Using Consumer Data

DOJ Announce Aggressive Redlining Initiative

Minority Neighbours Pay Higher Insurance Premiums

Actuaries Consider Racial Bias in Insurance Policies