Technical

Can I be sexist without knowing your gender? Yes.

May 9, 2022

By Nathan Cavaglione

The Misconception

A misconception we keep seeing about AI fairness is that if you ignore the sensitive attribute (e.g. gender), then discrimination is out of the equation. This practice, called "fairness through unawareness", is effectively what most companies do today to avoid being labeled as discriminatory. It is applied by simply dropping the sensitive column from the dataset. The thinking goes: "If we do not know whether a data point is male or female, how could we be discriminating?" Well, you still can.

The reason is that every data point carries a gender footprint, meaning one can guess the gender of a data point even without the gender information itself. This happens through proxy variables, which reveal the person's gender with high probability. For example, 80% of physics students in the US were men in 2014, so education can be a strong proxy variable for gender. Add to the mix joint information about age, marital status or birthplace, and you can guess a data point's gender with high accuracy.

Thus, AI models trained on such datasets infer genders and discriminate against them without ever being shown the gender attribute itself. The model simply sees two clusters in the dataset and naturally advantages one of them based on the information in the data. Going even further, many insurance firms and banks explicitly recover gender and ethnicity attributes in their data science pipelines because they believe the additional information yields better model accuracy, and thus better financial returns.

Now that we have explained why fairness through unawareness does not prevent discrimination, let's back it up with some evidence.


The Experiment

1. Detecting the Data bias

Let's take the Adult Census dataset (Kaggle version), where the task is to predict whether a citizen's income exceeds $50k/year based on 11 census features such as education, occupation and... gender.

Sample data points from the Adult Census dataset
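For readers who want to follow along, here is a minimal loading sketch. It assumes the Kaggle CSV has been saved locally as adult.csv with the usual column names; adjust the path and spellings to match your copy.

```python
# Load the Adult Census data (file name and column names are assumptions
# about the Kaggle export; adapt them to your local copy).
import pandas as pd

df = pd.read_csv("adult.csv")
print(df.shape)
print(df[["age", "education", "occupation", "sex", "income"]].head())
```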

By plotting the percentage of men and women earning over $50k/year, we can clearly see the data is biased against women.

By bias, we mean demographic parity bias, i.e. the disparity between the proportions of positive outcomes for men and for women. We can express it as a single number: the difference between the percentage of men and the percentage of women with a high income. We get:

In the data, one can see the probability of having a high income as a man is 30%, while it is 11% as a woman. The demographic parity gap is 19%. Any gap above 5% is considered high.
This confirms the data is biased against women.
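A quick way to reproduce this number, continuing from the loading sketch above (the label encodings '>50K', 'Male' and 'Female' are assumptions about the Kaggle file; adjust them if your copy differs):

```python
# Demographic parity gap on the raw labels:
# gap = P(income > 50K | male) - P(income > 50K | female)
high_income = (df["income"] == ">50K").astype(int)

p_male = high_income[df["sex"] == "Male"].mean()
p_female = high_income[df["sex"] == "Female"].mean()

print(f"P(>50K | male)   = {p_male:.1%}")
print(f"P(>50K | female) = {p_female:.1%}")
print(f"Demographic parity gap = {p_male - p_female:.1%}")
```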

2. Detecting Model bias

We train and test a logistic regression model on this biased dataset to predict a person's income from the census features. We then analyze the fairness of its decisions, i.e. we look at the proportion of men and of women the model predicts as high income:

One can see the probability of a man being predicted as high income is 25%, while it is 7% for a woman, which gives a demographic parity gap of 18%.
This shows that the model also discriminates against women, simply reflecting the bias present in the data.
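Here is a sketch of this step, reusing df from the loading snippet: one-hot encode the features, train a logistic regression, and compare positive-prediction rates per gender on the test split. The exact preprocessing and split in the original experiment may differ.

```python
# Train a logistic regression on the (still gender-aware) features and
# measure the demographic parity gap of its predictions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

y = (df["income"] == ">50K").astype(int)
X = pd.get_dummies(df.drop(columns=["income"]))

X_train, X_test, y_train, y_test, sex_train, sex_test = train_test_split(
    X, y, df["sex"], test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rate_male = pred[(sex_test == "Male").values].mean()
rate_female = pred[(sex_test == "Female").values].mean()
print(f"Accuracy               = {model.score(X_test, y_test):.1%}")
print(f"Demographic parity gap = {rate_male - rate_female:.1%}")
```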

3. Fairness Through Unawareness: removing the column

We now remove the sensitive 'sex' column and run the same experiment as above:

One can see that the accuracy and the demographic parity gap are essentially unchanged. Removing the gender information has not hurt the predictive power of the model, but it has not reduced the bias either. This is because the dataset still holds the gender information hidden inside other variables, the proxy variables. It also shows that removing the sensitive column is close to useless for reducing gender bias in production models.
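The same sketch with the 'sex' column dropped before encoding, continuing from the previous snippet; the split is seeded identically so the two runs stay comparable.

```python
# "Fairness through unawareness": drop the sensitive column and retrain.
X_blind = pd.get_dummies(df.drop(columns=["income", "sex"]))

Xb_train, Xb_test, yb_train, yb_test, sexb_train, sexb_test = train_test_split(
    X_blind, y, df["sex"], test_size=0.3, random_state=0
)

blind_model = LogisticRegression(max_iter=1000)
blind_model.fit(Xb_train, yb_train)
blind_pred = blind_model.predict(Xb_test)

rate_male = blind_pred[(sexb_test == "Male").values].mean()
rate_female = blind_pred[(sexb_test == "Female").values].mean()
print(f"Accuracy               = {blind_model.score(Xb_test, yb_test):.1%}")
print(f"Demographic parity gap = {rate_male - rate_female:.1%}")
```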

4. Using a correlation matrix to find proxy variables

We might wonder what the proxy variables are.
Checking the correlations between the sensitive attribute and the other attributes answers this: relationship, marital-status and occupation are strong tells of someone's gender.
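One simple way to surface these proxies (a sketch, not necessarily the exact method behind the correlation matrix above): correlate a binary gender indicator with every one-hot encoded feature and rank the results. Cramér's V or mutual information would work just as well for categorical data.

```python
# Rank features by their absolute correlation with the gender indicator.
is_male = (df["sex"] == "Male").astype(int)
features = pd.get_dummies(df.drop(columns=["income", "sex"]))

proxy_ranking = features.corrwith(is_male).abs().sort_values(ascending=False)
print(proxy_ranking.head(10))
```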

We then try to guess the gender of a data point from only the three features above, without the gender information itself, and we reach a very high 83% accuracy. This further demonstrates that a data point carries a gender footprint hidden inside its other variables.
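A sketch of that reconstruction, continuing from the snippets above and using only the three proxy features. The column names ('relationship', 'marital.status', 'occupation') follow the Kaggle CSV and may be spelled differently in other copies.

```python
# Predict gender from proxies only: relationship, marital status, occupation.
proxies = pd.get_dummies(df[["relationship", "marital.status", "occupation"]])

Xp_train, Xp_test, sp_train, sp_test = train_test_split(
    proxies, is_male, test_size=0.3, random_state=0
)

gender_model = LogisticRegression(max_iter=1000)
gender_model.fit(Xp_train, sp_train)
print(f"Gender reconstruction accuracy = {gender_model.score(Xp_test, sp_test):.1%}")
```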

To conclude, exploiting such proxy variables is a classical technique used by banks and insurance firms to implicitly recover sensitive information about users while complying with the regulator's request not to ask for it explicitly.

Without tackling proxies, there is no real fight for fairness. This is why Fairgen offers a solution to debias proxy variables, so they can still be used to boost your model's accuracy without harming its fairness.

Get the code to run this experiment yourself here!