Open Access Taster Release
Today we're proud to release an open-access taster version of our platform, which lets anyone upload their data, have it analysed for bias, and get back a mitigated version.
We aim to raise awareness of the biases often present within the underlying data used for research and the training of machine learning models.
Users will be able to upload their tabular data in CSV format. We'll scan it for biases before running it through our synthetic generators to rebalance and augment the dataset. Once that's complete, they'll get access to a set of summary reports, which can be downloaded alongside their bias-free and more robust dataset.
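To make the rebalancing step concrete, here's a minimal sketch in Python of the kind of transformation involved, using naive oversampling with pandas as a crude stand-in for synthetic generation. The file name and the `gender` column are purely illustrative, and our generators create new synthetic rows rather than duplicating existing ones.

```python
import pandas as pd

# Hypothetical input: any tabular dataset with a protected-attribute column.
df = pd.read_csv("applicants.csv")  # illustrative file name

# Measure proportion bias: how unevenly are the groups represented?
group_counts = df["gender"].value_counts()
print("Group representation:\n", group_counts / len(df))

# Crude rebalancing: oversample each group up to the size of the largest one.
# (A stand-in for synthetic generation, which creates new rows, not duplicates.)
target = group_counts.max()
balanced = pd.concat(
    [
        grp.sample(target, replace=True, random_state=0)
        for _, grp in df.groupby("gender")
    ],
    ignore_index=True,
)
print("After rebalancing:\n", balanced["gender"].value_counts() / len(balanced))
```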
You can check it out here: either upload your own data or try one of our preloaded datasets.
Why are we offering a taster?
We're passionate about fairness; it's our thing. Many instances of unfair AI stem from the underlying data, and it is often challenging to identify that the data contains biases and, more importantly, that the decisions or outputs built on it are biased as a result.
We've built a world-leading technology stack that lets data scientists and researchers understand, monitor and manage the fairness of their data. This taster is a small sample of what our full release will be capable of, but we believe it can help everyone better understand their datasets, from data scientists to students and hobbyists.
What can I use this for?
The platform will allow you to see how your dataset fares against three kinds of bias: proportion, allocation and performance.
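As a rough illustration of what those three checks measure, the sketch below computes naive versions of each with pandas. The `group`, `label` and `prediction` columns are hypothetical, and these are common textbook formulations rather than the exact metrics the platform reports.

```python
import pandas as pd

# Hypothetical data: one row per record, with a protected attribute,
# a ground-truth label, and a model prediction.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 0, 1, 0],
    "prediction": [1, 0, 0, 1, 1, 0, 1, 0],
})

# Proportion bias: are groups represented unevenly in the data?
proportion = df["group"].value_counts(normalize=True)

# Allocation bias: do groups receive the positive outcome at different rates?
allocation = df.groupby("group")["prediction"].mean()

# Performance bias: is the model more accurate for some groups than others?
performance = (
    df.assign(correct=df["label"] == df["prediction"])
      .groupby("group")["correct"].mean()
)

print("Representation:\n", proportion)
print("Positive-outcome rate:\n", allocation)
print("Per-group accuracy:\n", performance)
```

Large gaps between groups in any of these outputs are the kind of signal the scan is designed to surface.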

Why should I be concerned about biases?
AI presents an opportunity to better society, building on the progress we've made in minimising discrimination and bias. However, algorithms are likely to compound any remaining biases or discriminatory patterns left over from historic decisions.
We've written extensively about the risks across industries including healthcare, human resource management, insurance and banking.
What is to come from Fairgen?
This version is a small taste of our full-release platform, due to ship in January. It offers a limited feature set, and we hope to use it as a tool to raise awareness and educate users about the presence of biases.
Please get in touch with us via LinkedIn or scohen@fairgen.ai with any questions or feedback.