
White House sets out Citizens’ Rights for AI

Published on
March 26, 2024
Written by
Leon Gibson, Business Development

US Bill of Rights for AI

We’ve detailed the impact that the undisciplined deployment and use of Artificial Intelligence can have on individuals and society, very often unbeknownst to those being affected. A number of cases have been brought against banks, insurers and employers highlighting unethical practices driven by algorithmic decision making. The latest blueprint from the White House Office of Science and Technology Policy delivers five principles intended to “guide the design, use and deployment of automated systems to protect the public”, paving the way for critical legislation in the space.

Shaping AI in the interests of the public

It should be noted that whilst this blueprint is not legally binding, it is the culmination of recent efforts both to regulate automated systems and to build a framework that holds organisations accountable for their use. It will pave the way for future policies that support legislators in protecting individual citizens and wider society from malign AI technology.

The five principles are:

  • Safe and Effective Systems 
  • Algorithmic Discrimination Protections 
  • Data Privacy 
  • Notice and Explanation 
  • Human Alternatives, Consideration and Fallback 

Safe and Effective Systems details how systems should be developed with the inclusion of diverse communities, stakeholders and domain experts to identify concerns, risks and potential impacts. It suggests that independent evaluation and reporting are essential before deployment. Further to this, not deploying a system, or retrospectively withdrawing it from use, should be considered where it proves not to be safe and effective.

In practice, the use of systems to assess suitability for parole has been identified as high risk by legislators in a number of states. Having discussed the risks surrounding parole assessment technologies in a previous blog, it’s positive to see independent validation of these systems called for to ensure they are safe and effective.

Algorithmic Discrimination Protections encourage designers, developers and deployers of automated systems to take proactive and continuous measures to protect individuals and communities from discrimination. The principle also highlights the importance of a plain language assessment that sets out disparity testing results and mitigation information.

Real world context for this guideline is visible in human resources, where the government itself has highlighted the potential for discrimination when AI is deployed in job application processes.
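
To make disparity testing concrete, the sketch below compares selection rates across two groups for an automated decision and computes a simple disparate impact ratio. The data, group labels and the 80% rule of thumb are illustrative assumptions rather than anything specified in the White House guidance or any particular toolkit.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return float(rates.min() / rates.max())

# Hypothetical decisions produced by an automated screening system
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1, 1, 0, 1, 0, 0, 0, 1],
})

rates = selection_rates(decisions, "group", "selected")
print(rates)
print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
# Ratios well below roughly 0.8 are often treated as a signal to investigate further.
```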

Data Privacy should be central to how those deploying automated systems obtain and use the data of those engaging with the system. Specifically, it discusses clarity around terms and conditions and the use of “inferred data”. It goes further in detailing the use of surveillance technologies, such as biometrics, and how these are collected across settings such as the workplace and education.

Notice and Explanation highlights that users should be alerted when an automated system is being used, with details provided on how and why it contributes to outcomes that affect them. The explainability element is crucial here: it suggests that users need to be able to access a plain language explanation of how and why an outcome has been determined.

We’ve discussed extensively how damaging black box algorithms that offer no explanation can be to protected subgroups, particularly in the allocation of credit and other financial tools. The White House has specifically said that the complexity of a system, or a lack of understanding by those who deploy it, is not a valid excuse for failing to provide an explanation.
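
As a rough illustration of what a plain language explanation could look like, the sketch below assumes a simple linear scoring model and turns each feature’s contribution into a sentence. The feature names, coefficients and applicant values are entirely hypothetical; real systems would need far richer explanations, but the idea of surfacing why a score moved is the same.

```python
# Hypothetical coefficients of a simple linear credit-scoring model
coefficients = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
# Hypothetical (already normalised) values for one applicant
applicant = {"income": 1.2, "debt_ratio": 1.5, "years_employed": 0.5}

# Per-feature contribution to the final score
contributions = {name: coefficients[name] * applicant[name] for name in coefficients}
score = sum(contributions.values())
decision = "approved" if score >= 0 else "declined"

print(f"Application {decision} (score {score:.2f}).")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "raised" if value > 0 else "lowered"
    print(f"- {name.replace('_', ' ')} {direction} the score by {abs(value):.2f}")
```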

Human Alternatives, Consideration and Fallback guidelines require companies to provide access to a person who can quickly remedy any problems encountered. This should allow those engaging with automated systems to opt out and engage with a human where appropriate, at no added burden to the individual.

In the real world, for example, those faced with automated telephone systems should be able to easily reach a human who can assist them; consider the elderly, or those speaking with an accent the system struggles to understand. Further to this, medical diagnostics present a high stakes situation in which the ability to seek human alternatives or escalate concerns is critical; failure to provide this could harm patient outcomes.

The First Step for Regulatory Enforcement

The introduction of these guidelines sets out a framework for bills to follow in the coming years, allowing legislators to protect individuals and groups from automated systems that could threaten their civil liberties. Much of the guidance ensures that traditional rights are upheld in the face of these new technologies, a promising development in the effort to ensure we build and integrate AI into society fairly.

The Fairgen toolkit that we have developed allows organisations across a number of sectors to ensure the fair and proper use of algorithms through extensive de-biasing, re-weighting and benchmarking of their systems. This ensures that these systems are as fair as possible and that suitable mitigations are in place should outcomes be questioned or queried, something which is critical to demonstrating best practice in the deployment of these tools.
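
As one generic illustration of the re-weighting idea, the sketch below assigns each training record a weight so that group membership and outcome look statistically independent in the data. The column names and values are assumed for the example; this is a textbook-style sketch, not Fairgen’s implementation.

```python
import pandas as pd

def reweighting_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Weight each row by P(group) * P(label) / P(group, label)."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def weight(row) -> float:
        expected = p_group[row[group_col]] * p_label[row[label_col]]
        observed = p_joint[(row[group_col], row[label_col])]
        return expected / observed

    return df.apply(weight, axis=1)

# Hypothetical training data with an assumed sensitive attribute and outcome label
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 1],
})
data["weight"] = reweighting_weights(data, "group", "label")
print(data)  # the weighted rows would then feed into model training or evaluation
```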