December 28, 2022

White House sets out Citizens' Rights for AI

The White House has just released its principles for citizens' rights in the face of the ever-growing use of Artificial Intelligence. We break down what it means for those deploying AI.

US Bill of Rights for AI

We’ve detailed the impact that undisciplined deployment and use of Artificial Intelligence can have on individuals and society, very often unbeknownst to those affected. A number of cases have been brought against banks, insurers and employers highlighting unethical practices driven by algorithmic decision making. The latest bill from the White House Office of Science and Technology Policy delivers five principles intended to “guide the design, use and deployment of automated systems to protect the public”, paving the way for critical legislation in the space.

Shaping AI in the interests of the public

It should be noted that whilst this bill is not legally binding, it is the culmination of recent efforts both to regulate automated systems and to build a framework holding organisations accountable for their use. It paves the way for future policies to support legislators in protecting individual citizens and wider society from malign AI technology.

The five principles are:

  • Safe and Effective Systems 
  • Algorithmic Discrimination Protections 
  • Data Privacy 
  • Notice and Explanation 
  • Human Alternatives, Consideration and Fallback 

Safe and Effective Systems details how systems should be developed with the inclusion of diverse communities, stakeholders and domain experts to identify concerns, risks and potential impacts. It suggests that independent evaluation and reporting are essential before deployment. Further to this, the option of not deploying a system, or retrospectively withdrawing it from use, should be considered where it is not safe and effective.

In practice, systems used to assess suitability for parole have been identified as high risk by legislators in a number of states. Having discussed the risks surrounding parole assessment technologies in a previous blog, it’s positive to see that independent validation of these systems is required to ensure they are safe and effective moving forward.

Algorithmic Discrimination Protections encourages designers, developers and deployers of automated systems to take proactive and continuous measures to protect individuals and communities from discrimination. It also highlights the importance of a plain-language assessment presenting disparity testing results and mitigation information.

Real-world context for this guideline is visible in human resources, where even the government has identified the potential for discrimination when AI is deployed in job application processes.
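Disparity testing of the kind described above can be sketched in a few lines: compare a model's selection rates across demographic groups and flag large gaps. The group labels, data, and the four-fifths (80%) threshold below are illustrative assumptions for this example, not taken from the bill itself.

```python
# Minimal sketch of disparity testing over automated decisions.
# Group labels, data, and the 80% ("four-fifths") rule of thumb
# used here are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate; < 0.8 flags disparity."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.333... -> below 0.8, flagged
```

A report like the one the principle calls for would pair these numbers with plain-language notes on what mitigation was applied.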

Data Privacy should be central to how those deploying automated systems obtain and use the data of those engaging with the system. Specifically, it discusses clarity around terms and conditions and the use of “inferred data”. It goes further in detailing the use of surveillance technologies, such as biometrics, and how these are collected across settings such as the workplace and education.

Notice and Explanation highlights that users should be alerted when an automated system is being used, with details provided on how and why it contributes to outcomes that impact them. Explainability is crucial here: it suggests that users need access to a plain-language explanation of how and why an outcome has been determined.

We’ve discussed extensively how damaging black-box algorithms with no explanation can be, particularly to protected subgroups, and particularly in the allocation of credit and other financial tools. The White House has specifically said that the complexity of a system, or a lack of understanding by those who deploy it, is an invalid excuse for not providing an explanation.
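As a rough illustration of what a plain-language explanation could look like in a credit setting, the sketch below turns the per-feature contributions of a simple linear scoring model into a sentence. The feature names, weights, and threshold are invented for the example; real systems would use a proper attribution method.

```python
# Hedged sketch: generate a plain-language explanation from the
# per-feature contributions of a linear scoring model.
# Feature names, weights, and the threshold are invented.
def explain_decision(weights, values, threshold):
    contributions = {f: weights[f] * values[f] for f in weights}
    score = sum(contributions.values())
    outcome = "approved" if score >= threshold else "declined"
    # the single largest contributor (by magnitude) to the outcome
    top = max(contributions, key=lambda f: abs(contributions[f]))
    return (f"Your application was {outcome} (score {score:.1f} vs "
            f"threshold {threshold}). The largest factor was "
            f"'{top}' ({contributions[top]:+.1f}).")

weights = {"income": 0.5, "missed_payments": -2.0}
values = {"income": 60, "missed_payments": 3}   # income in $k
print(explain_decision(weights, values, threshold=20))
```

The point is not the arithmetic but the output: a sentence a non-expert can act on, rather than a raw score.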

Human Alternatives, Consideration and Fallback requires companies to provide access to a person who can quickly remedy problems encountered. This should allow those engaging with automated systems to opt out and engage with a human where appropriate, at no added burden to the individual.

In the real world, for example, those faced with automated telephone systems should be able to navigate easily to a human who can assist them; consider the elderly, or those speaking with an accent the system struggles to understand. Further to this, medical diagnostics present a high-stakes situation in which the ability to seek human alternatives or escalate concerns is critical; failure to provide it could harm patient outcomes.
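One simple way to implement such a fallback is a routing rule that hands the interaction to a human whenever the system's confidence is low or the user asks for one. The confidence score and threshold below are hypothetical placeholders for whatever the deployed system actually produces.

```python
# Sketch of a human-fallback rule for an automated phone system.
# The confidence score and 0.8 threshold are illustrative assumptions.
def route(transcription_confidence, user_requested_human, threshold=0.8):
    """Return which channel should handle the caller."""
    if user_requested_human or transcription_confidence < threshold:
        return "human"
    return "automated"

print(route(0.95, False))  # automated
print(route(0.40, False))  # human: system struggles with the accent
print(route(0.95, True))   # human: opt-out at no added burden
```

The key property is that the human path is always reachable, regardless of how confident the automated system is.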

The First Step for Regulatory Enforcement

The introduction of these guidelines sets out a framework for bills to follow in the coming years, allowing legislators to protect individuals and groups from automated systems that could threaten their civil liberties. Much of the guidance ensures that traditional rights are upheld in the face of these new technologies, a promising development in the fight to build and integrate AI into society fairly.

The Fairgen toolkit that we have developed allows organisations across a number of sectors to ensure the fair and proper use of algorithms through extensive de-biasing, re-weighting and benchmarking of their systems. This ensures these systems are as fair as possible and that suitable mitigations have been made in the event that outcomes are questioned or queried, something which is critical to proving best practice in the deployment of these tools.
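As an illustration of what a re-weighting step can involve (the function below is a generic sketch, not Fairgen's actual interface), one common approach assigns each training sample a weight inversely proportional to its group's frequency, so that under-represented groups carry equal total weight:

```python
# Generic re-weighting sketch: inverse-frequency sample weights so
# each group contributes equal total weight to training.
# Not Fairgen's actual API; group labels are illustrative.
from collections import Counter

def group_weights(groups):
    """groups: list of group labels, one per sample -> per-sample weights."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # each group's total weight becomes n / k
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]
print(group_weights(groups))  # [0.666..., 0.666..., 0.666..., 2.0]
```

Weights like these can typically be passed to a learner's `sample_weight` parameter, so the minority group is no longer drowned out during training.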