Technical

The AI Act simplified

July 5, 2022

By Sharon Cohen

In April 2021, the European Commission released its proposed Regulation on Artificial Intelligence, commonly known as the ‘AI Act’. To date, it is the most comprehensive attempt to regulate AI systems. Out of its 108 pages and 85 Articles, we've selected the provisions that are of interest to you. Here’s everything you need to know about (i) its aims, (ii) its approach, (iii) the high-risk category under which Fairgen’s market falls and (iv) the sanctions it provides for.


Aims

  • The AI Act is the first step in creating a global legal framework for AI innovations.
  • Like the GDPR, the AI Act will lay the foundations for a global baseline of protection of fundamental rights (notably health, safety and privacy) with regard to the uses of AI.
  • The Act’s purpose is to ensure the free movement of AI-based goods and services across borders, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems.
  • The AI Act aims to ensure that all AI inventions are trustworthy, human-centred, ethical, sustainable and inclusive.
  • One of its aims is to position the European Union as a pioneer in ethical technologies, through the establishment of international standards.


Risks approach

The AI Act takes a risk-based approach: it establishes four categories of AI systems, each with its own characteristics, permitted uses and sanctions.


  1. Unacceptable Risk AI: Any system that is contrary to the values of the EU and to fundamental rights.
    Examples: social scoring by public authorities, real-time biometric identification (facial recognition), systems that use subliminal techniques, exploitation of vulnerabilities and manipulation of behaviour.
    The Act proposes to ban all systems that fall under this category (Article 5).

  2. High Risk AI: Any system that poses a high risk to the health, safety or fundamental rights of natural persons, whether it is intended to be used as a safety component of a product or is a product in itself.
    Examples: private and public services (credit scoring), biometric ID, access to employment (automatic CV sorting), assisted medicine, administration of justice and democratic processes (asylum and border control management), exam scoring...

  3. Limited Risk AI: Any system that interacts with human beings and is therefore subject to specific transparency obligations.
    Examples: chatbots, emotion recognition or biometric categorisation systems, content generation (’deepfake’).

  4. Minimal Risk AI: Any other system. Providers are encouraged to adhere to voluntary codes of conduct addressing, for example, environmental sustainability, diversity and accessibility.
    Examples: spam filters or AI-enabled video games.
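The four tiers above can be summarised as a simple mapping from category to obligation. The sketch below is purely illustrative: the names and the example lookup table are our own invention, and the Act itself assigns tiers through detailed criteria and annexes, not a keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the proposed AI Act, with the headline
    obligation each one carries (simplified)."""
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Hypothetical examples drawn from the categories listed above.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier and headline obligation for an example use case."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"
```

For instance, `obligations("credit scoring")` yields `"HIGH: conformity assessment required"`, reflecting that credit scoring appears in the high-risk examples above.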


Closer look at the high risk category & Fairgen

According to Article 6 of the AI Act, AI systems that fall under the category of “High Risk” will systematically have to undergo a third-party conformity assessment.


To be compliant, providers of high-risk AI systems will need to:

  • Ensure a level of traceability of the system’s functioning
  • Have a risk management system
  • Ensure that the system’s operation is sufficiently transparent to its users
  • Provide up-to-date technical documentation
  • Have high quality data sets
  • Have appropriate human-machine interface tools to ensure human supervision
  • Have an appropriate level of accuracy, robustness and cybersecurity

If a high-risk AI system does not conform to the requirements set out above, providers must inform the national competent authorities.
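The obligations above amount to a compliance checklist. As a hypothetical sketch (the field names are our own shorthand, not terms from the Act), a provider could track them like this:

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Illustrative checklist mirroring the provider obligations above.
    Field names are informal shorthand, not the Act's wording."""
    traceability: bool = False
    risk_management_system: bool = False
    transparency_to_users: bool = False
    technical_documentation_up_to_date: bool = False
    high_quality_datasets: bool = False
    human_oversight_interface: bool = False
    accuracy_robustness_cybersecurity: bool = False

    def gaps(self) -> list:
        """Requirements not yet met — non-conformities a provider would
        need to address and report to the competent authorities."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]
```

A system with only traceability in place, `HighRiskCompliance(traceability=True)`, would report the six remaining obligations via `.gaps()`.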


Sanctions

In the event of a breach of the AI Act, the penalties incurred follow the same logic as those of the GDPR regime. They are determined by the nature, gravity and duration of the infringement; by whether other authorities have already imposed administrative fines on the same operator for the same infringement; and by the size and market share of the operator committing the infringement.


Non-compliance with the rules on:

  • Unacceptable Risk AI (Article 5): a fine of up to €30 million or 6% of the operator’s total worldwide annual turnover for the preceding financial year, whichever is higher.
  • All other types of AI: a fine of up to €20 million or 4% of the operator’s total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Failure to cooperate with national authorities within the EU: a fine of up to €10 million or 2% of the operator’s total worldwide annual turnover for the preceding financial year, whichever is higher.
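The “whichever is higher” rule can be made concrete with a small calculation using the thresholds listed above (the function name and tier labels are our own shorthand):

```python
def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Upper bound of the administrative fine under the proposed AI Act.

    Per the proposal: Article 5 breaches (unacceptable risk) cap at
    EUR 30M or 6% of turnover; all other breaches at EUR 20M or 4%;
    failure to cooperate with authorities at EUR 10M or 2%. The higher
    of the fixed amount and the percentage applies.
    """
    caps = {
        "unacceptable": (30_000_000, 0.06),
        "other": (20_000_000, 0.04),
        "non_cooperation": (10_000_000, 0.02),
    }
    fixed_amount, pct = caps[tier]
    return max(fixed_amount, pct * worldwide_annual_turnover)
```

For an operator with €1 billion in worldwide annual turnover breaching Article 5, the cap is max(€30M, 6% of €1B) = €60 million; for a small operator, the fixed €30 million ceiling dominates.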


Some 2022 GDPR sanctions: [figure listing examples of GDPR fines issued in 2022]

Seeing as the GDPR is now systematically enforced, it is only a matter of time before the AI Act is passed and enforced in its turn. Making sure that AI systems are compliant and fair is a far more pressing issue than it may seem.

