Referring back to my earlier blog entry, today's discussion centres on the classification of AI systems within the framework of the AI Act (AIA). The AIA adopts a risk-based approach to regulating AI systems. Accordingly, risk is defined in Article 3.1a as “the combination of the probability of an occurrence of harm and the severity of that harm”. It should be noted, however, that all AI systems, regardless of their level of risk, are subject to minimum obligations. This blog post discusses only prohibited AI systems and high-risk AI systems.

The first category of AI systems regulated by the AIA comprises those presenting unacceptable risks, which are prohibited outright. Article 5 explicitly bans the placing on the market and use of AI systems in various spheres, including:

(a) those deploying subliminal techniques beyond human awareness, or purposefully manipulative or deceptive techniques;
(b) those exploiting the vulnerabilities of individuals;
(c) biometric systems that infer sensitive categories of personal data;
(d) social scoring systems;
(e) real-time remote biometric identification systems in publicly accessible areas for law enforcement, permitted only when strictly necessary for predefined purposes;
(f) systems predicting criminal behaviour based solely on profiling or personality trait analysis, without human assessment;
(g) systems that expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
(h) emotion recognition systems used in workplaces and educational institutions.

Title III of the AIA introduces the second category of regulated AI systems: those posing a high level of risk. Article 6 delineates these systems, encompassing: AI systems intended to serve as safety components of products; AI systems falling under the harmonised legislation listed in Annex II of the AI Act; and AI systems specified in Annex III, including those used for critical infrastructure, education and vocational training, and access to essential private and public services and benefits. Notably, any AI system that performs profiling of natural persons is automatically deemed high risk.

Nevertheless, systems listed in Annex III are exempt from the high-risk classification if they pose no significant threat to the health, safety, or fundamental rights of individuals and do not materially influence decision-making outcomes. To rely on this exemption, however, one must satisfy one or more of the prescribed criteria. Article 7 clarifies that this enumeration is not exhaustive, granting the Commission authority to add to or amend the list of high-risk AI systems. Providers of high-risk AI systems must adhere to the stringent obligations set out in Chapter 2. However, the interpretation and practical implementation of these articles remain open to scrutiny, as initial impressions suggest that compliance may prove exceedingly challenging.

Unveiling the AI Act - written by Maria Mot