03/24/2025

AI Flash: European Commission guidelines on prohibited practices under Art. 5 of the AI Regulation

Since February 2, 2025, the provisions of the AI Regulation on prohibited AI systems, namely those that pose an unacceptable risk, have applied.

In February of this year, the European Commission published guidelines on prohibited practices under Art. 5 of the AI Regulation. The AI Regulation follows a risk-based approach and defines a comprehensive catalog of prohibited practices in the field of artificial intelligence (Art. 5 AI Regulation). Art. 5 of the AI Regulation prohibits certain AI practices and sanctions infringements with fines of up to EUR 35 million or, in the case of companies, up to 7% of the total worldwide annual turnover of the previous financial year, whichever is higher. Given the severity of these sanctions, the relevant provisions should leave as little room for interpretation as possible. Nevertheless, the regulation contains a number of undefined legal terms.

The guidelines now address this issue by providing non-binding explanations on definitions, applicability and enforcement options. This article provides a brief overview of the individual prohibitions. For a comprehensive summary of the guidelines and a detailed explanation of the provisions of Art. 5 of the AI Regulation, we recommend our white paper, which will be available for download shortly.

 

Art. 5 para. 1 a) AI Regulation: AI systems that influence persons outside their awareness or intentionally use manipulative or deceptive techniques 

According to this provision, it is prohibited to use AI to exert a subliminal influence on a person outside their awareness or to intentionally manipulate or deceive them. The guidelines state that this primarily refers to materially impairing a person's ability to make decisions without them being consciously aware of it, thereby causing or being likely to cause material or immaterial harm. Prohibited techniques are therefore subliminal techniques that escape a person's awareness, as well as purposefully manipulative or deceptive techniques. As a result, the person concerned makes a decision, based on coercion, manipulation or deception, that they would not otherwise have made. In addition, a causal link between the manipulation and the person's behavior is required.

 

Art. 5 para. 1 b) AI Regulation: AI systems that exploit the vulnerability of a natural person

Closely related to the above prohibition is the ban on using AI systems to exploit the vulnerabilities of individuals. In this context, older people, children, people with disabilities and people in special socio-economic situations, such as extreme poverty, are considered particularly vulnerable. As an example, the guidelines refer to digital toys that use targeted manipulative mechanisms to encourage excessive consumption or risky behavior. Here, too, a sufficient causal link and material or immaterial harm are required for the ban to apply.

For both of the above prohibitions, it is important to distinguish between unlawful manipulation and legitimate persuasion. The dividing line is transparency: legitimate persuasion relies on openly presented factual information and respects the individual's freedom of choice.

 

Art. 5 para. 1 c) AI Regulation: AI systems for the assessment or classification of persons on the basis of their social behavior or personal characteristics

This provision prohibits AI systems that socially score individuals on the basis of data about their social behavior or personal characteristics and thereby disadvantage them in a social context unrelated to the context in which the data was collected. In addition, the detrimental treatment must be disproportionate or unjustified in relation to the social behavior or its gravity. The data relevant to this prohibition is usually behavior-related: it generally includes actions, conduct and habits within society, as well as information about gender, ethnicity, family situation, etc. Here too, a causal link between the assessment and the occurrence of a disadvantage is required.
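
To make the mechanism tangible, the following minimal sketch illustrates the kind of cross-context scoring logic this prohibition is aimed at. It is a purely hypothetical example: all feature names, weights and thresholds are invented for illustration and do not reflect any real system.

    # Hypothetical sketch of prohibited social scoring (invented values).
    # Behavioral data collected in one social context (e.g., online conduct).
    behavior = {
        "missed_club_meetings": 4,
        "negative_forum_posts": 7,
        "jaywalking_reports": 1,
    }

    # Invented weights that turn unrelated behavior into a single "social score".
    weights = {
        "missed_club_meetings": -2.0,
        "negative_forum_posts": -1.5,
        "jaywalking_reports": -5.0,
    }

    social_score = 100 + sum(weights[k] * v for k, v in behavior.items())

    # The disadvantage occurs in a context unrelated to the data collection:
    # here, a rental application is rejected on the basis of the score.
    APPROVAL_THRESHOLD = 80
    approved = social_score >= APPROVAL_THRESHOLD
    print(f"social score: {social_score:.1f}, rental application approved: {approved}")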

 

Art. 5 para. 1 d) AI Regulation: Carrying out risk assessments of a person with regard to the commission of a criminal offense

Art. 5 para. 1 d) of the AI Regulation prohibits AI systems for carrying out risk assessments with regard to the commission of a criminal offense. This concerns the individual risk assessment and prediction of criminal offenses based exclusively on profiling or on the assessment of personality traits and characteristics. Risk assessments concerning administrative offenses, by contrast, are not prohibited, as their prosecution interferes less severely with fundamental rights. The AI systems in question work by recognizing and linking patterns in historical data and generating risk values for prediction on this basis. Mirroring the ban on automated decision-making, the prohibition does not apply where the AI is only used to support a human assessment.
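
As a simplified illustration of such risk scoring, the following hypothetical sketch maps profiling features to a risk value. The weights stand in for patterns a model might have extracted from historical data; all feature names and numbers are invented.

    import math

    # Hypothetical sketch: a profiling-based risk score of the kind the
    # prohibition addresses. All features, weights and the bias are invented.
    def risk_value(features: dict[str, float], weights: dict[str, float], bias: float) -> float:
        """Map profiling features to a pseudo-probability of offending."""
        z = bias + sum(weights[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to [0, 1]

    # Weights standing in for patterns mined from historical data.
    weights = {"age": -0.03, "prior_police_contacts": 0.8, "neighborhood_rate": 1.2}
    # Assessment rests solely on profiling data, which is what triggers the ban.
    person = {"age": 27.0, "prior_police_contacts": 1.0, "neighborhood_rate": 0.4}

    print(f"predicted risk: {risk_value(person, weights, bias=-1.5):.2f}")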

 

Art. 5 para. 1 e) AI Regulation: Creation of a database for untargeted facial recognition 

This prohibition covers the creation or expansion of facial recognition databases through the untargeted scraping of facial images, for example from the internet or from CCTV footage. The act of biometric identification itself is addressed by the subsequent prohibitions. In this context, untargeted means that the data is collected indiscriminately, without reference to a specific individual or a definable group of persons.

 

Art. 5 para. 1 f) AI Regulation: AI systems for inferring emotions in the workplace and in educational institutions 

This provision prohibits AI systems that recognize or infer the emotions of individuals in the workplace and in educational institutions. The main reason for this is that emotion recognition is widely criticized for its questionable effectiveness, limited accuracy and limited generalizability. The AI systems in question infer the emotion of a natural person by processing biometric data and comparing it against emotion patterns previously programmed into the recognition system. The restriction to the workplace and educational institutions reflects the particular imbalance of power that prevails in these settings.
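
The comparison mechanism described above can be illustrated with a minimal, hypothetical sketch: a biometric feature vector is matched against emotion templates that were programmed into the system in advance. All vectors and labels are invented.

    import math

    # Hypothetical sketch of template-based emotion inference (invented values).
    # Emotion templates pre-programmed into the system.
    EMOTION_TEMPLATES = {
        "joy":     [0.9, 0.1, 0.8],
        "anger":   [0.2, 0.9, 0.1],
        "sadness": [0.1, 0.3, 0.2],
    }

    def infer_emotion(biometric_features: list[float]) -> str:
        """Return the pre-programmed emotion whose template is closest."""
        return min(
            EMOTION_TEMPLATES,
            key=lambda e: math.dist(biometric_features, EMOTION_TEMPLATES[e]),
        )

    # Feature vector hypothetically extracted from an employee's face.
    print(infer_emotion([0.15, 0.85, 0.2]))  # -> "anger"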

 

Art. 5 para. 1 g) AI Regulation: Systems for biometric categorization

The AI Regulation also prohibits AI systems that carry out biometric categorization in order to draw conclusions about sensitive characteristics such as ethnicity, political opinion, religious or philosophical beliefs, sexual orientation, etc. The process typically works by using biometric data to determine whether the data subject belongs to a group with certain predefined characteristics. What matters for the ban is not the identification of a person, but solely the resulting assignment to a certain category.
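
A minimal, hypothetical sketch of this group-assignment logic follows; deliberately, only neutral placeholder category names are used, and all vectors are invented. The point is that the system outputs a category, not an identity.

    # Hypothetical sketch of biometric categorization: an embedding is
    # assigned to a predefined category rather than to an identity.
    CATEGORY_PROTOTYPES = {
        "category_A": [0.8, 0.2, 0.1],
        "category_B": [0.1, 0.7, 0.3],
    }

    def dot(u: list[float], v: list[float]) -> float:
        return sum(a * b for a, b in zip(u, v))

    def categorize(embedding: list[float]) -> str:
        """Assign the embedding to the most similar predefined category.
        No identity is established; when the categories are sensitive
        attributes, the assignment alone is what the prohibition targets."""
        return max(CATEGORY_PROTOTYPES, key=lambda c: dot(embedding, CATEGORY_PROTOTYPES[c]))

    print(categorize([0.2, 0.6, 0.4]))  # -> "category_B"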

 

Art. 5 para. 1 h) AI Regulation: Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes

Finally, the prohibition of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes should be addressed. The ban covers systems used to identify people remotely - without any significant time delay - by comparing their biometric data with biometric data stored in a reference database. It is justified by the feeling of constant surveillance such systems create and by the severe interference with fundamental rights. In addition, the limited verifiability and correctability of these systems entail considerable risks. The provision provides for three exceptional cases in which the security interests of society outweigh the associated risks and the procedure is therefore permitted.
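
For illustration, the following hypothetical sketch shows the basic 1:N matching step of such a system: a probe embedding captured live is compared against a reference database and, above a similarity threshold, resolved to an identity. All names, vectors and the threshold are invented.

    import math

    # Hypothetical sketch of 1:N remote biometric identification (invented data).
    REFERENCE_DB = {
        "person_001": [0.9, 0.1, 0.4],
        "person_002": [0.2, 0.8, 0.5],
    }

    def cosine(u: list[float], v: list[float]) -> float:
        num = sum(a * b for a, b in zip(u, v))
        return num / (math.hypot(*u) * math.hypot(*v))

    def identify(probe: list[float], threshold: float = 0.95) -> str | None:
        """Return the best-matching identity, or None below the threshold."""
        best_id = max(REFERENCE_DB, key=lambda i: cosine(probe, REFERENCE_DB[i]))
        return best_id if cosine(probe, REFERENCE_DB[best_id]) >= threshold else None

    # Embedding hypothetically extracted in real time from a camera frame.
    print(identify([0.85, 0.15, 0.45]))  # -> "person_001"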

 

Overall, the guidelines assist with the interpretation and application of the prohibition provisions. They not only provide interpretation aids but also explain how the AI Regulation interacts with other provisions of Union law.
