The EU Parliament has given the green light for the world's first comprehensive rules on artificial intelligence (AI) in the European Union. A majority of parliamentarians voted in favor of the law in Strasbourg on Wednesday. The EU AI Act (AI Act) creates a uniform legal framework for AI in the EU, regardless of industry or technology. Through its direct and uniform application in all 27 EU member states, the Act is intended to provide legal certainty throughout Europe (and beyond).
Scope of application of the AI Act
The Act governs AI systems, which it defines as machine-based systems designed to operate with varying degrees of autonomy. Such systems may adapt after deployment without direct human control and can independently generate outputs from user input.
The AI Act applies to providers, operators, importers, distributors and product manufacturers of AI systems, whether they are based in the EU or outside it. The decisive factor is that the AI system is placed on the market or used within the EU. End users are not directly affected.
Risk-based approach
The AI Act classifies AI systems by risk. AI systems posing an unacceptable risk are prohibited, while high-risk AI systems are not banned outright but are subject to strict requirements. AI systems with limited or minimal risk face only specific transparency obligations. General Purpose AI (GPAI) models, which form the basis for generative AI applications such as ChatGPT, are subject to their own set of rules.
Prohibited AI systems
The AI Act prohibits certain AI practices, including manipulative techniques, biometric categorization based on sensitive characteristics and social scoring. Employers must therefore carefully consider the use of AI systems in the workplace, and the use of AI for targeted advertising is likely to be restricted.
High-risk AI systems
AI systems posing a significant risk are subject to strict requirements. The AI Act allows exceptions to these requirements via a legal filter, e.g. for AI systems that perform only a narrowly defined procedural task. Even then, such systems must be registered in an official EU database before they are placed on the market in the EU. GPAI models that are classified as "systemic" due to the amount of computation used to train them are subject to additional obligations to mitigate systemic risks.
Providers of high-risk AI systems must establish a risk and quality management system and ensure that the AI system achieves an appropriate level of robustness, accuracy and cybersecurity. Operators must ensure human oversight, report risks and, in certain cases, conduct a fundamental rights impact assessment. The parallels with the GDPR's data protection impact assessment and with the registration of medical devices are obvious and should help with the practical implementation of the AI Act. In particular, the AI Act also sets high fairness requirements for the selection of training data in order to avoid biased and unfair results.
General Purpose AI (GPAI)
GPAI is subject to special rules under the AI Act, including extensive information and documentation obligations for providers. Users of these systems should be able to understand the capabilities and limitations of the AI model, in the sense of "explainable AI". Providers of systemic GPAI models must take additional safety measures and contribute to codes of practice.
Use of AI systems in relation to human individuals
Transparency is required for AI systems that interact directly with humans. Generative AI systems must label their output as artificially generated. Operators of AI systems for biometric categorization or emotion recognition must inform end users transparently.
Control and enforcement
National authorities monitor compliance with the AI Act. For Germany, however, it remains to be settled which body within the German regulatory landscape will act as the responsible watchdog. The newly established AI Office of the EU Commission supervises GPAI models and systems. Sanctions for violations can amount to up to EUR 35 million or 7% of annual global turnover, whichever is higher.
Timetable for the transition period
The AI Act is expected to come into force in April or May 2024. Different transition periods apply for prohibited AI systems, GPAI models and high-risk AI systems. Codes of practice can be submitted after nine months, and sanctions will take effect after twelve months.
We also recommend keeping a close eye on the new product liability rules and the liability rules relating to AI, as specific new legislation in this area is also making its way through the EU legislative process.