Following our last AI Flash report, we would like to continue providing you with regular legal insights.
Today's Topic: Outlook on the Relationship Between AI Systems and the Proposed Cyber Resilience Act (CRA)
On August 1, 2024, the EU AI Act came into force; its provisions will become applicable in stages. Meanwhile, another regulatory framework that may be relevant for AI systems is in the final stage of the European legislative process: the Cyber Resilience Act (CRA). This regulation imposes strict cybersecurity requirements on so-called products with digital elements. It can also affect manufacturers of AI systems, even those not classified as high-risk AI systems under the AI Act. We therefore provide a brief overview of the CRA's scope, its mandatory requirements, and its intersections with the AI Act.
The CRA was adopted by the European Parliament on March 12, 2024, and the pending approval by the Council of the European Union is considered a formality. The regulation is expected to enter into force by the end of 2024.
Scope of the CRA and AI Systems
The CRA applies to products with digital elements. This term is defined very broadly and covers all software and hardware products whose use involves a data connection to another device or to a network (connected products); standalone software is covered as well. The CRA thus applies to a much broader range of products than the AI Act, which governs only artificial intelligence. The CRA also encompasses "normal" software products that do not autonomously infer output but operate on simple if-then logic. A data connection to a device or network is, however, always required. Under this condition, AI systems can also fall within the scope of the CRA: if AI-based software – for instance in a smart home environment – accesses other devices or a network, the CRA's scope is met.
Whether the CRA applies is independent of an AI system's classification as low-risk or high-risk under the AI Act. This can lead to unpleasant surprises: an AI system that is subject to only limited obligations as a non-critical system under the AI Act may nonetheless face an additional array of obligations under the CRA.
What Obligations Must AI System Manufacturers Fulfill Under the CRA?
Manufacturers of products within the scope of the CRA must conduct a risk and vulnerability analysis. They must then design the product so that it ensures a level of cybersecurity appropriate to those risks. Among other things, the product must not be delivered with known exploitable vulnerabilities. Manufacturers must observe the principles of data minimization, confidentiality, and integrity, and must provide authentication and encryption mechanisms. These obligations apply throughout the entire product lifecycle, and the product's cybersecurity must be maintained on an ongoing basis through security updates. Transparency information, user manuals, and technical documentation must be provided. Actively exploited vulnerabilities must be reported without delay to ENISA, the European Union Agency for Cybersecurity.
All these requirements must ultimately be verified through a conformity assessment procedure, similar to the procedure the AI Act prescribes for high-risk AI systems. In both cases, the manufacturer or provider may in part carry out the assessment itself, while certain critical products require the involvement of an external conformity assessment body. Annex III of the CRA lists such important and critical products, including smart home products with general-purpose or security functions.
High-Risk AI Systems as Products with Digital Elements
According to Article 15 (1) of the AI Act, providers of high-risk AI systems must design and develop these systems in such a way that they achieve an appropriate level of cybersecurity. If a high-risk AI system is also a product with digital elements, Article 8 of the CRA stipulates that the cybersecurity requirements of Article 15 of the AI Act are deemed fulfilled if all CRA requirements are met and covered by the conformity assessment procedure.
The procedural rules are, in principle, uniformly governed by the AI Act (Article 43). An exception applies where a high-risk AI system may undergo the provider's own conformity assessment under Article 43 of the AI Act but is subject to a stricter assessment procedure under the CRA. In that case, the conformity assessments under the AI Act and the CRA must be conducted separately.
Conclusion
In addition to the AI Act, the CRA may apply to AI systems as products with digital elements or as components of such products. Manufacturers of such AI systems must therefore comply with the CRA's strict cybersecurity requirements and undergo a cybersecurity-focused conformity assessment procedure, even if their AI system is classified as non-critical under the AI Act. Should the CRA enter into force this year, affected products must be CRA-compliant by 2027; otherwise, fines and recall orders may be imposed. Companies should therefore already factor the new requirements into their product development.
Providers of high-risk AI systems are specifically obligated under the AI Act to ensure cybersecurity. If the high-risk AI system is at the same time a connected product with digital elements, they can (and must) align with the CRA's standards. The mandatory conformity assessment will generally follow the AI Act procedure and must also cover the requirements of the CRA.