A few days ago, on July 12, 2024, the AI Regulation was published in the Official Journal of the European Union. As widely reported, it will enter into force twenty days later, i.e. on August 1, 2024.
However, the question of which authorities will be responsible for general market surveillance and enforcement of the AI Regulation is still open.
There is, of course, the EU AI Office (European Artificial Intelligence Office), which was established by Commission decision in January 2024, i.e. before the adoption of the AI Regulation. The EU AI Office is to perform a number of key tasks as part of the EU's AI strategy and to become the center of the EU's AI expertise. In addition, it is responsible for supervising general-purpose AI models (GPAI models) and AI systems that are based on a GPAI model and developed by the same provider.
The national authorities of the Member States, by contrast, are responsible for enforcement measures relating to AI systems that are not GPAI models or that have not been developed by the provider of the GPAI model itself. This is regulated in Article 70 of the AI Act, which stipulates that each EU Member State must establish or designate at least one notifying authority and at least one market surveillance authority to enforce the AI Act.
Yesterday (July 17, 2024), following their latest plenary meeting, the European data protection authorities, acting through the European Data Protection Board (EDPB), expressly claimed this role as market surveillance authorities. The original statement had already been issued in March 2024 and was adopted at yesterday's meeting. The German Data Protection Conference also reaffirmed this position in May 2024.
EDPB Deputy Chair Irene Loizidou Nicolaidou said yesterday: "Data Protection Authorities (DPAs) should play an important role in enforcing the AI Act, as most AI systems involve the processing of personal data. I strongly believe that DPAs are suitable for this role due to their complete independence and their comprehensive understanding of the risks of AI for fundamental rights, based on their previous experience."
It is already certain that the data protection supervisory authorities will be entrusted with market surveillance for large parts of the high-risk catalog of AI systems (AI in law enforcement, the administration of justice, migration control and AI that influences elections). Notably, this applies not only to authorities that use such AI systems, but also to commercial enterprises such as software providers, cloud services or security companies that offer AI systems for these areas. The surveillance competence thus covers the entire value chain.
What happens next?
By August 2, 2025, all Member States must adopt implementing legislation for the AI Regulation and designate their national competent authorities. A key question will be which body is then appointed as the general market surveillance authority.
Practical relevance
From our many years of practical experience with the data protection supervisory authorities, we would welcome such a role. In our view, it is a clear advantage for our clients in every respect, whether they are small, medium-sized or large companies, if they can also deal with a "known" authority when it comes to AI systems. We know from experience what the data protection authorities attach particular importance to and can therefore support companies even better in implementing the AI Regulation. Bundling audits at the data protection supervisory authorities would also be very helpful with regard to both the GDPR and the AI Regulation, as the authority, just like the companies, would then have to consider both areas of law and their implementation in practical processes. This could prevent the emergence of silos in the sense of a mental separation between "data protection" and "AI governance", which would be rather impractical.