04/18/2023

AI Regulation in Europe: Legal Challenges and Perspectives

While the regulation of AI applications in the USA is either already in place or about to be introduced, the road in Europe from the first draft of the "AI Act" to its entry into force still seems a long one...

According to the Federal Trade Commission (FTC), the national consumer protection agency in the US, regulation of AI applications is already the status quo ("The reality is AI is regulated (in the U.S.)"). In particular, the laws against unfair and deceptive trade practices also apply to AI applications and extend the FTC's authority to companies that make, sell or use AI. In its blog, the FTC also sets the tone for the promotion of AI applications, naming transparency as one of the top priorities and warning against deception. Its guidance on how user data may be used to train algorithms and AI applications has already been in place for three years - transparency is again the key principle.

There has also been a draft for the Union-wide regulation of AI in Europe since 21 April 2021, as "Europe shall become the global centre for trustworthy artificial intelligence (AI)". Since then, the draft has been discussed and negotiated under the various presidencies of the Council of the EU. The draft is primarily a prohibition law that either bans the use of certain AI applications altogether or ties their use to further, possibly far-reaching, preconditions and safeguards. The European Parliament is now expected to vote on the draft by the end of April so that its representatives can enter the trilogue with the European Commission and the Council of the EU in May with a final position.

In addition, discussion currently appears to be intensifying on how so-called Large Language Models (LLMs), on which applications such as ChatGPT are based, should be classified within the framework of the regulation, and in particular whether they should be treated as "high risk" applications under the draft.

To summarise, the current draft takes a risk-based approach and distinguishes between AI applications that pose (i) an unacceptable risk, (ii) a high risk, or (iii) a low or minimal risk.

AI applications (or "AI systems" as defined in the draft) are primarily deemed to pose an unacceptable risk, and are thus prohibited, if their use violates the Union's values, for example by infringing fundamental rights (e.g. through the manipulation of human behaviour or social scoring).

In previous discussions on the draft, chatbots were generally classified as applications with low or minimal risk, which would entail only "minimal" transparency obligations as a legal consequence.

AI applications that would be classified as high risk, on the other hand, would generally have to fulfil a number of additional requirements:

  • According to Article 10 "Data and data governance" of the draft, these applications may only be trained with data that meet the quality criteria further specified in the draft.
  • The applications would also have to be designed and developed in such a way:
    • (i) that they, inter alia, automatically record/log "events" (Article 12 "Record-keeping");
    • (ii) that the operation is sufficiently transparent to allow the user to interpret and use the results appropriately (Article 13 "Transparency and provision of information to users");
    • (iii) that the applications can be effectively overseen by natural persons during the period of use (Article 14 "Human oversight"); and
    • (iv) that they achieve an appropriate level of accuracy, robustness and cybersecurity in light of their intended purpose and perform consistently in those respects throughout their lifecycle (Article 15 "Accuracy, robustness and cybersecurity").
  • Article 16 "Obligations of providers of high-risk AI systems" also imposes far-reaching obligations on the providers of such applications - for example, a conformity assessment procedure must be carried out.

Even without a European regulation, however, AI is not unregulated in the EU or Germany at the moment. Naturally, laws already in force may also apply to AI applications in individual cases. This can already be observed under the provisions of the GDPR: after the ban of ChatGPT in Italy (we reported), the German data protection authorities are currently also examining its use in Germany, and the European Data Protection Board (EDPB) decided last week to launch a task force to foster cooperation and to exchange information on possible enforcement actions by data protection authorities.

It is also conceivable that certain provisions of the new Digital Services Act (DSA) could apply to AI applications (depending on how they function, potentially also to LLMs) - for example, when AI applications are used in the context of online interfaces, under the prohibition of "dark patterns" (Article 25 "Online interface design and organisation" DSA), or with regard to the transparency rules for recommender systems using AI applications (Article 27 "Recommender system transparency" DSA).

As a result, AI regulation in Europe and Germany will remain an exciting topic in the coming weeks - especially with the European Parliament's vote on the draft AI regulation expected at the end of April. In addition, the deadline that the Italian data protection authority set for the European representative of OpenAI, the provider of ChatGPT, expires at the end of April. By then, the provider must present measures to remedy the identified defects. Otherwise, in case of insufficient compliance, a fine of up to EUR 20 million or 4% of worldwide group turnover may be considered.
