05/21/2024

"KI-Flash": To be or not to be an AI provider - that is the question!

After reporting on the DSK's new guidance on "Artificial intelligence and data protection" in our last AI Flash, we will continue to provide you with legal insights at regular intervals.

Today's topic: To be or not to be an AI provider - that is the question!

More and more companies are exploring artificial intelligence (AI) and are at least considering the versatile possibilities of AI-supported processes for their own business areas. The trending topic here is undoubtedly generative AI!

The first step, however, often raises the important question of where the required AI components should come from. Does the company have the capacity and the necessary expertise to develop its own AI models and/or systems? Or is licensing a ready-made AI system from a third party the better option for the specific use case? Beyond these strategic decisions, legal questions also arise in individual cases, in particular regarding the scope of the term "provider" of an AI model or system.

What does the AI Act say?

A look at the definition of the term "provider" in Art. 3 No. 3 AI Act suggests that the question is quite easy to answer. According to that provision, a "provider" is

"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".

While in some cases the classification can be made without major difficulty, the details present a few hurdles: When, for example, is the threshold of "development" crossed, and what exactly does it refer to - the AI model or the AI system?

An example use case to illustrate the problem

Imagine the following scenario, which in our experience is extremely relevant in practice:

A company is planning to develop a chatbot for its customers. The chatbot should be able to answer simple support queries, have the necessary knowledge of the company's products and services and - in the case of difficult questions - put the customer in touch with a customer support employee. As the company does not have the expertise to develop AI components in-house, it is considering licensing available technologies from third parties. Many of the well-known providers - we deliberately refrain from naming specific providers here - offer, for example, the option of accessing the services of their AI models via programming interfaces (known as "application programming interfaces", or "APIs" for short). To ensure that the intended chatbot ultimately also has the relevant information about the company, additional databases are "linked" to the AI. This is known as "retrieval-augmented generation" (RAG for short) or "pre-prompt engineering" (more on this in a moment); a schematic sketch of the setup follows below.
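To make the setup more tangible, here is a minimal Python sketch of such an architecture. Everything in it is a deliberate assumption for illustration: the endpoint URL, the request and response format and the word-overlap retrieval are placeholders, not any specific provider's API.

```python
import requests

# Purely illustrative values - no real provider's endpoint or schema is depicted.
API_URL = "https://api.example-ai-vendor.com/v1/generate"
API_KEY = "..."  # credentials for the licensed third-party AI model

def retrieve_context(question: str, knowledge_base: dict[str, str]) -> str:
    """Naive retrieval step: collect company documents that share words with
    the question. Real systems use vector search, but the principle is the
    same: only the input is enriched; the model's weights are never touched."""
    words = set(question.lower().split())
    hits = [text for text in knowledge_base.values()
            if words & set(text.lower().split())]
    return "\n".join(hits)

def answer(question: str, knowledge_base: dict[str, str]) -> str:
    # RAG / pre-prompt engineering: company knowledge is injected into the
    # prompt sent to the licensed model, which itself stays as delivered.
    prompt = (f"Company knowledge:\n{retrieve_context(question, knowledge_base)}"
              f"\n\nCustomer question: {question}")
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    return response.json()["text"]  # the response schema is an assumption
```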

If the project is implemented as described, however, the question arises in particular of how exactly the roles under the AI Act are allocated.

Differentiation between AI system and AI model

You may be wondering where the problem addressed here actually comes from. The relevant terms have already been mentioned at the beginning: the AI Act distinguishes between an AI system and an AI model, making specific reference to so-called "general-purpose AI models" ("GPAIM" for short). While the specific legal framework for GPAIMs is reserved for future AI Flashes, this article focuses on the fundamental distinction between system and model.

To put it as simply and comprehensibly as possible: an AI system is the functional AI application equipped with a user interface, while the AI model is the underlying (technical) core, i.e. the AI-supported functionality - in particular the actual algorithm and its weights. Recital 97 of the AI Act states:

"General-purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as physical copy. These models may be further modified or fine-tuned into new models. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as for example a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems."

The following questions in particular must therefore be answered for the use case described above:

  • What is the AI model and what is the AI system?
  • Who is the provider of the two components?

Evaluation of the use case

Let's start with the simpler classification: the company that provides access to its AI model via an API is - unsurprisingly - the provider of an AI model, with respect to the specific model provided. Why this needs to be emphasized separately will become clear in a moment. However, the provider of this AI model is not also the provider of an AI system. Why? Because the AI model - without further technical integration - lacks a user interface (see recital 97 of the AI Act), and in our example use case the provider of the AI model also has no direct relationship to the customer chatbot. Focusing on the relevant definition of "provider", therefore, only the AI model is initially placed on the market.

Now for the more difficult classification: what is the role of the company that wants to provide the customer chatbot? This question must ultimately be answered on two different levels:

(1) The AI model
Since the company wants to integrate a "third-party" AI model into its own application, the question arises as to whether the company is also a provider of an AI model. This fundamental question is highly controversial and can be assessed differently from case to case. For our example use case we have deliberately chosen a simpler scenario, as we believe the company will come to the conclusion that it is not developing an (independent) AI model and therefore cannot be regarded as a provider of one. Procedures such as RAG or pre-prompt engineering merely consult external sources of knowledge for the AI's decision-making, without making any changes to the actual weights (i.e. the AI's underlying algorithm). If, on the other hand, we are talking about "fine-tuning", i.e. a process in which the AI model itself is further trained using the company's own training data, we may arrive at a different legal assessment. At the latest at the point at which the AI model previously placed on the market is (further) developed into an independent (new) AI model and that model is then placed on the market, the role of a provider of an AI model becomes conceivable. When exactly the threshold of (independent) development - here in relation to the AI model (!) - is crossed, however, is highly controversial and must be assessed on a case-by-case basis. The sketch below contrasts the two approaches.
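The technical difference driving this legal assessment can be made concrete schematically, reusing the hypothetical AIModel class from above. The "pseudo-gradient" is a trivial stand-in, not a real training algorithm; the point is only where the weights change.

```python
def rag_inference(model: AIModel, question: str, retrieved_docs: list[str]) -> str:
    # RAG / pre-prompt engineering: external knowledge only changes the
    # input; the licensed model's weights remain exactly as delivered.
    prompt = "\n".join(retrieved_docs) + "\n" + question
    return model.generate(prompt)

def fine_tune(model: AIModel, training_pairs: list[tuple[str, str]],
              lr: float = 0.01) -> AIModel:
    # Fine-tuning: the weights themselves are adjusted with company data -
    # the step at which, depending on the circumstances, the threshold
    # towards an independent (new) AI model may be crossed.
    for prompt, target in training_pairs:
        pseudo_gradient = [(len(prompt) - len(target)) * 1e-3] * len(model.weights)
        model.weights = [w - lr * g for w, g in zip(model.weights, pseudo_gradient)]
    return model
```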

(2) The AI system
With regard to the specific AI system - in this case the chatbot - our assessment is that the company is to be regarded as the provider in this respect (!). This result is easy to understand, as only the company assumes responsibility for the specific chatbot. Art. 50 para. 1 AI Act, for example, states:

"Providers shall ensure that AI systems intended for direct interaction with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the perspective of a reasonably informed, observant and circumspect natural person based on the circumstances and context of use."

From a technical point of view, this obligation can only be implemented by the provider - and the provision explicitly refers to the AI system. The company would therefore be regarded as the provider of an AI system, even though it makes no significant changes to the actual AI model itself. The AI model used is, however, integrated into a specific application and thus put to a specific intended purpose. This application is the sole responsibility of the company, which is why it can be assumed to have the role of a provider in this respect. A sketch of how the notice could be implemented follows below.
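How the company - as provider of the AI system - could technically implement this transparency notice is shown in a final sketch, again building on the hypothetical classes above; the wording of the notice is our own illustration.

```python
AI_DISCLOSURE = ("Please note: you are chatting with an AI assistant. "
                 "Difficult questions are forwarded to our human support team.")

class TransparentChatbot(AISystem):
    def chat(self) -> None:
        # The Art. 50(1) notice lives in the user interface, i.e. in the
        # AI system; the underlying model is unchanged. This is why the
        # obligation addresses the provider of the system, not of the model.
        print(AI_DISCLOSURE)
        super().chat()
```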

The question of who is to be regarded as the provider must therefore always be answered by reference to the specific object concerned (model or system).

Practical recommendation

Many questions surrounding the AI Act remain legally unresolved. Much is in flux, and much is controversial. What is clear, however, is that companies must engage with the requirements of the AI Act as early as possible. In addition to the question of role allocation (fun fact: a company can also take on several roles at the same time), it is particularly important to work out which specific legal obligations need to be implemented. We will address these and many other exciting questions in future AI Flashes. So stay tuned.
