09/05/2023

"AI Flash" - Special data protection features in the use of AI

"AI Quickies" - A series of contributions

After recently publishing our consulting offer for conducting a data protection impact assessment for the use of AI, we would now like to take the next step and, from now on at regular intervals, highlight the legal particularities of using AI. Under the title "AI Quickies", we will address important legal issues surrounding AI in the form of short articles over the coming weeks and months.

Since time is a scarce commodity these days, our "AI Quickies" get straight to the point and summarize the legal challenges briefly and concisely:

Today's topic: Special data protection features in the use of AI

Our first AI Quickie briefly presents the particularities of using AI under data protection law. One might reasonably ask why compliance with data protection regulations poses a particular challenge when using AI. As a rule, personal data is processed, and this processing must comply with the requirements of the GDPR. But what exactly are the particularities?

Typical risks when using AI

AI makes independent decisions - at least to a certain extent. To make this possible, the system is trained in advance on a large number of data sets. But what happens if the content of this data is incorrect, or if the data sets were not properly cleansed? The AI - since it does not "know" any better - may then produce discriminatory results (for example, based on age, skin color or gender). It would also be disastrous if the (personal) outputs of the AI were factually incorrect and, in the worst case, the system became economically unusable. The type of data used and the way it becomes actual training data can therefore be regarded as a genuine particularity of AI.

Large amount of data

This leads us to the next particularity. Since AI has no real human understanding, it must be trained - and regularly with very large amounts of data. Since the GDPR follows a risk-based approach, this volume of data must be taken into account separately: the risks addressed by the GDPR grow as the amount of data grows. Potential violations of the principles of data minimization and purpose limitation are therefore obvious.

Lack of transparency

Another particularity of the use of AI is the frequent lack of transparency in the decision-making process (also known as the "black box" problem). With certain AI models (especially neural networks), the result of a decision can be shown, but the path to that decision is not easily comprehensible. This creates practical problems for issuing data protection notices pursuant to Article 13 GDPR and for demonstrating compliance to the supervisory authority pursuant to Article 5 (2) GDPR. In the future, it will be up to the manufacturers of AI applications to provide users with the necessary information. Only a "GDPR-friendly" AI will convince companies to use it in the long term.

Legal and technical challenges - our consulting services:

Our credo can therefore only be: check data protection requirements as early as possible when using AI. SKW Schwarz already advises a large number of clients of all sizes (both users and providers/manufacturers of AI solutions) on the implementation and use of AI applications. We know the legal and technical challenges from numerous AI projects. In addition to data protection issues, we also regularly deal with the drafting of contracts and the securing of the relevant rights (e.g. under copyright law). We are therefore happy to support you in the legally compliant use of AI in your company.

In our next AI Quickie, we will briefly present the conceivable legal bases for data protection when using AI.
