After reporting on the AI literacy requirements under the AI Act in our last KI Flash, we would like to continue providing you with legal updates at regular intervals.
Today's topic: AI and (product) liability - entry into force of the new EU Product Liability Directive
On August 1, 2024, a comprehensive set of regulations for AI came into force with the European AI Act. However, the AI Act does not conclusively regulate all legal issues relating to AI systems. For example, liability for defective AI systems is not explicitly regulated in the AI Act.
While the planned AI Liability Directive (on which we reported) is still awaited, the new Product Liability Directive (EU) 2024/2853 came into force on December 9, 2024. The directive must now be transposed into national law by the member states within 24 months. It contains new rules on the strict liability of product manufacturers (and possibly also importers, authorized representatives, suppliers and so-called fulfilment service providers). The products covered by the new Product Liability Directive include, in particular, AI systems.
Background and application of the new Product Liability Directive to AI
According to the German Product Liability Act (“ProdHaftG”) currently in force in Germany, manufacturers of products in Germany are liable, regardless of fault, for injury to life, limb or health of a person and for damage to property that is attributable to a defect in their product. However, the extent to which the ProdHaftG also applies to software products and AI systems, for example, has not yet been conclusively clarified.
The European legislator is now finally putting an end to this ambiguity with the new Product Liability Directive. According to Art. 4 No. 1 of the Product Liability Directive, a product now also expressly includes software. Recital 13 of the Product Liability Directive specifies the forms in which software is covered and explicitly mentions AI: "e.g. operating systems, firmware, computer programs, applications or AI systems". This applies regardless of how the software or AI is provided or used, i.e. regardless of whether it is stored on a device, accessed via the cloud or provided through a software-as-a-service (or AI-as-a-service) model.
Providers of AI systems within the meaning of the AI Act are also considered product manufacturers within the meaning of the Product Liability Directive. Deployers of AI systems who substantially redesign a third party's AI system or AI model through extensive training (e.g. as part of fine-tuning) may themselves quickly qualify as product manufacturers. According to Art. 8 para. 2 of the Product Liability Directive, any company that substantially modifies a product outside the manufacturer's control and subsequently makes it available on the market or puts it into service is considered a manufacturer. Recital 14 clarifies that a substantial modification can also occur through the continuous learning of an AI system. A substantial modification can therefore also result from major updates and upgrades of the AI system.
However, "naked" information is explicitly not covered. The Product Liability Directive therefore generally does not apply to the content of digital files or to the pure source code of software. Nevertheless, the developer or provider of an AI model may fall within the scope of the Product Liability Directive if the model is integrated into a product as a "component".
Free and open source software provided outside the course of a business activity is also excluded. In individual cases, however, this exemption may quickly cease to apply, as under the Product Liability Directive the provision of software "in exchange for data" can also be classified as a business activity.
Consequences of applicability
As under current German product liability law, manufacturers are liable under the new Product Liability Directive for damage to certain protected legal interests (e.g. death or personal injury) caused by defective products.
In principle, the injured party must plead and prove these conditions. However, the Product Liability Directive provides for a number of easings of the burden of proof in favour of the injured party. Against the backdrop of increasingly complex technologies such as AI, the legislator identified an information asymmetry between the manufacturer and the injured party. The manufacturer is therefore obliged, among other things, to disclose evidence. In addition, a product is presumed to be defective if proving the defect would be excessively difficult for the injured party due to the technical complexity of the case. The same applies to the causal link between the defect and the damage. Recital 48 of the Product Liability Directive cites machine learning and the functioning of an AI system as examples of such complexity.
Product liability claims generally expire ten years after the (AI) product has been placed on the market or put into service. The period starts afresh if the product is substantially modified. This is particularly relevant in the case of (further) training of AI.
Conclusion and practical advice
The Product Liability Directive contains the first general provisions on liability for AI as a software product. However, it will only apply to products that are placed on the market or put into service after December 9, 2026. Existing products are therefore not covered. Product manufacturers are primarily affected; only in exceptional cases can suppliers also be held liable under the reformed product liability law.
The planned AI Liability Directive will further expand the rules on liability for AI, in particular high-risk AI systems, and extend them to other market participants.
However, it should not be forgotten that both manufacturers and users of AI can already be held liable for breaches of contractual obligations in connection with AI under the general rules of German civil law. Manufacturers and distributors who offer AI systems or models on the market are liable to their customers under the law on warranties for defects and must take particular care to promise contractually only those AI functionalities that the application can actually deliver. Even a mere user of an AI can be held liable for breaches of contractual obligations resulting from the use of AI-generated output under the general rules of Sections 280 et seq. and 823 et seq. of the German Civil Code ("BGB"). A blanket exclusion of liability for AI in general terms and conditions is ineffective, even though it is often found in practice.
A recent decision by the Regional Court of Kiel (decision of 29.02.2024 - 6 O 151/23) shows that companies will have to engage more closely with potential liability scenarios in connection with the use of AI in the future. The court affirmed the defect-based liability of an information service provider for the inaccuracy of information generated with the help of AI. These diverse liability scenarios should be carefully analyzed before implementing an AI project in order to ensure compliance with all requirements relevant to the specific product. Please contact us if we can assist you with this.