Considerations on COM(2022)496 - Adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)

(1) Artificial Intelligence (‘AI’) is a set of enabling technologies which can contribute to a wide array of benefits across the entire spectrum of the economy and society. It has a large potential for technological progress and allows new business models in many sectors of the digital economy.

(2) At the same time, depending on the circumstances of its specific application and use, AI can generate risks and harm interests and rights that are protected by Union or national law. For instance, the use of AI can adversely affect a number of fundamental rights, including life, physical integrity, non-discrimination and equal treatment. Regulation (EU) …/… of the European Parliament and of the Council [the AI Act] 31 provides for requirements intended to reduce risks to safety and fundamental rights, while other Union law instruments lay down general 32 and sectoral product safety rules applicable also to AI-enabled machinery products 33 and radio equipment 34. While such requirements intended to reduce risks to safety and fundamental rights are meant to prevent, monitor and address risks and thus address societal concerns, they do not provide individual relief to those that have suffered damage caused by AI. Existing requirements provide in particular for authorisations, checks, monitoring and administrative sanctions in relation to AI systems in order to prevent damage. They do not provide for compensation of the injured person for damage caused by an output or the failure to produce an output by an AI system.

(3) When an injured person seeks compensation for damage suffered, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (‘fault’) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage. However, when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof. In particular, it may be excessively difficult to prove that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake.

(4) In such cases, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services.

(5) To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems. Such adaptations should contribute to societal and consumer trust and thereby promote the roll-out of AI. Such adaptations should also maintain trust in the judicial system, by ensuring that victims of damage caused with the involvement of AI have the same effective compensation as victims of damage caused by other technologies.

(6) Interested stakeholders – injured persons suffering damage, potentially liable persons, insurers – face legal uncertainty as to how national courts, when confronted with the specific challenges of AI, might apply the existing liability rules in individual cases in order to achieve just results. In the absence of Union action, at least some Member States are likely to adapt their civil liability rules to address compensation gaps and legal uncertainty linked to the specific characteristics of certain AI systems. This would create legal fragmentation and internal market barriers for businesses that develop or provide innovative AI-enabled products or services. Small and medium-sized enterprises would be particularly affected.

(7) The purpose of this Directive is to contribute to the proper functioning of the internal market by harmonising certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system. This objective cannot be sufficiently achieved by the Member States because the relevant internal market obstacles are linked to the risk of unilateral and fragmented regulatory measures at national level. Given the digital nature of the products and services falling within the scope of this Directive, the latter is particularly relevant in a cross-border context.

(8) The objective of ensuring legal certainty and preventing compensation gaps in cases where AI systems are involved can thus be better achieved at Union level. Therefore, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Directive does not go beyond what is necessary in order to achieve that objective.

(9) It is therefore necessary to harmonise in a targeted manner specific aspects of fault-based liability rules at Union level. Such harmonisation should increase legal certainty and create a level playing field for AI systems, thereby improving the functioning of the internal market as regards the production and dissemination of AI-enabled products and services.

(10) To ensure proportionality, it is appropriate to harmonise in a targeted manner only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems. This Directive should not harmonise general aspects of civil liability which are regulated in different ways by national civil liability rules, such as the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability over multiple tortfeasors, contributory conduct, the calculation of damages or limitation periods.

(11) The laws of the Member States concerning the liability of producers for damage caused by the defectiveness of their products are already harmonised at Union level by Council Directive 85/374/EEC 35. Those laws do not, however, affect Member States’ rules of contractual or non-contractual liability, such as warranty, fault or strict liability, based on grounds other than the defect of the product. While the revision of Council Directive 85/374/EEC seeks to clarify and ensure that an injured person can claim compensation for damage caused by defective AI-enabled products, it should therefore be clarified that the provisions of this Directive do not affect any rights which an injured person may have under national rules implementing Directive 85/374/EEC. In addition, in the field of transport, Union law regulating the liability of transport operators should remain unaffected by this Directive.

(12) [The Digital Services Act (DSA) 36 ] fully harmonises the rules applicable to providers of intermediary services in the internal market, covering the societal risks stemming from the services offered by those providers, including as regards the AI systems they use. This Directive does not affect the provisions of [the Digital Services Act (DSA)] that provide a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision-making by hosting service providers, including the exemption from liability for the dissemination of illegal content uploaded by recipients of their services where the conditions of that Regulation are met.

(13) Other than in respect of the presumptions it lays down, this Directive does not harmonise national laws regarding which party has the burden of proof or which degree of certainty is required as regards the standard of proof. 

(14) This Directive should follow a minimum harmonisation approach. Such an approach allows claimants in cases of damage caused by AI systems to invoke more favourable rules of national law. Thus, national laws could, for example, maintain reversals of the burden of proof under national fault-based regimes, or national no-fault liability (referred to as ‘strict liability’) regimes of which there are already a large variety in national laws, possibly applying to damage caused by AI systems.

(15) Consistency with [the AI Act] should also be ensured. It is therefore appropriate for this Directive to use the same definitions in respect of AI systems, providers and users. In addition, this Directive should only cover claims for damages when the damage is caused by an output or the failure to produce an output by an AI system through the fault of a person, for example the provider or the user under [the AI Act]. There is no need to cover liability claims when the damage is caused by a human assessment followed by a human act or omission, while the AI system only provided information or advice which was taken into account by the relevant human actor. In the latter case, it is possible to trace back the damage to a human act or omission, as the AI system output is not interposed between the human act or omission and the damage, so that establishing causality is not more difficult than in situations where an AI system is not involved.

(16) Access to information about specific high-risk AI systems that are suspected of having caused damage is an important factor in ascertaining whether to claim compensation and in substantiating claims for compensation. Moreover, for high-risk AI systems, [the AI Act] provides for specific documentation, information and logging requirements, but does not provide a right for the injured person to access that information. It is therefore appropriate to lay down rules on the disclosure of relevant evidence by those that have it at their disposal, for the purposes of establishing liability. This should also provide an additional incentive to comply with the relevant requirements laid down in [the AI Act] to document or record the relevant information.

(17) The large number of people usually involved in the design, development, deployment and operation of high-risk AI systems makes it difficult for injured persons to identify the person potentially liable for damage caused and to prove the conditions for a claim for damages. To allow injured persons to ascertain whether a claim for damages is well-founded, it is appropriate to grant potential claimants a right to request a court to order the disclosure of relevant evidence before submitting a claim for damages. Such disclosure should only be ordered where the potential claimant presents facts and information sufficient to support the plausibility of a claim for damages and has made a prior request to the provider, the person subject to the obligations of a provider or the user to disclose such evidence at their disposal about specific high-risk AI systems that are suspected of having caused damage, and that request has been refused. Ordering disclosure should lead to a reduction of unnecessary litigation and avoid costs for the possible litigants caused by claims which are unjustified or likely to be unsuccessful. The refusal of the provider, the person subject to the obligations of a provider or the user, prior to the request to the court, to disclose evidence should not trigger the presumption of non-compliance with relevant duties of care by the person who refuses such disclosure.

(18) The limitation of disclosure of evidence as regards high-risk AI systems is consistent with [the AI Act], which provides certain specific documentation, record keeping and information obligations for operators involved in the design, development and deployment of high-risk AI systems. Such consistency also ensures the necessary proportionality by avoiding that operators of AI systems posing lower or no risk would be expected to document information to a level similar to that required for high-risk AI systems under [the AI Act]. 

(19) National courts should be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to [the AI Act], be they providers, persons under the same obligations as providers, or users of an AI system, either as defendants or third parties to the claim. There could be situations where the evidence relevant for the case is held by entities that would not be parties to the claim for damages but which are under an obligation to document or record such evidence pursuant to [the AI Act]. It is thus necessary to provide for the conditions under which such third parties to the claim can be ordered to disclose the relevant evidence.

(20) To maintain the balance between the interests of the parties involved in the claim for damages and of third parties concerned, the courts should order the disclosure of evidence only where this is necessary and proportionate for supporting the claim or potential claim for damages. In this respect, disclosure should only concern evidence that is necessary for a decision on the respective claim for damages, for example only the parts of the relevant records or data sets required to prove non-compliance with a requirement laid down by [the AI Act]. To ensure the proportionality of such disclosure or preservation measures, national courts should have effective means to safeguard the legitimate interests of all parties involved, for instance the protection of trade secrets within the meaning of Directive (EU) 2016/943 of the European Parliament and of the Council 37 and of confidential information, such as information related to public or national security. In respect of trade secrets or alleged trade secrets which the court has identified as confidential within the meaning of Directive (EU) 2016/943, national courts should be empowered to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, while achieving a fair and proportionate balance between the trade-secret holder's interest in maintaining secrecy and the interest of the injured person. This should include measures to restrict access to documents containing trade secrets and access to hearings or documents and transcripts thereof to a limited number of people. When deciding on such measures, national courts should take into account the need to ensure the right to an effective remedy and to a fair trial, the legitimate interests of the parties and, where appropriate, of third parties, and any potential harm to either party or, where appropriate, to third parties, resulting from the granting or rejection of such measures. Moreover, to ensure a proportionate application of a disclosure measure towards third parties in claims for damages, the national courts should order disclosure from third parties only if the evidence cannot be obtained from the defendant.

(21) While national courts have the means of enforcing their orders for disclosure through various measures, any such enforcement measures could delay claims for damages and thus potentially create additional expenses for the litigants. For injured persons, such delays and additional expenses may make their recourse to an effective judicial remedy more difficult. Therefore, where a defendant in a claim for damages fails to disclose evidence at its disposal ordered by a court, it is appropriate to lay down a presumption of non-compliance with those duties of care which that evidence was intended to prove. This rebuttable presumption will reduce the duration of litigation and facilitate more efficient court proceedings. The defendant should be able to rebut that presumption by submitting evidence to the contrary. 

(22) In order to address the difficulties of proving that a specific input for which the potentially liable person is responsible caused a specific AI system output that led to the damage at stake, it is appropriate to provide, under certain conditions, for a presumption of causality. While in a fault-based claim the claimant usually has to prove the damage, the human act or omission constituting fault of the defendant and the causal link between the two, this Directive does not harmonise the conditions under which national courts establish fault. They remain governed by the applicable national law and, where harmonised, by applicable Union law. Similarly, this Directive does not harmonise the conditions related to the damage, for instance what damages are compensable, which are also regulated by applicable national and Union law. For the presumption of causality under this Directive to apply, the fault of the defendant should be established as a human act or omission which does not meet a duty of care under Union law or national law that is directly intended to protect against the damage that occurred. Thus, this presumption can apply, for example, in a claim for damages for physical injury when the court establishes the fault of the defendant for not complying with the instructions for use which are meant to prevent harm to natural persons. Non-compliance with duties of care that were not directly intended to protect against the damage that occurred does not lead to the application of the presumption; for example, a provider’s failure to file required documentation with competent authorities would not lead to the application of the presumption in claims for damages due to physical injury. It should also be necessary to establish that it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output. Finally, the claimant should still be required to prove that the output or failure to produce an output gave rise to the damage.

(23) Such a fault can be established in respect of non-compliance with Union rules which specifically regulate high-risk AI systems, like the requirements introduced for certain high-risk AI systems by [the AI Act], requirements which may be introduced by future sectoral legislation for other high-risk AI systems according to [Article 2(2) of the AI Act], or duties of care which are linked to certain activities and which are applicable irrespective of whether AI is used for that activity. At the same time, this Directive neither creates nor harmonises the requirements or the liability of entities whose activity is regulated under those legal acts, and therefore does not create new liability claims. Establishing a breach of such a requirement that amounts to fault will be done according to the provisions of those applicable rules of Union law, since this Directive neither introduces new requirements nor affects existing requirements. For example, the exemption of liability for providers of intermediary services and the due diligence obligations to which they are subject pursuant to [the Digital Services Act] are not affected by this Directive. Similarly, compliance with requirements imposed on online platforms to avoid unauthorised communication to the public of copyright protected works is to be established under Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market and other relevant Union copyright law.

(24) In areas not harmonised by Union law, national law continues to apply and fault is established under the applicable national law. All national liability regimes have duties of care, taking as a standard of conduct different expressions of the principle of how a reasonable person should act, which also ensure the safe operation of AI systems in order to prevent damage to recognised legal interests. Such duties of care could, for instance, require users of AI systems to choose for certain tasks a particular AI system with concrete characteristics or to exclude certain segments of a population from being exposed to a particular AI system. National law can also introduce specific obligations meant to prevent risks for certain activities, which are applicable irrespective of whether AI is used for that activity, for example traffic rules, or obligations specifically designed for AI systems, such as additional national requirements for users of high-risk AI systems pursuant to Article 29(2) of [the AI Act]. This Directive neither introduces such requirements nor affects the conditions for establishing fault in case of breach of such requirements.

(25) Even when fault consisting of non-compliance with a duty of care directly intended to protect against the damage that occurred is established, not every fault should lead to the application of the rebuttable presumption linking it to the output of the AI. Such a presumption should only apply when it can be considered reasonably likely, from the circumstances in which the damage occurred, that such fault has influenced the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the damage. It can, for example, be considered reasonably likely that the fault has influenced the output or failure to produce an output when that fault consists in breaching a duty of care in respect of limiting the perimeter of operation of the AI system and the damage occurred outside the perimeter of operation. By contrast, a breach of a requirement to file certain documents or to register with a given authority, even though this might be foreseen for that particular activity or even be applicable expressly to the operation of an AI system, could not be considered as reasonably likely to have influenced the output produced by the AI system or the failure of the AI system to produce an output.

(26) This Directive covers the fault constituting non-compliance with certain listed requirements laid down in Chapters 2 and 3 of [the AI Act] for providers and users of high-risk AI systems, the non-compliance with which can lead, under certain conditions, to a presumption of causality. The AI Act provides for full harmonisation of requirements for AI systems, unless otherwise explicitly laid down therein. It harmonises the specific requirements for high-risk AI systems. Hence, for the purposes of claims for damages in which a presumption of causality according to this Directive is applied, the potential fault of providers or persons subject to the obligations of a provider pursuant to [the AI Act] is established only through a non-compliance with such requirements. Given that in practice it may be difficult for the claimant to prove such non-compliance when the defendant is a provider of the AI system, and in full consistency with the logic of [the AI Act], this Directive should also provide that the steps undertaken by the provider within the risk management system and the results of the risk management system, i.e. the decision to adopt or not to adopt certain risk management measures, should be taken into account in the determination of whether the provider has complied with the relevant requirements under the AI Act referred to in this Directive. The risk management system put in place by the provider pursuant to [the AI Act] is a continuous iterative process run throughout the lifecycle of the high-risk AI system, whereby the provider ensures compliance with mandatory requirements meant to mitigate risks and can, therefore, be a useful element for the purpose of the assessment of this compliance. This Directive also covers the cases of users’ fault, when this fault consists in non-compliance with certain specific requirements set by [the AI Act]. In addition, the fault of users of high-risk AI systems may be established following non-compliance with other duties of care laid down in Union or national law, in light of Article 29 (2) of [the AI Act].

(27) While the specific characteristics of certain AI systems, like autonomy and opacity, could make it excessively difficult for the claimant to meet the burden of proof, there could be situations where such difficulties do not exist because there could be sufficient evidence and expertise available to the claimant to prove the causal link. This could be the case, for example, in respect of high-risk AI systems where the claimant could reasonably access sufficient evidence and expertise through documentation and logging requirements pursuant to [the AI Act]. In such situations, the court should not apply the presumption.

(28) The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant. For example, such difficulties could be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the ability of the claimant to prove the causal link between the fault of the defendant and the AI output. A national court should apply the presumption where the claimant is in an excessively difficult position to prove causation, since the claimant is required to explain how the AI system was led by the human act or omission that constitutes fault to produce the output or the failure to produce an output which gave rise to the damage. However, the claimant should neither be required to explain the characteristics of the AI system concerned nor how these characteristics make it harder to establish the causal link.

(29) The application of the presumption of causality is meant to ensure for the injured person a similar level of protection as in situations where AI is not involved and where causality may therefore be easier to prove. Nevertheless, alleviating the burden of proving causation is not always appropriate under this Directive where the defendant is not a professional user but rather a person using the AI system for their private activities. In such circumstances, in order to balance interests between the injured person and the non-professional user, it needs to be taken into account whether such non-professional users can add to the risk of an AI system causing damage through their behaviour. If the provider of an AI system has complied with all its obligations and, in consequence, that system was deemed sufficiently safe to be put on the market for a given use by non-professional users and it is then used for that task, a presumption of causality should not apply for the simple launch of the operation of such a system by such non-professional users. A non-professional user that buys an AI system and simply launches it according to its purpose, without interfering materially with the conditions of operation, should not be covered by the causality presumption laid down by this Directive. However, if a national court determines that a non-professional user materially interfered with the conditions of operation of an AI system or was required and able to determine the conditions of operation of the AI system and failed to do so, then the presumption of causality should apply, where all the other conditions are fulfilled. This could be the case, for example, when the non-professional user does not comply with the instructions for use or with other applicable duties of care when choosing the area of operation or when setting performance conditions of the AI system. This is without prejudice to the fact that the provider should determine the intended purpose of an AI system, including the specific context and conditions of use, and eliminate or minimise the risks of that system as appropriate at the time of the design and development, taking into account the knowledge and expertise of the intended user.

(30) Since this Directive introduces a rebuttable presumption, the defendant should be able to rebut it, in particular by showing that its fault could not have caused the damage.

(31) It is necessary to provide for a review of this Directive [five years] after the end of the transposition period. In particular, that review should examine whether there is a need to create no-fault liability rules for claims against the operator, in so far as these are not already covered by other Union liability rules, in particular Directive 85/374/EEC, combined with mandatory insurance for the operation of certain AI systems, as suggested by the European Parliament. 38 In accordance with the principle of proportionality, it is appropriate to assess such a need in the light of relevant technological and regulatory developments in the coming years, taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs. Such a review should consider, among other things, risks involving damage to important legal values like life, health and property of unwitting third parties through the operation of AI-enabled products or services. That review should also analyse the effectiveness of the measures provided for in this Directive in dealing with such risks, as well as the development of appropriate solutions by the insurance market. To ensure the availability of the information necessary to conduct such a review, it is necessary to collect data and other necessary evidence covering the relevant matters.

(32) Given the need to make adaptations to national civil liability and procedural rules to foster the roll-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance and consumer trust in AI technology and the justice system, it is appropriate to set a deadline of not later than [two years after the entry into force] of this Directive for Member States to adopt the necessary transposition measures.

(33) In accordance with the Joint Political Declaration of 28 September 2011 of Member States and the Commission on explanatory documents 39 , Member States have undertaken to accompany, in justified cases, the notification of their transposition measures with one or more documents explaining the relationship between the components of a directive and the corresponding parts of national transposition instruments. With regard to this Directive, the legislator considers the transmission of such documents to be justified.