Ida Varošanec (PhD student, University of Groningen) and Nynke Vellinga (post-doc researcher, University of Groningen)
Photo credit: Cryteria, via Wikimedia Commons
1. Objectives of the proposal
On 28 September 2022, the European Commission published a proposal for an AI Liability Directive, accompanied by a proposal for a revised Product Liability Directive. In the preceding Report on Artificial Intelligence Liability, the Commission acknowledged the immense potential of artificial intelligence (AI), but it also identified the risks associated with it. For instance, the connectivity of a product incorporating AI can compromise its safety for users, as the product may be susceptible to cyber-attacks. Moreover, the outcomes of AI cannot always be predicted, so ex ante risk assessments can be insufficient to address possible wrongs. The opacity inherent in advanced AI-based products and systems makes it difficult to ascertain responsibility for the behaviour and choices of AI systems. It is pivotal that humans are able to understand how algorithmic decisions were reached in order to make a liability claim. In particular, the opacity of AI systems can hinder victims in proving fault and causality in such cases. Consequently, the AI Liability Directive aims to ensure that victims of damage caused by AI enjoy protection commensurate with that available where damage has been caused by other products. It aims to increase trust in new technologies, to contribute to the ‘rollout of AI’, and to improve its development in the internal market by preventing fragmentation and increasing legal certainty through harmonisation. Once adopted, these proposals will complement other AI regulation (e.g. the proposed AI Act) and establish liability rules for software and AI systems in the EU.
2. The proposed AI Liability Directive: scope
Contrary to what the name might suggest, the proposed AI Liability Directive does not provide any new ground of civil liability. That remains a matter for the national legislature, except when it comes to liability for defective products under the regime of the Product Liability Directive. Instead, the proposed AI Liability Directive provides burden of proof rules on the disclosure of evidence and rebuttable presumptions of a causal link. These rules cannot be invoked in every tort law case: only cases on fault liability fall within the scope of the AI Liability Directive. These are cases where liability for damage caused by (the use of) an AI system is based on fault, with fault encompassing both wrongful actions and omissions. Due to the characteristics of AI systems, it can be difficult or prohibitively expensive to prove fault. Consequently, those suffering damage caused by an AI system might not be compensated, whereas those suffering damage caused by a non-AI system would be, as they do not face the same difficulties in proving fault. The proposed AI Liability Directive would address this discrepancy by providing rules on:
‘(a) the disclosure of evidence on high-risk artificial intelligence (AI) systems to enable a claimant to substantiate a non-contractual fault-based civil law claim for damages;
(b) the burden of proof in the case of non-contractual fault-based civil law claims brought before national courts for damages caused by an AI system.’ (art. 1 Proposal)
The proposed AI Liability Directive does not apply to risk-based liability claims. However, the proposed new Product Liability Directive does provide similar rules on the disclosure of evidence and the burden of proof (arts. 8 and 9).
The applicability of the proposed AI Liability Directive is partly limited to a specific category of AI systems: high-risk AI systems. For the definition of a high-risk AI system, the AI Liability Directive refers to the proposed AI Act. The AI Act identifies and lays down rules according to the level of risk associated with AI systems: those that carry (1) unacceptable risk, (2) high risk, and (3) limited risk. The fourth category, that of systems posing a minimal risk (e.g. spam filters), falls within the material scope but is not subject to any concrete rules. The first category, (1) unacceptable risk, concerns AI systems that are a clear threat to the safety, livelihoods and rights of persons (e.g. manipulation and social scoring systems). The third category, (3) limited risk, is subject to specific transparency obligations due to the nature of the systems concerned (e.g. deep fakes). High-risk AI systems are those embedded in products subject to third-party assessment under sectoral legislation, and those that are not components of products but are deemed high-risk when used in certain areas (e.g. transport, education, safety components). Such systems are subject to a set of requirements (e.g. risk assessments, mitigation systems, data quality, logging, and technical documentation) before being placed on the market.
The rules on the disclosure of evidence as laid down in art. 3 of the proposed AI Liability Directive only apply to these high-risk AI systems. The rules on the burden of proof, however, apply to claims relating to all AI systems (art. 4).
3. The proposed AI Liability Directive: disclosures and presumptions
3.1 Rebuttable presumption of a causal link
Article 4 introduces a rebuttable presumption of a causal link in the case of fault. It allows the courts to presume a causal connection between the fault of the defendant and the output of an AI system (or its failure to produce an output) under three cumulative conditions. Firstly, the fault of the defendant must be established, either demonstrated by the claimant or presumed by the court, consisting of non-compliance with a duty of care under EU or national law. Secondly, it must be ‘reasonably likely’ that the fault has influenced the output of the AI system. Finally, it must be demonstrated that the output (or the failure to produce one) gave rise to the damage. Paragraphs (2) and (3) differentiate between providers and users of AI systems.
In the case of a claim for damages caused by a high-risk AI system, the causal link shall not be presumed if the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove this causal link (art. 4(4)). When the
claim concerns an AI system that is not high-risk, the presumption of the
causal link shall only be applied where the national court considers it
excessively difficult for the claimant to prove the causal link (art. 4(5)).
Moreover, the defendant can always rebut any presumption regarding the causal
link (art. 4(6)).
3.2 The disclosure of evidence
Article 3 of the proposed AI
Liability Directive establishes the conditions regarding the disclosure of
evidence and introduces a rebuttable presumption of non-compliance. This
applies to high-risk AI systems as defined in the AI Act.
Article 3(1) of the Directive allows a court to order the disclosure of relevant evidence about specific high-risk AI systems that are suspected to have caused damage. Recital (16) confirms that such a disclosure requirement is not provided for in the AI Act proposal. However, the disclosure provided for in the AI Liability Directive does not seem to be absolute. Rather, it appears to be subject to a proportionality assessment, since disclosure is only allowed to the extent necessary for sustaining the liability claim. In making that assessment, national courts ought to consider the legitimate interests of all parties, particularly in relation to the preservation of trade secrets and confidential information. The explanatory memorandum conveys that the aim is to strike a balance between ‘the claimant’s rights and the need to ensure that such disclosure would be subject to safeguards to protect the legitimate interests of all parties concerned, such as trade secrets or confidential information’. In other words, the goal is to balance the claimant’s rights against the need for safeguards imposed by the court to preserve trade secrets or confidential information. If the defendant refuses to disclose the requested information, the court will presume that the defendant did not comply with the relevant duty of care. The defendant can, however, remedy that and rebut the presumption by providing evidence.
Recital (20) confirms that national courts should have the power to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, in a proportionate manner that balances the interests involved. Such measures could include restricting access to documents containing trade secrets, and restricting access to hearings and to records or transcripts thereof, to a limited number of people. However, the courts cannot decide on such measures without considering the need to ensure the right to an effective remedy and to a fair trial, as well as the potential harm that could occur.
4. Comment
It is commendable that the EU is taking steps to address the information asymmetry between AI systems’ developers and the individuals harmed by their creations. The (prospect of the) realisation of liability and compensation can provide an important incentive for AI providers and users to ensure the safe and correct functioning of their systems. Together with the Product Liability Directive, the proposed AI Act and other product safety rules such as the General Product Safety Directive, the proposed AI Liability Directive forms part of a comprehensive framework the European Commission is designing to address the safety of AI systems and liability for damage caused by those systems.
Nevertheless, the AI Liability Directive harbours an important flaw that might have been overlooked: it offers defendants a way to avoid having to disclose evidence. As the proposal currently stands, if the defendant refuses to provide trade secret information about an AI system as evidence, they will be presumed not to have complied with the duty of care. A defendant might decide it is strategically wiser to simply pay compensation in exchange for non-disclosure. After all, trade secrets are of great economic importance to such enterprises, and one of the conditions for the legal protection of trade secrets is that continuous efforts are made to keep the information secret. In other words, non-compliance becomes a choice made in order to avoid disclosure.
This is at odds with the drive for transparency of high-risk AI systems in the EU AI Act (art. 13). By offering an option to avoid transparency, the AI Liability Directive undermines the transparency requirement laid down in the AI Act. This creates tension between the two new instruments. The EC could have taken a clearer stance on transparency and its necessity by carrying the transparency requirement of the AI Act through to the AI Liability Directive.
There is an additional disadvantage to the route the EC has chosen. By avoiding the disclosure of the information necessary to establish fault in liability claims, a defendant can also prevent any flaws of its AI system from being disclosed. This might take away any motivation to improve an AI system, as sufficient financial means make it possible to keep the system’s shortcomings hidden from the public eye. The lack of transparency could thereby disincentivise the development and improvement of AI systems. Ultimately, this might negatively impact innovation and trust in AI.