Tiago Sérgio Cabral*
* PhD Candidate at the
University of Minho | Researcher at JusGov | Project Expert for the Portuguese
team in the "European Network on Digitalization and E-governance"
(ENDE). Author’s opinions are his own.
Photo credit: Salino01,
via Wikimedia
Commons
Introduction to AI Literacy
under the AI Act
Under Article 4 of the AI Act (headed “AI literacy”),
providers and deployers are required to take “measures to ensure, to their best
extent, a sufficient level of AI literacy of their staff and other persons
dealing with the operation and use of AI systems on their behalf, taking into
account their technical knowledge, experience, education and training and the
context the AI systems are to be used in, and considering the persons or groups
of persons on whom the AI systems are to be used”. The concept of AI literacy
is defined in Article 3(56) of
the AI Act as we will see below.
Article 4 is part of a wider
effort by the AI Act to promote AI literacy, which is also reflected in other
provisions such as those addressing human oversight, the requirement to draw up
technical documentation or the right to explanation of individual
decision-making. Article 4 focuses on staff and people involved in the
operation and use of AI systems. As such, the main consequence of this
provision is that providers and deployers are required to provide training for
employees and people involved in the operation and use of AI systems, allowing
them to obtain a reasonable level of understanding of the AI systems used
within the organization, as well as general knowledge about the benefits and
dangers of AI.
It is important to distinguish
between the rules on AI literacy and the requirements on human oversight, in
particular Article 26(2) of the
AI Act. Under this provision, deployers will be required to assign human
oversight of high-risk AI systems to natural persons who have the necessary
competence, training and authority, as well as the necessary support. The level
of knowledge and understanding of the AI system required of the human overseer
will be deeper and more specialized than what is required from all staff in the
context of AI literacy. The human overseer must have specialized knowledge
about the system that he or she is overseeing. The people subject to AI literacy
obligations require more general knowledge about the AI systems used in the
organization, particularly the ones the staff engages with, along with an
understanding of the benefits and dangers of AI. The level of AI literacy
required in organizations that develop or deploy high-risk systems will,
naturally, be higher than in organizations that deploy, for example, systems
subject only to specific transparency requirements. In any case, it will still
likely be lower than what is required of a human overseer (although, unlike the
rules on human oversight, the AI literacy obligation is not limited to high-risk
systems).
The scope of Article 4 of the
AI Act
AI literacy is a sui generis obligation
under the AI Act. It is systematically placed within “Chapter I – General
Provisions”, and thus disconnected from the risk categorization for AI systems.
This can result in significant challenges in interpreting the scope of the
obligations arising from Article 4 of the AI Act.
In fact, an isolated reading of this
provision could result in the conclusion that AI
literacy obligations apply to all systems that meet the definition of an AI system
under Article 3(1) of the AI Act. As the definition in Article 3(1) of the AI
Act is extremely broad in nature, this would result in a large expansion of the
scope of the AI Act, far beyond the traditional pyramid composed of (i) prohibited
AI (Article 5 of the AI Act),
(ii) high-risk AI systems (Article 6
of the AI Act); and (iii) AI systems subject to specific transparency
requirements (Article 50 of the
AI Act). The risk categorization also includes general-purpose AI models and
general-purpose AI models with systemic risk, but the AI literacy obligation
under Article 4 appears to only apply directly to systems. Indirectly,
providers of AI models will be required to provide providers of AI systems that
integrate their models with sufficient information to allow the latter to
fulfil their literacy obligations (see, inter alia, Article 53(1)(b) of the AI
Act).
The abovementioned interpretation
does not hold, however, if we opt for a reading of Article 4 of the AI Act that
adequately considers the definition of AI literacy under Article 3(56) of the
AI Act. Article 3(56) of the AI Act lays down that AI literacy means the “skills,
knowledge and understanding that allow providers, deployers and affected
persons, taking into account their respective rights and obligations in the
context of [the AI Act], to make an informed deployment of AI systems, as well
as to gain awareness about the opportunities and risks of AI and possible harm
it can cause”.
Providers and deployers of AI
systems that are not part of the abovementioned categorization are not, strictly
speaking, subject to any obligations related to these AI systems – at least none
arising from the AI Act. Likewise, the AI Act does not grant affected persons
any rights in relation to systems that are not part of the “classic”
categorizations. Given that rights and obligations are the building blocks upon
which AI literacy measures need to be designed, where they do not exist the
logical conclusion is that the definition cannot be applied in this context. If
the definition under Article 3(56) of the AI Act cannot be applied, Article 4 of
the AI Act, which entirely depends on this definition, cannot apply either.
Enforcement
In addition to issues around the
interpretation of its scope, enforcement of Article 4 also raises significant
questions. Article 99(3-5) of the AI Act does not establish fines for the
infringement of the AI literacy obligation. As such, organizations cannot be
fined for failing to fulfil their AI literacy obligations based on the AI Act
(if considered in isolation). Market surveillance authorities have enforcement
powers that do not entail financial sanctions, but it is still a strange
scenario for the AI Act to establish an obligation without a corresponding fine,
which is arguably the key sanctioning tool. It also remains to be seen whether
market surveillance authorities will prioritize an obligation that the EU
legislator did not consider significant enough to merit inclusion in Article
99(3-5) of the AI Act.
In addition, Member States may
use their power under Article 99(1) of the AI Act to establish additional
penalties and, through those, ensure the enforcement of Article 4 of the AI Act.
However, this approach risks fragmentation and inconsistency, which is
undesirable.
Private enforcement is also a possibility,
but whether in the context of tort liability or product
liability, it seems to us that proving damages and the causal link between
the behaviour of the AI system and the damage may continue to be major
obstacles to the success of any attempts. In this context, it is important to
note that the new EU
Product Liability Directive (applicable to products marketed or put into
service after 9 December 2026) contains relevant provisions that may make
private enforcement easier against producers in the future. In particular,
Article 10(3) of the Product Liability Directive establishes that “the causal
link between the defectiveness of the product and the damage shall be presumed
where it has been established that the product is defective and that the damage
caused is of a kind typically consistent with the defect in question”. In
addition, Article 10(4) addresses situations where claimants face excessive
difficulties, in particular due to technical or scientific complexity, in
proving the defectiveness of the product or the causal link between its
defectiveness and the damage by allowing courts to establish a presumption. However,
in this scenario, linking a breach of the obligation to ensure AI literacy to a
defect in a product or a specific instance of damage in any satisfactory manner
seems challenging and unlikely to be accepted by courts.
Lastly, although the AI literacy
obligations technically became applicable on 2 February 2025, the deadline for
the appointment of Member State authorities is 2 August 2025. As such, any
attempt at enforcement will likely be limited during this period.
Identification of AI systems
as a preliminary step for the assessment of literacy obligations
Although, as noted above, literacy
obligations are not applicable to systems outside of the risk categorization of
the AI Act, from an accountability perspective, providers and deployers who
want to rely on this exception should still carry out an evaluation
of the AI systems for which they are responsible as a preliminary step. Only
after identifying the AI systems and evaluating whether they fall outside the
risk categories established by the AI Act can providers and deployers know with
an adequate level of certainty that they are not subject to the literacy
obligations under Article 4 of the AI Act.
The GDPR as an alternative
source of literacy obligations
For providers and deployers who
are acting as data controllers under the GDPR, it is important to note that
the non-applicability of Article 4 of the AI Act does not exclude literacy and
training obligations that may arise under other EU legal instruments. In
particular, for AI systems that depend on the processing of personal data for
their operation, adequate training of staff may be required to comply with the
controller’s accountability obligations and may form part of the measures
implemented by the controller to ensure the lawful processing of personal data
in the context of the organization’s use of AI (Articles 5(2) and 24 of the
GDPR). Considering
the wording of Article 39(1)(b) of the GDPR, data protection officers should be
involved in the evaluation of training requirements.