Lorna Woods, Professor of Internet Law, University of Essex
Introduction
The development of ‘web 2.0’,
especially social media, has meant that many people are able to post content to
potentially large audiences. The sheer volume of content, however, and the question of how to manage conflicting rights between different users, have led to debate about the role of the platforms in helping remedy the problems that they facilitate (problems that come along with the benefits the platforms enable). One particular issue is the acceptability of the use of filtering technologies, especially from the perspective of the freedom of expression of the user of the work. The issue has come before the courts before, and the courts – in the context of copyright claims – have expressed concerns about those techniques. Given the quantity of material uploaded, however, it is hard to envisage that manual ex post review of content would be possible, let alone effective. The problem of copyright enforcement remains, as does the ‘value gap’ created by mass unauthorised use of protected works.
Platforms have had little incentive to prevent the problem from arising –
indeed it could be said the platforms benefitted (through advertising revenue)
from the existence of this content. The ex post system – whereby copyright
holders notify and the platform removes content to maintain its immunity under
Article 14 e-Commerce
Directive – has not been seen as effective by rights-holders.
This problem led to the overhaul of the copyright regime and the enactment of the Directive on Copyright in the Digital Single Market (Directive 2019/790), a proposal that was subject to extensive lobbying during the legislative process. The result is a directive which aims to reduce
the ‘value gap’ and to rebalance matters more in favour of the creators of
content with the introduction of a new press publisher’s right (Article 15)
and, notably, Article 17 which covers use of protected content by online
content-sharing service providers.
Article 17, however, was contentious, leading to this challenge by
Poland, and the recent
Advocate-General’s opinion. While the Opinion is important for understanding the scope of Article 17 itself, we might also ask whether its reasoning has broader implications.
Provisions in Issue
Article 17 changes (or clarifies) the position under copyright law: the platforms caught by the definitions in the directive will automatically be considered to be carrying out 'acts of communication to the public or making available to the public' when they give the public access to copyright-protected content uploaded by users, and therefore require authorisation from the relevant content owners. Article 17(3) displaces Article 14 e-Commerce Directive, which provides conditional immunity from liability to neutral hosts. If there are no relevant licensing arrangements in place, the platforms will only be able to avoid liability if they satisfy the terms of Article 17(4). Article 17(4) introduces four cumulative conditions (arranged across three subparagraphs) – that the platform has:
(a) made best
efforts to obtain an authorisation, and
(b) made, in
accordance with high industry standards of professional diligence, best efforts
to ensure the unavailability of specific works and other subject matter for
which the right-holders have provided the service providers with the relevant
and necessary information; and in any event
(c) acted
expeditiously, upon receiving a sufficiently substantiated notice from the
right-holders, to disable access to, or to remove from their websites, the
notified works or other subject matter, and made best efforts to prevent their
future uploads in accordance with point (b).
While the first part of Article
17(4)(c) is similar to the conditions in Article 14 e-Commerce Directive, the
other three elements are new. Leaving aside questions around the definitions of the platforms falling within this obligation, a number of questions arise: does Article 17(4) effectively require upload filters (and will these lead to over-blocking); what are ‘best efforts’, especially in relation to the monitoring which is implied; and does Article 17(4) effectively require ‘general monitoring’ (despite the clarification in Article 17(8) that it should not)?
Article 17(7) might be seen as an effort at counter-balance; it provides:
The
cooperation between online content-sharing service providers and right-holders
shall not result in the prevention of the availability of works or other
subject matter uploaded by users, which do not infringe copyright and related
rights, including where such works or other subject matter are covered by an
exception or limitation.
Significantly, the directive
expressly lists the exceptions for quotation, criticism, review and for
caricature, parody or pastiche. There
are also obligations (in Article 17(9)) relating to redress and complaints
mechanisms, which some sections of industry have claimed are onerous. Some of the vagueness around the requirements might be addressed by Commission guidance aimed at aiding coherent implementation of the directive; while this guidance is now available, it was not at the time the case was lodged.
The Legal Challenge
The Issue
Poland issued a judicial review
action, seeking annulment either of Article 17(4)(b) and (c) alone or of Article 17 in its entirety, on the basis of incompatibility with
freedom of expression as guaranteed by the Charter (Article 11 EUCFR), either
by destroying the essence of the right or by constituting a disproportionate
interference with that right.
The Nature of the Obligation
A preliminary issue concerned the nature of the obligation imposed by Article 17(4) and whether it requires, for preventive monitoring purposes, the use of upload filters. While this is not explicitly required, the Advocate General took the view that, in many circumstances, the use of those tools is required [para 62]. Further, industry standards will have an impact on the decision as to what best practice is [paras 65-66]. So, while the recitals provide considerations against which to assess what suitable methods would look like (see recital 66), this did not affect the assessment that, in reality, upload filters of some description would be used.
The Impact on Freedom of
Expression
Applicability of the Right
One precondition for the
applicability of fundamental rights is that the actions under challenge could
be imputed to the State; here, the actions of the platforms are in issue (and
their right to run a business under Article 16 EUCFR). The Advocate General
drew a distinction between the circumstances where a platform had a real choice and the circumstances here. The provision might formally give operators a
choice: do this and get exemption from liability, or choose not to do that and
face exposure to liability. The Advocate General emphasised that the assessment
as to compliance with Article 11 should take account of what is happening in
practice; the reality is that ‘the conditions for exemption laid down in the
contested provisions will, in practice, constitute genuine obligations for those providers’ [para 86, emphasis in original].
Limitations – Lawfulness
The conditions for limiting
Article 11 EUCFR are found in Article 52(1) EUCFR. The requirement there that
the restriction be ‘provided for by law’ was to be understood in the light of
the jurisprudence on lawfulness for the purpose of Article 10(2) ECHR (citing
some CJEU decisions on data protection and the right to a private life in
support). Lawfulness requires not just a basis in law; that law must also be accessible and foreseeable. The first aspect is clearly satisfied here.
As regards the second, the Advocate General noted that the case law allows the
legislature ‘without undermining the requirement of “foreseeability” [to]
choose to endow the texts it adopts with a certain flexibility rather than
absolute certainty’ [para 95, citing the Grand Chamber judgment in Delfi v Estonia, discussed here].
Nonetheless, the case-law on lawfulness also requires safeguards against
arbitrary or abusive interference with rights. The Advocate General linked this issue to proportionality.
Limitations – the Essence of the Right
The requirement to respect the
essence of the right provides a limit on the discretion of the legislature to
weigh up competing interests and come to a fair balance. It is ‘an “untouchable
core” which must remain free from any interference’ [para 99]. According to the
Advocate General, an ‘obligation preventively to monitor, in general, the
content of users of their services in search of any kind of illegal, or even
simply undesirable information’ constitutes such an interference [para 104].
The prohibition on general monitoring in Article 15 e-Commerce Directive is ‘a general
principle of law governing the Internet’ [para 106, emphasis in original
and referring to Scarlet Extended and
SABAM], and binds the EU legislature.
Importantly, this principle does not prohibit all forms of monitoring; the
jurisprudence of the CJEU has already distinguished monitoring which occurs in
specific cases, and a similar position can be seen in the case-law of the ECtHR
(Delfi). Tracing the development of
the CJEU’s reasoning over time from the early cases of L’Oreal,
Scarlet
Extended and SABAM,
through McFadden to
Glawischnig-Piesczek
(discussed here),
the Advocate General opined that Article 17 is a specific monitoring obligation
[para 110]; it focuses on specific items of content, and the fact that a platform would have to search through all content to find them does not turn this into a general obligation.
Limitations – Proportionality
After reviewing the first two
aspects of proportionality (appropriate and necessary), the Advocate General
moved to discuss the heart of the matter:
proportionality stricto sensu and the balance achieved between the conflicting rights. The Advocate General accepted that it was permissible for the EU legislature to change the balance it had adopted in Article 14 e-Commerce Directive for that in the new Copyright in the Digital Single Market Directive, taking into account the different context and the broad discretion the institutions have in this complex area. The Advocate General
identified the following factors: the extent of the economic harm caused due to
the scale of uploading; the ineffectiveness of the notice and take down system; the difficulties in prosecuting those responsible; and the fact that the obligations concern specific service providers [para 137].
The next issue was whether platforms would take ‘the easy way out’ and over-block just to be on the safe side in terms of their own exposure to liability. The Advocate-General excluded this possibility in his interpretation of the ‘best efforts’ obligation. So, the obligation to take users’ rights into account ex ante, and not just ex post, supports the proportionality of the measure; the redress rights and the out-of-court redress mechanism are supplementary safeguards. Service providers may not use just any filtering technology; they must consider the collateral effect of blocking when implementing measures. Systems which block based on content alone, without taking into account legitimate uses, would fall foul of the position in Scarlet Extended and SABAM.
This was followed by a
consideration of Glawischnig-Piesczek.
In the light of the CJEU’s emphasis on
the platform in that case not having to make an independent decision as to the
acceptability of content to take down (and to stay down), the Advocate General
suggested that platforms cannot be expected to make independent assessments of
the legality of content. He concluded:
to minimise
the risk of ‘over-blocking’ and, therefore, ensure compliance with the right to
freedom of expression, an intermediary provider may, in my view, only be
required to filter and block information which has first been established by a
court as being illegal or, otherwise, information the unlawfulness of which is
obvious from the outset, that is to say, it is manifest, without, inter alia,
the need for contextualisation [para 198].
Referring back to his own Opinion in YouTube and Cyando, he reiterated that ‘an intermediary provider cannot be required to undertake general filtering of the information it stores in order to seek any infringement’ [para 200, emphasis in original].
The Advocate General concluded
that Article 17 contained sufficient safeguards. Article 17(7), which states
that measures taken ‘shall not result in the prevention of the availability of
works or other subject matter uploaded by users, which do not infringe
copyright and related rights’, means that wide blocking is not permitted and that, in ambiguous cases, priority should be given to freedom of expression: ‘“false positives” of blocking legal content, were more serious than “false negatives”, which would mean letting some illegal content through’ [para 207]. Rights-holders can still request that infringing content be taken down [para 218]. Having said that, a nil error rate for false positives is not required, though the error rate should be as low as possible, and techniques that result in a significant false-positive rate are precluded. Article 17(10), which provides for stakeholder cooperation, is in the view of the Advocate-General the place to determine the practical implementation of these requirements [para 213].
Comment
The Opinion constitutes the Advocate-General’s attempt to steer a course through the radically different interpretations of Article 17, interpretations which perhaps reflect the provision’s contentious nature. The outcome of the
case will be significant beyond the enforcement of copyright, as similar
mechanisms might be required under other legislation: TERREG
(Regulation 2021/784 on addressing the
dissemination of terrorist
content online) for example,
envisages hosting service providers putting in place ‘specific measures’
(recitals 22-23, Article 5(2)) that include the possibility of ‘technical
means’ to address the dissemination of terrorist content online. The headline news is, of course, that the Advocate-General did not find Article 17 to be contrary to Article 11 EUCFR, though what that means for the obligations under Article 17 is potentially complex. Before discussing that issue, a number of other points can be noted.
The first point is the complex
context for assessing fundamental rights. As the Advocate General noted, the
platforms are private actors; it is not as simple as a user saying ‘because of
freedom of expression I can upload what I like on this platform’. There are two points here. The first is whether the platforms’ choices can be attributed to the Member States. This is relevant because the rights are
not addressed to private actors (Article 51 EUCFR; this is also true under the
ECHR). This is an issue on which there
has not – in the context of the EUCFR – been much case law to date. The responsibility of the State, however,
subsequently forms a significant element of the Advocate General’s approach to
Article 17 and its safeguards: attributing the interference to the State means
that the framework for analysis is that of the State’s negative obligations,
rather than introducing questions of positive obligations.
At this early stage in his
Opinion, however, the Advocate General was content to flag up the relevance of
the rights. He referred to the jurisprudence of the ECtHR to support his
position:
- Appleby, which concerned the
access of peaceful protesters to a privately-owned shopping centre. There, the
ECtHR held that they had no right under Article 10; they could make their views
known in other venues. This is a case
about positive obligations.
- Tierfabriken, concerning the refusal
of the Commercial Television Company to allow the broadcast of an animal rights
advert because it breached the company’s terms of business and the terms of
national law. In the view of the regulatory authorities, the company was free
to purchase its ads wherever it chose. The Court held that, irrespective of the
formal status of the actors, the State was implicated because the company had
relied on the prohibition of political advertising contained in the regulatory
regime when making its decision. Domestic law “therefore made lawful the
treatment of which the applicant association complained” [para 47].
Neither case seems to be making
precisely the argument that the Advocate General made – that the platforms had
no choice. Nonetheless, the point seems fair. The implications of this point
should be considered: does this mean that, whenever platforms make a decision based on elements of their terms of service that reflect national law, freedom of expression is implicated?
Beyond this point, it seems clear that platforms may set their own terms
of service to reflect their business choices and that (subject to concerns
about individuals losing all possibility of communicating) there would be no
freedom of expression-based complaint relating to the enforcement of those
terms. Further, it seems that were the State to try to interfere with the
platforms’ choices in this regard, that interference would need to recognise
Article 16 EUCFR or even Article 11.
The Advocate-General considered
the lawfulness requirement in Article 52(1), something that the Court does not
always do (assuming it is satisfied). As well as the formal requirement of a basis in law, the lawfulness test has qualitative requirements. In carrying out
his analysis, the Advocate-General treated questions about the safeguards
against abuse which are part of the lawfulness test as part of questions of
proportionality. In this, he followed the approach of the ECtHR under Article 8
ECHR (right to a private life) in the surveillance cases. This approach has been criticised in that
context as blurring two different questions aimed at two separate concerns and
in so doing lowering the threshold of protection. The approach has not so far been adopted in
relation to freedom of expression even by the Strasbourg court and so is novel
here. The issue of safeguards in this
Opinion is central, as we shall see below.
Another novelty is the discussion
of the ‘essence of the right’, which has not received that much attention. The
Advocate General helpfully started with a clear statement as to what the
requirement is – an untouchable core – where the usual balancing of rights
cannot take place. Given the complex
array of potentially conflicting rights in play in this context, that principle
could be important. Once again, the Advocate General drew on the surveillance
case law, perhaps because it is the only place where there is much discussion
of the point. In the context of surveillance, the Court has held that general
monitoring of content would damage the essence of the right, but that the
general retention of metadata did not (though it might still be hard to
justify). On one level the prohibition on general monitoring covers the same
ground as the prohibition on mass content interception under Article 7 EUCFR
(and Article 8 ECHR) – though Article 7 operates in the context of private
communications rather than content that may well be made public, effectively broadcast. What is arguably a similar boundary was drawn here:
general monitoring would undermine the core of the right but specific
monitoring, as a form of prior restraint, would not. In this respect, the Advocate General pointed out
that although prior restraints are very intrusive of freedom of expression and
tightly controlled under the Strasbourg jurisprudence, they are not
automatically impermissible.
Significantly, the Advocate-General claimed that the prohibition on
general monitoring is a general principle of Internet law – though it is far
from clear what weight the status as ‘general principle’ has in this specific
context. Is a general principle of Internet law different from a general
principle within EU law more generally? Of course, this discussion is based on
the assumption that filtering for specific content is somehow different from
looking at everything, and also leaves open the question of how broad the category of
content searched for can be before it ceases to be ‘specific’.
What then of the Advocate-General’s
approach to Article 17? From his
analysis of the freedom of expression framework, the scope of the obligation is
important in determining its acceptability. Clearly, the discussion of general versus specific monitoring is one aspect of this, but the safeguards required to legitimate an interference with freedom of expression also, in the Advocate-General’s view, protect against over-blocking. The inventive
interpretation of the platforms’ ‘best efforts’ is central to this approach.
Essentially, this interpretation narrows the scope of when and what is
permissible; automated techniques can be used when they are functionally able
to do the job. On one view this is good: it prevents platforms from over-relying on possibly not very good technologies to the detriment of their users (technologies that may also exhibit bias). It is a way of balancing the reality of scale against concerns about over-blocking, and could be seen as a clever way of reconciling conflicting demands.
Does this interpretation suggest, however, that the balance of Article 17 is still heavily shifted towards ex post moderation and take-down systems, because the conditions that the Advocate General has set on the use of technology effectively mean that no technology can be used (and there is little incentive to develop it), or that it can only be used in a very limited way?
Where we are balancing copyright and business rights against freedom of
expression, this shift towards a less effective content control system might
not seem so bad (even if it flies in the face of the stated concerns driving
the legislation), but would the same analysis be deployed in relation to child
sexual exploitation and abuse material? The difficulty here is that the
Advocate General’s framework for analysis is content-blind. While it is based
on the text of Article 17(7) and could therefore be understood as relevant just
to this directive, his interpretation of that provision is given impetus by his
introduction of the requirement for safeguards derived from freedom of
expression. This would then have a wider application. The Advocate General here
explicitly prioritises freedom of expression over another Charter right
(Article 17 EUCFR) and there does not seem to be an obvious place within the
safeguards framing where issues around the importance of speech or importance
of other rights can easily be taken into account.
One final point to note is, of
course, that this is an Opinion and not binding. The Advocate General referred
to his reasoning in YouTube and Cyando.
The Court decided that case without reference to that reasoning. It remains to be seen how much this Opinion will influence the Court here – or in relation to discussions around other legislation which envisages proactive technical measures.
Photo credit: via Wikimedia Commons