
Thursday, 6 June 2024

EU Media Freedom Act: the convolutions of the new legislation

 



Samira Asmaa Allioui, research and tutorial fellow at the Centre d'études internationales et européennes de l'Université de Strasbourg

 

Photo credit: Bin im Garten, via Wikimedia Commons

 

Journalists are under pressure in different ways. Over the last few years, media freedom, and especially media pluralism, have been in peril.

On December 15, 2023, the Council and the European Parliament struck a deal on rules to safeguard media freedom, media pluralism and editorial independence in the European Union. The EU Media Freedom Act (EMFA) promised increased transparency about media ownership and safeguards against government surveillance and the use of spyware against journalists. The agreement comes after numerous revisions of the Audiovisual Media Services Directive (AVMSD) and new regulations such as the Digital Markets Act (DMA) and the Digital Services Act (DSA). As a reminder, the EMFA builds on the DSA.

The aim of this contribution is to present an overview of the EMFA and specifically to analyse to what extent its rules still contribute to the limitation of freedom of speech, the erosion of trust, the undermining of democratic processes, disinformation, and legal uncertainty.

The EMFA requires EU countries to respect editorial freedom, refrain from using spyware against journalists, avoid political interference, provide stable funding for public media, protect online media, and ensure transparent state advertising. It also establishes a European watchdog: a new independent European Board for Media Services to fight interference from inside and outside the EU.

Nevertheless, this new EU legislation tries to set boundaries for journalists’ actions through Article 18 EMFA on the protection of media content on very large online platforms (VLOPs), with the potentially detrimental effect of introducing something akin to a media exemption. But the most significant ambiguity lies in Article 2 EMFA on the definition of ‘media service’, which appears to be the problem everyone acknowledges. This raises the question of who the EMFA is protecting. Are democracy, and the possibility for people to obtain impartial and unbiased information, really strengthened? Nor should it be forgotten that, for the European Parliament elections, there is a potential danger of political interference by non-European countries seeking to take advantage of democratic elections to influence the media illegally, by creating fake social media accounts and launching massive propaganda campaigns to disseminate conflict-ridden content.

 

THE ACCURACY OF INFORMATION

The EMFA focuses on two main points regarding VLOPs. First, it asserts that platforms limit users’ access to reliable content when they apply their terms and conditions to media companies that exercise editorial responsibility and create news conforming with journalistic standards. The Regulation accordingly takes aim at VLOPs’ gatekeeping power over access to media content, and aims to remould the relationship between media and platforms. Media service providers that exercise editorial responsibility for their content have a primary role in the dissemination of information and in the exercise of freedom of information online. In exercising this editorial responsibility, they are expected to intervene diligently and provide reliable information that complies with fundamental rights, in accordance with the regulatory or self-regulatory requirements to which they are subject in the Member States.

Secondly, it asserts that the quality of the media may help fight disinformation. To address this problem, the EMFA’s objective is to adjust the relationship between platforms and media. According to Article 2 EMFA, ‘media service’ means ‘a service as defined by Articles 56 and 57 [TFEU], where the principal purpose of the service or a dissociable section thereof consists in providing programmes or press publications, to the general public, under the editorial responsibility of a media service provider, by any means, in order to inform, entertain or educate’. A ‘media service’ enjoys certain protections under the Act. According to Joan Barata, the media definition under the EMFA is an overly “limited” definition, which is not “aligned” with international and European human rights standards, and “discriminatory”, as it excludes “certain forms of media and journalistic activity”. The DSA classifies platforms or search engines that have more than 45 million users per month in the EU as VLOPs or Very Large Online Search Engines (VLOSEs). As an illustration, according to Article 18 EMFA, media service providers will be afforded special transparency and contestation rights on platforms. In addition, according to Article 19 EMFA, media service providers will have the opportunity to engage in a structured dialogue with platforms on concepts such as disinformation. Under the agreement, VLOPs will have to inform media service providers that they plan to remove or restrict their content and give them 24 hours to respond (except in the event of a crisis as defined in the DSA).

Article 18 of the EMFA imposes a 24-hour content moderation exemption for the media, effectively forcing platforms to host content. This rule prevents large online platforms from deleting media content that violates their community guidelines. Not only could it threaten marginalised groups, but it could also undermine equality of speech and fuel disinformation. The result is a vicious circle: a speaker plants false information on social media, the platform spreads the false speech through amplifying algorithms or human-simulating bots, and recipients view the claims and spread them further.

The EMFA provides that platforms must create a “special/privileged communication channel” to discuss content restrictions with “media service providers”, defined as “a natural or legal person whose professional activity is to provide a media service and who has editorial responsibility for the choice of the content of the media service and determines the manner in which it is organised”. In other words, instead of being forced to host any content, online platforms must give special privileged treatment to certain media outlets.

However, not only does this strategy impede platforms’ autonomy in enforcing their terms of use (nudity, disinformation, self-harm) but it also imperils the protection of marginalised groups, who are frequently the main targets of disinformation and hate speech. Politics remains fertile ground for hate speech as well as disinformation, and online platforms and social media have played a key role in amplifying the spread of both. Recent reports reveal the widespread abuse of these platforms by political parties and governments: more than 80 countries around the world have engaged in political disinformation campaigns.

This could also allow misleading information to remain online long enough to be transmitted and disseminated, hindering one of the key objectives of the EMFA: giving citizens more reliable sources of information.


ABUSIVE REGULATORY INTERVENTION AND DETERIORATION OF TRUST

Primarily, one can only be concerned about any regulatory intervention by governments on issues such as freedom of expression or media freedom. Through their EU Treaty competences in security and defence matters, EU Member States seem to have prevailed, because their options to spy on reporters have been reaffirmed. However, in the final text (April 11, 2024), the European Parliament added important safeguards: the use of spyware will only be possible on a case-by-case basis, subject to authorisation from an investigating judicial authority, and only as regards serious offences punishable by a sufficiently long custodial sentence.

Furthermore, it must be emphasized that even in these cases the subjects will have the right to be informed after the surveillance and will be able to challenge it in court. It is also specified that the use of spyware against the media, journalists and their families is prohibited. In the same vein, the rules specify that journalists should not be prosecuted for having protected the confidentiality of their sources.

The law restricts possible exceptions to national security reasons, which fall within the competence of Member States, or to investigations into a closed list of crimes, such as murder, child abuse or terrorism. Even in such situations, the law makes very clear that any use must be duly justified, on a case-by-case basis, in accordance with the Charter of Fundamental Rights, and only in circumstances where no other investigative tool would be adequate.

The law therefore provides new concrete guarantees at EU level. Any journalist concerned would have the right to seek effective judicial protection from an independent court in the Member State concerned. In addition, each Member State will have to designate an independent authority responsible for handling complaints from journalists concerning the use of spyware against them. These independent authorities must provide, within three months of the request, an opinion on compliance with the provisions of the law on media freedom.

Some governments in Europe have recently tried to interfere in the work of journalists, a blatant demonstration of how far politicians can go against the media using national security as an excuse. To avoid an erosion of trust, media service providers must be totally transparent about their ownership structures. That is why, in its final version (April 2024), the EMFA enhances the transparency of media ownership, responding to rising concerns in the EU about this issue. The EMFA broadens the scope of transparency requirements, providing rules guaranteeing the transparency of media ownership and preventing conflicts of interest (Article 6), as well as creating a coordination mechanism between national regulators in order to respond to propaganda from hostile countries outside the EU (Article 17).

To that end, safeguards must be deepened to shield all media against economic capture by private owners. The situation can be even worse where the absence of official intervention translates into non-transparent and selective support for pro-government media. This demonstrates that a combination of political pressure and corruption can endanger the free press.

Secondly, the EMFA’s content moderation provisions could erode public trust in the media and endanger the integrity of information channels. Online platforms moderate illegal content online. The moderation provisions include: a solution-orientated dialogue between the parties (VLOPs, the media and civil society) to avoid unjustified content removals; obligatory annual reporting by very large online platforms (VLOPs), in the form of reports on content moderation which must include information about the moderation initiative, including information relating to illegal content, complaints received under complaints-handling systems, the use of automated tools and training measures; priority processing of any complaint lodged by media service providers under complaints-handling systems; and additional protection against the unjustified removal by VLOPs of media content produced according to professional standards. These platforms will need to take every precaution to communicate the reasons for suspending content to media service providers before the suspension becomes effective. The process consists of a series of safeguards to ensure that this rapid alert procedure is consistent with the European Commission’s priorities, such as the fight against disinformation. In this regard, the Electronic Frontier Foundation states that “By creating a special class of privileged self-declared media providers whose content cannot be removed from big tech platforms, the law not only changes company policies but risks harming users in the EU and beyond”.


MEDIA COMPANIES AND PLATFORMS BARGAINING CONTENT

Yet the EMFA still does not deal with the complex issue of who would be responsible for controlling the self-declarations (Article 18(1) EMFA). More precisely, according to Article 18 EMFA, “Providers of [VLOPs] shall provide a functionality allowing recipients of their services to declare” that they are media service providers. This self-declaration rests, mainly, on three criteria: whether media service providers fulfil the definition of Article 2 EMFA; whether they “declare that they are editorially independent from Member States, political parties, third countries and entities owned or controlled by third countries”; and whether they “declare that they are subject to regulatory requirements for the exercise of editorial responsibility in one or more Member States” or adhere “to a co-regulatory or self-regulatory mechanism governing editorial standards that is widely recognised and accepted in the relevant media sector in one or more Member States”. According to Article 18(4), when a VLOP decides to suspend its services regarding the content provided by a self-declared media service provider, “on the grounds that such content is incompatible with its terms and conditions”, it must “communicate to the media service provider concerned a statement of reasons” accompanying that decision “prior to such a decision to suspend or restrict visibility taking effect”.

Aside from that, Article 18 EMFA fragments the rules implemented by the Digital Services Act (DSA), a horizontal instrument that aims to create and ensure a more trustworthy online environment by putting in place a multilevel framework of responsibilities targeted at different types of services, and by proposing a set of asymmetric obligations harmonised at EU level with the aim of ensuring transparency, accountability and regulatory oversight of the EU online space. Those rules, covering all services and all types of illegal content, including goods or services, are set by the DSA. The inception of a specific “structured cooperation” mechanism is intended to contribute to strengthening the robustness, legal certainty and predictability of cross-border regulatory cooperation. This entails enhanced coordination and, more precisely, collective deliberation between national regulatory authorities (NRAs), which can bring significant added value to the application of the EMFA. It also implies that media regulators will be involved in the cooperation mechanisms that will be set up for the aspects falling under their remit, even if it is still unclear how this will look in practice.

Above all, how will the new legislation be applied in practice, and how will it be made to work so that it neither undermines equality of speech and democratic debate nor endangers vulnerable groups? Although Article 18 of the EMFA incorporates safeguards concerning AI-generated content, the details of which remain undisclosed for now (see also Hajli et al on ‘Social Bots and the Spread of Disinformation in Social Media’ and Vaccari and Chadwick on ‘Deepfakes and Disinformation’), there is clearly reason to be concerned about the use of generative AI to promote disinformation and deepfakes. In an era where new technologies dominate, voluntary guidelines are not enough: stronger measures are urgently needed to balance free speech and to keep AI systems under control. While AI can be an excellent tool for journalists, it can also be used for bad purposes.

 

INEQUALITY BETWEEN MEDIA PROVIDERS: THE ATTRIBUTION OF A SPECIAL STATUS

Since not all media providers will receive this special status, inequality is created between them. Platforms will have to guarantee that most of the reported information is publicly accessible. The main privilege resulting from this special status is that VLOP providers are more restricted in the way they moderate content: not in the sense of a ban on acting against this content, but in the form of advanced transparency and information towards the media service provider concerned. This effectively leads to an uncertain bargaining situation in which influential media outlets and platforms negotiate over what content remains visible, especially since the media have financial interests in seeking rapid means of communication and in ensuring that their content remains visible, even at the expense of small providers.


CONCLUSION

In conclusion, the risk of tampering with public opinion by disguising disinformation and propaganda as legitimate media content is still reflected in Article 18’s self-declaration mechanism. On top of that, the fragmentation of legislation, not aligned with the DSA, risks establishing two categories of freedom of speech. Our capacity to make informed decisions could also be undermined by Article 18 EMFA, an article that allows self-declared media entities to operate with insufficient oversight. Furthermore, our democratic processes risk being severely damaged by the unregulated spread of disinformation. Finally, the opacity of Article 18 as to how the authenticity of self-declared media is determined engenders problems of compliance enforcement.

The elements recalled here highlight the underside of the new legislation and corroborate that efforts must be made in the future to remedy the critical situation of press freedom within the EU.

 

Friday, 29 April 2022

“Daphne’s Law”: The European Commission introduces an anti-SLAPP initiative


 


Professor Justin Borg-Barthet, University of Aberdeen*

*Advisor to a coalition of press freedom NGOs on the introduction of SLAPPs, co-author of the CASE Model Law, lead author of a study commissioned by the European Parliament, and member of the Commission's Expert Group on SLAPPs and its legislative sub-group

 

Background

 

When Daphne Caruana Galizia was assassinated in Malta on 16th October 2017, 48 defamation cases were pending against her in Maltese and other courts. Daphne was at the peak of her journalistic powers when she was killed, producing a seemingly endless exposé of criminality involving government and private sector actors. Naturally, those she was exposing did not take kindly to the intrusion on the enjoyment of the fruits of their labour. Courts which offered few meaningful safeguards against vexatious litigation presented a nominally legitimate forum in which they would seek to exhaust and punish Daphne and to ensure that others did not engage in similar investigations. Most of these cases were inherited by her sons, whose grief was interrupted constantly by a need to appear in court in defence of their mother’s work.

 

The scale of abusive litigation which Daphne endured prompted several NGOs to look more closely at the phenomenon of SLAPPs. Strategic Lawsuits Against Public Participation, a term coined in American academic circles, are lawsuits intended not to serve the legitimate purpose of pursuing a claim against a respondent, but instead to use court procedure to suppress scrutiny of matters of public interest. The direct costs, psychological strain, and opportunity costs of defending oneself in court are intended to coerce retraction of legitimate public interest activity, and to have a chilling effect on others who might show an interest. While most SLAPPs are framed as defamation claims, there is also a growing body of abusive litigation which suppresses public participation using the pretext of other rights such as privacy and intellectual property.

 

In response to the growing SLAPP phenomenon, several US States, Canadian provinces and Australian states and territories have introduced anti-SLAPP statutes. Typically, these statutes provide for the early dismissal of cases, and include cost-shifting measures to compensate SLAPP victims and to dissuade claimants. No EU Member State has yet adopted similar laws. Prompted by Daphne’s experience, European NGOs and MEPs became increasingly aware of the alarming incidence of SLAPPs throughout Europe. They then set out to identify and advocate for legal solutions in the European Union.

 

Initially, the European Commission resisted calls for the introduction of anti-SLAPP legislation, citing a lack of specific legal basis. As the legal and statistical research bases for NGO advocacy evolved further, and following a change in the Commission’s political leadership, the Commission’s assessment changed. This culminated in the introduction of a package of anti-SLAPP measures on 27th April 2022, including a proposed anti-SLAPP Directive which Vice-President Jourova dubbed “Daphne’s Law”.

 

The legislative proposal is based, in part, on a Model Law which was commissioned by the Coalition Against SLAPPs in Europe (CASE), a grouping of NGOs established to further the research basis and advocacy for anti-SLAPP laws in Europe. That Model Law is itself inspired by anti-SLAPP statutes adopted in the United States, Canada and Australia, but accounts for divergent continental legal traditions, and benefits from extensive consultation with experts and practitioners in Europe and elsewhere.

 

Legal Basis and Scope

As noted above, the key barrier for NGOs and MEPs to persuade the Commission to initiate anti-SLAPP legislation was disagreement about whether the EU had competence to act in this area. Subsequently, however, the Commission recognised the internal market relevance of SLAPPs, as well as adopting a more strident approach to the rule of law and human rights implications of SLAPPs. Arguments concerning a legal basis included an approach based on numerous treaty articles (as in the Whistleblowers’ Directive), reliance on the internal market effects of SLAPPs (Article 114 TFEU) as in the Model Law, and the potential use of treaty provisions on cross-border judicial cooperation. Ultimately, in view of Member States’ expected resistance to intervention in domestic procedural law, the Commission’s draft proceeds on the basis that Article 81 TFEU confers competence in respect of judicial cooperation in civil matters.

 

The orthodox view of Article 81 TFEU presupposes an international element to matters falling within its scope. It was therefore incumbent on the drafters to constrain the scope of the proposed directive to cases having a cross-border dimension. The Commission’s proposal begins with a classic private international law formulation which refers to the domicile of the parties. A case lacks cross-border implications if the parties are both domiciled in the Member State of the court seised. This, however, is subject to a far-reaching caveat in Article 4(2):

 

Where both parties to the proceedings are domiciled in the same Member State as the court seised, the matter shall also be considered to have cross-border implications if:

a)      the act of public participation concerning a matter of public interest against which court proceedings are initiated is relevant to more than one Member State, or

b)      the claimant or associated entities have initiated concurrent or previous court proceedings against the same or associated defendants in another Member State.

 

The Commission’s proposal adopts an innovative formulation, the breadth of which is commensurate to the internal market and EU governance implications of SLAPPs. Given the EU’s interconnectedness, it is paramount that the law account for the fact that cross-border implications do not flow only from the circumstances of the parties but also from transnational public interest in the underlying dispute.

 

The broad scope could be extended further if and when Member States come to transpose the proposed directive in national law. It is hoped, and indeed recommended as good practice, that Member States will take the view that national transposition measures will not be restricted to matters falling within the scope of the Directive but would apply also to purely domestic cases. This would avoid the prospect of reverse discrimination against SLAPP victims in domestic disputes. It would also minimise opportunistic litigation concerning the precise meaning of ‘[relevance] to more than one Member State’ in Article 4(2)(a).

 

Defining SLAPPs

Other than in the title and preamble, the proposed directive does not deploy the term ‘SLAPPs’. Discussions preceding the drafting process noted a number of difficulties associated with the term, not least (i) its unfamiliarity to a European legal audience, and (ii) the potential confusion resulting from the word ‘strategic’, which could be understood to require evidence of said strategy. In keeping with the Model Law, the Commission’s draft Directive deploys familiar language and focuses on the abusive nature of the proceedings. Rather than referring to SLAPPs, therefore, the text of the draft directive uses the term ‘abusive court proceedings against public participation’.

 

In identifying matters falling within the scope of the draft directive, it is first necessary to establish that a matter concerns ‘public participation’ on a matter of ‘public interest’. The Commission’s draft accounts for the fact that SLAPPs do not only target journalistic activity, but also seek to constrain legitimate action of civil society, NGOs, academics, and others. Public participation and public interest are therefore defined broadly as follows in Article 3:

 

‘public participation’ means any statement or activity by a natural or legal person expressed or carried out in the exercise of the right to freedom of expression and information on a matter of public interest, and preparatory, supporting or assisting action directly linked thereto. This includes complaints, petitions, administrative or judicial claims and participation in public hearings;

‘matter of public interest’ means any matter which affects the public to such an extent that the public may legitimately take an interest in it, in areas such as:

a)      public health, safety, the environment, climate or enjoyment of fundamental rights;

b)      activities of a person or entity in the public eye or of public interest;

c)       matters under public consideration or review by a legislative, executive, or judicial body, or any other public official proceedings;

d)      allegations of corruption, fraud or criminality;

e)      activities aimed to fight disinformation;

 

If a case concerns public participation in matters of public interest, it is then necessary to establish that the proceedings are abusive in accordance with the definition in Article 3:

‘abusive court proceedings against public participation’ mean court proceedings brought in relation to public participation that are fully or partially unfounded and have as their main purpose to prevent, restrict or penalize public participation. Indications of such a purpose can be:

a)      the disproportionate, excessive or unreasonable nature of the claim or part thereof;

b)      the existence of multiple proceedings initiated by the claimant or associated parties in relation to similar matters;

c)       intimidation, harassment or threats on the part of the claimant or his or her representatives.

 

There are therefore two key elements to the notion of abuse: (i) claims may be abusive because they are fully or partly unfounded, or (ii) they may be abusive because of vexatious tactics deployed by claimants. The implications of a finding of abusiveness will vary depending on the type of abuse identified in the proceedings, with more robust remedies available where the claim is manifestly unfounded in whole or in part.

 

Main legal mechanisms to combat SLAPPs

Once a court has established that proceedings constitute SLAPPs falling within the directive’s scope, three key remedies will be available to the respondent in the main proceedings: (i) the provision of security for costs and damages while proceedings are ongoing, (ii) the early dismissal of proceedings, and (iii) payment of costs and damages.

 

Speedy dismissal of claims is considered the cornerstone of anti-SLAPP legislation. Accelerated dismissal deprives the SLAPP claimant of the ability to extend the financial and psychological costs of proceedings to the detriment of the respondent. Early dismissal of cases must, of course, be granted only with great caution given it is arguable that this restricts the claimant’s fundamental right to access to courts. The solution provided in the draft directive is to restrict the availability of this remedy to claims which are manifestly unfounded in whole or in part. It is for the claimant in the main proceedings to show that their claim is not manifestly unfounded (Art 12).

 

Early dismissal is not available where the claim is not found to be manifestly unfounded, even if its main purpose is ‘to prevent, restrict or penalize public participation’ (as evidenced by ‘(i) the disproportionate, excessive or unreasonable nature of the claim…the existence of multiple proceedings [or] intimidation, harassment or threats on the part of the claimant’). This differs from the Model Law, which envisages early dismissal in cases which are not manifestly unfounded but which bear the hallmarks of abuse. The Model Law’s authors reasoned that a court should be empowered to dismiss a claim which is designed to abuse rather than vindicate rights. This would not, in our view, constitute a denial of the right to legitimate access to courts, but would dissuade behaviour which is characterised as abusive in the Commission’s own draft instrument. While the Commission’s reasoning and caution are understandable, the high bar set by the requirement of manifest unfoundedness allows for significant continued abuse of process.

 

This shortcoming is mitigated somewhat by the other remedies, namely the provision of security pendente lite (Article 8) and liability for costs, penalties, and compensatory damages (Articles 14-16), which are available regardless of whether the SLAPP is manifestly unfounded or merely characterised by abuse of rights. These financial remedies are especially useful insofar as they give the respondent some comfort that they will be compensated for the loss endured through litigation. They are also expected to have a dissuasive effect on SLAPP claimants, who would be especially loath to reward the respondent whose legitimate exercise of freedom of expression they had sought to dissuade or punish. Nevertheless, it bears repeating that in all cases these remedies, designed to compensate harm, should supplement the principal remedy of early dismissal, which is intended to prevent harm.

 

In addition to these main devices to dissuade the initiation of abusive proceedings against public participation, the draft directive includes a number of further procedural safeguards. These include restrictions on the ability to alter claims with a view to avoiding the award of costs (see Recital 24 and Article 6), as well as the right to third party intervention (Article 7) which will enable NGOs to submit amicus briefs in proceedings concerning public participation. While this may appear to be a minor innovation at first blush, it could have substantial positive implications insofar as it would equip more vulnerable respondents (and less expert courts) with valuable expertise and oversight.

 

London Calling: Private International Law Innovation

While the provisions discussed above would limit the attractiveness of SLAPPs in EU courts, there would remain a significant gap if EU law did not provide protection against the institution of SLAPPs in third countries. London, with its high litigation costs and somewhat claimant-friendly defamation laws, is an especially attractive forum for claimants who wish to suppress public scrutiny. Equally, other States could be attractive to claimants who wish to circumvent EU anti-SLAPP law, whether simply as a function of the burden of transnational litigation, or because of the specific content of their substantive and/or procedural laws. The draft directive therefore proposes to introduce harmonised rules on the treatment of SLAPP litigation in third countries.

 

Article 17 provides that the recognition and enforcement of judgments from the courts of third countries should be refused on grounds of public policy if the proceedings bear the hallmarks of SLAPPs. While Member States were already empowered to refuse recognition and enforcement in such cases, the inclusion of this article ensures that protection against enforcement of judgments derived from vexatious proceedings is available in all Member States.

 

Article 18 provides a further innovation by establishing a new harmonised jurisdictional rule and substantive rights to damages in respect of SLAPPs in third countries. The provision confers jurisdiction on the courts of the Member State in which a SLAPP victim is domiciled regardless of the domicile of the claimant in the SLAPP proceedings. This would provide an especially robust defence against the misuse of third country courts and reduce the attractiveness of London and the United States as venues from which to spook journalists into silence.

 

While the limitation of forum shopping in respect of third countries is, of course, welcome, there does remain a significant flaw insofar as EU law and the Lugano Convention facilitate forum shopping within the European judicial area. The cumulative effect of EU private international law of defamation is to provide mischievous litigants with ample opportunity to deploy transnational litigation as a weapon to suppress freedom of expression. NGOs have therefore requested amendment of two EU private international law instruments:

 

In the first instance, and as a matter of urgency, the Brussels I Regulation (recast) requires amendment with a view to grounding jurisdiction in the domicile of the defendant in matters relating to defamation. This would remove the facility for pursuers to abuse their ability to choose a court or courts which have little connection to the dispute;

The omission of defamation from the scope of the Rome II Regulation requires journalists to apply the lowest standard of press freedom available in the laws which might be applied to a potential dispute. We recommend the inclusion of a new rule which would require the application of the law of the place to which a publication is directed;

 

These changes have not yet been forthcoming. It is hoped that ongoing reviews of these instruments will yield further good news for public participation in the EU.

 

Concluding remarks

Daphne’s Law will now have to be approved by the Council of Ministers and the European Parliament. The legislative process may see a Parliament seeking more robust measures pitted against Member States inclined to protect their procedural autonomy. The Commission has considered these competing demands in its draft and sought to propose legislation which strikes a balance between divergent institutional stances. Nevertheless, it must be expected that the draft will be refined as it makes its way through the approval process. As noted above, the draft would be improved if those refinements were to include the extension of early dismissal to cases beyond the narrow confines of manifest unfoundedness. Equally, the draft directive should be viewed as a welcome first step in the pushback against SLAPPs in Europe, and it is hoped that reviews of private international law instruments will follow soon after.

 

Photo credit: ContinentalEurope, on Wikicommons

Tuesday, 11 January 2022

A democratic alternative to the Digital Services Act's handshake between States and online platforms to tackle disinformation

 



 

By Paul De Hert* and Andrés Chomczyk Penedo**

 

* Professor at Vrije Universiteit Brussel (Belgium) and associate professor at Tilburg University (The Netherlands)

** PhD Researcher at the Law, Science, Technology and Society Research Group, Vrije Universiteit Brussel (Belgium). Marie Skłodowska-Curie fellow at the PROTECT ITN. The author has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497

 

 

 

1. Dealing with online misinformation: who is in charge?

 

Misinformation and fake news are raising concerns in the digital age, as discussed by Irene Khan, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (see here). For example, during the last two years, the COVID-19 crisis caught the world by surprise and considerable discussions about the best course of action to deal with the pandemic were held. In this respect, different stakeholders spoke up, but not all of them were given the same possibilities to express their opinion. Online platforms, but also traditional media, played a key role in managing this debate, particularly using automated means (see here).

 

A climate of polarization developed, in particular on the issue of vaccination but also around other policies such as vaccination passports, self-tests, treatment of the virus in general, or whether the health system should focus on ensuring immunity through all available strategies (see here). Facebook, YouTube, and LinkedIn, just to name a few, stepped in and started delaying or censoring posts that in one way or another were perceived as harmful to governmental strategies (see here). While the whole COVID-19 crisis deserves a separate discussion, it serves as an example of how digital platforms are, de facto, in charge of managing online freedom of expression and, from a practical point of view, have the final say in what is permissible or not in an online environment.

 

The term ‘content’ has been paired with adjectives such as clearly illegal, illegal and harmful, or legal but harmful, just to name the most relevant ones. However, what exactly does each of these categories entail, and why are we discussing them? What should be the legal response, if any, to a particular piece of content, and who should address it? While content and its moderation are not a new phenomenon, as Irene Khan points out in her previously mentioned report, technological developments, such as the emergence and consolidation of platforms, demand new responses.

 

With this background, the European Union is currently discussing, at a surprisingly quick pace, the legal framework for this issue through the Digital Services Act (the DSA, previously summarised here). The purpose of this contribution is to explore how misinformation and other categories of questionable content are tackled in the DSA and to highlight the option taken in the DSA to transfer government-like powers (of censorship) to the private sector. Two more democratic alternatives are sketched. The first is based on the distinction between manifestly illegal content and merely illegal content, to better distribute the workload between private and public enforcement of norms. The second consists of community-based content moderation as an alternative or complementary strategy alongside platform-based content moderation.

 

 

2. What is the DSA?

 

The DSA (see here for the full text of the proposal and here for its current legislative status) is one of the core proposals in the Commission’s 2019-2024 priorities, alongside the Digital Markets Act (discussed here), its regulatory ‘sibling’. It intends to refresh the rules provided for in the eCommerce Directive and deal with certain platform economy-related issues under a common European Union framework. It covers topics such as: intermediary service providers’ liability - building up from the eCommerce Directive regime and expanding it -, due diligence obligations for a transparent and safe online environment - including notice and takedown mechanisms, internal complaint-handling systems, traders’ traceability, and advertising practices -, risk management obligations for very large online platforms, and the distribution of duties between the European Commission and the Member States. Many of these topics might demand further regulatory efforts beyond the scope of the DSA, such as political advertising, which would be complemented by sector-specific rules such as, for example, the proposal for a Regulation on the Transparency and Targeting of Political Advertising (see here).

 

As of late November 2021, the Council has adopted a general approach to the Commission’s proposal (see here) while the European Parliament is still discussing possible amendments and changes to that text (see here). Nevertheless, as with many other recent pieces of legislation (see here), its adoption is expected sooner rather than later in the upcoming months.

 

3. Unpacking Mis/Disinformation (part 1): illegal content as defined by Member States

 

We started by discussing misinformation and fake news. If we look at the DSA proposal, the term 'fake news' is missing from all its sections. However, the concept of misinformation appears as disinformation in Recitals 63, 68, 69, and 71. Nevertheless, both terms are nowhere to be found in the Articles of the DSA proposal.

 

In literature, the terms are used interchangeably or are distinguished, with disinformation defined as the intentional and purposive spread of misleading information, and misinformation as ‘unintentional behaviors that inadvertently mislead’ (see here). But that distinction does not help in recognizing either mis- or disinformation, from other categories of content.

 

Ó Fathaigh, Helberger, and Appelman (see here) have pointed out that disinformation, in particular, is a complex concept to tackle and that very few scholars have tried to unpack its meaning. Despite the various policy and scholarly efforts, a single unified definition of mis- or disinformation is still lacking, and the existing ones can be considered too vague and uncertain to be used as legal definitions. So, where shall we start looking at these issues? A starting point, we think, is the notion of content moderation, which the DSA proposal defines as follows:

 

'content moderation' means the activities undertaken by providers of intermediary services aimed at detecting, identifying, and addressing illegal content or information incompatible with their terms and conditions, provided by recipients of the service, including measures taken that affect the availability, visibility, and accessibility of that illegal content or that information, such as demotion, disabling of access to, or removal thereof, or the recipients' ability to provide that information, such as the termination or suspension of a recipient's account (we underline);

 

Under this definition, content moderation is an activity that is delegated to providers of intermediary services, particularly online platforms, and very large online platforms. Turning to the object of the moderation, we can ask what is exactly being moderated under the DSA? As mentioned above, moderated content is usually associated with certain adjectives, particularly illegal and harmful. The DSA proposal only defines illegal content:

 

‘illegal content’ means any information, which, in itself or by its reference to an activity, including the sale of products or provision of services is not in compliance with Union law or the law of a Member State, irrespective of the precise subject matter or nature of that law;

 

So far, this definition should not pose much of a challenge. If the law considers something illegal, it makes sense that it is addressed similarly in the online environment and in the physical realm. For example, a pair of fake sneakers constitutes a trademark infringement, regardless of whether the pair is being sold via eBay or by a street vendor in Madrid’s Puerta del Sol. In legal practice, however, regulating illegal content is not black and white. A distinction can be made between clearly illegal content and situations where further exploration must be conducted to determine the illegality of certain content. This is how it is framed in the German NetzDG, for example. In some of the DSA proposal’s articles, mainly Art. 20, we can see the distinction between manifestly illegal content and illegal content. However, this distinction is not picked up again in the rest of the DSA proposal.

 

What stands is that the DSA proposal does not expressly cover disinformation but concentrates on the notion of illegal content. If Member State law defines and prohibits mis- or disinformation - which Ó Fathaigh, Helberger and Appelman have reviewed and found to be inconsistent across the EU - then this would fall under the DSA category of illegal content. Rather than creating legal certainty, this further reinforces legal uncertainty and pegs the notion of illegal content to each Member State's provisions. But where does this leave disinformation that is not regulated in Member State laws? The DSA does not ignore it, but its regulation is quasi-hidden.

 

 

4. Unpacking Mis/Disinformation (part 2): harmful content not defined by the DSA

 

The foregoing brings us to the other main concept dealing with content in the DSA, viz. harmful content. To say that this is a (second) 'main' concept might confuse the reader, since the DSA does not define it or regulate it at great length. The DSA’s explanatory memorandum states that `[t]here is a general agreement among stakeholders that ‘harmful’ (yet not, or at least not necessarily, illegal) content should not be defined in the Digital Services Act and should not be subject to removal obligations, as this is a delicate area with severe implications for the protection of freedom of expression’.

 

As such, how can we define harmful content? This question is not new by any means: policy documents from the European Union dealing with this problem can be traced back to 1996 (see here). Since then, little has changed in the debate surrounding harmful content, as the core idea remains untouched: harmful content refers to something that, depending on the context, could affect somebody due to its being unethical or controversial (see here).

 

In this respect, the discussion on this kind of content does not tackle a legal problem but rather an ethical, political, or religious one. As such, it is valid to ask whether laws and regulations should even intervene in this scenario. In other words, does it make sense to talk about legal but harmful content when we discuss new regulations? Should our understanding of illegal and harmful content be construed in the most generous way, to accommodate as many situations as possible and so avoid this issue? And, more importantly, if the content seems to be legal, does it make sense to add the adjective ‘harmful’ rather than using, for example, ‘controversial’? Regardless of the terminology used, this situation leaves us with three types of content categories: (i) manifestly illegal content; (ii) illegal content, both harmful and not; (iii) legal but harmful content. Each of them demands a different approach, which shall be the topic of our following sections.

 

 

5. Illegal content moderation mechanisms in the DSA (content type 1 & 2)

 

The DSA puts forward a clear, but complex, regime for dealing with all kinds of illegal content. As a starting point, the DSA proposal provides for a general no-monitoring regime for all intermediary service providers (Art. 7), with particular conditions for mere conduits (Art. 3), caching (Art. 4), and hosting service providers (Art. 5). However, voluntary own-initiative investigations are allowed and do not compromise this liability exemption regime (Art. 6). In any case, once a judicial or administrative order mandates the removal of content, this order has to be followed to avoid incurring liability (Art. 8). In principle, public bodies (administrative agencies and judges) have control over what is illegal and when something should be taken down.

 

However, beyond this general regime, there are certain stakeholder-specific obligations spread across the DSA proposal, also dealing with illegal content, that challenge the foregoing state-controlled mechanism. In this respect, we can point to the mandatory notice and takedown procedure for hosting providers, with a fast lane for trusted flaggers’ notices (Arts. 14 and 19, respectively), in addition to the internal complaint-handling system for online platforms paired with out-of-court dispute settlement (Arts. 17 and 18, respectively); in the case of very large online platforms, these duties should be adopted following a risk assessment process (Art. 25). With this set of provisions, the DSA grants a considerable margin to certain entities to act as law enforcers and judges, without a government body having a say in whether something was illegal and whether its removal was the correct decision.

 

6. Legal but harmful content moderation mechanisms in the DSA (content type 3)

 

But what about our third type of content, legal but harmful content, and its moderation? Without dealing with the issue of content moderation directly, the DSA transfers the delimitation of this concept to providers of online intermediary services, mainly online platforms. In other words, a private company can limit apparently free speech within its boundaries. In this respect, the DSA proposal grants all providers of intermediary services the possibility of further limiting what content can be uploaded and how it shall be governed via the platform’s terms and conditions; by doing so, these digital services providers are granted substantial power to regulate digital behavior as they see fit:

 

‘Article 12 Terms and conditions

 

1. Providers of intermediary services shall include information on any restrictions that they impose concerning the use of their service in respect of information provided by the recipients of the service, in their terms and conditions. That information shall include information on any policies, procedures, measures, and tools used for content moderation, including algorithmic decision-making and human review. It shall be set out in clear and unambiguous language and shall be publicly available in an easily accessible format.

 

2. Providers of intermediary services shall act in a diligent, objective, and proportionate manner in applying and enforcing the restrictions referred to in paragraph 1, with due regard to the rights and legitimate interests of all parties involved, including the applicable fundamental rights of the recipients of the service as enshrined in the Charter.’

 

In this respect, the DSA consolidates a content moderation model heavily based around providers of intermediary services, and in particular very large online platforms, acting as lawmakers, law enforcers, and judges at the same time. They are lawmakers, as the terms and conditions lay down what is permitted as well as what is forbidden on the platform. They are law enforcers because, while there is no general obligation to patrol the platform, they must react to notices from users and trusted flaggers and enforce the terms if necessary. And, finally, they act as judges by attending to the replies from the user who uploaded illegal content and dealing with the parties involved in the dispute, notwithstanding the alternative means provided for in the DSA.

 

Rather than using the distinction between manifestly illegal content and ordinary illegal content and refraining from regulating other types of content, the DSA creates a governance model for moderating all content in the same manner. While administrative agencies and judges can request content to be taken down under Art. 8, the development of the further obligations mentioned above poses the following question: who bears the main responsibility for defining what is illegal and what is legal? Is it the existing institutions, subject to checks and balances, or rather private parties, particularly BigTech and very large online platforms?

 

 

7. The privatization of content moderation: the second (convenient?) invisible handshake between the States and platforms

 

As seen in many other areas of the law, policymakers and regulators have slowly but steadily transferred government-like responsibilities to the private sector and mandated compliance relying on a risk-based approach. For example, in the case of financial services, banks and other financial services providers have turned into the long arm of financial regulators to tackle money laundering and tax evasion, rather than relying on government resources to do this. This has resulted in financial services firms having to process vast amounts of personal data to determine whether a transaction is illegal (either because it is laundering criminal proceeds or avoiding taxes) with nothing but their own planning and some general guidelines; if they fail in this endeavor, administrative fines (and in some cases, criminal sanctions) can be expected. The result has been an ineffective system to tackle this problem (see here), yet regulators keep insisting on this approach.

 

A little shy of 20 years ago, Birnhack and Elkin-Koren denounced the existence of an invisible handshake between States and platforms for the protection and sake of national security after the 9/11 terror attacks (see here). At that time, this invisible handshake could be considered by some as necessary to deal with an international security crisis. Are we in the same situation today when it comes to dealing with disinformation and fake news? This is a valid question. EU policymakers seem to be impressed by voices such as Facebook’s whistleblower Frances Haugen, who wants to align 'technology and democracy' by enabling platforms to moderate posts. The underlying assumption seems to be that platforms are in the best position to moderate content following supposedly clear rules and that 'disinformation' can be identified (see here).

 

Content moderation presents a challenge for States given the amount of content generated non-stop across different intermediary services, in particular social media online platforms (see here). Facebook employs a sizable staff of almost 15,000 individuals as content moderators (see here), but also relies heavily on automated content moderation, authorized by the DSA proposal under Arts. 14 and 17 in particular, partly to mitigate the mental health problems of those human moderators, given the inhuman content they sometimes have to engage with. By comparison, using the latest available numbers from the Council of Europe about the composition of judiciary systems in Europe (see here), the Belgian judiciary employs approximately 9,200 individuals (the entire judiciary, dealing with everything from commercial law to criminal cases), a little more than half of Facebook’s content moderator workforce.

 

As such, one can argue that, if platforms didn't act as a first-stage filter for content moderation, courts could easily be overloaded with cases that demand a quick and agile solution for defining what is illegal or harmful content. Governments would need to invest heavily in administrative or judicial infrastructure and human resources to deal with such demand from online users. This matter has been discussed by scholars (see here). The options they see are either (i) strengthening platform content moderation by requiring the adoption of judiciary-like governance schemes, such as the social media councils Facebook has established; or (ii) implementing e-courts with adequate resources and procedures suited to the needs of the digital age to upscale our existing judiciary.

 

8. The consequences of the second invisible handshake

 

The DSA seems to have, willingly or not, decided on the first approach. Via this approach - the privatization of content moderation - States do not have to confront their lack of judicial infrastructure to handle the amount of content moderation that digital society requires. As shown by our example, Facebook has an infrastructure, in raw manpower alone, that substantially exceeds that of a country’s judiciary, such as Belgium’s. This second invisible handshake between BigTech and States can be situated in the incapacity of States to deal with disinformation effectively under the current legal framework and institutions.

 

If the DSA proposal is adopted ‘as is’, then platforms would have significant power over individuals. First, through the terms and conditions, they would be in a position to determine what is allowed to be said and what cannot be discussed, as provided for by Art. 12. Not only that, but any redress against decisions adopted by platforms would first have to be channeled through the internal complaint-handling mechanisms, as provided for by Arts. 17 and 18, for example, rather than through judicial remedy. As can be appreciated, the power scale has clearly shifted towards platforms, and by extension to governments, to the detriment of end-users.

 

Besides this, the transfer of government-like powers to platforms helps governments avoid making complicated and hard decisions that could cost political reputation. Returning to our opening example, the lack of a concrete decision from our governments regarding sensitive topics has left platforms in charge of choosing the best course of action to tackle a worldwide pandemic, by defining when something is misinformation that can affect public health and when something could help fight back against a situation that is out of control. Not only that, but if platforms wrongfully approach the issue, they are exposed to fines for non-compliance with their obligations, although very large online platforms in particular can absorb the fines proposed under the DSA.

 

If the second invisible handshake is going to take place, the least we, as a society, deserve is that the agreement be made transparent, so that public scrutiny can oversee such practices and free speech can be safeguarded. In this respect, the DSA could have addressed the issue of misinformation and fake news in a more democratic manner. Two proposals:

 

 

9. Addressing disinformation more democratically to align 'technology and democracy'

 

Firstly, the distinction between manifestly illegal content and merely illegal content could have been extremely helpful in distributing the workload between the private and public sectors in such a manner that administrative authorities and judges would only take care of cases where authoritative legal interpretation is necessary. As such, manifestly illegal content, such as apology of crime or intellectual property infringements, could be handled directly by platforms, and merely illegal content by courts or administrative agencies. In this respect, a clear modernization of legal procedures to deal with claims about merely illegal content would still be necessary to adjust the legal response time to the speed of our digital society. Content moderation is not alone in this respect but joins the ranks of other mass-scale issues, such as consumer protection, where effective legal protection is missing due to the lack of adequate infrastructure to channel complaints.

 

Secondly, as for legal but harmful content, while providers of online intermediary services have a right to conduct their business as they see fit, and therefore can select which content is allowed or not via terms and conditions, citizens have a valid right to engage directly in the discussion of those topics and determine how to proceed with them. This is even more important as users themselves are the ones interacting on these platforms, and that content is exploited by platforms to ensure that controversy remains on the table and engagement is sustained (see here).

 

However, there is a way of dealing with content moderation, particularly in the case of legal but harmful content, that avoids a second invisible handshake: community-based content moderation (see here), in which users take a more active role in the management of online content, has proven successful on certain online platforms. While categories such as clearly illegal or illegal and harmful content do not provide much margin for societal interpretation, legal but harmful content could be tackled through citizens' involvement. In this respect, community-based approaches, while resource-intensive, allow citizens to engage directly in the debate about the issue at hand.

 

While community-based content moderation also has its own risks, it could serve as a more democratic method than relying on platforms’ unilateral decisions, and it might reach where judges and administrative agencies cannot go due to the legality of the content. As noted by the Office of the United Nations High Commissioner for Human Rights, people, rather than technology, should be making the hard decisions; but States too, as the elected representatives of society, need to make decisions about what is illegal and what is legal (see here).

 

Our alternatives are only part of a more complete program. Further work is needed at policy level to address fake news. Sad as it may be, the matter is not yet mature and ripe for regulation. While the phenomenon of political actors actively spreading misleading information (the twittering lies told by political leaders) is well-known and discussed, the role of traditional news media, who are supposed to be the bearers of truth and factual accuracy, is less well understood. Traditional news media are in fact part of the problem, and play a somewhat paradoxical role with respect to fake news and its dissemination. People learn about fake news not via obscure accounts that Facebook and others can control, but through regular media that find it important, for many reasons, to report on disinformation. Tsfati and others (see here) rightly call for more analysis and collaboration between academics and journalists to develop better practices in this area.

 

We are also surprised by the lack of attention in the DSA proposal to the algorithmic and technological dimension that seems central to the issue of fake news. More work is needed on the consequences of the algorithmic production of online content. More work, too, is needed to assess the performance of technological answers to technology. How can a space of contestation be organized in a digitally mediated and enforced world? Are the redress mechanisms in the DSA sufficient when the post has already been deleted, i.e. 'delete first, rectify after'?

 

Art credit: Frederick Burr Opper, via wikimedia commons

Tuesday, 10 August 2021

Copyright and the Internet: Poland v Parliament and Council (Case C-401/19), Opinion of the Advocate General, 15 July 2021


 


Lorna Woods, Professor of Internet Law, University of Essex

 

Introduction

 

The development of ‘web 2.0’, especially social media, has meant that many people are able to post content to potentially large audiences.  The amount of content, however, and the need to manage conflicting rights between different users, has led to debate about the role of the platforms in helping remedy the problems that the platforms facilitate (and that come along with the benefits the platforms enable).  One particular issue is the acceptability of the use of filtering technologies, especially from the perspective of the freedom of expression of the user of the work.  The issue has come before the courts before, and the courts – in the context of copyright claims – expressed concerns about those techniques.  Given the quantity of material uploaded, however, it is hard to envisage that human ex post review of content would be possible, let alone effective.  The problem of copyright enforcement remains – as does the ‘value gap’ created by mass unauthorised use of protected works. Platforms have had little incentive to prevent the problem from arising – indeed, it could be said that platforms benefitted (through advertising revenue) from the existence of this content. The ex post system – whereby copyright holders notify and the platform removes content to maintain its immunity under Article 14 e-Commerce Directive – has not been seen as effective by rights-holders.

 

This problem led to the overhaul of the copyright regime and the enactment of the Directive on Copyright in the Digital Single Market (Directive 2019/790), a proposal that was subject to extensive lobbying during the legislative process.  The result is a directive which aims to reduce the ‘value gap’ and to rebalance matters more in favour of the creators of content, with the introduction of a new press publisher’s right (Article 15) and, notably, Article 17, which covers the use of protected content by online content-sharing service providers.  Article 17, however, was contentious, leading to this challenge by Poland and to the recent Advocate General’s opinion. While the opinion is important in understanding the scope of Article 17 itself, we might also ask whether its reasoning might have broader implications.

 

Provisions in Issue

 

Article 17 changes (or clarifies) the position under copyright law: the platforms caught by the definitions in the directive will automatically be considered to be carrying out 'acts of communication to the public or making available to the public' when they give the public access to copyright-protected content uploaded by users, and they therefore require authorisation from the relevant rights-holders. Article 17(3) displaces Article 14 e-Commerce Directive, which provides conditional immunity from penalties for neutral hosts. It provides that, if there are no relevant licensing arrangements in place, the platforms will only be able to maintain immunity if they satisfy the terms of Article 17(4). Article 17(4) introduces four cumulative conditions (arranged across three subparagraphs) – that the platform has:

 

(a) made best efforts to obtain an authorisation, and

 

(b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the right-holders have provided the service providers with the relevant and necessary information; and in any event

 

(c) acted expeditiously, upon receiving a sufficiently substantiated notice from the right-holders, to disable access to, or to remove from their websites, the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b).

 

While the first part of Article 17(4)(c) is similar to the conditions in Article 14 e-Commerce Directive, the other three elements are new.  Without dealing with any questions around the definitions of the platforms falling within this obligation, a number of questions arise: does Article 17(4) effectively require upload filters (and will they lead to over-blocking); what are ‘best efforts’, especially in relation to the monitoring which is implied; and does Article 17(4) effectively require ‘general monitoring’ (despite the clarification in Article 17(8) that it should not lead to a general monitoring obligation)?

 

Article 17(7) might be seen as an effort at counter-balance: it provides

 

The cooperation between online content-sharing service providers and right-holders shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation.

 

Significantly, the directive expressly lists the exceptions for quotation, criticism, review and for caricature, parody or pastiche.  There are also obligations (in Article 17(9)) relating to redress and complaints mechanisms, which some sections of industry have claimed are onerous.  Some of the vagueness around the requirements might be dealt with by Commission guidance aimed at aiding coherent implementation of the directive; while this guidance is now available, it was not at the time the case was lodged.

 

The Legal Challenge

 

The Issue

 

Poland brought an action seeking annulment of the provision (either just Article 17(4)(b) and (c), or Article 17 in its entirety) on the basis of its incompatibility with freedom of expression as guaranteed by the Charter (Article 11 EUCFR), either by destroying the essence of the right or by constituting a disproportionate interference with it.

 

The Nature of the Obligation

 

A preliminary issue concerned the nature of the obligation imposed by Article 17(4) and whether it requires the use of upload filters for preventive monitoring purposes. While this is not explicitly required, the Advocate General took the view that, in many circumstances, the use of those tools is required [para 62]. Further, industry standards will have an impact on the decision as to what best practice is [paras 65-6]. So, while the recitals provided considerations by which to assess what suitable methods would look like (see recital 66), this did not affect the assessment that, in reality, upload filters of some description would be used.

 

The Impact on Freedom of Expression

 

Applicability of the Right

 

One precondition for the applicability of fundamental rights is that the actions under challenge can be imputed to the State; here, the actions of the platforms are at issue (and their right to run a business under Article 16 EUCFR). The Advocate General drew a distinction between circumstances where a platform has a real choice and the circumstances here. The provision might formally give operators a choice: comply and get exemption from liability, or choose not to comply and face exposure to liability. The Advocate General emphasised that the assessment as to compliance with Article 11 should take account of what happens in practice; the reality is that ‘the conditions for exemption laid down in the contested provisions will, in practice, constitute genuine obligations for those providers’ [para 86, emphasis in original].

 

Limitations – Lawfulness

 

The conditions for limiting Article 11 EUCFR are found in Article 52(1) EUCFR. The requirement there that the restriction be ‘provided for by law’ was to be understood in the light of the jurisprudence on lawfulness for the purpose of Article 10(2) ECHR (citing some CJEU decisions on data protection and the right to a private life in support). Lawfulness requires not just a basis in law, but that the law be accessible and foreseeable. The first aspect is clearly satisfied here. As regards the second, the Advocate General noted that the case law allows the legislature ‘without undermining the requirement of “foreseeability” [to] choose to endow the texts it adopts with a certain flexibility rather than absolute certainty’ [para 95, citing the Grand Chamber judgment in Delfi v Estonia, discussed here]. Nonetheless, the case-law on lawfulness also requires safeguards against arbitrary or abusive interference with rights. The Advocate General linked this issue to proportionality.

 

Limitations – the Essence of the Right

 

The requirement to respect the essence of the right provides a limit on the discretion of the legislature to weigh up competing interests and come to a fair balance. It is ‘an “untouchable core” which must remain free from any interference’ [para 99]. According to the Advocate General, an ‘obligation preventively to monitor, in general, the content of users of their services in search of any kind of illegal, or even simply undesirable information’ constitutes such an interference [para 104]. Article 15 e-Commerce Directive is ‘a general principle of law governing the Internet’ [para 106, emphasis in original and referring to Scarlet Extended and SABAM], and binds the EU legislature. Importantly, this principle does not prohibit all forms of monitoring; the jurisprudence of the CJEU has already distinguished monitoring which occurs in specific cases, and a similar position can be seen in the case-law of the ECtHR (Delfi). Tracing the development of the CJEU’s reasoning over time from the early cases of L’Oreal, Scarlet Extended and SABAM, through McFadden to Glawischnig-Piesczek (discussed here), the Advocate General opined that Article 17 imposes a specific monitoring obligation [para 110]; it focuses on specific items of content, and the fact that a platform would have to search all content to find them does not make it a general obligation.

 

Limitations – Proportionality

 

After reviewing the first two aspects of proportionality (appropriateness and necessity), the Advocate General moved to discuss the heart of the matter: proportionality stricto sensu and the balance achieved between the conflicting rights.  The Advocate General accepted that it was permissible for the EU legislature to change the balance it had adopted in Article 14 e-Commerce Directive for that in the new Copyright in the Digital Single Market Directive, taking into account the different context and the broad discretion the institutions have in this complex area. The Advocate General identified the following factors: the extent of the economic harm caused by the scale of uploading; the ineffectiveness of the notice and take down system; the difficulties in prosecuting those responsible; and the fact that the obligations concern specific service providers [para 137].

 

The next issue was whether platforms would take ‘the easy way out’ and over-block just to be on the safe side in terms of their own exposure to liability. The Advocate General excluded this possibility through his interpretation of the ‘best efforts’ obligation. The obligation to take users’ rights into account ex ante, and not just ex post, supports the proportionality of the measure; the redress rights and the out-of-court redress mechanism are supplementary safeguards. Service providers may not use just any filtering technology; they must consider the collateral effect of blocking when implementing measures. Systems which block based on content alone, without taking into account legitimate uses, would fall foul of the position in Scarlet Extended and SABAM.

 

This was followed by a consideration of Glawischnig-Piesczek. In the light of the CJEU’s emphasis in that case on the platform not having to make an independent decision as to the acceptability of content to take down (and to stay down), the Advocate General suggested that platforms cannot be expected to make independent assessments of the legality of content. He concluded:

 

to minimise the risk of ‘over-blocking’ and, therefore, ensure compliance with the right to freedom of expression, an intermediary provider may, in my view, only be required to filter and block information which has first been established by a court as being illegal or, otherwise, information the unlawfulness of which is obvious from the outset, that is to say, it is manifest, without, inter alia, the need for contextualisation [para 198].

 

Referring back to his own opinion in YouTube and Cyando, ‘an intermediary provider cannot be required to undertake general filtering of the information it stores in order to seek any infringement’ [para 200, emphasis in original].

 

The Advocate General concluded that Article 17 contains sufficient safeguards. Article 17(7), which states that measures taken ‘shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights’, means that wide blocking is not permitted and that, in ambiguous cases, priority should be given to freedom of expression: ‘“false positives”’ of blocking legal content ‘were more serious than “false negatives”, which would mean letting some illegal content through’ [para 207]. Rights-holders can still request that infringing content be taken down [para 218]. Having said that, a nil error rate for false positives is not required, though the error rate should be as low as possible, with techniques that result in a significant false-positive rate being precluded. Article 17(10), which provides for stakeholder cooperation, is in the view of the Advocate General the place to determine the practical implementation of these requirements [para 213].

 

Comment

The Opinion constitutes the Advocate General’s attempt to steer a course through the radically different interpretations of Article 17, a fact which perhaps reflects the provision’s contentious nature.  The outcome of the case will be significant beyond the enforcement of copyright, as similar mechanisms might be required under other legislation: TERREG (Regulation 2021/784 on addressing the dissemination of terrorist content online), for example, envisages hosting service providers putting in place ‘specific measures’ (recitals 22-23, Article 5(2)) that include the possibility of ‘technical means’ to address the dissemination of terrorist content online.  The headline news is, of course, that the Advocate General did not find Article 17 to be contrary to Article 11 EUCFR, though what that means for the obligations under Article 17 is potentially complex. Before discussing that issue, a number of other points can be noted.

 

The first point is the complex context for assessing fundamental rights. As the Advocate General noted, the platforms are private actors; it is not as simple as a user saying ‘because of freedom of expression I can upload what I like on this platform’.  There are two points.  The first is whether the platforms’ choices can be attributed to the Member States. This is relevant because the rights are not addressed to private actors (Article 51 EUCFR; this is also true under the ECHR).  It is an issue on which there has not – in the context of the EUCFR – been much case law to date.  The responsibility of the State, however, subsequently forms a significant element of the Advocate General’s approach to Article 17 and its safeguards: attributing the interference to the State means that the framework for analysis is that of the State’s negative obligations, rather than introducing questions of positive obligations.

 

At this early stage in his Opinion, however, the Advocate General was content to flag up the relevance of the rights. He referred to the jurisprudence of the ECtHR to support his position:

 

-          Appleby, which concerned the access of peaceful protesters to a privately-owned shopping centre. There, the ECtHR held that they had no right under Article 10; they could make their views known in other venues.  This is a case about positive obligations.

 

-          Tierfabriken, concerning the refusal of the Commercial Television Company to allow the broadcast of an animal rights advert because it breached the company’s terms of business and the terms of national law. In the view of the regulatory authorities, the company was free to purchase its ads wherever it chose. The Court held that, irrespective of the formal status of the actors, the State was implicated because the company had relied on the prohibition of political advertising contained in the regulatory regime when making its decision. Domestic law “therefore made lawful the treatment of which the applicant association complained” [para 47]. 

 

Neither case seems to make precisely the argument that the Advocate General made – that the platforms had no choice. Nonetheless, the point seems fair. The implications of this point should be considered: does this mean that whenever platforms make a decision based on elements of their terms of service that reflect national law, freedom of expression is implicated?  Beyond this point, it seems clear that platforms may set their own terms of service to reflect their business choices and that (subject to concerns about individuals losing all possibility of communicating) there would be no freedom of expression-based complaint related to the enforcement of those terms. Further, it seems that were the State to try to interfere with the platforms’ choices in this regard, that interference would need to take account of Article 16 EUCFR or even Article 11.

 

The Advocate General considered the lawfulness requirement in Article 52(1), something that the Court does not always do (often assuming it is satisfied). As well as the formal requirement of a basis in law, the lawfulness test has qualitative requirements. In carrying out his analysis, the Advocate General treated questions about safeguards against abuse, which form part of the lawfulness test, as questions of proportionality. In this, he followed the approach of the ECtHR under Article 8 ECHR (right to a private life) in the surveillance cases.  That approach has been criticised in that context as blurring two different questions aimed at two separate concerns, and in so doing lowering the threshold of protection.  It has not so far been adopted in relation to freedom of expression even by the Strasbourg court, and so is novel here.  The issue of safeguards in this Opinion is central, as we shall see below.

 

Another novelty is the discussion of the ‘essence of the right’, which has not received much attention to date. The Advocate General helpfully started with a clear statement of what the requirement is – an untouchable core – where the usual balancing of rights cannot take place.  Given the complex array of potentially conflicting rights in play in this context, that principle could be important. Once again, the Advocate General drew on the surveillance case law, perhaps because it is the only place where there is much discussion of the point. In the context of surveillance, the Court has held that general monitoring of content would damage the essence of the right, but that the general retention of metadata did not (though it might still be hard to justify). On one level the prohibition on general monitoring covers the same ground as the prohibition on mass content interception under Article 7 EUCFR (and Article 8 ECHR) – though Article 7 operates in the context of private communications rather than content that may well be made public, effectively broadcast. What is arguably a similar boundary was drawn here: general monitoring would undermine the core of the right but specific monitoring, as a form of prior restraint, would not.  In this, the Advocate General pointed out that although prior restraints are very intrusive of freedom of expression and tightly controlled under the Strasbourg jurisprudence, they are not automatically impermissible.  Significantly, the Advocate General claimed that the prohibition on general monitoring is a general principle of Internet law – though it is far from clear what weight the status of ‘general principle’ carries in this specific context. Is a general principle of Internet law different from a general principle within EU law more generally?
Of course, this discussion is based on the assumption that filtering for specific content is somehow different from looking at everything, and it also leaves open the question of how broad the category of content searched for can be before it ceases to be ‘specific’.

 

What then of the Advocate General’s approach to Article 17?  On his analysis of the freedom of expression framework, the scope of the obligation is important in determining its acceptability. Clearly, the discussion of general versus specific monitoring is one aspect of this, but the safeguards required to legitimate an interference with freedom of expression also, in the Advocate General’s view, protect against over-blocking. The inventive interpretation of the platforms’ ‘best efforts’ is central to this approach. Essentially, this interpretation narrows the scope of when and what is permissible; automated techniques can be used only when they are functionally able to do the job.  On one view this is good: it prevents platforms from over-relying on possibly not very good technologies to the detriment of their users (and potentially exhibiting bias in the process too). It is a way of balancing the reality of scale with concerns about over-blocking and could be seen as a clever way of reconciling conflicting demands.

 

Does this interpretation, however, suggest that the balance of Article 17 is still heavily shifted towards ex post moderation and take down systems, because the conditions that the Advocate General has set on the use of technology effectively mean that there is no technology that can be used (and little incentive to develop it), or that technology can be used only in a very limited way?  Where we are balancing copyright and business rights against freedom of expression, this shift towards a less effective content control system might not seem so bad (even if it flies in the face of the stated concerns driving the legislation), but would the same analysis be deployed in relation to child sexual exploitation and abuse material? The difficulty here is that the Advocate General’s framework for analysis is content-blind. While it is based on the text of Article 17(7) and could therefore be understood as relevant just to this directive, his interpretation of that provision is given impetus by his introduction of the requirement for safeguards derived from freedom of expression. This would then have a wider application. The Advocate General here explicitly prioritises freedom of expression over another Charter right (Article 17 EUCFR), and there does not seem to be an obvious place within the safeguards framing where issues around the importance of the speech, or the importance of other rights, can easily be taken into account.

 

One final point to note is, of course, that this is an Opinion and not binding. The Advocate General referred to his reasoning in YouTube and Cyando; the Court decided that case without reference to it. It remains to be seen how much the reasoning will influence the Court here – or in relation to discussions around other legislation envisaging proactive technical measures.

 

Photo credit: via Wikimedia Commons