
Friday, 17 May 2019

Facebook, defamation and free speech: a pending CJEU case

Preliminary Notes on the Pending Case Glawischnig-Piesczek v. Facebook Ireland Limited

Dr Paolo Cavaliere, University of Edinburgh Law School, paolo.cavaliere@ed.ac.uk

Introduction

In the coming months the Court of Justice of the European Union is expected to deliver a decision with the potential to become a landmark in the fields of political speech and intermediary liability (the Advocate General’s opinion is due on 4 June). The Court will have to rule on two intertwined yet distinct questions: first, the case opens a new front in the delineation of platforms’ responsibility for removing illegal content, focusing on whether such obligations should extend to content identical, or even merely similar, to posts already declared unlawful. Secondly, the decision will also determine whether such obligations may be imposed beyond the territorial jurisdiction of the seised court. What is ultimately at stake is how much responsibility platforms should be given in making proactive assessments of the illegality of third-party content, and how much power courts should have to impose their own standards of acceptable speech across national boundaries.

Summary of the case

The plaintiff is a former Austrian MP and spokeswoman of the Green Party who, before retiring from politics, was reported as criticising the Austrian Government’s stance on the refugee crisis in an article published by the news magazine oe24.at in April 2016. A Facebook user shared the article on her own private profile along with a comment that included some derogatory language. In July, the plaintiff contacted Facebook and requested that the post be removed, only for the platform to decline the request, as it found the post in breach of neither its own terms of use nor domestic law. The plaintiff then filed an action for interim injunctive relief seeking removal of the original post and of any other post on the platform with ‘analogous’ content. After the Commercial Court of Vienna found the post unlawful, Facebook removed it.

However, the Commercial Court considered that Facebook, by failing to remove the original post upon the plaintiff’s first notice, was not covered by the exemption from secondary liability, and ordered the platform to remove any further post that included the plaintiff’s picture alongside identical or ‘analogous’ comments. The Higher Regional Court of Vienna then found that the latter obligation would amount to general monitoring on Facebook’s part and removed that part of the injunction, while upholding the finding that the original post was manifestly unlawful and should have been removed by the platform following the plaintiff’s first notification. The Higher Court also confirmed that Facebook should remove any future posts that included the same derogatory text alongside any image of the plaintiff. Facebook appealed this decision to the Austrian Supreme Court.

The Supreme Court referred two main sets of questions to the CJEU:

- First, whether an obligation for host providers to remove posts that are ‘identically worded’ to content already found illegal would be compatible with Article 15(1) of the E-commerce Directive. If the answer is positive, the Court asks whether this obligation could extend beyond identical content to include content that is analogous in substance despite different wording. These are ultimately questions about how much responsibility platforms can be given in making their own assessment of what content amounts to unlawful speech, and about the limits of ‘active monitoring’.

- Second, whether national courts can order platforms to remove content only within their national boundaries, or also beyond them (‘worldwide’). This is a question concerning the admissibility of extraterritorial injunctions for content removal.

Analogous content and active monitoring

To start with, it needs to be clarified that the dispute is effectively about political speech, and only formally about defamation. The post on Facebook was considered by the Austrian court to be in breach of Art 1330 of the Austrian Civil Code, which protects individual reputation. However, the status of the plaintiff, who served as the spokeswoman of a national political party at the time, gives a different connotation to the issue. Established case-law of the European Court of Human Rights (Lingens v. Austria, 1986; Oberschlick v. Austria (no. 2), 1997) has repeatedly found that the definition of defamation in relation to politicians must be narrower than usual and the limits of acceptable criticism wider, especially when public statements open to criticism are involved. In this case, the plaintiff had made public statements concerning her party’s immigration policy: the circumstance is relevant since the ECtHR traditionally identifies political speech with matters of public interest and requires interferences with it to be kept to a minimum. By established European standards the impugned content here amounts to political commentary, and the outcome of the case could inevitably set a new standard for the treatment of political speech online.

Intermediaries enjoy a series of immunities under the E-commerce Directive, which also notably prohibits state authorities from imposing general monitoring obligations. In addition, the 2011 Report of the Special Rapporteur to the General Assembly on the right to freedom of opinion and expression exercised through the Internet clarified that blocking and filtering measures are justified in principle only when they specifically target categories of speech prohibited under international law, and that the determination of the content to be blocked must be undertaken by a competent judicial authority. A judicial order identifying the exact words at issue (‘lousy traitor’, ‘corrupt oaf’, ‘fascist party’) may provide adequately precise guidance for platforms to operate, depending on how precisely the contours of the order are drawn.

To put the question in context, the requirement to remove ‘identical’ content marks the latest development in a growing trend of pushing platforms to take active decisions in content filtering. It cannot be ignored that the issue of unlawful content re-appearing, in identical or substantially equivalent forms, is becoming increasingly worrisome. In a workshop held in 2017, delegates from the EU Commission heard from industry stakeholders that the problem of repeat infringers has become endemic to the point that, for those platforms that implement notice-and-takedown mechanisms, 95% of notices reported the same content to the same sites, at least in the context of intellectual property infringements. If rates of re-posting of content infringing other personality rights, such as reputation, can be assumed to be anecdotally similar, then any attempt to clear platforms of unlawful content recalls the proverbial endeavour of emptying the ocean with a spoon.

Nonetheless, the risk of overstepping the limits of desirable action is always looming. A paradigmatic example comes from early drafts of Germany’s Network Enforcement Law, which included a requirement for platforms to prevent re-uploads of content already found unlawful, a provision that closely resembles the one at stake here. The requirement was expunged from the final version of the statute amid fears of over-blocking and concerns that automated filters would not be able, at the current state of technology, to correctly understand the nuances and context of content that is similar or equivalent at face value, such as irony or critical reference.

The decision of the German law-makers to eventually drop the requirement (evidently considered a step too far even in the context of a statute widely criticised for failing to strike a suitable balance between platform responsibilities and freedom of expression) is indicative of the high stakes in the decision the CJEU now faces. A positive answer from the CJEU would mean a resurgence of this aborted provision on a Europe-wide scale.

The idea of platforms monitoring re-uploaded content has been gaining traction in digital industries for a while now and is trickling down into content regulation. In the field of SEO, the concept of “duplicate content” describes content that has been copied or reused from other Web pages, sometimes for legitimate purposes (e.g. providing a mobile-friendly copy of a webpage), sometimes resulting in flagrant plagiarism. Yet definitions diverge when it comes to the criteria considered: while duplicate content is most commonly defined as ‘identical or virtually identical to content found elsewhere on the web’, Google stretches the boundaries to encompass content that is ‘appreciably similar’. Content regulation simply cannot afford the same degree of flexibility in defining ‘identically worded’ content, as the criterion of judicial determination required by the Special Rapporteur and the prohibition of general monitoring obligations in the E-commerce Directive exclude it.
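The practical difference between those two criteria can be made concrete. What follows is a minimal illustrative sketch, not a description of any platform’s actual tooling: checking for identical wording is a mechanical hash lookup, while catching ‘appreciably similar’ content requires a similarity measure and a threshold that someone has to choose.

```python
import hashlib

def normalise(text: str) -> str:
    """Trivial normalisation: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def is_identical(post: str, banned_hashes: set) -> bool:
    """'Identically worded': hash the normalised text and test
    membership in a set of hashes of posts already found unlawful."""
    digest = hashlib.sha256(normalise(post).encode()).hexdigest()
    return digest in banned_hashes

def similarity(a: str, b: str, n: int = 3) -> float:
    """'Appreciably similar': Jaccard overlap of word n-gram
    ('shingle') sets; 1.0 means identical wording, lower values
    indicate partial overlap or paraphrase."""
    def shingles(text: str) -> set:
        words = normalise(text).split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Any cut-off (say, similarity > 0.7) is a policy choice, not a fact:
# it will miss creative paraphrases and sweep in lawful posts that
# merely quote or discuss the offending words.
```

The first check is deterministic and auditable; the second turns on an arbitrary threshold, and either error direction, missing paraphrases or over-blocking quotation and commentary, maps directly onto the concerns discussed above.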

In the area of copyright protection, it is in principle possible for service providers like YouTube to automatically scan content uploaded by private users and compare it to a database of protected works provided by rights-holders. In the case of speech infringing personality rights and other content-based limitations, discourse analysis is necessary to understand the context, and this kind of task would evidently amount to a private intermediary making a new determination on the legality of the speech.
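The contrast with the copyright workflow just described can be made equally concrete. The following sketch is illustrative only (it does not reproduce how systems such as YouTube’s Content ID actually work): matching uploads against a database of reference fingerprints is pure set membership and requires no judgment about meaning.

```python
import hashlib

# Fingerprints of reference works supplied by rights-holders.
# Real systems use robust perceptual fingerprints that survive
# re-encoding and cropping; a plain content hash stands in here.
reference_fingerprints: set = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_work(data: bytes) -> None:
    """A rights-holder deposits a protected work."""
    reference_fingerprints.add(fingerprint(data))

def should_flag(upload: bytes) -> bool:
    """The platform's decision rule: match or no match.
    No context, no discourse analysis, no legal assessment."""
    return fingerprint(upload) in reference_fingerprints
```

An equivalent pipeline for speech infringing personality rights would have to answer a different kind of question at the final step, namely what the post means in its context, and that is precisely the point at which the intermediary begins making legal determinations of its own.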

The assessment of what amounts to unlawful speech rarely depends on the wording alone; context plays a fundamental role in the assessment, and that is anything but a straightforward exercise. The European Court of Human Rights’ case-law includes several examples of complex evaluations of local circumstances to determine whether or not an interference with speech would be justified.

For instance, in Le Pen v. France (2010), the Court considered that comments that could seem derogatory towards a minority at face value nevertheless needed to be considered in the context of an ongoing general debate within the country, and stressed that the domestic courts should be responsible for assessing the breadth and terms of the national debate and for deciding how to take it into account when determining the necessity of the interference. In Ibragimov v. Russia (2018), the Court noted that the notion of an attack on religious convictions can change significantly from place to place, as no single standard exists at the European level, and that, as with political debates, domestic authorities are again best placed to ascertain the boundaries of criticism of religions ‘[b]y reason of their direct and continuous contact with the vital forces of their countries’. The historical context is consistently taken into account to determine whether a pressing social need exists for a restriction, and is enough to justify different decisions in respect of speech that otherwise appears strikingly similar.

For instance, outlawing Holocaust denial can be a legitimate interference in countries whose historical legacies justify proactive measures taken in an effort to atone for their responsibility in mass atrocities (see Witzsch v. Germany (no. 1), 1999; Schimanek v. Austria, 2000; Garaudy v. France, 2003); whereas a similar statute prohibiting the denial of the Armenian genocide was found to be an excessive measure in a country like Switzerland, which has no strong links with the events in the Ottoman Empire in 1915 (Perinçek v. Switzerland, 2015).

The intricacies of analysing the use of language against a specific historical and societal context are perhaps best illustrated by the Court’s minute analysis in Dink v. Turkey (2010). The Court was confronted with expressions that could very closely resemble hate speech: language such as ‘the purified blood that will replace the blood poisoned by the “Turk” can be found in the noble vein linking Armenians to Armenia’, and references to the Armenian origins of Atatürk’s adoptive daughter, were included in articles written by the late Turkish journalist of Armenian origin Fırat Dink. The Court eventually came to the conclusion that it was not Turkish blood that Dink referred to as poison, but rather certain attitudes of the Armenian diaspora’s campaign which he intended to criticise. The Court built extensively on the assessment made by the Principal State Counsel at the Turkish Court of Cassation, who analysed all of Dink’s articles published between 2003 and 2004, in order to ascertain whether these expressions amounted to denigrating Turkishness, and in what ways references to blood and to the origins of Atatürk’s daughter touched on subjects sensitive in Turkish ultranationalist circles and liable to ignite animosity.

Not only does social and political context matter; often it is precisely the use of language in a culturally specific way that forms a fundamental part of the Court’s assessment, with the conclusion that words alone have little importance and it is instead their use in specific contexts that determines whether or not they cross the boundaries of lawful speech. In Leroy v. France (2008), the Court went to great lengths in evaluating the use of the first person plural “We” and a parodic quotation of an advertising slogan to establish that a cartoon mocking the 9/11 attacks amounted to hate speech.

Beyond the Court’s experience, examples abound of words that, though otherwise innocuous, can become slurs if used in a certain context: for instance, the term ‘shiptari’, used in South Slavic-speaking countries to refer to Albanians, acquires a particularly nasty connotation especially in Serbia, as it was often used by Slobodan Milošević to show contempt for the Albanian minority in Yugoslavia. In Greece, the term lathrometanastes (literally ‘illegal immigrants’) has been appropriated and weaponised by alt-right rhetoric to purposefully misrepresent the legal status of asylum seekers and refugees in an attempt to deny them access to protection and other entitlements; it now arguably lies outside the scope of legitimate political debate,[1] to the point that it has been included in specialised research on indicators of intolerant discourse in European countries.

This handful of examples shows how language needs to be understood in the context of historical events and social dynamics, and how words can often convey a meaning beyond their apparent sense. While for domestic and supranational courts this seems challenging enough already, the suggestion that it would suffice for platforms to mechanically check for synonyms and turns of phrase is simplistic at best.

Extraterritorial injunctions

This plain observation calls into question whether it would be appropriate for the CJEU to answer the question on extraterritorial injunctions in the positive. The Austrian court’s order is in fact addressed to an entity based outside the court’s territorial jurisdiction, and the order sought is to operate beyond the Austrian territory. To clarify, the novelty here resides not in the seising of the Austrian courts, but rather in the expansive effect of their decision; the question is whether it would be appropriate for the effects of the injunction sought to extend beyond the limits of the national jurisdiction and effectively remove content from Facebook at the global level.

The Court of Justice has already interpreted jurisdiction in a similarly expansive way on a few occasions. In L’Oréal v. eBay (2011) the Court decided to apply EU trade-mark law on the basis that trade-mark-protected goods were offered for sale from locations outside the EU but targeted at consumers within the EU. In Google Spain (2014), the Court decided to apply EU data protection law to the processing of a European Union citizen’s personal data carried out ‘in the context of the activities’ of an EU establishment, even though the company carrying out the processing was based in a third country. The Court considered that delimiting the geographical scope of de-listings would prove unsatisfactory for the protection of the rights of data subjects. A similar reasoning was the basis for deciding in Schrems (2015) that EU data protection law should apply to the transfer of personal data to the US.

One common element emerges from the case-law of the Court of Justice so far: the extraterritorial reach of court orders appears as a necessary measure to ensure the effectiveness of EU rules and the protection of citizens’ or businesses’ rights. The Court has been prepared to grant extraterritorial reach when fundamental rights of European Union citizens were at stake (for instance in the context of the processing of personal data) or when, in the case of territorially protected rights such as trade marks, conduct happening abroad directly challenged the protected right within the domestic jurisdiction. It is doubtful that the present case resembles either of these circumstances; in fact, limiting political speech requires a different analysis.

A politician is certainly entitled to protect their own reputation; however, when the criticism encompasses aspects of an ongoing public debate, the limits of acceptable speech broaden considerably: whether the speech falls within, and contributes to, an ongoing social conversation is very much a factual and localised consideration. Conversations that are irrelevant or even offensive within one national public sphere could very well be of the utmost relevance to communities based elsewhere, especially minorities or diasporas, who could find themselves deprived of their fundamental right to access information.

The CJEU has traditionally paid attention to the connecting factors justifying extraterritorial orders. Following its own jurisprudence, it will now be faced with the challenge of identifying a connecting element capable of justifying a worldwide effect for the Austrian court’s local assessment. It should be recalled that a fundamental tenet of L’Oréal is the principle that the mere accessibility of a website is not a sufficient reason to conclude that a jurisdiction is being targeted, and that it is for national courts to make that assessment. With the exception of the ECtHR, which to date applies one of the most expansive jurisdictional approaches (Perrin, 2005), international policy-makers (such as, among others, the UN Special Rapporteur on Freedom of Opinion and Expression, the OSCE Representative on Freedom of the Media, the OAS Special Rapporteur on Freedom of Expression and the ACHPR Special Rapporteur on Freedom of Expression and Access to Information in their Joint Declaration on Freedom of Expression and the Internet of 2011) and most courts favour a different approach, inspired by judicial self-restraint, and put emphasis on a ‘real and substantial connection’ to justify jurisdiction over Internet content.

When personality rights are at stake, the recent CJEU decision in Bolagsupplysningen of 2017 (incidentally, derogating from the more established CJEU decisions in Shevill, 1995, and eDate, 2011) suggested that, when incorrect online information infringes personality rights, applications for rectification and/or removal are to be considered single and indivisible, and a court with jurisdiction can rule on the entirety of an application.

However, this precedent (already controversial in its own right) seems unfit for application to this case. Bolagsupplysningen builds on the same rationale as the other decisions of the CJEU mentioned above: expansive jurisdiction can be justified by the necessity to guarantee the protection of citizens’ fundamental rights and not to see them frustrated by a scattered territorial application. In the case of political speech, where limitations need to be justified by an overriding public interest, typically public safety, the connecting element the Court looks for becomes immediately less apparent, as it cannot be assumed that the same speech would prove equally inflammatory in different places under different social and political circumstances. In other words, public order better lends itself to territorially sensitive protection.

This taps into the assessment of necessity and proportionality that decision-makers need to make before removing content. As a matter of principle, the geographical scope of a limitation is part of the least-restrictive-means test: the ECtHR, for instance, in Christians against Racism and Fascism (1980) considered that even when security considerations outweigh the disadvantage of suppressing speech and thus justify the issuing of a ban, the ban would still need a ‘narrow circumscription of its scope in terms of territorial application’ to limit its negative side effects. Similarly, the 2010 OSCE/ODIHR – Venice Commission Guidelines on Freedom of Peaceful Assembly (as quoted by the ECtHR in Lashmankin v. Russia of 2017) consider that ‘[b]lanket restrictions such as a ban on assemblies in specified locations are in principle problematic’ since they are not in line with the principle of proportionality, which requires the least intrusive means; restrictions on the locations of public assemblies thus need to be weighed against the specific circumstances of each case. Translated into the context of digital communications, the principle requires that the territorial scope of content removal orders be narrowly circumscribed and strictly proportionate to the interest protected. An injunction to remove commentary on national politics worldwide, as a result, seems unlikely to be considered the least intrusive means.

Conclusions

The decision of the CJEU has the potential to change the landscape of intermediary responsibility and the boundaries of lawful speech as we know them. By being asked to remove content that is identical or analogous, intermediaries will be making active determinations on the legality of third-party content in all cases other than the removal of mere copies of posts that have already been found illegal. While the re-upload of illegal content is an issue of growing importance that needs to be addressed, consideration needs to be paid to whether this would be an appropriate measure, as solutions borrowed from other fields such as copyright protection can sit at odds with the specificities of content regulation and infringe on European and international standards for the protection of freedom of expression online.

Similarly, by granting an extraterritorial injunction in this case the Court would follow a stream that has been emerging in the last few years in privacy and data protection. Thanks to its extraterritorial reach, the GDPR is rapidly becoming a global regulatory benchmark for the processing of personal data, which arguably benefits European Union citizens and protects their fundamental rights. The same would not be true if this rationale were applied to standards of legitimate political speech. It is questionable whether the EU (or any other jurisdiction) has any interest in setting a global regulatory benchmark for content regulation. By restricting the accessibility of content beyond the national boundaries where the original dispute took place, such an injunction would restrict other citizens’ right to receive information without granting any substantive benefit, such as the protection of public security, to the citizens of the first state.

Barnard & Peers: chapter 9
Photo credit: Slate magazine


[1] L. Karamanidou (2016) ‘Violence against migrants in Greece: beyond the Golden Dawn’, Ethnic and Racial Studies, 39:11, 2002-2021; D. Skleparis (2016) ‘(In)securitization and illiberal practices on the fringe of the EU’, European Security, 25:1, 92-111.

Comments:

1. Our society is at a tipping point, and we need to remember the importance of free expression, as it is the only thing that keeps us away from tyrannical rule. Like at any other time in history, new mediums for communication can be scary, but limiting free expression has never been the answer. Sometimes we need to realize that our problems stem from our people and our societies. You regulate social media, and the same speech will pop up somewhere else. People simply need to change; whether that is possible is not clear.