
Wednesday, 20 July 2016

Human Rights and National Data Retention Law: the Opinion in Tele 2 and Watson




Lorna Woods, Professor of Internet Law, University of Essex

Yesterday’s Advocate General’s opinion concerns two references from national courts, both arising in the aftermath of the invalidation of the Data Retention Directive (Directive 2006/24) in Digital Rights Ireland, and both dealing with whether the retention of communications data en masse complies with EU law.  The question is important for the regimes that triggered the references, but in the background is a larger question: can mass retention of data ever be human rights compliant? While the Advocate General clearly states that it can, things may not be that straightforward.

Background

Under the Privacy and Electronic Communications Directive (Directive 2002/58), EU law guarantees the confidentiality of communications transmitted via a public electronic communications network.  Article 1(3) of that Directive limits its field of application to activities falling within the TFEU, thereby excluding matters covered by Titles V and VI of the TEU at that time (e.g. public security, defence, State security).  Even within the scope of the directive, Article 15 permits Member States to restrict the rights granted by the directive

‘when such restriction constitutes a necessary, appropriate and proportionate measure within a democratic society to safeguard national security, defence, public security, and the prevention, investigation, detection and prosecution of criminal offences or of unauthorised use of the electronic system…’.

Specifically, Member States were permitted to legislate for the retention of communications data (i.e. details of communications, but not their content) for the population generally. The subsequent Data Retention Directive specified common maximum periods of retention and safeguards, and was implemented (in certain instances with some difficulty) by the Member States.

Following the invalidation of the Data Retention Directive, the status of Member State data retention laws was uncertain. This led both Tele2 and Watson (along with a Conservative MP, David Davis, who withdrew his name when he became a cabinet minister) to challenge their respective national data retention regimes, essentially arguing that such regimes were incompatible with the standards set down in Digital Rights Ireland. The Tele2 case concerned the Swedish legislation which implemented the Data Retention Directive. The Watson case concerned UK legislation enacted after that invalidation: the Data Retention and Investigatory Powers Act (DRIPA). Given this similarity, the cases were joined.

The Swedish reference asked whether traffic data retention laws that apply generally are compatible with EU law, and asked further questions regarding the specifics of the Swedish regime. Watson et al asked two questions: whether the reasoning in Digital Rights Ireland laid down requirements that were applicable to a national regime; and whether Articles 7 and 8 of the EU Charter of Fundamental Rights (EUCFR) established stricter requirements than Article 8 of the European Convention on Human Rights (ECHR) – the right to private life. Although the latter case concerns the UK, the Court’s judgment will still be relevant if the UK leaves the EU, because the CJEU case law provides that non-Member States’ data protection law must be very similar to EU data protection law in order to facilitate data flows (see Steve Peers’ discussion here).

Opinion of the Advocate General

The Advocate General dealt first with the question about the scope of protection under the EUCFR.  He ruled this question inadmissible because it was not relevant to resolving the dispute.  In so doing, he confirmed that the obligation in Article 52 EUCFR to read the rights granted by the EUCFR in line with the interpretation of the ECHR provided a baseline and not a ceiling of protection.  The EU could give a higher level of protection; indeed Article 52(3) EUCFR expressly allows for the possibility of ‘… Union law providing more extensive protection’. 

Moreover, Article 8 EUCFR, in providing a specific right to data protection, is a right that has no direct equivalent in the ECHR; the Advocate General therefore argued that the rule of consistent interpretation in Article 52(3) EUCFR does not apply to Article 8 EUCFR (Opinion, para 79). Later in the Opinion, the Advocate General also dismissed the suggestion that Digital Rights Ireland did not apply because the regime in issue in Watson et al was a national regime and not one established by the EU legislature. Articles 7, 8 and 52 EUCFR were interpreted in Digital Rights Ireland and are again at issue here: Digital Rights Ireland is therefore relevant despite the different jurisdiction of the court (paras 190-191).

The Advocate General then went on to consider whether EU law permits Member States to establish general data retention regimes.  The first question was whether Article 1(3) meant general data retention regimes were excluded from the scope of Directive 2002/58 because the sole use of the data was for the purposes of national security and other grounds mentioned in Art 1(3).  The Advocate General made three points in response:

Given that Article 15(1) specifically envisaged data retention regimes, national laws establishing such a regime were in fact implementing Article 15(1) (para 90).

While the argument the governments put forward related to access to the data by public authorities, the national schemes concerned the acquisition and retention of that data by private bodies; that the former might lie outside the directive did not imply that the latter also did (paras 92-94).

The approach of the Court in Ireland v Parliament and Council (Case C-301/06), which was a challenge to the Data Retention Directive as regards the Treaty provision on which it was enacted, meant that general data retention obligations ‘do not fall within the sphere of criminal law’ (para 95).

The next question was whether Article 15 of the Directive applied. The express wording of Article 15, which refers to data retention, makes clear that data retention is not per se incompatible with Directive 2002/58. The intention was rather to make any such measures subject to certain safeguards. This means that data retention can be legal provided the scheme complies with the safeguards (para 108). Indeed, following his earlier reasoning, the Advocate General rejected the argument that Article 15 is a derogation and should therefore be read restrictively.

This brings us to the question of whether sufficient safeguards are in place. Since the Advocate General took the view that in providing for general data retention regimes the Member States are implementing Article 15, such measures fall within the scope of EU law and therefore, according to Article 51 EUCFR, the Charter applies, even if rules relating to access to the data by the authorities lie outside the scope of EU law (paras 122-23).  Nonetheless, given the close link between access and retention, constraints on access are of significance in assessing the proportionality of the data retention regime.

Assessing compliance with the EUCFR requires, as a first step, an interference with the rights protected. The Advocate General referred to Digital Rights Ireland to accept that ‘[g]eneral data retention obligations are in fact a serious interference’ with the rights to privacy (Art 7 EUCFR) and to data protection (Art 8 EUCFR) (para 128). Justification of any such interference must satisfy both the requirements set down in Article 15(1) Directive 2002/58 AND Article 52(1) EUCFR, which sets out the circumstances in which a Member State may derogate from a right guaranteed by the EUCFR (para 131). The Advocate General then identified six factors arising from these two obligations (para 132). The measure must:

Have a legal basis for retention;
Observe the essence of the rights in the EUCFR (a requirement arising from Article 52 EUCFR alone, rather than from Art 15 of the directive);
Pursue an objective of general interest;
Be appropriate for achieving that objective;
Be necessary to achieve that objective; and
Be proportionate, within a democratic society, to the pursuit of the objective.

As regards the requirement for a legal basis, the Advocate General argued that the ‘quality’ considerations found in the ECHR jurisprudence should be expressly applied within EU law too. The legal basis must be accessible, foreseeable and provide adequate protection against arbitrary interference, as well as being binding on the relevant authorities (para 150). These factual assessments fall to the national court. 

In the Opinion of the Advocate General, the ‘essence of the rights’ requirement – as understood in the light of Digital Rights Ireland – was unproblematic. The data retention regime gave no access to the content of communications, and the data was required to be held securely. A general interest objective can also easily be shown: the fight against serious crime and the protection of national security. The Advocate General, however, rejected the argument that the fight against non-serious crime or the smooth running of proceedings outside the criminal context could constitute a public interest objective. As to appropriateness, data retention gives national authorities ‘an additional means of investigation to prevent or shed light on serious crime’ (para 177) and is specifically useful in that general measures give the authorities the power to examine communications of persons of interest carried out before those persons were so identified. General retention obligations are thus appropriate.

A measure must be necessary, which means that ‘no other measure exists that would be equally appropriate and less restrictive’ (Opinion, para 185). Further, according to Digital Rights Ireland, derogations and limitations on the right of privacy apply only insofar as strictly necessary.  The first question was whether a general data retention regime can ever be necessary. The Advocate General argued that Digital Rights Ireland only ruled on a system where insufficient safeguards were in place; there is no actual statement that a general data retention scheme is not necessary. While the lack of differentiation was problematic in Digital Rights Ireland, the Court ‘did not, however, hold that that absence of differentiation meant that such obligations, in themselves, went beyond what was strictly necessary’ (Opinion, para 199).  The fact that the Court in Digital Rights Ireland examined the safeguards suggests that the Court did not view general data retention regimes as per se unlawful (see also Schrems (Case C-362/14), para 93, cited in para 203 of the Opinion). On this basis the Advocate General opined:

a general data retention obligation need not invariably be regarded as, in itself, going beyond the bounds of what is strictly necessary for the purposes of fighting serious crime. However, such an obligation will invariably go beyond the bounds of what is strictly necessary if it is not accompanied by safeguards concerning access to the data, the retention period and the protection and security of the data. (para 205)

The comparison of the effectiveness of this sort of measure with other measures must be carried out within the relevant national regime, bearing in mind that generalised data retention offers the ability to ‘examine the past’ (para 208). The test to be applied, however, is not one of utility but that no other measure or combination of measures can be as effective.

The question then is one of safeguards, and in particular whether the safeguards identified in paras 60-68 of Digital Rights Ireland are mandatory for all regimes. These rules concern:

Access to and use of retained data by the relevant authorities;
The period of data retention; and
The security and protection of the data while retained.

Contrary to the arguments put forward by various governments, the Advocate General argued that ‘all the safeguards described by the Court in paragraphs 60 to 68 of Digital Rights Ireland must be regarded as mandatory’ (para 221, italics in original). Firstly, the Court made no mention of the possibility of compensating for a weakness in respect of one safeguard by strengthening another. Further, such an approach would no longer give guarantees to individuals to protect them from unauthorised access to and abuse of that data: each of the aspects identified needs to be protected. Strict access controls and short retention periods are of little value if the security pertaining to retained data is weak and that data is exposed. The Advocate General noted that the European Court of Human Rights in Szabo v Hungary emphasised the importance of these safeguards, citing Digital Rights Ireland.

While the Advocate General emphasised that it is for the national courts to make that assessment, the following points were noted:

In respect of the purposes for which data is accessed, the national regimes are not sufficiently restricted (only the fight against serious crime, not crime in general, qualifies as an objective of general interest) (para 231).

There is no prior independent review (as required by para 62 of Digital Rights Ireland), which is needed because of the severity of the interference and the need to deal with sensitive cases (such as those involving the legal profession) on a case-by-case basis. The Advocate General did accept that in some cases emergency procedures may be acceptable (para 237).

The retention period must be determined by reference to objective criteria and limited to what is strictly necessary. In Zakharov, the European Court of Human Rights accepted six months as being reasonable but required that data be deleted as soon as it was no longer needed. This obligation to delete should be found in national regimes and should apply to the security services as well as the service providers (para 243).

The final question relates to proportionality, an aspect which was not considered in Digital Rights Ireland.  The test is:

‘a measure which interferes with fundamental rights may be regarded as proportionate only if the disadvantages caused are not disproportionate to the aims pursued’ (para 247).

This opens a debate about the importance of the values protected. In terms of the advantages of the system, these had been rehearsed in the discussion about necessity. As regards the disadvantages, the Advocate General referred to the Opinion in Digital Rights Ireland, paras 72-74 and noted that

‘in an individual context, a general data retention obligation will facilitate equally serious interference as targeted surveillance measures, including those which intercept the content of communications’ (para 254)

and it has the capacity to affect a large number of people. Given the number of requests for access received, the risk of abuse is not theoretical.  While it falls to the national courts to balance the advantages and disadvantages, the Advocate General emphasised that even if a regime includes all the safeguards in Digital Rights Ireland, which should be seen as the minimum, that regime could still be found to be disproportionate (para 262).

Comment

It is interesting that the Court of Appeal’s reference did not ask the Court whether DRIPA was compliant with fundamental rights in the EUCFR. Rather, the questions sought to close off that possibility – firstly by limiting the scope of the EUCFR to a particular conception of Article 8 ECHR, and secondly by seeking to treat Digital Rights Ireland, as a challenge to the validity of a directive, as not relevant at the national level.  Although the Advocate General did not answer the first question, the reasons given for dismissing it make clear that the Court of Appeal’s approach was wrong. Indeed, it is hard to see how Art 52(3), when read in its entirety, could support the argument that the EUCFR should be ‘read down’ to the level of the ECHR. The entire text of Article 52(3) follows:

In so far as this Charter contains rights which correspond to rights guaranteed by the Convention for the Protection of Human Rights and Fundamental Freedoms, the meaning and scope of those rights shall be the same as those laid down by the said Convention. This provision shall not prevent Union law providing more extensive protection.

The focus of the second question was likewise misguided. As the Advocate General pointed out, Digital Rights Ireland was based on the interpretation of the meaning of two provisions of the EUCFR, Articles 7 and 8.  They should have the same meaning wherever they are applied.

Quite clearly, the Advocate General aims to avoid saying that mass surveillance – here in the form of general data retention rules – is per se incompatible with human rights. Indeed, one of the headline statements in the Opinion is that ‘a general data retention obligation imposed by a Member State may be compatible with the fundamental rights enshrined in EU law’ (para 7). The question then becomes one of reviewing safeguards rather than saying there are some activities a Member State cannot carry out.  This debate is common in this area, as the case law of the European Court of Human Rights illustrates (see Szabo, particularly the dissenting opinion).

Fine distinctions abound. For example, the Advocate General relies on the distinction between metadata and content to reaffirm that the essence of Articles 7 and 8 has not been undermined.  Yet while the Advocate General tries hard to hold that general data retention may be possible, tensions creep in.  The point made in relation to the ‘essence of the right’ was based on the assumption that metadata collection is less intrusive than intercepting content.  In assessing the impact of a general data retention regime, however, the Advocate General implies the opposite (paras 254-5). Indeed, he quotes Advocate General Cruz Villalón in Digital Rights Ireland to the effect that such surveillance techniques allow the creation of:

‘a both faithful and exhaustive map of a large portion of a person’s conduct strictly forming part of his private life, or even a complete and accurate picture of his private identity’.

The Advocate General here concludes that:

‘the risks associated with access to communications data (or ‘metadata’) may be as great or even greater than those arising from access to the content of communications’ (para 259).

Another example relates to the scope of EU law. The Advocate General separates access to the collected data (which is about policing and security) from the acquisition and storage of data (which concerns the activities of private entities). The data retention regime concerns this latter group, whose activities fall within the scope of EU law. In this the Advocate General is following the Court in the Irish judicial review action challenging the legal basis of the Data Retention Directive (the outcome of which was that it was correctly based on Article 114 TFEU).  Having separated these two aspects when considering the scope of EU law, the Advocate General then glues them back together to assess the acceptability of the safeguards.

In terms of safeguards, the Advocate General resoundingly reaffirms the requirements in Digital Rights Ireland.  All of the safeguards mentioned are mandatory minima, and weakness in one area of safeguards cannot be offset by strength in another area. If the Court takes a similar line, this may have repercussions for the relevant national regimes, for example as regards the need for prior independent review (save in emergencies). Indeed, in this regard the Advocate General might be seen to be going further than either European Court has.  Further, the Advocate General restricts the purposes for which general data retention may be permitted to serious crime only (contrast here, for example, the approach to Internet connection records in the Investigatory Powers Bill currently before the UK Parliament). 


Another novelty is the discussion of lawfulness. As the Advocate General noted, there has not been much express discussion of this issue by the Court of Justice, though the requirement of lawfulness is well developed in the Strasbourg case law. While this might therefore seem neither particularly new nor noteworthy, the Advocate General pointed out that the law must be binding and that therefore:

‘[i]t would not be sufficient, for example, if the safeguards surrounding access to data were provided for in codes of practice or internal guidelines having no binding effect’ (para 150)

Typically, much of the detail of surveillance practice in the UK has been found in codes; as the security forces’ various practices became public, many of these were formalised as codes under the relevant legislation (see e.g. s. 71 Regulation of Investigatory Powers Act; codes available here). Historically, however, not all were publicly available, binding documents.

While the headlines may focus on the fact that general data retention may be acceptable, and the final assessment of compliance with the six requirements falls to the national courts, it seems that this is more a theoretical possibility than an easy reality. The Advocate General goes beyond endorsing the principles in Digital Rights Ireland: even regimes which satisfy the safeguards set out in Digital Rights Ireland may still be found to be disproportionate. While Member States may not have wanted to have a checklist of safeguards imposed on them, here even following that checklist may not suffice. Of course, this opinion is not binding; while it is designed to inform the Court, the Court may come to a different conclusion. The date of the judgment has not yet been scheduled.

Photo credit: choice.com.au
Barnard & Peers: chapter 9

JHA4: chapter II:7

Thursday, 18 June 2015

Delfi v Estonia: Curtailing online freedom of expression?


Lorna Woods, Professor of Media Law, University of Essex
When can freedom of expression online be curtailed? The recent judgment of the Grand Chamber of the European Court of Human Rights in Delfi v. Estonia has addressed this issue, in the particular context of comments made upon a news article. This ruling raises interesting questions of both human rights and EU law, and I will examine both in turn.
The Facts
Delfi is one of the largest news portals in Estonia. Readers may comment on news stories, although Delfi has a policy to limit unlawful content and operates a filter as well as a notice-and-take-down system. Delfi ran a story concerning ice bridges, accepted as well-balanced, which generated an above-average number of responses. Some of these contained offensive material, including threats directed against an individual known as L. Some weeks later L requested that some 20 comments be deleted and damages be paid. Delfi removed the offending comments the same day, but refused to pay damages. The matter then went to court and eventually L was awarded damages, though of a substantially smaller amount than L originally claimed. Delfi’s claim to be a neutral intermediary, and therefore immune from liability under the EU’s e-Commerce Directive regime, was rejected. The news organisation brought the matter to the European Court of Human Rights and lost the case in a unanimous chamber decision. It then brought the matter before the Grand Chamber.
The Grand Chamber Decision
The Grand Chamber, in essence, affirmed the outcome and the reasoning of the chamber judgment in the same case, albeit not unanimously. It commenced by recapping the principles of Article 10 of the European Convention on Human Rights from its previous case law. These are familiar statements of law, but it seems that from the beginning of its reasoning the Grand Chamber had concerns about the nature of content available on the internet. It commented:
while the Court acknowledges that important benefits can be derived from the Internet in the exercise of freedom of expression, it is also mindful that liability for defamatory or other types of unlawful speech must, in principle, be retained and constitute an effective remedy for violations of personality rights. [110]

The Grand Chamber then referred to certain Council of Europe Recommendations, suggesting:
a “differentiated and graduated approach [that] requires that each actor whose services are identified as media or as an intermediary or auxiliary activity benefit from both the appropriate form (differentiated) and the appropriate level (graduated) of protection and that responsibility also be delimited in conformity with Article 10 of the European Convention on Human Rights and other relevant standards developed by the Council of Europe” (see § 7 of the Appendix to Recommendation CM/Rec(2011)7, ..). Therefore, the Court considers that because of the particular nature of the Internet, the “duties and responsibilities” that are to be conferred on an Internet news portal for the purposes of Article 10 may differ to some degree from those of a traditional publisher, as regards third-party content. [113]

The Grand Chamber applied the principles of freedom of expression to the facts using the familiar framework. First there must be an interference with the right under Article 10(1) of the Convention; then any restriction should be assessed for acceptability according to a three-stage test. The test requires that the restriction be lawful, achieve a legitimate aim and be necessary in a democratic society. The existence of a restriction on freedom of expression was not disputed, nor was it disputed that the Estonian rules pursued a legitimate aim. Two areas of dispute remained: lawfulness and necessity in a democratic society.
Lawfulness
Lawfulness means that the rule is accessible to the person concerned and foreseeable as to its effects. Delfi argued that it could not have anticipated that the Estonian Law of Obligations could apply to it, as it had assumed that it would benefit from intermediary liability derived from the e-Commerce Directive. The national authorities had not accepted this classification, so essentially Delfi argued that this was a misapplication of national law. The Grand Chamber re-iterated (as had the chamber) that it is not its task to take the place of the domestic courts but instead to assess whether the methods adopted and the effects they entail are in conformity with the Convention. On the facts, and although some other signatory states took a more “differentiated and graduated approach” as suggested by the Council of Europe recommendation, the Grand Chamber was satisfied that it was foreseeable that the normal rules for publishers would apply. Significantly, the Grand Chamber commented, in an approach similar to that of the First Chamber that:
as a professional publisher, the applicant company should have been familiar with the legislation and case-law, and could also have sought legal advice. [129]

Necessary in a Democratic Society
The Grand Chamber started its analysis by re-iterating established jurisprudence to the effect that, given the importance of freedom of expression in society, necessity must be well proven through the existence of a ‘pressing social need’. It must determine whether the action was ‘proportionate to the legitimate aim pursued’ and whether the reasons adduced by the national authorities to justify it are ‘relevant and sufficient’. The Grand Chamber also emphasised the role of the media, but also recognised that different standards may be applied to different media. Again it re-iterated its view that the Internet could be harmful, as well as beneficial ([133]). The Grand Chamber then travelled familiar terrain, stating the need to balance Articles 8 and 10 and approving the factors that the First Chamber took into account: the context of the comments, the measures applied by the applicant company in order to prevent or remove defamatory comments, the liability of the actual authors of the comments as an alternative to the applicant company’s liability, and the consequences of the domestic proceedings for the applicant company ([142-3]).
Here, the Grand Chamber emphasised the content of the comments: they could be seen as hate speech and were on their face unlawful [153]. Given the range of opportunities available to anyone to speak on the internet, obliging a large news portal to take effective measures to limit the dissemination of hate speech and speech inciting violence was not ‘private censorship’ ([157]). The idea that a news portal is under an obligation to be aware of its content is a key element in the assessment of proportionality. Against this background (rather than one which accepts a notice-and-take-down regime as enough), Delfi’s response had not been prompt. Further, ‘the ability of a potential victim of hate speech to continuously monitor the Internet is more limited than the ability of a large commercial Internet news portal to prevent or rapidly remove such comments’ [158]. In the end, the sum that Delfi was ordered to pay was not large, and the consequence of the action against the news portal was not that Delfi had to change its business model. In sum, the interference could be justified.
There were two concurring judgments, and one dissent. Worryingly, one of the concurring judges (Zupančič), having criticised the possibility of allowing anonymous comments, argued:
To enable technically the publication of extremely aggressive forms of defamation, all this due to crass commercial interest, and then to shrug one’s shoulders, maintaining that an Internet provider is not responsible for these attacks on the personality rights of others, is totally unacceptable.
According to the old tradition of the protection of personality rights, …, the amount of approximately EUR 300 awarded in compensation in the present case is clearly inadequate as far as damages for the injury to the aggrieved persons are concerned.

Human Rights Issues: Initial Reaction
This is a long judgment which will no doubt provoke much analysis. Immediate concerns relate to the Court’s view of the Internet as a vehicle for dangerous and defamatory material, which seems to colour its approach to the Article 10(2) analysis and, specifically, to the balancing of Articles 10 and 8. In recognising that the various forms of media operate in different contexts and with different impact, the Grand Chamber has not recognised the importance of the role of intermediaries of all types (and not just technical intermediaries) in providing a platform for and curating information. While accepting that the internet may give rise to different ‘duties and responsibilities’, it seems that the standard of care required is high.
Indeed, the view of the portal as having control over user-generated content seems to overlook the difficulties of information management. The concurring opinions go to great lengths to say that a view which requires the portal only to take down manifestly illegal content of its own initiative is different from a system that requires pre-publication review of user-generated content. This may be so, but both effectively require monitoring (or an uncanny ability to predict when hate speech will be posted). Indeed, the dissenting judges say that there is little difference here between this requirement and blanket prior restraint (para 35). Both approaches implicitly reject notice-and-take-down systems, which are used – possibly as a result of the e-Commerce Directive framework – by many sites in Europe. This focus on the content has led to reasoning which almost reverses the approach to freedom of expression: speech must be justified to evade liability. In this it seems to give little regard either to the Court’s own case law about political speech, or to its repeated emphasis on the importance of the media in society.

EU law elements: consistency with the e-commerce Directive?

The Delfi judgment raises some practical questions for news sites hosting third party content, especially reader comments.  An underlying concern is how this judgment fits with the EU policy approach towards the Internet and intermediaries in particular.  The eCommerce Directive provides, inter alia, for the limitation of liability for intermediaries in Articles 12-15.  These provisions were considered important not just for the free flow of services through the EU, but also for supporting the development of the Internet and the services offered on it.  The eCommerce Directive envisages three categories of intermediary: those which are mere conduits, those which offer caching and those which host content.  The essential quality of these intermediaries is that they are facilitators providing technical services rather than contributors to the provision of specific content.  It is the scope of this last category that is uncertain, especially given the development of a range of services which challenge the understanding of the Internet as it stood at the time of the enactment of the directive.  Following the first chamber decision, there was some concern that the judgment respected neither the underlying policy choice about intermediaries nor the significance of the role of intermediaries for the functioning of the Internet, especially from the perspective of end-users.  The question is how out of line, if at all, the judgment is with the Directive.

The first thing to note, before we look at the substance, is that the Strasbourg Court was not itself deciding whether Delfi was a neutral or passive intermediary.  The Court was rather reviewing the impact of the Estonian court’s reasoning.  In sum, it is far from clear that the Strasbourg Court was unreasonable – bearing in mind the current jurisprudence of the European Court of Justice – in accepting the Estonian court’s conclusion (even if we might be critical of some points of its reasoning).

The intermediary liability provisions provide a graduated scale of protection, with the greatest protection going to services that are the most technical.  For hosting services, protection is dependent on a lack of knowledge of the offending content.  There have been questions about the interpretation of some of the phrases in Article 14 of the Directive, such as ‘awareness’, ‘actual knowledge’ and the obligation to act ‘expeditiously’. The Directive envisages notice and take down regimes as a way to deal with offending content. Articles 14 and 15 do not affect Member States’ freedom to require hosting service providers to apply those duties of care that can reasonably be expected from them, and which are specified by national law, in order to detect and prevent certain types of illegal activity (recital 48). Article 15 prevents Member States from imposing on internet intermediaries, with respect to activities covered by Articles 12 to 14, a general obligation to monitor the information they transmit or store, or a general obligation actively to seek out facts and circumstances indicating illegal activity.  Article 15 does not, however, prevent public authorities in the Member States from imposing a monitoring obligation in a specific, clearly defined individual case (recital 47). It is implicit in the foregoing that Article 15 only applies to intermediaries which can claim the benefit of one of Articles 12, 13 or 14.

A number of cases have been brought before the European Court of Justice to clarify the scope of Article 14 and the extent of the protection in Article 15. For example, SABAM v Netlog (Case C-360/10) concerned a social networking site which received a request from SABAM, the Belgian copyright collecting society, to implement a general filtering system to prevent the unlawful use of musical and audio-visual works by the users of its site.  In addition to confirming the Article 15 prohibition on general monitoring, the ECJ noted that a filter might not be able to distinguish between lawful and illegal content, thus affecting users’ freedom of expression (access to information).  In this the ECJ seems to be reflecting the position the ECtHR took in Yildirim regarding ‘collateral censorship’.  There is a limitation on carrying the ideas in Netlog across to Delfi: the rules in Article 15 apply to neutral intermediaries, and it is unclear whether the ECJ would find a news site to be neutral in this sense, whether because of the agenda-setting function which ‘invites’ particular responses, or because of the adoption of filtering and moderation systems.

In the Google Adwords case (Joined Cases C-236/08, C-237/08 and C-238/08, judgment of 23 March 2010), the ECJ held that the test for whether a service provider could benefit from Article 14 ECD was whether it was ‘neutral, in the sense that its conduct is merely technical, automatic and passive, pointing to a lack of knowledge or control of the data which it stores’ (para 114).  One could argue that, insofar as a site invites comment on a particular topic, it is not neutral, though one might question how overt that invitation must be. In L’Oreal (Case C-324/09, judgment of 12 July 2011), the Court held that the Article 14 exemption should not apply where the host plays an ‘active role’ in the presentation and promotion of offers for sale posted by its users so as to give it knowledge of, or control over, the related data.  Further, if a host has knowledge of facts that would alert a ‘diligent economic operator’ to illegal activity, it must remove the offending data to benefit from the Article 14 exemption.  We might question what the role of moderation and filters is in this context, specifically in terms of giving an intermediary control over content.  As regards the Delfi case itself, there are arguably parallels between the ECJ and ECtHR approaches, in that both courts seem to think that those acting in the course of their business are better placed to assess where and when problems might arise.  A point of difference relates to their views of commercial activities. The ECJ argued in Google Adwords:
It must be pointed out that the mere facts that the referencing service is subject to payment, that Google sets the payment terms or that it provides general information to its clients cannot have the effect of depriving Google of the exemptions from liability provided for in Directive 2000/31. [116]
The reference to ‘general information’ also suggests that contributors’ policies would not be determinative either.
Applying the tests found in L’Oreal v. eBay and Google Adwords in Papasavvas v O Fileleftheros, a case concerning on-line defamation in relation to a news story posted by a newspaper on its site (which I discussed earlier here), the ECJ ruled:
Consequently, since a newspaper publishing company which posts an online version of a newspaper on its website has, in principle, knowledge about the information which it posts and exercises control over that information, it cannot be considered to be an ‘intermediary service provider’ within the meaning of Articles 12 to 14 of Directive 2000/31, whether or not access to that website is free of charge. [45]

There are some similarities to the Strasbourg court’s reasoning, in that both courts point to the idea of control over information.  There are differences, however: the control over the defamatory material in Papasavvas was much more direct than in Delfi, and the predictive abilities of newspapers about their audience’s response to stories were not in issue.  Nonetheless, it is far from clear that the ECJ would reject the agenda-setting argument the Strasbourg court used, especially given its reasoning in L’Oreal regarding the ‘promotion’ of particular content and the requirements of a diligent economic operator.

The Strasbourg court’s reasoning put Delfi in the position of effectively having to monitor user content.  Had Delfi been found to be an intermediary in the sense of Articles 12-14, this would have been contrary to Article 15 of the eCommerce Directive, as implemented in domestic law.  Given that Delfi was found not to be such an intermediary, Article 15 does not come into play. It also seems quite possible that a court applying EU law would reach the same finding.  There is then no automatic conflict between this ruling and the position under EU law.  Whether this outcome is desirable from an Internet policy perspective is another matter.  This case and its consequences may then feed into the review of intermediaries that the EU Commission is planning as part of its Digital Single Market strategy.

*Part of this post was previously published on the LSE Media Policy Project blog
Barnard & Peers: chapter 9


Tuesday, 15 July 2014

Open letter on the UK's Data Retention and Investigatory Powers Bill



To all Members of Parliament,
Re: An open letter from UK internet law academic experts

On Thursday 10 July the Coalition Government (with support from the Opposition) published draft emergency legislation, the Data Retention and Investigatory Powers Bill (“DRIP”). The Bill was posited as doing no more than extending the data retention powers already in force under the EU Data Retention Directive, which was recently ruled incompatible with European human rights law by the Grand Chamber of the Court of Justice of the European Union (CJEU) in the joined cases brought by Digital Rights Ireland (C-293/12) and Seitlinger and Others (C-594/12) handed down on 8 April 2014.
In introducing the Bill to Parliament, the Home Secretary framed the legislation as a response to the CJEU’s decision on data retention, and as essential to preserve current levels of access to communications data by law enforcement and security services. The government has maintained that the Bill does not contain new powers.

On our analysis, this position is false. In fact, the Bill proposes to extend investigatory powers considerably, increasing the British government’s capabilities to access both communications data and content. The Bill will increase surveillance powers by authorising the government to:
- compel any person or company – including internet services and telecommunications companies – outside the United Kingdom to execute an interception warrant (Clause 4(2));
- compel persons or companies outside the United Kingdom to execute an interception warrant relating to conduct outside of the UK (Clause 4(2));
- compel any person or company outside the UK to do anything, including complying with technical requirements, to ensure that the person or company is able, on a continuing basis, to assist the UK with interception at any time (Clause 4(6));
- order any person or company outside the United Kingdom to obtain, retain and disclose communications data (Clause 4(8)); and
- order any person or company outside the United Kingdom to obtain, retain and disclose communications data relating to conduct outside the UK (Clause 4(8)).

The legislation goes far beyond simply authorising data retention in the UK. In fact, DRIP attempts to extend the territorial reach of the British interception powers, expanding the UK’s ability to mandate the interception of communications content across the globe. It introduces powers that are not only completely novel in the United Kingdom, they are some of the first of their kind globally.

Moreover, since mass data retention by the UK entails a derogation from the EU's e-privacy Directive (Article 15, Directive 2002/58) and therefore falls within the scope of EU law, the proposed Bill arguably breaches that law: such mass surveillance would still fall foul of the criteria set out by the Court of Justice of the EU in the Digital Rights and Seitlinger judgment.

Further, the Bill incorporates a number of changes to interception, whilst the purported urgency relates only to the striking down of the Data Retention Directive. Even if there were a real emergency relating to data retention, there is no apparent reason for that haste to extend to the area of interception.

DRIP is far more than an administrative necessity; it is a serious expansion of the British surveillance state. We urge the British Government not to fast-track this legislation and instead to apply full and proper parliamentary scrutiny, to ensure Parliamentarians are not misled as to what powers this Bill truly contains.

Signed,



Dr Subhajit Basu, University of Leeds
Dr Paul Bernal, University of East Anglia
Professor Ian Brown, Oxford University
Ray Corrigan, The Open University
Professor Lilian Edwards, University of Strathclyde
Dr Theodore Konstadinides, University of Surrey
Professor Chris Marsden, University of Sussex
Dr Karen Mc Cullagh, University of East Anglia
Dr Daithí Mac Síthigh, Newcastle University
Professor David Mead, University of East Anglia
Professor Andrew Murray, London School of Economics
Professor Steve Peers, University of Essex
Julia Powles, University of Cambridge
Professor Burkhard Schafer, University of Edinburgh

Professor Lorna Woods, University of Essex