Showing posts with label Edward Snowden. Show all posts

Sunday, 16 September 2018

Analysis of the ECtHR judgment in Big Brother Watch: part 1







Lorna Woods, Professor of Internet Law, University of Essex


This chamber judgment is the latest in a line of cases that deal with secret surveillance, a topic which seems to be appearing increasingly frequently in a post-Snowden world. This judgment is substantial (over 200 pages in length) and deals with three cases challenging the UK’s now mainly repealed Regulation of Investigatory Powers Act 2000 (RIPA) as regards interception of communications in bulk, the acquisition of communications data and the sharing of intercepted communications and communications data between the UK and the United States of America: Big Brother Watch (app no. 58170/13), Bureau of Investigative Journalism and Alice Ross (app no. 62322/14) and 10 Human Rights Organisations (app no. 24960/15).  It follows in the steps of the Liberty case (app no. 58243/00) against the previous regime and, given the similarity between some aspects of RIPA and the Investigatory Powers Act 2016 (IPA), might have relevance for our understanding of that act too. In addition to questions about Article 8, the judgment also deals with the impact of surveillance on freedom of speech under Article 10 ECHR. 

This post is the first of two on the judgment. It outlines the issues and the Court’s reasoning; the second comments on the judgment. Given the size of the judgment, that will be just an initial reaction – there will, no doubt, be much more to be said.

Factual Background

The applicants in the three cases are organisations and individuals who are either journalists or are active in campaigning on civil liberties issues. Their challenges to RIPA were triggered by the information revealed by Edward Snowden, which made apparent the existence of surveillance and intelligence sharing programmes operated by the intelligence services of the United States and the United Kingdom.  Specifically, they believed that the nature of their activities meant that their electronic communications and/or communications data were likely to have been intercepted or obtained by the UK intelligence services relying on the regime found in RIPA.  Three problem areas were highlighted:

-          bulk interception of ‘external’ communications under s. 8(4), as well as connected communications data;
-          the sharing process whereby the British agencies received data collected by the US; and
-          access to communications data under Part II RIPA.

In all instances the applicants thought that the protections against abuse were insufficient and that the regimes were neither lawful nor necessary in a democratic society.

Only the applicants in the third case brought an action before the Investigatory Powers Tribunal (IPT), alleging violations of Articles 8, 10 and 14 of the Convention.  Although the IPT found two ‘technical violations’ of the Convention, in the main it regarded the challenged regime to be in accordance with the requirements of Article 8, notably the requirements set down in Weber and Saravia (app no. 54934/00).

Judgment

The first issue concerned exhaustion of domestic remedies, in particular the failure to bring a case before the IPT.  The applicants argued that, in the light of the ECtHR’s own ruling in Kennedy (app no. 26839/05), the IPT would not be an effective remedy and they were therefore not obliged to do so.  The Court agreed with this assessment of its case law in general terms, but thought that recent practice showed that the IPT now constituted a viable route for a remedy, especially given the response of the UK government to its findings. Nonetheless, the Court accepted that, at the time the applicants in the first and second of the joined cases introduced their applications, they could not be faulted for having relied on Kennedy as authority for the proposition that the IPT was not an effective remedy for a complaint about the general Convention compliance of a surveillance regime. It therefore found that there existed special circumstances absolving those applicants from the requirement that they first bring their complaints to the IPT.

The Court first considered the position under s. 8(4) RIPA and whether it met the tests of legitimate purpose, lawfulness and necessity in a democratic society. In doing so, it noted that there was jurisprudence in this field, but that in previous jurisprudence the Court had distinguished between different types of secret surveillance, finding that there were different levels of intrusion depending on the data collected, and also different rules depending on whether national security was in issue.  The Court sought to synthesise the principles, suggesting that the six principles established in Weber – to ensure the lawfulness of any such regime – were the starting point, though they might need to be applied differently depending on the type of surveillance; they did not, however, need to be updated to take account of changes in technology.  These minima are:

-          the nature of offences which might give rise to an interception order;
-          definition of the categories of people liable to have their communications intercepted;
-          a limit on the duration of interception;
-          the procedure to be followed for examining, using and storing the data obtained;
-          the precautions to be taken when communicating the data to other parties; and
-          the circumstances in which intercepted data may or must be erased or destroyed.

In the context of national security it also recognised the gloss added by the Grand Chamber in Zakharov (app no. 47143/06): that review mechanisms and remedies should also be taken into account. The Court noted that the nature of secret surveillance was such that, until an individual were to be notified about such surveillance, that individual would not be in a position to exercise their rights. In this context, the safeguards against abuse assumed high importance; moreover, the role of rights to remedies was important for protection after notification.

Looking at the situation in issue, the Court started by making the general point that operating a bulk interception scheme was not in itself in violation of the Convention. Governments would have “a wide margin of appreciation” in deciding what kind of surveillance scheme was necessary to protect national security.  The operation of the system would still however need to be checked to ensure that there were sufficient safeguards against abuse.  The applicants argued that the fact that there was no requirement for prior judicial authorization was a fatal flaw in the scheme. 

The Court agreed that judicial authorisation was an important safeguard, perhaps even “best practice”, but by itself it was neither necessary nor sufficient to ensure compliance with Article 8. It was unnecessary because of the ex post controls available in the British system. Looking to Zakharov, the Court recognised that a formal requirement was insufficient – the requirement there had not prevented bad practice. The Court then held that regard had to be had to the actual operation of the system of interception, including the checks and balances on the exercise of power, and the existence or absence of any evidence of actual abuse.

In assessing the scheme the Court took the law at the time of its consideration of the claims; this meant that the Court considered the matter after the impact of the Snowden leak and some of the consequent changes to practice, including revisions to relevant codes accompanying RIPA, as well as statements in Parliament (such as the clarification as to what an external communication was – it includes Google searches, tweets and Facebook posts by users in the UK). 

The Court took the view that, as regards the first Weber requirement, the law was clear as to the circumstances in which and the conditions on which a section 8(4) warrant might be issued. There was no evidence to suggest that the Secretary of State was authorising warrants without due and proper consideration. The authorisation procedure was subject to independent oversight and the IPT had extensive jurisdiction to examine any complaint of unlawful interception. Following its analysis in Kennedy, the Court accepted that the provisions on the duration and renewal of interception warrants, the provisions relating to the storing, accessing, examining and using intercepted data, the provisions on the procedure to be followed for communicating the intercepted data to other parties and the provisions on the erasure and destruction of intercept material provided adequate safeguards against abuse.

There were some weaknesses in the system.  While in the opinion of the Court the selectors (e.g. email address) and search criteria used to narrow down the mass of information collected to that which would be read by analysts did not need to be made public or be listed in the warrant ordering interception, the choice of search criteria and selectors should be subject to independent oversight (para 387); indeed the Court expressed some concern about the cables (‘bearers’) selected for tapping. Here the ex post review by the Interception of Communications Commissioner (now replaced under the IPA by the Investigatory Powers Commissioner) and, should an application be made to it, the IPT were held not to be ‘sufficiently robust to provide adequate guarantees against abuse’ (para 347).

The Court also expressed concern about communications data.  This is often summarised as who, where, when but this underplays the significance of the data collected.  Indeed, here the Court rejected the Government’s argument that communications data was necessarily less sensitive than the content of the communications (para 357). The Court explained the position thus:

... the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with. (para 356)

In the context of s 8(4), communications data associated with the communications intercepted is also covered by the warrant but crucially some of the limitations (e.g. that the communication must be external) do not apply to this data.  The Court concluded that the unjustified lower level of protection meant that there was a violation in this regard.

The Court then considered the data sharing arrangements, the first time that the Court had been asked to consider the matter. It noted at the outset the many ways in which this issue might arise.  The interference in the case had not been occasioned by the interception of communications itself but lay in the receipt of the intercepted material and subsequent storage, examination and use by the intelligence services.  It confined its judgment to the specific argument brought before it: the breach occasioned by the British services receiving American intelligence.  The applicants argued that this indirect access should be treated the same way as direct surveillance by the British services.  The Court commented that:

"[a]s with any regime which provides for the acquisition of surveillance material, the regime for the obtaining of such material from foreign Governments must be 'in accordance with the law'..., it must be proportionate to the legitimate aim pursued, and there must exist adequate and effective safeguards against abuse .… In particular, the procedures for supervising the ordering and implementation of the measures in question must be such as to keep the 'interference' to what is 'necessary in a democratic society'" (para 422).

The Court also recognised the danger of States using intelligence sharing as a means to circumvent controls (para 423).  It nonetheless accepted that the safeguards need not look identical in this context as in that of direct surveillance. Applying the principles to the facts, the Court found unanimously that there had been no violation. In particular, it accepted that the lawfulness requirement had been satisfied although the basis for the data sharing was an internal agreement which was disclosed only during proceedings before the IPT and subsequently incorporated into the Interception of Communications Code (para 426). The Code links the circumstances in which intelligence may be requested to the issuing of s. 8(1) or s. 8(4) warrants, thus circumscribing the circumstances in which such requests may arise, and indirectly imposes supervision via sign-off by the Secretary of State and review by the ISC and the Interception of Communications Commissioner. 

The Court applied its assessment of the Code’s safeguards in relation to s. 8(4) warrants (in paras 361-363) here. Its assessment of the proportionality of information sharing was influenced by the threat of international terrorism and the global nature of terror networks necessitating information flow. In the Court’s view, ‘this “information flow” was embedded into a legislative context providing considerable safeguards against abuse’ so that ‘the resulting interference was limited to that which was “necessary in a democratic society”’ (para 446), and it considered that the threshold set by the Venice Commission – that the material transferred should only be able to be searched if all material requirements of a national search were fulfilled – was met (para 447). 

The next issue was the final question relating to Article 8.  It concerned Chapter II of RIPA, which allows specified authorities to access communications data held by communications service providers (CSPs).  As noted, communications data is not necessarily less intrusive than content.  The Court did not however go into detail on this here, although it noted that real time surveillance is more intrusive than the transfer of records of existing data (citing Ben Faiza (app no. 31446/12)). It re-iterated that the same three criteria apply: lawfulness, legitimate aim and necessity in a democratic society.  The Court focussed on the lawfulness of the rules, referring to the position under EU law – notably Digital Rights Ireland (Case C-293/12 and C-594/12) and Watson (Case C-698/15) – which requires that any regime permitting access to data retained by CSPs was only to be for the purpose of combating “serious crime”, and that such access be subject to prior review by a court or independent administrative body. RIPA – although it provided a clear basis for action on the face of it – did not comply with this requirement and was therefore not compliant with domestic law requirements (para 467).

A further issue arose in the Bureau of Investigative Journalism (BIJ) complaint. There, BIJ (a newsgathering organisation) and a journalist (Ross) raised the issue of interference with confidential journalistic material occasioned by the operation of both the section 8(4) and the Chapter II regimes.  While the Court has emphasised the importance of protection of journalists’ sources, its case law has distinguished between court orders for disclosure and searches carried out by the authorities to obtain this information – the latter is more intrusive. Further, the Court also distinguished between attempts to reveal sources and investigations into the commission of crimes. So the importance of source confidentiality is not an automatic trump card. The Court noted that the s. 8(4) regime was not aimed at monitoring journalists or uncovering journalistic sources.  The authorities would often only know that a journalist’s communications had been intercepted when examining the intercepted communications. Following Weber, this in itself could not be characterised as a particularly serious interference with freedom of expression. Nonetheless, where those communications were selected, the concerns would increase and safeguards would be required, especially as regards the need to protect confidentiality. In this context, concerns expressed in relation to the s. 8(4) regime ran through to Article 10 concerns. The Court emphasised that:

... there are no [public] requirements...either circumscribing the intelligence services' power to search for confidential journalistic or other material (for example, by using a journalist's email address as a selector), or requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved. (para 493)

This blanket power without any “above the water” arrangements limiting the intelligence services’ ability to search and examine such material constituted a violation of Article 10.

As regards the Chapter II regime, while there were some protections in place for journalistic sources, the Court determined that this was limited. They applied only where the purpose of the application was to determine a source. They would not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely.  Given this and the fact that access was not limited to ‘serious crime’, the Court found a violation of Article 10.

The Court rejected complaints under Article 6 as well as Article 14 combined with Articles 8 and 10 of the Convention as manifestly ill-founded.

The judgment was not unanimous. Judge Koskelo, joined by Judge Turkovic, disagreed with some points of the reasoning of the majority and particularly the appropriateness of relying on old case law in a context following a technological ‘sea change’ in which people’s lives are more thoroughly exposed to view.  Judges Pardalos and Eicke did not agree that the applicants in the first and second case should have been absolved from the requirement to exhaust domestic remedies, nor – in the light of the recent chamber judgment in Centrum For Rattvisa (app no. 35252/08) – that there had been a violation of Article 8 in relation to s. 8(4) warrants.

Barnard & Peers: chapter 9
Photo credit: Journalism, Media and Culture

Wednesday, 3 February 2016

Live. Die. Repeat. The ‘Privacy Shield’ deal as ‘Groundhog Day’: endlessly making the same mistakes?



Steve Peers

Love it, hate it, or spend an academic career analysing it, the USA is the best-known country in the world. Yet some of its traditions still puzzle outsiders. One of them, celebrated yesterday, is ‘Groundhog Day’: the myth that the appearance, or non-appearance, of the shadow of an otherwise obscure rodent on February 2nd each year will determine whether or not there will be another six weeks of winter. Outside the USA, Groundhog Day is probably better known as a movie: grumpy Bill Murray keeps repeating the same day, trying to perfect it and woo the lovely Andie MacDowell. Others have borrowed this basic plot. In Edge of Tomorrow, sleazy Tom Cruise keeps repeating the same day, trying to kill aliens and woo the lovely Emily Blunt. In the Doctor Who episode Hell Bent, angry Peter Capaldi keeps repeating the same day, trying to cut through a diamond wall and resurrect the lovely Jenna Coleman.

The basic idea is summed up in the advertising slogan for Edge of Tomorrow: Live. Die. Repeat. Groundhog Day in particular has attracted many interpretations. Of these, the most convincing is that the film’s story is a Buddhist parable: repeated reincarnation until we reach the state of enlightenment, or nirvana.

How does this relate to the new EU/US privacy deal, dubbed ‘Privacy Shield’? Obviously the deal involves the USA, and it was reached yesterday, on Groundhog Day. And it’s a new incarnation of a prior deal: ‘Safe Harbor’, killed last October by the CJEU in the Schrems judgment (discussed here). While the text of the new agreement is not yet available, the initial indication is that it is bound to be killed in turn – unless the CJEU, admittedly an increasingly fickle judicial deity, is willing to go back on its own case law. Goodness knows how many further reincarnations will be necessary before the US and EU can reach enlightenment.

Problems with the deal

The point of the new deal is the same as the old one: to provide a legally secure set of rules for EU/US data transfers, for companies that subscribe to a set of data protection principles. Failing that, it is possible to argue that transfers can be justified by binding corporate rules, by individual consent or (as regards US government access to the data) by a third State’s public interest. But as I noted in my blog post on Schrems, these alternatives are as yet untested in the CJEU, and are possibly subject to legal challenges of their own. Understandably, businesses would like to make a smooth transition to a new set of legally secure rules. Does the new deal fit the bill?   

In the absence of a text, I can’t analyse the new deal much. But here are my first impressions.

According to the CJEU, the main problems with the previous deal were twofold: the extent of mass surveillance in the USA, and the limited judicial redress available to EU citizens as regards such government surveillance. It appears that the new agreement will address the latter issue, but not the former. There will be an ‘ombudsman’ empowered to consider complaints against the US government. While the details are unknown, it’s hard to see how this new institution could address the CJEU’s concerns completely, unless it is given the judicial power to order the blocking and erasure of data, for instance.   

Furthermore, there’s no sign that the underlying mass surveillance will be changed. Here, the argument is that the Court of Justice simply misunderstood the US system, or that in any event many EU countries are just as wicked as the USA when it comes to mass surveillance. These arguments are eloquently set out in a barrister’s opinion, summarised in this (paywalled) Financial Times story.

Facebook and the US government disdained to get involved in the Schrems case, and have no doubt repented this at leisure. The assumption here appears to be that they would participate fully in new litigation, and convince the CJEU to see the error of its ways.

How likely is this? It’s undoubtedly true to say that the CJEU gives an increasing impression that it is willing to bend the rules, or go back on its own case law, in order to ensure the survival of an increasingly beleaguered EU project. In Pringle and Gauweiler, it agreed with harshly criticized plans to keep monetary union afloat. In Dano and Alimanovic, it qualified its prior case law on EU citizens’ access to benefits, in an attempt to quell growing public concern about this issue. In Celaj, it gave a first indication that it would row back on its case law limiting the detention of irregular migrants, perhaps in light of the migration and refugee crisis. The drafters of the proposed deal on UK renegotiation appear to assume that the Court would back away from even more free movement case law, if it appears necessary to keep the UK from leaving the European Union.

Once the Court reminded legal observers of Rome: the imperial author of uniform codes that would bind a whole continent, upon which the sun would never set. Now it increasingly reminds me of Dunkirk: the centre of a brave and hastily improvised retreat from impending apocalypse, scouring for a beach to fight its last stand. The Court used to straighten every road; now it cuts every corner.

Since the ‘Privacy Shield’ deal faces many litigious critics, it seems virtually certain to end up before the Court before long. Time will tell where the judgment on the deal will fit within the broader sweep of EU jurisprudence.


Photo credit: play.google.com

Wednesday, 7 October 2015

The party’s over: EU data protection law after the Schrems Safe Harbour judgment




Steve Peers

The relationship between intelligence and law enforcement agencies (and companies like Google and Facebook) and personal data is much like the relationship between children and sweets at a birthday party. Imagine you’re a parent bringing out a huge bowl full of sweets (the personal data) during the birthday party – and then telling the children (the agencies and companies) that they can’t have any. But how can you enforce this rule? If you leave the room, even for a moment, the sweets will be gone within seconds, no matter how fervently you insist that the children leave them alone while you’re out. If you stay in the room, you will face incessant and increasingly shrill demands for access to the sweets, based on every conceivable self-interested and guilt-trippy argument. If you try to hide the sweets, the children will overturn everything to find them again.

When children find their demands thwarted by a strict parent, they have a time-honoured circumvention strategy: “When Mummy says No, ask Daddy”. But in the Safe Harbour case, things have happened the other way around. Mummy (the Commission) barely even resisted the children’s demands. In fact, she said Yes hours ago, and retired to the bath with an enormous glass of wine, occasionally shouting out feeble admonitions for the children to tone down their sugar-fuelled rampage. Now Daddy (the CJEU) is home, shocked at the chaos that results from lax parenting. He has immediately stopped the supply of further sweets. But the house is full of other sugary treats, and all the children are now crying. What now?

In this post, I’ll examine the reasons why the Court put its foot down, and invalidated the Commission’s ‘Safe Harbour’ decision which allows transfers of personal data to the USA, in the recent judgment in Schrems. Then I will examine the consequences of the Court’s ruling. But I should probably admit for the record that my parenting is more like Mummy's than Daddy's in the above example. 

Background

For more on the background to the Schrems case, see here; on the hearing, see Simon McGarr’s summary here; and on the Advocate-General’s opinion, see here. But I’ll summarise the basics of the case again briefly.

Max Schrems is an Austrian Facebook user who was disturbed by Edward Snowden’s revelations about mass surveillance by US intelligence agencies. Since he believed that transfers of his data to Facebook were subject to such mass surveillance, he complained to the Irish data protection authority, which regulates Facebook’s transfers of personal data from the EU to the USA.

The substantive law governing these transfers of personal data was the ‘Safe Harbour’ agreement between the EU and the USA, agreed back in 2000. This agreement was put into effect in the EU by a decision of the Commission, which was adopted pursuant to powers conferred upon the Commission by the EU’s current data protection Directive. The latter law gives the Commission the power to decide that transfers of personal data outside the EU receive an ‘adequate level of protection’ in particular countries.

The ‘Safe Harbour’ agreement was enforced by self-certification of the companies that had signed up for it (note that not all transfers to the USA fell within the scope of the Safe Harbour decision, since not all American companies signed up). Those promises were in turn meant to be enforced by the US authorities. But it was also possible (not mandatory) for the national data protection authorities which enforce EU data protection law to suspend transfers of personal data under the agreement, if the US authorities or enforcement system found a breach of the rules, or on a list of limited grounds set out in the decision.

The Irish data protection authority refused to consider Schrems’ complaint, so he challenged that decision before the Irish High Court, which doubted that this system was compatible with EU law (or indeed the Irish constitution). So that court asked the CJEU to rule on whether national data protection authorities (DPAs) should have the power to prevent data transfers in cases like these.

The judgment

The CJEU first of all answers the question which the Irish court asks about DPA jurisdiction over data transfers (the procedural point), and then goes on to rule that the Safe Harbour decision is invalid (the substantive point).

Following the Advocate-General’s view, the Court ruled that national data protection authorities have to be able to consider claims that flows of personal data to third countries are not compatible with EU data protection laws if there is an inadequate level of data protection in those countries, even if the Commission has adopted a decision (such as the Safe Harbour decision) declaring that the level of protection is adequate. Like the Advocate-General, the Court based this conclusion on the powers and independence of those authorities, read in light of the EU Charter of Fundamental Rights, which expressly refers to DPAs’ role and independence. (On the recent CJEU case law on DPA independence, see discussion here). In fact, the new EU data protection law currently under negotiation (the data protection Regulation) will likely confirm and even enhance the powers and independence of DPAs. (More on that aspect of the proposed Regulation here).

The Court then elaborates upon the ‘architecture’ of the EU’s data protection system as regards external transfers. It points out that either the Commission or Member States can decide that a third country has an ‘adequate’ level of data protection, although it focusses its analysis upon what happens if (as in this case) there is a Commission decision to this effect. In that case, national authorities (including DPAs) are bound by the Commission decision, and cannot issue a contrary ruling.

However, individuals like Max Schrems can still complain to the DPAs about alleged breaches of their data protection rights, despite the adoption of the Commission decision. If they do so, the Court implies that the validity of the Commission’s decision is therefore being called into question. While all EU acts must be subject to judicial review, the Court reiterates the usual rule that national courts can’t declare EU acts invalid, since that would fragment EU law: only the CJEU can do that. This restriction applies equally to national DPAs.

So how can a Commission decision on the adequacy of third countries’ data protection law be effectively challenged? The Court explains that DPAs must consider such claims seriously. If the DPA thinks that the claim is unfounded, the disgruntled complainant can challenge the DPA’s decision before the national courts, who must in turn refer the issue of the validity of the decision to the CJEU if they think it may be well founded. If, on the other hand, the DPA thinks the complaint is well-founded, there must be rules in national law allowing the DPA to go before the national courts in order to get the issue referred to the CJEU.

The Court then moves on to the substantive validity of the Safe Harbour decision. Although the national court didn’t ask it to examine this issue, the Court justifies its decision to do this by reference to its overall analysis of the architecture of EU data protection law, as well as the national court’s doubts about the Safe Harbour decision. Indeed, the Court is effectively putting its new architecture into use for the first time, and it’s quite an understatement to say that the national court had doubts about Safe Harbour (it had compared surveillance in the USA to that of Communist-era East Germany).

So what is an ‘adequate level of protection’ for personal data in third countries? The Court admits that the Directive is not clear on this point, so it has to interpret the rules. In the Court’s view, there must be a ‘high’ level of protection in the third country; this does not have to be ‘identical’ to the EU standard, but must be ‘substantially equivalent’ to it.  Otherwise, the objective of ensuring a high level of protection would not be met, and the EU’s internal standards for domestic data protection could easily be circumvented. Also, the means used in the third State to ensure data protection rights must be ‘effective…in practice’, although they ‘may differ’ from that in the EU. Furthermore, the assessment of adequacy must be dynamic, with regular automatic reviews and an obligation for a further review if evidence suggests that there are ‘doubts’ on this score; and the general changes in circumstances since the decision was adopted must be taken into account.

The Court then establishes that in light of the importance of privacy and data protection, and the large number of persons whose rights will be affected if data is transferred to a third country with an inadequate level of data protection, the Commission has reduced discretion, and is subject to ‘strict’ standards of judicial review. Applying this test, two provisions of the ‘Safe Harbour’ decision were invalid.

First of all, the basic decision declaring adequate data protection in the USA (in the context of Safe Harbour) was invalid. While such a decision could, in principle, be based on self-certification, this had to be accompanied by ‘effective detection and supervision mechanisms’ ensuring that infringements of fundamental rights were ‘identified and punished in practice’. Self-certification under the Safe Harbour rules did not apply to US public authorities; there was not a sufficient finding that the US law or commitments met EU standards; and the rules could be overridden by national security requirements set out in US law.

Data protection rules apply regardless of whether the information is sensitive, or whether there were adverse consequences for the persons concerned. The Decision had no finding concerning human rights protections as regards the national security exceptions under US law (although the CJEU acknowledged that such rules pursued a legitimate objective), or effective legal protection in that context. This was confirmed by the Commission’s review of the Safe Harbour decision, which found (a) that US authorities could access personal data transferred from the EU, and then process it for purposes incompatible with the original transfer ‘beyond what was strictly necessary and proportionate for the purposes of national security’, and (b) that there was no administrative or judicial means to ensure access to the data and its rectification or erasure.

Within the EU, interference with privacy and data protection rights requires ‘clear and precise rules’ which set out minimum safeguards, as well as strict application of derogations and limitations.  Those principles were breached where, ‘on a generalised basis’, legislation authorises ‘storage of all the personal data of all the persons whose data has been transferred’ to the US ‘without any differentiation, limitation or exception being made in light of the objective pursued’ and without any objective test limiting access of the public authorities for specific purposes. General access to the content of communications compromises the ‘essence’ of the right to privacy. On these points, the Court expressly reiterated the limits on mass surveillance set out in last year’s Digital Rights judgment (discussed here) on the validity of the EU’s data retention Directive. Furthermore, the absence of legal remedies in this regard compromises the essence of the right to judicial protection set out in the EU Charter. But the Commission made no findings to this effect.

Secondly, the restriction upon DPAs taking action to prevent data transfers in the event of an inadequate level of data protection in the USA (in the context of Safe Harbour) was also invalid. The Commission did not have the power under the data protection Directive (read in light of the Charter) to restrict DPA competence in that way. Since these two provisions were inseparable from the rest of the Safe Harbour decision, the entire Decision is invalid. The Court did not limit the effect of its ruling.

Comments

The Court’s judgment comes to the same conclusion as the Advocate-General’s opinion, but with subtle differences that I’ll examine as we go along. On the first issue, the Court’s finding that DPAs must be able to stop data flows if there is a breach of EU data protection laws in a third country, despite an adequacy Decision by the Commission, is clearly the correct result. Otherwise it would be too easy for the standards in the Directive to be undercut by means of transfers to third countries, which the Commission or national authorities might be willing to accept as a trade-off for a trade agreement or some other quid pro quo with the country concerned.

As for the Court’s discussion of the architecture of the data protection rules, the idea of the data protection authorities having to go to a national court if they agree with the complainant that the Commission’s adequacy decision is legally suspect is rather convoluted, since it’s not clear who the parties would be: it’s awkward that the Commission itself would probably not be a party.  It’s unfortunate that the Court did not consider the alternative route of the national DPA calling on the Commission to amend its decision, and bringing a ‘failure to act’ proceeding directly in the EU courts if it did not do so. In the medium term, it would be better for the future so-called ‘one-stop shop’ system under the new data protection Regulation (see discussion here) to address this issue, and provide for a centralised process of challenging the Commission directly.

It’s interesting that the CJEU finds that there can be a national decision on adequacy of data flows to third States, since there’s no express reference to this possibility in the Directive. If such a decision is adopted, or if Member States apply the various mandatory and optional exceptions from the general external data protection rules set out in Article 26 of the data protection Directive, much of the Court’s Schrems ruling would apply in the same way by analogy. In particular, national DPAs must surely have the jurisdiction to examine complaints about the validity of such decisions too. But EU law does not prohibit the DPAs from finding the national decisions invalid; the interesting question is whether it obliges national law to confer such power upon the DPAs. Arguably it does, to ensure the effectiveness of the EU rules. Any decisions on these issues could still be appealed to the national courts, which would have the option (though not the obligation, except for final courts) to ask the CJEU to interpret the EU rules.

As for the validity of the Safe Harbour Decision, the Court’s interpretation of the meaning of ‘adequate’ protection in third States should probably be sung out loud, to the tune of ‘We are the World’. The global reach of the EU’s general data protection rules was already strengthened by last year’s Google Spain judgment (discussed here); now the Court declares that even the separate regime for external transfers is very similar to the domestic regime anyway. There must be almost identical degrees of protection, although the Court does hint that modest differences are permissible: accepting the idea of self-certification, and avoiding the issue of whether third States need an independent DPA (the Advocate-General had argued that they did).

It’s a long way from the judgment in Lindqvist over a decade ago, when the Court anxiously insisted that the external regime should not be turned into a copy of the internal rules; now it’s insistent that there should be as little a gap as possible between them. With respect, the Court’s interpretation is not convincing, since the word ‘adequate’ suggests something less than ‘essentially equivalent’, and the EU Charter does not bind third States.

But having said that, the American rules on mass surveillance would violate even a far more generous interpretation of the meaning of the word ‘adequate’. It’s striking that (unlike the Advocate-General), the Court does not engage in a detailed interpretation of the grounds for limiting Charter rights, but rather states that general mass surveillance of the content of communications affects the ‘essence’ of the right to privacy. That is enough to find an unjustifiable violation of the Charter.

So where does the judgment leave us in practice? Since the Court refers frequently to the primary law rules in the Charter, there’s no real chance to escape what it says by signing new treaties (even the planned TTIP or TiSA), by adopting new decisions, or by amending the data protection Directive. In particular, the Safe Harbour decision is invalid, and the Commission could only replace it with a decision that meets the standards set out in this judgment. While the Court refers at some points to the inadequacy or non-existence of the Commission’s findings in the Decision, it’s hard to believe that a new Decision which purports to claim that the American system now meets the Court’s standards would be valid if the Commission were not telling the truth (or if circumstances subsequently changed).

What standards does the US have to meet? The Court reiterates even more clearly that mass surveillance is inherently a problem, regardless of the safeguards in place to limit its abuse. Indeed, as noted already, the Court ruled that mass surveillance of the content of communications breaches the essence of the right to privacy and so cannot be justified at all. (Surveillance of content which is targeted on suspected criminal activities or security threats is clearly justifiable, however). In addition to a ban on mass surveillance, there must also be detailed safeguards in place. The US might soon be reluctantly willing to address the latter, but it will be even more unwilling to address the former.

Are there other routes which could guarantee that external transfers to the USA take place, at least until the US law is changed? In principle, yes, since (as noted above) there are derogations from the general rule that transfers can only take place to countries with an ‘adequate’ level of data protection. A first set of derogations is mandatory (though Member States can have exceptions in ‘domestic law governing particular cases’): where the data subject gives ‘consent unambiguously’; where the transfer is necessary to perform a contract with (or in the interest of) the data subject, or for pre-contractual relations; where it’s ‘necessary or legally required on important public interest grounds’, or related to legal claims; where it’s ‘necessary to protect the vital interests of the data subject’; or where it’s made from a public register. A second derogation is optional: a Member State may authorise transfers where the controller offers sufficient safeguards, possibly in the form of contractual clauses. The use of the latter derogation can be controlled by the Commission.

It’s hard to see how the second derogation can be relevant, in light of the Court’s concerns about the sufficiency of safeguards under the current law. US access to the data is not necessary in relation to a contract, to protect the data subject, or related to legal claims.  An imaginative lawyer might argue that a search engine (though not a social network) is a modern form of public register; but the record of an individual’s use of a search engine is not.

This leaves us with consent and public interest grounds. Undoubtedly (as the CJEU accepted) national security interests are legitimate, but in the context of defining adequacy, they do not justify mass surveillance or insufficient safeguards. Would the Court’s ruling in Schrems still apply fully to the derogation regarding inadequate protection? Or would it apply in a modified way, or not at all?

As for consent, the CJEU ruled last year in a very different context (credibility assessment in LGBT asylum claims) that the rights to privacy and dignity could not be waived in certain situations (see discussion here). Is that also true to some extent in the context of data protection? And what does unambiguous consent mean exactly? Most people believe they are consenting only to (selected) people seeing what they post on Facebook, and are dimly aware that Facebook might do something with their data to earn money. They may be more aware of mass surveillance since the Snowden revelations; some don’t care, but some (like Max Schrems) would like to use Facebook without such surveillance. Would people have to consent separately to mass surveillance? In that case, would Facebook have to be accessible for those who did not want to sign that separate form? Or could a ‘spy on me’ clause be added at the end of a long (and unread) consent form?  Consent is a crucial issue also in the context of the purely domestic EU data protection rules.

The Court’s ruling has addressed some important points, but leaves an enormous number of issues open. It’s clear that it will take a long time to clear up the mess left from this particular poorly supervised party.  


Barnard and Peers: chapter 9

Photo credit: www.businessinsider.com

Wednesday, 23 September 2015

American Mass Surveillance of EU citizens: Is the End Nigh?




Steve Peers

*This blog post is dedicated to the memory of the great privacy campaigner Caspar Bowden, who passed away recently. What a tragedy he did not live to see the developments in this case. To continue his work, you can donate to the Caspar Bowden Legacy Fund here.


A brilliant university student takes on the hidebound establishment – and ultimately wins spectacularly. That was Mark Zuckerberg, founding Facebook, in 2004. But it could be Max Schrems, taking on Zuckerberg and Facebook, in the near future – if the Court of Justice decides to follow the Advocate-General’s opinion in the Schrems case, released today.

In fact, Facebook is only a conduit in this case: Schrems’ real targets are the US government (for requiring Facebook and other Internet companies to hand over personal data to intelligence agencies), as well as the EU Commission and the Irish data protection authority for going along with this. In the Advocate-General’s opinion, the Commission’s decision to allow EU citizens’ data to be subject to mass surveillance in the US is invalid, and the national data protection authorities in the EU must investigate these flows of data and prohibit them if necessary. The case has the potential to change much of the way that American Internet giants operate, and to complicate relations between the US and the EU in this field.

Background

There’s more about the background to this litigation here, and Simon McGarr has summarised the CJEU hearing in this case here. But I’ll summarise the basics of the case again here briefly.

Max Schrems is an Austrian Facebook user who was disturbed by Edward Snowden’s revelations about mass surveillance by US intelligence agencies. Since such mass surveillance is put into effect by imposing obligations to cooperate upon Internet companies, he wanted to complain about Facebook’s transfers of his personal data to the USA. Since Facebook’s European operations are registered in Ireland, he had to bring his complaints to the Irish data protection authority.

The legal regime applicable to such transfers of personal data is the ‘Safe Harbour’ agreement between the EU and the USA, agreed in 2000 – before the creation of Facebook and some other modern Internet giants, and indeed before the 9/11 terrorist attacks which prompted the mass surveillance. This agreement was put into effect in the EU by a decision of the Commission, which used the power conferred by the EU’s current data protection Directive to declare that transfers of personal data to the USA received an ‘adequate level of protection’ there.

The primary means of enforcing the arrangement was self-certification of the companies concerned (not all transfers to the USA fall within the scope of the Safe Harbour decision), enforced by the US authorities.  But it was also possible (not mandatory) for the national data protection authorities which enforce EU data protection law to suspend transfers of personal data, if the US authorities or enforcement system have found a breach of the rules, or on the following further list of limited grounds set out in the decision:

there is a substantial likelihood that the Principles are being violated; there is a reasonable basis for believing that the enforcement mechanism concerned is not taking or will not take adequate and timely steps to settle the case at issue; the continuing transfer would create an imminent risk of grave harm to data subjects; and the competent authorities in the Member State have made reasonable efforts under the circumstances to provide the organisation with notice and an opportunity to respond.

In fact, Irish law prevents the national authorities from taking up this option. So the national data protection authority effectively refused to consider Schrems’ complaint. He challenged that decision before the Irish High Court, which doubted that this system was compatible with EU law (or indeed the Irish constitution). So that court asked the CJEU to rule on whether national data protection authorities (DPAs) should have the power to prevent data transfers in cases like these.

The Opinion

The Advocate-General first of all answers the question which the Irish court asks, and then goes on to examine whether the Safe Harbour decision is in fact valid. I’ll address those two issues in turn.

In the Advocate-General’s view, national data protection authorities have to be able to consider claims that flows of personal data to third countries are not compatible with EU data protection laws, even if the Commission has adopted a decision declaring that they are. This stems from the powers and independence of those authorities, read in light of the EU Charter of Fundamental Rights, which expressly refers to DPAs’ role and independence. (On the recent CJEU case law on DPA independence, see discussion here). It’s worth noting that the new EU data protection law under negotiation, the data protection Regulation, will likely confirm and even enhance the powers and independence of DPAs. (More on that aspect of the proposed Regulation here).

On the second point, the opinion assesses whether the Safe Harbour Decision correctly decided that there was an ‘adequate level of protection’ for personal data in the USA. Crucially, it argues that this assessment is dynamic: it must take account of the protection of personal data now, not just when the Decision was adopted back in 2000.

As for the meaning of an ‘adequate level of protection’, the opinion argues that this means that third countries must ensure standards ‘essentially equivalent to that afforded by the Directive, even though the manner in which that protection is implemented may differ from that’ within the EU, due to the importance of protecting human rights within the EU. The assessment of third-country standards must examine both the content of those standards and their enforcement, which entailed ‘adequate guarantees and a sufficient control mechanism’, so there was no ‘lower level of protection than processing within the European Union’. Within the EU, the essential method of guaranteeing data protection rights was independent DPAs.

Applying these principles, the opinion accepts that personal data transferred to the USA by Facebook is subject to ‘mass and indiscriminate surveillance and interception’ by intelligence agencies, and that EU citizens have ‘no effective right to be heard’ in such cases. These findings necessarily mean that the Safe Harbour decision was invalid for breach of the Charter and the data protection Directive.

More particularly, the derogation for the national security rules of US law set out in the Safe Harbour principles was too general, and so the implementation of this derogation was ‘not limited to what is strictly necessary’. EU citizens had no remedy against breaches of the ‘purpose limitation’ principle in the US either, and there should be an ‘independent control mechanism suitable for preventing the breaches of the right to privacy’.

The opinion then assesses the dispute from the perspective of the EU Charter of Rights. It first concludes that the transfer of the personal data in question constitutes interference with the right to private life. As in last year’s Digital Rights Ireland judgment (discussed here), on the validity of the EU’s data retention directive, the interference with rights was ‘particularly serious, given the large numbers of users concerned and the quantities of data transferred’. In fact, due to the secret nature of access to the data, the interference was ‘extremely serious’. The Advocate-General was also concerned about the lack of information about the surveillance for EU citizens, and the lack of an effective remedy, which breaches Article 47 of the Charter.

However, interference with these fundamental rights can be justified according to Article 52(1) of the Charter, as long as the interference is ‘provided for by law’, ‘respect[s] the essence’ of the right, satisfies the ‘principle of proportionality’ and is ‘necessary’ to ‘genuinely meet objectives of general interest recognized by’ the EU ‘or the need to protect the rights and freedoms of others’.  

In the Advocate-General’s view, the US law does not respect the ‘essence’ of the Charter rights, since it extends to the content of the communications. (In contrast, the data collected pursuant to the data retention Directive which the CJEU struck down last year concerned only information on the use of phones and the Internet, not the content of phone calls and Facebook posts et al). On the same basis, he objected to the ‘broad wording’ of the relevant derogations on national security grounds, which did not clearly define the ‘legitimate interests’ at stake. Therefore, the derogation did not comply with the Charter, ‘since it does not pursue an objective of general interest defined with sufficient precision’. Moreover, it was too easy under the rules to escape the limitation that the derogation should only apply when ‘strictly necessary’.

Only the ‘national security’ exception was sufficiently precise to be regarded as an objective of general interest under the Charter, but it is still necessary to examine the ‘proportionality’ of the interference. This was a case (like Digital Rights Ireland) where the EU legislature’s discretion was limited, due to the importance of the rights concerned and the extent of interference with them. The opinion then focusses on whether the transfer of data is ‘strictly necessary’, and concludes that it is not: the US agencies have access to the personal data of ‘all persons using electronic communications services, without any requirement that the persons concerned represent a threat to national security’.

Crucially, the opinion concludes that ‘[s]uch mass, indiscriminate surveillance is inherently disproportionate and constitutes an unwarranted interference’ with Charter rights. The Advocate-General agreed that since the EU and the Member States cannot adopt legislation allowing for mass surveillance, non-EU countries ‘cannot in any circumstances’ be considered to ensure an ‘adequate level of protection’ of personal data if they permit it either.

Furthermore, there were not sufficient guarantees for protection of the data. Following the Digital Rights Ireland judgment, which stressed the crucial importance of such guarantees, the US system was not sufficient. The Federal Trade Commission could not examine breach of data protection laws for non-commercial purposes by government security agencies, and nor could specialist dispute resolution bodies. In general, the US lacks an independent supervisory authority, which is essential from the EU’s perspective, and the Safe Harbour decision was deficient for not requiring one to be set up. A third country cannot be considered to have ‘an adequate level of protection’ without it. Furthermore, only US citizens and residents had access to the judicial system for challenging US surveillance, and EU citizens cannot obtain remedies for access to or correction of data (among other things).  

So the Commission should have suspended the Safe Harbour decision. Its own reports suggested that the national security derogation was being breached, without sufficient safeguards for EU citizens. While the Commission is negotiating revisions to that agreement with the USA, that is not sufficient: it must be possible for the national supervisory authority to stop data transfers in the meantime.

Comments

The Advocate-General’s analysis of the first point (the requirement that DPAs must be able to stop data flows if there is a breach of EU data protection laws) is self-evidently correct. In the absence of a mechanism to hear complaints on this issue and to provide for an effective remedy, the standards set out in the Directive could too easily be breached. Having insisted that the DPAs must be fiercely independent of national governments, the CJEU should not now accept that they can be turned into the tame poodles of the Commission.

On the other hand, his analysis of the second point (the validity of the Safe Harbour Decision) is more problematic – although he clearly arrives at the correct conclusion. With respect, there are several flaws in his reasoning. Although EU law requires strong and independent DPAs within the EU to ensure data protection rights, there is more than one way to skin this particular cat. The data protection Directive notably does not expressly require that third countries have independent DPAs. While effective remedies are of course essential to ensure that data protection law (like any other law) is actually enforced in practice, those remedies do not necessarily have to entail an independent DPA. They could also be ensured by an independent judiciary. After all, Americans are a litigious bunch; Europeans could join them in the courts. But having said that, it is clear that in national security cases like this one, EU citizens have neither an administrative nor a judicial remedy worth the name in the USA. So the right to an effective remedy in the Charter has been breached; and it is self-evident that processing information from Facebook interferes with privacy rights.

Is that limitation of rights justified, however? Here the Advocate-General has muddled up several different aspects of the limitation rules. For one thing, the precision of the law limiting rights and the public interest which it seeks to protect are two separate things. In other words, the public interest does not have to be defined precisely; but the law which limits rights in order to protect the public interest has to be. So the opinion is right to say that national security is a public interest which can justify limitation of rights in principle, but it fails to undertake an examination of the precision of the rules limiting those rights. As such, it omits to examine some key questions: should the precision of the law limiting rights be assessed as regards the EU law, the US law, or both?  Should the US law be held to the same standards of clarity, foreseeability and accessibility as European states’ laws must be, according to the ECHR jurisprudence?

Next, it’s quite unconvincing to say that processing the content of communications interferes with the ‘essence’ of the privacy and data protection rights. The ECHR case law and the EU’s e-privacy directive expressly allow for interception of the content of communications in specific cases, subject to strict safeguards. So it’s those two aspects of the US law which are problematic: its nature as mass surveillance, plus the inadequate safeguards.

On these vital points, the analysis in the opinion is correct. The CJEU’s ruling in Digital Rights Ireland suggests, in my view, that mass surveillance is inherently a problem, regardless of the safeguards in place to limit its abuse. This is manifestly the Advocate-General’s approach in this case; and the USA obviously has in place mass surveillance well in excess of the EU’s data retention law. The opinion is also right to argue that EU rules banning mass surveillance apply to the Member States too, as I discuss here. But even if this interpretation is incorrect, and mass surveillance is only a problem if there are weak safeguards, then the Safe Harbour decision still violates the Charter, due to the lack of accessible safeguards for EU citizens as discussed above. Hopefully, the Court of Justice will confirm whether mass surveillance is intrinsically problematic or not: it is a key issue for Member States retaining data by way of derogation from the e-privacy Directive, for the validity of EU treaties (and EU legislation) on specific issues such as retaining passenger data (see discussion here of a pending case), and for the renegotiation of the Safe Harbour agreement itself.

This brings us neatly to the consequences of the CJEU’s forthcoming judgment (if it follows the opinion) for EU/US relations. Since the opinion is based in large part upon the EU Charter of Rights, which is primary EU law, it can’t be circumvented simply by amending the data protection Directive (on the proposed new rules on external transfers under the planned Regulation, see discussion here). Instead, the USA must, at the very least, ensure that adequate remedies for EU citizens and residents are in place in national security cases, and that either a judicial or administrative system is in place to enforce in practice all rights which are supposed to be guaranteed by the Safe Harbour certification. Facebook and others might consider moving the data processing of EU residents to the EU, but it’s hard to see how this could work for any EU resident with (for instance) Facebook friends living in the USA. Surely in such cases processing of the EU data in the USA is unavoidable.

Moreover, arguably it would not be sufficient for the forthcoming EU/US trade and investment agreement (known as ‘TTIP’) to provide for a qualified exemption for EU data protection law, along the lines of the WTO’s GATS. Only a complete immunity of EU data protection law from the TTIP – and any other EU trade and investment agreements – would be compatible with the Charter. Otherwise, companies like Facebook and Google might try to invoke the controversial investor dispute settlement system (ISDS) every time a judgment like Google Spain or (possibly) Schrems cost them money.

Barnard and Peers: chapter 9

Photo credit: www.techradar.com