
Saturday, 21 December 2019

The AG Opinion in Schrems II: Facebook, national security and data protection law

Lorna Woods, Professor of Internet Law, University of Essex


Last week a CJEU Advocate General gave an opinion in the case of Schrems II, the latest challenge to US national security rules as they apply to transfers of personal data from the EU (via Facebook). The original Schrems case (discussed here) shocked the data protection world when the Court of Justice of the EU (ECJ) ruled that the adequacy decision with regards to the United States (which simplified personal data transfers between the EU and the US) was invalid and – effectively – that US practices were incompatible with the EU Charter. Companies transferring data to the US turned to other legal mechanisms to legitimise the transfer of data and Schrems II (Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (Case C-311/18)) concerns one of these mechanisms: standard contractual clauses (SCCs). Surely, given the similar context and the fact that those under US jurisdiction must comply with US law, the outcome must be the same?

The Facts

Max Schrems aimed to stop the transfer of his personal data from the EU to the US under SCCs, following on from the finding in Schrems I that US law did not provide sufficient safeguards for individuals’ privacy rights in the context of bulk surveillance. The Irish Data Protection Commissioner (DPC) took the view that her assessment of whether the transfers were valid depended on whether the model SCCs (established by the European Commission by Decision 2010/87/EU) were themselves valid, and she brought an action before the Irish courts to determine this, which resulted in a 152-page judgment and a reference to the ECJ.

The reference comprised 11 questions, which the Advocate General bundled into a number of topics:

- the applicability of EU law when transferred data is processed for national security purposes in third countries;
- the level of protection required;
- the impact on the validity of Decision 2010/87 of the fact that SCCs do not bind the authorities of a third country;
- the validity of Decision 2010/87 in the light of the EU Charter; and
- an assessment of the Privacy Shield decision (the replacement adequacy decision for transfers to the US, following the finding in Schrems I that the previous decision, known as ‘Safe Harbour’, was invalid).

The Opinion

The first issue was whether the fact that the concerns regarding privacy occur in the policy space of national security (an area outwith EU competence) affects the applicability of the data protection directive (DPD) or the replacement law, the GDPR. Those rules are designed for the commercial sphere. As the Advocate General noted,

The significance of that question … lies in the fact that, if such a transfer fell outside the scope of EU law, all the objections raised ... would be rendered baseless [101].

Given the Court’s approach in Schrems I, it is unsurprising that the answer here was that the locus of regulation was the commercial activity that was being undertaken. The purpose of the transfer was not that of allowing the data to be processed for national security [106]. So, ‘the possibility that the data will undergo processing by the authorities of the third country of destination for the purposes of the protection of national security does not render EU law inapplicable...’ [108].

The second issue the Advocate General considered was the level of protection. He accepted that the approach of the Court in Schrems I to adequacy decisions (under Article 25(6) DPD, and now Article 45(3) GDPR) is also relevant to SCCs, so that the ‘appropriate safeguards’ envisaged by Article 46 GDPR should ensure data subjects benefit from a level of protection ‘essentially equivalent’ to that which follows from the GDPR [115]. While the adequacy decision mechanism and the SCC mechanism both aim at the same objective, the way each achieves it may differ: the underlying difference is that an adequacy decision considers whether the protections provided by law in the destination country are adequate, whereas SCCs accept that they are not and provide other safeguards [120, see also 123-4].

Validity of Decision 2010/87

Moving on to the validity of Decision 2010/87 in the light of the EU Charter, the Advocate General noted that the fact that SCCs are not binding on the third country undermines the ability of the recipient of the data always to respect the data protection safeguards they contain. He considered this in the context of the question the Irish court raised regarding the obligations on the national supervisory authority to suspend transfers [122]. The Advocate General proposed that:

- SCCs may be assessed only on the ‘soundness of the safeguards’ they each provide;
- those safeguards may be reduced or eliminated as a result of the law of the third country;
- the mechanism therefore imposes an obligation on the exporter/controller or the national supervisory authorities, on a case-by-case basis, to prohibit or suspend transfers.

The Advocate General concluded that this did not invalidate the Decision but rather raised the question of ‘whether there are sufficiently sound mechanisms to ensure that transfers based on the standard contractual clauses are suspended or prohibited where those clauses are breached or impossible to honour’ [127]. He also highlighted the requirement in Article 46(1) GDPR that data subjects’ rights must be enforceable and remedies available.

Obligations on data controllers

The SCC imposes obligations on both exporter and importer to comply with the terms of the contract. Given the obligations that the GDPR imposes on the data controller (the person who determines the uses to which the data is put), where the exporter is aware that the importer cannot honour the terms of the SCC, the controller does not merely have the option to suspend the transfer but is required to do so [132]. The Advocate General also suggested that the parties should examine whether the law of the third country would entail such a breach [135]. The rights of the data subject are ensured as against the exporter/controller under the SCC in Decision 2010/87, and the data subject may also apply to the national supervisory authorities.

Obligations on the supervisory authorities

The Advocate General proposed that national supervisory authorities are required to order the suspension of transfers. Specifically, the power to suspend is not to be used only in exceptional cases (this follows from the amendment of the SCC terms in the light of Schrems I) and recital 11 of Decision 2010/87 is ‘obsolete’ [143]. The Advocate General emphasised that

‘the exercise of the powers to suspend and prohibit transfers …. is no longer merely an option left to the supervisory authorities’ discretion’ [144].

Article 58(2) GDPR, which sets out the powers of supervisory authorities, should be understood in the light of Article 8(3) EUCFR and Article 16(2) TFEU (both of which provide that compliance with data protection law should be overseen by an independent authority) – the Advocate General inferred that this meant the authorities have to act in such a way as to ensure the proper application of the GDPR. This imposes a due diligence requirement on the authorities, as well as an obligation to react appropriately to infringements. Failure to do so can lead to judicial action, and this re-emphasises that the obligation on the national supervisory authorities is ‘strict’, not discretionary [150].

The DPC had contended that this obligation is insufficient: it fails to address the systemic problem of inadequate safeguards, and it leaves unprotected those whose data have already been transferred. The Advocate General disagreed; while problems existed, they were not sufficient to invalidate the decision. He stated that:

EU law does not require that a general and preventive solution be applied for all transfers to a given third country that might entail the same risks of violation of fundamental rights [154].

As regards effective redress for those already affected, the Advocate General emphasised the role of the supervisory authorities in taking corrective measures and the rights under Article 82 GDPR.

Privacy Shield

The Advocate General then took the view that it was unnecessary to consider the ‘Privacy Shield’ decision, in part because the challenge assumes that the general level of law and protection in the recipient state needs to afford adequate protection for SCCs to be available – a point which the Advocate General had already rejected. Nonetheless, the Advocate General did produce some guidance for the Court were it to consider the issue.

The finding of adequacy under the Privacy Shield does not preclude a national supervisory authority from exercising its powers, and a number of parties challenged (directly or indirectly) the finding of adequacy in relation to the Privacy Shield. He suggested that, when comparing the law and safeguards of the third country, the appropriate comparator would be the approach of the Member States to their own national security within the framework of the European Convention on Human Rights (ECHR) [207], and that those standards must be known in advance. The Advocate General discussed the scope of the national security exception, defined as:

activities connected with the protection of national security in so far as they constitute activities of the State or of States authorities that are unrelated to fields in which individuals are active [para 210, citing inter alia Tele2 Sverige and Watson (Cases C-203/15 and C-698/15, discussed here)].

The Advocate General suggests that the exclusion covers measures ‘that are directly implemented by the State for the purposes of national security, without imposing specific obligations on private operators’ [211]. He notes that where private operators are involved the law is less clear, with the earlier PNR judgment (Parliament v Council and Commission (Cases C-317/04 and C-318/04)) seemingly pointing in a different direction from more recent jurisprudence, including Tele2/Watson. He proposed a number of ways to reconcile the two lines of cases:

- Tele2/Watson arose where operators were required to keep data, whereas the airlines in PNR kept the data for their own commercial purposes [218];
- Tele2/Watson applies where operators are required to cooperate as regards access to the data, irrespective of whether there is a prior obligation to retain it, because the provision in question required the operators to engage in data processing [219-220].

The Advocate General favoured the second approach, suggesting it was also in line with Schrems I, and that, once national authorities have the data and engage in further processing of them, such processing is not caught by the GDPR. In the Advocate General’s view, this means verification must take place first by reference to the GDPR and the Charter, and secondly by reference to the ECHR.

A further issue was whether continuity of protection means that measures must be in place during transit (e.g. through submarine cables). Article 44 GDPR refers to ‘after transfer’, which could mean either after arrival or once the transfer has been initiated. Relying on a teleological interpretation, the Advocate General adopted the second reading.

Moving on to the validity of the Commission’s assessment of adequacy, the Advocate General assessed whether the Commission’s findings warranted the adoption of an adequacy decision, recalling the principles set down in Schrems I allowing for ‘a certain flexibility in order to take the various legal and cultural traditions into account’ but requiring ‘that certain minimum safeguards and general requirements for the protection of fundamental rights that follow from the Charter and the ECHR have an equivalent ...’ [249]. It was this essential equivalence that the referring court challenged. The Advocate General re-stated case law from both Courts recognising the existence of an interference; as far as the ECJ is concerned, it does not matter whether the data are sensitive. Further:

the obligation to make the data available to the NSA, in so far as it derogates from the principle of confidentiality of communications, entails in itself an interference even if those data are not subsequently consulted and used by the intelligence authorities [259].

As regards the requirement that interferences must be provided for by law, the Advocate General – treating the approach of the ECJ and the ECtHR together – states that this test means that:

regulations which entail an interference … lay down clear and precise rules governing the scope and application of the measure at issue and imposing a minimum of requirements, in such a way as provide the persons concerned with sufficient guarantees to protect their data against the risks of abuse and also against any unlawful access to or use of data [para 265, citing Digital Rights Ireland (discussed here), Tele 2 Sverige, Opinion 1/15 (discussed here), Weber and Saravia, Zakharov (discussed here) and Szabo and Vissy].

The Advocate General doubted whether the US framework met this threshold [266]. Following existing jurisprudence, however, the Advocate General accepted that the very essence of Article 7 or 8 was not compromised. In this, the Advocate General noted the ECtHR’s position that such surveillance could, in principle, be capable of justification [282].

National security has long been accepted as a legitimate public interest ground justifying interferences with rights, but the scope of ‘national security’ was challenged. The Advocate General accepted that some aspects of foreign affairs might fall within ‘national security’; further objectives dealt with under ‘foreign intelligence information’ could constitute other public interest objectives, but these would carry less weight in a proportionality analysis. However, ‘it may be asked whether those measures are defined sufficiently clearly and precisely to prevent the risk of abuse and to permit a review of the proportionality’ [289].

The Advocate General nonetheless considered the necessity and proportionality aspects, within the framing set down by Schrems I in particular, and noted the safeguards required by Article 23(2) GDPR. He doubted whether the selection criteria were sufficiently clear and precise and whether there were sufficient guarantees to prevent the risk of abuse, noting in particular that the requirement that an activity be ‘as tailored as feasible’ is not the same as a requirement that it be strictly necessary [300], nor does the regime necessarily forewarn data subjects [307]. There is no prior review. He therefore concluded that he doubted the adequacy of the protection provided.

The next issue was the right to an effective remedy and the impact of the introduction of the Ombudsperson Mechanism, which is intended to compensate for some of the deficiencies in the US system. The Advocate General noted that the Article 47 right is in addition to the requirement that there be independent oversight/authorisation of surveillance activities. Re-iterating Schrems I, he observed that where there is no possibility to pursue legal remedies, the national rules do not respect the essence of the right. The right includes receiving confirmation from national authorities as to whether or not they are processing one’s data, as well as being notified about an investigation once notification would no longer jeopardise it (though the ECtHR has not made this latter aspect a requirement). The US system is deficient in these respects. The Advocate General considered whether the Ombudsperson Mechanism compensates, but was not convinced: to be effective, such a body must be established by law and be independent, yet the mechanism satisfied neither requirement, nor is it subject to judicial control.

Comment

A cursory look at the conclusion to the Opinion might suggest that there will be no change in the approach to data transfers and that, in general, this was a bit of a defeat for Schrems. This would mis-characterise the position (and also overlook the fact that it was the DPC that was arguing for invalidity of the SCC decision, not Schrems). The Opinion is divided broadly into two topics: the first deals with the legality of the SCC decision and the second with the Privacy Shield adequacy decision.

The Advocate General may have suggested that the Decision underlying the SCCs should not be considered invalid, but this does not mean that those transferring data to the US can ignore the privacy concerns. The response of the Advocate General – in avoiding challenging the underlying system itself – is to rely on decentralised, and ultimately private, enforcement by the exporter/data controllers, but also by the national supervisory authorities. This obligation is described in rather strong terms; certainly a data exporter cannot be passive but must investigate conditions, and if it finds problems it must act to suspend transfers. A head-in-the-sand approach – if the Court follows the reasoning of the Advocate General – is unlikely to be successful. For national supervisory authorities the obligation seems stronger still, and the obligation to assess on a case-by-case basis potentially increases their workload. Underpinning this again is the threat of legal action by data subjects. While empowering data subjects is probably to be regarded as positive, viewing private enforcement of regulation as an essential element of that scheme is problematic. It assumes data subjects have the energy and the resources to take action – a real weakness in this approach, despite the possibility of class actions.

It is noteworthy that while the Advocate General frames his section on the Decision as concerning its acceptability under the Charter, in practice his analysis focuses on the right to a remedy. This leaves the impact of the transfers on privacy and data protection (especially against a backdrop of bulk surveillance) under-considered. Further, the Advocate General seems to assume that the ability to sue in the EU (via Article 80 causes of action) compensates for the difficulties in standing and lack of remedies in the relevant third country, and that compensation is an adequate remedy (as opposed to more behavioural remedies such as ceasing processing). This aspect of the analysis is in marked contrast to the considerations discussed under the Privacy Shield section.

While the ruling on the impact of national security in the early part of the Opinion may not come as much of a surprise, it is potentially significant for the UK. At the moment, as a member of the EU, the activities of its security and intelligence services mainly lie outside the ECJ’s purview (though note the pending reference on the scope of this exclusion: Privacy International v Secretary of State for Foreign and Commonwealth Affairs (Case C-623/17)); once it becomes a third country (and subject to any negotiated agreement), national security becomes a relevant consideration. This difference between EU States and third countries did not escape the attention of those making representations before the court. On this difference, the Advocate General, when discussing the comparison that must take place to decide whether a third State’s data privacy protections are essentially equivalent, argues that, as regards interferences arising in the context of national security (which falls outside EU law and therefore the scope of the Charter), the relevant standards are to be found in the ECHR.

As noted, however, that boundary is somewhat uncertain, and consequently the extent to which it is consistent with earlier jurisprudence, including Schrems I, is open to question. The approach of the Advocate General does seem to move away from the approach in the PNR judgment, which looked at a provision’s purpose to determine whether it fell within the national security exception. Perhaps the forthcoming cases will develop a clear and consistent line on this point. The significance of drawing a boundary between the EU Charter and the ECHR lies in the extent of the difference in approach of the Strasbourg and Luxembourg courts to bulk surveillance, especially in relation to communications data. On this, the Big Brother Watch case (discussed here and here) is heading to the ECtHR Grand Chamber.

As regards the second aspect, although the Advocate General sought to avoid commenting on the Privacy Shield, some of his comments in this regard (made ‘in the alternative’) highlight some real problems for that system. In his discussion he grounds his reasoning both in the ECJ’s jurisprudence and in that of the ECtHR. The Opinion constitutes a clear statement as to the applicability of the law to ‘automated’ surveillance and also as to the requirement of legality (which is not particularly clear as regards the Strasbourg jurisprudence). On these points, as well as on the necessity and proportionality of the measures, the Advocate General was not convinced the US framework passed the tests. This is not just one problem to fix, but many. While the Advocate General did note the difference in the jurisprudence between the two courts, this difference did not seem to lead to a different outcome in his assessment of the acceptability of the US regime.

If the Court chooses to consider this question, there will be some serious difficulties for data flows. Whether the approach will stick is an open question; the ECJ has been under pressure to step back from its stance on bulk collection and automated assessment of data in particular. Some of the surveillance issues will be returning to the Court in a bevy of cases: in addition to Privacy International, see La Quadrature du Net & Ors v Commission (Case T-738/16); La Quadrature du Net & Ors and French Data Network & Ors (Cases C-511-12/18); and Ordre des barreaux francophones et germanophone, Académie Fiscale ASBL, UA, Liga voor Mensenrechten ASBL, Ligue des Droits de l’Homme ASBL, VZ, WY, XX v Conseil des ministres (Case C-520/18). Further Advocate General opinions in several of these cases are due in January.

Barnard & Peers: chapter 9
Photo credit: Forbes

Tuesday, 12 April 2016

The Commission’s draft EU-US Privacy Shield adequacy decision: A Shield for Transatlantic Privacy or Nothing New under the Sun?


Dr. Maria Tzanou (Lecturer in Law, Keele University)

On 6 October 2015, in its judgment in Schrems, the CJEU invalidated the Commission’s decision finding that the US ensured an adequate level of protection for the transfer of personal data under the Safe Harbour framework on the basis that US mass electronic surveillance violated the essence of the fundamental right to privacy guaranteed in Article 7 EUCFR and the right to effective judicial protection, enshrined in Article 47 EUCFR (for an analysis of the judgment, see here).
             
On 2 February 2016, the Commission announced that a political agreement had been reached on a new framework for transatlantic data flows, the EU-US Privacy Shield, which will replace the annulled Safe Harbour. On 29 February 2016, the Commission published a draft Privacy Shield adequacy decision followed by seven Annexes that contain the US government’s written commitments on the enforcement of the arrangement. The Annexes include the following assurances from the US:

- Annex I, a letter from the International Trade Administration of the Department of Commerce, which administers the programme, describing the commitments it has made to ensure that the Privacy Shield operates effectively;
- Annex II, the EU-US Privacy Shield Framework Principles;
- Annex III, a letter from the US Department of State and accompanying memorandum describing the State Department’s commitment to establish a Privacy Shield Ombudsperson for submission of inquiries regarding the US’ intelligence practices;
- Annex IV, a letter from the Federal Trade Commission (FTC) describing its enforcement of the Privacy Shield;
- Annex V, a letter from the Department of Transportation describing its enforcement of the Privacy Shield;
- Annex VI, a letter prepared by the Office of the Director of National Intelligence (ODNI) regarding safeguards and limitations applicable to US national security authorities; and
- Annex VII, a letter prepared by the US Department of Justice regarding safeguards and limitations on US Government access for law enforcement and public interest purposes.

Similar to its predecessor, Privacy Shield is based on a system of self-certification by which US companies commit to a set of privacy principles. However, unlike Safe Harbour, the draft Privacy Shield decision includes a section on the ‘access and use of personal data transferred under the EU-US Privacy Shield by US public authorities’ (para 75). In this, the Commission concludes that ‘there are rules in place in the United States designed to limit any interference for national security purposes with the fundamental rights of the persons whose personal data are transferred from the Union to the US to what is strictly necessary to achieve the legitimate objective.’ This conclusion is based on the assurances provided by the Office of the Director of National Intelligence (ODNI) (Annex VI), the US Department of Justice (Annex VII) and the US Secretary of State (Annex III), which describe the current limitations, oversight and opportunities for judicial redress under the US surveillance programmes. In particular, the Commission employs four main arguments arising from these letters to reach its adequacy conclusion. Firstly, US surveillance prioritises targeted collection of personal data, while bulk collection is limited to exceptional situations where targeted collection is not possible for technical or operational reasons (this captures the essence of the principles of necessity and proportionality, according to the Commission). Secondly, US intelligence activities are subject to ‘extensive oversight from within the executive branch’ and to some extent from courts such as the Foreign Intelligence Surveillance Court (FISC). Thirdly, three main avenues of redress are available under US law to EU data subjects, depending on the complaint they want to raise: interference under the Foreign Intelligence Surveillance Act (FISA); unlawful, intentional access to personal data by government officials; and access to information under the Freedom of Information Act (FOIA). Fourthly, a new mechanism will be created under the Privacy Shield, namely the Privacy Shield Ombudsperson, who will be a Senior Coordinator (at the level of Under-Secretary) in the State Department, in order to guarantee that individual complaints are investigated and that individuals receive independent confirmation that US laws have been complied with or, in case of a violation of such laws, that the non-compliance has been remedied.

The draft Privacy Shield framework may have been hailed as providing an ‘essentially equivalent’ level of protection for personal data transferred from the EU to the US, but despite the plethora of privacy-friendly words (‘Privacy Shield’, ‘robust obligations’, ‘clear limitations and safeguards’) one cannot be very optimistic that the new regime will fully comply with the Court’s judgment in Schrems. A first problematic aspect of the US assurances is that they merely describe the US surveillance legal framework and the relevant safeguards that already exist. In fact, the only changes introduced in the US following the Snowden revelations were the issuance of Presidential Policy Directive 28 (PPD-28) in January 2014, which lays down a number of principles on the use of signals intelligence data for all people; the passing of the USA Freedom Act in June 2015, which modified certain US surveillance programmes and put an end to the mass collection of Americans’ phone records by the NSA; and, finally, in February 2016, the Judicial Redress Act, passed by the US Congress and signed into law by President Obama. Given that one can reasonably assume that the Court was aware of these developments when handing down its judgment in Schrems in October 2015, it seems that, with the exception of the Ombudsperson, Privacy Shield does not change much in US surveillance law. In fact, the Commission has based its draft adequacy analysis entirely on a detailed description of this law, without any further commitment that it will improve in any way in order to comply with EU fundamental rights as interpreted by the CJEU.

While the assurance that US surveillance is mainly targeted and does not take place in bulk is important, there is no reference to US authorities’ access to the content of personal data – the element that was deemed to violate the essence of the right to privacy in Schrems. Furthermore, even if the US authorities engage only in targeted surveillance, the CJEU held in Digital Rights Ireland that the mere retention of private-sector data for the purpose of making them available to national authorities affects Articles 7 and 8 EUCFR and might have a chilling effect on the use by subscribers of platforms of communication, such as Facebook or Google, and, consequently, on their exercise of the freedom of expression guaranteed by Article 11 EUCFR. Individuals faced with surveillance cannot know when they are targeted; nevertheless, the possibility of being the object of surveillance has an effect on the way they behave. Insofar as Article 47 EUCFR and the right to effective judicial protection are concerned, the Commission itself notes in its draft adequacy decision that the avenues of redress provided to EU citizens do not cover all the legal bases that US intelligence authorities may use, and that individuals’ opportunities to challenge surveillance under FISA are very limited due to strict standing requirements.

The creation of the Ombudsperson with the important function of ensuring individual redress and independent oversight should be welcomed as the main addition of the draft Privacy Shield. Individuals will be able to access the Privacy Shield Ombudsperson without having to demonstrate that their personal data has in fact been accessed by the US intelligence activities and the Ombudsperson, who will be carrying out his functions independently from Instructions by the US Intelligence Community will be able to rely on the US oversight and review mechanisms. However, there are several limitations to the function of the Privacy Shield Ombudsperson. First, the procedure for accessing the Ombudsperson is not as straightforward as lodging a complaint before a national Data Protection Authority (DPA). Individuals have to submit their requests initially to the Member States’ bodies competent for the oversight of national security services and, eventually a centralised EU individual complaint handling body that will channel them to the Privacy Shield Ombudsperson if they are deemed ‘complete’. In terms of the outcome of the Ombudsperson’s investigation, the Ombudsperson will provide a response to the submitting EU individual complaint handling body –who will then communicate with the individual- confirming (i) that the complaint has been properly investigated, and (ii) that the US law has been complied with, or, in the event of non-compliance, such non-compliance has been remedied. However, the Ombudsperson will neither confirm nor deny whether the individual has been the target of surveillance nor will the Ombudsperson confirm the specific remedy that was applied. Finally, Annex III stipulates that commitments in the Ombudsperson’s Memorandum will not apply to general claims that the EU-US Privacy Shield is inconsistent with EU data protection requirements. 
In the light of the above, the Privacy Shield Ombudsperson does not seem to provide the redress guarantees of a supervisory authority such as a DPA, which the AG had called for in his Opinion in Schrems.

The draft Privacy Shield is problematic for another reason as well: it combines the regulatory framework for commercial transactions with the regulation of law enforcement access to private-sector data. These are, however, different issues and should be dealt with separately. It is important to encourage and facilitate transborder trade, so flexible mechanisms allowing undertakings to self-certify compliance with data protection principles should continue to apply. But the challenges that online surveillance poses to fundamental rights are too serious to be covered by the same regime, and by 'assurances' that essentially describe current US law. Two solutions could possibly deal with this problem: either the US adheres to Council of Europe Convention No. 108 and abandons the distinction between US and EU citizens regarding rights to redress, or a transatlantic privacy and data protection framework is adopted that ensures a high level of protection of fundamental rights and the transparency and accountability of transnational counter-terrorism operations (the so-called 'umbrella agreement'). Regrettably, the current form of the umbrella agreement is very problematic as regards its compatibility with EU data protection standards, or even human rights standards in general, and therefore does not seem to provide an effective solution to the issue.
      
A recently leaked document reveals that the Article 29 Working Party has had difficulty reaching an overall conclusion on the Commission's draft adequacy decision, and supports the view that the Privacy Shield does not fully comply with the essential guarantees for the transfer of personal data from the EU to the US for intelligence purposes.

Should the Commission nevertheless decide to proceed with the current draft, it is highly likely that the CJEU will in future be called upon to judge the adequacy of the Privacy Shield in a Schrems 2 line of cases.


Photo credit: www.teachprivacy.com

Wednesday, 3 February 2016

Live. Die. Repeat. The ‘Privacy Shield’ deal as ‘Groundhog Day’: endlessly making the same mistakes?



Steve Peers

Love it, hate it, or spend an academic career analysing it, the USA is the best-known country in the world. Yet some of its traditions still puzzle outsiders. One of them, celebrated yesterday, is ‘Groundhog Day’: the myth that the appearance, or non-appearance, of the shadow of an otherwise obscure rodent on February 2nd each year will determine whether or not there will be another six weeks of winter. Outside the USA, Groundhog Day is probably better known as a movie: grumpy Bill Murray keeps repeating the same day, trying to perfect it and woo the lovely Andie MacDowell. Others have borrowed this basic plot. In Edge of Tomorrow, sleazy Tom Cruise keeps repeating the same day, trying to kill aliens and woo the lovely Emily Blunt. In the Doctor Who episode Hell Bent, angry Peter Capaldi keeps repeating the same day, trying to cut through a diamond wall and resurrect the lovely Jenna Coleman.

The basic idea is summed up in the advertising slogan for Edge of Tomorrow: Live. Die. Repeat. Groundhog Day in particular has attracted many interpretations. Of these, the most convincing is that the film’s story is a Buddhist parable: repeated reincarnation until we reach the state of enlightenment, or nirvana.

How does this relate to the new EU/US privacy deal, dubbed ‘Privacy Shield’? Obviously the deal involves the USA, and it was reached yesterday, on Groundhog Day. And it’s a new incarnation of a prior deal: ‘Safe Harbor’, killed last October by the CJEU in the Schrems judgment (discussed here). While the text of the new agreement is not yet available, the initial indication is that it is bound to be killed in turn – unless the CJEU, admittedly an increasingly fickle judicial deity, is willing to go back on its own case law. Goodness knows how many further reincarnations will be necessary before the US and EU can reach enlightenment.

Problems with the deal

The point of the new deal is the same as the old one: to provide a legally secure set of rules for EU/US data transfers, for companies that subscribe to a set of data protection principles. Failing that, it is possible to argue that transfers can be justified by binding corporate rules, by individual consent or (as regards US government access to the data) by a third State's public interest. But as I noted in my blog post on Schrems, these alternatives are as yet untested in the CJEU, and are possibly subject to legal challenges of their own. Understandably, businesses would like to make a smooth transition to a new set of legally secure rules. Does the new deal fit the bill?

In the absence of a text, I can’t analyse the new deal much. But here are my first impressions.

According to the CJEU, the main problems with the previous deal were twofold: the extent of mass surveillance in the USA, and the limited judicial redress available to EU citizens as regards such government surveillance. It appears that the new agreement will address the latter issue, but not the former. There will be an ‘ombudsman’ empowered to consider complaints against the US government. While the details are unknown, it’s hard to see how this new institution could address the CJEU’s concerns completely, unless it is given the judicial power to order the blocking and erasure of data, for instance.   

Furthermore, there’s no sign that the underlying mass surveillance will be changed. Here, the argument is that the Court of Justice simply misunderstood the US system, or that in any event many EU countries are just as wicked as the USA when it comes to mass surveillance. These arguments are eloquently set out in a barrister’s opinion, summarised in this (paywalled) Financial Times story.

Facebook and the US government disdained to get involved in the Schrems case, and have no doubt repented this at leisure. The assumption here appears to be that they would participate fully in new litigation, and convince the CJEU to see the error of its ways.

How likely is this? It's undoubtedly true to say that the CJEU gives an increasing impression that it is willing to bend the rules, or go back on its own case law, in order to ensure the survival of an increasingly beleaguered EU project. In Pringle and Gauweiler, it agreed with harshly criticized plans to keep monetary union afloat. In Dano and Alimanovic, it qualified its prior case law on EU citizens' access to benefits, in an attempt to quell growing public concern about this issue. In Celaj, it gave a first indication that it would row back on its case law limiting the detention of irregular migrants, perhaps in light of the migration and refugee crisis. The drafters of the proposed deal on UK renegotiation appear to assume that the Court would back away from even more free movement case law, if it appears necessary to keep the UK from leaving the European Union.

Once the Court reminded legal observers of Rome: the imperial author of uniform codes that would bind a whole continent, upon which the sun would never set. Now it increasingly reminds me of Dunkirk: the centre of a brave and hastily improvised retreat from impending apocalypse, scouring for a beach to fight its last stand. The Court used to straighten every road; now it cuts every corner.

Since the ‘Privacy Shield’ deal faces many litigious critics, it seems virtually certain to end up before the Court before long. Time will tell where the judgment on the deal will fit within the broader sweep of EU jurisprudence.


Photo credit: play.google.com

Wednesday, 7 October 2015

The party’s over: EU data protection law after the Schrems Safe Harbour judgment




Steve Peers

The relationship between intelligence and law enforcement agencies (and companies like Google and Facebook) and personal data is much like the relationship between children and sweets at a birthday party. Imagine you’re a parent bringing out a huge bowl full of sweets (the personal data) during the birthday party – and then telling the children (the agencies and companies) that they can’t have any. But how can you enforce this rule? If you leave the room, even for a moment, the sweets will be gone within seconds, no matter how fervently you insist that the children leave them alone while you’re out. If you stay in the room, you will face incessant and increasingly shrill demands for access to the sweets, based on every conceivable self-interested and guilt-trippy argument. If you try to hide the sweets, the children will overturn everything to find them again.

When children find their demands thwarted by a strict parent, they have a time-honoured circumvention strategy: “When Mummy says No, ask Daddy”. But in the Safe Harbour case, things have happened the other way around. Mummy (the Commission) barely even resisted the children’s demands. In fact, she said Yes hours ago, and retired to the bath with an enormous glass of wine, occasionally shouting out feeble admonitions for the children to tone down their sugar-fuelled rampage. Now Daddy (the CJEU) is home, shocked at the chaos that results from lax parenting. He has immediately stopped the supply of further sweets. But the house is full of other sugary treats, and all the children are now crying. What now?

In this post, I’ll examine the reasons why the Court put its foot down, and invalidated the Commission’s ‘Safe Harbour’ decision which allows transfers of personal data to the USA, in the recent judgment in Schrems. Then I will examine the consequences of the Court’s ruling. But I should probably admit for the record that my parenting is more like Mummy's than Daddy's in the above example. 

Background

For more on the background to the Schrems case, see here; on the hearing, see Simon McGarr’s summary here; and on the Advocate-General’s opinion, see here. But I’ll summarise the basics of the case again briefly.

Max Schrems is an Austrian Facebook user who was disturbed by Edward Snowden’s revelations about mass surveillance by US intelligence agencies. Since he believed that transfers of his data to Facebook were subject to such mass surveillance, he complained to the Irish data protection authority, which regulates Facebook’s transfers of personal data from the EU to the USA.

The substantive law governing these transfers of personal data was the ‘Safe Harbour’ agreement between the EU and the USA, agreed back in 2000. This agreement was put into effect in the EU by a decision of the Commission, which was adopted pursuant to powers conferred upon the Commission by the EU’s current data protection Directive. The latter law gives the Commission the power to decide that transfers of personal data outside the EU receive an ‘adequate level of protection’ in particular countries.

The ‘Safe Harbour’ agreement was enforced by self-certification of the companies that have signed up for it (note that not all transfers to the USA fell within the scope of the Safe Harbour decision, since not all American companies signed up). Those promises were in turn meant to be enforced by the US authorities. But it was also possible (not mandatory) for the national data protection authorities which enforce EU data protection law to suspend transfers of personal data under the agreement, if the US authorities or enforcement system found a breach of the rules, or on a list of limited grounds set out in the decision.

The Irish data protection authority refused to consider Schrems’ complaint, so he challenged that decision before the Irish High Court, which doubted that this system was compatible with EU law (or indeed the Irish constitution). So that court asked the CJEU to rule on whether national data protection authorities (DPAs) should have the power to prevent data transfers in cases like these.

The judgment

The CJEU first of all answers the question which the Irish court asks about DPA jurisdiction over data transfers (the procedural point), and then goes on to rule that the Safe Harbour decision is invalid (the substantive point).

Following the Advocate-General’s view, the Court ruled that national data protection authorities have to be able to consider claims that flows of personal data to third countries are not compatible with EU data protection laws if there is an inadequate level of data protection in those countries, even if the Commission has adopted a decision (such as the Safe Harbour decision) declaring that the level of protection is adequate. Like the Advocate-General, the Court based this conclusion on the powers and independence of those authorities, read in light of the EU Charter of Fundamental Rights, which expressly refers to DPAs’ role and independence. (On the recent CJEU case law on DPA independence, see discussion here). In fact, the new EU data protection law currently under negotiation (the data protection Regulation) will likely confirm and even enhance the powers and independence of DPAs. (More on that aspect of the proposed Regulation here).

The Court then elaborates upon the ‘architecture’ of the EU’s data protection system as regards external transfers. It points out that either the Commission or Member States can decide that a third country has an ‘adequate’ level of data protection, although it focusses its analysis upon what happens if (as in this case) there is a Commission decision to this effect. In that case, national authorities (including DPAs) are bound by the Commission decision, and cannot issue a contrary ruling.

However, individuals like Max Schrems can still complain to the DPAs about alleged breaches of their data protection rights, despite the adoption of the Commission decision. If they do so, the Court implies that the validity of the Commission’s decision is therefore being called into question. While all EU acts must be subject to judicial review, the Court reiterates the usual rule that national courts can’t declare EU acts invalid, since that would fragment EU law: only the CJEU can do that. This restriction applies equally to national DPAs.

So how can a Commission decision on the adequacy of third countries’ data protection law be effectively challenged? The Court explains that DPAs must consider such claims seriously. If the DPA thinks that the claim is unfounded, the disgruntled complainant can challenge the DPA’s decision before the national courts, who must in turn refer the issue of the validity of the decision to the CJEU if they think it may be well founded. If, on the other hand, the DPA thinks the complaint is well-founded, there must be rules in national law allowing the DPA to go before the national courts in order to get the issue referred to the CJEU.

The Court then moves on to the substantive validity of the Safe Harbour decision. Although the national court didn’t ask it to examine this issue, the Court justifies its decision to do this by reference to its overall analysis of the architecture of EU data protection law, as well as the national court’s doubts about the Safe Harbour decision. Indeed, the Court is effectively putting its new architecture into use for the first time, and it’s quite an understatement to say that the national court had doubts about Safe Harbour (it had compared surveillance in the USA to that of Communist-era East Germany).

So what is an ‘adequate level of protection’ for personal data in third countries? The Court admits that the Directive is not clear on this point, so it has to interpret the rules. In the Court’s view, there must be a ‘high’ level of protection in the third country; this does not have to be ‘identical’ to the EU standard, but must be ‘substantially equivalent’ to it.  Otherwise, the objective of ensuring a high level of protection would not be met, and the EU’s internal standards for domestic data protection could easily be circumvented. Also, the means used in the third State to ensure data protection rights must be ‘effective…in practice’, although they ‘may differ’ from that in the EU. Furthermore, the assessment of adequacy must be dynamic, with regular automatic reviews and an obligation for a further review if evidence suggests that there are ‘doubts’ on this score; and the general changes in circumstances since the decision was adopted must be taken into account.

The Court then establishes that in light of the importance of privacy and data protection, and the large number of persons whose rights will be affected if data is transferred to a third country with an inadequate level of data protection, the Commission has reduced discretion, and is subject to ‘strict’ standards of judicial review. Applying this test, two provisions of the ‘Safe Harbour’ decision were invalid.

First of all, the basic decision declaring adequate data protection in the USA (in the context of Safe Harbour) was invalid. While such a decision could, in principle, be based on self-certification, this had to be accompanied by ‘effective detection and supervision mechanisms’ ensuring that infringements of fundamental rights had to be ‘identified and punished in practice’. Self-certification under the Safe Harbour rules did not apply to US public authorities; there was not a sufficient finding that the US law or commitments met EU standards; and the rules could be overridden by national security requirements set out in US law.

Data protection rules apply regardless of whether the information is sensitive, or whether there were adverse consequences for the persons concerned. The Decision had no finding concerning human rights protections as regards the national security exceptions under US law (although the CJEU acknowledged that such rules pursued a legitimate objective), or effective legal protection in that context. This was confirmed by the Commission’s review of the Safe Harbour decision, which found (a) that US authorities could access personal data transferred from the EU, and then process it for purposes incompatible with the original transfer ‘beyond what was strictly necessary and proportionate for the purposes of national security’, and (b) that there was no administrative or judicial means to ensure access to the data and its rectification or erasure.

Within the EU, interference with privacy and data protection rights requires ‘clear and precise rules’ which set out minimum safeguards, as well as strict application of derogations and limitations.  Those principles were breached where, ‘on a generalised basis’, legislation authorises ‘storage of all the personal data of all the persons whose data has been transferred’ to the US ‘without any differentiation, limitation or exception being made in light of the objective pursued’ and without any objective test limiting access of the public authorities for specific purposes. General access to the content of communications compromises the ‘essence’ of the right to privacy. On these points, the Court expressly reiterated the limits on mass surveillance set out in last year’s Digital Rights judgment (discussed here) on the validity of the EU’s data retention Directive. Furthermore, the absence of legal remedies in this regard compromises the essence of the right to judicial protection set out in the EU Charter. But the Commission made no findings to this effect.

Secondly, the restriction upon DPAs taking action to prevent data transfers in the event of an inadequate level of data protection in the USA (in the context of Safe Harbour) was also invalid. The Commission did not have the power under the data protection Directive (read in light of the Charter) to restrict DPA competence in that way. Since these two provisions were inseparable from the rest of the Safe Harbour decision, the entire Decision is invalid. The Court did not limit the effect of its ruling.

Comments

The Court’s judgment comes to the same conclusion as the Advocate-General’s opinion, but with subtle differences that I’ll examine as we go along. On the first issue, the Court’s finding that DPAs must be able to stop data flows if there is a breach of EU data protection laws in a third country, despite an adequacy Decision by the Commission, is clearly the correct result. Otherwise it would be too easy for the standards in the Directive to be undercut by means of transfers to third countries, which the Commission or national authorities might be willing to accept as a trade-off for a trade agreement or some other quid pro quo with the country concerned.

As for the Court’s discussion of the architecture of the data protection rules, the idea of the data protection authorities having to go to a national court if they agree with the complainant that the Commission’s adequacy decision is legally suspect is rather convoluted, since it’s not clear who the parties would be: it’s awkward that the Commission itself would probably not be a party.  It’s unfortunate that the Court did not consider the alternative route of the national DPA calling on the Commission to amend its decision, and bringing a ‘failure to act’ proceeding directly in the EU courts if it did not do so. In the medium term, it would be better for the future so-called ‘one-stop shop’ system under the new data protection Regulation (see discussion here) to address this issue, and provide for a centralised process of challenging the Commission directly.

It’s interesting that the CJEU finds that there can be a national decision on adequacy of data flows to third States, since there’s no express reference to this possibility in the Directive. If such a decision is adopted, or if Member States apply the various mandatory and optional exceptions from the general external data protection rules set out in Article 26 of the data protection Directive, much of the Court’s Schrems ruling would apply in the same way by analogy. In particular, national DPAs must surely have the jurisdiction to examine complaints about the validity of such decisions too. But EU law does not prohibit the DPAs from finding the national decisions invalid; the interesting question is whether it obliges national law to confer such power upon the DPAs. Arguably it does, to ensure the effectiveness of the EU rules. Any decisions on these issues could still be appealed to the national courts, which would have the option (though not the obligation, except for final courts) to ask the CJEU to interpret the EU rules.

As for the validity of the Safe Harbour Decision, the Court’s interpretation of the meaning of ‘adequate’ protection in third States should probably be sung out loud, to the tune of ‘We are the World’. The global reach of the EU’s general data protection rules was already strengthened by last year’s Google Spain judgment (discussed here); now the Court declares that even the separate regime for external transfers is very similar to the domestic regime anyway. There must be almost identical degrees of protection, although the Court does hint that modest differences are permissible: accepting the idea of self-certification, and avoiding the issue of whether third States need an independent DPA (the Advocate-General had argued that they did).

It’s a long way from the judgment in Lindqvist over a decade ago, when the Court anxiously insisted that the external regime should not be turned into a copy of the internal rules; now it’s insistent that there should be as little a gap as possible between them. With respect, the Court’s interpretation is not convincing, since the word ‘adequate’ suggests something less than ‘essentially equivalent’, and the EU Charter does not bind third States.

But having said that, the American rules on mass surveillance would violate even a far more generous interpretation of the meaning of the word ‘adequate’. It’s striking that (unlike the Advocate-General), the Court does not engage in a detailed interpretation of the grounds for limiting Charter rights, but rather states that general mass surveillance of the content of communications affects the ‘essence’ of the right to privacy. That is enough to find an unjustifiable violation of the Charter.

So where does the judgment leave us in practice? Since the Court refers frequently to the primary law rules in the Charter, there’s no real chance to escape what it says by signing new treaties (even the planned TTIP or TiSA), by adopting new decisions, or by amending the data protection Directive. In particular, the Safe Harbour decision is invalid, and the Commission could only replace it with a decision that meets the standards set out in this judgment. While the Court refers at some points to the inadequacy or non-existence of the Commission’s findings in the Decision, it’s hard to believe that a new Decision which purports to claim that the American system now meets the Court’s standards would be valid if the Commission were not telling the truth (or if circumstances subsequently changed).

What standards does the US have to meet? The Court reiterates even more clearly that mass surveillance is inherently a problem, regardless of the safeguards in place to limit its abuse. Indeed, as noted already, the Court ruled that mass surveillance of the content of communications breaches the essence of the right to privacy and so cannot be justified at all. (Surveillance of content which is targeted on suspected criminal activities or security threats is clearly justifiable, however). In addition to a ban on mass surveillance, there must also be detailed safeguards in place. The US might soon be reluctantly willing to address the latter, but it will be even more unwilling to address the former.

Are there other routes which could guarantee that external transfers to the USA take place, at least until the US law is changed? In principle, yes, since (as noted above) there are derogations from the general rule that transfers can only take place to countries with an ‘adequate’ level of data protection. A first set of derogations is mandatory (though Member States can have exceptions in ‘domestic law governing particular cases’): where the data subject gives ‘consent unambiguously’; where the transfer is necessary to perform a contract with (or in the interest of) the data subject, or for pre-contractual relations; where it’s ‘necessary or legally required on important public interest grounds’, or related to legal claims; where it’s ‘necessary to protect the vital interests of the data subject’; or where it’s made from a public register. A second derogation is optional: a Member State may authorise transfers where the controller offers sufficient safeguards, possibly in the form of contractual clauses. The use of the latter derogation can be controlled by the Commission.

It's hard to see how the second derogation can be relevant, in light of the Court's concerns about the sufficiency of safeguards under the current law. Nor is US access to the data necessary in relation to a contract, to protect the data subject, or in connection with legal claims. An imaginative lawyer might argue that a search engine (though not a social network) is a modern form of public register; but the record of an individual's use of a search engine is not.

This leaves us with consent and public interest grounds. Undoubtedly (as the CJEU accepted) national security interests are legitimate, but in the context of defining adequacy, they do not justify mass surveillance or insufficient safeguards. Would the Court's ruling in Schrems still apply fully to this derogation, notwithstanding the inadequate level of protection in the third country? Or would it apply in a modified way, or not at all?

As for consent, the CJEU ruled last year in a very different context (credibility assessment in LGBT asylum claims) that the rights to privacy and dignity could not be waived in certain situations (see discussion here). Is that also true to some extent in the context of data protection? And what does unambiguous consent mean exactly? Most people believe they are consenting only to (selected) people seeing what they post on Facebook, and are dimly aware that Facebook might do something with their data to earn money. They may be more aware of mass surveillance since the Snowden revelations; some don't care, but some (like Max Schrems) would like to use Facebook without such surveillance. Would people have to consent separately to mass surveillance? In that case, would Facebook have to be accessible for those who did not want to sign that separate form? Or could a 'spy on me' clause be added at the end of a long (and unread) consent form? Consent is also a crucial issue in the context of the purely domestic EU data protection rules.

The Court’s ruling has addressed some important points, but leaves an enormous number of issues open. It’s clear that it will take a long time to clear up the mess left from this particular poorly supervised party.  


Barnard and Peers: chapter 9

Photo credit: www.businessinsider.com