Saturday 20 April 2024

‘Trusted’ rules on trusted flaggers? Open issues under the Digital Services Act regime

Alessandra Fratini and Giorgia Lo Tauro, Fratini Vergano European Lawyers

Photo credit:  SynLLOER, via Wikimedia Commons

 

1 Introduction

The EU’s Digital Services Act (DSA) institutionalises the tasks and responsibilities of ‘trusted flaggers’, key actors in the online platform environment that have existed, with roles and functions of variable scope, since the early 2000s. The newly applicable regime fits with the rationale and aims pursued by the DSA (Article 1): establishing a targeted set of uniform, effective and proportionate mandatory rules at Union level to safeguard and improve the functioning of the internal market (recital 4 in the preamble), with the objective of ensuring a safe, predictable and trusted online environment, within which fundamental rights are effectively protected and innovation is facilitated (recital 9), and for which responsible and diligent behaviour by providers of intermediary services is essential (recital 3). This article, after retracing the main regulatory initiatives and practices at EU level that paved the way for its adoption, looks at the DSA’s trusted flaggers regime and at some open issues that remain to be tested in practice.

 

2 Trusted reporters: the precedents paving the way to the DSA

The activity of flagging can be generally recognised as that of third parties reporting harmful or illegal content to intermediary service providers that hold that content in order for them to moderate it. In general terms, it refers to flaggers that have “certain privileges in flagging”, including “some degree of priority in the processing of notices, as well as access to special interfaces or points of contact to submit their flags”. This, in turn, poses issues in terms of both the flaggers’ responsibility and their trustworthiness since, as rightly noted, “not everyone trusts the same flagger.”

In EU law, the notion of trusted flaggers can be traced back to Directive 2000/31 (the ‘e-Commerce Directive’), the foundational legal framework for online services in the EU. The Directive exempted intermediaries from liability for illegal content they managed if they fulfilled certain conditions: under Articles 12 (‘mere conduit’), 13 (‘caching’) and 14 (‘hosting’) – now replaced by Articles 4-6 DSA – intermediary service providers were liable for the information stored at the request of the recipient of the service if, upon becoming aware or being made aware of any illegal content, they did not remove it or disable access to it “expeditiously” (also recital 46). The Directive encouraged mechanisms and procedures for removing and disabling access to illegal information to be developed on the basis of voluntary agreements between all parties concerned (recital 40).

This conditional liability regime encouraged intermediary services providers to develop, as part of their own content moderation policies, flagging systems that would allow them to rapidly treat notifications so as not to trigger liability. The systems were not imposed as such by the Directive, but adopted as a result of the liability regime provided therein.

Following the provisions of Article 16 of the Directive, which supports the drawing up of codes of conduct at EU level, in 2016 the Commission launched the EU Code of Conduct on countering illegal hate speech online, signed by the Commission and several service providers, with others joining later on. The Code is a voluntary commitment made by signatories to, among others, review the majority of the flagged content within 24 hours and remove or disable access to content assessed as illegal, if necessary, as well as to engage in partnerships with civil society organisations, to enlarge the geographical spread of such partnerships and enable them to fulfil the role of a ‘trusted reporter’ or equivalent. Within the context of the Code, trusted reporters are entrusted to provide high quality notices, and signatories are to make information about them available on their websites.

Subsequently, in 2017 the Commission adopted the Communication on tackling illegal content online, to provide guidance on the responsibilities of online service providers in respect of illegal content. The Communication suggested criteria based on respect for fundamental rights and democratic values, to be agreed by the industry at EU level through self-regulatory mechanisms or within the EU standardisation framework. It also recognised the need to strike a reasonable balance between ensuring a high quality of notices coming from trusted flaggers, the scope of additional measures that companies would take in relation to trusted flaggers, and the burden of ensuring these quality standards, including the possibility of withdrawing the trusted flagger status in cases of abuse.

Building on the progress made through the voluntary arrangements, the Commission adopted Recommendation 2018/334 on measures to effectively tackle illegal content online. The Recommendation establishes that cooperation between hosting service providers and trusted flaggers should be encouraged, in particular, by providing fast-track procedures to process notices submitted by trusted flaggers, and that hosting service providers should be encouraged to publish clear and objective conditions for determining which individuals or entities they consider as trusted flaggers. Those conditions should aim to ensure that the individuals or entities concerned have the necessary expertise and carry out their activities as trusted flaggers in a diligent and objective manner, based on respect for the values on which the Union is founded.

While the 2017 Communication and 2018 Recommendation are the foundation of the trusted flaggers regime institutionalized by the DSA, further initiatives took place in the run-up to it.

In 2018, further to extensive consultations with citizens and stakeholders, the Commission adopted a Communication on tackling online disinformation, which acknowledged once again the role of trusted flaggers in fostering the credibility of information and shaping inclusive solutions. Platform operators agreed on a voluntary basis to set self-regulatory standards to fight disinformation and adopted a Code of Practice on disinformation. The Commission’s 2020 assessment revealed significant shortcomings, including inconsistent and incomplete application of the Code across platforms and Member States and the lack of an appropriate monitoring mechanism. As a result, in May 2021 the Commission issued its Guidance on Strengthening the Code of Practice on Disinformation, containing indications on the dedicated functionality for users to flag false and/or misleading information (p. 7.6). The Guidance also aimed at developing the existing Code of Practice towards a ‘Code of Conduct’ as foreseen in (now) Article 45 DSA.

Further to the Guidance, in 2022 the Strengthened Code of Practice on Disinformation was signed and presented by 34 signatories that had joined the revision process of the 2018 Code. For signatories that are VLOPs (very large online platforms), the Code aims to become a mitigation measure and a Code of Conduct recognised under the co-regulatory framework of the DSA (recital 104).

Finally, in the context of provisions/mechanisms defined before the DSA, it is worth mentioning Article 17 of Directive 2019/790 (the ‘Copyright Directive’), which draws upon Article 14(1)(b) of the e-Commerce Directive on the liability limitation for intermediaries and acknowledges the pivotal role of rightholders when it comes to flagging unauthorised use of their protected works. Under Article 17(4), in fact, “[i]f no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have: (a) made best efforts to obtain an authorisation, and (b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information; and in any event (c) acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b)” (emphasis added).

 

3 Trusted flaggers under the DSA

The DSA has given legislative legitimacy to trusted flaggers, granting formal (and binding) recognition to a practice that had thus far developed on a voluntary basis.

According to the DSA, a trusted flagger is an entity that has been awarded that status, within a specific area of expertise, by the Digital Services Coordinator (DSC) of the Member State in which it is established, because it meets certain legal requirements. Providers of online platforms must process and decide upon notices from trusted flaggers concerning the presence of illegal content on their platform as a priority and without undue delay. That requires providers of online platforms to take the necessary technical and organisational measures with regard to their notice and action mechanisms. Recital 61 sets out the rationale and scope of the regime: notices of illegal content submitted by trusted flaggers, acting within their designated area of expertise, are to be treated with priority by providers of online platforms.

The regime is mainly outlined in Article 22.

Eligibility requirements

Article 22(2) sets out the three cumulative conditions to be met by an applicant wishing to be awarded the status of trusted flagger: 1) expertise and competence in detecting, identifying and notifying illegal content; 2) independence from any provider of online platforms; and 3) diligence, accuracy and objectivity in how it operates. Recital 61 clarifies that only entities – whether public in nature, non-governmental organisations, or private or semi-public bodies – can be awarded the status, not individuals. Accordingly, (private) entities representing only individual interests, such as brands or copyright owners, are not excluded from accessing the trusted flagger status. However, the DSA displays a preference for industry associations representing their members’ interests applying for the status, which appears to be justified by the need to ensure that the added value of the regime (the fast-track procedure) is maintained, with the overall number of trusted flaggers awarded under the DSA remaining limited. As clarified by recital 62, the rules on trusted flaggers should not be understood to prevent providers of online platforms from giving similar treatment to notices submitted by entities or individuals that have not been awarded trusted flagger status, or from otherwise cooperating with other entities, in accordance with the applicable law. Nor does the DSA prevent online platforms from using mechanisms to act quickly and reliably against content that violates their terms and conditions.

The award of the status

Under Article 22(2), the trusted flagger status is awarded by the DSC of the Member State in which the applicant is established. Unlike under the voluntary trusted flagger schemes, which are a matter for individual providers of online platforms, the status awarded by a DSC must be recognised by all providers falling within the scope of the DSA (recital 61). Accordingly, DSCs shall communicate to the Commission and to the European Board for Digital Services details of the entities to which they have awarded the status of trusted flagger (and whose status they have suspended or revoked – Article 22(4)), and the Commission shall publish and keep up to date such information in a publicly available database (Article 22(5)).

Under Article 49(3), Member States were to designate their DSCs by 17 February 2024; the Commission makes available the list of designated DSCs on its website. The DSCs, which are responsible for all matters relating to supervision and enforcement of the DSA, shall ensure coordination of such supervision and enforcement throughout the EU. The European Board for Digital Services, among other tasks, shall be consulted on the Commission’s guidelines on trusted flaggers, to be issued “where necessary”, and on matters “dealing with applications for trusted flaggers” (Article 22(8)).

The fast-track procedure

Article 22(1) requires providers of online platforms to deal with notices submitted by trusted flaggers as a priority and without undue delay. In doing so, it refers to the generally applicable rules on notice and action mechanisms under Article 16. On the priority to be granted to trusted flaggers’ notices, recital 42 invites providers to designate a single electronic point of contact, which “can also be used by trusted flaggers and by professional entities which are under a specific relationship with the provider of intermediary services”. Recital 62 explains further that the faster processing of trusted flaggers’ notices depends, among other things, on the “actual technical procedures” put in place by providers of online platforms. The organisational and technical measures necessary to ensure a fast-track procedure for processing trusted flaggers’ notices remain a matter for the providers of online platforms.

Activities and ongoing obligations of trusted flaggers

Article 22(3) requires trusted flaggers to regularly (at least once a year) publish detailed reports on the notices they submitted, make them publicly available and send them to the awarding DSCs. The status of trusted flagger may be revoked or suspended if the required conditions are not consistently upheld and/or the applicable obligations are not correctly fulfilled by the entity. The status can only be revoked by the awarding DSC following an investigation, either on the DSC’s own initiative or on the basis of information received from third parties, including providers of online platforms. Trusted flaggers are thus granted the possibility to react to, and fix where possible, the findings of the investigation (Article 22(6)).

On the other hand, if trusted flaggers detect any violation of the DSA provisions by the platforms, they have the right to lodge a complaint with the DSC of the Member State where they are located or established, according to Article 53. Such a right is granted not only to trusted flaggers but to any recipient of the service, to ensure effective enforcement of the DSA obligations (also recital 118).

The role of the DSCs

With the DSA it becomes mandatory for online platforms to ensure that notices submitted by designated trusted flaggers are given priority. While online platforms retain discretion to enter into bilateral agreements with private entities or individuals they trust and whose notices they want to process with priority (recital 61), they must give priority to entities that have been awarded trusted flagger status by the DSCs. From the platforms’ perspective, the DSA ‘reduces’ their burden in terms of decision-making responsibility by shifting it to the DSCs, but ‘increases’ their burden in terms of executive liability (for the implementation of measures ensuring the mandated priority). From the reporters’ perspective, the DSA imposes a set of (mostly) harmonised requirements to be met in order to be awarded the status by a DSC – once, and for all platforms – and to maintain that status afterwards.

While the Commission’s guidelines are in the pipeline, some DSCs have proposed and adopted guidelines to assist potential applicants with the requirements for the award of the trusted flagger status. Among others, the French ARCOM published “Trusted flaggers: conditions and applications” on its website; the Italian AGCOM published for consultation its draft “Rules of Procedure for the award of the trusted flagger status under Article 22 DSA”; the Irish Coimisiún na Meán published the final version of its “Application Form and Guidance to award the trusted flagger status under Article 22 DSA”; as did the Austrian KommAustria, the Danish KFST and the Romanian ANCOM. The national guidelines have been developed following exchanges with the other authorities designated as DSCs (or about to be so), with a view to ensuring a consistent and harmonised approach to the implementation of Article 22. Indeed, the published guidelines are largely comparable.

 

4 Open issues

While the DSA’s regime is in its early stages and no trusted flagger status has been awarded yet, some of its merits have already been acknowledged: it has standardised existing practices, harmonised eligibility criteria, complemented special regimes – such as the one set out in Article 17 of the Copyright Directive – confirmed the cooperative approach between stakeholders, and formalised the role of trusted flaggers as special entities in the context of notice and action procedures.

At the same time, the DSA’s regime leaves some issues open – notably the respective roles of trusted flaggers and of other relevant actors in tackling illegal or harmful content online, such as end users and reporters that reach bilateral agreements with the platforms – which remain to be addressed in practice for the system to work effectively.

The role of trusted flaggers vis-à-vis end users

While the DSA contains no specific provision on the role of trusted flaggers vis-à-vis end users, some of the national guidelines published by the DSCs require that the applicant entity, as part of the condition relating to due diligence in the flagging process, indicate whether it has mechanisms in place allowing end users to report illegal content to it. In general, applicants have to indicate how they select the content they monitor (which may include end users’ notices) and how they ensure that they do not unduly concentrate their monitoring on any one side and apply appropriate standards of assessment taking all legitimate rights and interests into account. In practice, the organisation and management of the relationship with end users (onboarding procedures, collection and processing of their notices, etc.) are left to the trusted flaggers. For example, some organisations (such as those part of the INHOPE network, operating under the current voluntary schemes) offer hotlines through which the public can report illegal content found online to them, including anonymously.

Although it is clear from the DSA that end users retain the right to submit notices directly to online platforms (Article 16) with no duty to notify trusted flaggers, as well as the right to autonomously lodge a complaint against platforms (Article 53) and to claim compensation for damages (Article 54), it remains unclear whether, in practice, it will be more convenient for end users to rely on specialised trusted flaggers so that their notices are processed more expeditiously – in other words, whether the regime provides sufficient incentives, at least for some end users, to go the trusted flaggers’ way. On the other hand, it remains unclear to what extent applicant entities will actually be ‘required’ to put in place effective mechanisms allowing end users to report illegal or harmful content to them – in other words, whether the due diligence requirements will imply the trusted flaggers’ review of end users’ notices within their area of expertise.

From another perspective, in connection with the reporting of illegal content, trusted flaggers may come across infringements by the platforms, as any recipient of online services. In such cases, Article 53 provides the right to lodge a complaint with the competent DSC, with no difference being made between complaints lodged respectively by trusted flaggers and by end users. If ‘priority’ is to be understood as the main feature of the privileged status granted to trusted flaggers when flagging illegal content online to platforms, a question arises about the possibility of granting them a corresponding priority before the DSCs when they complain about an infringement by online platforms. And in this context, one may wonder whether lodging a complaint to the DSC on behalf of end users might also fall within the scope of action of trusted flaggers (to the extent of claiming platforms’ abusive practices such as shadow banning, recital 55).

The role of trusted flaggers vis-à-vis other reporters

The DSA requires online platforms to put in place notice and action mechanisms that are “easy to access and user-friendly” (Article 16) and to provide recipients of the service with an internal complaint-handling system (Article 20). However, as noted above, these provisions concern all recipients, with no difference in treatment for trusted flaggers. Although their notices are granted priority by virtue of Article 22, which leaves platforms free to choose the most suitable mechanisms, the DSA says nothing about ‘how much priority’ should be guaranteed to trusted flaggers with respect to notices filed not only by end users, but (also – and especially) by other entities or individuals with whom platforms have agreements in place.

In this respect, guidance would be welcome as to the degree of prevalence that platforms are expected to give trusted flaggers’ notices compared to other trusted reporters’, as would a clarification as to whether the nature of the content may influence such prevalence. From the trusted flaggers’ perspective, there should be a rewarding incentive to engage in a role that comes with the price tag of ongoing obligations.

 

5 Concluding remarks

While the role of trusted flaggers is not new when it comes to tackling illegal content online, the tasks newly entrusted to the DSCs in this context are. This results in a different allocation of responsibilities among the actors involved, with the declared aims of ensuring harmonisation of best practices across sectors and territories in the EU and better protection for users online. Some open issues, such as those put forward above, appear at this stage to be relevant, in particular for ensuring that the trusted flaggers’ mechanism effectively works as an expeditious remedy against harmful and illegal content online. It is expected that the awaited Commission guidelines under Article 22(8) DSA will shed light on those issues. In their absence, there is a risk that the cost-benefit analysis – with the costs being certain and the benefits in terms of actual priority uncertain – might make the “trusted flagger project” unattractive for potential applicants.

Podchasov v. Russia: the European Court of Human Rights emphasizes the importance of encryption

Mattis van ’t Schip & Frederik Zuiderveen Borgesius*

*Both authors work at the iHub and the Institute for Computing and Information Sciences, Radboud University, The Netherlands - mattis.vantschip[at]ru.nl & frederikzb[at]cs.ru.nl

Photo credit: Gzen92, via Wikimedia Commons

 

In a judgment of February 2024 in the case Podchasov v. Russia, the European Court of Human Rights emphasised the role of encryption in protecting the right to privacy. The judgment comes at a time when encryption is central to many legal debates across the world. In this blog post, we summarise the main findings of the Court and add some reflections.

Summary

Podchasov, the applicant in the case, is a user of Telegram. Russia listed Telegram as an ‘internet communication organiser’ in 2017. This listing meant that Telegram, under Russian law, had to store all communications data for one year, and the contents of communications for six months. The obligation concerns all electronic communications (e.g., text, video, sound) received, transmitted, or processed by internet users. Law enforcement authorities could request access to that data, including the decryption key where communications are encrypted (para 6 of the judgment).

Telegram is a messaging app that users often employ because of its end-to-end encrypted messaging. For instance, Telegram is an important communication channel for Ukrainians to receive updates about the current war. End-to-end encryption means, roughly summarised, that only the sender and the intended recipient can access the content of the encrypted data, in this case Telegram messages.
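The end-to-end principle can be illustrated with a deliberately simplified sketch (our illustration, not drawn from the judgment, and not real cryptography): a relaying server only ever handles ciphertext, while the two endpoints that share a key can recover the plaintext.

```python
# Toy illustration of the end-to-end principle (NOT secure cryptography):
# the server relays only ciphertext; only endpoints holding the shared
# key can recover the message.
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from the shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))


decrypt = encrypt  # XORing twice with the same keystream restores the plaintext

shared_key = b"secret known only to Alice and Bob"
message = b"meet at noon"

ciphertext = encrypt(shared_key, message)          # what the server relays/stores
assert ciphertext != message                       # the server sees only ciphertext
assert decrypt(shared_key, ciphertext) == message  # the recipient recovers the message
```

In real end-to-end systems the shared key is established through an authenticated key exchange between the two devices rather than pre-shared, which is why a provider that never holds the key cannot comply with a decryption order without redesigning the system.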

In July 2017, the Russian Federal Security Service (FSB) required Telegram to disclose data that would allow the FSB to decrypt messages of suspects of ‘terrorism-related’ activities (para 7 of the judgment). Telegram refused, saying that it was impossible to give the FSB access to encrypted messages without creating a backdoor to its encryption that malicious actors might also exploit. Because of Telegram’s refusal, a District Court in Moscow ordered the nationwide blocking of Telegram in Russia; Telegram nevertheless remains operational in Russia today. The applicants challenged the disclosure order, but their challenge was dismissed by several Moscow courts. Finally, the applicants lodged their complaint with the European Court of Human Rights, arguing that Russia had violated their right to private life under Article 8 of the European Convention on Human Rights (ECHR).

Russia is no longer a member of the Council of Europe. The Council of Europe ended Russia’s membership in March 2022, in response to Russia’s invasion of parts of Ukraine. Six months later, on 16 September 2022, Russia ceased to be a party to the European Convention on Human Rights. Nevertheless, the Court delivered this judgment, holding that it has jurisdiction over the case because the alleged violations occurred before the date on which Russia ceased to be a party to the Convention.

The Court quotes several documents that are not directly related to the ECHR, including surveillance case law of the Court of Justice of the European Union, a report on the right to privacy in the digital age by the Office of the United Nations High Commissioner for Human Rights, a statement by Europol and the European Union Agency for Cybersecurity, and an Opinion of the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB).

The surveillance scheme at issue before the European Court of Human Rights resembles earlier Russian surveillance schemes, which the Court found to violate Article 8 ECHR for failing to provide adequate and sufficient safeguards against indiscriminate interferences with the right to private life. Those earlier holdings thus also apply in the present case. Unlike in previous judgments about surveillance in Russia, however, the Court here discusses the role of encryption in protecting the right to private life.

On encryption, the Court holds that the case concerns only the encryption scheme of ‘secret chats’. Telegram offers ‘cloud chats’ by default, with ‘custom-built server-client encryption’, but users can also choose to activate ‘secret chats’, which are end-to-end encrypted (para 5 of the judgment). The Court explicitly excludes any consideration of ‘cloud chats’, as the complaints concern only the ‘secret chats’. The scope of the Court’s holdings is therefore limited to end-to-end encryption as used for secret chats.

The applicants and several privacy-related civil society organisations argued that decryption of end-to-end encrypted messages would affect all users of the system concerned, in this case Telegram, as technical experts can never create an encryption backdoor for a specific instance, case, or user. The Russian government did not refute these statements. The Court therefore holds that the Russian authorities interfered with the right to private life under Article 8 ECHR. The Court then investigates whether the Russian authorities can justify this interference, for instance because it is necessary in a democratic society. The Court analyses encryption in this light.

The Court emphasises that encryption contributes to ensuring the enjoyment of the right to private life and other fundamental rights, such as freedom of expression:

[T]he Court observes that international bodies have argued that encryption provides strong technical safeguards against unlawful access to the content of communications and has therefore been widely used as a means of protecting the right to respect for private life and for the privacy of correspondence online. In the digital age, technical solutions for securing and protecting the privacy of electronic communications, including measures for encryption, contribute to ensuring the enjoyment of other fundamental rights, such as freedom of expression (…) (para 76).

The Court adds that encryption is important to secure one’s data and communications:

Encryption, moreover, appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information. This should be given due consideration when assessing measures which may weaken encryption. (para 76)

The Court observes that legal decryption obligations cannot be specific or limited to certain circumstances: once a messaging provider creates a backdoor, there is a backdoor to all communications on the messaging platform:

Weakening encryption by creating backdoors would apparently make it technically possible to perform routine, general and indiscriminate surveillance of personal electronic communications. Backdoors may also be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications. The Court takes note of the dangers of restricting encryption described by many experts in the field. (para 77)

Based on the above-mentioned arguments, the Court holds that the requirement to decrypt communications cannot be ‘regarded as necessary in a democratic society’ (para 80 of the judgment). The Court concludes that Russia breached the right to private life protected in Article 8 ECHR.

Comments

The Podchasov case follows a long global debate about the value of end-to-end encryption in democratic societies. As the Court mentions, end-to-end encryption is valuable for privacy because it enables people to communicate in such a way that third parties cannot access the communication. In this context, experts herald end-to-end encryption for its capacity to support, for instance, journalists in performing their work safely, or historically marginalised groups in expressing themselves freely.

At the same time, some law enforcement agencies consider end-to-end encryption a threat to public safety, as malicious actors can benefit from the privacy provided by secure messaging and similar methods, such as data encryption, too.

For instance, the FBI has been engaged in a long-running battle with Apple over the encryption of iPhones, which several suspects used to keep their phone data private. On each occasion, Apple refused to provide decryption keys or software to the FBI, citing the security risks that such backdoors would create.

The battle between security and privacy is, of course, long-standing, and encryption is now central to it. The European Commission recently joined the debate with its proposal for a Child Sexual Abuse Material Regulation (CSAM proposal). Roughly summarised, the proposal would require communication providers (such as Telegram or WhatsApp) to analyse people’s communications to find, block, and report child sexual abuse material, such as inappropriate pictures. Experts agree that communication providers can only do so if they do not encrypt communications, if they include a type of backdoor, or if they analyse communications on people’s devices before they are encrypted. Experts warn that such on-device analysis can be seen as a kind of backdoor to encrypted communications too. Many civil society organisations, technical experts, and academics oppose the CSAM proposal, and its opponents can be expected to cite this judgment.

The European Court of Human Rights is clear about the role of end-to-end encryption for the right to private life. In one paragraph, the Court states that end-to-end encryption is vital to privacy. The Court bases its reasoning partly on an opinion of the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) which discusses encryption in the context of the above-mentioned CSAM proposal. The Court also refers to responses from civil society organisations, who can present their views to the Court as amici curiae. The Court follows the reasoning of the EDPS, the EDPB, and privacy organisations regarding the conclusion that once encryption is broken, the entire system is no longer secure for its users.

The Court also mentions that encryption is vital to the security of users. Consider, for instance, the importance of data protection in the current privacy context. Without adequate data encryption, people cannot be sure that the data they store in, for instance, cloud storage, is accessible only to them. Encryption therefore also helps against hacking, identity fraud, and data theft (para 76 of the judgment).

The Podchasov case is straightforward: encryption is vital to the protection of the right to privacy. The Court’s clear statements will influence ongoing encryption debates, but the end of the debate is not in sight.

Thursday 21 March 2024

Resistance is futile: the new Eurodac Regulation – part 4 of the analysis of new EU asylum laws

Professor Steve Peers, Royal Holloway University of London

Photo credit: Rachmaninoff, via Wikimedia Commons

Just before Christmas, the European Parliament and the Council (the EU body consisting of Member States’ ministers) reached a deal on five key pieces of EU asylum legislation, concerning asylum procedures, the ‘Dublin’ system on responsibility for asylum applications, the ‘Eurodac’ database supporting the Dublin system, screening of migrants/asylum seekers, and derogations in the event of crises. These five laws joined the previously agreed revised laws on qualification of refugees and people with subsidiary protection, reception conditions for asylum-seekers, and resettlement of refugees from outside the EU. Taken together, all these laws are intended to be part of a ‘package’ of new or revised EU asylum laws.

I’ll be looking at all these agreements for new legislation on this blog in a series of blog posts (see the agreed texts here), unless the deal somehow unravels. This is the fourth post in the series, on the new Regulation on Eurodac – the system for collecting personal data to attempt to ensure the operation of the EU’s asylum laws. The previous blog posts in the series concerned the planned new qualification Regulation (part 1), the revised reception conditions Directive (part 2), and the planned new Regulation on resettlement of refugees (part 3).

As noted in the earlier posts in this series, all of the measures in the asylum package could in principle be amended or blocked before they are adopted, except for the previous Regulation revising the powers of the EU asylum agency, which was separated from the package and adopted already in 2021. I will update this blog post as necessary in light of developments. (On EU asylum law generally, see my asylum law chapter in the latest edition of EU Justice and Home Affairs Law; the summary of the current Regulation below is adapted from that chapter).

The new Eurodac regulation: background

There have been two previous ‘phases’ in development of the Common European Asylum System: a first phase of laws mainly adopted between 2003 and 2005, and a second phase of laws mainly adopted between 2011 and 2013. The 2024 package will, if adopted, in effect be a third phase, although for some reason the EU avoids calling it that.

The initial Eurodac Regulation (the ‘2000 Regulation’) was adopted before the first phase of the CEAS, back in 2000, to supplement the Dublin Convention on the allocation of responsibility for asylum applications, which also predated the first phase. The 2000 Regulation was subsequently replaced in 2013, as part of the second phase of the CEAS (the ‘2013 Regulation’).

The 2013 Regulation requires fingerprints of all asylum seekers over fourteen to be taken and transmitted to a ‘Central Unit’, which compares them with other fingerprints previously (and subsequently) transmitted to see whether the asylum seeker has made multiple applications in the EU. (So did the 2000 Regulation: the difference is that since 2013 Member States have to take fingerprints not only of those who apply for refugee status, but also of those who apply for subsidiary protection, a separate type of international protection for those who do not qualify for refugee status; for the definitions, see Part 1 in this series).

Similarly, Member States have to take the fingerprints of all third-country nationals who crossed a border irregularly, and transmit them to the Central Unit to check against fingerprints subsequently taken from asylum seekers. The reason for this is that one of the grounds to determine responsibility for asylum applications under the Dublin rules is which Member State the person concerned first entered without authorisation. The deadline to take the fingerprints is within seventy-two hours after an application for international protection is made, or after apprehension in connection with irregular crossing of an external border.

Member States may also take fingerprints of third-country nationals ‘found illegally present’ and transmit them to the Central Unit to see whether such persons had previously applied for asylum in another Member State. If so, it is possible that the other Member State is obliged to take them back under the Dublin rules. But note that under the 2013 Regulation, it is not mandatory to take and transmit the fingerprints of this group, and the Eurodac system does not store them. Law enforcement agencies and Europol have also been given access to Eurodac data, subject to certain conditions.

For a transitional period under the 2000 Regulation, the data on recognized refugees was blocked once the refugee status of a person was granted. However, the 2013 Regulation unblocked this data. Conversely, the 2013 Regulation reduced the time that the Eurodac system retained data on irregular border crossers (cutting that time from two years to eighteen months).

Unlike most other EU asylum laws, the Eurodac Regulation has not been the subject of case law of the CJEU, so it is not necessary to look at case law to fully understand its meaning.

The UK and Ireland opted in to the two previous Eurodac Regulations, although the 2013 Regulation ceased to apply to the UK (along with the Dublin rules) at the end of the Brexit transition period. Ireland opted out of the proposal for the 2024 Regulation, although it could still choose to opt in to that Regulation after it has officially been adopted. Denmark is covered by Eurodac as part of its treaty with the EU on applying Dublin and Eurodac; there are also treaties with Norway and Iceland, and Switzerland (with a protocol on Liechtenstein) to apply the Dublin rules and Eurodac too.

As with all the new EU asylum measures, each must be seen in the broader context of all the others – which I will be discussing over the course of this series of blog posts. The Eurodac Regulation has always had close links with the EU’s Dublin rules on allocation of responsibility for asylum applications; the new version of the Regulation will have further links with other EU law on asylum, as discussed below.

The legislative process leading to the agreed text of the revised Eurodac Regulation started with the Commission proposal in 2016, as a response to the perceived refugee crisis. A revised version was tabled in 2020, as part of the relaunch of all the asylum talks. The negotiations on that proposal, first among EU governments (the Council) and then between the Council and the European Parliament, have been convoluted, but have now ended. This blog post will look only at the final text, leaving aside the politics of the negotiations. My analysis focusses on how the new Eurodac Regulation will differ from the 2013 Regulation, the main details of which were already summarised above.

Basic issues

Like other measures in the asylum package, the application date of the 2024 Eurodac Regulation is two years after adoption (so in spring 2026). However, as discussed below, there will be special rules on the application of the Regulation to temporary protection (ie the application of the EU temporary protection Directive on initial short term protection in the event of mass influxes, so far applied only once, to those fleeing the invasion of Ukraine).

The 2024 Eurodac Regulation first of all expands the list of the purposes of Eurodac – previously support of the Dublin system, with some law enforcement access to data – to include general support for the asylum system, assistance with applying the Resettlement Regulation (on which, see part 3 of this series), control of irregular migration, detection of secondary movement, child protection, identification of persons, supporting the EU travel authorization system and the Visa Information System, the production of statistics to support ‘evidence-based policy making’, and to assist with implementing the temporary protection Directive. The clause on ‘purpose limitation’ related to the use of personal data is far broader, although it is now accompanied by a general human rights safeguard.

Next, the type of data collected is expanded beyond fingerprints to include ‘biometric data’, now defined as including ‘facial image data’. Other types of data will also be newly collected. The obligation to take data is more clearly highlighted in the 2024 Regulation, along with both further safeguards and yet also ‘the possibility to use means of coercion as a last resort’.

The age of collecting data from children will be reduced from 14 to 6. While there will be special safeguards for children, these make uncomfortable reading. For instance, ‘[n]o form of force shall be used against minors to ensure their compliance with the obligation’, and yet ‘a proportionate degree of coercion may be used against minors to ensure their compliance’.

New provisions in the 2024 Regulation aim to secure interoperability with other EU databases – namely the ETIAS travel authorization system and the Visa Information System. Also, the use of Eurodac to generate immigration statistics will be hugely expanded.

Data will still be collected for Eurodac from asylum-seekers and those crossing the external border irregularly, with additional data on changes of status of the data subject. Also, data will now be collected and stored on a mandatory basis (rather than being checked against the database, but not stored, on an optional basis), for irregular migrants, to assist in identifying them. Finally, data will now be collected for the first time as regards four more situations: EU resettlement under the new Resettlement Regulation; national resettlement; search and rescue; and temporary protection, under the EU temporary protection Directive. However, the extension to temporary protection cases only applies to future hypothetical uses of the temporary protection Directive – not to those covered by the 2022 application of that Directive to those fleeing the invasion of Ukraine.

Most of this data will be automatically compared to data already in Eurodac. Data on asylum-seekers will be stored (as before) for ten years; data on irregular border crossers will now be stored for five years, rather than 18 months; and there are varying periods of storage (usually five years) for data newly collected under the 2024 Regulation. However, for temporary protection cases, the storage period is linked to the period of temporary protection under EU law, which is currently three years maximum. As before, data will be erased in advance if the person concerned obtains citizenship of a Member State, but not (for irregular border crossers) if they leave or obtain a residence permit. Conversely, data on those who obtain international protection will be kept for the usual ten year period, rather than (as before) deleted three years after obtaining protection.

Finally, as for data protection, the huge increase in data being collected is regulated by largely the same standards as before (adapted to include the collection and comparison of facial images, as well as the collection of data on security risks), except it is now possible to transfer data to non-EU countries for the purposes of return.

Comments

There was no Commission impact assessment specifically for the amendments to the Eurodac Regulation, and the rationales for the amendments offered in the preamble to the Regulation are rather sweeping. However, there is more detail in the explanatory memoranda to the Commission’s proposals. The 2016 proposal argues for Eurodac to be used not just to facilitate application of the Dublin system, but also as a tool for application of immigration control more broadly. In the Commission’s view, this justified the use of the system to identify those who were staying irregularly – including more comparisons of data. Collecting data on younger children was justified on grounds of safeguarding, to trace parents if they were separated. The collection of facial images and other new types of data was justified on grounds of facilitating identification. Data on relocation should be collected in order to transfer an asylum seeker to the correct Member State under the Dublin rules. The ten year period of retaining asylum seeker data, even if a claim was successful, was justified in case those with status moved without authorization and had to be returned to the Member State responsible. A longer period of retaining data of border crossers, without advance deletion in as many cases, was justified in case it was necessary for return purposes.

As for the revised 2020 proposal, the Commission argued that it was necessary to be consistent with other new rules on search and rescue, resettlement, changes to the main Dublin rules, screening, listing rejected applications (so that the rules on repeat applications could be applied), and internal security risks (because this rules out relocation under the Dublin rules).

Many of these rationales – which in any event are not based on detailed statistical analysis, in the absence of a specific impact assessment from the Commission (a vague staff working document does not contain any further detail) – can be questioned. Was it necessary to include future temporary protection cases, given that an ad hoc solution was found for the current use of the temporary protection Directive? In particular, was it necessary to include such cases, considering the original rationale of Eurodac, if (as in the current use of temporary protection) the Dublin rules are de facto disapplied to temporary protection beneficiaries?

Given that the system is extended to temporary protection cases, why does the logic of a short period for retaining data in such cases not apply more broadly? Or at least, why is the logic of retaining data on resettled persons for five years – because long-term residence status is likely then – not applied equally to other people with protection status, or a residence permit? (The idea – raised during negotiations – of deleting data once people obtained long-term residence status was unfortunately dropped). This is a subset of the more general flaw with the whole package of amendments: the determination to strengthen the application of negative mutual recognition (ie Member States recognizing each other’s refusal of applications), without strengthening positive mutual recognition (recognizing successful applications in other Member States) in parallel, and without considering the cases where those with protection status have a justified reason to move to another Member State (see the threshold set out in the Ibrahim judgment, for instance), or the prospects of long-term residents using their right under EU law (the long-term residents' Directive) to move to another Member State if they meet the criteria to do so. Finally, no rationale is offered for using the Eurodac system for returns in light of the expansion of the Schengen Information System to the same ends (expanded data on entry bans, data on return decisions), which is already applicable in practice.

Overall then, the new Eurodac system will collect much more data, on many more people, for far more purposes, and for much longer – and with an inadequate explanation for many of these changes.



Sunday 10 March 2024

Climate case against ING: what does it mean for monetary policy?

Annelieke Mooij, Assistant Professor, Tilburg University

Photo credit: Sandro Halank, via Wikimedia Commons

The Dutch climate organization Milieudefensie had threatened to start a case against the Dutch bank ING. On 14 February 2024, ING responded that it will not give in to the demands of the climate organization, making it highly likely that ING’s climate policy will face legal challenges. Prima facie the case seems without EU relevance, as it concerns a national climate organization suing a national bank. Though the case may seem to lack European relevance, the opposite is true: the decision by the Dutch judiciary may have serious European consequences, in particular for the Monetary Union, and may even bypass the independence of the ECB.

Milieudefensie v. ING

The climate organization (plaintiff) asks the court to order ING to take four concrete steps. The first is to align its climate policy with the 1.5°C target stipulated by the Paris Agreement. The second is that ING reduce its own emissions by 48% CO2 and 42% CO2e by 2030. The third is that it stop financing large corporate clients who have adverse climate impacts. The fourth and final demand is that ING engage in discussion with the plaintiff about how to substantiate these demands. These are serious demands, raising the question of how likely the Dutch court is to grant them.

Whilst the court summons is not yet finalized, it is likely that the plaintiff will refer to two earlier cases. The first is an earlier case won against the Dutch state: in the Urgenda case, the Dutch Supreme Court ruled that the state had to reduce its emissions in accordance with the Paris Agreement. The Supreme Court did not state how the state had to comply, simply that it had to comply. The case sent a strong message to the state that it had the obligation to meet the climate agreements. Urgenda provided the foundation for the second case.

The second case that the plaintiff will likely reference is Milieudefensie v. Shell, in which an appeal is still pending. The case concerned the climate responsibilities of the Dutch oil concern Shell. The court decided that Royal Dutch Shell (RDS) was responsible for the emission reductions of the group’s global activities. In this capacity it had to reduce its global emissions by 45% by 2030 in comparison to 2019 levels. This was considered a revolutionary case, as it is one of the first in which the judiciary recognized climate duties binding a legal person. The legal foundation was article 6:162 of the Dutch Civil Code, a tort law provision. The court considered that Shell’s emission reduction plans were not concrete enough; Shell thereby violated an unwritten duty of care. Prima facie the case against ING therefore looks strong. There are, however, two obstacles to overcome.

The first, minor, challenge concerns the impact of ING’s financial products on its clients. In the case against Shell, the court considered that the parent company RDS determined the policy of the entire group (para. 4.4.4). It therefore had the influence to change the group’s policies and direction. Arguably a bank can have a similar steering influence upon the direction of its clients. In particular, ING may refuse loans intended to buy polluting machines; on the other hand, banks can approve loans for investment in greener operations. Loans can thereby have a powerful impact upon the direction of a client, whereas operating credit is less likely to affect the course of a business. To demand that all financing be discontinued to corporate clients who do not have a climate plan gives a broad interpretation to the duty of care of the banking sector, in particular as the Dutch judge will have to weigh the right to a clean environment against the right to operate a business.

The second difficulty is that, unlike those of RDS, ING’s emissions result (in)directly from a varied investment portfolio. As ING stated in its response, merely measuring emissions can lead to a negative climate result: an increased investment in heat pumps increases the emission portfolio of ING but can decrease global emissions. The emissions in the Shell case, by contrast, were the direct result of the company’s own activities, and redirecting its efforts from fossil fuels to sustainable energy will have a positive impact upon the fight against climate change. In line with this argument, Ferrari and Landi argue with regard to central banks that investments should not be made by simply investing in the lowest emitters. Instead of this so-called “best-in-universe” approach, banks should invest in companies that do well within their substitute production group: the so-called best-in-class method of investment. Through this approach, global demand can be shifted to green products. Unlike in the Shell case, therefore, the court will have to decide between a blanket reduction of emissions, which may have a negative environmental impact, and a best-in-class approach. The difficulty is that the court would then have to provide instructions not on what goals to achieve but on how to achieve emission reductions, something the court refrained from doing in both Shell and Urgenda. The decision on methodology may have a large impact on the European Central Bank’s future purchasing programmes.

 

Impact on the Monetary Union

The right to (private) life codified in the European Convention on Human Rights (ECHR) played a significant role in these cases. Article 52(3) of the EU Charter states that the ECHR provides a minimum level of protection: the CJEU may therefore award a higher level of protection, but not lower than the ECHR. The interpretation of the ECHR therefore has a large influence on the fundamental rights protected within the EU Charter of Fundamental Rights.

The judgements of national judges are not binding on the European Court of Human Rights (ECtHR). However, when there appears to be a consensus among the majority of member states, the ECtHR considers there to be common ground. The existence of common ground decreases the margin of appreciation for the member states. The Urgenda case directly involved an appeal to human rights against the state, specifically the right to life (article 2) and to private life (article 8). Similar cases have been successfully tried in Ireland and France. The ECtHR has yet to rule on the climate change cases pending before it; there seems, however, a likelihood of a positive outcome for the plaintiffs. The CJEU will have to consider the scope of these cases and can decide on the same or a higher standard of protection. There is, however, a difference with the case of ING.

The cases against the states directly invoked human rights. In the Shell case the Dutch judge applied fundamental rights only indirectly, when interpreting the duty of care, and will likely do the same in the case of ING. This provides a weaker signal of common ground to the ECtHR that the right to a clean environment includes specific obligations for banks and other legal persons. It will take more national judges reaching similar judgements to convince the ECtHR that there is common ground. The court in the Shell case, however, included the UN Guiding Principles in its considerations. These principles create a large common understanding throughout the ECHR member states. The states’ obligation to enforce direct obligations for legal persons through their courts is therefore likely to be accepted by the ECtHR. If so, it cannot be ignored, especially by the largest bank in the EU: the European Central Bank (ECB).

The ECB has a tiered mandate. Its primary objective is to maintain price stability, which has been defined as keeping inflation below but close to two percent over the medium term. To achieve this goal, the Treaty on the Functioning of the European Union (TFEU) has granted the ECB a high level of independence. This means that neither the EU nor national legislators can determine or influence how the ECB executes its monetary policy. The ECB is therefore likely to argue that it cannot be influenced as to how it conducts its monetary policy, even with regard to climate change. The ECB, however, is not immune from other primary or secondary legislation. In the OLAF case the CJEU considered that the ECB falls within the EU legal framework: its independence only protects the ECB against political influence when it conducts monetary policy.

In addition to its primary mandate, the ECB has a secondary mandate to abide by. This mandate includes “[…] the sustainable development of the Earth”. The ECB has to comply with its secondary mandate insofar as this does not violate its primary mandate. The ECB currently interprets this to mean that when it has a choice in how to achieve its price stability objectives, the secondary mandate is guiding. The secondary mandate, however, has various goals. Some of these goals can be achieved simultaneously, but some are independent or even substitute goals. This makes it currently difficult to pinpoint the ECB’s legal obligations under the secondary mandate. When it comes to climate change, however, the ECB considers itself bound by the Paris Agreement. In addition, the ECB has to abide by the EU Charter of Fundamental Rights. It is, however, unclear what precise duties these instruments impose on the ECB when it carries out its private sector funding programmes. The ECB states that it is trying to decarbonize its corporate sector portfolios by using a method called tilting: the green bonds in the sector are given preference over the brown bonds. The difficulty is that when green bonds run out, the ECB will continue by purchasing brown bonds if it considers this necessary for its monetary aim. The case of Milieudefensie v. ING can provide clear guidance with regard to the ECB’s fundamental rights climate responsibilities in its corporate sector programmes. The Dutch court’s reasoning can provide the balance between a bank’s climate obligations and the right to operate a business, and this reasoning can be incorporated by the ECB.

The ECB makes choices with regard to how (intensely) to pursue price stability. These choices should be guided by fundamental rights considerations, such as climate change, as well as economic needs. The ING decision can create a guiding framework on how to balance these different interests. However, before such guidelines can be considered binding, more national cases need to be tried, or the ING case would have to reach the ECtHR. There is still quite a road to be travelled.

Friday 8 March 2024

The Dillon Judgment, Disapplication of Statutes and Article 2 of the Northern Ireland Protocol/Windsor Framework

Anurag Deb, PhD researcher, Queens University Belfast, and Colin Murray, Professor of Law, Newcastle Law School

Photo credit: Aaronward, via Wikimedia Commons

Extensive provisions of an Act of Parliament have been disapplied by a domestic court in the UK for the first time since Brexit. That is, in itself, a major development, and one which illustrates the power of the continuing connections between the UK and EU legal orders under the Withdrawal Agreement. It is an outcome which took many by surprise, even though we have argued at length that the UK Government has consistently failed to recognise the impact of Article 2 in rights cases. So here is the story of this provision of the Withdrawal Agreement, the first round of the Dillon case, and why understanding it will matter for many strands of the current government’s legislative agenda.

Article 2 of the Windsor Framework, as the UK Government insists on calling the entirety of what was the Northern Ireland Protocol (even though the Windsor Framework did nothing to alter this and many other provisions), is one of the great survivors of this most controversial element of the Brexit deal. Whereas other parts of the Brexit arrangements for Northern Ireland have been repeatedly recast, the wording of this provision has remained remarkably consistent since Theresa May announced her version of the Brexit deal in November 2018 (although it was Article 4 in that uncompleted version of the deal).

The provision was tied up relatively early in the process. Indeed, it suited the UK Government to be able to claim that rights in Northern Ireland were being protected as part of the Withdrawal Agreement, to enable them to avoid claims that Brexit was undermining the Belfast/Good Friday Agreement of 1998. Although the 1998 Agreement makes limited mention of the EU in general, it devotes an entire chapter to rights and equality issues, and EU law would play an increasing role with regard to these issues in the years after 1998.   

The UK Government made great play of explaining, in 2020, that its Article 2 obligations reflected its ‘steadfast commitment to upholding the Belfast (“Good Friday”) Agreement (“the Agreement”) in all its parts’ (para 1). Even as it appeared ready to rip up large portions of the Protocol, in the summer of 2021, the Article 2 commitments continued to be presented as ‘not controversial’ (para 37). It might more accurately have said that these measures were not yet controversial, for no one had yet sought to use this provision to challenge the operation of an Act of Parliament. In a powerful example of Brexit “cake-ism”, the UK Government loudly maintained that Article 2 was sacrosanct only because it had convinced itself that the domestic courts would not be able to make much use of it.

Little over a month ago, the Safeguarding the Union Command Paper all but sought to write the rights provision out of the Windsor Framework (para 46):

The important starting point is that the Windsor Framework applies only in respect of the trade in goods - the vast majority of public policy is entirely untouched by it. … Article 2 of the Framework does not apply EU law or ECJ jurisdiction, and only applies in the respect of rights set out in the relevant chapter of the Belfast (Good Friday) Agreement and a diminution of those rights which arises as a result of the UK’s withdrawal from the EU.

Article 2 is a complex and detailed provision, by which (read alongside Article 13(3)) the UK commits that the law in Northern Ireland will mirror developments in EU law regarding the six equality directives listed in Annex 1 of the Protocol and, where other aspects of EU law protect aspects of the rights and equality arrangements of the relevant chapter of the 1998 Agreement, that there will be no diminution of such protections as a result of Brexit. But notwithstanding the complexity of these multi-speed provisions, by no construction can it be tenable to suggest that ‘the Windsor Framework applies only in respect of the trade in goods’.

The Dillon judgment marks the point at which the Government’s rhetoric is confronted by the reality of the UK’s Withdrawal Agreement obligations, and the extent to which they are incorporated into domestic law by the UK Parliament’s Withdrawal legislation. The case relates to the controversial Northern Ireland Troubles (Legacy and Reconciliation) Act 2023, heralded by the UK Government as its vehicle for addressing the legal aftermath of the Northern Ireland conflict. This Act, in preventing the operation of civil and criminal justice mechanisms in cases relating to the conflict, providing for an alternate body for addressing these legacy cases (Independent Commission for Reconciliation and Information Recovery) and requiring this body to provide for immunity for those involved in causing harms during the conflict, has provoked widespread concern within and beyond Northern Ireland.

The Act has been the subject of challenges under the Human Rights Act 1998 and an inter-state action against the UK launched before the European Court of Human Rights by Ireland. In the interest of brevity, however, this post will explore only the challenges under the Protocol/Windsor Framework. This is not the first case to invoke Article 2 (see here and here for our analysis of earlier litigation to which the UK Government should have paid more attention), but this remains the most novel element of the litigation, testing the operation of this element of the Withdrawal Agreement. It also offers the most powerful remedy directly available to those challenging the Act: disapplication of a statute to the extent that it conflicts with those elements of EU law which this provision preserves.

These requirements are explained by the operation of Article 4 of the Withdrawal Agreement, which spells out that elements of the Withdrawal Agreement, and the EU law which continues to be operative within the UK as a result of that Agreement, will continue to be protected by the same remedies as are applicable to breaches of EU law by Member States. Section 7A of the European Union (Withdrawal) Act 2018 reflected this obligation within the UK’s domestic jurisdictions, as accepted by the UK Supreme Court in the Allister case (see here for analysis). For Mr Justice Colton, his task could thus be summarised remarkably easily: ‘any provisions of the 2023 Act which are in breach of the WF [Windsor Framework] should be disapplied’ (para 527). All he had to do, therefore, was assess whether there was a breach.

The rights of victims are a prominent element of the Rights, Safeguards and Equality of Opportunity chapter of the 1998 Agreement. These rights were, in part, given protection within Northern Ireland law through the operation of the Victims’ Directive prior to Brexit and, insofar as this EU law is being implemented, through the operation of the EU Charter of Fundamental Rights with regard to its terms. The key provision of the Victims’ Directive is the guarantee in Article 11 that victims must be able to seek a review of a decision not to prosecute, a right clearly abridged where immunity from prosecution is provided for under the Legacy Act. The breach of this provision alone was therefore sufficient to require the disapplication of extensive elements of the Legacy Act (sections 7(3), 8, 12, 19, 20, 21, 22, 39, 41, 42(1)) (para 608):

It is correct that article 11(1) and article 11(2) both permit procedural rules to be established by national law. However, the substantive entitlement embedded in article 11 is a matter for implementation only and may not be taken away by domestic law. The Directive pre-supposes the possibility of a prosecution. Any removal of this possibility is incompatible with the Directive.

The UK Government cannot claim to have been blindsided by this conclusion. They explicitly acknowledged the specific significance of the Victims’ Directive for the 1998 Agreement commitments in their 2020 Explainer on Article 2 (para 13). Moreover, in the context of queries over the application of Article 2 to immigration legislation, the UK Government insisted that in making provisions for victims the 1998 Agreement’s ‘drafters had in mind the victims of violence relating to the conflict in Northern Ireland’. Exposed by these very assertions, the Government hoped to browbeat the courts with a vociferous defence of the Legacy Act (going so far as to threaten consequences against Ireland for having the temerity to challenge immunity arrangements which raised such obvious rights issues).

The strange thing about the Dillon case, therefore, is not that the court disapplied swathes of the Legacy Act. This outcome is the direct consequence of the special rights protections that the UK agreed for Northern Ireland as part of the Withdrawal Agreement. The strange thing is that Mr Justice Colton arrived at this position so readily, in the face of such determined efforts by the UK Government to obscure the extent of the rights obligations to which it had signed up. In the context of the UK’s full membership of the EEC and its successors, it took many years and many missteps to get to the Judicial Committee of the House of Lords applying the remedy of disapplication of statutory provisions which were in conflict with EU law (or Community law, as it then was) in Factortame (No. 2). The Northern Ireland High Court was not distracted from recognising that these requirements remain the same within Northern Ireland’s post-Brexit legal framework when it comes to non-diminution of rights as a result of Brexit.

Indeed, the Court could not be so distracted. As we set out above, once Colton J determined that relevant sections of the Legacy Act had breached the Victims’ Directive, the judge had no discretion in the matter of disapplying the offending sections. This marks perhaps one of the strangest revelations to emerge from Brexit. Disapplication of inconsistent domestic law (of whatever provenance) as a remedy extends across much of the Withdrawal Agreement, covering any and every aspect of EU law which the Agreement makes applicable in the UK. This fact – spelled out in the crisp terms of Article 4 of the Withdrawal Agreement – was nowhere to be found in the 1972 Accession Treaty by which the UK became part of the (then) EEC. This is unsurprising, considering that the primacy of Community law over domestic law was then a relatively recent judicial discovery. In the decades since, however, the principle of EU law primacy and the requirement that inconsistent domestic laws be disapplied have become a firm and irrevocable reality. Small wonder, then, that the UK Government accepted it as a price to pay for leaving Brussels’ orbit without jeopardising the 1998 Agreement – no matter how it has since spun the notion of “taking back control”.

While the Government might have its own interests in attempting to obscure the clarity of Article 2 and its attendant consequences, Dillon is by some measure a wake-up call for Westminster. The report of the Joint Committee on Human Rights’ scrutiny of the Bill which became the Legacy Act contained no reference to the Windsor Framework, notwithstanding consistent work by the statutory Human Rights and Equality Commissions in Northern Ireland (the NIHRC and ECNI) to highlight the issue. Dillon marks not only some of the most extensive disapplication of primary legislation in UK legal history, but also the first such outcome after Brexit. And Dillon is only the beginning. It will be followed in the weeks to come by a challenge to the Illegal Migration Act 2023 by the NIHRC, where there are clear arguments that relevant EU law has been neglected. The Government, and Westminster in general, have not woken up to the legal realities of the Brexit deal. Dillon makes clear that Parliament needs to pay far greater attention to the Windsor Framework: not as a legal curio of merely provincial relevance, but as a powerful source of law which impacts law-making and laws which are intended to apply on a UK-wide basis.