Monday, 21 November 2022

The EU Commission’s proposal on Media Freedom Regulation


 


Lorna Woods, Professor of Internet Law, University of Essex

 

Photo credit: Bin im Garten, via Wikimedia Commons

 

In her 2021 State of the Union address, EU Commission President von der Leyen stated:

 

Media companies cannot be treated as just another business. Their independence is essential. Europe needs a law that safeguards this independence – and the Commission will deliver a Media Freedom Act in the next year.

 

The resulting Proposal sits against a network of existing rules – notably the long-standing Audiovisual Media Services Directive (AVMSD) and the e-Commerce Directive, as well as the recently agreed Digital Services Act (DSA) and Digital Markets Act (DMA). It will be accompanied by a Recommendation. The Proposal is a significant step; the Commission is entering new regulatory terrain here. This move indicates concerns not just about the state of the media but about public discourse more generally. How, though, has the Commission sought to translate this high-level concern into specific rules?

 

Outline of the Proposal

 

The Proposal can be said to be divided into roughly five elements (in addition to the definitions and scope), reflecting the fact that the concerns around media freedom have different aspects and need a response that itself is multifaceted. 

 

1. Media Freedoms

 

The first is about media freedom (the Recommendation is also relevant to this issue, as it focuses on internal safeguards for editorial independence and ownership transparency). The Proposal introduces rights and obligations for media service providers in Chapter II. Specifically, it grants them the right to exercise their “economic activities in the internal market without restrictions other than those allowed under [EU] law” (Article 4(1)). Article 4(2) then provides more detail. It specifies that Member States are prohibited from:

 

-          interfering with editorial policies and decisions by media service providers (Article 4(2)(a));

 

-          detaining, sanctioning, intercepting, subjecting to surveillance or search and seizure or inspecting media service providers, their employees, their families or their premises “on the ground that they refuse to disclose information on their sources, unless this is justified by an overriding requirement in the public interest” (Article 4(2)(b)); and

 

-          deploying spyware in any device or machine used by media service providers, their employees or their families other than in certain narrowly-defined circumstances (Article 4(2)(c)).

 

According to the Q&A document, this is to “protect them from unjustified, disproportionate and discriminatory national measures”. There are provisions dealing specifically with “public service media providers”, reflecting their “societal role as a public good” (Recital 14) but also their “institutional proximity to the State, which puts them at peculiar risk of interference” (Recital 18): they are obliged to provide “in an impartial manner a plurality of information and opinions to their audiences, in accordance with their public service mission” (Article 5(1)), although what “plurality” means for these purposes is not defined. It seems that public service media cannot be self-declared as such – the definition of “public service media provider” requires the media service either to be “entrusted with a public service mission under national law” or to receive national funding for the fulfilment of such a mission (Art 2(3)).

 

There are some ownership transparency obligations on media service providers who “provid[e] news and current affairs content”. They must make available the provider’s name and contact details, and details relating to certain shareholders and beneficial owners (Article 6(1)). They must also “take measures that they deem appropriate with a view to guaranteeing the independence of individual editorial decisions” (Article 6(2)).

 

The Proposal also sets out the right of the audience (“recipients of media services”) to “receive a plurality of news and current affairs content, produced with respect for editorial freedom of media service providers, to the benefit of the public discourse” (Article 3(1)). Recital 11, however, clarifies that this right “does not entail any correspondent obligation on any given media service provider to adhere to standards not set out explicitly by law.”

 

2. VLOPs

 

Secondly, there are obligations on Very Large Online Platforms (VLOPs), which are in addition to those in the DSA. These provide additional rights to media service providers on VLOPs. Specifically, VLOPs must provide certain mechanisms to deal with the media (including the application of requirements from the Platform to Business Regulation – see Article 17 MFA and Article 11 of the P2B Regulation).

 

3. Media Regulation and Institutions

 

A third element concerns the institutional set-up of media regulation. There are provisions around cooperation between national regulators. The Proposal expands the scope of the existing European Regulators Group for Audiovisual Media Services (ERGA), replacing it with the European Board for Media Services (EBMS), which – with the European Commission – is to ensure the consistent application of the MFA and the wider EU media law framework (perhaps in a similar fashion to the EDPB in relation to the GDPR). Specifically, the EBMS will:

 

-          advise the Commission on the implementation of the Regulation, for example, providing expertise on regulatory, technical, or practical aspects concerning the identification of audiovisual media services of general interest under Article 7a of the AVMSD;

-          mediate between the regulatory bodies of the Member States;

-          assess areas of interest such as the functioning of media markets and the potential impact of national measures; and

-          take a position if the functioning of the internal market appears to be affected.

 

4. Media Markets

 

A fourth element deals with the market and includes requirements for Member States to put in place rules for assessing media market concentrations (Articles 20-22). In addition to setting rules for when concentrations must be notified, Member States should also set out criteria for assessing the impact of a concentration on media pluralism and editorial independence – an assessment distinct from that under competition law.

 

5. Resources and Audience Measurement

 

Finally, there are rules relating to the measurement of audiences and to criteria for allocating resources to media outlets. The Commission notes that the ‘opaque and unfair allocation of economic resources’ contributes not only to an uneven playing field but also to internal market barriers. The “opacity of and biases inherent to proprietary systems of audience measurement skew advertising revenue flows”, and the way state advertising revenue is allocated is also problematic. The Proposal therefore mandates transparent, non-discriminatory and objective measurement and allocation of resources.

 

Comment

 

Competence

 

The Proposal builds on the Commission’s Rule of Law Report 2020 and the European Democracy Action Plan, and seems to aim at some worthy objectives. Despite this, the Proposal is not framed as directly protecting democracy. Instead, it frames the issues as media companies facing

 

“obstacles hindering their operation and impacting investment conditions in the internal market such as different national rules and procedures related to media freedom and pluralism”.

 

This would seem to be aimed at tackling concerns around competence and the fact that culture is typically a matter for Member States, not the EU. To be sure, there are often special ownership and merger regimes for media undertakings, but these are often based not on economic considerations but on non-market concerns. The emphasis on the impact that disparate rules have on media undertakings is used to justify the use of Article 114 TFEU as the legal basis for the proposal. This re-emphasises that this is not a specific piece of media or cultural policy – fields in which the EU has limited competence and no competence to harmonise (Article 167 TFEU) – but market regulation. The Commission has pushed the extent of its harmonising powers before; while the AVMSD may have started off dealing with restrictions on cross-border advertising, it has also acquired a distinct cultural aspect (eg EU quotas). In this proposal, it is not clear how the measures listed actually map on to addressing the internal market problems identified in the Explanatory Memorandum. The extent to which a harmonising measure has to deal directly with the eradication of barriers to trade, and the degree to which it may be directed at other policy issues, has been the subject of a certain amount of jurisprudence – as the examples of Titanium Dioxide (Case C-300/89), Tobacco Advertising I (Case C-376/98), Swedish Match (Case C-210/03) and Vodafone (Case C-58/08) illustrate – and of a cottage industry in legal commentary. At first glance, this proposal lies quite close to the boundary. It is noteworthy that the justification given in Recital 6 – that the audience should be able to receive cross-border information flows – is linked to the satisfaction of the requirement in Article 11 of the Charter of Fundamental Rights. Yet the Charter is not in itself a legal base for harmonising legislation. It is likely that this issue of competence may lead to legal challenge if the measure is enacted.

 

Place in the Digital Regulation Landscape

 

There will be a question of the interplay between this measure and others impacting on publicly available content. The measure is to a large extent aware of this and cross-refers to some of these relevant measures. It replicates some definitions from the AVMSD, albeit slightly tweaked. For instance, the definition of ‘programme’ is in its base element the same as that in the AVMSD (Art 1(b)) but excludes the reference in the AVMSD to “including feature-length films, video clips, sports events, situation comedies, documentaries, children's programmes and original drama”. It is also notable that the definition of “media service” moves the focus of the service on to the provision of programmes or press publications (Article 2(1)); traditionally, publications might have been thought to be goods! The definition of audiovisual media service remains the same as in the AVMSD. The terms “editorial decision” (Art 2(8)) and “editorial responsibility” (Art 2(9)) seem to be aimed at drawing the boundary of these terms in the same place as the analogous terms in the AVMSD, though the language has been revised to reflect the broader scope of the Proposal.

 

The Proposal also notes the currently limited scope for ERGA to take action; at present it is limited to audiovisual media services only. The development of the EBMS, however, follows the approaches taken in the DSA and also found in the EU’s approach to disinformation. Extending ERGA’s remit beyond audiovisual media services brings into question the historic difference in approach between broadcasting (and subsequently video on demand) and the print media, even in their online formats. It has long been accepted that regulation of broadcast entities is legitimate (even if different justifications might be given for that regulation), whereas the press has typically been subject to self-regulation. Giving ERGA (or the EBMS, as it would become under the Proposal) a role starts to challenge that settlement. It is worth reminding ourselves that the national regulatory bodies making up ERGA must meet certain independence requirements (and ERGA itself emphasises the importance of independence – as well as adequate resources!); under the Proposal, these independent bodies might start to have oversight over the press (in the areas it covers). Again, this is a sensitive topic.

 

Media Independence

 

The Proposal does contain some important provisions that should benefit the maintenance of media independence – though of course the inclusion of these provisions recognises the distinctive nature of the media and the important role they play in an informed, democratic society. There are specific provisions on editorial independence, and for public service media providers Member States will be under an obligation to ensure they “have adequate and stable financial resources to fulfil their public service remit. These resources shall be such that editorial independence is preserved” (Article 5).

 

This requirement for sufficient funding brings into law a principle long found in the Council of Europe recommendations in this area. Indeed, EU state aid law has also long recognised the need for State support (and the definition of public service media to a large extent reflects the position under Article 106(2) TFEU). How this is to be calculated or assessed is, however, not specified in the Proposal (the Recitals merely noting that a multi-year funding model is desirable – see Recital 18) and may cause tensions given the different levels of resources available and funding models used across the various Member States. The Recitals are anxious to emphasise that this obligation does “not affect the competence of Member States to provide for the funding of public service media”, though there would seem to be a shift from the permissive regime envisaged by Protocol 29 to the mandatory rule envisaged here. Currently, Member States may provide such funding (subject to competition law and state aid rules in particular); this Proposal suggests that in future Member States must do so.

 

Moreover, the Proposal introduces obligations so that senior management is appointed according to transparent, non-discriminatory and objective procedures. Managers will also have term limits and can only be dismissed if it is determined that they are no longer fulfilling their legal duties. Rules around non-dismissal are commonly found as guarantees of institutional independence for regulators, but are here extended to the media (though the Commission has noted that concerns remain regarding the independence of some regulators – Rule of Law Report, section 3.3 – despite the provisions introduced by the 2018 amendments to the AVMSD).

 

The specific obligations in Article 4(2) follow the lines set down in standard freedom of expression case law concerning journalists – notably the protection of journalists’ sources and the importance of journalists’ communications remaining confidential, as noted in Recital 16. In this light, the prohibition of spyware in Article 4(2)(c) seems to be a specific response to recent scandals revealing the use of these technologies.

 

Transparency

 

The lack of transparency in media ownership has been seen as an issue specifically in relation to assessing plurality of the media, as well as for users seeking to assess likely bias in the information and opinions published by a media outlet – a point recognised in Recital 19. This was an issue on which there was little action in the individual Member States. The Commission’s Rule of Law report also noted that “[t]he transparency of media ownership continues to present on average a medium risk across Member States, due to a lack of effectiveness of legal provisions and to the fact that information is provided only to public bodies, but not to the public” (3.3). Against this background, the requirement to give information to the public is a step forward; it might be questioned how effective it will be, however, in the case of highly complex corporate structures. Moreover, the transparency obligations are limited to those providing news and current affairs content. This term, however, is not defined in the Proposal – nor is it defined in the AVMSD. There is a question as to whether the rules apply only to those whose purpose is to provide news and current affairs, or whether they include providers whose offering includes news and current affairs. If so, how big a proportion of the offering must news and current affairs constitute to trigger the obligation? This of course assumes we know what news and current affairs comprises; does the term encompass, for example, celebrity gossip? Broader aspects are contained in the Recommendation and are therefore not binding.

 

Rules on VLOPs

 

It is unclear what the obligations on VLOPs add to those in the Platform to Business Regulation (“P2B Regulation”) – which could apply to VLOPs anyway and may indeed apply much more broadly than to VLOPs – or how the relationship between the two measures might be managed.

 

VLOPs are likely to satisfy the definition of “online intermediation service providers” within the meaning of the P2B Regulation and therefore owe certain obligations to “business users”. It seems also likely that media service providers using VLOPs (or other platforms) to reach their audiences would constitute such “business users”, though perhaps some citizen journalists might fall outside this definition. Having said that, would “citizen journalists” fall within the definition of a media service for the purposes of the Proposal? ‘Services’ within the TFEU are limited to economic activity, as Recital 7 to the Proposal recognises. It specifically notes that

 

[t]his definition should exclude user-generated content uploaded to an online platform unless it constitutes a professional activity normally provided for consideration (be it of financial or of other nature).

 

This might adversely affect charitable foundations and the like, by contrast with influencers. Note, however, that the recital specifically excludes “[c]orporate communication and distribution of informational or promotional materials for public or private entities”.

 

Article 17(3) requires complaints lodged by media organisations to be dealt with “with priority” and “without undue delay”, yet Article 11 of the P2B Regulation already requires online intermediation service providers to handle complaints “swiftly and effectively”. It is hard to see what added benefit the requirement in Article 17 regarding “undue delay” brings – indeed, it might be seen as a lower standard than “swiftly”. The obligation to give media entities priority does seem to suggest that their complaints be dealt with in some sort of differentiated way. This could be justified by the public interest in news and its perishable nature; it seems less defensible, however, if such claims – no matter their merit – are automatically dealt with ahead of other serious claims. While there might be specified time limits for dealing with certain sorts of content (notably terrorism), prioritising journalism leaves the victim of revenge porn, for example, relatively unprotected. This may of course be the nature of a legislative measure dealing with one type of content; specifying time scales that are not comparative in nature (‘with priority’ is implicitly comparative, whereas ‘swiftly’, for example, is not) could avoid that problem. Insofar as the Proposal envisages a separate mechanism for media entities, there is a risk of confusion as to which mechanisms for dispute resolution – those in the DSA or those envisaged here – should be used.

 

There is also a concern about the definition of the media services which receive the benefit of this special treatment, as it covers what have been termed ‘self-declared media’. This recalls the debates during the DSA’s legislative process over creating a media exemption, which was ultimately rejected. The concern is that a wide range of actors could self-declare as media entities for the purposes of this clause – perhaps benefitting those who spread disinformation.

 

VLOPs are also required to allow their users to customise the audiovisual media offer (subject to Art 7a AVMSD) (Article 19). It is not clear to what extent this overlaps with Article 27 DSA, which provides for recommender system transparency, and Article 38 DSA, which requires an option for recommender systems not based on profiling.

 

Media Concentration

 

In its Rule of Law Report, the Commission notes that the media market is at risk from high levels of concentration. This seems to be a consequence of the dominance of online platforms in digital advertising and the adverse impact that has had on the financial stability of many media entities, a situation worsened during COVID. Against this background, some controls on media concentration are required – though that leaves the question of how media entities are expected to survive in an environment dominated by clickbait content, especially when the market dominance of the platforms and similar services is taken into account. This Proposal does not take those services into account. While the DMA provides some controls, it is not clear how the two sets of provisions will work together and whether there would be gaps (think, for example, of a cross-media merger involving a platform and a content provider). Finally, these questions seem to be dealt with at national level; rules may differ between Member States. The EBMS and the Commission are envisaged as having advisory roles. While this may respect divisions of competence, there are questions about equality of enforcement – it remains to be seen (in the light of the experience of the GDPR) how well the co-operation provisions (Article 13, Article 14) work.

 

Resources

 

The final section relates to the measurement of audiences (indirectly affecting resources) and the allocation of State advertising – an important source of revenue in many places. Recital 29 notes that state advertising can be used as a form of covert public subsidy. Article 2(15) defines “State advertising” to mean the “placement, publication or dissemination … of a promotional or self-promotional message, normally in return for payment or for any other consideration, for or on behalf of any national or regional public authority” – this includes state-owned enterprises and other state-controlled entities. This is a broad definition, although there are limits on those subject to the obligation. For example, there is a de minimis threshold: local authorities with fewer than 1 million inhabitants fall outside it. Recital 10 excludes “emergency messages by public authorities which are necessary, for example, in cases of natural or sanitary disasters, accidents or other sudden incidents that can cause harm to individuals”. Although the Proposal envisages that reporting on advertising spend should be monitored, it does not specify by which body.

 

Enforcement

 

One final point to note is that the Proposal does not include a specific mechanism for enforcement; the presumption seems to be that national mechanisms should be relied on (see eg Article 4(3)), and the Q&A document notes that any claimed breaches can be brought before national courts, since the Proposal – as a regulation – is directly applicable. This may, for example, give a route to relief for those subject to spyware – though the route to the CJEU through the national courts may be long, especially when those courts form part of the regime deploying the spyware and may therefore be unlikely to provide adequate relief themselves. It is also unclear what the precise role of the EBMS is in ensuring the consistent application of the Proposal.

 

Conclusion

 

In conclusion, the Proposal marks a significant shift from the current status quo and attempts the important job of safeguarding media independence – independence which has come under increasing threat in recent years. In so doing, however, it pushes at the edges of EU competence. Moreover, some of the measures proposed may prove controversial, not least with some Member States, as they seek to support the media against authoritarian regimes that would control them. The passage of this proposal is therefore unlikely to be smooth or easy; whether it achieves its stated aims is yet another question.





Friday, 18 November 2022

The Cyber Resilience Act in the context of the Internet of Things

 



Mattis van ’t Schip, PhD candidate, Radboud University

Image credit: Grafiker61, via Wikimedia Commons

 

In our homes and across industries, the use of Internet of Things (IoT) devices is increasing. These devices integrate hardware and software elements (e.g., a ‘smart’ watch, a WiFi-connected security camera). The cybersecurity of these connected devices is a growing issue. In the ‘Mirai botnet’ attack, hackers took control of thousands of devices and used them together to bring down websites and companies, while other attackers accessed the cash registers of Target supermarkets by hacking into their network-connected air-conditioning systems. As the Target hack shows, attackers can easily access these devices because they are always connected, through WiFi or Bluetooth. Companies and consumers now use billions of IoT devices, which creates an expanding cybersecurity threat. The European legislator struggled with this cybersecurity issue for a long time, as existing legislation (e.g., product safety law) did not sufficiently cover the cybersecurity of IoT devices. A recent legislative proposal, however, now intends to address this legal gap.

On 15 September 2022, the European Commission published the proposal for the Cyber Resilience Act (CRA). The Cyber Resilience Act intends to protect the European Union’s market from insecure products. The Act addresses four central themes, according to Article 1:

1) rules for placing products with digital elements on the European Union’s market to ensure the cybersecurity of such products;

2) essential requirements for the design, development, and production of products with digital elements;

3) requirements for vulnerability handling processes by manufacturers to ensure cybersecurity throughout the whole lifecycle of products with digital elements; and

4) market surveillance and enforcement.

This blog post gives a short overview of the new rules on the cybersecurity of products with digital elements (points 1-3). First, I address the framework of the Act by focusing on its scope and cybersecurity provisions. Second, I shortly examine how the Act fits within and adapts the existing regulatory landscape for the cybersecurity of products with digital elements, especially Internet of Things devices.

Products with digital elements

The Cyber Resilience Act will apply to ‘products with digital elements’. Article 3(1) clarifies that such products can be software, hardware, and remote data processing solutions. The Act thus applies not only to software applications but also to certain hardware objects that are not traditionally thought of as digital (e.g., routers, microcontrollers). A connected security camera is an example of a product with digital elements: it integrates a traditional camera system (the hardware) with software that, for instance, allows users to access the device’s camera from anywhere in the world.

The European Commission hints at IoT devices as the main focus of the Act, but these devices are not the only products in scope. The Commission includes two additional categories of products with digital elements, based on the ‘criticality’ of the products. All ‘critical products with digital elements’ are listed in Annex III and mainly include products which have privileged access to networks or security functions. For example, Annex III includes password managers, identity management software, and network monitoring systems. Such critical systems present a cybersecurity risk, according to Article 3(3), and therefore must adhere to stricter cybersecurity requirements, which I discuss below. An additional category exists for ‘highly critical products with digital elements’, which present even more serious cybersecurity risks (e.g., network management software used by energy providers).

The Commission can amend the list of critical and highly critical products based on the cybersecurity risks those products pose, according to Article 6(2) and 6(5). Criteria for the assessment of those risks include whether the products have privileged access, control access to data, or perform critical trust-based functions in networks or security. The Commission uses additional criteria for highly critical products (e.g., the use of the product within critical sectors). (See also the NIS2 proposal for the cybersecurity requirements of devices employed in those sectors: Proposal for a Directive for a high common level of cybersecurity, which is about to be adopted)

Cybersecurity requirements

For all products with digital elements, the Cyber Resilience Act prescribes baseline cybersecurity requirements. Only products with digital elements that adhere to those requirements can be placed on the European market, similar to earlier IoT-related product rules, such as the Radio Equipment Directive.

The cybersecurity requirements are listed in Annex I Section 1. The requirements must be met on the condition that devices are properly installed, maintained, used, and updated, according to Article 5(1). The provision is not clear on who should actually ensure these pre-conditions. The responsibility could shift between the manufacturer and user based on the action; for example, proper use is most likely a condition for the user, while proper maintenance is a condition for the manufacturer. Article 10(10) seems to indicate that the manufacturer must document the conditions under which the user can ensure proper installation, operation, and use. In a broader sense, these conditions could also indicate that the user, for instance as part of proper installation, should change the default password of their device before using it.

Next to the cybersecurity requirements, manufacturers must comply with certain vulnerability handling requirements, listed in Annex I Section 2. These vulnerability handling requirements address the large number of devices which do not receive sufficient updates during their lifecycle. Without sufficient updates, devices become security threats, as the manufacturers do not ‘patch’ the latest security issues.

Manufacturers must now provide regular security updates which address any vulnerabilities in their products. This obligation exists for the expected lifetime of the product, or up to five years, according to Article 10(6). In addition, the vulnerability handling processes are meant to ensure transparency about the vulnerabilities that manufacturers discover and patch. Here, the Commission aims to solve two problems: a lack of security updates for devices that manufacturers disregard (e.g., because they brought a newer device to the market) and a lack of transparency about any vulnerabilities the manufacturer or third parties find in their products. The latter can put devices from other manufacturers at risk. For example, if company Eppla finds a vulnerability in their Bluetooth protocol and patches it, this patch could help other companies, such as Geeglo, which use the same protocol. If Eppla is not transparent about the vulnerability, it might put Geeglo at risk of security breaches too.

Through the cybersecurity requirements and vulnerability handling processes, the Cyber Resilience Act thus addresses quite a broad range of cybersecurity related issues.

Economic operators

The Cyber Resilience Act introduces product requirements to protect the European Union’s market. Therefore, most of its rules apply to manufacturers that bring devices to the Union’s market. In addition, the rules apply to any other actors, including importers and distributors, that place a product with digital elements on the market with their name or trademark on it, or if they carry out a substantial modification of a product which is already on the market (Article 15). The same condition of a substantial modification applies to any natural or legal person (Article 16). The scope of the Act is thus broad: any entity that brings the product to the market or modifies a product on the market to the extent that it can be considered a ‘new’ product, falls within the scope of the Act.

The rules of the Cyber Resilience Act mostly apply to manufacturers. Article 10 lists several of the most important obligations for manufacturers, most of which also apply to importers and distributors. Manufacturers must primarily ensure security-by-design (Article 10(2)). They must do so by conducting a risk assessment for their device and implementing the results of that assessment throughout the entire production process, from planning to delivery and maintenance. Manufacturers must include certain information, including this risk assessment, in the technical documentation (Article 10(3)). The rules on technical documentation are part of a set of obligations for manufacturers to provide clear and intelligible information to users about different aspects of the device (Article 10(10)).

Finally, Article 10(14) includes an obligation for manufacturers to notify market surveillance authorities (a type of regulatory agency) and users of their product when they cease operations. This obligation might help mitigate a problem in the IoT industry where manufacturers that, for instance, go bankrupt or sell their company to a competitor abandon their existing devices on the market. As a result, consumers are left with devices that no longer receive regular updates or stop working entirely. In some cases, consumers are not even aware of this problem. Because manufacturers must now inform market surveillance authorities and users of this situation, the obligation can lead to a more secure end of service for existing devices on the market.

A new approach

The Cyber Resilience Act will contain the most important cybersecurity requirements for Internet of Things devices. Existing legislation does apply to the cybersecurity of Internet of Things devices, but only under particular conditions.

The closest piece of legislation to the Act is the Radio Equipment Directive (RED), a type of product safety legislation. The Directive establishes requirements for radio equipment before it can be placed on the Union’s market. The approach is thus quite similar to the Cyber Resilience Act: economic operators must comply with specific requirements before they can place their products on the market of the EU.

In terms of cybersecurity requirements, however, the Radio Equipment Directive is much more limited than the Cyber Resilience Act. The Directive contains two main cybersecurity requirements in Article 3(3): 1) radio equipment must ‘not harm the network or its functioning nor misuse network resources’ (3(3)(d)); and 2) radio equipment must contain safeguards to protect the personal data and privacy of its users (3(3)(e)). These cybersecurity requirements also apply to Internet of Things devices, pursuant to a recent Delegated Act from the Commission. These general cybersecurity requirements are much more limited than the list of requirements in the Cyber Resilience Act, which, crucially, also includes requirements for vulnerability handling processes. Recital 15 of the Act notes on these differences: ‘The essential requirements laid down by [the Cyber Resilience Act] include all the elements of the essential requirements referred to in [the Radio Equipment Directive].’ The Cyber Resilience Act will therefore be much more at the forefront of cybersecurity requirements for Internet of Things devices than the Radio Equipment Directive.

The Radio Equipment Directive is quite similar in its product safety provisions; it includes, for example, rules on technical documentation. However, the Cyber Resilience Act includes broader obligations for the manufacturer that focus on cybersecurity, for instance the requirement to notify the market surveillance authorities when they cease their operations. While, at first glance, the Directive might seem partially redundant due to its similarities with the Act, the approach of the two pieces of legislation differs. The Radio Equipment Directive focuses on rules that ensure radio equipment is safe, broadly speaking, when placed on the European Union’s market. These safety requirements are different from cybersecurity requirements. For instance, the Radio Equipment Directive requires devices to ensure access to emergency services, to accommodate users with certain disabilities, and to work with commonly used chargers. The Cyber Resilience Act, instead, focuses fully on the cybersecurity of devices.

The foundation of the Cyber Resilience Act also differs from that of the General Data Protection Regulation, another relevant piece of legislation in the context of cybersecurity for Internet of Things devices. The GDPR applies to the processing of personal data, which only partially overlaps with the security requirements of the Act. The GDPR, foundationally, focuses on protecting people against the misuse of their personal data. The Cyber Resilience Act, like the Radio Equipment Directive, therefore supports the aims of the GDPR with its cybersecurity requirements. The Act notes, in Recital 17, that ‘the essential cybersecurity requirements laid down in this Regulation, are also to contribute to enhancing the protection of personal data and privacy of individuals.’

The Cyber Resilience Act will provide a comprehensive framework for cybersecurity requirements, which supports the aims of similar legislation, such as the Radio Equipment Directive and the General Data Protection Regulation. Therefore, the Act gives substance to the growing number of cybersecurity requirements for Internet of Things devices in currently scattered pieces of legislation.

Conclusion

The Cyber Resilience Act offers a more comprehensive set of cybersecurity requirements for Internet of Things devices than existing legislation. Furthermore, its rules offer answers to many lingering questions on the security of IoT, such as what should happen when manufacturers cease their operations or when new vulnerabilities require updates from the manufacturer.

In relation to existing legislation, the Cyber Resilience Act will provide a comprehensive overview of cybersecurity requirements. Existing cybersecurity-related legislation often contains open norms and applies only to specific operations (e.g., personal data processing under the General Data Protection Regulation). The Cyber Resilience Act will support the aims of this related set of legislation, while offering the primary set of cybersecurity requirements that modern software and hardware must adhere to.

 

Monday, 31 October 2022

Migration in Europe and the Problems of Undercriminalisation

 



By Amanda Spalding, Lecturer in Law, University of Sheffield

Photo credit: Gzen92, via wikicommons media

Introduction

As five million refugees fleeing Ukraine enter Europe, Denmark and the UK prepare for off-shore processing of asylum applications, and Frontex reports that irregular entries to the European Union were up 84% in the first half of 2022, it is difficult to keep up with rapid and ever-changing laws and policies on migration. However, it is important to continue to reflect on the broader legal context in which these developments are situated, especially the human rights framework that will be crucial in providing some level of protection. This protection, though, is far from robust and is increasingly at risk of being undermined by other trends in the law.

The following blog post summarises some of the main themes of my new book, The Treatment of Immigrants in the European Court of Human Rights.

The Criminalisation of Immigration

The criminalisation of immigration has long been noted by scholars across Europe and beyond. The criminalisation of immigration – sometimes called ‘crimmigration’ – refers to the increasing overlap of the criminal justice system and the immigration system. This entwining takes multiple forms, including in the law itself. The legal framework surrounding immigration increasingly draws on the criminal law by creating a huge number of immigration offences. The most basic immigration conduct, such as irregular entry or stay, is widely criminalised in Europe with varying levels of seriousness (see the Country Profiles by the Global Detention Project). For example, the fine for such an offence can be relatively low, as in the Czech Republic and Estonia where maximum fines are below €1,000, whereas in countries such as Austria, Cyprus, Italy and the UK maximum fines exceed €4,000. Most European states, including the UK, Sweden, Norway, the Netherlands, Ireland, Germany, France, Finland and Denmark, set the maximum prison term for these types of crime at between six months and one year. In practice, though, some states such as Germany and Finland rarely use imprisonment, whereas in others, such as Bulgaria and the Czech Republic, there is evidence of extensive inappropriate use of imprisonment against asylum seekers.

Criminalisation is not confined to migrants themselves but also affects those who facilitate their irregular entry and stay. Article 1(1)(a)-(b) of the EU Facilitation Directive requires Member States to create appropriate sanctions for those who deliberately assist irregular entry to or stay in a Member State, with Article 1(1)(b) requiring the imposition of sanctions on anyone who does so for financial gain. The aim of these measures was, at least in part, to tackle organised crime. Article 1(2) of the Facilitation Directive does allow Member States to provide exceptions for those who provide such assistance for humanitarian reasons, but it does not require them to do so. Thus, there are varying standards across Europe as to when the facilitation of entry or stay is a punishable offence, with some countries allowing for broad criminalisation, including in situations of humanitarian assistance. The prosecution of individuals providing help, such as Lisbeth Zornig Andersen in Denmark, the criminalisation of rescue, where those who aid migrant boats in distress at sea have faced criminal charges, and the extensive criminalisation of NGOs providing asylum and humanitarian assistance have all been incredibly controversial. Many states have also gone further and criminalised other interactions with migrants, such as the letting of accommodation to those with irregular status.

Immigration and criminal law have become further entwined by the increased use of immigration measures as a consequence of criminal conviction. Although public security has long been a ground for deportation in many European countries, its use in recent years has become increasingly punitive and severe. Over the last twenty years, states such as the UK, Denmark and Germany have all passed laws that make deportation an automatic result of many criminal convictions, and the UK and Norway now have separate prisons to hold foreign national prisoners.

There has also been a significant increase in the immigration detention estate across the EU, with varying types and uses, as explored by Elspeth Guild in her ‘Typology of different types of centres in Europe’ for the European Parliament. There has also been a huge increase in the surveillance of migration. The EU has created a ‘plethora of systems’ for border control, including the EURODAC database, which holds migrant fingerprint data; the Visa Information System (VIS), which stores the biometric information of all third country nationals who apply for a visa in the Schengen area; and Eurosur, a surveillance system which uses drones, sensors and satellites to track irregular immigration. The use of fingerprint and other surveillance technology in immigration control in and of itself has connotations with the criminal law, but this is further compounded by Europol (the European Police Office) and national law enforcement agencies being given access to some of this data.

The Problem of Undercriminalisation

There are many other elements to the criminalisation of immigration trend, not least the rhetoric surrounding migration in many European states, but there is a possibility that focusing too much on criminalisation is actually a bit of a red herring. The complex powers and systems in immigration law and policy mean that much of the stigma and severity of the criminal law is being endured by migrants, but often without the concurrent procedural safeguards that the criminal law provides. The problem for immigrants may then be conceptualised as one of ‘undercriminalisation’. Ashworth and Zedner offer a clear definition of this practice: “undercriminalisation can be said to occur when the state sets out to provide for the exercise of police power against citizens in alternative (non-criminal) channels which are subject only to lesser protections inadequate to constraining an exercise of power of the nature and magnitude involved… undercriminalisation occurs where the failure to designate a preventative measure as criminal deprives the citizen of what is due to her, in view of the substance of the restrictions on liberty and possible sanctions involved in the ostensibly preventative measure.”

Thus, in a perverse way, immigrants might be better off if the whole system were criminalised, as they would benefit from far more procedural safeguards and judicial oversight than they do now. It is also possible that this is not simply ‘undercriminalisation’ but the beginnings of a two-tier system in both criminal justice and human rights. The intersection of the two can already be seen in the UK government’s proposed Bill of Rights Bill, which seeks to severely limit certain human rights for migrants, particularly foreign national offenders.

The ECtHR and Migration

In order to appreciate the risk of this two-tier system, it is important to understand how the European Court of Human Rights has responded to the increasingly harsh immigration system and where there are significant gaps in protection. For example, the lack of a proper necessity and proportionality test when considering the arbitrariness of immigration detention means it has the lowest level of protection of any form of detention and, as Professor Costello put it, has been left “in its own silo.” Likewise, the failure of the Court to apply the right to a fair trial contained in Article 6 to immigration decisions has barely been discussed by academics and advocacy organisations, despite the fact that this is an incredibly powerful and fundamental right that would serve as a crucial check on state power. The fact that immigration decisions and detention are becoming increasingly bound up with the criminal law means that we should be especially careful to scrutinise the legal approach to such issues, with many criminological and sociological scholars challenging the long-held legal conception of immigration measures as non-punitive.

Finally, it is important to reflect continuously on the fact that the criminalisation phenomenon is part of a wider trend of very harsh immigration regimes in Europe, and the two are often related. The criminalisation phenomenon may increase the harshness with which immigrants are dealt and exacerbate existing issues, but it is not always the root problem in the failure of the Court to protect migrants fully. As already demonstrated in depth by others, such as Professor Costello and Professor Dembour, there are significant issues with how the European Court of Human Rights approaches migrants’ rights; to truly understand the treatment of immigrants in Europe, the criminalisation of immigration framework may be insufficient. This is a trend that must be subject to rigorous scrutiny. Beyond the clear moral issues with having a two-tier human rights and criminal justice system, the Court’s approach poses other dangers. The general failure of the Court to engage in proper scrutiny of state immigration power and policies means that it may allow racial discrimination to go unchecked. The approach of the Court to immigration matters may also seep into other areas of its case-law and mean a general erosion of rights for everyone, immigrants and citizens.

 

 

 

Friday, 28 October 2022

Should the EU Ban the Real-Time Use of Remote Biometric Identification Systems for Law Enforcement Purposes?

 




Asress Adimi Gikay (PhD)

Senior Lecturer in AI, Disruptive Innovation, and Law

Brunel Law School & Brunel Centre for AI (London, UK)

Twitter: @DrAsressGikay

Photo credit: Irbsas, via Wikimedia commons

 

The Call for Ban on Real-Time Remote Biometric Identification System

It has been around two years since the European Commission introduced its Draft Artificial Intelligence Act ("EU-AIA"), which aims to provide an overarching AI safety regulation in the region. The EU-AIA's risk-based approach has been severely criticised, mainly for failing to take a fundamental rights approach to regulating AI systems. This post focuses on the EU-AIA's position on the use of Real-Time Remote Biometric Identification Systems (RT-RBIS) by law enforcement authorities in public spaces, which continues to cause the most controversy.

The EU-AIA defines RT-RBIS as a "system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay" [EU-AIA Art. 3(37)]. The regulation covers the real-time processing of a person's biological or physical characteristics, including facial and bodily features, living traits, and physiological and behavioural characteristics, through a digitally connected surveillance device. The most commonly known RT-RBIS is facial recognition technology (FRT) – a process by which AI software identifies or recognises a person using their facial image or video. The software compares the individual's digital image captured by a camera to an existing biometric image, estimates the degree of similarity between the two facial templates, and identifies a match. In the case of real-time systems, capturing and comparing images occur almost instantaneously.
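The comparison step described above can be reduced to a similarity score between two numerical face templates (embeddings). The following is a minimal illustrative sketch, not any vendor's actual implementation: the toy vectors, the cosine-similarity metric, and the 0.8 threshold are all assumptions chosen purely for illustration.

```python
import math

def cosine_similarity(a, b):
    # Degree of similarity between two facial-embedding templates.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(live_template, stored_template, threshold=0.8):
    # A "match" is declared when similarity exceeds a chosen threshold;
    # the threshold trades false positives against false negatives.
    return cosine_similarity(live_template, stored_template) >= threshold

# Hypothetical low-dimensional templates (real systems use
# high-dimensional embeddings produced by a neural network).
live = [0.9, 0.1, 0.4]
stored = [0.85, 0.15, 0.38]
print(is_match(live, stored))
```

The choice of threshold is the point at which the accuracy and discrimination concerns discussed in this post enter: set it too low and the system produces the false identifications described below; studies show the error rates are not evenly distributed across demographic groups.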

As EU institutions, Member States, and stakeholders continue to discuss the EU-AIA, there is growing dissent against the use of RT-RBIS for law enforcement purposes in publicly accessible spaces. In 2021, the European Parliament invited the Commission to consider a moratorium on the use of this technology by public authorities on premises meant for education and healthcare. In response to the EU Council's latest proposed revision of the EU-AIA, on October 17, 2022, 12 NGOs wrote a letter to the EU Council reiterating the need to prohibit the technology unconditionally.  

  

The Risk Posed by the Technology

RT-RBIS poses multiple risks that might jeopardise individual rights and citizens’ overall welfare.

As the technology is still evolving, there remains the risk of inaccurate analysis and decisions made by the system. In the United States, police have used FRT to apprehend individuals suspected of crimes, and multiple instances of mistaken identification have led to wrongful arrests and pre-trial incarcerations. In one example, a Black American wrongly identified by a non-real-time FRT as a suspect in shoplifting, resisting arrest and attempting to hit a police officer with a car spent eleven days in jail in New Jersey. Between January 2019 and April 2021, 228 wrongful arrests were reportedly made based on FRT in the State of New Jersey.

The deployment of RT-RBIS in public spaces could cause more significant harms compared to non-real-time biometric identification systems. These harms include missing flights, false arrests, and prolonged and distressing police interrogations that have adverse socio-economic and psychological effects on law-abiding members of society.

RT-RBIS could also be applied discriminatorily, disproportionately targeting specific groups. In a 2019 study, researchers found that FRT falsely identifies "Black and Asian faces 10 to 100 times more often than white faces." False positives were found to be between "2 and 5 times higher for women than men." Whilst an ethical and inclusive machine learning programme could alleviate this, the potential for discriminatory application of the technology cannot be ignored. In the UK, existing policing practice has been criticised for subjecting ethnic minorities to disproportionate stops and searches. The police should not be allowed to use technology to maintain similar stereotypical practices.

Lastly, RT-RBIS could continue to normalise surveillance culture and expand the infrastructure for it. Public spaces such as airports, train stations, and parking lots could be equipped with cameras that law enforcement authorities could activate for live biometric identification in case of necessity. This could expose the public to the risk of state surveillance. The use of FRT by authoritarian governments to crack down on the exercise of democratic rights is becoming a common practice. Currently, there is an ongoing legal challenge against Russia before the European Court of Human Rights for mass surveillance of protests using FRT.

The risks highlighted above must be addressed seriously and comprehensively. However, is a complete ban on the use of the technology a reasonable solution? 

Qualified Prohibition and Fundamental Rights Approach under the EU AI Act

Due to the high risk to fundamental rights posed by some AI systems, scholars have argued that the EU-AIA should take a fundamental rights approach in regulating these systems. As fundamental rights are given strong legal protection, any measure that interferes with them should meet three legal requirements:

Interference with derogable rights is allowed only for narrowly defined, specific and legitimate purposes prescribed by law, and subject to the tests of necessity and proportionality.

The burden of proving the necessity and proportionality of interfering with fundamental rights lies with the authority seeking to interfere with such rights.

A court or a similar independent body determines whether the authority has met the threshold of its burden of justification.

These requirements involve a careful judicial balancing act. The EU-AIA's qualified prohibition of using RT-RBIS effectively adopts the same approach. 

First, the EU-AIA permits, by way of exception, the use of the technology for narrowly defined, specific, and legitimate purposes [EU-AIA Art. 5(1)(d)]. These purposes are: (i) targeted searches for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; and (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a crime with a maximum sentence of at least three years that would allow for issuing a European Arrest Warrant. These are specific and legitimate purposes for restricting fundamental rights, depending on the context.

Second, the relevant law enforcement authority must demonstrate that the use of the technology is justifiable against: (a) the seriousness, probability and scale of the harm caused in the absence of the use of the technology; (b) the seriousness, probability and scale of the consequences of the use of the technology for the rights and freedoms of all persons concerned; and (c) the compliance of the technology's use with necessary and proportionate safeguards and conditions in relation to temporal, geographic and personal limitations [EU-AIA Art. 5(2)-(3)]. The authority proposing to use the technology bears the burden of justification.

Third, the relevant law enforcement authority must obtain prior express authorisation from a judicial or a recognised independent administrative body of the Member State in which the technology is to be used, issued upon a reasoned request. If duly justified by urgency, the police may apply for authorisation during or after use [EU-AIA Art. 5(3)].

The preceding analysis demonstrates that the EU-AIA does not give a blank cheque to the police to conduct spatially, temporally, and contextually unlimited surveillance. Despite the EU-AIA not explicitly employing fundamental rights language in the relevant provision, it entails a balancing act by courts, which must determine whether the use of RT-RBIS is necessary and proportionate to the purpose in question by considering multiple factors, including human rights.

 

The Call for Categorical Prohibition is Unsound

The fear of increasing surveillance is one of the grounds for the heightened call for the complete prohibition of RT-RBIS. Nevertheless, viewed within the overall context, the envisioned use of RT-RBIS under the EU-AIA does not significantly change the existing surveillance culture or infrastructure.

 

Amid Corporate Surveillance Capitalism

Contemporary societies now live under massive corporate surveillance capitalism. Big Tech companies such as Facebook, Google, Twitter, Apple, Instagram, and many other businesses access our personal data effortlessly. They know almost everything about us – our location, addresses, phone numbers, private email conversations and messages, food preferences, financial conditions and other information we would prefer to keep confidential. Surveillance is the rule rather than the exception, and we have limited tools to protect ourselves from pervasive privacy intrusions.

Whilst surveillance, if employed by law enforcement, is used at least in theory to enhance public welfare, such as by prosecuting criminals and delivering justice, Big Tech uses it to target us with advertisements or behavioural analysis. The fear of law enforcement's use of RT-RBIS in limited instances is inconsistent with our tolerance of Big Tech corporate surveillance. This does not mean we must sink further into surveillance culture, but we should not apply inconsistent policies and societal standards that are detrimental to the beneficial use of the technology.

 

Minimal Change in Surveillance Infrastructure

The deployment of RT-RBIS as envisioned by the EU-AIA is unlikely to change the current surveillance infrastructure significantly, in which Closed-Circuit Television (CCTV) cameras are already pervasively present. In Germany, in 2021, there were an estimated 5.2 million CCTV cameras, most facing publicly accessible spaces. In the UK, there are over five million surveillance cameras, over 691,000 of which are in London. On average, a London resident could be caught 300 times on CCTV cameras daily.

The police can access these data during a criminal investigation, probably without needing a search warrant in practice. It is improbable that private CCTV camera owners would refuse to provide footage to the police due to the lack of a search warrant, unless they are involved in the crime or protecting others. At the same time, footage from these cameras plays an instrumental role in solving serious crimes. The overall surveillance infrastructure would therefore not significantly change; if it does, it is for the better public good.

 

Ethical Development and Use Guideline

The potential biases or disproportionate use of the technology against certain groups could be tackled by designing ethical standards for the development, deployment and use of AI systems. These guidelines include ensuring that the AI systems are bias-free before deployment and requiring law enforcement authorities to have clear, transparent and auditable ethical standards. The EU-AIA itself has several provisions to ensure this.

 

Maintaining the EU-AIA's Provisions on RT-RBIS

The use of RT-RBIS, as envisioned under the EU-AIA, does not fundamentally change the existing surveillance culture and infrastructure. Nor does it unreasonably increase the surveillance power of the state. On the contrary, a categorical ban would impede beneficial limited use. Therefore, the provisions of the EU-AIA governing the limited use of RT-RBIS by law enforcement authorities in publicly accessible spaces must be maintained. Stakeholders should resist the temptation to implement radical solutions that would harm societal interests, and focus instead on developing ethical guidelines for the development, deployment and use of the technology.

Monday, 24 October 2022

A boost for family reunification through the Dublin III Regulation? The CJEU on the right to appeal refusals of take charge requests

 

 


 

Mark Klaassen, Leiden University

Photo credit: DFID 

An unaccompanied minor has the right to appeal the refusal of a take charge request by the receiving Member State. This is the conclusion of the Court of Justice of the EU (CJEU) in the I. & S. judgment. The preliminary question posed by the District Court of Haarlem in the Netherlands was interesting from the outset because the Dublin III Regulation itself does not provide for such a right of appeal. The take charge request procedure operates between two Member States and the individual asylum seeker is not a party to it. The referring court essentially asked the CJEU whether the right to an effective remedy as protected by Article 47 of the Charter of Fundamental Rights obliges the Member States to provide for an appeal procedure against the refusal of take charge requests. In this blog, I discuss the reasoning of the Court and the implications for the application of the Dublin III Regulation.

 

The applicant is an Egyptian national who applied for asylum in Greece as an unaccompanied minor. His uncle lives in the Netherlands and the applicant would like to join him there. Based on the Dublin III Regulation, Greece made a take charge request to the Netherlands: as prescribed by Article 8(2), Greece deems that the Netherlands is responsible for handling the applicant’s asylum request. The Netherlands refused the take charge request because it deemed that the applicant had not substantiated the existence of family ties with his uncle. Greece requested the Netherlands to reconsider the refusal, but this request was denied. The applicant and his uncle started proceedings against the refusal before the Dutch courts. The administrative appeal was declared inadmissible by the Dutch authorities because the Dublin III Regulation does not provide for a right to appeal the refusal of a take charge request. In the appeal against this decision, the referring court asked preliminary questions to the CJEU.

 

Based on Article 27(1) Dublin III Regulation, an asylum seeker has the right to appeal a transfer decision made by the sending State. But when the receiving State refuses a take back or take charge request, no transfer decision is made at all. The CJEU observes that even though Article 27(1) does not provide for a right to appeal the refusal of a take charge request by the receiving State, it does not exclude the possibility that such a right of appeal exists. The Court refers to its earlier case law to conclude that the Dublin III Regulation is not only an instrument that functions between the Member States, but is also intended to afford rights to asylum seekers. Based on this assertion, the Court ruled in Ghezelbash that asylum seekers must be able to appeal the application of the criteria which determine which Member State is responsible for dealing with an asylum request.

 

In the present judgment, the Court also applies this reasoning to the refusal of a take charge request concerning an unaccompanied minor. According to the Court, the legal protection of an asylum seeker may not depend on the acceptance or refusal of a take charge request (para 41). That would hinder the effectiveness of the right of the unaccompanied minor asylum seeker to be reunified with the family member lawfully residing in the receiving Member State (para 42). The Court holds that, based on the right to an effective remedy, an asylum seeker has the right to appeal both the wrong application of the criteria and the refusal of a take charge request (para 45). Furthermore, the right to appeal the refusal of a take charge request is also based on the right to respect for family life and the best interests of the child, as protected by Articles 7 and 24(2) of the Charter respectively. An asylum seeker has the right to invoke the protection of these rights and therefore a procedure must exist to do so (paras 47-49). The family member residing in the receiving Member State, however, does not have the right to appeal the refusal of a take charge request: the Court reasons that Article 27 does not grant the family member any appeal rights at all.


This judgment requires Member States to provide for the possibility of appealing against the refusal of a take charge request before the authorities of the receiving Member State. This is a novelty in EU asylum law. The Court gives no further guidance on this appeal procedure. In his Opinion, Advocate General Emiliou observes that, in the absence of concrete guidance in the Regulation itself, the appeal procedure falls within the procedural autonomy of the Member States, which is limited by the principle of effectiveness. The AG argues that this principle requires that the asylum seeker be informed of the reasons for the refusal of the take charge request, and considers it most appropriate for the authorities of the sending Member State to communicate the receiving Member State's reasons. Even though the Court has not addressed this point explicitly, in my view the AG's reasoning still applies. Not informing the asylum seeker of the reasons for a refusal would undermine the effectiveness of the right of appeal, because the asylum seeker would not know on what grounds the take charge request was refused. Moreover, the receiving Member State is already obliged to give reasons for refusing a take charge request to the sending Member State under Article 5(1) of Commission Regulation (EC) No 1560/2003. As the applicant resides in the sending Member State at the moment the take charge request is refused, the most appropriate solution seems to be for the authorities of that Member State to inform the applicant of the receiving Member State's reasons for the refusal and of the procedure for appealing against it. This, however, requires coordination between the two Member States involved.


Having established that under the Dublin III Regulation an asylum seeker has the right to appeal against the application of the criteria (Ghezelbash) and against the refusal of a take charge request (I. & S.), a remaining question is whether an asylum seeker may also appeal against a sending Member State's refusal to make a take charge request in the first place. In my view, the Court's reasoning in I. & S. can be applied to that question as well. The Dublin III Regulation aims to confer concrete rights on asylum seekers and lists the criteria for determining the responsible Member State. An asylum seeker, however, depends on the sending Member State to make a take charge request. If the sending Member State simply refuses to make one, for whatever reason, the Regulation provides the asylum seeker no avenue of appeal against that refusal. Even though Article 27 Dublin III Regulation only grants a right to appeal against a transfer decision, reading the criteria in the Regulation as rights for asylum seekers implies that a refusal to apply those criteria would undermine the asylum seeker's right to be transferred to the Member State where a family member is legally present. For this reason, asylum seekers must, in my view, be able to challenge a refusal to make a take charge request.


The Court's reasoning is also interesting in light of the current negotiations on the reform of the Dublin system. Article 33(1) of the Proposal for a Regulation on asylum and migration management (COM(2020) 610 final) would limit the right of appeal: it states that the scope of the legal remedy shall be confined to the risk of ill-treatment within the meaning of Article 4 of the Charter and the application of the criteria relating to family life. This Commission proposal is an attempt to curtail the effects of the Court's ruling in Ghezelbash. Given that the Court reaffirmed Ghezelbash in the I. & S. judgment and emphasised that the right of appeal is grounded in the Charter of Fundamental Rights, it seems unlikely to me that the Court would deem such a limitation of the scope of the legal remedy lawful.


Because the structure of EU asylum law often leaves family members with an asylum background in different Member States, the Dublin III Regulation can in practice function as an instrument to bring families together. By placing family ties at the top of the pyramid of criteria in the Dublin system, the EU legislature intended exactly that. The CJEU's judgment in I. & S. makes clear that a refusal of a take charge request may violate fundamental rights, and that Member States must therefore make a legal remedy available. This gives asylum seekers an additional tool to enforce the application of the Dublin criteria in order to reunite with family members.