Tuesday, 25 April 2023

The ‘sidelining’ of the European Parliament from the EU-US Trade and Technology Council (TTC): TTC(s) as post-Democracy Divas or Disasters?

Professor Elaine Fahey, Institute for the Study of European Law, City Law School, City, University of London*

Elaine.fahey.1@city.ac.uk

 

* Professor of Law at the City Law School, City, University of London; Jean Monnet Chair in Law and Transatlantic Relations 2019-2022; co-director of the Institute for the Study of European Law (ISEL), City Law School since 2016; in 2023, Visiting Professor at the American University, Washington College of Law (WCL) and Senior Land Steiermark fellow at the University of Graz; Research interests: EU law, global governance, EU external relations and the EU as a global digital actor.

 

 

The EU-US Trade and Technology Council (TTC)

 

The Transatlantic Trade and Technology Council (TTC) was set up quickly by the European Union (EU) and the US at the outset of the Biden administration. It is not a trade negotiation and does not adhere to any specific Article 218 TFEU procedure, although it has many signature ‘EU’ characteristics. The TTC has high-minded goals to ‘solve’ global challenges on trade and technology with the EU’s most significant third-country cooperating partner. Yet it is notably not the only recent Council proposed by the EU: there is also a new EU-India Trade and Technology Council. These new Councils represent a new modus operandi for the EU to engage with ‘complex’ partners, comprising executive-to-executive engagement in which agency counterparts meet regularly in closed groups, in an era when EU trade policy is otherwise deepening its stakeholder and civil society ambit. The TTC has a vast range of policy-making activities, traversing many areas of EU law. Their precise selection and future are difficult to place within EU regional trade and data policy, which seems to be pivoting, like US trade law, to executive-led soft law.

 

One entity not officially to be found within the TTC is the European Parliament (EP), which is formally no part of the EU-US TTC in any way. The TTC has held three ‘high-level’ political meetings so far, described as executive-to-executive ‘ministerial’ meetings, which steer cooperation within the TTC and guide its ten working groups on technology standards, secure supply chains, tech regulation, global trade challenges, climate and green technologies, investment screening and export controls. The first two meetings focused on launching the TTC and setting its agenda, while the third, in December 2022, was described as a ‘shift to deliverables’. The TTC strikingly has a vast range of global law-making goals and has received public critique either for ‘under-performing’ or for its overbroad focus. It comes on the back of significant EU-US collaboration in data privacy.

 

This short blog considers the merits of the EP’s placement, examining its de facto and de jure ‘sidelining’ in this era of EU-US relations, in an ostensible age of parliamentarisation and widening participation.

 

EP powers in external relations: increasingly empowered at all stages … to a point

 

The EP is increasingly empowered politically and legally in international relations, including important powers of consent to approve international agreements in a wide variety of circumstances pursuant to Article 218(6)(a) TFEU, along with information and veto rights. Yet the EP is excluded from the critical stage of the opening of negotiations on external relations agreements. Many of its powers thus sit at the very end-point of diplomacy, politics and technical work; in reality, the temporally earlier stages are increasingly important in a world where soft international economic law prevails and trade agreements are viewed as old-fashioned. As a result, the EP uses many soft law resolutions to advocate legal positions in the shadow of its veto. The EP has, however, also been granted important information rights in Article 218(10) TFEU, which have been given constitutional significance by the CJEU in key case law initiated by the EP.

 

However, similar to (or mimicking) the US, the EU increasingly uses ‘soft’ international arrangements rather than formal international agreements when establishing relations with non-EU states. Yet the use of the many forms of soft law in EU external relations runs the risk that parliamentary influence is bypassed.

 

The EP in EU-US relations: a striking history of litigation and evolving legal powers

 

The EP’s record on EU-US relations is quite striking, from civil liberties to trade, using its many and evolving legal powers. The EP famously litigated the EU-US Passenger Name Records (PNR) Agreement and swiftly rejected the EU-US Terrorist Finance Tracking Programme (TFTP) (SWIFT) agreement, giving it ever more legal prominence in EU-US relations. The EP did not issue recommendations on the opening of EU-US trade negotiations in 2019, notably even rejecting a draft resolution recommending the opening of Trump-era EU-US trade talks, owing to concerns about the Trump administration and US visa treatment of Eastern European Member States, and it accepted the so-called mini ‘Lobster’ trade agreement only with difficulty. The EP had a highly prominent role in compelling more transparency in the EU-US Transatlantic Trade and Investment Partnership (TTIP) negotiations, through the (illegal) leaking of negotiation texts in the public interest.

 

The EP in TTC: self-sidelining?

 

However, it can now be said that the EP is not per se helping itself as to the TTC. The EP has received one briefing on the TTC from the Commission, through its International Trade (INTA) committee. The EP thus appears to be ‘monitoring’ the TTC through INTA, although it seems very odd that the EP’s technology and industry committees should be any less involved than trade in a ‘trade and technology’ venture. At one meeting of the INTA committee with two Commissioners, held in December 2022, few MEPs from the tech committees were invited, and those attending appeared to have few critical questions about the TTC. The EP has publicly issued one critical press release via its trade committee, in late 2022, criticising the TTC’s lack of trade results, but little else. Moreover, democratic scrutiny of the TTC has been repeatedly raised via European Parliamentary Research Service (EPRS) briefings rather than via an EP resolution, arguably downgrading its importance and the EP’s engagement with it.

 

Stakeholders and the TTC - civil society, industry and the EP all lumped in together?

 

It is important to say that the TTC has a range of engagement strategies for stakeholders. A TTC Stakeholder Assembly was organised by the Trade and Technology Dialogue (TTD), which adopts the EU international relations lexicon of dialogues with stakeholders, increasingly found in EU trade negotiations and resulting agreements. One may say that the TTD is a confusing series of alphabetised meetings meant to support the TTC. The sheer range of issues and topics considered by the TTD over Zoom (using breakout rooms) is particularly remarkable and easily accused of being ill-focused. The lack of formal accountability here also appears striking, with stakeholder sessions run by think tanks for the EU. High-level US administration officials, professional lobbyists and/or think tanks and EU institutions all appear to have privileged input and the capacity to influence and scrutinise, but less so the EP.

 

EU in the US: increasing EP and EEAS physical site offices

 

The sidelining of the EP in the TTC is notable given the EU’s recent ratcheting up of institutions and diplomacy in the US. In 2010, the EU established a dedicated structure with the explicit task of channelling and deepening ties between the EU and US legislatures, the European Parliament Liaison Office (EPLO), notably with no US equivalent. The EPLO physically sits alongside the European External Action Service (EEAS) in Washington DC, in the same building, entitled ‘The EU and US’, but notably on the floor below it (metaphorically?). EPLO Washington DC has added a ‘hard’ dimension to institutionalising the EU-US inter-parliamentary relationship. Aside from the EEAS office in Washington DC and the EPLO alongside it, the EU recently opened a new EEAS office in San Francisco, California, a self-professed global centre for digital technology and innovation. Its mission was said to be to promote EU standards and technologies, digital policies, regulations and governance models, and to strengthen cooperation with US stakeholders, including by advancing the work of the EU-US TTC. The office was said to work under the authority of the EU Delegation in Washington DC, in close coordination with Headquarters in Brussels and in partnership with EU Member States’ consulates in the San Francisco Bay Area, but again without any mention of or reference to the EP or the EPLO in the US.

 

Conclusions: the real harm of soft law councils?

 

The EP is arguably legally excluded from the new era of soft international economic law to which the EU is now readily subscribing, to a high degree. The rights of the EP have evolved significantly, even in an age of soft law in international relations. The TTC follows an EU law blueprint that in effect legitimises its executive-led action, but it also runs contrary to the thrust of much EU international relations practice, which is about widening and deepening participation.

 

The harm of ‘soft law’ councils remains very real if they become mainly executive-to-executive vehicles that sideline parliaments.

 

Where entities such as the US have declared trade agreements to be old-fashioned in favour of soft law framework agreements, the EU, as a rules-based multilateralist, had always appeared less so inclined.

 

The EP in transatlantic relations has been highly effective, engaged and participatory, and should not necessarily be formally excluded from this new era of EU-US relations privileging TTC contacts.

 

Saturday, 1 April 2023

The Ease and Perils of Modern Love – legal effects of algorithmic-based dating (online)

Tessa Sophie Hoffmann & Chrisa Alexiou (University of Groningen)

 

Photo credit: Jack Sexton, via Wikimedia Commons

 

Introduction

 

‘Love can be several splendid things, a source of joy and gladness, a wonderful surprise. But it can also be a source of unease and regret’. (Tony Milligan, 2011)

 

Love is something that existed long before digitalisation found its way into our everyday life. Nonetheless, especially with the recent developments of technology and artificial intelligence (AI), over 48 million people in Europe rely on dating platforms to find a suitable match. (Stefanie Duguay, 2016) Indeed, online dating has become a billion-dollar market with significant potential for growth. Most of the largest online dating platforms, such as Tinder, Grindr, OKCupid and Co., maintain the business concept of a free basic version, creating a user-friendly environment for easy sign-up and exploration of their content. Thereby, they create the illusion that users have only benefits to gain from the available matches and connections with others. However, joining in and creating a profile automatically opens the door to numerous unregulated issues.

 

Dating platforms utilise many different algorithmic functions, such as setting location or age preferences, to provide users with sufficient match recommendations. Tinder especially, as a geolocation-based social app, is known for the standard swipe-right-or-left concept, which represents the like or dislike of other users’ profiles nearby. The use of algorithms and AI, in particular with regard to sharing private information, entails certain risks for users, such as the lack of algorithmic transparency, privacy and data protection issues, issues with liability for damage, and biases and discrimination.

 

The unique features of online dating platforms still seem to fall through the gaps of the regulation provided by the E-Commerce Directive, which lacks the necessary comprehensiveness, as these platforms involve much more complex algorithmic functions than other common social media platforms.

 

In our contribution, we shed light on the particular effects of algorithmic love, and we argue that additional improvement in EU regulation of algorithmic-based online dating platforms is necessary.

 

The function of algorithms and AI within dating applications

 

Location-based services (LBS) in general represent an important component of online dating platforms, as they provide the platforms with the ability to track users’ locations once consent is given. In that way, location-based social networks (LBSNs) create a bridge between two social worlds, the real and the online one. (Huiji Gao and Huan Liu, 2013)
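
To make this concrete, the location step can be pictured as a simple radius filter over coordinates. The following is a minimal Python sketch under assumed field names and an assumed 25 km search radius; it illustrates the LBS concept and is not any platform’s actual code.

```python
# A minimal sketch of an LBS 'nearby matches' filter (illustrative only:
# field names, coordinates and the search radius are assumptions).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby(user, candidates, radius_km):
    """Keep only the candidate profiles within the user's search radius."""
    return [c for c in candidates
            if haversine_km(user["lat"], user["lon"], c["lat"], c["lon"]) <= radius_km]

user = {"lat": 53.2194, "lon": 6.5665}  # Groningen
candidates = [{"name": "A", "lat": 53.2300, "lon": 6.5800},   # ~1.5 km away
              {"name": "B", "lat": 52.3676, "lon": 4.9041}]   # Amsterdam, ~150 km
print([c["name"] for c in nearby(user, candidates, radius_km=25)])  # ['A']
```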

 

The incorporation of AI into the structure of online dating applications takes modern dating to another level. Artificial intelligence goes beyond the ‘mere algorithmic analytical processes’ (Philip Sales, 2021) as it has the ability of machine learning. This means that the AI, with the help of an algorithm, can identify certain preference patterns through the analysis of a large amount of data. Subsequently, AI systems can observe the activity of a user and analyse his or her behaviours, preferences and patterns in order to recommend specific matches. The more information a user stores within the app, the better the algorithmic outcome will be.
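
As a rough illustration of such machine learning of preference patterns, a sketch along the following lines nudges feature weights toward each observed swipe, in the style of online logistic regression. The features, learning rate and swipe history are invented assumptions; production systems are far more elaborate.

```python
# A toy sketch of 'learning' a user's preference patterns from swipes,
# in the style of online logistic regression. The features, learning
# rate and swipe history are invented assumptions, not any app's model.
from math import exp

FEATURES = ["shares_hobby", "same_city", "small_age_gap"]
weights = {f: 0.0 for f in FEATURES}
LEARNING_RATE = 0.1

def like_probability(profile):
    """Predicted probability that the user will like this profile."""
    z = sum(weights[f] * profile[f] for f in FEATURES)
    return 1 / (1 + exp(-z))

def learn_from_swipe(profile, liked):
    """Nudge each feature weight toward the observed swipe outcome."""
    error = (1.0 if liked else 0.0) - like_probability(profile)
    for f in FEATURES:
        weights[f] += LEARNING_RATE * error * profile[f]

# The more swipes stored, the more the weights reflect the user's taste.
history = [({"shares_hobby": 1, "same_city": 1, "small_age_gap": 1}, True),
           ({"shares_hobby": 0, "same_city": 0, "small_age_gap": 1}, False)] * 50
for profile, liked in history:
    learn_from_swipe(profile, liked)
print({f: round(w, 2) for f, w in weights.items()})
```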

 

Online dating platforms use algorithmic services to smooth their functionality and speed up the occurrence of matches for their users. In an interview, Tinder allowed a slight insight into its operational system and how it generates matches using an Elo rating system, of the kind usually used for ranking chess players. Tinder’s users are effectively ranked based on the ‘liking’ value of their swipers: a user’s score depends on the scores of those who ‘like’ them. In this way Tinder matches up its users based on similar ‘liking ranks’, dividing the crowd into tiers of ‘desirability’. (Kaitlyn Tiffany, 2019)
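
For intuition, the mechanics can be sketched with the standard chess Elo formula, where a right-swipe counts as a ‘win’ for the profile being swiped on. Everything below (the K-factor, the function names, the example numbers) is an illustrative assumption rather than Tinder’s disclosed implementation.

```python
# A minimal Elo-style 'desirability' sketch, following the standard
# chess formula. The K-factor, names and numbers are illustrative
# assumptions, not Tinder's disclosed implementation.
K = 32  # how strongly a single swipe moves a rating (assumed)

def expected(rating_a, rating_b):
    """Probability that A 'beats' B under the standard Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_on_swipe(swiped, swiper, liked):
    """Return the swiped user's new rating after one swipe: a like from
    a highly rated swiper is worth far more than one from a low-rated
    swiper, which is what divides users into tiers of 'desirability'."""
    outcome = 1.0 if liked else 0.0
    return swiped + K * (outcome - expected(swiped, swiper))

score = 1200.0
score = update_on_swipe(score, swiper=1600.0, liked=True)   # large gain (~+29)
score = update_on_swipe(score, swiper=1000.0, liked=False)  # large loss (~-25)
print(round(score))  # ~1204
```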

 

Other online dating apps, such as OKCupid, are rather conventional applications which require their users to answer several questions regarding religion, political views, etc. According to sociologist Kevin Lewis, ‘OKCupid prides itself with its algorithm’, though it is not even proven how well it actually works. Also very controversial are the various profile boosts, ‘super likes’ or special subscriptions users can purchase, which basically allow a user, for a price, to jump the algorithm in order to get a better match and show up on the account of another, desired user. Others argue that ‘super likes’ are ‘pure moneymaking endeavours’.

 

Other questionable algorithmic services used by online dating platforms are those which intend to prevent harm by controlling illegal and suspicious content. These services are also supplemented by AI, which performs certain calculations to check the validity of a profile or keeps a ‘conduct score’ to detect oppressive and malicious content. AI is also automated to recognise scams, spam or duplicated profiles, which the app provider is then alerted to delete. But at the end of the day, the dating companies’ goal is to keep users visiting their apps, and, at that cost, the limits of such controls might become blurry in many different respects.
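
The ‘conduct score’ idea can be pictured as a simple points-and-threshold rule, as in the minimal sketch below; the event types, penalty weights and alert threshold are purely illustrative assumptions.

```python
# A toy sketch of an automated 'conduct score': accumulate penalty
# points for flagged events and alert the provider past a threshold.
# Event types, weights and the threshold are illustrative assumptions.
PENALTIES = {"reported_by_user": 3, "duplicate_profile": 5,
             "spam_link_in_message": 4, "scam_keyword_detected": 6}
ALERT_THRESHOLD = 8

def conduct_score(events):
    """Sum penalty points over a profile's flagged events."""
    return sum(PENALTIES.get(event, 0) for event in events)

def needs_review(events):
    """True if the profile should be escalated for suspension/deletion."""
    return conduct_score(events) >= ALERT_THRESHOLD

print(needs_review(["spam_link_in_message", "reported_by_user"]))    # False (7 points)
print(needs_review(["scam_keyword_detected", "duplicate_profile"]))  # True (11 points)
```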

 

Effects and threats of algorithmic love

 

Lack of algorithmic transparency

 

There has been a great effort to regulate issues specific to online platforms, examples of which include the proposed Digital Services Act (DSA), the Unfair Commercial Practices Directive and the proposed Artificial Intelligence Act, among others. Under the proposed DSA, platforms are obliged to suspend users’ access to their services upon finding ‘manifestly illegal content’, and to report immediately any information raising suspicion of a serious criminal offence to local law enforcement. Dating platforms’ algorithms alert the provider to take further action, such as suspending or banning the profile.

 

However, the problem in this ‘smart’ matter is that the competence of algorithms is limited and may lead to the suspension of certain user profiles which would have been spared had the decision been made by a human operator. The algorithm filters conversations for illegal and offensive content without fully understanding a user’s train of thought or its relevance. Furthermore, it is nearly impossible to prove any algorithmic liability for not recognising relevant spam messages or a ‘fake account’. This controversial approach makes the interplay of AI and machine learning even more complex.

 

What further adds to this issue is the fact that information about the functioning of the algorithm is often withheld. Programming entities and dating platform owners usually refrain from disclosing their particular code in order to prevent any harm to their popularity. As a consequence, users do not have the opportunity to understand the relevant functions and classifications of the algorithm and may end up being ‘ethically challenged’.

 

Privacy and data protection

 

Whilst private information is also revealed on other social media platforms, the intensity of information exchange, especially at the early stages of online dating, is particularly pronounced on dating applications. Important privacy and data protection risks are connected to the algorithmic geolocation services and the surveillance conducted through GPS technologies in order to provide platform users with matches nearby. Other risks include the infringement of ‘the right to personal data, the right to prevent processing data which are likely to cause damage or distress, or the right not to be subject to a decision based solely on automated processing.’ (Rowena Rodrigues, 2020; Sandra Wachter and Brent Mittelstadt, 2019)

 

At first glance, European Union privacy and data protection law offers sufficient protection and safeguards for ‘data subjects’ in the GDPR: for example, the right to transparency, information and access in Article 15, the rights to rectification and erasure in Articles 16 and 17, or the right to object and the protections against automated individual decision-making in Section 4 of Chapter III. Additionally, Article 22(3) provides minor safeguards, allowing human intervention by the controller so that an algorithmic decision can be challenged.

 

However, legal scholars emphasise that ‘the opacity of AI and machine learning may reduce the accountability of their owners’ (Mireille Hildebrandt, 2016) and that in this way ‘they lack contestability’. (Lilian Edwards and Michael Veale, 2018) In this context, respected academics believe that the current rules laid down in the GDPR appear insufficient, especially as regards Article 9 on the ‘processing of special categories of personal data’ and Article 22(3) as mentioned above. (Sandra Wachter and Brent Mittelstadt, 2019) They therefore suggest an extension of the GDPR by adding an article particularly referring to ‘the right to reasonable inferences’, in order to protect users from inadequate decisions produced by automated algorithms.

 

Liability for damage/Legal personhood issues

 

The accountability of algorithms and AI systems poses a great issue since, after all, they embody machines which lack legal personality and, subsequently, legal accountability in our systems. Accountability with regard to AI should require ‘the function of guiding action (by forming beliefs and making decisions), and the function of explanation (by placing decisions in a broader context and by classifying them along moral values)’. (Virginia Dignum, 2018) However, the issue of accountability goes well beyond that, also causing problems in the areas of causality, justice and compensation. (Matt Bartlett, 2019)

 

In February 2017, the European Parliament discussed the issue of AI’s legal personhood (Resolution on Civil Law Rules on Robotics, 2018), where it was debated whether AI ‘fit within existing legal categories or if a new category should be created’. As advised by the Expert Group on Artificial Intelligence, the EU has ever since refrained from creating a new legal personality for AI systems, as it would be ‘fundamentally inconsistent with the principle of human agency, accountability and responsibility’. (Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment)

 

In its report on Liability for Artificial Intelligence and other emerging digital technologies, the European Commission declared that at least a basic level of liability protection for victims of new technologies is ensured. However, the complexity, self-learning capacity, opacity and vulnerability to cybersecurity threats of these technologies can hinder the successful recovery of compensation. Nonetheless, the European Union has made several efforts to enhance its existing legal framework in the Commission’s White Paper on AI and the Motion for a Parliament Resolution on a civil liability regime for AI.

 

Bias and discrimination

 

The algorithms used for online dating applications predominantly function through rating systems, created on the basis of the personal data stored within the platform. This information usually includes an analysis of users’ ‘likes’ and ‘dislikes’. To clarify, highly ‘liked’ user profiles will show up amongst the possible match options of other ‘popular’ users, while fewer likes or limited activity automatically moves a profile down within the algorithm’s ranking system.

 

This algorithmic ‘right or left swipe’ ranking system elevates appearance and beauty standards while devaluing other personal human characteristics. Biases are created based on the appearance of a user’s profile, but also based on gender, sexuality and age, which are set as preferences within the first steps of the online dating sign-up.

 

In a 2018 report, the EU Agency for Fundamental Rights raises awareness of the possible discrimination against individuals by algorithms, given that the fundamental right to non-discrimination, laid down in Article 21 of the EU Charter, must not be violated. In its resolution on the fundamental rights implications of big data, the European Parliament outlined that, owing to the usage of algorithms and their automated assessments in the processing of data, ‘big data may result not only in infringements of the fundamental rights of individuals, but also in differential treatment and indirect discrimination against groups of people with similar characteristics’. Moreover, the Parliament called upon the Commission and the Member States ‘to minimise algorithmic discrimination and bias and to create an ethical framework of the transparent processing of personal data’.

 

Furthermore, the paid algorithmic boost on dating apps such as Tinder is likely to cause emotional distress to several users, especially those who wish for a chance at love but are not able or willing to pay a monthly subscription to broaden their algorithmic horizon. The algorithm tends to recycle the options for which you did not ‘swipe right’ the first time. According to Tinder, a subscription to ‘super likes’ triples a user’s chance to find love. This unprovable claim illustrates the highly questionable monetary discrimination of users on online dating platforms.

 

Algorithmic ranking systems split up ‘the information overload’ based on the ‘I like you, you don’t like me?’ model. Traditional recommendation systems are coded in a particular way to provide a dating user with recommended matches, giving the algorithm the power to predict whom a specific user ‘likes’. However, a recommendation system can only be successful if the ‘like’ is reciprocal. (Luiz Pizzato et al., 2010) Many scholars therefore suggest the use of reciprocal recommendation systems, enhancing the probability of user interaction and satisfaction. The content-based reciprocal algorithm which has been introduced applies collaborative filtering through stochastic matching algorithms, in order to assimilate the preferences of both users and to define a new evaluation metric leading to successful matches. (Peng Xia et al., 2016) It has thereby been established that the communication trace of users, which the algorithm utilises, is better suited to determining a user’s preferences, which often deviate from the preferences the user stated him- or herself.
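
One way to picture a reciprocal recommender is to score both directions of predicted interest and combine them with a harmonic mean, which stays near zero unless both sides are predicted to ‘like’ each other. The sketch below is a toy illustration under assumed profile fields and a stand-in preference function, not the cited authors’ actual algorithm.

```python
# A toy reciprocal-recommendation sketch: combine both directions of
# predicted interest with a harmonic mean, so one-sided interest scores
# near zero. Profile fields and the stand-in preference function are
# assumptions, not the cited authors' actual algorithm.
def preference(viewer, candidate):
    """Stand-in for a learned score in [0, 1]: how well the candidate's
    traits match what the viewer appears to seek."""
    overlap = len(viewer["seeks"] & candidate["traits"])
    return min(1.0, overlap / 3)

def reciprocal_score(a, b):
    """Harmonic mean of the two directed preferences."""
    p_ab, p_ba = preference(a, b), preference(b, a)
    if p_ab == 0 or p_ba == 0:
        return 0.0
    return 2 * p_ab * p_ba / (p_ab + p_ba)

alice = {"seeks": {"hiking", "jazz"}, "traits": {"cooking", "jazz"}}
bob = {"seeks": {"cooking"}, "traits": {"hiking", "jazz"}}
carol = {"seeks": {"opera"}, "traits": {"opera"}}
print(round(reciprocal_score(alice, bob), 2))    # mutual (if unequal) interest -> 0.44
print(round(reciprocal_score(alice, carol), 2))  # one-sided interest -> 0.0
```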

 

The legal framework: Why re-regulate?

 

‘It is precisely the accumulation of power and its significant impact on users’ human rights what is calling for the creation of dedicated rules that adequately protect those rights in the online world.’ (Marta Cantero Gamito, 2021) The algorithm of dating platforms is specifically designed to show personalised and relevant content to a user, to predict which content will appear engaging to that user and, in that way, to exercise a significant amount of power over vulnerable users. On that basis, the particularities of online dating do not fit within the threshold of current regulatory frameworks.

 

It is very important to recognise the influence of such power and the risks of ‘smart’ dating platforms, which together call for the implementation of particular regulatory measures that meet the high algorithmic and technological standards of modern online dating. The non-regulation of AI can affect the EU values upon which Europeans rely, including fundamental human rights and non-discrimination policies. In the White Paper on AI, the European Commission underlines the absence of a common legal framework that corresponds to the needs of the AI technology’s rapid developments. Several Member States, such as Germany and Denmark, have already begun to undertake steps to enhance the domestic regulation of AI systems, which ultimately reinforces the sense of deficiency of the current European measures.

 

According to the Commission’s risk-based approach, high-risk AI systems shall become subject to strict obligations in order to be able to remain in operation within the EU. Article 6 of the proposed AI Act describes a system as high-risk when ‘the AI is intended to be used as a safety component of a product’ and, subsequently, ‘the product whose safety component is the AI system is required to undergo a third-party conformity assessment’. Considering this article of the proposed Act, dating platforms which provide high-level algorithmic functions, intended as a safety component for the provision of transparent ‘matches’, will have to comply with higher regulatory standards in order to continue operating for European users.

 

Furthermore, Article 5 of the Act prohibits AI systems that ‘distort a person’s behaviour’ or ‘exploit vulnerabilities of specific groups’, ‘in a manner that causes psychological harm’, and systems that evaluate and classify ‘the trustworthiness of natural persons’ where this leads to the ‘detrimental or unfavourable treatment’ of those persons. This clause shall function as a statutory basis for regulation aiming to protect the vulnerability which dominates online dating and is not covered by current platform regulations. Moreover, Article 5 will limit discrimination of users based on algorithmic ‘swipe’ evaluation.

 

Dating platforms should establish an effective system to ensure that they are ‘verifiably safe, taking at heart the physical and mental safety’ of their users. Given the algorithm’s advanced autonomy and the impossibility of always retaining complete control over decision-making processes based on the provided data, Article 53 of the proposed Act suggests regulatory sandboxes. These shall offer a solution for exploring more options, especially for those AI systems that do not fit into an existing framework. Dating platforms, with their regulatory particularities, can level up their safety mechanisms by reducing the ‘time-to-market’ cycle of new safety features. Additionally, they can implement the testing and validation of new, safer and ‘closer to human perspective’ algorithmic systems in order to develop a safer space for their users.

 

Under current domestic rules, there is a lack of suitable compensation for cases in which harm is caused by AI services. Users must establish liability under the traditional ‘fault principle’, proving a wrongful action or omission that caused the physical or mental damage. The ‘black box effect’, which describes the autonomy and opacity of AI, makes it difficult to bring a successful liability claim. The proposed AI Liability Directive provides a far-reaching overhaul of the civil liability issues regarding AI currently governed by domestic jurisdictions, and envisages rules on non-contractual obligations as well as sanctions to guarantee non-discrimination, strict liability for higher-ranking interests, and harmonisation across Europe. The realisation of such plans aims to promote users’ trust in the algorithmic nature of online dating applications.

 

Conclusion

 

From a regulatory perspective, as more and more people set foot in the digital world, it appears crucial to pay more attention to online dating life, as it seems to have taken over the ‘classic dating process’. The rapid algorithmic development of online dating platforms, together with the providers’ high profits, does not yet create a fertile ground for European harmonisation to thrive.

 

Member States have already taken it upon themselves to enhance their national regulation of AI technology to comply with the necessary standards of user safety. In that regard, it is significantly relevant for the EU to consider the remaining Members’ reluctance to implement regulation concerning technological operators, in order to prevent the inhibition of innovation. (Philip Alston, United Nations (UN) Special Rapporteur on Extreme Poverty and Human Rights) Subsequently, a harmonised EU regulation of algorithmic-based dating is necessary to ensure the equal protection of all European citizens. Ultimately, the aim is to avoid Member States adopting divergent, higher standards within their constitutions, and to ensure that the EU functions as a pioneer in implementing algorithm- and AI-relevant regulation, as once emphasised in Opinion 2/13.

 

Moreover, the rapid development of algorithmic services and AI calls for the equally rapid development of EU measures on the matter of liability. In that regard, it is also important to keep in mind that most online dating users are empty-handed when it comes to proving liability claims against an online dating application. The European Union’s proposals on both the AI Act and the AI Liability Directive definitely reach towards developing the law surrounding dating platforms. However, there is still a lot of room for improvement, as algorithms and the regulatory measures concerning online dating applications must be ‘synchronised’ in a way that avoids harm and promotes trust for their users.

 

All in all, it might even be worth considering the incorporation of more human operation within automated algorithmic processes, especially for vulnerable platforms such as online dating applications, in order to uphold fairness and legitimacy. After all, disputability and verifiability are still fundamental aspects of the rule of law, which forms part of Europe’s most basic values and represents a ‘crucial prerequisite of democratic governance’. (Emre Bayamlioglu, 2018)