Saturday, 1 April 2023

The Ease and Perils of Modern Love – legal effects of algorithm-based dating (online)

Tessa Sophie Hoffmann & Chrisa Alexiou (University of Groningen)

Photo credit: Jack Sexton, via Wikimedia Commons

Introduction

‘Love can be several splendid things, a source of joy and gladness, a wonderful surprise. But it can also be a source of unease and regret’. (Tony Milligan, 2011)

Love is something that existed long before digitalisation found its way into our everyday lives. Nonetheless, especially with the recent developments in technology and artificial intelligence (AI), over 48 million people in Europe rely on dating platforms to find a suitable match. (Stefanie Duguay, 2016) Indeed, online dating has become a billion-dollar market with significant potential for growth. Most of the largest online dating platforms, such as Tinder, Grindr and OKCupid, maintain the business concept of a free basic version, creating a user-friendly environment for easy sign-up and exploration of their content. Thereby, they create the illusion that users have only benefits to gain from the available matches and connections with others. However, joining in and creating a profile automatically opens the door to numerous unregulated issues.

Dating platforms utilise many different algorithmic functions, such as setting location or age preferences, to provide users with suitable match recommendations. Tinder in particular, as a geolocation-based social app, is known for its standard swipe-right-or-left concept, which represents liking or disliking the profiles of other users nearby. The use of algorithms and AI, particularly with regard to sharing private information, confronts users with certain risks, such as a lack of algorithmic transparency, privacy and data protection issues, problems of liability for damage, and bias and discrimination.

The unique features of online dating platforms still seem to fall through the gaps of the regulation laid down in the E-Commerce Directive, which lacks the necessary comprehensiveness, as these platforms employ far more complex algorithmic functions than other common social media platforms.

In our contribution, we shed light on the particular effects of algorithmic love and argue that further improvement of EU regulation for algorithm-based online dating platforms is necessary.

The function of algorithms and AI within dating applications

Location-based services (LBS) are, in general, an important component of online dating platforms, as they give the platforms the ability to track users’ locations once consent has been given. In that way, location-based social networks (LBSNs) create a bridge between two social worlds, the real and the online one. (Huiji Gao and Huan Liu, 2013)
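
By way of illustration, the core of such a geolocation filter can be sketched in a few lines (a minimal sketch under our own assumptions; the profile fields and dictionary layout are illustrative, not any platform’s actual code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two coordinates in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_profiles(user, candidates, radius_km):
    """Keep only the candidates within the user's chosen search radius."""
    return [c for c in candidates
            if haversine_km(user["lat"], user["lon"], c["lat"], c["lon"]) <= radius_km]
```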

The incorporation of AI into the structure of online dating applications takes modern dating to another level. Artificial intelligence goes beyond ‘mere algorithmic analytical processes’ (Philip Sales, 2021), as it is capable of machine learning. This means that the AI, with the help of an algorithm, can identify certain preference patterns through the analysis of large amounts of data. Subsequently, it can observe a user’s activity and analyse his or her behaviour, preferences and patterns in order to recommend specific matches. The more information a user stores within the app, the better the algorithmic outcome will be.

Online dating platforms use algorithmic services to smoothen their functionality and to accelerate the occurrence of matches for their users. In an interview, Tinder allowed a slight insight into its operational system and how it generates matches using an Elo rating system of the kind usually used to rank chess players. Tinder’s users are effectively ranked based on the ‘liking’ value of their swipers: a user’s score depends on the scores of those who like him or her. In this way, Tinder matches up its users based on similar ‘liking ranks’, dividing the crowd into tiers of ‘desirability’. (Kaitlyn Tiffany, 2019)
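
Tinder has not published its code, but the classic Elo update it refers to, transplanted from chess to swiping, might look roughly as follows (a sketch under our own assumptions: a right swipe counts as a ‘win’ for the swiped profile, a left swipe as a ‘loss’):

```python
K = 32  # update sensitivity, a conventional Elo choice

def expected_outcome(rating_a, rating_b):
    """Elo's expectation, between 0 and 1, that A 'beats' B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_after_swipe(swiped_rating, swiper_rating, liked):
    """Adjust the swiped profile's score after a single swipe.

    A like from a highly rated swiper raises the score far more than a like
    from a low-rated one; repeated over millions of swipes, this produces
    the 'tiers of desirability' described above.
    """
    outcome = 1.0 if liked else 0.0
    return swiped_rating + K * (outcome - expected_outcome(swiped_rating, swiper_rating))
```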

Other online dating apps, such as OKCupid, are rather conventional applications which require their users to answer several questions regarding religion, political views and the like. According to sociologist Kevin Lewis, ‘OKCupid prides itself on its algorithm’, though it has not been proven how well the algorithm actually works. Also controversial are the various profile boosts, ‘super likes’ or special subscriptions users can purchase, which essentially allow a user, for a price, to jump the algorithm in order to get a better match and show up on the account of another, desired user. Others argue that ‘super likes’ are ‘pure moneymaking endeavours’.
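
OKCupid has reportedly described its match percentage as, in essence, a geometric mean of how far each user’s answers satisfy the other’s weighted preferences. A hedged sketch of that idea follows (the weight values and data layout here are illustrative assumptions, not OKCupid’s actual code):

```python
import math

# Illustrative importance weights, an assumption in the spirit of the scheme
# OKCupid has described publicly; the real values are the platform's own.
WEIGHTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50, "mandatory": 250}

def satisfaction(answers_a, prefs_b):
    """Fraction of B's importance-weighted preferences met by A's answers."""
    earned = possible = 0
    for question, (accepted_answers, importance) in prefs_b.items():
        weight = WEIGHTS[importance]
        possible += weight
        if answers_a.get(question) in accepted_answers:
            earned += weight
    return earned / possible if possible else 0.0

def match_percentage(answers_a, prefs_a, answers_b, prefs_b):
    """Geometric mean of both directions, so a one-sided fit cannot score highly."""
    return 100 * math.sqrt(satisfaction(answers_a, prefs_b) * satisfaction(answers_b, prefs_a))
```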

Other questionable algorithmic services used by online dating platforms are those intended to prevent harm by policing illegal and suspicious content. These services are likewise supplemented by AI, which performs certain calculations to check the validity of a profile or keeps a ‘conduct score’ to detect oppressive and malicious content. AI is also automated to recognise scams, spam or duplicated profiles, which the app provider is then alerted to delete. But at the end of the day, the dating companies’ goal is to keep users coming back to their apps, and, at that cost, the limits of such controls may become blurry in many respects.
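
A toy illustration of the rule-based layer such moderation typically starts from (the patterns and threshold are hypothetical; real systems stack machine-learned classifiers on top of heuristics like these):

```python
import re

# Hypothetical red-flag patterns; a production system would use far richer signals.
SPAM_PATTERNS = [r"https?://", r"send (me )?money", r"crypto", r"whats ?app me"]

def spam_score(message):
    """Crude score in [0, 1]: the fraction of red-flag patterns the message trips."""
    hits = sum(bool(re.search(p, message, re.IGNORECASE)) for p in SPAM_PATTERNS)
    return hits / len(SPAM_PATTERNS)

def should_alert_provider(message, threshold=0.25):
    """Flag the message for review once the score crosses a tunable threshold."""
    return spam_score(message) >= threshold
```

Even this toy version shows how context gets lost: any message containing a link is treated as suspect, which anticipates the over-blocking problem discussed below.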

Effects and threats of algorithmic love

Lack of algorithmic transparency

There has been a great effort to regulate issues specific to online platforms, examples of which include the proposed Digital Services Act (DSA), the Unfair Commercial Practices Directive and the proposed Artificial Intelligence Act, among others. Under the proposed DSA, platforms are obliged to suspend users’ access to their services upon finding ‘manifestly illegal content’ and to report immediately any information raising the suspicion of a serious criminal offence to local law enforcement. Dating platforms’ algorithms alert the provider to take further action, such as suspending or banning the profile.

However, the problem with this ‘smart’ set-up is that the competence of algorithms is limited and may lead to the suspension of user profiles that would have been spared had the decision been made by a human operator. The algorithm filters conversations for illegal and offensive content without fully understanding a user’s train of thought or its relevance. Furthermore, it is nearly impossible to establish any algorithmic liability for failing to recognise relevant spam messages or a ‘fake account’. This controversial approach makes the interplay of AI and machine learning even more complex.

What further adds to this issue is the fact that information about the functioning of the algorithm is often withheld. Programming entities and dating platform owners usually refrain from disclosing their particular code in order to prevent harm to their reputation. As a consequence, users have no opportunity to understand the relevant functions and classifications of the algorithm and may end up being ‘ethically challenged’.

Privacy and data protection

Whilst private information is also revealed on other social media platforms, the intensity of information exchange, especially in the early stages of online dating, is particularly pronounced on dating applications. Important privacy and data protection risks are connected to the algorithmic geolocation services and the surveillance conducted through GPS technologies in order to provide platform users with matches nearby. Other risks include the infringement of ‘the right to personal data, the right to prevent processing data which are likely to cause damage or distress, or the right not to be subject to a decision based solely on automated processing’. (Rowena Rodrigues, 2020; Sandra Wachter and Brent Mittelstadt, 2019)

At first glance, European Union privacy and data protection law offers sufficient protection and safeguards for ‘data subjects’ in the GDPR: for example, the rights to transparency, information and access in Articles 12 to 15, the rights to rectification and erasure in Articles 16 and 17, or the right to object and the protections against automated individual decision-making in Section 4 of Chapter III. Additionally, Article 22(3) provides modest safeguards, entitling the data subject to human intervention on the part of the controller in order to challenge an algorithmic decision.

However, legal scholars emphasise that ‘the opacity of AI and machine learning may reduce the accountability of their owners’ (Mireille Hildebrandt, 2016) and that, in this way, ‘they lack contestability’. (Lilian Edwards and Michael Veale, 2018) In this context, respected academics consider the current rules laid down in the GDPR insufficient, especially with regard to Article 9 on the ‘processing of special categories of personal data’ and Article 22(3), as mentioned above. (Sandra Wachter and Brent Mittelstadt, 2019) They therefore suggest extending the GDPR by adding an article specifically dedicated to ‘the right to reasonable inferences’ in order to protect users from inadequate decisions produced by automated algorithms.

Liability for damage/Legal personhood issues

The accountability of algorithms and AI poses a great issue as, after all, AI embodies a machine which lacks legal personality and, consequently, legal accountability in our legal systems. Accountability with regard to AI should require ‘the function of guiding action (by forming beliefs and making decisions), and the function of explanation (by placing decisions in a broader context and by classifying them along moral values)’. (Virginia Dignum, 2018) However, the issue of accountability goes well beyond that, also raising problems in the areas of causality, justice and compensation. (Matt Bartlett, 2019)

In February 2017, the European Parliament discussed the issue of AI’s legal personhood (Resolution on Civil Law Rules on Robotics, 2017), debating whether AI ‘fit[s] within existing legal categories or if a new category should be created’. As advised by the Expert Group on Artificial Intelligence, the EU has since refrained from creating a new legal personality for AI systems, as doing so would be ‘fundamentally inconsistent with the principle of human agency, accountability and responsibility’. (Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment)

In its report on Liability for Artificial Intelligence and other emerging digital technologies, the European Commission declared that at least a basic level of liability protection for victims of new technologies is ensured. However, the complexity, self-learning capacity, opacity and vulnerability to cybersecurity threats of these technologies may hinder the successful recovery of compensation. Nonetheless, the European Union has made several efforts to enhance its existing legal framework, notably in the Commission’s White Paper on AI and the Parliament’s Motion for a Resolution on a civil liability regime for AI.

Bias and discrimination

The algorithms used by online dating applications predominantly function through rating systems built on the personal data stored within the app, usually including an analysis of users’ ‘likes’ and ‘dislikes’. To clarify, highly ‘liked’ user profiles will show up among the possible match options of other ‘popular’ users, while fewer likes or limited activity automatically moves a profile down the algorithm’s ranking.

This algorithmic ‘right or left swipe’ ranking system elevates appeal and beauty standards while devaluing other personal human characteristics. Biases arise based on the appearance of a user’s profile, but also based on gender, sexuality and age, which are set as preferences within the first steps of the online dating sign-up.

In a 2018 report, the EU Agency for Fundamental Rights raised awareness of the possible discrimination against individuals by algorithms, recalling that the fundamental right to non-discrimination laid down in Article 21 of the EU Charter must not be violated. In its resolution on the fundamental rights implications of big data, the European Parliament outlined that, owing to the use of algorithms and automated assessments in the processing of data, ‘big data may result not only in infringements of the fundamental rights of individuals, but also in differential treatment and indirect discrimination against groups of people with similar characteristics’. Moreover, the Parliament called upon the Commission and the Member States ‘to minimise algorithmic discrimination and bias and to create an ethical framework of the transparent processing of personal data’.

Furthermore, the paid algorithmic boosts on dating apps such as Tinder are likely to cause emotional distress to many users, especially those who wish for a chance at love but are unable or unwilling to pay a monthly subscription to broaden their algorithmic horizon. The algorithm tends to recycle the options for which a user did not ‘swipe right’ the first time. According to Tinder, a subscription including ‘super likes’ triples a user’s chance of finding love. This unverifiable claim illustrates the highly questionable monetary discrimination between users on online dating platforms.

Algorithmic ranking systems split up ‘the information overload’ based on the ‘I like you, you don’t like me?’ model. Traditional recommendation systems are coded to provide a dating user with recommended matches, giving the algorithm the power to predict whom a specific user ‘likes’. However, a recommendation can only be successful if the ‘like’ is reciprocal. (Luiz Pizzato et al., 2010) Many scholars therefore suggest the use of reciprocal recommendation systems, which enhance the probability of user interaction and satisfaction. One content-based reciprocal algorithm that has been introduced applies collaborative filtering through stochastic matching to assimilate the preferences of both users and defines a new evaluation metric leading to successful matches. (Peng Xia et al., 2016) Thereby, it has been established that the communication traces of users, which the algorithm utilises, are better suited to determining a user’s preferences and often deviate from the preferences the user stated him- or herself.
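
The reciprocity idea can be sketched minimally as follows (our own illustration rather than the algorithm of Xia et al.; model.predict stands in for any estimator of one-way interest learned from behavioural traces): predict each direction of interest separately, then combine the two with a mean that punishes one-sided attraction, such as the harmonic mean:

```python
def harmonic_mean(p_ab, p_ba):
    """Near zero whenever either side is uninterested, unlike a simple average."""
    return 0.0 if p_ab + p_ba == 0 else 2 * p_ab * p_ba / (p_ab + p_ba)

def reciprocal_score(model, a, b):
    """Mutual-interest score combining two one-way predictions."""
    return harmonic_mean(model.predict(a, b), model.predict(b, a))

def recommend(model, user, candidates, top_k=10):
    """Rank candidates by predicted mutual interest rather than one-way appeal."""
    ranked = sorted(candidates, key=lambda c: reciprocal_score(model, user, c), reverse=True)
    return ranked[:top_k]
```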

The legal framework: Why re-regulate?

‘It is precisely the accumulation of power and its significant impact on users’ human rights what is calling for the creation of dedicated rules that adequately protect those rights in the online world.’ (Marta Cantero Gamito, 2021) The algorithms of dating platforms are specifically designed to show personalised and relevant content to a user and to predict which content a user will find engaging, and in that way they exercise a significant amount of power over vulnerable users. For this reason, the particularities of online dating do not fit within the threshold of current regulatory frameworks.

It is very important to recognise the influence of such power and the risks of ‘smart’ dating platforms, which call for the implementation of particular regulatory measures meeting the high algorithmic and technological standards of modern online dating. The non-regulation of AI can undermine the EU values upon which Europeans rely, including fundamental human rights and non-discrimination policies. In its White Paper on AI, the European Commission underlines the absence of a common legal framework corresponding to the rapid development of AI technology. Several Member States, such as Germany and Denmark, have already begun to take steps towards enhancing the domestic regulation of AI systems, which ultimately reinforces the impression that the current European measures are deficient.

According to the Commission’s risk-based approach, high-risk AI systems shall become subject to strict obligations in order to be allowed to remain on the EU market. Article 6 of the proposed AI Act classifies an AI system as high-risk where ‘the AI is intended to be used as a safety component of a product’ and, in addition, ‘the product whose safety component is the AI system is required to undergo a third-party conformity assessment’. Considering this article of the proposed Act, dating platforms which provide advanced algorithmic functions intended as a safety component for the provision of transparent ‘matches’ would have to comply with higher regulatory standards in order to continue operating for European users.

Furthermore, Article 5 of the Act prohibits AI systems that ‘distort a person’s behaviour’ or ‘exploit vulnerabilities of specific groups’ ‘in a manner that causes psychological harm’, as well as systems that ‘evaluate and classify the trustworthiness of natural persons’ where this leads to the ‘detrimental or unfavourable treatment’ of those persons. This clause could function as a statutory basis for regulation aiming to address the vulnerability which dominates online dating and is not covered by current platform regulation. Moreover, Article 5 would limit the discrimination of users based on algorithmic ‘swipe’ evaluation.

Dating platforms should establish an effective system to ensure that they are ‘verifiably safe, taking at heart the physical and mental safety’ of their users. Given the algorithm’s advanced autonomy and the impossibility of always retaining complete control over decision-making processes based on the data provided, Article 53 of the proposed Act suggests regulatory sandboxes. These offer a way to explore more options, especially for those AI systems that do not fit into an existing framework. Dating platforms, with their regulatory particularities, can level up their safety mechanisms by reducing the ‘time-to-market’ cycle of new safety features. Additionally, they can use sandboxes to test and validate new, safer and ‘closer to the human perspective’ algorithmic systems in order to develop a safer space for their users.

Under the current domestic rules, suitable compensation is often unavailable in cases where harm was caused by AI services. Users must establish liability under the traditional ‘fault principle’, according to which they have to prove a wrongful act or omission that caused the physical or mental damage. The ‘black box effect’, which describes the autonomy and opacity of AI, makes it difficult to bring a successful liability claim. The proposed AI Liability Directive provides a far-reaching overhaul of the civil liability issues regarding AI currently regulated by domestic jurisdictions, and envisages rules on non-contractual obligations as well as sanctions to guarantee non-discrimination, strict liability for overriding interests, and harmonisation across Europe. The realisation of such plans aims to promote users’ trust in the algorithmic nature of online dating applications.

Conclusion

From a regulatory perspective, as more and more people step into the digital world, it appears crucial to pay greater attention to online dating, as it seems to have taken over from the ‘classic dating process’. The rapid algorithmic development of online dating platforms, together with the providers’ high profits, does not yet create fertile ground for European harmonisation to thrive.

Some Member States have already taken it upon themselves to enhance their national regulation of AI technology in order to meet the necessary standards of user safety. In that regard, it is highly relevant for the EU to take into account the remaining Member States’ reluctance to regulate technological operators, rooted in the fear of inhibiting innovation. (Philip Alston, United Nations (UN) Special Rapporteur on Extreme Poverty and Human Rights) A harmonised EU regulation of algorithm-based dating is therefore necessary to ensure the equal protection of all European citizens. Ultimately, the aim is to avoid Member States adopting diverging higher standards within their constitutions and to ensure that the EU acts as a pioneer in implementing algorithm- and AI-relevant regulation, as once emphasised in Opinion 2/13.

Moreover, the rapid development of algorithmic services and AI calls for an equally rapid development of EU measures on the matter of liability. In that regard, it is also important to keep in mind that most online dating users are left empty-handed when it comes to proving liability claims against an online dating application. The European Union’s proposals for both the AI Act and the AI Liability Directive certainly advance the development of the law surrounding dating platforms. However, there is still considerable room for improvement, as algorithms and the regulatory measures concerning online dating applications must be ‘synchronised’ in a way that avoids harm and promotes users’ trust.

All in all, it might even be worth considering the incorporation of more human operation within automated algorithmic processes, especially on vulnerable platforms such as online dating applications, in order to uphold fairness and legitimacy. After all, disputability and verifiability are still fundamental aspects of the rule of law, which forms part of Europe’s most basic values and represents a ‘crucial prerequisite of democratic governance’. (Emre Bayamlioglu, 2018)
