Annelieke Mooij* and Anuj Puri**
*Assistant Professor at the Public Law & Governance
Department, Tilburg Law School
** Post-Doctoral Researcher at the Public Law &
Governance Department, Tilburg Law School
Photo credit: piqsels, via Wikimedia Commons
Introduction
General purpose Large Language Models (LLMs) are amongst the most discussed innovations of the century, with AI developers even being named persons of the year by the Times magazine. Amongst the leading general purpose LLMs, perhaps the most famous is ChatGPT, which is offered by OpenAI. In light of such success, it may be surprising for many to learn that OpenAI operates at huge losses. Its annual revenue is predicted at 13 billion dollars, which covers only a fraction of its computing costs, which total approximately 1.4 trillion dollars over the next eight years. It was therefore not entirely unexpected that OpenAI was preparing ChatGPT for the inclusion of advertisements. The potential introduction of such advertisements raises significant ethical and legal concerns.
Consider the following excerpt from ChatGPT’s Memory FAQ:
ChatGPT can
remember useful details between chats, making its responses more personalized
and relevant. As you chat with ChatGPT, whether you’re typing, talking, or
asking it to generate an image, it can remember helpful context from earlier
conversations, such as your preferences and interests, and use that to tailor
its responses.
Depending upon one’s penchant for customization or preference for privacy, such features may increase usability, raise privacy concerns, or both. The inclusion of advertisements within general purpose LLMs should, however, concern even the least privacy-conscious users.
ChatGPT has already taken steps to provide users with customized in-conversation shopping and instant check-out, thereby creating new potential avenues for the manipulation of consumers. Hence, it is not surprising that its plan to introduce ads was met with backlash. Most of the critique, however, seemed focused on the inclusion of advertisements in ChatGPT Pro plans and the poor quality of the suggested ads. The advertising plans have purportedly been put on hold for now to improve ChatGPT’s core features, including personalization. It is not unlikely that ChatGPT may roll out an improved version that includes personalized ads. Hence, there exists an urgent need to examine the possibility of such advertisements manipulating consumers.
Manipulation Risks
Consider a potential scenario where an individual is in distress over the fallout of a personal relationship and reaches out to a general purpose LLM like ChatGPT for advice. The LLM responds by advising the user to spend time and money on self-care by shopping for products such as clothes and shoes, with “helpful” links to shopping websites and perhaps a “helpful” image of the product. The user’s prior usage history may lead to their vulnerable situation being exploited for surveillance capitalist purposes. Such plausible uses of users’ information by the firms developing and deploying general purpose LLMs raise concerns pertaining to the use of manipulative techniques.
From an ethical perspective, manipulation can be understood in various ways: as the introduction of a non-rational influence (which renders it closer to a subliminal technique), as a form of pressure, and as a form of trickery (which is conceptually linked to deception). Susser et al. have defined manipulation as imposing a hidden or covert influence on another person’s decision-making and offered a widely accepted account of online manipulation as the use of information technology to covertly influence another person’s decision-making. Manipulation understood in this manner raises concerns about the covert exploitation of an LLM user’s emotional vulnerabilities for commercial purposes. Before we address the question of existing legal remedies, it would be helpful to highlight some of the backdrop conditions which pave the way for the potential manipulation of consumers.
Two common conceptual concerns lie in the backdrop of the purported use of manipulative AI: trust and anthropomorphization. The propensity of users to trust general purpose LLMs with queries pertaining to all aspects of their lives, even when those LLMs are not trustworthy, is at the heart of the manipulation risks. Secondly, the conversational nature of the interaction with the LLM increases the odds of the user being exploited, on account of the tendency to anthropomorphize such interactions. It is worth noting that the undeserved inducement of trust and anthropomorphization are born of the design choices made by the developers. The ability to rely on previous conversations, the covert nature of the influence exercised, the trusting propensity of users, and the tendency to anthropomorphize the conversation together create fertile ground for potentially long-lasting manipulation of the user. This is where the legal remedies provided under the EU AI Act have an important role to play in protecting vulnerable users.
Legal Remedies
Article 5 of the AI Act prohibits the deployment of manipulative AI. It is, however, difficult to define what constitutes manipulation. According to the Commission’s Guidelines on the AI Act, “[m]anipulative techniques are typically designed to exploit cognitive biases, psychological vulnerabilities, or situational factors that make individuals more susceptible to influence”. This raises the question of when a ChatGPT advertisement exploits a psychological vulnerability and/or a situational factor, and whether legal distinctions between vulnerabilities can reasonably be made.
One way of addressing the question of vulnerability is by examining the tendency to anthropomorphize general purpose LLMs. Users trust such LLMs as confidants without realizing that their data is being used for commercial exploitation. In view of such tendencies and dependencies, one could argue that general purpose LLM advertisements inherently exploit the vulnerability of users and are thus manipulative by design. This, however, fails to recognize that some users may only use the LLM as a search engine. This ambiguity in usage demonstrates the legal conundrum surrounding the identification of vulnerability.
When it comes to consumer protection, the question of exploitation of vulnerability has been addressed in the Unfair Commercial Practices Directive. In consumer cases, the Court of Justice of the EU has held that, in order to be considered unlawful, an advertisement must manipulate a reasonably informed and circumspect consumer. The average consumer is an interpretative standard that the CJEU develops based on the product, as an expression of proportionality. The average consumer is defined in relation to a product’s target audience, and certain groups, such as children, are considered inherently more vulnerable. Gaming platforms for children, for instance, must therefore comply with stricter advertising rules. From the perspective of vulnerability determination on the basis of the product, it is an open question what being a reasonably informed and circumspect user of AI entails. Should it be assumed that AI users always have a minimum level of knowledge that all their interactions with LLMs are aimed at commercial gain? Should the reasonable consumer be circumspect that all prompts are potential data fodder for exploiting (future) vulnerabilities, and be suspicious of all AI results at all times? Even when they look up a recipe for apple pie? There are some who would argue that advertisements based on algorithms and big data are inherently manipulative. If we accept this argument, general purpose LLMs should not be able to include any form of advertising.
As stated before, the development and deployment of AI systems is currently extremely resource intensive. A proponent of including advertisements in general purpose LLMs may therefore argue that an ad-driven revenue model reduces digital exclusion. This argument, however, begs the question whether access to AI systems that comes in the garb of exploiting users’ vulnerabilities through manipulation is equitable access at all. A more sustainable solution is perhaps not to prohibit advertising, but to regulate against exploitation. This, however, requires a shift in approach.
A possible solution could be to train AI systems to differentiate between (extremely) vulnerable prompts (questions), such as how to deal with a break-up, and prompts with lower vulnerability, such as how to bake an apple pie. This would require a shift in perspective. Rather than defining the average consumer, it would require defining the average prompt or AI interaction, whereby prompts such as “how to get over a break-up” indicate a vulnerability that is legally protected from exploitation. However, such a classification attributes to AI systems the power to distinguish between users that are in a potentially vulnerable state and those that are not. Even if such a hypothetical position were possible, it would not address all the underlying ethical and legal concerns. Algorithmic determination of vulnerability is likely to reflect the normative choices made by the developers and the computational trade-offs made in the training data sets. It is unlikely that these accurately reflect vulnerabilities without bias, as the development of AI systems is not known to reflect diversity.
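To make the proposal more concrete, the sketch below illustrates in deliberately simplified form what a prompt-level vulnerability filter might look like. It is a purely hypothetical toy example: the keyword list, the labels, and the decision to suppress advertising for “high vulnerability” prompts are our own assumptions, not any existing provider or regulatory specification, and a real classifier would be far more complex and, as argued above, far more contestable.

```python
# Hypothetical sketch only: a toy rule-based filter illustrating the idea of
# classifying prompts by vulnerability before any advertisement is attached.
# The markers, labels and threshold are invented for illustration.

VULNERABLE_MARKERS = {
    "break-up", "breakup", "grief", "lonely", "anxious", "divorce", "debt",
}

def vulnerability_label(prompt: str) -> str:
    """Return 'high' if the prompt contains a distress marker, else 'low'."""
    text = prompt.lower()
    return "high" if any(marker in text for marker in VULNERABLE_MARKERS) else "low"

def may_attach_ads(prompt: str) -> bool:
    """Advertising would be suppressed for prompts labelled as highly vulnerable."""
    return vulnerability_label(prompt) == "low"

if __name__ == "__main__":
    print(may_attach_ads("How do I get over a break-up?"))  # False: ads suppressed
    print(may_attach_ads("How do I bake an apple pie?"))    # True: ads permitted
```

Even in this crude form, the example makes the underlying difficulty visible: someone has to choose which words count as markers of vulnerability, and those choices encode the very normative and data-driven biases discussed above.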
Another avenue to explore is not to regulate the prompts, but the amount of personal history that general purpose LLMs may access to generate advertisements. Such regulation would do justice to the argument that big data and algorithms are inherently manipulative. Limiting the amount of data that can be used for advertising has the additional advantage of clarity. If, for instance, only ten data points can be used for advertising, this provides legal certainty. The difficulty, however, is enforcement, as it requires verifying source code. Further, it is difficult to construe a safe harbour provision for data collection: depending upon the nature of the data points, even limited data can be used to undermine an individual’s autonomy. Furthermore, it does not reflect the reality of people who use an LLM as a confidant, even though it is not trustworthy, on account of the seemingly anonymized and private interaction with the AI system, thus making them vulnerable to its manipulative influences.
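As a purely illustrative sketch, the snippet below shows how such a cap might be expressed inside an advertising pipeline, assuming a hypothetical rule that at most ten stored data points may feed ad selection. The constant, function name and selection rule are our own assumptions; as noted above, verifying that a provider actually enforces such a limit in its source code is the harder problem.

```python
# Hypothetical sketch only: capping the amount of personal history that may be
# used for ad personalization. The cap of ten mirrors the example in the text.

from typing import List

MAX_AD_DATA_POINTS = 10  # assumed legal cap on data points available for advertising

def ad_targeting_profile(user_history: List[str]) -> List[str]:
    """Return at most MAX_AD_DATA_POINTS items from the user's stored history.

    Everything beyond the cap remains invisible to the advertising pipeline,
    however much memory the assistant itself retains for other purposes.
    """
    return user_history[:MAX_AD_DATA_POINTS]

if __name__ == "__main__":
    history = [f"data point {i}" for i in range(1, 26)]  # 25 remembered items
    profile = ad_targeting_profile(history)
    print(len(profile))  # 10: only the capped subset feeds ad selection
```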
Questions for the future remain
While the potential introduction of advertisements in general purpose LLMs such as ChatGPT might have been paused for now, the financial incentives to justify the Silicon Valley optimism remain, and they are currently also driving policy measures across the Atlantic. The advent of these potential advertisements would not be the last attempt to test regulatory resolve in the technological battle to impinge upon human autonomy. But by taking a strong stand, and by withstanding the geopolitical pressure, the EU institutions can make it amongst the first red lines that should not be crossed.