
Curating the Human Mind: Should it be allowed?


The draft Artificial Intelligence Act (AIA), proposed by the European Commission in April 2021 and on which the Council of the European Union adopted its general approach on December 6, 2022, is the first comprehensive attempt to legislate AI. The AIA’s “risk-based approach,” which distinguishes between AI systems posing unacceptable (and thus prohibited) risk, high risk and minimal risk, will likely be emulated by lawmakers worldwide. However, content recommender systems, the AI systems which are already part of most people’s daily lives and which tell us not only what music to hear and what products to buy but also curate the news we read, are not prima facie prohibited, do not fall within any of the eight “high risk” areas and may well escape regulation altogether.

A recommender system is a type of information filtering system that provides suggestions and recommendations tailored to, and deemed most relevant for, a particular individual. For example, the friends recommended on Facebook are the output of an AI recommender system, as are the suggested products which appear while shopping online. As early as 2015, Netflix claimed that its recommender system influenced the choice for about 80% of hours streamed.1 In 2018, YouTube reported that 70% of the time watched on its platform was driven by its recommender system.2
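To make the mechanism concrete, the minimal sketch below is a hypothetical illustration, not any platform’s actual code. It shows the core idea behind collaborative filtering, one common recommender technique: score unseen items for a user by weighting other users’ histories by how similar those users are to her.

```python
import numpy as np

# Hypothetical toy data: rows are users, columns are items (e.g. videos);
# 1 means the user engaged with the item, 0 means they did not.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 0, 0, 1, 1],   # user 1
    [0, 1, 1, 0, 0],   # user 2 -- the user we recommend for
])

def cosine_similarity(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, interactions, top_k=2):
    target = interactions[user_idx]
    scores = np.zeros(interactions.shape[1])
    # Weight every other user's history by its similarity to the target user.
    for other_idx, history in enumerate(interactions):
        if other_idx != user_idx:
            scores += cosine_similarity(target, history) * history
    # Do not re-recommend items the user has already engaged with.
    scores[target > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_k]

print(recommend(2, interactions))  # indices of items that similar users engaged with
```

The point for present purposes is that the ranking is driven entirely by behavioural similarity, so the system naturally keeps serving a user more of what people like her have already consumed.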

Content recommender systems have a direct effect on fundamental rights, for example, human freedom. If the human brain is subjected to constant manipulation, fed only the kind of news content already read and liked and denied access to other information, then freedom of thought no longer exists. Recommender systems also jeopardize democracy, as illustrated by social media’s link to the January 6, 2021 attack on the U.S. Capitol and the polarization of American society; access to information is necessary to debate issues, which, in turn, is the cornerstone of democracy.

Research studies have shown that recommender systems provide different recommendations and recommendation quality to different users, depending on their gender, age, ethnicity, or personality.3 The provision of such varying recommendations violates the fundamental right not to be discriminated against protected in the EU Charter. Despite the outsized role that recommender systems play in shaping human minds, freedom and, indeed, the future of democracy, the draft AI Act does not even mention the term “recommender systems” and appears to have relegated their regulation to the Digital Services Act (DSA).

The DSA regulates recommender systems only when used by an “online platform,” excluding other intermediaries from the definition as well as platforms which provide content to the user. Moreover, the DSA concerns itself with the risks created by recommender systems only when they are used by Very Large Online Platforms (VLOPs), meaning platforms with more than 45 million monthly users, which further limits its regulatory reach.

The DSA’s regulatory approach is based on transparency: it requires VLOPs to disclose the “main parameters” of their recommender systems in an easily comprehensible manner so that users understand how information is prioritized for them. VLOPs are also required to enable users to modify or influence those “main parameters,” including by offering at least one option that is not based on profiling. However, the DSA fails to define the term “main parameters.” It is also unclear how recommendations can be provided without using profiling.
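One plausible reading of the non-profiling requirement is that a platform must offer at least one ranking that uses no data specific to the individual user, such as reverse-chronological or popularity-based ordering. The sketch below is purely illustrative of that contrast; it is an assumption about what compliance could look like, not the DSA’s text or any platform’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: str
    created_at: datetime
    global_engagement: float   # likes/shares across all users (not user-specific)
    predicted_interest: float  # output of a profiling model for this particular user

def rank_with_profiling(posts):
    # Personalised feed: ordered by a model's prediction of this user's interest.
    return sorted(posts, key=lambda p: p.predicted_interest, reverse=True)

def rank_without_profiling(posts):
    # Non-profiling alternative: ordered by recency alone,
    # using no information about the individual user.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

Whether such a fallback would satisfy the DSA in practice depends on how “main parameters” is ultimately interpreted, a term which, as noted above, the DSA leaves undefined.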

The DSA also imposes a “due diligence” requirement on VLOPs by requiring them to conduct annual risk assessments of ‘any significant systemic risks stemming from the functioning and use made of their services in the Union.’4 The systemic risks relate to (a) the dissemination of illegal content, (b) any negative effects for the exercise of fundamental rights, or (c) the intentional manipulation of the service ‘with an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, or actual or foreseeable effects related to electoral processes and public security.’ Based on this risk assessment, VLOPs are then required to put in place reasonable, proportionate and effective mitigation measures, which include the adaptation of their recommender systems.5 However, this raises an obvious difficulty: recommender systems which filter news content will certainly have actual and foreseeable effects on civic discourse, electoral processes and public security, as the January 6, 2021 attack on the U.S. Capitol showed. Recommender systems would have to be barred from recommending any news items or political issues for these risks not to exist.

In sum, the DSA regulates the recommender systems used only by a few VLOPs and is limited to imposing transparency obligations vis-à-vis users and the right of a user to demand an alternative, non-profiling-based recommendation. In effect, the DSA has shifted the state’s regulatory obligation onto the public. However, expecting the public to rein in recommender systems is illusory, particularly where the scope of the disclosure obligations is undefined.

This brings us to the AIA. Article 5(1)(b) of the AIA prohibits “the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behavior of a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm.” On the basis of internal Facebook/Instagram studies disclosed by whistleblower Frances Haugen, the recommender systems those platforms use to curate and recommend content to teenage girls should attract Article 5(1)(b).

In 2021, Facebook whistleblower Frances Haugen revealed internal studies conducted by Facebook which showed that the company knew that Instagram negatively affected teenagers by harming their body image and making eating disorders and thoughts of suicide worse. The Wall Street Journal has reported that Instagram’s internal research found that “The tendency to share only the best moments, a pressure to look perfect and an addictive product can send teens spiraling toward eating disorders, an unhealthy sense of their own bodies and depression.”6 The Wall Street Journal further reported that the same internal research warned that the Explore page, which “serves photos and videos curated by an algorithm can send users deep into content that can be harmful.” An internal Facebook presentation showed that, among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram.7 In view of these internal reports, the recommender systems used by Instagram appear to be an example of an AI system which exploits the vulnerabilities of teenage girls, materially distorts their behavior and is likely to cause harm. This evidence should be sufficient for these recommender systems to fall within the ambit of prohibited AI systems under the AIA. While the whistleblower’s evidence has not resulted in an amendment of the AIA to date, the dangers of recommender systems have led a U.S. school district to take action.

In January 2023, Seattle’s public school district filed a lawsuit in a U.S. District Court against TikTok, Facebook, Instagram, Snapchat and YouTube, accusing them of creating a mental health crisis among America’s youth by recommending, distributing and promoting content “in a way that causes harm.” The lawsuit states, “Defendants affirmatively recommend and promote harmful content to youth, such as pro-anorexia and eating disorder content.”8 The lawsuit alleges that the social media companies have created a “public nuisance” by hooking tens of millions of students across the country into positive feedback loops of excessive use and abuse of the social media platforms, which has resulted in rising rates of anxiety, depression, thoughts of self-harm and suicidal ideation among young people.

By alleging a “public nuisance,” the plaintiffs have marshalled tort law to regulate AI. This regulatory approach is similar to that of the AIA, whose legal moorings lie very much in product liability in tort. As the AIA is still in draft form, European lawmakers should seize on the wealth of information already available thanks to Frances Haugen and take cognizance of the dangers of certain uses of recommender systems on kids and other vulnerable groups. In the next stage of negotiations, recommender systems aimed particularly at kids should be brought within the ambit of unacceptable risks, without waiting for time-consuming litigation to bring this about in the years ahead. The mental health and safety of our children demands it.

Aparna Viswanathan
Bar-at-Law (of Lincoln’s Inn), Attorney (admitted in NY, DC, CA)


1See Sebastian Felix Schwemer, “Recommender Systems in the EU: From Responsibility to Regulation,” Morals & Machines, 2/2021.
2See discussion in Sebastian Felix Schwemer, “Recommender Systems in the EU: From Responsibility to Regulation,” Morals & Machines, 2/2021.
3Tommaso Di Noia, Nava Tintarev, Panagiota Fatourou, Markus Schedl, “Recommender Systems under European AI Regulations,” Communications of the ACM, April 2022, Vol. 65, No. 4, pp. 69-73.
4DSA, Art. 26(1).
5DSA, Art. 27(a).
6Georgia Wells, Jeff Horwitz and Deepa Seetharaman, “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show,” The Wall Street Journal, September 14, 2021.
7Ibid.
8“Seattle schools sue tech giants over social media harm,” tech.hindustantimes.com, accessed on 1.9.23.


APARNA VISWANATHAN received her Bachelor of Arts (A.B.) degree from Harvard University and her Juris Doctor (J.D.) from the University of Michigan Law School. She is called to the Bar in England (of Lincoln’s Inn) as well as in New York, Washington D.C., California and India.

In 1995, Ms. Viswanathan founded Viswanathan & Co, Advocates, a firm based in New Delhi. Since then, she has advised over 100 major multinational companies doing business in India and argues cases before the Delhi High Court and the Bombay High Court.
