
An American import into European legal thinking: Can “intent” really work as the cornerstone of AI regulation?

On April 21st, 2021, the European Union issued a Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (the “Proposal”).

The Proposal purports to be the first ever legal framework on artificial intelligence (AI) and aims to promote user safety and fundamental rights without stifling scientific and technological innovation. To achieve this goal, the Proposal applies a combination of two regulatory approaches: 1) a pre-emptive prohibitive approach and 2) a sliding scale approach based on the level of risk. The Proposal prohibits AI systems that pose an “unacceptable risk” to society. It also seeks to regulate AI systems depending on their level of risk, that is, “high risk” or “low or minimal risk.” The level of risk is determined by the “intended use” of the AI system. However, by making the doctrine of “intent” its cornerstone, the Proposal falls short of its promise of creating a regulatory regime which will work in practice to protect the public, promote innovation and become the international legal standard.

The Proposal prohibits as an “unacceptable risk” those AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior in a manner that causes or is likely to cause that person or others physical or psychological harm. The Proposal further prohibits AI systems that exploit the vulnerabilities of specific groups of persons, such as children, due to their age or physical or mental incapacity, in a manner likely to cause that person or another person physical or psychological harm. AI systems that perform social scoring of natural persons for general purposes by public authorities, which may lead to discriminatory outcomes, are also prohibited. Finally, the use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for the purposes of law enforcement is prohibited except in three defined situations.

After prohibiting the foregoing AI systems, which present an unacceptable risk, the Proposal classifies in Annex III the following eight areas in which AI is “intended to be used” as “high risk”:

  • Biometric identification and categorization of natural persons,
  • Management and operation of critical infrastructure (road traffic, supply of water, gas, heating and electricity),
  • Education and vocational training,
  • Employment, workers management and access to self-employment,
  • Access to and enjoyment of essential private services and public services and benefits,
  • Law enforcement,
  • Migration, asylum and border control management, and
  • Administration of justice and democratic processes.

These “high risk” systems are then subject to a series of requirements, inter alia, ex-ante conformity assessments, implementation of risk management systems, data governance practices, technical documentation, record keeping, transparency and provision of information to users.
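To make the structure of this risk-tiering concrete, the following is a deliberately simplified sketch of the Proposal’s logic as described above (the author of this sketch’s own illustration, not text from the Regulation; the purpose strings and the risk_tier function are shorthand labels invented for the example): the tier a system falls into, and hence the obligations it attracts, turns entirely on its declared intended purpose.

```python
# Simplified illustration (not legal text): under the Proposal as described
# above, the regulatory tier turns entirely on the system's declared
# "intended purpose". The purpose strings below are shorthand labels, not
# the Regulation's wording.

PROHIBITED_PURPOSES = {
    "subliminal manipulation",
    "exploitation of vulnerable groups",
    "general-purpose social scoring by public authorities",
    "real-time remote biometric identification for law enforcement",
}

HIGH_RISK_PURPOSES = {  # the eight Annex III areas, abbreviated
    "biometric identification and categorization",
    "critical infrastructure management",
    "education and vocational training",
    "employment and workers management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def risk_tier(intended_purpose: str) -> str:
    """Classify a system by its declared intended purpose alone.
    Actual or emergent uses never enter the decision."""
    if intended_purpose in PROHIBITED_PURPOSES:
        return "unacceptable risk (prohibited)"
    if intended_purpose in HIGH_RISK_PURPOSES:
        return "high risk (conformity assessment, risk management, etc.)"
    return "low or minimal risk (transparency obligations only)"

# A social network "intended" for connecting friends and a trading system
# "intended" for profitable trading both land in the lowest tier:
print(risk_tier("connecting friends"))           # low or minimal risk
print(risk_tier("profitable trading strategy"))  # low or minimal risk
```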

An American import into European legal thinking: Can “intent” really work as the cornerstone of AI regulation?

However, the legal doctrine of “intent”, and specifically the formulation “intended to be used” by the developer of the AI system, which is the crux of determining whether an AI system is “high risk,” is prima facie unviable. We have witnessed how Facebook, which was “intended to be used” to allow friends to connect with each other, was, according to the U.S. government, intentionally used by the Russians to foment dissent in America and influence the 2016 American elections. Former U.S. President Donald Trump has been accused of using social media such as Twitter to instigate the January 6th, 2021 riot in Washington DC and to overturn the 2020 election results. Instagram, in turn, which is “intended to be used” for sharing photos, has been accused of causing psychological harm to teenage girls. These three incidents involve both the “unacceptable risk” of causing psychological harm to minors and the “high risk” of disrupting the administration of justice and democratic processes under the Proposal. However, the innocuous intended uses of social media do not fall within Annex III. Given the disconnect between the intended use of social media and its effects, which are already troubling Western democracies, it is perplexing why the drafters would focus on intended use as the legal doctrine which separates “high risk” AI systems from the rest.

All AI systems which are not “intended to be used” for the purposes specified in the eight areas in Annex III are virtually unregulated and are subject only to transparency obligations. However, the drafters have overlooked the fact that programmers only provide AI systems with rules about how to learn from data; they do not give any rules about how the system should solve the problem it is given. Therefore, if the machine devises a strategy to achieve the task assigned by the developer and the strategy is illegal, it will be impossible to hold the developer liable under the Proposal if the AI system was “intended to be used” for a purpose which does not fall within the eight categories in Annex III.

This problem is brilliantly illustrated in the following example by Yavar Bathaee in the Harvard Journal of Law & Technology, in which a developer programs an algorithm with the objective of maximizing profits or producing a profitable trading strategy.1 The AI system is intended to be used for developing a profitable trading strategy, which is not an area included in Annex III of the Proposal, and it is therefore not “high-risk.” The AI system is given access to a Twitter account, real-time stock prices of thousands of securities and popular business news. The system learns to “retweet” news articles on Twitter and often does so before and after trades, even though the developer never programmed it to “retweet.” The AI system makes many legitimate trades but also withdraws some orders before execution, conduct which could be seen as “spoof” trades. The question then arises whether the AI developer could be held liable under the Proposal, despite the fact that the developer never instructed the AI system to “retweet” or engage in spoof trades.
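As a loose, purely illustrative sketch of why intent breaks down here (a simplification written for this piece, not Bathaee’s actual system; the action names and the profit function are invented for the example), consider a toy learner that is given only an objective, “maximize profit,” plus a set of available actions. Whatever behavior emerges, such as retweeting news before trading, is discovered by the system rather than written by the programmer.

```python
# Purely illustrative toy example (not Bathaee's system, not a real trading
# bot): the developer specifies only the objective, never the strategy.

import random

ACTIONS = ["buy", "sell", "hold", "retweet_news", "withdraw_order"]

def simulated_profit(action: str) -> float:
    """Stand-in for market feedback the developer cannot fully predict.
    Here, retweeting news happens to correlate with higher profit."""
    return random.gauss(0.0, 1.0) + (0.5 if action == "retweet_news" else 0.0)

def learn_best_action(rounds: int = 500) -> str:
    """Keep whichever action scores best on average. Nothing here tells the
    system which behaviors are lawful or acceptable; it only optimizes the
    objective it was given."""
    totals = {action: 0.0 for action in ACTIONS}
    for _ in range(rounds):
        for action in ACTIONS:
            totals[action] += simulated_profit(action)
    return max(totals, key=totals.get)

if __name__ == "__main__":
    # The "strategy" that emerges (very likely retweeting news) was never
    # specified by the developer, only rewarded by the profit objective.
    print("Learned action:", learn_best_action())
```

The point is not the toy code itself but the division of labor it illustrates: the developer writes the objective and the learning rule, while the strategy, including any unlawful one, is produced by the system.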

Under the Proposal, the AI market trading system is low risk and, therefore, subject only to transparency obligations. Even the Proposal’s catch-all provision in Article 67, applicable to compliant AI systems which present a risk, is unlikely to apply. Article 67 applies where the market surveillance authority of a member state finds that, although an AI system is in compliance with the regulation, it presents a risk to the health or safety of persons, to compliance with obligations under Union or national law intended to protect fundamental rights, or to other aspects of public interest protection. A market trading system is unlikely to be considered as posing a risk to the health or safety of persons or to compliance with laws protecting fundamental rights or the public interest. In short, the developer of the market trading system, which arguably engaged in market manipulation, is unlikely to be held liable under the Proposal.

Modern AI systems can be compared to a naughty but highly intelligent child. Under an intent test, the parent cannot be held liable for the child’s bad behavior, because a parent almost never intends the wrongful behavior to occur. Similarly, the AI developer can potentially be held liable only by applying legal tests which do not involve a showing of intent or intended use. The parent/AI developer could be held liable if it failed to properly restrain the child/AI system or to provide proper instructions. Therefore, instead of ex ante intent, the test could be whether the particular result was reasonably foreseeable and whether or not the developer was negligent in failing to sufficiently restrict the AI.

Going back to the above example of market trading, as Yavar Bathaee argues in the Harvard Journal of Law & Technology, it was reasonably foreseeable that the AI system could engage in activities such as spoof trades; therefore, it was negligent for the AI developer not to specifically prohibit it.2 In this case, any liability would arise only from the common law, as the Proposal does not put forward a reasonableness test. However, as Yavar Bathaee points out, a “reasonably foreseeable” test will also fail where the programmer cannot reasonably foresee the conduct of the AI or the nature of the patterns the AI will find in the data.3 The uncomfortable truth is that regulation of AI systems does not fit within conventional legal parameters. The intent doctrine underlying the regulatory framework of the Proposal needs to be re-examined.

1 Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation,” Harvard Journal of Law and Technology, vol. 31, no. 2, Spring 2018.
2 Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation,” Harvard Journal of Law and Technology, vol. 31, no. 2, Spring 2018.
3 Ibid.


Aparna Viswanathan | IE LawAhead

Aparna Viswanathan received her Bachelor of Arts (A.B.) degree from Harvard University and her Juris Doctor (J.D.) from the University of Michigan Law School. She is called to the Bar in England (Lincoln’s Inn) as well as in New York, Washington D.C., California and India.

In 1995, Ms. Viswanathan founded Viswanathan & Co, Advocates, a firm based in New Delhi. Since then, she has advised over 100 major multinational companies doing business in India and argues cases before the Delhi High Court and the Bombay High Court.

