The Indian Express (Delhi Edition)

OPENING ARGUMENT

A penal code for AI

The EU law is the first of its kind and provides a way to imagine the regulatory regime for artificial intelligence in the future

By Menaka Guruswamy

ON MARCH 13, the European Parliament passed the Artificial Intelligence Act (AI Act, 2024), laying down “harmonised rules” for artificial intelligence. This is a remarkable law. It is the first set of comprehensive regulations to govern AI. It is also the first regulatory regime that recognises and appreciates different levels of AI, and their varied kinds of utility and potential harm. Intended to regulate AI in the 449 million-strong European Union (EU), it will have global impact and will shape how the law engages with AI. It will also influence the growth and development of AI by placing the onus of potential harm on the providers of AI services.

First, let us look at the new law. The EU says that the AI Act, 2024, was passed with the aim to “improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems in the Union.” The new law seeks to ensure that the Charter of Fundamental Rights of the European Union, 2000, and other European laws govern the provision and use of AI within the EU.

Article 2 of the AI Act, 2024, states that it will apply to providers of AI services in the European Union, irrespective of whether those providers are located in the EU or in a third country. Hence, what triggers liability is a footprint of use in the EU. Article 3(1) defines an AI system as a “machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Importantly, Article 5 prohibits certain AI practices, including placing on the market or using “an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques with the objective or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to take a decision that that person would not have otherwise taken in a manner that causes a person or persons significant harm”.

This is an extraordinary classification of harm — changing a person’s consciousness through manipulation or deception, resulting eventually in harm being caused to others. Given fake news, targeted algorithms and the power of social media, this truly is an important definition to address the crime of our times — altering the consciousness of human beings and getting them to act in ways they would not have otherwise.

What is also unique about this law is that it categorises AI according to its potential for harm, based on the level of intelligence of the programme. Within this framework, the law explicitly prohibits, closely regulates or permits an AI system depending on the potential risk to human beings. The AI Act, 2024, explicitly bans “harmful AI practices” that are considered a “clear threat to people”. These include, first, AI systems that deploy harmful manipulative subliminal techniques; second, AI systems that exploit vulnerable groups such as persons with disabilities; third, AI systems used by public authorities for “social scoring purposes”; and finally, “real time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.”

The AI Act, 2024, also seeks to regulate “high risk AI systems” that create an “adverse impact on people’s safety or their fundamental rights.” The law recognises two kinds of high-risk AI systems. First, those used as a safety component of a product or falling under EU health and safety legislation. Second, systems deployed in eight specific areas, including law enforcement, a list that will be updated through the necessary delegated acts. Such high-risk AI systems would have to comply with requirements covering risk management, data governance, transparency and human oversight, and must be assessed “before” being placed on the market or put into service.

Finally, AI systems presenting “limited risk”, such as systems that interact with humans (like chatbots), emotion recognition systems, biometric categorisation systems and AI systems that manipulate image, audio or video (deepfakes), would be subject to a “limited set of transparency obligations”. All other AI systems presenting low or minimal risk can be developed and used in the EU without conforming to any additional legal obligations.

The new law envisages that a European Artificial Intelligence Board, comprising representatives from member states and the Commission, will be constituted, and that within each nation state a national supervisory authority will be tasked with monitoring the application and implementation of the new law. Fines will go up to 30 million euros or 6 per cent of total worldwide annual turnover, depending on the severity of the infringement. Given the global revenue of many large tech firms, including ones that develop and use AI, 6 per cent of revenue will translate into massive sums: a firm with a worldwide annual turnover of 100 billion euros, for instance, would face a ceiling of 6 billion euros.

The new law will not come into force immediately, giving AI developers and providers time to familiarise themselves with the regulations. It will take effect in phases, over six to 36 months. For instance, obligations for high-risk AI will take effect around 36 months after the law comes into force. But “prohibited practices”, as discussed earlier, will become punishable within six months.

This European law is the first of its kind, and will provide one way to imagine the regulatory regime for AI, a technology that is steadily increasing its impact on the human species. How large is the commercial potential of AI? The Wall Street Journal recently reported that OpenAI’s Sam Altman is seeking 7 trillion dollars in investment to further develop the semiconductor graphics processing units (GPUs) used in advanced AI projects.

In my last column (“Intelligence as we don’t know it”, IE, March 2), I wrote about the company Nvidia’s hold on semiconductor GPU chips — it controls over 80 per cent of the market — which has made it an almost 2-trillion-dollar company. So, Altman’s pursuit makes perfect commercial sense. But my point here is not to focus on the dollars. It is to illustrate the potential commercial value of the global AI market.

The commerce of AI is one facet of it; its uses, benefits and harms are others. Laws and regulations in any jurisdiction will have to deal with mind-boggling commercial implications along with ever-expanding uses and abuses.

The writer is a Senior Advocate at the Supreme Court of India

Illustration: C R Sasikumar
