The Indian Express (Delhi Edition)
OPENING ARGUMENT
A penal code for AI
EU law is the first of its kind and provides a way to imagine the regulatory regime for artificial intelligence in the future
ON MARCH 14, the European Parliament passed into law the first comprehensive regulatory regime for artificial intelligence, laying down “harmonised rules” called the Artificial Intelligence Act (AI Act, 2024). This is a remarkable law. It is also the first regulatory regime that recognises different levels of AI, and their varied kinds of utility and potential harms. Intended to regulate AI in the 449 million-strong European Union (EU), it will have global impact and shape how the law engages with AI. It will also influence the growth and development of AI by placing the onus of potential harm on the providers of AI services.
First, let us look at the new law. The EU says that the AI Act, 2024, was passed with the aim to “improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems in the Union.” The new law seeks to ensure that the Charter of Fundamental Rights of the European Union, 2000, and other European laws govern the provision and use of AI within the EU.
Article 2 of the AI Act, 2024, states that it will apply to AI providers for services in the European Union, irrespective of whether the providers are in the EU or in a third country. Hence, a footprint of use in the EU is sufficient for liability to attach. Article 3(1) defines an AI system as a “machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
Importantly, Article 5 prohibits certain AI practices, including the placing on the market, putting into service or “the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques with the objectives or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to take a decision that that person would not have otherwise taken in a manner that causes a person or persons significant harm”.
This is an extraordinary classification of harm — changing a person’s consciousness through manipulation or deception resulting eventually in harm being caused to others. Given fake news, targeted algorithms and the power of social media, this truly is an important definition to address the crime of our times — altering the consciousness of human beings and getting them to act in ways they would not have otherwise.
What is also unique about this law is that it categorises AI by its potential for harm, based on the level of intelligence of the programme. Within this framework, the law either explicitly prohibits, heavily regulates or permits AI systems, based on the potential risk to human beings. The AI Act, 2024, explicitly bans “harmful AI practices” that are considered to be a “clear threat to people”. These include, first, AI systems that deploy harmful manipulative subliminal techniques; second, AI systems that exploit vulnerable groups, such as persons with disabilities; third, AI systems used by public authorities for “social scoring purposes”; and finally, “real time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.”
The AI Act, 2024, also seeks to regulate “high risk AI systems” that create an “adverse impact on people’s safety or their fundamental rights.” The law recognises two kinds of high-risk AI systems. First, those used as a safety component of a product or falling under EU health legislation. Second, systems in eight specific areas, including law enforcement, which will be updated through necessary delegated acts. Such high-risk AI systems would have to comply with requirements including risk management, data governance, transparency, and human oversight that must be assessed “before” being placed on the market or put into service.
Finally, AI systems presenting “limited risk”, such as systems that interact with humans like chatbots, emotion recognition systems, biometric categorisation systems and AI systems that manipulate image, audio or video (deepfakes), would be subject to a “limited set of transparency obligations”. All other AI systems presenting low or minimal risks can be developed and used in the EU without conforming to any additional legal obligations.
The new law envisages that a European Artificial Intelligence Board, comprising representatives from member states and the Commission, will be constituted, and that within each member state a national supervisory authority will be tasked with monitoring the application and implementation of the new law. Fines can reach 30 million euros or 6 per cent of the total worldwide annual turnover, depending on the severity of the infringement. Given the global revenue of many large tech firms, including ones that develop and use AI, 6 per cent of revenue will translate into massive sums.
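To see why the turnover-linked ceiling matters more than the flat figure for large firms, the penalty cap can be sketched as a simple calculation. This is an illustrative sketch only, assuming (as in the Act's draft penalty provisions) that the higher of the two amounts applies; the figures are the ones cited above, not legal advice.

```python
def max_fine(annual_turnover_eur: float,
             flat_cap_eur: float = 30_000_000,
             turnover_share: float = 0.06) -> float:
    """Upper bound of a fine: the flat cap or a share of worldwide
    annual turnover, whichever is higher (an assumption based on the
    draft penalty provisions discussed above)."""
    return max(flat_cap_eur, turnover_share * annual_turnover_eur)

# A small provider with 100 million euros in turnover hits the flat cap:
print(f"{max_fine(100e6):,.0f}")   # 30,000,000

# A firm with 200 billion euros in worldwide turnover faces far more:
print(f"{max_fine(200e9):,.0f}")   # 12,000,000,000
```

For a company of that scale, the 6 per cent ceiling is four hundred times the flat 30-million-euro figure, which is the point the column makes about "massive sums".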
The new law will not come into force immediately, giving AI developers and providers time to familiarise themselves with the regulations. It will come into force in phases, varying from six to 36 months. For instance, obligations for high-risk AI will take effect around 36 months after the law comes into force. But “prohibited practices”, as discussed earlier, will become punishable within six months.
This European law is the first of its kind, and will provide one way to imagine the regulatory regime for AI, a technology that is steadily increasing its impact on the human species. How large is the commercial potential of AI? The Wall Street Journal recently reported that OpenAI’s Sam Altman is seeking 7 trillion dollars in investment into his company to further develop semiconductor graphics processing units (GPUs) used in advanced AI projects.
In my last column (“Intelligence as we don’t know it”, IE, March 2), I had written about the company Nvidia’s hold on semiconductor GPU chips — controlling over 80 per cent of the market — making it now an almost 2 trillion-dollar company. So, Altman’s pursuit makes perfect commercial sense. But my point here is not to focus on the dollars. It is to illustrate the potential commercial value of the global AI market.
The commerce of AI is one facet of it; its uses, benefits and harms are others. Laws and regulations in any jurisdiction will have to deal with the mind-boggling commercial implications along with ever-expanding uses and abuses.
The writer is a Senior Advocate at the Supreme Court of India