Brighter Kashmir

AI To Help Win Electoral Battles

Thus, it may portend an end of human intervention at all levels in all fields, and manipulative leaders could use generative AI to influence the outcome of electoral battles

- ASAD MIRZA Email: asad.mirza.nd@gmail.com


The fast progress of AI and generative AI in recent years has opened many new frontiers for humans to scale new heights in various fields, but the darker side of generative AI might be exploited for divisive purposes in the global electoral battles of 2024. The manner in which information technology has outpaced even our wildest dreams is manifest in the fast-paced growth of AI in virtually all fields of human existence.

The manner in which AI-generated content can impersonate anyone with alarming accuracy has set alarm bells ringing. Coupled with the ways in which this technological advancement could be misused for unlawful activities, it is also very frightening. One domain that could be affected the most is the manner in which this technology could be deployed by political parties to influence voters or malign their opponents. This technological advancement is known as deepfakes. While developers could argue that their technology is evolving and mistakes are inevitable, the rapid rise of deepfakes, particularly those used for sexual harassment, fraud and political manipulation, poses an existential threat to democratic processes and public discourse.

Deepfakes could be exploited for non-consensual exploitation, financial deception, political disinformation or the dissemination of falsehoods, and they could endanger not only the integrity of individuals but also the very foundations of democratic societies. Further, they corrode trust, manipulate public sentiment, and have the potential to incite widespread chaos and violence. From 2019 to 2023, the total number of deepfakes surged by 550%, as revealed in the 2023 State of Deepfakes report issued by Home Security Heroes, a US-based organisation.

With elections slated in roughly half of the world, the potential for deepfakes to sow discord and undermine trust in institutions is more significant than ever. Various efforts are underway to combat this threat. Initiatives like the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections", launched at the recently held Munich Security Conference (MSC), represent a commendable effort to confront the challenges presented by deepfakes in electoral processes.

By uniting leading tech companies, the accord presents a unified stance against malicious actors, showcasing a shared determination to address the issue. Beyond mere detection and removal, the agreement encompasses educational initiatives, transparency measures, and origin tracing, laying the groundwork for comprehensive and enduring solutions. However, the accord's focus on the 2024 elections may overlook the ongoing evolution of the deepfake threat, potentially necessitating adjustments beyond the specified timeframe.

The efficacy of the accord also hinges on its ability to keep pace with the latest advancements. While the accord establishes guiding principles, it lacks concrete mechanisms for enforcement. Ensuring accountability among participating companies is essential for meaningful progress. Relying on self-regulation by tech companies raises concerns about potential biases in implementation, underscoring the need for transparent and impartial oversight. The era of the deepfake campaign has already begun. And as generative AI grows in ubiquity and sophistication, the fraying of social cohesion throughout the West in recent years may soon feel quaint by comparison. Rather than stoke outrage, tribalism, and conspiratorial thinking among voters, these new digital tools might soon breed something arguably much worse: apathy.

Indian Scenario

Deepfakes that spread misinformation online, exploiting the treacherous potential of a rapidly evolving AI technology, are particularly lethal for countries like India.

The concern stems from a series of recent deepfake incidents involving top Indian film stars and public figures. These prompted the government to work out a "clear, actionable plan" in collaboration with various social media platforms, artificial intelligence companies and industry bodies to tackle the issue.

Reportedly, PM Narendra Modi has said that deepfakes are one of the biggest threats faced by the country, and has warned people to be careful with new technology amid a rise in AI-generated videos and pictures.

Meanwhile, the Global Risks Report released by the World Economic Forum in January 2024 warns that as polarisation grows and technological risks remain unchecked, 'truth' will come under pressure. Misinformation and disinformation emerge as the most severe global risk anticipated over the next two years, with foreign and domestic actors alike leveraging them to further widen societal and political divides. The report further says that as close to three billion people head to the electoral polls across several economies – including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States – over the next two years, the widespread use of misinformation and disinformation, and tools to disseminate it, may undermine the legitimacy of newly elected governments.

The resulting unrest could range from violent protests and hate crimes to civil confrontation and terrorism. Misinformation and disinformation is the new leader of the top 10 rankings this year. No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called 'synthetic' content, from sophisticated voice cloning to counterfeit websites. Moreover, in Western countries in particular, an even deeper governance issue might be playing out even now. Basically, AI may also accelerate a decades-long erosion of civic engagement and social capital, particularly in liberal democracies, where citizens tend to be more secular and self-oriented and have smaller family networks than in other societies, such as those in the East.

It is predicted that generative AI will lead to even greater social fissures, by allowing more and more individuals to bypass the complex human interactions, exchanges and contests of ideas that form the bedrock of democracy. Against this background, generative AI might simply echo your thoughts and tell you what you want to hear, putting a complete end to human interaction, grasp and execution. This highlights the darker side of generative AI in the years to come. Citizens' ideals, inputs and actions on local, national and international issues might be marred, and they might be pushed to opt out of normal civic life altogether, through a false picture created in their minds, akin to humans being controlled by machines, though in the democratic scenario it will be political leaders, not machines, who will control the electorate through false narratives. Thus, it may portend an end of human intervention at all levels in all fields, and manipulative leaders could use generative AI to create empathy in their favour amongst the wider electorate and thus influence the outcome of electoral battles in a completely corrupt manner.

