Deepfake videos raise concern in India ahead of general election

Many prominent Indians have fallen prey in the last few months to a surge of deepfake content that continues to blur the line between fiction and reality.

Debarshi Dasgupta

The Straits Times


Videos used in a recent viral ad campaign were most likely doctored using artificial intelligence and overlaid with a fake voice-over, said an Indian fact-checking organisation. PHOTO: BOOM/THE STRAITS TIMES

December 18, 2023

NEW DELHI – Several popular Indian television news anchors have found themselves in the spotlight in recent weeks for the wrong reasons.

The likes of Mr Ravish Kumar, Mr Arnab Goswami, Mr Sudhir Chaudhary and Ms Anjana Om Kashyap – household names for most Indian families – were featured in a deepfake video ad campaign to promote an alleged diabetes medication online.

Boom, an Indian fact-checking organisation, reported that the viral campaign was created by editing publicly available videos of these anchors. The videos, it said, were most likely doctored using artificial intelligence (AI) and overlaid with a fake voice-over mimicking that of the anchors in Hindi, a language that deepfake detection tools have yet to master.

Mr Kumar distanced himself from any such endorsement in a post on X on Dec 7.

Many other prominent Indians have also fallen prey in the last few months to a surge of deepfake content that continues to blur the line between fiction and reality, as the tools to produce such content become more sophisticated and easily accessible.

A simple photograph of Mr Ratan Tata, one of India’s top industrialists, was animated with AI and overlaid with a fake voice-over to endorse a dubious financial investment opportunity. In another instance, a publicly available video of industrialist Mukesh Ambani, Asia’s richest person, was doctored with a fake voice-over to sell a similar investment option.

The trend is a worry for India, with its rapidly growing base of Internet users but also widespread digital illiteracy.

Misinformation and financial fraud using falsified digital content have proliferated online, and there are concerns that deepfake technology could even be put to disruptive use in India’s forthcoming general election.

According to a survey released on Nov 30 by LocalCircles, a community-based social media platform, 30 per cent of Indians surveyed said around a quarter of the videos they had watched turned out to be fake.

In November, the Indian government said it would introduce a “clear, actionable plan” to tackle deepfake content, a move prompted in part by outrage over a deepfake video featuring actress Rashmika Mandanna.

The video shows her entering a lift in a bodysuit. It was produced by manipulating an original video with British-Indian influencer Zara Patel, whose face was replaced with Ms Mandanna’s.

The actress described the incident as “extremely scary”, and has since urged people not to share such material, and women to speak up if they are bullied with such content.

India has seen a spate of deepfake pornographic videos featuring well-known actresses. These videos are created using existing pornographic footage, which is modified with AI to replace the faces of actual adult stars with those of popular actresses.

Across the world, deepfake content is being produced at a faster rate than ever. The Sumsub Identity Fraud Report 2023 showed a tenfold increase in deepfake content being detected globally from 2022 to 2023.

Ms Karen Rebelo, deputy editor of Boom, said its team of fact-checkers had noted “a lot of hyperlocal deepfakes tailored for a local audience” in India in 2023, something that had not been observed earlier.

“The tools have gotten better, the quality of effects has gotten better… You see it in a lot of places, you don’t have to go looking for it now,” she told The Straits Times.

The profusion of deepfake content has prompted concerns around its potential to perpetuate fraud, damage reputations, disrupt financial markets and even alter electoral outcomes.

In May, an AI-manipulated photo of two Indian female wrestlers went viral. They were shown smiling after being detained for protesting against a Bharatiya Janata Party (BJP) politician accused of sexual harassment. Supporters of the ruling party used the fake image to discredit their protest.

In April, a BJP politician in Tamil Nadu released two audio clips of an opposition leader allegedly accusing his own party members of corruption and praising his opponent. Deepfake experts contacted by Rest Of World, an online publication, concluded that the second clip was authentic, but the first clip might have been tampered with.

And back in 2020, the emergence of two pro-BJP deepfake videos during the Delhi state elections raised concerns about the potential misuse of the technology, which many political parties are keen to exploit to extend their outreach ahead of the 2024 general election.

“I would really like the Election Commission of India to come out with a regulation banning political parties across the board from using any form of deepfake or generative AI in their campaigns,” added Ms Rebelo. “It is a low-hanging fruit, but I doubt something like that will happen.”

The Indian government is working on a draft regulation to stem the spread of deepfake content. According to local media reports, the plan could include penalties for creators or uploaders of such content, as well as the platform hosting it.

India lacks specific legislation to address deepfakes and AI-related crimes, with existing rules under the Information Technology Act or the Indian Penal Code being deployed to tackle this challenge.

Experts argue that dedicated legislation is needed to regulate deepfakes more effectively. Such laws must combine preventive and punitive approaches, said Dr A. Nagarathna, co-director of the Advanced Centre on Research, Development and Training in Cyber Law and Forensics at the National Law School of India University.

While a preventive approach could require online platforms to flag content that may be fake or altered, a punitive approach needs to clearly define the offences associated with deepfake misuse and specify who is liable, and to what extent.

But Dr Nagarathna noted that there are several challenges. These include questions around jurisdiction in cyberspace where national boundaries do not exist, the need to train law enforcement agencies on rapidly evolving deepfake technologies, as well as the collection and analysis of evidence associated with the abuse of such technology.

“Most importantly, we must design laws that are comprehensive but not vague, so that their scope is wide enough to cover forthcoming forms of offences committed by using technology,” she told ST.
