Deepfakery in Malaysian politics

Deepfakes can sound or look genuine, and voters can and do fall for them.


Thematic image only. PHOTO: PIXABAY

February 17, 2025

KUALA LUMPUR – That was written by Gemini AI, Google’s generative artificial intelligence chatbot. I prompted it to write a 50-word article using Philip Golingai’s writing style in the It’s Just Politics column. The topic was AI and deepfakes possibly influencing Malaysian politics.

I give Gemini a D+ for that effort. On other attempts, it earned a B- for imitating my writing style.

“Hahahaha! That’s pretty good. Got some of your fingerprints on it,” a senior media consultant told me when I shared the B- effort.

I was impressed with it, as it used “Donggongon”, a town in my home district of Penampang near Kota Kinabalu, in its 300-word article. However, it used too much mixed imagery, such as ikan bilis, rock, monsoon, latok-latok (sea grapes), and Sherlock Holmes, while I would have used only one or two.

I’ve been playing with AI to see if it can write like me. Earlier this week, another media practitioner contacted me, asking if I was responsible for an anonymous article.

I said no. I suspect either a brilliant writer or AI wrote it.

Our chat drifted to how AI will inevitably become more prominent in Malaysian politics.

“Like what you said, scary. As the election gets nearer, I won’t be surprised that there will be video reels of YBs in sexual acts,” he said.

I’ll be surprised if there are still political operatives out there who think that such dirty tactics work.

Such video reels are juicy, but do they work?

Let’s take the example of a politician allegedly involved in such a video. In the last two elections he stood in, he lost in GE15, the 15th General Election in late 2022, and won in a state election in 2023.

Arguably, he lost his parliamentary seat because the voters in his constituency were against his political stance and not his alleged personal behaviour. His winning the state seat shows that a juicy video reel can’t derail a political career.

But deepfakes can sound or look genuine, and voters can and do fall for them.

Take this example from the US presidential election last year. Then president Joseph Biden called on New Hampshire voters not to vote in the Democratic state primary. “We know the value of voting Democratic when our votes count. It’s important you save your vote for the November election,” he said.

However, the voice saying that was not Biden’s. It was a deepfake created with AI.

The Biden deepfake shows how elections can be manipulated with fake articles, photographs, and even fake audio and video.

This week, I saw an amateurishly produced fake poster – probably created by a human and not AI – of an edited photo of a Sabah lawyer/politician with a woman. The caption claimed he had left the woman to return to his wife. It also claimed that he was a pervert and was trying to bring down the government.

At first glance, it looked real. But when I scrutinised the photo, I noticed the man’s head was not proportionate to his body. About an hour later, the lawyer/politician posted on Facebook that the photograph was fake. Someone then shared the original photograph in a chat group; it showed a man who was not the lawyer/politician, and it had been published with a news story about the cheating.

How many of us would have believed that fake photo?

Should we be worried that AI and/or deepfake could be a factor in shaping perceptions in Malaysian politics?

Let me ask Gemini AI.

Its 185-word reply was too long, so I asked for a summary, and here is the result: “AI and deepfakes pose a significant threat to Malaysian politics by enabling misinformation, eroding trust, increasing polarisation, and potentially facilitating foreign interference, though AI can also be used for positive purposes.”

It is all about how gullible voters are. But even for someone as sceptical as I am, some articles or videos can sound and look believable. Also, as a spinmeister told me, a lie told many times will take on a life of its own and become the truth.

Here’s one example.

One of the unforgettable takes from the murder of Mongolian model Altantuya Shaariibuu in 2006 is that C4 explosives were used. But if you Google “C4”, “Altantuya”, and “court case”, the second result is a news article published in 2014 with the headline: “We never said ‘C4’, prosecutor tells Altantuya murder appeal”.

“UTK never used C4 explosives. We never said the explosive was C4. We never said that but these people from day one, they said C4,” the lead prosecutor in the appeal, Datuk Tun Abdul Majid Tun Hamzah, told the court, referring to Malaysian police’s Special Action Unit (UTK).

But most people don’t read court hearings. For them, the “truth” is that C4 was used.

The other day, I was with a defence lawyer involved in the case, and I wanted confirmation on whether what was said in court was true. He confirmed that C4 was never used.

Currently, a big rumour is circulating that might have consequences for Malaysian politics.

“My tycoon friends are calling me about it,” a businessman told me.

His call made me call politicians, journalists, diplomats, and people in the know. The funny thing is that, in trying to get confirmation, I became the one giving credibility to the rumour. It is humans, not AI, who are spreading it.

As Malaysian politics can be opaque, it is difficult to verify the veracity of all the talk making the rounds. As usual, people will say there’s no smoke without fire.

I asked Gemini AI and it answered, “I can’t help with responses on elections and political figures right now. I’m trained to be as accurate as possible but I can make mistakes sometimes.”

Sometimes, AI can’t beat the inside information humans possess.
