January 11, 2024
SINGAPORE – Deepfakes circulating here could be watermarked in the future to alert viewers, and the same watermarking technology could be used to label trusted content.
This is among a new arsenal of detection tools Singapore is developing to tackle the rising scourge of deepfakes and misinformation. The tools will be designed under a new $20 million initiative to build online trust and safety.
Announcing the efforts in Parliament on Jan 10, Minister for Communications and Information Josephine Teo said Singapore needs to grow new capabilities to keep pace with scammers and online risks. The most worrisome is the misuse of deepfakes to create convincing scam pitches.
“Our digital way of life has exposed us to new risks. Cyber attacks, scams and harmful content pose a growing threat to our safety and security. As many MPs have noted, trust in society, so crucial for normal human interactions, could be undermined,” she said.
And even though the Government has taken bold steps to counter these risks, including enacting new laws, it must continue to do more of the right things, said Mrs Teo.
She was responding to a motion, which she supported, filed by five members of the Government Parliamentary Committee (GPC) for Communications and Information and five other MPs.
Titled Building An Inclusive And Safe Digital Society, the motion contains 13 recommendations to better safeguard online transactions, detect deepfakes and scams, and educate the whole of society to take part in digital activities safely.
The GPC members who filed the motion are chairwoman Tin Pei Ling, deputy chairman Alex Yam and members Sharael Taha, Hany Soh and Jessica Tan. The five People’s Action Party MPs involved in the motion are Mr Darryl David, Ms Mariam Jaafar, Ms Nadia Ahmad Samdin, Mr Yip Hon Weng and Dr Wan Rizal.
The recommendations include having the Government take the lead in setting up information-sharing platforms to help the public better detect scams, ensuring that device makers and digital platforms provide stronger safeguards against malware, and holding social media services accountable for harmful content and malicious ads.
Parliamentarians unanimously supported the motion and spoke on a wide range of issues, from deepfakes that manipulate public opinion to online scams and helping vulnerable groups navigate the digital space.
Dr Tan Wu Meng (Jurong GRC) said the issue of deepfakes is a serious matter for all democracies, and called on the Government to look at ways of electronically watermarking content as proof that it is real.
“If we can no longer discern easily what is real and not real, you can’t even have a functioning democracy. No government, regardless of which political party they come from, will be able to govern in any country without that fundamental basis for deliberative democratic discussion,” he said.
Dr Tan raised the example of Prime Minister Lee Hsien Loong, who was recently targeted by malicious actors who used his likeness in scam videos promoting investment products.
“That’s just deepfake 1.0. Project that forward, three, five or 10 years with more computing power, you can imagine how authentic those deepfakes are going to be,” he said.
As generative artificial intelligence (AI) becomes mainstream, it has become easier to create and spread deepfakes and misinformation.
In a year of record elections, with at least 40 countries and territories heading to the polls, concerns have risen over how such manipulated content could influence voters.
Singaporeans could head to the ballot box in 2024 too, as a leadership transition is set to take place by November.
Mrs Teo said a Centre for Advanced Technologies in Online Safety (Catos) will be set up to hone Singapore’s expertise in detecting deepfakes and online misinformation.
The centre comes under the Online Trust and Safety (OTS) Research Programme led by the Ministry of Communications and Information. Running from 2023 to 2028, the OTS programme receives $20 million in funding under the Smart Nation and Digital Economy domain of Singapore’s Research, Innovation and Enterprise 2025 plan. These funds will be used to boost the research capabilities of Catos.
Catos will be hosted by the Agency for Science, Technology and Research and will focus on building and customising tools to detect harmful content such as deepfakes and non-factual claims, and test technologies such as watermarking and content authentication.
The centre will also identify societal vulnerabilities and develop potential interventions, like flagging or correcting misinformation, that could reduce Internet users’ susceptibility to harmful online content.
“These research efforts will also help inform new legislation or regulations that we may need for issues like deepfakes and which we are studying,” said Mrs Teo.
Catos will be officially launched at an inaugural online trust and safety forum in the first half of 2024. The event will feature international experts and showcase the first version of technology solutions by Catos for trial and adoption.
In the lead-up to its official launch, Catos has, since April 2023, been in conversation with local researchers, technology developers and industry players to raise awareness of the challenges and opportunities in online trust and safety.
Through its workshops and seminars, Catos has built a professional online trust and safety community of more than 100 participants from academia, industry and public agencies.
To date, more than 30 professionals are involved in Catos’ work, including scientists, engineers, operations staff and adjunct members. Catos draws on the multidisciplinary research capabilities of academic experts from local and global research institutes to strengthen research collaboration and knowledge exchange.
Mrs Teo also said that Singapore recently refreshed its AI strategy, with the launch of its second National AI Strategy (NAIS 2.0) in December 2023.
The Republic will soon update its recommendations on dealing with AI risks, and the Model AI Governance Framework 2.0 will be released for public consultation later in January.
“For example, we are very concerned about the misuse of generative AI to spread misinformation and carry out targeted scams. Mitigating bias and enhancing the explainability of AI models are also crucial to developing and deploying them responsibly,” she said.