January 18, 2024
SINGAPORE – The regulation of artificial intelligence (AI) will come in a spectrum, with deepfakes placed at the extreme end, likely needing the heavy hand of the law to rein them in, Communications and Information Minister Josephine Teo said on Jan 17.
Deepfakes, which involve AI tools being used to fraudulently create images in the likeness of others, are “an assault on the infrastructure of fact” and pose a problem for all societies, she said during a panel discussion at the World Economic Forum (WEF), which is being held in Davos, Switzerland, between Jan 15 and 19.
Mrs Teo spoke in the session titled 360° on AI Regulations alongside European Commission vice-president for values and transparency Vera Jourova, White House Office of Science and Technology Policy director Arati Prabhakar, and Microsoft vice-chair and president Brad Smith.
Mrs Teo said a risk-based approach could be taken to regulate the AI industry without hampering innovation, with laws for extreme matters like deepfakes, and “lighter” frameworks and guidelines that can apply to innovation on the other end of the spectrum.
“There is a real sense that (deepfakes are) an issue that all societies, regardless of political model, will have to deal with. And what is the right way of dealing with deepfakes?”
She added: “I cannot see an outcome where there isn’t a law in place. Exactly in what shape or form it will take, we will have to see.”
Ms Jourova, who sits on the European Commission, said concerns about AI-driven disinformation have prompted European regulators to mandate that AI-made content be labelled.
The European Union’s AI Act passed in December will eventually require all AI-generated content to be watermarked.
She added: “For me, it is a nightmare (if) voters are manipulated in a hidden way by means of AI and a combination of targeted disinformation. It would be the end of democratic elections.”
Singapore’s Model AI Governance Framework for Generative AI, announced on Jan 16, identifies nine key dimensions of AI governance, such as accountability and security. It expands on the existing 2019 framework, which covers only traditional AI, amid rapid developments in generative AI.
Content provenance is a key way to address the misuse of AI, the framework stated, referring to technical solutions such as digital watermarking that label AI-generated content and make its origin traceable.
This comes after a spate of deepfakes hit Singapore, including videos of Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong, whose likenesses were used in scam videos to promote investment products.
The authorities also announced on Jan 10 that $20 million had been earmarked for a new research initiative to tackle the rising scourge of deepfakes and misinformation.
Global standards needed
Asked how easy it would be for Singapore to navigate differing AI standards globally, Mrs Teo said that views on the use of AI and its risks are split.
But this is a “divergent phase” and views will likely converge as AI’s uses and risks become clearer, she added.
“We can’t have rules that we made for AI developers deployed in Singapore only, because they do cross borders… These have to be international rules.”
Mr Smith of Microsoft said many of the regulations around the world have shaped up around similar concerns. These build on existing fundamental laws in data privacy, competition and consumer protection that already apply to AI, even though they may not have been written for the technology.
Asked about China’s role in influencing global AI, Ms Jourova said Europe and China held similar views on how AI should be used, but differed on the use of AI for surveillance.
“The main issue was how far to let the states go in using AI, especially in law enforcement, because we want to keep this philosophy of protecting the individual and balancing it with national security measures,” she said. “So here, we cannot have common language with China.”
Mrs Teo said China has been open regarding its use of AI and has published its expectations for businesses. “If you go to China and you talk to its AI developers, there is no misunderstanding on their part about the expectations that their government has on them.
“If your AI models are primarily going to be used within the enterprise sector, there is a light touch (in regulation). But if it is going to reach consumers in society, there are a whole host of requirements that will be made.”