January 17, 2024
SINGAPORE – Singapore has proposed a new governance framework for generative artificial intelligence and is seeking international feedback on it.
The new Model AI Governance Framework for Generative AI builds on an existing framework from 2019 that covers only traditional AI, and comes at a time when the generative AI scene is still developing.
The framework, developed by the AI Verify Foundation and Infocomm Media Development Authority (IMDA), is part of Singapore’s contribution to the global conversation in this space, said Minister for Communications and Information Josephine Teo on Jan 16.
Speaking to The Straits Times in Davos, Switzerland, where she is attending the World Economic Forum annual meeting, Mrs Teo said AI governance cannot be done only at the country level.
“Our contribution to the global conversation is partly why we have chosen the World Economic Forum to be the launch pad for the updated framework and to seek international inputs,” she said.
“It’s also partly because this field is so nascent. We believe that in order to make progress, somebody has to offer something, put it on the table, and then the global conversation can be further enriched.”
The framework – termed the MGF-GenAI – is expected to be finalised in mid-2024.
Mrs Teo added that the framework is being developed in parallel with the National AI Strategy 2.0, which was launched in December 2023.
Examples of generative AI include popular content creation tools such as ChatGPT and Midjourney, while traditional AI includes tools that predict fraud, diseases and employee flight risk.
The new framework, which is intended to be comprehensive, identifies nine key dimensions of AI governance, including accountability, security, and testing and assurance, said Mrs Teo.
It builds on efforts such as a discussion paper by IMDA in 2023 on the risks associated with greater use of generative AI, as well as work to provide guidance on the safety evaluation of generative AI models and ongoing evaluation tests on AI products, said IMDA in a statement on Jan 16.
While Singapore is not the only contributor to the global discussion, its practical approach to AI governance – including the development of a testing toolkit like AI Verify – has earned the Republic a reputation for being pragmatic and forward-thinking, said Mrs Teo.
AI Verify is a toolkit meant to help organisations validate the performance of their AI systems against internationally recognised AI governance principles such as safety, reproducibility and transparency.
The AI Verify Foundation, set up by IMDA in 2023, lists companies such as Google, IBM, Microsoft and Salesforce among its members.
When asked how lessons learnt from the original framework shaped the new one, Mrs Teo said that when the 2019 version was launched, different sectors such as financial services and healthcare built on top of it to develop something specific to them.
MGF-GenAI will likely see a similar path, as different sectors will use generative AI in their own ways and have to find an expression of governance most suited to their circumstances, she said.
On how Singapore will raise awareness of MGF-GenAI worldwide, Mrs Teo noted that Singapore participates actively in global conversations on AI, such as through the Global Partnership on Artificial Intelligence, of which Singapore is a founding member, among other avenues.
Singapore and the United States have worked to find areas of alignment between the US’ AI risk management framework and Singapore’s governance framework, she added.
“They are in discussion with us, they are very interested in the fact that not only have we already developed a framework, we have a testing toolkit to go with it,” said Mrs Teo.
With that alignment done, the conversation is being taken to international standard-setting bodies to see whether the alignment between the two models can serve as a foundation for international standards that even more countries can adopt, she said.
“These are steps that we are taking in order to promote an environment where AI can be implemented in a responsible way. AI safety is attainable. And by doing so, we hope to provide a firmer foundation for AI innovations,” she said.
On the potential challenges of governing generative AI, as compared with traditional AI, Mrs Teo said that organisations and even the Government are still testing out how generative AI can be used, making it difficult to tell which particular uses will become widespread.
This means that AI governance at this stage is largely still at a level of setting principles and identifying what is of greater importance, she added.
On how jobs and the economy will be affected by this framework and the use of generative AI, Mrs Teo said the testing and validation of AI tools is a potential industry that could grow out of this development.
Singapore sees technology in general, and not just AI, as impacting the workforce in three ways – enhancement of productivity, potential job displacement and job reinstatement.
The Republic’s approach is to spread the productivity gains as widely as possible, while minimising the displacement effect by training people to take on new jobs or jobs enhanced by technology adoption, Mrs Teo added.
In announcing the National AI Strategy 2.0 in December, Deputy Prime Minister Lawrence Wong had unveiled plans to triple the country’s AI talent pool to 15,000 by training locals and hiring from overseas.
He also spoke of the Government’s commitment to building a trusted environment for AI and addressing moral and ethical issues in the field, such as whether AI is suited to make decisions in place of humans.
Governments now have to play an active role in shaping AI, which has raised even more profound issues, and Singapore will seek a pragmatic balance in regulation without choking innovation, he said then.
In its statement on Jan 16, IMDA said that while generative AI remains a dynamically developing space, there is growing global consensus that consistent principles are needed to create a trusted environment.
This is so that end users can use generative AI confidently and safely, while space is allowed for cutting-edge innovation.
“This proposed framework aims to facilitate international conversations among policymakers, industry and the research community, to enable trusted development globally,” said IMDA, referring to MGF-GenAI.