July 24, 2023
PETALING JAYA – The Science, Technology and Innovation Ministry is looking into the possibility of regulating artificial intelligence (AI) applications in Malaysia, including labelling material produced by such apps as “AI-generated” or “AI-assisted”.
Minister Chang Lih Kang said the ministry is considering spearheading the drafting of a Bill, a process that would involve consultations with technology experts, legal professionals, stakeholders and the public to ensure the legislation is robust and relevant.
“It is a strategic move considering the global trend towards stronger regulations around AI usage,” he told Sunday Star.
Chang said due to the widespread use of AI, it would be essential to label any material produced by generative AI as “AI-generated” or “AI-assisted” to ensure transparency and enable informed consumption.
“We should actively explore and advocate for policy measures that require content produced entirely or in part by AI, to be clearly identified.
“Additionally, adopting global standards for AI transparency and pushing for relevant certification can bolster these transparency efforts.
“These standards might include guidelines on how to label AI-produced content and how to provide easy-to-understand explanations about the workings of AI systems,” he said.
In March, the World Economic Forum reported that the European Union was working on a legal framework for regulating the use of AI, chiefly focused on establishing rules on data transparency, quality, accountability, and human oversight.
Dubbed the “AI Act”, the legislation is also designed to resolve “ethical questions and implementation challenges” in various industries, including education, finance, and healthcare.
On Friday, AI companies, including OpenAI, Alphabet and Meta, made voluntary commitments to the US government to implement measures such as watermarking AI-generated content.
Chang pointed out that such a Bill in Malaysia would cover crucial aspects such as data privacy and public awareness of AI use.
“It would be important for this AI Act to, among others, encompass areas such as transparency, data privacy, accountability, and cybersecurity.
“The legislation could also include provisions for educating the public about AI and promoting research and development in the field,” he said.
The legislation, he said, would not curtail the development of AI technology, adding that it is important to balance the need to manage risks with the potential for innovation, while ensuring AI continues to contribute positively to the economy and society.
“It is also crucial for the ministry to continuously advance research and development in AI and machine learning technologies, promote ethical guidelines, and support innovation that can help detect and counter misinformation and other forms of harmful content,” he said.
On the possible abuse of AI in elections through libellous content or misinformation, Chang said such risks underscore the need for clear regulations.
“It is crucial to have strong legal frameworks and ethical guidelines for AI use.
“This could include laws that mandate transparency about the source of information, and severe penalties for those who use AI tools to spread false information.
“We also need to work with relevant ministries, social media companies and other platforms where misinformation is often spread, pushing them to increase their efforts to identify and remove such content,” he said.
Chang also said people would need to be taught to recognise AI-generated content to help them form informed opinions and make informed choices.
He stressed the need to develop resources and public awareness campaigns on the basics of AI and how it is being used to generate content.
“This includes understanding the biases that can be inherent in AI, as well as the distinction between human-produced and AI-produced content.
“Raising awareness about AI has many advantages. It helps people make better choices and decisions, encourages them to be more critical about the media they consume, and enables them to participate in discussions about AI rules and guidelines.
“Ultimately, it can lead to a more cautious and aware community, reducing the impact of AI-generated misinformation,” he said.