Google, Microsoft and other tech giants form alliance with IMDA to tackle pressing AI issues

Members of the foundation will contribute to a software toolkit developed by IMDA that analyses datasets and AI code to check for bias, lack of transparency and other AI-related issues.

Osmond Chia

The Straits Times


Minister for Communications and Information Josephine Teo (fourth from right) with members of the AI Verify Foundation. PHOTO: IMDA

June 8, 2023

SINGAPORE – Movers and shakers in artificial intelligence (AI) have allied with the authorities here to tackle pressing issues in the field, such as bias, copyright and the technology’s susceptibility to lying.

The AI Verify Foundation comprises at least 60 global industry players. It was announced on Wednesday by Minister for Communications and Information Josephine Teo at the Asia Tech x Singapore conference at Capella Singapore on Sentosa, which runs from Tuesday to Friday.

The Singapore-based foundation includes the Infocomm Media Development Authority (IMDA), tech giants Google and Microsoft, and well-known companies that handle AI, including DBS, Meta and Adobe.

They will discuss AI standards and best practices, and create a neutral platform for collaboration on governing AI, said IMDA.

Members of the foundation will contribute to AI Verify, a software toolkit developed by IMDA that analyses datasets and AI code to check for bias, lack of transparency and other AI-related issues. The toolkit attracted interest from companies such as IBM and Dell when it was piloted in 2022. Now available to all companies, it helps them check the quality of their AI algorithms against principles laid out by the foundation, such as how well an AI can explain its decision-making process, a measure of transparency.
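The article does not detail how the toolkit performs these checks, but for illustration, the sketch below computes a demographic parity gap, one common bias metric of the kind such toolkits automate. It is a simplified stand-in, not the AI Verify toolkit’s actual code, and the column names and data are hypothetical.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0 means every group is treated identically."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical model outputs: 1 = loan approved, 0 = denied.
outcomes = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   1],
})

print(f"Parity gap: {demographic_parity_gap(outcomes, 'gender', 'approved'):.2f}")
# Prints 0.67: approvals for M (100%) far exceed those for F (33%).
```

A real audit would use many more records and several fairness metrics, but the idea is the same: quantify how differently a model treats groups so the result can be reported transparently.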

The not-for-profit foundation has set its sights on generative AI, which is capable of creating text, images and other content when prompted, and has come into the mainstream since ChatGPT was launched to the public in 2022.

Generative AI models increasingly serve as the foundation on which other apps are built. As more companies integrate these models into their services, questions are being raised about the safety and reliability of machines that appear to have a mind of their own.

Six key risks of generative AI were highlighted in a report by IMDA and Temasek-backed AI software firm Aicadium. These include mistakes made by AI, such as deceptively convincing false responses or incorrect answers to medical questions. The report also cited a case in which ChatGPT fabricated a sexual harassment scandal and pinned it on a real law professor, who had no recourse to clear his name.

AI models may also be inherently biased if their training dataset is skewed. When prompted to create an image of an “American person”, image generators would typically illustrate a light-skinned person; when prompted with “African worker”, they tend to depict individuals in ragged clothes with primitive tools, the report added.
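A simple first step toward surfacing such skew is to audit how often each group appears in the training data before a model is trained on it. The sketch below is a generic illustration with made-up labels and an arbitrary threshold; it is not drawn from the Aicadium report.

```python
from collections import Counter

def underrepresented(labels: list[str], threshold: float = 0.2) -> list[str]:
    """Return attribute values whose share of the dataset falls below
    `threshold` (an arbitrary cut-off chosen for this sketch)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [value for value, n in counts.items() if n / total < threshold]

# Made-up caption labels for an image-generation training set.
captions = (["light-skinned person"] * 80
            + ["dark-skinned person"] * 12
            + ["other"] * 8)

print(underrepresented(captions))
# ['dark-skinned person', 'other'] -- each under 20% of the examples
```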

Tests by Aicadium found that AI-generated images tended to reinforce gender stereotypes. PHOTO: AICADIUM

Generative AI can also be used by fraudsters with little technical skill to generate malicious code, launch cyber attacks or fake news campaigns and impersonate others by generating lifelike images.

Copyright issues in image generation, toxic content and privacy were also flagged as key risks of generative AI.

To tackle these risks, AI models should be accountable, with options for remedial action if harmful content is generated. Developers should also be clear about the types of training datasets used, the report recommended.

Mrs Teo, who is also Second Minister for Home Affairs, said in her opening speech that the industry needs to actively steer AI towards beneficial uses and away from harmful ones.

“This is core to how Singapore thinks about AI,” she told an audience of several hundred tech professionals. “In doing so, we hope to make Singapore an outstanding place for talent, ideas and experimentation.”

She gave examples of AI use in public service, including how it helps process feedback from citizens and how it helps prepare Singapore for an ageing population by improving clinical diagnosis and patient well-being.

Phishing detection tools also comb through 120,000 websites daily to remove spoof sites used in scams. “Without such AI in their arsenal, law enforcement agents will hardly have the capacity to focus on scam prevention or recovering the assets of victims,” she said.

“A strong desire for AI safety need not mean pulling the drawbridge to innovation and adoption,” said Mrs Teo, adding that guardrails are necessary to ensure the safe and responsible use of AI.

The knowledge shared by the foundation will help clarify how AI models should be tested before they go public, said Ms Elham Tabassi, chief of staff of the Information Technology Laboratory at the US National Institute of Standards and Technology, during a panel discussion at the conference.

She said: “One immediate need is to have guidance on how to verify and validate these models, and have the right transparency mechanisms and documentation on what verification has been done.”

By opening the discussion on the guiding principles for AI to more parties, the foundation helps to ensure better representation, said software company Salesforce’s principal architect of ethical AI practice, Ms Kathy Baxter.

“It is not just picking the same handful of individuals to go and make the decisions. But we are being inclusive from the data we are pulling as well as how we are assessing if the model is successful in abiding by those values.”
