July 24, 2025
HONG KONG – The potential dangers of using generative AI made the headlines in Hong Kong, after a student from a top local university allegedly used artificial intelligence tools to create hundreds of pornographic images of dozens of his female classmates and teachers.
The incident has sparked calls for the government to update its laws to regulate use of the technology, as Hong Kong prepares to launch its first ChatGPT-style AI service this year.
The case involving the University of Hong Kong (HKU) quickly gained public attention over the past week, after three victims took to social media anonymously to convey their unhappiness with the university’s handling of the matter.
According to what the victims shared in an Instagram post on July 12, pornographic images of around 20 women were found in February on the personal laptop of the HKU law student, whom they referred to only as X.
The more than 700 photos discovered included original, innocuous screenshots and digitally manipulated indecent images of women who were the student’s “friends, university classmates, seniors, primary school classmates and even secondary school teachers”.
“Upon questioning, X admitted to using photos of the victims… as material to generate these pornographic images using free online AI software,” the Instagram post read. “It is understood that none of the victims authorised X’s actions.”
It was not clear how the images were discovered or whether he had distributed them.
The victims alleged that after the case was brought to HKU’s attention in March, the university proposed only written and verbal reprimands for the student and cited legal advice that he was unlikely to have committed any legal offence.
“Most victims felt that the university’s response was insufficient to hold X accountable,” the post said.
It added that those affected considered existing legislation inadequate to address the incident, leaving them “unable to seek punishment for X through Hong Kong’s criminal justice system”.
The three victims told The Straits Times that they filed a complaint with the city’s equality watchdog, but did not make a police report as they believed that “deepfakes, which represent a relatively new form of sexual violence, fall outside the scope of existing laws”.
“However, we hope this incident will raise public awareness about deepfakes and push for legislative reforms,” they said.
They also expressed disappointment at the online reaction, which has been split along gender lines.
“We call on the public to recognise the seriousness of sexual violence issues,” they said.
“Voicing out personal stories of injustice should never be framed as the promotion of patriarchalism or feminism; it should be understood as a basic right in a law-based society.
“The perpetrators should be the only ones condemned,” they added.
In the wake of the scandal, the authorities have warned the public about the severity of such misconduct and said they would carefully consider how best to handle such cases.
Chief Executive John Lee addressed the issue directly on July 15, placing the onus on the university to “deal seriously” with the matter and to report it to law enforcement agencies if it constitutes a legal offence.
“The government will… examine global regulatory trends, and conduct in-depth research into international best practices to see what we should do,” Mr Lee added.
Currently, there is no dedicated legislation in Hong Kong that governs the use of generative AI.
It is an emerging field, and countries are still grappling with the regulatory aspects of the new technology’s impact on employment, productivity, the economy and society.
Following Mr Lee’s comments, the city’s privacy watchdog, the Office of the Privacy Commissioner for Personal Data, said it had opened a criminal investigation into the incident.
HKU, meanwhile, said it had issued the student a warning letter and instructed him to apologise to his victims. It also clarified that the university’s investigation was still ongoing.
Technology minister Sun Dong cautioned users of AI tools that they would have to “bear the legal responsibilities” for how they choose to use them.
His ministry, the Innovation, Technology and Industry Bureau, said separately that the government would “review existing legislation (governing the application of AI) if necessary”.
“The use of AI is a double-edged sword… The key is to have proper guidance and a comprehensive legal framework,” Mr Sun told local broadcaster RTHK on July 19.
The minister’s remarks came as he shared details of Hong Kong’s first home-grown ChatGPT-style AI service, which is slated to be launched within the year.
The DeepSeek-powered chatbot, called HKGAI V1, has already been tested in 13 government departments, and can generate content ranging from translations, meeting minutes and travel itineraries, to videos.
“It is an all-round working platform that can handle all kinds of tasks, like helping you to compose a report, lyrics, or even a song,” Mr Sun said.
The service is free for all Hong Kong residents, who will be able to download it via the government’s iAM Smart digital identity app, which is similar to Singapore’s Singpass.
DeepSeek is China’s answer to the United States’ ChatGPT, which is inaccessible in Hong Kong except through a virtual private network. OpenAI, the US firm behind ChatGPT, has blocked the AI chatbot’s use in the city, mainland China, and Macau since July 2024.
The confluence of HKGAI V1’s upcoming launch and the recent HKU scandal has called into question whether Hong Kong is adequately equipped to regulate the burgeoning use of generative AI tools in the city.
Professor Benedict Chan, director of Hong Kong Baptist University’s (HKBU) Centre for Applied Ethics, said the ease of misusing the increasingly sophisticated generative AI technology has raised “serious concerns about privacy, consent and social trust”.
“The key challenge is to ensure that these powerful tools are used responsibly, with safeguards in place to prevent misuse,” Prof Chan, who is also associate dean at HKBU’s faculty of arts and social sciences, told ST.
As AI technology advances and its range of applications widens in scope, new laws should be made to fill any gaps, said Mr Alex Liu, managing partner at law firm Boase Cohen & Collins.
“Law reform in this area has been overdue,” Mr Liu told ST.
“There are fundamental issues that need to be addressed by the legal system, legislature and executive with regard to AI. For example, data privacy, intellectual property, liability in the event of accidents, and many other areas in which AI impacts our daily lives.”
In the HKU case involving deepfake porn images, however, Mr Liu suggested that existing laws may already be sufficient to address the misconduct.
The student could have violated laws in Hong Kong’s crimes and personal data privacy ordinances “if he used a computer dishonestly to create or distribute the images (or) if the AI-generated images were based on real photos of classmates or teachers (that) were used without consent”, the lawyer said.
The three victims suggested that legislation against deepfakes should cover their creation and distribution, as well as the potential risks that the fabricated content poses.
“The non-consensual creation of these indecent photos has undermined our personal autonomy and dignity, and inflicted psychological damage on us, the victims,” they said.