February 24, 2025
SEOUL – North Korea is showing signs of incorporating ChatGPT into its operations, adding to concerns that artificial intelligence technology could be used to advance the reclusive regime’s cybercrimes.
A video released Saturday by a North Korean external propaganda outlet showed scholars in the North learning about ChatGPT, a generative artificial intelligence chatbot developed by US AI research organization OpenAI.
The report by Voice of Korea showed members of an AI research institute at Kim Il Sung University, North Korea’s top institution of higher learning, using a program titled “GPT-4 Real Case: Writing” on their computers. The program demonstrated how ChatGPT produces text in response to user input.
Han Chol-jin, a researcher at the institute, told the outlet that they were “teaching methods to deeply learn an advanced technology and ways to make it our own.”
As internet access is generally unavailable in North Korea, where even the national intranet, Kwangmyong, is accessible to only some citizens, it is unknown whether the researchers had access to the actual ChatGPT service.
The Voice of Korea report came soon after OpenAI’s decision to ban user accounts from North Korea.
The ChatGPT maker claimed that several North Korea-linked accounts misused the chatbot to create fake resumes, online job profiles and cover letters as part of the regime's widely reported schemes to earn income through overseas employment.
“The activity we observed is consistent with the tactics, techniques and procedures Microsoft and Google attributed to an IT worker scheme potentially connected to North Korea,” OpenAI said in a recent report.
“While we cannot determine the locations or nationalities of the actors, the activity we disrupted shared characteristics publicly reported in relation to North Korean state efforts to funnel income through deceptive hiring schemes, where individuals fraudulently obtain positions at Western companies to support the regime’s financial network,” it added.
Pyongyang has been accused of running hiring schemes in which North Korean IT workers use false identities to get hired for remote work at US companies. The workers then funnel their wages into the development of the country's nuclear weapons program.
In January, Google’s Threat Intelligence Group, the US-based tech company’s in-house intelligence unit, revealed that North Korean hackers were using Google’s Gemini chatbot to illegally gain access to information on the South Korean military and to steal cryptocurrency.
Experts expressed concern that North Korean hackers’ growing use of AI could drive a spike in crypto thefts and other malicious cyber activities.
“With the use of generative AI, North Korea now faces a lower language barrier (when committing crimes) and needs significantly less money when plotting and carrying out schemes,” said Kim Seung-joo, a professor at Korea University’s School of Cybersecurity.
North Korean hackers stole some $659 million worth of crypto assets in a series of cyberattacks in 2024, according to a joint statement released last month by the governments of South Korea, the US and Japan.