AI models trained to give information for criminal purposes give rise to cyberattack concerns


The Yomiuri Shimbun

The Japan News

These generative AI models are believed to have been created by training existing open-source models on data related to criminal acts. PHOTO: UNSPLASH

January 31, 2024

TOKYO – Multiple generative artificial intelligence models are currently accessible online that will answer, without restriction, questions on how to create computer viruses, scam emails, explosives and other information that can be used for criminal purposes.

These generative AI models are believed to have been created by training existing open-source models on data related to criminal acts. As anyone can obtain such information simply by prompting the trained models, concerns are growing over their misuse.

According to multiple cyber security sources, generative AI models that can be used for criminal purposes began appearing around the spring of 2023. Users can operate these models by accessing them via search engines or communication apps. In some cases, users are charged a monthly subscription fee of tens of U.S. dollars.

In December, Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc., instructed one such generative AI model, for research purposes, to create ransomware, a type of computer virus that demands a ransom from its victims. The model instantly produced source code for such a virus.

Yoshikawa, a senior malware analysis engineer at the company, said, “Currently, the ransomware is far from perfect, but it’s functional. It’s only a matter of time before the risk of such generative AI models being used for cyberattacks and other malicious acts grows.”

Furthermore, some generative AI models can generate scam emails and provide instructions on how to create explosives. Information on the types of criminal acts certain AI models can be used for is shared on dark web bulletin boards frequented by criminals.

One example is ChatGPT, which was released by U.S.-based OpenAI Inc. in November 2022 and rapidly gained a following in Japan. Users have been able to obtain crime-related answers from ChatGPT by using so-called jailbreak prompts. OpenAI has been strengthening countermeasures to prevent such uses. However, it is now possible to obtain information that can be used for criminal purposes from other available AI models.

An AI model that became accessible several months ago is believed to have been created using GPT-J, released by an overseas nonprofit organization in June 2021 as an open-source generative AI that anyone can train.

Masaki Kamizono, who specializes in cybersecurity at Deloitte Tohmatsu Group LLC., based in Tokyo, said, “I think open-source generative AI models have been trained on crime-related data available on the dark web, such as how to create computer viruses.”

The group that released GPT-J told The Yomiuri Shimbun in December that it is unacceptable for its AI model to be used for criminal purposes.
