May 24, 2023
TOKYO – The Fugaku supercomputer will be used starting this month to develop basic technology for generative AI with strong Japanese-language ability, a team of Japanese research institutes and Fujitsu Ltd. announced on Monday.
In tandem with this development, the team will work on resolving issues associated with generative AI models, such as copyright infringement.
Tokyo Institute of Technology, Tohoku University and the government-funded Riken research institute will work with Fujitsu, which developed Fugaku together with a Riken computational science center, to create a so-called large language model (LLM).
Generative AI models, such as U.S. firm OpenAI’s ChatGPT, currently learn mainly from English-language materials online. As a result, the Japanese these models generate can sound unnatural or be inaccurate.
The team will have the AI learn from a large amount of Japanese text data to improve its Japanese-language proficiency.
Training AI requires enormous amounts of calculation, and the team aims to proceed efficiently by using the world-leading Fugaku supercomputer, which is housed in Kobe.
The plan is to release the LLM for free by the end of March 2024 so that universities and companies can use it for basic research and the development of new services.
The development is expected to contribute to the creation of domestically produced generative AI in the future.
At the same time, the team aims to establish technology to address problems such as personal data breaches and copyright infringement, which are among the concerns surrounding generative AI.
“For generative AI to spread in society, various issues need to be resolved,” said Tokyo Institute of Technology Prof. Rio Yokota, who is a member of the team. “We will proceed while ensuring transparency in our development process.”