November 28, 2023
TOKYO – The recent ouster and return of the head of ChatGPT developer OpenAI Inc. has once again highlighted the debate over which to prioritize in the development of artificial intelligence: growth or safety.
Governments and information technology businesses around the globe are rushing to innovate in the field while also searching for ways to ensure safety through regulation. The tug-of-war between growth and safety continues.
“I love OpenAI, and everything I’ve done over the past few days has been in service of keeping this team and its mission together,” OpenAI’s Sam Altman, 38, said in a post Wednesday on X, formerly Twitter.
The U.S. generative AI startup announced Altman’s departure as chief executive officer on Nov. 17, only to announce his return five days later. The reversal came after more than 90% of the company’s 770 employees signed a document calling for his reinstatement.
ChatGPT was released last November and has quickly grown into a service used by about 100 million people each week. A key player in its development, Altman has been active as an “evangelist” for generative AI, meeting with world leaders including Prime Minister Fumio Kishida.
Although the reason for the ouster has not been disclosed, overseas media reported a conflict within the management team over growth and safety: Altman actively promoted the spread of AI, while OpenAI chief scientist Ilya Sutskever and others were concerned about its rapid adoption. Sutskever is believed to have led the dismissal drama.
‘Urgent threat’
The turmoil has roots in OpenAI’s unusual structure. The company was launched in 2015 as a nonprofit organization aiming to develop safe AI, but Microsoft Corp. invested $10 billion (¥1.5 trillion) in a for-profit subsidiary established in 2019. This arrangement, in which huge sums were poured into the subsidiary to accelerate development, may have led to conflicting approaches among the management team over the growth and safety of the new technology.
After his return, Altman is expected to deepen his relationship with Microsoft and strengthen his focus on growth.
Generative AI, which produces elaborate text and images, has raised concerns about the proliferation of disinformation and copyright infringement, placing a heavy responsibility on its developers.
Geoffrey Hinton, a researcher known as the godfather of AI, said in May that he would quit Google LLC, where he had worked on the development of generative AI. Hinton warned that AI could pose a “more urgent” threat to humanity than climate change.
Meanwhile, competition is intensifying with the entry of Amazon.com Inc. and Meta Platforms Inc., formerly Facebook. Both emphasize safety measures but appear to have rushed to spread their technology before fully examining its safety from every angle.
Domestic players
Japanese companies, including NEC Corp., Nippon Telegraph and Telephone Corp. and SoftBank Corp., are rushing to develop generative AI tailored to the Japanese language. They hope to catch up with overseas players by building AI that learns Japanese culture and business practices while taking safety into consideration. Some believe that, from the standpoint of economic security, domestic production of AI that supports industries should be prioritized.
NEC incorporates lawyers’ opinions into its AI development to take human rights into consideration, and also conducts third-party risk assessments. The company is currently conducting trials with the Sagamihara city government.
“The risk of information leaks can be reduced by connecting with data centers domestically via dedicated lines,” a city official said.
The government will compile guidelines for AI developers and operators by the end of this year.
“The government must rethink its notion that regulations inhibit technological innovation, and enact laws and regulations as necessary,” said lawyer Ryoji Mori, who is well versed in regulations in the digital field.