October 16, 2023
TOKYO – The internet has become a chaotic space, brimming with outrageous, biased and fake information. The so-called attention economy, which generates revenue by getting people to click on or view ads, underpins today’s digital era. This is the fourth installment in a series of articles.
“We didn’t anticipate that people would use generative artificial intelligence [in their communications with us],” said an Internal Affairs and Communications Ministry official responsible for fielding comments from the public.
In March, the Cabinet Office and other entities solicited opinions from citizens regarding the issue of intellectual property. In response, one person submitted an email of about 4,000 characters, part of which the submitter said had been created with the conversational AI model ChatGPT.
Submitting AI-created comments in such public consultations is not illegal, and the ministry said it was up to individual agencies to decide whether to reflect AI-generated opinions in their respective policies.
The comment in question was submitted by Kenji Suzuki, 55, a patent attorney who runs a patent office in Yokohama.
With the aim of generating an attention-grabbing statement, Suzuki first created a straightforwardly worded sentence, then instructed the conversational AI model to “modify it in the style of a young and mischievous chief executive officer.”
In response, ChatGPT crafted a sentence that Suzuki says he never would have come up with by himself: “The idea of designing management [systems] is pretty cool.”
“The fact that phrasing reflecting a novel perspective could be created in an instant made me think that the technology could potentially be abused,” Suzuki said.
Suzuki is not alone in his concerns; the spurious generation of “public viewpoints” is an issue that has been around for a while. When the U.S. Federal Communications Commission solicited public comments in 2017, more than 80% of the 22 million comments it received were deemed to be fake. It is thought that a 19-year-old university student, among others, submitted an enormous number of comments under fictitious names and addresses.
If AI is used in such attempts to fabricate public opinion, it will become even easier to fashion a wide array of viewpoints, each written using different expressions.
OpenAI, the U.S. developer of ChatGPT, released a report in January warning of the risks posed by this technology, including “the prospect of highly scalable—and perhaps even highly persuasive—campaigns by those seeking to covertly influence public opinion.”
Kazuhiro Taira, a professor of media theory at J. F. Oberlin University, opined, “[Looking ahead], ‘public opinion’ will increasingly be molded by AI, with low costs and growing sophistication.”
The expansion of AI has made it easy for anyone to create “fake” content. Such material, which boasts an ever-growing online presence, often obfuscates the truth.
Fake reviews
In July, a company employee in his 40s from Saitama Prefecture used generative AI to write a review for an infant’s toy that he had never used: “Perfect for babies. Recommended.” He then posted the text on an online shopping site.
Writers of such counterfeit reviews often receive compensation from sellers despite never having used the product in question; sellers who commission such reviews are likely breaking the law. The man began posting reviews about five years ago, and occasionally would upload more than 10 reviews a day.
However, after running short on inspiration, he decided to use AI to generate reviews. The man fed information from a product introduction website into a generative AI model and asked it to “write a positive review.” The software instantly produced a review of about 200 characters.
“Generative AI makes it easy for me to create generic reviews,” the man said. Observers note that AI is increasingly being leveraged to churn out positive critiques.
In spring, the operator of the Sakura Checker website, which evaluates the credibility of online reviews, noticed something unusual: a sharp spike in product reviews written in a logical manner, using the kind of Japanese found in research papers. After careful analysis, the website operator concluded that most of the several hundred reviews in question must have been written with the aid of generative AI.
“AI-generated reviews are set to increase,” offered a representative of the Sakura Checker site. “Faith in online reviews could be undermined if [such texts] are used to mislead consumers.”
Text created by generative AI has become so realistic that it is often impossible to distinguish from human-generated output. Some experts have pointed to an unfolding “double crisis” in the digital realm: fake content is growing in both “quantity” and “quality.”
Child pornography
“I can’t draw such high-quality [child pornography] illustrations myself,” said a male company worker in his 30s, living in the Kanto region. “I’m very satisfied.”
Since the end of last year, the man has been using AI to create realistic sexual images of female children, describing it as his “hobby.” To date, the man claims to have posted about 400 illustrations on Pixiv, a website operated by a Japanese image-hosting company.
Entering the slang term for “little girls” in Pixiv’s search bar returns a slew of sexualized imagery featuring children with their breasts and genitals exposed. Some of the AI-generated images look alarmingly realistic.
In May this year, the website’s operating company revised its regulations, banning the posting of sexual images that could be mistaken for real-world child pornography.
However, such postings have continued. In late June this year, the BBC — in an article that specifically named Pixiv — reported that pedophiles are using AI technology to create and upload realistic-looking images depicting the sexual abuse of children.
According to the Justice Ministry, the depiction of real-world child abuse violates the law against child pornography. Ultimately, however, it is up to the courts to decide whether AI-generated images are subject to the law.
“Genuine child pornography could become buried in a potential flood of AI-generated images, making it impossible to provide relief for the victims of such abuse,” said Hisashi Sonoda, a professor emeritus of criminal law at Konan University. “Existing laws and regulations alone can’t address the problems posed by AI. There’s an urgent need for discussions on the subject.”
Many people who publish fake information are reportedly motivated by a desire to “make a fuss in society” or “attract attention.” The latest generative AI models are swelling the ranks of such “pleasure-focused” criminals.
‘Counterfeit’ cash
“Image-generative AI can create counterfeit cash!” posted a user of X (formerly Twitter) last December.
The provocative claim was made by a 27-year-old male graduate student who, six months later, exhibited realistic-looking fake Japanese banknotes created with generative AI at a hall in Tokyo, prompting police to attend the venue.
The portrait portion of the “banknotes” had been replaced with a girl’s face. The police cautioned that members of the public could mistake them for real bills in dim light.
The graduate student’s research theme is AI-based contemporary art. “Online, AI-generated work is criticized as being fake. I wanted to stir a controversy aimed at getting AI-generated work to be recognized as art,” he said. “However, I failed to foresee the possibility [of someone passing off a fake banknote as a real one]. I should have been aware of that risk.”