January 31, 2024
BEIJING – A White House official recently said the United States was willing to cooperate with China on artificial intelligence. Some experts voiced skepticism, while others felt cooperation was still possible despite the trade tensions between the two countries.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, told the Financial Times of London in an interview published on Thursday that despite the trade tensions between the two nations, particularly over sensitive technology, they could work together to “lessen (the) risks and assess (the) capabilities” of AI.
“Steps have been taken to engage in that process,” Prabhakar said of collaborating with China on AI. “We have to try to work (with Beijing).
“We are at a moment where everyone understands that AI is the most powerful technology … every country is bracing to use it to build a future that reflects their values,” said Prabhakar. “But I think the one place we can all really agree is we want to have a technology base that is safe and effective.”
Sourabh Gupta, a senior fellow at the Institute for China-America Studies, is skeptical about how such cooperation on AI would unfold.
“The US’ desire to work on AI safety policy with China and compete vigorously on AI hardware, including chips, against China, are proceeding on entirely separate tracks,” he said.
“The scope for trade-offs is minimal and probably nonexistent. As such, the policy conversation between the two will gravitate toward a lowest common denominator approach on preventing fundamental AI-related harms, especially in the military sphere,” he said.
“On the other hand, the AI hardware and software innovation and development side will see bitter competition between the two sides, with the US using its technology controls repeatedly to undercut China’s progress in this area,” Gupta said.
The White House issued an executive order in August 2023 that restricted US investments in Chinese technologies or products.
China, along with the US and more than two dozen countries, signed the Bletchley Declaration on standards for AI at the world’s first AI Safety Summit in the United Kingdom in November 2023.
At the conclusion of the summit, billionaire Elon Musk thanked British Prime Minister Rishi Sunak for inviting China, saying, “If they’re not participants, it’s pointless.”
Prabhakar said while the US may disagree with China on how to approach AI regulation, “there will also be places where we can agree”, including on global technical and safety standards for software.
Opportunity to learn
Gupta said he was “afraid there will not be complementary cooperation. As the two sides roll out their respective governing and regulatory frameworks, though, both will have the opportunity to learn from the other side’s successes and mistakes”.
“I would also submit that China’s guidance on the development of AI is more encompassing than just content control,” he said in reference to the FT article, which suggested that China was more concerned about the regulation of domestic AI information while the US was focused on national security and consumer privacy.
Still, he said, “there is much for each side to learn by observing the development of the industry and its regulation on the counterpart’s soil”.
China’s AI industry is expected to accelerate over the next decade, with its market value reaching 1.73 trillion yuan ($241.3 billion) by 2035, according to research firm CCID Consulting.
China’s Foreign Ministry spokesman Wang Wenbin said AI development and governance bear on the future of humanity.
“It requires concerted and coordinated response, not decoupling, severing of supply chains nor fence-building,” he said when answering a question at a regular news conference on Monday.
“We urge the US side not to act contrarily to the laws of sci-tech advancement, earnestly respect the principles of market economy and fair competition, and create favorable conditions for strengthening international AI coordination and cooperation,” he said.
Prabhakar said the US “did not intend to slow down AI development, but to maintain oversight of the technology”.
“We are starting to have a global understanding that the tools to assess AI models — to understand how effective, how safe and trustworthy they are — are very weak today,” she told the FT.