Security worries over AI: The Korea Herald

South Korean government, firms block access to DeepSeek amid lack of effective regulations.


The DeepSeek logo is seen at the offices of Chinese AI startup DeepSeek in Hangzhou, in China’s eastern Zhejiang province on February 5, 2025. PHOTO: CHINA OUT/AFP

February 12, 2025

SEOUL – The Jan. 20 release of DeepSeek, an innovative Chinese AI chatbot, upended global markets, prompting tech companies to scrutinize how an obscure Chinese startup seemingly developed such a competitive artificial intelligence model so suddenly.

But the initial surprise seems to be shifting toward caution, doubt and, in some cases, outright aversion to the new Chinese AI technology, on the assumption that it potentially poses security threats.

While the US government is moving to put restrictions on the DeepSeek app, other countries such as Australia, Italy and Taiwan are imposing their own controls on the generative AI chatbot.

Last week, South Korea joined the pack, with government agencies and big companies either blocking access to DeepSeek or sending employees warnings against the use of the app at work.

After the government on Tuesday sent a notice to ministries and state agencies to stay guarded against the use of DeepSeek, the Foreign Ministry and the Defense Ministry reportedly blocked access to the Chinese app, while major Korean companies took similar measures.

At issue is that DeepSeek allegedly collects too much sensitive information, including users' keyboard input patterns, unlike conventional AI systems that collect only basic user data.

More troubling is that the Chinese startup stores huge volumes of data on servers in China, where all Chinese companies and organizations are required to cooperate with the government’s intelligence activities. In theory, should the Chinese government choose, it could access the sensitive data of DeepSeek users.

What worries Korean authorities in particular is the rapid pace at which DeepSeek is expanding its user base here. As of the fourth week of January, the number of DeepSeek users in Korea was reported to have surpassed 1.2 million.

Without a new set of regulations, the security dispute over Chinese apps and solutions is expected to intensify further. And it is not the first time that Chinese firms have been entangled in security problems. Last year, for instance, Korea’s fair trade watchdog belatedly issued a corrective order to Chinese e-commerce providers operating here to fix the controversial terms of agreement that allowed access to user contact data and social media accounts.

Other Chinese tech products, such as electric vehicles equipped with cameras and data-tracking capabilities, are also feared to carry higher risks of information leaks, a fast-evolving sector in which policymakers have been slow to come up with timely data protection policies.

But there is another important issue to consider. AI is a rapidly advancing sector that often defies and bypasses stopgap regulations crafted by governments and local agencies.

Restricting access to chatbots at work alone cannot prevent individuals from typing in queries related to private and corporate data at home. More importantly, the AI ecosystem, as demonstrated by the eye-catching innovation of DeepSeek, revolves around open-source platforms, where more users and companies are invited to pool their expertise to build more advanced systems. This means that regulators have to strike a balance between regulatory restrictions and policy incentives to nurture a thriving AI ecosystem.

Equally important is that security worries are not limited to AI technology from China. ChatGPT and other generative AI solutions developed by US-based tech giants such as Google are collecting reams of data from users across the world. Korean tech companies are also gathering detailed user data as they race to build their own AI solutions.

The need for strengthening AI-related security will gain more urgency in tandem with the eye-popping advance of AI technology, which is still in its infancy. But there are no specific Korean regulations aimed at protecting private data in connection with AI models. At a minimum, policymakers must brace for more DeepSeek-like disruptive solutions and start exploring a broader framework for the governance of AI.
