January 12, 2026
SEOUL – With just weeks left before South Korea enforces the world’s first comprehensive artificial intelligence law, unease is spreading through the nation’s tech sector. A one-year grace period has been promised, yet many companies remain unclear about what exactly the law expects of them.
The AI Framework Act, passed by the National Assembly in December 2024 and scheduled to take effect on Jan. 22, is the first attempt by any country to regulate and promote AI under a single, unified legal framework. It marks Korea’s ambition to join the “AI G3,” alongside the US and China.
The timing is no accident. While the European Union passed its AI Act first, key provisions will not take effect until 2027. Korea, by contrast, is poised to move first in real terms, with enforcement machinery on the ground as well as legislation on the books.
With Washington still mired in partisan gridlock over AI legislation and Brussels busy recalibrating its rulebook, Seoul sees an opening. Policymakers argue that by stepping in early, Korea can shape global norms from the front row rather than the sidelines, a bid for influence that far exceeds the country’s size.
They insist it is more than symbolism. Early action, they contend, will allow Korea to help steer the international conversation on AI ethics and safety, particularly at a time when global rules remain in flux. While the US focuses inward and the EU moves slowly, Korea is betting on a first-mover advantage.
Still, not everyone is convinced. Critics warn that this may be more about headlines than governance. Some caution that companies could be caught off guard by ambiguous standards and rushed timelines.
“It feels like a policy race for headlines, not sustainable governance,” said an international law professor at a Seoul-based university, speaking on condition of anonymity. “Establishing international credibility takes more than going first. It requires coherence and inclusiveness.”
At the heart of Korea’s approach is an “innovation-first” principle. The law allows companies to develop and deploy AI systems without prior government approval. It’s a clear shift away from precautionary models — and a deliberate attempt to eliminate regulatory bottlenecks that have long slowed down tech innovation.
But promotion comes with obligations. The law mandates state support for AI advancement — including funding for specialized data centers, standardization programs and workforce training.
Crucially, it defines a new category: “high-impact AI.” These are systems used in sensitive fields — medical devices, energy infrastructure, hiring algorithms and nuclear facility control. In short, areas where failure could carry serious risks.
Firms operating such systems will be required to conduct internal risk assessments and ongoing monitoring. They’ll also need to report their safety protocols to government bodies. The obligations aren’t voluntary; they’re legally binding.
Yet the law leaves much unsaid. High-impact AI is defined as any system whose failure could harm life, property or basic rights. But how exactly that’s determined remains unclear. Enforcement details are still in the works and expected to be outlined in executive decrees. The Ministry of Science and ICT will take the lead, coordinating with other regulators.
One feature drawing particular attention is the transparency rule. AI-generated content, whether image, video or audio, must carry a visible label, and providers must also embed machine-readable watermarks that are invisible to the human eye, to guard against misuse. The aim is to keep deepfakes and disinformation from slipping through the cracks.
The government hopes the new law will bolster Korea’s standing as one of the world’s top three AI players. It has formally designated AI as a national strategic industry and laid down a legal foundation aimed at accelerating innovation.
Officials say early implementation is meant to reduce uncertainty. By clarifying legal standards up front, they argue, the law could prevent future service disruptions caused by gray areas — or punitive measures applied after the fact.
But the message hasn’t landed evenly. Many in the industry say they’re still in the dark.
Some point to vague definitions. Others raise concerns about compliance burdens. Among startups and developers, confusion appears widespread.
“There’s no way to know if what we’re building qualifies as high-impact AI,” said an industry source, who requested anonymity. “We’ve reached out to the ministry, but so far the answers have been vague.”
As a result, some firms are hitting pause. Product launches and updates have been delayed — not out of protest, but from uncertainty. A recent survey by the Startup Alliance found that 98 percent of local AI startups had yet to initiate formal compliance steps, citing unclear guidance.
One particularly thorny requirement is the watermarking of AI-generated content. Technically, developers say, this is no small task — especially when open-source and proprietary tools are used together.
“From a public trust standpoint, it makes sense,” said a project lead at a mid-sized AI firm. “But no one knows how to apply this consistently across platforms and file types.”
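What an invisible, machine-readable mark can look like is easiest to see in code. The sketch below, in Python with the Pillow imaging library, hides a short text tag in the least significant bits of an image’s pixels. It is a classic steganographic illustration under assumed conventions, not the scheme the Act prescribes; the law’s technical standards are still pending in decrees, and the “AI-GENERATED” tag here is purely hypothetical.

```python
# Minimal sketch: hide a short machine-readable tag in the least
# significant bit (LSB) of an image's red channel. This is a classic
# steganography demo, NOT the scheme mandated by the AI Framework Act,
# whose technical standards are still pending. The tag string is a
# hypothetical label chosen for illustration only.
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(in_path: str, out_path: str, tag: str = TAG) -> None:
    """Embed `tag`, prefixed with a 2-byte length, one bit per pixel."""
    img = Image.open(in_path).convert("RGB")
    px = img.load()
    payload = len(tag.encode()).to_bytes(2, "big") + tag.encode()
    # Flatten the payload into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    w, h = img.size
    if len(bits) > w * h:
        raise ValueError("image too small for payload")
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)  # overwrite only the red LSB
    img.save(out_path, "PNG")              # lossless, so the bits survive

def read_tag(path: str) -> str:
    """Recover a tag written by embed_tag."""
    img = Image.open(path).convert("RGB")
    px = img.load()
    w, _ = img.size
    bit = lambda i: px[i % w, i // w][0] & 1
    length = int("".join(str(bit(i)) for i in range(16)), 2)
    data = bytes(
        int("".join(str(bit(16 + b * 8 + i)) for i in range(8)), 2)
        for b in range(length)
    )
    return data.decode()
```

The fragility developers describe is visible even in this toy: re-encoding the output as a lossy JPEG, or resizing it, would scramble the least significant bits and erase the mark entirely, which is why applying one rule consistently across platforms and file types is harder than it sounds.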
Civic groups are also sounding alarms. Without independent oversight or meaningful penalties, they argue, the law risks becoming more symbolic than substantive. Algorithmic bias and biometric surveillance are flagged as areas needing tougher checks.
Officials admit the rollout won’t be seamless. But they emphasize that the law is meant to evolve.
“We can’t delay indefinitely,” said Lim Mun-yeong, standing vice chairman of the Presidential Council on National AI Strategy. “That would only undermine Korea’s competitiveness in the global AI race.”
To ease the transition, the government has introduced a one-year grace period. During this time, penalties — which can go up to 30 million won ($20,800) — will be suspended. The goal, officials say, is not to punish, but to help the industry get ready.
“This is only the beginning,” said Choi Kyung-jin, president of the Korea Association for Artificial Intelligence and Law. “The real test will be how these principles play out in real-world scenarios.”