January 22, 2026
SEOUL – South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems, a move that sets the country apart in the global regulatory landscape.
According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.
“This is not about boasting that we are the first in the world,” said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry, during a study session with reporters in Seoul on Tuesday. “We’re approaching this from the most basic level of global consensus.”
The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body — the Presidential Council on National Artificial Intelligence Strategy — and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.
To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law’s scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve.
The law's obligations apply in only three areas: regulation of high-impact AI, safety requirements for high-performance AI, and transparency requirements for generative AI.
High-impact AI refers to fully automated systems deployed in critical sectors such as energy, transportation and finance — areas where decisions made without human intervention could significantly affect people’s rights or safety. At present, the government says no domestic services fall into this category, though fully autonomous vehicles at level 4 or higher could meet the criteria in the future.
What distinguishes Korea’s approach from that of the European Union is how it defines “high-performance AI.” While the EU focuses on application-specific risk — targeting AI used in areas like health care, recruitment, and law enforcement — Korea instead applies technical thresholds. These include indicators such as cumulative training computation, meaning only a very limited set of advanced models would be subject to the safety requirements.
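To give a sense of what a compute-based threshold means in practice, the sketch below uses the common 6·N·D rule of thumb from the scaling-law literature, where N is parameter count and D is training tokens, and compares the estimate to a cutoff. The threshold value, helper name, and model figures are assumptions for illustration only; the act's actual criteria are not spelled out in the article.

```python
# Illustrative sketch only: the 6*N*D approximation for cumulative
# training compute, compared against a hypothetical regulatory cutoff.
# Neither the helper nor the threshold value is taken from the Korean act.

HYPOTHETICAL_THRESHOLD_FLOPS = 1e26  # assumed cutoff for illustration

def estimate_training_flops(params: float, tokens: float) -> float:
    """Approximate cumulative training compute as 6 * N * D FLOPs."""
    return 6 * params * tokens

# Example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")           # ~6.3e+24
print("Above threshold?", flops >= HYPOTHETICAL_THRESHOLD_FLOPS)  # False
```

Under these assumed numbers, even a large present-day model would land well below the cutoff, which is consistent with the government's view that no current models are covered.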
As of now, the government believes no existing AI models, either in Korea or abroad, meet the criteria for regulation under this clause. In comparison, the EU is rolling out its own AI regulations gradually, with some measures accompanied by multiyear transition periods.
Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines — capped at 30 million won ($20,300) — issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one.
Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
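As one illustration of what "invisible labeling via metadata" could look like, the sketch below embeds a provenance tag in a PNG text chunk using the Pillow library. The field names and values are hypothetical; the act does not prescribe any particular labeling format.

```python
# Minimal sketch: embedding an invisible AI-provenance label in PNG
# metadata with Pillow. The key/value scheme is an assumption for
# illustration, not a format mandated by the act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a machine-readable AI-generated marker."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical field name
    meta.add_text("generator", "example-model-v1")  # hypothetical value
    img.save(dst_path, pnginfo=meta)

def is_labeled(path: str) -> bool:
    """Check whether the AI-generated marker is present."""
    return Image.open(path).text.get("ai_generated") == "true"
```

The label survives ordinary file copies but is not visible to a viewer, which is the distinction the law draws between deepfake-style content, which must carry a visible disclosure, and other generated content, which may be marked invisibly.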
Kim emphasized that the purpose of the legislation is not to hinder innovation but to offer a basic regulatory foundation that reflects growing public concerns. “The goal is not to stop AI development through regulation,” he said. “It’s to ensure that people can use it with a sense of trust.”
He added that the law should be seen as a starting point, not a finished product. “The legislation didn’t pass because it’s perfect,” Kim said. “It passed because we needed a foundation to keep the discussion going.”
Recognizing concerns from smaller firms and startups, Kim said the government plans to stay engaged throughout implementation. “We know smaller companies and ventures have their own worries,” he said. “As issues come up, we’ll work through them together via the support center.”