[Opinion] Careful Consideration Needed on the Proposed Delay of Regulatory Provisions under the Framework Act on Artificial Intelligence
Date : 2025.09.01



Protecting fundamental rights in the age of rapidly developing AI and big data


 The National Human Rights Commission of Korea (Chairperson Ahn Chang-ho, hereinafter “NHRCK”) expressed its opinion on August 22, 2025, to the Speaker of the National Assembly regarding the proposed amendment to the Framework Act on the Advancement of Artificial Intelligence and the Establishment of Trust (hereinafter “the Framework Act on AI”). The amendment seeks to postpone for three years the implementation of Articles 31 through 35 of the Act, the provisions imposing obligations on AI providers. The NHRCK emphasized that this proposal requires careful review.


 The Framework Act on AI, enacted on January 21, 2025, is set to take effect on January 22, 2026. Its purpose is to protect citizens’ rights and dignity, improve quality of life, and strengthen national competitiveness by ensuring the sound development of AI and building a foundation of trust.


 However, an amendment currently pending before the National Assembly proposes that while provisions for the promotion of AI development should take effect as scheduled, the regulatory provisions (Articles 31–35) should be deferred from January 22, 2026, to January 22, 2029. The rationale is that imposing responsibilities and obligations on AI businesses might hinder technological progress and corporate innovation.


 The NHRCK stressed that while AI can enhance national competitiveness and quality of life, it also carries risks: when AI produces flawed or biased outcomes, the consequences extend beyond mere technical errors, potentially infringing on fundamental rights such as dignity, equality, privacy, and personal freedoms.


 The provisions in Articles 31–35, which the amendment seeks to delay, are not merely technical regulations. They are essential legislative safeguards intended to protect the public from risks arising across the entire AI lifecycle, from development and deployment through use, and to ensure that AI is used safely and reliably within constitutional boundaries.


 With AI systems now being introduced in administration, education, healthcare, finance, and other sectors, postponing the enforcement of provider responsibilities and obligations risks leaving citizens unprotected against “high-risk AI” for an extended period. Such a delay could undermine both the protection of basic rights and the sustainable development of AI.


 For example, as AI-based synthetic media technology has become increasingly sophisticated, crimes involving the misuse of “deepfake” images and voices have surged. As of October 2024, reports of deepfake sexual exploitation to police had increased by 518% year-on-year, reaching 964 cases.


 The UN report on The Role of New Technologies for Economic, Social and Cultural Rights has emphasized that although technologies such as AI are primarily driven by the private sector, States nonetheless bear a legal obligation to adopt necessary legislative measures to protect human rights affected by such technologies.


 Thus, deferring implementation of Articles 31–35 would not align with the State’s duty to fulfill its human rights obligations, especially given the structural and severe risks posed by AI.


 The NHRCK acknowledged industry concerns that excessive regulation could hinder innovation. However, the Commission emphasized that these concerns should be addressed by refining subordinate regulations, enacting supplementary provisions within the Act, and enhancing government support for start-ups and SMEs in the AI ecosystem, not by suspending the enforcement of critical safeguards.


 Accordingly, the NHRCK concluded that to establish an effective preemptive system for managing the inherent risks and uncertainties of AI, transparency, safety, and reliability must be ensured from design through development and operation. Therefore, Articles 31–35 of the Framework Act on AI should take effect as originally scheduled on January 22, 2026.


