Research Report
A Study on Legislative Approaches to AI Impact Assessment
Ⅰ. Background and Purpose of the Research
▶ Background and Purpose of the Research
○ Background of this research
- AI is likely to create a wide range of opportunities in the future, but it also raises concerns in many respects, leading many countries to introduce AI impact assessment systems.
- In Korea, numerous AI-related bills have been submitted to the National Assembly, but their provisions on impact assessment remain limited. In certain sectors, impact assessment is regulated through guidelines, and there have been legislative attempts in the 22nd National Assembly to introduce related concepts.
○ Purpose of this research
- This research analyzes domestic and international cases in which AI impact assessments, not yet established as a formal system, have been introduced through policy and legislation, examining their significance, necessity, criteria, methods, and procedures.
- Based on this analysis, it seeks to clarify the impacts and risks accompanying the introduction of AI and to propose impact assessment systems and legislative policy measures that minimize them.
Ⅱ. Research Content
▶ Significance of AI Impact Assessments
- Discussions on AI impact assessment are relatively recent, and no commonly accepted definition has yet emerged. AI impact assessment is, however, closely linked to AI ethics concerns such as transparency, accountability, and the protection of rights.
- Impact assessments related to AI include: first, ethical impact assessment (human rights issues such as racial discrimination, privacy invasion, bias, and fairness); second, social impact assessment (employment, education, healthcare, transportation); third, safety assessment (verification of the safety of autonomous vehicles, accuracy of medical AI); fourth, data security and privacy assessment (how AI systems handle personal data with respect to security and privacy); and fifth, economic impact assessment (creation of new industry sectors, changes in employment).
▶ Analysis of Relevant Major Legislation
- Current relevant statutes include the “Intelligent Informatization Basic Act,” the “Science and Technology Basic Act,” and the “Software Promotion Act.” Related policies and guidelines include the “National AI Ethics Principles,” the “Strategy for Realizing Trustworthy AI,” the “Guidelines for AI Development and Utilization Regarding Human Rights,” and the “On-site Guidelines for Securing Publicness of AI.”
- Bills proposed in the 22nd National Assembly include bills that would guarantee AI reliability through verification and certification (sponsored by lawmakers Jeong Jeom-sik and Kwon Chil-seung) and that would require fundamental rights impact assessments (sponsored by lawmaker Lee Hoon-ki); they differ in their targets of application and methods (voluntary vs. mandatory).
▶ Comparative Analysis of Key Issues Through Examination of Major Foreign Laws
○ Target Risk Management
- The EU focuses on 'high-risk' AI and defines the types of high-risk AI by law, whereas the US does not limit the risks covered: it applies a risk management framework to all AI, requiring risks to be analyzed and identified for each service, product, or model.
○ Risk vs. Risk and Benefit
- The EU considers risks causing significant harm within its risk management framework, whereas the US weighs both the risks and benefits of AI systems: it accepts that identified and anticipated risks may be taken, provided that responsive measures and accountability are in place.
○ Mandatory vs. Voluntary Regulations
- The EU mandates risk management frameworks, including risk assessment, for high-risk AI and requires fundamental rights impact assessments for certain high-risk AI. By contrast, the US presents a risk management framework applicable to all developers, suppliers, and distributors, leaving its adoption voluntary for businesses and institutions. However, federal agencies are required to apply risk management when deploying AI.
○ Law vs. Guideline (Soft Laws)
- The EU, through the “EU AI Act,” mandates risk assessment and fundamental rights impact assessments for certain high-risk AI. The US, through the “National AI Initiative Act,” requires the government (NIST) to establish an AI risk management framework, and presidential executive orders direct government agencies to develop tools and provide guidelines for applying that framework.
○ Certification System
- The EU institutionalizes safety verification by certifying that applicable high-risk AI complies with legal requirements, whereas the US, where risk management is implemented voluntarily, operates no certification system.
○ General-purpose AI
- The EU applies requirements equivalent to those for high-risk AI when general-purpose AI poses systemic risk, while the US, rather than setting direct legal standards, issues government directives for guidelines and tools specific to managing the risks of general-purpose or generative AI.
○ Government AI Utilization
- The EU does not treat government-used AI separately, classifying it as a type of high-risk AI. The US supports the development and use of AI for federal functions and establishes risk management guidelines to be implemented, ensuring their application to AI used in government tasks.
▶ Legislative Issues of AI Impact Assessment
○ Types of AI Impact Assessment
- First, impact assessments expanded from the AI index, covering social, economic, and technological aspects. Second, impact assessments of AI service stability. Third, impact assessments for ensuring AI service reliability (ethics, user rights, etc.). Currently, impact assessments in both the US and the EU encompass all three types, with an emphasis closest to ensuring AI service reliability.
○ Subjects of AI Impact Assessment
- Most commonly, risk assessment is normatively required first for AI used by government or public institutions and for AI provided by private institutions offering public services, followed by high-risk AI.
○ Indicators of AI Impact Assessment
- Recent major AI impact assessments focus primarily on achieving AI trustworthiness and emphasize the management of anticipated risks. Assessments are conducted from a cost-benefit perspective (US) or by evaluating positive and negative impacts on policy indicators (UN).
○ Entities Conducting AI Impact Assessment
- Assessment may be conducted voluntarily by all AI actors (designers, developers, distributors, users, senior management, etc.) under some systems (US), by the deploying entities of high-risk AI (EU fundamental rights impact assessment), or by a third-party certification body in an objective position (conformity assessment).
○ Procedures of AI Impact Assessment
- Procedures include assessing before a service is launched or begins operation, combined with post-launch monitoring (EU), or conducting assessments throughout the entire process of design, development, implementation, and use (US).
▶ Legislative Approaches for AI Impact Assessment
- First, distinguish between AI impact assessment and AI technology impact assessment (AI impact assessment: conducted by system providers for AI systems placed on the market; AI technology impact assessment: conducted by the government for specific types of AI technology).
- Second, AI used by government and public institutions should be the primary subject of AI impact assessments mandated by law, and high-risk AI must likewise undergo legally required AI impact assessments. However, the criteria for the high-risk AI subject to these obligations should be determined, based on AI technology impact assessment results, by a committee comprising the public, industry, academia, and civil society.
- Third, the government should provide guidelines for an AI impact assessment framework accessible to all, with separate supplementary guidelines reflecting the technical characteristics of AI systems such as generative AI.
- Fourth, AI impact assessments should distinguish between policy units and business units, with different assessment entities and criteria: policy units focus on achieving policy goals and assessing infringements of rights, while business units focus on safety and harm.
- Fifth, explore technical solutions that can integrate the various impact assessments required by law, and support the development of technical means, such as AI-based AI impact assessment, to evaluate compliance and perform integrated assessments.
Ⅲ. Expected Effects
▶ Academic contribution
○ By analyzing domestic and international cases of AI impact assessment, this research systematically classifies the impact factors arising from AI adoption and the methods for assessing them, and presents related legislative issues and considerations.
▶ Policy contribution
○ Provides recommendations on policy and possible legislative measures concerning AI impact assessment criteria, methods, and procedures, which are expected to serve as foundational material for the future introduction of AI impact assessment and related legislative systems.