Research Report
Legal Imperatives for the Protection of Future Generations 4 - Focusing on the European Union's regulatory framework and response strategies for artificial intelligence (AI) -
Ⅰ. Background and Purpose
▶ Background of the Research
○ The purpose of this research is to establish a regulatory framework or strategic plan for artificial intelligence systems by analyzing in detail the EU's policies and strategies for artificial intelligence, including the White Paper on Artificial Intelligence.
○ It aims to attract investment from industry by securing trust in artificial intelligence policy, and to lay the basis for a mid- to long-term national policy for research and innovation in artificial intelligence.
▶ Purpose of the Research
○ Algorithm-based artificial intelligence poses increased risks of infringing fundamental rights, including privacy and personal data, and of undermining consumer protection, so a response strategy is needed at the normative level.
○ As the digital era unfolds, national competitiveness must be strengthened, and national strategies and visions for artificial intelligence systems must be presented.
Ⅱ. Main Content
▶ European Union's Artificial Intelligence Policy and Strategy
○ On April 25, 2018, the European Commission announced its communication “Artificial Intelligence for Europe” and has been pushing ahead with a digital transformation policy in the industrial sector. This was followed on May 15, 2018 by a communication on completing a trusted Digital Single Market, and on April 8, 2019 the Commission emphasized “Building Trust in Human-Centric Artificial Intelligence”.
○ The EU's strategy for artificial intelligence is based on building trust in human-centered artificial intelligence, and its policies are grounded in the protection of fundamental rights, including human dignity and values, freedom, democracy, equality, the rule of law, and the protection of minorities.
○ As a hub for digital innovation, the European Union is striving to establish networks and secure data spaces for data sharing, and is promoting policies to secure international competitiveness and data sovereignty through the establishment of a single market for data.
○ In particular, the White Paper on Artificial Intelligence and A European Strategy for Data, released by the European Commission on February 19, 2020, contain a specific regulatory framework and response measures, and set out plans for a legislative framework for the governance of common European data spaces.
▶ Protection of Fundamental Rights and Democracy in the Digital Age
○ Artificial intelligence that embeds bias or prejudice threatens parliamentary democracy, which is characterized by democratic public opinion formation and the free formation of political opinion. Securing the democratic legitimacy of artificial intelligence systems is therefore an important task, and automated decisions by algorithm-based artificial intelligence must be bound by law.
○ In order to prevent discrimination by artificial intelligence and to block bias, accurate and objective data must be ensured at the stages of program design and data input, and to secure transparency in this process, a governance system should be established that guarantees the participation and cooperation of the private sector.
○ As the digital age advances, human oversight and risk management systems must be guaranteed to secure democratic legitimacy for artificial intelligence decisions. The scope and intensity of artificial intelligence regulation are differentiated according to whether a high level of risk is present, a point to be considered in designing Korea's future regulatory system.
▶ Analysis and Response regarding the Safety of and Liability for AI
○ Since it is uncertain at what stage AI decision-making takes place, AI software systems cannot easily be separated from the product, and because AI software can be updated or changed, there are limits to applying the current safety and liability law regime as it stands.
○ Preventive risk management, including proactive measures, requires advance notice, a compensation system including liability insurance, and common guidelines to reduce differences in risk management. Ex post liability management generally relies on the EU Product Liability Directive, which imposes fault-based or strict liability depending on the cause of the damage.
○ As AI gathers information, makes decisions, and acts, damage may occur without any action by the manufacturer, software designer, or user. Even where strict liability is commonly applied to AI software, as advanced AI develops, the parties concerned might be exempted under the state-of-the-art (development risk) defense.
○ The Code of Ethics for Robotics Engineers proposed by the European Parliament requires that AI be used in accordance with the principles of beneficence, non-maleficence, autonomy, and justice, and also suggests acting in the best interest of humans, protecting fundamental rights, strict liability, compulsory insurance or a compensation fund, an AI registration system, and so on.
○ In Level 1-2 autonomous driving systems, the driver is presumed liable for a collision because the driver remains the main actor in driving. In Level 3, where the driver is responsible only for taking over in an emergency, the driver would be liable only for omitting that required action. In Levels 4-5, drivers may be exempt, as they are not primarily responsible for the causes of the damage.
○ As medical devices are generally used under the professional judgment of medical practitioners, strict liability would apply only where a defect is clear. Given that such strict liability cases would rarely arise and that AI medical devices are used to supplement medical practice, AI cannot be treated as a responsible agent replacing the person.
▶ AI Regulatory Paradigm and Design of a Regulatory Framework
○ The EU White Paper on Artificial Intelligence proposes risk-based regulation from the standpoint of phased risk management, through which the purposes of artificial intelligence regulation can be pursued more appropriately and specifically.
○ As with the code of ethics for artificial intelligence, it is desirable to start with soft law and move toward risk-based regulation in the mid to long term.
○ The regulatory strategy for artificial intelligence needs to be differentiated according to the level or intensity of risk, and whether a risk is high must be judged in consideration of its impact on individuals and society.
○ Self-regulation should be considered in the private sector and in research, and regulated self-regulation may be applied in areas requiring cooperation between the state and the private sector. In high-risk areas, however, human oversight and stricter regulation by public authorities are required.
Ⅲ. Expected Effects
○ This research systematically reviews the EU's policy changes and recent trends concerning artificial intelligence, suggests policy directions for artificial intelligence regulation, and contributes to establishing the government's regulatory strategy.
○ By analyzing the ethical and normative grounds of issues related to artificial intelligence, such as securing fundamental rights, safety, and liability, it lays the foundation for enacting and reorganizing laws and regulations, and provides a theoretical basis for securing industry trust through transparency.
○ Through a detailed analysis of the European Union's regulatory paradigm, it stimulates academic discussion in the field of artificial intelligence regulation and helps to drive discussion of constitutional values such as democracy and human rights in relation to artificial intelligence.