AI Raises Human Rights Concerns in South Korea
South Korea’s rapid adoption of artificial intelligence (AI) has raised concerns about its impact on human rights. A recent controversy surrounding an AI chatbot that made discriminatory remarks has highlighted the need for stricter regulations.
The Controversy Surrounding Lee Luda
A private company launched “Lee Luda,” a popular AI chatbot, which attracted 820,000 users within two weeks. However, it was soon discovered that the chatbot had been trained using personal information collected without consent from millions of users, including over 200,000 children under the age of 14.
The developer of Lee Luda, Scatter Lab, was fined approximately 103 million won (about $93,000) for violating privacy laws. The incident has sparked concerns about the lack of regulation and oversight in South Korea’s AI industry.
Call for Stricter Regulations
A spokesperson for the Korean Civil Society Association emphasized the need for adequate legal frameworks to ensure accountability in the use of new technologies, including AI. South Korea, however, currently has no laws or procedures in place to regulate high-risk AI.
The association is calling on the government to intervene and establish stricter regulations on AI products and systems, with transparency, impact assessments, and remedies for those harmed.
Revised Personal Information Protection Act Falls Short
The revised Personal Information Protection Act, which took effect in August 2020, has been criticized by civil society organizations for weakening privacy protections. The law allows personal information to be used for purposes such as statistics and scientific research without the consent of data subjects, provided the information is pseudonymized.
Korean Civil Society Demands Enactment of AI Regulation Law
On May 24, 2021, 120 Korean civil society organizations held a press conference and announced a declaration calling for AI policies that guarantee human rights, safety, and democracy. The main points of the declaration are:
- Intervention by the Fair Trade Commission, the National Human Rights Commission of Korea, and the Personal Information Protection Commission for national-level supervision of AI products and systems
- Transparent disclosure of information on AI products and services and ensuring participation of consumers, workers, and citizens in designing them
- Establishment of AI impact assessment procedures and prohibition or regulation of high-risk AI, both in the public and private sectors
- Establishment of effective remedies for rights violations and damages caused by AI
The Korean civil society organizations are demanding that lawmakers prioritize human rights and safety above all else in their approach to AI development and regulation.