Post a Deepfake, Pay the Price: South Korea’s New AI Law

South Korea has become the first country to bring a comprehensive national law regulating artificial intelligence fully into force, a milestone in how governments respond to rapidly advancing AI technologies.

The legislation, known as the AI Basic Act, came into full effect this week and introduces strict requirements aimed at increasing transparency, safety, and public trust — particularly around deepfakes and generative AI.

Under the new law, companies must clearly notify users when a service or product relies on generative AI. Content that cannot be easily distinguished from reality — including deepfakes — must be visibly labeled, a move designed to curb misinformation, manipulation, and abuse.

Violations can result in fines of up to 30 million won, roughly $20,000.

South Korea’s government says the law is meant to balance innovation with accountability. The country has also designated ten “high-risk” sectors — such as healthcare, education, criminal investigations, lending, and nuclear power — where AI systems will face heightened oversight and safety standards.

The rollout comes amid renewed global concern over deepfakes, particularly following recent controversies involving AI-generated images of real people. While the European Union and U.S. states like California have passed AI-related regulations, South Korea is the first to fully enforce a wide-ranging national AI law.
