China initiates a lengthy campaign to combat the misuse of AI.
The annual 'Qinglang' campaign from the Cyberspace Administration arrives in a markedly different regulatory landscape than last year's, and against the backdrop of White House allegations that China is running 'industrial-scale' AI theft operations. As reported by Reuters, China has launched a months-long enforcement campaign against the misuse of artificial intelligence.
This initiative, led by the Cyberspace Administration of China (CAC) and in collaboration with the Ministry of Public Security and other bodies, focuses on issues such as AI-powered fraud, deepfakes, disinformation, and illegal applications that infringe on privacy and intellectual property rights. This marks the 2026 installment of the recurring 'Qinglang' (Clear and Bright) enforcement campaign series. The previous campaign, which began on April 30, 2025, and was called 'Rectification of AI Technology Misuse,' lasted for three months and unfolded in two phases.
By the end of the first phase in June 2025, authorities had taken down over 3,500 AI-related products, removed more than 960,000 pieces of illegal or harmful content, and penalized or shut down over 3,700 accounts. This year's campaign unfolds in a far more developed regulatory context and against a geopolitically charged backdrop, making its objectives and targets considerably more complex than before.
What does the campaign target?
China's enforcement efforts regarding AI misuse are organized according to a taxonomy that has broadened with each new iteration, responding to advancements in both AI capabilities and its criminal uses. Built on the established Qinglang enforcement framework and the new regulations instituted in 2025 and early 2026, this year's campaign is anticipated to simultaneously focus on several categories.
The foremost and most commercially relevant category is AI-enabled fraud and impersonation. There has been a notable rise in the use of voice-cloning and face-swapping deepfake technology to impersonate celebrities, executives, and government officials in scams directed at ordinary consumers. The CAC's 2025 campaign concentrated on the use of AI to impersonate friends and relatives for illegal activities such as online fraud, as well as the unauthorized AI recreation of deceased individuals' likenesses.
On April 3, 2026, the CAC issued draft regulations for digital virtual human services that outline consent requirements for likeness usage and prohibit the circumvention of biometric authentication systems, with the public comment period closing on May 6.
The second significant area of focus concerns AI-generated disinformation and 'online water army' operations, which involve large-scale use of AI to fabricate fake social media accounts, produce and disseminate coordinated content, manipulate engagement metrics, and create artificial trending topics. The 2025 campaign prioritized this issue in its second phase, concentrating on platforms that support AI-driven account farming, bulk content production, and social bot networks.
The third area pertains to non-compliance with mandatory filing and registration procedures. China mandates that large language models providing generative AI services to the public undergo security assessments and file with the CAC before their launch. As of March 2025, 346 generative AI services had completed the necessary filing; many others had not. The initial phase of the 2025 campaign identified unfiled AI products as a main target for rectification, leading to penalties for three AI applications in Shanghai that operated without the required procedures, and for a face-swapping app in Zhejiang province that was ordered to be removed from app stores.
Fourth, the campaign addresses the management of training data, specifically the use of datasets that include content infringing on intellectual property rights, privacy rights, or consent obligations. This enforcement aspect is particularly sensitive in 2026, in light of the White House’s formal accusation on April 23 that Chinese companies are conducting 'industrial-scale' campaigns to extract capabilities from U.S. frontier AI models using jailbreaking techniques and numerous proxy accounts.
China's domestic enforcement effort is not a direct response to the U.S. accusation; its training-data rules are written to protect Chinese rights holders and users, not American ones. Still, the two regulatory environments are plainly evolving with an awareness of each other.
The 2026 campaign functions within a significantly more advanced domestic regulatory framework compared to its predecessor. Several key regulations were either implemented or published as draft versions in the months preceding this enforcement initiative. China's mandatory AIGC (AI-generated content) labeling standards, which require clear and technical labels on all AI-generated text, images, audio, and video, came into effect on September 1, 2025.
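The labeling standards described above distinguish between an explicit label visible to users and an implicit, machine-readable label carried with the content. A minimal sketch of that dual-label pattern is shown below; the field names (`aigc`, `provider`, `model`) and the function itself are illustrative assumptions, not taken from the standard's text.

```python
# Hypothetical sketch of dual AIGC labeling: an explicit, user-visible
# notice plus implicit, structured provenance metadata. All field names
# here are illustrative, not drawn from the Chinese standard itself.

def label_aigc_content(text: str, provider: str, model: str) -> dict:
    """Wrap generated text with a visible notice and embedded metadata."""
    return {
        # Explicit label: shown to the end user alongside the content.
        "display_text": f"[AI-generated] {text}",
        # Implicit label: machine-readable provenance carried with the payload.
        "metadata": {
            "aigc": True,
            "provider": provider,
            "model": model,
        },
    }

labeled = label_aigc_content("Sample summary.", "ExampleCo", "example-llm-1")
print(labeled["display_text"])
print(labeled["metadata"])
```

In practice, implicit labels for images, audio, and video are typically embedded in file metadata or watermarks rather than a JSON sidecar; the structure above only illustrates the explicit/implicit split the rules require.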
On April 10, 2026, the CAC released the Interim Measures for the Management of Anthropomorphic AI Interactive Services, which regulate chatbots, AI companions, and AI customer service agents designed to simulate human personality and communication styles, effective from July 15, 2026. On April 3, the CAC published draft rules governing digital virtual human services focused on biometric deepfakes, with a public comment period that ended on May 6, 2026.