China initiates a months-long effort to combat the misuse of AI.

      The Cyberspace Administration's annual ‘Qinglang’ campaign arrives in a markedly different regulatory landscape than last year's, and coincides with a White House accusation that China is conducting large-scale AI theft operations. According to Reuters, China has launched a months-long enforcement initiative against the misuse of artificial intelligence. The effort, led by the Cyberspace Administration of China (CAC) in collaboration with the Ministry of Public Security and other agencies, targets AI-related fraud, deepfakes, misinformation, and illegal applications that infringe on privacy and intellectual property rights.

      This year marks the 2026 iteration of the 'Qinglang' (Clear and Bright) special campaign series, which has become an annual enforcement tool. Its predecessor, initiated on April 30, 2025, and titled ‘Rectification of AI Technology Misuse,’ lasted three months and unfolded in two phases. By the end of its first phase in June 2025, authorities had removed over 3,500 AI-related products, eliminated more than 960,000 instances of illegal or harmful content, and penalized or shut down over 3,700 accounts.

      This year's campaign unfolds in a significantly evolved regulatory environment and a more complex geopolitical context, both of which shape its scope and focus.

      What does the campaign target?

      China's enforcement campaigns against AI misuse are organized around a framework that has expanded with each round to cover both the advanced capabilities and the criminal uses of AI. Given the established Qinglang enforcement structure and the regulations issued in 2025 and early 2026, this year's initiative is expected to address several issues at once.

      The foremost and most commercially significant issue is AI-enabled fraud and impersonation. Voice-cloning and face-swapping deepfake technologies are increasingly used to mimic celebrities, executives, and government officials in scams targeting ordinary consumers. The CAC's 2025 campaign directly addressed the illegal use of AI to impersonate friends and relatives for online fraud, along with the unauthorized use of AI to recreate deceased individuals, highlighting the problem of AI-generated likenesses produced without consent.

      On April 3, 2026, the CAC issued draft regulations for digital virtual human services detailing consent requirements for likeness usage and prohibiting the circumvention of biometric authentication systems, with public comments accepted until May 6.

      The second critical area focuses on AI-generated misinformation and the activities of ‘online water armies,’ involving the extensive use of AI to create fake social media accounts, produce and circulate coordinated content, manipulate engagement metrics, and create fictitious trending topics. This was identified as a priority in the second phase of the 2025 campaign, targeting platforms that allowed AI-driven account farming, mass content generation, and social bot networks.

      Thirdly, enforcement will address compliance with mandatory filing and registration protocols. Large language models providing generative AI services to the public in China must undergo security assessments and file with the CAC before their launch. By March 2025, 346 generative AI services had completed this filing; however, numerous others had not. The first phase of the 2025 campaign marked unfiled AI products as key rectification targets, resulting in penalties for several applications that offered services without following the required process.

      Fourth is the governance of training data, specifically the use of training corpora that violate intellectual property rights, privacy rights, or consent requirements. This enforcement angle has become more sensitive in 2026, following the White House's formal April 23 accusation that Chinese companies run ‘industrial-scale’ campaigns to extract capabilities from U.S. AI models using jailbreaking techniques and large numbers of proxy accounts. China's domestic enforcement does not directly address this U.S. allegation; it emphasizes the protection of its own rights holders and users. Nevertheless, both regulatory frameworks are now evolving with an awareness of each other's moves.

      The 2026 campaign benefits from a significantly more advanced domestic regulatory framework compared to its predecessor. Several key regulations took effect or were introduced in draft form leading up to this enforcement initiative. China's mandatory AIGC (AI-generated content) labeling standards, requiring clear and technical labels on all AI-generated content, went into effect on September 1, 2025. On April 10, 2026, the CAC released Interim Measures for the Management of Anthropomorphic AI Interactive Services, regulating chatbots, AI companions, and customer service agents that mimic human personalities, effective from July 15, 2026. On April 3, the CAC published draft rules for digital virtual human services relating to biometric deepfakes, with the public comment period closing on May 6, 2026. Additionally, a joint enforcement agenda targeting personal information protection in several sectors, such as internet advertising and healthcare, was released in April 2026 by the CAC, MIIT, and MPS.

      This layered rulemaking means that the 2026 Qinglang campaign has considerably more legal authority than its 2025 counterpart. Enforcement actions can be based on mandatory
