Introduction
In the lead-up to China’s 2025 International Workers’ Day celebrations, a striking irony emerged. As major firms launched advanced AI systems, the workers behind these technologies remained trapped in a contradictory digital order.
China today enforces some of the world’s most extensive AI regulations, intended to shield workers from algorithmic exploitation, while simultaneously deploying AI systems that suppress discussion of worker rights and collective mobilisation. This contradiction reflects algorithmic authoritarianism: a governance model in which AI protects and represses at once.
This article extends Mark Jia’s notion of “Authoritarian Privacy” to explain how these dual functions legitimise central control. Technological safeguarding becomes a political instrument, not simply a social protection mechanism.
Safeguard Through Regulation
At first glance, China’s regulatory response to platform labour appears progressive. The 2021 “Internet Information Service Algorithmic Recommendation Management Provisions” directly targeted exploitative delivery deadlines and sought to reduce unreasonable safety risks for gig workers.
This intervention followed years of public concern around delivery platforms such as Meituan and Ele.me. As algorithmic pressure intensified, delivery windows shrank and workers without formal protections were forced into high-risk behaviour on urban roads. In this context, the 2021 provisions offered temporary relief, and some platforms eased time requirements.
The Limits of Safeguard and AI as Political Gatekeepers
Yet this tolerance narrowed quickly once autonomous labour organising began. Even before the major platform investigations, workers had staged repeated strikes, and protest leaders often faced detention rather than policy dialogue. The state’s message became clear: protections may be granted from above, but collective pressure from below is unacceptable.
This contradiction deepens in the AI information layer. Despite documented protests, mainstream Chinese AI models often deny or downplay labour unrest, framing dissent as destabilising or false. In effect, these systems act not as neutral assistants but as narrative filters aligned with state priorities.
New surveillance integrations reinforce this trend. Expanded social listening systems and AI-assisted governance workflows increase the state’s ability to monitor discourse, including labour grievances. Even legacy complaint channels risk becoming sites of predictive control rather than remedies for workers.
Viewed through Jia’s framework, the pattern is coherent. Just as authoritarian privacy law can shield citizens from private actors while preserving centralised state surveillance, labour-oriented AI safeguards can constrain corporate excess while preserving political monopoly over mobilisation.
Conclusion
The Chinese case shows that algorithmic protection and algorithmic repression are not mutually exclusive. They can operate as complementary pillars of governance. By positioning itself as a benevolent corrector of market abuse while suppressing collective labour agency, the state transforms selective safeguards into a source of legitimacy, while keeping dissent structurally muted.