Ep.1 “Beijing’s AI Warning: Why Ding Xuexiang’s Speech at Davos Matters”
“We will not blindly follow trends, nor will we engage in unrestrained international competition.”
At the 2025 World Economic Forum Annual Meeting in Davos, Chinese Vice Premier Ding Xuexiang presented an intriguing metaphor to the assembled political and business leaders:
"AI should be an 'Ali Baba's cave' of treasures, not a 'Pandora’s box' of risks."
Will the development of artificial intelligence become an Ali Baba’s cave brimming with wealth and opportunity, or a Pandora’s box that, once opened, unleashes uncontrollable dangers?
Ding’s remarks quickly sparked discussions both inside and outside the forum. Unlike many politicians around the world, who often focus on AI’s challenges to privacy, employment, or ethics, Ding emphasized the systemic risks AI might pose—and the necessity of installing a “braking system” for this technological race.
At a time when global AI regulation is still in its early exploratory phase, why has Beijing chosen this moment to deliver such a stark warning—and through a senior official who has rarely spoken publicly about AI?
Is this merely diplomatic rhetoric, or does it signal a shift in China’s approach to AI governance?
1. Who Is Ding Xuexiang, and Why Does He Matter?
In China’s political system, an official’s formal title does not always accurately reflect their true influence. Ding Xuexiang is a prime example of this dynamic.
A Political Role Beyond His Formal Title
On paper, Ding serves as a Vice Premier of the State Council, overseeing policy coordination in areas such as technology, industry, and environmental protection. However, his real role extends far beyond these administrative responsibilities.
A Member of the Political Core
At 62 years old, Ding Xuexiang is one of the seven members of the Politburo Standing Committee (PSC)—the highest decision-making body of the Chinese Communist Party. He is also the only Standing Committee member born in the 1960s, making him the youngest in the group.
Unlike many senior officials who rise through traditional Party affairs, Ding’s career began in science and administrative management before he transitioned into politics.
Engineering Background – Ding was originally trained in mechanical engineering and spent 17 years working in scientific research and management at the Shanghai Research Institute of Materials.
Political Ascent – In 2007, he became a key aide to Xi Jinping when Xi was Party Secretary of Shanghai. Since then, he has followed Xi’s rise and ascended to the Party’s highest ranks.
Policy Coordinator – Since 2013, Ding has been one of Xi’s closest aides, responsible for implementing top-level decisions and coordinating policies within the Party elite.
Neil Thomas, an analyst at Eurasia Group, notes that Ding has played a crucial role in Xi’s push to elevate technocrats within China’s leadership. Some even suggest that Ding may be the official who has spent the most time with Xi over the past five years.
A Key Figure in AI Governance
Ding’s influence is not just political—it extends deep into China’s technology policy.
In 2023, he was appointed Director of the newly established Central Science and Technology Commission—a powerful body designed to centralize Party control over China’s technological strategy.
This role places him at the core of China’s AI policymaking, particularly at the intersection of AI regulation, technological competition, and national security.
Ding’s remarks on AI safety at the 2025 Davos Forum should not be seen as just the opinion of a senior technocrat. Instead, they signal Beijing’s top-level stance on AI governance.
The message is clear: China is not just developing AI—it is actively shaping global AI governance rules.
2. Are China’s Top Leaders—Beyond Ding Xuexiang—Concerned About AI Safety?
Yes. Among China’s seven most powerful political figures, at least five have publicly addressed AI safety concerns at major international or domestic forums.
Beyond Vice Premier Ding Xuexiang, these leaders include:
President Xi Jinping
In October 2023, Xi introduced the Global AI Governance Initiative, calling for international cooperation to shape AI’s future governance. In November 2024, at the G20 Summit in Rio de Janeiro, he reiterated the need for stronger global AI governance and collaboration, emphasizing that AI should serve humanity and benefit all nations.
Premier Li Qiang
Speaking at the 2024 World Artificial Intelligence Conference, Li urged the international community to establish a widely accepted AI governance framework and standards, ensuring AI remains safe, reliable, and controllable, while always aligning with human values and fundamental interests.
Zhao Leji (Chairman of the National People’s Congress Standing Committee)
At the 2024 Boao Forum for Asia, Zhao called on nations to move beyond bloc confrontations and zero-sum competition, advocating for a cooperative approach to implementing the Global AI Governance Initiative.
Wang Huning (Chairman of the National Committee of the Chinese People’s Political Consultative Conference, CPPCC)
While Wang has not spoken at international forums, he has made similar statements during China’s domestic political meetings, reinforcing the need for AI governance that aligns with China’s strategic priorities.
Cai Qi (Director of the CCP General Office) & Li Xi (Secretary of the Central Commission for Discipline Inspection)
Unlike the others, Cai and Li have not actively participated in AI governance discussions. However, this is likely due to their specific roles:
Cai Qi focuses on Party administration and internal policy coordination, which rarely involves AI governance.
Li Xi is primarily responsible for anti-corruption and disciplinary oversight, making AI regulation outside his immediate domain.
Ding Xuexiang’s speech at Davos reflects more than just personal or technocratic concerns—it signals that AI safety has become a high-level strategic priority for Beijing.
But this raises an even more important question:
When did China officially begin to treat AI safety as a serious concern, and why?
In the next article, we will explore the turning points that led Beijing to shift its focus toward AI safety—and what these decisions reveal about China’s long-term strategy in the AI era.