AI Development and Governance in China amid Geopolitical Tensions

CIGI Paper No. 338 — November 2025

About CIGI

The Centre for International Governance Innovation (CIGI) is an independent, non-partisan think tank whose peer-reviewed research and trusted analysis influence policy makers to innovate. Our global network of multidisciplinary researchers and strategic partnerships provide policy solutions for the digital era with one goal: to improve people’s lives everywhere. Headquartered in Waterloo, Canada, CIGI has received support from the Government of Canada, the Government of Ontario and founder Jim Balsillie.

À propos du CIGI

Le Centre pour l’innovation dans la gouvernance internationale (CIGI) est un groupe de réflexion indépendant et non partisan dont les recherches évaluées par des pairs et les analyses fiables incitent les décideurs à innover. Grâce à son réseau mondial de chercheurs pluridisciplinaires et de partenariats stratégiques, le CIGI offre des solutions politiques adaptées à l’ère numérique dans le seul but d’améliorer la vie des gens du monde entier. Le CIGI, dont le siège se trouve à Waterloo, au Canada, bénéficie du soutien du gouvernement du Canada, du gouvernement de l’Ontario et de son fondateur, Jim Balsillie.

Credits

Senior Fellow S. Yash Kalash

Director, Program Management Dianna English

Senior Program Manager Ifeoluwa Olorunnipa

Manager, Publications Jennifer Goyder

Publications Editor Susan Bubak

Graphic Designer Sami Chouhdary

Copyright © 2025 by the Centre for International Governance Innovation

The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Centre for International Governance Innovation or its Board of Directors.

For publications enquiries, please contact publications@cigionline.org.

The text of this work is licensed under CC BY 4.0. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

For reuse or distribution, please include this copyright notice. This work may contain content (including but not limited to graphics, charts and photographs) used or reproduced under licence or with permission from third parties. Permission to reproduce this content must be obtained from third parties directly.

Centre for International Governance Innovation and CIGI are registered trademarks.

67 Erb Street West Waterloo, ON, Canada N2L 6C2 www.cigionline.org

About the Author

Xingqiang (Alex) He is a CIGI senior fellow. Alex is an expert on digital governance in China, the Group of Twenty (G20), China and global economic governance, domestic politics in China and their role in China’s foreign economic policy making, and Canada-China economic relations.

Prior to joining CIGI in 2014, Alex was a senior fellow and associate professor at the Institute of American Studies at the Chinese Academy of Social Sciences (CASS) and a visiting scholar at the Paul H. Nitze School of Advanced International Studies, Johns Hopkins University, in Washington, DC (2009–2010). Alex was also a guest research fellow at the Research Center for Development Strategies of Macau (2008–2009) and a visiting Ph.D. student at the Centre of American Studies at the University of Hong Kong (2004).

Alex is the author of The Dragon’s Footprints: China in the Global Economic Governance System under the G20 Framework, published in English (CIGI Press, 2016) and Chinese editions, and co-author of A History of China-U.S. Relations (Chinese Social Sciences Press, 2009). Alex has published dozens of academic papers, book chapters, and newspaper and magazine articles.

Alex has a Ph.D. in international politics from the Graduate School of CASS and previously taught at Yuxi Normal University in Yunnan Province, China. Alex is fluent in Chinese and English.

Acronyms and Abbreviations

AGI artificial general intelligence

AI artificial intelligence

AIGC AI-generated content

CAC Cyberspace Administration of China

CPC Communist Party of China

DBN deep belief network

DNNs deep neural networks

FP8 8-bit floating-point precision

GPT generative pre-trained transformer

LLM large language model

NLP natural language processing

R&D research and development

RLHF reinforcement learning from human feedback

Executive Summary

This paper examines the complex dynamics of China’s artificial intelligence (AI) development and governance, highlighting the interplay between state-led strategies, private sector innovation, national security imperatives, AI safety concerns and global AI competition amid geopolitical tensions.

Chinese companies’ strategic focus on industrial applications, combined with a lower business risk tolerance and a short-term profit orientation in the country’s innovation culture, helps explain why China was not among the first to achieve major breakthroughs in generative AI. A relatively unregulated environment in the 2010s allowed for rapid experimentation in AI, leading to strong momentum in application-driven growth. While China announced its ambition to become a global leader in AI in its 2017 national strategy for AI development, much of its progress since then has focused on rapid application, commercialization and industry-specific use cases — rather than on foundational innovation.

DeepSeek’s recent breakthrough has become a defining moment in China’s AI development, revealing both the progress and the structural limitations within the country’s innovation ecosystem. It has emerged as a global game changer by significantly reducing model costs and advancing optimization techniques in AI engineering, particularly improving AI accessibility. Its success, however, may be the exception rather than the norm in China. DeepSeek’s achievement is not based on original innovation, and its approach will be difficult to replicate within China’s current innovation system. As a result, despite its growing capabilities, China may remain only a close follower in the global AI race, trailing behind the United States in frontier model development.

The emergence of generative AI, symbolized by ChatGPT’s global impact in 2022, prompted China to tighten regulatory controls on AI development. The state quickly implemented content controls and censorship to protect national security and ideological boundaries, embedding security concerns deep within its AI governance structure. China’s current AI governance approach emphasizes balancing development with security, presenting security as the baseline and economic transformation as the strategic objective.

Despite these restrictions, models such as DeepSeek V3 continue to perform competitively on technical benchmarks. While content, perspective and cultural biases are evident due to political constraints, the models’ core computational capabilities remain intact. This suggests that censorship, while shaping outputs, does not necessarily compromise overall model performance.

Internationally, China has positioned AI safety as a central component of its global governance agenda, particularly through its 2023 Global AI Governance Initiative. It promotes principles such as human-centric development, algorithmic accountability and risk-tiered oversight. However, in practice, enforcement remains weak among domestic large language model (LLM) developers, where AI safety is not a top priority and effective risk controls are limited.

In the context of intensifying US-China tech competition, DeepSeek’s emergence challenges the effectiveness of Western export controls and signals China’s resilience in navigating technological restrictions. Nevertheless, the uniqueness of its success highlights the innovation gap that persists beneath China’s impressive AI advancements. Without systemic reform to encourage more foundational innovation, China’s position in the global AI landscape may remain reactive — strong in application but lagging in invention.

Introduction

Since the release of ChatGPT in November 2022, discourse around the global AI race has increasingly focused on the perceived development gap between the United States and China — the two dominant actors in the AI domain. Prominent figures such as former Google CEO Eric Schmidt argued in May 2024 that China was trailing the United States by two to three years, calling the lead “an eternity” in a fast-moving field (Booth 2025). Critics within China echoed this assessment, suggesting that the disparity may be as large as a decade (Zhao 2024), citing challenges in accessing high-quality data, securing advanced chips and keeping pace in algorithmic innovation, which constitute key constraints on China’s foundational AI capabilities.

However, external assessments from third-party organizations and international AI platforms1 suggest a more nuanced picture. While Chinese LLMs may not yet match the top-tier models produced by leading US firms, the performance gap is often portrayed as narrowing rather than insurmountable. Within China, a prevailing view among AI researchers and industry stakeholders is that the country has fallen behind in the development of foundational AI models. Consequently, there is a growing consensus that China should leverage its existing strengths to pursue an industry-specific strategy, focusing on domain-specialized models where it may have competitive advantages, rather than competing head-on with US firms in developing foundational models such as general-purpose chatbots like GPT (generative pre-trained transformer).

The emergence of DeepSeek in early 2025 has shifted this narrative. Its success has fuelled growing optimism about China’s AI capabilities and its ability to narrow the technological gap with the United States. At a Harvard University forum in November 2024, Schmidt revised his earlier position, suggesting that China was closing the gap at an unexpectedly rapid pace, aided by short-term surges in investment and a vast pool of engineering talent (Mao and Patel 2024). The strong performance of DeepSeek’s V3 model and R1 reasoning model has been cited as empirical evidence supporting this view. Nonetheless, a broad consensus, both domestically and internationally, still holds that while China remains behind the United States in AI development, it is advancing swiftly.

This paper seeks to explore the factors that have shaped this trajectory. Why did China not take an early lead in foundational AI development, and what factors account for its current position? The first and second sections examine the historical evolution of AI in China and the challenges that hindered its early leadership, and the current state of China’s AI development following the rise of DeepSeek. The paper then turns to China’s governance of AI, particularly its pioneering regulatory efforts in generative AI, analyzing how its governance approach has influenced its broader AI development. It investigates how China attempts to balance state security, economic growth and AI safety amid escalating technological and geopolitical competition with the United States. Finally, the paper discusses the broader implications of China’s AI strategy for global technological innovation, governance of emerging technologies and the geopolitical dynamics of AI leadership.

1 The DeepSeek R1 model ranks closely behind the OpenAI o3 model and the Google Gemini 2.5 model on both Hugging Face’s Open LLM Leaderboard and Artificial Analysis’ LLM Performance Leaderboard. See https://huggingface.co/spaces/ArtificialAnalysis/LLM-PerformanceLeaderboard and https://artificialanalysis.ai/.

China’s AI Development Before DeepSeek

Baidu-Led Initial Research and AI Development in China

Baidu, China’s largest search engine and AI company, began investing in AI research and development in 2010 (Yang 2018), four years after Google launched Google Translate using machine learning.2 That year, Baidu established a natural language processing (NLP) department3 to meet the growing demand for intelligent search capabilities and enhance its core search engine business. It also initiated work on voice and image recognition technologies. These developments were set against a backdrop of global advances in deep learning led by Geoffrey Hinton and his students. Notable advances included deep belief networks (DBNs)4 in 2006 (Hinton, Osindero and Teh 2006) and the use of deep neural networks (DNNs)5 for acoustic modelling in speech recognition in 2009 (Hinton et al. 2009).

A pivotal moment in AI development came with the success of AlexNet in 2012. As Fei-Fei Li noted 12 years later (CHM Live 2024), AlexNet symbolized the convergence of three foundational elements of modern AI: large-scale labelled data sets, increased computing power and improved training methods for deep neural networks (Lee 2024). Following this development, Baidu began to shape its AI strategy. In 2012, it established the Institute of Deep Learning and introduced Deep Speech, a deep learning-based speech recognition system. Baidu reportedly offered Geoffrey Hinton $12 million6 per year in an effort to recruit him and also bid more than $40 million for his start-up, competing with Google and Microsoft (Metz 2021). After failing to hire Hinton, Baidu appointed Andrew Ng, a leading machine-learning expert, as chief scientist in 2014. That same year, Baidu launched its AI lab — Silicon Valley AI Lab — marking a significant move toward global AI collaboration.

2 See https://ai.google/our-ai-journey/?section=intro.

3 See https://research.baidu.com/Blog/index-view?id=146.

4 A DBN is a type of deep learning model that learns patterns from data through multiple layers. It is good at understanding complex data such as images or sounds, especially when the data are not labelled (unsupervised learning).

5 A DNN is a neural network with multiple layers between the input and output layers, allowing it to learn complex patterns and features from data. Unlike a DBN, each layer of a DNN simply transforms inputs into outputs. It is mainly used for supervised learning and does not try to generate data.

For AI researchers, the success of AlexNet marked the beginning of a new era driven by computing power, large data sets and improved algorithms, with computing power the most decisive factor (Yu 2023).7 However, AI gained widespread public attention worldwide only after AlphaGo defeated Lee Sedol, the world champion of the board game Go, in March 2016. Developed by DeepMind, AlphaGo combined Monte Carlo tree search8 with DNNs, demonstrating superhuman performance in a complex strategic game. The victory had a profound impact in China as well, prompting policy makers to recognize AI’s strategic potential. By November 2016, AI was officially designated as a “strategic emerging industry” in the 13th Five-Year Plan for Strategic Emerging Industries (He 2021), and in July 2017, the government released its first national AI development strategy.

In the wake of these events, Chinese researchers rapidly entered the deep learning field, achieving notable progress in computer vision, voice and facial recognition, and NLP. From the outset of deep learning research, China and its leading technology companies demonstrated a strong interest in prioritizing the application of AI technologies and deep learning methods. This application-oriented approach laid the foundation for China’s subsequent strengths in industry-specific AI development. Baidu, having been the earliest major tech company in China to invest in AI, launched an “All in AI” strategy in 2017 (Ling 2017), and made advances in areas such as speech recognition, autonomous driving and its core platform, Baidu Brain, which integrates voice, image, NLP and user profiling into its ecosystem.

6 All dollar figures in US dollars.

7 Chinese AI researchers, including DeepSeek’s founder, Liang Wenfeng, realized the significance of AlexNet’s success at the time as well. See the interview with Liang Wenfeng by 36Kr, a Chinese media platform: https://36kr.com/p/2272896094586500.

8 Monte Carlo tree search is a search algorithm or method used in AI, particularly for the decision-making process in board games or solving sequential decision problems.

This commercialization-driven approach also defined the trajectories of China’s leading AI start-ups. Key AI start-ups that emerged from breakthroughs in computer vision and deep learning, which include SenseTime (deep learning platforms), Megvii and Yitu (facial recognition systems) and iFlytek (voice recognition), quickly translated their work into applications. Tang Xiao’ou founded SenseTime in June 2014, based on the team’s DeepID (deep identification verification) technology, which achieved a facial verification accuracy of 99.15 percent, surpassing human performance.9 Megvii and iFlytek were also built on remarkable technological achievements in image recognition and voice recognition, respectively.10

These firms, including both Baidu and the AI start-ups, demonstrate China’s ability to rapidly convert cutting-edge AI research into industry-ready applications, reinforcing the country’s reputation for implementation excellence, especially in sectors such as surveillance, facial and voice recognition systems, finance, education and health care. At the same time, this application-first research and development (R&D) strategy may have inadvertently contributed to China’s lag in foundational AI research, particularly in artificial general intelligence (AGI). The country’s focus on real-world applications has often diverted resources and talent away from basic AI research.

One example is Baidu’s early research on scaling laws in 2017, which applied long short-term memory architecture rather than the now-dominant transformer models, although the researchers did not refer to their findings as “laws” (Hestness et al. 2017). Despite this early lead, Baidu failed to maintain momentum in large-scale AI model development. Changes in leadership at Baidu Research illustrate the company’s shifting priorities. Ng left in March 2017 without disclosing his reasons for leaving. His successor, Lin Yuanqing, an expert in computer vision and machine learning, resigned after just six months due to differences in priorities — Baidu emphasized consumer-facing AI applications, while Lin prioritized core technology research.

9 See SenseTime’s website (www.sensetime.com/en/about-index).

10 For details on these achievements, see: Megvii’s website (www.megvii.com/news_detail/id/43); Wang and Zhu (2016); MIT Technology Review (2018); and Xiong et al. (2017).

In terms of LLM research, only a few Chinese organizations, such as Baidu and Huawei, as well as Tsinghua University, invested in pre-training models before the release of ChatGPT in 2022. In March 2019, Baidu released ERNIE 1.0, shortly after GPT-1 and Google’s BERT. By 2021, Baidu’s ERNIE 3.0 surpassed 100 billion parameters, with ERNIE 3.0 Titan reaching 260 billion. Huawei launched its Pangu LLM initiative in November 2020, following OpenAI’s release of GPT-3 (HuaweiTech 2022), and released the first version in April 2021. ZhipuAI, a spin-off from Tsinghua University, released GLM-10B in September 2021, the first Chinese open-source model with 10 billion parameters, and later released GLM-130B (130 billion parameters) in August 2022.11

Why China Fell Behind at the Starting Line in Generative AI

A key question is why Chinese companies were not the first to release a ChatGPT-like chatbot, and more broadly, why China has lagged at the starting line in the generative AI race, despite Chinese companies’ deep integration with global AI development trends over the past decade. A review of the path taken by China’s leading AI company, Baidu, in developing generative AI may shed light on answers to this question.

During 2012–2022, Baidu invested more than 100 billion yuan in AI R&D (Zhou 2022). In the field of LLMs, Baidu attempted to catch up with ChatGPT by releasing its own chatbot, ERNIE, in March 2023, just four months after ChatGPT’s debut in November 2022. However, ERNIE’s launch was widely considered disappointing. As Baidu CEO Robin Li later admitted, ERNIE was not ready at the time, but the company decided to release it early, on March 16, 2023, due to strong market attention and demand, as well as to coincide with a major event Li was attending that day.12 Judging by this timeline, Baidu was already several months behind OpenAI, and the performance gap in foundational model training likely represented at least a difference of a year.

11 See ZhipuAI’s website (www.zhipuai.cn/aboutus).

12 See Robin Li’s interview with Founder Park: https://wallstreetcn.com/articles/3684875.

Beyond this timeline, a major factor contributing to Baidu’s lag is its strategic approach to AI. Unlike OpenAI, which has prioritized foundational model development and long-term goals such as AGI, Baidu focused more on short-term commercial applications. In hindsight, it is clear that leading AI firms such as OpenAI have made heavy investments in foundational model training with a long-term vision, while Baidu pursued a different trajectory. When ERNIE was launched in March 2023, Li emphasized Baidu’s strategy of building tools, platforms, products and ecosystems around the model. The company adopted a three-layered development approach: model plus tools/platforms plus ecosystem. This reflects Baidu’s commercial orientation, aiming to integrate LLMs directly into industry applications from the outset.

In contrast, OpenAI — founded as a non-profit organization to counter potential monopolies in AI — remained focused on pushing the boundaries of LLMs and AGI. Since at least 2015, OpenAI has methodically developed its models from GPT-1 to GPT-2 and GPT-3, steadily preparing for a public-facing conversational AI. By late 2022, OpenAI judged that LLMs were mature enough for prime time when paired with techniques such as instruction tuning and reinforcement learning from human feedback (RLHF). ChatGPT is essentially GPT-3.5 plus better fine tuning and a user-friendly interface.

Although Baidu may possess strong LLM capabilities, it adopted a fundamentally different strategy focused on industry applications and commercialization for profit. Rather than prioritizing breakthroughs in AI model architecture, Baidu concentrated on incorporating the ERNIE model into various industries. For instance, the company had already developed 11 industry-specific models applied in sectors such as finance, energy, manufacturing, media and internet services. ERNIE also served as the backbone for a wide range of service providers and consumer-facing applications.

Following the release of ERNIE 3.0 in March 2023, Baidu doubled down and further shifted its focus to product development and AI integration into everyday life. It embedded the ERNIE bot into its core businesses, such as its search engine; enterprise AI solutions (for example, Baidu Cloud, health care and manufacturing-specific models); autonomous driving (Apollo); and voice recognition platforms (DuerOS). This “industry-first” strategy underscored Baidu’s intention to build commercially viable ecosystems rather than general-purpose AI platforms.

In essence, Baidu appeared to prioritize near-term commercial viability over fundamental innovation or AGI pursuits. While commercially rational, this approach came at a cost. Baidu’s industry-first strategy did not yield comparable technological breakthroughs in AI-generated content (AIGC), nor did it generate the same public and investor excitement. This contrasts with OpenAI’s concentrated push toward general-purpose LLMs and AGI.

From a business standpoint, Baidu faced significant market pressure to secure market share before competitors emerged. The emphasis on quick commercialization, while understandable, came at the cost of long-term R&D and high-risk innovation. Compounding the issue, Baidu, like many large tech firms, has developed a multi-tiered, bureaucratic management structure that may stifle or delay innovation.

As a result of these divergent strategies, the gap between Chinese and US companies in foundational AI has only widened in recent years. While OpenAI, Google, Meta and other US firms have advanced rapidly, China’s efforts have been further hampered by US export controls on advanced AI chips — critical hardware for LLM training. In this context, DeepSeek later emerged as a potential game changer in China’s generative AI development.

China’s AI Development Direction and Priorities After DeepSeek

DeepSeek’s success has played a transformative role in China’s AI development. It may significantly alter the future trajectory of AI not only within China but also globally.

First, DeepSeek has demonstrated that Chinese companies are capable of achieving substantial innovation in AI through engineering optimization and cost-effective approaches, even under the constraints of US export controls on advanced AI chips. It challenges the widespread perception that China is destined to remain a follower in AI innovation and shifts the dynamics of US-China technological competition in AI.

The advanced chip ban introduced by the Biden administration in October 2022, just one month before the release of ChatGPT, reflected escalating geopolitical tensions between the world’s two largest economies and leading AI powers. As export controls tightened, access to high-performance chips increasingly emerged as a critical bottleneck for China’s development of LLMs and generative AI. In response, China pivoted toward an alternative approach — a more industry-specific, application-driven AI development strategy that aligns with its long-standing focus on deploying AI for real-world utility. Sector-specific models, which focus on domain-specific knowledge and data, require less computing power than general-purpose LLMs and have become the preferred choice for Chinese companies and entrepreneurs.

The emergence of DeepSeek, however, reignited hope for China’s foundational models and generative AI capabilities. Using less advanced AI chips, namely Nvidia’s H800, DeepSeek achieved GPT-4o level performance with its V3 model and R1 reasoning model through remarkable engineering innovation and optimization in model architecture and algorithm design. This success has significantly boosted morale in China’s AI sector. DeepSeek’s open-source approach also inspired progress among other firms, such as Alibaba’s Qwen model, accelerating the country’s overall advancement in foundational AI models.

DeepSeek’s breakthrough sent shockwaves through the global AI community, particularly in the United States. During a congressional hearing in May 2025, industry leaders including OpenAI CEO Sam Altman, AMD CEO Lisa Su and Microsoft President Brad Smith acknowledged that China’s AI capabilities are now closely approaching those of the United States (U.S. Senate Committee on Commerce, Science, & Transportation 2025). They noted that China’s deep talent pool and its ability to innovate around hardware constraints allow it to find alternative paths to comparable outcomes even without access to the most advanced AI chips (ibid.).

Second, DeepSeek’s cost-effective model training represents a disruptive innovation. A central aspect of DeepSeek’s success lies in its drastic reduction in LLM training costs through creative engineering and optimization to achieve performance that is comparable to that of the most advanced American LLMs while using significantly fewer computing resources. These innovations substantially lower the barrier to entry for AI model training, making advanced AI technology more accessible for businesses, start-ups and developers worldwide.

Key innovations13 include:

→ Group relative policy optimization: A novel reinforcement learning algorithm that enhances reasoning by evaluating groups of responses relative to one another, rather than relying on a separate evaluation model to judge response quality. It is the main innovation in the DeepSeek-R1 reasoning model (Heaven 2025).

→ Optimized mixture of experts and multi-head latent attention architectures: These were critical to the cost-effective training and efficient inference of the DeepSeek-V3 model (Meng et al. 2025).

→ Model/knowledge distillation: A compression technique that transfers knowledge from large “teacher” models to smaller “student” models without significantly sacrificing performance (Bergmann 2024).

→ Multi-token prediction: Allows the model to predict multiple tokens simultaneously, increasing data throughput by two to three times14 compared to standard next-token prediction.

→ FP8 mixed-precision training: Reduces training costs by using 8-bit floating-point precision (FP8) instead of the standard 16-bit (FP16), allowing for faster computations with minimal loss in model accuracy (VerticalServe Blogs 2025).

→ Parallel thread execution programming: An intermediate instruction set architecture designed by Nvidia for its GPUs (graphics processing units) (Shilov 2025). Through software-level reconfiguration, DeepSeek improved compute efficiency by enhancing multiprocessor interconnectivity on Nvidia’s H800 chips.

13 See more details about these innovations in the author’s opinion piece on CIGIonline (He 2025).

14 See https://github.com/vllm-project/vllm/issues/12181 for more details about multi-token prediction.
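The group-relative idea behind the first of these innovations can be made concrete with a minimal sketch. The function name and reward values below are purely illustrative and are not drawn from DeepSeek’s actual implementation; the sketch shows only the core step of scoring each sampled response against its own group, instead of against a separately trained critic model:

```python
import statistics

def group_relative_advantages(rewards):
    """Score each response in a sampled group relative to the group itself.

    Each advantage is (reward - group mean) / group std, so responses that
    beat their own group receive positive weight in the policy update and
    no separate critic/value model is needed to supply a baseline.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]

# Four hypothetical sampled answers to the same prompt, scored by a reward
# function (for example, 1.0 for a correct final answer, partial credit otherwise).
rewards = [1.0, 0.0, 0.5, 0.5]
advantages = group_relative_advantages(rewards)
```

Because the baseline comes from the group itself, the advantages always sum to zero: above-average responses are reinforced and below-average ones are penalized, which is what removes the cost of training and running an auxiliary evaluation model.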

Third, DeepSeek’s success has galvanized China’s AI ecosystem by encouraging Chinese developers, start-ups and investors to double down on creative and cost-effective solutions for AI development and applications across various sectors. It has restored confidence in the nation’s ability to achieve breakthroughs, even those built upon existing Western technologies and innovations. For many, DeepSeek has become “China’s OpenAI.” Its inclusive, open-source and low-cost approach has enabled widespread adoption across industries and strengthened confidence in China’s industry-specific AI development model.

Within only months of the V3 and R1 releases, DeepSeek’s models were widely adopted across Chinese society by hundreds of publicly listed companies; major internet platforms (Tencent/WeChat, Baidu, Alibaba, NetEase); cloud providers; automakers; smartphone and appliance manufacturers; services industries; local government platforms; and chip design firms such as Huawei and Cambricon. Its impact is both deep and broad, spanning all major sectors and industries in China. Internationally, the three largest cloud providers, AWS (Amazon Web Services), Microsoft Azure and Google Cloud, have all integrated DeepSeek’s R1 reasoning model into their platforms, further cementing its global relevance.

Fourth, DeepSeek’s open-source strategy has generated a far-reaching impact both in China and globally, echoing other vibrant open-source models in the LLM community, such as Meta’s Llama and Alibaba’s Qwen. It is helping to further democratize AI development (Gomstyn and Jonker 2024) and foster innovation, contributing to an inclusive ecosystem of scalable and cost-effective machine learning. This approach represents a foundational step toward AI evolving into an indispensable piece of digital public infrastructure — akin to electricity, water and the internet — for the future development of the digital economy.

Inspired by DeepSeek, Baidu announced it would open source its next-generation chatbot, ERNIE 4.5, starting June 30, and make its current chatbot free to use, marking a significant shift from its traditionally closed-source approach (Reuters 2025). Meanwhile, OpenAI recently declared plans to release an open-source language model for the first time in years (Wired 2025), and Elon Musk’s xAI announced that Grok 2 will be open source, followed by Grok 3 (Schwartz 2025).

That said, DeepSeek’s innovation and success in AI development should not be overstated, as it has clear limitations. First, its achievements relied heavily on Nvidia’s advanced AI chips to provide the necessary computational power. Although the final stages of training were conducted using H800 chips, the company reportedly invested $1.3 billion in acquiring AI hardware (Patel et al. 2025) — an immense upfront cost. Its breakthroughs, while impressive, remain dependent on advanced American chips and substantial financial investment.

Second, its accomplishments stem from the refinement and engineering of existing AI innovations and techniques rather than the creation of fundamentally new approaches. In addition, DeepSeek’s open-ended, curiosity-driven innovation model is relatively rare in China’s tech ecosystem. The company’s uniqueness is closely tied to founder Liang Wenfeng’s personal passion and hands-on, engineer-like work ethic — characterized by coding, reading research papers and engaging in group discussions. This individual-driven model raises questions about whether DeepSeek’s success can be institutionalized or replicated more broadly within China’s innovation landscape (He 2025).

Third, concerns have arisen over the significantly high hallucination rate of the DeepSeek-R1 reasoning model. Five months after launch, its hallucination rate was reported at 14.3 percent, compared to 0.8–4.1 percent for leading competitors. OpenAI’s reasoning-enhanced model “o1” registered a lower rate of 2.4 percent (Vectara 2025). DeepSeek’s own earlier non-reasoning V3 model also had a lower rate. While engineers have since reduced hallucinations by 40–50 percent (Goh and Baptista 2025), R1 still lags behind comparable models on this metric.
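The arithmetic behind that conclusion can be checked directly: even the best-case 50 percent relative reduction from 14.3 percent leaves R1 above the 0.8–4.1 percent band reported for its competitors. A minimal calculation using the figures cited above:

```python
# Reported hallucination rates (Vectara 2025; Goh and Baptista 2025).
r1_initial = 14.3               # percent, DeepSeek-R1 five months after launch
competitor_band = (0.8, 4.1)    # percent, range for leading competitors

# A 40-50 percent relative reduction still leaves R1 above that band.
reduced_best = r1_initial * (1 - 0.50)   # 7.15 percent
reduced_worst = r1_initial * (1 - 0.40)  # 8.58 percent

assert reduced_best > competitor_band[1]  # 7.15 > 4.1: R1 still lags
print(f"R1 after reduction: {reduced_best:.2f}-{reduced_worst:.2f}%")
```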

To sum up, the emergence of DeepSeek marks a significant turning point in the great-power competition over AI technologies, especially under mounting geopolitical tensions. But does it also alter the strategic picture in terms of state security, AI safety and systemic risk associated with AI development in China?

To answer this, an exploration of China’s AI governance framework is necessary. The following section examines the evolution of China’s rapidly developing AI governance frameworks, before and after the generative AI boom, and considers their impact on AI development in China.

AI Governance in China

Development of AI Governance Frameworks in China

China first integrated AI into its government policy initiatives in May 2016, when four government agencies introduced the “Internet+” Artificial Intelligence Three-Year Action and Implementation Plan. The plan aimed to achieve breakthroughs in AI technological innovation, build capacity for smart hardware supply and stimulate broader applications of AI in the national economy (National Development and Reform Commission 2016). It focused primarily on investment and research rather than regulation, emphasizing core technologies such as deep learning based on sensing data, multimedia and natural language; brain-inspired neurocomputing systems; computer vision; biometric recognition; human-machine interfaces; and machine translation.

A pivotal moment came in July 2017 with the release of the Next Generation Artificial Intelligence Development Plan, a comprehensive blueprint that set the goal for China to become a global AI leader by 2030 (State Council 2017). The plan covered nearly every potential domain of AI development, including deep learning, machine learning, AI models and natural language processing — all of which have since become core elements of the global AI landscape. However, at the time, Chinese policy makers, experts and industry leaders did not yet know which AI technologies would ultimately become most influential. The plan broadly outlined possible directions and key technologies without identifying a definitive focus.

In terms of governance, the plan stated general principles for regulating AI development, highlighting ethical and legal norms, participation in global AI governance and the establishment of oversight systems for AI safety and assessment. While these provisions were considered forward-looking at the time, they remained vague and peripheral to the plan’s main focus: advancing AI technological innovation to boost manufacturing and overall economic growth. Nonetheless, the plan laid a foundational framework for future AI regulatory efforts in China.

Notably, neither the 2016 “Internet+” AI Initiative nor the 2017 AI Development Plan anticipated that deep learning would become the central driver of AI development in just a few years.

In the years that followed (2017–2021), the Chinese government’s attention shifted to data governance, driven by growing concerns over data privacy, security and the ethical implications of digital technologies, particularly those stemming from large digital platforms. This culminated in the enactment of two major data regulation laws in 2021: the Data Security Law and the Personal Information Protection Law. These built upon the Cybersecurity Law, introduced in 2017 and enforced from 2018. Together, these three laws constituted the backbone of China’s state-centric data governance regime (He 2023). They marked a transition from relatively loose oversight over data, especially over large digital platforms handling massive amounts of data, to a more structured and stricter compliance-driven regulatory environment.

During this period, AI development was largely left to the industry to self-regulate, with minimal government oversight. China’s AI industry entered a phase of unsupervised growth, guided only by a few self-regulatory conventions among industry participants.

Beginning in 2021, China accelerated the formulation of AI-specific regulations, with a new emphasis on industry ethics and algorithm governance. The Ministry of Science and Technology (2021) introduced the Ethical Norms for New Generation Artificial Intelligence in September 2021. These norms addressed key concerns such as personal data protection, human control and responsibility, anti-discrimination, accountability in automated decision making and the promotion of “human-centric” AI. This coincided with a surge in AI investment — Chinese AI start-ups attracted US$17 billion (Shen et al. 2022) in private equity and venture capital funding, nearly one-fifth of the global total that year.

The Cyberspace Administration of China (CAC) released a draft of the Internet Information Service Algorithmic Recommendation Management Provisions in 2021, which took effect in March 2022. The regulation required platforms to disclose when algorithms shape content delivery and to allow users to opt out. It emphasized algorithmic transparency and aimed to mitigate risks such as bias and manipulation. The Provisions on Deep Synthesis Management, which took effect in January 2023, required mandatory watermarking of AI-generated content (for example, deepfakes, which allow for the manipulation of audio and video using AI). These rules aimed to curb disinformation by enforcing traceability and aligning with the principles of data security and personal information protection.

The year 2023 marked the beginning of generative AI regulation in China. In response to the rapid rise of LLMs capable of mimicking human reasoning, the CAC acted swiftly by introducing the Interim Measures for the Management of Generative AI Services in July 2023. This was China’s first comprehensive framework specifically targeting generative AI. The measures required AI service providers to comply not only with existing data protection laws, but also to align with “socialist core values.”15

For the first time, responsibility was formally assigned to AI providers, which are defined as organizations and individuals (such as researchers and developers) who deliver generative AI services. These providers were obligated to tag AI-generated images and videos, optimize training data and prevent the generation of illegal content. This marked a significant departure from earlier policies, which had left responsibility for AI governance vague or unspecified (Calero 2024). Generative AI service providers were now expected to act not only as content generators, but also as supervisors, assessors and technical service providers to ensure regulatory compliance (Zou and Zhang 2025).

The regulation introduced three major innovations. The first is content restriction: AI-generated content must not include material deemed harmful to national unity, social harmony or state security. This means that AI models must avoid generating content related to sensitive topics such as the Tiananmen Square protests, criticisms of Chinese leadership or separatist narratives. The second is categorized and tiered supervision and security assessment: AI service providers must establish internal security review mechanisms and register algorithms with the authorities. The third is mandatory labelling and accountability: all AI-generated content must be labelled, and providers must be held accountable for any violations.

15 An English translation of the document can be found here: www.chinalawtranslate.com/en/generative-ai-interim/.

China’s long-established censorship regime has led companies to develop internal compliance mechanisms, often described as self-censorship. For example, Baidu had already incorporated self-censorship features into its generative AI products as early as September 2022, two months before ChatGPT’s release. It had implemented safety measures such as keyword filtering to enforce internal thresholds on what the model could and could not say.

In compliance with the Interim Measures, which mandate content filters to block “illegal” or “sensitive outputs,” Baidu further introduced a risk-filtering mechanism and real-time monitoring of user prompts and responses. It also implemented fine-tuning measures to align with China’s “socialist core values” and to limit discussion of politically sensitive topics (for example, Xinjiang, Taiwan, Tibet). Additionally, Baidu added disclaimers (such as “Generated by ERNIE Bot”) to comply with transparency requirements.
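The mechanisms described above — keyword filtering on prompts, refusal of flagged content and disclaimers appended to outputs — can be illustrated with a deliberately simplified sketch. The keyword list, function names and refusal message below are hypothetical stand-ins, not Baidu’s actual implementation:

```python
# Hypothetical sketch of a compliance pipeline: a keyword filter on user
# prompts, a refusal path for flagged content and a disclaimer label
# appended to every response. All names and the blocklist are illustrative.
BLOCKED_KEYWORDS = {"example_sensitive_topic_a", "example_sensitive_topic_b"}
DISCLAIMER = "[Generated by an AI model]"

def is_flagged(prompt: str) -> bool:
    """Real-time check of the user prompt against the blocklist."""
    text = prompt.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

def respond(prompt: str, model_answer: str) -> str:
    """Refuse flagged prompts; otherwise label the model's answer."""
    if is_flagged(prompt):
        return "This topic cannot be discussed."
    return f"{model_answer} {DISCLAIMER}"

print(respond("What is 2 + 2?", "2 + 2 equals 4."))
# -> 2 + 2 equals 4. [Generated by an AI model]
```

In production systems, such filtering would typically run on both the prompt and the generated output, alongside model-level fine-tuning; this sketch shows only the simplest rule-based layer.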

As of 2024, China is in the process of drafting a comprehensive AI law. Initiated under the State Council’s 2023 Legislative Work Plan (Office of the State Council 2023), the law seeks to consolidate fragmented rules into a unified legal framework that balances innovation with risk management. The draft aims to promote innovation in AI technology, foster a healthy AI industry and regulate AI products and services, and it includes provisions for regulating AI systems, safeguarding national security and the public interest, and protecting the rights and interests of individuals and organizations (Calero 2024).

At the same time, China is exporting its governance model through international programs such as the Global AI Governance Initiative, released in 2023, positioning itself as a rule-maker in global AI ethics and regulation.

In summary, over the past 10 years, China has transformed its approach to AI from initial enthusiasm for technological innovation to the establishment of a tightly regulated and strategically guided AI industry. This evolution reflects the country’s dual objectives: fostering world-class AI capabilities while ensuring that these developments remain aligned with state priorities, national security interests and ideological control.

Impact of Strict Content Restrictions on the Performance of China’s AI Models

Before the Interim Measures were released in 2023, Chinese AI companies such as Baidu had already self-imposed restrictions on the training and development of their LLMs. It is widely believed that these strict content limitations have negatively impacted the performance of Chinese LLMs (Pierson and Wang 2025). Models such as ERNIE developed by Baidu, Qwen by Alibaba, Doubao by ByteDance and Kimi by Moonshot are likely constrained by these regulatory boundaries and censorship requirements, affecting their overall capabilities.

Is DeepSeek different in this regard? As a Chinese company, DeepSeek must also comply with China’s stringent censorship and data regulations. The heavily monitored environment is shaped by strict content restrictions and broad obligations outlined in the Interim Measures, the Data Security Law and other related legislation. These regulations require AI service providers such as DeepSeek to cooperate with state authorities on content moderation, data control and censorship, particularly concerning politically sensitive topics and criticism of the government, which the Chinese authorities view as serious threats to national security.

What sets DeepSeek apart is its high performance, which is comparable to that of leading American models such as OpenAI’s GPT-4o. This raises a critical question: Do DeepSeek’s self-restrictions on training data and model output under China’s policy environment significantly affect the outcomes of LLM training and future development? In other words, are China’s content restrictions a significant obstacle to effective LLM training and long-term advancement?

The answer is nuanced. Yes, DeepSeek’s compliance with Chinese censorship laws does introduce bias into its system, as it must avoid politically sensitive topics and adhere to government-approved narratives. These biases generally fall into three categories. First, content bias: DeepSeek’s AI suppresses content that is critical of the Chinese government, politically sensitive or related to issues such as Taiwan, Tibet, Xinjiang or Tiananmen Square. Second, perspective bias: the system may prioritize and amplify viewpoints that are aligned with Chinese government policies, while excluding or downplaying alternative perspectives. Third, cultural bias: DeepSeek’s AI reflects cultural and ideological values endorsed by the Chinese state (Badri 2025).

However, while DeepSeek’s adherence to Chinese censorship laws does affect its LLM’s output, these biases primarily affect the model’s content generation rather than its core computational ability. In practice, when prompted with politically sensitive questions, the model tends to refuse to respond or offers highly sanitized, biased answers. This built-in censorship, which is embedded both during and after training, compromises the model’s performance in areas requiring open discourse and balanced information.

Despite the presence of censorship, DeepSeek’s technical capabilities remain largely unaffected on standard benchmarks. In tasks outside these sensitive domains, such as solving math problems, coding or general language understanding, DeepSeek remains competitive and is praised for its efficiency and cost-effectiveness. Its limitations are largely confined to politically sensitive domains, where content neutrality and completeness are restricted.

Why, given all the biases caused by DeepSeek’s compliance with Chinese censorship and data security laws, can the model still perform well in other domains not subject to these restrictions? The same question applies to other high-performing Chinese LLMs such as the Qwen model. The explanation lies in the fact that all LLMs, including those developed by OpenAI and other companies, are trained with certain content limitations, such as prohibitions on pornography and hate speech. The key difference is that Chinese LLMs are subject to broader restrictions that extend to politically sensitive content and criticism of the Chinese leadership.

A more concerning issue with DeepSeek is its significantly higher hallucination rate compared to other leading models. This reflects broader AI safety challenges, which are increasingly drawing attention in China’s evolving AI governance framework.

AI Safety Brought into the Spotlight in China’s AI Governance Framework

China first addressed the issue of AI safety in its 2017 AI Development Plan, although only in general terms — emphasizing the principle of developing and applying AI in a “safe and controllable” (State Council 2017) manner and strengthening research on legal, ethical and social issues related to AI. The issue of AI safety gained renewed attention as more regulations were introduced beginning in 2021.

The Regulations on Ethical Norms for AI, released in 2021, explicitly addressed AI safety. They emphasized ensuring human control over AI systems and called for the establishment and adoption of mechanisms for continual monitoring, updating and evaluation to mitigate potential risks. These regulations also advocated for human-centric development focused on improving well-being and prioritizing the benefits of humanity, protecting privacy and data, and minimizing bias in data collection and algorithm development.

The Provisions on Deep Synthesis Technology, released in November 2022 and enforced in January 2023, mandated the labelling and watermarking of AI-generated content (for example, deepfakes such as face swapping, voice cloning, text generation) to combat disinformation. Similarly, the Interim Measures in 2023 included AI risk control requirements such as mandatory data labelling, evaluation of data quality and training for data-labelling personnel.
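The labelling and traceability obligations can be sketched in miniature: a visible label plus a keyed tag that lets a provider later verify that a piece of text originated from its own service. The construction below uses a standard HMAC and is purely illustrative; it is not the mechanism any Chinese provider actually deploys, and the key and label format are invented for the example:

```python
# Illustrative labelling-and-traceability sketch: attach a visible label
# and a short HMAC-based tag so the provider can later verify origin and
# detect tampering. Key and label format are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"provider-internal-key"  # hypothetical provider-held key

def label_content(text: str) -> str:
    """Attach a visible label and a short keyed tag for traceability."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[AI-generated | trace:{tag}]"

def verify(labelled: str) -> bool:
    """Check that the trace tag matches the labelled text."""
    body, _, footer = labelled.rpartition("\n[AI-generated | trace:")
    tag = footer.rstrip("]")
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

labelled = label_content("A synthetic news summary.")
assert verify(labelled)                              # provider confirms origin
assert not verify(labelled.replace("news", "fake"))  # tampering detected
```

Real deep-synthesis watermarking for audio and video embeds signals in the media itself rather than appending text, but the regulatory logic — a visible label plus a provider-verifiable trace — is the same.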

One issue in China’s regulatory approach has been the lack of clear distinction between AI safety and AI security, both of which are referred to using the same term in Chinese. The Interim Measures primarily focus on AI security, particularly restricting content that undermines “socialist core values” and thereby threatens China’s national security.

It was not until 2023 that China began to distinguish between AI safety and AI security more clearly. That year, concerns about large-scale risks, such as those posed by AGI, were formally acknowledged. In April 2023, the Politburo of the Communist Party of China (CPC) emphasized risk prevention associated with AGI development (Xinhua News Agency 2023). At the third plenum in July 2024, the CPC proposed the establishment of oversight systems to ensure AI safety as part of a broader public safety framework, which also includes cybersecurity, biosafety, food and drug safety, and the prevention of natural disasters and industrial incidents (Xinhua News Agency 2024). Although categorized under public safety, AI safety still falls within the broader scope of national security, as public safety is part of China’s national security. China’s forthcoming national AI law is expected to introduce provisions for foundational model safety and AGI value alignment.

Since 2023, AI safety has gained traction in both academic and industrial circles. Chinese research on AI safety has expanded significantly, addressing topics such as LLM unlearning, AI misuse in biology and chemistry, and risks related to LLM self-awareness. Leading companies such as Alibaba and Zhipu AI have adopted technical safety measures such as reinforcement learning from human feedback (RLHF) and supervised fine-tuning to align models and prevent harmful content. China’s Artificial Intelligence Industry Alliance is developing AI safety benchmarks and risk management frameworks. Aiming to avoid existential risks, prominent experts have advocated for the establishment of clear “red lines” that AI systems must not cross, the introduction of minimum AI safety research funding and the allocation of 10–20 percent of AI companies’ resources to governance, safety and ethics. There is also growing interest in offering tax incentives for AI safety work and enhancing China’s participation in global AI discussions (Concordia AI 2023; He, Kalash and Samson 2025).

However, despite these developments, many Chinese companies, including some leading LLM developers, continued to adopt a performance-first approach amid fierce competition, often sidelining AI safety concerns. Even when companies address issues of ethics, bias and misuse, their use of RLHF (Concordia AI 2023, 61–62, 66–67) is typically aimed at boosting model performance rather than ensuring safety. A significant gap remains before AI safety becomes a core priority in China’s AI industry. Most companies have yet to fully adopt safety protocols such as technical alignment, safety evaluations and testing for high-risk behaviours, including hallucination, power-seeking tendencies and self-duplication (ibid., 63).

The extraordinary rise of DeepSeek and its integration into a wide array of platforms, search engines, services and products across sectors have brought AI safety issues to greater public attention. Increasing exposure to AI-generated content (AIGC), including deepfakes, has raised awareness of safety risks. Currently, safety mechanisms are insufficient to clearly differentiate AIGC from authentic information. This highlights a serious gap: despite high-level guidance from top leaders and proposed safety frameworks aimed at improving AI safety in business and research circles, enforceable safety measures and regulations continue to be lacking, and actual implementation is even more limited. This shortfall could pose significant challenges in the future.

Implications of China’s AI Development and Governance

The development and governance of AI in China over the past decade have been deeply embedded in the country’s political and economic context. In the early days, a relatively unregulated, market-driven environment laid the groundwork for China’s AI development, which has largely been driven by the private sector and motivated by the desire to keep pace with the global technological revolution. Chinese companies’ strategic focus on industrial applications, combined with a lower business risk tolerance and a short-term profit orientation in their innovation culture, helps explain why China was not the first to achieve major breakthroughs in generative AI development. The rise of generative AI posed significant challenges to a core component of China’s state security: censorship to maintain regime and social stability. Despite tight regulatory controls, the emphasis on security compliance does not appear to significantly impair the technical performance of China’s large AI models, such as DeepSeek’s V3 model.

These key findings derived from China’s experiences in AI development may offer important implications for global AI innovation and governance.

First, in the face of significant technological and commercial uncertainty, such as the current AI revolution, technology- and research-driven innovation appears to prevail over product-driven approaches. Innovation in transformative technologies typically involves substantial costs alongside significant potential returns. The approach taken by major American AI companies, which prioritizes foundational research on AGI while keeping options open regarding technological pathways and applications, requires greater resources but offers strategic flexibility and long-term advantages. By contrast, China’s applications-driven strategy focusing on industrial use cases may not be the optimal path in hindsight, yet it can be seen as a pragmatic choice shaped by the country’s specific political, technological and business conditions. Nevertheless, the global AI race is far from over. China’s large talent pool and growing capacity for indigenous innovation continue to position it as a formidable actor in the ongoing competition, as exemplified by the case of DeepSeek.

Second, balancing security and regulation with development and innovation remains a persistent challenge in the governance of emerging technologies. Countries tend to lean toward one end of this spectrum, depending on their specific political, economic and strategic calculations.

In the case of China, mirroring its data governance framework, the country’s approach to AI governance reflects an effort to strike a delicate balance between security and innovation-driven economic growth. While AI development is positioned as a central pillar of the country’s technological advancement and a top priority on Chinese President Xi Jinping’s national agenda, particularly in the context of strategic competition with the United States, China’s AI governance framework consistently upholds national security as the fundamental priority.

While articulating the dual priorities of security and innovation, Chinese Vice Premier Ding Xuexiang emphasized the security dimension of China’s AI development at the World Economic Forum (2025) in Davos, Switzerland. Ding, a member of the Politburo Standing Committee, China’s most powerful decision-making body, and director of the Central Science and Technology Commission, plays a key role in shaping China’s technology strategy. Using the analogy of a brake pedal for AI security and safety and a gas pedal for development when driving on a highway, he underscored that security is the foundational requirement, while the overarching goal is to drive AI-led intelligent transformation and sustainable economic growth. He further remarked that China would not blindly follow or become overly engaged in global AI competition, reinforcing the prominence of the security imperative. He also noted that China’s highly organized institutional structure and strong monitoring mechanisms are key to ensuring secure and risk-averse AI development.

In contrast, the United States has undergone a significant shift toward prioritizing innovation in its approach to balancing AI safety and development. This was underscored by US President Donald Trump’s Executive Order “Removing Barriers to American Leadership in Artificial Intelligence,” signed on January 23, 2025, during his first week in office. The order rescinded former President Joe Biden’s 2023 Executive Order “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This policy reversal highlights how domestic political dynamics and the strategic imperative to preserve global AI leadership shape the evolving contours of US AI governance.

Third, as AI safety gains prominence and becomes a central theme in international AI governance frameworks, it is also shaping China’s vision for global AI governance.

AI safety has increasingly become a key area of focus in China’s vision and advocacy for global AI governance. In October 2023, China released the Global AI Governance Initiative, which highlights various safety-related concerns (Ministry of Foreign Affairs of the People’s Republic of China 2023). The initiative outlines general principles such as a “people-centered approach in developing AI,” “human control of AI” and “AI for good” (ibid.), along with data privacy protection and tiered oversight and security assessment for AI risks — all reflecting growing attention to AI safety. These principles are consistent with China’s long-standing positions in international affairs, including equal rights for all countries to pursue AI development, mutual benefit, UN-led governance frameworks and increased representation for Global South countries in shaping the future of AI. Ding reiterated at the World Economic Forum (2025) that China has established a robust ethics committee to supervise AI risks.

However, in practice, AI safety and risk mitigation remain a weak spot among Chinese LLM developers. Safety is not at the top of the priority list, and concrete, enforceable measures to guarantee AI safety and prevent related risks are largely absent. Despite China’s vocal promotion of AI ethics and safety principles in global fora, many of these principles, such as “human-centric development,” “human control of AI,” “AI for good,” data privacy protection, and tiered oversight and risk assessments, are not yet backed by strong domestic implementation or technical safeguards.

China’s efforts to engage in global AI governance have faced external challenges as well, including exclusion from some Western-led initiatives, such as the US-hosted AI Safety Summit. Although China participated in the UK and South Korea AI Safety Summits and supports UN-led AI resolutions, it remains uncertain whether China will play a meaningful role in shaping emerging international governance mechanisms for frontier AI safety. Meanwhile, China is strengthening ties with the Global South through initiatives such as the China-Africa AI Policy Dialogue and the China-BRICS AI Development & Cooperation Centre. However, these efforts may risk further fragmenting global AI governance. Since 2022, track 1.5 and track 2 dialogues between China and the West have expanded, providing unofficial yet important non-governmental forums for technical and policy exchanges on AI safety amid rising geopolitical tensions. These dialogues help maintain engagement when official diplomatic channels are strained (Concordia AI 2023; He, Kalash and Samson 2025).

Fourth, AI shapes the geopolitical and technological rivalry between the United States and China.

As a transformative emerging technology driving the next industrial revolution, AI has inevitably become a focal point in the technological and geopolitical rivalry between the United States and China. Ongoing US export controls on advanced AI chips have affected China’s ability to access the computing power required for large-scale AI model training. However, DeepSeek’s recent breakthroughs cast doubt on the effectiveness of such restrictions. Instead, they appear to have reinforced China’s commitment to self-reliant innovation and technological development.

With a vast talent pool in AI and related fields, coupled with substantial financial and policy support from its top leaders, China is well-positioned to closely track, if not surpass, the United States in AI, particularly in certain domains such as industry-specific applications. Nevertheless, a more significant barrier lies in China’s top-down innovation model and its strong emphasis on rapid application, commercialization and profitability. While pragmatic, this approach risks undermining foundational innovation. Baidu’s development strategy illustrates this application-driven approach, whereas DeepSeek represents a more innovation-led, yet still exceptional, path forward: DeepSeek’s success may be an exception rather than the rule within China’s AI ecosystem. Its model will be difficult to replicate, and it is not based on original innovation. As a result, China may remain a close follower in the AI arms race, continuing to trail behind the United States.

Conclusion

Over the past decade, China’s journey from unbridled AI innovation to one of the world’s most regulated AI ecosystems illustrates the country’s unique approach to technology governance. By embedding strict regulatory controls at every stage, from data collection to algorithmic output, China has set a model that draws its bottom line at national security, social stability and ideological conformity; AI and other emerging technologies can grow only on that premise.

A self-regulated environment encouraged investment in AI over the past 10 years. The ChatGPT shock then stimulated a rapid increase in AI investment and R&D, with private big tech firms and AI start-ups becoming the driving force of China’s AI development, contributing to rapid technological advancement and a surge in domestic AI capabilities.

Although China’s AI regulatory and supervisory measures might set guardrails for national security concerns, AI safety, risk prevention, alignment and data privacy and protection, they also underscore the challenges of balancing innovation with state control, as well as AI model quality and the safety risks associated with it — tensions that will continue to shape China’s AI landscape for years to come. For now, it looks like China’s security-first approach on AI governance did not essentially hinder the development of AI, in particular generative AI such as V3 and R1 models developed by DeepSeek. However, the R1 reasoning model’s high hallucination rate has raised concerns over model quality and associated safety risks.

Works Cited

Badri, Adarsh. 2025. "Things China's DeepSeek Does Not — And Will Not — Tell You About Politics." Adarsh Badri (blog), January 31. https://adarshbadri.me/technology/deepseek-china-content-censorship-topics/.

Bergmann, Dave. 2024. "What is knowledge distillation?" IBM Think, April 16. www.ibm.com/think/topics/knowledge-distillation.

Booth, Harry. 2025. "How China Is Advancing in AI Despite U.S. Chip Restrictions." Time, January 28. https://time.com/7204164/china-ai-advances-chips/.

Calero, Hipolito. 2024. "An Analysis of China's AI Governance Proposals." Center for Security and Emerging Technology, September 12. https://cset.georgetown.edu/article/an-analysis-of-chinas-ai-governance-proposals/.

CHM Live. 2024. "Fei-Fei Li's AI Journey." September 17. YouTube video, 13:00–15:00. www.youtube.com/watch?v=JgQ1FJ_wow8&t=811s.

Concordia AI. 2023. State of AI Safety in China. October. https://concordia-ai.com/wp-content/uploads/2023/10/State-of-AI-Safety-in-China.pdf.

Goh, Brenda and Eduardo Baptista. 2025. "Chinese AI start-up DeepSeek pushes US rivals with R1 model upgrade." Reuters, May 29. www.reuters.com/world/china/chinas-deepseek-releases-an-update-its-r1-reasoning-model-2025-05-29/.

Gomstyn, Alice and Alexandra Jonker. 2024. "Democratizing AI: What does it mean and how does it work?" IBM, November 5. www.ibm.com/think/insights/democratizing-ai.

He, Alex. 2021. China's Techno-Industrial Development: A Case Study of the Semiconductor Industry. CIGI Paper No. 252. Waterloo, ON: CIGI. www.cigionline.org/publications/chinas-techno-industrial-development-case-study-semiconductor-industry/.

———. 2023. State-Centric Data Governance in China. CIGI Paper No. 282. Waterloo, ON: CIGI. www.cigionline.org/publications/state-centric-data-governance-in-china/.

———. 2025. "DeepSeek and China's AI Innovation in US-China Tech Competition." Opinion, Centre for International Governance Innovation, April 11. www.cigionline.org/articles/deepseek-and-chinas-ai-innovation-in-us-china-tech-competition/.

He, Alex, S. Yash Kalash and Paul Samson. 2025. Digital Governance in China: Trends in Generative AI and Digital Assets. Conference Report. Waterloo, ON: CIGI. www.cigionline.org/publications/digital-governance-in-china-trends-in-generative-ai-and-digital-assets/.

Heaven, Will Douglas. 2025. "How DeepSeek ripped up the AI playbook — and why everyone's going to follow its lead." MIT Technology Review, January 31. www.technologyreview.com/2025/01/31/1110740/how-deepseek-ripped-up-the-ai-playbook-and-why-everyones-going-to-follow-it/.

Hestness, Joel, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, et al. 2017. "Deep Learning Scaling Is Predictable, Empirically." arXiv.org, December. https://arxiv.org/abs/1712.00409.

Hinton, Geoffrey, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, et al. 2012. "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups." IEEE Signal Processing Magazine, November. https://doi.org/10.1109/MSP.2012.2205597.

Hinton, Geoffrey E., Simon Osindero and Yee-Whye Teh. 2006. "A fast learning algorithm for deep belief nets." Neural Computation 18 (6): 1527–54. https://pubmed.ncbi.nlm.nih.gov/16764513/.

HuaweiTech. 2022. "盘古开天记, AI落地时" [Pangu model initiated Huawei's AI development]. HuaweiTech, No. 90, January. www.huawei.com/cn/huaweitech/publication/90/huawei-cloud-pangu-model-releases-ai-productivity.

Lee, Timothy B. 2024. "How a stubborn computer scientist accidentally launched the deep learning boom." Ars Technica, November 11. https://arstechnica.com/ai/2024/11/how-a-stubborn-computer-scientist-accidentally-launched-the-deep-learning-boom/.

Ling, Jiwei. 2017. "百度开放60项核心AI能力 公布AI生态开放战略" [Baidu Announced Its Strategy for Open AI Ecosystem, Releasing 60 Core AI Capabilities]. 新华网 [Xinhuanet.com], July 5. https://finance.sina.com.cn/roll/2017-07-05/doc-ifyhryex6243428.shtml.

Mao, William C. and Dhruv T. Patel. 2024. "Former Google CEO Eric Schmidt Says U.S. Trails China in AI Development." The Harvard Crimson, November 19. www.thecrimson.com/article/2024/11/19/eric-schmidt-china-ai-iop-forum/.

Meng, Fanxu, Pingzhi Tang, Xiaojuan Tang, Zengwei Yao, Xing Sun and Muhan Zhang. 2025. "TransMLA: Multi-Head Latent Attention Is All You Need." arXiv.org, June 12. https://arxiv.org/abs/2502.07864.

Metz, Cade. 2021. "How the future of AI was decided by a £30 million hotel room auction." GQ, March 20. An extract from Genius Makers by Cade Metz, published by Random House Business. www.gq-magazine.co.uk/culture/article/cade-metz-genius-makers-extract.

Ministry of Foreign Affairs of the People's Republic of China. 2023. "Global AI Governance Initiative." October 20. www.fmprc.gov.cn/eng/xw/zyxw/202405/t20240530_11332389.html.

Ministry of Science and Technology. 2021. "新一代人工智能伦理规范" [Ethical Norms for the New Generation Artificial Intelligence]. September 26. www.most.gov.cn/kjbgz/202109/t20210926_177063.html.

MIT Technology Review. 2018. "Cong Liu: From speech recognition to computer vision, he connects two fields and focuses on real world applications." Innovators Under 35. MIT Technology Review. www.innovatorsunder35.com/the-list/cong-liu/.

National Development and Reform Commission. 2016. "关于印发《"互联网+"人工智能三年行动实施方案》的通知" [Notice on the Issuance of the Three-Year Action Implementation Plan of "Internet+" Artificial Intelligence]. 中国政府网 [gov.cn], May 23. www.gov.cn/xinwen/2016-05/23/content_5075944.htm.

Office of the State Council. 2023. "国务院2023年度立法工作计划的通知" [Notice on the State Council's Legislative Work Plan for 2023]. May 31. www.gov.cn/zhengce/content/202306/content_6884925.htm.

Patel, Dylan, AJ Kourabi, Doug O'Laughlin and Reyk Knuhtsen. 2025. "DeepSeek Debates: Chinese Leadership on Cost, True Training Cost, Closed Model Margin Impacts." SemiAnalysis, January 30. https://semianalysis.com/2025/01/31/deepseek-debates/.

Pierson, David and Berry Wang. 2025. "DeepSeek Is a Win for China in the AI Race. Will the Party Stifle It?" The New York Times, February 2. www.nytimes.com/2025/02/02/world/asia/deepseek-china-ai-censorship.html.

Reuters. 2025. "China's Baidu to make latest Ernie AI model open-source as competition heats up." Reuters, February 13. www.reuters.com/technology/artificial-intelligence/baidu-make-ernie-ai-model-open-source-end-june-2025-02-14/.

Schwartz, Eric Hal. 2025. "Elon Musk says Grok 2 is going open source as he rolls out Grok 3 for Premium+ X subscribers only." TechRadar, February 18. www.techradar.com/computing/artificial-intelligence/elon-musk-says-grok-2-is-going-open-source-as-he-rolls-out-grok-3-for-premium-x-subscribers-only.

Shen, Kai, Xiaoxiao Tong, Ting Wu and Fangning Zhang. 2022. "The next frontier for AI in China could add $600 billion to its economy." QuantumBlack AI by McKinsey, June 7. www.mckinsey.com/capabilities/quantumblack/our-insights/the-next-frontier-for-ai-in-china-could-add-600-billion-to-its-economy#/.

Shilov, Anton. 2025. "DeepSeek's AI breakthrough bypasses industry-standard CUDA for some functions, uses Nvidia's assembly-like PTX programming instead." Tom's Hardware, January 28. www.tomshardware.com/tech-industry/artificial-intelligence/deepseeks-ai-breakthrough-bypasses-industry-standard-cuda-uses-assembly-like-ptx-programming-instead.

State Council. 2017. "新一代人工智能发展规划" [Next Generation Artificial Intelligence Development Plan]. 中国政府网 [gov.cn], July 20. www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.

U.S. Senate Committee on Commerce, Science, & Transportation. 2025. "Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation." Committee Hearing, May 8. www.commerce.senate.gov/2025/5/winning-the-ai-race-strengthening-u-s-capabilities-in-computing-and-innovation_2.

Vectara. 2025. "DeepSeek-R1 hallucinates more than DeepSeek-V3." Vectara (blog), January 30. www.vectara.com/blog/deepseek-r1-hallucinates-more-than-deepseek-v3.

VerticalServe Blogs. 2025. "How DeepSeek Optimized Training — FP8 Framework." Medium.com, January 28. https://verticalserve.medium.com/how-deepseek-optimized-training-fp8-framework-74e3667a2d4a.

Wang, Li and Tao Zhu. 2016. "科大讯飞包揽CHiME-4国际多通道语音分离和识别大赛三项冠军" [iFlytek won all three championships of the CHiME-4 International Multi-Channel Speech Separation and Recognition Competition]. 央广网 [cnr.cn], September 14. www.cnr.cn/ah/news/20160914/t20160914_523136536.shtml.

Wired. 2025. "Sam Altman Says OpenAI Will Release an 'Open Weight' AI Model This Summer." Wired, March 31. www.wired.com/story/openai-sam-altman-announce-open-source-model/.

World Economic Forum. 2025. "Special Address by Ding Xuexiang, Vice-Premier of the People's Republic of China." January 21. YouTube video, 43:17. www.youtube.com/watch?v=2iKX4kARehI.

Xinhua News Agency. 2023. "中共中央政治局召开会议 分析研究当前经济形势和经济工作 中共中央总书记习近平主持会议" [The CPC Politburo holds a meeting to analyze and discuss the current economic situation and economic work, presided over by General Secretary Xi Jinping]. April 28. www.gov.cn/yaowen/2023-04/28/content_5753652.htm.

———. 2024. "中共中央关于进一步全面深化改革 推进中国式现代化的决定" [Decision of the CPC Central Committee on Further Comprehensively Deepening Reform and Promoting Chinese-Style Modernization]. July 21. www.gov.cn/zhengce/202407/content_6963770.htm.

Xiong, W., L. Wu, F. Alleva, J. Droppo, X. Huang and A. Stolcke. 2017. The Microsoft 2017 Conversational Speech Recognition System. Microsoft AI and Research Technical Report MSR-TR-2017-39, August. www.microsoft.com/en-us/research/wp-content/uploads/2017/08/ms_swbd17-2.pdf.

Yang, Na. 2018. "李彦宏: 用技术改变世界" [Robin Li: Using technology to change the world]. 新华网 [Xinhuanet.com], December 30. www.xinhuanet.com/politics/2018-12/30/c_1123929076.htm.

Yu, Lili. 2023. "疯狂的幻方: 一家隐形AI巨头的大模型之路" [Crazy High-Flyer: An Invisible AI Giant's Path to Large Language Models]. 36Kr, May 25. https://36kr.com/p/2272896094586500.

Zhao, Hejuan. 2024. "中国AI追随之路的五大误区" [Five misconceptions in China's pursuit of AI]. 钛媒体 [TMTPost], May 13. www.tmtpost.com/video/7083490.html.

Zhou, Yiwei. 2022. "李彦宏十年千亿豪赌后, 百度AI快熬出头了?" [Will Baidu AI achieve success soon, after Robin Li's decade-long, 100-billion-yuan gamble?] 澎湃新闻 [The Paper], November 19. www.thepaper.cn/newsDetail_forward_19765694.

Zou, Mimi and Lu Zhang. 2025. "Navigating China's regulatory approach to generative artificial intelligence and large language models." Cambridge Forum on AI: Law and Governance 1: e8. https://doi.org/10.1017/cfl.2024.4.

67 Erb Street West Waterloo, ON, Canada N2L 6C2

www.cigionline.org
