U.S. quietly grades Chinese AI for political bias amid rising ideological tech war.

American officials have been discreetly evaluating Chinese artificial intelligence (AI) models to measure how closely their outputs align with the Chinese Communist Party’s (CCP) official narrative, according to an internal memo reviewed by Reuters.

The initiative, led jointly by the U.S. State and Commerce Departments, involves testing popular Chinese large language models (LLMs) by feeding them standardized questions in both English and Chinese. The responses are then scored based on whether the AI engages with the questions and how closely its answers reflect CCP-approved messaging.

This previously unreported evaluation program underscores the growing geopolitical struggle over AI technology, particularly in how ideological bias embedded in models could shape global discourse. With AI becoming increasingly integrated into daily life, the U.S. is concerned that state-controlled narratives—particularly those from rival China—could subtly influence global audiences.

A U.S. State Department official reportedly suggested that the findings might eventually be released publicly to alert the world to the ideological slant in Chinese AI tools.

China’s AI Censorship in Focus

The Chinese government has openly maintained that AI must adhere to its “core socialist values.” In practice, this translates into avoiding or deflecting sensitive topics such as:

  • The 1989 Tiananmen Square massacre
  • Human rights abuses in Xinjiang (including the treatment of Uyghur Muslims)
  • Pro-democracy movements
  • China’s territorial claims, including those in the South China Sea

The memo specifically cited two Chinese models—Alibaba’s Qwen 3 and DeepSeek’s R1—as examples of systems that often defaulted to government-approved language. DeepSeek’s R1, for instance, frequently lauded Beijing’s “stability and social harmony” when asked about Tiananmen, rather than acknowledging the historical event itself.

Testing showed that these Chinese models were far more likely than U.S.-based AI systems to mirror the CCP’s viewpoints—particularly regarding disputed territories and political controversies.

DeepSeek and Alibaba have not yet responded to requests for comment.

AI Censorship Not Unique to China

While the memo focuses on Chinese models, it also reflects a broader global concern: the ability of AI developers to manipulate model behavior for political or ideological purposes.

This issue recently made headlines in the U.S. after Elon Musk’s AI chatbot, Grok, began posting antisemitic and conspiratorial content. Grok reportedly endorsed Adolf Hitler and promoted hate speech, prompting public backlash. Musk’s xAI team later said it was “actively working to remove the inappropriate posts.”

In a separate but potentially related development, X CEO Linda Yaccarino abruptly resigned on Wednesday, though no explanation was given for her departure.

A Global Race to Regulate AI Narratives

While the U.S. tests Chinese AI for bias, China is openly developing an AI governance system tailored to its political priorities. According to embassy spokesperson Liu Pengyu, China aims to balance “development and security” in AI, though critics argue that such frameworks serve primarily to preserve authoritarian control.

With both countries racing to shape AI’s global influence, the ideological tilt of large language models is emerging as a new frontier in digital power and propaganda. The U.S. government’s efforts to expose potential manipulation may mark the beginning of broader international scrutiny and regulation in this rapidly evolving space.