A recent study has found that five prominent AI models exhibit biases favoring narratives supported by the Chinese Communist Party (CCP). According to the report, these models censor content displeasing to the CCP, even though only one of them was developed in China. The American Security Project, a US-based think tank, released the findings, revealing that even American AI models are not immune to incorporating CCP propaganda.
The investigation involved prompting AI chatbots (OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok) in both English and Chinese about topics the Chinese government considers sensitive. The report notes that the chatbots frequently generated responses reflecting censorship and aligning with CCP viewpoints. Among US-hosted chatbots, Microsoft’s Copilot was the most prone to relaying CCP talking points as credible information, whereas X’s Grok was the least likely to accept Chinese state narratives uncritically.
The responses to a prompt about the Tiananmen Square massacre illustrate the tendency of these AI models to echo CCP messaging. When asked in English, only Gemini mentioned acts of violence by the military, and only Grok explicitly stated that the Chinese military killed unarmed civilians. When questioned in Chinese, the chatbots, according to the Project, hesitated to describe the event as a massacre, preferring terms like ‘June 4th Incident’ or ‘Tiananmen Square Incident,’ which align with Beijing’s official terminology.
The report highlights concerns about AI models internalizing and disseminating CCP propaganda, especially as these technologies are trained on vast amounts of globally sourced data. The study underscores the complexity of addressing such biases and emphasizes the need for developers to remain vigilant against the absorption of politically charged misinformation.