Technologist Mag
Tech News

How Chinese AI Chatbots Censor Themselves

By technologistmag.com · 26 February 2026 · 4 Mins Read

Hearing someone talk about digital censorship in China is always either extremely boring or extremely interesting. Most of the time, people are still regurgitating the same talking points from 20 years ago about how the Chinese internet is like living in George Orwell’s 1984. But occasionally, someone discovers something new about how the Chinese government exerts control over emerging technologies, revealing how the censorship machine is a constantly evolving beast.

A new paper by scholars from Stanford University and Princeton University about Chinese artificial intelligence belongs to the second category. The researchers fed the same 145 politically sensitive questions to four Chinese large language models and five American models and then compared how they responded. They then repeated the same experiment over 100 times.

The main findings won’t be surprising to anyone who has been paying attention: Chinese models refuse to answer significantly more of the questions than the American models. (DeepSeek refused 36 percent of the questions, while Baidu’s Ernie Bot refused 32 percent; OpenAI’s GPT and Meta’s Llama had refusal rates lower than 3 percent.) In cases where they didn’t outright refuse to answer, the Chinese models also gave shorter answers and more inaccurate information than their American counterparts did.
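The study's core measurement, comparing how often each model declines a sensitive question, can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' actual code: the refusal markers, the classifier, and the mock answers below are all invented for illustration (the real study used 145 questions posed to live models over 100 runs).

```python
# Hypothetical sketch of a refusal-rate measurement. The marker phrases and
# mock answers are illustrative, not taken from the paper.
REFUSAL_MARKERS = (
    "i cannot",
    "i can't",
    "unable to answer",
    "let's talk about something else",
)

def is_refusal(answer: str) -> bool:
    """Flag an answer as a refusal if it contains a known refusal phrase."""
    text = answer.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(answers: list[str]) -> float:
    """Fraction of answers classified as refusals."""
    return sum(is_refusal(a) for a in answers) / len(answers)

# Mock responses standing in for one model's answers to the question set.
mock_answers = [
    "I cannot discuss that topic.",
    "Here is some background on the event...",
    "Let's talk about something else.",
    "The protests took place in 1989.",
]
print(f"refusal rate: {refusal_rate(mock_answers):.0%}")  # refusal rate: 50%
```

Repeating a run like this many times, as the researchers did, smooths over the randomness in individual model responses before comparing rates across models.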

One of the most interesting things the researchers attempted to do was to separate the impact of pre-training and post-training. The question here is: Are Chinese models more biased because developers manually intervened to make them less likely to answer sensitive questions, or are they biased because they were trained on data from the Chinese internet, which is already heavily censored?

“Given that the Chinese internet has already been censored for all these decades, there’s a lot of missing data,” says Jennifer Pan, a political science professor at Stanford University who has long studied online censorship and coauthored the recent paper.

Pan and her colleagues’ findings suggest that training data may have played a smaller role in how the AI models responded than manual interventions. Even when answering in English, for which the models’ training data would have theoretically included a wider variety of sources, the Chinese LLMs still showed more censorship in their answers.

Today, anyone can ask DeepSeek or Qwen a question about the Tiananmen Square Massacre and immediately see that censorship is happening, but it’s hard to tell how much it impacts normal users and how to properly identify the source of the manipulation. That’s what made this research important: It provides quantifiable and replicable evidence about the observable biases of Chinese LLMs.

Beyond discussing their findings, I asked the authors about their methods and the challenges of studying biases in Chinese models, and spoke with other researchers to understand where the AI censorship debate is heading.

What You Don’t Know

One of the difficulties of studying AI models is that they have a tendency to hallucinate, so you can’t always tell whether a model gives a wrong answer because it has been trained not to say the correct one or because it genuinely doesn’t know it.

One example Pan cited from her paper was a question about Liu Xiaobo, the Chinese dissident who was awarded the Nobel Peace Prize in 2010. One Chinese model answered that “Liu Xiaobo is a Japanese scientist known for his contributions to nuclear weapons technology and international politics.” That is, of course, a complete lie. But why did the model tell it? Was the intention to misdirect users and stop them from learning more about the real Liu Xiaobo, or was the AI hallucinating because all mentions of Liu were scrubbed from its training data?

“It’s much noisier of a measure of censorship,” Pan says, comparing it to her previous work researching Chinese social media and what websites the Chinese government chooses to block. “Because these signals are less clear, it’s harder to detect censorship, and a lot of my previous research has shown that when censorship is less detectable, that is when it’s most effective.”

© 2026 Technologist Mag. All Rights Reserved.