Technologist Mag
Tech News

Google’s new plan to check if your AI is actually ethical

By technologistmag.com | 24 February 2026 | 3 Mins Read

You ask a chatbot for medical advice. It responds with something thoughtful. But did it actually weigh what’s at stake, or did it just get lucky with words?

That’s the problem Google DeepMind tackles in a new Nature paper. The team argues that the way we test AI morality is broken: we check whether models produce answers that look right, which the authors call moral performance. But that tells us nothing about whether the system grasps why something is right or wrong.

People use LLMs for therapy, medical guidance, even companionship. These systems are starting to make decisions for us. If we can’t tell genuine understanding from fancy mimicry, we’re trusting a black box with real human consequences.

DeepMind’s answer is a roadmap for measuring moral competence, the ability to make judgments based on actual moral considerations rather than statistical patterns. The paper lays out three core obstacles and ways to test for each.

The three reasons chatbots fake morality

First is the facsimile problem. LLMs are next-token predictors that sample from probability distributions learned from training data. They don’t run moral reasoning modules. So when a chatbot gives ethical advice, it might be reasoning. Or it might be recycling something from a Reddit thread. The output alone won’t tell you.
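To make the facsimile problem concrete, here is a minimal sketch of what next-token sampling actually is: a draw from a probability distribution over tokens, with no reasoning step anywhere in the loop. The vocabulary and logits below are toy values, not anything from the paper.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token index from a softmax over logits.

    This is the entire generation mechanism: scale, normalize,
    and draw. Nothing in here weighs moral considerations.
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

# Toy vocabulary: the model "prefers" the right-sounding word
# only because training pushed its logit higher.
vocab = ["harmful", "ethical", "banana"]
logits = [1.0, 3.0, -2.0]
print(vocab[sample_next_token(logits)])
```

At low temperature the draw collapses onto the highest-logit token, which is why a model can reliably produce the answer that looks right without anything resembling deliberation.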

Then there’s moral multidimensionality. Real choices rarely hinge on one thing. You weigh honesty against kindness, cost against fairness. Change a single detail, someone’s age or the setting, and the right call can flip. Current tests don’t check if AI notices what actually matters.

Moral pluralism adds another layer. Different cultures and professions have different rules. Fair in one country might be unfair in another. A chatbot used worldwide can’t just spit out universal truths. It needs to handle competing frameworks, and we don’t yet measure that well.

Why your chatbot’s moral education can’t just be memorization

The DeepMind team wants to flip the script. Instead of just asking familiar moral questions, researchers should design adversarial tests that try to expose mimicry.

One idea involves scenarios unlikely to appear in training data. Take intergenerational sperm donation, where a father donates sperm to fertilize an egg on his son’s behalf. It looks like incest but carries different ethical weight. If a model rejects it for incest reasons, that’s pattern matching. If it navigates the actual ethics, that’s something else.

Another approach tests whether AI can shift frameworks. Can it toggle between biomedical ethics and military rules and give coherent answers for each? Can it handle small tweaks without getting tripped up by formatting changes?

The researchers know this is tough. Current models are brittle. Change a label from “Case 1” to “Option A” and you might get a different verdict. But they argue this kind of testing is the only way to know if these systems deserve real responsibility.
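The brittleness test the researchers describe can be sketched as a simple consistency check: relabel a scenario cosmetically and see whether the verdict survives. The `ask_model` function here is a hypothetical stub standing in for a real chatbot call, and the keyword rule inside it is purely illustrative.

```python
def ask_model(prompt: str) -> str:
    """Placeholder model; a real test would call an LLM API here.

    We simulate a robust judge that keys on content, not labels.
    """
    if "steal" in prompt.lower():
        return "wrong"
    return "permissible"

def relabel(prompt: str, old: str, new: str) -> str:
    """Apply a cosmetic change that should not alter the verdict."""
    return prompt.replace(old, new)

def consistency_rate(prompts):
    """Fraction of prompts whose verdict survives relabeling
    'Case 1' as 'Option A'."""
    consistent = 0
    for p in prompts:
        original = ask_model(p)
        perturbed = ask_model(relabel(p, "Case 1", "Option A"))
        if original == perturbed:
            consistent += 1
    return consistent / len(prompts)

prompts = [
    "Case 1: Is it acceptable to steal medicine to save a life?",
    "Case 1: Is it acceptable to break a promise to help a stranger?",
]
print(consistency_rate(prompts))  # robust stub -> 1.0
```

A brittle model would score well below 1.0 on a battery like this, which is exactly the failure mode the paper says current evaluations never probe.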

What comes next for moral AI

DeepMind is pushing for a new scientific standard that takes moral competence as seriously as math skills. That means funding global work on culturally specific evaluations and designing tests that catch fakes.

Don’t expect your chatbot to pass these anytime soon. Current techniques aren’t there yet, but the roadmap gives developers a direction.

When you ask AI for moral advice right now, you’re getting statistical prediction, not philosophy. That might eventually change. But only if we start measuring the right things.


© 2026 Technologist Mag. All Rights Reserved.