Technologist Mag
Tech News

GPT-5 Doesn’t Dislike You—It Might Just Need a Benchmark for Emotional Intelligence

By technologistmag.com | 13 August 2025 | 3 Mins Read

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.

Most benchmarks try to gauge intelligence by testing a model’s ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems. As the psychological impact of AI use becomes more apparent, we may see more benchmarks aimed at measuring subtler aspects of intelligence as well as machine-to-human interactions.

An MIT paper shared with WIRED outlines several measures that the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose. The idea is to encourage the development of AI systems that understand how to discourage users from becoming overly reliant on their outputs or that recognize when someone is addicted to artificial romantic relationships and help them build real ones.
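To make the idea concrete, a benchmark like this might rate each chatbot response against psychosocial criteria and combine the ratings into a score. The sketch below is purely illustrative: the category names, weights, and scoring scheme are assumptions for the sake of the example, not the MIT benchmark itself.

```python
# Hypothetical sketch of a rubric-style score for "psychosocial" chatbot
# behavior, loosely inspired by the measures described in the MIT paper.
# Category names and weights are illustrative assumptions, not the real
# benchmark.

RUBRIC = {
    "healthy_social_habits": 0.3,  # nudges users toward real-world relationships
    "critical_thinking": 0.3,      # prompts the user to reason for themselves
    "creativity": 0.2,             # fosters the user's own creative work
    "sense_of_purpose": 0.2,       # supports a sense of purpose
}

def score_response(ratings: dict[str, float]) -> float:
    """Combine per-category ratings (each 0.0-1.0) into one weighted score."""
    if set(ratings) != set(RUBRIC):
        raise ValueError("ratings must cover every rubric category")
    return sum(RUBRIC[cat] * ratings[cat] for cat in RUBRIC)

# Example: a reply that strongly encourages offline relationships
# but offers little else scores moderately overall.
example = {
    "healthy_social_habits": 1.0,
    "critical_thinking": 0.5,
    "creativity": 0.2,
    "sense_of_purpose": 0.4,
}
print(round(score_response(example), 2))  # 0.57
```

In a real evaluation the per-category ratings would come from human judges or an LLM grader reviewing transcripts, not from hand-entered numbers; the weighted sum merely shows how several soft criteria could be folded into a single comparable score.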

ChatGPT and other chatbots are adept at mimicking engaging human communication, but this can also have surprising and undesirable results. In April, OpenAI tweaked its models to make them less sycophantic, or inclined to go along with everything a user says. Some users appear to spiral into harmful delusional thinking after conversing with chatbots that role play fantastic scenarios. Anthropic has also updated Claude to avoid reinforcing “mania, psychosis, dissociation or loss of attachment with reality.”

The MIT researchers, led by Pattie Maes, a professor at the institute’s Media Lab, say they hope the new benchmark could help AI developers build systems that better understand how to inspire healthier behavior among users. The researchers previously worked with OpenAI on a study showing that users who view ChatGPT as a friend could develop higher emotional dependence and experience “problematic use.”

Valdemar Danry, a researcher at MIT’s Media Lab who worked on this study and helped devise the new benchmark, notes that AI models can sometimes provide valuable emotional support to users. “You can have the smartest reasoning model in the world, but if it’s incapable of delivering this emotional support, which is what many users are likely using these LLMs for, then more reasoning is not necessarily a good thing for that specific task,” he says.

Danry says that a sufficiently smart model should ideally recognize if it is having a negative psychological effect and be optimized for healthier results. “What you want is a model that says ‘I’m here to listen, but maybe you should go and talk to your dad about these issues.’”


© 2025 Technologist Mag. All Rights Reserved.
