YouTube has been struggling with a growing wave of AI-generated content on its platform, and it appears to have a new plan to deal with it.

What’s YouTube’s latest approach to tackling its AI slop problem?

According to recent posts on X, YouTube is now enlisting the help of viewers to flag low-quality AI content. On the mobile app, users are seeing a new pop-up when rating a video that asks, “Does this feel like AI slop?” or “How much does this video feel like low-quality AI?” with responses ranging from “Not at all” to “Extremely.”

The feature adds a third layer of AI detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the volume of uploads. YouTube currently doesn’t prohibit creators from using AI tools, nor does it require them to disclose AI-generated content, though creators risk losing monetization if their content is flagged as low quality.

Has YouTube’s fight against AI slop delivered any results?

Despite its efforts to crack down on low-quality content, the issue is far from resolved. One study found that roughly 21% of the first 500 recommended videos to a new YouTube account were identified as AI slop, while 33% fell into a broader “brainrot” category of repetitive, low-substance content.

The problem extends beyond adult audiences. A recent investigation by the New York Times found that thousands of videos aimed at kids claimed to be educational but were created primarily to capture attention with minimal effort. Experts have warned that this could have a negative impact on young viewers.

🚨 Did you just see what YouTube did?

YouTube isn’t banning AI slop.. They’re making you label it so they can train their next model to not look like slop.

Read that again…

You flag the bad AI content. YouTube collects it. Google feeds it into Veo 4… Then next year their… https://t.co/8UC2J3mjjv pic.twitter.com/mIrTChqC1b

— Tuki (@TukiFromKL) March 17, 2026

YouTube has not clarified how viewer ratings from the new pop-up will be weighted or used. At least one user on X has also raised concerns that collecting this kind of feedback at scale could be used as training data for AI models, potentially making future AI-generated videos even harder to spot.
