If you assumed Apple and Google were just slow to catch harmful apps on their platforms, a new investigation suggests the problem is far worse than that. The Tech Transparency Project (TTP) found that the App Store and Google Play aren't just hosting nudify apps: their search and advertising systems are actively steering users toward them.
Nudify apps are AI tools that can digitally strip clothing from photos of real people. They can also generate pornographic videos or create sexually explicit chatbots using someone’s likeness. The scary part? 31 of the apps TTP found were rated suitable for minors.
How Apple and Google are sending users straight to these nudify apps
TTP ran searches using terms like “nudify,” “undress,” “deepfake,” and “AI NSFW” on both app stores. About 40% of the top 10 results for each term were apps that could render women nude or scantily clad. And it doesn’t stop at search results. Both platforms ran paid ads for nudify apps within those results. In Google’s case, that included a carousel of sponsored apps, some of which were openly pornographic.
The autocomplete feature made things worse. When TTP typed “AI NS” into the App Store search field, it suggested “image to video ai nsfw,” which led to more nudify apps in the top results. Apple controls all advertising in its App Store and has a stated policy against ads promoting adult content. Despite that, three of TTP’s App Store searches still returned a nudify ad as the very first result.
Why this is a bigger issue than you think
The apps identified across both stores have been downloaded 483 million times and earned over $122 million in lifetime revenue. Apple and Google take a cut of that through paid subscriptions and in-app purchases, which TTP says may explain why enforcement has been lax.
After TTP and Bloomberg flagged these apps, Apple removed 15 of them, and Google suspended several others. However, both companies declined to explain how these apps had passed review or why age ratings allowed minors to download them.
How long before Apple and Google are forced to act?
The UK government has begun proposing and enacting laws against explicit deepfakes, and the US recently recorded its first criminal conviction under one such law. Pressure on Apple and Google to act more decisively is only likely to grow.
Apple’s own enforcement record is already under scrutiny. A letter obtained by NBC News revealed that Apple privately threatened to pull Grok from the App Store in January over sexualized deepfakes, even rejecting xAI’s first fix as insufficient. Apple eventually let Grok stay, but with reports like this one piling up, both companies are running out of room to look the other way.