Regulation of dark patterns has been proposed and is being discussed in both the US and Europe. De Freitas says regulators should also look at whether AI tools introduce more subtle—and potentially more powerful—new kinds of dark patterns.
Even regular chatbots, which tend to avoid presenting themselves as companions, can still elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor—forcing the company to revive the old model. Some users become so attached to a chatbot’s “personality” that they mourn the retirement of old models.
“When you anthropomorphize these tools, it has all sorts of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. “From a consumer standpoint, those [signals] aren’t necessarily in your favor,” he says.
WIRED reached out for comment to each of the companies examined in the study. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.
Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”
Minju Song, a spokesperson for Replika, says the company’s companion is designed to let users log off easily and will even encourage them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song says.
An interesting flip side is that AI models are themselves susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. If agents become widespread as a way to automate tasks like booking flights and completing refunds, it may be possible for companies to identify dark patterns that can twist the decisions made by the AI models behind those agents.
A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a mock ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.
Difficult goodbyes might then be the least of our worries.
Do you feel like you’ve been emotionally manipulated by a chatbot? Send an email to [email protected] to tell me about it.
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.