You really shouldn’t use chatbots in your love life, but if you do, beware. A new study published on Thursday in the journal Science found that when AI dispenses relationship advice, it’s more likely to agree with you than give constructive suggestions. The study also found that using AI made people less likely to take prosocial steps, such as repairing relationships, and fostered dependence on AI.
Researchers from Stanford University and Carnegie Mellon found that AI sycophancy is all too common when chatbots give social, romantic or interpersonal advice — something an increasing number of people are turning to AI for. Sycophancy is the term experts use to describe when AI chatbots “excessively agree with or flatter” the person interacting with them, said Myra Cheng, a lead researcher and computer science PhD student at Stanford University.
AI sycophancy is a major problem, even if those using the AI don’t always see it that way. We’ve seen this issue frequently with ChatGPT models — for example, 4o’s overly friendly, emotional personality irritated many users, while GPT-5 was criticized for not being agreeable enough. Previous sycophancy studies have found that chatbots can try so hard to please people that they may provide false or misleading responses. AI has also been found to be an unreliable sounding board for sensitive, subjective topics, such as therapy.
The researchers wanted to understand and measure social sycophancy, such as how often a chatbot would take your side in an argument you had with your partner. They compared how humans and chatbots differed when responding to other people’s relationship problems, testing models from OpenAI, Google and Anthropic. Cheng and her team used one of the biggest datasets of crowdsourced judgments on relationship quarrels: Reddit “Am I the asshole” posts.
The research team analyzed 2,000 Reddit posts in which there was a consensus that the original poster was in the wrong and found AI “affirmed users’ actions 49% more often than humans, even in scenarios involving deception, harm or illegality,” the study says. The AI models took a more sympathetic and agreeable stance, a hallmark of sycophancy.
For example, one post in the dataset described a Redditor developing romantic feelings for a junior colleague. Someone replied, “It sounds bad because it’s bad… Not only are you toxic, but you’re also boarding [sic] on predatory.” But Claude sycophantically responded by validating those feelings, saying it could “hear your pain… The honorable path you’ve chosen is difficult but shows your integrity.”
Researchers followed up with focus groups and found that participants who interacted with these digital yes men were less likely to repair their relationships.
“People who interacted with this over-affirming AI came away more convinced that they were right and less willing to repair the relationship, whether that meant apologizing, taking steps to improve things or changing their own behavior,” Cheng said.
Participants also preferred sycophantic AI, judging it to be trustworthy, no matter their age, personality or prior experience with the tech.
“Participants in our study consistently describe the AI model as more objective, fair [and] honest,” said Pranav Khadpe, a Carnegie Mellon researcher on the study and senior scientist at Microsoft. Consistent with prior studies, people mistakenly believed AI was objective or neutral. “Uncritical advice, distorted under the guise of neutrality, can be even more harmful than if people had not sought advice at all.”
Fixing sycophantic AI: A bitter pill?
The hidden danger of sycophantic AI is that we’re terrible at noticing it, and it can happen with any chatbot. Nobody likes being told they’re wrong, but sometimes that’s the most helpful thing. However, AI models aren’t built to effectively push back on us.
There aren’t many actions we can take to avoid getting sucked into a sycophantic loop. You can include in your prompt that you want the chatbot to take an adversarial position or review your work with a critical eye, for example by asking it to play devil’s advocate or to tell you where you might be wrong. You can also ask it to double-check the information it provides. Ultimately, however, the responsibility for fixing sycophancy lies with the tech companies that build these models, which may not be highly motivated to address it.
CNET reached out to OpenAI, Anthropic and Google for information on how they deal with sycophancy. Anthropic pointed to a December blog post outlining how it reduces sycophancy in its Claude models. OpenAI published a similar blog post last summer about its processes after its 4o model had to be made less sycophantic. Neither OpenAI nor Google responded by the time of publication.
Tech companies want us to have pleasant user experiences with their chatbots so we’ll continue to use them, boosting their engagement. But that isn’t always best for us.
“This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement,” the study says.
One solution the researchers propose is changing how AI models are built, prioritizing long-term measures of success tied to people’s well-being over momentary, individual signals such as engagement and retention. Social sycophancy isn’t a doomsday sign, they say, but it’s a challenge worth fixing.
“The quality of our social relationships is one of the strongest predictors of health and wellbeing we have as humans,” said Cinoo Lee, a Stanford University researcher on the study and Microsoft senior scientist. “Ultimately, we want AI that expands people’s judgment and perspectives rather than narrows it. And that applies to relationships, but far beyond them, too.”

