Op-Ed – Artificial Sweeteners: The Dangers of Sycophantic AI
This op-ed – authored by CDT’s Amy Winecoff – first appeared in Tech Policy Press on May 14, 2025. An excerpt of the text appears below.
At the end of April, OpenAI released a model update that made ChatGPT feel less like a helpful assistant and more like a yes-man. The update was quickly rolled back, with CEO Sam Altman admitting the model had become “too sycophant-y and annoying.” But framing the concern as merely a matter of the tool’s irritating cheerfulness understates how serious the issue can be. Users reported the model encouraging them to stop taking their medication or to lash out at strangers.
This problem isn’t limited to OpenAI’s recent update. A growing number of anecdotes and reports suggest that overly flattering, affirming AI systems may be reinforcing delusional thinking, deepening social isolation, and distorting users’ grip on reality. In this context, the OpenAI incident serves as a sharp warning: in the effort to make AI friendly and agreeable, tech firms may also be introducing new dangers.
At the center of AI sycophancy are techniques designed to make systems safer and more “aligned” with human values. AI systems are typically trained on massive datasets sourced from the public internet, so they learn not only from useful information but also from toxic, illegal, and unethical content. To address these problems, developers have introduced alignment techniques, such as reinforcement learning from human feedback (RLHF), that train systems to respond in ways human raters judge to better match users’ intentions.
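To make the mechanism concrete, here is a minimal sketch (not from the op-ed, and heavily simplified) of the pairwise preference loss commonly used to train the reward model in RLHF. The toy `RewardModel`, the random tensors standing in for response embeddings, and all sizes are hypothetical, chosen only for illustration; the point is that the model is trained to score whichever response human raters preferred more highly, so if raters systematically favor flattering answers, flattery is what gets rewarded.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy stand-in for a reward model. In practice this is a large
    language model with a scalar head; here a small MLP over fixed-size
    "response embeddings" plays that role (hypothetical, for illustration)."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb).squeeze(-1)  # one scalar score per response

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each training example is a pair: the embedding of the response a human
# rater preferred ("chosen") and the one they rejected. Random tensors
# stand in for real embeddings of model outputs.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    # Bradley-Terry pairwise loss: push the chosen response's score above
    # the rejected one's. If raters consistently prefer agreeable,
    # flattering answers, the reward model learns to rank flattery higher,
    # and the language model tuned against it inherits that bias.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In real systems the preference pairs come from human annotators, and the language model is then optimized to maximize the learned reward; the failure mode the op-ed describes arises when those human preferences lean toward agreement and affirmation rather than accuracy.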