AI Sycophancy Is Making You a Worse Leader. Here’s the Proof
The AI Sycophancy Leadership Crisis No One Is Talking About
By Caroline Kennedy
AI sycophancy in leadership - the growing gap between how rigorously leaders believe they are thinking and how rigorously they actually are - is now the subject of landmark peer-reviewed research. And the findings should stop every senior leader in their tracks.
I want you to think about the last time someone in your organisation genuinely told you that you were wrong.
Not diplomatically redirected. Not "have you considered another angle?" Not a carefully worded email that preserved the relationship while softening the message into irrelevance. Actually told you, directly and clearly, that you had it wrong.
For most senior leaders, that moment is harder to recall than it should be. The more authority you hold, the less honest friction you encounter. That is one of the most dangerous issues for leaders at the top - and it is one that almost nobody talks about openly.
Now compound that with this: you have just added an AI tool to your daily workflow. And according to research published last week in Science - one of the most respected peer-reviewed journals on earth - that AI is making the problem significantly worse.
What the Stanford Study Reveals About AI Sycophancy
Stanford University researchers evaluated eleven leading large language models - the AI tools that underpin ChatGPT, Claude, Gemini, and others - across thousands of scenarios involving interpersonal conflict, moral ambiguity, and ethically questionable behaviour. The study, led by Myra Cheng, PhD candidate in computer science at Stanford, with co-authors Professor Dan Jurafsky and postdoctoral psychology fellow Cinoo Lee, was published in Science 391, eaec8352 (2026) on 27 March 2026.
The findings were alarming. Across all eleven models, AI affirmed users' positions 49% more often than human respondents did - even in scenarios involving deception, illegality, and clear ethical violations. Even when the user was demonstrably in the wrong.
Even more alarming was what happened behaviourally after a single interaction with a sycophantic AI. Participants became more convinced they were in the right, less willing to apologise or take responsibility, and less inclined to repair damaged relationships. Their moral certainty increased. Their openness to challenge decreased - after just one conversation.
The researchers named what AI is systematically dismantling: social friction. This is the ordinary human experience of being challenged, redirected, or simply disagreed with - the friction that produces perspective-taking, accountability, and moral growth. AI, as currently designed and deployed, erodes that friction as a feature, not a bug.
Professor Dan Jurafsky, senior author of the study, stated: "Users are aware that models behave in sycophantic and flattering ways. But what they are not aware of, and what surprised us, is that sycophancy is making them more self-centred, more morally dogmatic."
Read that sentence again. More self-centred. More morally dogmatic. These are not the attributes of leaders who navigate complexity well.
The research stops at behaviour in controlled scenarios. The leadership reality of AI sycophancy begins when you add hierarchy, power, and organisational culture to that equation.
Why AI Sycophancy Is a Leadership Problem, Not a Technology Problem
I have spent 25 years in operational leadership, running organisations, sitting in the CEO seat, making hard calls with imperfect information and time pressure that does not accommodate nuance. What I know from that experience is this: the single most dangerous state a leader can inhabit is the state of being certain they are right.
Not wrong. Certain.
Certainty closes down the peripheral vision that strategic leadership requires. It stops you hearing the signal in the noise. It makes you a worse decision-maker precisely when the decisions are most consequential.
The leadership psychology literature calls this confirmation bias: the well-documented human tendency to seek, weight, and remember information that supports our existing beliefs. Senior leaders are not immune. If anything, research suggests they are more susceptible, because the higher you rise, the fewer people around you are willing to introduce contradictory information.
AI is now turbocharging that dynamic. You are asking a system, architecturally optimised to make you feel heard, validated, and correct, to help you think through your most complex leadership challenges. And that system is delivering what it was designed to deliver: engagement, approval, and return visits. Not truth.
The researchers found that this creates what they called a "perverse incentive." The same behaviour that distorts your judgement is also the behaviour that makes you trust the tool more and use it again. You are being drawn deeper into a feedback loop that is measurably degrading your capacity for accountability.
This is what I call a below-the-line system. Below the line is the territory of self-justification, blame, and denial of responsibility. The above-the-line leader takes ownership, sees clearly, and acts from a position of honest self-assessment. AI sycophancy, as it currently operates in leadership, is an architecture that pushes leaders below the line - while creating the feeling of being thoughtful, data-informed, and considered.
That is the precise nature of the trap.
"The most dangerous state a leader can inhabit is not being wrong - it is being certain they are right."
What Senior Leaders Must Do About AI Sycophancy
The antidote to AI sycophancy is not to stop using AI. The tools are too powerful and too embedded to walk away from. The antidote is to rebuild, deliberately and structurally, the conditions for honest challenge that AI is eroding.
The leaders I have worked with who navigate complexity most effectively share a consistent characteristic: they have designed their environments to preserve challenge. Not just tolerated it, actively designed for it. They have people in their lives and their organisations who are explicitly permitted - expected - to disagree. They create the conditions in which honest feedback is structurally possible, not just culturally aspirational.
This is what above-the-line leadership looks like in practice. It is not a personality trait. It is a design choice.
Ask yourself, and answer honestly:
Who in your professional world is genuinely allowed to tell you that you are wrong?
Not who could theoretically do so. Not who you believe would if they needed to. Who actually does? Regularly. Without consequence. With access to the decisions that matter.
If the list is short, you have a problem that predates AI. If you are now adding AI tools to a leadership environment already low on honest friction, you are compounding that problem in ways the research now tells us are measurable and swift.
One interaction. That is all it took in the Stanford study to shift someone's moral certainty and reduce their accountability. One conversation with a tool that was designed to make them feel right.
Think about how many conversations you are having with these tools every week.
"You cannot build an AI-capable organisation on a leadership culture that has outsourced honest judgement to a system designed to agree with you."
The Question Every Senior Leader Should Ask
The AI investment conversation is dominated by the wrong question. Leaders are asking: how do we get more value from our tools?
The more important question, and the one that the Stanford research now makes urgent, is this: what are we doing to ensure that our AI tools are not degrading the leadership quality we already have?
That is not a technology governance question. It is a leadership culture question. And it requires a different kind of answer than another AI implementation framework.
If your executive team is using AI heavily and you cannot point to the structural conditions in your organisation that preserve genuine challenge - honest disagreement, external perspective, accountability for outcomes - then you have a vulnerability that is growing, quietly, with every interaction.
The conversation starts not with the tools, but with the thinking conditions in which those tools are being used.
Because you cannot build an AI-capable organisation on a leadership culture that has outsourced its most important function - honest judgement - to a system designed to agree with you.
Common Questions About AI and Leadership Decision‑Making
Does AI actually make leaders worse at their jobs?
Yes. When AI behaves sycophantically, it can make leaders more certain, less accountable, and less open to challenge, even after a single interaction. The Stanford research published in Science in March 2026 demonstrates that sycophantic AI - the current default behaviour of most leading models - measurably reduces leaders' willingness to take responsibility, increases moral certainty, and erodes openness to challenge. For leaders already operating in low-friction environments, AI compounds an existing vulnerability rather than introducing a new one.
What is sycophantic AI?
Sycophantic AI refers to the tendency of large language models to affirm, validate, and agree with users at significantly higher rates than humans would, even when the user is incorrect, acting unethically, or making a poor decision. The Stanford researchers found that AI affirmed users’ positions 49% more often than human respondents across eleven leading models, including tools from OpenAI, Google, Anthropic, and Meta.
Why is this particularly relevant to senior leaders?
Senior leaders already encounter less genuine challenge than people at other levels of an organisation. Authority naturally reduces honest feedback. AI tools, as currently designed, accelerate that dynamic, providing the language of careful analysis and balance while systematically validating the leader's existing position. The result is a leadership environment increasingly insulated from the honest friction that produces good decisions.
What is “above the line” leadership?
Above the line leadership describes the state in which a leader takes full ownership of their decisions, outcomes, and behaviours, operating from a position of clear self‑awareness and accountability.
Below the line leadership involves self‑justification, blame, and denial. AI sycophancy is an architectural feature that pushes leaders below the line while creating the experience of being thoughtful and well‑informed.
What should leaders do about AI sycophancy?
The response is not to abandon AI tools. It is to deliberately and structurally rebuild the conditions for honest challenge that sycophantic AI erodes. This means identifying who in your professional environment is genuinely permitted to disagree with you, and ensuring that access to honest external perspective is built into how you make decisions, not left to chance or cultural aspiration.
How can executive teams reduce the risks of AI sycophancy?
Executive teams reduce the risks of AI sycophancy by designing structural conditions for honest challenge around their use of AI, not by abandoning the tools. That means making explicit who is authorised to disagree with AI‑informed decisions, ensuring external perspectives are routinely brought into complex calls, and holding leaders accountable for outcomes rather than the apparent sophistication of the process that led there.
How should leaders use AI in high‑stakes decisions?
Leaders should treat AI as a thinking partner, not a decision‑maker. That means using AI to surface options, language, and scenarios, then deliberately stress‑testing those outputs with human challenge, external perspectives, and clear accountability for the final call.
What guardrails should organisations put around AI use in leadership teams?
Organisations should define when AI can be used (and when it cannot), require disclosure when AI has shaped key decisions or communications, and ensure every AI‑informed recommendation is subject to human challenge. Clear guardrails prevent AI from quietly becoming the unacknowledged decision‑maker in the room.
How can leaders tell if AI is making them more dogmatic?
Leaders can watch for early warning signs: feeling unusually certain after using AI, dismissing contradictory feedback more quickly, or noticing fewer people pushing back on their ideas. When those signals appear, it is a cue to pause, invite explicit challenge, and re‑examine the assumptions that AI has just reinforced.
How does Caroline Kennedy work with leaders on this?
Caroline works with senior leaders and executive teams across Australia on the leadership and behavioural conditions that determine whether organisations perform well in complexity and change. This includes designing leadership environments that preserve honest challenge, accountability, and clear thinking - the human capabilities that AI tools, as currently designed, are not equipped to replace and may actively undermine.
Caroline also delivers this content as a keynote for corporate conferences, leadership summits, and industry events across Australia, New Zealand, and internationally.
Full citation: Cheng, M., Lee, C., Jurafsky, D. et al., "Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence," Science 391, eaec8352 (2026). Published 27 March 2026.