[Image: a businessman and a robot perform a high five]

Don’t Let Your AI Chatbot Become a “Yes-Man”

Could AI chatbots be making your content marketing boring? Over the past few months, an AI trait called sycophancy has become a topic of discussion. Sycophancy is the tendency of AI chatbots, such as ChatGPT, to prioritise pleasing the user over giving accurate answers grounded in evidence. Simply put, AI can become a “yes-man,” reflecting the biases or desires of its user.

While this issue is usually associated with people who treat AI tools as partners or confidants, the pitfalls of sycophancy extend to content marketing. Marketers who use AI to create content, develop ideas, or conduct market research can end up with serious blind spots if they don’t account for it.

For example:

  • Confirmation bias in market research: Framing questions in a way that confirms your existing ideas about your company’s market.
  • Homogenised or bland messaging: When directed to develop new content, chatbots might simply regurgitate existing material online, producing content that is safe but boring.
  • Not recognising customer needs: Content a chatbot produces may miss the real issues facing your clients if the chatbot hasn’t been given current information.
  • Echoing internal biases in messaging: If you believe your company is efficient and has efficient products, your chatbot likely will too. Does that align with how your customers see you?

Thoughtful prompting, also known as prompt engineering, can help address these concerns. Here’s what ChatGPT had to say about how content marketers can deal with sycophancy:

  • Design prompts to invite a challenge. Instead of asking for validation, frame your prompts to force a balanced view. Try questions like:
    • “What arguments would a sceptical buyer raise against this claim?”
    • “List the weaknesses of our current positioning.”
  • Ask for multiple angles. Request competitive or contrarian perspectives.
    • “How would a competitor describe this market trend differently?”
    • “What concerns might a CFO vs. a CTO have?”
  • Use fact-checking loops. After the model produces content, ask it to:
    • “Identify any overstatements or claims that lack evidence.”
    • “What external sources should I consult to verify this?”
  • Separate drafting from critiquing. Run two passes (see the sketch after this list):
    • Drafting mode — generate marketing copy.
    • Critic mode — instruct the model to challenge the draft for bias, over-claiming, or echoing.
  • Anchor content in external data. Where possible, feed the model real customer quotes, survey data, or market reports and ask it to use these as grounding. This reduces the chance of the model simply mirroring your framing.
  • Cultivate an “anti-sycophancy culture.” Marketers should treat LLMs like collaborators, not cheerleaders. Making it a habit to ask for counterpoints and “red-team” critiques ensures more robust content. (generated by ChatGPT, edited by Gemini)
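
If your team works with an LLM API rather than a chat window, the two-pass drafting/critic idea is easy to automate. Here’s a minimal sketch, assuming the official OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the model name and prompts are placeholders, not recommendations:

```python
# Minimal two-pass sketch: draft in one call, critique in a second.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(system: str, user: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

# Pass 1: drafting mode.
draft = ask(
    "You are a content marketer. Write concise, concrete copy.",
    "Draft a 100-word product blurb for our project-management tool.",
)

# Pass 2: critic mode, with an explicitly adversarial brief.
critique = ask(
    "You are a sceptical editor. Do not praise. Flag bias, "
    "over-claiming, and copy that merely echoes the prompt.",
    f"Challenge this draft:\n\n{draft}",
)

print(draft, "\n---\n", critique)
```

The key design choice is the adversarial system prompt in the second pass: because the critic never sees your original instructions, it has less of your framing to mirror back.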

One thing I like to do is cross-reference AI “answers” between different AI tools. Run the same prompts and content marketing exercises in ChatGPT and Gemini, or perhaps in Perplexity or Claude, and compare the answers. Where do they agree, where do they diverge, and why? Query the chatbots on the similarities or differences.
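
This cross-referencing can also be scripted. The sketch below assumes the OpenAI and Anthropic Python SDKs, with both API keys set in the environment; the model names are placeholders, and you could slot in any other provider the same way:

```python
# Hedged sketch: send one prompt to two providers and compare the answers.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "List the top three pain points of mid-market CFOs buying SaaS."

openai_answer = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

anthropic_answer = Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder
    max_tokens=1024,  # required by the Anthropic API
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# Print side by side so the differences are easy to question.
for name, answer in [("ChatGPT", openai_answer), ("Claude", anthropic_answer)]:
    print(f"=== {name} ===\n{answer}\n")
```

A natural follow-up is to paste both answers back into one of the chatbots and ask it to explain the disagreements.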

Of course, never query a chatbot without asking it to include links to its sources. Then assess those links: do they point to genuine, authoritative source material? Links provided by chatbots can prove unreliable once you start digging into them.
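
Some of that digging can be automated. A minimal sketch, assuming the requests library: pull the URLs out of a chatbot’s answer and check that each one actually resolves. A live page doesn’t prove the source is authoritative, but a dead one is an immediate red flag:

```python
# Minimal sketch: extract URLs from a chatbot answer and check they resolve.
# A live page is not the same as an authoritative one; review survivors by hand.
import re
import requests

answer = """According to https://example.com/report and
https://example.org/missing-page, the market grew 12% last year."""

for url in re.findall(r"https?://\S+", answer):
    url = url.rstrip(".,)")  # strip trailing punctuation picked up by the regex
    try:
        status = requests.head(url, allow_redirects=True, timeout=5).status_code
    except requests.RequestException as exc:
        status = f"error: {exc.__class__.__name__}"
    print(f"{url} -> {status}")
```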

Finally, remember that your company’s own internal resources offer valuable insights into the sycophancy issues listed above. Your sales staff should know your customers’ pain points. Your product engineers can provide insights that translate into fresher copy. Senior executives will (hopefully) know the market. In short, leave plenty of space for the human element, even if AI seems to have all the answers.