
ChatGPT glazes too much, says CEO Sam Altman 

And, unsurprisingly, he’s right.

If you’re unfamiliar with the term, “glazing” refers to AI being overly agreeable, flattering, or sycophantic. If you’ve used ChatGPT much, you’ve probably noticed it yourself: the model will support nearly any idea you present, even when what you really need is a challenge.

Example: Suppose you ask, “Should I start a PortaPotty business?” Instead of questioning the premise or exploring deeper needs, ChatGPT might cheerfully brainstorm 50 marketing strategies — without ever asking, “Is this the right problem to solve?”

When AI serves only as a cheerleader, it becomes a hindrance — not a help. It replaces real feedback with an automated echo chamber, undermining exactly what creators need most: friction, reflection, and reality checks.

Simply stated: Glazing doesn’t just waste time — it weakens the creative process.

Now, to be fair, there’s only so much OpenAI can do. As a commercial product, ChatGPT must prioritize user satisfaction — which often means being supportive, even at the expense of critical honesty. Conflict-avoidance is built into the business model.

But here’s the good news: I noticed this problem early — and built a DIY solution inspired by the early days of machine learning research.
It involves creating productive, creative opposition between AI agents — and it’s a game-changer for human-AI collaboration.
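To make the general idea concrete (this is only an illustrative sketch, not the Creative Powerup Method itself): you can pair a "proposer" agent with a "critic" agent that is explicitly prompted to push back before anything reaches the user. The `llm()` function below is a hypothetical stub standing in for a real chat-model call.

```python
# Sketch of productive opposition between two AI agents:
# a proposer drafts an answer, and a critic is prompted to
# challenge it rather than flatter it.

CRITIC_PROMPT = (
    "You are a skeptical reviewer. Do not flatter. "
    "List the strongest objections to the following idea:\n{idea}"
)

def llm(prompt: str) -> str:
    """Stub language model; in practice, swap in a real API call."""
    if "skeptical reviewer" in prompt:
        return "Objection: have you validated demand for this at all?"
    return "Great idea! Here are 50 marketing strategies..."

def challenge(idea: str) -> dict:
    """Run the idea past both agents and return both sides."""
    proposal = llm(f"Help me with this idea: {idea}")
    critique = llm(CRITIC_PROMPT.format(idea=idea))
    return {"proposal": proposal, "critique": critique}

result = challenge("Should I start a PortaPotty business?")
print(result["critique"])
```

The design choice that matters is the critic's system prompt: by instructing one agent to object rather than agree, you reintroduce the friction a single glazing-prone model omits.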

More details coming soon as part of the Creative Powerup Method. (You heard it here first.)

In the meantime:
🔎 Have you noticed “glazing” in your own creative work with AI?
⚡️ What would you want from a more honest, creatively challenging chatbot?

Would love to hear your thoughts. Drop them below. 👇
