Performance Management Blog

AI Hallucinations and Believability
Square Wheels One - How might this illustration represent how things really work in most organizations?

This post anchors to AI hallucinations and believability, and how to keep their usefulness in perspective. The goal is to help increase comfort with using AI tools.

AI’s tendency to generate information that sounds plausible but is sometimes false is known as a “hallucination,” which has inspired a range of colorful analogies for how AI represents reality. Google’s AI once suggested “eating one small rock per day” and “adding non-toxic glue to pizza to stop it from sticking,” both pretty absurd recommendations that an intelligent person should sort out easily. But some think this is scary.

AI Hallucinations and Believability -- AI tools represent great ways to generate elegant solutions for improvements

Some people would have you believe that AI is dangerous. One conspiracy-oriented friend, Susan, suggests that AI is all programmed by the CIA and Big Pharma to give intentionally bad information about vaccine safety, for example. She believes only her for-profit conspiracy writers and not the science that AI easily finds to support safety data or related questions. Susan will not believe anything that any AI finds and documents. Seriously. And THAT is scary.

Yet AI is not nefarious or ill-intentioned – it simply plays with words.

My belief is that AI conversations can be compared to a college professor after too many beers – this seems apt, right? And I have been one of those, long ago! I also believe that caterpillars can fly if they just lighten up. SO, there is some levity here, anchored to how this stuff really works for teambuilding and innovation…

Here are a few analogies and metaphors that capture the quirky, sometimes surreal nature of AI hallucinations:

Seeing Shapes in Clouds

AI hallucinations are often compared to how humans see familiar patterns, like animals or faces, in clouds or on the moon. The AI, like a daydreaming sky-gazer, finds structure and meaning where none exists, producing answers that are creative but unmoored from reality. And note that different people will see vastly different things.

A Confident Storyteller Who Never Checks Facts

Imagine a person who tells stories with absolute conviction, even when they’re making things up on the spot. AI can spin a narrative or provide an answer with total confidence, regardless of whether the information is accurate or even real. But some AI tools also provide source data so that you can check the facts and see what information shaped the response. THAT is often quite interesting, to see how your input and word choices affected the outcome.

The Overzealous Improv Performer

AI sometimes behaves like an improv actor given a nonsensical prompt, responding with enthusiasm and inventiveness but not always with logic or truth. The results are often wildly entertaining, like a chatbot claiming the Golden Gate Bridge was transported across Egypt.

Your Bumbling Tour Guide

Picture a tour guide who has never actually visited the place but has heard about it, and who of course feels pressured to provide answers. They might confidently point out “historic” landmarks that don’t exist or invent facts to fill awkward silences, much like AI when it hallucinates under the pressure of a prompt. There IS information out there, but it is not always clearly considered and shared.

The Art Student Painting from Memory

Sometimes AI is like an art student trying to recreate the famous Sistine Chapel ceiling from memory, filling in gaps with imagination. The result might resemble the original, but with odd details, like a panda riding a bicycle around the edge.

Why These Analogies Work – BALANCE and Perspective

People “hallucinate” when they see Square Wheels One, projecting their beliefs onto this simple illustration so that it represents their reality of how things really work in organizations. That makes it a great tool for collaboration and innovation. Obviously, people see things differently, and, collectively, a group can use ALL the shared ideas around this image to develop an actual picture of what can be changed and improved.

Square Wheels One - How might this illustration represent how things really work in most organizations

AI tools, despite their obvious sophistication, lack true understanding; they can and do misinterpret, confabulate, or invent details, just as humans do when guessing, daydreaming, or joking.

The humor and surrealism of AI hallucinations, like suggesting that you “fold in the cheese until it remembers its childhood” when looking for a recipe, or confusing Shakespeare with a milkshake recipe, show both the promise and the current limitations of generative AI.

AI’s hallucinations are less like deliberate lies and more like the earnest, if sometimes misguided, creativity of someone trying to please or impress you with their analysis, whether that’s a tipsy professor, an imaginative child, or a stand-up comic riffing on the absurd. Just use it!

These analogies should help us always remember that while AI can be a powerful assistant for research and analysis, it still needs careful supervision, much like a rookie employee with a flair for fiction.

Use AI, for sure, and encourage others to do so. I generally use Perplexity because it is simple to use and provides sources; there is also a free version. It is amazing in what it can do.

And feel free to use the above analogies with your people to increase their comfort with playing with new tools for organizational improvement. But also be careful and observant:

I asked AI about the necessary steps to do self-surgery on my appendix. It started with a significant warning about the danger, then gave me step-by-step instructions about preparation, the necessary surgical tools, the incision location, and the post-surgical recovery process. Maybe let this be a caution to you about the power and the uses of AI in moving all things forward.

The basic idea is to use AI to help you and your people to:

… and go more better faster

For the FUN of It!

Dr Scott Simmerman, retired Managing Partner of Performance Management Company

Dr. Scott Simmerman is a designer of team building games and organization improvement tools.
Managing Partner of Performance Management Company since 1984, he is an experienced presenter and consultant who is trying to retire!! He now lives in Cuenca, Ecuador.

You can reach Scott at scott@squarewheels.com
Learn more about Scott at his LinkedIn site.

Square Wheels® is a registered trademark of Performance Management Company,
and the cartoons have been copyrighted since 1993.

© Performance Management Company, 1993 – 2025

#SquareWheels  #InnovationAtWork  #TeamEngagement  #FacilitationTools  #WorkplaceImprovement  #EmployeeEngagement  #CreativeProblemSolving  #OrganizationalDevelopment  #LeadershipTools

