You might remember this one. Late 2024, a post went viral across LinkedIn and Facebook. McDonald’s had supposedly deployed an AI customer service bot named Grimace, and someone had figured out how to get it to write Python code — full working logic, complexity notes, the lot — before it politely asked whether they wanted McNuggets or a burger.
The headline wrote itself. “Stop paying for Claude. McDonald’s AI is FREE.”
It went everywhere. The kind of viral that happens when something confirms what we already suspect — that AI deployments are chaotic, that the guardrails do not really work, that big brands are shipping things nobody has properly tested.
I saw it. You probably saw it. Some of us nearly shared it.
There was just one problem.
It was not true.
Fast Company looked into it. McDonald’s ran an internal investigation and found no evidence the exploit had ever happened. The screenshots and videos were believed to be fabricated. The same pattern had played out a month earlier with Chipotle’s bot — that one was confirmed Photoshopped.
By the time anyone bothered to check, the original post had thousands of likes and hundreds of shares. And most of those shares? They came from professionals. People with their names on them.
I am writing about this eighteen months later for a specific reason. The McDonald’s story is the most well-documented example of a pattern that is not going away — it is getting louder. And the people who can least afford to share fabricated content are exactly the people most likely to.
Sound familiar?
Why we nearly shared it
Here is the uncomfortable bit. The story was plausible. That is why it worked.
The vulnerability it described — formally called prompt injection — is real. When a company deploys an AI model and tells it to “only discuss menus”, that instruction lives in something called a system prompt. The model does not enforce it as a hard rule. It treats it as guidance. A clever user prompt can route around it. This happens. It has happened. It will happen again.
So when the McDonald’s post landed, our reaction was not scepticism. It was recognition. Of course that happened. I knew AI deployments were like this.
That is the mechanism right there. A fabricated story, dressed in a real concern, spread by smart people who could not tell the difference because the underlying idea felt familiar enough to be true.
Here is the part nobody likes to admit: AI has not made misinformation easier to create. It has made it easier to make misinformation feel credible.
What I actually do before I react
I want to show you something I do, because this is a habit, not a philosophy. And honestly, it is much simpler than it sounds.
I have a little workflow I run with Claude before I share anything that feels viral. I call it the [Check] flow. Three stages, takes about thirty seconds:
Stage 1 — Summarise. What is this content actually claiming? Not the headline. Not the viral framing. The underlying assertion underneath all the noise.
Stage 2 — Verify. Active source check. Real web search against primary sources, with three little flags I use:
🟡 Plausible but I cannot find a source 🟠 There is a source but it looks questionable 🔴 No supporting data anywhere
Stage 3 — Discuss. What does this mean if it is true? What does it mean that it spread this far if it is false?
When I ran the McDonald’s post through this, Stage 2 surfaced the Fast Company investigation in seconds. Primary source. Named internal investigation. Confirmed fraudulent.
The post still had merit — as a case study about why AI guardrails matter, why prompt injection is a real risk, why the people deploying AI need to understand how these systems actually behave. But the specific incident? Made up.
That distinction matters. Sharing the original post would have put my name on a false claim. Sharing the verification story is actually more useful to the people I am trying to reach.
The reputational risk nobody is really talking about
Most of the noise around AI and misinformation focuses on political content, deepfakes, big-picture mass manipulation. That is a real problem. It is just not the problem most of us are facing day to day.
The problem most of us are facing is quieter. It looks like this:
A client sends you an article saying AI handles 80% of B2B sales conversations. You quote it in a meeting. The stat is fabricated.
Someone in your network shares a story about a competitor’s AI failure. You forward it. Turns out it was a staged demo.
You comment on a viral post about an industry trend. The underlying data is from a methodology-free survey run by a vendor with something to sell.
None of these are catastrophic on their own. But your name is on them. And in a professional community built on trust, credibility compounds — or it erodes — in small increments. One post at a time.
AI is accelerating the production of plausible-sounding content. The volume is going up. The polish is going up. The speed is going up. The verification rate? Not so much.
A simple starting point — no special tools needed
You do not need to build a fancy workflow to start thinking more critically about what you share. Three questions, applied before you engage with any viral claim, get you most of the way there:
What is it actually claiming? Not the headline. The specific underlying assertion underneath.
What would need to be true for this to be real? Would McDonald’s really have deployed an LLM with zero output validation? Would the engineers responsible really have missed a prompt injection this obvious? When the answer to “what would need to be true” gets implausible fast, that is your signal.
Who benefits from this spreading? Engagement farming, vendor positioning, competitive damage — all of these produce misinformation. Following the incentive often breaks the story open.
This is not about turning yourself into a fact-checker. It is about applying the same critical instinct to your information environment that you already apply to contracts, to financials, to client proposals.
The information you consume and share is part of your professional infrastructure. It deserves the same standard.
Why this is a community conversation, not a solo one
Here is the thing about all of this. The verification habit is easier when you are not running it alone.
One of the things V8 Nexus exists to do is build that shared instinct across a circle of London business leaders who actually need it. Not in the abstract. In practice — through workshops, peer briefings, and direct conversations about what is real and what is just noise dressed in confidence.
Because you cannot fact-check every claim that crosses your feed. But you can be in a room with people who, between them, will catch most of the ones that matter.
That is the kind of community we are building. Practical, peer-led, no jargon, no posturing. If you have read this far, you are probably already the kind of person who would fit.
Have a look at V8 Nexus — and if it feels right, come and meet us.
Gina Cheng leads V8 Nexus, the executive community for London business leaders building practical AI fluency alongside their networks. Community Intelligence posts cover what we are learning together — about AI, reputation, and operating well in a noisier information environment.
Ready to take the next step?
Join London's executive AI community — events, practical intelligence, and curated introductions for established business leaders.
Join V8 Nexus — practical AI literacy for London leaders