Summary
This post explains why Generative Engine Optimization cannot be measured on visibility alone. It highlights findings from NewsGuard showing that AI models now repeat false claims more than a third of the time, exposing brands to reputational risk when cited alongside misinformation. It warns that inclusion in AI answers may look like progress but can actually amplify distortions and propaganda. The post introduces Reputation Engine Optimization as the safeguard that ensures visibility strengthens trust instead of eroding it.
The Audit That Changed the Conversation
NewsGuard’s AI report revealed a disturbing trend. The top ten AI engines repeated false information 35 percent of the time. That is nearly double last year’s 18 percent. ChatGPT alone repeated false claims 40 percent of the time. This isn’t a minor flaw. It is proof that AI visibility without reputation controls is dangerous ground for brands.
This data is not about abstract misinformation. It is about how the engines that millions of people now rely on for answers are repeating falsehoods at scale. The implication for brands is straightforward. If these models cannot protect elections and global affairs from distortion, they certainly cannot be trusted to protect your reputation without active oversight.

Visibility Has Become a Double-Edged Sword
AI platforms no longer refuse to answer questions. Non-response rates dropped from 31 percent last year to zero. The engines always give an answer. But the price is accuracy. By pulling from polluted sources like content farms and propaganda networks, they increase the risk that your brand is cited alongside misinformation. Visibility looks like progress, but it often hides reputational damage.
Here is where the tradeoff hits communications teams hardest. Visibility is easy to celebrate and easy to report. But the closer you look at the source mix behind those AI-generated answers, the clearer the danger becomes. The very engines that amplify your brand can just as easily attach it to unreliable or hostile narratives.
The Data Behind the Risk
To understand the full scope of the problem, look at how each AI engine performed. The NewsGuard AI report broke down the false claim rate across the ten leading models. The numbers reveal a market-wide weakness, with some engines failing more often than others. This chart illustrates how serious the problem has become:
[Chart: false claim rates for each of the ten leading AI models, per NewsGuard's audit]
Notice the contrast. Claude and Gemini held relatively low failure rates. Meanwhile, Inflection crossed 56 percent and Perplexity jumped from a perfect record to nearly 47 percent. ChatGPT and Meta both landed at 40 percent. The problem is systemic, not isolated. When your brand shows up in these answers, you are rolling the dice on accuracy.
Why Reputation Must Come First in GEO
This is also where the concept of Reputation Engine Optimization (REO) comes into play. REO goes beyond visibility to measure how engines interpret, frame, and repeat your brand story. While GEO ensures you show up in generative answers, REO ensures those answers strengthen trust instead of undermining it.
Brands are rushing to measure inclusion in AI answers. That metric feels tangible, and it looks good on reports. But inclusion without context is dangerous. If one-third of those mentions carry a negative tone, repeat false claims, or lean on low-authority sources, then visibility is a liability.
This is why reputation must come first in GEO measurement. You need to evaluate the credibility of every source being cited. You need to measure sentiment consistency across engines. You need to map how closely AI-generated narratives align with your actual story. Otherwise, you are not measuring influence. You are measuring exposure to misinformation.
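To make that concrete, here is a minimal sketch of what a reputation-first scorecard could look like, assuming you have already harvested brand mentions from AI answers and scored their sentiment and narrative alignment with your own tooling. The `Mention` fields, the `SOURCE_AUTHORITY` tiers, and the domains are hypothetical placeholders, not part of the NewsGuard data:

```python
from dataclasses import dataclass
from statistics import pstdev

# Hypothetical authority tiers (0 = untrusted, 1 = fully trusted).
# In practice you would maintain or license a source-ratings dataset.
SOURCE_AUTHORITY = {
    "wired.com": 0.9,
    "theverge.com": 0.9,
    "example-content-farm.net": 0.1,
}

@dataclass
class Mention:
    engine: str              # e.g. "chatgpt", "perplexity"
    cited_domains: list      # domains the AI answer cited
    sentiment: float         # -1.0 (hostile) .. 1.0 (favorable), from your own scorer
    narrative_match: float   # 0 .. 1 overlap with your approved messaging

def reputation_scorecard(mentions):
    """Summarize reputation risk across AI-engine mentions, not just their count."""
    if not mentions:
        return {}
    credibility = [
        sum(SOURCE_AUTHORITY.get(d, 0.3) for d in m.cited_domains)
        / max(len(m.cited_domains), 1)
        for m in mentions
    ]
    sentiments = [m.sentiment for m in mentions]
    return {
        "mentions": len(mentions),  # raw visibility: the number everyone reports
        "avg_source_credibility": sum(credibility) / len(credibility),
        "avg_sentiment": sum(sentiments) / len(sentiments),
        "sentiment_spread": pstdev(sentiments),  # low spread = consistent framing
        "avg_narrative_alignment": sum(m.narrative_match for m in mentions) / len(mentions),
    }

# Two engines mention the brand, but only one cites a credible source.
sample = [
    Mention("chatgpt", ["wired.com"], sentiment=0.6, narrative_match=0.8),
    Mention("perplexity", ["example-content-farm.net"], sentiment=-0.4, narrative_match=0.2),
]
print(reputation_scorecard(sample))
```

The design choice that matters is that raw mention count is only one line in the output. Credibility, sentiment spread across engines, and narrative alignment sit beside it, so exposure can never masquerade as influence.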
The Risk of Propaganda Laundering
The NewsGuard audit showed how state-backed networks like Pravda and Storm-1516 manipulate AI answers. These actors flood the web with fabricated content until chatbots repeat it as fact. Today it is Moldova’s elections. Tomorrow it could be your brand’s integrity under fire. The same structural weakness that spreads propaganda can also distort corporate narratives.
If propaganda can game AI search at this scale, then every brand should assume it can happen to them. Competitors, activists, or even opportunistic trolls can flood the system with misleading narratives. The engines will not protect you. Only proactive monitoring and reputation safeguards will.
Take Logitech as an example. Imagine AI engines pulling mentions of its products not from respected outlets like Wired or The Verge, but from low-quality blogs recycling misleading claims about security vulnerabilities. On a visibility report, Logitech might look successful because the brand name shows up in multiple AI answers. But the context would be reputationally damaging.
By auditing citations, Logitech’s communications team could identify which engines sourced from trusted tech media and which leaned on unreliable content farms. That insight would allow them to adjust media strategy by prioritizing higher-authority outlets and correcting distortions in real time. They could also build proactive content streams that reinforce accurate narratives across product categories. The outcome would be clear direction for earned media investment and evidence of how GEO measurement protects brand trust, not just visibility. This is how reputation-first GEO strategy translates into action and accountability.
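For illustration, a citation audit of this kind can start as a simple tally, assuming you have collected (engine, cited domain) pairs from AI answers that mention the brand. The tier lists and domains below are hypothetical; a real audit would lean on a maintained source-ratings dataset rather than a hand-picked set:

```python
from collections import defaultdict

# Hypothetical tier lists for the sketch; replace with real ratings data.
TRUSTED = {"wired.com", "theverge.com"}
UNRELIABLE = {"cheap-gadget-rumors.biz", "spam-reviews.info"}

def audit_citations(answers):
    """Tally trusted vs. unreliable citations per engine.

    `answers` is an iterable of (engine, cited_domain) pairs harvested
    from AI responses that mention the brand.
    """
    tally = defaultdict(lambda: {"trusted": 0, "unreliable": 0, "unknown": 0})
    for engine, domain in answers:
        if domain in TRUSTED:
            tally[engine]["trusted"] += 1
        elif domain in UNRELIABLE:
            tally[engine]["unreliable"] += 1
        else:
            tally[engine]["unknown"] += 1  # candidates for manual review
    return dict(tally)

# One engine leans on tech media, another on a content farm.
observed = [
    ("gemini", "theverge.com"),
    ("gemini", "wired.com"),
    ("chatgpt", "cheap-gadget-rumors.biz"),
    ("chatgpt", "wired.com"),
]
for engine, counts in audit_citations(observed).items():
    print(engine, counts)
```

Even a tally this simple turns "the brand shows up" into "which engines amplify credible coverage and which launder content-farm claims," which is exactly the distinction a visibility-only report misses.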
Conclusion: Reputation Is the Real Metric
The lesson is clear. Visibility is the baseline. Reputation is the safeguard. Without that safeguard, GEO becomes a measurement of exposure, not influence. Treat reputation as the first layer of GEO. Track who gets cited, how tone shifts, and where distortions appear. NewsGuard’s data makes the risk undeniable. GEO can make your brand stronger, or it can accelerate reputational harm. The choice depends on where you put reputation in the equation.
Download the NewsGuard AI report here.