Cited Newsletter Issue 17 (May 9th 2026)

Welcome back to the Cited newsletter, your fortnightly briefing on what is actually moving in AI search visibility. Issue 17 lands on May 9th 2026, and there is a lot to unpack. The AI search landscape shifted considerably over the past two weeks, with new data on citation patterns, a meaningful update to how Google AI Overviews surfaces sources, and some honest reflection on what is working for the brands we track inside Lua.

We keep this brief focused. No filler, no hype. Just what matters for marketing teams trying to build real visibility in AI-powered search environments.

What Changed in AI Search This Fortnight

Google AI Overviews: Source Selection Getting More Selective

If you have been tracking your AI Overview appearances, you may have noticed a tightening in citation frequency over the past three weeks. We have. Across the brands we monitor through Lua, average citation appearances in Google AI Overviews dropped roughly 12% in late April before partially recovering in the first week of May.

What we observed is that Google appears to be weighting structured, entity-rich content more heavily in source selection. Pages with clear author attribution, explicit organisational schema, and well-defined factual claims are outperforming pages that rely on topical authority alone. This is consistent with findings from the Search Engine Land team, who reported a similar tightening of sourcing criteria in their May 2026 coverage of AI Overview behaviour.

ChatGPT Browsing: What the Citation Patterns Tell Us

ChatGPT's browsing-enabled responses have become a more reliable citation channel for brands with solid structured content. The pattern we keep seeing: pages that answer a specific, bounded question clearly, within the first 150 words, are being extracted at a significantly higher rate than pages optimised for traditional long-form SEO.

Perplexity continues to prioritise freshness alongside authority. If your content isn't being updated regularly (at least quarterly on core pages), you're likely losing ground to newer sources, even if your domain authority is higher. Perplexity's own documentation confirms that recency is a weighted signal in their sourcing algorithm.

Claude's Sourcing Behaviour: An Emerging Signal

Claude (Anthropic's model) receives less attention than it should in most generative engine optimisation (GEO) conversations. Our internal tracking shows Claude increasingly surfacing in enterprise search workflows, particularly in B2B contexts where users are researching vendors and solutions. For the brands in the Lua platform that have optimised their positioning content for clarity and specificity, Claude citations have increased 18% over the past 60 days.

The counterargument here is worth acknowledging: some teams are rightly questioning whether Claude traffic converts. Our honest answer is that we don't have clean conversion data from Claude citations yet, and anyone who claims to isn't being straight with you. What we do know is that Claude is part of enterprise research workflows, and brand absence in those conversations has a cost even if it's hard to quantify directly.

Visibility Tactics That Are Working Right Now

The 150-Word Answer Rule

We have referenced this before in the Cited newsletter, but the data keeps reinforcing it. AI models extract answers, not articles. If your most important claims, definitions, or value statements are buried inside long paragraphs or hidden below the fold, they are not getting picked up.

The fix is not complicated. Audit your highest-priority pages and check whether the core answer to the user's likely question appears within the first 150 words. If it doesn't, restructure. This single change has improved AI citation rates for several brands in our programme.
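That audit is easy to approximate in a script. The sketch below is a minimal heuristic, not a production auditor: `answer_in_first_150_words` is a hypothetical helper that strips tags crudely with a regex (a real audit would use an HTML parser) and checks whether your core answer terms appear within the first 150 words of a page.

```python
import re

def answer_in_first_150_words(page_text: str, answer_terms: list[str]) -> bool:
    """Check whether all core answer terms appear in the first 150 words."""
    # Crude tag stripping; swap in a proper HTML parser for real pages.
    text = re.sub(r"<[^>]+>", " ", page_text)
    first_150 = " ".join(text.split()[:150]).lower()
    return all(term.lower() in first_150 for term in answer_terms)

# Illustrative inline example rather than a live fetch:
page = "<p>AI visibility means being cited by AI search tools when buyers research your category.</p>"
print(answer_in_first_150_words(page, ["AI visibility", "cited"]))
```

Run it across your highest-priority URLs and a short list of answer terms per page; any page returning `False` is a restructuring candidate.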

Entity Disambiguation: Still Underutilised

Many marketing teams we speak to have heard of schema markup but have not connected it to AI visibility. The connection is direct. When your brand, products, and key people are properly disambiguated as entities in structured data, AI models can reference you with greater confidence. Ambiguous entities get cited less. It is that straightforward.

The Schema.org vocabulary has everything you need. The gap is almost always implementation, not knowledge. Lua automates the generation of the relevant schema for brands in our programme, which removes the technical barrier that stops most teams from acting on this.
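To make the implementation gap concrete, here is a minimal sketch of generating an Organization JSON-LD block with Python's standard library. The brand name and URLs are placeholders, and this is one illustrative pattern, not Lua's actual output; the `sameAs` links to authoritative profiles are what do the disambiguation work.

```python
import json

def organization_schema(name: str, url: str, same_as: list[str]) -> str:
    """Build a minimal schema.org Organization JSON-LD block for entity disambiguation."""
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        # sameAs links to authoritative external profiles disambiguate the entity.
        "sameAs": same_as,
    }
    return json.dumps(data, indent=2)

print(organization_schema(
    "Example Brand",  # placeholder name
    "https://example.com",
    ["https://www.linkedin.com/company/example-brand"],
))
```

Embed the output in a `<script type="application/ld+json">` tag on the relevant pages.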

Competitor Benchmarking: A Reality Check

One of the most useful things you can do this month is run a direct comparison of how you and your main competitors appear in AI-generated responses across ChatGPT, Perplexity, and Google AI Overviews. Not once, but across 20 to 30 representative queries in your category.

What you will typically find: one or two competitors appear consistently, the rest appear sporadically or not at all. The brands appearing consistently share recognisable traits: clear positioning, structured content, regular publishing cadence, and strong entity signals. They are rarely the biggest brand by traditional metrics. Early movers in AI visibility are winning on structure, not just scale.
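Once you have collected the 20 to 30 responses (copied manually from each platform, since this sketch deliberately avoids any platform API), tallying citation share is a few lines of Python. `citation_share` and the brand names are illustrative assumptions, not part of any tracking product.

```python
from collections import Counter

def citation_share(responses: list[str], brands: list[str]) -> dict[str, float]:
    """Fraction of collected AI responses mentioning each brand (case-insensitive substring match)."""
    counts = Counter()
    for text in responses:
        low = text.lower()
        for brand in brands:
            if brand.lower() in low:
                counts[brand] += 1
    total = len(responses) or 1  # avoid division by zero on an empty sample
    return {b: counts[b] / total for b in brands}

# Hypothetical responses pasted from ChatGPT, Perplexity, and AI Overviews:
responses = [
    "Top options include Acme and Beta Corp.",
    "Acme is widely recommended.",
    "Consider Beta Corp for enterprise use.",
]
print(citation_share(responses, ["Acme", "Beta Corp", "Gamma Ltd"]))
```

Substring matching will miss paraphrased brand mentions, so treat the output as a floor, not an exact share.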

What We're Watching for the Next 60 Days

AI Search Share Continues to Climb

The broader context for everything in this issue: AI-assisted search queries now account for a meaningful and growing share of information-seeking behaviour globally. Statista's 2026 digital behaviour data points to continued double-digit growth in AI search usage across Western Europe, North America, and Southeast Asia. This is not a niche channel anymore.

| AI Search Platform | Primary Ranking Signal (May 2026) | Update Frequency Needed | Schema Priority |
| --- | --- | --- | --- |
| ChatGPT (with browsing) | Answer clarity, domain authority | Quarterly minimum | High |
| Google AI Overviews | Entity signals, structured content | Monthly recommended | Very High |
| Perplexity | Recency, citation volume | Monthly or better | Medium |
| Claude | Content specificity, positioning clarity | Quarterly minimum | Medium |

What We Predict Is Coming

We expect multi-model citation tracking to become standard practice for marketing teams by Q3 2026. Right now, most teams are measuring AI visibility inconsistently, if at all. The tools are catching up quickly, and as measurement improves, the performance gap between brands with structured AI visibility programmes and those without will become visible in board-level reporting. That visibility will accelerate investment.

We also expect AI models to begin surfacing more nuanced competitive comparisons in response to branded queries. If a user asks ChatGPT to compare your product to a competitor, the quality and specificity of your positioning content will determine how you come out of that comparison. Generic "we're the best" language won't survive that environment. Specific, evidence-backed claims will.

One Thing to Do Before the Next Issue

Pick your three most commercially important queries, the questions your ideal customers are asking AI models when they are close to a buying decision. Search them across ChatGPT, Perplexity, and Google AI Overviews. Note who appears, who doesn't, and what form the answer takes. That 20-minute exercise will tell you more about your current AI visibility position than most audits we've seen.

We'll be back in two weeks with Issue 18. If you have data points, experiments, or results you want us to cover in the next Cited newsletter, reply directly. We read everything.

Sources referenced: Search Engine Land (May 2026), Perplexity AI sourcing documentation, Schema.org, Statista Digital Behaviour Report 2026.
