Cited Newsletter Issue 16 (May 8th 2026)


Cited Newsletter Issue 16: AI search citation trends across ChatGPT, Perplexity, and Google AI Overviews — with real data from 40+ tracked brands.


Welcome back to Cited, the newsletter where we track what is actually moving in AI search visibility, cut through the noise, and tell you what it means for your programme. Issue 16 covers three threads we have been watching build for several weeks: a growing divergence in how different AI models cite sources, a new pattern in how ChatGPT handles brand comparisons, and what our data from 40+ brands is telling us about the pace of first-citation gains.

If you are new here: Cited goes out every two weeks. It is written by the team at Lua Rank, and it is built for marketing professionals who are tracking AI search as a serious channel, not a curiosity.

What We're Seeing Across the Models This Week

ChatGPT's Brand Comparison Queries Are Shifting

Over the past three weeks, we have tracked a meaningful change in how ChatGPT handles brand comparison queries. Prompts like "what is the best [category] tool for [use case]" are increasingly returning structured recommendation lists where the citation source matters as much as the brand mention itself. If your brand appears without a citation link, it carries roughly 40% less weight in follow-up queries about that brand specifically.

This is not speculation. It is a pattern we are seeing across multiple verticals in our tracked brand set. The implication is direct: getting mentioned is table stakes. Getting *cited* is what actually compounds.

Perplexity Is Prioritising Recency More Aggressively

Perplexity's sourcing behaviour has tilted noticeably toward content published or updated within the last 90 days. We have seen brands that refreshed cornerstone pages in March 2026 gain citation appearances within two to three weeks of the update going live, while equivalent pages that have not been touched since late 2025 are losing ground in the same query clusters.

The practical implication: if your content calendar is not treating existing page refreshes as a first-class priority, you are leaving Perplexity citations on the table.
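One way to operationalise that 90-day window is a simple staleness audit over your pages' last-updated dates. The sketch below is a minimal, hypothetical example: it assumes you can supply a URL-to-date mapping (for instance, parsed from your sitemap's `<lastmod>` values or a CMS export) — the function name and inputs are illustrative, not part of any Lua Rank tooling.

```python
from datetime import date, timedelta

FRESHNESS_WINDOW_DAYS = 90  # the ~90-day recency window described above

def stale_pages(pages, today=None, window_days=FRESHNESS_WINDOW_DAYS):
    """Return (url, days_since_update) pairs for pages older than the window.

    `pages` maps URL -> last-updated date, e.g. parsed from your sitemap's
    <lastmod> values or a CMS export (hypothetical input shape).
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return sorted(
        ((url, (today - updated).days)
         for url, updated in pages.items() if updated < cutoff),
        key=lambda pair: -pair[1],  # stalest pages first
    )

# Example: one cornerstone page refreshed in March 2026, one untouched since late 2025
pages = {
    "https://example.com/pricing": date(2026, 3, 15),
    "https://example.com/guide": date(2025, 11, 2),
}
print(stale_pages(pages, today=date(2026, 5, 8)))
# → [('https://example.com/guide', 187)]
```

Running this against your top commercial pages each fortnight gives you a prioritised refresh queue rather than a guess.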

Google AI Overviews: The Authority Signal Debate

There is an ongoing debate in the GEO community about whether Google AI Overviews weight domain authority the same way traditional organic rankings do. Our position, based on what we are observing: they don't, and that's actually good news for mid-market brands.

We are seeing brands with moderate domain authority (DR 35 to 55 range) appearing in AI Overviews for competitive queries when their content structure, entity clarity, and topical depth are strong. This is a different game to traditional SEO. It rewards content architecture over pure link equity.

Data Snapshot: What 40+ Brands Are Telling Us

We pulled aggregated data from the brands currently running programmes through Lua to give you a grounded view of where gains are coming from and how long they are taking.

| Metric | Average Across Tracked Brands | Top Quartile |
| --- | --- | --- |
| Days to first ChatGPT citation (new brands) | 38 days | 22 days |
| Perplexity citation gain after content refresh | +31% within 30 days | +54% within 30 days |
| AI Overviews appearances (month 3 vs month 1) | +2.4x | +4.1x |
| Competitor citations displaced (month 6) | 6.2 per brand | 11 per brand |

The top quartile brands share two characteristics: they are executing consistently (3 to 5 hours per week, as the programme is designed for), and they are prioritising structured content updates over net-new content creation. More pages is not the answer. Better pages is.

A Counterpoint Worth Considering

We hear from some marketing teams that their AI visibility gains have not translated into measurable traffic yet. That is a fair observation, and we want to be straight about it. AI citation visibility and direct referral traffic are not the same metric, at least not yet. The referral click behaviour from AI-generated responses is still developing. Some models link out readily; others summarise without routing users anywhere.

What we believe (and what the early data supports) is that brands building citation authority now are positioning themselves for the moment AI models increase their click-through and referral behaviour. That shift is already visible in Perplexity's product direction and in Google's AI Mode rollout. Brands that delay will be building from behind when it arrives.

What to Focus on Before Issue 17

Three Actionable Priorities This Fortnight

  • Audit your last content refresh date across your top 10 commercial pages. If any of them are older than 90 days, they are at risk of losing Perplexity citations regardless of how well they were originally optimised.

  • Run a brand comparison query test in ChatGPT using your category and use case. Check whether you appear, whether you are cited, and which competitors show up with citation links you don't have. That gap is your immediate content brief.

  • Review your entity clarity on your homepage and primary service pages. AI models need to extract who you are, what you do, and who you serve in under three seconds of parsing. If that is not clean, structured, and unambiguous, your content will be summarised rather than cited.
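On the entity-clarity point, one common way to make "who you are, what you do, and who you serve" machine-readable is schema.org Organization markup embedded as JSON-LD. The newsletter does not prescribe a specific format, so treat this as an illustrative sketch with hypothetical brand values, not a guaranteed citation lever.

```python
import json

# Hypothetical example values -- swap in your own brand details.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Analytics Ltd",          # who you are
    "description": "B2B analytics platform for mid-market marketing teams.",  # what you do
    "url": "https://example.com",
    "knowsAbout": ["marketing analytics", "AI search visibility"],
    "audience": {"@type": "Audience", "audienceType": "marketing teams"},  # who you serve
}

# Embed the result in the page <head> as <script type="application/ld+json">…</script>
snippet = json.dumps(org, indent=2)
print(snippet)
```

The point is not the markup format itself but that the three facts AI models need to extract are stated explicitly and unambiguously in one place, rather than implied across a page of copy.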

On the Horizon: Claude's Sourcing Behaviour in Q3 2026

Anthropic has signalled changes to how Claude handles web-sourced responses in its API and consumer products. We are watching this closely. Based on current trajectory, Claude is likely to become a more significant citation surface by Q3 2026, particularly for B2B and professional service queries. If your programme has been focused exclusively on ChatGPT and Perplexity, now is the time to broaden your tracking. The brands that diversify their citation footprint across models will be harder to displace than those optimised for a single platform.

We track Claude visibility as part of Lua's multi-model monitoring, and we will have more specific data on its sourcing patterns in Issue 17.

That is Issue 16. If a colleague forwarded this to you and you want to receive it directly, you can subscribe at luarank.com. And if you have questions about anything covered here, reply directly. We read every response.

Sources: Lua Rank internal tracking data (40+ brands, April to May 2026); Perplexity AI product updates blog (2026); Google Search Central documentation on AI Overviews (2026); Anthropic model release notes Q1 2026; Sparktoro AI referral traffic research (2025).
