Cited — 2026-05-05


Cited newsletter 5 May 2026: ChatGPT brand citations rise, Perplexity weights recency harder, and Google AI Overviews expand internationally.


Welcome to this week's edition of Cited, the newsletter from Lua Rank covering what's actually moving in AI search visibility. We cut through the noise to bring you the signals that matter: model behaviour shifts, citation pattern changes, and practical moves you can make right now.

This week brought a few developments worth paying attention to. ChatGPT's Browse mode is surfacing more brand-specific results in commercial queries. Perplexity updated its source weighting logic (again). And Google AI Overviews are expanding into more non-English markets, which has direct implications for any brand operating internationally. Let's get into it.

What Changed This Week Across AI Search Platforms

ChatGPT: Brand Mentions Are Up in Commercial Queries

We've been tracking citation patterns in ChatGPT over the past several months across the 40+ brands on Lua's platform. The data from late April into early May shows a clear uptick in brand-specific citations appearing in transactional and comparison queries. In plain terms: when users ask "what's the best [product category]," ChatGPT is increasingly pulling named brands rather than generic category descriptions.

What seems to be driving this is structured entity data. Brands that have clear schema markup, consistent NAP (name, address, phone) data, and well-defined "About" pages with factual brand claims are getting cited more frequently. We've seen the same pattern repeat across multiple verticals, which makes us confident this is more than coincidence.

If your brand doesn't have a clearly structured entity footprint, that's the single highest-leverage fix you can make this month.
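To make "structured entity footprint" concrete, here is a minimal sketch of the kind of Organization schema markup we mean, built as a Python dict and serialised to the JSON-LD block you'd place in a page's head. Every name, URL, and address below is a placeholder, not a real brand; the property names follow the schema.org Organization type.

```python
import json

# Hypothetical brand details -- substitute your own organisation's facts.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand Ltd",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "description": "Example Brand makes accounting software for small agencies.",
    "address": {  # consistent NAP data: name, address, phone
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "London",
        "postalCode": "EC1A 1AA",
        "addressCountry": "GB",
    },
    "telephone": "+44-20-0000-0000",
    "sameAs": [  # official profiles that corroborate the entity
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Embed this in the page <head> as a JSON-LD script tag.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(organization, indent=2)
    + "\n</script>"
)
print(snippet)
```

The point is consistency: the same name, address, and phone values should appear in the markup, on the "About" page, and in any directory listings, so the models resolve them to one entity.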

Perplexity: Source Recency Is Weighting Heavier

Perplexity has always leaned toward recency, but the latest observable behaviour suggests it's weighting fresh content even more aggressively for fast-moving topics. Blog posts older than 90 days are losing citation share in categories like SaaS, fintech, and marketing technology, even when the content is technically accurate and well-structured.

The counterargument here is worth naming: not every brand needs to publish constantly. For stable categories (legal definitions, manufacturing processes, established frameworks), older evergreen content is still holding up well in Perplexity citations. The recency bias matters most where the topic itself is evolving.

What this means practically: audit when each of your high-priority pages was last updated. If any cornerstone content is sitting untouched past the 90-day mark and sits in a fast-moving category, schedule a refresh. Not a rewrite. A refresh. Update statistics, add a new section reflecting current state, and re-publish with a new date.
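That audit can be sketched in a few lines. The page inventory, URLs, and category labels below are illustrative placeholders; in practice you'd feed in an export from your CMS or sitemap, and the 90-day threshold is the heuristic from the observation above, not a published Perplexity rule.

```python
from datetime import date

# Hypothetical page inventory: (URL, last updated, category).
pages = [
    ("https://example.com/saas-pricing-guide", date(2026, 1, 10), "saas"),
    ("https://example.com/fintech-trends", date(2026, 4, 20), "fintech"),
    ("https://example.com/manufacturing-basics", date(2025, 6, 1), "manufacturing"),
]

# Categories where the recency bias bites; stable evergreen topics are excluded.
FAST_MOVING = {"saas", "fintech", "martech"}
STALE_AFTER_DAYS = 90

def refresh_candidates(pages, today):
    """Return pages in fast-moving categories untouched past the stale threshold."""
    return [
        url
        for url, updated, category in pages
        if category in FAST_MOVING and (today - updated).days > STALE_AFTER_DAYS
    ]

# The SaaS guide (115 days old) is flagged; the manufacturing page is old
# but evergreen, so it is left alone.
print(refresh_candidates(pages, date(2026, 5, 5)))
```

Note the manufacturing page is deliberately skipped even though it's the oldest: per the counterargument above, stable categories don't need the same refresh cadence.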

Google AI Overviews: International Expansion and What It Means

Google confirmed further expansion of AI Overviews into French, German, Spanish, and Japanese markets this month, with more languages to follow through Q3 2026. For brands with multi-market content strategies, this is a significant moment.

The brands that will benefit early are those already producing localised, authoritative content in those languages. Translation alone won't cut it. AI Overviews pull from locally credible sources, and a Google-translated English article rarely has the citation authority of a natively written piece.

We flagged this to our clients operating in European markets last week. If you're in that position, now is the time to assess your non-English content depth before competitors do.

The Signals Behind the Cited Newsletter Format

The Cited newsletter format is built around one core principle: the information that moves AI visibility scores changes fast, and most marketing teams don't have time to monitor four AI platforms simultaneously while running their actual programmes.

We aggregate what we see across Lua's client base, combine it with public signals from model updates and platform announcements, and translate it into actions you can take in the same week. That's the intent behind every edition.

What We're Watching (Not Just Reporting)

A few things we're tracking that haven't surfaced as major signals yet but are worth keeping in view:

  • Claude's citation behaviour: Anthropic's Claude 3.5 is increasingly used as a research assistant in professional contexts. We're starting to see it cited in enterprise-level queries. Citation patterns differ from ChatGPT and Perplexity, with Claude placing heavier weight on primary sources and official documentation.

  • Featured snippet cannibalisation by AI Overviews: Several of our clients are seeing their featured snippets disappear as AI Overviews take the top position. Traffic impact varies significantly by query type. Informational queries are hit hardest; navigational queries are largely unaffected.

  • Citation clustering: A small number of high-authority sources are being cited disproportionately across multiple AI platforms. Getting into that cluster is a medium-term objective worth building toward.

One Metric to Watch This Month

We'd encourage any marketing team tracking AI visibility to pay close attention to citation share by query intent over May and June. Not just whether you're cited, but at what stage of the funnel. Brands appearing in awareness queries but not conversion queries (or vice versa) have a very different problem to solve. Lua tracks this automatically, but even a manual review across a sample of 20 to 30 queries will tell you something useful.
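For teams doing that manual review, the calculation is simple enough to sketch. The query sample below is entirely made up for illustration; the funnel-stage labels and the idea of tagging each query by intent are the assumptions here, and in practice you'd record whether your brand was cited across your 20 to 30 sampled queries.

```python
from collections import Counter

# Hypothetical manual sample: (query, funnel stage, brand cited?).
sample = [
    ("best accounting software for agencies", "awareness", True),
    ("how to reconcile invoices", "awareness", True),
    ("what is generative engine optimisation", "awareness", False),
    ("example brand vs rival pricing", "conversion", False),
    ("example brand free trial", "conversion", False),
]

def citation_share_by_intent(sample):
    """Share of sampled queries at each funnel stage where the brand was cited."""
    cited, total = Counter(), Counter()
    for _query, stage, was_cited in sample:
        total[stage] += 1
        cited[stage] += int(was_cited)
    return {stage: cited[stage] / total[stage] for stage in total}

shares = citation_share_by_intent(sample)
print(shares)  # awareness 2/3 cited, conversion 0/2 -- a conversion-stage gap
```

In this toy sample the brand shows up in awareness queries but never in conversion queries, which is exactly the kind of funnel-stage gap the metric is meant to surface.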

AI Search: A Channel That Rewards Consistency, Not Sprints

We want to push back on something we're hearing more often: the idea that AI visibility is a quick-fix channel where you publish a few well-structured pieces and the citations follow.

Some brands do see fast results. We've had clients achieve consistent ChatGPT visibility in under 40 days. But those results come from brands that had decent structural foundations already in place and targeted the right queries with the right content format. For most businesses, AI visibility is a 3 to 6 month project before the citation trends become consistent.

The brands that will own AI search in 2027 are the ones building their programmes systematically right now, not the ones waiting for a clearer signal that "it's worth it."

What the Data Says About Early Movers

  • Average citation share gain (6 months): early movers (started Q1 2025) +34%; later entrants (started Q4 2025) +12%

  • Time to first consistent citation: early movers 28 days; later entrants 61 days

  • Competitor displacement rate: early movers high (established authority); later entrants moderate (more contested space)

  • Content investment required: early movers lower (less competition for citations); later entrants higher (catching up to entrenched brands)

Sources: Lua Rank internal client data (Q1 2025 to Q1 2026), corroborated by findings from Search Engine Land's AI Overviews coverage, Semrush Blog generative engine optimisation research, and BrightEdge AI search impact reporting.

The gap between early and late movers is real, and it compounds. AI models learn citation authority over time, and sources that have been consistently cited across high-quality, structured content build a kind of compounding credibility that's hard to displace quickly.

That's the argument for starting now, not in Q3.

Next week's Cited will dig into structured data schemas that are driving citation gains in May 2026, with specific markup patterns we're testing. If you're not already subscribed, you can sign up at luarank.com. And if you want to see where your brand currently sits across ChatGPT, Perplexity, Google AI Overviews, and Claude, Lua's assessment gives you a full 13-layer picture within 24 hours.
