Cited — 2026-05-01

Cited newsletter 1 May 2026: Perplexity source shifts, entity clustering in AI Overviews, and citation data from 40+ brands tracked by Lua Rank.

Welcome to the 1 May 2026 edition of Cited, our weekly newsletter tracking what is actually shifting in AI search visibility. Not the hype. Not the announcements. The patterns that matter for brands trying to get cited by ChatGPT, Perplexity, Google AI Overviews, and Claude.
This week: a structural shift in how Perplexity surfaces authoritative sources, the quiet rise of entity clustering in AI responses, and what the latest data from our platform tells us about citation velocity for B2B brands.
What Changed in AI Search This Week
Perplexity Tightens Its Source Criteria
Perplexity has been adjusting its source weighting model throughout Q1 2026, and this week the shift became impossible to ignore in our tracking data. Brands that had built solid citation footprints on third-party directories and aggregator sites are seeing those citations decay. What is holding up, and in some cases growing, is direct domain citation: content that lives on your own site and demonstrates genuine topical depth.
This is not a surprise if you have been following AI model behaviour closely. Perplexity has always leaned toward what its team describes as "primary source preference," but the weighting shift appears more aggressive now. A domain that publishes one genuinely comprehensive resource on a topic is outperforming domains that have scattered coverage across fifteen thin posts.
The practical implication: if your content strategy still resembles a traditional SEO volume play (publish frequently, cover broadly, target keywords), it is not going to translate well into AI citation. Depth beats breadth in 2026, and Perplexity's behaviour is the clearest signal yet.
Google AI Overviews: Entity Clustering Is Real
We have been tracking a pattern in Google AI Overviews that we are calling entity clustering: the tendency for AI Overviews to group citations around a small set of entities it has confidently resolved, rather than pulling from the widest possible source pool.
What this means practically is that Google's model appears to be building a mental map of "who knows about what" before it generates an Overview. Brands that have strong entity signals (consistent name, category, and attribute data across structured sources) are getting pulled into that map earlier. Brands that have inconsistent entity data are getting bypassed, even when their content quality is comparable.
According to research from the Search Engine Land team and our own internal tracking across 40+ brands, entity consistency now ranks as a top-three factor in AI Overview citation probability. This aligns with what Google has communicated about its Knowledge Graph dependency in generative outputs.
ChatGPT Citations: The 40-Day Window Is Narrowing
One of the findings we have shared publicly is that brands following a structured AI visibility programme can achieve their first ChatGPT citations in under 40 days. That window is still real, but it is getting more competitive. In January 2026, the median time to first citation for brands in our programme was 34 days. In April, it moved to 41 days.
That is not alarming, but it is directional. Early movers are accumulating citation authority, and new entrants are competing against an increasingly established field. The brands that moved in Q4 2025 and Q1 2026 are now compounding their advantage.
Data From the Platform: What We Are Seeing Across 40+ Brands
Every week we pull aggregate data from brands running active visibility programmes through Lua. Here is what the numbers looked like for the month of April 2026.
| Metric | April 2026 Average | Change vs. March |
|---|---|---|
| Days to first ChatGPT citation | 41 days | +7 days |
| Brands cited in Perplexity (active programme) | 78% | +4 pts |
| Brands cited in Google AI Overviews | 61% | +9 pts |
| Average citation mentions per brand/week | 23 | +5 |
| Brands outranking a direct competitor in AI search | 52% | +11 pts |
The Google AI Overviews number is the one worth sitting with. A 9-point jump in a single month is significant. It suggests that the structured content and entity work brands have been doing over the past quarter is now clearing whatever threshold Google's model requires before it starts pulling from a domain. The lag between implementation and citation is real, but so is the payoff.
Where Brands Are Still Getting Stuck
Not everything is moving in the right direction. Two patterns are holding brands back consistently.
Schema implementation gaps: Brands that have the right content but incomplete or incorrectly implemented structured data are losing citation opportunities to competitors who have done the technical work. This is fixable, and Lua's automated execution handles a portion of it directly, but it requires someone to actually run the programme.
Inconsistent publishing cadence: AI models appear to reward recency signals alongside authority signals. Brands that publish strong content in a burst and then go quiet for six weeks are not sustaining their citation momentum the way brands with a consistent weekly cadence are.
There is a counterargument worth acknowledging here: some brands with irregular publishing schedules are still performing well in AI citations because their content is genuinely exceptional in depth and specificity. Cadence matters less when the content is so comprehensive that models keep returning to it. But most brands are not operating at that level of content investment, and for them, consistency is the more reliable lever.
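Closing the schema gap usually starts with valid JSON-LD on the pages that describe your brand. Here is a minimal sketch of generating a schema.org Organization block; the field names follow the schema.org vocabulary, but the helper functions and example values are ours, not a prescribed Lua workflow:

```python
import json


def organization_schema(name, url, same_as=None, description=None):
    """Build a minimal schema.org Organization JSON-LD object.

    Consistent name and sameAs data across pages is the kind of entity
    signal discussed above; the fields here are illustrative, not exhaustive.
    """
    schema = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
    }
    if same_as:
        # Links to authoritative profiles (Wikidata, LinkedIn, directories)
        # that help resolve the brand to a single entity.
        schema["sameAs"] = same_as
    if description:
        schema["description"] = description
    return schema


def as_script_tag(schema):
    """Render the JSON-LD inside the <script> tag a page's <head> expects."""
    body = json.dumps(schema, indent=2)
    return f'<script type="application/ld+json">\n{body}\n</script>'
```

Keeping the `name`, `url`, and `sameAs` values identical everywhere the block is emitted is the point: divergent values across pages are exactly the inconsistency that suppresses entity resolution.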
What to Prioritise in May 2026
Three Actions Based on This Week's Data
If you are running an AI visibility programme, here is where we would focus attention in May based on what the data is showing.
Audit your entity data across structured sources. Run your brand name, category, and key attributes through Google's Knowledge Graph Search API, Wikidata, and your primary industry directories. Inconsistencies in how your brand is described across these sources are directly suppressing your AI Overview citation rate. The Google Knowledge Graph API is free and takes about 20 minutes to work through properly.
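The entity audit above can be scripted against the Knowledge Graph Search API. A rough sketch, assuming you have an API key from Google Cloud Console; the response parsing follows the API's JSON-LD result shape, but the function names and the audit structure are our own illustration:

```python
import json
import urllib.parse
import urllib.request

KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"


def build_kg_query_url(brand_name, api_key, limit=3):
    """Build a Knowledge Graph Search API request URL for a brand name."""
    params = urllib.parse.urlencode({
        "query": brand_name,
        "key": api_key,
        "limit": limit,
        "indent": "true",
    })
    return f"{KG_ENDPOINT}?{params}"


def extract_entity_summaries(response_json):
    """Pull name, types, description, and score from each KG result.

    Comparing these fields against how your own site and directories
    describe the brand is the consistency check described above.
    """
    summaries = []
    for item in response_json.get("itemListElement", []):
        result = item.get("result", {})
        summaries.append({
            "name": result.get("name"),
            "types": result.get("@type", []),
            "description": result.get("description"),
            "score": item.get("resultScore"),
        })
    return summaries


def audit_brand(brand_name, api_key):
    """Fetch and summarise how the Knowledge Graph describes a brand."""
    url = build_kg_query_url(brand_name, api_key)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return extract_entity_summaries(data)
```

If the returned `name`, `@type`, or `description` disagrees with what Wikidata or your industry directories say, that mismatch is the first thing to fix.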
Consolidate thin content into depth pieces. Take your three or four highest-traffic topic areas and identify whether you have one genuinely comprehensive resource in each, or just a cluster of shorter posts. Merge and expand where the depth is not there yet. This directly addresses the Perplexity source weighting shift we described above.
Set a publishing cadence and stick to it. Even one substantial piece per week, published consistently, outperforms sporadic bursts. If your team cannot sustain that cadence, Lua's execution calendar schedules and sequences the work so nothing falls through.
Looking Further Out: The Personalisation Layer
One pattern we expect to become a major factor by Q3 2026 is AI search personalisation. Right now, most AI model responses are relatively consistent across users for the same query. As models integrate more user context (location, prior queries, inferred preferences), citation patterns will start to diverge by segment.
Brands that have built broad citation authority will still benefit, but the brands that have structured their content around specific audience segments and use cases will likely see a disproportionate advantage in personalised AI responses. This is not hypothetical: Perplexity has already signalled personalisation as a roadmap priority, and OpenAI's product updates point in the same direction.
We will be watching this closely and reporting what the data actually shows, not what the announcements promise. That is the point of the Cited newsletter: grounded intelligence for marketing teams that need to make real decisions, not commentary on press releases.
Next edition drops 8 May. If you want to track your own brand's citation performance between now and then, Lua's visibility dashboard runs continuous monitoring across ChatGPT, Perplexity, Google AI Overviews, and Claude.