Cited — 2026-04-30

Stay ahead with Cited, the newsletter delivering AI search visibility insights for modern marketing teams.

Cited newsletter, 30 April 2026: ChatGPT citation shifts, Perplexity's update, and what's driving AI search visibility gains across Lua clients.


Welcome to the 30 April 2026 edition of Cited, our regular newsletter tracking what matters in AI search visibility. This week: a significant shift in how ChatGPT surfaces brand sources, early data from Perplexity's updated citation weighting, and a pattern we're seeing across Lua's client base that's worth paying attention to.

If you're new here, Cited is published by the team at Lua Rank. We track AI visibility across ChatGPT, Perplexity, Google AI Overviews, and Claude so that marketing teams at growing businesses can act on what's actually changing, not what was changing six months ago.

ChatGPT Is Rewarding Structured Authority More Aggressively

We've tracked a meaningful uptick in ChatGPT citations pointing to pages with clear entity structures: defined authorship, consistent internal linking to topical clusters, and explicit organisational schema. This isn't new in principle, but the weighting appears to have shifted. Pages that previously ranked mid-tier in model responses are now appearing in first and second citation positions, and the common thread is structural credibility rather than keyword density.

What this means practically: if your site has strong content but weak entity signals (no author bios, no organisation schema, thin internal link architecture), you're likely leaving citation positions on the table. We've seen this play out across several Lua clients in the B2B services space, where tightening up schema and authorship lifted ChatGPT visibility scores within three weeks.
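For teams unsure what "organisation schema" looks like in practice, here is a minimal sketch of schema.org Organization markup of the kind described above. The company name, URLs, and logo path are placeholders, not a real implementation; the block would normally sit in a `<script type="application/ld+json">` tag in the page head.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Advisory Ltd",
  "url": "https://www.example-advisory.com",
  "logo": "https://www.example-advisory.com/assets/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-advisory"
  ]
}
```

Even this minimal version gives a model an unambiguous entity to attach your content to; richer properties (founding date, contact points, key people) can be layered on once the basics validate.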

Perplexity's Citation Weighting Update: What We Know So Far

Perplexity pushed a quiet update to its source-selection logic in late April. Early signals from our tracking suggest it's deprioritising pages with high ad density and prioritising sources that demonstrate what we'd call "answer completeness": content that directly addresses a question, provides context, and doesn't bury the lead behind navigation or promotional copy.

This aligns with something we've been saying for a while. AI models don't browse like humans. They extract. If your page structure forces a model to work hard to find the answer, it will find a source that doesn't make it work at all. The brands winning Perplexity citations right now tend to lead with the direct answer, then support it. Not the other way around.

| Platform | Primary Citation Signal (April 2026) | Notable Change |
| --- | --- | --- |
| ChatGPT | Entity structure, authorship clarity | Increased weighting on organisational schema |
| Perplexity | Answer completeness, low ad friction | Deprioritisation of high-ad-density pages |
| Google AI Overviews | E-E-A-T signals, page freshness | Stronger freshness signal for time-sensitive queries |
| Claude | Source credibility, factual precision | More conservative citation of opinion-heavy content |

Google AI Overviews: Freshness Is Back in Play

Google's AI Overviews appear to be re-weighting content freshness for queries with a time-sensitive dimension. We noticed this first in finance and SaaS categories, where pages updated within the last 60 days are consistently outperforming older evergreen content in overview inclusion, even when the older content has stronger backlink profiles.

The counterargument here is real: freshness without depth gets you nowhere. A page updated last week with thin content isn't going to displace a comprehensive, well-linked resource. But if you have strong foundational content that hasn't been touched in 18 months, now is a reasonable time to review and refresh it. Not to game the algorithm, but because genuinely outdated content is a credibility problem regardless of what AI models think of it.
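When you do refresh a page, make the update visible to machines as well as readers. A hedged sketch of Article markup with explicit publish and modification dates follows; the headline, dates, and author details are illustrative placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How Mid-Market SaaS Teams Budget for AI Visibility",
  "datePublished": "2024-11-03",
  "dateModified": "2026-04-12",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://www.example-advisory.com/team/jane-doe"
  }
}
```

The `dateModified` field should only change when the content substantively changes; bumping it on cosmetic edits is exactly the kind of gaming that erodes credibility over time.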

A Pattern We're Seeing Across Lua Clients

The "Invisible Brand" Problem Is Compounding

Across the 40+ brands currently running programmes on Lua, we're seeing a clear bifurcation. Brands that started building AI visibility 6 to 12 months ago are now defending and extending positions. Brands starting today are entering a more competitive environment than existed even in Q3 2025.

This isn't a scare tactic. It's just compounding. The brands that got cited early built topical authority that newer entrants now have to work harder to displace. The gap is still closeable, but it requires a more disciplined execution plan than it did a year ago.

What we're recommending to clients starting now:

  • Focus initial effort on a narrow set of high-intent queries where you have genuine depth, not broad coverage where you're competing with established players from day one.

  • Prioritise entity signals and structured content before volume. Getting the foundations right accelerates everything that follows.

  • Track competitor visibility from the start. You need a baseline to know whether your programme is working relative to the market, not just in absolute terms.

What's Working: A Short Data Point

One client in the professional services space, a 60-person firm operating across Europe and the Middle East, achieved first-page ChatGPT citations for three target queries within 38 days of starting their Lua programme. The work involved zero paid promotion. It was purely structural: schema implementation, FAQ consolidation, authorship pages, and two focused content pieces built around their highest-value service queries.
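The FAQ consolidation step mentioned above typically pairs rewritten question-and-answer content with FAQPage markup. A minimal sketch, with a hypothetical question and answer standing in for the client's real content:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a cross-border tax registration take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most registrations complete within 4 to 6 weeks, depending on the jurisdiction and the completeness of the submitted documentation."
      }
    }
  ]
}
```

Note the answer leads with the direct figure before the caveats, mirroring the answer-first structure that Perplexity's update appears to reward.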

That's not an outlier. It's roughly consistent with what we see when clients follow the execution plan without skipping steps.

Forward Look: What to Watch in May 2026

Multi-Model Visibility Divergence Is Getting Wider

One forward-looking observation worth flagging: the ranking signals across ChatGPT, Perplexity, Google AI Overviews, and Claude are diverging, not converging. Eighteen months ago, optimising for one model broadly helped your visibility across others. That's less true now. Each model has developed distinct source-selection behaviour, and a brand that appears consistently in ChatGPT responses may be almost invisible in Perplexity for the same query.

This has a practical implication. Single-platform tracking gives you an incomplete picture. If your current approach to measuring AI visibility only looks at one model, you're missing the majority of the landscape. We expect this divergence to continue through 2026 as models compete on the quality and distinctiveness of their answers, not just the speed of delivery.

Longer term, the shift toward agentic AI (models that don't just answer questions but take actions on behalf of users) changes the citation dynamic further. When an AI agent is selecting a vendor, booking a service, or compiling a supplier list, the citation logic shifts from "what source answers this question best" to "what source is trusted enough to act on." Brands that build citation authority now are building the trust signals that agentic models will rely on next.

We're tracking this closely. Expect a deeper breakdown in next month's edition of the Cited newsletter, with data from our own model tracking and observations from the Lua client base.

If someone forwarded this edition to you and you want to receive future issues directly, you can subscribe at luarank.com. And if you're evaluating whether an AI visibility programme makes sense for your business right now, the Lua platform runs a free 13-layer assessment of your site. No agency retainer required.


Sources: OpenAI usage and citation pattern analysis (April 2026 internal tracking); Perplexity source-weighting documentation and community reports, April 2026; Google Search Central blog, AI Overviews freshness signals update; Anthropic model card documentation, Claude citation behaviour; BrightEdge AI Search Report Q1 2026.
