Cited — 2026-04-25

Cited newsletter: AI search visibility shifts for April 2026 — structured data, entity consistency, and what's driving citations across ChatGPT, Perplexity, and Claude.

Welcome to this edition of Cited, the newsletter from Lua Rank tracking what's actually happening in AI search visibility. In each issue, we pull the signal out of the noise: model behaviour shifts, citation pattern changes, content strategies that are working, and the competitive moves your peers are making right now.

This week had some meaningful developments. Google's AI Overviews are behaving differently around structured data. Perplexity has quietly expanded its source attribution window. And we're seeing early evidence that brands with consistent entity presence across multiple platforms are pulling ahead in ChatGPT citations. Let's get into it.

Google AI Overviews: Structured Data Is Getting More Weight

Over the past two weeks, we've tracked a measurable shift in how Google's AI Overviews select sources for inclusion. Pages with properly implemented FAQ schema, HowTo schema, and Article schema are appearing in Overviews at a higher rate than equivalent pages without structured markup. This isn't a new principle, but the delta is widening.

What's different now is that Google appears to be using schema not just as a formatting signal but as a credibility signal. Pages where the structured data matches the on-page content closely (rather than being added as an afterthought) are performing significantly better. Mismatched schema, where the markup says one thing and the visible content says another, seems to be actively hurting inclusion rates.

Our recommendation: if you haven't audited your structured data in the last 90 days, do it now. Specifically, check that your FAQ content in schema matches what's visible on the page, not a summarised or rephrased version.
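That schema-versus-visible-content check can be partly automated. Below is a minimal sketch, assuming a simple case: it uses Python's standard-library HTML parser to pull FAQPage JSON-LD out of a page and flag any question whose schema answer doesn't appear verbatim in the visible copy. The class and function names are illustrative, and a real audit would compare text more loosely than an exact substring match.

```python
# Minimal structured-data audit sketch: extract FAQPage JSON-LD blocks from a
# page and flag schema answers that don't appear in the visible page text.
# Names here are illustrative; a production audit would normalise punctuation
# and compare text more loosely than this exact-substring check.
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects <script type="application/ld+json"> contents and visible text."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []        # raw JSON-LD strings
        self.visible_text = []  # everything rendered to the reader

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append("".join(self._buf))

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)
        else:
            self.visible_text.append(data)

def audit_faq_schema(html: str) -> list[str]:
    """Return the FAQ questions whose schema answer is missing from the page."""
    parser = JsonLdExtractor()
    parser.feed(html)
    # Collapse whitespace so line breaks don't cause false mismatches.
    page_text = " ".join(" ".join(parser.visible_text).split())
    mismatches = []
    for block in parser.blocks:
        data = json.loads(block)
        if data.get("@type") != "FAQPage":
            continue
        for item in data.get("mainEntity", []):
            answer = " ".join(item["acceptedAnswer"]["text"].split())
            if answer not in page_text:
                mismatches.append(item["name"])
    return mismatches
```

Run this against a page where the schema says one thing and the copy says another, and the mismatched questions come back as a list you can work through.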

Perplexity's Source Attribution Window Has Expanded

Perplexity now appears to be pulling citations from a broader content set than it was six months ago. Historically, it favoured established media sources and high-domain-authority publishers. We're now seeing mid-market business blogs and specialist niche sites getting cited consistently, provided they meet a few conditions:

  • The content answers a specific question directly, not generally

  • The page loads quickly (Perplexity appears to weight page speed more than previously documented)

  • The brand has some form of entity recognition across the web (mentions in directories, other publications, social profiles)

This is good news for brands that have felt locked out of AI citations because they don't have the domain authority of a TechCrunch or a Forbes. The window is opening. The brands that move now will establish citation history before the window narrows again, which it historically does as models get more confident about their source hierarchies.

ChatGPT and the Entity Consistency Signal

We've been running a controlled observation across 40+ brands on the Lua platform, and one pattern is hard to ignore: brands with consistent entity information across their website, Google Business Profile, LinkedIn, industry directories, and third-party mentions are getting cited in ChatGPT at a disproportionately higher rate.

The consistency that matters includes:

  • Brand name formatted identically across all sources

  • Consistent description of what the company does (not word-for-word identical, but semantically aligned)

  • Matching founder or leadership names where applicable

  • Location and contact information that agrees across platforms

This mirrors what happened in traditional SEO with NAP consistency for local search. The mechanism is different, but the principle is the same. AI models are pattern-matching across sources to build a confident picture of who you are. Inconsistency creates ambiguity, and ambiguous entities don't get cited.
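The same audit can be sketched in code. This is a minimal illustration, not Lua's actual implementation: it treats the website record as canonical and flags any platform whose name or location disagrees after trivial normalisation. The record fields and sources are assumptions for the example, and the "semantically aligned" description check would need fuzzy matching beyond this sketch.

```python
# Minimal entity-consistency check sketch (illustrative, not Lua's pipeline).
# The website record is treated as canonical; other platform records are
# flagged when their fields disagree after light normalisation. Semantic
# comparison of company descriptions is out of scope for this sketch.
from dataclasses import dataclass

@dataclass
class EntityRecord:
    source: str      # e.g. "website", "linkedin", "google_business"
    brand_name: str
    location: str

def normalise(value: str) -> str:
    """Case-fold and collapse whitespace so trivial variants don't flag."""
    return " ".join(value.lower().split())

def find_mismatches(records: list[EntityRecord]) -> list[str]:
    """Flag sources whose fields disagree with the canonical website record."""
    canonical = next(r for r in records if r.source == "website")
    issues = []
    for r in records:
        if normalise(r.brand_name) != normalise(canonical.brand_name):
            issues.append(f"{r.source}: brand name '{r.brand_name}' differs")
        if normalise(r.location) != normalise(canonical.location):
            issues.append(f"{r.source}: location '{r.location}' differs")
    return issues
```

Feed it one record per platform and the output is a platform-by-platform list of mismatches, which is the shape of report the entity audit described below produces.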

One Thing Worth Watching

Claude's Changing Citation Behaviour

Anthropic released an update to Claude in mid-April that appears to have affected how it handles source selection in longer-form responses. We're seeing Claude pull from content that explicitly structures its argument (claim, evidence, implication) rather than content that presents information as a flat list.

This is a small but notable shift. If your content strategy relies heavily on listicles and bullet-point summaries, you may want to add more editorial structure to key pages. Not every page needs to be a 2,000-word argument, but your most important landing pages and pillar content should have a clear logical flow that an AI can extract and represent faithfully.

There's a fair counterargument here: some industries and query types lend themselves naturally to list-based answers, and overcomplicating that format won't help. The key is knowing which pages you're trying to rank in AI responses and formatting those deliberately, while leaving your reference and glossary content in whatever format serves readers best.

A Forward-Looking Point on Model Competition

The AI search landscape in 2026 is not going to look like it does today. We're already tracking the emergence of regional AI search models in Europe and Asia that have different training data, different citation preferences, and different content norms. Brands that build their AI visibility programmes around a single model (even a dominant one like ChatGPT) are making a strategic bet that may not age well.

The smarter approach is to build for the principles that tend to travel across models: structured content, entity consistency, demonstrated expertise, and frequent citation by credible sources. These aren't model-specific optimisations. They're the foundations that hold up regardless of which AI your next customer uses to research a purchase decision.

AI Platform          | Key Citation Signal (April 2026)      | Content Format Priority
---------------------|---------------------------------------|----------------------------------
ChatGPT              | Entity consistency across web         | Authoritative long-form, Q&A
Perplexity           | Direct question-answering, page speed | Specific, concise answers
Google AI Overviews  | Matched structured data               | Schema-supported content
Claude               | Logical argument structure            | Claim-evidence-implication format

What We're Doing About It on the Lua Platform

Entity Layer Updates in the Assessment

Based on the ChatGPT entity consistency data, we've updated Lua's 13-layer website assessment to include a dedicated entity audit check. From this week, every new Lua scan will flag inconsistencies between your on-site entity information and your most important off-site profiles. You'll get a list of specific mismatches and exact instructions for fixing them, platform by platform.

This isn't a soft recommendation. We'll tell you which directory has the wrong brand description, which LinkedIn page uses a different company name format, and what to update it to. Execution steps are pre-written and scheduled into your Lua task calendar automatically.

Structured Data Tasks Added to the Execution Calendar

For brands already on the platform, structured data implementation tasks will begin appearing in your week-by-week execution calendar where they haven't previously been prioritised. These are sequenced so that high-traffic, high-intent pages get addressed first. If you're on a CMS like WordPress, Webflow, or Shopify, the task instructions are platform-specific, not generic.

The goal is always the same: you shouldn't have to figure out what to do next. The programme tells you, and where we can execute tasks automatically, we do.

A Note on Competitor Benchmarking

We're also expanding the competitor visibility tracking dashboard this month to include Claude alongside ChatGPT, Perplexity, and Google AI Overviews. If you've been monitoring how often your brand appears versus competitors in AI responses, you'll shortly have Claude data in the same view. Early access is rolling out to existing subscribers this week.

Sources referenced in building this edition include research from SparkToro on content citation patterns, Semrush's 2025 AI search behaviour report, and Anthropic's published documentation on Claude's retrieval approach. Internal data is drawn from Lua's tracked brand set across April 2026.

That's this week's Cited. If something in here is directly relevant to a programme decision you're making, act on it this week. The brands building AI visibility right now are the ones who will be hardest to displace in six months.