Cited Newsletter Issue 10 (April 28th 2026)

In this issue: ChatGPT multi-turn citation shifts, April 2026 cross-model citation data, and how long AI search visibility actually takes to build.

Welcome to Cited, our regular briefing on what's actually moving in AI search visibility. This edition covers three things worth paying attention to this week: a structural shift in how ChatGPT surfaces brands in multi-turn conversations, new data on citation patterns across AI models, and a practical question we're hearing a lot from marketing teams right now.

No padding, no speculation dressed up as insight. Just what we're seeing, what it means, and what to do about it.

What We're Watching: ChatGPT's Multi-Turn Behaviour Is Changing

Over the past few weeks, we've noticed something consistent across the brands we track inside Lua. ChatGPT is increasingly distinguishing between brands it cites in *initial* responses and brands it returns to when a user asks follow-up questions. The follow-up behaviour matters more than most people realise.

Here's the dynamic: a user asks "what's the best project management software for a remote team?" ChatGPT surfaces four or five names. The user then narrows it down: "which of those has the best mobile experience?" The brands that survive that second filter are not always the ones with the most backlinks or the highest domain authority. They're the ones whose content specifically and directly addresses that narrower question.

What This Means for Your Content Strategy

If you're only optimising for top-level queries, you're building visibility that doesn't hold up under pressure. The brands winning multi-turn conversations have content that answers the specific sub-questions users ask after the initial response. Think of it as depth of coverage, not just breadth.

Practically, this means auditing your existing content for what we'd call "second-layer" queries. If someone discovers your brand in a ChatGPT response and then digs into a specific capability, feature, or use case, is your content there to support that follow-up? If not, you're visible once and then invisible.
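
If you want to test this dynamic yourself, a rough spot-check is straightforward. Below is a minimal sketch using the official OpenAI Python client: it runs an initial query, feeds the answer back, asks the narrowing follow-up, and checks whether a brand survives both turns. The brand name, queries, and model choice are placeholders, the API won't perfectly mirror the ChatGPT product, and a single run proves nothing; repeat it across many query variations before drawing conclusions.

```python
# Minimal multi-turn spot-check: does a brand survive the second filter?
# Assumes the official openai client (pip install openai) and an
# OPENAI_API_KEY in the environment. Brand, queries, and model are
# illustrative placeholders, not a tracked dataset.
from openai import OpenAI

client = OpenAI()

BRAND = "ExampleBrand"  # hypothetical brand to check
INITIAL = "What's the best project management software for a remote team?"
FOLLOW_UP = "Which of those has the best mobile experience?"

messages = [{"role": "user", "content": INITIAL}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
first_answer = first.choices[0].message.content or ""

# Feed the model's own answer back, then apply the narrowing question
messages += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": FOLLOW_UP},
]
second = client.chat.completions.create(model="gpt-4o", messages=messages)
second_answer = second.choices[0].message.content or ""

print(f"Cited in turn 1: {BRAND.lower() in first_answer.lower()}")
print(f"Survived turn 2: {BRAND.lower() in second_answer.lower()}")
```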

The Counter-Argument Worth Considering

Some teams we speak to push back on this, arguing that top-of-funnel AI visibility is enough because users will click through to the website anyway. There's some truth to that for certain categories. But for considered purchases (software, services, B2B tools), users increasingly validate their AI-assisted shortlist *within* the AI interface before they ever click. If your brand drops out of the conversation at step two, you're not on the shortlist. The click never comes.

Citation Data: What We're Seeing Across Models This Month

We track citation patterns across ChatGPT, Perplexity, Google AI Overviews, and Claude for the brands on our platform. Here's a snapshot of what the data shows for April 2026.

| AI Platform | Primary Citation Driver | Avg. Citation Depth (pages) | Trend vs. March |
| --- | --- | --- | --- |
| ChatGPT | Structured FAQ content + schema | 3.2 | Up 14% |
| Perplexity | Recent indexed content (under 60 days) | 1.8 | Up 22% |
| Google AI Overviews | E-E-A-T signals + author authority | 2.4 | Stable |
| Claude | Long-form authoritative content | 4.1 | Up 8% |

A few things stand out here. Perplexity's citation depth is the lowest of the four, but its recency bias is the strongest. If you publish something useful today, Perplexity can surface it within days. That's a meaningful opportunity for brands that can produce targeted content quickly.

Claude's citation depth is the highest, which aligns with what we see in its outputs: it tends to draw on fewer sources but use them more extensively. Getting cited by Claude once carries more weight than appearing briefly in a ChatGPT response.
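
If you're building your own tracking and want to reproduce the depth metric, the simplest workable definition is the average number of distinct pages cited per response, aggregated per platform. A minimal sketch under that assumption, with an illustrative in-memory log standing in for captured response data:

```python
# Sketch: average citation depth per platform from a response log.
# "Depth" here means distinct pages cited per response (a simple
# working definition, not a formal standard). The log is illustrative;
# a real tracker would read from a database of captured responses.
from collections import defaultdict
from statistics import mean

# Each record: (platform, query, pages cited in that response)
citation_log = [
    ("ChatGPT", "best PM software", ["a.com/faq", "a.com/pricing", "b.com/review"]),
    ("ChatGPT", "best mobile PM app", ["a.com/mobile", "d.com/compare", "a.com/faq"]),
    ("Perplexity", "best PM software", ["c.com/2026-roundup", "a.com/faq"]),
]

depths = defaultdict(list)
for platform, _query, pages in citation_log:
    depths[platform].append(len(set(pages)))

for platform, per_response in depths.items():
    print(f"{platform}: avg citation depth = {mean(per_response):.1f} pages")
```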

Platform-Specific Optimisation Is Not Optional

One thing this data confirms is that a single content strategy won't optimise for all four models simultaneously. Each model has distinct extraction behaviour, different recency weighting, and different preferences for content format and structure. This is why platform-specific execution is built into everything we do at Lua. The brands seeing consistent multi-model visibility are the ones treating each AI platform as a distinct channel, not an afterthought.

The Question We Keep Getting Asked

"How long does it actually take to see results?"

Reasonable question, and one that deserves a straight answer rather than the usual "it depends" hedge.

Across the 40+ brands we work with, the pattern is fairly consistent. First measurable citations typically appear within three to six weeks of starting a structured AI visibility programme, assuming the technical and content groundwork is done correctly. "Measurable" here means appearing in tracked queries across at least one AI model, not just occasional untracked mentions.

Sustained, multi-model visibility (showing up consistently across ChatGPT, Perplexity, and AI Overviews for your core queries) takes longer. Realistically, three to four months for brands starting from a low base. Brands with existing domain authority and well-structured content get there faster.

What Accelerates the Timeline

  • Schema markup applied to existing content (this is the highest-leverage technical action most brands haven't taken; see the sketch after this list)

  • Publishing content that directly answers specific, mid-funnel queries rather than broad informational topics

  • Getting cited in third-party sources that AI models already trust (industry publications, established directories)

  • A consistent publishing cadence: even a modest 2-3 pieces per month outperforms occasional large content drops
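
On the schema point: given that structured FAQ content plus schema is ChatGPT's primary citation driver in the table above, FAQPage markup is a sensible first target. Below is a minimal sketch of generating that markup in Python; the questions, answers, and brand are hypothetical placeholders, and you should validate the output with Google's Rich Results Test before deploying it.

```python
# Sketch: generate FAQPage JSON-LD from existing FAQ copy.
# Questions, answers, and brand are hypothetical placeholders.
import json

faqs = [
    ("Does ExampleBrand have a mobile app?",
     "Yes. Native iOS and Android apps with full offline support."),
    ("How long does onboarding take?",
     "Most teams are fully set up within one week."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page
print(json.dumps(schema, indent=2))
```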

What Slows It Down

  • Treating AI visibility as a one-time project rather than an ongoing programme

  • Optimising only for Google without adapting content structure for AI extraction

  • No competitor benchmarking, which means no way to know whether you're gaining ground or falling behind (a minimal benchmark is sketched below)
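
On the benchmarking point, the core measurement is less exotic than it sounds: the share of tracked AI responses that cite you versus each competitor. Here's a minimal sketch with hypothetical brands and an illustrative response log; a real programme would break this out per platform and per query set, and track the trend over time.

```python
# Sketch: competitor share of voice across tracked AI responses.
# Brands and the response log are hypothetical; real data would come
# from systematically captured AI responses, split by platform.
from collections import Counter

BRANDS = ["ExampleBrand", "RivalOne", "RivalTwo"]

# Each entry: the brands cited in one tracked AI response
tracked_responses = [
    ["ExampleBrand", "RivalOne"],
    ["RivalOne"],
    ["ExampleBrand", "RivalTwo"],
    ["RivalOne", "RivalTwo"],
]

mentions = Counter(brand for response in tracked_responses for brand in response)
total = len(tracked_responses)

for brand in BRANDS:
    print(f"{brand}: cited in {mentions[brand] / total:.0%} of tracked responses")
```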

A Forward-Looking Note

The brands investing in AI visibility now are building a structural advantage that competitors will find difficult to close 12 to 18 months from now. AI models develop citation habits. They build a kind of implicit authority map for each category, and once that map is established, new entrants face a harder path to displacement. This is not speculation: it mirrors what happened in organic search between 2012 and 2016, when early SEO investment compounded into durable rankings that late movers struggled to overcome.

The window for early-mover advantage in AI search is open. It won't stay open indefinitely.

If you want to track what AI models are actually saying about your brand right now, Lua's platform runs that assessment across all four major models and shows you exactly where you stand against competitors. The gap between knowing and not knowing is where most brands are losing ground.

Cited publishes weekly. If a colleague forwarded this to you and you'd like to receive it directly, subscribe at luarank.com.


Sources referenced in this edition:

  1. OpenAI usage and citation behaviour analysis, internal Lua platform data, April 2026

  2. Perplexity AI content indexing patterns, Search Engine Journal, March 2026

  3. Google Search Generative Experience and E-E-A-T signals, Google Search Central documentation, 2025-2026

  4. Anthropic Claude model behaviour and source attribution, Anthropic model card documentation, 2025
