Cited Newsletter Issue 18 (May 10th 2026)


Cited Newsletter Issue 18: what's driving AI citation in 2026, GEO performance data from 40+ brands, and where AI search heads next.


Welcome back to the Cited newsletter. Issue 18 lands at a genuinely interesting moment. AI search is no longer something marketing teams are "keeping an eye on." It's a channel that's actively sending traffic, influencing purchase decisions, and rewarding the brands that prepared early. If you've been following along since Issue 1, you'll have watched that shift happen in real time. If you're joining us fresh, the short version is this: the brands that get cited by AI models win. The ones that don't become invisible to a fast-growing segment of buyers.

This issue covers three things we've been tracking closely: the structural shift in how AI models select citations, what's working right now across our 40+ active brand programmes, and where we think the next 12 months take this channel.

How AI Citation Selection Actually Works in 2026

There's still a lot of noise around this. Some of it comes from SEO practitioners mapping old mental models onto a new channel. Some of it comes from AI companies being deliberately vague. Here's what we know from running visibility tracking across ChatGPT, Perplexity, Google AI Overviews, and Claude.

Traditional SEO authority (domain rating, backlink volume, PageRank signals) still matters, but it's not the primary driver of AI citation. The models are selecting sources based on a different set of signals, including:

  • Topical depth and specificity: Brands that publish narrow, expert-level content on a defined subject area are getting cited more consistently than generalist publishers with high domain authority.

  • Structured data and entity clarity: Pages that clearly define what a brand does, who it serves, and what makes it different are easier for models to extract and attribute correctly.

  • Freshness signals: Perplexity in particular is weighting recently updated content. Pages that haven't been touched in 12+ months are losing ground, even where the information is still accurate.

  • Third-party corroboration: If your claims are only asserted on your own site and nowhere else on the web, models treat them as unverified. Coverage in trade publications, directories, and review platforms reinforces citation confidence.
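To make the "structured data and entity clarity" signal concrete, here is a minimal sketch of Organization-level JSON-LD stating what a brand does, who it serves, and which third-party profiles corroborate it. The brand name, URLs, and description are hypothetical placeholders, not real data.

```python
import json

# Minimal Organization JSON-LD sketch (schema.org vocabulary).
# All field values below are hypothetical placeholders.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    # What the brand does and who it serves, stated plainly so a
    # model can extract and attribute it correctly.
    "description": (
        "Example Brand provides AI search visibility tooling "
        "for mid-market B2B marketing teams."
    ),
    # Third-party profiles that corroborate the entity off-site.
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
    ],
}

# Serialise for embedding in a <script type="application/ld+json"> tag.
markup = json.dumps(organization, indent=2)
print(markup)
```

The `sameAs` links do double duty: they disambiguate the entity for structured-data consumers and point at the third-party corroboration mentioned above.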

None of this is revolutionary if you've been following the Cited newsletter from the start. But the degree to which these signals now dominate over traditional authority metrics is accelerating faster than most teams anticipated.

The Citation Gap Is Widening

We ran a benchmarking sweep across 14 competitive verticals in April 2026. In 11 of those 14, fewer than 15% of brands in the space were being cited with any consistency across more than two AI platforms. That's the opportunity. First-mover advantage in AI citation is real, and it compounds. Once a model learns to associate your brand with a topic, it takes sustained effort from a competitor to displace you.

The brands that are currently claiming that space share one characteristic: they started their GEO (Generative Engine Optimisation) programmes at least six months ago and treated it as a structured, ongoing process rather than a one-off content project.

What's Working Right Now Across Our Brand Programmes

Across the 40+ brands running active programmes through Lua, we're seeing consistent patterns in what's driving visibility gains. Here's a snapshot from the past 60 days.

Performance Snapshot: April to May 2026

| Optimisation Activity | Avg. Visibility Lift | Primary Platform Benefiting | Time to Measurable Impact |
| --- | --- | --- | --- |
| Schema markup implementation (FAQ, HowTo, Product) | +34% | Google AI Overviews | 2 to 4 weeks |
| Entity page creation (brand, team, methodology) | +28% | ChatGPT, Claude | 4 to 6 weeks |
| Content refresh with citation-ready formatting | +22% | Perplexity | 1 to 3 weeks |
| Third-party listing and review acquisition | +19% | All platforms | 6 to 10 weeks |
| Conversational FAQ content targeting query patterns | +41% | ChatGPT, Perplexity | 3 to 5 weeks |

The standout result this quarter is conversational FAQ content. Brands that built out pages directly addressing the specific question patterns their buyers ask AI models are seeing the biggest lifts, and the fastest. This is not generic FAQ content. It's content structured around *how people actually phrase questions to AI*, which is different from how they phrase keyword searches.
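Conversational FAQ content pairs naturally with the FAQ schema markup from the table above. A sketch of how that might look: FAQPage JSON-LD built from question phrasings written the way buyers actually ask AI models. The questions and answers here are hypothetical examples, not drawn from any real programme.

```python
import json

# Sketch of FAQPage JSON-LD (schema.org vocabulary) built from
# conversational question phrasings. Q&A pairs are hypothetical.
faqs = [
    ("What's the difference between GEO and traditional SEO?",
     "GEO optimises content so AI models cite it as a source; "
     "traditional SEO optimises for ranking in classic search results."),
    ("How long does it take to see results from AI citation work?",
     "Timelines vary by activity, typically a few weeks to a few "
     "months before lifts become measurable."),
]

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(json.dumps(faq_page, indent=2))
```

The key difference from generic FAQ content is in the `name` fields: full conversational questions rather than keyword fragments like "GEO vs SEO".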

A Counterargument Worth Taking Seriously

Some SEO practitioners argue that optimising for AI citation is premature. The reasoning goes: AI search still represents a small fraction of total search volume, and the attribution is murky enough that you can't clearly prove ROI. That's a fair point, and not one we dismiss.

The counter is timing. Traditional SEO taught us that early movers in any channel accumulate structural advantages that are expensive to overcome later. The brands that invested in content and technical SEO in 2010 to 2013 were still benefiting from those decisions a decade later. We're in a similar window right now with AI visibility. Waiting until AI search share is undeniable means competing against brands that have 18 to 24 months of citation authority already built.

If your business is in a sector where buyers are already using ChatGPT or Perplexity for research and shortlisting (and most B2B categories now qualify), the "wait and see" approach has a real cost, even if it's hard to quantify precisely today.

Where This Goes in the Next 12 Months

Multi-Model Visibility Becomes a Core KPI

Right now, most marketing teams track organic search traffic and rankings. By early 2027, we expect AI citation share to appear alongside those metrics in standard marketing dashboards at mid-market companies. The tools to measure it are maturing fast. Perplexity has already started surfacing more structured citation data. Google is integrating AI Overview performance into Search Console in more granular ways. The measurement gap that currently makes some teams hesitant is closing.
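If citation share does become a dashboard KPI, the underlying arithmetic is simple: for each platform, the fraction of tracked queries where your brand appears among the cited sources. A minimal sketch, with an entirely hypothetical query log standing in for real tracking data:

```python
from collections import defaultdict

# Hypothetical sample of tracked queries and which brands each
# AI platform cited in its answer. Real tracking data would be larger.
query_log = [
    {"platform": "ChatGPT", "query": "best geo tools", "cited": ["brand-a", "brand-b"]},
    {"platform": "ChatGPT", "query": "geo agencies uk", "cited": ["brand-b"]},
    {"platform": "Perplexity", "query": "best geo tools", "cited": ["brand-a"]},
    {"platform": "Perplexity", "query": "ai search guide", "cited": []},
]

def citation_share(log, brand):
    """Per-platform fraction of tracked queries that cite `brand`."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for entry in log:
        totals[entry["platform"]] += 1
        if brand in entry["cited"]:
            hits[entry["platform"]] += 1
    return {platform: hits[platform] / totals[platform] for platform in totals}

shares = citation_share(query_log, "brand-a")
print(shares)  # brand-a cited in 1 of 2 queries on each platform -> 0.5 each
```

Tracking this per platform, rather than as one blended number, matters for the platform-specific strategy point below: a brand can hold strong Perplexity share while being invisible to ChatGPT.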

The Specialist Content Advantage Will Compound

Generalist content is already losing ground to specialist content in AI citation. By 2027, we expect this to sharpen considerably. AI models are getting better at evaluating expertise signals, and brands that have built genuine topical depth (not just volume) will pull further ahead. If your content strategy still prioritises breadth over depth, now is the time to rethink it.

Platform Differentiation Will Require Platform-Specific Strategy

ChatGPT, Perplexity, Claude, and Google AI Overviews are not the same product. They index differently, weight signals differently, and serve different user intents. A single optimisation approach across all four is already starting to underperform compared to platform-specific programmes. Brands that treat AI search as a monolithic channel are leaving visibility on the table. The sophistication gap between generic GEO advice and genuinely platform-calibrated execution is only going to widen.

That's Issue 18 wrapped. If you've got questions about anything covered here, or want to see how your brand's current AI visibility compares to competitors, the assessment is at luarank.com. We publish Issue 19 on May 24th. Until then, keep executing.

Sources: Internal Lua visibility tracking data (April to May 2026), Perplexity AI citation analysis reports (Q1 2026), Google Search Central documentation on AI Overviews, Moz State of Search 2026, SparkToro AI Search Behaviour Study (March 2026).
