Cited Newsletter Issue 19 (May 13th 2026)

Issue 19 of the cited-newsletter delivers the latest AI search visibility updates for marketing teams.

Welcome back to Cited, the newsletter for marketing teams who are serious about building visibility in AI search. Issue 19 lands on May 13th 2026, and this week there is a lot to unpack. The pace of change in how AI models surface and cite content has accelerated sharply over the past few weeks, and we are seeing real divergence between brands that prepared early and those still treating this as a future problem.

If you are reading this for the first time, Cited is produced by the team at Lua Rank. We build the tools, run the programmes, and track the data. What we share here comes directly from what we observe across the 40+ brands on the Lua platform, not from industry speculation.

What Changed in AI Search This Week

Google AI Overviews Expands Commercial Query Coverage

Google's AI Overviews is now triggering on a significantly wider range of commercial queries across multiple markets including the UK, Australia, Canada, and Germany. This is not a gradual rollout any more. We tracked a 34% increase in Overview appearances for mid-funnel keywords across the Lua client base between April 28th and May 9th. If your business relies on organic clicks from comparison or category searches, this matters now, not in Q3.

The practical implication is that brands without structured, citable content are losing visibility they previously held through traditional rankings. A position 2 result for a category page is worth less when an AI Overview fills the top 40% of the screen and answers the query without a click.

Perplexity's Citation Behaviour Shifts Toward Topical Authority

We have been tracking citation patterns on Perplexity across 11 verticals for the past four months. The most consistent signal we are seeing in May 2026 is that topical authority now outweighs domain authority in citation selection. A brand with 30 well-structured, deeply-sourced articles on a specific topic is being cited more frequently than a high-DA generalist site with surface-level coverage.

This is a meaningful structural shift. It rewards specialists who actually explain things well. If you have a content programme focused on genuine depth rather than keyword volume, you are likely better positioned than you think.

ChatGPT Source Attribution Gets More Granular

OpenAI pushed an update in early May that makes source attribution in ChatGPT more granular in browsing mode. Specific sections of pages, not just domains, are now being referenced. This makes schema markup and clear heading structure more important than ever. If your page structure is flat and your subheadings are vague, the model has a harder time extracting and attributing the right section to the right answer.

One of our brands saw a 40% jump in ChatGPT citation frequency after implementing FAQ schema and restructuring three key service pages. The work took under a week, and the impact showed up in our citation tracking within 30 days.
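For teams implementing FAQ schema for the first time, the markup itself is straightforward. Here is a minimal sketch of a helper that builds a schema.org FAQPage JSON-LD block from question-and-answer pairs; the example questions are hypothetical, and the output goes into a `<script type="application/ld+json">` tag in the page head:

```python
import json

def faq_schema(pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Hypothetical Q&A pairs for a service page
markup = faq_schema([
    ("How long does onboarding take?",
     "Most teams complete onboarding in under two weeks."),
    ("Which AI platforms do you track?",
     "ChatGPT, Perplexity, Google AI Overviews, and Claude."),
])

# Paste the serialised JSON into a <script type="application/ld+json"> tag
print(json.dumps(markup, indent=2))
```

The key point is structural: each question becomes a discrete, machine-readable entity, which is exactly the granularity the new attribution behaviour rewards.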

The Counterargument Worth Taking Seriously

Not everyone agrees that AI search deserves immediate resource allocation. The sceptic position goes roughly like this: click-through rates from AI-generated answers are low, attribution is poor, and the channel is too immature to justify diverting budget from proven channels.

That view has some validity. If your business is highly transactional, depends on immediate conversion, and your team is already stretched, chasing AI visibility at the expense of well-performing paid or organic campaigns would be a mistake.

But here is where the counterargument breaks down. Early visibility in AI search compounds. The brands being cited today are building citation history, topical signals, and model familiarity that will be much harder to displace in 18 months. This is not unlike the early days of featured snippets: the teams that optimised for zero-click answers in 2017 and 2018 held those positions for years. Waiting until AI search "matures" likely means starting from behind.

This Issue's Focus: What Drives Citation Frequency Across Models

A Practical Breakdown by Platform

One question we get every week from readers of the Cited newsletter is: "What should I actually prioritise?" Below is our current read on what drives citations across the four major platforms we track, based on Lua's multi-model visibility data from May 2026.

| Platform | Primary Citation Driver | Secondary Driver | Common Mistake |
| --- | --- | --- | --- |
| ChatGPT | Structured content with clear schema | Third-party mentions and backlinks | Flat page structure, missing H3/H4 hierarchy |
| Perplexity | Topical depth and source density | Freshness and update frequency | Thin category pages with no supporting content |
| Google AI Overviews | E-E-A-T signals and traditional ranking | Concise, directly answerable paragraphs | Burying the answer in long preambles |
| Claude | Factual accuracy and citation of primary sources | Consistent brand voice across content | Overly promotional tone that triggers trust filters |

The Single Highest-Leverage Action Right Now

If you have limited time this week, focus on one thing: answer-layer content. These are short, precise paragraphs (typically 40 to 80 words) that directly answer a specific question your audience asks. They sit below your main narrative content, ideally under a clear H3 or H4 subheading formatted as a question.

AI models are trained to extract direct answers. If your content buries the point under four paragraphs of context before getting to it, the model will often cite a source that answers immediately instead. Structure is not a technical nice-to-have. It is your visibility strategy.
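If you want a quick way to audit existing pages against this pattern, a small script can flag candidate answer blocks that miss the shape described above. This is a rough heuristic sketch of our own, not a model requirement: it checks that the subheading reads as a question and that the answer paragraph lands in the 40-to-80-word window.

```python
def check_answer_layer(heading, paragraph, min_words=40, max_words=80):
    """Heuristic check for an 'answer-layer' block: a question-style
    subheading followed by a short, direct answer paragraph."""
    issues = []
    if not heading.strip().endswith("?"):
        issues.append("heading is not phrased as a question")
    word_count = len(paragraph.split())
    if word_count < min_words:
        issues.append(f"answer too short ({word_count} words)")
    elif word_count > max_words:
        issues.append(f"answer too long ({word_count} words)")
    return issues

# Hypothetical example block from a service page
heading = "What drives citation frequency in AI search?"
answer = "Structured, directly answerable content drives citations. " * 8  # 48 words
print(check_answer_layer(heading, answer) or "looks citable")
```

Run it across your highest-traffic pages first; the blocks it flags are usually quick rewrites with outsized visibility upside.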

What We Are Watching Going Into Q3 2026

A few signals worth tracking over the next quarter:

  • Multimodal citation: Both Google and OpenAI are expanding how images and video transcripts factor into sourcing decisions. Brands with strong visual content and properly labelled assets may gain a citation advantage that text-only programmes miss.

  • Real-time indexing pressure: Perplexity and ChatGPT with browsing enabled are increasingly surfacing fresh content. Publishing cadence is becoming a citation signal, not just a relevance signal.

  • Brand entity strength: We expect that AI models will increasingly rely on knowledge graph-style entity recognition to validate sources. Brands with clear, consistent entity signals across the web (Wikipedia presence, structured company data, consistent NAP) will have an advantage in citation trust.

The longer-term arc here points toward AI search functioning more like a reputation channel than a traffic channel. The brands that build citation authority now are building something that looks more like brand equity than a keyword ranking. That is a meaningful shift in how marketing teams should account for this work internally.

Issue 20 of the Cited newsletter publishes May 27th 2026. We will be covering the first results from Lua's Q2 competitive benchmarking study across six verticals, with data on which content formats are generating the highest citation frequency by model. If you have a colleague who should be reading this, forward it on.

Cited is produced by Lua Rank. The Lua platform gives marketing teams a complete AI visibility programme including a 13-layer website assessment, a 12-month execution plan, and multi-model tracking across ChatGPT, Perplexity, Google AI Overviews, and Claude. You can explore it at luarank.com.
