Cited Newsletter Issue 15 (May 7th 2026)

Cited Newsletter Issue 15: AI visibility strategies from Lua Rank — what's driving citations on ChatGPT, Perplexity, and Google AI Overviews right now.

Welcome back to Cited, the newsletter from Lua Rank where we track what's actually moving in AI search, share what we're seeing across the platforms we monitor, and give you something you can act on before the week is out. Issue 15 lands at a genuinely interesting moment. The gap between brands that understand AI visibility and those that are still treating it as a future problem is widening fast. If you're reading this, you're on the right side of that gap. Let's make sure you stay there.
What We're Seeing Across ChatGPT, Perplexity, and Google AI Overviews Right Now
The biggest shift in the last few weeks is authority consolidation. AI models are becoming more selective, not less, about which sources they cite. We're tracking this across 40+ brands on the Lua platform, and the pattern is consistent: domains with clear topical depth on a narrow subject area are outperforming generalist sites with broader but shallower coverage, even when the generalist site has higher overall domain authority.
This matters because a lot of SEO-first teams are assuming their existing authority translates directly into AI visibility. It doesn't, at least not automatically. In AI-generated responses on Perplexity and ChatGPT, a well-optimised blog with twenty posts on a single topic is beating sites with hundreds of posts spread across dozens of categories. The model rewards specificity.
Platform-Specific Observations This Week
ChatGPT (GPT-4o)
Structured content continues to win. Pages with clear question-and-answer formatting, explicit definitions, and summary sections at the top are getting extracted more consistently. We saw one Lua client move from zero ChatGPT citations to consistent first-page mentions in 34 days after restructuring four key service pages using this approach. No new content. Same site. Different structure.
Perplexity AI
Perplexity is still the most citation-transparent of the major AI search platforms, which makes it the easiest to learn from. We're noticing that recency signals matter more here than on ChatGPT. Pages updated within the last 60 days with a visible "last updated" date are appearing in Perplexity results more frequently than older, static pages, even when the older pages cover the topic more thoroughly. Update frequency is now an optimisation lever, not just a hygiene factor.
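If your CMS doesn't already surface an update date, here's a minimal sketch of one way to stamp a page with both a visible "last updated" line and a schema.org dateModified block at publish time. The function and field names are illustrative rather than a Lua Rank feature, and treating the structured-data field as a recency signal is our assumption; the visible date is what we're actually observing.

import json
from datetime import date

def last_updated_block(updated: date) -> str:
    # Visible date line that readers (and crawlers) can see on the page.
    visible = f'<p class="last-updated">Last updated: {updated.strftime("%d %B %Y")}</p>'
    # schema.org Article markup carrying the same date as dateModified.
    json_ld = json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",
        "dateModified": updated.isoformat(),
    })
    return visible + '\n<script type="application/ld+json">' + json_ld + "</script>"

# Example: stamp today's date when a page is refreshed.
print(last_updated_block(date.today()))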
Google AI Overviews
Google's AI Overviews are pulling heavily from sources that already rank in positions 1 to 5 for the core query. This is the most traditional of the three platforms in that respect. But there's a nuance: the specific passages being extracted are not always from the ranking page's main body. Google is increasingly pulling from structured callouts, tables, and defined terms within the page. If you're not using semantic HTML to mark up your most citable content, you're leaving surface area on the table.
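As a quick way to check how much of that surface area a page actually exposes, the sketch below counts the extraction-friendly elements on a URL. It assumes the requests and beautifulsoup4 packages are installed; the element list mirrors the callouts, tables, and defined terms mentioned above, but it's our shortlist, not anything Google publishes.

import requests
from bs4 import BeautifulSoup

def semantic_surface(url: str) -> dict:
    # Fetch the page and count the element types AI Overviews tend to extract from.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "tables": len(soup.find_all("table")),
        "definition_lists": len(soup.find_all("dl")),
        "sections": len(soup.find_all("section")),
        "callout_asides": len(soup.find_all("aside")),
        "subheadings": len(soup.find_all(["h2", "h3"])),
    }

# Example: run this on your page and on the page currently being cited.
print(semantic_surface("https://example.com/your-ranking-page"))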
The Counterargument Worth Taking Seriously
Some smart people are pushing back on the entire AI visibility category right now, and they're not entirely wrong. The argument goes: AI search volumes are still small relative to traditional search, user trust in AI-generated answers is inconsistent, and the citation patterns across models change frequently enough that optimising for them is chasing a moving target.
We take this seriously. Here's our read on it.
The volume argument is real but backward-looking. Adoption curves for new search behaviours have consistently been underestimated, and the brands that built early authority in voice search, featured snippets, and local packs before they went mainstream reaped disproportionate returns. Waiting for volume confirmation means you're entering a crowded market rather than an open one.
The instability argument is partially valid. Citation patterns do shift, which is exactly why a structured, multi-model tracking approach (the kind Lua's platform provides) matters more than one-off optimisations. You need to be watching which sources are being cited, why, and what changes when models update. That's not possible if you're checking manually once a month.
The trust argument is the one we watch most carefully. If users stop trusting AI-generated answers, the whole channel contracts. But the data from SparkToro and Statista suggests the opposite trajectory: AI search query volume grew approximately 47% year-on-year in Q1 2026, and return visit rates to AI search tools are increasing, not declining. Users are learning to trust, and then act on, AI-generated responses at a pace that should make any growth-focused marketing team pay attention.
Three Things to Implement Before Issue 16
We keep this section practical. No ten-step frameworks. These are the three highest-leverage actions based on what we're tracking right now.
1. Run a Topical Depth Audit on Your Core Category Pages
Pick the two or three topics you most want to be cited for in AI search. Map every page on your site that touches those topics. Then ask an honest question: does this content, taken together, establish genuine depth and authority on the subject, or does it skim across it? If it's the latter, consolidation or expansion is more valuable than new content creation right now. AI models reward depth over breadth.
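If the mapping step sounds laborious, a sitemap pass is a reasonable starting point: count how many of your URLs touch each core topic. The sketch below matches topic slugs against sitemap URLs, which is only a rough proxy for genuine coverage; the topic list and sitemap address are placeholders you'd swap for your own.

import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

TOPICS = ["ai-visibility", "ai-citations", "generative-search"]  # placeholder slugs

def pages_per_topic(sitemap_url: str) -> Counter:
    # Pull every <loc> entry from the sitemap and bucket it by topic slug.
    xml = urllib.request.urlopen(sitemap_url, timeout=10).read()
    ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
    urls = [loc.text for loc in ET.fromstring(xml).iter(ns + "loc")]
    counts = Counter()
    for url in urls:
        for topic in TOPICS:
            if topic in url.lower():
                counts[topic] += 1
    return counts

print(pages_per_topic("https://example.com/sitemap.xml"))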
2. Add an "In Summary" Block to Your Five Most Important Pages
This is one of the clearest signals we've seen across the Lua platform data. Pages with a short, structured summary near the top (three to five sentences, covering the key claim, the evidence, and the takeaway) are being extracted into AI responses at a higher rate than pages without one. Think of it as writing your own pull quote for the model to use. It takes 20 minutes per page. Do it this week.
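To make the pattern concrete, here is a minimal sketch of how that block could be templated. The element and class names are ours, and the three inputs simply mirror the claim, evidence, and takeaway structure described above.

def in_summary_block(claim: str, evidence: str, takeaway: str) -> str:
    # Assemble a short, extractable summary section for the top of a page.
    return (
        '<section class="in-summary">\n'
        "  <h2>In summary</h2>\n"
        f"  <p>{claim} {evidence} {takeaway}</p>\n"
        "</section>"
    )

print(in_summary_block(
    claim="Structured summaries are extracted into AI answers more often than unstructured intros.",
    evidence="Across the pages we track, summary-led pages are cited at a noticeably higher rate.",
    takeaway="Add a three-to-five sentence summary near the top of your most important pages.",
))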
3. Check Your Competitor Citation Rates on Perplexity
Search for five queries where you'd expect to appear. Note who gets cited. If the same two or three competitors keep showing up and you don't, that's your benchmark. Understanding why they're getting cited (content structure, recency, source authority) is the starting point for closing the gap. Tools like Ahrefs and Moz can help you audit their technical foundations; Lua handles the AI-specific visibility benchmarking.
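Those spot checks are only useful as a benchmark if they're comparable from fortnight to fortnight, so it's worth logging them as you go. The sketch below is a manual logging step, not an automated pull from Perplexity; the file name and fields are placeholders.

import csv
from datetime import date

def log_citation_check(query: str, cited_domains: list[str],
                       path: str = "perplexity_citations.csv") -> None:
    # One row per query per check: when, what you searched, who got cited.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), query,
                                "; ".join(cited_domains)])

log_citation_check("best ai visibility platform",
                   ["competitor-a.com", "competitor-b.com"])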
Looking Ahead: What the Next 90 Days Will Test
We expect the next quarter to clarify two things that are currently uncertain. First, whether Google's AI Overviews begin surfacing more non-ranking sources (which would significantly change the SEO-to-GEO relationship). Second, whether Anthropic's Claude expands its search integrations in a way that creates a meaningful new citation surface. Both are worth monitoring. We'll cover both in depth as they develop.
The Cited newsletter lands every two weeks. If a colleague forwarded this to you and you want to receive it directly, you can subscribe at luarank.com. And if you want to see where your brand currently sits across ChatGPT, Perplexity, Google AI Overviews, and Claude, Lua's 13-layer assessment gives you a full picture in under 48 hours.
See you in Issue 16.
The Lua Rank team