Cited Newsletter Issue 20 (May 14th 2026)


Cited Newsletter Issue 20: AI citation behaviour has shifted — freshness signals and schema markup now drive visibility across ChatGPT, Perplexity, and Google AI Overviews.


Welcome to Issue 20. Twenty editions in, and the landscape has shifted more in the past six months than in the previous three years of SEO combined. This week we're covering the structural change happening inside AI model citation behaviour, why some brands that were ranking well in ChatGPT three months ago are now invisible, and what the data from our own platform tells us about which optimisation layers are actually moving the needle right now.

If you're new here: the Cited newsletter goes out every two weeks. We share what we're seeing across the brands running programmes on Lua, what the AI platforms are changing, and what you should be doing about it. No filler, no hype.

What's Changed in AI Citation Behaviour Since March 2026

The most significant shift we've tracked over the past eight weeks is a change in how Perplexity and ChatGPT weight source freshness against source authority. For most of 2025, a well-structured page with strong topical authority could hold a citation position for months without touching it. That's no longer reliable.

The Freshness Signal Is Now Active

Brands that built strong AI visibility in late 2024 and early 2025 are reporting citation drop-offs even when their underlying content quality hasn't changed. What has changed is how recently that content was updated or supplemented. Our tracking data across 40+ brands shows a clear pattern: pages updated within the last 45 days are being cited at a meaningfully higher rate than equivalent pages sitting static beyond 60 days.

This doesn't mean you need to rewrite everything constantly. It means you need a structured cadence for refreshing key pages, adding new data points, and signalling to crawlers that your content is alive. Lua's 12-month execution calendar now flags pages for scheduled refresh based on their citation decay rate, not arbitrary timelines.
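A decay-based refresh trigger can be sketched in a few lines. This is a hypothetical illustration, not Lua's actual logic; the `Page` fields, the 25 percent decay threshold, and the 60-day staleness cutoff are all placeholder assumptions chosen to mirror the pattern described above.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    days_since_update: int
    citation_rate_now: float       # citations per 100 tracked queries
    citation_rate_baseline: float  # rate measured at the last refresh

def needs_refresh(page: Page, decay_threshold: float = 0.25, stale_days: int = 60) -> bool:
    """Flag a page when its citation rate has decayed past a threshold,
    or when it has simply gone stale. Thresholds are illustrative."""
    if page.citation_rate_baseline == 0:
        return page.days_since_update > stale_days
    decay = 1 - page.citation_rate_now / page.citation_rate_baseline
    return decay >= decay_threshold or page.days_since_update > stale_days

pages = [
    Page("/pricing", 30, 4.5, 6.0),    # 25% citation decay -> flagged
    Page("/blog/guide", 70, 5.0, 5.0), # no decay, but stale -> flagged
    Page("/home", 20, 6.0, 6.0),       # fresh and holding -> left alone
]
flagged = [p.url for p in pages if needs_refresh(p)]
```

The point of keying the trigger to decay rather than a fixed calendar interval is that pages lose citations at very different speeds; a flat 90-day rule over-refreshes stable pages and under-refreshes decaying ones.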

Structured Data Is Doing More Work Than It Was

We've also seen Google AI Overviews pull significantly more from pages with clean schema markup than from pages that are simply well-written. The content quality floor has risen across all models, so schema is increasingly what separates citations from near-misses. If you haven't audited your structured data in 2026, that's where we'd start.
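A schema audit can start as simply as checking that key JSON-LD fields exist. The sketch below uses a `Service` type with placeholder values (the brand name, URL, and required-field list are assumptions for illustration, not a definitive checklist).

```python
import json

# Minimal JSON-LD for a service page; all values are placeholders
service_schema = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "AI Visibility Audit",
    "provider": {"@type": "Organization", "name": "Example Co", "url": "https://example.com"},
    "areaServed": "GB",
    "description": "Audit of AI search citation performance for mid-market brands.",
}

def audit_schema(schema: dict, required=("@context", "@type", "name", "provider")) -> list:
    """Return any required JSON-LD keys that are missing; an empty list passes."""
    return [key for key in required if key not in schema]

missing = audit_schema(service_schema)
markup = f'<script type="application/ld+json">{json.dumps(service_schema)}</script>'
```

The resulting `markup` string is what would sit in the page's `<head>`; the audit function is the part worth running across every service page, not just the homepage.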

| AI Platform | Primary Citation Signal (May 2026) | Change vs. Q4 2025 |
| --- | --- | --- |
| ChatGPT | Topical authority + freshness | Freshness weighting increased |
| Perplexity | Source credibility + recency | Recency now a primary filter |
| Google AI Overviews | Schema + E-E-A-T signals | Schema influence significantly higher |
| Claude | Depth + factual density | Largely stable |

Platform Update: What We Shipped in the Last Two Weeks

A quick update on what's new inside Lua for subscribers who are also platform users.

Competitor Citation Tracking Now Live Across All Four Models

You can now track how your competitors are cited across ChatGPT, Perplexity, Google AI Overviews, and Claude in a single dashboard view. This was the most requested feature from our user base, and we built it properly rather than quickly. You get citation frequency, the types of queries triggering competitor citations, and a side-by-side comparison against your own visibility scores.

This matters because AI visibility is a relative game. Your citation rate in isolation tells you very little. What tells you something useful is whether you're gaining or losing ground against the two or three competitors your prospects are actually comparing you to.
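The relative framing reduces to a share-of-voice calculation. A minimal sketch, with hypothetical competitor names and citation counts:

```python
def citation_share(own: int, competitors: dict) -> float:
    """Fraction of tracked citations your brand captures against a named
    competitor set. Inputs are citation counts over the same query set."""
    total = own + sum(competitors.values())
    return own / total if total else 0.0

# Placeholder numbers: 24 of 80 tracked citations go to your brand
share = citation_share(own=24, competitors={"rival_a": 40, "rival_b": 16})
```

Tracked over time, movement in this share is the signal; the absolute citation count on its own says nothing about whether you are winning the comparisons your prospects actually make.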

Automated Schema Generation for Service Pages

For users on the Growth and Scale plans, Lua now generates and deploys structured data for service pages automatically, with CMS-specific instructions for WordPress, Webflow, and Shopify. We've seen this alone move citation rates by 15 to 30 percent on pages where schema was absent or incomplete. It's one of those optimisations that looks small on a task list but consistently outperforms content changes in the short term.

What We're Watching: The Counterargument and What Comes Next

Not everyone agrees that brands should be investing heavily in AI search visibility right now. The counterargument worth taking seriously is this: AI search volumes are still a fraction of traditional search volumes, and optimising aggressively for a channel that hasn't yet proven commercial conversion rates carries real opportunity cost.

That's a fair point. We don't think it justifies inaction, but it does justify being selective. The brands getting the best return from AI visibility programmes right now are those in considered-purchase categories: B2B services, professional services, higher-ticket consumer products. These are environments where a buyer might ask ChatGPT "what's the best project management software for a 50-person agency?" before ever visiting a website. Being cited there, at that moment, matters.

Where This Goes in the Next 12 to 18 Months

Our prediction is that the citation landscape will bifurcate. Brands that establish strong **AI search presence** in 2026 will hold those positions at lower maintenance cost over time, similar to how domain authority worked in traditional SEO. Brands that wait will face a harder, more expensive entry once the channel matures and competition for citations intensifies.

We're also watching the emergence of agentic AI workflows. As AI agents begin executing multi-step tasks on behalf of users, the question shifts from "will the model cite my brand in a response?" to "will the agent select my brand when completing a purchase or booking task?" That's a different optimisation problem, and one we're building toward inside Lua's roadmap.

Three other patterns from our tracking data are worth noting:

  • Sources with verified author credentials are seeing stronger citation stability across Claude and Perplexity

  • FAQ-format content structured around specific query patterns continues to outperform long-form prose for short-answer citations

  • Brands publishing original research or proprietary data are earning citations that persist significantly longer than editorial content

That last point is worth acting on now. If your brand has internal data that isn't published, turning it into a structured, citable report is one of the highest-leverage things you can do for AI visibility this quarter. Original data is scarce. AI models actively seek it out as a citation source because it's something they can't synthesise from aggregated web content.
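The FAQ pattern noted in the list above can also be signalled explicitly with `FAQPage` markup, pairing one query phrasing with one short, liftable answer. The question and answer text below are placeholders for illustration.

```python
import json

# Illustrative FAQPage JSON-LD: one query pattern, one concise answer
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best project management software for a 50-person agency?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A short, direct answer a model can lift as a citation.",
            },
        }
    ],
}
markup = json.dumps(faq_schema, indent=2)
```

Each additional query pattern becomes another entry in `mainEntity`, which keeps the page aligned with the short-answer citations the models favour.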

Issue 21 lands in two weeks. We'll be looking at how Claude's citation behaviour has evolved following Anthropic's latest model updates, and sharing a breakdown of which content formats are driving the highest citation rates across B2B SaaS brands in our dataset. If you have a question you'd like us to dig into, reply directly to this email.

The Cited newsletter is published by Lua Rank, the AI visibility platform for marketing teams who want measurable results without agency fees.
