This Week in AI Visibility
Welcome to this edition of Cited, our regular newsletter tracking what matters in AI search visibility. If you're responsible for how your brand shows up across ChatGPT, Perplexity, Google AI Overviews, and Claude, this is where we share what we're seeing across the 40+ brands on the Lua platform, plus the wider shifts shaping the space.
This week: a pattern we're calling the "citation cliff," why structured data is back in the spotlight for completely different reasons than 2019, and a hard look at whether the brands investing early are actually pulling ahead.
The Citation Cliff: Why Some Brands Disappear from AI Responses After Week Six
We've been tracking a pattern across brands that started their AI visibility programmes in Q4 2025. Early gains are real. Several brands on Lua hit first-page ChatGPT placements within 40 days of starting their programme, which aligns with what we've seen consistently. But around week six to eight, brands that haven't continued executing their content and technical tasks start to see those rankings plateau or drop.
We're calling this the citation cliff. It's the point where early wins from quick-fix optimisations (schema cleanup, FAQ restructuring, entity consolidation) stop compounding, and only brands with a sustained execution rhythm keep climbing.
What Drives the Drop-Off
The AI models pulling citations aren't doing a one-time crawl. They're continuously re-evaluating sources based on freshness, authority signals, and topical depth. Brands that publish one well-optimised piece and stop are essentially freezing their position at a moment in time. The sources that keep getting cited are the ones adding content depth week over week.
Three factors we see consistently in brands that avoid the cliff:
- **Topical clustering:** They don't just cover a topic once. They build out supporting content that signals genuine expertise to the model.
- **Consistent structured data maintenance:** Schema isn't a one-time implementation. Models re-evaluate structured signals regularly.
- **Multi-platform presence:** Brands visible on Perplexity and Google AI Overviews tend to reinforce each other's authority signals in ChatGPT citations.
What This Means for Your Programme
If you started an AI visibility push in Q1 2026 and saw early traction, don't interpret that as the work being done. The brands pulling ahead are the ones treating AI search like an editorial programme, not a technical project with a finish line. This is exactly why Lua's execution calendar schedules tasks day by day across a 12-month horizon rather than delivering a one-time audit and leaving you to figure out the rest.
Structured Data in 2026: Different Problem, Same Tool
Structured data had a moment in 2018 and 2019 when everyone was chasing rich snippets. Then enthusiasm cooled as Google reduced rich result eligibility and the SEO community moved on. Now it's back, but the reason is different.
AI models use structured data not primarily to display rich results in a traditional SERP sense, but to understand entity relationships. When a model is deciding which brand to cite as the answer to "what's the best platform for AI search visibility," it's pulling from a web of signals about what your brand does, who it serves, what problems it solves, and how it relates to adjacent entities in its training and retrieval data.
Schema Types That Are Moving the Needle Right Now
| Schema Type | Why It Matters for AI Citation | Priority Level |
|---|---|---|
| Organization | Establishes brand entity identity and core attributes | High |
| FAQPage | Provides extractable Q&A pairs that map directly to query patterns | High |
| HowTo | Signals procedural expertise, frequently cited in instructional responses | Medium-High |
| Article / BlogPosting | Enables authorship and publication date signals | Medium |
| Product | Critical for any brand with commercial offerings being evaluated by AI | High (commercial) |
The brands seeing the strongest schema-driven gains aren't just adding markup. They're ensuring that what the schema describes matches the actual content on the page and the brand's positioning across external sources. Inconsistency between your schema, your About page, and your third-party mentions is one of the fastest ways to confuse an AI model's entity resolution.
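To make the consistency point concrete, here is a minimal sketch of what aligned Organization markup might look like, generated as JSON-LD via Python. Every value here (the brand name, URL, and profile links) is a hypothetical placeholder, not a prescription; the point is that the same name and description should appear in the schema, on the About page, and in third-party profiles.

```python
import json

def organization_schema(name, url, description, same_as):
    """Build a minimal Organization JSON-LD block.

    All values passed in are illustrative placeholders. Whatever you
    use, keep them identical to the About page and external profiles
    so entity resolution sees one consistent brand.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "description": description,
        # Links to owned third-party profiles help models consolidate
        # the brand into a single entity.
        "sameAs": same_as,
    }

schema = organization_schema(
    name="Example Brand",
    url="https://example.com",
    description="AI search visibility platform for growing brands.",
    same_as=["https://www.linkedin.com/company/example-brand"],
)

# The resulting JSON would be embedded in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(schema, indent=2))
```

The `sameAs` property is where many brands leave value on the table: it is an explicit, machine-readable statement that the entity on your site and the entity on external platforms are one and the same.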
Guidance published by Schema.org, corroborated by analysis from Search Engine Journal, suggests that structured data consistency across domains is one of the cleaner signals available to models trying to establish source credibility.
Are Early Movers Actually Pulling Ahead? The Data So Far
This is the question we get most from heads of marketing evaluating whether to commit to an AI visibility programme now or wait six months until the channel "matures." Here's our honest read.
Yes, early movers are building advantages. But those advantages are concentrated in specific conditions. The brands seeing the clearest gains are in categories where AI models are frequently asked for recommendations and comparisons: software, professional services, specialist retail, and health and wellness. In commoditised categories with thin content differentiation, the gains are harder to isolate.
The Counterargument Worth Taking Seriously
Some researchers, including analysts at Gartner and commentary from SparkToro, have noted that AI search citation patterns are still volatile and that algorithmic updates from OpenAI, Google, and Anthropic can reshuffle rankings significantly. This is a fair point. Brands that build their entire marketing thesis on AI citation volume today may find that metric shifting as models evolve.
Our view: the volatility is real, but it cuts both ways. Brands that have built genuine topical authority, clean technical infrastructure, and consistent entity signals are better positioned to weather model updates than brands that haven't. The underlying work that earns AI citations is largely the same work that earns sustained organic visibility. Doing it earlier means you're compounding for longer.
What We're Tracking Going Into Q2 2026
- Google's continued expansion of AI Overviews into non-English markets, which is accelerating AI search adoption outside the US and UK
- Perplexity's growing share in B2B research queries, particularly for software and services categories
- Claude's increasing presence as an enterprise research tool, which changes the citation authority signals that matter most for B2B brands
- The emerging use of brand mentions in AI training data as a long-cycle visibility lever (separate from retrieval-time citation but increasingly discussed in the research community)
Sources we're drawing on for this tracking include reporting from Reuters on AI search market dynamics and ongoing analysis published by the teams at Search Engine Land.
The next edition of the Cited newsletter will focus on multi-model visibility tracking: specifically, why a brand's presence on ChatGPT and its presence on Perplexity often diverge, and what that tells you about which optimisation layer to prioritise next. If you're running your AI visibility programme through Lua, you'll see these insights reflected in your next monthly execution plan update.