What we measured before we changed our minds.
Any shift this large should be backed by data, not anecdote. So the first thing we did when our instinct said "something is different with snippets this year" was set up a measurement panel and watch it for six months.
The panel: 340 queries across 6 client verticals (B2B SaaS, security, legal tech, fintech, devtools, healthtech). For each query, we tracked where the client appeared on the Google SERP (organic rank and snippet ownership), what the SERP layout looked like (AI Overview present, People Also Ask, knowledge panel), and how many clicks the client's page earned from that query over a week.
We then decomposed clicks by SERP feature: how many came from organic position 1 through 10, how many from the featured snippet when the client owned it, how many from "People Also Ask" expansions, how many from AI Overview citations, and how many from branded searches bypassing the SERP entirely.
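The decomposition step is simple to sketch. The snippet below is a minimal illustration, not our actual pipeline; the row shape, feature labels, and click counts are all hypothetical stand-ins for the panel data described above.

```python
from collections import Counter

# Hypothetical shape of one week of panel rows for a single query set.
# Field names and numbers are illustrative, not real panel data.
panel_rows = [
    {"query": "soc 2 compliance checklist", "feature": "featured_snippet", "clicks": 41},
    {"query": "soc 2 compliance checklist", "feature": "organic_3", "clicks": 17},
    {"query": "soc 2 compliance checklist", "feature": "ai_overview_citation", "clicks": 9},
    {"query": "soc 2 compliance checklist", "feature": "people_also_ask", "clicks": 6},
]

def click_share_by_feature(rows):
    """Decompose a query's clicks by the SERP feature that earned them."""
    totals = Counter()
    for row in rows:
        totals[row["feature"]] += row["clicks"]
    grand = sum(totals.values())
    return {feature: clicks / grand for feature, clicks in totals.items()}

shares = click_share_by_feature(panel_rows)
```

Per-feature click-share is the number everything below turns on: it is what lets you say "the snippet earned X percent of this query's clicks" rather than eyeballing rank positions.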
We ran the same panel against a snapshot of our own 2023 performance data, which was still in the clients' analytics. The comparison is honest because the panel uses the same queries, the same sites, and the same definitions.
Three numbers moved.
The snippet click-through collapse.
Featured snippets used to earn roughly 25 to 35 percent of the clicks on a query, depending on vertical. That was the whole reason chasing them was worth the optimization effort.
Our 2026 panel number: featured snippets earn roughly 8 to 13 percent of clicks on the queries where they appear at all. Roughly a third of what they used to earn. The click is not going to another result; it is not happening. The user is reading the snippet answer and moving on.
This had been a slow trend since around 2019, when the snippet answer format got good enough that many users no longer needed to click through to confirm. It accelerated in 2024 and 2025 as Google tuned snippets to be even more complete, and then it accelerated again when AI Overviews launched on a growing share of commercial queries.
The snippet-ownership value proposition used to be: "we pay the optimization cost, we get a 3x lift on click-share for that query." The new proposition is: "we pay the same optimization cost, we get a 1x lift, and the 1x is getting smaller quarter over quarter." At some point the math flips negative. For most of our client queries, it has flipped.
AI Overviews: the final cannibalization.
Google's AI Overview feature appears on an increasing share of informational commercial queries. When it appears, it sits above the organic results and the featured snippet, takes up most of the above-the-fold area, and often includes citations to 3 to 8 sources with direct links.
Our panel data, current quarter: AI Overviews appeared on 38 percent of the 340 queries (up from 11 percent a year ago). On queries where the Overview appeared, the featured snippet below it earned roughly half the clicks it earned on queries where the Overview was absent. The Overview is eating the snippet's role.
The honest read: in categories where AI Overviews are common, optimizing for a featured snippet is optimizing for a feature that is being visually demoted and click-demoted on the same page. You can still win the snippet. You win half as much traffic for the same effort, and the trend line is down.
Meanwhile, the AI Overview itself is a citation surface. Getting cited in the Overview is the new position-zero equivalent. That is its own optimization target, and the optimization moves are much closer to the citation patterns we discussed in the citations-as-backlinks essay than to the old snippet tactics.
The four conditions where snippets still pay.
We are not telling clients to abandon featured-snippet work entirely. For a minority of queries it still earns its cost. The question is whether your queries fall into this minority.
Condition 1: AI Overview does not appear.
In verticals where AI Overviews are rare (highly technical, heavily regulated, very long-tail), snippets still earn their old click-share. Niche B2B verticals with strong jargon are the clearest examples. Run your panel and see: if the query gets an Overview, deprioritize snippet optimization on that query; if it does not, snippet optimization is still worth it.
Condition 2: the snippet requires action beyond reading.
Queries whose answer is "a calculator," "a form," "a code snippet you need to run." The snippet can present the surface but the user has to click to use it. Click-share holds up because the snippet is a teaser for an interaction, not a complete answer.
Condition 3: brand-present queries.
Queries that include the brand name, or queries where the brand is the answer (navigational queries). Owning the snippet here is table stakes. Deprioritizing these is a mistake; they are cheap to win and the click-share still looks like it did before the decline.
Condition 4: the snippet answers a different question than the target.
Sometimes a search engine extracts a snippet that only partially answers the user's intent, enough to inform but not to satisfy. These still earn clicks because users need the rest of the answer. Identifying them ahead of time is tricky; often you can only tell from post-hoc click data whether a given query has this property.
Across our panel, about 22 percent of queries satisfy at least one of these conditions and are still worth optimizing for. The other 78 percent are not.
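The triage itself reduces to a per-query check against the four conditions. A minimal sketch, assuming each query has been annotated with a boolean flag per condition; the flag names and sample records are illustrative, not the output of any real tool.

```python
def snippet_still_pays(q):
    """True if a query meets at least one of the four conditions above.

    `q` is a hypothetical per-query record; flags mirror the four
    conditions, and would come from your own panel annotations.
    """
    return (
        not q["ai_overview_present"]      # Condition 1: no AI Overview on the SERP
        or q["answer_requires_action"]    # Condition 2: calculator, form, runnable code
        or q["brand_present"]             # Condition 3: brand-name or navigational query
        or q["snippet_partial_answer"]    # Condition 4: snippet only partially satisfies intent
    )

queries = [
    {"query": "acme pricing", "ai_overview_present": True,
     "answer_requires_action": False, "brand_present": True,
     "snippet_partial_answer": False},
    {"query": "what is zero trust", "ai_overview_present": True,
     "answer_requires_action": False, "brand_present": False,
     "snippet_partial_answer": False},
]

worth_optimizing = [q["query"] for q in queries if snippet_still_pays(q)]
```

Run over the full panel, this split is what produced our 22/78 number: the `worth_optimizing` list keeps the snippet budget, the rest gets reallocated as described below.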
Where we moved the effort.
Effort is finite. Budget saved from one tactic has to go somewhere. Here is how we reallocated the hours we used to spend on snippet optimization.
First: AI Overview citation work.
Whatever fraction of queries have AI Overviews is the fraction of queries where getting cited in the Overview is the new prize. The optimization: publish primary-data content, adopt claim-evidence-source structure, earn the topical presence that makes the model choose you as a citation. See the ranking-in-answer-engines guide for the full playbook.
Second: LLM citation work.
Claude, ChatGPT, Perplexity, Gemini citation panels. Same structural work as AI Overview citation, with the side benefit that these models increasingly drive real traffic on their own. Monitor weekly; report monthly.
Third: topical-map depth.
Deep coverage of a narrow topic earns both backlinks and citations. We moved SEO writer hours from "optimize this single page for a snippet" to "complete the cluster so we are the comprehensive source." See the topical map template for the structural approach.
Fourth: author E-E-A-T signal work.
Named authors with credentials, linked profiles, biographies, real-world speaking engagements, citations in third-party publications. Search engines and LLMs alike are increasingly weighting author signals. A page by a named expert with a real profile beats the same words under "editorial team."
Fifth: schema and structured data.
Not because it directly improves rankings, but because it makes the page's claims machine-readable, which helps both the SERP and the LLM reliably extract and surface your content. Organization, Article, FAQPage, HowTo, Product schemas in JSON-LD on every relevant page.
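What that looks like in practice: a JSON-LD block per page. Here is a minimal sketch that builds an Article schema; the headline, author name, URL, and date are placeholders, and a real page would add more properties (publisher, image, description) as relevant.

```python
import json

def article_jsonld(headline, author_name, author_url, date_published):
    """Build a minimal schema.org Article block as a Python dict.

    All argument values in the call below are placeholders, not real
    page data.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",   # named expert with a real profile,
            "name": author_name, # not "editorial team"
            "url": author_url,
        },
        "datePublished": date_published,
    }

block = article_jsonld(
    "Why We Stopped Chasing Featured Snippets",
    "Jane Doe",
    "https://example.com/authors/jane-doe",
    "2026-01-15",
)

# Embedded on the page inside:
#   <script type="application/ld+json"> ... </script>
jsonld = json.dumps(block, indent=2)
```

Note the author object: tying the Person schema back to the E-E-A-T work above is the point, since it makes the authorship claim machine-readable rather than a byline the extractor has to guess at.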
How we talk about this with clients who are still anchored.
Many clients still have "featured snippet share" on their SEO dashboard and report on it in the monthly. When we take over, we do not yank the metric. We add companion metrics and let the data show the story.
The conversation goes something like this. We report on featured snippet share alongside AI Overview citation share and LLM citation share, all on the same dashboard, with click-share attribution for each feature. Three months in, the client sees the data themselves: the snippet column is flat to down, the Overview and LLM columns are growing, and the click-attribution numbers show where the traffic is actually coming from.
At that point, the metric shift is not us telling them "stop chasing snippets." It is them pointing at the dashboard and asking why we are still reporting on snippets so prominently. We shift prominence, keep the historical data, and the conversation ends in alignment rather than confrontation.
This is the general pattern for any metric shift in a conservative discipline like SEO: do not argue, measure, show the client the picture, let the picture change their mind. Featured snippets are just the first example of a broader phenomenon. Every old SEO KPI will go through this transition in the next two years. Keep the ones that still measure what they claim to measure. Retire the ones that are now proxies for nothing.