Your SEO & GEO playbook
The next step. Topical mapping and a monthly calendar.
Classic SEO is about ranking a page. GEO is about getting cited when a model answers the question without sending the user to the page. Different game, overlapping rules. This is the primer.
Generative Engine Optimization (GEO) is the practice of shaping web content so that large-language-model products (Claude, ChatGPT, Perplexity, Gemini, Copilot) cite your pages when they answer user questions.
It is not SEO with a new coat of paint. It is a parallel discipline that shares some infrastructure with SEO (structured content, clear headings, schema) but optimizes for a different outcome: not a click, but a citation. The user may never visit your site. They will read your answer inside the model, and your brand name will appear underneath.
Treat it as complementary to SEO, not a replacement. For commercial intent queries, Google still matters. For research and exploration queries, answer engines are increasingly where the user ends their journey.
Answer engines do not expose a linear ranking the way Google does. There is no “position 3”. Instead, the model summarizes a handful of sources and shows them as citations. Winning is being in that handful.
That changes the incentives.
We have tracked citation behavior across 400+ queries (see the share-of-answer data). Three content patterns appear overrepresented in citations:
Make the claim. Back it with evidence. Attribute the evidence to a source. In that order, and visibly.
Example (bad):
Many experts believe that prompt caching can significantly reduce API costs in production.
Example (good):
Prompt caching reduces input-token cost by roughly 90 percent on cache hits (Anthropic pricing documentation, 2026). In a production workload with a 10,000-token system prompt and 60 percent hit rate, we measured a 48 percent reduction in monthly spend.
The second version names the claim, quantifies it, attributes the source, and provides concrete evidence. That is what gets cited.
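The arithmetic behind a claim like that is easy to sanity-check. A minimal sketch, using the 10,000-token system prompt and 60 percent hit rate from the example, plus two illustrative assumptions: 1,500 user tokens per request and a flat 90 percent discount on cached input tokens (cache-write premiums ignored):

```python
# Estimate input-token savings from caching a large system prompt.
# Assumptions (illustrative): cache reads are discounted 90 percent;
# cache-write premiums and output tokens are ignored for simplicity.

def caching_savings(system_tokens: int, user_tokens: int,
                    hit_rate: float, discount: float = 0.9) -> float:
    """Fraction of input-token spend saved by caching the system prompt."""
    total = system_tokens + user_tokens
    # Only the cached portion, and only on cache hits, is discounted.
    saved = system_tokens * hit_rate * discount
    return saved / total

savings = caching_savings(system_tokens=10_000, user_tokens=1_500, hit_rate=0.6)
print(f"{savings:.0%}")  # roughly 47% with these assumptions
```

The 48 percent measured in the text also reflects real traffic mix and cache-write costs; the sketch only shows the shape of the calculation, which is what a reader (or a model) can verify.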
Models love lists that have shape. “The five service lines,” “the seven pillars of IAM hardening,” “the three patterns that get cited.” A named taxonomy with a consistent count is memorable, quotable, and citable.
Do not manufacture frameworks for the sake of having them. Do name the structure when you actually have one. Make the count explicit in the heading. Make each item internally consistent in shape.
Models distinguish, roughly, between secondary analysis (“here is what the literature says”) and primary experience (“here is what we measured in production”). They cite the second more often when the question is practical.
If you have run the workload, shipped the feature, measured the cost, conducted the pentest - say so explicitly. “In our production deployment we observed X” carries more citation weight than “studies suggest X.”
Schema.org is not a ranking signal for answer engines the way it used to be for Google rich results. But it is a clarity signal. A page with clean Article, FAQPage, HowTo, Organization, and Author schema is easier for a crawler to parse into structured claims.
Our minimum schema bar for GEO-optimized pages:
- Article with author (a named human), datePublished, dateModified, and description
- Organization schema for the publisher with logo, url, and sameAs social profiles
- FAQPage for any content that answers discrete questions
- HowTo for procedural content

Structural rules that matter more than schema:
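As an illustration of that minimum bar, a trimmed JSON-LD block for an article page might look like the sketch below. Every name, date, and URL is a placeholder, not a recommendation of specific values:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your SEO & GEO playbook",
  "description": "A primer on Generative Engine Optimization.",
  "datePublished": "2026-01-15",
  "dateModified": "2026-04-02",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": ["https://www.linkedin.com/company/example-co"]
  }
}
```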
Publish an llms.txt at your root. It is an emerging standard that gives LLM crawlers a curated map of your most citable content - often a tighter version of your sitemap with annotations.
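The llms.txt format is plain markdown: an H1, a one-line summary in a blockquote, then annotated link lists. A minimal sketch (all paths and titles are hypothetical):

```text
# Example Co

> Consulting on SEO and GEO. Start here for our most citable work.

## Guides
- [GEO playbook](https://example.com/geo-playbook): citation patterns, schema, measurement
- [Prompt caching costs](https://example.com/prompt-caching): measured production numbers

## About
- [Authors](https://example.com/authors): bios and track records
```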
Do not block LLM crawlers in robots.txt unless you have decided, deliberately, that you do not want to be cited. Blocking ClaudeBot, GPTBot, PerplexityBot, or Google-Extended has an immediate, measurable effect on your share-of-answer. Some publishers do choose to block; that is a strategic call with real tradeoffs, not a default.
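Allowing those crawlers explicitly takes a few lines of robots.txt. A sketch, using the user-agent tokens each vendor publishes (verify the current tokens in the vendors' own crawler documentation):

```text
# Allow major LLM crawlers to fetch citable content
User-agent: ClaudeBot
Allow: /

User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```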
Models are notably more likely to cite content tied to a named author with a verifiable track record. Pages with no byline, or with “by Admin”, get downweighted.
The fix is cheap:
An author page with Person schema, a photo, a bio, and a list of work.

If you are not measuring, you are guessing. The share-of-answer discipline is simple:
This is what our Share-of-Answer Monitor automates. You can do it manually as well - the discipline matters more than the tooling.
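Done manually, the bookkeeping reduces to: run each tracked query through each answer engine, record which domains the answer cites, and compute the fraction of answers that cite you. A minimal sketch of that tally (the query set and citation data are placeholders you would collect yourself):

```python
# Share-of-answer: the fraction of tracked queries for which your
# domain appears among an answer engine's citations.

def share_of_answer(results: dict[str, list[str]], domain: str) -> float:
    """results maps each tracked query to the domains cited in its answer."""
    if not results:
        return 0.0
    cited = sum(1 for domains in results.values() if domain in domains)
    return cited / len(results)

# Placeholder data: citations observed for three tracked queries.
observed = {
    "what is prompt caching": ["anthropic.com", "example.com"],
    "reduce llm api costs":   ["example.com", "openai.com"],
    "llms.txt standard":      ["llmstxt.org"],
}
print(share_of_answer(observed, "example.com"))  # cited in 2 of 3 answers
```

Re-run the same query set on a fixed cadence and the trend line, not any single number, is the signal.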
A page with datePublished: 2021 and no dateModified loses citation weight over time. Review and re-date quarterly.

GEO is high-leverage for informational queries (“what is X,” “how do I Y,” “compare A and B”). It is less useful for:
Our SEO & GEO engagements start with a share-of-answer audit against your current content and ICP queries, then produce a 90-day plan plus a content calendar. Start a conversation if you want to know where you stand today.