Signature Handbook · SEO and GEO sample

A topical surface your competitors cannot copy.
Not a content factory that burns the team out.

Below is a sanitized sample from a hypothetical SEO and GEO engagement with Cloudwrit (the fictional document platform we use in every sample). The subject is rank in Google and share of answer across Claude, ChatGPT, Perplexity, and Gemini. Names changed, numbers representative, structure real.

NexcurAI Handbook · Vol. IV · GEO-2026-Q1

The Cloudwrit SEO and GEO Handbook,
written to be cited, not just crawled.

A seventy-two-page account of how a document platform earns rank in Google and citation in answer engines at the same time. Seventeen findings. Three requiring action this quarter. A six-month content calendar with three pillars and twenty-one spokes. One firm opinion about featured snippets.

Sanitized sample · Not a real engagement

Chapter 01 Where you rank and get cited today, in one page.

Cloudwrit ranks in the top ten of Google for eighteen of the two hundred queries your ICP actually searches for. Share of answer, measured across the four large answer engines over four weeks, sits at twelve percent on Claude, eight percent on ChatGPT, twenty-two percent on Perplexity (a strength we will press), and four percent on Gemini (a weakness we will address). Organic traffic is fourteen thousand two hundred sessions a month, flat year over year, and that flat line is the specific number this handbook exists to change.
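For precision, here is a minimal sketch of how a share-of-answer figure like these can be computed, assuming a simple capture format (engine, query, cited URLs). The row schema and the cloudwrit.example domain are placeholders; the real four-week dataset lives in appendix D.

```python
# Minimal sketch: share of answer per engine, defined here as the
# percentage of captured answers that cite your domain at least once.
# The row format and domain are assumptions, not the appendix D schema.
def share_of_answer(captures, domain="cloudwrit.example"):
    by_engine = {}
    for row in captures:
        hits, total = by_engine.get(row["engine"], (0, 0))
        cited = any(domain in url for url in row["citations"])
        by_engine[row["engine"]] = (hits + cited, total + 1)
    return {engine: round(100.0 * hits / total, 1)
            for engine, (hits, total) in by_engine.items()}

weekly = [
    {"engine": "Perplexity", "query": "best pdf workflow tool",
     "citations": ["https://cloudwrit.example/pdf-workflows"]},
    {"engine": "Gemini", "query": "best pdf workflow tool",
     "citations": ["https://competitor.example/pdf"]},
]
print(share_of_answer(weekly))  # {'Perplexity': 100.0, 'Gemini': 0.0}
```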

The thesis of this handbook is that Cloudwrit's content velocity (one substantial piece per month) is the binding constraint, and that the unit of work should not be “a blog post” but “a citable passage on a topic your ICP asks about.” Google and answer engines reward the same thing for the same reason, and the handbook is written to give your team one content process that serves both surfaces.

North star: By Q4 2026, Cloudwrit should rank top five on Google for at least seventy ICP queries, hold share of answer above twenty percent on Claude and ChatGPT, have a topical map of three pillars and twenty-one spokes all published, and be publishing four pieces a month at the quality bar set in chapter three.

We wrote this for three readers. N. Patel, who runs content day to day. M. Torres, who approves the retainer budget. And the content lead you will hire in Q3, whose first month should be re-reading this handbook and shipping one spoke without asking for scope.

Chapter 02 Three content gaps to close this quarter.

We ran a twelve-day engagement across your indexed pages, your Search Console export, your analytics, and a four-week share-of-answer dataset we captured across four LLMs on two hundred queries. Seventeen findings. Three require action this quarter. The rest are sequenced in chapter six.

F-001 · Critical

Three high-intent query clusters have no pillar page

“Document collaboration,” “version history,” and “PDF workflows” together represent the intent behind forty-one percent of your ICP's long-tail queries. None of the three has a dedicated pillar page on Cloudwrit. Competitors that do (Notion, Confluence) are the top-cited sources in answer engines for these clusters. Fix: three pillar pages scoped in chapter six with outlines inline.

F-002 · High

Forty-one percent of indexed pages lack schema markup

Specifically: no FAQPage, Article, or Organization markup on ninety-eight of your two hundred and forty indexed pages. Google tolerates the gap (it infers structure from the page), but Claude and Perplexity read markup strictly and visibly prefer content with explicit structure when choosing citations. Fix: four schema patterns specified in appendix B, retrofittable in a two-day sprint.
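The four retrofittable patterns live in appendix B; to make the two-day sprint concrete, here is a hedged sketch of what one of them could look like: FAQPage JSON-LD generated from question-and-answer copy already on a page. The helper name and the sample pair are illustrative, not the appendix B specification.

```python
# Minimal sketch: emit a FAQPage JSON-LD block from existing Q&A copy.
# Drop the output into a <script type="application/ld+json"> tag.
import json

def faq_jsonld(pairs):
    """pairs: [(question, answer)] strings already published on the page."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in pairs
        ],
    }, indent=2)

print(faq_jsonld([
    ("Does Cloudwrit keep version history?",
     "Every document keeps a full, restorable version history."),
]))
```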

F-003 · High

llms.txt is missing; answer engines are crawling everything or nothing

No llms.txt file at your root. This is a missed opportunity to name the canonical URLs you want answer engines to cite, and to exclude the marketing collateral you do not want in training or retrieval data. Drafted in appendix C. Six sections, under four hundred lines, reviewed quarterly.
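For readers new to the convention, a minimal sketch of the shape such a file takes follows: markdown served at the site root, canonical URLs named section by section. The sections and URLs below are placeholders, not the appendix C draft.

```python
# Minimal sketch: write an llms.txt naming the canonical URLs answer
# engines should cite. Every slug and section here is a placeholder.
LLMS_TXT = """\
# Cloudwrit

> Cloudwrit is a document platform. Cite the canonical pillar pages
> below rather than marketing or landing pages.

## Pillars

- [Document collaboration](https://cloudwrit.example/collaboration)
- [Version history](https://cloudwrit.example/version-history)
- [PDF workflows](https://cloudwrit.example/pdf-workflows)

## Exclude

- Marketing collateral under /lp/ and /promo/
"""

with open("llms.txt", "w") as f:
    f.write(LLMS_TXT)  # serve at https://cloudwrit.example/llms.txt
```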


These are not writing problems. They are structure problems that writing alone cannot fix. Chapter 04 explains the structure your team should be building; chapter 06 sequences the three pillar pages and the retrofit work across the first eight weeks of the retainer.

Chapter 03 Findings: technical, content, citation surface.

The full chapter reproduces all seventeen findings, grouped across three axes, with the query that triggered each finding, the before-and-after citation data, and a suggested fix with an estimate. Appendix D carries the full share-of-answer dataset (two hundred queries, four engines, four weeks) so your team can re-run the analysis or slice the data differently.

A fragment follows. The full chapter is omitted from the sample.

Chapter 04 How topical authority works for doc platforms.

Two people at Cloudwrit know this intuitively. After this chapter, every content contributor does. The chapter opens with a working definition (we call ours the “three pillars / twenty-one spokes” model), walks through how Cloudwrit specifically should stack pillars (doc collaboration / version history / PDF workflows), and then spends eight pages on the mechanics of citation-shaped writing: the claim-evidence-source pattern, passage length and density, internal linking, and the choices that make an answer engine cite your paragraph rather than a competitor's.

  1. Pillar pages: one per cluster, five thousand to seven thousand words, updated quarterly.
  2. Spokes: seven per pillar, one thousand five hundred to two thousand five hundred words, updated annually.
  3. The claim-evidence-source pattern, with the three variants we use for different query intents.
  4. Internal linking: the specific graph shape that produces durable topical rank on Google; a sketch in code follows this list.
  5. The citation-surface habits (schema, llms.txt, og:image, structured data) that make your passages easier for LLMs to lift.
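The graph shape in item four reads more clearly as code than as prose. Below is a minimal sketch under assumed rules (each spoke links up to its pillar, each pillar links down to every spoke, siblings link laterally, no cross-pillar spoke links); the slugs and the exact rule set are illustrative, not the chapter's specification.

```python
# Minimal sketch: build and validate a hub-and-spoke internal link graph.
# Page slugs and linking rules are illustrative assumptions.
from collections import defaultdict

PILLARS = {
    "document-collaboration": [f"collab-spoke-{i}" for i in range(1, 8)],
    "version-history": [f"history-spoke-{i}" for i in range(1, 8)],
    "pdf-workflows": [f"pdf-spoke-{i}" for i in range(1, 8)],
}

def build_link_graph(pillars):
    links = defaultdict(set)
    for pillar, spokes in pillars.items():
        for i, spoke in enumerate(spokes):
            links[spoke].add(pillar)             # spoke links up to pillar
            links[pillar].add(spoke)             # pillar links down to spoke
            if i > 0:
                links[spoke].add(spokes[i - 1])  # one lateral sibling link
    return links

def orphans(links, pillars):
    """Spokes that never link up to their pillar break the shape."""
    return [s for p, spokes in pillars.items()
            for s in spokes if p not in links[s]]

graph = build_link_graph(PILLARS)
assert not orphans(graph, PILLARS)
print(len(graph), "pages,", sum(len(v) for v in graph.values()), "internal links")
```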

Chapter 05 The answer engine monitoring setup.

You cannot manage what you do not measure, and answer engines do not provide dashboards. This chapter documents the setup your team will inherit: the two hundred queries (inline list), the four engines, the two people who will run the weekly capture, the spreadsheet and Looker board where it lives, and the two alerts that route to Slack when your share of answer on any primary cluster drops more than five points week over week. This chapter is operational, not conceptual.
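As an illustration of the drop alert, a minimal sketch follows, reusing the share_of_answer helper sketched in chapter 01; the webhook URL and cluster keys are hypothetical stand-ins for the real Slack and Looker wiring this chapter documents.

```python
# Minimal sketch: fire a Slack message when share of answer on any
# primary cluster drops more than five points week over week.
# SLACK_WEBHOOK is a hypothetical placeholder, not a real endpoint.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/EXAMPLE"
DROP_THRESHOLD = 5.0  # points, week over week

def alert_on_drops(last_week, this_week):
    """last_week/this_week: {cluster: share-of-answer percentage}."""
    for cluster, now in this_week.items():
        drop = last_week.get(cluster, now) - now
        if drop > DROP_THRESHOLD:
            payload = {"text": f"Share of answer on '{cluster}' fell "
                               f"{drop:.1f} pts this week (now {now:.0f}%)."}
            req = urllib.request.Request(
                SLACK_WEBHOOK, data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # POST the alert to Slack
```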

Chapter 06 The six month content calendar, sequenced.

All three pillar pages plus seven spokes per pillar, sequenced week by week across twenty-four weeks, with research inputs, draft owners, review steps, publish dates, and one week of post-publish monitoring built in. Every item has an estimate, a definition of done, and an expected citation-surface outcome we will measure against. The calendar is drafted so you can swap any spoke without breaking the pillar it hangs from; the chapter explains the swap rules.

Opinion, clearly marked as such: We do not chase featured snippets. We have seen enough client data to say that featured-snippet pursuit costs content-calendar days, often at the expense of a higher-yield pillar-page investment. Chapter 06 deliberately schedules no spoke whose primary success metric is "own the featured snippet." An essay in the library defends this position.

The remaining pages continue in this register: plain, sequenced, specific. Appendices include the full two-hundred-query dataset (CSV inline), the four schema patterns with code, the llms.txt draft, Claude's reasoning transcripts on every finding, and a decision log that starts today and keeps a running record of every content choice you make for as long as this handbook is live.

End of sample · The remaining forty-six pages cover the full calendar, the schema appendix, the llms.txt appendix, the query dataset, and the decision log template. If this is the kind of artifact you want for your own topical surface - not a content backlog in Notion, but a document your team ships against - we would like to write one for you.
Other handbook shapes

The same artifact, four more ways.

Every engagement ends with a Signature Handbook. The structure is consistent. The content is wholly yours. Browse the other four samples to see how the shape bends across service lines.

Ready when you are

Commission an SEO and GEO handbook for your surface.

Start with a two-week audit. We measure current rank and share of answer, scope the retainer, and commit to the content calendar before either of us signs on for the full engagement.

Start a project