The handbook thesis, defended.
Why we ship a long-form document at the end of every engagement, and why the slide deck is about to die as a consulting artifact. Featured.
Twenty-four essays, four pillars, one voice. Filter by pillar or scan by date. Essays we have not yet published appear with a muted byline and a draft indicator. We link only the ones that are shipped.
When a 140-page handbook is generated against the same 20k-token context forty times, the cache pays for the whole engagement.
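The claim above is back-of-envelope arithmetic, and it can be checked directly. A minimal sketch, assuming a hypothetical base input price and the commonly published cache multipliers (roughly 1.25x base to write a cache entry, 0.1x base to read it back; check current provider pricing, these are assumptions, not quoted rates):

```python
# Cache economics for 40 generations over the same 20k-token context.
# BASE is a hypothetical $/input-token price; the 1.25x write and 0.1x
# read multipliers are assumed, not quoted from any provider's price list.
BASE = 3.00 / 1_000_000   # $ per input token (illustrative)
CONTEXT = 20_000          # shared context size in tokens
RUNS = 40                 # handbook generations per engagement

# Without caching: every run pays full price for the whole context.
uncached = RUNS * CONTEXT * BASE

# With caching: one cache write, then cheap cache reads for the rest.
cached = CONTEXT * BASE * 1.25 + (RUNS - 1) * CONTEXT * BASE * 0.1

print(f"uncached ${uncached:.2f}  cached ${cached:.2f}")
# → uncached $2.40  cached $0.31
```

Under these assumptions the cached run costs roughly an eighth of the uncached one, which is the shape of the "pays for the whole engagement" argument.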
Every deliverable runs through four stages now: draft, self-critique, human edit, eval.
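The four stages named above are strictly sequential, which is the whole point: nothing reaches eval without a human edit in between. A minimal sketch of that ordering; the stage functions here are hypothetical placeholders that only record provenance:

```python
# Sequential four-stage deliverable pipeline: draft -> self-critique ->
# human edit -> eval. Real stages would transform `doc`; these placeholders
# just record which stages ran, and in what order.
def run_pipeline(doc: str) -> tuple[str, list[str]]:
    trail: list[str] = []
    for stage in ("draft", "self-critique", "human edit", "eval"):
        trail.append(stage)  # a real stage would rewrite `doc` here
    return doc, trail

_, trail = run_pipeline("handbook section")
print(trail)
# → ['draft', 'self-critique', 'human edit', 'eval']
```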
When to turn on extended thinking, when to pay for it in tokens, and how to know if your prompt actually needs it.
Our template library has been through three Claude versions without a rewrite. The principles that make a template portable.
Concrete consulting examples: a retrieval-grounded pentest reviewer, a code-reading auditor, and a topical map generator.
Prompt patterns and real traces from our handbook production pipeline.
Retrieval patterns for long-running engagements. Why bigger contexts do not remove the need for careful assembly.
Sixty percent of pentest reports we inherit are unreadable to the executive and unactionable to the engineer.
One policy, one role, one bucket. The same misconfiguration shows up at seventy percent of Series A SaaS companies we audit.
How we use it without letting it run the engagement.
A workshop trace with actual prompts, actual diagrams, and actual decisions.
Template plus how Claude shortens the write-up without hiding the decisions that matter.
A checklist for security readers who want to audit their own LLM-backed products without becoming AI researchers.
A structural audit of what actually gets cited in Claude, ChatGPT, Perplexity, and Gemini answers. Data, not theory.
A tactical guide to being the source that LLMs quote, not just the page they read.
With ChatGPT and Perplexity absorbing the top-of-funnel, the snippet game is a loser's game. What we are doing instead.
Dual optimization with one content model. Hub, spoke, and the citation surface in between.
Practical robots.txt and llms.txt patterns, with the list of crawlers we actually allow.
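For flavor, a sketch of the kind of allow-list such a policy produces. The user-agent strings below (GPTBot, ClaudeBot, PerplexityBot) are real published crawler names, but which ones to allow, and the `/drafts/` path, are illustrative assumptions, not our actual configuration:

```
# robots.txt — illustrative allow-list for answer-engine crawlers.
# Crawler names are real; the policy choices here are hypothetical.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: *
Disallow: /drafts/
```

An `llms.txt` file sits alongside this, pointing crawlers at the canonical long-form pages rather than the index.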
When an operator plus Claude can do a week of classical consulting work in a day, the hourly rate incentivizes slower delivery.
Our actual cost structure in the first year, broken down line by line. Published so the next founder does not have to guess.
The pricing framework we use, including the two places we almost always underprice.
Why we chose not to hire associates. What a senior plus a model produces that an associate cannot.
The decision tree we actually use with prospects, with the three questions that settle it in under ten minutes.
We publish one essay a week on Tuesdays. Subscribe here to get each piece in your inbox.