What we build, and what we do not
We build marketing sites, SaaS product surfaces, dashboards, docs portals, internal tools, and the web layer of hybrid web-plus-native products. We do not build mobile-only products, Electron apps, or games. We will integrate with your native mobile clients, not author them.
Typical stack: TypeScript, React (Next.js or Vite), Tailwind or CSS tokens, Postgres on a managed platform (Supabase, Neon, RDS), edge hosting where it makes sense. We adapt to your stack if it is reasonable; we push back if it is not.
Week 0: intake
Before the engagement starts, we need to know:
- The product definition: what this is, whom it is for, what the minimum-viable version contains.
- The non-goals: what you explicitly do not want in v1.
- Constraints: budget, deadline, team availability, existing systems we have to play nicely with, regulatory requirements.
- Existing assets: design system (if any), brand kit, copy, analytics, auth provider, existing backend.
- The success criteria: what does “shipped” mean to you, measurably?
The intake is a structured questionnaire (the same one in /templates/_shared/client_intake.md) plus an interview. We write the intake summary and send it back for you to confirm before we start. This is the document we will refer to when priorities come up for debate during the build.
Week 1: architecture
The first week is design. Not UI; system design.
- Data model. Entities, relationships, multi-tenancy approach, access control model.
- Auth and session. Provider, flow, roles, impersonation, MFA posture.
- Navigation and information architecture. Routes, shells, empty states, error states.
- Performance budget. Target Lighthouse scores, target Core Web Vitals, target TTI on a 3G connection.
- Accessibility target. WCAG 2.2 AA on every page we ship. Non-negotiable.
- Observability. Error tracking, structured logs, analytics events.
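To make the multi-tenancy and access-control items concrete, here is a minimal sketch of the pattern we argue for in the ADR: tenant scoping enforced inside the data-access helpers, never left to callers. The names (`TenantContext`, `listProjects`, `requireRole`) are hypothetical for illustration, and an in-memory array stands in for a real Postgres table (which in production would also carry row-level security).

```typescript
type Role = "owner" | "admin" | "member";

interface TenantContext {
  tenantId: string;
  userId: string;
  role: Role;
}

interface Project {
  id: string;
  tenantId: string;
  name: string;
}

// Stand-in for a Postgres table; a real build would wrap a DB client here.
const projects: Project[] = [
  { id: "p1", tenantId: "t1", name: "Alpha" },
  { id: "p2", tenantId: "t2", name: "Beta" },
];

function listProjects(ctx: TenantContext): Project[] {
  // The tenant filter lives inside the helper, so a caller cannot forget it.
  return projects.filter((p) => p.tenantId === ctx.tenantId);
}

function requireRole(ctx: TenantContext, atLeast: Role): void {
  const order: Role[] = ["member", "admin", "owner"];
  if (order.indexOf(ctx.role) < order.indexOf(atLeast)) {
    throw new Error(`requires ${atLeast}, got ${ctx.role}`);
  }
}

const ctx: TenantContext = { tenantId: "t1", userId: "u1", role: "member" };
console.log(listProjects(ctx).map((p) => p.name)); // only tenant t1's projects
```

The point of the shape is that cross-tenant reads become impossible to write by accident: every query helper demands a `TenantContext`, and role checks throw before any mutation runs.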
At the end of week 1, you receive the architecture decision record (ADR) with these items settled. You sign off; we start building.
Weeks 2 onward: the build
We work on one-week PR cycles. Every PR ships through the same gate:
- TypeScript clean. Zero type errors. No `any` outside a documented integration boundary.
- Lint clean. ESLint + Prettier on every commit.
- Tests. Unit tests on business logic, integration tests on critical flows, Playwright on the happy paths.
- Accessibility check. axe-core on every page rendered in a test; manual keyboard-navigation check on any new interactive component.
- Performance check. Lighthouse CI on the pages that changed, with regression threshold.
- Design system compliance. New UI uses existing tokens and components unless an ADR authorizes a new one.
- Operator review. A named human reviews the PR and signs off. Claude may have drafted some of the code; the operator owns the commit.
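The "no `any` outside a documented integration boundary" gate deserves an illustration. At the boundary, untyped data enters as `unknown` and is narrowed by a guard before the rest of the codebase sees it; `any` never leaks past that point. The webhook shape and names below are hypothetical, not a real client integration.

```typescript
// Hypothetical third-party webhook payload, documented at the boundary.
interface InvoicePaidEvent {
  type: "invoice.paid";
  invoiceId: string;
  amountCents: number;
}

// Type guard: the only place the raw shape is inspected.
function isInvoicePaidEvent(value: unknown): value is InvoicePaidEvent {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    v.type === "invoice.paid" &&
    typeof v.invoiceId === "string" &&
    typeof v.amountCents === "number"
  );
}

// Downstream code only ever works with the narrowed type.
function handleWebhook(rawBody: string): string {
  const payload: unknown = JSON.parse(rawBody); // unknown, not any
  if (!isInvoicePaidEvent(payload)) {
    return "ignored";
  }
  return `paid ${payload.invoiceId}: ${payload.amountCents}`;
}

console.log(
  handleWebhook('{"type":"invoice.paid","invoiceId":"inv_1","amountCents":4200}')
); // → paid inv_1: 4200
```

Inside the guard, one cast to `Record<string, unknown>` is the documented exception; everything beyond `handleWebhook` is fully typed.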
Claude in the build
Claude drafts. The operator reviews. Concretely:
- Claude generates initial component scaffolds from the design.
- Claude proposes test cases from the spec; the operator reviews and adds edge cases.
- Claude handles mechanical refactors: renames, file moves, import fixups.
- Claude does not make architectural decisions. It drafts the pattern; the operator accepts or reworks.
- Claude does not run migrations or mutate production.
Every PR has a named human author. If Claude drafted most of the code, the operator is still the author and is responsible for the code's behavior in production.
Week N-1: pre-launch
The week before launch, we freeze new features and focus on:
- Performance tuning against the budget set in week 1.
- Accessibility audit, full pass, manual plus automated.
- Security pass: basic hygiene against the web-relevant sections of our IAM hardening field manual and the application-layer checks in the pentest report template.
- Monitoring and alerting: error tracking on, alerts tuned, dashboards live.
- Runbook for the first week post-launch: who is on call, what the escalation path is, what we roll back to if a critical regression lands.
- Content review: the operator reads every page for voice, for accuracy, for typos. AI slop does not ship to production.
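The performance tuning step runs against the same gate as every PR: Lighthouse CI asserting against the week-1 budget. A sketch of a `lighthouserc.js` that enforces it; the URLs and thresholds here are placeholders, not our standard numbers, which come from each engagement's ADR.

```javascript
// lighthouserc.js — sketch of a budget-as-regression-gate config.
module.exports = {
  ci: {
    collect: {
      // Pages that changed in the release; placeholder URLs.
      url: ["http://localhost:3000/", "http://localhost:3000/dashboard"],
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Category scores from the week-1 budget.
        "categories:performance": ["error", { minScore: 0.9 }],
        "categories:accessibility": ["error", { minScore: 1 }],
        // Core Web Vitals thresholds, in ms and unitless CLS.
        "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
        "cumulative-layout-shift": ["error", { maxNumericValue: 0.1 }],
      },
    },
  },
};
```

Wired into CI, any page that regresses below the budget fails the build rather than shipping and getting noticed later.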
Launch day
We deploy with a named operator present, a rollback plan in hand, and monitoring open. Deploys outside business hours are avoided unless required by the client's customer timezone. Launch day ends with a five-minute handoff standup documenting what was deployed and what to watch for overnight.
The sixty-day post-launch window
Included in every Signature Web engagement: sixty days of post-launch support, at no extra fee, for:
- Regression fixes on features we shipped.
- Performance regressions caused by data volume changes.
- Accessibility findings we missed that the client or a user reports.
- Minor copy and content revisions.
- Handover training to the client's team.
Out of scope in the window: new features, third-party integrations we did not include in the original scope, support for use cases the product was not designed for.
Honest timelines
What actually ships in six weeks:
- Landing site with CMS, analytics, and forms: yes, usually week 3 or 4.
- SaaS dashboard with auth, three to four primary views, basic admin: yes, if the backend exists.
- New SaaS product from scratch, including a real backend with multi-tenancy, billing, admin, onboarding: no. Ten to fourteen weeks, minimum. We will say so at intake.
- Replacement of an existing in-production SaaS: varies wildly; we scope case by case.
We have walked away from engagements where the requested scope did not fit the available time. It is the honest thing to do.
What you own at handover
- The source code, in your repository, with full commit history.
- The deployment pipeline, in your cloud account.
- The design system as tokens and components, reusable and documented.
- The Signature Handbook: architecture, decisions, open issues, ninety-day roadmap.
- The runbook for the production environment.
- Access handover: every credential the engagement used, transferred to your vault.