C1.1 Case study · Fictional template
Fictional · representative · not a real client

Cloudwrit · a Signature Security engagement.

A composite growth-tier security engagement for a fictional Series B document-intelligence SaaS. It is the template we will follow, verbatim, when we publish our first real case study in Q2 2026. Every section below is what you would see on that real page, with a real client's numbers in place of these.

Cybersecurity Growth tier Signature Handbook 6 week engagement Q1 2026 (representative)

Published: 2026-04-19 (fictional template) · Engagement close (representative): 2026-03-14 · Drafted with Claude, reviewed by the operator on 2026-04-19.

1. Context

Client: Cloudwrit (fictional). A Series B document-intelligence SaaS: customers upload contracts, invoices, and clinical notes; Cloudwrit extracts structured fields, redacts PII, and routes the documents into downstream systems. The same fictional client we use throughout the Signature Handbook sample and much of our writing, so details are continuous across pages.

  • Stage: Series B, 18 months post-Series-A, eight-figure ARR.
  • Headcount: 62 people, engineering 38, security headcount 1 (platform engineer wearing a second hat).
  • Customer profile: mid-market SaaS, regulated industries (legal, insurance, health). Customers routinely ask for SOC 2 Type II and frequently for HIPAA BAAs.
  • Trigger: a prospective enterprise customer returned a 240-item security questionnaire. The internal platform engineer needed a credible answer to every line, fast, and a defensible posture for the audit to follow.

Cloudwrit was not in a crisis. It was at the moment just before a crisis: growing faster than its security function, with a month-long runway before enterprise procurement would start auditing against the questionnaire.

2. The engagement shape

Cloudwrit chose the Signature Security tier from our cybersecurity service line. Fixed scope, fixed price, fixed timeline.

  • Service line: Cybersecurity.
  • Tier: Growth (Signature Security).
  • What the client engaged us to do: an external and internal penetration test of the production SaaS, an IAM and supply-chain architecture review, a threat model of the document-ingestion path, and a Signature Handbook that sequences the remediation into quarterly waves the one-person security function can actually run.
  • Timeline: six weeks from kickoff to handover. Weeks 1 to 2, intake and external recon. Weeks 2 to 4, internal test and architecture review. Week 5, writeup and handbook drafting. Week 6, readout, client edits, handover.
  • Budget: $65,000 fixed. No hourly billing, no change orders (see the underlying argument in why we do not bill hourly).
  • Team: one senior operator, one reviewer (external specialist retained for cross-check), Claude for drafting and coverage expansion. Every finding validated by the human operator before delivery.
  • Deliverable: the Signature Handbook, plus a sanitized pentest report suitable for sharing with enterprise prospects under NDA.

3. What we found

Eighteen findings in total. Three critical, five high, seven medium, three low. We publish the critical three here with permission; the rest are in the handbook.

F-01. Tenant isolation bypass via shared document store

Severity: critical. Likelihood: high. Exploit path: an authenticated user in tenant A could construct a document reference that resolved inside tenant B, because the document-store key derivation used a tenant-prefixed hash but the lookup path did not re-verify the tenant claim against the session. A user with curl, basic API knowledge, and two test accounts on different tenants could read any other tenant's uploaded documents.

Demonstrated in: staging environment (not production), using two throwaway tenant accounts we provisioned with engineering's written approval. We captured HTTP evidence, did not exfiltrate real data, and notified the client inside four hours of confirmation.

F-02. Long-lived AWS access keys on CI runners

Severity: critical. Likelihood: high. GitHub Actions runners held AWS access keys with broad S3 and SSM permissions, valid since the original infrastructure bootstrap in 2023. Keys were not rotated, not scoped per-workflow, and were reachable by any maintainer with write access to the main repo. Compromise of a single maintainer laptop would have given an attacker persistent production data-plane access.

F-03. Prompt-injection-driven data exfiltration in redaction service

Severity: critical. Likelihood: medium. Cloudwrit's redaction pipeline used Claude to identify PII in uploaded documents, then piped the "redact this text" instruction and the document content into the same prompt. A malicious document with embedded instructions could cause the redactor to return the document without redaction, or to emit document content into an unrelated log channel that was viewable by a neighboring tenant's support view.

This is a class of finding that would not have surfaced in a classical pentest, because it requires the tester to understand the LLM pipeline. It is the kind of finding we are built to surface, and one of the reasons Cloudwrit engaged us over a classical firm. See Claude is not a pentester for how we think about the division of labor.

Five highs and seven mediums, summarized

  • Session cookie missing Secure and SameSite attributes in two code paths (high).
  • Admin console accessible from the public internet without IP allow-list (high).
  • Webhooks signed with HMAC but secret stored in environment variable readable by all service containers (high).
  • No rate-limit on password-reset endpoint (high).
  • PII present in application logs at three named call sites (high).
  • Plus seven mediums covering dependency freshness, CSP gaps, error-message disclosure, and observability hygiene.

4. What we shipped

Two artifacts.

The Signature Handbook. Sixty-four pages in its final form. Executive summary. Finding catalog with reproduction, evidence, and remediation for every finding. IAM architecture chapter with before / after diagrams. Threat model of the ingestion path (STRIDE plus LLM-specific extensions). A ninety-day remediation roadmap sequenced into three waves, each scoped to what one platform engineer plus part-time contractor help could deliver per quarter. A sanitized companion version suitable for sharing with enterprise prospects under NDA. Published in HTML, PDF, and Markdown.

The handbook-thesis companion sample on this site is drawn from this engagement's deliverable.

The pentest report. Thirty-one pages. Every finding written as prose in the form described in the pentest report as a literary form: one claim, the evidence, the remediation, the residual risk. Attached HTTP captures where relevant. Signed by the operator who ran the test. Handed over live, not dropped as a PDF.

5. The outcome

Measured at ninety days after handover (the window in which we guarantee free re-test of any remediated finding).

  • All three criticals closed in week 4 post-handover. We re-tested each and signed off. F-01 required a redesign of the document-reference path, not a patch; the client's lead engineer rewrote it in six working days and we validated the fix the week after.
  • Four of five highs closed by day 60. The fifth (admin console allow-list) was deliberately deferred because enterprise customers required a published IP range, and the client's platform team instead wired up zero-trust proxy access ahead of wave 2 of the roadmap. We countersigned the decision.
  • SOC 2 Type I audit passed at day 75. The handbook's evidence chapter was reused verbatim by the audit firm for control narratives.
  • Enterprise security questionnaire returned with 236 of 240 items answered "yes" with linked evidence, four items answered "compensating control in place" with a named roadmap item. Deal closed at day 70.
  • Internal time saved: the platform engineer reported that the handbook cut their questionnaire-response time for the next three enterprise prospects from "about three weeks of calendar time per prospect" to "about two days". This was the outcome the client had bought, and it showed up immediately.

6. What we would do differently

Every engagement has at least one. This one has three, and we are publishing them because a case study that does not admit any mistake is advertising copy.

  • We under-scoped the LLM-pipeline review at the intake stage. We had one day allocated to the redaction pipeline's prompt surface and ended up spending three and a half. Finding F-03 was worth the overage, but the overrun ate into the architecture review budget and we had to defer two observability-hygiene items to a quarterly follow-up. For the next engagement in this shape we are going to explicitly separate "classical surface" and "LLM surface" in the scope and budget them independently.
  • We let the handbook drift into sixty-four pages when fifty would have served. The client read the whole thing, but the VP of engineering read the executive summary and the roadmap, and the board read only the executive summary. We are tightening the default handbook length in the template library and introducing a mandatory one-page board-ready summary as page one.
  • We did not anticipate the zero-trust-proxy workaround for F-05. The client made a better decision than the one we recommended. We were recommending an IP allow-list because it was the straightforward control. The client's platform team recognized that zero-trust proxy access was a strictly better long-term control that removed an entire class of admin-surface exposure. We noted this in the retrospective, updated the handbook to record the decision, and now our default recommendation in that class of finding leads with zero-trust proxy and lists IP allow-list as a faster fallback.

7. Client quote

"We went in looking for a pentest. We came out with a document we still reference in standups nine months later. The redaction-pipeline finding alone would have justified the engagement; the handbook turned out to be the asset we did not know we needed. When the next enterprise questionnaire landed I opened one file instead of three Google Docs."

Fictional VP of Engineering · Cloudwrit (representative quote shape)

Methodology and disclosure

This case study is fictional. It is a composite of patterns we have run in our independent practice before NexcurAI incorporated, plus the methodology described in the Signature Handbook sample and the cybersecurity service page. Every finding class described above is a finding class we have shipped in real work; the numbers are representative of the engagement tier.

We are publishing it in advance of our first real case study (Q2 2026) so prospective clients can see the shape of the deliverable before they are on our calendar. When the first real case publishes, it will follow this structure section-for-section, with a real client's numbers in place of these, and with the client's written consent at the foot of the page.

Related

C1.1.cta Start a conversation

Commission an engagement of your own.

Every real engagement ships a handbook. Every close ships a case study within two weeks, subject to your consent. You can read the shape above before you decide.

Start a project Cybersecurity service line All case studies