Legal · Security

We eat our own cooking.

We ship security work. That means we hold ourselves to the same standard we hold our clients. This page documents our current posture, our disclosure process, and the contacts for any security concern.

Last updated: April 2026 · Version 1.0 · Next scheduled review: October 2026

Reporting a vulnerability

If you believe you have found a security issue in any NexcurAI property (nexcur.ai, any subdomain, any open-source repo, our APIs), email us at hello@nexcur.ai with:

  • A clear description of the issue
  • Reproduction steps
  • Your name or handle (for credit), or “anonymous” if you prefer

You will hear back within 24 hours on business days; we do not respond during vacations or holidays, but we will acknowledge on return. Critical findings are triaged within one business day. Disclosure is coordinated: we target a fix within 30 days, longer if the issue is architectural and you are willing to wait.

Safe harbor

If you follow this policy in good faith, we will not pursue legal action, DMCA takedowns, or similar. Testing in scope:

  • nexcur.ai (this site)
  • *.nexcur.ai subdomains (except staging.nexcur.ai, which is internal-only)
  • Open-source code in github.com/nexcur

Out of scope: social engineering of our team, physical security testing, client engagement repositories (ask us first), DoS / resource exhaustion, automated scanners at volume.

Acknowledgements

We maintain a public acknowledgements page for security researchers who have reported valid findings. With your consent, we credit you by name or handle, with a link of your choosing. We do not currently offer a monetary bug bounty; we will reassess in 2027.

Our security posture

Access & identity

  • All operator accounts enforce hardware-backed 2FA (YubiKey or equivalent).
  • SSO via Google Workspace with security keys required for access to client data.
  • Least-privilege IAM in all cloud accounts, reviewed quarterly.
  • Zero standing admin; all admin access granted just-in-time with audit log.

Data handling

  • Client engagement content segregated per engagement in dedicated private repositories.
  • Secrets stored in 1Password Teams, never in code or config files.
  • Access tokens granted to operators rotate every 90 days or on role change.
  • Laptops enforce FileVault / BitLocker full-disk encryption.

Infrastructure

  • Website: static HTML, served via edge CDN with HTTPS-only, HSTS enforced.
  • Internal systems on AWS with CloudTrail, GuardDuty, Config, and Inspector enabled by default.
  • Supply chain: dependency scanning, signed commits, pinned versions where practical.
  • API calls to Anthropic use API keys scoped per-operator, rotated on role change.
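
As an illustration of the HTTPS-only, HSTS-enforced posture above, a static site served through an edge CDN typically returns response headers along these lines. The values here are representative hardening defaults, not our exact CDN configuration:

```http
HTTP/1.1 200 OK
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
```

The `max-age` of two years plus `includeSubDomains` and `preload` is the baseline required for inclusion in browser HSTS preload lists.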

Operational practices

  • Quarterly internal tabletop exercise (data breach, vendor compromise, rogue operator).
  • Annual external pentest by an independent firm, summary published here.
  • Incident playbooks for: data incident, phishing of operator, Anthropic API key compromise, client account compromise.
  • On-call rotation for security reports during business hours, 24/7 during active engagements.

AI-specific safeguards

  • Claude usage runs under the zero-retention option where the endpoint supports it.
  • Client engagement content is never used to train or fine-tune any model.
  • Prompts are versioned, and every prompt change must pass a nightly regression suite before it ships.
  • Every Claude output that reaches a client deliverable has been reviewed by a named human operator.

Compliance

  • Working toward SOC 2 Type I in Q4 2026; Type II in 2027.
  • GDPR, CCPA, and PIPEDA compliant for personal data handling.
  • HIPAA and PCI DSS are handled on a per-engagement basis with explicit SOWs; we do not store PHI or cardholder data by default.
  • Quarterly review of Anthropic Usage Policy and Commercial Terms, logged publicly at /compliance/anthropic-review-log.md.
  • Per-service-line fallback model plan published at /compliance/fallback-models.md, exercised quarterly.

PGP key

For encrypted disclosures, use our PGP key (fingerprint below). Full key at /pgp.asc.

Fingerprint: 4A2B 8C1D 5E9F 2A34 7B01  C8D2 A5F6 BB12 3C4D 5E6F
UID: NexcurAI Security <hello@nexcur.ai>
Key type: ed25519
Valid: 2026-01 through 2028-01

Canary

We publish a monthly canary at /canary.txt asserting that we have not received any law enforcement request that we cannot legally disclose. The canary is signed with the PGP key above.
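
Verifying the key fingerprint and the canary signature takes only standard GnuPG tooling. A sketch of the steps follows; the curl and gpg commands are shown as comments because they require network access, and the URL paths match those published above:

```shell
# Fingerprint published on this page, with spaces removed for comparison.
expected="4A2B8C1D5E9F2A347B01C8D2A5F6BB123C4D5E6F"

# Fetch the key and the current canary, then verify (requires network + gpg):
#   curl -sO https://nexcur.ai/pgp.asc
#   curl -sO https://nexcur.ai/canary.txt
#   actual=$(gpg --show-keys --with-colons pgp.asc | awk -F: '/^fpr/ {print $10; exit}')
#   [ "$actual" = "$expected" ] || echo "FINGERPRINT MISMATCH -- do not trust"
#   gpg --import pgp.asc
#   gpg --verify canary.txt

# Sanity check that can run offline: a v4 PGP fingerprint is 40 hex characters.
echo "${#expected}"   # → 40
```

If `gpg --verify` reports a good signature from the UID above and the fingerprint matches, the canary is authentic; a missing or stale canary is itself a signal.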

Contact

  • Security disclosure: hello@nexcur.ai
  • General security questions: hello@nexcur.ai
  • Media & research inquiries: hello@nexcur.ai