The pattern, not the exploit.
If you have raised a Series A in the last two years and have not done a proper IAM audit, your attack path looks the same as everyone else's.
I am not exaggerating. Of the Series A companies we have tested since we started the firm, every single one had variants of the same three-link chain to get from an externally phishable developer account to read access on the production database. The links are: a long-lived developer key that should be federated identity, a deploy role scoped too broadly, and a production service account with effective read on everything because nobody scoped it down after the first sprint.
This essay is not about any specific exploit. It is about the pattern, because the pattern is what keeps coming back after point fixes and is what the next round of auditors (SOC 2, investor diligence, enterprise security reviews) will find again in six months if the underlying ownership does not change.
The fix is cheap. The understanding of why the fix is load-bearing is the expensive part.
Link 1: long-lived developer keys.
A single AWS access key, older than a year, in a developer's ~/.aws/credentials, with IAM permissions that expanded gradually.
Every company starts with a founder's access key in a dotfile. The intent is always "this is temporary, we will federate it later". It is never later. Two years in, that key has accreted permissions for every system that needed quick access during an incident, every integration that was "just for testing", every Terraform run the CTO did on a laptop.
In the engagements we run, this key is the first foothold in almost every case. We do not even need the key itself - we demonstrate the path by showing the CloudTrail record of how broadly the key is used, and the client volunteers that multiple developers know the key because it is "the one that works".
The fix is federated identity. Human developers authenticate through the company SSO (Okta, Google Workspace, Microsoft Entra) and assume short-lived roles in AWS or GCP. No keys in dotfiles. Exception: ad-hoc break-glass keys stored in a secrets manager and auto-rotated weekly. The Terraform for AWS SSO plus a GitHub OIDC provider for CI roles is under 100 lines. We have written it so many times we now ship it as a template.
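For orientation, here is the shape of that template, sketched rather than shipped: an Identity Center permission set for developers, plus a GitHub OIDC provider and a CI role. Every name, account ID, org, and repo below is a placeholder, the group-to-account assignments and break-glass pieces are omitted, and the managed policy on the permission set is exactly the part you would tighten for your own team.

```hcl
# Sketch of the federation template, placeholder names throughout.
# Assumes IAM Identity Center (AWS SSO) is already connected to the company IdP.

data "aws_ssoadmin_instances" "this" {}

# Developers sign in through SSO and get short-lived credentials for this
# permission set. No access keys, nothing in ~/.aws/credentials.
resource "aws_ssoadmin_permission_set" "developer" {
  name             = "Developer"
  instance_arn     = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  session_duration = "PT8H"
}

resource "aws_ssoadmin_managed_policy_attachment" "developer" {
  instance_arn       = tolist(data.aws_ssoadmin_instances.this.arns)[0]
  permission_set_arn = aws_ssoadmin_permission_set.developer.arn
  managed_policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess" # tighten per team
}

# CI authenticates with GitHub's OIDC token and assumes a role scoped to one
# repo and branch. No long-lived key sitting in CI secrets.
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # verify the current thumbprint
}

data "aws_iam_policy_document" "ci_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:example-org/example-app:ref:refs/heads/main"]
    }
  }
}

resource "aws_iam_role" "ci_deploy" {
  name               = "ci-deploy"
  assume_role_policy = data.aws_iam_policy_document.ci_trust.json
}
```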
What fails teams at this link is not the Terraform. It is the migration. There are always five or six legacy integrations that still expect an IAM user with a long-lived key. That is the harder work: migrate them one at a time, track them in a list, and set a date on which the legacy keys are destroyed. Without that list and that date, the migration stalls and the bad state persists.
Link 2: the over-scoped deploy role.
The CI deploy role. It started with a specific purpose. It now has permissions across IAM, S3, RDS, Secrets Manager, Lambda, and ECS, and in half the cases we audit, the ability to create other roles.
Deploy roles grow for an understandable reason: every time the platform adds a new service, the deploy pipeline needs to touch it. The first engineer who hits the permissions error adds a permission. Six months later the role is an admin. Nobody goes back to prune because pruning requires reading the role's usage data and reasoning about what is still required.
When a Series A company gets compromised through CI, this role is almost always the link that turns a CI-supply-chain attack into a full production takeover. A compromised dependency, a compromised third-party action, a leaked CI token - any of these gives an attacker the deploy role's permissions, which means everything the deploy role can do.
The fix is scope. Concretely: a deploy role should be able to push container images to a specific ECR repo, update a specific set of services in a specific ECS cluster, read specific secrets, and nothing else. It should not be able to create IAM roles, modify IAM policies, read databases, or call the KMS decrypt API for arbitrary keys. Every permission in the role should be answerable to "what specific operation in the deploy pipeline requires this".
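To make that concrete, here is a sketch of what a tightened policy tends to look like in Terraform. The ARNs, names, and exact statements are placeholders, not a drop-in; your pipeline will need a slightly different set, but every statement should map to a specific deploy-time operation.

```hcl
# Sketch of a tightened deploy policy. Account ID, region, repo, cluster,
# service, and secret names are all placeholders.
data "aws_iam_policy_document" "deploy" {
  statement {
    sid = "PushImagesToOneRepo"
    actions = [
      "ecr:BatchCheckLayerAvailability",
      "ecr:InitiateLayerUpload",
      "ecr:UploadLayerPart",
      "ecr:CompleteLayerUpload",
      "ecr:PutImage",
    ]
    resources = ["arn:aws:ecr:us-east-1:111111111111:repository/example-app"]
  }

  statement {
    sid       = "EcrLogin"
    actions   = ["ecr:GetAuthorizationToken"]
    resources = ["*"] # this action does not support resource-level scoping
  }

  statement {
    sid       = "UpdateOneService"
    actions   = ["ecs:UpdateService", "ecs:DescribeServices"]
    resources = ["arn:aws:ecs:us-east-1:111111111111:service/example-cluster/example-app"]
  }

  statement {
    sid       = "RegisterTaskDefinitions"
    actions   = ["ecs:RegisterTaskDefinition", "ecs:DescribeTaskDefinition"]
    resources = ["*"] # RegisterTaskDefinition does not support resource-level scoping
  }

  statement {
    sid     = "PassOnlyTheTaskRoles"
    actions = ["iam:PassRole"]
    resources = [
      "arn:aws:iam::111111111111:role/example-app-task",
      "arn:aws:iam::111111111111:role/example-app-task-execution",
    ]
  }

  statement {
    sid       = "ReadDeploySecretsOnly"
    actions   = ["secretsmanager:GetSecretValue"]
    resources = ["arn:aws:secretsmanager:us-east-1:111111111111:secret:deploy/*"]
  }

  # Deliberately absent: iam:Create*/Put*/Attach*, kms:Decrypt on arbitrary
  # keys, rds:*, and s3:* on data buckets.
}

resource "aws_iam_role_policy" "ci_deploy" {
  name   = "ci-deploy"
  role   = aws_iam_role.ci_deploy.id # the CI role from the federation sketch above
  policy = data.aws_iam_policy_document.deploy.json
}
```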
IAM Access Analyzer can help here. It observes role usage and suggests tighter policies based on what the role has actually used over a window. We feed its output into a review and then into a Terraform PR. The output is rarely directly usable - it tends to be too narrow in some places, too broad in others - but it is a sharp starting point. The review adds judgement.
Link 3: the production read-everything service account.
The application service account in production. It can read every row in every table. Often it can also write every row. In the worst cases, it can also drop tables.
This one is almost universal because of a specific structural temptation. In the early days, separating reads from writes, or scoping reads to specific tables, feels like premature engineering. The app is small, the schema changes weekly, the team is three people. A single DATABASE_URL with a permissive role is the path of least resistance and it works.
It stops working the day the app is compromised. An SSRF, a prompt injection, a deserialization bug, a template injection - any foothold in the application runtime gives the attacker the ability to read every piece of customer data in the system. The blast radius of a small bug is total.
The fix is separate roles and connection pools, one per purpose. The web tier gets a role that can read only the tables and columns the web tier actually queries, and write only what it needs to write. Analytics and admin tooling get a separate role with broader read but still scoped. Backup tooling gets a separate role that can COPY but cannot DELETE. None of this is hard. All of it requires an afternoon of reading the application's actual database usage and writing the grants.
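A sketch of what those grants can look like when managed in Terraform, assuming a Postgres database and the cyrilgdn/postgresql provider. Role names, table names, and passwords are placeholders, the provider's connection settings are omitted, and the analytics and admin roles would follow the same shape.

```hcl
# Sketch of purpose-scoped database roles, assuming Postgres and the
# cyrilgdn/postgresql Terraform provider. Names are placeholders.
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
}

variable "app_web_password" {
  sensitive = true
}

variable "app_backup_password" {
  sensitive = true
}

# Web tier: reads only what it queries, writes only what it owns.
resource "postgresql_role" "web" {
  name     = "app_web"
  login    = true
  password = var.app_web_password # sourced from a secrets manager, not hardcoded
}

resource "postgresql_grant" "web_read" {
  role        = postgresql_role.web.name
  database    = "app"
  schema      = "public"
  object_type = "table"
  objects     = ["users", "orders", "sessions"]
  privileges  = ["SELECT"]
}

resource "postgresql_grant" "web_write" {
  role        = postgresql_role.web.name
  database    = "app"
  schema      = "public"
  object_type = "table"
  objects     = ["orders", "sessions"]
  privileges  = ["INSERT", "UPDATE"]
}

# Backup tooling: can read every table, cannot modify or drop anything.
resource "postgresql_role" "backup" {
  name     = "app_backup"
  login    = true
  password = var.app_backup_password
}

resource "postgresql_grant" "backup_read" {
  role        = postgresql_role.backup.name
  database    = "app"
  schema      = "public"
  object_type = "table"
  objects     = [] # empty list grants on every table in the schema
  privileges  = ["SELECT"]
}
```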
The conversation we have with clients about this fix is almost always about "is this premature". The answer is: if you are Series A and you have real customer data, no, it is not premature. The bug that exposes the full database will happen, statistically. The separation between "a bug that leaked one user's data" and "a bug that leaked everyone's data" is this role.
Why this persists.
None of the three links is novel. None of the fixes is expensive. So why is this pattern universal?
The pattern is the result of a structural gap: at Series A, nobody owns IAM. The founding CTO is busy shipping product. The first security hire, if there is one, is focused on SOC 2 and the occasional vendor review. Platform engineering, if it exists, is optimizing for velocity. IAM is the negative space between all of these roles and it is nobody's job.
The result is that IAM accretes. Every sprint adds a permission. Nothing removes one. Every new integration creates a new role. Nothing audits the old ones. The three links are the three places where accretion is most destructive, but the underlying pathology is the ownership gap.
When we close the pattern for a client, we do it not with a one-time cleanup but by establishing ownership. A named person reviews IAM changes. A monthly process runs Access Analyzer, reviews the output, and lands PRs. A quarterly audit re-checks the three links specifically. The fix is not cleanup. It is process.
The close, sequenced by week.
If you are reading this and recognize your own posture, here is what it takes.
Week 1-2: inventory. List every IAM user with access keys. List every role with admin-adjacent permissions. List every service account. Tag each by purpose. You will find ghosts.
Week 3-4: federate human access. Stand up SSO-federated roles for every developer. Pilot with two people. Roll out to the team. Set a destroy date for long-lived IAM-user keys, three weeks after federation.
Week 5-6: scope CI. Audit the deploy role. Add Access Analyzer findings to a review. Ship the tightened policy to staging. Cut over production after observing staging for a week.
Week 7-8: scope application. Split the application DATABASE_URL into purpose-specific roles. Audit the grants against actual application queries. Migrate the application code to use the scoped role. Keep the old role with admin permissions as a break-glass, rotated monthly.
Week 9 onward: process. Monthly review of IAM changes. Monthly Access Analyzer pass. Quarterly re-test of the three links. Name a person. Put the review on their calendar.
Nine weeks of calendar time. Most of the actual work is a few days. The value is disproportionate: you have closed the most common attack path we find in Series A companies, and you have established the process that keeps it closed.
We do this close as a fixed-scope engagement for companies that need it to land fast. But you can do it yourself, with this essay and some discipline, and that is the honest recommendation. The pattern is not subtle. The close is not exotic. What is missing is ownership, and ownership cannot be outsourced.
One essay a week. No filler.
Four pillars, one email every Tuesday. If we have nothing worth sending, we skip the week.