The 144:1 Problem: Why Non-Human Identities Are Your Biggest Blind Spot
For every human identity in your organization, there are 144 machine identities operating in the shadows. Most have privileged access. Few are being governed.
When security teams think about identity governance, they picture employees, contractors, and maybe partners. They implement MFA, run access certifications, enforce password policies. But there’s a massive category of identities that rarely appears in those reviews: service accounts, API keys, OAuth tokens, bots, and AI agents—collectively known as Non-Human Identities (NHIs).
These machine identities now outnumber humans by staggering ratios. According to recent research, the NHI-to-human ratio has exploded to 144:1—a 44% increase from the previous year. And as organizations deploy AI agents across every department, this trajectory is only accelerating.
Why Your Security Controls Don’t Work on NHIs
The security measures that protect human identities simply don’t translate to machines:
- MFA? Service accounts can’t respond to push notifications
- Password rotation? Hardcoded API keys in config files don’t rotate themselves
- SSO? Machine-to-machine authentication bypasses your identity provider
- Access reviews? Good luck finding a human who knows what “svc_legacy_etl_prod_03” actually does
The result? Once a service account credential leaks, it’s immediately exploitable. There’s no second factor. Organizations often don’t know one has been exposed until it’s already been used for lateral movement.
The Real-World Risk
According to Google Cloud’s H1 2025 Threat Horizons Report, NHIs frequently hold privileged access because they need broad permissions to perform automated tasks across systems. If attackers gain control of a privileged NHI, they can access data, escalate privileges, and move laterally—all without triggering the behavioral anomalies you’d see from a compromised human account.
The Persistence Problem
Perhaps the most dangerous characteristic of NHIs is their persistence without human oversight. When an employee leaves, HR triggers offboarding. Their access is revoked. But when a project ends or an application is retired, what happens to its service accounts?
The Zombie Identity Problem
Service accounts created for a 2019 migration project are still active in 2026. API keys generated by a developer who left three years ago still have production access. OAuth tokens granted to a vendor during a proof-of-concept were never revoked. These “zombie identities” accumulate silently—each one a potential entry point.
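Zombie detection is mechanical once you have last-used data. A minimal sketch, using hypothetical inventory records (in practice this data would come from your provider's credential reports or audit logs):

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; real data would come from credential
# reports or audit logs in your cloud provider.
inventory = [
    {"name": "svc_legacy_etl_prod_03", "last_used": datetime(2019, 11, 2)},
    {"name": "svc_payments_api",       "last_used": datetime(2026, 1, 15)},
    {"name": "oauth_vendor_poc",       "last_used": datetime(2023, 6, 30)},
]

def find_zombies(identities, now, max_idle_days=180):
    """Flag identities whose credentials have sat unused past the idle window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [i["name"] for i in identities if i["last_used"] < cutoff]

zombies = find_zombies(inventory, now=datetime(2026, 2, 1))
# zombies → ["svc_legacy_etl_prod_03", "oauth_vendor_poc"]
```

The hard part isn't the filter; it's building the inventory that feeds it.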
The Governance Gap
According to the Cloud Security Alliance’s State of Non-Human Identity Security report, governance remains the weakest link:
**Less than 25%** of organizations have documented and formally adopted policies for creating or removing AI and machine identities.
The remaining 75%+ are flying blind. Hardcoded secrets, long-lived tokens, and static API keys proliferate. These credentials are rarely rotated and often stored insecurely in code repositories, configuration files, or shared wikis.
What Good NHI Governance Looks Like
Non-human identities require the same lifecycle governance as humans—just with different mechanisms:
Map every NHI across your ecosystem: IAM roles in AWS, Service Principals in Azure, Service Accounts in GCP, API tokens in GitHub, connection strings in databases. You can’t govern what you can’t see.
Every NHI must have a designated human owner who is accountable for it. No orphan identities. When that human leaves, ownership must transfer—just like any other asset.
Switch to short-lived tokens. Use workload identity federation to inject credentials at runtime. The goal: no secrets in your environment to steal.
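The mechanics of a short-lived credential can be sketched in a few lines. This is a toy illustration of the expiry principle only; the static signing key below is for demonstration, and real systems delegate minting to an STS or workload identity provider so no long-lived secret exists to steal:

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-only-key"  # illustration only; never hardcode real keys

def issue_token(subject, ttl_seconds=300):
    """Mint a signed token that expires after ttl_seconds."""
    claims = {"sub": subject, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Return the claims if the signature is valid and unexpired, else None."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims if claims["exp"] > time.time() else None

claims = verify_token(issue_token("svc_report_generator"))
```

A token like this is worthless to an attacker minutes after issuance, which is exactly the property static API keys lack.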
NHIs should have only the specific permissions needed—nothing more. Review permissions quarterly. If a permission hasn’t been used in 90 days, remove it.
Don’t ask humans to certify machine access—ask the logs. “Has Service Account A used Permission B in 90 days?” If no, auto-revoke. This is right-sizing at scale.
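The "ask the logs" certification above reduces to a simple rule over usage data. A minimal sketch with hypothetical grant records (in AWS this data could come from IAM Access Advisor's last-accessed reports, in GCP from Policy Analyzer):

```python
from datetime import datetime, timedelta

# Hypothetical permission-usage records pulled from audit tooling.
grants = [
    {"identity": "svc_etl", "permission": "s3:GetObject",    "last_used": datetime(2026, 1, 20)},
    {"identity": "svc_etl", "permission": "iam:PassRole",    "last_used": datetime(2024, 3, 1)},
    {"identity": "svc_bot", "permission": "sqs:SendMessage", "last_used": None},  # never used
]

def permissions_to_revoke(grants, now, window_days=90):
    """Flag every grant not exercised within the review window."""
    cutoff = now - timedelta(days=window_days)
    return [
        (g["identity"], g["permission"])
        for g in grants
        if g["last_used"] is None or g["last_used"] < cutoff
    ]

stale = permissions_to_revoke(grants, now=datetime(2026, 2, 1))
# stale → [("svc_etl", "iam:PassRole"), ("svc_bot", "sqs:SendMessage")]
```

In production you would route flagged grants to the identity's human owner for a fast confirm-or-revoke decision rather than revoking blindly.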
When an application is decommissioned, its NHIs must be automatically flagged for review and revocation. Tie identity lifecycle to asset lifecycle in your CMDB.
The Agentic AI Curveball
Just as organizations are waking up to NHI risk, a new category is emerging: AI agents. These aren’t just service accounts running scripts—they’re autonomous systems making decisions, calling APIs, and taking actions on behalf of humans.
Effective governance of agentic AI requires:
- Purpose-bound credentials that limit what the agent can do
- Time-limited tokens that automatically expire after task completion
- Clear delegation chains linking AI authority back to accountable human owners
- Continuous behavioral monitoring and anomaly detection
- Audit trails capturing every action for compliance and forensics
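The requirements above can be combined in one credential object. A minimal sketch; the `AgentCredential` structure and its field names are illustrative, not a standard API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """Purpose-bound credential tying an AI agent to an accountable human."""
    agent_id: str
    delegated_by: str            # the human owner in the delegation chain
    allowed_actions: frozenset   # purpose-bound scope: nothing else is permitted
    audit_log: list = field(default_factory=list)

    def authorize(self, action):
        allowed = action in self.allowed_actions
        # Every decision is recorded along with the accountable human.
        self.audit_log.append((action, allowed, self.delegated_by))
        return allowed

cred = AgentCredential(
    agent_id="agent_invoice_bot",
    delegated_by="alice@example.com",
    allowed_actions=frozenset({"invoices:read", "invoices:approve"}),
)
```

Any action outside the purpose-bound scope is denied, and every attempt, allowed or not, lands in an audit trail that names the responsible human.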
Start Today
The good news: investment in NHI security is accelerating. 24% of organizations plan to invest within six months, and another 36% within twelve months. But waiting puts you further behind.
Your first step: Get visibility. Run an inventory of every service account, API key, and OAuth token in your environment. Identify the orphans. Find the overprivileged. Then build the governance framework to manage them at scale.
The 144 machine identities operating behind each human employee aren’t going away. In fact, as AI adoption accelerates, that ratio will only grow. The question is whether you’re governing them—or just hoping nothing goes wrong.
Explore how access certifications, SOD enforcement, and risk scoring work in a hands-on IGA demo.