Machine accounts now outnumber humans — and one forgotten OAuth token can see more than your entire sales team. This is how you put them on a leash.
On August 9, 2025, at 11:51 UTC, someone accessed Cloudflare’s Salesforce tenant.
Not with a password. Not through a phishing email. Not by exploiting a zero-day.
They used an OAuth token — a credential belonging to Drift, Salesloft’s AI chat agent, which had been granted access to Salesforce instances across hundreds of companies.
The attacker, tracked as UNC6395 by Mandiant and GRUB1 by Cloudflare, had gained access to Salesloft’s GitHub account sometime between March and June 2025. From there, they downloaded code repositories, added a guest user, and established workflows to maintain persistence. Then they moved laterally into Drift’s AWS environment and stole OAuth tokens — the credentials that allowed Drift to connect to customer Salesforce instances.
Over the next eight days, the attacker systematically queried and exported data from more than 700 organizations. They used Salesforce’s standard APIs, executing SOQL queries to retrieve Cases, Accounts, Users, and Opportunities. They ran bulk export jobs. They scanned the stolen data for plaintext AWS keys, VPN credentials, and Snowflake tokens. And they deleted query logs to cover their tracks.
The victim companies had no idea.
Cloudflare’s investigation later showed the attacker’s reconnaissance started August 9. Active exfiltration began August 12. By August 17, they had downloaded Cloudflare’s entire support case database — every ticket, every customer inquiry, every technical detail shared with support.
The attack wasn’t detected for days. Salesforce and Salesloft didn’t notify affected organizations until August 23. By then, the attacker had been gone for nearly a week.
On August 20, Salesforce revoked all Drift OAuth tokens and removed Drift from the AppExchange entirely.
The scope: Over 700 companies compromised. Victims included Avalara, Dynatrace, Fastly, HackerOne, Pantheon, PagerDuty, Proofpoint, SpyCloud, Tanium, Toast, and Zscaler — along with Cloudflare, Google, and Palo Alto Networks.
The method: Stolen OAuth credentials for a single integration app.
The root cause: A non-human identity with broad permissions that most security teams couldn’t even inventory, let alone govern.
This wasn’t a password breach. It was an identity breach. And the identity wasn’t human.
The Numbers Don’t Lie: Non-Human Identities Are Your New Attack Surface
The Salesloft-Drift breach isn’t an anomaly. It’s a pattern.
According to recent identity reports, machine identities now outnumber human identities by ratios of 10:1, 20:1, or higher in many enterprises. Service accounts, OAuth apps, API keys, workflow automations, and AI agents have proliferated faster than security teams can track them.
SaaS security vendors point out that these non-human identities often have far broader permissions than any individual employee — read and write access to CRM data, cloud storage, code repositories, and production systems. Yet unlike human users, they’re rarely reviewed. They don’t go through onboarding. They don’t get offboarded when the engineer who created them leaves. They don’t have managers who approve their access requests.
According to a 2026 survey of SaaS and AI ecosystems, essentially all organizations (99%+) experienced at least one SaaS or AI-related security incident in 2025, frequently involving misconfigured or over-privileged non-human identities.
In the Drift breach, the integration user had permissions that no single salesperson would ever need:
- Read access to all Salesforce objects (Cases, Accounts, Contacts, Opportunities)
- Bulk export capabilities
- API access with no rate limits
- Credentials that never expired
- No requirement for MFA or additional verification
In 2026, your most powerful “users” are no longer your employees. They’re the machine identities and agents you’ve never put through an onboarding, review, or offboarding process.
And most security teams can’t even answer: “How many non-human identities do we have?”

Introducing The Silicon Protocol
This is the first in a series I’m calling The Silicon Protocol — a playbook for moving AI from pilots to governed production systems.
The core idea: AI models, agents, and automations are a digital workforce. They act on your behalf. They access your data. They make decisions. They interact with customers and partners.
If you wouldn’t let a human employee operate without identity management, access controls, and audit trails, why would you let an AI agent?
The Silicon Protocol treats every AI system — LLM, agent, automation, integration — as a principal that must be:
- Identified — Who or what is this? What does it do? Who owns it?
- Constrained — What data can it access? What actions can it take?
- Observable — What did it actually do? Can we audit its actions?
- Interruptible — How do we stop it when it misbehaves or gets compromised?
This first article focuses on the identity layer — because if you don’t know which agents exist and what they can do, nothing else in your AI stack is under control.
Future parts will cover autonomy bounds, kill switches, tool registries, and sovereign infrastructure for AI in regulated industries.
What Counts as an Identity in 2026?
Let’s expand your mental model of “identity” beyond the HR directory.
Traditional identities (what most IAM covers):
- Human users (employees, contractors)
- Service accounts (database users, application accounts)
Non-human identities (what breaks your security model):
- OAuth connected apps (Salesforce integrations, Slack apps, Google Workspace add-ons)
- API keys and tokens (GitHub PATs, AWS access keys, Stripe API keys)
- Workflow automations (Zapier workflows, Make scenarios, n8n pipelines, internal ETL jobs)
- LLM agents and orchestration (ChatGPT plugins, LangChain agents, custom AI copilots)
- RAG pipelines (document indexers, vector database crawlers, knowledge base scrapers)
- MCP servers (Model Context Protocol endpoints providing tools to AI agents)
- CI/CD identities (GitHub Actions, GitLab runners, deployment pipelines)
Most IAM and zero-trust programs still center on humans plus a small number of service accounts. The long tail of machine principals — OAuth apps, automations, agents — is ignored.
In the Salesloft-Drift case, the integration user was “just” a non-human identity. But it held more power than any single employee:
- Access to every customer record
- Bulk export capabilities
- No MFA requirement
- No expiration date
- No usage monitoring
The lesson: Non-human identities are super-users by default, and most organizations don’t even have an inventory of them.
Why Traditional IAM Falls Short for Agents
IAM is designed around HR lifecycles:
- Joiner: New employee onboarded, provisioned with access based on role
- Mover: Employee changes teams, access updated to match new responsibilities
- Leaver: Employee exits, all access revoked within 24 hours
This works when identities map to people with job descriptions, managers, and HR records.
But AI agents, integrations, and automations don’t have HR records.
They’re created via:
- “Click to connect” OAuth flows
- curl commands to generate API keys
- Scripts that spin up service accounts
- Engineers deploying workflow automations during late-night deploys
The lifecycle is completely different:
- Created: Via API call or OAuth consent screen, often by a single engineer
- Forgotten: No one reviews its permissions after initial setup
- Orphaned: Creator leaves the company; no one knows what the agent does
- Compromised: Credentials stolen; continues operating as “legitimate” traffic
Authorization outlives intent. An OAuth token granted for “testing the Salesforce integration” three years ago is still active, with full read-write access, even though the original use case is long gone.
In the Drift breach, many victim companies couldn’t answer:
- What permissions did the Drift integration have?
- Which Salesforce objects could it access?
- When were those scopes last reviewed?
- Who was the business owner responsible for approving this integration?
Zero-trust slogans like “never trust, always verify” are meaningless if you don’t even have an inventory of the non-human identities to verify.
You can’t secure what you don’t know exists.
Design Pattern: Agent Passports
Here’s the first Silicon Protocol pattern: Agent Passports — a canonical record for every non-human identity in your environment.

An agent passport is the identity profile for anything that acts autonomously or on behalf of a human.
Core fields:
```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional
from enum import Enum


class AgentType(Enum):
    """Classification of non-human identity types"""
    OAUTH_INTEGRATION = "oauth_integration"      # Salesforce apps, Slack bots
    API_KEY = "api_key"                          # GitHub PATs, Stripe keys
    WORKFLOW_AUTOMATION = "workflow_automation"  # Zapier, Make, n8n
    LLM_AGENT = "llm_agent"                      # ChatGPT plugins, custom agents
    RAG_PIPELINE = "rag_pipeline"                # Document indexers
    MCP_SERVER = "mcp_server"                    # Model Context Protocol tools
    CI_CD_IDENTITY = "cicd_identity"             # GitHub Actions, GitLab runners
    SERVICE_ACCOUNT = "service_account"          # Database users, app accounts


class RiskTier(Enum):
    """Risk classification for access scope"""
    LOW = "low"            # Read-only, limited scope
    MEDIUM = "medium"      # Write access, single system
    HIGH = "high"          # Write access, multiple systems
    CRITICAL = "critical"  # Broad permissions, sensitive data


@dataclass
class AgentPassport:
    """
    Canonical identity record for non-human principals.

    This is what you wish you had for the Drift integration
    before the breach happened.
    """
    # Core identity
    agent_id: str
    name: str
    type: AgentType

    # Ownership and authorization
    owner_business: str                     # Which team owns this (e.g., "Sales Operations")
    owner_technical: str                    # Technical contact (e.g., "jane@company.com")
    authorized_by: str                      # Who approved this integration
    authorization_date: Optional[datetime]  # When it was approved

    # Lifecycle
    created_at: datetime
    created_by: str
    expires_at: Optional[datetime]
    last_review_at: Optional[datetime]
    next_review_due: Optional[datetime]

    # Access and scope
    connected_systems: List[str]
    scopes: List[str]         # Actual OAuth scopes or permissions
    scope_justification: str  # Business reason for these permissions

    # Risk assessment
    risk_tier: RiskTier
    accesses_sensitive_data: bool
    has_write_permissions: bool
    spans_multiple_systems: bool

    # Governance controls
    approved_by_security: bool
    requires_periodic_review: bool
    review_frequency_days: int            # How often to review (30, 90, 180, 365)
    mfa_enforced: bool                    # Does this require MFA for setup/changes
    ip_restrictions: Optional[List[str]]  # Allowed IP ranges

    # Operational metadata
    last_active: Optional[datetime]
    total_api_calls_last_30d: int

    def to_audit_record(self) -> dict:
        """Generate an audit trail entry"""
        return {
            'agent_id': self.agent_id,
            'name': self.name,
            'type': self.type.value,
            'owner_business': self.owner_business,
            'owner_technical': self.owner_technical,
            'authorized_by': self.authorized_by,
            'connected_systems': self.connected_systems,
            'risk_tier': self.risk_tier.value,
            'scopes': self.scopes,
            'scope_justification': self.scope_justification,
            'flags': {
                'write_access': self.has_write_permissions,
                'sensitive_data': self.accesses_sensitive_data,
                'multi_system': self.spans_multiple_systems,
                'security_approved': self.approved_by_security,
                'mfa_enforced': self.mfa_enforced
            },
            'review': {
                'last_review': self.last_review_at.isoformat() if self.last_review_at else None,
                'next_due': self.next_review_due.isoformat() if self.next_review_due else None,
                'frequency_days': self.review_frequency_days
            }
        }
```
How this would have changed the Drift breach:
If organizations had agent passports for their Salesforce integrations:
- Ownership would be clear — “Sales Ops owns the Drift integration, engineering contact is jane@company.com”
- Scopes would be documented — “Read/write access to Cases, Accounts, Contacts, Opportunities”
- Risk tier would be obvious — “CRITICAL: Multi-system integration with write access to customer data”
- Review cadence would be enforced — “Quarterly review required; last review: 18 months ago (OVERDUE)”
- Revocation would be fast — Security knows exactly which systems are affected, who to notify, what credentials to rotate
Instead, most companies scrambled to answer basic questions:
- Do we use Drift?
- What does it have access to?
- Who approved this integration?
- How do we revoke it?
An agent passport makes these questions trivial to answer.
Design Pattern: Non-Human Identity Inventory and Risk Scoring
The second pattern: automated discovery and risk scoring for machine identities.
You can’t rely on engineers to self-report. You need to scan your environment and find every non-human identity programmatically.
Discovery methods:
- SaaS OAuth APIs — Query Slack, GitHub, Salesforce, Google Workspace for installed apps
- Cloud provider IAM — List service accounts, API keys, instance profiles (AWS, Azure, GCP)
- Network traffic analysis — Identify unknown API callers by User-Agent, IP, behavior
- Code repository scanning — Find hardcoded API keys, OAuth client IDs, tokens
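Whatever the discovery source, the raw API responses need to be normalized into inventory rows you can triage. A minimal sketch of that normalization step — the record shape, field names, and app data below are illustrative, not a real Salesforce API schema:

```python
from datetime import datetime, timezone

# Hypothetical shape of records returned by a SaaS "list connected apps"
# endpoint. Field names here are illustrative, not the real API schema.
RAW_APPS = [
    {"id": "0H4xx0000004C92", "label": "Drift AI Chat",
     "scopes": ["api", "refresh_token", "full"],
     "last_used": "2025-08-17T09:30:00+00:00"},
    {"id": "0H4xx0000004D11", "label": "Internal BI Export",
     "scopes": ["api"],
     "last_used": "2025-06-01T12:00:00+00:00"},
]

# Scopes that grant broad, tenant-wide access and warrant review
BROAD_SCOPES = {"full", "web", "refresh_token"}

# Fixed reference time so the example is deterministic
AS_OF = datetime(2025, 8, 20, tzinfo=timezone.utc)

def normalize(raw: dict) -> dict:
    """Flatten one raw app record into an inventory row,
    flagging broad scopes and staleness for triage."""
    last_used = datetime.fromisoformat(raw["last_used"])
    return {
        "agent_id": raw["id"],
        "name": raw["label"],
        "scopes": raw["scopes"],
        "has_broad_scopes": bool(BROAD_SCOPES & set(raw["scopes"])),
        "days_since_use": (AS_OF - last_used).days,
    }

inventory = [normalize(a) for a in RAW_APPS]
flagged = [row for row in inventory if row["has_broad_scopes"]]
print(flagged[0]["name"])  # → Drift AI Chat
```

The point isn't the parsing — it's that every discovery source feeds one common row format, so the risk scoring below has a single input shape to work with.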
Risk scoring logic:
```python
# Builds on the AgentPassport, AgentType, and RiskTier definitions above.

class AgentInventory:
    """
    Automated discovery and risk assessment for non-human identities
    """

    def __init__(self, active_employees: Optional[set] = None):
        # In production, back this with your HR directory or IdP
        self.active_employees = active_employees or set()

    def check_creator_status(self, created_by: str) -> bool:
        """Return True if the identity's creator has left the company"""
        return created_by not in self.active_employees

    def calculate_risk_score(self, agent: AgentPassport) -> float:
        """
        Calculate a risk score (0-100) based on Drift breach patterns

        Score 0-30:   LOW
        Score 31-60:  MEDIUM
        Score 61-85:  HIGH
        Score 86-100: CRITICAL
        """
        score = 0.0

        # Data access risk (Drift exfiltrated customer CRM data)
        if agent.accesses_sensitive_data:
            score += 30
        if agent.has_write_permissions:
            score += 15

        # Supply chain risk (the Drift attack spanned multiple systems)
        if agent.spans_multiple_systems:
            score += 20

        # Governance failures (what enabled the breach)
        if not agent.approved_by_security:
            score += 25

        # Credential lifecycle (stolen tokens worked for days)
        if agent.expires_at is None:
            score += 15
        elif (agent.expires_at - datetime.utcnow()).days > 90:
            # Expiry window longer than 90 days still leaves a wide blast radius
            score += 8

        # Review cadence (Drift integrations ran years without review)
        if agent.last_review_at is None:
            score += 20
        elif (datetime.utcnow() - agent.last_review_at).days > 365:
            score += 12

        # Ownership (many companies couldn't find who set up Drift)
        if self.check_creator_status(agent.created_by):
            score += 18

        return min(score, 100.0)

    def classify_risk_tier(self, score: float) -> RiskTier:
        """Map a numeric score to a risk tier"""
        if score >= 86:
            return RiskTier.CRITICAL
        elif score >= 61:
            return RiskTier.HIGH
        elif score >= 31:
            return RiskTier.MEDIUM
        else:
            return RiskTier.LOW

    def identify_high_risk_agents(
        self,
        agents: List[AgentPassport]
    ) -> List[AgentPassport]:
        """
        Find agents that need immediate attention.

        These are the identities that would show up red
        in your security dashboard.
        """
        high_risk = []
        for agent in agents:
            score = self.calculate_risk_score(agent)
            if score >= 61:  # HIGH or CRITICAL
                agent.risk_tier = self.classify_risk_tier(score)
                high_risk.append(agent)
        return sorted(
            high_risk,
            key=lambda a: self.calculate_risk_score(a),
            reverse=True
        )

    def discover_salesforce_connected_apps(
        self,
        salesforce_api_token: str
    ) -> List[AgentPassport]:
        """
        Example: enumerate OAuth apps connected to Salesforce.

        This would have discovered the Drift integration
        and flagged it as CRITICAL risk.
        """
        # In production, call the Salesforce API:
        #   GET /services/data/v58.0/sobjects/ConnectedApplication
        # Pseudo-code for demonstration:
        apps = []

        # Example result: the Drift integration
        drift_app = AgentPassport(
            agent_id="salesforce-drift-integration",
            name="Drift AI Chat",
            type=AgentType.OAUTH_INTEGRATION,
            owner_business="Sales Operations",
            owner_technical="former-engineer@company.com",
            authorized_by="unknown",  # No record of who approved
            authorization_date=None,
            created_at=datetime(2023, 3, 15),
            created_by="former-engineer@company.com",
            expires_at=None,
            last_review_at=None,
            next_review_due=None,
            connected_systems=["salesforce", "slack", "google_workspace"],
            scopes=[
                "salesforce:read:cases",
                "salesforce:read:accounts",
                "salesforce:read:contacts",
                "salesforce:read:opportunities",
                "salesforce:write:cases",
                "salesforce:bulk_api"
            ],
            scope_justification="Enable chat widget to access support cases",  # Original justification
            risk_tier=RiskTier.CRITICAL,
            accesses_sensitive_data=True,
            has_write_permissions=True,
            spans_multiple_systems=True,
            approved_by_security=False,
            requires_periodic_review=True,
            review_frequency_days=90,
            mfa_enforced=False,
            ip_restrictions=None,
            last_active=datetime.utcnow(),
            total_api_calls_last_30d=450_000
        )

        score = self.calculate_risk_score(drift_app)
        # Score: 30 (sensitive data) + 15 (write) + 20 (multi-system)
        #      + 25 (not approved) + 15 (no expiry) + 20 (never reviewed)
        #      + 18 (creator left) = 143 → capped at 100 → CRITICAL
        apps.append(drift_app)
        return apps
```
What this system would have shown before the Drift breach:
```
CRITICAL RISK IDENTITIES (3):

1. Drift AI Chat (salesforce-drift-integration)
   Risk Score: 100/100 (capped from 143)
   Owner: Sales Operations (technical contact: former-engineer@company.com - DEPARTED)
   Authorized by: Unknown - no approval record
   Scopes: Read/write Salesforce (Cases, Accounts, Contacts, Opportunities, Bulk API)
   Connected: Salesforce, Slack, Google Workspace
   Credentials: Never expire
   Last review: Never
   Next review: Overdue by 730+ days
   Flags:
   - Write permissions to customer PII
   - Bulk export capability
   - Multi-system integration (supply chain risk)
   - Creator departed 8 months ago
   - No security approval on record
   - No MFA enforcement
   - 450K API calls/month (high volume)

   ACTION REQUIRED: Immediate revocation + scope audit + credential rotation + ownership assignment
```
That dashboard would have made the Drift integration impossible to ignore.

Minimum Viable Controls for 2026
Here’s what you need to implement this week — not next quarter — to avoid becoming the next breach headline.
1. You can answer: “How many non-human identities do we have, by system?”
If you can’t produce this list in under 10 minutes, you’re flying blind.
Drift breach lesson: Most victim companies didn’t know Drift was connected until after the revocation.
2. Every high-risk agent has a named business owner
Not “Sales team” or “Engineering.” A specific person accountable for:
- Approving scope changes
- Reviewing quarterly access
- Authorizing credential rotation
- Being on-call when something breaks
Drift breach lesson: When Salesforce revoked tokens, companies scrambled to find who “owned” the Drift integration. Days were lost.
3. Scoped permissions (least privilege)
No integration should have admin or * scope unless there's documented business justification.
Drift breach lesson: Drift didn’t need bulk export. It didn’t need write access to all objects. It didn’t need to span Salesforce + Slack + Google. But it had all three.
4. Credentials have expiry dates and review cadence
- LOW risk: Annual review, 12-month credential rotation
- MEDIUM risk: Quarterly review, 6-month credential rotation
- HIGH risk: Monthly review, 3-month credential rotation
- CRITICAL risk: Weekly review, 30-day credential rotation
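The cadence above reduces to a small lookup table that any review-scheduling job can consume. A minimal sketch (tier names as plain strings, function name is mine):

```python
from datetime import datetime, timedelta

# Review and rotation cadence per risk tier, mirroring the schedule above
CADENCE = {
    "low":      {"review_days": 365, "rotation_days": 365},
    "medium":   {"review_days": 90,  "rotation_days": 180},
    "high":     {"review_days": 30,  "rotation_days": 90},
    "critical": {"review_days": 7,   "rotation_days": 30},
}

def next_deadlines(tier: str, last_review: datetime, last_rotation: datetime):
    """Compute the next review and credential-rotation deadlines for a tier."""
    c = CADENCE[tier]
    return (last_review + timedelta(days=c["review_days"]),
            last_rotation + timedelta(days=c["rotation_days"]))

review_due, rotation_due = next_deadlines(
    "critical", datetime(2025, 8, 1), datetime(2025, 8, 1))
print(review_due.date(), rotation_due.date())  # → 2025-08-08 2025-08-31
```

Wire the output into whatever already pages your team — an overdue review should create a ticket, not sit in a spreadsheet.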
Drift breach lesson: If Drift’s OAuth tokens expired every 90 days, the breach window would have been 90 days maximum — not the multi-year exposure that actually existed.
5. No “eternal” tokens
API keys that never expire are security incidents waiting to happen.
Drift breach lesson: Stolen OAuth tokens worked for days because they never expired. Rotating tokens would have limited the blast radius.
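One cheap enforcement trick: treat every credential as expired after a hard TTL on your side, regardless of what the issuing platform allows. A sketch, assuming a 90-day ceiling (the constant and function name are illustrative):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hard ceiling on credential age, even if the platform issues "never expires"
MAX_TOKEN_TTL = timedelta(days=90)

def token_is_valid(issued_at: datetime, now: Optional[datetime] = None) -> bool:
    """Reject any token older than MAX_TOKEN_TTL, enforcing rotation
    client-side even for non-expiring provider credentials."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at <= MAX_TOKEN_TTL

# A token minted in March is dead by August, stolen or not
issued = datetime(2025, 3, 1, tzinfo=timezone.utc)
print(token_is_valid(issued, now=datetime(2025, 8, 9, tzinfo=timezone.utc)))  # → False
```

This doesn't stop theft, but it guarantees a stolen credential has a bounded useful life instead of an indefinite one.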
6. Documented process for creating new integrations
Before clicking “Connect to Salesforce,” an engineer must:
- Submit request with business justification
- Get security approval for requested scopes
- Assign business owner
- Set credential expiry
- Schedule first quarterly review
Drift breach lesson: Most Drift integrations were connected via self-service OAuth flows with zero security review.
What This Changes
If these controls had been in place before August 2025:
Discovery would be instant: Security dashboard shows “Drift AI Chat” as CRITICAL risk identity with 100/100 risk score.
Ownership would be clear: “Sales Ops owns this, jane@company.com is technical contact.”
Scope would be visible: “Read/write access to Cases, Accounts, Contacts, Opportunities + Bulk API.”
Review would be mandatory: “Quarterly review overdue by 18 months” triggers alert.
Revocation would be fast: When Salesforce announced the breach, security team knows exactly:
- Which systems Drift touches
- What data it can access
- Who to notify
- How to rotate credentials
Instead of: “Wait, we use Drift? What does it do? Who set this up? Can we turn it off?”
The difference between hours and weeks of incident response.
The Identity Layer Is Your Foundation
Before you talk about fancy AI agents and copilots, you need to fix your identity layer.
If you can’t inventory your non-human identities, you can’t constrain them. If you can’t constrain them, you can’t observe them. If you can’t observe them, you can’t interrupt them when they’re compromised.
The Drift breach was a supply chain attack, yes. But it was successful because of an identity governance failure.
Next in The Silicon Protocol:
- Episode 2: The Model Hosting Decision — Self-hosted vs. API vs. hybrid ($200K GPU cluster vs. $50K/year API)
- Episode 3: The De-identification Decision — Regex vs. NER vs. LLM-based de-identification pipelines
- Episode 4: The Prompt Logging Decision — Sanitized audit trails that pass OCR review
- Episodes 5–16: Guardrails, scale, and compliance patterns for production AI
The Silicon Protocol
The Silicon Protocol is my playbook for moving AI from pilots to a governed digital workforce and sovereign infrastructure. Identity is the first layer: if you don’t know which agents exist and what they can do, nothing else in your AI stack is really under control.
Building production AI for regulated industries where one compromised OAuth token isn’t a “learning opportunity” — it’s a $4.88M breach notification.
Can your security team answer these questions right now?
1. How many OAuth apps are connected to your Salesforce/Slack/GitHub?
2. Which integration has the broadest permissions?
3. When were those scopes last reviewed?
If you can’t answer all three in under 5 minutes, you have an identity governance gap. Drop a comment with which question you got stuck on.
The Silicon Protocol: The Identity Crisis — When Machine Accounts Become Your Real Super-Users was originally published in Towards AI on Medium.