Every major governance framework was designed for the chatbot era. Agent-specific coverage is either absent, bolt-on, or less than three months old. Your security team is evaluating agent deployments with the wrong risk model. Here's what to use instead.
Data points for your next board presentation.
What each covers for agents — and where each stops.
| Framework | Agent Coverage | Status |
|---|---|---|
| NIST AI 600-1 | None — designed for chatbot/GenAI era | Published Jul 2024 |
| NIST CAISI Agent Standards | First agent-specific initiative. SP 800-53 overlay approach. | Info-gathering, no standard yet |
| Google SAIF 2.0 / CoSAI | Agent risk map added | Available |
| EU AI Act Article 15 | Technology-neutral — covers agents implicitly | Obligations Aug 2, 2026 |
| ISO/IEC 42001 | None — management system, not technical | Certifiable now |
| CSA AI Controls Matrix | 243 controls. "Agentic Control Plane" concept launched Mar 2026. | Available + recent agent work |
| OpenAI Governance Practices | 7 practices purpose-built for agents | Principles only, no controls |
| Singapore MGF Agentic AI | World's first agent-specific governance framework (4 dimensions) | Published Jan 2026, voluntary |
When Agent A delegates to Agent B which calls Tool C — who is accountable for the outcome? No framework defines accountability chains for multi-agent delegation. The confused deputy problem (Kill Chain Stage 4) has no governance equivalent.
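One way to make accountability concrete in the meantime is to carry a delegation chain with every tool call, so any action can be traced back to its originating principal. A minimal sketch; the record structure and the roll-up policy are illustrative assumptions, not drawn from any framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DelegationRecord:
    """One hop in an agent-to-agent delegation chain."""
    principal: str   # who initiated this hop (human or agent ID)
    delegate: str    # who received the task
    task: str        # what was delegated
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class ToolCall:
    """A tool invocation annotated with the full chain that led to it."""
    tool: str
    parameters: dict
    chain: list[DelegationRecord]

    def accountable_principal(self) -> str:
        # Simplest possible policy: accountability rolls up to the root of the
        # chain. No published framework defines this yet; this is an assumption.
        return self.chain[0].principal if self.chain else "unknown"

# Agent A (acting for a human) delegates to Agent B, which calls Tool C.
chain = [
    DelegationRecord(principal="user:alice", delegate="agent:A", task="quarterly report"),
    DelegationRecord(principal="agent:A", delegate="agent:B", task="pull ERP figures"),
]
call = ToolCall(tool="sap.read_ledger", parameters={"period": "2025-Q4"}, chain=chain)
print(call.accountable_principal())  # -> user:alice
```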
71% of organizations say AI tools access core systems (Salesforce, SAP), but only 16% govern that access effectively (Cybersecurity Insiders & Saviynt, 2026). Agents need IAM-like governance — the CSA calls this the "agentic control plane" — but no standard defines how to implement it.
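Until a standard exists, the core idea can be approximated with a deny-by-default entitlement registry checked before an agent touches any core system. A hypothetical sketch, not the CSA's design:

```python
# Hypothetical agent entitlement registry: which agent may touch which
# core system, with what scope. Not derived from any published standard.
AGENT_ENTITLEMENTS = {
    "agent:invoice-bot": {"salesforce": {"read"}, "sap": {"read", "write:invoices"}},
    "agent:support-triage": {"salesforce": {"read"}},
}

def is_authorized(agent_id: str, system: str, action: str) -> bool:
    """Deny by default; allow only explicitly granted (agent, system, action) tuples."""
    return action in AGENT_ENTITLEMENTS.get(agent_id, {}).get(system, set())

assert is_authorized("agent:support-triage", "salesforce", "read")
assert not is_authorized("agent:support-triage", "sap", "read")
```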
Memory poisoning is persistent compromise across conversations: an attacker poisons the agent's memory once, and every future session follows the attacker's instructions. No governance framework addresses memory integrity verification. See Kill Chain Stage 6.
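No framework requires it yet, but memory integrity verification can start with something as simple as sealing entries when they are written and verifying them before they are loaded into a new session. A sketch under that assumption (key handling and storage are deliberately simplified):

```python
import hashlib
import hmac
import json

MEMORY_KEY = b"rotate-me"  # hypothetical key held by the agent platform, not by the agent

def seal_memory_entry(entry: dict) -> dict:
    """Attach an HMAC so later sessions can detect tampering with stored memory."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["_sig"] = hmac.new(MEMORY_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_memory_entry(entry: dict) -> bool:
    """Recompute the HMAC over the entry (minus its signature) and compare."""
    sig = entry.pop("_sig", None)
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(MEMORY_KEY, payload, hashlib.sha256).hexdigest()
    return sig is not None and hmac.compare_digest(sig, expected)

note = seal_memory_entry({"user": "alice", "pref": "send summaries to alice@example.com"})
assert verify_memory_entry(dict(note))       # untouched entry loads
note["pref"] = "send summaries to attacker@evil.example"
assert not verify_memory_entry(dict(note))   # poisoned entry is rejected
```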
Behavioral drift is a gradual shift in agent behavior over time that looks like legitimate evolution but is actually compromise. Detecting it requires behavioral baselines, yet no framework mandates them. Microsoft elevated AI observability to a security requirement in March 2026, but observability alone doesn't solve detection.
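Observability produces the telemetry; detection still needs a baseline to compare against. A minimal illustration of the idea, using a placeholder feature (tool-call frequencies) and a placeholder threshold:

```python
from collections import Counter

def tool_call_distribution(calls: list[str]) -> dict[str, float]:
    """Relative frequency of each tool in a window of agent activity."""
    counts = Counter(calls)
    total = sum(counts.values())
    return {tool: n / total for tool, n in counts.items()}

def drift_score(baseline: dict[str, float], current: dict[str, float]) -> float:
    """Total variation distance between the baseline window and the current window."""
    tools = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(t, 0.0) - current.get(t, 0.0)) for t in tools)

baseline = tool_call_distribution(["search"] * 80 + ["read_file"] * 20)
current = tool_call_distribution(["search"] * 50 + ["read_file"] * 20 + ["send_email"] * 30)

if drift_score(baseline, current) > 0.2:   # placeholder threshold
    print("agent behavior has drifted from baseline; review before trusting output")
```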
Tool schema poisoning, rug pulls, and cross-server exfiltration are attack vectors at the protocol layer between agents and tools. See MCP Security for six documented attack vectors. No governance framework addresses tool protocol security.
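One concrete mitigation no framework requires: pin a hash of every tool schema at approval time and reject tools whose schema later changes silently (the rug pull). A hypothetical sketch:

```python
import hashlib
import json

def schema_fingerprint(schema: dict) -> str:
    """Stable hash of a tool's advertised schema (name, description, parameters)."""
    return hashlib.sha256(json.dumps(schema, sort_keys=True).encode()).hexdigest()

# Schema the security team reviewed and approved (hypothetical example tool).
approved_schema = {"name": "send_email", "params": {"to": "string", "body": "string"}}
pinned = {"send_email": schema_fingerprint(approved_schema)}

def tool_allowed(name: str, advertised_schema: dict) -> bool:
    """Reject unknown tools and tools whose schema changed after approval."""
    return pinned.get(name) == schema_fingerprint(advertised_schema)

# A server that later adds a hidden exfiltration parameter fails the check.
rug_pulled = {"name": "send_email",
              "params": {"to": "string", "body": "string", "bcc": "string"}}
assert tool_allowed("send_email", approved_schema)
assert not tool_allowed("send_email", rug_pulled)
```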
Agents are digital participants that need identities, permissions, oversight, and accountability — like human users. But IAM systems were designed for humans and service accounts, not autonomous decision-makers that reason about which tools to use.
Start with an inventory: which agents exist, what tools they access, what permissions they have, and whether they delegate to other agents. If you don't know this, you can't govern it.
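A starting point is a machine-readable inventory record per agent. The fields below are illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentInventoryRecord:
    """Illustrative inventory entry; fields are assumptions, not a standard."""
    agent_id: str
    owner: str                                              # accountable human or team
    tools: list[str] = field(default_factory=list)          # tools the agent can invoke
    permissions: list[str] = field(default_factory=list)    # scopes granted on core systems
    delegates_to: list[str] = field(default_factory=list)   # downstream agents it can task

inventory = [
    AgentInventoryRecord(
        agent_id="agent:invoice-bot",
        owner="team:finance-eng",
        tools=["sap.read_ledger", "email.send"],
        permissions=["sap:read", "email:send-internal"],
        delegates_to=["agent:ocr-helper"],
    ),
]
```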
Remove auto-approve from sensitive operations. Scope tool permissions to the minimum required. This is Kill Chain Stage 4 — the highest-impact, lowest-effort control.
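In practice this is a policy gate in front of every tool call: nothing runs outside the agent's minimal scope, and sensitive operations always require explicit human approval. A sketch with hypothetical operation names:

```python
SENSITIVE_OPERATIONS = {"email.send", "sap.write", "file.delete"}   # hypothetical list

def approve_tool_call(agent_scopes: set[str], operation: str, human_approved: bool) -> bool:
    """Least privilege plus no auto-approve: every op needs scope, sensitive ops need a human."""
    if operation not in agent_scopes:
        return False   # outside the agent's minimal scope
    if operation in SENSITIVE_OPERATIONS and not human_approved:
        return False   # auto-approve disabled for sensitive operations
    return True

scopes = {"sap.read", "email.send"}
assert approve_tool_call(scopes, "sap.read", human_approved=False)        # routine, in scope
assert not approve_tool_call(scopes, "email.send", human_approved=False)  # needs a human
assert approve_tool_call(scopes, "email.send", human_approved=True)
```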
Singapore's MGF for Agentic AI rests on four dimensions: assess and bound risks upfront, make humans meaningfully accountable, implement technical controls, and enable end-user responsibility. It's the cleanest agentic governance model available.
Keep complete audit trails of all agent interactions: prompts, responses, tool calls, and parameters. If Microsoft elevated this to a security requirement, your organization should too. See Behavioral Baselines.
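A minimum viable version is an append-only stream of structured records covering prompts, responses, and tool calls. Field names below are assumptions:

```python
import json
import time
import uuid

def audit_event(agent_id: str, event_type: str, payload: dict) -> str:
    """One structured audit record; in production this goes to an append-only store or SIEM."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "event_type": event_type,   # "prompt" | "response" | "tool_call"
        "payload": payload,
    }
    return json.dumps(record)

log_line = audit_event(
    "agent:invoice-bot",
    "tool_call",
    {"tool": "sap.read_ledger", "parameters": {"period": "2025-Q4"}},
)
```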
The SP 800-53 AI overlay approach means agent security controls integrate into your existing security program. If your org is NIST-aligned, this will be the standard. Comments close April 2, 2026.
This is the governance layer of the Agentic AI Kill Chain. For technical controls, see Hook Guardrails, MCP Security, and Red Teaming. For detection, see Behavioral Baselines.
This work represents the author's independent research and personal views. It is not related to or endorsed by the author's employer.