Body Snatchers & Agentic Possession
An Exorcist’s Field Manual for the AI Era
Most “AI security” programs are compliance theater dressed in technical language.
A control taxonomy here. A policy memo there. A vendor questionnaire. A risk register with adjectives. The industry is stacking frameworks like Pokémon cards and calling it progress.
It isn’t.
Frameworks help you design a compliant AI system. They do not secure AI usage in practice—shadow AI, agent sprawl, prompt-driven data leakage, tool abuse, model supply chain drift. That gap is where “Agentic Era” programs go to die. Your frameworks certified the org chart while possessed interns with API keys wandered the production environment.
The industry is converging on a truth it doesn’t want to hear: AI security isn’t a new tower to build. It’s a coordination plane between functions that already exist.
The Convergence Nobody Asked For
AI security is not a new discipline. It's a forcing function that pushes existing functions towards operational integration whether you like it or not. The framework vendors and empire-builders want you to believe otherwise—new towers, new budgets, new headcount. Ignore them. The technical reality has already decided where AI security lives — as the glue and enforcement engine that binds cyber to data governance, privacy, and MRM.
AI security collapses into data security because AI models are data stores. LLMs emit training data verbatim. Model inversion attacks reconstruct faces with enough fidelity that crowdworkers identify individuals at 95% accuracy. The distinction between “model” and “database” has collapsed. The failure modes are no longer binary; they are a function of probability distributions. We are no longer defending a perimeter; we are managing the P(leakage∣prompt) across an infinite state space.
AI security collapses into data privacy because you cannot grep weights. GDPR grants the right to data erasure, but nobody defined erasure for neural networks. Recent research introduced “ununlearning”—where unlearned knowledge gets reintroduced in-context. The “right to be forgotten” needs math, not assurances. The math is still being worked out on the chalkboard.
AI security collapses into data governance because lineage and provenance are no longer documentation exercises—they’re runtime requirements. When your RAG system pulls from enterprise document stores, when your agents access APIs with delegated credentials, governance stops being a committee and becomes runtime policy. Or it stops being governance at all.
AI security collapses into model risk management because the system is probabilistic and the failure modes are statistical. The Federal Reserve’s SR 11-7 defines model risk as occurring when “a model may have fundamental errors and produce inaccurate outputs.” AI hallucination is an integrity failure within established risk management categories. The regulatory framework already exists. Use it.
Convergence does not mean consolidation.
MRM has validated complex algorithms for twenty years. We aren’t trying to replace them. The problem is velocity. MRM detects drift over months. They aren’t built to detect a prompt injection happening in real-time. By the time their process catches it, the data is already gone.
MRM sets the Law. Cyber provides the Enforcement.
MRM defines what “effective challenge” means for model validity. Cyber builds the automated harness that runs those checks in CI/CD, adds adversarial evaluation that MRM’s mathematical frame doesn’t capture, and monitors runtime behavior for attacks that validation-time testing cannot anticipate. If you’re still running these as separate programs with no operational integration, you’re building four different dashboards for one fire. And the fire is already burning.
The Possessed Agentic Intern
Agentic systems don’t just have “answer authority.” They have action authority—tools, APIs, delegated identity, and a supply chain explosion of plugins, registries, and orchestration layers. The thing you’re trying to secure isn’t a model anymore. It’s a runtime that can read your data, reason about it, and take actions in production systems.
The theoretical became operational in January 2026 with BodySnatcher—described as “the most severe AI-driven security vulnerability uncovered to date.”
ServiceNow’s Virtual Agent API shipped with a hardcoded, platform-wide authentication secret—the same token across all customer instances. An unauthenticated attacker, knowing only a target’s email, could bypass MFA and SSO, impersonate an administrator, and execute AI agents to create backdoor accounts with full privileges. The exploit weaponized ServiceNow’s own agent to provision admin credentials. No clicks required. No credentials needed. Just an email address.
When you give an agent autonomous rights, you bypass the entire human-centric identity stack. The configuration choices that enabled BodySnatcher—hardcoded secrets, trust-on-email auto-linking, overprivileged default agents—could resurface in any organization’s code. This is not a ServiceNow problem. This is an agentic architecture problem.
Your unit of control is no longer “a model” or “a prompt.” It’s a runtime. If you can’t enforce per-tool authorization, least privilege, provenance tracking, and trace logging, your “agent” is just a privileged intern with amnesia and a corporate credit card.
And as BodySnatcher demonstrated, that intern can be body-snatched by anyone who knows an email address.
Variance: The CIA Triad’s Plus One
Generative systems introduce probabilistic variance as an operational property: the same input can yield different outputs, with different risk, under the same “system.” That breaks every classic security assumption you’ve relied on for thirty years:
Confidentiality becomes memorization and inversion risk. Zero-click exfiltration attacks hijack enterprise copilots during summarization, exfiltrating documents via hidden prompt instructions. Your perimeter didn’t see it. Your DLP didn’t catch it. The model was the exfiltration channel.
Integrity becomes hallucination, poisoning, and backdoors—truthfulness as a control objective. Corrupting 2% of training labels achieves near-perfect backdoor success. Nation-states are producing models where provenance is unknown. You’re deploying black boxes with unknown origins into production.
Availability becomes denial of wallet— this AI-native version of an asymmetric attack makes cost now an attack surface. Attackers weaponize pay-per-token billing to inflict financial damage. Your SOC is watching for intrusions. The attacker is running up your cloud bill.
The traditional checkbox compliance model can’t address any of this. It optimizes for point-in-time attestations instead of continuous proof. It treats “the application” as the unit of control while AI systems are shifting compositions of models, pipelines, tools, and vendor components. It externalizes risk to review boards instead of encoding requirements into shipping defaults.
In AI, “compliance passed” can coexist with prompt-mediated exfiltration, tool abuse, and provenance collapse. The highest-impact failures—data exfiltration, policy bypass, unsafe autonomy—are rarely “a missing security tool.” They are failures of boundaries, lifecycle controls, and evidence.
Security teams can’t firewall their way out of this. MRM teams can’t “validate” their way out of it alone. Unless risk ownership, enforcement, and monitoring are unified into an engineering control plane, you’re certifying theater.
AI Security Is Quantitative Engineering
The traditional IT security model—purchasing vendor tools, deploying agents, checking compliance boxes—fails catastrophically when applied to AI because it assumes deterministic systems with static perimeters.
In the AI era, the data is the logic, and the application is probabilistic. You cannot buy a “tool” to fix a model that has memorized PII; you must engineer a data pipeline that sanitizes the training set before the model is built. You cannot “configure” a DLP policy to catch a prompt injection that changes meaning based on context; you must architect structural isolation between untrusted input and privileged tools.
The deterministic shield is broken. You cannot firewall a concept. You cannot write a regex for “malicious intent” when that intent is semantically hidden inside a valid business request.
The control plane for AI security resembles an MLOps layer as much as it does a security gateway. The inherent variance in agentic infrastructure—where the same agent can take different actions on identical inputs—requires dynamic controls built on statistical models rather than static rule sets.
This is why convergence with MRM isn’t optional. MRM is the only discipline with the mathematical tooling to manage probabilistic variance: drift detection, distribution monitoring, confidence thresholds, effective challenge. These aren’t security concepts borrowed from risk management. They are security controls when your system is stochastic.
Reliance on policy documents and risk registers is bureaucratic coping. The only effective control is governance engineering—paved roads, execution airlocks, and CI/CD harnesses that enforce safety constraints at the code and infrastructure level.
If security teams cannot write the code to govern the runtime, they are no longer participants in the defense. They are spectators.
The Exorcist’s Field Manual
AI security frameworks are reference overlays. They are not control planes. Stop confusing the menu for the meal.
In the agentic era, “security” is inseparable from data security, privacy, governance, and MRM because the core system is probabilistic and action-capable. But inseparable does not mean consolidated—that’s a land grab that will fail politically and operationally. MRM, data governance, and privacy set the Law. Cyber provides the Enforcement.
The winning strategy is quantitative governance engineering: paved roads that embed secure-by-design into MLOps/LLMOps, with statistical monitoring, continuous evaluation, and supply-chain-grade provenance. One paved road serving multiple governance functions—not parallel checkpoints that create the gaps where attackers live.
The forced merger is not organizational consolidation but operational integration. The CISO org translates threats into risk language, builds the automated enforcement, and provides the adversarial mindset—while respecting the governance authority of functions that have been managing these risks for decades.
If you keep the old org chart—separate towers, review-heavy controls, parallel bureaucracies—you’ll get the predictable outcome: shadow agents, inconsistent guardrails, and a paper compliance program while the adversaries walk through your front door.
Anything else is compliance cosplay that collapses the first time a tool-using agent finds a path around your slide deck.

