Laava LogoLaava
News & Analysis

Microsoft's new Agent 365 exposes the hidden risk of ungoverned enterprise AI agents

Based on: VentureBeat

Microsoft just launched a $99/month governance platform for enterprise AI agents — because 29% of them are already running without IT approval. The "double agent" problem is real, and most companies have no idea how exposed they are.

What Microsoft just announced

On March 9, 2026, Microsoft announced the general availability of Agent 365 and Microsoft 365 Enterprise 7 — two products built around a single uncomfortable truth: AI agents are already running inside most large organizations, and nobody is watching them.

Agent 365 is priced at $15 per user per month and serves as what Microsoft calls the "control plane for agents" — a centralized registry and security platform that lets IT and security teams observe, govern, and block AI agents operating across an enterprise. The M365 Enterprise 7 bundle wraps this together with Copilot and Microsoft's full security stack for $99 per user per month.

The numbers behind the announcement are striking. More than 80% of Fortune 500 companies actively use AI agents built with low-code and no-code tools. IDC projects 1.3 billion agents will be in circulation by 2028. Microsoft itself now has visibility into more than 500,000 agents running inside its own corporate environment. And yet, according to Microsoft's own research, 29% of those agents in surveyed organizations operate without approval from IT or security teams. Only 47% of organizations use any security tools at all to protect their AI deployments.

The "double agent" problem

Microsoft has coined a pointed term for the emerging risk: "double agents." The concept describes scenarios where AI agents, built to serve an organization, are manipulated through prompt injection, model poisoning, or other techniques into acting against the organization's interests. In Microsoft's own red team experiments, these attacks were successful: agents were manipulated into accessing unauthorized data, exfiltrating information, or executing unintended actions.

Separately, Microsoft's security team identified more than 50 unique "AI Recommendation Poisoning" prompts from 31 companies across 14 industries — hidden instructions embedded in websites designed to hijack AI assistants. When a user clicks "Summarize with AI" on one of these pages, the agent receives covert instructions to treat an unknown third party as a trusted source.

This is not theoretical. Agents that process documents, emails, and web content — exactly the agents most businesses deploy first — are the most exposed. The attack surface is every piece of unstructured data the agent reads.

Why this matters for any business deploying AI agents

The broader pattern here is not unique to Microsoft's ecosystem. Organizations across every industry are deploying AI agents faster than they are building the controls to govern them. The dynamic is familiar: business teams build something that works in a demo, deploy it to production, and six months later nobody can answer basic questions — which agent is doing what? What data does it access? Who approved it? What happens when it makes a mistake?

The stakes are higher than they appear. AI agents that process invoices, contracts, customer emails, and HR data have access to sensitive business information and the ability to take consequential actions in ERP and CRM systems. A poorly governed agent is not just a performance risk — it is a compliance risk, a security risk, and increasingly a legal risk under frameworks like the EU AI Act.

Microsoft's announcement signals that the market is maturing past the "AI demo" phase. Governance is no longer a nice-to-have — it is the product. The question for every organization with agents in production is: did you build governance in from the start, or are you now buying it as a retrofit?

How Laava thinks about this

This is exactly the gap Laava was built to close. Before writing a single line of code, we map the business process: what is the trigger for this agent? What decisions does it make autonomously, and which require human approval? What data does it touch, and what are the access boundaries? Who is accountable when it makes a mistake?

Governance is not an afterthought in our architecture — it is baked into the agent design from day one. Every agent we deploy includes full audit trails (who approved what, when, and why), human-in-the-loop checkpoints at high-stakes decision nodes, least-privilege data access scoped to the task, and explicit escalation paths when the agent encounters uncertainty or anomalies.

We also take prompt injection seriously as a threat model, not just a curiosity. Agents that process external documents, supplier emails, or customer-submitted content need to be hardened against the exact attacks Microsoft's red team is now demonstrating at scale. This is especially relevant for document-processing agents operating in logistics, public sector, and financial back-office environments — the kinds of deployments we work on every day.

What you should do now

If your organization already has AI agents running in production, start with an audit: catalogue what is deployed, what data it accesses, who approved it, and whether there are audit logs. If any of those answers are "we don't know," that is your starting point.

If you are planning your first agent deployment, do not treat governance as a phase two problem. The cost of retrofitting controls into a live system is always higher than building them in from the start. Laava offers a structured Roadmap Session to map your process, identify risk points, and design an agent architecture that is both capable and defensible from day one. Reach out at laava.nl if you want to start that conversation.

Want to know how this affects your organization?

We help you navigate these changes with practical solutions.

Book a conversation

Ready to get started?

Get in touch and discover what we can do for you. No-commitment conversation, concrete answers.

No strings attached. We're happy to think along.

Microsoft's new Agent 365 exposes the hidden risk of ungoverned enterprise AI agents | Laava News | Laava