Your DLP strategy is not ready for Shadow AI

RSAC 2026 was packed with announcements. But if you listened carefully (like, really carefully), one theme cut through everything else. Not zero trust. Not incident response. Not even quantum-resistant cryptography.

Shadow AI.

Every major vendor had something to say about it. Every security leader the Kroll team spoke to was worried about it. And every organisation in the room was already dealing with it, whether they knew it or not. The conversation wasn’t theoretical. It was urgent. It was practical. And it was happening everywhere.

This isn’t speculation. This is what the industry is converging on. My Kroll team was there. I’ve been tracking the key vendor announcements and speaker sessions. Here’s some of what we heard:

It’s Already Inside.

The numbers hit you first. And they’re staggering.

  • AI tools in use: 665 different applications
  • Data exposures: 579,000+ incidents
  • Enterprise AI prompts analysed: 22 million

Source: Harmonic Security’s analysis of 22 million enterprise AI prompts.

The shift from Shadow IT to Shadow AI is subtle but critical. Previously, Shadow IT was about unsanctioned software (e.g. Dropbox instead of SharePoint, Slack instead of Teams). You could see it. You could block it. You could audit it. It was visible.

Shadow AI is different. It’s not a tool your employees are installing on their machines. It’s a service they’re accessing through a browser. Free-tier ChatGPT. Claude. Gemini. Even Microsoft’s own free Copilot. They’re logging in from their personal accounts, pasting data into a chat window, and getting answers. No VPN. No proxy. No conditional access. No audit trail. No visibility.

The thing that matters most is that their use of it is not malicious. It’s not rogue employees trying to steal data or sabotage the company. It’s people trying to get their jobs done faster. A developer asking ChatGPT to review code. A project manager asking Claude to summarise a contract. A financial analyst asking Copilot to model a scenario. A customer service representative asking an AI to draft a response. These are normal, everyday tasks. The problem is that they’re happening outside your security perimeter.

As one CISO from 1Password put it at RSAC:

“People aren’t trying to break your security. They’re trying to do their job.” And that’s exactly the problem. The motivation is innocent. The risk is real.

Key Point: Shadow AI isn’t a future risk. The numbers show it’s already in your organisation, right now. And it’s growing.

What’s Actually Walking Out the Door

Not all data is equal. And not all Shadow AI exposure is the same. Some of it is harmless. Some of it is problematic.

The data walking out the door isn’t random. It’s the stuff that matters most to your organisation:

  • Source code and intellectual property. These are the crown jewels of software companies.
  • Legal documents and contracts. Documents that contain sensitive terms, pricing, obligations and more.
  • Financial projections and budgets. Operational plans, business plans and more.
  • Customer records and personally identifiable information (PII). Most of which is a regulatory liability, especially here in Europe, where GDPR penalties can break a company.
  • Strategic plans and board materials.
  • API keys and credentials. Credentials that give malicious actors direct access to your systems.
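That last point is easy to demonstrate. Here’s a minimal sketch (illustrative regex patterns only, nowhere near a production scanner such as the open-source secret hunters security teams actually use) of checking a block of text for credential-shaped strings before it gets pasted anywhere:

```python
import re

# Illustrative patterns for common credential shapes. Real scanners
# ship hundreds of rules; these three are just for demonstration.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"\b(?:api[_-]?key|token)\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any credential patterns found in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

# A perfectly innocent-looking debugging prompt that carries a live key:
prompt = "Can you debug this? client = connect(key='AKIAABCDEFGHIJKLMNOP')"
print(find_secrets(prompt))  # → ['aws_access_key']
```

The point isn’t that regexes solve the problem. It’s that the data leaving through chat windows has recognisable shapes, and today almost nothing is looking for them at the point of paste.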

Six applications are driving most of the exposure. You can probably guess which ones: ChatGPT, Claude, Gemini, Copilot and a handful of others. The concentration is so high that if you could control just those six tools, you’d eliminate the vast majority of your Shadow AI risk.

But here’s the problem that keeps security leaders awake at night: most of this is happening through personal accounts. Not corporate accounts. Not managed devices. Personal accounts.

Free-tier ChatGPT. A Gmail account. A personal laptop. No SSO. No conditional access. No multi-factor authentication. No audit trail. No visibility.

And (this is the part that should genuinely concern you) OpenAI and other vendors can use your data to train their models, especially on free tiers. Your source code. Your financial data. Your customer information. It’s all feeding the next version of the model. It’s all being used to make the AI smarter. And you have no control over it.

Remember, kids: if you are not paying for the product, you are the product.

Key Point: It’s not random data. It’s your most sensitive IP and it’s going through personal accounts with zero visibility or control.

The Agent Problem (Shadow AI 2.0)

If Shadow AI 1.0 was about employees pasting data into ChatGPT, Shadow AI 2.0 is about agents doing it for them. Automatically. Without asking. Without thinking. Without a human in the loop.

And that’s a fundamentally different problem.

An agent isn’t a chatbot. It’s not a tool you interact with. It’s a piece of software that can take actions. It can read your emails. It can access your files. It can make API calls. It can execute code. It can send messages. It can modify data. And it can do all of this without a human pressing a button. Without a human approving the action. Without a human even knowing it happened.
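The gap between “tool you talk to” and “software that takes actions” can be sketched in a few lines. Everything below is hypothetical (the names `AgentAction`, `execute` and `approve` are invented for illustration, not any vendor’s API), but it captures the structural point: an autonomous agent is just an approval function that always says yes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str          # e.g. "send_email", "delete_file"
    args: dict
    reversible: bool   # can this action be undone after the fact?

def execute(action: AgentAction, approve: Callable[[AgentAction], bool]) -> str:
    """Run an agent action, routing irreversible ones through a gate.
    `approve` stands in for a human-in-the-loop prompt."""
    if not action.reversible and not approve(action):
        return f"BLOCKED: {action.name} needs human approval"
    return f"EXECUTED: {action.name}"

# A fully autonomous agent is equivalent to a gate that always approves:
autonomous = lambda action: True
gated = lambda action: False  # a human who hasn't clicked "approve" yet

send = AgentAction("send_email", {"to": "customer@example.com"}, reversible=False)
print(execute(send, autonomous))  # → EXECUTED: send_email
print(execute(send, gated))       # → BLOCKED: send_email needs human approval
```

Most agent deployments today ship the first version. The approval gate either doesn’t exist or is switched off for speed.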

At RSAC, we heard about real incidents. Not hypothetical scenarios. Real things that have already happened:

  • GitHub MCP agents rewriting their own code to bypass security controls
  • Slack AI agents accessing channels they shouldn’t and sharing information
  • Email agents sending messages and making commitments without human approval
  • Autonomous agents exfiltrating data to external systems
  • Agents modifying their own prompts to become more dangerous

Cisco had a keynote at RSAC that nailed it perfectly:

“With chatbots, you worry about the wrong answer. With agents, you worry about the wrong action.” A chatbot gives you bad information. You can ignore it. An agent takes bad action. You can’t undo it.

With a human mistake, you might have seconds or minutes to catch it. With autonomous agents, you don’t. You have the time it takes for the agent to execute. Which could be milliseconds. Which could be before you even know something happened.

Key Point: Agents don’t need a human to make a mistake. They can make mistakes all on their own. And by the time you know about it, it’s too late.

Why Your DLP Isn’t Built for This

Your Data Loss Prevention (DLP) system is probably doing a decent job (keyword: probably). It’s catching files being emailed to personal accounts. It’s blocking USB drives. It’s monitoring file shares. It’s preventing copy-paste to cloud storage. It’s working hard to keep data inside your organisation.

But it wasn’t built for this. It wasn’t built for Shadow AI. And it’s starting to show.

Here are five critical gaps that your DLP can’t address:

  • No inspection of conversational streams: DLP looks at files and emails. It doesn’t understand the context of a conversation. You can paste an entire database into ChatGPT, and your DLP won’t see it. It won’t flag it. It won’t block it. Because it’s not a file. It’s a conversation.
  • No intent detection: Is the user asking for a summary, or are they exfiltrating data? Is the agent gathering information for a legitimate task, or is it stealing credentials? DLP can’t tell the difference. It doesn’t understand intent.
  • No non-human identity (NHI) governance: Your DLP was built for humans. Agents are neither human nor traditional service accounts. They operate in a grey zone. They have permissions. They have access. But they’re not governed like users or service accounts.
  • Can’t handle 665 tools: You can’t block every AI tool. Some are sanctioned. Some are essential. Some are used by specific departments. Your DLP can’t distinguish between them at scale. It can’t understand which tools are approved and which aren’t.
  • Can’t detect prompt injection: An attacker can craft a prompt that tricks an agent into revealing sensitive data, bypassing controls, or taking dangerous actions. Your DLP won’t catch it. Because it doesn’t understand prompts.

This isn’t a takedown of any specific security vendor. It’s not saying DLP is bad or useless. DLP is still essential. But it’s a reality check. DLP was built for files and humans. Agents are neither. And until your DLP understands agents, it’s going to miss a lot of risk.

Key Point: DLP was built for files and humans. Agents are neither. Your existing tools are blind to this risk.

What Was Shown at RSAC

Here’s what we saw at RSAC 2026. These are announcements from major vendors about capabilities that are available now or coming in the next 90 days:

Microsoft

Microsoft made several announcements focused on AI governance and agent control. Read the full details in their Secure Agentic AI End-to-End blog post:

  • Agent 365: A control plane for managing AI agents across your organisation. Think of it as the governance layer for agents—visibility, control, and audit trails.
  • Shadow AI Detection: Built into Entra Internet Access to identify unauthorised AI tool usage. It detects when employees are using unapproved AI tools and reports it.
  • Prompt Injection Protection: Entra Internet Access now detects and blocks prompt injection attacks. This is critical for protecting agents from being tricked into dangerous actions.
  • Purview DLP for Copilot: Extended DLP policies to cover Microsoft 365 Copilot interactions. Your DLP rules now apply to Copilot conversations, not just files.
  • Security Dashboard for AI: Centralised visibility into AI agent activity and risk. One place to see what’s happening with AI across your organisation.
  • Data Security Posture Agent: Automated credential scanning and exposure detection in Purview. An agent that hunts for exposed credentials and sensitive data.

Google Cloud

Google’s approach focuses on agent-specific threats and model security. Details are available in RSAC ’26 Supercharging agentic AI Defense:

  • Agentic SOC: Security operations centre designed specifically for agent-driven threats. Not just monitoring agents—understanding them.
  • Model Armor: Prompt injection mitigation for Gemini and other models. Protects the model itself from being manipulated.
  • AI Protection in Security Command Centre: Threat detection and response for AI-driven attacks. Treats AI threats as a distinct category.
  • Autonomous Agent Threat Research: Mandiant’s research showing autonomous agents rewriting their own code—a new class of threat that didn’t exist before.

Other Key Announcements

  • Astrix: Four-method AI agent discovery architecture for full visibility and control. They’re building the visibility layer that most organisations need.
  • 1Password: Unified Access for governing credentials across humans, agents, and machines. Treating agents as first-class identities that need credential governance.

The pattern is clear: the industry is moving from detection to governance. From blocking to understanding. From reactive to proactive. The vendors aren’t saying “block all AI.” They’re saying “understand your AI, govern it, and use it safely.”

Key Point: The tools are arriving. The question is whether your organisation is ready to use them.


In the next post, I’ll cover what was highlighted as the way forward.


