
Shadow AI is your biggest new security risk (and we know how to fix it)

Written by Jana Brnakova | November 13, 2025

You’ve probably heard of the term Shadow IT: employees using third-party tools like Slack, Dropbox, or WhatsApp that your security team never approved. It all seems fine at first, until it turns into uncontrolled IT sprawl and serious security and compliance issues.

That was just the beginning. As if that wasn’t enough, we now also have Shadow AI.

What is Shadow AI?

With all this new generative AI, your teams are likely more productive than ever, but what if that productivity comes with a huge, hidden risk? Not only are they merrily using all these different AI tools, they’re also switching on AI features hidden inside tools you’ve already approved.

Here are a few examples of what Shadow AI looks like in practice:

  • A sales rep pastes their private customer notes into ChatGPT to help them write a follow-up email.
  • An HR employee uploads a performance review to Claude to help them rephrase it.
  • A marketing person uploads an unreleased product photo to Midjourney to get ideas for a new ad campaign.

If you think your company’s AI policy has you covered, you might want to check again. The reality is that employees are using far more AI than you think, and they’re doing it in ways that completely bypass your security controls.

What the data says

A Q1 2025 report from Harmonic Security shows the widespread use of AI. They looked at over 176,000 AI prompts and thousands of files uploaded by 8,000 enterprise users.

They found that the average company isn’t just using ChatGPT: its employees interacted with 254 distinct AI applications (not counting tools accessed via mobile or APIs).

Employees are using tons of different tools, from big names like ChatGPT and Claude to all sorts of random apps, all on their own, without approval.

Proportion of personal vs corporate accounts used for sensitive data (Source: Harmonic Security)

According to the study, 45.4% of sensitive data submissions, nearly half of all sensitive AI interactions, came from personal email accounts rather than corporate ones. Of those personal accounts, 57.9% were Gmail addresses.

ChatGPT is the leading tool in the studied sample, serving as the destination for 79.1% of sensitive data. Of that, more than one-fifth (21%) was uploaded directly to the free tier of ChatGPT, where it can be used for model training.

Sensitive data categories by % of data in each application (Source: Harmonic Security)

Some other findings revealing potential security gaps:

  • 7% of users accessed Chinese-built AI tools (like DeepSeek, Ernie Bot, and Qwen Chat), which often have unclear data policies.
  • Image files made up 68.3% of uploads to ChatGPT, showing a growing comfort with putting multimedia content into public AI, regardless of company policy.
  • Standard work documents (.docx, .pdf, .xlsx) with proprietary business data are routinely uploaded to public models.

Why are employees using their personal accounts?

It’s easy to just blame employees, but the thing is, they’re just trying to get their work done, and the official route often adds friction.

People will always choose the easiest way to get things done. If the official company AI tool is slow, confusing, or has too many rules, they’ll just open a new tab and use their personal ChatGPT or Gemini account. It’s just faster and easier for them.

We saw the exact same behavior with Shadow IT, where teams started using tools like Slack, Trello, or Dropbox long before they were officially approved because they were just better and easier than the corporate-approved solutions.

But the end result is always a huge security problem. You can’t see what’s happening, you can’t control it, and you can’t make sure rules are being followed.

Set up rules, don’t just block

Simply blocking all AI won’t work. People will just find other ways to use it that are even harder to track. Instead of stopping people, you need to show them the right way. The trick is to make the safe, company-approved option the easiest one for everyone. Here’s how.

1. Discover & classify

The first step is to discover and map the AI apps your employees are actually using, and then classify the data.

But not all data is equal, and understanding the difference between pasting public marketing copy and uploading sensitive customer data is key to prioritizing your security efforts.
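To make this step concrete, here is a minimal sketch of the discovery-and-classification idea, assuming you can export gateway or proxy logs to CSV. The domain list, the log columns (user, host, payload_snippet), and the sensitivity regexes are all illustrative assumptions, not a ready-made DLP tool or Revolgy’s own tooling.

```python
# Minimal sketch: scan an exported proxy/DNS log for traffic to known AI
# services and tag crude data-sensitivity hints. Domains, columns, and
# regexes below are illustrative assumptions only.

import csv
import re
from collections import Counter

# Hypothetical starting list of domains you consider "AI apps"
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.midjourney.com": "Midjourney",
    "chat.deepseek.com": "DeepSeek",
}

# Very rough sensitivity hints; a real classifier would go far beyond this.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),  # hypothetical internal ID format
}

def classify(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a payload snippet."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def discover(log_path: str) -> None:
    """Tally which AI apps appear in a log with columns: user, host, payload_snippet."""
    app_usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            app = AI_DOMAINS.get(row["host"])
            if app is None:
                continue
            app_usage[app] += 1
            hits = classify(row.get("payload_snippet", ""))
            if hits:
                print(f"{row['user']} -> {app}: possible {', '.join(hits)}")
    print("AI app usage:", dict(app_usage))

if __name__ == "__main__":
    discover("proxy_log.csv")  # hypothetical export from your gateway
```

Even a rough inventory like this tells you which apps to prioritize and whether sensitive data is involved, which is exactly the distinction between marketing copy and customer records described above.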

2. Provide an “approved” list

The best way to stop employees from using risky, unapproved platforms is to give them safe alternatives.

Define and clearly communicate a list of sanctioned AI tools, like an enterprise version of ChatGPT or Google’s Vertex AI. This gives employees the tools they want in a secure environment where you control the data.

3. Set clear boundaries

Instead of just blocking, set clear usage boundaries and enforce them. This means moving beyond policy docs to real, automatic controls. For example, you can allow the use of an approved AI but actively block sensitive data (like customer info) from being uploaded. A simple pop-up “nudge” can do the trick, too.
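As a rough illustration of the “allow the tool, block the data” idea, here is a minimal sketch of a prompt check that could sit in an internal gateway in front of an approved AI endpoint. The pattern names, the hypothetical CUST-xxxxxx customer ID format, and the nudge wording are assumptions made for the example, not a specific product’s behavior.

```python
# Minimal sketch: inspect a prompt before it is forwarded to an approved AI
# endpoint. Block when sensitive patterns are present, otherwise let it
# through with a gentle reminder (the pop-up "nudge").

import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "customer_id": re.compile(r"\bCUST-\d{6}\b"),  # hypothetical internal ID format
}

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message): block on sensitive matches, else allow with a nudge."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        return False, (
            "This prompt looks like it contains "
            + ", ".join(hits)
            + ". Please remove it or use the approved internal workflow."
        )
    return True, "Reminder: only share data that is cleared for external AI tools."

if __name__ == "__main__":
    allowed, message = check_prompt(
        "Write a follow-up email for jane.doe@example.com about CUST-123456"
    )
    print(allowed)   # False: the prompt contains an email address and a customer ID
    print(message)   # the nudge shown to the employee
```

The point of the sketch is the control pattern, not the regexes: the approved tool stays available, but the riskiest data never leaves the building.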

4. Make security a team effort

Train your employees to be partners in AI security, not just a problem to be managed. The goal isn’t to scare them or stop them from using AI, but to explain why certain data can’t be shared with public tools. When people understand the why behind the rules, they are far more likely to follow them.


The future is managed AI

We know your teams are using AI. If you try to stop them, they’ll just find creative, unmanaged, and risky workarounds. The risk isn’t the AI itself, but its unmanaged use.

It’s time to move past passive policy documents no one actually reads. Revolgy works with you to replace outdated policies with an active AI governance plan. Our goal is to make the secure way the easiest way.

Ready to get visibility into your company’s AI usage and build a strategy that works? Let’s talk.