You’ve probably heard of the term Shadow IT: employees using third-party tools like Slack, Dropbox, or WhatsApp that your security team never vetted or approved. It seemed harmless at first, until it turned into uncontrolled IT sprawl and serious security and compliance problems.
That was just the beginning. Now we also have Shadow AI.
With all this new generative AI, your teams are likely more productive than ever, but what if that productivity comes with a huge, hidden risk? Not only are they happily using all sorts of different AI tools, they’re also switching on AI features buried inside the tools you’ve already approved.
Shadow AI takes many forms: employees pasting work data into personal ChatGPT or Gemini accounts, teams adopting niche AI apps without approval, or AI assistants quietly enabled inside already-sanctioned software.
If you think your company’s AI policy has you covered, you might want to check again. The reality is that employees are using far more AI than you think, and they’re doing it in ways that bypass your security controls entirely.
A Q1 2025 report from Harmonic Security shows just how widespread AI use has become. The study analyzed over 176,000 AI prompts and thousands of files uploaded by 8,000 enterprise users.
It found that the average company isn’t just using ChatGPT: its employees interacted with 254 distinct AI applications (not counting tools accessed via mobile or APIs).
Employees are using a huge range of tools, from big names like ChatGPT and Claude to a long tail of niche apps, all on their own and without approval.
Proportion of personal vs corporate accounts used for sensitive data (Source: Harmonic Security)
According to the study, 45.4% of sensitive data is at risk: nearly half of all sensitive AI interactions came from personal email accounts rather than corporate ones, and 57.9% of those personal accounts were Gmail addresses.
ChatGPT is the leading tool in the studied sample, receiving 79.1% of the sensitive data. Of that, more than a fifth (21%) was uploaded directly to ChatGPT’s free tier, where it can be used for model training.
Some other findings revealing potential security gaps:
It’s easy to just blame employees, but they’re simply trying to get their work done, and the approved tools often add friction that gets in the way.
People will always choose the easiest way to get things done. If the official company AI tool is slow, confusing, or wrapped in too many rules, they’ll open a new tab and use their personal ChatGPT or Gemini account because it’s faster and easier.
We saw the exact same behavior with Shadow IT, where teams started using tools like Slack, Trello, or Dropbox long before they were officially approved, simply because they were better and easier than the sanctioned alternatives.
But the end result is always a huge security problem. You can’t see what’s happening, you can’t control it, and you can’t make sure rules are being followed.
Simply blocking all AI won’t work. People will just find other ways to use it that are even harder to track. Instead of stopping people, you need to show them the right way. The trick is to make the safe, company-approved option the easiest one for everyone. Here’s how.
The first step is to discover and map the AI apps your employees are actually using, and then classify the data flowing into them.
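To make this concrete, here’s a minimal sketch of what discovery can look like in practice: scanning an exported proxy or DNS log for traffic to known AI domains and building a per-user inventory. The log columns (`user`, `domain`) and the domain list are illustrative assumptions; adapt both to whatever your gateway actually exports.

```python
# Minimal Shadow AI discovery sketch: scan an exported proxy/DNS log (CSV)
# for traffic to known AI tool domains and build a per-user inventory.
# The column names and the seed domain list below are assumptions for the
# example; a real setup would use a maintained catalogue of AI services.
import csv
from collections import defaultdict

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> set of AI tools they accessed."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            for known, tool in AI_DOMAINS.items():
                if domain == known or domain.endswith("." + known):
                    usage[row["user"]].add(tool)
    return usage

if __name__ == "__main__":
    for user, tools in discover_ai_usage("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(tools))}")
```

Even a rough inventory like this usually surfaces far more tools than anyone expected, which is exactly the visibility gap the Harmonic data points to.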
But not all data is equal, and understanding the difference between pasting public marketing copy and uploading sensitive customer data is key to prioritizing your security efforts.
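As an illustration, here’s a rough sketch of that kind of triage: tagging each prompt as public, internal, or sensitive based on simple pattern matches. The regexes and keywords are placeholders; a real deployment would rely on your DLP engine’s detectors and your own data-classification scheme.

```python
# Minimal data-classification sketch: tag AI prompts by sensitivity so you
# can prioritize. Patterns below are illustrative placeholders only.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def classify_prompt(text: str) -> tuple[str, list[str]]:
    """Return ('sensitive'|'internal'|'public', matched detector names)."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    if hits:
        return "sensitive", hits
    if "confidential" in text.lower() or "internal" in text.lower():
        return "internal", []
    return "public", []

print(classify_prompt("Summarize feedback from jane.doe@example.com"))
# -> ('sensitive', ['email_address'])
```

The point isn’t the specific rules; it’s that once prompts carry a sensitivity label, you can focus controls on the interactions that actually matter.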
The best way to stop employees from using risky, unapproved platforms is to give them safe alternatives.
Define and clearly communicate a list of sanctioned AI tools, like an enterprise version of ChatGPT or Google’s Vertex AI. This gives employees the tools they want in a secure environment where you control the data.
Instead of just blocking, set clear usage boundaries and enforce them. This means moving beyond policy docs to real, automatic controls. For example, you can allow the use of an approved AI but actively block sensitive data (like customer info) from being uploaded. A simple pop-up “nudge” can do the trick, too.
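For illustration, here’s a minimal sketch of such a control: a pre-flight check that a browser extension or AI gateway could run before a prompt leaves the company, blocking unsanctioned tools and nudging users when a prompt appears to contain customer data. The tool IDs, detectors, and nudge wording are all assumptions made for the example.

```python
# Minimal enforcement sketch: decide whether a prompt may be sent to an AI
# tool, and what "nudge" to show the user if not. Allowlist entries and the
# customer-data pattern are illustrative assumptions, not a real policy.
import re
from dataclasses import dataclass

SANCTIONED_TOOLS = {"chatgpt-enterprise", "vertex-ai"}  # assumed internal tool IDs
CUSTOMER_DATA = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b|\b(?:\d[ -]?){13,16}\b")

@dataclass
class Verdict:
    allowed: bool
    nudge: str | None = None

def check_prompt(tool_id: str, prompt: str) -> Verdict:
    """Allow, or block with a short explanation the UI can show as a pop-up."""
    if tool_id not in SANCTIONED_TOOLS:
        return Verdict(False, f"'{tool_id}' isn't an approved AI tool. Please use the enterprise ChatGPT instead.")
    if CUSTOMER_DATA.search(prompt):
        return Verdict(False, "This looks like customer data. Remove it or use the approved redaction flow.")
    return Verdict(True)

print(check_prompt("chatgpt-free", "Draft a blog intro"))
print(check_prompt("chatgpt-enterprise", "Reply to john@acme.com about the invoice"))
```

Because the check explains itself instead of silently failing, it doubles as the nudge: users learn the boundary at the exact moment they hit it.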
Train your employees to be partners in AI security, not just a problem to be managed. The goal isn’t to scare them or stop them from using AI. When people understand the why behind the rules, they are far more likely to follow them.
We know your teams are using AI. If you try to stop them, they’ll just find creative, unmanaged, and risky workarounds. The risk isn’t the AI itself, but its unmanaged use.
It’s time to move past passive policy documents no one actually reads. Revolgy works with you to build an active AI governance plan. Our goal is to make the secure way the easiest way.
Ready to get visibility into your company’s AI usage and build a strategy that works? Let’s talk.