Is Shadow AI the New Cybersecurity Threat Hiding in Plain Sight?
- Gerard DeFreitas
- Jun 19
- 4 min read
In 2023, employees were sneaking Dropbox links past IT. In 2024, it was Zoom installs without admin approval. Now, in 2025, we’re facing something far sneakier: Shadow AI.
If that term’s new to you, don’t worry; you’re not behind. But chances are, your organization is already affected by it.
What Is Shadow AI?
Shadow AI refers to employees using artificial intelligence tools like ChatGPT, Claude, Midjourney, GitHub Copilot, or any number of LLM-based platforms without formal approval or oversight from IT or security teams.
And it’s happening everywhere!
Marketing teams are feeding prompts with customer data into generative text tools. Developers are pasting sensitive code into online AI debuggers. HR teams are using résumé screening tools with unknown data storage practices. Shadow AI is like Shadow IT, but more powerful, more invisible, and potentially more dangerous.
Why It’s Growing So Fast
Unlike other shadow tools, AI apps don’t require installation. They’re cloud-based, fast, and often free. All you need is a browser and an idea. The barrier to entry is low, but the risk profile is sky-high.
According to Axios, companies are now reporting dozens of active AI tools in use, the majority of which are not officially approved or monitored by IT. A Gartner survey suggests that more than 40% of employees in corporate environments use generative AI tools regularly, yet fewer than 25% of organizations have a governance framework in place.
Why IT Leaders Should Be Paying Attention
Here’s the hard truth: Shadow AI opens major risk vectors that weren’t even on the radar two years ago. Some of the most pressing issues include:
Data Leakage
Let’s say a well-meaning employee asks ChatGPT to summarize a legal contract or troubleshoot a proprietary code issue. That data may now be part of the AI's training set or logged by the service provider, depending on the terms of service and usage model. You’ve just had a shadow data breach, and it didn’t trip a single alarm.
Compliance Risk
Many regulations, including GDPR, HIPAA, and PIPEDA, along with upcoming AI-specific laws, require clear data processing agreements, auditability, and control over where sensitive data goes. AI tools that aren’t part of your official stack can create non-compliance without anyone knowing.
For example, if a customer’s identifiable health information is fed into a U.S.-hosted AI tool, you could be in breach of Canadian health data sovereignty laws, even if that data never leaves the browser window.
Insecure AI Supply Chains
Some employees experiment with open-source AI platforms or APIs sourced from public repositories, many of which are not vetted. In 2024, several high-profile incidents showed “model poisoning” attacks, where malicious actors altered models to introduce bias, leak data, or execute code in downstream environments.
Real-World Examples
Still not convinced this is real? Let’s look at the headlines.
Samsung Data Leak: In 2023, Samsung banned the use of ChatGPT after an engineer uploaded confidential source code to the tool. Even with best intentions, the damage was done, and policy had to change overnight.
The Pentagon Ban: In early 2025, the U.S. Department of Defense blocked access to China-based AI tool DeepSeek due to concerns about data privacy and national security.
Financial Sector Overexposure: Financial News London reported on growing use of unvetted AI tools in investment firms, leading to flagged compliance violations and potential fines.
What You Can Do About It
The good news? You don’t need to ban AI to gain control. In fact, heavy-handed blocking often backfires and encourages more shadow use. Here’s a better path forward:
Start With Visibility
You can’t manage what you can’t see. Use browser telemetry, endpoint tools, and proxy logs to identify which AI services are being accessed across your organization. Solutions like Netskope can provide high-level insights without deep packet inspection.
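As a rough illustration of this first step, even a quick pass over exported proxy or gateway logs can surface which AI endpoints employees are hitting. Everything below is an assumption for the sketch: the domain watchlist, the simplified log format, and the `count_ai_hits` helper are illustrative, and a real deployment would parse its proxy’s actual export format.

```python
import re
from collections import Counter

# Hypothetical watchlist of generative-AI service domains; extend it with
# whatever your own proxy logs actually show in your environment.
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.github.com",
}

def count_ai_hits(log_lines):
    """Tally requests to known AI services from simple proxy log lines.

    Assumes each line contains a full URL; real proxy formats (Squid,
    Zscaler, Netskope exports) each need their own parser.
    """
    hits = Counter()
    url_re = re.compile(r"https?://([^/\s:]+)")
    for line in log_lines:
        match = url_re.search(line)
        if match and match.group(1).lower() in AI_DOMAINS:
            hits[match.group(1).lower()] += 1
    return hits

# Toy sample in a made-up "client - method URL" format:
sample = [
    "10.0.0.5 - GET https://chat.openai.com/c/abc123",
    "10.0.0.7 - GET https://intranet.example.com/home",
    "10.0.0.5 - POST https://api.anthropic.com/v1/messages",
]
print(count_ai_hits(sample))
```

Even this crude tally answers the first governance question, which teams and which services, before you invest in a commercial discovery tool.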
Implement Guardrails, Not Roadblocks
AI security vendors like Prompt Security and Nightfall AI now offer solutions that analyze and sanitize prompts before they reach external services. You can filter sensitive information or redact PII before it leaves the browser.
These tools are ideal for enabling safe, compliant AI use while giving IT the control it needs.
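A minimal sketch of the prompt-sanitization idea: scrub likely PII client-side before a prompt ever leaves the machine. The naive regex patterns and the `redact` helper here are purely illustrative; commercial products like those above use far richer detectors than a handful of regexes.

```python
import re

# Illustrative detectors only; real PII detection needs many more
# patterns plus context-aware (often ML-based) classification.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with labeled placeholders before the prompt
    leaves the browser or client."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

The key design point is where the redaction runs: on the client, before transmission, so the external AI service never receives the sensitive values at all.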
Create an AI Acceptable Use Policy
Work with HR and legal teams to draft a simple, clear AI usage policy. It should cover:
Approved tools
Prohibited data types
Storage and retention rules
Model explainability expectations
Employee responsibilities
Don’t forget to train your staff on how they can use AI productively and safely.
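For teams that want the policy to be enforceable rather than just a document, the checklist above can be mirrored in machine-readable form and checked programmatically. The schema, tool names, and data categories below are hypothetical examples, not a standard:

```python
# Hypothetical, machine-readable rendering of an AI acceptable use
# policy; every name and category here is illustrative.
POLICY = {
    "approved_tools": {"ChatGPT Enterprise", "GitHub Copilot"},
    "prohibited_data": {"PHI", "source_code", "customer_PII"},
}

def request_allowed(tool: str, data_types: set) -> bool:
    """Allow a request only when the tool is approved and the payload
    carries none of the prohibited data categories."""
    return tool in POLICY["approved_tools"] and not (
        data_types & POLICY["prohibited_data"]
    )

print(request_allowed("ChatGPT Enterprise", {"marketing_copy"}))  # True
print(request_allowed("ChatGPT Enterprise", {"customer_PII"}))    # False
```

A check like this could sit inside a browser extension or API gateway, turning the written policy into a guardrail instead of a PDF nobody reads.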
Add AI to Your Risk Register
Generative AI should now be considered an enterprise asset category. Like endpoints and cloud apps, AI tools must be:
Inventoried
Risk-assessed
Reviewed during audits
The National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework (AI RMF 1.0), which complements its broader cybersecurity guidance.
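One lightweight way to start that inventory is a structured record per tool, with a periodic-review flag. The fields and the `needs_review` helper below are illustrative assumptions, not any particular GRC product’s schema:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative risk-register entry for an AI tool; field names and the
# scoring scale are assumptions for this sketch.
@dataclass
class AIToolRecord:
    name: str
    vendor: str
    data_classification: str  # e.g. "public", "internal", "restricted"
    risk_score: int           # 1 (low) through 5 (critical)
    last_reviewed: date
    approved: bool = False

def needs_review(record: AIToolRecord, max_age_days: int = 180) -> bool:
    """Flag entries whose last audit falls outside the review window."""
    return (date.today() - record.last_reviewed).days > max_age_days

inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "restricted", 4,
                 date(2025, 1, 10), approved=True),
    AIToolRecord("Midjourney", "Midjourney Inc.", "internal", 3,
                 date(2024, 6, 1)),
]
overdue = [r.name for r in inventory if needs_review(r)]
print(overdue)
```

Even a spreadsheet with these columns beats no inventory; the point is that AI tools get the same inventory-assess-audit cycle as endpoints and cloud apps.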
The Takeaway
AI is no longer emerging—it’s embedded. Shadow AI isn’t a sign of employee negligence; it’s often a sign of enthusiasm, innovation, and a desire to move fast. But without governance, that speed becomes risk.
The key is to balance innovation with visibility and control. Organizations that do this well enable their people to experiment, improve productivity, and stay secure, all at once.
-----
References
Axios. (2025, February 4). Shadow AI creates new headaches for company IT teams. https://www.axios.com/2025/02/04/shadow-ai-cybersecurity-enterprise-software-deepseek
Axios Codebook. (2025, February 4). Lurking in the shadow [Newsletter]. https://www.axios.com/newsletters/axios-codebook-38f09de0-e257-11ef-8ac2-05372d4f3eec
Financial News London. (2024, October 19). The rush to AI in the financial sector risks more data breaches. https://www.fnlondon.com/articles/the-rush-to-ai-in-the-financial-sector-risks-more-data-breaches-7dd577d6
National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://www.nist.gov/itl/ai-risk-management-framework