Shadow AI refers to the unsanctioned use of artificial intelligence tools, especially generative AI, within an organization. It often involves employees using personal devices, browser extensions, or public models like ChatGPT to process company data or perform work tasks. These tools operate outside formal systems, beyond IT control, and without proper oversight.
For Chief Learning Officers and Chief Data Officers, shadow AI poses both a risk and an opportunity. The problem is not that employees are leveraging AI. It’s that they’re doing so without clear guidance, secure infrastructure, or an understanding of the consequences.
“There have already been a lot of stories about people using shadow AI and creating huge problems,” says Merav Yuravlivker, Chief Learning Officer at Data Society Group. “It’s happening quietly and quickly, and it’s usually invisible until something breaks.”
Understanding what shadow AI is becomes the first step toward designing a proactive response that protects the business while enabling innovation.
The Risks Are Real
Shadow AI introduces urgent challenges in AI risk management—especially for CDOs tasked with governing data use and CLOs responsible for enabling ethical, productive learning.
When employees use unapproved tools, the organization loses visibility and control. Sensitive information may be exposed, confidential projects compromised, and regulatory compliance unintentionally violated.
Shadow AI also creates inconsistencies in AI literacy, leading to fragmented decision-making and unpredictable outputs.
Risks include:
Data privacy violations, e.g., uploading PII or PHI to public models (a simple pre-check is sketched after this list)
Breaches of IP or client confidentiality
Regulatory non-compliance (GDPR, HIPAA, etc.)
Untracked model usage that impacts version control and auditability
Erosion of trust in enterprise systems and leaders
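The first risk on this list is also the most mechanically preventable. As one illustration (not a description of Data Society's tooling), here is a minimal Python sketch of a pre-submission screen that flags obvious PII before a prompt leaves the network. The regex patterns and the example prompt are simplified placeholders; a production deployment would rely on a vetted DLP service or classifier rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# DLP service or classifier, not hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the PII categories detected before a prompt leaves the network."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this note for jane.doe@example.com, SSN 123-45-6789."
findings = screen_prompt(prompt)
if findings:
    print("Blocked: prompt appears to contain " + ", ".join(findings))
else:
    print("No obvious PII detected; forwarding to the approved model.")
```

The value here is less the patterns themselves than the checkpoint: a defined place where policy can intercept sensitive data before it reaches an external model.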
“If you are part of an industry that has a regulatory body, like finance or healthcare, you are putting people’s lives at risk,” Yuravlivker explains. “You are facing severe fines, potential jail time.”
From a leadership lens, AI risk management must go beyond IT policies. It requires cross-functional coordination across data governance, compliance, learning, and strategy.
Why Shadow AI Happens

Most employees are not trying to circumvent policies. They're trying to get their work done. They run into friction: outdated tools, lack of access, or slow processes. So they turn to AI to fill the gap. Often, these behaviors begin informally: asking ChatGPT to write an email, summarize notes, or troubleshoot code. But they scale quickly.
Shadow AI is a sign that your workforce wants to work smarter, and that your current systems may not be meeting their needs.
“It is really important for people to understand the cost, not just to themselves, but to the people they are serving,” Yuravlivker says. Without training, employees may not know what constitutes risky behavior or how to evaluate an AI tool’s appropriateness.
For CLOs, this is a wake-up call to adapt learning systems. For CDOs, it’s a mandate to expand oversight to include real-time behavior, not just tools and policies.
How to Manage the Risk
Addressing shadow AI requires more than blocking tools or issuing blanket policies. It calls for a holistic approach that blends policy, enablement, and education. For both CLOs and CDOs, this is a moment to lead.
For Chief Learning Officers:
Design training that reflects how people actually use AI: quickly, flexibly, and intuitively
Make compliance part of workflow-based training, not just stand-alone modules
Partner with data and compliance leaders to embed governance into onboarding
For Chief Data Officers:
Clarify which tools are approved and why
Collaborate with L&D to share examples of what safe AI usage looks like
Monitor for signals of shadow AI use and respond with training, not just restrictions (a simple illustration follows this list)
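That monitoring step lends itself to a concrete illustration. Below is a minimal Python sketch that tallies visits to unapproved AI domains from a simplified proxy log. The log format and domain list are hypothetical stand-ins; in practice this signal would come from a CASB or secure web gateway policy rather than a script.

```python
from collections import Counter

# Hypothetical denylist of public AI tool domains outside the approved set.
# A real program would enforce this through CASB or secure web gateway policy.
UNAPPROVED_AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def shadow_ai_signals(proxy_log_lines: list[str]) -> Counter:
    """Count visits to unapproved AI domains in simplified 'user domain' log lines."""
    hits = Counter()
    for line in proxy_log_lines:
        user, _, domain = line.partition(" ")
        if domain.strip() in UNAPPROVED_AI_DOMAINS:
            hits[user] += 1
    return hits

# Toy log: each line is "<user> <domain>", a stand-in for real proxy records.
log = ["alice chatgpt.com", "bob intranet.corp.example", "alice claude.ai"]
for user, count in shadow_ai_signals(log).items():
    # The point of the signal is outreach and training, not just a block rule.
    print(f"{user}: {count} visit(s) to unapproved AI tools this period")
```

Pairing a signal like this with training outreach, rather than silent blocking, matches the enablement-first posture the section describes.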
“One of the best ways to prevent shadow AI is to provide good alternatives and then to provide training on those tools,” Yuravlivker says.
Effective AI risk management is not just about limiting exposure. It’s about building confidence and competence so teams can use AI responsibly, creatively, and securely.
What’s Next for CLOs and CDOs?
Shadow AI is not a fringe behavior. It’s already shaping how work gets done in your organization, whether you’ve sanctioned it or not. And the stakes are high. Poor visibility into AI usage can lead to regulatory penalties, data leaks, and uneven performance across teams.
But there is another path.
Data Society partners with learning and data leaders to reduce AI risk while empowering your workforce. Through hands-on training, secure toolkits, and real-world workflows, we help your teams develop the skills and the judgment they need to use AI responsibly.
If you’re ready to align your AI risk management strategy with your learning culture, we’re here to help.
Reach out to Data Society to explore tailored training programs that head off risky shadow AI and turn the demand behind it into a catalyst for capability.
Q&A: What Is Shadow AI?
Shadow AI is the unsanctioned use of AI tools, especially generative AI, outside an organization's approved systems and oversight. It bypasses governance and exposes sensitive data, leading to potential violations of data privacy laws, intellectual property breaches, and reputational damage.