ShadowAI is already everywhere. Enterprises just haven’t admitted it yet.
This line of thinking crystallized for me at an investor group meeting yesterday. One of the companies pitching was building AI compliance and monitoring software. Their entire business exists to help enterprises understand how employees are using LLMs at work and, more importantly, what data is leaving the organization when they do.
What caught my attention wasn’t the product. It was the problem they were describing.
In a surprising number of enterprises, there is no corporate AI subscription at all. Instead, staff are using free tiers or paid (often reimbursed) personal accounts to get their work done, copying and pasting from internal systems, documents, tickets, and codebases into public LLMs that the organization has zero visibility into and zero contractual relationship with.
This is Shadow IT again, just with a new name. ShadowAI.
Against that backdrop, it’s not surprising that more and more organizations are rolling out formal AI usage policies. On the surface, these read like common sense. Don’t paste customer data. Don’t paste internal documents. Don’t paste source code. The subtext, however, is far more revealing. Enterprises are trying to draw a hard boundary around what is allowed to leave the building and end up in the hands of global AI players.
Once those policies exist, enforcement tends to follow quickly. Security teams get involved. Browser extensions are flagged. New tools appear, like the one I mentioned earlier, whose sole purpose is to detect when employees use public LLMs and paste something in that they probably shouldn’t. In some environments, the final move is the blunt one. Block the LLM at DNS and move on.
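To show how blunt that final move really is, here is a minimal sketch assuming dnsmasq as the internal resolver; the domains are illustrative, not an exhaustive or maintained blocklist:

```
# dnsmasq.conf sketch: sinkhole well-known public LLM endpoints
# (illustrative domains only; a real blocklist would be longer and curated)
address=/chatgpt.com/0.0.0.0
address=/chat.openai.com/0.0.0.0
address=/claude.ai/0.0.0.0
address=/gemini.google.com/0.0.0.0
```

Four lines of resolver config, and the public LLMs vanish from the corporate network, at least for anyone who doesn’t bring their own device.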
The “do not train on my data” checkbox, which many corporate policies hinge on, doesn’t materially change the risk calculation. From an enterprise perspective, it’s still an external promise they can’t independently verify. Once data leaves the organization, control is gone. Auditability is gone. Legal certainty becomes fuzzy very quickly.
So we end up in an awkward place. Individual leaders and workers are convinced LLMs will change how knowledge work is done, while the organization struggles to justify the cost of an enterprise-wide AI subscription. Rock, meet hard place. Hence, ShadowAI.
What’s interesting is that the time, effort, and cost being poured into controlling AI usage could quite easily be redirected into equipping staff with an enterprise-sanctioned LLM. But here we are.
At the same time, security teams increasingly see public LLMs as a data exfiltration path with a conversational interface. The emergence of MCP (Model Context Protocol) servers, which give LLMs direct reach into tools and data sources, arguably makes this tension worse, or better, depending on which side of the fence you sit on.
If you follow that tension to its logical conclusion, the outcome probably isn’t “no AI at work.” The more likely outcome is “AI, but inside the fence.”
Which points to decentralized or privately hosted LLMs.
Instead of sending prompts to a shared public model, the model runs within the enterprise boundary. Models might be shared within an industry, fine-tuned locally, wired into internal systems, and never exposed to the public internet. At that point, LLMs stop being a novelty and start looking like a core business application. Something owned and operated by IT, running on the right hardware, sized for the organization, and justified like any other enterprise system.
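To make that concrete, here is a minimal sketch of what the application side of “inside the fence” can look like, assuming an OpenAI-compatible server such as vLLM or Ollama running on internal infrastructure; the hostname, port, model name, and ticket ID are all illustrative assumptions, not a reference deployment:

```python
# Minimal sketch: an internal client talking to a self-hosted,
# OpenAI-compatible endpoint that never leaves the corporate network.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.corp.internal:8000/v1",  # hypothetical host; resolves only inside the boundary
    api_key="not-needed",  # placeholder; auth would sit at an internal gateway in this sketch
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # whatever model IT has sized for the organization
    messages=[
        {"role": "user", "content": "Summarize ticket INC-1234 for the weekly report."}
    ],
)
print(response.choices[0].message.content)
```

The point is that nothing in the application layer changes. The only thing that moves is where the endpoint resolves.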
The conversation shifts from model size to economics. What does it cost to self-host a model that is “good enough” to be used across the business, versus the cost of an enterprise LLM subscription, versus simply accepting that ShadowAI is an unmanaged but tolerated risk?
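The shape of that comparison is simple enough to sketch. Every number below is a placeholder assumption, there only to show where the crossover point sits, not real pricing:

```python
# Back-of-envelope sketch of self-hosting vs. an enterprise subscription.
# All figures are illustrative placeholders, not quotes.

seats = 2_000                   # knowledge workers who would use an LLM
subscription_per_seat = 30.0    # assumed enterprise plan, USD per month
subscription_monthly = seats * subscription_per_seat

gpu_server_monthly = 8_000.0    # assumed amortized hardware, power, hosting
ops_monthly = 12_000.0          # assumed share of an engineer to run it
self_host_monthly = gpu_server_monthly + ops_monthly

print(f"Subscription: ${subscription_monthly:,.0f}/month")
print(f"Self-hosted:  ${self_host_monthly:,.0f}/month")
# With these placeholders, self-hosting wins well before 2,000 seats;
# change the assumptions and the crossover point moves with them.
```

The exact numbers matter far less than the fact that the question finally becomes a normal IT procurement decision.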
The more interesting question is timing. How long until self-hosting becomes the default way LLMs are consumed in an enterprise or business context?
My suspicion is sooner than most expect. Regulated industries will lead. One or two high-profile data leakage incidents will accelerate things. Once enterprises realize they can get LLM capability without punching a hole in their trust boundary, the decision becomes fairly obvious.
Public LLMs don’t disappear in this world. They just get repositioned. Great consumer tools. Great learning aids. Less likely to be where serious enterprise work happens.
So when I see enterprises tightening AI policies, blocking endpoints, and deploying detection tooling, I don’t see resistance to AI. I see early signals of where AI is actually heading.
When organizations start treating something as infrastructure, it usually means it’s here to stay.
