When AI Becomes the Attack Surface (1/8): Beyond Data Leaks

Eran Goldman-Malka · March 18, 2026

Most executives still frame AI risk as a confidentiality problem: “What if sensitive data leaks into the model?” That risk is real, but it is no longer the scariest one.

The deeper threat emerges when an LLM is connected to enterprise systems and granted action rights. In the McKinsey “Lilli” scenario, the model is no longer just generating text; it is participating in workflows, touching records, and influencing decisions. At that moment, the risk model expands from confidentiality to integrity and control.
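To make “action rights” concrete, consider the pattern underneath: the model emits a structured tool call, and a thin handler executes it against a business system using the handler’s own credentials. The Python sketch below is a minimal illustration of that pattern; the tool name, arguments, and invoice table are invented for the example, not taken from Lilli or any specific product.

```python
# Minimal sketch of an LLM "action right": the model proposes a tool call,
# and a handler executes it against a business system. Names are illustrative.
import json
import sqlite3

def handle_tool_call(tool_call_json: str, db: sqlite3.Connection) -> str:
    """Execute a model-proposed action. The handler, not the model, holds
    the database credentials, so the model's text output effectively
    carries write authority over business records."""
    call = json.loads(tool_call_json)
    if call["name"] == "update_invoice_status":
        args = call["arguments"]
        db.execute(
            "UPDATE invoices SET status = ? WHERE invoice_id = ?",
            (args["status"], args["invoice_id"]),
        )
        db.commit()
        return "ok"
    return "unknown tool"

# A prompt-injected or simply confused model only needs to emit this string
# for a record to change; no human clicks anything in between.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (invoice_id TEXT, status TEXT)")
db.execute("INSERT INTO invoices VALUES ('INV-1042', 'pending')")
handle_tool_call(
    json.dumps({"name": "update_invoice_status",
                "arguments": {"invoice_id": "INV-1042", "status": "paid"}}),
    db,
)
print(db.execute("SELECT * FROM invoices").fetchall())  # [('INV-1042', 'paid')]
```

The notable property is where the authority sits: the credentials live in the handler, so within the tool’s scope, whatever the model emits becomes a change to a record. That is exactly the shift from confidentiality risk to integrity risk.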

The “Lilli” case was disclosed in early March 2026 by CodeWall and then picked up by security media, including reporting that an autonomous AI agent could exploit classic weaknesses to reach sensitive production data (The Register). For primary sources, see The Stack’s technical write-up on SQL injection and unauthenticated endpoints (The Stack) and Gergely Revay’s summary of user/document/prompt exposure and its prompt-layer implications (LinkedIn).
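The weaknesses named in that reporting are textbook ones. As an illustration only (the actual code has not been published, so the route names and schema here are assumptions), this Python/Flask sketch shows the shape of an unauthenticated endpoint with a SQL injection flaw, next to the parameterized fix:

```python
# Illustrative only: the shape of the reported flaw classes (SQL injection
# on an unauthenticated endpoint), not code from the actual system.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "docs.db"

def init_db():
    db = sqlite3.connect(DB)
    db.execute("CREATE TABLE IF NOT EXISTS documents (owner TEXT, title TEXT)")
    db.commit()

@app.route("/documents")  # unauthenticated: no auth check of any kind
def get_documents():
    user = request.args.get("user", "")
    db = sqlite3.connect(DB)
    # VULNERABLE: f-string interpolation lets the caller rewrite the query;
    # ?user=' OR '1'='1 returns every document in the table.
    rows = db.execute(
        f"SELECT title FROM documents WHERE owner = '{user}'"
    ).fetchall()
    return jsonify(rows)

@app.route("/documents-safe")
def get_documents_safe():
    user = request.args.get("user", "")
    db = sqlite3.connect(DB)
    # FIXED: a parameterized query treats the input as data, never as SQL.
    rows = db.execute(
        "SELECT title FROM documents WHERE owner = ?", (user,)
    ).fetchall()
    return jsonify(rows)

if __name__ == "__main__":
    init_db()
    app.run()
```

Neither flaw is new; what is new is that an autonomous agent sitting in front of such endpoints can find and exercise them at machine speed.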

If a model leaks data, you have an exposure event. If a model manipulates workflows, corrupts records, or triggers the wrong actions at scale, you have an operational security incident. The blast radius shifts from “what was seen” to “what was changed.”

This is the mental reset leaders need in 2026: AI is not only a new tool; it is a new execution layer. And every execution layer becomes an attack surface once connected to production systems.

Is your current security posture designed for accidental disclosure only? Or is it prepared for a future where language interfaces can quietly alter business reality?
