No production AI system is “just a model.” It is a supply chain: base weights, fine-tuning data, orchestration frameworks, vector stores, retrieval connectors, plug-ins, and external APIs.
That should sound familiar to anyone who has handled cloud or open-source risk. You do not trust a Kubernetes platform because the homepage looks good; you trust it because dependencies are controlled, privileges are constrained, and updates are validated. AI needs the same discipline.
A single weak dependency can poison the entire stack. A malicious plug-in can exfiltrate data. A misconfigured retrieval connector can surface sensitive documents. A compromised model update can alter behavior in ways that pass functional tests but fail security intent.
The strategic error is to secure the model and ignore the mesh around it. In real incidents, attackers usually exploit the edges: tool integrations, keys, routing logic, and trust boundaries between systems.
If your AI roadmap includes third-party models and rapid integration, where is your software bill of materials equivalent for model lineage, connector trust, and update provenance? And who owns that control plane across security, engineering, and procurement?
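The bill-of-materials question can be made concrete. Below is a minimal sketch of what such a check might look like: a manifest that pins each component of the AI stack (weights, connectors, plug-ins) to a content hash and an owner, and a verifier that flags anything missing or tampered with. The manifest schema, field names, and `verify_manifest` function are all illustrative assumptions, not an existing standard.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_manifest(manifest: dict, artifacts: dict[str, bytes]) -> list[str]:
    """Return a list of violations: missing artifacts or hash mismatches.

    The manifest pins each component to an approved content hash, so an
    update that was never re-reviewed shows up as a mismatch, even if it
    still passes functional tests.
    """
    violations = []
    for entry in manifest["components"]:
        name = entry["name"]
        blob = artifacts.get(name)
        if blob is None:
            violations.append(f"{name}: artifact missing")
        elif sha256_hex(blob) != entry["sha256"]:
            violations.append(f"{name}: hash mismatch (unapproved update?)")
    return violations

# Hypothetical example: the model weights match the pinned hash, but the
# retrieval connector was silently updated past its approved version.
weights = b"base-model-v1 weights"
manifest = {
    "components": [
        {"name": "base-model",
         "sha256": sha256_hex(weights),
         "owner": "ml-platform"},
        {"name": "retrieval-connector",
         "sha256": sha256_hex(b"retrieval-connector v1"),
         "owner": "security"},
    ]
}
deployed = {
    "base-model": weights,
    "retrieval-connector": b"retrieval-connector v2 (unreviewed)",
}
print(verify_manifest(manifest, deployed))
```

The point is not the hashing, which is trivial, but the ownership column: each entry names a team accountable for re-approving changes, which is exactly the control plane the questions above are asking about.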
