The EU AI Act regulates how AI systems are placed on the market and used. Obligations escalate for high‑risk categories, transparency duties attach to certain general‑purpose and consumer‑facing cases, and governance expectations land on deployers as well as providers. Open‑source weights or code do not automatically exempt a real‑world deployment from these duties once the system is part of a product or business process.
What to align during migration toward open AI stacks
- Role clarity: provider vs. deployer vs. importer/distributor—who maintains the system, who fine‑tunes, who operates inference?
- Documentation and logging suitable for conformity paths where applicable: model cards, data governance summaries, risk controls, human oversight hooks, and incident traceability.
- Transparency and user notices where required; internal chatbots and decision support can still trigger obligations depending on use case and audience.
- Fundamental rights and safety reviews for high‑risk use classes; migrating to an open model without changing the use case does not remove the classification problem.
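The documentation and lineage items above can be captured as a small, machine‑readable record kept next to the deployment config. The sketch below is illustrative only: the field names are my assumptions, not terms from the AI Act or any conformity template, and the values are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    """Minimal lineage/documentation record for a deployed model.

    Field names are illustrative, not drawn from the AI Act or any
    official conformity-assessment template.
    """
    model_name: str
    weights_sha256: str                 # pin the exact artifact in use
    base_model: str                     # upstream lineage for fine-tunes
    data_governance_summary: str        # where training/fine-tune data came from
    risk_controls: list = field(default_factory=list)
    human_oversight_hook: str = ""      # who can intervene, and how
    use_case_class: str = ""            # your internal risk classification

record = ModelRecord(
    model_name="support-triage-llm",    # hypothetical internal name
    weights_sha256="0" * 64,            # placeholder; real checksum of the weights file
    base_model="open-weights-7b",       # hypothetical upstream open model
    data_governance_summary="internal tickets, PII-scrubbed",
    risk_controls=["output filter", "rate limit"],
    human_oversight_hook="agent review queue",
    use_case_class="internal decision support",
)

print(json.dumps(asdict(record), indent=2))
```

A record like this gives auditors and change control the same handle on a model artifact that a lockfile gives on a software dependency.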
Open source helps with auditability and portability: you can inspect pipelines, pin versions, and avoid black‑box APIs for core logic. It does not, however, replace governance design: acceptable‑use policy, evaluation harnesses, red‑teaming, and release gates still have to be built.
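"Pinning versions" can be as simple as refusing to load weights whose checksum has drifted from the recorded one. A minimal standard‑library sketch, assuming a hypothetical weights path and a hash stored in change control:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large weight files are not read into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pinned(path: Path, expected_sha256: str) -> None:
    """Raise if the model artifact no longer matches the pinned hash."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"weights drift: expected {expected_sha256[:12]}, got {actual[:12]}"
        )

# Usage sketch (path and pinned hash are hypothetical, e.g. read from a lockfile):
# verify_pinned(Path("models/open-weights-7b.safetensors"), PINNED_SHA256)
```

Running this check at service start-up ties the deployed artifact to the same review trail as any other production change.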
Have you classified your AI use cases, assigned accountable owners, and tied open‑source model lineage to the same change control as production software?
