Many organizations continue to view Regulation (EU) 2024/1689 on artificial intelligence as legislation aimed almost exclusively at major tech players. In reality, the AI Act follows a risk-based approach grounded in the protection of fundamental rights, and it applies across the entire supply chain: to those who develop AI systems, but also to those who integrate them into their products, distribute them on the market, or use them in business processes.

The most common misconception is that the “ordinary” use of AI tools in a company is legally neutral. This is not the case: every company is required to assess the systems it uses, verifying whether they fall under prohibited practices, high-risk systems, or systems subject to specific transparency obligations. The answer has direct consequences for internal governance, supplier contracts, control procedures, and the allocation of responsibilities.

For companies, the question is not “whether” to use artificial intelligence, but “how” to legally oversee its use. This involves mapping actual uses, understanding the role assumed in the technology chain (provider, importer, distributor, or deployer), verifying disclosure obligations and requirements for human oversight, and coordinating the AI Act with the GDPR, cybersecurity rules, and internal controls. In the coming years, the degree of organizational maturity will be measured precisely by the ability to integrate these regulatory frameworks into a coherent system of AI governance.