On February 18, 2026, the Spanish Data Protection Agency (AEPD) published guidelines dedicated to “agent-based” artificial intelligence, i.e., systems that do not merely generate responses but interact autonomously with the digital environment to pursue complex objectives. From a legal standpoint, the agent’s autonomy radically expands privacy risks: it is no longer sufficient to evaluate inputs and outputs; rather, one must govern a dynamic, adaptive, and only partially predictable process.

In this scenario, principles such as transparency, data minimization, purpose limitation, and privacy by design cannot be managed using standard approaches. If the agent learns from context, selects its own sources, performs actions, and modifies its own behavior, the data subject's control risks becoming merely theoretical; likewise, generic notices, abstract internal instructions, and purely formal supervision prove insufficient.

The robustness of the system is measured by the organization’s ability to correctly assign roles and responsibilities, reconstruct information flows, define the agent’s operational scope in advance, ensure effective human oversight, and align these aspects with policies, internal procedures, supplier relationships, and accountability mechanisms. Operationally, this requires at least: mapping the areas in which the agent operates and the data it uses; defining functional limits, instructions, and thresholds for human intervention; adapting governance and documentation to actual operations; assessing the impacts on rights in advance; and establishing continuous supervision with periodic review. With agentic AI, therefore, it is not enough to control the tool: its actual behavior must be governed over time.
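To make the operational requirements above more concrete, the sketch below shows one way such controls could be expressed in code: a predefined operational scope, limits on the data categories processed, thresholds that trigger human intervention, and an audit trail supporting periodic review. This is a purely illustrative Python example; the class, scope names, action names, and decision labels are assumptions of this sketch, not part of the AEPD guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical guardrail: confines an AI agent to a predefined scope,
    escalates sensitive actions to a human, and logs every decision."""
    allowed_scopes: set           # task areas the agent may operate in
    allowed_data_categories: set  # personal-data categories it may process
    escalation_actions: set       # actions that require human sign-off
    audit_log: list = field(default_factory=list)

    def authorize(self, scope, data_categories, action):
        """Return 'allow', 'escalate' (human review), or 'deny', logging the decision."""
        if scope not in self.allowed_scopes or not data_categories <= self.allowed_data_categories:
            decision = "deny"       # outside the agent's predefined operational scope
        elif action in self.escalation_actions:
            decision = "escalate"   # human-intervention threshold reached
        else:
            decision = "allow"
        self.audit_log.append((scope, sorted(data_categories), action, decision))
        return decision

# Illustrative configuration for a customer-support agent.
policy = AgentPolicy(
    allowed_scopes={"customer_support"},
    allowed_data_categories={"contact", "order_history"},
    escalation_actions={"issue_refund"},
)

print(policy.authorize("customer_support", {"contact"}, "answer_query"))   # allow
print(policy.authorize("customer_support", {"contact"}, "issue_refund"))  # escalate
print(policy.authorize("marketing", {"contact"}, "send_email"))           # deny
```

The point of the sketch is that scope, limits, and escalation thresholds are defined in advance and every decision is recorded, so the agent's actual behavior over time can be reconstructed and reviewed, rather than relying on purely formal supervision.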