23.03.2026
Gernot Fritz, Tanja Pfleger, Fabian Duschnig
Artificial intelligence is no longer a topic of the future but a core element of business reality. While public debate has so far focused largely on generative AI – systems that create text, images, or code – another development that has been emerging for some time is now reaching practical implementation: so-called AI agents. The difference is fundamental. While a chatbot reacts to input, an AI agent independently pursues a goal. It plans intermediate steps, accesses systems, and executes actions. In short: the chatbot responds, the agent acts.
It is precisely this ability to actively intervene in processes that makes agentic systems particularly attractive for businesses. In many areas – from customer service and HR to IT and sales – repetitive, data-driven workflows can be largely automated. An agent does not merely answer a query; it checks internal policies, gathers missing information, initiates processes, and implements decisions. This shifts the role of AI from isolated support to the orchestration of entire process chains.
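To make the distinction concrete, the following simplified Python sketch contrasts a chatbot's single request-response with an agent's plan-act-observe loop. It is a minimal illustration only: the planner, tools, and goal are invented placeholders, not a description of any particular product or framework.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        action: str   # which tool to call, or "finish"
        args: dict

    @dataclass
    class AgentState:
        goal: str
        observations: list = field(default_factory=list)
        done: bool = False

    def chatbot_reply(user_input: str) -> str:
        """A chatbot: one input, one output, no side effects."""
        return f"Answer to: {user_input}"

    # Hypothetical tools the agent may invoke (stubs for illustration).
    def check_policy(args: dict) -> str:
        return f"policy check for '{args['topic']}': ok"

    def fetch_record(args: dict) -> str:
        return f"record {args['id']}: <data>"

    TOOLS = {"check_policy": check_policy, "fetch_record": fetch_record}

    def plan_next_step(state: AgentState) -> Step:
        """Toy planner: check policy, gather one record, then finish."""
        if not state.observations:
            return Step("check_policy", {"topic": state.goal})
        if len(state.observations) == 1:
            return Step("fetch_record", {"id": "42"})
        return Step("finish", {})

    def run_agent(goal: str, max_steps: int = 10) -> AgentState:
        """An agent: plans intermediate steps and executes actions itself."""
        state = AgentState(goal=goal)
        for _ in range(max_steps):                   # bounded loop as a safety net
            step = plan_next_step(state)
            if step.action == "finish":
                state.done = True
                break
            result = TOOLS[step.action](step.args)   # the agent acts
            state.observations.append(result)        # results feed back into planning
        return state

    print(chatbot_reply("What is our travel policy?"))
    print(run_agent("travel expense claim"))

The bounded loop and explicit tool registry are deliberate even in a toy example: the agent only ever acts through a known, enumerable set of capabilities.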
Data Protection: When Data Processing Becomes Dynamic
This development brings with it a new level of legal complexity, particularly in the field of data protection. As the Spanish Data Protection Authority’s guidelines on agentic AI illustrate, the structure of data processing is changing fundamentally. While traditional systems typically involve clearly delineated processing steps, AI agents operate dynamically, across multiple stages, and in a context-dependent manner.
They combine data from different sources, interact with external services, and in some cases determine processing steps only during runtime. This puts key GDPR principles under pressure. Purpose limitation becomes harder to define when processing evolves situationally. Data minimisation loses clarity where systems require broad access to function effectively. Transparency is also challenged when data flows are no longer linear but unfold across multiple systems and interactions.
In addition, many agentic systems rely on memory functions, storing and reusing information over extended periods. As a result, data processing becomes not only more complex but also more persistent.
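By way of illustration, a retention period can be built into such a memory function at the design stage rather than bolted on afterwards. The following minimal Python sketch (the class and field names are invented for the example) expires stored entries instead of reusing them indefinitely:

    import time

    class ExpiringMemory:
        """Illustrative agent memory that enforces a retention period by design."""

        def __init__(self, retention_seconds):
            self.retention = retention_seconds
            self._store = {}   # key -> (stored_at, value)

        def remember(self, key, value):
            self._store[key] = (time.time(), value)

        def recall(self, key):
            entry = self._store.get(key)
            if entry is None:
                return None
            stored_at, value = entry
            if time.time() - stored_at > self.retention:
                del self._store[key]   # expired: purge rather than reuse
                return None
            return value

    memory = ExpiringMemory(retention_seconds=30 * 24 * 3600)   # e.g. 30 days
    memory.remember("customer_42_preference", "prefers email contact")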
Data Protection as an Architectural Question
Against this backdrop, data protection can no longer be treated as a downstream compliance layer. It becomes an integral part of system architecture. Key decisions must be made at the design stage: What data may the agent process? Which systems may it access? Which actions may it execute autonomously?
These questions are not merely technical, but also legal in nature. Addressing them too late creates a risk of deploying systems that are difficult – or even impossible – to make compliant retrospectively.
Agentic systems therefore illustrate particularly clearly what “privacy by design” means in practice: not documentation after the fact, but structure from the outset.
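What "structure from the outset" might look like can be sketched in a few lines of Python: the three design-stage questions above become a declarative policy that the runtime enforces before any tool call. All names and values here are illustrative assumptions, not a real framework:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AgentPolicy:
        allowed_data_categories: frozenset   # what data may the agent process?
        allowed_systems: frozenset           # which systems may it access?
        autonomous_actions: frozenset        # which actions may it execute alone?

    HR_ASSISTANT = AgentPolicy(
        allowed_data_categories=frozenset({"contact_data", "leave_balances"}),
        allowed_systems=frozenset({"hr_portal"}),
        autonomous_actions=frozenset({"answer_question"}),   # all else escalates
    )

    def authorize(policy, system, action):
        """Gate every tool call against the policy fixed at design time."""
        return system in policy.allowed_systems and action in policy.autonomous_actions

    assert authorize(HR_ASSISTANT, "hr_portal", "answer_question")
    assert not authorize(HR_ASSISTANT, "payroll_system", "change_salary")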
The AI Act: A Risk-Based Approach Rather Than an “Agent” Category
The AI Act addresses these developments without explicitly introducing a dedicated category for “agents.” Instead, it follows a risk-based approach focused on the specific use case.
Two key conclusions can be drawn for practice. First, not every AI agent used in a business context automatically qualifies as a high-risk system. Second, certain applications can very quickly fall into high-risk categories – particularly in HR, workforce management, recruitment, credit scoring, or risk assessment in insurance contexts.
The legal assessment therefore starts not with the technical architecture, but with the concrete use case. It must be determined whether a use is prohibited, whether it qualifies as a high-risk system under Annex III, or whether specific transparency obligations apply. Even outside the high-risk category, core requirements remain relevant – including AI literacy and transparency obligations in certain contexts, as well as data protection, employment law, and sector-specific regulations.
Timing is also critical. Prohibitions and the obligation to ensure adequate AI literacy have been in force since February 2025. The general application of the AI Act will begin in August 2026, while certain product-related obligations will apply from August 2027. Companies are therefore already in a phase where initial compliance requirements must be actively implemented.
Human Oversight: More Than a Formal Approval Step
A particularly important aspect – both from a data protection and AI Act perspective – is human oversight. In the context of powerful, autonomous systems, formal control mechanisms alone are insufficient. Oversight must be effective in practice.
This requires, above all, that individuals understand how the system works and where its limitations lie. They must be able to identify anomalies, interpret outputs correctly, and – where necessary – consciously deviate from the system’s recommendations. Equally important is the ability to override decisions or stop processes entirely.
In practice, effective oversight is rarely achieved through a single mechanism. Rather, it requires a combination of approaches. In sensitive scenarios, human approval may be required before any external effect occurs – for example, before payments are executed, contracts are sent, or personnel decisions are implemented. In other cases, systems may operate largely autonomously, provided that escalation mechanisms are triggered in the event of irregularities or threshold breaches. Particularly in cases involving fundamental rights, a four-eyes principle is likely to become standard.
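A simplified Python sketch can show how such a layered setup might be wired together. The action names, the threshold, and the default-deny approval stub are invented for the example:

    SENSITIVE_ACTIONS = {"execute_payment", "send_contract", "apply_hr_decision"}
    ESCALATION_THRESHOLD_EUR = 10_000

    def human_approves(action, params):
        """Stand-in for a real review step (ticket queue, four-eyes sign-off)."""
        print(f"approval requested: {action} {params}")
        return False   # default-deny until a human has actually signed off

    def execute_with_oversight(action, params, executor):
        # Hard gate: no external effect without prior human approval.
        if action in SENSITIVE_ACTIONS and not human_approves(action, params):
            return "blocked: awaiting human approval"
        # Soft gate: autonomous operation, but escalate on threshold breaches.
        if params.get("amount_eur", 0) > ESCALATION_THRESHOLD_EUR:
            return "escalated: routed to human review"
        return executor(action, params)

    print(execute_with_oversight("execute_payment", {"amount_eur": 250},
                                 lambda a, p: f"done: {a}"))

The default-deny stub reflects the point made above: oversight is only effective if the system fails safe when the human step is missing, rather than proceeding anyway.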
From both a technical and organisational perspective, it is also crucial that agents are granted only the permissions strictly necessary for their specific task. The broader the access, the greater the risk. This must be complemented by comprehensive logging of system activities and clear mechanisms to halt processes or revert to a controlled fallback mode at any time.
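In code, these three measures – least-privilege permissions, comprehensive logging, and a halt mechanism – might be combined along the following lines; the class and permission names are again purely illustrative:

    import logging
    import threading

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("agent.audit")

    class ScopedAgentRuntime:
        """Runs agent actions under least privilege, with logging and a kill switch."""

        def __init__(self, granted_permissions):
            self.granted = set(granted_permissions)   # only what the task requires
            self.halted = threading.Event()           # flip at any time to stop

        def perform(self, permission, action, *args):
            if self.halted.is_set():
                audit.warning("halted: refusing '%s', fallback mode active", permission)
                return None
            if permission not in self.granted:
                audit.error("denied: '%s' outside scope %s", permission, self.granted)
                raise PermissionError(permission)
            audit.info("executing '%s' with args=%s", permission, args)   # audit trail
            return action(*args)

        def halt(self):
            """Kill switch: revert to a controlled fallback mode."""
            self.halted.set()

    runtime = ScopedAgentRuntime({"read_calendar"})
    runtime.perform("read_calendar", lambda: "3 meetings today")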
New Risks: When Errors Become Actions
Beyond regulatory requirements, additional risks arise from the very nature of agentic systems. Unlike traditional AI applications, errors do not remain confined to outputs or recommendations; they can directly impact processes and systems.
A misinterpreted objective may lead an agent to act in a formally correct yet ultimately undesirable or harmful way. Errors can propagate across multiple processing steps and amplify within interconnected systems. Extensive access rights further increase the potential impact, particularly where sensitive business areas are involved.
From a security perspective, new attack vectors also emerge. If an agentic system is compromised, the consequences may be far more severe than in traditional environments, as the agent is capable of executing actions autonomously. In addition, there is a human factor that is often underestimated: the more reliable a system appears, the more likely users are to trust it unquestioningly. This “automation bias” can result in insufficient scrutiny of outputs and delayed detection of errors.
Conclusion: Governance as a Prerequisite, Not an Add-On
AI agents mark a turning point in the use of artificial intelligence. Systems no longer merely support processes – they actively shape and execute them. This increases not only efficiency potential but also the need for control, transparency, and accountability.
Guidance from supervisory authorities and the requirements of the AI Act make one thing clear: governance can no longer be added retrospectively. It must be embedded in system design from the outset. Companies are therefore well advised to first understand processes before automating them, to deliberately limit access rights before connecting systems, and to establish clear control mechanisms before scaling solutions.
The bottom line: first the process, then the agent; first the permissions, then the tools; first governance, then scaling.
With AI agents, we are entering a phase in which systems no longer merely generate outputs – they act. And once systems act, control becomes the central challenge.

