{"id":51658,"date":"2026-03-23T18:55:27","date_gmt":"2026-03-23T17:55:27","guid":{"rendered":"https:\/\/www.eh.at\/?p=51658"},"modified":"2026-04-07T12:55:09","modified_gmt":"2026-04-07T10:55:09","slug":"ai-agents-when-systems-act-and-why-data-protection-needs-to-be-rethought","status":"publish","type":"post","link":"https:\/\/www.eh.at\/en\/ai-agents-when-systems-act-and-why-data-protection-needs-to-be-rethought\/","title":{"rendered":"AI Agents: When Systems Act \u2013 and Why Data Protection Needs to Be Rethought"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"51658\" class=\"elementor elementor-51658 elementor-51653\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-f13f19c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"f13f19c\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-fc05408\" data-id=\"fc05408\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d0b46cd elementor-widget elementor-widget-text-editor\" data-id=\"d0b46cd\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>23.03.2026<em><br \/><a href=\"https:\/\/www.eh.at\/team\/gernot-fritz\/\">Gernot Fritz<\/a>,\u00a0<a href=\"https:\/\/www.eh.at\/team\/tanja-pfleger\/\">Tanja Pfleger<\/a>, <a href=\"https:\/\/www.eh.at\/en\/team\/fabian-duschnig\/\">Fabian Duschnig<\/a><\/em><\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-aa0e467 elementor-widget elementor-widget-text-editor\" 
data-id=\"aa0e467\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>Artificial intelligence is no longer a topic of the future but a core element of business reality. While public debate has so far focused largely on generative AI \u2013 systems that create text, images, or code \u2013 another development that has been emerging for some time is now reaching practical implementation: so-called AI agents. The difference is fundamental. While a chatbot reacts to input, an AI agent independently pursues a goal. It plans intermediate steps, accesses systems, and executes actions. In short: the chatbot responds, the agent acts.<\/p><p>It is precisely this ability to actively intervene in processes that makes agentic systems particularly attractive for businesses. In many areas \u2013 from customer service and HR to IT and sales \u2013 repetitive, data-driven workflows can be largely automated. An agent does not merely answer a query; it checks internal policies, gathers missing information, initiates processes, and implements decisions. 
This shifts the role of AI from isolated support to the orchestration of entire process chains.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1ec9b86 elementor-widget elementor-widget-heading\" data-id=\"1ec9b86\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Data Protection: When Data Processing Becomes Dynamic<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-f81edd7 elementor-widget elementor-widget-text-editor\" data-id=\"f81edd7\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>This development brings with it a new level of legal complexity, particularly in the field of data protection. As illustrated, for example, by the Spanish Data Protection Authority\u2019s <a href=\"https:\/\/www.aepd.es\/guias\/orientaciones-ia-agentica.pdf\" target=\"_blank\" rel=\"noopener\">guidelines on agentic AI<\/a>, the structure of data processing is fundamentally changing. While traditional systems typically involve clearly delineated processing steps, AI agents operate dynamically, across multiple stages, and in a context-dependent manner.<\/p><p>They combine data from different sources, interact with external services, and in some cases determine processing steps only during runtime. This puts key GDPR principles under pressure. Purpose limitation becomes harder to define when processing evolves situationally. Data minimisation loses clarity where systems require broad access to function effectively. Transparency is also challenged when data flows are no longer linear but unfold across multiple systems and interactions.<\/p><p>In addition, many agentic systems rely on memory functions, storing and reusing information over extended periods. 
As a result, data processing becomes not only more complex but also more persistent.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1272037 elementor-widget elementor-widget-heading\" data-id=\"1272037\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Data Protection as an Architectural Question<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e3c6f42 elementor-widget elementor-widget-text-editor\" data-id=\"e3c6f42\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>Against this backdrop, data protection can no longer be treated as a downstream compliance layer. It becomes an integral part of system architecture. Key decisions must be made at the design stage: What data may the agent process? Which systems may it access? Which actions may it execute autonomously?<\/p><p>These questions are not merely technical, but also legal in nature. 
Addressing them too late creates a risk of deploying systems that are difficult \u2013 or even impossible \u2013 to make compliant retrospectively.<\/p><p>Agentic systems therefore illustrate particularly clearly what \u201cprivacy by design\u201d means in practice: not documentation after the fact, but structure from the outset.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6399d97 elementor-widget elementor-widget-heading\" data-id=\"6399d97\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">The AI Act: A Risk-Based Approach Rather Than an \u201cAgent\u201d Category<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-36226f1 elementor-widget elementor-widget-text-editor\" data-id=\"36226f1\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>The AI Act addresses these developments without explicitly introducing a dedicated category for \u201cagents.\u201d Instead, it follows a risk-based approach focused on the specific use case.<\/p><p>Two key conclusions can be drawn for practice. First, not every AI agent used in a business context automatically qualifies as a high-risk system. Second, certain applications can very quickly fall into high-risk categories \u2013 particularly in HR, workforce management, recruitment, credit scoring, or risk assessment in insurance contexts.<\/p><p>The legal assessment therefore starts not with the technical architecture, but with the concrete use case. It must be determined whether a use is prohibited, whether it qualifies as a high-risk system under Annex III, or whether specific transparency obligations apply. 
Even outside the high-risk category, core requirements remain relevant \u2013 including AI literacy, transparency obligations in certain contexts, as well as data protection, employment law, and sector-specific regulations.<\/p><p>Timing is also critical. Prohibitions and the obligation to ensure adequate AI literacy have been in force since February 2025. The general application of the AI Act will begin in August 2026, while certain product-related obligations will apply from August 2027. Companies are therefore already in a phase where initial compliance requirements must be actively implemented.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3ed559f elementor-widget elementor-widget-heading\" data-id=\"3ed559f\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Human Oversight: More Than a Formal Approval Step<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a36e9ab elementor-widget elementor-widget-text-editor\" data-id=\"a36e9ab\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>A particularly important aspect \u2013 both from a data protection and AI Act perspective \u2013 is human oversight. In the context of powerful, autonomous systems, formal control mechanisms alone are insufficient. Oversight must be effective in practice.<\/p><p>This requires, above all, that individuals understand how the system works and where its limitations lie. They must be able to identify anomalies, interpret outputs correctly, and \u2013 where necessary \u2013 consciously deviate from the system\u2019s recommendations. 
Equally important is the ability to override decisions or stop processes entirely.<\/p><p>In practice, effective oversight is rarely achieved through a single mechanism. Rather, it requires a combination of approaches. In sensitive scenarios, human approval may be required before any external effect occurs \u2013 for example, before payments are executed, contracts are sent, or personnel decisions are implemented. In other cases, systems may operate largely autonomously, provided that escalation mechanisms are triggered in the event of irregularities or threshold breaches. Particularly in cases involving fundamental rights, a four-eyes principle is likely to become standard.<\/p><p>From both a technical and organisational perspective, it is also crucial that agents are granted only the permissions strictly necessary for their specific task. The broader the access, the greater the risk. This must be complemented by comprehensive logging of system activities and clear mechanisms to halt processes or revert to a controlled fallback mode at any time.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a377e3b elementor-widget elementor-widget-heading\" data-id=\"a377e3b\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">New Risks: When Errors Become Actions<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d84856b elementor-widget elementor-widget-text-editor\" data-id=\"d84856b\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>Beyond regulatory requirements, additional risks arise from the very nature of agentic systems. 
Unlike in traditional AI applications, errors do not remain confined to outputs or recommendations; they can directly impact processes and systems.<\/p><p>A misinterpreted objective may lead an agent to act in a formally correct yet ultimately undesirable or harmful way. Errors can propagate across multiple processing steps and amplify within interconnected systems. Extensive access rights further increase the potential impact, particularly where sensitive business areas are involved.<\/p><p>From a security perspective, new attack vectors also emerge. If an agentic system is compromised, the consequences may be far more severe than in traditional environments, as the agent is capable of executing actions autonomously. In addition, there is a human factor that is often underestimated: the more reliable a system appears, the more likely users are to trust it unquestioningly. This \u201cautomation bias\u201d can result in insufficient scrutiny of outputs and delayed detection of errors.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-73ca6f7 elementor-widget elementor-widget-heading\" data-id=\"73ca6f7\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Conclusion: Governance as a Prerequisite, Not an Add-On<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-140d881 elementor-widget elementor-widget-text-editor\" data-id=\"140d881\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<p>AI agents mark a turning point in the use of artificial intelligence. Systems no longer merely support processes \u2013 they actively shape and execute them. 
This increases not only efficiency potential but also the need for control, transparency, and accountability.<\/p><p>Guidance from supervisory authorities and the requirements of the AI Act make one thing clear: governance can no longer be added retrospectively. It must be embedded in system design from the outset. Companies are therefore well advised to first understand processes before automating them, to deliberately limit access rights before connecting systems, and to establish clear control mechanisms before scaling solutions.<\/p><p>The bottom line is: First the process, then the agent. First the permissions, then the tools. First governance, then scaling.<\/p><p>With AI agents, we are entering a phase in which systems no longer merely generate outputs \u2013 they act. And once systems act, control becomes the central challenge.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6b0d52d elementor-widget elementor-widget-heading\" data-id=\"6b0d52d\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">If you are looking to implement AI agents in a strategic and compliant way, we are happy to support you.<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>23.03.2026Gernot Fritz,\u00a0Tanja Pfleger, Fabian Duschnig Artificial intelligence is no longer a topic of the future but a core element of business reality. 
While public debate has so far focused largely on generative AI \u2013 systems that create text, images, or code \u2013 another development that has been emerging for some time is now reaching practical [&hellip;]<\/p>\n","protected":false},"author":20,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"rank_math_lock_modified_date":false,"inline_featured_image":false,"footnotes":""},"categories":[125],"tags":[897,898],"group":[],"area":[],"location":[],"systype":[],"class_list":["post-51658","post","type-post","status-publish","format-standard","hentry","category-legal-update","tag-ai-agents","tag-data-protection"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/posts\/51658"}],"collection":[{"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/users\/20"}],"replies":[{"embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/comments?post=51658"}],"version-history":[{"count":13,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/posts\/51658\/revisions"}],"predecessor-version":[{"id":52092,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/posts\/51658\/revisions\/52092"}],"wp:attachment":[{"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/media?parent=51658"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/categories?post=51658"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/tags?post=51658"},{"taxonomy":"group","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/group?post=51658"},{"taxonomy":"area","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/area?post=51658"},{"taxonomy":"location","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-js
on\/wp\/v2\/location?post=51658"},{"taxonomy":"systype","embeddable":true,"href":"https:\/\/www.eh.at\/en\/wp-json\/wp\/v2\/systype?post=51658"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}