AI Act and Contracts: Why AI Compliance Starts at the Negotiation Table

25.03.2026

Gernot Fritz, Tanja Pfleger, Amina Kovacevic

When companies deploy AI today, they rarely do so entirely in-house. In practice, systems are typically procured, integrated, or accessed via platforms. Companies are therefore acquiring not merely technology, but often complex, dynamic systems whose functionality, underlying data, and behaviour continuously evolve.

This is precisely where a key difference from traditional software lies. AI systems are not static products; they change during operation. They rely on training data, learn from new inputs, and are continuously adapted through updates. This creates risks that can no longer be adequately addressed through traditional IT contractual clauses alone.

Against this background, contract design becomes a central element of AI compliance. The AI Act does not only regulate the technology itself, but also the collaboration along the AI value chain. Many of its requirements can, in practice, only be met if they are contractually reflected between the actors involved.

Allocation of roles as the starting point of any contract

A central anchor point of the AI Act is the allocation of regulatory roles. The Regulation distinguishes, in particular, between providers and deployers of AI systems, attaching significantly different obligations to each role. While providers are responsible, for example, for risk management, technical documentation, and conformity assessments, deployers are primarily subject to requirements relating to use, monitoring, and documentation.

In practice, however, this allocation is rarely straightforward. Complex projects often involve multiple parties – such as model providers, integrators, platform operators, and end users. In addition, roles may shift. Companies may qualify as providers even if they did not originally develop the system – for instance, where they place it on the market under their own name, substantially modify it, or change its intended purpose.

This dynamic illustrates why contractual clarification of roles is not a mere formality, but a key prerequisite for effective compliance. Where roles are left unclear, companies may find themselves assuming regulatory obligations they never consciously took on.

Cooperation along the value chain

Role allocation alone is not sufficient. The AI Act also requires structured cooperation between the actors involved – particularly where multiple parties contribute to the functionality and compliance of a system.

In the case of high-risk AI systems, providers will often depend on information, components, or support from third parties. To enable them to fulfil their regulatory obligations, these contributions must be reliably organised. In practice, this means that contracts must clearly define which information is to be provided, which technical access is required, and which cooperation obligations apply.

Similarly, deployers require clear contractual arrangements with providers to meet their own obligations – for example, regarding instructions for use, provision of documentation, data control, and access to logs.

As a result, the function of contracts is shifting. They no longer merely delineate commercial responsibilities, but become an operational instrument for managing compliance across the entire value chain.

Training data and data use as a critical pressure point

One particularly sensitive area concerns the handling of data. Many AI systems are continuously improved through further training or refinement based on new data. This raises a key contractual question: what happens to the customer’s data?

Especially in the context of generative AI, it is by no means self-evident that inputs are used solely for the provision of the specific service. Providers often reserve the right to use such data – at least in aggregated or anonymised form – for training purposes. For companies, this can create significant risks, particularly with regard to trade secrets, personal data, and regulatory requirements.

Contracts must set clear boundaries in this respect. This includes defining whether and to what extent data may be used for training purposes, which forms of anonymisation are envisaged, and whether customers have a right to opt out. In practice, this issue often becomes one of the central negotiation points and has a decisive impact on the risk profile of an AI project.

Dynamic systems require dynamic contractual mechanisms

Another key difference from traditional software lies in the dynamic nature of AI systems. Models are continuously refined, functionalities are adjusted, and underlying risk profiles may change. Updates are therefore not only a technical issue, but also a legal one.

Contracts must take this into account. It is not sufficient to simply allow or exclude updates. Rather, the key questions are how changes are communicated, whether and under what conditions customers can object to them, and how it is ensured that regulatory assessments remain up to date.

In sensitive use cases, it may be crucial that changes are transparently documented and their impact remains traceable. Otherwise, there is a risk that a system initially considered low-risk gradually “evolves” into a regulated category without this being recognised in time.

Liability and risk allocation in complex systems

As AI systems become more complex, the importance of clear risk allocation increases. Errors may originate from training data, model decisions, integration, or actual use. As a result, attributing responsibility is often difficult.

Contracts play a central role here. They define who is responsible for which risks, which liability limitations apply, and in which cases indemnities are triggered. Particularly relevant are issues relating to the origin and quality of training data, as well as the outputs generated by the system.

In practice, traditional liability models often prove insufficient to capture the specific characteristics of AI systems. Contract design must therefore be more granular and reflect the different risk spheres along the value chain.

This gains additional importance in light of the revised Product Liability Directive (EU) 2024/2853, which explicitly brings software and AI systems within the scope of product liability and thereby introduces new requirements for contractual risk allocation.

Transparency, audit, and the limits of the black box

A recurring challenge in AI projects is the limited transparency of many systems. At the same time, the AI Act – particularly for regulated use cases – requires a high degree of traceability and control.

This raises a key question for companies: how can compliance with regulatory requirements be verified if there is no insight into the system? Contractual clauses on audit and control rights become significantly more important in this context. They provide at least partial access to relevant information, for example regarding training data, model behaviour, or implemented security measures.

Even if full transparency will remain unrealistic in many cases, one point is clear: without contractually secured information and control rights, robust AI compliance is hardly achievable.

Conclusion: Contract design as the key to AI compliance

The AI Act does not only reshape regulatory requirements – it also changes how AI projects must be structured. Many obligations cannot be fulfilled in isolation within a single organisation, but require coordinated implementation across the entire value chain.

Contracts therefore become the central instrument for ensuring this coordination. They define roles, structure information flows, allocate risks, and create the foundation for transparency and control.

For practice, this means one thing above all: AI compliance does not begin when a system is deployed, but already at the negotiation table – because what is not reflected in the contract will be difficult to enforce later on.

If AI projects are to scale, they require robust contractual frameworks – we are happy to support you in getting them right.