From “label your deepfakes” to a full content provenance and governance framework
Transparency is one of the cornerstones of the EU Artificial Intelligence Act (AI Act). Yet, until recently, it remained unclear what transparency would mean in practice for generative AI systems and their users. Is it enough to add a disclaimer? A small icon? A line in the terms of service?
The first draft Code of Practice on Transparency of AI-Generated Content, published by the EU AI Office in December 2025, provides a clear answer: no. What the draft Code proposes is not a cosmetic transparency layer, but a structural compliance architecture that affects system design, organisational processes and the entire AI value chain.
While formally voluntary, the Code is explicitly positioned as a central tool for demonstrating compliance with Article 50 AI Act. Companies would therefore be well advised to treat it as an early preview of what regulators are likely to regard as state of the art in the near future.
Article 50 AI Act as the legal anchor
Article 50 AI Act establishes a dual transparency regime, which the draft Code closely mirrors and significantly elaborates.
On the provider side, Article 50(2) and (5) require providers of AI systems generating synthetic audio, image, video or text content to ensure that outputs are:
+ marked in a machine-readable manner, and
+ detectable as AI-generated or manipulated,
as far as this is technically feasible and in line with requirements of effectiveness, robustness, reliability and interoperability.
On the deployer side, Article 50(4) and (5) impose an obligation to clearly and distinguishably disclose:
+ deepfakes, and
+ AI-generated or manipulated text published to inform the public on matters of public interest,
subject to limited and narrowly framed exceptions, in particular for law enforcement use and cases of genuine human editorial control.
The draft Code operationalises this split by dedicating Section 1 to providers and Section 2 to deployers, while making clear that these obligations are interlocking and cumulative.
By translating Article 50 into granular commitments and measures for both sides of the AI value chain, the draft Code establishes a demanding and technically prescriptive framework that goes well beyond simple disclosure mechanics.
Provider commitments: transparency by technical design
Multi-layered marking is no longer optional
The most fundamental provider commitment is the obligation to implement a multi-layered marking approach for AI-generated or AI-manipulated content.
The draft Code explicitly states that, at the current state of the art, no single marking technique is sufficient to meet the legal requirements under Article 50(2) AI Act. Providers are therefore expected to combine multiple techniques, depending on the content modality and the associated risk profile.
These techniques may include:
+ Metadata embedding, including digitally signed provenance information identifying the AI system and the type of operation performed (e.g. generation, editing, prompting).
+ Imperceptible watermarking, interwoven directly into the content and designed to withstand common transformations.
+ Fingerprinting or logging mechanisms, particularly where metadata or watermarking is unreliable, such as in certain text-based use cases.
+ Structural marking at model level, especially for open-weight models, to facilitate compliance by downstream system providers.
This approach reframes transparency as a systems engineering challenge, rather than a compliance add-on applied at the end of the production process.
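As a purely illustrative sketch of what the metadata-embedding layer described above could look like, the Python snippet below assembles a minimal signed provenance record for a generated file. The schema, field names and the HMAC-based signature are assumptions made for brevity; a real implementation would typically follow an established provenance standard and use asymmetric signatures under a verifiable key infrastructure.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared key used only for this sketch; a real provider would
# use asymmetric signatures issued under a verifiable key infrastructure.
SIGNING_KEY = b"demo-provider-key"


def build_provenance_manifest(content: bytes, system_id: str, operation: str) -> dict:
    """Assemble a minimal signed provenance record for a piece of content.

    The field names are illustrative only; the draft Code does not
    prescribe a concrete schema.
    """
    record = {
        "system_id": system_id,                                 # which AI system produced the content
        "operation": operation,                                 # e.g. "generation" or "editing"
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the exact output
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


if __name__ == "__main__":
    manifest = build_provenance_manifest(b"...synthetic image bytes...",
                                         "example-image-model-v1", "generation")
    print(json.dumps(manifest, indent=2))
```

In practice, such a record would be embedded in the content's metadata container and complemented by the other marking layers (watermarking, fingerprinting), since metadata alone is easily stripped.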
Detection as a core obligation, not a by-product
One of the most far-reaching elements of the draft Code concerns detectability.
Providers are expected not only to mark content, but also to ensure that it can be actively detected as AI-generated or manipulated. To this end, the Code introduces several concrete expectations:
+ Providers should offer free detection interfaces (such as APIs or public tools) enabling users and third parties to verify content provenance.
+ Detection results should include confidence scores and human-understandable explanations, rather than mere binary outputs.
+ Detection mechanisms should be maintained throughout the system’s lifecycle – and, in some cases, made available to authorities even after a provider exits the market.
Transparency thus moves into the realm of verifiability and auditability, implicitly anticipating scrutiny by regulators, platforms, fact-checkers and civil society.
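To make the non-binary character of these expectations concrete, the sketch below models a hypothetical detection response that carries a confidence score and a plain-language explanation rather than a simple yes/no flag. The class, field names and stub logic are invented for illustration and do not reflect any interface prescribed by the draft Code.

```python
from dataclasses import dataclass, field


@dataclass
class DetectionResult:
    """Illustrative shape of a non-binary detection response."""
    likely_ai_generated: bool
    confidence: float            # e.g. 0.0 to 1.0, reflecting the call for confidence scores
    explanation: str             # human-understandable reasoning, not just a flag
    evidence: list[str] = field(default_factory=list)


def check_content(content: bytes) -> DetectionResult:
    # Stub logic only: a real detector would validate embedded metadata,
    # look for watermarks and query fingerprinting or logging back-ends.
    if b'"content_sha256"' in content:
        return DetectionResult(
            likely_ai_generated=True,
            confidence=0.97,
            explanation="A signed provenance manifest was found and validated.",
            evidence=["embedded metadata"],
        )
    return DetectionResult(
        likely_ai_generated=False,
        confidence=0.40,
        explanation="No marking was found; absence of a mark does not prove human origin.",
    )
```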
Interoperability and cooperation along the value chain
The draft Code places strong emphasis on cooperation and standardisation.
Providers are encouraged – and in some respects expected – to:
+ support open and interoperable marking standards,
+ collaborate on shared or aggregated verification tools,
+ ensure detection mechanisms function across platforms and contexts, and
+ facilitate downstream compliance by deployers and system integrators.
These expectations are particularly relevant for general-purpose AI and model providers, whose technologies are typically embedded in a wide range of downstream applications.
Organisational commitments: testing, monitoring and documentation
Technical measures alone are not sufficient. Providers are required to back them up with organisational safeguards, including:
+ documented compliance frameworks explaining how marking and detection obligations are met,
+ regular testing under real-world and adversarial conditions,
+ adaptive threat modelling to address evolving manipulation techniques,
+ staff training, and
+ cooperation with market surveillance authorities.
In practice, this aligns transparency obligations with broader AI governance and risk management structures under the AI Act.
Deployer commitments: transparency at the point of exposure
While provider commitments are largely technical, deployer commitments focus on how AI-generated content is presented to humans.
Disclosure as a contextual obligation
Deployers must disclose deepfakes and certain AI-generated text clearly and distinguishably at the time of first exposure. The draft Code leaves little room for minimalist interpretations.
Disclosure must be:
+ visible without additional interaction,
+ adapted to the content modality (text, image, video, audio),
+ accessible to persons with disabilities, and
+ consistent across dissemination channels.
Hiding disclosures in metadata, footnotes or secondary interfaces is insufficient.
A common taxonomy: fully AI-generated vs AI-assisted content
To avoid both over- and under-disclosure, the Code introduces a two-level taxonomy:
+ Fully AI-generated content, created autonomously by the system.
+ AI-assisted content, where AI materially affects meaning, authenticity or perception.
This distinction matters because it determines how disclosure is framed and how users are informed about the nature of AI involvement. The Code explicitly rejects binary “AI / non-AI” labelling as overly simplistic.
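A minimal way to reflect this taxonomy in a content pipeline is to classify each item before publication, for example along the lines of the sketch below. The enum values and label wording are placeholders and do not reflect the terminology the final Code or the forthcoming EU-wide icon will use.

```python
from enum import Enum


class AIInvolvement(Enum):
    """Two-level taxonomy from the draft Code; values are paraphrased placeholders."""
    FULLY_AI_GENERATED = "fully_ai_generated"   # created autonomously by the system
    AI_ASSISTED = "ai_assisted"                 # AI materially affects meaning, authenticity or perception


def disclosure_label(level: AIInvolvement) -> str:
    # Hypothetical label wording; the final EU-wide icon and phrasing are still pending.
    if level is AIInvolvement.FULLY_AI_GENERATED:
        return "This content was generated by AI."
    return "This content was created with the assistance of AI."
```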
Icon-based disclosure and future EU harmonisation
Deployers are expected to use a common icon-based disclosure system, designed for immediate recognisability. Until a final EU-wide icon is adopted, interim solutions are permitted, but consistency is emphasised.
The Code also anticipates interactive disclosure, allowing users to access more detailed information about what exactly was AI-generated or manipulated, drawing on the machine-readable markings implemented by providers.
Deepfakes: modality-specific and strict
Deepfakes are subject to the most detailed and restrictive rules.
The Code specifies disclosure practices for:
+ real-time video (e.g. persistent on-screen indicators),
+ non-real-time video (e.g. disclaimers, icons or credits),
+ images (fixed and clearly visible icon placement), and
+ audio-only formats (spoken disclosures or audio cues).
Even artistic, fictional or satirical works are not exempt, although disclosure must be implemented proportionately so as not to undermine creative expression.
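One way deployers could operationalise these modality-specific practices is a simple lookup that forces an explicit disclosure decision for every format. The mapping below merely paraphrases the examples listed above and is not a prescribed configuration.

```python
# Illustrative mapping of content modality to a disclosure mechanism,
# paraphrasing the practices listed above; not a prescribed configuration.
DISCLOSURE_BY_MODALITY = {
    "real_time_video": "persistent on-screen indicator for the full duration",
    "video": "disclaimer, icon or credits shown at first exposure",
    "image": "fixed, clearly visible icon placement",
    "audio": "spoken disclosure or recognisable audio cue",
}


def required_disclosure(modality: str) -> str:
    """Return the expected disclosure mechanism for a modality.

    Raising KeyError for unknown modalities ensures that new formats are not
    published without an explicit disclosure decision.
    """
    return DISCLOSURE_BY_MODALITY[modality]
```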
AI-generated text and the limits of the editorial control exception
For AI-generated or manipulated text published on matters of public interest, the Code closely tracks Article 50(4) AI Act, while making its practical implications explicit.
Deployers may avoid disclosure only if:
+ the text has undergone genuine human review or editorial control, and
+ a natural or legal person assumes editorial responsibility.
Crucially, the Code requires procedural safeguards and minimal documentation to rely on this exception. Editorial responsibility must be real, traceable and defensible – not merely asserted.
This is particularly relevant for media organisations, corporate communications and political or policy-related publications.
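What such traceable documentation might look like is not specified in the draft. Organisations relying on the exception could, for instance, keep a lightweight editorial review record per publication, along the lines of the illustrative sketch below; all field names are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class EditorialReviewRecord:
    """Minimal record a deployer might keep to evidence genuine editorial control.

    All field names are assumptions; the draft Code requires that editorial
    responsibility be real and traceable, not that it take this particular form.
    """
    content_id: str            # internal reference to the published item
    reviewer: str              # natural person who carried out the review
    responsible_entity: str    # natural or legal person assuming editorial responsibility
    reviewed_at: datetime
    changes_made: str          # short description of substantive edits to the AI draft
    approved: bool
```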
Governance obligations for deployers
Like providers, deployers are expected to implement internal compliance structures, including:
+ internal classification and labelling processes,
+ staff training on disclosure obligations,
+ monitoring and correction mechanisms for mislabelled content, and
+ cooperation with authorities and third-party flagging systems.
Transparency thus becomes part of content governance, not merely legal compliance.
Transparency as organisational governance
Beyond technical and interface requirements, the draft Code places significant weight on internal governance.
Providers and deployers alike are expected to implement compliance frameworks, testing and monitoring processes, staff training and cooperation mechanisms with authorities. Transparency therefore becomes an ongoing organisational obligation, not a one-off product decision.
While proportionality for SMEs and smaller mid-caps is repeatedly emphasised, the direction of travel is clear: transparency scales with systemic impact, not organisational size alone.
What happens next: from first draft to final Code of Practice
Procedurally, the publication of the first draft marks the beginning, not the end, of the Code of Practice process.
The Chairs and Vice-Chairs explicitly describe this version as a foundation rather than a finished instrument. Stakeholders participating in the process are invited to submit written feedback within a defined consultation window, and further dedicated workshops are planned.
Based on this input, subsequent drafts are expected to:
+ refine and concretise commitments and measures,
+ address open technical questions (such as challenging content types, agentic AI, short-form text),
+ further develop the common icon and accessibility solutions, and
+ align the Code more closely with emerging standards and Commission guidance.
In parallel, the European Commission is preparing separate guidelines on Article 50 AI Act, which will clarify scope, definitions and exceptions. The final Code will need to align with, and likely complement, those guidelines.
Once finalised, the Code is expected to be formally endorsed at EU level and to function as a key reference point for market surveillance authorities when assessing compliance with Article 50. While adherence remains voluntary, deviation from the Code will require providers and deployers to demonstrate compliance through alternative, equally robust means.
In practical terms, the window between draft and final Code should be understood as a preparatory phase, not a grace period.
What this means in practice
The draft Code challenges several common assumptions and leaves little doubt about the direction of travel.
For providers of generative AI systems, transparency obligations increasingly resemble technical infrastructure requirements, integrated into model design, system architecture and lifecycle management. Key questions include:
+ Do our systems support multi-layered marking across all relevant modalities?
+ Are detection mechanisms available, robust and interoperable?
+ Have we designed our systems with downstream disclosure obligations in mind?
+ Can we document effectiveness, robustness and reliability in a way regulators can assess?
For deployers, transparency is no longer satisfied by generic disclaimers. It requires context-aware disclosure, documented editorial responsibility and operational consistency across formats and channels. Key questions include:
+ Where exactly does AI enter our content workflows?
+ Do we publish AI-assisted content in contexts that qualify as matters of public interest?
+ Is our editorial control genuine, documented and defensible?
+ Are disclosure practices consistent across formats, channels and jurisdictions?
For both, the draft Code makes clear that transparency under the AI Act is not about adding a label at the end of the process. It is about building traceability, detectability and accountability into AI systems and content workflows from the outset.
Most importantly, the Code signals that Article 50 AI Act will be enforced with reference to concrete, evolving standards, not abstract principles.
Executive summary
The first draft Code of Practice on Transparency of AI-Generated Content gives operational substance to Article 50 AI Act by introducing detailed and demanding commitments for both providers and deployers.
It transforms transparency from a high-level obligation into a technical, organisational and governance framework spanning the entire AI value chain. Providers are pushed towards multi-layered marking, detection and provenance infrastructure. Deployers face granular disclosure obligations, with limited and documentable exceptions.
Although formally voluntary and still subject to consultation, the draft Code already functions as a regulatory benchmark. Organisations that engage with its requirements now will be significantly better positioned once Article 50 becomes fully enforceable and supervisory scrutiny intensifies.
Transparency under the AI Act is no longer about visibility alone. It is about making AI-generated content reliably identifiable throughout its lifecycle – and being able to prove that this is the case.