AI Regulation: Duties, Risks, Labeling, and Compliance
The EU AI Act confronts creators and teams in the DACH region with new obligations around labeling, copyright, and documentation. The law has been in force since August 1, 2024, with obligations phased in through 2027. Particularly relevant are the transparency and copyright obligations for General-Purpose AI (GPAI), which have applied since August 2, 2025.
EU AI Act Overview
The EU AI Act is the first comprehensive AI regulation in the EU. It establishes risk-based rules ranging from prohibitions to transparency requirements. As of August 2, 2025, it also addresses GPAI models, such as large language models, with specific requirements. The law aims to ensure trust, copyright compliance, and traceability in dealing with AI.
Article 50 of the EU AI Act requires that AI interactions and synthetic or manipulated content (deepfakes and, in certain cases, AI-generated text) be labeled as artificial. The labeling must be technically detectable, for example through metadata or watermarks. For GPAI providers, Article 53 requires transparency, including a comprehensible summary of the training data, and compliance with copyright law. Where systemic risk exists, additional obligations follow from Article 55.
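How "technically detectable" labeling could look in practice is not prescribed by the Act. A minimal, illustrative sketch (all field names are hypothetical, not taken from the AI Act or any standard): a JSON sidecar manifest that declares an asset as AI-generated and binds the label to the exact asset bytes via a SHA-256 digest, so later tampering is detectable.

```python
import hashlib
import json

def make_label_manifest(asset_bytes: bytes, tool_name: str) -> str:
    """Return a JSON manifest declaring the asset as AI-generated."""
    manifest = {
        "ai_generated": True,          # Art. 50-style disclosure flag (illustrative)
        "generator": tool_name,        # hypothetical tool identifier
        # The digest ties the label to these exact asset bytes,
        # making later modification of the asset detectable.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    return json.dumps(manifest, indent=2)

def verify_label(asset_bytes: bytes, manifest_json: str) -> bool:
    """Check that the manifest still matches the asset."""
    manifest = json.loads(manifest_json)
    digest = hashlib.sha256(asset_bytes).hexdigest()
    return manifest.get("ai_generated") is True and manifest["asset_sha256"] == digest

asset = b"synthetic image bytes ..."
manifest = make_label_manifest(asset, "example-model")
print(verify_label(asset, manifest))         # True
print(verify_label(asset + b"x", manifest))  # False: asset was altered
```

Real deployments would embed such information in the file itself (metadata chunks, watermarks) or use a signed standard like C2PA; the sidecar here only illustrates the binding idea.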
Platforms like YouTube have already introduced their own policies. YouTube requires clear disclosure of realistically altered or synthetic content; if disclosure is missing, YouTube may label the content itself. C2PA/Content Credentials is emerging as the technical standard for tamper-evident provenance information.
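To make the C2PA idea concrete, here is a rough sketch of the kind of assertion data a Content Credentials manifest carries. The labels ("c2pa.actions", "c2pa.created") and the IPTC digital source type follow the spirit of the C2PA specification, but this plain dictionary is illustrative only: real manifests are cryptographically signed and embedded into the asset by a C2PA SDK, not hand-built like this.

```python
import json

# Illustrative C2PA-style claim; not a signed, embeddable manifest.
claim = {
    "claim_generator": "ExampleApp/1.0",  # hypothetical producer string
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC code marking fully AI-generated media
                        "digitalSourceType":
                            "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(json.dumps(claim, indent=2))
```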
In the DACH context, note that Switzerland, though not an EU member, has had a revised data protection law (nDSG) in force since September 1, 2023, which obliges companies to process data in a modern, transparent way.
Implementation & Timeline
The chronology of the EU AI Act's implementation is clearly defined: the Official Journal publication took place on July 12, 2024. The law entered into force on August 1, 2024, initially without any applicable obligations. Since February 2, 2025, prohibitions on certain AI practices and AI-literacy obligations apply. The GPAI rules, governance, confidentiality, and the sanctions framework took effect on August 2, 2025; at the same time, member states must designate supervisory authorities and set penalties.
In parallel, the EU published the voluntary but officially recognized Code of Practice for GPAI on July 10, 2025, giving companies an easier path to demonstrating compliance. On July 24, 2025, the EU Commission provided the binding template for the public summary of training content under Art. 53(1)(d), which must be used.
In 2024/2025, YouTube introduced an obligation to disclose realistically synthetic content. For sensitive topics, labels are displayed more prominently. Monetization remains possible, provided the content adheres to partner and advertising policies.
Source: YouTube
Impact & Compliance
The EU relies on early, actionable rules to ensure trust, copyright compliance, and traceability. For GPAI in particular, this means a minimum standard of documentation and respect for copyright, including training data opt-outs and rights reservations. The Commission confirmed the timetable despite industry pleas for delays and provided accompanying guidance documents such as the Code of Practice and templates.
Platforms like YouTube create parallel transparency for viewers without broadly penalizing AI content. Rather than a ban, labels are used, with a stronger focus on originality and quality. This is reflected in monetization policies that do not categorically exclude AI content but tie it to compliance with general guidelines.
Source: YouTube
For providers of GPAI models or fine-tuned models, this means robust documentation (technical documentation and a copyright policy) and a public, comprehensible summary of the training data based on the EU template. Content producers must clearly label realistically synthetic passages and should consider Content Credentials (C2PA) as technical proof of provenance. Maintaining 'model cards' or 'system cards' is a proven transparency artifact for describing capabilities, limits, and risks.
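A minimal sketch of what such a transparency artifact might contain. The fields loosely mirror common model-card practice and the categories of the EU training-data summary template (data sources, opt-out handling); all names and values are illustrative, not taken from the official template.

```python
import json

# Hypothetical model-card structure; field names are illustrative.
model_card = {
    "model_name": "example-gpai-model",            # hypothetical
    "intended_use": "General-purpose text generation",
    "limitations": ["May produce inaccurate output"],
    "training_data_summary": {
        "main_sources": ["licensed corpora", "public web data"],
        # e.g. TDM rights reservations / robots.txt honored during crawling
        "copyright_optouts_respected": True,
    },
    "risks": ["misuse for disinformation"],
}

print(json.dumps(model_card, indent=2))
```

In practice, the public training-content summary must follow the Commission's binding template; a structured artifact like this can feed that document and double as internal documentation.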
In the DACH region, the Swiss nDSG remains separately relevant. Companies must plan their processes so that both the EU AI Act (for the EU market) and the nDSG (when dealing with Switzerland) are complied with. The penalty framework under the AI Act was set at up to 35 million EUR or 7% of global annual revenue; member states must define concrete enforcement.

Source: techzeitgeist.de
Key deadlines and milestones mark the gradual transition to the full application of the EU AI Act.
Analysis & Misconceptions
The fact is that GPAI obligations (Art. 53 ff.) have applied since August 2, 2025. The Code of Practice serves as an accepted evidence track, and the training data summary must be published according to the EU template. YouTube requires disclosure of realistically synthetic content and may label it itself if necessary. Monetization depends on general policies; AI is not categorically excluded.
It is unclear how consistently platform labels, C2PA metadata, and future detection systems will interact. The EU lists several technical options (watermarks, metadata, cryptography) without fixing a single mandatory technology. Claims that the EU has postponed deadlines are false; the Commission explicitly confirmed that the timetable applies. Similarly, the notion that AI videos on YouTube are demonetized across the board is misleading; what matters is originality, policy compliance, and disclosure.

Source: techzeitgeist.de
Compliance with the new AI regulations requires substantial effort and adjustments within companies.
Industry associations and some governments called for a pause because of complexity and costs. The Commission pushed back, relying on accompanying guidelines and the Code of Practice to increase legal clarity. Media and professional associations see the clear obligations, such as training data summaries, as an opportunity to make copyright more visible and reduce misunderstandings.
Future Outlook
Open questions concern the interoperability of platform labels, C2PA, and future EU requirements for machine-readable labeling. The EU is working on further guidance, including on Article 50, and is gathering input on implementation. It remains open how consistently authorities will review training data summaries and at what intervals updates are expected.

Source: haufe-akademie.de
The risk-based approach of the EU AI Act classifies AI systems according to their potential risk to fundamental rights and safety.
The direction is clear: visibility over guesswork. Those who develop models must document clearly and respect copyrights. Those who publish content should reliably label realistic AI components and, where possible, rely on robust provenance. With the Code of Practice, YouTube labels, and C2PA, practical tools are already available that enable trust and reach to be treated as a common currency.