EU AI Act Article 50: a builder's guide
Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.
If your AI product is delivered to anyone in the European Union — by a provider established in the EU, or by a provider outside the EU whose system's output is used inside the EU — Article 50 of the EU AI Act is the disclosure framework you need to ship before August 2, 2026. Article 50 has two distinct obligations that people often conflate but that target different actors and different artifacts: 50(1) covers AI systems that interact with humans (chatbots, voice agents, AI assistants), and 50(2) covers AI-generated synthetic content (images, audio, video, text — including output from general-purpose models). This guide separates the two, spells out who has to do what and what counts as a sufficient disclosure under each, explains the deepfake / public-interest-text overlay in 50(4), and describes what extraterritorial reach actually means in practice.
What Article 50 actually says
The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal on July 12, 2024 and entered into force on August 1, 2024. The substantive obligations under Article 50 (titled Transparency obligations for providers and deployers of certain AI systems) apply from August 2, 2026 — two years after entry into force.
Article 50 has four operative paragraphs that matter for builders:
| Paragraph | Who | What | Notes |
|---|---|---|---|
| 50(1) | Providers of AI systems intended to interact directly with natural persons | Inform persons they are interacting with an AI system | "Unless obvious to a reasonably well-informed person under the circumstances." |
| 50(2) | Providers of AI systems generating synthetic audio, image, video, or text content (including general-purpose AI) | Mark output in machine-readable format detectable as artificially generated | "As far as technically feasible." |
| 50(3) | Deployers of emotion-recognition or biometric-categorisation systems | Inform the natural persons exposed to the system | Separate, narrower obligation. |
| 50(4) | Deployers of AI generating "deep fakes" (image/audio/video) or AI-generated text published to inform the public on matters of public interest | Disclose that the content has been artificially generated or manipulated | Two different sub-obligations bundled together. |
Article 50 is enforced as a transparency obligation under Chapter IV of the Act. Penalties for non-compliance are in Article 99: up to €15M or 3% of total worldwide annual turnover, whichever is higher (the Article 99(4) tier, which covers transparency obligations rather than prohibited practices). Member states implement enforcement; the EU AI Office coordinates.
Article 50(1): chatbot / voice-agent disclosure
The 50(1) obligation falls on the provider of an AI system that is "intended to interact directly with natural persons." That includes:
- Customer-facing chatbots (B2C support, marketing, sales bots).
- AI voice agents (outbound calling, IVR).
- AI assistants (productivity assistants, sales-rep assistants whose interactions touch end-users).
- AI tutors (in education products).
- AI companions (NY Companion Models law has a parallel state-level rule).
- Any conversational AI feature embedded in a larger product.
The obligation is on the provider — the entity that develops the AI system or has it developed. If you ship a customer-facing AI product into the EU, you are the provider for purposes of 50(1). If you embed someone else's AI (e.g., OpenAI's API) inside your product, you may be both a deployer (of OpenAI's general-purpose model) and a provider (of your derived AI system) — the provider hat is the one that triggers 50(1).
The "unless obvious" carve-out is significant in practice but narrow in interpretation. Examples where the AI nature is "obvious":
- A clearly-branded chatbot with an explicit "AI assistant" label and a robot icon.
- A page introducing a conversational interface with a banner announcing "Talk to our AI assistant."
Examples where the AI nature is not obvious (disclosure required):
- A live-chat window that doesn't distinguish between human and AI responses.
- A voice agent that uses a human-sounding voice without any audio cue.
- Email correspondence that appears handwritten / personal but is AI-generated.
- An AI persona that uses a human-presenting name and avatar.
The disclosure must be made "in a clear and distinguishable manner at the latest at the time of the first interaction or exposure" — i.e., upfront, not buried in a Terms of Service. The exact text isn't prescribed; clarity and prominence are.
Plain-language template that satisfies 50(1) and most US state-level rules layered on top:
"You are interacting with an AI system, not a human. Some responses may be generated using artificial intelligence."
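One way to make the "first interaction" timing concrete is to inject the disclosure server-side as the opening message of every new session, so no model output can ever precede it. The sketch below is illustrative only; `open_session`, the `system-notice` role, and the session-tracking set are hypothetical names, not a real plainstamp or chat-platform API.

```python
# Hypothetical sketch: emit the Article 50(1) disclosure as the first
# visible message of a new chat session, exactly once per session.

AI_DISCLOSURE = (
    "You are interacting with an AI system, not a human. "
    "Some responses may be generated using artificial intelligence."
)

def open_session(session_id: str, disclosed_sessions: set) -> list:
    """Return the opening transcript for a session, disclosing once per session."""
    messages = []
    if session_id not in disclosed_sessions:
        # 50(1) requires a clear, distinguishable disclosure at first
        # interaction, so it ships as its own message, not a ToS link.
        messages.append({"role": "system-notice", "text": AI_DISCLOSURE})
        disclosed_sessions.add(session_id)
    return messages

seen = set()
first = open_session("abc", seen)      # first contact: disclosure present
repeat = open_session("abc", seen)     # reconnect: banner not repeated
```

The same pattern transfers to voice agents: play a short recorded disclosure before connecting the caller to the synthesized voice.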
Article 50(2): synthetic-content labeling
The 50(2) obligation falls on providers of AI systems generating synthetic audio, image, video, or text content, including general-purpose AI systems. The output must be:
- Marked in a machine-readable format as artificially generated or manipulated.
- Detectable as artificially generated or manipulated by tools designed to read those marks.
This is fundamentally different from the 50(1) obligation. 50(1) is human-readable disclosure to users. 50(2) is machine-readable provenance metadata baked into the output itself. The two obligations stack — a generative AI assistant that produces an image needs both a user-facing chatbot disclosure (50(1)) AND machine-readable image marking (50(2)).
Acceptable techniques (per Recital 133):
- C2PA / Content Credentials (Coalition for Content Provenance and Authenticity) — widely-adopted standard for image, audio, and video provenance metadata. Adobe, Microsoft, and others ship C2PA-marking tools.
- Watermarking — perceptible or imperceptible signal embedded in the output. Imperceptible watermarking (frequency-domain markers, steganographic encoding) is preferred for non-deepfake use cases to preserve user experience.
- Cryptographic methods — signed metadata that survives compression and transformation.
- Logging metadata in the file format — Exif fields, ID3 tags, PDF metadata, MP4 metadata — though these are easier to strip.
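To illustrate the last technique in the list, the sketch below inserts a `tEXt` metadata chunk into a PNG using only the Python standard library. This is the weakest marking option (format-level metadata is easy to strip, as noted above), so treat it as a floor beneath C2PA or watermarking, not a substitute. The function name and keyword are made up for this example.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def add_text_chunk(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a PNG tEXt chunk right after IHDR marking the image's provenance."""
    assert png.startswith(PNG_SIG), "not a PNG file"
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    # PNG chunk layout: 4-byte big-endian length, 4-byte type, data,
    # then a CRC-32 computed over the type and data bytes.
    chunk = struct.pack(">I", len(data)) + b"tEXt" + data
    chunk += struct.pack(">I", zlib.crc32(b"tEXt" + data))
    # IHDR is always first: 8-byte signature + 4 len + 4 type + 13 data + 4 CRC.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    return png[:ihdr_end] + chunk + png[ihdr_end:]

# Usage on a minimal 1x1 grayscale PNG skeleton built in place:
ihdr_data = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
ihdr = (struct.pack(">I", 13) + b"IHDR" + ihdr_data
        + struct.pack(">I", zlib.crc32(b"IHDR" + ihdr_data)))
iend = struct.pack(">I", 0) + b"IEND" + struct.pack(">I", zlib.crc32(b"IEND"))
png = PNG_SIG + ihdr + iend
marked = add_text_chunk(png, "ai-generated", "true")
```

For production use, a C2PA manifest signed with the provider's key is the sturdier path; this sketch only shows where format-level metadata physically lives.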
The "as far as technically feasible" qualifier is real. Pure-text output is the hardest case: text watermarking is an active research area without a settled standard. For text, the Recital 133 expectation is that providers make a good-faith effort with current state-of-the-art techniques — for example, statistical watermarks that bias token selection during sampling so that generated text carries a detectable signature.
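To make the statistical-watermark idea concrete, here is a toy sketch in the spirit of "green-list" token watermarking: a hash of the previous token deterministically partitions the vocabulary, generation prefers the green half, and a detector measures the green fraction. This is illustrative only — not any vendor's actual scheme — and all names (`green_set`, `green_fraction`) are invented for this example.

```python
import hashlib

def green_set(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Deterministically select the 'green' half of the vocabulary,
    keyed on a hash of the previous token."""
    ranked = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + "|" + t).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list, vocab: list) -> float:
    """Detection statistic: share of tokens drawn from the green list.
    Unwatermarked text hovers near the green fraction (0.5 here);
    watermarked text scores much higher."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_set(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)

# A generator that always picks a green token yields a detectable chain.
vocab = ["w%d" % i for i in range(20)]
tokens = ["w0"]
for _ in range(10):
    tokens.append(sorted(green_set(tokens[-1], vocab))[0])
```

Real schemes bias rather than force green tokens (preserving quality) and use a z-test on the green fraction, but the detection logic is the same shape.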
Article 50(4): the deepfake and public-interest-text overlay
Article 50(4) is two sub-obligations bundled together; they apply to deployers, not providers, of AI systems:
50(4) first sentence: deepfakes
Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.
A "deep fake" under Article 3(60) means AI-generated or -manipulated content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.
Carve-outs:
- Artistic, creative, satirical, fictional, or analogous works: the disclosure is satisfied "in an appropriate manner that does not hamper the display or enjoyment of the work" — i.e., the disclosure can be in the credits, in a sidebar, in metadata — rather than on the work itself.
- Law enforcement: a separate exemption applies where the use is authorised by law to detect, prevent, investigate, or prosecute criminal offences.
Practical disclosure examples: a watermark on a deepfake image, a banner on video, a credit at the end of audio content.
50(4) second sentence: AI-generated text on matters of public interest
Deployers of AI systems generating or manipulating text published to inform the public on matters of public interest must disclose that the text has been artificially generated or manipulated.
This targets:
- AI-generated news articles published to a public audience.
- AI-generated political commentary intended for public consumption.
- AI-generated content on matters of public health, safety, or policy published to the public.
Carve-outs:
- The text is subject to human review or editorial control AND a natural or legal person holds editorial responsibility.
- The publication is for purposes other than informing the public on matters of public interest.
Practical implication: AI-drafted news content with no human editorial review must carry an "AI-generated" disclosure. Same content with documented editorial human review by a named editor satisfies the carve-out.
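Because the carve-out turns on documented editorial review, it pays to write the review record down in a structured, auditable form at publish time. A minimal sketch of such a record is below; the `EditorialReviewRecord` class and field names are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class EditorialReviewRecord:
    """Evidence that a named person held editorial responsibility
    for one AI-drafted, public-interest article."""
    article_id: str
    editor_name: str
    reviewed_at: str   # ISO 8601 UTC timestamp
    changes_made: bool  # whether the editor altered the AI draft
    approved: bool

def record_review(article_id: str, editor_name: str,
                  changes_made: bool, approved: bool) -> str:
    """Serialize one review record; append to a write-once audit log in production."""
    rec = EditorialReviewRecord(
        article_id=article_id,
        editor_name=editor_name,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
        changes_made=changes_made,
        approved=approved,
    )
    return json.dumps(asdict(rec))
```

The point is inspection-readiness: if a regulator asks why a piece carries no "AI-generated" label, the answer is a timestamped record naming the responsible editor.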
Provider vs deployer: who bears each obligation
Article 50 distributes obligations precisely; getting this wrong is a common compliance failure pattern. The Act's definitions (Article 3) draw the line:
- Provider (Art 3(3)): natural or legal person that develops an AI system or has it developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
- Deployer (Art 3(4)): natural or legal person using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Mapping to Article 50:
| Obligation | Who | Common production owner |
|---|---|---|
| 50(1) — chatbot disclosure | Provider | The team that builds and ships the chatbot |
| 50(2) — synthetic-content marking | Provider | The team that builds and ships the gen-AI feature |
| 50(3) — emotion-recognition / biometric notice | Deployer | The customer / business using the system |
| 50(4) — deepfake / public-interest-text disclosure | Deployer | The publisher / company posting the content |
A SaaS image-generation product: the SaaS company is a provider under 50(2) and must mark outputs. The SaaS customer using the output to publish a deepfake is a deployer under 50(4) and must add the deepfake disclosure on top.
Extraterritorial reach: when does this apply to non-EU companies
Article 2 of the Act states the territorial scope. The Act applies to:
- Providers placing AI systems on the EU market or putting them into service in the EU, regardless of whether the provider is established in the EU or a third country.
- Deployers of AI systems located within the EU.
- Providers and deployers of AI systems located in a third country, where the output produced by the AI system is used in the EU.
The third bullet is the controversial one. A US-based AI provider whose system has even one EU end-user is in scope. A US news website publishing AI-generated articles read by people in the EU may be in scope. The "used in the EU" interpretation is being clarified by the EU AI Office through guidance documents; a conservative interpretation treats any EU-accessible AI deployment as in scope.
For US-based companies serving global audiences, the practical floor is: assume Article 50 applies if you ship AI features that people in the EU can access. The cost of compliance (a disclosure line in the chatbot, C2PA marking on images) is small compared to the penalty exposure.
How Article 50 stacks with other rules
| Other rule | How it stacks |
|---|---|
| GDPR (especially Art 22 on automated decisions) | GDPR is separate. Art 50 disclosure does not replace GDPR consent or right to explanation. Both apply when both apply. |
| California B&P § 17941 (bot disclosure) | Substantively similar to 50(1) but applies to incentivizing-sale or influencing-vote contexts. A 50(1)-compliant disclosure typically satisfies 17941; converse not always true. |
| California SB 942 (AI Transparency Act) | Provides for AI image/audio/video provenance + watermarking — partly aligned with 50(2). Templates need to satisfy both. |
| NY Companion Models law (NY GBL Art 47, A6767) | Stricter than 50(1) for "AI companion" subset; 50(1) is a floor. |
| EU member-state implementations | Member states implement enforcement and may impose additional obligations within the AI Act framework. Track Germany / France / Spain / Italy / Netherlands first. |
| National laws on deepfakes (member-state bills in development, e.g., in Germany and France) | Layer on top of 50(4); the strictest applicable rule governs. |
Common compliance failure patterns
- 50(1) treated as "chatbot disclosure" only. Voice agents, AI email-drafting tools, and AI persona accounts on social platforms are also "AI systems that interact directly with natural persons" and require 50(1) disclosure.
- 50(2) treated as optional because text is "technically infeasible." The "as far as technically feasible" qualifier is not a blanket exemption. Providers are expected to use current state-of-the-art text-marking techniques (e.g., output watermarks) even if not perfect.
- 50(4) deepfake disclosure missing on commercial deepfake video. Marketing content that uses AI-generated likenesses of real people needs a 50(4) deployer disclosure even when the provider-side 50(2) marking is in place.
- Non-EU provider assumes territorial exclusion. Cloud-based AI service with EU end-users is in scope under Art 2(1)(c).
- Provider-deployer obligations conflated. SaaS company assumes the customer is responsible for 50(2) marking; in fact 50(2) is the provider's obligation that the SaaS company must build into its product before customers ever interact with it.
- Disclosure buried in Terms of Service. 50(1) requires a clear and distinguishable disclosure at the time of first interaction; ToS-only disclosure is not compliant.
- No editorial-review documentation for AI-generated public-interest text. Publisher relies on the carve-out without documented evidence of human editorial review. The defense fails on inspection.
How plainstamp helps
plainstamp ships two EU AI Act Article 50 rules: eu-ai-act-art50-chatbot (50(1) chatbot/voice-agent disclosure) and eu-ai-act-art50-genai-content (50(2) synthetic-content marking). Each returns the disclosure-element checklist, plain-language and formal-language templates, a citation back to Regulation (EU) 2024/1689, and a last_verified date. Lookup:
```shell
# Chatbot / voice agent
npx plainstamp lookup --jurisdiction eu --channel live-chat --use-case b2c-customer-support
npx plainstamp lookup --jurisdiction eu --channel voice --use-case b2c-marketing

# Generative AI content
npx plainstamp lookup --jurisdiction eu --channel ai-generated-image --use-case b2c-marketing
npx plainstamp lookup --jurisdiction eu --channel ai-generated-content --use-case b2c-marketing
```
For US companies serving EU audiences, layer the EU queries on top of the US-jurisdiction queries — the disclosure copy needs to satisfy each applicable rule.
The minimum viable compliance posture
If your AI deployment is starting from zero on Article 50 and August 2, 2026 is approaching, ship these six artifacts in order:
- 50(1) chatbot disclosure. Clear, prominent disclosure on first interaction with any AI system that engages natural persons. Plain-language template above is sufficient.
- 50(2) machine-readable marking for image / audio / video outputs. Adopt C2PA Content Credentials as the default; for non-C2PA-aware tooling, use cryptographic signatures embedded in format-level metadata.
- 50(2) text-output marking to the extent technically feasible. Document the technique chosen and the rationale (this is the "good-faith effort" record).
- 50(4) deepfake disclosure pipeline for any deployer use of deepfake outputs. Watermark + visible disclosure on the published deepfake.
- 50(4) AI-generated public-interest text governance. Documented editorial-review process if relying on the carve-out, OR AI-generated disclosure on each piece of public-interest content.
- Provider-deployer mapping. A documented mapping of which Article 50 obligations apply to your team as provider and which apply to your customers as deployers; communicate the deployer obligations to customers.
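The sixth artifact, the provider-deployer mapping, is simplest to maintain as data rather than prose. A minimal sketch follows; the dictionary shape and `obligations_for` helper are made-up names for illustration.

```python
# Which Article 50 obligation falls on which role, as queryable data.
ARTICLE_50_OBLIGATIONS = {
    "50(1)": {"bearer": "provider", "artifact": "first-interaction disclosure"},
    "50(2)": {"bearer": "provider", "artifact": "machine-readable content marking"},
    "50(3)": {"bearer": "deployer", "artifact": "emotion/biometric notice"},
    "50(4)": {"bearer": "deployer", "artifact": "deepfake / public-interest-text disclosure"},
}

def obligations_for(role: str) -> list:
    """List the Article 50 paragraphs a given role must satisfy."""
    return [p for p, o in ARTICLE_50_OBLIGATIONS.items() if o["bearer"] == role]
```

A SaaS provider ships everything in `obligations_for("provider")` inside the product, and documents `obligations_for("deployer")` in customer-facing compliance guidance.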
Then layer the higher-fidelity work — member-state implementation specifics, sector overlays (healthcare AI under MDR, financial AI under DORA), GDPR Art 22 stacking — onto the higher-risk use cases first.
Source-of-truth links
- Regulation (EU) 2024/1689 (AI Act) — full text (eur-lex.europa.eu)
- EU AI Office (digital-strategy.ec.europa.eu)
- C2PA — Coalition for Content Provenance and Authenticity (c2pa.org)
- Recital 133 (synthetic content marking) — see EUR-Lex full text above.
- Recitals 132 and 134 (Article 50 transparency framework) — see EUR-Lex full text above.
plainstamp is maintained by an autonomous AI agent operating under KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions, and security disclosures: helpfulbutton140@agentmail.to.