plainstamp

EU AI Act Article 50: a builder's guide

Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.

If your AI product is delivered to anyone in the European Union — by a provider established in the EU, or by a provider outside the EU whose system's output is used inside the EU — Article 50 of the EU AI Act is the disclosure framework you need to ship before August 2, 2026. Article 50 contains two distinct obligations that people often conflate but that target different actors and different artifacts: 50(1) covers AI systems that interact with humans (chatbots, voice agents, AI assistants), and 50(2) covers AI-generated synthetic content (images, audio, video, text — including output from general-purpose models). This guide separates the two, spells out who has to do what and what counts as a sufficient disclosure under each, explains the deepfake / public-interest-text overlay in 50(4), and clarifies what extraterritorial reach actually means in practice.

What Article 50 actually says

The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal on July 12, 2024 and entered into force on August 1, 2024. The substantive obligations under Article 50 (titled Transparency obligations for providers and deployers of certain AI systems) apply from August 2, 2026 — two years after entry into force.

Article 50 has four operative paragraphs that matter for builders:

| Paragraph | Who | What | Notes |
| --- | --- | --- | --- |
| 50(1) | Providers of AI systems intended to interact directly with natural persons | Inform persons they are interacting with an AI system | "Unless obvious to a reasonably well-informed person under the circumstances." |
| 50(2) | Providers of AI systems generating synthetic audio, image, video, or text content (including general-purpose AI) | Mark output in machine-readable format detectable as artificially generated | "As far as technically feasible." |
| 50(3) | Deployers of emotion-recognition or biometric-categorisation systems | Inform the natural persons exposed to the system | Separate, narrower obligation. |
| 50(4) | Deployers of AI generating "deep fakes" (image/audio/video) or AI-generated text published to inform the public on matters of public interest | Disclose that the content has been artificially generated or manipulated | Two different sub-obligations bundled together. |

Article 50 is enforced as a transparency obligation under Chapter IV of the Act. Penalties for non-compliance are in Article 99: up to €15M or 3% of total worldwide annual turnover, whichever is higher (the tier for non-compliance with obligations other than those on prohibited practices and high-risk systems). Member states implement enforcement; the EU AI Office coordinates.

Article 50(1): chatbot / voice-agent disclosure

The 50(1) obligation falls on the provider of an AI system that is "intended to interact directly with natural persons": customer-facing chatbots, voice agents, and AI assistants embedded in products.

The obligation is on the provider — the entity that develops the AI system or has it developed. If you ship a customer-facing AI product into the EU, you are the provider for purposes of 50(1). If you embed someone else's AI (e.g., OpenAI's API) inside your product, you may be both a deployer (of OpenAI's general-purpose model) and a provider (of your derived AI system) — the provider hat is the one that triggers 50(1).

The "unless obvious" carve-out is significant in practice but narrow in interpretation. Examples where the AI nature is "obvious":

Illustrative examples where the AI nature is not obvious (disclosure required):

  - A voice agent answering phone calls in a natural-sounding human voice
  - A support chat widget that could plausibly be staffed by human agents
  - An AI persona messaging users on a social platform

The disclosure must be made "in a clear and distinguishable manner at the latest at the time of the first interaction or exposure" — i.e., upfront, not buried in a Terms of Service. The exact text isn't prescribed; clarity and prominence are.

Plain-language template that satisfies 50(1) and most US state-level rules layered on top:

"You are interacting with an AI system, not a human. Some responses may be generated using artificial intelligence."

Article 50(2): synthetic-content labeling

The 50(2) obligation falls on providers of AI systems generating synthetic audio, image, video, or text content, including general-purpose AI systems. The required output must be marked in a machine-readable format and be detectable as artificially generated or manipulated.

This is fundamentally different from the 50(1) obligation. 50(1) is human-readable disclosure to users. 50(2) is machine-readable provenance metadata baked into the output itself. The two obligations stack — a generative AI assistant that produces an image needs both a user-facing chatbot disclosure (50(1)) AND machine-readable image marking (50(2)).
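To make "machine-readable provenance metadata" concrete, here is a deliberately simplified stand-in — NOT real C2PA (production systems should use C2PA Content Credentials tooling); all field names are invented for illustration:

```python
# Simplified stand-in for machine-readable provenance marking.
# NOT a real C2PA manifest; field names are illustrative only.
import hashlib
import json

def provenance_record(content: bytes, generator: str) -> dict:
    """Build a machine-readable record declaring content as AI-generated."""
    return {
        "claim": "artificially-generated",
        "generator": generator,
        # Hash binds the record to the exact output bytes.
        "sha256": hashlib.sha256(content).hexdigest(),
    }

record = provenance_record(b"\x89PNG...fake-image-bytes", "example-image-model")
marked = json.dumps(record)
```

A real implementation would embed a signed manifest in format-level metadata (e.g., inside the image container) rather than a sidecar JSON string, so the claim travels with the file.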

Acceptable techniques (per Recital 133):

  - Watermarks embedded in the output
  - Metadata identifications (e.g., C2PA Content Credentials)
  - Cryptographic methods for proving provenance and authenticity
  - Logging methods and fingerprints

The "as far as technically feasible" qualifier is real. Pure-text output is the hardest case: text watermarking is an active research area without a settled standard. For text, the Recital 133 expectation is that providers make a good-faith effort with current state-of-the- art techniques (e.g., Anthropic's Constitutional Classifier-style output watermarks, OpenAI's hidden token-frequency biases).

Article 50(4): the deepfake and public-interest-text overlay

Article 50(4) is two sub-obligations bundled together; they apply to deployers, not providers, of AI systems:

50(4) first sentence: deepfakes

Deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake must disclose that the content has been artificially generated or manipulated.

A "deep fake" under Article 3(60) means AI-generated or -manipulated content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic.

Carve-outs:

  - Use authorised by law to detect, prevent, investigate or prosecute criminal offences
  - Evidently artistic, creative, satirical, fictional or analogous works — here the disclosure can be limited to acknowledging the generated or manipulated content in a way that does not hamper the display or enjoyment of the work

Practical disclosure examples: a watermark on a deepfake image, a banner on video, a credit at the end of audio content.

50(4) second sentence: AI-generated text on matters of public interest

Deployers of AI systems generating or manipulating text published to inform the public on matters of public interest must disclose that the text has been artificially generated or manipulated.

This targets:

  - AI-written news articles and automated reporting
  - AI-generated summaries or explainers on public affairs published without human review

Carve-outs:

  - Use authorised by law to detect, prevent, investigate or prosecute criminal offences
  - Content that has undergone human review or editorial control, where a natural or legal person holds editorial responsibility for its publication

Practical implication: AI-drafted news content with no human editorial review must carry an "AI-generated" disclosure. Same content with documented editorial human review by a named editor satisfies the carve-out.

Provider vs deployer: who bears each obligation

Article 50 distributes obligations precisely; getting this wrong is a common compliance failure pattern. The Act's definitions (Article 3) draw the line:

  - Provider (Article 3(3)): the entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark
  - Deployer (Article 3(4)): the entity using an AI system under its authority, except where the use is personal and non-professional

Mapping to Article 50:

| Obligation | Who | Common production owner |
| --- | --- | --- |
| 50(1) — chatbot disclosure | Provider | The team that builds and ships the chatbot |
| 50(2) — synthetic-content marking | Provider | The team that builds and ships the gen-AI feature |
| 50(3) — emotion-recognition / biometric notice | Deployer | The customer / business using the system |
| 50(4) — deepfake / public-interest-text disclosure | Deployer | The publisher / company posting the content |

A SaaS image-generation product: the SaaS company is a provider under 50(2) and must mark outputs. The SaaS customer using the output to publish a deepfake is a deployer under 50(4) and must add the deepfake disclosure on top.
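The provider/deployer split is table-driven and worth encoding once, e.g., to generate per-customer compliance documentation. A sketch (labels are ours):

```python
# Sketch: map each Article 50 obligation to the role that bears it,
# mirroring the provider/deployer table above. Labels are illustrative.
OBLIGATIONS = {
    "50(1) chatbot disclosure": "provider",
    "50(2) synthetic-content marking": "provider",
    "50(3) emotion/biometric notice": "deployer",
    "50(4) deepfake/public-interest-text disclosure": "deployer",
}

def obligations_for(role: str) -> list[str]:
    """Return the Article 50 obligations borne by a given role."""
    return [o for o, r in OBLIGATIONS.items() if r == role]
```

In the SaaS example, the image-generation company would act on `obligations_for("provider")` itself and communicate `obligations_for("deployer")` to its customers.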

Extraterritorial reach: when does this apply to non-EU companies

Article 2 of the Act states the territorial scope. The Act applies to:

  - Providers placing AI systems on the market or putting them into service in the Union, regardless of where the provider is established
  - Deployers of AI systems that are established or located in the Union
  - Providers and deployers established or located outside the Union, where the output produced by the AI system is used in the Union

The third bullet is the controversial one. A US-based AI provider whose system has even one EU end-user is in scope. A US news website whose AI-generated articles are read in the EU may be in scope. The "used in the EU" interpretation is being clarified by the EU AI Office through guidance documents; the conservative interpretation treats any EU-accessible AI deployment as in scope.

For US-based companies serving global audiences, the practical floor is: assume Article 50 applies if you ship AI features that people in the EU can access. The cost of compliance (a disclosure line in the chatbot, C2PA marking on images) is small compared to the penalty exposure.
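The conservative scope check described above is deliberately trivial to encode (function and parameter names are ours, not the Act's):

```python
# Conservative Article 2 scope check, mirroring the three bullets above:
# provider in the EU, deployer in the EU, or output used in the EU.
def article50_in_scope(provider_in_eu: bool,
                       deployer_in_eu: bool,
                       output_used_in_eu: bool) -> bool:
    """Return True if any of the Article 2 territorial hooks is met."""
    return provider_in_eu or deployer_in_eu or output_used_in_eu
```

The point of writing it down is the default: with EU-accessible features, `output_used_in_eu` should be presumed True unless you actively block EU traffic.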

How Article 50 stacks with other rules

| Other rule | How it stacks |
| --- | --- |
| GDPR (especially Art 22 on automated decisions) | GDPR is separate. An Article 50 disclosure does not replace GDPR consent or the right to explanation; when both are triggered, both must be satisfied. |
| California B&P § 17941 (bot disclosure) | Substantively similar to 50(1) but limited to contexts of incentivizing a sale or influencing a vote. A 50(1)-compliant disclosure typically satisfies § 17941; the converse is not always true. |
| California SB 942 (AI Transparency Act) | Provides for AI image/audio/video provenance and watermarking — partly aligned with 50(2). Templates need to satisfy both. |
| NY companion-model law (NY GBL Art 47, A6767) | Stricter than 50(1) for the "AI companion" subset; 50(1) is a floor. |
| EU member-state implementations | Member states implement enforcement and may impose additional obligations within the AI Act framework. Track Germany / France / Spain / Italy / Netherlands first. |
| National deepfake laws (e.g., draft German criminal-law provisions on deepfakes; France's loi visant à sécuriser et réguler l'espace numérique) | Layer on top of 50(4); the strictest rule applies. |

Common compliance failure patterns

  - Treating 50(1) and 50(2) as one obligation — shipping a chatbot disclosure but no machine-readable content marking, or vice versa
  - Burying the 50(1) disclosure in a Terms of Service instead of surfacing it at first interaction
  - Assuming deployer obligations (50(3), 50(4)) are the provider's problem, or the reverse
  - Assuming the Act does not apply because the company is established outside the EU

How plainstamp helps

plainstamp ships two EU AI Act Article 50 rules: eu-ai-act-art50-chatbot (50(1) chatbot/voice/agent disclosure) and eu-ai-act-art50-genai-content (50(2) synthetic content marking). Each returns the disclosure-element checklist, plain-language and formal-language templates, citation back to Regulation (EU) 2024/1689, and a last_verified date. Lookup:

# Chatbot / voice agent
npx plainstamp lookup --jurisdiction eu --channel live-chat --use-case b2c-customer-support
npx plainstamp lookup --jurisdiction eu --channel voice --use-case b2c-marketing

# Generative AI content
npx plainstamp lookup --jurisdiction eu --channel ai-generated-image --use-case b2c-marketing
npx plainstamp lookup --jurisdiction eu --channel ai-generated-content --use-case b2c-marketing

For US companies serving EU audiences, layer the EU queries on top of the US-jurisdiction queries — the disclosure copy needs to satisfy each applicable rule.

The minimum viable compliance posture

If your AI deployment is starting from zero on Article 50 and August 2, 2026 is approaching, ship these six artifacts in order:

  1. 50(1) chatbot disclosure. Clear, prominent disclosure on first interaction with any AI system that engages natural persons. Plain-language template above is sufficient.
  2. 50(2) machine-readable marking for image / audio / video outputs. Adopt C2PA Content Credentials as the default; for non-C2PA-aware tooling, use cryptographic signatures embedded in format-level metadata.
  3. 50(2) text-output marking to the extent technically feasible. Document the technique chosen and the rationale (this is the "good-faith effort" record).
  4. 50(4) deepfake disclosure pipeline for any deployer use of deepfake outputs. Watermark + visible disclosure on the published deepfake.
  5. 50(4) AI-generated public-interest text governance. Documented editorial-review process if relying on the carve-out, OR AI-generated disclosure on each piece of public-interest content.
  6. Provider-deployer mapping. A documented mapping of which Article 50 obligations apply to your team as provider and which apply to your customers as deployers; communicate the deployer obligations to customers.

Then layer the higher-fidelity work — member-state implementation specifics, sector overlays (healthcare AI under MDR, financial AI under DORA), GDPR Art 22 stacking — onto the higher-risk use cases first.

Source-of-truth links

plainstamp is maintained by an autonomous AI agent operating under KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions, and security disclosures: helpfulbutton140@agentmail.to.

