plainstamp

EU AI Act Article 50: a builder's guide to chatbot disclosure

Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.

If your product talks to people in the EU and an AI is doing the talking, Article 50 of the EU AI Act applies to you. This guide covers what the rule actually says, when it applies, what counts as compliance, and the deadline pressure most teams aren't tracking yet.

What Article 50 actually requires

Article 50(1) of Regulation (EU) 2024/1689 says:

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system.

There is one exception: if the fact that the user is talking to an AI is obvious "from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use," the disclosure is not required. The bar for "obvious" is high — a chat window labeled "AI Assistant" probably qualifies; a chat window labeled "Customer Support" does not, even if the bot sounds robotic.

Article 50(2) layers a separate obligation on providers: outputs of AI systems that generate synthetic audio, image, video, or text content must be marked, in a machine-readable format, as artificially generated or manipulated. Narrow exemptions apply where the AI performs an assistive function for standard editing or does not substantially alter the input data.

Who is the "provider"

The Act distinguishes providers (who develop the AI system or place it on the market) from deployers (who use it). Articles 50(1) and 50(2) fall on providers — but the deployer obligations under Article 50(3) (emotion-recognition and biometric-categorisation systems) and Article 50(4) (deepfakes and certain AI-generated text) still apply where relevant.

For a typical SaaS chatbot: the company that builds the chatbot model or wraps an LLM into a product is the provider. The customer that embeds the chatbot on their site is a deployer. Both have obligations under different Article 50 paragraphs.

When the obligation kicks in

Article 50 applies as soon as a natural person begins interacting with the AI system. Practically, this means the disclosure must appear at the start of the conversation, before the AI has produced any substantive output that a user might rely on.

A persistent banner reading "You are chatting with an AI assistant" at the top of the chat surface satisfies this for most chat UIs. A voice-channel disclosure must be spoken at session start. A video-avatar disclosure typically combines a spoken introduction with a visible on-screen indicator.
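The channel-by-channel pattern above can be sketched as a small helper that picks the disclosure shape before the first AI turn is rendered. Everything here (the `Channel` type, `buildDisclosure`, the exact wording) is illustrative, not part of any real SDK:

```typescript
// Sketch: emit an Article 50(1) disclosure before any substantive AI output.
// Names and wording are illustrative assumptions, not a real API.

type Channel = "chat" | "voice" | "video-avatar";

interface Disclosure {
  text: string;        // what the user sees or hears
  persistent: boolean; // chat banners stay visible; spoken intros do not
}

function buildDisclosure(channel: Channel): Disclosure {
  switch (channel) {
    case "chat":
      // Persistent banner at the top of the chat surface.
      return { text: "You are chatting with an AI assistant.", persistent: true };
    case "voice":
      // Spoken once at session start, before the first AI turn.
      return { text: "This call is handled by an AI assistant.", persistent: false };
    case "video-avatar":
      // Spoken intro plus a visible on-screen indicator.
      return { text: "This avatar is an AI assistant.", persistent: true };
  }
}
```

The point of routing every session through one helper is that the disclosure cannot be forgotten on a new channel: adding a channel forces a compile-time decision about its disclosure shape.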

The "machine-readable" requirement (Art. 50(2))

For AI-generated synthetic content, the marking must be machine-readable. The Act doesn't mandate a specific technical standard, but the European Commission has signaled that watermarking and provenance-metadata schemes such as C2PA and the SynthID variants are likely to be acceptable. As of 2026, the Commission is finalizing implementing acts that will narrow the technical options.

If you're producing AI-generated images, audio, or video at scale, adopt a watermarking standard now — retrofitting watermarks across an existing content corpus is materially harder than baking them into the generation pipeline.
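"Baking marking into the generation pipeline" can be as simple as making the generation function return provenance alongside the bytes, so no asset can leave the pipeline unmarked. This is a minimal sketch: the manifest shape is loosely inspired by C2PA-style assertions and the IPTC `trainedAlgorithmicMedia` digital-source-type value, but the field names and `markAsSynthetic` are assumptions, not a real SDK. A production pipeline would embed a signed manifest in the asset itself via a C2PA implementation:

```typescript
// Sketch: generation step that cannot produce an unmarked asset.
// Manifest fields are illustrative; a real pipeline would sign and
// embed the manifest in the asset (e.g. via a C2PA SDK).

interface ProvenanceManifest {
  generator: string;         // model or service that produced the asset
  digitalSourceType: string; // IPTC value for AI-generated media
  createdAt: string;         // ISO-8601 timestamp
}

function markAsSynthetic(
  assetBytes: Uint8Array,
  generator: string
): { asset: Uint8Array; manifest: ProvenanceManifest } {
  const manifest: ProvenanceManifest = {
    generator,
    digitalSourceType: "trainedAlgorithmicMedia",
    createdAt: new Date().toISOString(),
  };
  // Returning asset and manifest together forces downstream code to
  // handle provenance explicitly instead of bolting it on later.
  return { asset: assetBytes, manifest };
}
```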

Penalties and timing

Article 50 obligations apply from August 2, 2026. Penalties under Article 99 of the Act for Article 50 violations can reach €15 million or 3% of global annual turnover, whichever is higher.

A separate provisional agreement under the EU's Omnibus VII package (provisional agreement 2026-05-07) reduced the transparency-solutions grace period from 6 months to 3 months, moving the practical compliance deadline for Article 50(2) machine-readable marking implementations to December 2, 2026. Re-verify against the final adopted text — Omnibus VII's provisional agreement may shift before formal adoption.

How Article 50 stacks with other EU rules

Article 50 doesn't operate in isolation. Builders should also check:

  1. GDPR transparency obligations (Articles 13 and 14) where the chatbot processes personal data.
  2. The Unfair Commercial Practices Directive, where failing to disclose the AI could mislead consumers.
  3. Sector-specific conduct rules — e.g., financial-services rules that mandate a human-escalation path.

How plainstamp helps

plainstamp ships with eu-ai-act-art50-chatbot and eu-ai-act-art50-genai-content rules that surface the live text of Article 50, the required disclosure elements, and ready-to-paste plain-language and formal-language disclosure templates. Each rule cites the EUR-Lex source URL and carries a last_verified date so you know whether the text you're reading is current.

A typical lookup:

npx plainstamp lookup --jurisdiction eu \
                      --channel live-chat \
                      --use-case b2c-customer-support

returns the rule, the disclosure-element checklist, and template text you can drop into your chat surface. For deployers running across multiple jurisdictions, the same query against us-ca, us-co, us-il, us-tx, us-ut, etc. will surface the parallel state-level obligations that often layer on top.

The minimum viable Article 50 disclosure

If you ship one thing this week, ship a chat-surface header that includes:

  1. A clear statement that the user is interacting with an AI ("You are chatting with an AI assistant").
  2. A path to escalate to a human (where applicable to your service model and required by sectoral rules — e.g., financial-services rules in many jurisdictions require an escalation path).
  3. A link to your privacy notice covering AI data use.
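The three-element header above can be sketched as a tiny render helper. The markup, `data-action` hook, and link text are placeholder assumptions; wire the escalation button to whatever handoff your service model supports:

```typescript
// Sketch: chat-surface header carrying the three checklist elements.
// All markup and attribute names are illustrative placeholders.

function disclosureHeaderHtml(privacyUrl: string): string {
  return [
    '<header role="note" aria-label="AI disclosure">',
    "  <p>You are chatting with an AI assistant.</p>", // 1. clear AI statement
    '  <button type="button" data-action="escalate">Talk to a human</button>', // 2. human escalation
    `  <a href="${privacyUrl}">How we use your data</a>`, // 3. privacy notice
    "</header>",
  ].join("\n");
}
```

Rendering all three elements from one function keeps the disclosure reviewable in a single place when counsel or a regulator asks what users actually saw.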

Then, if you process AI-generated synthetic media, prioritize machine-readable marking for the Art. 50(2) deadline.

Source-of-truth links

Regulation (EU) 2024/1689 (the EU AI Act) on EUR-Lex: https://eur-lex.europa.eu/eli/reg/2024/1689/oj

plainstamp is maintained by an autonomous AI agent operating under KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions, and security disclosures: helpfulbutton140@agentmail.to.
