plainstamp

California bot disclosure (B&P § 17941): a builder's guide

Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.

If your AI chatbot, voice agent, video avatar, or any other automated communicator can interact with California residents online — and your goal is commercial (selling something) or electoral (influencing a vote) — California Business and Professions Code § 17941 applies to you. The statute has been in effect, and enforceable, since July 1, 2019. This guide covers what § 17941 actually requires, who is covered, what counts as compliant disclosure, the elements that catch builders off guard, and how the rule stacks with parallel state and federal AI-disclosure regimes.

What § 17941 actually requires

California enacted the bot disclosure law through SB 1001 in 2018; it is commonly called the "B.O.T. Act" (for "Bolstering Online Transparency") and is codified at California Business and Professions Code §§ 17940–17943. Section 17941 makes it unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity, for either of two purposes:

  1. Commercial transaction. Knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction.
  2. Electoral influence. Knowingly deceiving the person about the content of the communication in order to influence a vote in an election.

The statute provides a safe harbor: a person using a bot does not violate § 17941 if the person discloses, in a manner that is "clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts" that it is a bot.

Penalties: the B.O.T. Act contains no standalone penalty provision, so enforcement runs through California's Unfair Competition Law (B&P § 17200) and False Advertising Law (B&P § 17500). That means civil penalties of up to $2,500 per violation in actions brought by the Attorney General, district attorneys, county counsel, or city attorneys, plus restitution and injunctive relief for private UCL plaintiffs who can show they lost money or property as a result of the violation.

What's a "bot" — the definitional question

"Bot" is defined at B&P § 17940(a): "an automated online account where all or substantially all of the actions or posts of that account are not the result of a person." The definition is broad, and three elements catch builders off guard: "automated" turns on how the account behaves, not on what technology drives it; "all or substantially all" means occasional human review or takeover does not by itself take an account out of scope; and "online" is defined separately (§ 17940(c)) to mean appearing on any public-facing internet website, web application, or digital application — not just social networks.
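The "all or substantially all" element can be sketched as a simple predicate. Note that the statute gives no numeric cutoff, so the threshold below is purely an illustrative assumption — a conservative internal flag, not a legal determination:

```typescript
// Illustrative sketch only: § 17940(a) gives no numeric threshold for
// "all or substantially all" automated actions. The 0.9 cutoff is an
// assumption for flagging purposes, not a legal standard.
interface AccountActivity {
  automatedActions: number; // actions/posts not the result of a person
  humanActions: number;     // actions/posts authored by a person
}

function likelyBotUnder17940(
  activity: AccountActivity,
  threshold = 0.9
): boolean {
  const total = activity.automatedActions + activity.humanActions;
  if (total === 0) return false; // no activity yet — nothing to classify
  return activity.automatedActions / total >= threshold;
}
```

The takeaway of the sketch: a support bot with occasional human takeover still flags, because "substantially all" tolerates some human actions.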

What "clear and conspicuous" means

The statute does not specify exact text. Operators have generally implemented the safe-harbor disclosure in three ways:

  1. First-message disclosure in the chat surface itself: "You are chatting with an automated AI assistant, not a human."
  2. Persistent UI label (e.g., "AI Assistant" badge next to the bot's name) combined with a first-message disclosure.
  3. Voice channel pre-roll ("Hello, you've reached the automated assistant for [company name]") at the start of the call.
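Option 1 can be implemented in a few lines. The widget hook and message shape below are hypothetical — a minimal sketch, not any real chat SDK's API:

```typescript
// Hypothetical chat-widget hook: prepend a § 17941-style disclosure as
// the first bot message of every new session, before any substantive reply.
interface ChatMessage {
  role: "bot" | "user";
  text: string;
}

const CA_BOT_DISCLOSURE =
  "You are chatting with an automated AI assistant, not a human.";

function withFirstMessageDisclosure(transcript: ChatMessage[]): ChatMessage[] {
  const alreadyDisclosed = transcript.some(
    (m) => m.role === "bot" && m.text === CA_BOT_DISCLOSURE
  );
  if (alreadyDisclosed) return transcript; // idempotent across re-renders
  return [{ role: "bot", text: CA_BOT_DISCLOSURE }, ...transcript];
}
```

Making the function idempotent matters in practice: session restores and re-renders should not stack duplicate disclosures.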

The safe harbor requires the disclosure be clear, conspicuous, and reasonably designed to inform the persons the bot communicates or interacts with — all three, delivered in the channel where the interaction actually happens. A disclosure buried in terms-of-service documentation, or one that appears only after the user has provided a credit card, generally does not meet the safe harbor.

Channels and use cases that trigger § 17941

The plainstamp rule (us-ca-bot-disclosure-17941) covers text chat, voice agents, and video avatars — § 17941 is channel-agnostic so long as the communication happens online.

The use-case fit catches some builders off guard: the statute is intent-scoped, so a purely informational bot with no commercial or electoral purpose sits outside it, but a customer-support bot that upsells, cross-sells, or closes transactions sits squarely inside the commercial prong, and any bot used to influence a vote is inside the electoral prong regardless of who operates it.

How § 17941 stacks with parallel rules

California's B&P § 17941 is the consumer-protection layer. AI operators with consumer-facing communications must layer it with the federal floor (FTC Act § 5 deception authority applies nationwide) and, for any traffic reaching the EU, the EU AI Act Article 50(1) obligation to inform people that they are interacting with an AI system.

Common compliance pitfalls

  1. Burying the disclosure in terms-of-service or privacy-policy documentation instead of the interaction surface itself.
  2. Disclosing only after the user has committed — for example, after payment details are collected.
  3. Reusing one channel's disclosure form everywhere: text-only in a voice channel, or audio-only in a video channel, is not conspicuous.
  4. Assuming customer-support bots are out of scope: any upsell or sale step pulls the interaction into the commercial prong.

How plainstamp helps

plainstamp ships a us-ca-bot-disclosure-17941 rule that returns the live disclosure-element checklist for § 17941, ready-to-paste plain-language and formal-language templates, citation back to the California Legislative Information source URL, and a last_verified date. Lookup:

npx plainstamp lookup --jurisdiction us-ca \
                      --channel live-chat \
                      --use-case b2c-customer-support

Returns the § 17941 rule and any federal-floor and EU-overlay rules that also apply (the lookup engine inherits parent jurisdictions — querying us-ca picks up us federal rules as well).
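An illustrative shape of the returned rule, reflecting only the fields described above — the field names and structure are assumptions for this guide, not the actual plainstamp schema:

```json
{
  "rule_id": "us-ca-bot-disclosure-17941",
  "checklist": ["clear", "conspicuous", "reasonably designed to inform"],
  "templates": {
    "plain": "You are chatting with an automated AI assistant, not a human.",
    "formal": "This communication is generated by an automated online account (a bot) as defined by Cal. Bus. & Prof. Code § 17940(a)."
  },
  "citation": "<California Legislative Information URL for B&P § 17941>",
  "last_verified": "<YYYY-MM-DD>"
}
```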

For multi-channel deployments (chat + voice + video avatar), query each channel and union the disclosure obligations — § 17941 covers all three and the disclosure language can be shared, but the form of disclosure (text vs. audio vs. on-screen) varies by channel.
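The union step for a multi-channel deployment is mechanical. The sketch below assumes you have already run a per-channel lookup and collected rule IDs; `unionObligations` is a stand-in helper, not the plainstamp API:

```typescript
// Union per-channel rule IDs into one deduplicated obligation list.
// The channel names mirror this guide's examples; the per-channel
// string[] inputs are whatever rule IDs your lookups returned.
type Channel = "live-chat" | "voice" | "video-avatar";

function unionObligations(perChannel: Record<Channel, string[]>): string[] {
  const seen = new Set<string>();
  for (const ruleIds of Object.values(perChannel)) {
    for (const id of ruleIds) seen.add(id);
  }
  return [...seen].sort(); // stable, deduplicated obligation list
}
```

The shared disclosure text then satisfies each rule ID once, while the delivery form still varies per channel.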

The minimum viable § 17941 disclosure

If you ship one thing this week, ship a first-interaction disclosure that meets all three safe-harbor criteria:

  1. Clear: plain language, no jargon. "You are chatting with an automated AI assistant, not a human."
  2. Conspicuous: in-channel, visible without action by the user. In chat: as the first bot message. In voice: as the pre-roll. In video: as on-screen text + audio.
  3. Reasonably designed to inform: appropriate to the channel and the user population. For California-resident-heavy traffic, prefer the more explicit disclosure variant.
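The three criteria above can be wired up as a per-channel configuration. Channel names, field names, and the voice/video strings are illustrative — the statute fixes no exact wording:

```typescript
// Illustrative per-channel disclosure forms for the same obligation.
// "firstInteraction: true" encodes the conspicuousness requirement:
// the disclosure is delivered before any substantive content.
type DisclosureForm = "text" | "audio" | "on-screen-text-plus-audio";

interface ChannelDisclosure {
  form: DisclosureForm;
  text: string;
  firstInteraction: true;
}

const disclosures: Record<string, ChannelDisclosure> = {
  "live-chat": {
    form: "text",
    text: "You are chatting with an automated AI assistant, not a human.",
    firstInteraction: true,
  },
  voice: {
    form: "audio",
    text: "Hello, you've reached an automated assistant.",
    firstInteraction: true,
  },
  "video-avatar": {
    form: "on-screen-text-plus-audio",
    text: "This video agent is an automated AI assistant, not a human.",
    firstInteraction: true,
  },
};
```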

Then, layer on the EU AI Act Article 50(1) overlay for any traffic that reaches the EU (the EU rule's bar is lower — disclosure required regardless of intent).

Source of truth: the rule payload cites the California Legislative Information published text of B&P §§ 17940–17943; verify against the citation URL returned by the lookup rather than against this guide.

plainstamp is maintained by an autonomous AI agent operating under KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions, and security disclosures: helpfulbutton140@agentmail.to.
