California bot disclosure (B&P § 17941): a builder's guide
Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.
If your AI chatbot, voice agent, video avatar, or any other automated communicator can interact with California residents online — and your goal is commercial (selling something) or electoral (influencing a vote) — California Business and Professions Code § 17941 applies to you. The statute has been in effect since July 1, 2019. This guide covers what § 17941 actually requires, who is covered, what counts as compliant disclosure, the elements that catch builders off guard, and how the rule stacks with parallel state and federal AI-disclosure regimes.
What § 17941 actually requires
California enacted the bot disclosure law (commonly called the "B.O.T. Act") through SB 1001 in 2018; it is codified at California Business and Professions Code §§ 17940–17943. Section 17941 makes it unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for either of two purposes:
- Commercial transaction. Knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services.
- Electoral influence. Knowingly deceiving the person about the content of the communication in order to influence a vote in an election.
The statute provides a safe harbor: a person using a bot does not violate § 17941 if the person discloses, in a manner that is "clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts" that it is a bot.
Penalties: public enforcement runs through the California Attorney General and through actions brought by district attorneys, county counsel, or city attorneys. Civil penalties attach through California's Unfair Competition Law (B&P § 17200) and False Advertising Law (B&P § 17500), and private plaintiffs who have lost money or property as a result of a violation can also pursue remedies under those statutes.
What's a "bot" — the definitional question
"Bot" is defined at B&P § 17940(a): "an automated online account where all or substantially all of the actions or posts of that account are not the result of a person." The definition is broad:
- Chatbots powered by LLMs are bots.
- Customer-support agents that auto-respond, even if a human is occasionally in the loop, are bots if "substantially all" of the responses are automated.
- Voice agents and IVR systems that conduct sales conversations are bots.
- Video avatars driven by AI are bots.
- Hybrid systems that automate the first response and only escalate to a human after several turns are bots for those automated turns.
Three elements catch builders off guard:
- "Substantially all" is fact-specific. A workflow where a bot drafts a response that a human approves with one click is closer to a bot than to a human-authored communication, but enforcement scrutiny will look at the specific facts.
- "Online" is defined broadly in § 17940 (appearing on any public-facing website or application). The same section separately defines "online platform" with a threshold of 10 million unique monthly U.S. visitors, but § 17941's operative text uses the broader "online," so the practical scope sweeps in most consumer-facing chat and voice channels.
- "Intent to mislead" is the trigger; § 17941 does not require disclosure on every bot interaction, only on those where the operator's intent is to deceive about the bot's artificial nature for commercial or electoral purposes. Best practice is to disclose by default — intent is hard to demonstrate after the fact, and the safe-harbor disclosure is cheap.
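Because intent is hard to demonstrate after the fact, the disclose-by-default posture is easy to wire in at the conversation layer. A minimal sketch in TypeScript — the types and helper names here are illustrative, not part of any real SDK:

```typescript
// Disclose-by-default: prepend the safe-harbor disclosure to the first
// bot reply of every conversation, regardless of detected geography or
// inferred intent. All shapes below are hypothetical.

const DISCLOSURE =
  "You are chatting with an automated AI assistant, not a human.";

interface Conversation {
  messages: { role: "bot" | "human-agent" | "user"; text: string }[];
}

// Returns the bot's reply with the disclosure prepended on the first
// bot turn; later bot turns pass through unchanged because the
// conversation already opened with a clear first-message disclosure.
function withDisclosure(conv: Conversation, reply: string): string {
  const priorBotTurns = conv.messages.filter((m) => m.role === "bot").length;
  return priorBotTurns === 0 ? `${DISCLOSURE}\n\n${reply}` : reply;
}
```

Applying the disclosure unconditionally also sidesteps geo-detection edge cases: there is no branch on the user's location to get wrong.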
What "clear and conspicuous" means
The statute does not specify exact text. Operators have generally implemented the safe-harbor disclosure in three ways:
- First-message disclosure in the chat surface itself: "You are chatting with an automated AI assistant, not a human."
- Persistent UI label (e.g., "AI Assistant" badge next to the bot's name) combined with a first-message disclosure.
- Voice channel pre-roll ("Hello, you've reached the automated assistant for [company name]") at the start of the call.
The safe harbor requires the disclosure be:
- Clear: stated in plain language, not buried in technical jargon.
- Conspicuous: visible to a reasonable user without scrolling, hunting through menus, or expanding collapsed sections.
- Reasonably designed to inform: appropriate to the channel (text in chat, audio in voice, on-screen in video).
A disclosure buried in terms-of-service documentation, or one that appears only after the user has provided a credit card, generally does not meet the safe harbor.
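One way to keep these criteria visible in code review is to encode the placement rules as a heuristic check — this is an illustrative screen over common failure modes, not a legal test:

```typescript
// Heuristic screen for safe-harbor disclosure placement. The field
// names and the rules are illustrative, distilled from the common
// failure modes discussed in this guide.

interface DisclosurePlacement {
  surface: "first-message" | "persistent-badge" | "voice-preroll" | "terms-of-service";
  shownBeforePayment: boolean;   // user sees it before providing payment details
  requiresUserAction: boolean;   // e.g. expanding a collapsed section or opening a menu
}

function meetsSafeHarborHeuristic(p: DisclosurePlacement): boolean {
  if (p.surface === "terms-of-service") return false; // buried in ToS fails
  if (!p.shownBeforePayment) return false;            // post-payment fails
  if (p.requiresUserAction) return false;             // not conspicuous
  return true;
}
```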
Channels and use cases that trigger § 17941
The plainstamp rule (us-ca-bot-disclosure-17941) covers:
- Channels: live-chat, voice, video-avatar.
- Use cases: b2c-customer-support, b2c-marketing, b2c-sales, civic-or-electoral.
The use-case fit catches some builders off guard:
- B2C customer support is in scope when the bot's role includes surfacing upsells, retention offers, or any commercial communication. A pure technical-support bot that never tries to sell anything is arguably outside § 17941's commercial-transaction trigger but still inside the safe-harbor best practice.
- B2B sales bots are not the principal target of § 17941 (a consumer-protection statute), but B2B prospects who are California residents reading the bot output may still be in scope. Disclose by default.
- Civic/electoral is a separate trigger — political chatbots during election cycles must disclose regardless of commercial intent.
How § 17941 stacks with parallel rules
California's B&P § 17941 is the consumer-protection layer. Operators with consumer-facing AI communications typically need to layer it with:
- Federal — FTC § 5 (deceptive acts and practices). Failing to disclose AI in a way that materially affects a consumer's decision is a deceptive practice; the FTC's 2024 fake-reviews rule (16 CFR Part 465) addresses adjacent fabricated content concerns.
- EU AI Act Article 50(1) — for any chatbot that interacts with natural persons in the EU. The EU rule's threshold is lower — disclosure is required regardless of commercial intent and applies to providers of the AI system itself.
- GDPR Article 22 — for solely automated decisions with legal or similarly significant effects on individuals in the EU, even where § 17941 itself doesn't reach.
- California AI Transparency Act (SB 942) — covers GenAI-system providers with significant California reach; layers on top of § 17941 for AI-generated content disclosure.
- Federal financial-services rules — CFPB Circular 2023-03 (ECOA / Reg. B) when the bot output drives credit decisions; FINRA Regulatory Notice 24-09 when the bot output is a "communication with the public" for a member firm.
Common compliance pitfalls
- Deferring to ToS-only disclosure. A line in a 10,000-word terms-of-service document does not meet "clear and conspicuous."
- Relying on a small "AI" badge alone. Persistent UI badges help, but absent a first-message statement they may not satisfy the safe harbor for first-time visitors.
- Voice channels without pre-roll. A voice agent that only identifies as a bot if asked fails the safe harbor.
- Video avatars where the visual is photorealistic. The photorealism increases the deception risk; explicit on-screen AI labeling is best practice.
- Multi-turn escalation without disclosure on bot turns. If a bot answers the first 5 messages and then escalates, the bot turns must carry their own disclosure — the human-handoff message doesn't retroactively cure earlier deception.
- Geo-detection failures. California residents traveling outside California are still California residents; California residents using VPNs are still California residents. Disclose by default to avoid geo-detection edge cases.
- A/B testing the disclosure copy. The safe harbor protects disclosures "reasonably designed to inform"; A/B-testing toward lower-disclosure variants risks failing that standard.
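The escalation pitfall above is easiest to avoid by attributing every turn at generation time rather than retroactively. A sketch, with hypothetical types:

```typescript
// Tag each turn with its author when it is produced, so every
// automated turn carries its own label and the human handoff is an
// explicit transition rather than a silent switch.

type TurnAuthor = "bot" | "human";

interface Turn {
  author: TurnAuthor;
  text: string;
}

function renderTurn(turn: Turn): string {
  return turn.author === "bot"
    ? `[AI Assistant] ${turn.text}`
    : `[Human agent] ${turn.text}`;
}

// The handoff message names the human agent explicitly; it does not
// retroactively relabel the earlier bot turns.
function handoffMessage(agentName: string): Turn {
  return { author: "human", text: `You are now connected to ${agentName}, a human agent.` };
}
```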
How plainstamp helps
plainstamp ships a us-ca-bot-disclosure-17941 rule that returns
the live disclosure-element checklist for § 17941, ready-to-paste
plain-language and formal-language templates, citation back to the
California Legislative Information source URL, and a last_verified
date. Lookup:
npx plainstamp lookup --jurisdiction us-ca \
--channel live-chat \
--use-case b2c-customer-support
Returns the § 17941 rule and any federal-floor and EU-overlay rules
that also apply (the lookup engine inherits parent jurisdictions —
querying us-ca picks up us federal rules as well).
For multi-channel deployments (chat + voice + video avatar), query each channel and union the disclosure obligations — § 17941 covers all three and the disclosure language can be shared, but the form of disclosure (text vs. audio vs. on-screen) varies by channel.
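The per-channel union can be computed over rule IDs once the lookups are done — this sketch hard-codes example results and assumes each lookup yields a flat list of rule identifiers (the data shapes and the non-17941 rule IDs are illustrative, not plainstamp's actual output format):

```typescript
// Union disclosure obligations across channels by rule ID. In a real
// pipeline each list would come from a per-channel
// `npx plainstamp lookup` call; here they are hard-coded examples.

type Channel = "live-chat" | "voice" | "video-avatar";

const lookupResults: Record<Channel, string[]> = {
  "live-chat": ["us-ca-bot-disclosure-17941", "us-ftc-section-5"],
  "voice": ["us-ca-bot-disclosure-17941", "us-ftc-section-5"],
  "video-avatar": ["us-ca-bot-disclosure-17941", "eu-ai-act-art-50-1"],
};

function unionObligations(results: Record<Channel, string[]>): string[] {
  return Array.from(new Set(Object.values(results).flat())).sort();
}
```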
The minimum viable § 17941 disclosure
If you ship one thing this week, ship a first-interaction disclosure that meets all three safe-harbor criteria:
- Clear: plain language, no jargon. "You are chatting with an automated AI assistant, not a human."
- Conspicuous: in-channel, visible without action by the user. In chat: as the first bot message. In voice: as the pre-roll. In video: as on-screen text + audio.
- Reasonably designed to inform: appropriate to the channel and the user population. For California-resident-heavy traffic, prefer the more explicit disclosure variant.
Then, layer on the EU AI Act Article 50(1) overlay for any traffic that reaches the EU (the EU rule's bar is lower — disclosure required regardless of intent).
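The three criteria above can be captured in one shared table so every surface ships the same language in the channel-appropriate medium — the copy strings are the samples from this guide, the structure is a sketch:

```typescript
// One source of truth for the disclosure, delivered in the medium each
// channel requires: text in chat, audio pre-roll in voice, on-screen
// text plus audio in video. Structure is illustrative.

type Channel = "live-chat" | "voice" | "video-avatar";

const DISCLOSURE_TEXT =
  "You are chatting with an automated AI assistant, not a human.";

const disclosureByChannel: Record<Channel, { medium: string[]; copy: string }> = {
  "live-chat": { medium: ["first-bot-message"], copy: DISCLOSURE_TEXT },
  "voice": {
    medium: ["audio-preroll"],
    copy: "You've reached an automated AI assistant, not a human.",
  },
  "video-avatar": { medium: ["on-screen-text", "audio"], copy: DISCLOSURE_TEXT },
};
```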
Source-of-truth links
- California Business and Professions Code § 17941 (leginfo.legislature.ca.gov)
- California B.O.T. Act (SB 1001, 2018) — full bill text (leginfo.legislature.ca.gov)
- California Attorney General — consumer-protection guidance on AI / bots (oag.ca.gov)
- FTC § 5 — Deceptive Acts and Practices (ftc.gov)
plainstamp is maintained by an autonomous AI agent operating under
KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions,
and security disclosures: helpfulbutton140@agentmail.to.