plainstamp

EEOC Title VII AI selection procedures: a builder's guide

Informational only — not legal advice. Verify against the cited regulator-published text and consult counsel for production deployments. See AI-DISCLOSURE.md in this package.

If your HR-tech product, applicant-tracking system, AI screening tool, video-interview analyzer, gamified-assessment platform, or AI-assisted sourcing tool is used in any employment selection decision — hiring, promotion, transfer, retention, or termination — Title VII of the Civil Rights Act of 1964 applies in full, even when the decision is mediated by AI. The U.S. Equal Employment Opportunity Commission's May 18, 2023 technical assistance on AI selection procedures is the federal-floor framework for what that means in production. This guide covers what Title VII requires, why the four-fifths rule is the operative compliance metric, the relationship to NYC Local Law 144 and other state-level mandates, the ADA reasonable-accommodation overlay, and what governance discipline a vendor needs in place before deploying an AI tool that touches employment decisions.

What the EEOC technical assistance actually says

On May 18, 2023, the EEOC issued Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII. The technical assistance does not create new law. It clarifies how existing law — Title VII (42 U.S.C. § 2000e et seq.) and the Uniform Guidelines on Employee Selection Procedures (1978), codified at 29 CFR Part 1607 — applies when employers use AI or algorithmic tools in selection procedures.

Three operative holdings:

  1. AI tools are "selection procedures." Any AI-driven test, scoring system, ranking algorithm, video-interview analyzer, or automated screening tool used as part of an employment decision is a "selection procedure" under the Uniform Guidelines. The guidance confirms that even informal uses (an AI tool that "helps" a recruiter shortlist candidates, where the recruiter then chooses) are selection procedures if the tool meaningfully affects the decision.
  2. The four-fifths rule applies. Under the Uniform Guidelines, a selection rate for any race, sex, or ethnic group that is less than four-fifths (80%) of the rate for the highest-rate group is "evidence of adverse impact." This rule applies to AI selection tools on the same terms as any other selection procedure.
  3. Vendor liability does not transfer. The employer is liable under Title VII for disparate-impact discrimination resulting from an AI tool, even when the tool was developed and operated by a third-party vendor. The vendor's representation that the tool was "bias-tested" is not a defense.

The technical assistance is interpretive, not regulatory — the binding obligation is Title VII's prohibition on disparate-impact discrimination, which has been law since Griggs v. Duke Power (1971). But the guidance signals current EEOC enforcement priorities and is treated as authoritative in EEOC investigations.

The four-fifths rule, in production

The four-fifths rule is the operational adverse-impact metric. To apply it to an AI selection tool:

  1. Compute the selection rate for each protected-class group. For a hiring tool: of all candidates who interacted with the tool, what fraction received a "pass" (advanced to the next step, received an offer, etc.), broken out by race, sex, and ethnicity. (Age is protected under the ADEA rather than Title VII, but many audit pipelines track it alongside the Title VII classes.)
  2. Identify the highest selection rate. Across all groups, find the group with the highest pass rate.
  3. Divide each other group's rate by the highest. If any group's rate is below 80% (4/5) of the highest group's rate, there is evidence of adverse impact for that group.
  4. If adverse impact is found, the employer must demonstrate the selection procedure is "job related for the position in question and consistent with business necessity" (the Uniform Guidelines' job-relatedness defense). Even then, the defense fails if a less-discriminatory alternative that serves the employer's legitimate needs is available.

Concrete example: An AI screening tool passes 40% of male applicants and 25% of female applicants. The female-to-male ratio is 25/40 = 0.625, which is less than 0.8 — evidence of adverse impact. The employer must either (a) demonstrate the tool is job-related and no less-discriminatory alternative exists, or (b) modify or replace the tool.
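The check above is a few lines of arithmetic. A minimal sketch (the function name and group labels are illustrative, not part of any plainstamp or EEOC API):

```python
def four_fifths_check(selection_rates):
    """Given {group: selection_rate}, return {group: (impact_ratio, flagged)}.

    impact_ratio is the group's rate divided by the highest group's rate;
    a ratio below 0.8 is evidence of adverse impact under the Uniform
    Guidelines' four-fifths rule.
    """
    highest = max(selection_rates.values())
    return {
        group: (rate / highest, rate / highest < 0.8)
        for group, rate in selection_rates.items()
    }

# Worked example from the text: 40% of male and 25% of female applicants pass.
result = four_fifths_check({"male": 0.40, "female": 0.25})
print(result["female"])  # (0.625, True) -> evidence of adverse impact
```

Note that the four-fifths rule is a rule of thumb, not a safe harbor: the EEOC guidance observes that smaller disparities can still support a charge, and tiny samples can produce spurious ratios, so production audits should pair this check with sample-size and significance analysis.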

"The vendor said the tool was bias-tested" is not a defense

This is the single most consequential clarification in the EEOC guidance. Many AI HR-tool vendors include in their marketing copy some variant of "our model has been audited for bias" — usually referring to a one-time pre-deployment audit on a synthetic dataset or against a generic baseline. The EEOC's position is that this does not discharge the employer's obligation: if use of the tool produces adverse impact on the employer's actual applicant pool, the employer remains liable under Title VII regardless of who developed, audited, or operates the tool, and regardless of any contractual representation from the vendor. The guidance accordingly encourages employers to conduct ongoing self-analyses of selection rates on their own data rather than relying on a vendor's certification.

Where state and local law layers on top

The EEOC technical assistance is the federal floor. Several jurisdictions have stricter mandatory rules:

| Jurisdiction | Law | What it adds beyond EEOC |
| --- | --- | --- |
| New York City | Local Law 144 (AEDT) | Mandatory annual independent bias audit, public publication of audit results, and applicant notice 10 business days before tool use. Mandatory, not recommended. |
| Illinois | Human Rights Act, amended by HB 3773 | Effective 2026-01-01: prohibits AI use in employment that subjects an employee or applicant to discrimination based on protected class. Requires advance notice. |
| Colorado | SB 24-205 (AI Act) | Effective 2026-06-30: applies to "high-risk AI systems," including those used in employment decisions; requires risk assessment, consumer notice, and a right to an explanation. |
| Maryland | LE § 3-717 | Facial recognition in interviews requires written consent. |
| California | AB 2930 (status pending — Newsom vetoed Sept 2024) | Would have required impact assessments for AI in employment. Not currently law; monitor for re-introduction. |

For a multi-state employer, the right rule is the strictest applicable local rule, not the EEOC technical assistance. A national HR-tech deployment must satisfy NYC Local Law 144's audit-and-publication mandate even if the federal EEOC guidance only "recommends" similar steps.
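Because the layers accumulate rather than replace one another, the effective requirement set for a deployment is the union of every applicable layer's requirements. A minimal sketch of that logic — the jurisdiction codes mirror this guide, but the requirement-element names are hypothetical simplifications, not a legal checklist or the plainstamp schema:

```python
# Illustrative mapping only: a simplified rendering of the overlays in the
# table above. Element names are hypothetical labels, not statutory terms.
NOTICE_ELEMENTS = {
    "us":        {"ai-tool-disclosure", "alternative-procedure",
                  "ada-accommodation-contact"},
    "us-ny-nyc": {"ai-tool-disclosure", "10-business-day-advance-notice",
                  "annual-bias-audit", "audit-summary-publication"},
    "us-il":     {"ai-tool-disclosure", "advance-notice"},
}

def required_elements(jurisdictions):
    """Union of notice elements across every applicable jurisdiction.

    Each layer must be satisfied independently, so requirements
    accumulate; nothing in a weaker layer subtracts from a stricter one.
    """
    out = set()
    for j in jurisdictions:
        out |= NOTICE_ELEMENTS.get(j, set())
    return out

# A national deployment with NYC and Illinois employees:
print(sorted(required_elements(["us", "us-ny-nyc", "us-il"])))
```

The union operation is the whole point: a single notice and audit process built against the strictest applicable layer will, by construction, also satisfy the federal floor.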

The ADA accommodation overlay

The EEOC issued a parallel technical assistance under the Americans with Disabilities Act on May 12, 2022 (The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees). Three operative principles:

  1. AI tools must be accessible. A tool that screens out candidates with disabilities — for example, a video-interview analyzer that penalizes candidates with speech disabilities, or a gamified assessment that's inaccessible to candidates with motor disabilities — violates the ADA when no reasonable accommodation is offered.
  2. Reasonable-accommodation obligation. Employers must provide alternative selection procedures, additional time, or modified formats on request — the same obligation that applies to traditional tests.
  3. Pre-employment medical inquiry rules apply. AI tools that probe for disability status (even indirectly, through behavioral patterns) trigger the ADA's prohibition on pre-offer medical inquiry.

The notice template needs to address both the Title VII concerns (adverse impact, alternative procedure) and the ADA concerns (accommodations).

What an EEOC-compliant AI selection notice looks like

The EEOC's technical assistance recommends — but does not strictly mandate — applicant notice and an alternative-procedure pathway. A notice template that meets the federal floor and most state-level overlays:

Notice: This employer uses an automated decision-making (AI) tool to assist in evaluating applications and employment decisions. The tool's outputs are reviewed by human decision-makers and are subject to the federal Title VII non-discrimination requirements. If you would prefer an alternative, non-AI selection process, or require a reasonable accommodation under the Americans with Disabilities Act, please contact our human resources team at [contact].

For NYC Local Law 144 compliance, the notice must additionally: state that an automated employment decision tool (AEDT) will be used, be provided at least 10 business days before the tool is used, identify the job qualifications and characteristics the tool will assess, and explain how to request an alternative selection process or a reasonable accommodation. Information about the data collected and the employer's data retention policy must also be made available on request.

For Illinois HB 3773 compliance (post-2026-01-01), the notice must identify the AI tool's role in the decision and the protected classes considered.

Common compliance failure patterns

How plainstamp helps

plainstamp ships a us-eeoc-title-vii-ai-employment-2023 rule that returns the recommended applicant notice elements, plain-language and formal-language templates, citations back to Title VII + the Uniform Guidelines + the EEOC technical assistance, and a last_verified date. Lookup:

npx plainstamp lookup --jurisdiction us \
                      --channel ai-generated-content \
                      --use-case employment-decisions

For multi-state HR-tech, query each state's jurisdiction in parallel to get the additional mandatory overlays:

npx plainstamp lookup --jurisdiction us-il --channel ai-generated-content --use-case employment-decisions
npx plainstamp lookup --jurisdiction us-ny-nyc --channel ai-generated-content --use-case employment-decisions
npx plainstamp lookup --jurisdiction us-co --channel ai-generated-content --use-case employment-decisions
npx plainstamp lookup --jurisdiction us-md --channel ai-generated-content --use-case employment-decisions

The disclosure copy must satisfy each applicable layer. The strictest state rule typically governs.

The minimum viable compliance posture

If your AI HR-tech deployment is starting from zero on EEOC + Title VII compliance, ship these five artifacts in order:

  1. Applicant notice template. Notice under the federal EEOC guidance + the strictest applicable state/city rule. Includes the alternative-procedure pathway and ADA accommodation contact.
  2. Four-fifths audit pipeline. A monthly or quarterly job that computes selection rates by protected class and surfaces any group below four-fifths of the highest. Owned by HR compliance, not the tool vendor.
  3. Alternative-procedure workflow. A non-AI fallback selection procedure, with documented decision criteria, that any candidate can request without adverse consequence.
  4. ADA accommodation pathway. A documented intake for accommodation requests with clear SLAs for response, alternative procedures, and modified formats.
  5. NYC Local Law 144 compliance (if any NYC employees). Annual independent bias audit by an "independent auditor" as defined in the rule, public publication of audit summary on the employer's website, and 10-business-day-advance applicant notice.
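The four-fifths audit pipeline (artifact 2) can be sketched end-to-end from the tool's own decision logs. This is a minimal, assumption-laden version — real pipelines need minimum-sample-size handling, statistical-significance checks, and a durable audit trail; the record shape `(group, passed)` is illustrative:

```python
from collections import defaultdict

def adverse_impact_report(outcomes, threshold=0.8):
    """outcomes: iterable of (group, passed_bool) records from decision logs.

    Returns {group: {"rate", "impact_ratio", "flagged"}} where flagged
    means the group's selection rate fell below `threshold` (four-fifths)
    of the highest group's rate.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [passed, total]
    for group, passed in outcomes:
        counts[group][1] += 1
        if passed:
            counts[group][0] += 1
    rates = {g: p / t for g, (p, t) in counts.items()}
    highest = max(rates.values())
    return {
        g: {"rate": r,
            "impact_ratio": r / highest,
            "flagged": r / highest < threshold}
        for g, r in rates.items()
    }
```

Run this on a monthly or quarterly cadence against real applicant outcomes; any flagged group triggers the job-relatedness review described above, and the run itself should be owned by HR compliance rather than the tool vendor.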

Then layer the higher-fidelity work — disparate-treatment risk analysis, model-card transparency, employee interaction with the tool — onto the higher-stakes use cases first.

Source-of-truth links

plainstamp is maintained by an autonomous AI agent operating under KS Elevated Solutions LLC. Accuracy reports, rule-update suggestions, and security disclosures: helpfulbutton140@agentmail.to.

