AI Bill Passes General Assembly; Broad Workforce Bill Follows
Connecticut Employment Law Blog | Blog
May 01, 2026
Earlier today, the General Assembly gave final approval to two significant workplace bills that employers now need to focus on.
In this post, I’ll cover the second bill, SB 5. It is a wide-ranging “online safety” and artificial intelligence (AI) bill with several provisions that directly affect hiring and employment practices; the bill now awaits action by the Governor and includes staggered effective dates beginning October 1, 2026.
This post won’t cover every provision; I’m focusing on the ones that affect the workplace. In brief, SB 5 sets disclosure and notice requirements for employers that deploy automated tools in recruiting or personnel decisions, clarifies that using such tools is not a defense to discrimination claims, creates whistleblower-style protections and internal reporting for certain high-end AI developers, and adds an AI-related disclosure to WARN notices filed with the Connecticut Department of Labor.
Employers should begin mapping their talent systems, templates, and vendor contracts now so they can build toward compliance ahead of the 2026–2027 implementation timeline.
Automated tools in hiring and HR: new disclosures and pre-decision notices
SB 5 regulates “automated employment-related decision technology,” defined broadly to include any technology that processes personal data and produces an output—such as a score, rank, constraint, recommendation, or classification—that is a substantial factor in making, or materially influences, an employment-related decision.
Beginning October 1, 2027, if an employer deploys such a tool that is intended to interact with applicants or employees, the employer must disclose—in plain language—that the person is interacting with the technology, unless a reasonable person would find that fact obvious.
When the tool will be used to generate output as a substantial factor in a decision, the employer must also provide a written notice before the decision that identifies the tool’s use and purpose, its trade name, the categories and sources of personal data analyzed, how that data will be assessed, and contact information for the employer. SB 5 contains a trade-secret safe harbor, but if information is withheld on that basis, the employer must provide a notice that identifies what is being withheld and the legal basis.
For employers that rely on third-party systems, the statute places duties on “developers” of these automated tools: if a tool was sold, licensed, or configured to materially influence employment decisions, the developer must provide the information a “deployer” (i.e., the employer) needs to meet its obligations. The developer and deployer may contract for the developer to assume the deployer’s notice duties, but those allocations must be explicit.
Violations of these automated employment provisions are deemed unfair or deceptive trade practices enforceable solely by the Attorney General (no private right of action), and for violations occurring on or before December 31, 2027, the Attorney General may issue a cure notice providing 60 days to remedy before filing suit.
Practically, HR, legal, and privacy teams will need to inventory any resume screeners, interview analyzers, candidate-ranking tools, internal promotion engines, and similar systems, then build a workflow to surface plain-language disclosures early and deliver the more detailed pre-decision notices on time.
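For teams that track these obligations in software, the notice contents can be modeled as a simple checklist. The sketch below is purely illustrative; the field names are my shorthand for the statutory elements described above, not SB 5’s language, and any real implementation should be built from the bill text with counsel’s input:

```python
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    """Illustrative checklist of pre-decision notice contents.

    Field names are the author's shorthand, not statutory text.
    """
    tool_purpose: str            # how and why the tool is used in the decision
    trade_name: str              # the tool's trade name
    data_categories: list[str]   # categories of personal data analyzed
    data_sources: list[str]      # sources of that personal data
    assessment_method: str       # how the data will be assessed
    employer_contact: str        # contact information for the employer
    withheld_as_trade_secret: bool = False
    withholding_basis: str = ""  # legal basis, if anything is withheld

    def is_complete(self) -> bool:
        # A notice relying on the trade-secret safe harbor must still
        # identify what is withheld and the legal basis for withholding.
        core = all([self.tool_purpose, self.trade_name,
                    self.data_categories, self.data_sources,
                    self.assessment_method, self.employer_contact])
        if self.withheld_as_trade_secret:
            return core and bool(self.withholding_basis)
        return core
```

Even a lightweight record like this makes it easier to audit whether every covered decision was preceded by a complete notice.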
“AI is not a defense” in discrimination cases—and why anti-bias testing still matters
SB 5 amends the Connecticut Fair Employment Practices Act to state explicitly that the use of an automated employment-related decision technology is not a defense to a complaint alleging a discriminatory employment practice.
The Commission on Human Rights and Opportunities (CHRO) or a court may, however, consider evidence of anti-bias testing or similar proactive efforts to avoid discrimination, including the quality, efficacy, recency, scope, results, and the employer’s response to those results.
The same “no defense” clarification appears in the statute that prohibits discrimination based on sexual orientation or civil union status.
This framing encourages employers to carry out thoughtful pre-deployment and ongoing validation of automated tools, to document mitigation measures, and to ensure human-in-the-loop review for high-stakes decisions, even though such steps do not create a safe harbor.
Employers should work with vendors to secure bias-testing results, methodology summaries, and update cadences, and then align those with internal audits keyed to Connecticut’s definition of a “substantial factor” and the specific decisions in scope, such as hiring, promotion, discipline, discharge, or terms and conditions of employment.
In addition, counsel should consider reviewing data sources and feature sets referenced in the required notice to confirm they are accurate and do not inadvertently signal protected characteristics or proxies.
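SB 5 does not prescribe a testing methodology, so what counts as adequate anti-bias testing is left to practice. One common screening heuristic, borrowed from the federal “four-fifths” guideline rather than from SB 5 itself, compares selection rates across groups. A minimal sketch, with entirely hypothetical numbers:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the automated screen."""
    return selected / applicants

def impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the reference group's rate.

    Values below 0.8 are a common red flag under the federal
    'four-fifths' guideline -- a screening heuristic, not SB 5's test.
    """
    return rate_group / rate_reference

# Hypothetical numbers for illustration only
ref = selection_rate(60, 100)   # reference group: 60% pass rate
grp = selection_rate(42, 100)   # comparison group: 42% pass rate
ratio = impact_ratio(grp, ref)  # ~0.70, below the 0.8 heuristic
```

A ratio alone proves nothing; it simply flags where deeper validation, documentation, and corrective action of the kind CHRO or a court may weigh should follow.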
Frontier AI developers: whistleblower-style protections and internal reporting
SB 5 creates sector-specific protections for employees of “frontier developers,” defined as persons doing business in the state who train a foundation model using at least 10^26 floating-point or integer operations across training, fine-tuning, or reinforcement learning.
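For readers gauging whether the threshold could apply to their organization, note that the count is cumulative across training stages. A back-of-the-envelope sketch, with entirely hypothetical run sizes:

```python
THRESHOLD_OPS = 10**26  # SB 5's frontier-developer compute threshold

def total_training_ops(runs: list[float]) -> float:
    """Sum operations across training, fine-tuning, and reinforcement
    learning runs, since the statute counts them together."""
    return sum(runs)

# Hypothetical run sizes, in floating-point operations
runs = [8e25, 1.5e25, 6e24]  # pretraining, fine-tuning, RL
is_frontier = total_training_ops(runs) >= THRESHOLD_OPS  # ~1.01e26: True
```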
The law prohibits rules or contracts that allow discharge, discipline, or other penalties against employees for whistleblowing under existing Connecticut law or for reporting, with reasonable cause, activity that poses a specific and substantial danger to public health or safety due to a “catastrophic risk,” such as expert-level assistance in creating a CBRN weapon or an unmonitored cyberattack causing serious harm.
By January 1, 2027, “large frontier developers” must establish an anonymous internal reporting process for covered employees, provide status updates on investigations and responses, and elevate reports to officers and directors at least quarterly, with an exception where the report alleges officer or director wrongdoing. Frontier developers must also provide clear notice of these rights via postings, new-hire notices, and periodic notices to remote workers; violations carry civil penalties enforced by the Attorney General, with recovery of investigation costs and attorney’s fees available to the state.
While this regime will be niche for most employers, any Connecticut company operating or partnering on cutting-edge model training should review non-retaliation policies, create an AI safety reporting channel, and verify governance workflows for board-level reporting. Leadership should also confirm that confidentiality measures for reports do not conflict with statutory posting and notice obligations.
WARN notices: disclose if layoffs are AI- or tech-related
Beginning October 1, 2026, any employer that serves written notice to the Department of Labor under the federal WARN Act must also disclose whether the layoffs are related to the employer’s use of artificial intelligence or another technological change, in the form and manner the Labor Commissioner prescribes.
This is a straightforward add-on, but it will require coordination among HR, legal, operations, and communications when reductions in force implicate automation, redeployment of work, or adoption of AI-enabled systems. Employers should draft internal guidance now for characterizing the layoff rationale and tracking the causal link, if any, to AI or technology changes.
An unusual twist: telling candidates when they’re interacting with a bot
One of SB 5’s quirkier but important HR-facing features is the requirement to tell applicants or employees—up front and in plain language—when they are interacting with an automated system, unless a reasonable person would find it obvious.
This is a cultural as well as legal shift, especially for chat-based candidate screeners, asynchronous interview platforms that analyze voice or facial expressions, and automated schedulers that look and feel “human.”
Employers will need to adjust user interfaces, email and SMS templates, and recruiter talking points so that these disclosures are consistent and prominent. At the same time, the trade-secret notice mechanism underscores that transparency does not require disclosing proprietary logic—so long as the employer notifies the person that trade-secret information is being withheld and identifies the legal basis for withholding it.
Effective dates, enforcement mechanics, and what to do now
SB 5 uses a staggered implementation schedule. The automated employment-related decision technology framework and the developer–deployer allocation provisions are effective October 1, 2026, with the interactive disclosure and pre-decision notice obligations applying to deployments on or after October 1, 2027.
The “AI is not a defense” amendments to anti-discrimination statutes become effective October 1, 2026. The frontier developer whistleblower provisions take effect on October 1, 2026, with the anonymous reporting process due by January 1, 2027, for large frontier developers. The WARN AI/technology-change disclosure requirement is effective October 1, 2026. The Attorney General is the exclusive enforcer for the automated employment sections, with an optional 60‑day cure period available for violations occurring on or before December 31, 2027; there is no private right of action under these specific sections.
From a planning standpoint, employers should first identify where automated decision tools touch the employee lifecycle—from sourcing and screening through performance management and discipline—and determine whether any system’s output is a “substantial factor” in decisions.
Second, standardize disclosures and build pre-decision notices that accurately describe purpose, trade names, data categories, and sources, and stand up a process to deliver notices early enough to be meaningful.
Third, operationalize bias testing and corrective action with vendors, memorializing update frequencies and audit access, recognizing that such efforts do not create a defense but may be considered by CHRO or a court.
Finally, prepare an internal playbook for WARN notices that addresses whether AI or other technology changes triggered or contributed to planned reductions, and develop a cross-functional review process to ensure consistent, supportable disclosures to the Department of Labor.
Conclusion: a practical action plan for HR and legal
Employers should launch a cross-functional workstream that inventories automated decision tools, maps decision points, and identifies where SB 5’s disclosures and notices must be layered into recruiting and HR workflows.

Legal and compliance teams should review vendor agreements to secure the information needed to meet deployer duties, negotiate responsibility allocations where appropriate, and mandate bias testing and remediation protocols aligned with the statute’s factors. For organizations close to the “frontier developer” threshold, add non-retaliation and internal AI-safety reporting to governance programs and ensure board-level visibility into safety reports and responses.

Expect the first effective date of October 1, 2026, for the automated employment framework, the CHRO clarifications, and the WARN AI disclosure; build a detailed timeline that back-plans from October 1, 2027, to pilot and finalize disclosure and notice templates with adequate employee and candidate experience testing.

With SB 5 now through the General Assembly and awaiting the Governor’s signature, thoughtful early planning will minimize disruption and reduce risk when the new requirements take effect.
