
Artificial Intelligence Tools in Employment: The EEOC is Watching Too

The U.S. Equal Employment Opportunity Commission (“EEOC”) is tasked with administrative enforcement of a variety of employment discrimination laws, including the Americans with Disabilities Act as amended (the “ADAAA”). The ADAAA prohibits discrimination against job applicants and employees based on “disabilities,” generally defined as a physical or mental impairment that substantially limits the individual in a major life activity. Employers must provide an employee with a disability a reasonable accommodation to enable the employee to perform the essential functions of the job, unless the accommodation would impose an undue hardship on the employer or, in certain instances, where the employee would still pose a direct threat to the health or safety of themselves or others that cannot be addressed by a reasonable accommodation. It is interesting, therefore, that the EEOC issued Technical Assistance on May 12, 2022 entitled The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. The stated concern is that the use of AI tools will disadvantage job applicants and employees with disabilities.

The EEOC’s Technical Assistance is not law. It is not even regulation. But it does signal how the EEOC might deal with charges of discrimination brought by applicants and employees based on an employer’s use of AI. Courts also sometimes refer to the EEOC’s technical assistance memoranda to understand how the EEOC interprets the law. Because the ADA was originally enacted in 1990 and amended in 2008, it is an easy bet that legislators did not have AI in mind.

Definitions.

The Technical Assistance starts out with some basic definitions. These are worth taking note of because, as we’ve seen, not all agencies and lawmakers define AI in the same way. The EEOC defines an “algorithm” as a set of instructions that can be followed by a computer to accomplish some end. Not surprisingly, the EEOC uses the definition of AI in the National Artificial Intelligence Initiative Act of 2020 (the “NAIIA”) at section 5002(3). The NAIIA defines AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The EEOC also references the National Institute of Standards and Technology Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.

Examples of AI Tools Highlighted by the EEOC.

In cautioning employers about compliance with the ADAAA while using AI, the EEOC specifically mentions (i) automatic resume-screening software that prioritizes applications using certain keywords; (ii) chatbot software for hiring that asks job candidates about their qualifications and rejects those who do not meet pre-defined requirements; (iii) video interviewing software that evaluates candidates based on their facial expressions and speech patterns; (iv) analytics software; (v) employee monitoring software that rates employees on the basis of their keystrokes or other factors; (vi) testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test; and (vii) worker management software. But any AI that is used to process data to evaluate, rate, and make other decisions about job applicants and employees is fair game. “In the employment context, using AI has typically meant that the developer relies partly on the computer’s own analysis of data to determine which criteria to use when making employment decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.”
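
To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of keyword-based resume screener the EEOC describes. Everything in it (the keywords, the threshold, the applicant data) is invented for illustration; real tools are far more elaborate, but the screen-out risk is the same.

```python
# Hypothetical sketch of a keyword-based resume screener, illustrating the
# kind of tool the EEOC describes. Keywords and thresholds are invented.

REQUIRED_KEYWORDS = {"project management", "sql", "budget forecasting"}
MIN_SCORE = 2  # applications matching fewer keywords are deprioritized


def score_resume(resume_text: str) -> int:
    """Count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)


def screen(resumes: dict[str, str]) -> list[str]:
    """Return applicant names whose resumes meet the keyword threshold.

    Note the risk: a qualified applicant who describes the same skills in
    different words, or whose resume defeats the tool's text extraction
    (e.g., a screen-reader-friendly layout), is silently screened out and
    never reaches a human reviewer.
    """
    return [name for name, text in resumes.items()
            if score_resume(text) >= MIN_SCORE]


if __name__ == "__main__":
    applicants = {
        "A": "Led project management and SQL reporting for budget forecasting.",
        "B": "Directed cross-team initiatives; built database reports.",  # same skills, other words
    }
    print(screen(applicants))  # ['A'] -- applicant B is rejected automatically
```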

Because “major life activities” under the ADAAA include functions such as communicating, speaking, concentrating, or the operation of major bodily functions, such as brain or neurological functions, AI could, intentionally or unintentionally, screen out persons who are substantially limited in these functions as a result of a physical or mental impairment. AI might, for example, reduce the accuracy of the assessment, create special circumstances not taken into account in evaluating the candidate or employee, or prevent the individual from participating in the assessment altogether.

Vendor Diligence and Contracts Are Important.

Employers can’t hide behind their software vendors either. The EEOC is clear that employers will be responsible for discrimination under the ADAAA if they use an AI tool that discriminates on the basis of disability, even if that tool is developed or administered by a third party. Vendor contracts and indemnification provisions, therefore, will continue to be important.

Reasonable Accommodations.

As noted above, employers will need to consider whether a candidate or employee will need a reasonable accommodation in connection with the employer’s AI. The Technical Assistance has some useful examples of the reasonable accommodation process. As with other instances of accommodations, an employer should give the employee or applicant notice of the AI tools used in the hiring or employment process and ask if the individual needs a reasonable accommodation connected with the tool. If the individual responds affirmatively, the employer can, if needed, ask the individual to provide medical documentation confirming the need for an accommodation. However, employers must follow the ADAAA’s requirements limiting medical inquiries both before and during employment. Asking candidates if they need an accommodation to successfully use screening software that measures keystrokes is OK. Asking candidates to disclose any medical condition that would prevent them from using such software is not OK (at least not until the candidate requests an accommodation). Because the EEOC is also concerned about AI that screens out employees or candidates with disabilities who, because of their disabilities, don’t match the AI’s programmed criteria, employers likely will need to disclose the use of the AI, and possibly its logic, in order to meet the EEOC’s expectations.

In some cases, there will not be an accommodation that will enable the disabled individual to use the AI or enable the AI to measure the individual fairly. In such cases, the EEOC wants employers to be ready to provide an alternative method of measuring or analyzing the skill, trait, or performance that the AI is designed to measure or analyze. That might mean allowing an applicant to test orally instead of manually, giving extended test time, or using accessible technology such as a screen reader. Employers are expected to engage in a discussion with the individual to determine a reasonable accommodation, if there is one.

Traditional Validation Testing of AI Will Not Work.

Under the EEOC’s longstanding Uniform Guidelines on Employee Selection Procedures (29 C.F.R. §§ 1607.5–9), a test that is used to determine whether a candidate meets certain criteria can be “validated” by a third party to make sure that the test is job related and consistent with business necessity. This validation is a defense to a claim that the test has a disparate impact on (i.e., disproportionately screens out) persons in certain protected classes (such as race). AI also can be analyzed to ensure that it does not have an impermissible disparate impact on protected classes by having different demographic groups take the test, comparing the test results of each group, and then modifying the test until any disproportionate adverse impact on the protected group is eliminated. This sort of validation to reduce bias in AI is espoused by different regulators. The EEOC, however, notes that it does not work well when the protected class is persons with disabilities because of the variety and uniqueness of disabilities and their impact on an individual. In fact, the Uniform Guidelines expressly do not apply to disability discrimination. This validation also does not take into account reasonable accommodations that might be needed to enable the individual to perform. For example, if the test evaluates a candidate’s ability to ignore distractions, a person with PTSD might perform poorly unless accommodations were considered, such as a quiet workstation or noise-cancelling headphones. Using a professionally validated AI tool is a good idea, but employers will still need to offer reasonable accommodations.
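
To illustrate the group-comparison step described above, here is a minimal sketch using the “four-fifths” (80%) heuristic from the Uniform Guidelines as the yardstick for adverse impact. The numbers are invented; and, as the EEOC notes, this kind of group analysis does not translate well to disability, where impairments are too varied and individual to form comparable groups.

```python
# Minimal sketch of disparate impact analysis: compare each group's
# selection rate to the highest group's rate, flagging ratios below 0.80
# (the "four-fifths" heuristic). All data is invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who pass the test."""
    return selected / applicants


def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the most-selected group's rate."""
    return group_rate / reference_rate


# Hypothetical test results for two demographic groups
rate_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_b = selection_rate(selected=30, applicants=100)  # 0.30

ratio = adverse_impact_ratio(rate_b, max(rate_a, rate_b))
print(f"impact ratio: {ratio:.2f}")  # 0.62 -- below 0.80, evidence of adverse impact
```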

Ways to Avoid Violating the ADAAA.

The EEOC’s guidance suggests the following steps for employers to avoid unwittingly violating the ADAAA in the employer’s use of AI:

  1. Vendor due diligence. Employers should ask the vendor to disclose whether and how the tool was developed to account for persons with disabilities. The EEOC proposes the following questions: 
    • If the tool requires applicants or employees to engage with a user interface, did the vendor make the interface accessible to as many individuals with disabilities as possible?
    • Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there any kinds of disabilities for which the vendor will not be able to provide accessible formats, in which case the employer may have to provide them (absent undue hardship)?
    • Did the vendor attempt to determine whether use of the algorithm disadvantages individuals with disabilities? For example, did the vendor determine whether any of the traits or characteristics that are measured by the tool are correlated with certain disabilities?
  2. Develop AI with persons with disabilities in mind. Employers should evaluate the same questions as those posed to a vendor. The employer should also ensure that the tool is only evaluating characteristics necessary to perform the job. The EEOC counsels that employers may need to hire experts on various types of disabilities throughout the development process: “For example, if an employer is developing pre-employment tests that measure personality, cognitive, or neurocognitive traits, it may be helpful to employ psychologists, including neurocognitive psychologists, throughout the development process in order to spot ways in which the test may screen out people with autism or cognitive, intellectual, or mental health-related disabilities.”   
  3. Avoid tools that rate abilities in comparison to the “typical” person who is successful at a task. The EEOC notes that AI that rates an individual in comparison to a baseline developed from characteristics of a “typical” person who is good at a task may screen out persons with disabilities: “For example, if an open position requires the ability to write reports, the employer may wish to avoid algorithmic decision-making tools that rate this ability by measuring the similarity between an applicant’s personality and the typical personality for currently successful report writers. By doing so, the employer lessens the likelihood of rejecting someone who is good at writing reports, but whose personality, because of a disability, is uncommon among successful report writers.”  Using tools that measure the desired abilities or qualifications directly instead of measuring characteristics or scores correlated with those abilities or qualifications is preferable. 
  4. Make sure the tool does not ask questions that could elicit information about a person’s physical or mental impairment or health. See Pre-Employment Inquiries and Medical Questions & Examinations, and Enforcement Guidance on Disability-Related Inquiries and Medical Examinations of Employees under the ADA.
  5. Let employees and candidates know that an AI tool is being used, what and how it measures, and that accommodations are available, and train employees and third parties administering tests to recognize the need for and provide reasonable accommodations. Information about the AI could include, for example, “which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics are to be measured, and the disabilities, if any, that might potentially lower the assessment results or cause screen out.” With respect to accommodations, employers should both disclose that reasonable accommodations are available and provide clear instructions for requesting such accommodations. The EEOC guidance suggests that employees and third parties administering tests should be trained in how to spot the need for an accommodation and how to respond to that need. That might include offering specific accommodations or alerting HR about the need as soon as possible.

Human Intervention and Oversight.

Although not specifically highlighted by the Technical Assistance, human intervention will be important to avoid violations of the ADAAA when using AI. Employers should not blindly rely on the results of AI processing that makes decisions that could negatively impact an employee or applicant, including hiring, promotion, discipline, and termination decisions. Most regulation and a number of legal decisions that address bias in AI note the need for human oversight to evaluate the AI results and correct any errors. For example, if the employer uses screening software that rejects applicants with greater than a twelve-month gap in their employment history, human review may be needed to follow up with an otherwise qualified applicant for an explanation of the gap and to avoid screening out individuals whose gap in work history resulted from a disability.
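
A minimal sketch of that routing logic, assuming an invented applicant record and the hypothetical twelve-month rule from the example above: rather than auto-rejecting on the rigid rule, the tool sends the application to a person for follow-up.

```python
# Hypothetical human-in-the-loop routing: flag rigid-rule hits for human
# review instead of auto-rejecting. All names and fields are invented.

from dataclasses import dataclass


@dataclass
class Applicant:
    name: str
    qualified: bool   # meets the substantive job criteria
    gap_months: int   # longest gap in employment history


def route(applicant: Applicant) -> str:
    if not applicant.qualified:
        return "reject"
    if applicant.gap_months > 12:
        # Do not auto-reject: the gap may reflect a disability
        # (e.g., medical leave). Ask a human to follow up.
        return "human review"
    return "advance"


print(route(Applicant("C", qualified=True, gap_months=18)))  # human review
```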

AI in Employment—Already an Issue, and Expect More.

The EEOC launched its artificial intelligence fairness initiative last year. The EEOC, however, is not alone in keeping an eye on the use of AI in employment.

Illinois has had a law on its books since 2020 limiting the use of artificial intelligence in analyzing videotaped employment interviews. Under the Illinois Artificial Intelligence Video Interview Act, employers must notify applicants that AI is used, explain the AI and the characteristics it evaluates, and obtain the applicants’ opt-in consent to the use of the AI prior to the interview.

Earlier this year, New York City passed an ordinance, effective January 1, 2023, imposing certain obligations on employers using “Automated Employment Decision Tools,” in place of “discretionary” decision making, to screen candidates for employment or employees for promotion within NYC. Employers cannot use such a tool unless it has been subject to an audit, no more than one year prior to use, by an independent third party to determine whether the tool has a disparate impact on any EEO-1 category. Employers must also (i) make publicly available on their websites a summary of the audit results and the distribution date of the tool subject to the audit; (ii) give notice, not less than ten days before use, to candidates and employees who have applied for a position that the tool will be used and the job qualifications and characteristics the tool will evaluate, and allow the candidate/employee to request an alternative process; and (iii) if not posted on the website, make available within 30 days of a request by the employee or candidate the types of data collected, the source of the data, and the employer’s retention period (except if such disclosure would violate the law). The law provides for civil penalties of up to $500 for a first violation and up to $1,500 for each subsequent violation, and expressly does not preclude a private cause of action.

The sweeping California Privacy Rights Act (CPRA) will apply to employee data beginning January 1, 2023. The law imposes various requirements in connection with “automated decision-making,” including notice obligations. Attorney General regulations (yet to be issued) under the CPRA are expected to address audit requirements, opt-out requirements, and whether businesses using automated decision-making will need to disclose the underlying logic.

The European Union’s proposed “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” COM/2021/206 final, if passed, will impose significant requirements with respect to the use of AI.

All of these laws address concerns regarding bias and discrimination in some way. Employers should prepare for further regulation by carefully evaluating how AI is used in their workplaces and how to reduce the risk of discriminatory results from its application. (For the U.S. Department of Justice (DOJ) technical assistance document on the use of AI under the ADAAA, see Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring.)

Note: NYC announced that it would postpone enforcement until its regulations were effective. Those new regulations, which can be found at https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf, will be effective July 5, 2023. See also https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/
