Discrimination in AI Hiring?

Businesses using AI in hiring could be on the hook for discrimination as EEOC cracks down

by Shannon Pierce and Bruna Pedrini

The use of artificial intelligence in recruiting and hiring has grown increasingly popular in recent years. Many businesses, seeking to lower hiring costs and reduce potential claims of discrimination (by taking human discretion out of certain aspects of the hiring process), have turned to AI to handle functions such as locating talent, screening applicants, administering skills-based tests and even conducting certain phases of the pre-hire interview process.

While automating various aspects of the hiring (and post-hire performance management) process can be effective in eliminating the potential for intentional discrimination, that is not the only type of discrimination that federal and state anti-discrimination laws prohibit. Under (1) Title VII of the Civil Rights Act of 1964, which protects against discrimination on the basis of race, color, national origin, religion and sex (including sex-related factors such as pregnancy, sexual orientation and gender identity); (2) the Americans with Disabilities Act (ADA), which prohibits discrimination on the basis of an actual or perceived disability or a record of disability; and (3) the Age Discrimination in Employment Act, which protects individuals 40 years of age or older, discrimination can also be found where an employer uses tests or selection procedures that, although intended to be neutral, have the effect of disproportionately excluding persons based on one or more of these protected characteristics. This is known as “disparate impact” or “adverse impact” discrimination.

In the case of AI, if the AI tool that a business utilizes inadvertently screens out individuals with physical or mental disabilities (e.g., by assessing candidates based on their keystrokes and thereby excluding individuals who cannot type due to a disability), or poses questions that may be more familiar to one race, sex or other cultural group as compared to another, this could yield a finding of disparate impact discrimination. 

Disparate impact discrimination often occurs when a computer is tasked with completing a function typically performed by a person, such as recognizing facial expressions during an interview. For example, if an employer uses facial and voice analysis technologies to evaluate applicants’ skills and abilities, people with autism or speech impairments may be screened out even if they are qualified for the job. Facial and voice analysis technologies have also proven problematic in properly interpreting the facial expressions and voice fluctuations of women and of cultural and ethnic minorities.

Similarly, discrimination may arise during a hiring process that requires an applicant to take an algorithm-driven test, such as an online interactive game or personality assessment. Under the ADA, employers must ensure that any such pre-employment test or game measures only the relevant job skills and abilities of an applicant, rather than reflecting the applicant’s impaired sensory, manual or speaking skills. The standard under the ADA is whether the applicant can perform the essential functions of the job with or without a reasonable accommodation. An employer must, therefore, use an accessible test that measures the applicant’s job skills, not their disability, or make other adjustments to the hiring process so that a qualified person is not eliminated by the application technology because of their disability. A common example is an AI-driven application process that a candidate with a vision impairment cannot navigate without an accommodation.

Employers also need to be aware of the potential legal pitfalls associated with some AI technologies’ overreliance on historical data. In the employment arena, AI technology often draws on a database of a company’s past hiring decisions instead of applying a current analysis of job-related criteria. Employers must exercise caution because biased data can train an algorithm to replicate that bias. For example, training a model on résumés submitted predominantly by one demographic group can disproportionately skew the key terms the system searches for when reviewing and pre-screening new applicant submissions, thereby screening out more diverse applicants.
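
To make the mechanism concrete, the following is a deliberately simplified, hypothetical sketch; the résumé text, keywords and scoring are invented for illustration and do not reflect any actual vendor’s product. It shows how a keyword screen trained on a skewed historical hiring pool can favor new résumés that resemble that pool, even when job-related skills are identical:

```python
from collections import Counter

# Hypothetical résumés behind a company's past hires, drawn largely from one
# demographic group (all text here is invented for illustration only).
historical_resumes = [
    "java developer varsity lacrosse captain",
    "java developer fraternity treasurer",
    "java developer varsity rowing",
]

# A naive screen "learns" which terms co-occur with past successful hires.
learned_keywords = Counter(" ".join(historical_resumes).split())

def score(resume: str) -> int:
    """Score a new résumé by how often its terms appeared among past hires."""
    return sum(learned_keywords[word] for word in resume.split())

# Two candidates with identical job-related skills ("java developer"):
print(score("java developer varsity lacrosse"))  # 9: mirrors the historical pool
print(score("java developer community choir"))   # 6: penalized for a different background
```

The skew here comes entirely from the training data, not from any intent to discriminate, which is why this kind of tool is analyzed as a potential disparate impact problem.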

Recent guidance from the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency responsible for administering federal anti-discrimination laws, confirms that rooting out AI-based discrimination is among the Commission’s top strategic priorities. The guidance also confirms that, where such discrimination occurs, the EEOC will hold the employer, not the AI vendor, responsible. That means the employer could be held liable for many of the same types of damages as are available for intentional discrimination, including back pay, front pay, emotional distress and other compensatory damages, and attorneys’ fees.

Due to the risks involved, businesses should consult with employment counsel before implementing AI tools in the hiring and performance management processes. While not an exhaustive list, the following may be among the mechanisms counsel can use to help businesses mitigate risk. 

Question the AI vendor about the diversity and anti-bias mechanisms built into its products. Many vendors boast that their AI tools actually foster, rather than hinder, diversity. By selecting vendors that prioritize diversity, and by asking each vendor to explain how its products achieve this goal, businesses can potentially decrease the likelihood that their chosen AI solutions will yield a finding of discrimination.

Understand what the AI product measures, and how it measures it. As noted above, measuring typing speed or keystrokes, or using culturally biased hypotheticals, can increase the likelihood that an AI tool will be deemed discriminatory. By questioning AI vendors in advance about the specific measuring tools built into the AI product, businesses can more easily distinguish between helpful and potentially costly AI.

Ask for the AI vendor’s performance statistics. Determining whether an AI-based technology causes a disparate impact involves a complex statistical analysis. While not applied in every case, one rule of thumb the EEOC uses in assessing potential disparate impact is known as the “four-fifths rule.” This rule compares the rate at which candidates from one protected classification (e.g., men) are hired, promoted or otherwise selected through the use of the AI technology to the rate at which candidates from another protected classification (e.g., women) are selected. If the percentage of women chosen, divided by the percentage of men chosen, is less than 80% (or four-fifths), this can be an indication that discrimination occurred. Even a passing score of 80% or more does not necessarily immunize an employer from liability, so when choosing an AI product, businesses should learn whether their AI vendors have analyzed their products using the four-fifths rule and other statistical and practical analyses, and what the results of those analyses have shown.
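
For readers who want to see the arithmetic, here is a minimal sketch of the four-fifths comparison using hypothetical selection numbers. It is only a rule of thumb, not the EEOC’s complete methodology, and a real adverse-impact analysis should be designed with counsel and a statistician:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants selected by the hiring tool."""
    return selected / applicants

# Hypothetical screening results from an AI tool (invented numbers).
men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

# Compare the lower selection rate to the higher one.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)
print(f"Impact ratio: {impact_ratio:.0%}")  # 50%

if impact_ratio < 0.8:
    print("Below four-fifths (80%): possible indication of adverse impact.")
else:
    print("At or above four-fifths: rule of thumb met, but not a legal safe harbor.")
```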

Test the company’s AI results annually. Just as businesses should question their AI vendors about their statistical findings before implementing an AI hiring solution, businesses should also self-monitor after the AI product is chosen and implemented. At least annually, companies should consider running their own internal statistical analyses to determine whether, in the context of their unique business, the AI product yields fair, non-discriminatory results.

Offer accommodations to individuals with disabilities. Where a candidate discloses that they have a physical or mental disability that prohibits (or limits) their participation in AI-driven processes, the employer should work with the individual to determine whether there is another hiring or performance management process, or some other form of reasonable accommodation, that can be used in lieu of the AI at issue.

When in doubt, seek indemnification. Since the AI vendor is ultimately in the best position to design AI tools in a manner that avoids both intentional and unintentional discrimination, businesses should consider building into the vendor agreement indemnity language that protects the business in the event the vendor’s product nevertheless produces biased or discriminatory results.

Shannon Pierce is a director at Fennemore. Licensed in California and Nevada, Shannon is on the cutting edge of both technology and the changing business culture. She has nearly 20 years of experience litigating on behalf of management concerning claims of employment discrimination, wrongful termination, leaves of absence, and other traditional employment and commercial litigation.

Bruna Pedrini is a director at Fennemore practicing in the areas of anti-discrimination, accessibility, and education law. She represents public and private educational institutions as well as builders, developers, sports and concert stadiums and venues, and the hospitality industry in their roles as both employers and places of public accommodations to comply with federal, state, and local anti-discrimination laws and accessibility requirements.
