The Weekly Guide to Employment Law Developments

The Rocky Mountain Employer


EEOC Releases Guidance on Employer Use of Artificial Intelligence to Avoid Liability Under Title VII

Adrian Sanchez, Law Clerk

The Equal Employment Opportunity Commission ("EEOC") recently released guidance regarding the potential negative effects of using artificial intelligence ("AI") technology to make employment-related decisions, such as hiring or promotion decisions. The EEOC emphasizes that while AI can be beneficial from an efficiency standpoint, the technology can result in inadvertent discrimination against groups protected under Title VII of the Civil Rights Act of 1964, i.e., disparate impact discrimination.

AI in the Employment Context and Assessing Its Use for Disparate Impact

With the recent surge in AI development and adoption, many employers have turned to AI to assist with employment decisions. For example, AI can screen large pools of candidates against specified algorithms and data points, potentially increasing the share of diverse hires while saving time and money.[1] Reliance on AI in the employment context does not come without risks, however. If AI is used blindly in employment decisions, employers risk inadvertently discriminating against groups protected under Title VII.

Among other protections, Title VII of the Civil Rights Act of 1964 prohibits employment practices that, while neutral on their face, nonetheless result in a negative, disparate impact on protected classes of employees, commonly known as disparate impact discrimination. The practical implication is that although using AI to assist with employment decisions is itself a facially neutral practice, it can nonetheless negatively affect protected classes of employees. For example, consider the following scenario:

·        XYZ Corp. is looking to hire a new manager.

·        XYZ Corp. uses AI to fill the open manager position.

·        XYZ Corp. sets AI parameters to filter candidates based on years of experience.

·        There are 10 female candidates and 10 male candidates.

·        The AI parameters set by XYZ Corp. filter out all but one of the ten female candidates, while recommending seven of the ten male candidates for the position.

In this scenario, although XYZ Corp. applied the same AI parameters to all candidates (a facially neutral practice), the results had a disparate impact on female candidates: nearly all of the women were disqualified from the manager position, while more than half of the men were recommended for it. That disparity supports an inference that the employer's selection criteria may be discriminatory.

When evaluating facially neutral employment decision-making practices (such as AI usage) for potential disparate impact discrimination, the EEOC recommends that employers refer to the "four-fifths" rule.[2] Under that rule, if the selection rate for one class of candidates is less than 80% (four-fifths) of the selection rate for the class with the highest selection rate, the practice has a substantially different impact on the former class, which may be evidence of disparate impact discrimination. "Selection rate" refers to the proportion of candidates who are hired, promoted, or otherwise selected.[3] In the scenario above, the selection rate for female candidates was 10% (1 of 10), while the selection rate for male candidates was 70% (7 of 10). Because the female selection rate is only about 14% of the male selection rate (0.10 ÷ 0.70 ≈ 0.14), well below the four-fifths (80%) benchmark, the rule indicates a substantially different, negative impact on female candidates.
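To make the arithmetic concrete, the short Python sketch below applies the four-fifths comparison to the hypothetical XYZ Corp. numbers above. It is purely illustrative: the names and structure are our own, and the EEOC guidance does not prescribe any particular implementation.

    # Illustrative sketch of the four-fifths rule, using the hypothetical
    # XYZ Corp. counts above (1 of 10 women and 7 of 10 men selected).
    # Group labels and structure are assumptions for illustration only.
    selected = {"female": 1, "male": 7}
    considered = {"female": 10, "male": 10}

    # Selection rate: proportion of candidates in each group who were selected.
    rates = {group: selected[group] / considered[group] for group in selected}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest  # compare each group's rate to the highest rate
        status = "below the 80% threshold" if ratio < 0.8 else "within the threshold"
        print(f"{group}: selection rate {rate:.0%}, {ratio:.0%} of highest rate ({status})")

Run against these numbers, the check reports that the female selection rate is only about 14% of the male rate, well below the 80% benchmark.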

The EEOC has noted that the four-fifths rule is not a rigid standard, but rather a rule of thumb that can help employers assess whether their employment practices (including any reliance on AI algorithms) have a substantially different impact on one class of employees than on another. The rule is not intended to substitute for an in-depth statistical analysis, and considerations unrelated to disparate impact discrimination may drive uneven results between classes (in the hypothetical above, for example, the small size of the candidate pool being considered for the management position). The four-fifths rule is nonetheless a useful first screen for assessing whether an employer's use of AI in employment decisions may present risk under Title VII or other anti-discrimination laws.
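Because the rule is only a screening heuristic, an employer (or its statistician) might follow a flagged result with a formal significance test. The sketch below applies Fisher's exact test to the same hypothetical counts; the choice of test is our assumption, as the EEOC guidance does not prescribe a particular statistical method.

    # Illustrative follow-up analysis (not from the EEOC guidance): Fisher's
    # exact test on the hypothetical 2x2 table of selection outcomes.
    from scipy.stats import fisher_exact

    # Rows: female, male. Columns: selected, not selected.
    table = [[1, 9],   # female: 1 selected, 9 not selected
             [7, 3]]   # male:   7 selected, 3 not selected

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.3f}, p-value = {p_value:.3f}")
    # Here the p-value is roughly 0.02, suggesting the disparity is unlikely
    # to reflect chance alone, even with this small candidate pool.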

Employer Considerations

While the ongoing proliferation of AI in the workplace presents exciting opportunities for employers to improve and streamline their businesses, it also presents, as with any new technology, new challenges in complying with existing laws, including anti-discrimination laws. Employers should keep the EEOC's guidance in mind when using AI in recruiting or promotion decisions to avoid potential pitfalls. As always, Campbell Litigation is available to assist employers with these and other employment law concerns.


[1] See, e.g., https://www.hirevue.com/blog/hiring/ai-in-recruiting-what-it-means-for-talent-acquisition; https://www.hirevue.com/case-studies/global-talent-acquisition-unilever-case-study.

[2] EEOC, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 2023), https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used.

[3] Id.