A recent study by SHRM shows that approximately 26% of organizations use Artificial Intelligence (“AI”) to support Human Resource-related activities.[1] As AI adoption quickens and its use expands, employers must be mindful of compliance obligations under anti-discrimination laws, privacy laws, and new laws specifically regulating the use of AI in the workplace. This article reviews where we are today with regard to compliance obligations and where we are headed (tip: watch for more bumps in the road!).
How Are Employers Using AI?
Employers are increasingly turning to AI to perform tasks that previously were completed by Human Resource (“HR”) professionals, and as the frequency of use increases, so does the scope of tasks entrusted to AI. For example, an HR manager might use an AI-enabled tool in various stages of the recruitment process — to generate job descriptions, target job postings to specific recruitment pools, screen resumes, or weed out candidates during initial interview rounds. Other ways in which HR might implement AI include performance reviews, employee onboarding and offboarding, employee engagement projects, talent development, training, and workforce planning, structure, and design.
Compliance Obligations — Where Are We Now?
Discrimination and AI
AI-enabled tools can be used in the recruiting process for many functions. As one example, video interviewing software can evaluate candidates based on facially neutral factors such as facial expressions and speech patterns. As another example, testing software can provide “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test. Federal anti-discrimination laws prohibit the use of such neutral factors or tests to eliminate or advance job candidates if they disproportionately and adversely impact individuals who share a protected characteristic (such as race, sex, or age). Because an AI tool relies on algorithms to recognize patterns and make predictions, it can produce biased or discriminatory results.
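To make the concept concrete, the sketch below shows the basic arithmetic of a disparate impact check, using the “four-fifths” rule of thumb drawn from the Uniform Guidelines on Employee Selection Procedures (a screening tool may warrant scrutiny if any group’s selection rate falls below 80% of the highest group’s rate). The applicant data and group labels are invented for illustration, and the sketch is not a substitute for a validated statistical analysis or legal advice.

```python
# Minimal illustration of a disparate impact check on an AI screening
# tool, using the "four-fifths" rule of thumb. All data is hypothetical.
from collections import Counter

# (group, passed_screen) for each applicant the tool evaluated
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in applicants)
passes = Counter(group for group, passed in applicants if passed)

# Selection rate: the share of each group's applicants that passed the screen
rates = {group: passes[group] / totals[group] for group in totals}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # Four-fifths rule of thumb: flag any group below 80% of the top rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

In this hypothetical, group_b’s selection rate (25%) is one-third of group_a’s (75%), well below the four-fifths threshold, which would prompt further review of the tool.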
The Equal Employment Opportunity Commission (“EEOC”) has issued guidance addressing the use of AI in the employee selection process. The guidance explains that AI-enabled tools that have a disparate negative impact on individuals who share a protected characteristic violate Title VII of the Civil Rights Act of 1964 unless the employer can demonstrate that use of the tool is “job related and consistent with business necessity.” A “negative impact” might be, for example, disproportionately eliminating individuals who share a protected characteristic as viable job candidates. The EEOC’s guidance does not have the force or effect of binding law; instead, it explains the EEOC’s interpretation and application of the laws it enforces. Courts nonetheless sometimes look to the EEOC’s guidance in rendering judicial opinions.
AI-enabled tools face similar compliance challenges under the Americans With Disabilities Act (“ADA”), another law the EEOC enforces. In its most recent guidance, the EEOC highlights concerns unique to the ADA with respect to AI. These concerns include biased results through the use of predictive AI (similar to its Title VII concerns), problems with the accessibility of AI tools for candidates with visual or hearing impairments, and the need to provide reasonable accommodation to any individual who, because of an impairment, cannot properly use or be evaluated by an AI-enabled tool. Additionally, AI software may pose “disability-related inquiries,” meaning questions likely to elicit information about a disability, or may violate the ADA’s pre-offer limitation on medical inquiries.
Several other federal anti-discrimination laws, along with analogous state laws, may also be implicated by employers’ use of AI-enabled tools in making employment decisions.
Data Privacy
Although the term data privacy is relatively new, the concept has existed for decades in the employment context. For example, rules forbidding employers from requiring workers to take polygraph tests are a form of data privacy protection; in that instance, the protection extends to bodily privacy. Workplace video recording, subject to restrictions and notice requirements, is another practice that implicates privacy concerns.
AI creates an intersection of employment and data privacy. In this arena, states and cities have focused largely on bodily privacy concerns raised by predictive AI technology. For example, Illinois, somewhat ahead of the curve, adopted the Artificial Intelligence Video Interview Act, effective in 2020, which imposes requirements on employers who conduct video interviews and use AI analysis of the videos in their candidate evaluation process. New York City and Maryland similarly regulate the use of AI in the recruitment and hiring context.
Although the United States lacks comprehensive privacy protections such as those afforded by the European Union’s General Data Protection Regulation (“GDPR”), various federal and state laws do address data privacy. The California Privacy Rights Act (“CPRA”), for example, requires that notice of a business’s personal data handling practices be provided to employees prior to, or at the time of, initial data collection. Many other states are considering, and some have implemented, their own versions of the CPRA. Although many of these data privacy laws exclude employee data from coverage, the tide could shift in the other direction, extending greater protection to employees.
Compliance Obligations — Where Are We Headed?
Federal Government Enforcement of Measures to Protect Employees Against AI
In late 2023, President Biden issued an Executive Order identifying risks of increased workplace surveillance, bias, and job displacement resulting from employers’ use of AI. The Executive Order directed the U.S. Department of Labor (“DOL”) to develop principles and best practices to mitigate these harms. In response, the DOL posted a message on its blog making clear its commitment to developing the principles and practices President Biden directed. Moreover, President Biden’s fiscal year 2025 budget, released in early 2024, includes funding for “a new AI policy office to oversee and manage AI related work” at the DOL. The clear implication is that employers can expect increased scrutiny and attention from the DOL with respect to their use of AI.
The DOL’s Field Assistance Bulletin on AI, dated April 29, 2024, is a clear example of this intent. The bulletin addresses a range of potential issues related to the use of AI, from assessing employee productivity to assigning tasks. A common theme runs throughout the guidance: AI comes with a unique set of wage and hour risks, and accordingly, human oversight is required to ensure legal compliance.
State Enforcement of Measures to Protect Employees Against AI
At least a dozen states have passed laws covering the use of AI, some with implications for the use of AI by private employers. New York City, for example, passed Local Law 144, which regulates the use of automated employment decision tools in hiring. The NYC law requires that such tools undergo bias audits and that employers and employment agencies notify employees and job candidates that such tools are being used to evaluate them. Other states have introduced or passed bills similar to the NYC law. In the short term, employers can expect to see laws requiring them to provide notice to employment candidates regarding the use of AI in recruiting and hiring.
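To give a sense of what a bias audit measures, the sketch below computes impact ratios for a hypothetical scoring tool. The median-based “scoring rate” used here is a simplifying assumption for illustration only; the audit methodology actually required under the NYC law is set out in its implementing rules, and the scores below are invented.

```python
# Hedged sketch of a bias-audit metric for a scoring tool. The median-based
# "scoring rate" is an illustrative simplification, not the NYC law's
# prescribed methodology. All scores are hypothetical.
from statistics import median

# (group, score) pairs produced by an automated employment decision tool
scores = [
    ("group_a", 88), ("group_a", 74), ("group_a", 91), ("group_a", 66),
    ("group_b", 59), ("group_b", 81), ("group_b", 62), ("group_b", 70),
]

cutoff = median(score for _, score in scores)  # overall median score

groups = {group for group, _ in scores}
# Scoring rate: share of each group scoring above the overall median
scoring_rate = {
    g: sum(1 for grp, s in scores if grp == g and s > cutoff)
       / sum(1 for grp, _ in scores if grp == g)
    for g in groups
}
top = max(scoring_rate.values())

# Impact ratio: each group's scoring rate relative to the highest group's
for g, rate in sorted(scoring_rate.items()):
    print(f"{g}: scoring rate {rate:.0%}, impact ratio {rate / top:.2f}")
```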
More state-level legislation is on the horizon. California, in particular, has several proposed bills and administrative rules under development that would impact the use of AI. Given heightened concerns regarding job displacement by AI, employers can expect legislation mandating notice requirements regarding the use of AI and potential job displacement, which would be supplemental to any requirements under the Worker Adjustment and Retraining Notification (“WARN”) Act that might otherwise apply.
What Should Employers Do?
- Self-audit any AI-enabled tools to determine whether they adversely impact groups protected under state or federal law (the disparate impact sketch above illustrates the basic arithmetic of such a check). Understanding the data inputs that go into an AI tool, including generative AI, is just as important as evaluating the output.
- Consider promulgating an AI policy that requires audits and other safeguards prior to the implementation of AI. At a minimum, the policy should require the organization to map, measure, and manage discrimination risk.
- Consider undertaking an independent assessment of AI tools for discrimination risk, and require human oversight of AI-generated results.
- Keep a close eye on legal developments at the state and federal levels (and at the local level in cities that regulate employment heavily). Staying on top of these laws will enable your organization to be prepared as they are enacted.
[1] SHRM, “2024 Talent Trends: Artificial Intelligence in HR,” January 31, 2024, available at https://shrm-res.cloudinary.com/image/upload/AI/2024-Talent-Trends-Survey_Artificial-Intelligence-Findings.pdf.
Lane Powell’s team of labor and employment attorneys is here to help your organization comply with state and local laws, and develop and implement the strategy that supports your business and your employees. For more information, contact Beth G. Joffe or Rishi Puri, or visit our firm’s Labor, Employment, and Benefits page. Stay up to date by subscribing to Lane Powell’s Legal Updates.