One of the most evident workplace trends creating new compliance challenges is the growing use of AI in recruiting and hiring. AI-driven tools can improve efficiency and reduce administrative burden. However, employers remain responsible for employment decisions — and that responsibility is not diminished simply because a third-party platform or algorithm is involved in the process.
Hiring Trends Involving AI
Job seekers are increasingly using AI tools like JobCopilot and LazyApply to generate résumés and mass-apply to openings. Employers are seeing large volumes of generic, AI-generated applications, and many companies are adjusting their screening processes to filter those out. Using AI as a drafting tool can be helpful, but mass-applying to hundreds of jobs tends to reduce the quality of each application and may actually hurt a candidate’s chances.
Employers generally have the right to define the rules of their hiring process, especially when they want to assess a candidate’s true skills or writing ability. But employers should be thoughtful about how they structure those policies. AI tools are quickly becoming normal workplace productivity tools, so an outright ban can be difficult to enforce.
Because many employers now receive hundreds or thousands of applications for a single role (especially remote ones), automated screening tools have become almost unavoidable. These tools can help triage candidates based on keywords, experience or other criteria, but they also introduce risk when employers do not fully understand how decisions are being made.
At a minimum, employers should be asking vendors basic but critical questions: What data is the system trained on? How does it weigh different qualifications? Has the tool been tested for disparate impact across protected groups? Too often, employers rely on vendor assurances without conducting meaningful diligence or requesting documentation.
This is where risk begins to build. If an automated system unintentionally screens out certain groups of candidates — whether based on race, gender, age, disability or another protected category — businesses may face discrimination claims without clear insight into how the decision was made. The “black box” nature of some AI tools makes defending these claims more difficult.
Employers should focus on three core areas: transparency, vendor due diligence and ongoing review. That means not only vetting tools before implementation, but also periodically auditing outcomes to identify patterns that could create legal exposure. Documentation of these efforts can be just as important as the efforts themselves.
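The outcome-auditing step described above can be sketched in code. The example below is a hypothetical illustration (not legal advice) of one common screening check: comparing selection rates across groups using the EEOC's "four-fifths" rule of thumb, under which a group's selection rate below 80% of the highest group's rate may signal disparate impact. The group labels and counts are invented for this sketch.

```python
# Hypothetical outcome-audit sketch using the EEOC "four-fifths" rule
# of thumb: a group whose selection rate falls below 80% of the
# highest-selected group's rate may warrant closer review.
# All group names and numbers here are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return {group: True} for groups below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Made-up audit data: group_a rate = 48/120 = 0.40,
# group_b rate = 30/100 = 0.30; ratio 0.30/0.40 = 0.75 < 0.80,
# so group_b would be flagged for further review.
audit = {"group_a": (48, 120), "group_b": (30, 100)}
flags = four_fifths_flags(audit)
```

A flag from a check like this is a starting point for investigation and documentation, not a legal conclusion; real audits should be conducted with counsel and appropriate statistical rigor.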
The Future of Hiring
The early stages of hiring will likely become more automated on both the employer and employee side. It is not hard to imagine a near future where AI tools representing applicants are submitting materials into employer systems that are also AI-driven.
From a legal perspective, however, employers should be cautious about fully automating hiring decisions without human oversight. Algorithms can replicate or amplify existing biases or unintentionally filter out protected groups. If an AI hiring tool produces a discriminatory impact, the employer — not the vendor — is still responsible. This creates exposure under state and federal employment laws, even where the employer had no intent to discriminate.
AI can be a powerful tool to manage volume, but it should not be treated as a substitute for judgment. Employers should avoid relying exclusively on automated screening and, instead, build in human review at key decision points. Even a limited secondary review process can significantly reduce risk.
Compliance Tips and Actionable Steps
For employers navigating this evolving landscape, a proactive and practical approach can significantly reduce risk:
- Conduct AI vendor audits. Understand how AI tools function, what data they rely on, and whether they have been evaluated for bias. Avoid reliance solely on vendor assurances.
- Build in human oversight. Avoid fully automated decision-making in hiring or performance management. Ensure that key decisions include human review.
- Audit outcomes, not just processes. Periodically review hiring, compensation and promotion data to identify patterns that could create disparate-impact or wage-and-hour exposure.
- Train managers. Even well-drafted policies create risk if they are not consistently understood and applied.
- Be transparent with candidates. Consider notifying applicants when AI tools are used in the hiring process. Transparency can help build trust and may become a legal requirement in certain jurisdictions.
- Avoid over-reliance on automation. AI should enhance — not replace — decision-making. Employers who treat these tools as infallible increase both legal and operational risk.
As AI continues to reshape the workplace, employers who take a thoughtful, informed approach will be best positioned to benefit from its efficiencies while minimizing legal exposure. The key is not avoiding AI altogether, but using it responsibly — with accountability, oversight and a clear understanding of the risks involved.
Haley Harrigan is a shareholder at Gallagher & Kennedy in Phoenix. She represents and counsels individuals, small businesses, franchised operations and large companies on a wide range of employment issues, ranging from internal compliance to wage-and-hour litigation. Harrigan serves as chair of the firm’s employment and labor law department.