How Employers Are Harnessing AI’s Value while Reining in the Risks

Developing employee policies around its use is critical

by Melissa L. Shingles

Two years after the public debut of ChatGPT, the adoption of artificial intelligence by companies throughout North America continues apace. This increased use of AI in the corporate world has, in turn, spurred continuing conversations about AI’s utility and value in the workplace along with its potential risks.

Littler recently conducted its “2024 AI C-Suite Survey Report,” querying more than 330 C-suite executives about their perceptions of both generative and predictive AI. Those surveyed include chief executive officers, general counsel, chief human resources officers, chief operating officers and chief technology officers. The results show increasing acceptance of AI as well as an improved understanding that developing employee policies around its use is critical to harnessing the value of AI while reining in the risks for organizations.

Employers Are Implementing a Range of AI Policies, with Few Prohibiting AI Usage Altogether

Of the executives surveyed, 44% report that their organizations have implemented specific policies for employee use of AI, with an additional 25% indicating they are in the process of establishing such a policy. Of the companies reporting that they have AI usage policies in place, 75% have mandatory policies while 23% simply offer guidelines. Only 3% of those with usage policies prohibit AI outright.

More than half (55%) of respondents say that employee use of generative AI is limited to approved tools, whereas 40% of executives report that their organization limits employee use to approved tasks. Certainly, monitoring and enforcement are easier on a tool-by-tool rather than a task-by-task basis.

AI Policies Are Only as Good as the Enforcement and Education Measures that Support Them

Of course, regardless of their subject matter, workplace policies do not enforce themselves. Employers report utilizing a range of methods to track and enforce their AI policies. The majority of respondents with generative AI policies (67%) indicate they are relying on expectation setting (e.g., establishing clear expectations and depending on employees to meet those expectations). A slightly smaller share (55%) utilize access controls to limit AI tools to certain employees. Other executives responded that they rely on employees to report violations (52%). Companies are also employing such methods as audits and reviews (43%); automated monitoring systems (38%), such as software to track generative AI use; and individualized check-ins (33%). Only 5% say they are not tracking compliance or enforcing their AI policies.

Educating employees about AI usage policies and expectations is key not only to achieving buy-in and adherence but also to mitigating risks for employers. Surprisingly, however, only about a third of respondents (31%) report that their organizations offer such training. An additional 15% of executives say they are in the process of developing and rolling out training, 31% are considering it and 24% have no plans for training.

Employers who are offering training report they are focusing on issues such as AI literacy (79%), data privacy (78%), confidentiality and protection of proprietary company information (76%), and ethical and responsible use of AI (72%). There are clear opportunities for organizations to expand the range of training topics.

How Can Phoenix Businesses Harness the Value of AI while Reining in the Risks?

As Littler’s latest survey shows, AI is continuing to take hold in the workplace as companies recognize its potential to increase efficiency and reduce workloads. Whether, and how, organizations adopt AI are weighty decisions that may depend on a range of factors, including a company’s size, industry and mission. As AI usage ramps up, however, so does AI-related litigation and legal risks to employers. Survey responses make clear that there is a growing awareness of these risks by organizations, and they are responding accordingly.

The takeaway for Phoenix employers is that there is a range of approaches to AI usage policies, from detailed handbook provisions to more generalized guidelines. Spelling out the requirements or expectations around AI — whatever they are — can not only improve adherence among employees but also mitigate risk and ease concerns. Indeed, there is still a fair amount of fear and uncertainty surrounding AI in the workplace, fed by dire predictions of workers being replaced by this burgeoning technology. Corporate messaging about acceptable use of AI by both the organization and its employees can help to quell these concerns.

As with the development of any workplace policies, input from legal counsel can help ensure that AI usage provisions mitigate risk as intended. After all, AI in the workplace touches on a host of legal issues, including intellectual property, privacy, and employment law, as well as a multitude of new and proposed legislation. A well-designed approach to AI usage, including clearly stated and vetted requirements, can help companies reduce AI-related risks while reaping the technology's benefits.

Melissa L. Shingles is a management-side labor and employment attorney in the Phoenix office of Littler Mendelson P.C. who represents and counsels employers of all sizes on a broad range of employment law matters, including in the rapidly developing area of artificial intelligence in the workplace.
