Artificial intelligence is rapidly transforming the practice of law. From drafting contracts to analyzing discovery, AI tools can complete in minutes what once took hours. For businesses under pressure to move quickly and control costs, the appeal is obvious. But as powerful as these tools are, they cannot replace one essential component of legal representation: professional judgment.
Lawyers operate at the intersection of law, strategy, risk management and human dynamics. The lawyer’s judgment involves asking not just “Can we?” but “Should we?” and “What happens next?”
For business leaders, this distinction is familiar. A financial model may show that a course of action is legally permissible and potentially profitable. But executives still weigh brand impact, stakeholder perception, regulatory scrutiny and long-term positioning. Legal decisions require the same layered analysis.
The Rule 11 Example: Where Judgment Matters
Recently, I experienced this tension firsthand when a client came to me armed with an AI-generated legal strategy. The client had been involved in litigation and believed the opposing party had made arguments that were weak, perhaps even misleading. Curious what technology might suggest, he entered the facts of the case into an AI platform and asked whether he should seek Rule 11 sanctions against the opposing party for filing a frivolous lawsuit.
The AI’s answer was confident and direct: yes. It outlined the legal standard, summarized Rule 11’s purpose and concluded that sanctions were appropriate. From a purely analytical standpoint, the response was impressive. It cited the rule, identified relevant factors and presented what looked like a persuasive roadmap.
When I examined my client’s case, I asked questions that the AI could not meaningfully weigh:
- How has this judge historically treated Rule 11 motions?
- Would filing such a motion strengthen our long-term position — or derail it?
- Is the opposing argument weak, or merely aggressive?
- Could the same objective be achieved more effectively through a motion to dismiss or summary judgment?
- How might this affect settlement posture?
Filing a sanctions motion also carries other risks, such as escalating tensions or inviting reciprocal scrutiny of one’s own filings. The AI, however, gave my client no warning of those dangers. It had analyzed the legal standard. It had not analyzed the courtroom.
After weighing the factors, I advised against pursuing Rule 11 sanctions. The better course was to attack the substance of the claim directly. That approach ultimately advanced the client’s position without the collateral risks that a sanctions motion could have introduced. The AI was not “wrong” in a technical sense. It simply lacked judgment.
What AI Can Do Well
Artificial intelligence can be extraordinarily useful in legal practice. Attorneys who ignore it risk falling behind.
In my practice, using AI responsibly means:
- Using an enterprise version of ChatGPT that prohibits OpenAI from using or retaining client data for training and, with client consent, generating initial issue lists, then refining them through experience and additional research.
- Employing enterprise ChatGPT for document review while maintaining attorney oversight.
- Leveraging enterprise ChatGPT to test arguments from an adversarial perspective to prepare for briefing or oral argument.
- Using enterprise ChatGPT summaries to streamline large data sets or create timelines.
A protected enterprise AI can help frame issues, identify questions worth exploring and ensure that no obvious authority is overlooked. For business clients, this efficiency can translate into lower costs and faster turnaround times.
Our enterprise version of ChatGPT is particularly helpful in early-stage brainstorming. If a client asks, “What are all the possible claims we could assert?” or “What defenses might the other side raise?” AI can generate a comprehensive starting list in seconds. But a starting list is not a strategy.
Why AI Cannot Replace Professional Judgment
Artificial intelligence is designed to predict and synthesize language based on patterns in data. It does not:
- Bear professional responsibility
- Appear before a judge
- Maintain a client relationship
- Evaluate reputational impact
- Consider business realities beyond legal doctrine
The Hidden Risk of Overreliance
One of the emerging risks of the AI era is overconfidence. AI tools often present answers in a tone of certainty, and for non-lawyers and lawyers alike, that confidence can be persuasive.
For lawyers, AI does not always distinguish between strong authority and marginal authority. It may summarize a rule without appreciating how narrowly courts apply it. It may suggest aggressive tactics without appreciating how rarely they succeed.
In litigation, credibility is currency. Lawyers build it carefully. Once lost, it is difficult to regain. Filing motions that courts view as unnecessary or excessive can erode that credibility. A machine cannot assess that reputational calculus.
The most productive approach is not rejection or blind adoption but integration. AI works best as a powerful assistant — one that enhances human expertise rather than replacing it.
Emily Ward is a business litigation attorney and director at Fennemore who focuses on complex and high-dollar business disputes in Arizona. She concentrates on business litigation, public policy and appeals, litigating high-stakes cases for both plaintiffs and defendants in federal and state courts across the country.