Legal Sector Navigates Dual AI Challenge: Low Adoption Amidst Client Demands and Rising Penalties for Misuse

The Legal Industry's AI Paradox: Slow Adoption Meets Rising Sanctions

The legal industry is navigating a complex landscape regarding artificial intelligence (AI), characterized by slow adoption among professionals alongside a growing trend of courts imposing sanctions for AI-generated errors in legal filings. While investments in legal technology are substantial and clients demand more efficient services, many law firms and individual lawyers hesitate to integrate AI. At the same time, courts are emphasizing attorneys' responsibility for the accuracy of AI-generated content, leading to significant penalties for missteps.

AI Integration: Challenges and Hesitant Adoption

Artificial intelligence has become a prominent topic within the legal sector, with conferences like Legalweek highlighting its potential to enhance efficiency. Billions of dollars have been invested in legal technology, driven by expectations that AI could deliver productivity gains mirroring those seen in technology companies. Despite these investments and client demands for faster, more cost-effective services, AI adoption within law firms remains inconsistent.

Evidence from a Microsoft representative's informal poll at Legalweek indicated that a small fraction of attendees utilized software for automated contract review, a clear application for large language models. Industry experts have stated that firms failing to integrate AI risk client attrition.

Emma Dowden, Chief Operating Officer of Burges Salmon, noted that "Revenue is at risk," while Derek Morales, an in-house lawyer at Macquarie Capital, indicated that "AI maturity" already influences companies' selection of outside counsel.

Why the Slowdown?

Several factors contribute to the slow adoption rate. Lawyers have expressed concerns about potential job displacement, the impact on traditional hourly billing models, and a general lack of understanding of the technology. Additionally, some partners pilot new technologies in practice areas other than those that would benefit most from AI.

Contrary to some assumptions, younger lawyers are not consistently early adopters. Sarah Eagen, who leads learning and development at Cleary Gottlieb, observed that many associates view automation as a threat to careers built on entry-level work.

The Training Gap

Training has been identified as a critical gap. Ian Nelson, who operates Hotshot, a company that assists law firms with training programs, stated that too few firms offer comprehensive AI training. He noted that training is often delayed until after a tool is licensed or is too narrowly focused on tool-specific demonstrations, lacking broader context on risks and firm policies. Nelson also cautioned that some lawyers use chatbot tools irrespective of formal training programs.

The Cost of AI Errors: Courts Impose Sanctions

Simultaneously, courts have stepped up sanctions against attorneys who submit legal documents containing errors generated by artificial intelligence tools, and the number of lawyers penalized for relying on erroneous AI-generated information continues to rise.

Damien Charlotin, a researcher at HEC Paris, maintains a global count of court sanctions related to AI errors, reporting over 1,200 such cases to date. Approximately 800 of these incidents have originated from U.S. courts, and the rate of new cases is increasing.

Notable Cases of Missteps

Noteworthy cases include:

  • Lawyers representing MyPillow CEO Mike Lindell were fined $3,000 each for filing briefs that included fictitious, AI-generated citations.
  • A federal court ordered a lawyer in Oregon to pay $109,700 in sanctions and costs for filing documents containing AI-generated errors.
  • In Nebraska, attorney Greg Lake was questioned regarding a brief with fictitious case citations and was subsequently referred for disciplinary action by the state supreme court. A similar situation was reported in the Georgia Supreme Court.

Carla Wale, associate dean at the University of Washington School of Law, emphasized that while ethical rules concerning AI use are evolving, lawyers remain responsible for the accuracy of their filings. She highlighted the professional conduct rule requiring attorneys to verify the accuracy of all cited cases, regardless of whether AI assistance was used. Some courts have begun establishing more expansive ethics rules, mandating that lawyers label any AI-produced content with specific details to facilitate identification.

Beyond Risk: AI and the Evolving Malpractice Question

The confluence of these trends has led to new ethical considerations for the legal profession. A question emerged during Legalweek regarding whether resistance to using AI, particularly if it can provide better, more cost-effective legal services, could eventually constitute malpractice.

Corporate lawyer Michael Pierson, whose firm Pierson Ferdinand reportedly utilizes AI tools extensively and operates without associates, raised this issue.

He asked whether failing to use AI in the daily delivery of legal services could itself be viewed as malpractice, arguing that serving clients well requires exploring technologies that produce excellent work product.

This suggests a shift from AI as a potential risk to one where its absence could become a liability, further complicating the legal sector's ongoing integration of artificial intelligence.