
Attorneys Examine the Good, the Bad, and the Unknown About AI Use

September 22, 2023

By John Murph


ChatGPT is a “mansplainer,” artificial intelligence technology that will “give a supremely authoritative answer without the benefit of the truth, without the benefit of research, without the benefit of an authority,” Fastcase CEO and cofounder Edward J. Walters cautioned attendees of the D.C. Bar’s Artificial Intelligence & Chatbot Summit on September 18.

The daylong summit, featuring attorneys at the forefront of AI adoption, explored AI’s practical applications to legal work and the ethical challenges it poses. In a session on the day-to-day use of AI and chatbots by lawyers, Walters discussed the perils of using ChatGPT, citing the case of New York attorney Steven Schwartz, who was sanctioned in June for using the AI tool to generate a legal brief that turned out to include six fictitious case citations.

“When he asked ChatGPT questions regarding the case, it produced very complete looking answers,” Walters said. “What he got back is not what you’d get if he’d typed it into Google.”

“There are plenty of places where you ask the software to do something, and if it doesn’t know how to do it, it will tell you. We’re used to seeing ‘Error 404’ or something,” Walters added. “[ChatGPT] is made to bypass the ‘Error 404’ message and instead produce a completely cogent, competent answer.”

Walters advocated sympathy for Schwartz and others who are “in the trenches” of trying to use AI technology, but he also underscored the importance of conducting due diligence in legal work. Walters argued that the problem with generative pretrained transformer tools like ChatGPT is that they were designed to generate statistically likely answers.

“They are supposed to finish the sentence in a way that mathematically is most likely,” he explained. “Are they looking for truth? Not at all. They are not doing research behind the scenes, finding the answer, then reporting it. They are looking at math and statistical plausibility. Steven Schwartz was using the wrong tool for the job.”
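To make Walters’s point concrete, here is a toy sketch in Python (with entirely invented candidate answers and probabilities; no real model or product works from a table like this) of why a tool optimized for statistical plausibility can emit a confident but fictitious citation:

# Toy illustration of "statistical plausibility": a generative model scores
# possible continuations and emits the most likely one. Nothing in this code
# checks whether the chosen continuation is true.
# All candidate strings and probabilities below are invented for the example.
next_continuation_probs = {
    "Varghese v. China Southern Airlines Co.": 0.38,  # plausible-sounding but fictitious
    "Smith v. Jones, 410 U.S. 113 (1973)":     0.31,  # real-looking format, made-up case
    "Further research is required.":           0.27,
    "I could not find authority on point.":    0.04,  # the honest answer scores poorly
}

# The model simply emits whichever continuation is mathematically most likely.
answer = max(next_continuation_probs, key=next_continuation_probs.get)
print(answer)  # a cogent, competent-looking citation, chosen by math alone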

Experimenting & Refining

Carolyn Elefant, founder of the Law Offices of Carolyn Elefant, said there are AI tools that perform better than ChatGPT, including Casetext’s CoCounsel, LawDroid, and Clearbrief. She advised attendees to consider whether they need a legal-specific product, weighing whether privacy and costs are major considerations and whether the task is unique to legal practice.

The important thing for attorneys to keep in mind when using AI technology is that “you still must use your legal training because the ways that you [did] things before [AI] are still relevant,” Elefant said.

“I’m quite sure all of you have used Google for doing research for a client to see a general proposition. I’m also sure that none of you could actually rely on [just Google] for your case research in [determining whether] a motion [is] viable because that is not what Google was designed for. You will probably use Westlaw or Fastcase,” she added.

Walters and Elefant also discussed getting more out of AI tools like ChatGPT and Casetext through better prompting. Elefant advised four steps: tell the tool to act as a particular personality or persona, describe the task in detail, specify how you want the output formatted, and then clarify and refine the prompt, as in the sketch below.
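As one concrete rendering of those four steps (the client library, model name, and task here are illustrative assumptions, not anything Elefant demonstrated), a prompt might be structured like this:

# A sketch of Elefant's four prompting steps using the OpenAI Python client
# (openai >= 1.0). The model name and the task itself are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # Step 1: tell the tool to act as a particular persona.
    {"role": "system",
     "content": "You are a marketing copywriter for a small energy-law practice."},
    # Steps 2 and 3: describe the task in detail and specify the output format.
    {"role": "user",
     "content": "Draft a 150-word 'About Us' blurb for the firm's website. "
                "Output three short bullet points followed by one closing sentence."},
]
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# Step 4: clarify and refine. Append the reply and a follow-up correction,
# then send the extended conversation back to the model.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "Warmer tone, and mention pro bono work."})
refined = client.chat.completions.create(model="gpt-4o", messages=messages)
print(refined.choices[0].message.content)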

“You really have to get your hands dirty with this stuff in figuring out what these tools are,” said Elefant, who uses ChatGPT for web copy, e-books, and marketing content. “There are a lot of people out there saying what all these AI tools can do, when in actuality they can’t. So, I’m all about testing [them] out.”

She also encouraged attendees to share their experiences with AI use with their colleagues. “One of the frustrating things about these tools is that they are very temperamental,” Elefant said. “They don’t always work when they are supposed to. But if you come back with the same prompting and use it again and again, you may get good results.”

“These tools are always changing. I think now ChatGPT is even trying to correct itself from the Steven Schwartz incident because now when you ask for case studies, it will let you know, ‘I’m not a lawyer. I cannot come up with case studies,’” she added.

Minding the Ethics Risks

Lawyers have an ethical obligation to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology (Rule 1.1). “What that means is that there is no Luddite defense,” said Hilary P. Gerzhoy, vice chair of the legal ethics and malpractice group at HWG LLP. “You can no longer say that you just don’t understand technology, which is something that you saw a lot of lawyers in the past do. That is no longer a viable defense.”

To mitigate the risks of using AI, lawyers should consider whether the product generated by the tools is defensible, consistent, and coherent, she added.

Gerzhoy also suggested taking proactive steps to understand how AI systems work and playing an active role in training the system, as well as asking vendors about how the technology operates, some of its common pitfalls, and tips for accurate outcomes.

Other ethical rules relevant to AI are Rule 1.6 regarding confidentiality and protection of the client’s personal information and Rule 1.5 addressing reasonable fees, a relevant concern when AI can reduce billable hours, said Julienne Pasichow, an associate at HWG LLP, where she practices civil litigation, government investigations and enforcement actions, immigration, and legal ethics.

To ensure that an AI system complies with the rules of confidentiality, Pasichow recommended that lawyers ask vendors to what extent documents are retained by the system, what type of technology it uses to protect documents from unauthorized exposure, and who at the vendor is able to access the information.

“And you should ask what their plan is in the case of a data breach,” Pasichow said. “If you’re ever in a situation in which information is disclosed from a data breach, you want to be able to show that you did the proactive steps to inform yourself about the AI system that you’re using.”
