
Generative AI: How It Will Change Litigation

May 23, 2023
Firm Memoranda

Recent rapid improvements in generative artificial intelligence (“GenAI”) have resulted in high-profile, publicly available tools that respond to text prompts with entirely machine-generated natural language answers that in many cases are indistinguishable from human-created output. And unlike humans who may take hours or days to prepare a response to certain questions, GenAI can often return an answer in seconds.

These GenAI breakthroughs have garnered much attention. They have also led to a surge in the development of GenAI products for litigators, as well as predictions about how GenAI will disrupt the legal industry. While these tools have flaws and limitations, their power is undeniable, especially for tasks that involve parsing and finding relevant material in large data sets—such as the universe of discovery material in any complex litigation. As we explain in this Note, GenAI tools are not poised to replace human litigators any time soon, but they are bound to become ubiquitous in the legal industry and to make human litigators more effective in a wide range of tasks.

I.  What Is Generative AI, and Why Is This New Technology Relevant to Litigators?

Prompt: “Why is generative AI so popular in the legal community?”

Bing AI Response: Generative AI has been gaining popularity in the legal community due to its ability to automate tasks such as document review and contract analysis. It can also be used to power chatbots that provide basic legal guidance, answering common questions or directing users to appropriate resources or services, making legal assistance more accessible to the public.

Generative AI systems like ChatGPT and DALL-E raise a host of novel questions for a legal system that has always imagined people, rather than machines, as the creators of content. At the same time, recent research reveals that 82% of legal professionals believe that generative AI such as ChatGPT can be applied to legal work.

Although GenAI has only recently made waves in the legal community, artificial intelligence and chatbots have existed for decades. As early as the 1960s, MIT professor Joseph Weizenbaum created an AI computer program named ELIZA, designed to play the role of a therapist in response to prompts from human users.1 At a high level, ELIZA used a set of built-in rules to identify key words in a prompt, select an appropriate template response from a script based on those key words, and generate a specific response based on the template and the prompt. In the 1990s, Richard Wallace created a more complex AI chatbot called ALICE (“Artificial Linguistic Internet Computer Entity”), which received awards for its human-like conversations over several years.2 Like ELIZA, ALICE used rules to select from a pre-written (but extensive) list of responses based on key words detected in a prompt.
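To make the technique concrete, the minimal Python sketch below shows ELIZA-style response selection: keyword rules mapped to pre-written templates, with fragments of the user’s prompt substituted into the chosen template. It illustrates the general approach only, not Weizenbaum’s actual program.

```python
import random
import re

# Illustrative ELIZA-style rules: a keyword pattern plus response
# templates. A matched fragment of the user's prompt is substituted
# into the chosen template. (Weizenbaum's real script also swapped
# pronouns, e.g., "my" -> "your".)
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

def respond(prompt: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(prompt)
        if match:
            # Fill the template with the fragment captured from the prompt.
            return random.choice(templates).format(*match.groups())
    # No keyword matched: fall back to a generic canned response.
    return random.choice(FALLBACKS)

print(respond("I feel uneasy about the hearing"))
# e.g., "Why do you feel uneasy about the hearing?"
```

As the sketch shows, every possible response is pre-scripted; the program generates nothing new, which is precisely the limitation discussed below.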

More recently, companies such as Amazon and Apple have developed AI-based “virtual assistants,” such as Alexa and Siri, for use in consumer electronics. Unlike the scripted responses of ELIZA and ALICE, these “virtual assistant” AIs use a technique known as Natural Language Processing to effectively understand language-based commands and prompts and to select an appropriate response based on that understanding.3

But what ELIZA, ALICE, Siri, Alexa, and other similar tools all lack is the ability to generate new content. ELIZA and ALICE could imitate basic human conversations by selecting an appropriate pre-scripted response to a prompt, but they could not analyze a complex legal problem or summarize a document. Siri and Alexa represent an improvement in the ability to understand human language, but they are similarly limited in their ability to respond. Both can respond to queries about simple facts—such as the date of an event—but for more sophisticated and open-ended questions, they can only search for and direct the user to existing, publicly available content on the internet. Ultimately, none of these tools even come close to effectively replicating the types of analytical and creative tasks that attorneys perform on a daily basis.

GenAI, by contrast, has recently demonstrated that it is capable of performing at least some tasks that conventionally could be performed only by human attorneys. Unlike historical AIs such as ELIZA and ALICE, and “virtual assistant” AIs such as Alexa and Siri, GenAI chatbots such as Google’s Bard and OpenAI’s ChatGPT use a fundamentally different approach to analyzing and responding to human prompts, which allows them to analyze facts, legal precedent, and documents, and to then create natural language narrative responses that can identify documents of interest, summarize facts and law, or create pleadings and briefs. GenAI creates these natural language responses anew; it does not function merely by copying and pasting from existing sources, or even from assembling fragments of different existing sources.

Chatbots like Bard and ChatGPT are based on Large Language Models (“LLMs”) that are trained on enormous data sets of text to learn the concepts associated with different words and the relationships between words, allowing LLMs to “understand” context, nuance, sentiment, and other implicit ideas beyond the literal meanings of words—though arguably not in the same ways that a human “understands” these concepts.4 As an example, GPT-3, a previous iteration of OpenAI’s LLM, was trained on sources containing roughly 300 billion words of text; and BERT, an LLM developed by Google that pre-dated Bard, was trained on text comprising roughly 250 billion words.5 These models may then be fine-tuned through additional training—for example, by presenting the model with exemplary prompts and “correct” responses—or through reinforcement learning techniques, in which human annotators rank the model’s responses to various prompts.6
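As a concrete illustration of that ranking step, the minimal Python sketch below shows one standard way pairwise preference rankings are turned into a training signal for a reward model: the loss is small when the model already scores the annotator-preferred response higher, and large when it does not. This is a generic sketch of the pairwise-preference idea, not OpenAI’s or Google’s actual training code.

```python
import numpy as np

# A minimal sketch of the ranking signal used in preference-based
# fine-tuning: given reward-model scores for a response the annotator
# preferred and one they rejected, the loss pushes the preferred
# score above the rejected one.
def pairwise_preference_loss(score_preferred: float, score_rejected: float) -> float:
    # -log(sigmoid(preferred - rejected)): near 0 when the model already
    # ranks the preferred response higher, large when it does not.
    return -np.log(1.0 / (1.0 + np.exp(-(score_preferred - score_rejected))))

print(pairwise_preference_loss(2.0, -1.0))  # ~0.05: ranking agrees with the annotator
print(pairwise_preference_loss(-1.0, 2.0))  # ~3.05: ranking disagrees, strong correction
```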

Beyond mere hypothetical capabilities, recent demonstrations have made clear that GenAI is not only capable of performing analytical and creative tasks previously thought possible only for humans, but can perform those tasks better than many humans, and is rapidly improving. For example, OpenAI’s GPT-3.5 LLM, released on November 30, 2022, was able to pass several law school exams obtained from the University of Minnesota with a C+ average, including Constitutional Law, Employee Benefits, Taxation, and Torts, but obtained below-average results on exams such as the Uniform Bar Exam and the LSAT.7,8 OpenAI’s GPT-4 LLM, released just a few months later, on March 14, 2023, achieved impressive scores of 163 on the LSAT and 298 on the Uniform Bar Exam, placing it at roughly the 88th and 90th percentiles of test takers, respectively.9

 

Simulated Exams                   GPT-3.5 (Nov. 2022)          GPT-4 (Mar. 2023)
                                  Score     Est. percentile    Score     Est. percentile
Uniform Bar Exam (MBE+MEE+MPT)    213/400   ~10th              298/400   ~90th
LSAT                              149       ~40th              163       ~88th

 

OpenAI’s public release of ChatGPT on November 30, 2022 further caught the public’s eye—including the eyes of litigators. Early experiments with ChatGPT highlighted its ability to apply human-like creativity to tasks such as writing poems, songs, and essays,10 and within weeks, prominent legal publications were already asking whether ChatGPT could play a role in the legal profession.11 The answer to that question has been an emphatic “yes.” Legal columnists and bloggers were quick to post ChatGPT’s responses to legal queries to demonstrate its ability to perform legal analysis and answer legal questions, even without any targeted training on legal precedents, statutes, or other documents.12 And companies that provide legal technology and services have been rapidly working to integrate generative AI into their product offerings. For example, last month, Casetext, best known for its legal research platform, launched its GPT-4-powered “AI legal assistant,” named CoCounsel, which is advertised as capable of “document review, legal research memos, deposition preparation, and contract analysis.”13

The remainder of this Client Alert explores some of the GenAI legal products currently on the market, and the capabilities and drawbacks of using GenAI in the legal context.14

II.  New Tools and Opportunities for Litigators

Prompt: “Why is generative AI so popular in the legal community?”

ChatGPT Response: “Generative artificial intelligence (AI) has the potential to significantly enhance the work of litigators, primarily by automating repetitive and time-consuming tasks, improving document review and analysis, and providing valuable insights into case strategy.”

ChatGPT is right: GenAI has already advanced to the stage where it can support human litigators by substantially automating many common and time-consuming tasks that are part of any complex litigation. And GenAI can be adept at performing certain routine and less-complicated legal tasks, such as answering straightforward objective questions or creating standard or form documents. To be sure, and as we discuss in more detail below, GenAI is not going to replace human attorneys any time soon, because there are countless critical litigation skills and tasks—especially in complex litigation—that even the most sophisticated and purpose-built GenAI tools do not claim to be able to perform.

What GenAI can do, however, it tends to do very well when properly trained and implemented. Although many GenAI legal tools are either still early in development, or otherwise geared toward transactional and other non-adversarial work—such as drafting and analyzing contracts—a number of companies have already launched (or are in the process of developing or launching) GenAI tools targeting litigators. Many of these tools are built on the same engine that powers ChatGPT—OpenAI’s GPT-3 or GPT-4 LLM—but employ additional training to optimize the underlying LLM for specific legal tasks. These tools help demonstrate the types of tasks GenAI is presently equipped to address.

GenAI tools are being used for document review. Many GenAI companies recognize that document review is a near-universal task in complex litigation, and that it is almost always an immense task. It requires hundreds if not thousands of attorney hours to parse through, categorize, and understand the universe of documents in discovery, and to find relationships and key themes across the documents that help litigators build their case. With current technology, finding key documents usually requires developing and then running a set of search terms. The terms need to be carefully tuned so that they are broad enough to locate the relevant material, but focused enough to avoid bringing in too much irrelevant material. This can involve multiple iterations of testing different search terms, and then reviewing the search term hits individually to find the necessary information. GenAI presents the possibility for an entirely different paradigm.

The natural language processing capabilities of LLMs make them especially effective at these same types of tasks—reviewing, analyzing, and summarizing documents, and locating documents of interest many times faster than a human attorney. Tools such as CoCounsel, Lawbotics, and FileRead include document review features that let a user pose natural language questions about a case or set of documents; the tool then reviews the corpus of case documents and provides answers or summaries with citations to relevant documents. For example, in a case involving the alleged theft of confidential business documents, an attorney could ask the GenAI whether the defendant (plaintiff’s competitor) was discussing the information in the documents, who was involved in the discussions, and when they occurred. The GenAI would mine the documents produced in discovery and deposition transcripts, and then provide a narrative answer with citations to relevant documents. Although more human searching and review might be necessary to ensure the answer is comprehensive, the GenAI provides a useful starting point that has significant advantages over using search terms, especially in terms of speed and ease of use. These tools may also be used to review documents for privilege or confidentiality in preparation for production.
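To illustrate one way such a feature can be structured, the sketch below shows a generic “retrieve, then answer with citations” pattern in Python. The embed and complete functions are hypothetical stand-ins (a toy hashing embedder and a placeholder LLM call), not the actual implementation of CoCounsel, Lawbotics, FileRead, or any other product.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for an embedding model: a bag-of-words hashing
    # vector. A real tool would call a trained embedding model here.
    v = np.zeros(512)
    for token in text.lower().split():
        v[hash(token) % 512] += 1.0
    return v

def complete(prompt: str) -> str:
    # Placeholder for the LLM call that generates the narrative answer.
    return "[LLM-generated answer with citations would appear here]"

def answer_question(question: str, documents: list[str], top_k: int = 5) -> str:
    # Embed every produced document and the question.
    doc_vectors = np.array([embed(d) for d in documents])
    q = embed(question)
    # Rank documents by cosine similarity to the question.
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9
    )
    top = np.argsort(scores)[::-1][:top_k]
    # Ask the LLM to answer using only the retrieved documents, citing
    # them by number so a human reviewer can verify every assertion.
    context = "\n\n".join(f"[Doc {i}] {documents[i]}" for i in top)
    return complete(
        f"Using only the documents below, answer the question and cite "
        f"documents as [Doc n].\n\nQuestion: {question}\n\n{context}"
    )
```

Grounding the answer in a fixed set of retrieved, citable documents is also one common way vendors attempt to limit hallucinations, a problem discussed in Part III below.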

Limited trials with these tools suggest they could be at least as effective as a typical human reviewer in locating and identifying case-critical documents, and that they are capable of doing so in a fraction of the time. And as these tools (and the engines that power them) continue to develop and become more mainstream, they may fundamentally alter how attorneys approach other discovery-related tasks, such as negotiating discovery searches, creating summaries or timelines, and identifying relevant documents for depositions or trial. Attorneys are on the verge of having the ability to ask a GenAI tool during a deposition whether anything in a database containing hundreds of thousands of documents supports or contradicts an answer the deposition witness just gave, and to receive a response in seconds, with the relevant documents.

GenAI developers have also sought to optimize legal research. Tools such as CoCounsel and Harvey AI allow litigators to circumvent the typical legal research workflow by asking GenAI to summarize provisions of statutes, compare laws and precedents between jurisdictions, or identify a supporting precedent for an argument. For example, CoCounsel can receive a legal question or issue from a user and, in response, generate an answer with supporting citations, summarize cases, or draft a research memorandum. Other tools advertise the ability to cite-check a brief, to find areas in an opposing party’s brief that are susceptible to attack, to prepare an introduction to a brief that summarizes the body of the brief, or even to reduce the word count of a brief without affecting any substantive content. Even early versions of these tools, which still require a level of human review and correction, can in some instances be more powerful and efficient than running Boolean searches in legal databases, using a search engine to try to find existing legal work product on a particular subject matter, or manually reading each case cited in a brief to look for distinctions as a starting point for other work.

GenAI can also automate the creation of certain typical case documents, especially ones that have standard structures or boilerplate language. For example, LegalMation offers a Complaint Response Tool and a Discovery Response Tool, which can be used to automate the generation of responsive pleadings and responses to discovery requests. Both of these tasks typically involve repeated assertions of the same admissions or denials (in an answer) or objections and responses (in discovery responses), which can be custom-tailored for the particular case before they are applied by the GenAI to generate the response. Other tools, such as CoCounsel, can also generate research memoranda, or identify relevant exemplars from a firm’s internal repository of work product. And while even general purpose tools such as ChatGPT and Bard will respond to prompts asking them to write complete legal briefs with something that superficially resembles a brief, GenAI tools that can draft a persuasive motion or brief addressing complex factual and legal issues do not appear to be on the horizon yet.15

As the focus of current GenAI legal tools demonstrates, GenAI is still most useful in tasks specifically tailored to GenAI’s capabilities. GenAI is likely to transform how litigators handle these time-consuming and often-mechanical tasks, and to free up time for attorneys to focus on the tasks that AI is not equipped to handle—specifically, the tasks that require creativity, strategy, and performance.

III.  Generative AI’s Shortcomings and Pitfalls

Prompt: “What are some problems with using generative AI for legal analysis?”

Bard Response: “There are a number of potential problems with using generative AI for legal analysis. These include:

  • Accuracy: Generative AI models are trained on large datasets of text and code, but they can still make mistakes. For example, a generative AI model might generate text that is factually incorrect or that does not follow the law.
  • Bias: Generative AI models are trained on data that is created by humans, and this data can reflect the biases of the people who created it. For example, a generative AI model that is trained on a dataset of legal documents might generate text that is biased against certain groups of people.
  • Interpretability: Generative AI models are often complex and difficult to interpret. This can make it difficult for lawyers to understand how the models work and to trust the results that they produce.
  • Liability: If a generative AI model produces incorrect or biased text, the lawyers who use the model could be held liable for any damages that result.”

Shortly after GenAI caught the attention of the media and the general public, some of its shortcomings and limitations became apparent. For example, early users of ChatGPT quickly discovered two alarming phenomena: its ability to “hallucinate”16—which essentially describes GenAI “making things up”—and its potential for bias.17 Google and OpenAI have both also warned users about these risks in their GenAI products. For example, Google has cautioned that GenAI tools can “provide[] a convincing but completely made-up answer.”18 And OpenAI has acknowledged that ChatGPT has limitations, including its tendency to “write[] plausible-sounding but incorrect or nonsensical answers,” and its occasional willingness to “respond to harmful instructions or exhibit biased behavior.”19

These limitations are at least in part an unavoidable consequence of how GenAI chatbots like ChatGPT and Bard work. At root, GenAI chatbots are designed to perform a simple task: given a sequence of text, select the most appropriate next word to continue the sequence. When presented with a new prompt, this is how the chatbot selects the first word of its response; and thereafter, every subsequent word is selected based on the prompt and the contents of the answer thus far.20 The resulting responses typically sound human-like because the underlying LLM has been trained on such vast amounts of data that it has learned, almost unerringly, what a comprehensible sentence should look like. But because each word is selected for how plausible it sounds rather than for whether it is true, factual accuracy is not a direct concern, and biases common in the training data can influence GenAI to tend toward similar biases in its own responses. These tendencies to output false information and reproduce biases may be reduced through additional training. For example, OpenAI uses Reinforcement Learning from Human Feedback—where humans provide exemplary answers to prompts, and rate different responses provided by the GenAI itself—to reduce unwanted behavior such as hallucinations and biases.21 Although the creators of ChatGPT are hopeful that they can use such techniques to eliminate hallucinations entirely,22 they have not yet achieved that goal.
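The loop described above can be sketched in a few lines of Python. Here, next_token_probs is a placeholder for the trained LLM itself; the point of the sketch is that every step simply samples a plausible continuation, and no step checks the output against ground truth.

```python
import random

def next_token_probs(context: list[str]) -> dict[str, float]:
    # Placeholder for the trained LLM: maps the token sequence so far
    # to a probability distribution over the next token.
    raise NotImplementedError

def generate(prompt_tokens: list[str], max_tokens: int = 100) -> list[str]:
    output: list[str] = []
    for _ in range(max_tokens):
        # The model sees the prompt plus everything generated so far.
        probs = next_token_probs(prompt_tokens + output)
        tokens, weights = zip(*probs.items())
        # Sample the next token by probability. Nothing here verifies
        # factual accuracy, only plausibility given the training data,
        # which is why fluent "hallucinations" can emerge.
        token = random.choices(tokens, weights=weights, k=1)[0]
        if token == "<end>":
            break
        output.append(token)
    return output
```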

Using GenAI for litigation tasks, specifically, also raises a number of other concerns. Legal tasks that involve review and analysis of client documents inherently raise concerns of privilege and confidentiality. Document review tasks involving materials produced in litigation may similarly involve documents produced subject to a protective order, whose confidentiality must be maintained. Because effective use of GenAI requires it to “learn” from the materials provided to it, measures must be taken to ensure that the GenAI is not capable of disclosing sensitive materials from one matter or client in an unrelated matter or, worse, to a different customer.23

These limitations are sufficient reason to be wary of using generalized GenAI tools, such as ChatGPT or Bard, for sensitive and critical legal tasks. However, the developers of GenAI tools designed for the legal market have taken steps to minimize the risk that these shortcomings will impact the discrete tasks those tools are currently designed to perform. The tools also advise that human supervision is necessary, including to ensure that lawyers are complying with rules of professional ethics. For example, to reduce the risk of “hallucinations,” Casetext integrated CoCounsel with its legal search tool, to ensure that answers to legal questions are referenced from existing precedent only.24 Casetext also claims to safeguard confidential and privileged client information by ensuring that material uploaded to CoCounsel is not shared with OpenAI—and thus cannot be used to train the underlying LLM—and that all client data is handled according to strict, externally validated requirements.25 Still, the strict measures advertised by Casetext underscore the need for practitioners to be wary when evaluating a new GenAI tool, to question vendors and developers as to the steps taken to address these problems inherent to GenAI, and to remain vigilant in checking GenAI’s output for accuracy and thoroughness.

IV.  Will Generative AI Replace Litigators, or Give Them Superpowers?

Prompt: “How will litigators use generative artificial intelligence to help their work?”

Bard Response: “Overall, generative AI has the potential to revolutionize the way that litigators work. It can help lawyers to save time and effort, to improve the accuracy and quality of their work, and to make better decisions about their cases. However, it is important to note that generative AI is not a replacement for human lawyers. It is a tool that can be used to help lawyers do their jobs better, but it is not a substitute for human judgment and expertise.”

Notwithstanding some of the hyperbole in recent press reports,26 GenAI is far from being able to replace human attorneys. Beyond the inherent shortcomings of GenAI tools, such as hallucination and limitations in the GenAI training sets, even the most advanced GenAI tools do not have the capabilities that are necessary for any commercial litigator, and they cannot perform tasks that arise every day in complex litigation. As Casetext—one of the GenAI tools tailored to litigation—acknowledges on its website, while GenAI can handle certain substantive legal tasks, there are “dozens and dozens of things AI can’t do—make strategic decisions, counsel clients, attend court appearances, and take depositions, to name just a few.”27 GenAI tools can provide answers to questions in seconds that a seasoned human litigator (or team of litigators) might take days to answer. But those tools cannot deliver an impassioned closing argument that adapts in real time to the jurors’ reactions. GenAI still does not understand how to make a brief “sing,” or to write in a way that will appeal to a particular judge, especially in situations involving the novel legal issues and complicated fact patterns that invariably arise in complex commercial litigation.

Indeed, the very steps taken to limit the shortcomings of GenAI tools for the legal market also restrict GenAI’s ability to replicate these more creative and performative aspects of litigation and legal strategy. For example, an experienced litigator does not merely draw on the facts of the case and reported decisions. He or she draws on experience litigating similar issues in different cases with similar or potentially overlapping facts. The need to limit access to and use of confidential and privileged information, however, means that GenAI legal tools are intentionally deprived of the ability to learn from other matters (beyond public filings that the tool has processed as part of its training). Similarly, litigation strategy rarely involves the rote application of known precedents. The best litigators know how to exercise creativity to apply analogous (but not directly related) precedents to a new set of facts, to interpret unhelpful precedents in ways that distinguish the facts of a particular case, and to devise unconventional arguments or ways to frame the facts to support a desired conclusion or outcome. The need to control for hallucinations in GenAI, however, necessarily hampers its ability to exercise such creativity within the practice of law. There is a clear line between interpreting a case in an unconventional (but defensible) way, and simply making up new case law. And although human litigators are equipped to toe that line and make those strategic decisions, for a GenAI tool to be reliable, it must consistently favor accuracy over creativity.

Rather than replacing litigators, GenAI will become a critical and force-multiplying tool for them. Of course, new technologies have improved efficiency in the legal profession before. As one example, legal research databases with search tools allowed litigators to find key cases faster and spend less time poring through hard copy reporters. GenAI tools have even greater promise. They are not just about saving time, but also can (or will soon be able to) perform tasks that litigators may have thought impossible or at least impractical, from finding support for key themes across an impossibly large universe of documents, to near-instantaneously synthesizing and summarizing decades of case law, to providing real time assistance in depositions, hearings, and trials. Lawyers who do not incorporate these tools into their practice will be at a disadvantage compared to litigators who do.

***

If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to reach out to:

Ryan Landes
Email: ryanlandes@quinnemanuel.com
Phone: 213-443-3145

Ron Hagiz
Email: ronhagiz@quinnemanuel.com
Phone: 212-849-7453

Josef Ansorge
Email: josefansorge@quinnemanuel.com
Phone: 202-538-8267

To view more memoranda, please visit www.quinnemanuel.com/the-firm/publications/. To update information or unsubscribe, please email updates@quinnemanuel.com.

 

1 See generally Weizenbaum, J., “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Communications of the ACM, Vol. 9, No. 1 (Jan. 1966), available at https://dl.acm.org/doi/pdf/10.1145/365153.365168.

2 See Thompson, C., “I Chat, Therefore I Am…,” Discover (May 2, 2007), available at https://www.discovermagazine.com/technology/i-chat-therefore-i-am; “Alice chatbot wins for third time,” BBC News (Sept. 20, 2004), available at http://news.bbc.co.uk/2/hi/technology/3672424.stm.

3 See generally “Natural Language Processing Glossary,” available at https://www.arm.com/glossary/natural-language-processing.

4 See, e.g., Lee, A., “What Are Large Language Models Used For?,” NVIDIA (Jan. 26, 2023), available at https://blogs.nvidia.com/blog/2023/01/26/what-are-large-language-models-used-for/.

5 Brown, et al., “Language Models are Few-Shot Learners,” Proceedings of the 34th International Conference on Neural Information Processing Systems, pp. 1877-1901 (Dec. 2020), available at https://arxiv.org/abs/2005.14165.

6 See, e.g., Lambert, N., Castricato, L., von Werra, L., Havrilla, A., “Illustrating Reinforcement Learning from Human Feedback (RLHF),” Hugging Face (Dec. 9, 2022), available at https://huggingface.co/blog/rlhf.

7 Choi, J., Hickman, K., Monahan, A., Schwarcz, D., “ChatGPT Goes to Law School,” Minnesota Legal Studies Research Paper No. 23-03 (Jan. 23, 2023), available at https://ssrn.com/abstract=4335905.

8 Sloan, K., “Bar exam score shows AI can keep up with ‘human lawyers,’ researchers say,” Reuters (Mar. 15, 2023), available at https://www.reuters.com/technology/bar-exam-score-shows-ai-can-keep-up-with-human-lawyers-researchers-say-2023-03-15/.

9 Katz, D., Bommarito, M., Gao, S., Arredondo, P., “GPT-4 Passes the Bar Exam” (Mar. 15, 2023), available at https://ssrn.com/abstract=4389233.

10 See, e.g., Cushman, J., “ChatGPT: Poems and Secrets,” Library Innovation Lab (Dec. 20, 2022), available at https://lil.law.harvard.edu/blog/2022/12/20/chatgpt-poems-and-secrets/; Gandhi, M., “What is ChatGPT and how to use it to write poems, songs & essays?,” Newsroom Post (Dec. 27, 2022), available at https://newsroompost.com/tech/what-is-chatgpt-and-how-to-use-it-to-write-poems-songs-essays/5219101.html.

11 See, e.g., Wilkins, S., “ChatGPT Is Impressive, But Can (and Should) It Be Used in Legal?,” Law.com (Dec. 15, 2022), available at https://www.law.com/legaltechnews/2022/12/15/chatgpt-is-impressive-but-can-and-should-it-be-used-in-legal/.

12 See, e.g., Rothman, J., “ChatGPT Is Scarily Good At Answering Legal Questions,” Above the Law (Feb. 15, 2023), available at https://abovethelaw.com/2023/02/chatgpt-is-scarily-good-at-answering-legal-questions/.

13 Casetext, CoCounsel, available at https://casetext.com/.

14 GenAI has also been the subject of litigation, particularly when applied in creative endeavors, such as art, and engineering tasks, such as programming. The legal issues surrounding the use of GenAI in those industries are discussed in Generative AI & Intellectual Property: Questions Surrounding Novel Machine Learning and Artificial Intelligence Tools, Quinn Emanuel Client Note (Feb. 9, 2023).

15  ChatGPT has been used to draft a motion to dismiss for a hypothetical case involving a simplified fact pattern, but few real cases are so straightforward. See “GPT-4 and Law: ChatGPT Applies Copyright Law,” available at https://youtu.be/nqZcrhR8yPU.

16 See, e.g., Kravitz, A., “ChatGPT and the Future of Corporate Legal Work: Insights and Hallucinations,” Fordham Journal of Corporate and Financial Law (Mar. 25, 2023), available at https://news.law.fordham.edu/jcfl/2023/03/25/chatgpt-and-the-future-of-corporate-legal-work-insights-and-hallucinations/.

17 See, e.g., “ChatGPT and large language model bias,” 60 Minutes Overtime (Mar. 5, 2023), available at https://www.cbsnews.com/news/chatgpt-large-language-model-bias-60-minutes-2023-03-05/.

18 More, R., “Google cautions against ‘hallucinating’ chatbots, report says,” Reuters (Feb. 10, 2023), available at https://www.reuters.com/technology/google-cautions-against-hallucinating-chatbots-report-2023-02-11/.

19  See OpenAI, Blog, “Limitations,” available at https://openai.com/blog/chatgpt.

20 For a more detailed and technical description of how GenAI tools such as ChatGPT perform this task, see Wolfram, S., “What Is ChatGPT Doing…and Why Does It Work?,” Writings (Feb. 14, 2023), available at https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/.

21 See OpenAI, Research, available at https://openai.com/research/instruction-following; Heikkila, M., “How OpenAI is trying to make ChatGPT safer and less biased,” MIT Technology Review (Feb. 21, 2023), available at https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/.

22 Smith, C., “ChatGPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy,” Forbes (Mar. 15, 2023), available at https://www.forbes.com/sites/craigsmith/2023/03/15/gpt-4-creator-ilya-sutskever-on-ai-hallucinations-and-ai-democracy/?sh=6055b0e11218.

23 See, e.g., Sayers, F., “ChatGPT and Ethics: Can Generative AI Break Privilege and Waive Confidentiality?,” Law.com (Jan. 26, 2023), available at https://www.law.com/legaltechnews/2023/01/26/chatgpt-and-ethics-can-generative-ai-break-privilege-and-waive-confidentiality/.

24 See Casetext, “Meet CoCounsel—the world’s first AI legal assistant,” available at https://casetext.com/blog/casetext-announces-cocounsel-ai-legal-assistant/. Casetext also claims that it further trained and fine-tuned CoCounsel through human testing to avoid hallucinations.

25  See id.

26  See, e.g., https://www.nytimes.com/2023/04/10/technology/ai-is-coming-for-lawyers-again.html.

27 https://casetext.com/blog/casetext-think-chatgpt-will-eliminate-lawyers/.