Legal Training Should Embrace Generative AI Large Language Models



To what extent large language models, and foundation models more broadly, will be integrated into the legal domain remains unclear. There is a wide spectrum of opinion on the safety and trustworthiness of such models, particularly for end-product applications. Concerns about how these tools will work as co-pilots for legal workflows and processes tend to spotlight the professional conduct and responsibility of practitioners.

The California State Bar has recently addressed these questions, and cases such as Mata v. Avianca and People v. Zachariah C. Crabill demonstrate that while existing rules of conduct should sufficiently account for the responsible use of generative AI, these technologies remain objects of fascination and fear. This behavioral uncertainty creates a gap between perceived and actual use.

Law firms and other legal professionals have observed that when sandboxing firm-wide deployments of generative AI, there is little convergence on when the tools should be used or on the specific standard of performance they should meet. But there is mounting pressure to acknowledge these tools' incredible potential to bring efficiency to the discipline and profession. So how do we reconcile these countervailing forces?

The best response is to focus on the future of legal education and the training of legal professionals. The hard lines drawn around perceived use, and the practitioner responsibilities surrounding that use, derive from a sense that these technologies aid the completion of legal tasks.

But there are no clear definitions or metrics for what exactly counts as a legal task, nor requirements to ensure the task is done at a certain level of quality. When we're asked about the delivery of quality legal services, such as what distinguishes a good contract from a great one, we often receive answers that are hard to quantify. Some might point to years of experience with this kind of work; others may cite the number of deals or transactions they have handled.

The lack of explicit answers stems from one of the complexities of the legal industry: much of the work and value-add is implicit, tucked away in personal experience and the specific know-how of a client and industry base.

In a paper we presented earlier this year, 12 senior associates and partners at DLA Piper annotated a set of five contracts, identifying clauses they found potentially conflicting or contradictory. Not only were the annotations highly variable, but only two lawyers converged on a set of clauses they both found contradictory.

Even then, their reasoning wasn't the same, because the greatest value-add for legal practitioners lies precisely in the advisory component. The specificity of issue-spotting and the eloquence of argumentation made the work highly varied.

Until the recent advent of large language models, the profession had been interested in standardization, such as the construction of playbooks or guidelines on best practice. Yet we've attributed immense value in the opposite direction: the legal profession prizes personalization. The wisdom of experience and mentorship that exists in legal institutions shows the extent of personality and individuality in the practice.

We should be imagining how these tools can capture expertise that’s tailorable at the individual level. The advent of custom GPTs is enabling and even encouraging this type of increased personalization. At the Stanford Center for Legal Informatics, we’ve begun to experiment in this direction in two ways: simulating contract negotiation and implicit contract redlining. In the former, we’re inspired by 2023 research on generative and communicative agents using large language models.

We're working with experienced merger and acquisition partners to develop tools that would allow lawyers to pilot negotiations at varying starting positions, degrees of leverage, legal complexity, and impact. In the latter, we're working with law firms on the specific redlines they make in contracts.
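To make the negotiation-simulation idea concrete, here is a minimal sketch of what a two-agent negotiation loop could look like, built on an OpenAI-style chat API. Everything in it is an assumption for illustration: the model name, the prompts, the indemnification-cap scenario, and the "position" and "leverage" parameters are hypothetical stand-ins, not the tools under development at CodeX.

```python
# Illustrative sketch only: two LLM "counsel" agents trading turns in a
# simulated negotiation. Prompts, scenario, and model choice are hypothetical.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"    # any capable chat model; an assumption, not a requirement


def negotiator(role: str, position: str, leverage: str) -> dict:
    """Build a system prompt describing one side's role, position, and leverage."""
    return {
        "role": "system",
        "content": (
            f"You are {role} counsel in an M&A negotiation. "
            f"Your starting position: {position}. Your leverage: {leverage}. "
            "Negotiate the indemnification cap clause in short, concrete turns."
        ),
    }


def simulate(rounds: int = 3) -> list[str]:
    # Each side keeps its own conversation history, seeded with its system prompt.
    buyer = [negotiator("buyer's", "cap at 10% of purchase price", "competing bidders")]
    seller = [negotiator("seller's", "cap at 25% of purchase price", "unique asset")]
    transcript = []
    last_message = "Open the negotiation with your proposed indemnification cap."
    for _ in range(rounds):
        for side, history in (("buyer", buyer), ("seller", seller)):
            # The other side's latest turn arrives as a user message.
            history.append({"role": "user", "content": last_message})
            reply = client.chat.completions.create(model=MODEL, messages=history)
            last_message = reply.choices[0].message.content
            history.append({"role": "assistant", "content": last_message})
            transcript.append(f"{side}: {last_message}")
    return transcript


if __name__ == "__main__":
    for turn in simulate():
        print(turn, "\n")
```

In a training setting, the instructive levers are the system prompts: varying the stated positions and leverage lets a junior lawyer watch how the simulated counterparties' arguments shift, then step into one of the seats themselves.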

On the redlining side, the goal is to tease out the voice of individual lawyers: how they spot issues, flag concerns, or provide comments. This would, for example, allow a junior associate to understand more explicitly how they should be reviewing those contracts. We also anticipate an opportunity to compare across the voices and perspectives of other lawyers.
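One simple way to picture capturing an individual lawyer's redlining voice is few-shot prompting: show a model a handful of a partner's past clause-and-comment pairs and ask it to comment on a new clause in the same style. The sketch below is purely illustrative; the example redlines, the clause, and the model are invented placeholders, not our collaborators' data or methods.

```python
# Illustrative sketch only: few-shot prompting to approximate one lawyer's
# redlining style. All example clauses and comments are invented.
from openai import OpenAI

client = OpenAI()

# Hypothetical past (clause, comment) pairs attributed to a single partner.
PARTNER_REDLINES = [
    ("The Seller shall use best efforts to obtain consents.",
     "Change 'best efforts' to 'commercially reasonable efforts'; best efforts is open-ended."),
    ("Liability is unlimited for breaches of confidentiality.",
     "Push back: carve confidentiality out of the cap only for willful breach."),
]


def redline_in_voice(new_clause: str) -> str:
    """Ask the model to comment on a new clause in the partner's demonstrated style."""
    examples = "\n\n".join(f"Clause: {c}\nComment: {r}" for c, r in PARTNER_REDLINES)
    prompt = (
        "Here are examples of how one partner comments on contract clauses:\n\n"
        f"{examples}\n\n"
        "Now comment on this clause in the same style:\n"
        f"Clause: {new_clause}\nComment:"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content


if __name__ == "__main__":
    print(redline_in_voice("The indemnity survives indefinitely after closing."))
```

The same scaffold, run with different lawyers' example sets, is what would let a junior associate compare how two partners would mark up the same clause.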

While outstanding technical and legal hurdles remain in assessing how to integrate such models into end products, we should take advantage of these models as powerful tools for training. We should consider how to better prepare law students and junior associates to interpret and react to the nuances of the industry. We have existing mentorship models in this regard. So, let's build digital versions of them to teach and cultivate this domain more dynamically.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Megan Ma is research fellow and assistant director of the Stanford Program in Law, Science, and Technology and Stanford Center for Legal Informatics (CodeX).
