
Lawyering is thinking

May 02, 2026
Image shows digital balancing scale representing justice system. — Council of Europe website/File

Writing in 1993, Patti Ogden warned that the new online legal databases were luring researchers into a false sense of confidence. A lawyer, seduced by the ease of Westlaw, might overlook a case. Now, with large language models, the worry is that the machine will invent one.

Earlier this month, a partner at Sullivan & Cromwell, one of America’s oldest law firms, wrote to apologise to a New York bankruptcy judge. A motion the firm had filed ten days earlier contained fabricated citations and misquoted statutes. The firm advises OpenAI on the responsible deployment of artificial intelligence. The episode is one of many. Lawyers are leaning more heavily on LLMs, and, used carelessly, the tools will produce more Dietderichs: more partners apologising for filings they did not check.

A public database now tracks more than 1,100 court filings in which judges have flagged AI-generated errors.

To see why carelessness is so costly, it helps to look at what a lawyer actually does.

Litigation usually runs in four stages: understanding the facts and framing the issues; researching the law; drafting; and arguing the case in court. Each depends on the one before, and an error in the first topples the rest like a line of dominoes.

The first stage is the most important and the least suited to delegation. To frame an issue well, a lawyer must sit with the facts until the questions of law and fact reveal themselves. An LLM is tempting here because it is fluent: ask it to identify the issues in a dispute and it will produce a quick list. The problem is that the list will often be wrong. The model, reaching for plausibility, invents facts that were never in the brief and builds issues around them.

The second stage is research, and it is here that Ogden’s warning fits well. Ask a model for case law on a point and it will oblige, often with citations that look real and are not. Even when the case exists, the coverage tends to be shallow. A single legal question usually, on close reading, splits into four or five sub-issues, each turning on a slightly different line of authority. The model flattens them. It also lacks access to the paid databases where most case law lives. The best use at this stage is as a starting point: a rough map of the terrain.

The third stage is drafting, where the temptation is strongest. A draft is what the court sees. Used well, a model can pressure test a draft for gaps and contradictions. Used badly, it writes the draft for you. A profession already accused of formulaic prose will then lose the ability to write.

The fourth stage is advocacy, and here the models are at their most useful. Once a case is prepared, an LLM makes a patient sparring partner: willing to play a hostile bench, ready to probe weak points in an argument.

Sullivan & Cromwell had policies in place, yet the error still occurred. Lawyers must accept that these tools can be useful, but also that lawyering is, above all, thinking. Delegating any of the four stages to a machine hollows out the reading and thinking on which the craft depends.



The writer is a lawyer.