Some lawyers have been reprimanded for citing fictitious cases
In a judgment that should unsettle every practising lawyer in England and Wales, the Upper Tribunal has delivered a stark and timely warning: if you put fiction before a court, you should expect real consequences. The judgment is already being framed as a case about artificial intelligence. That characterisation, while convenient, misses the point entirely. This is not a case about technology. It is about professional discipline, accountability and a legal culture that may have grown a little too comfortable with shortcuts.
The facts, on any reading, are extraordinary. Legal representatives cited authorities that simply did not exist. These were not obscure precedents or marginal errors buried in footnotes. They were entirely fictitious cases, presented with the confidence and structure of genuine legal authorities. What might once have been dismissed as careless drafting has taken on a more worrying dimension. It reflects a profession increasingly exposed to tools that generate persuasive but unreliable outputs, and a failure, in some cases, to interrogate those outputs rigorously.
The tribunal’s response is notable for its clarity. It refused to allow artificial intelligence to become a convenient scapegoat. This was not, in its view, a case about “AI hallucinations”; it was about lawyers ignoring the most fundamental obligation to the court: not to mislead. That duty is neither new nor optional. It sits at the very core of legal practice. Whether an error originates from a junior fee-earner, a precedent bank or a machine is irrelevant. The responsibility lies with those who put their names to the document.
Perhaps the most striking aspect of the judgment is its treatment of supervision. The tribunal makes it clear that a supervising solicitor who fails to check the work of others may be more culpable than the individual who made the original mistake. That is a powerful statement. It reframes supervision not as an administrative layer, but as a substantive safeguard. In an era where work is increasingly delegated and technology is readily available, the role of supervision becomes more, not less, critical.
This raises uncomfortable questions for the profession. Have we become too reliant on efficiency at the expense of accuracy? Have we allowed the pressure of volume and turnaround times to dilute the care that legal work demands? The judgment suggests that, in some cases, the answer may be yes.
It also exposes a deeper risk. The latest AI tools are capable of producing outputs that sound fluent, structured and convincing. They mimic the tone and form of legal reasoning with remarkable precision. But beneath that surface coherence, they may be fundamentally flawed. They can cite cases that do not exist, attribute reasoning to authorities that never said it and construct arguments that collapse under scrutiny. The danger lies not in obvious error, but in plausible inaccuracy.
For lawyers, this presents a unique challenge. The traditional safeguards of legal practice rely on the assumption that sources can be verified and authorities traced. AI disrupts that assumption. It introduces a veneer of apparent authority that must be actively tested and checked. The Tribunal’s message is clear: the obligation to verify remains absolute. If anything, it is heightened in the face of such tools.
Another dimension of this decision deserves equal attention. The Tribunal addresses the use of open, publicly available AI platforms in handling client information; its warning is unequivocal. Uploading confidential documents to such systems risks placing that information into the public domain. In doing so, it may breach client confidentiality and waive privilege. This is not a theoretical concern. It is a real and immediate risk, particularly for practitioners who may not fully understand how these tools process and store data.
In a profession built on trust, that risk cannot be taken lightly. Clients rely on their lawyers to safeguard their information with the highest degree of care. The casual use of AI tools for summarising documents or refining correspondence, without proper safeguards, may amount to a serious ethical failing. It is an area where regulatory scrutiny is likely to intensify.
The Tribunal has not limited itself to criticism. It has taken concrete steps to address the issue. Procedural changes now require legal representatives to confirm that the authorities they cite exist, can be located and support the propositions for which they are cited. This may appear to be a modest adjustment, but its implications are significant. It places the burden of verification squarely on the practitioner and signals that the court will no longer assume that this basic level of diligence has been exercised.
For clients, particularly those navigating the complexities of immigration law, the stakes are high. These are not abstract procedural concerns. Errors of this nature can affect outcomes, delay proceedings and erode confidence in the system. The Tribunal itself recognises that many individuals appearing before it are vulnerable. They are entitled to expect that the legal arguments advanced on their behalf are grounded in reality, not constructed from unreliable sources.
From a practitioner’s perspective, the decision feels necessary, if overdue. The legal profession has, for some time, been grappling with the integration of new technologies. There has been an understandable enthusiasm for tools that promise efficiency and accessibility. But that enthusiasm has not always been matched by a corresponding awareness of risk. This judgment serves as a corrective. It reminds us that innovation must not come at the expense of fundamental duties.
It also invites a broader reflection on professional culture. The law has always demanded precision. It does not tolerate approximation or assumption. An authority either exists or it does not; a proposition is either supported or it is not. There is no middle ground. In that sense, the standard has not changed. What has changed is the environment in which lawyers operate and the tools they use.
The challenge now is to ensure that professional standards keep pace with that environment. This requires more than individual diligence. It calls for proper training, clear internal policies, and a renewed emphasis on supervision. It requires firms to engage seriously with the risks of AI, rather than treating it as a harmless convenience.
Ultimately, the Tribunal’s message is disarmingly simple. Artificial intelligence did not mislead the court. Lawyers did. The failure was not technological, but human. It was a failure to check, to question and to take responsibility. The law does not accept “almost right”. It demands that it be right.
The writer is a London-based lawyer.