
The ruling that should make every law firm leader uncomfortable

Earlier this year, the Upper Tribunal handed down a judgment that every law firm leader in the UK should read. Not because it’s about immigration law. But because buried inside it is a clear, direct statement about AI that applies to every firm, every fee earner, and every matter file in the country. 

Munir v Secretary of State for the Home Department is the first decision by an English court or tribunal to address the use of AI and legal professional privilege directly. The Tribunal’s conclusion was unambiguous: uploading client material into open-source AI tools – ChatGPT, public Gemini, Google AI search – breaches client confidentiality and waives legal privilege. Any regulated legal professional who does so should notify their regulator and consult with the ICO. This isn’t just a data breach. It’s a direct violation of SRA Principle 2 (integrity) and Principle 7 (competence) – and unlike most compliance failures, it’s irreversible. Once privilege is waived, it cannot be recovered. 

It went further. The judgment places explicit responsibility on supervising lawyers for the AI-assisted work of their junior fee earners. A supervisor who fails to check that work – whether a hallucinated case came from a trainee or from ChatGPT – is, in the Tribunal’s view, potentially more culpable than the individual who produced it. 

That’s a significant shift in professional risk. And if you’re reading this thinking “that won’t be happening in our firm” – our research suggests you may want to look more carefully. 


by Andrew Stevens

General Manager, Access Legal

Posted 14/05/2026

The gap nobody wants to talk about 

We recently commissioned independent research with Censuswide, surveying 200 UK legal professionals across fee earner and leadership roles. We wanted to understand the real picture of how AI is being used inside law firms today. The headline finding speaks directly to Munir: 59% of fee earners admit to using unapproved AI apps, like ChatGPT, often because their official firm software is slow, clunky, or lacks modern AI capabilities. Yet 68% of firm leaders believe they have full visibility and zero risk of unapproved AI usage for client work. 

Read those two numbers together. More than half of fee earners are using the exact tools the Tribunal just ruled put client confidentiality and privilege at risk, while more than two-thirds of leaders don’t believe it’s happening. Nor is the usage evenly distributed: 71% of paralegals and 57% of solicitors in our research admit to using unapproved AI to keep pace with workload – the very fee earners who handle the most volume and work closest to client files. Compounding this, 50% of clients now expect AI to be involved in their legal work. That market pressure is real, and it’s part of what’s driving the behaviour. 

This isn’t a technology problem. It’s a workflow problem. Our research shows that fee earners aren’t using public AI tools out of recklessness. They’re reaching for them because the tools their firm has provided aren’t fast enough or integrated enough to keep pace with the demands of the job – something that changes with a platform like CaseMatters Evo at the heart of legal operations. 

What the Tribunal actually said 

It’s worth dwelling on the precise language, because it matters for how firms respond. 

The Tribunal explicitly distinguished between open-source AI tools and closed-source enterprise environments. It named Microsoft Copilot as an example of the latter and confirmed that such tools are available for tasks like summarising “without these risks.” 

That distinction – closed vs. open, enterprise vs. consumer, governed vs. ungoverned – is the operative one for any firm thinking about AI governance right now. The question isn’t whether your firm is using AI. It’s whether the AI your fee earners are using is the AI you think they’re using, and whether it sits inside a contractually controlled environment that keeps client data off the public internet. 

The Tribunal also made clear that responsibility doesn’t sit only with the fee earner who used the tool. Supervisors are accountable for the work their teams produce, regardless of how it was generated. That changes the risk calculus significantly for anyone in a management or oversight role. 

The instinctive response for many firms will be to ban AI tools outright. But the Munir ruling, read alongside the SRA’s competence framework, makes clear that prohibition isn’t a safe harbour. Firms have a positive obligation to equip their people to work competently, and a blanket ban that leaves fee earners without the tools they need to meet client expectations and realise genuine efficiency gains doesn’t solve the compliance problem – it relocates it. The answer isn’t less AI. It’s governed AI. 

For a detailed legal commentary on the ruling, DAC Beachcroft and Hill Dickinson have both published useful analyses. 

Three things worth checking this week 

For most firms, the immediate question is “what do we actually do on Monday morning?” Here are three practical starting points. 

1.  Find out what tools your fee earners are actually using

Not the tools you’ve sanctioned – the tools they’re actually opening, on actual matters, today. The gap between what leaders think is happening and what fee earners are doing is real and, in light of Munir, now carries regulatory weight.  

A good starting point is understanding your firm’s current exposure. Our free cyber security audit includes a dark web compromise report that tells you whether any of your staff’s credentials have already been exposed, an independent security score from Microsoft, and a custom PDF report with best practice recommendations. It won’t tell you everything about shadow AI usage, but it will tell you whether the problem is already bigger than you think. 

For firms operating within Microsoft 365, there are already tools available that can provide this visibility without invasive monitoring or guesswork. Solutions like Microsoft Defender for Cloud Apps can identify unsanctioned AI services, including tools such as ChatGPT and other generative AI platforms, based on real usage rather than declared policy. 

This allows firms to understand which AI tools are being accessed, from where, and by whom, so that informed decisions can be made about governance and risk. Crucially in the context of Munir, it helps firms move from assumed compliance to evidenced oversight, which is exactly what regulators will expect if AI usage is ever questioned. 
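For firms that prefer to start from a data export rather than a dashboard, the same triage can be sketched in a few lines. This is a minimal, illustrative script, not Defender-specific code: it assumes a discovered-apps CSV export with `app` and `user` columns (both column names are assumptions – check the headers of your actual export) and flags traffic to a small watchlist of public generative AI tools.

```python
import csv
import io

# Public generative AI tools to flag. Extend this watchlist to
# match your firm's own approved-tools policy.
PUBLIC_AI_APPS = {"chatgpt", "gemini", "claude"}

def flag_unsanctioned_ai(csv_text):
    """Return {app_name: [users]} for discovered apps matching
    the public generative-AI watchlist.

    Assumes a CSV with 'app' and 'user' columns -- adjust these
    to whatever headers your discovery export actually uses.
    """
    hits = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        app = row["app"].strip().lower()
        if app in PUBLIC_AI_APPS:
            hits.setdefault(app, []).append(row["user"])
    return hits

# Example: two fee earners show up against ChatGPT traffic.
sample = "app,user\nChatGPT,a.jones\nWestlaw,b.smith\nChatGPT,c.patel\n"
print(flag_unsanctioned_ai(sample))
# -> {'chatgpt': ['a.jones', 'c.patel']}
```

The output is deliberately simple – app name against named users – because that is the conversation the Munir ruling requires firms to be able to have: who is using what, on which matters, so governance decisions rest on evidence rather than assumption.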

2.  Audit whether your current AI tools are closed-environment

If your firm has already invested in AI tools, check whether they meet the standard the Tribunal described: a closed-source, enterprise environment that does not place client data in the public domain. If you’re not certain, your vendor should be able to confirm clearly. If they can’t, that’s an answer in itself. 

3.  Check whether your supervision framework covers AI-generated work

The judgment makes clear that supervisors are responsible for AI-assisted outputs from their teams. If your supervision and review processes don’t currently account for AI usage, now is the time to close that gap. The SRA will be watching how firms respond to this ruling. 

The direction of travel 

The legal sector has been having the AI conversation for two years. Much of it has been about possibility – what AI could do for productivity, for access to justice, for the profession. Munir moves that conversation into a different register. It’s the first moment where a court has looked directly at how lawyers are actually using AI and said: that is not acceptable. 

It won’t be the last judgment of its kind. 

For firms that have been waiting to see how the regulatory landscape settles before making decisions about AI – the landscape is settling. The direction is clear: closed environments, governed workflows, supervised outputs, full audit trails. 

The firms navigating this well are not the ones that have turned AI off, but the ones that have found a governed, integrated route to using it. That's a practical problem with practical solutions, and it's exactly the kind of challenge we've been building for. We'll be sharing our full research findings shortly. 

In the meantime, if you want to understand the risks of shadow AI in more depth – and what good looks like – we’ve pulled together the most relevant guidance, tools, and case studies in one place.

Explore our AI risk hub: everything your firm needs to navigate AI safely and compliantly  


Research methodology 
Research conducted by Censuswide on behalf of Access Legal, 20–28 April 2026. Sample: 100 legal professionals (fee earners, solicitors, legal executives, paralegals, aged 21+) and 100 legal leaders (managing directors, practice managers, aged 30+), all based in the UK. Censuswide is a member of the Market Research Society (MRS) and British Polling Council (BPC) and adheres to the MRS Code of Conduct and ESOMAR principles. 


Andrew Stevens is General Manager of Access Legal, part of The Access Group. Access Legal provides case management and practice technology to over 3,500 UK law firms, including CaseMatters Evo – a legal intelligence platform built for everyday AI usage in SME law firms.