The gap nobody wants to talk about
We recently commissioned independent research with Censuswide, surveying 200 UK legal professionals across fee earner and leadership roles. We wanted to understand the real picture of how AI is being used inside law firms today. The headline finding speaks directly to Munir: 59% of fee earners admit to using unapproved AI apps, like ChatGPT, often because their official firm software is slow, clunky, or lacks modern AI capabilities – whilst 68% of firm leaders believe they have full visibility and zero risk of unapproved AI usage for client work.
Read those two numbers together. More than half of fee earners are using the exact tools the Tribunal just ruled put client confidentiality and privilege at risk, and more than two-thirds of leaders don't think it's happening. Nor is the behaviour evenly distributed: 71% of paralegals and 57% of solicitors in our research admit to using unapproved AI to keep pace with workload – the very fee earners who handle the most volume and work closest to client files. Compounding this, 50% of clients now expect AI to be involved in their legal work. That market pressure is real, and it's part of what's driving the behaviour.
This isn’t a technology problem. It’s a workflow problem. Our research shows that fee earners aren’t using public AI tools out of recklessness. They’re reaching for them because the tools their firm has provided aren’t fast enough or integrated enough to keep pace with the demands of the job – something that changes with a platform like CaseMatters Evo at the heart of legal operations.
What the Tribunal actually said
It’s worth dwelling on the precise language, because it matters for how firms respond.
The Tribunal explicitly distinguished between open-source AI tools and closed-source enterprise environments. It named Microsoft Copilot as an example of the latter and confirmed that such tools are available for tasks like summarising “without these risks.”
That distinction – closed vs. open, enterprise vs. consumer, governed vs. ungoverned – is the operative one for any firm thinking about AI governance right now. The question isn’t whether your firm is using AI. It’s whether the AI your fee earners are using is the AI you think they’re using, and whether it sits inside a contractually controlled environment that keeps client data off the public internet.
The Tribunal also made clear that responsibility doesn’t sit only with the fee earner who used the tool. Supervisors are accountable for the work their teams produce, regardless of how it was generated. That changes the risk calculus significantly for anyone in a management or oversight role.
The instinctive response for many firms will be to ban AI tools outright. But the Munir ruling, read alongside the SRA's competence framework, makes clear that prohibition isn't a safe harbour. Firms have a positive obligation to equip their people to work competently, and a blanket ban that leaves fee earners unable to realise significant efficiency gains or meet client expectations doesn't solve the compliance problem – it relocates it. The answer isn't less AI. It's governed AI.
For detailed legal commentary on the ruling, DAC Beachcroft and Hill Dickinson have both published useful analyses.
Three things worth checking this week
For most firms, the immediate question is “what do we actually do on Monday morning?” Here are three practical starting points.
1. Find out what tools your fee earners are actually using
Not the tools you’ve sanctioned – the tools they’re actually opening, on actual matters, today. The gap between what leaders think is happening and what fee earners are doing is real and, in light of Munir, now carries regulatory weight.
A good starting point is understanding your firm's current exposure. Our free cyber security audit includes a dark web compromise report that tells you whether any of your staff's credentials have already been exposed, an independent security score from Microsoft, and a custom PDF report with best-practice recommendations. It won't tell you everything about shadow AI usage, but it will tell you whether the problem is already bigger than you think.
For firms operating within Microsoft 365, there are already tools available that can provide this visibility without invasive monitoring or guesswork. Solutions like Microsoft Defender for Cloud Apps can identify unsanctioned AI services, including ChatGPT and other generative AI platforms, based on real usage rather than declared policy.
This allows firms to understand which AI tools are being accessed, from where, and by whom, so that informed decisions can be made about governance and risk. Crucially, in the context of Munir, it helps firms move from assumed compliance to evidenced oversight, which is exactly what regulators will expect if AI usage is ever questioned.
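If your IT team or managed service provider wants to evidence this programmatically, the data behind Defender for Cloud Apps can also be queried through Microsoft 365 Defender's advanced hunting API. The Python sketch below is illustrative rather than a turnkey implementation: it assumes Defender for Cloud Apps is connected (so the CloudAppEvents table is populated), that an Azure AD app registration has been granted the AdvancedHunting.Read.All permission, and that the application names in the query are placeholders to swap for whatever appears in your own tenant's discovered-apps catalogue.

import requests

# Illustrative sketch only. TOKEN is assumed to come from the
# client-credentials flow for an app granted AdvancedHunting.Read.All.
TOKEN = "<access token from your Azure AD app registration>"

# CloudAppEvents is populated once Defender for Cloud Apps is connected.
# The app names below are placeholders; match them to the generative AI
# services actually discovered in your tenant.
KQL = """
CloudAppEvents
| where Timestamp > ago(30d)
| where Application has_any ("ChatGPT", "OpenAI", "Google Gemini")
| summarize Events = count(), LastSeen = max(Timestamp)
    by Application, AccountDisplayName
| order by Events desc
"""

response = requests.post(
    "https://api.security.microsoft.com/api/advancedhunting/run",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"Query": KQL},
    timeout=60,
)
response.raise_for_status()

# Each row shows who accessed which AI service, how often, and when last.
for row in response.json()["Results"]:
    print(row["Application"], row["AccountDisplayName"],
          row["Events"], row["LastSeen"])

Even a 30-day window like this is usually enough to show whether the gap between sanctioned tooling and actual usage exists in your firm, and to turn an assumed picture into an evidenced one.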
2. Audit whether your current AI tools are closed-environment
If your firm has already invested in AI tools, check whether they meet the standard the Tribunal described: a closed-source, enterprise environment that does not place client data in the public domain. If you’re not certain, your vendor should be able to confirm clearly. If they can’t, that’s an answer in itself.
3. Check whether your supervision framework covers AI-generated work
The judgment makes clear that supervisors are responsible for AI-assisted outputs from their teams. If your supervision and review processes don’t currently account for AI usage, now is the time to close that gap. The SRA will be watching how firms respond to this ruling.
The direction of travel
The legal sector has been having the AI conversation for two years. Much of it has been about possibility – what AI could do for productivity, for access to justice, for the profession. Munir moves that conversation into a different register. It’s the first moment where a court has looked directly at how lawyers are actually using AI and said: that is not acceptable.
It won’t be the last judgment of its kind.
For firms that have been waiting to see how the regulatory landscape settles before making decisions about AI – the landscape is settling. The direction is clear: closed environments, governed workflows, supervised outputs, full audit trails.
The firms navigating this well are not the ones that have turned AI off, but the ones that have found a governed, integrated route to using it. That's a practical problem with practical solutions, and it's exactly the kind of challenge we've been building for. We'll be sharing our full research findings shortly.
In the meantime, if you want to understand the risks of shadow AI in more depth – and what good looks like – we've pulled together the most relevant guidance, tools, and case studies in one place.
Explore our AI risk hub: everything your firm needs to navigate AI safely and compliantly
Research methodology
Research conducted by Censuswide on behalf of Access Legal, 20–28 April 2026. Sample: 100 legal professionals (fee earners, solicitors, legal executives, paralegals, aged 21+) and 100 legal leaders (managing directors, practice managers, aged 30+), all based in the UK. Censuswide is a member of the Market Research Society (MRS) and British Polling Council (BPC) and adheres to the MRS Code of Conduct and ESOMAR principles.