
Re-thinking Clinical Safety: The Impact of AI in Health and Social Care Software

Artificial Intelligence (AI) is rapidly reshaping health and social care IT, offering powerful capabilities in workflow optimisation, diagnostics, triage and personalised treatment. However, its adoption also presents new clinical safety risks, especially where current safety assessment frameworks may no longer be fully sufficient. In the UK, clinical safety is defined under NHS standard DCB0129 as “the avoidance of unintended or unexpected harm to patients that is caused by the use of health IT systems” (NHS Digital, 2023a).

While this standard provides robust processes for traditional software, AI introduces uncertainties, adaptivity, and non-deterministic behaviours that challenge existing models. AI is disrupting clinical safety as traditionally understood, and standards such as DCB0129 and DCB0160 (NHS Digital, 2023b) must evolve to remain effective.

Julie Tyas, Social Care Safety Officer from The Access Group's Governance, Risk and Compliance team, writes about Re-thinking Clinical Safety in the light of new AI solutions.


by Liam Sheasby

Healthcare writer

Posted 23/06/2025

Two surgeons training using VR and AI technology.

Traditional Clinical Safety Assurance: DCB0129 and DCB0160

Clinical risk management for health IT in the UK is governed by two key NHS standards: DCB0129 and DCB0160. Together, these standards form the backbone of patient safety and system reliability in digital healthcare.

  • DCB0129 applies to software manufacturers and governs the development of clinical applications.
  • DCB0160 is aimed at healthcare organisations and covers the deployment and use of these systems (NHS Digital, 2023a; NHS Digital, 2023b).


Both standards require the following core components:

  • A Clinical Risk Management Plan
  • Hazard identification and analysis
  • A structured Clinical Safety Case
  • Appointment of Clinical Safety Officers (CSOs)
  • Clear documentation of mitigation strategies and controls


These methods have proven effective for traditional systems - those that are deterministic, produce repeatable outputs, and operate on explicitly coded logic. In such environments, clinical safety assessments can be exhaustive and conclusive, because system behaviour is predictable and bounded.

However, these assumptions no longer hold in the context of AI. Unlike traditional software, AI systems are often adaptive, non-deterministic, and opaque in their decision-making. This marks a fundamental shift that challenges the foundations of existing clinical safety frameworks. The pace of AI development only amplifies this disruption. Some experts even suggest that the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) may be closer than previously anticipated (Nature, 2024). As a result, there is an urgent need to rethink how we ensure safety, accountability, and trust in AI-driven clinical environments.

How AI Challenges Conventional Clinical Safety Standards

AI introduces a range of challenges that traditional clinical safety frameworks were not designed to address. These include:

Lack of Transparency in AI Systems

Many AI models—especially those based on deep learning—operate as “black boxes,” offering little insight into how decisions are made. This lack of explainability undermines clinicians’ ability to verify outputs and increases the risk of uncritical acceptance of AI-generated recommendations (MHRA, 2025).
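To make this tangible, here is a minimal, illustrative sketch of one common mitigation: fitting a small, human-readable surrogate model to a black-box model's own predictions so that its behaviour can be inspected. The model, features and data below are hypothetical stand-ins, not a description of any particular clinical system.

```python
# Illustrative only: approximating a black-box model with an interpretable surrogate.
# The black-box model, features, and data here are hypothetical placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.datasets import make_classification

# Stand-in for an opaque clinical model (in practice, e.g. a deep network).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow, human-readable tree to the black box's *predictions*,
# not the original labels, so the tree describes what the model actually does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box?
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```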

Non-Deterministic and Evolving Behaviour

Unlike traditional software, machine learning models can yield different results in similar clinical scenarios due to subtle variations in input data. Over time, their performance may degrade—a phenomenon known as model drift. Current safety standards assume consistent system behaviour, which does not hold for adaptive AI systems (UK Government, 2023).
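As a rough illustration of what drift monitoring can look like in practice, the sketch below compares a model's recent risk scores against a baseline distribution using the Population Stability Index. The scores, bins and the 0.2 alert threshold are illustrative assumptions rather than validated values.

```python
# Illustrative only: a simple drift check comparing recent model scores with a
# baseline distribution using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(baseline, recent, n_bins=10):
    """PSI between two score distributions; larger values indicate more drift."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)        # scores assumed to be probabilities
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)          # avoid log(0) and division by zero
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

baseline_scores = np.random.beta(2, 5, 5000)          # scores captured at go-live
recent_scores = np.random.beta(3, 4, 5000)            # scores from the latest period
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:                                         # commonly quoted rule of thumb
    print(f"PSI={psi:.3f}: investigate possible model drift")
```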

Bias Embedded in Training Data

AI systems trained on biased datasets can perpetuate or even amplify health disparities. A notable example involved an AI tool in U.S. hospitals that underestimated the care needs of black patients due to biased historical data (Obermeyer et al., 2019). Without proactive bias detection and mitigation, such systems can cause systemic harm.
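A simple first step towards detecting this kind of harm is to compare error rates across patient groups. The sketch below checks whether a model misses high-need patients more often in one group than another; the data, group labels and tolerance are hypothetical.

```python
# Illustrative only: a basic fairness check comparing false-negative rates across
# patient groups. The predictions, group labels and tolerance are hypothetical.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of genuinely high-need patients the model failed to flag."""
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 0)) if positives.any() else 0.0

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])      # true high-need patients
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])      # model's flags
group = np.array(["A", "B", "A", "A", "B", "B", "A", "B", "B", "A"])

rates = {g: false_negative_rate(y_true[group == g], y_pred[group == g])
         for g in np.unique(group)}
print(rates)
if max(rates.values()) - min(rates.values()) > 0.1:     # hypothetical tolerance
    print("Disparity in missed high-need patients: investigate training data bias")
```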

Automation Bias and Human Factors

Clinicians may place undue trust in AI outputs—a cognitive bias known as automation bias. Even when AI recommendations conflict with clinical judgment, users may defer to the system (Li et al., 2024). This subtle but serious risk is often overlooked in traditional hazard assessments.

A carer using mobile care software with AI features.

Evolving the Clinical Safety Framework for AI in Healthcare

To effectively address the unique risks posed by AI, the existing NHS clinical safety standards—DCB0129 and DCB0160—must be expanded or supplemented with AI-specific requirements. Key enhancements should include:

  • AI-Specific Hazard Taxonomies

Safety cases should incorporate hazards unique to AI systems, such as data drift, algorithmic bias, and limitations in explainability. These taxonomies must be dynamically maintained throughout the system’s lifecycle (NHS Digital, 2023a; NHS Digital, 2023b).

  • Explainability and Interpretability

The use of Explainable AI (XAI) techniques should be mandatory for systems that provide clinical recommendations. Even surrogate methods—such as saliency maps or rule-based approximations—can support clinicians in interpreting AI outputs (MHRA, 2025).

  • Continuous Monitoring and Revalidation

Adaptive AI systems require ongoing performance monitoring beyond initial deployment. Clinical risk management plans should mandate periodic reassessment, particularly after model retraining or exposure to new datasets (UK Government, 2023).

  • Enhanced Multidisciplinary Oversight

Clinical safety governance must involve a broader range of expertise, including data scientists, ethicists, and regulatory specialists, alongside clinicians. This ensures a comprehensive evaluation of technical, ethical, and societal risks.

  • Simulation-Based Testing

Traditional test scripts are insufficient for validating AI systems. Scenario-based testing and edge-case simulations should be employed to uncover unpredictable or unsafe model behaviours (MHRA, 2025).
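As a flavour of what scenario-based testing might look like, the sketch below uses parametrised tests to probe boundary values and implausible inputs for a hypothetical triage scoring wrapper; the function, ranges and scoring rule are placeholder assumptions.

```python
# Illustrative only: scenario-based tests probing edge cases of a hypothetical
# triage scoring function; triage_score and its bounds are placeholder assumptions.
import pytest

def triage_score(age, systolic_bp, spo2):
    """Placeholder for a wrapper around an AI triage model."""
    if not (0 <= age <= 120 and 40 <= systolic_bp <= 300 and 0 <= spo2 <= 100):
        raise ValueError("physiologically implausible input")
    return min(1.0, 0.2 + 0.004 * age + (0.3 if spo2 < 92 else 0.0))

@pytest.mark.parametrize("age, bp, spo2", [
    (0, 70, 99),       # newborn
    (120, 90, 95),     # extreme age
    (45, 300, 88),     # boundary blood pressure with hypoxia
])
def test_edge_cases_stay_in_range(age, bp, spo2):
    assert 0.0 <= triage_score(age, bp, spo2) <= 1.0

@pytest.mark.parametrize("age, bp, spo2", [(-1, 80, 97), (30, 20, 97), (30, 80, 150)])
def test_implausible_inputs_are_rejected(age, bp, spo2):
    with pytest.raises(ValueError):
        triage_score(age, bp, spo2)
```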

  • Human-in-the-Loop Design

AI systems must be designed to support—not replace—clinical decision-making. Clinicians should retain ultimate authority, with AI serving as an assistive tool (European Commission, 2023).
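One lightweight way to express this principle in software is to record AI output only as a suggestion, and make the clinician's decision the thing that drives any action. The sketch below is a hypothetical illustration of that pattern, not a real product interface.

```python
# Illustrative only: a minimal human-in-the-loop gate. The AI recommendation is
# recorded as a suggestion and nothing is actioned without clinician sign-off.
# All names and fields here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str
    confidence: float
    clinician_decision: Optional[str] = None   # "accepted", "amended" or "rejected"

def apply_recommendation(rec: Recommendation, clinician_decision: str) -> str:
    """The clinician's decision, not the AI output, determines the action taken."""
    rec.clinician_decision = clinician_decision
    if clinician_decision == "accepted":
        return f"Action recorded for {rec.patient_id}: {rec.suggestion}"
    return f"AI suggestion for {rec.patient_id} set aside by clinician ({clinician_decision})"

rec = Recommendation("P-0001", "review medication dose", confidence=0.71)
print(apply_recommendation(rec, "rejected"))
```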

Ethical and Regulatory Considerations

As clinical safety frameworks evolve to accommodate AI, they must align with emerging regulatory landscapes and uphold ethical standards. Key regulatory developments include:

  • UK AI Regulation White Paper (2023)

Proposes a context-specific, flexible regulatory framework that could significantly influence the oversight and enforcement of AI in clinical environments (UK Government, 2023).

  • MHRA’s Software and AI as a Medical Device (SaMD) Guidance

Emphasises the importance of transparency and adaptability in AI systems, reinforcing the need to embed AI-specific considerations into clinical safety cases (MHRA, 2025).

  • EU AI Act

Classifies medical AI systems as “high-risk,” requiring stringent transparency, monitoring, and accountability measures (European Commission, 2025).


From an ethical standpoint, these regulatory shifts underscore the importance of ensuring that AI does not exacerbate existing health disparities. It is essential that affected populations are meaningfully involved in the design, development, and evaluation of clinical AI systems to promote fairness, trust, and inclusivity.

A nurse using healthcare software supported by AI

The integration of AI into health and social care software presents both a tremendous opportunity and a profound responsibility. While standards like DCB0129 and DCB0160 have laid a solid foundation for clinical risk management in traditional IT systems, they were not designed to address the complexities of adaptive, opaque, or continuously learning technologies.

At The Access Group, as our health, support, and care product portfolio expands to include innovative AI-driven functionalities, our Clinical Safety team is proactively evolving its assessment processes. We are actively incorporating AI-specific enhancements into our clinical safety framework to ensure it remains robust, responsive, and future-ready.

As AI becomes increasingly embedded in health and social care workflows, our approach to clinical safety must evolve: from static to dynamic, from linear to iterative, and from single-discipline to multidisciplinary. By modernising our safety frameworks to meet the demands of AI, we can ensure that these technologies enhance - rather than compromise - patient safety and trust.


By Liam Sheasby

Healthcare writer

Liam Sheasby is a Healthcare writer in the Access HSC team, with a Journalism degree in his pocket and over eight years of experience as a writer, editor, and marketing executive.

This breadth of experience offers a well-rounded approach to content writing for the Health, Support and Care team. Liam ticks all the SEO boxes while producing easy-to-read healthcare content for curious minds and potential customers.