
AI in healthcare: pre-empting unintended consequences

Date Published: 28/05/25

The advance of artificial intelligence (AI) in healthcare is underway. These rapid technological developments offer significant productivity benefits, but also introduce operational, financial, ethical and legal risks that must be understood before healthcare operators integrate AI across their services and systems.

According to the World Health Organization (WHO), AI refers to the capability of algorithms to learn from data and perform automated tasks without requiring explicit programming for each step. Generative AI is a fast-growing subset of AI where algorithms trained on large data sets can generate new content – text, images, or recommendations – based on learned patterns. One powerful form is large multi-modal models (LMMs), which can accept multiple data inputs and generate diverse outputs. These models are expected to transform diagnostics, workflow automation, personalised care, remote monitoring, drug discovery, and public health decision-making.

However, their versatility comes with unproven generalisability, opaque decision-making processes, and integration challenges that could impact clinical quality and business resilience. The WHO has recently issued detailed guidance on responsible LMM deployment in healthcare, which we examine here in the context of risks for small and mid-sized providers.

The benefits

One widely cited study from UCLA, conducted in collaboration with Unfold AI, found that an LMM-driven tool could identify prostate cancer from pathology slides with 84% accuracy, outperforming the 67% accuracy achieved by physicians. It also proved 45 times more accurate at predicting tumour size and improved treatment planning, reducing the risk of residual cancer and accelerating intervention.

In the UK, AI is supporting faster clinical decision-making and better patient outcomes, and easing pressure on overstretched healthcare resources. The NHS has partnered with multiple AI providers, including C2-Ai, whose tools help hospitals identify high-risk patients on waiting lists, cut complications and reduce hospital stays. At Chelsea and Westminster Hospital, Smart Triage software enables clinicians to assess severity quickly, improve appointment scheduling and prioritise A&E cases. Cera, a home healthcare company, uses AI to match carers with patients and predict deteriorations in health with 97% accuracy, reportedly preventing up to 2,000 hospital admissions per day; its platform is now used across more than two-thirds of NHS integrated care systems and is credited with reducing hospitalisations by up to 70%. Together, these applications point to better decision-making, improved allocation of resources, and lighter burdens on stretched healthcare workers.

The risks

But beneath these innovations lies a more complex reality. AI integration introduces upfront investment, skilled labour needs, workflow redesign, and compliance burdens. Over time, these pressures can distort operating models, weaken financial resilience, and amplify governance risk. The risks are higher for smaller providers, which typically have limited in-house data science capability and governance infrastructure, tighter margins and less financial flexibility to absorb implementation setbacks.

Risks from AI in healthcare include:

  • Cost distortion and hidden implementation burdens: Initial AI tools may appear affordable, but costs can escalate rapidly. Cloud infrastructure, data storage, model customisation, staff training, and integration with legacy systems can strain already fragile financial models. For smaller operators, unplanned costs pose a material threat to business continuity.
  • Vendor lock-in: Over-reliance on external AI providers may expose operators to long-term risks if models are modified, access is lost, or licensing becomes unaffordable. This dependency introduces a layer of strategic and contractual vulnerability.
  • Reputational and insurance exposure: A single incident, such as an AI-driven misdiagnosis, data breach, or evidence of biased outcomes, can severely damage trust, increase insurance costs, trigger regulatory scrutiny, or result in litigation. Smaller providers may lack the reputational capital or legal buffers to recover easily.
  • Legal ambiguity: AI-driven misdiagnoses and treatment errors, particularly in clinical decision-making contexts, raise complex ethical and legal questions about how liability is apportioned among AI developers, healthcare providers, and investors. The UK’s legal framework is still catching up.
  • Black box risk: Many LMMs lack explainability. When a model’s recommendations cannot be transparently justified, it complicates clinical oversight, undermines auditability, and weakens legal defence in the event of adverse outcomes or investigations.
  • Data governance: LMMs demand vast quantities of structured and unstructured data. This places enormous pressure on digital infrastructure, governance protocols, and cybersecurity systems. Weak or poorly configured safeguards can lead to data breaches, unlawful processing, or violations of UK data protection law (UK GDPR), resulting in fines, reputational damage, and significant remediation costs.
  • Workforce disruption: While AI promises productivity gains, it risks inadvertently disempowering staff. Clinicians may spend less time applying judgement, while junior professionals have fewer opportunities to learn through active case work. Over time, this may erode workforce capabilities at a time when recruitment and retention challenges remain acute.
  • Automation bias: Over-reliance on AI recommendations can lead to misdiagnoses if LMMs generate incorrect information or flawed analysis. Without continuous human oversight, these errors can delay escalation, distort medical priorities, and expose operators to operational and legal risk.

Considerations for healthcare providers

These challenges are particularly acute for mid-sized and smaller privately run healthcare providers, which operate under tight margins and navigate strict regulatory frameworks. The promise of AI must be matched by equal rigour in risk mitigation, governance, and strategic foresight to avoid unintended consequences. 

AI adoption in healthcare is inevitable. But those who succeed will not be the fastest adopters – they will be those who manage complexity with discipline, balancing innovation with oversight and productivity gains with strong controls. BTG Advisory helps healthcare providers navigate these risks and build the operational resilience to adopt AI responsibly. Our expertise in forensic risk assessment, digital infrastructure stress testing, and operational turnaround enables clients to move forward at a pace aligned with their resources, controls, and care priorities.


