AI could improve NHS commissioning decisions, if we let it


14th August 2025

Extensive discussion of the role of AI in the NHS is one of the few truly new aspects of the 10 Year Health Plan for England. That reflects the fact that, while it has long been widely known how the NHS needs to change, achieving those changes requires a mammoth effort and cultural shift.

Many of the references to AI in the 10 Year Health Plan for England relate to its use in clinical settings, for which it will in many instances be regulated as software and artificial intelligence as a medical device.

But what is the potential role of AI in commissioning? The thought of AI having an important (if not decisive) role in decisions about the NHS services we rely on provokes a range of responses, which in most cases probably include a mix of exhilaration and trepidation.

AI to support commissioning

There is no prospect of a “Computer says…” approach to service design. But the prospect of accessing and synthesising large amounts of information about what good looks like in support of decision-making is difficult to resist. Even though humans would remain firmly in the loop, it will be necessary to recognise and seek to mitigate the risk of “automation bias” – where individuals and organisations place undue emphasis on the output of a system, even in the face of contradictory evidence.

Decision-makers will need to understand “how” and “why” certain outputs are obtained and be able to critically assess those outputs.

Better decision-making?

One of the concerns often expressed about AI is the “black box” which makes decisions inscrutable. From my point of view, I’m optimistic that deploying AI appropriately, and with the right safeguards, could increase both the quality of decision-making and its transparency.

My optimism comes from the potential to define “end points” for AI analysis – the specific, measurable outcomes which the deployment of the AI is seeking to achieve. In the context of the NHS, what relative values are we placing on quality, affordability and equity? Should it be a 40:40:20 split, or something else?

Next time you’re asking AI where to take your next weekend break, ask it to make its recommendations on the basis of weighted criteria and you’ll get a sense of how this works in practice. You’ll be given suggestions along with reasoned recommendations about whether it should be Budapest (good value but cold in the autumn) or Lisbon (a bit more expensive but better weather).
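To make the mechanics concrete, here is a minimal sketch of weighted-criteria scoring of the kind described above. The weights mirror the illustrative 40:40:20 split between quality, affordability and equity; the per-option scores are entirely hypothetical and exist only to show how explicit weights turn separate judgements into a single, transparent ranking.

```python
# Illustrative weighted-criteria scoring. Weights reflect the 40:40:20
# split discussed above; the option scores are hypothetical (0-10 scale).
weights = {"quality": 0.4, "affordability": 0.4, "equity": 0.2}

options = {
    "Option A": {"quality": 6, "affordability": 9, "equity": 7},
    "Option B": {"quality": 8, "affordability": 6, "equity": 7},
}

def weighted_score(scores, weights):
    """Combine per-criterion scores into one number using the agreed weights."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank the options by their combined score, highest first.
ranked = sorted(options, key=lambda name: weighted_score(options[name], weights),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name], weights):.1f}")
```

The point is not the arithmetic but the transparency: anyone reviewing the recommendation can see exactly which criteria were applied, how they were weighted, and how a change in weights would change the outcome.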

Deploying AI and defining its end points can achieve what I’ve seen described as “cognitive unpacking”. It increases transparency as to whether, and if so to what extent, different criteria are weighed in making recommendations. Whereas human reasoning often involves extensive tacit knowledge and sometimes undisclosed criteria (such as unconscious biases), these should factor less in AI’s generation of recommendations, particularly when AI is used to reach defined end points. If human decision-makers critically appraise the AI’s recommendation and decide it is valid, that recommendation serves as a good anchor for the discussion and debate which will lead to a final human-made decision.

Application to the NHS

The city break example I’ve used above clearly oversimplifies the issue. Decisions in the NHS must be made on the basis of a wide range of criteria because of the inherent complexity of designing and delivering health services. But do we perhaps make these decisions even harder than they need to be? NHS commissioners are subject to a plethora of statutory duties which they may have to act with a view to achieving, or at least have regard to. The sheer number of these duties means that, for at least some of them, their consideration is an exercise in box ticking rather than a substantive assessment.

We can take some comfort, in terms of legal risk, from the fact that courts in judicial review generally defer to the weightings given to different criteria by decision-makers, so long as all of the prescribed criteria have been taken into account to some extent. But if, by my reckoning, an ICB has to take into account at least 15 prescribed statutory criteria when making decisions about service design, clearly they cannot all be weighed equally and in some instances their assessment won’t be meaningful. Is it perhaps time to retire some of these as statutory duties, not because any of them are devoid of value, but because of the need for more focused and transparent decision-making about key priorities?

It’s also worth thinking about how weightings should be attributed to the criteria in different contexts. Should financial sustainability be given a greater weighting in systems in substantial deficit, or should equity be given greater weight in relatively disadvantaged communities? Would quality be given greater weight in generating recommendations for some maternity services where there have been significant concerns?

AI holds a lot of promise as a tool to support objective decision-making by NHS leaders. But at the same time it may require the NHS to be more candid about its priorities and what is realistically achievable.
