Will AI Doctors Replace Clinicians — or Empower Them?
In a speech earlier this month, the UK’s former Prime Minister Tony Blair urged Britain to embrace AI-driven doctors and nurses, arguing that NHS staff shortages, long waiting times, and financial pressures make this an urgent priority. But is this the right direction — and are we truly ready?
As ever, the answer is: it depends. It depends on what AI systems are expected to do, how much time and cost they might save, and how patients will respond.
In some areas, AI is already outperforming humans. Image interpretation is one such domain — neural networks can be trained on far more data than a radiologist might see in a lifetime. They support (and often outperform) human review in routine cases. Still, someone has to order the right test — and that requires human judgment. But by shortening the path to a reliable result, AI can help relieve bottlenecks in diagnosis.
Still, human intuition remains essential. I recently heard the story of a patient presenting with pain at the back of their neck. Rather than ordering an immediate MRI, an experienced and curious doctor asked further questions. Long story short: the pain was cardiac in origin, referred rather than musculoskeletal. A heart scan revealed three blocked arteries, and a triple bypass followed. It’s unlikely any current AI programme would have made that leap.
AI can, however, support clinicians facing complex, multi-morbid cases. At Metadvice, one of my ventures, we’re applying AI to cardiometabolic conditions — where the doctor must weigh actions on blood pressure, lipids, and diabetes to reduce overall cardiovascular risk. In situations where no single guideline applies, our system supports less experienced clinicians with precision medicine recommendations based on patterns of successful treatment in similar patients. The final decision always remains with the clinician — but AI reduces the time patients spend waiting for the most experienced doctor to become available.
One of the most dramatic recent advances has been the emergence of large language models (LLMs), capable of holding naturalistic conversations with patients. I’ve seen compelling demos of virtual ‘nurses’ checking in post-surgery: asking about symptoms, monitoring adherence, and surfacing early signs of complication. These systems will become more accepted as they adapt better to cultural context and tone.
That said, two guardrails remain essential: patients must know they’re speaking with a machine, and red-flag systems must be in place to escalate to a human where needed. These could be triggered by tone, timing, or specific keywords indicating misunderstanding or distress.
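To make the second guardrail concrete, here is a minimal, purely illustrative sketch of what a rule-based escalation check might look like. The keyword lists, threshold, and function name are hypothetical examples, not the logic of any real system described above; production systems would combine many more signals.

```python
# Illustrative sketch only: a minimal rule-based "red flag" check for a
# patient-facing chatbot. Keyword lists and the timing threshold are
# hypothetical, chosen to show the shape of the idea.

DISTRESS_KEYWORDS = {"chest pain", "can't breathe", "suicidal", "severe bleeding"}
CONFUSION_KEYWORDS = {"i don't understand", "what do you mean"}

def needs_escalation(message: str, seconds_since_reply: float) -> bool:
    """Return True if the conversation should be handed to a human clinician."""
    text = message.lower()
    # Explicit distress language escalates immediately.
    if any(keyword in text for keyword in DISTRESS_KEYWORDS):
        return True
    # Signs of misunderstanding combined with a long silence also escalate.
    if any(keyword in text for keyword in CONFUSION_KEYWORDS) and seconds_since_reply > 120:
        return True
    return False
```

In practice, such rules would sit alongside statistical signals (sentiment, response latency patterns) rather than replace them; the point is simply that the escalation path is explicit and auditable.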
We’re exploring these possibilities in mental health, too. At 2BWell — a charity initiative I co-founded — we’re developing a chatbot called “Uncle Zen” to support Ukrainian refugees in their native language. The goal is to offer empathetic, accessible interaction in contexts where therapists are in short supply, backed by trained volunteers and a rapid-response mechanism for those in acute need.
The message, I hope, is clear: AI won’t replace clinicians — but it will become an indispensable assistant to modern healthcare. The opportunity isn’t in automating human resources, but in using these tools to expand access, personalise care, and support better decisions across the entire continuum — from prevention and early detection to diagnosis, management, and healthy longevity.