Dr AI will see you now

One evening during my anaesthesiology career, I walked towards the hospital parking lot, where I saw a surgeon kissing a nurse in a dark corner. I was surprised, as I thought the surgeon had been dating a different nurse. Later that week, I found myself in an operating theatre with the surgeon and the jilted nurse. A sinister silence pervaded the room. At one point, the surgeon asked the nurse to get him another suture pack. She glared at him, looking like she wanted to throw him down a flight of stairs. I caught the angry, tearing sound she made when opening up the bag.

At the end of the procedure a suture needle was missing. We needed to take an X-ray to check if it had been left inside the patient’s abdomen. Meanwhile, thick secretions were obstructing the patient’s endotracheal tube. I had to remove it as soon as possible to prevent a worsening situation. People looked everywhere for the needle and its package, as both were missing. Suddenly, I remembered the loud, angry tearing sound the nurse had made when she ripped open the suture pack. The noise had suggested great force, the kind of force one might use to tear apart a cheating lover. I wondered if a stray package had flown out of the bag while the nurse was distracted. In my mind I drew a circle with an eight-foot radius from where I thought she had been standing. I studied the circle’s perimeter and noticed a large basin filled with blood and saline. I put on a glove and swept it. My finger pulled out the red suture pack and needle. With the needle count now satisfied, I quickly woke the patient up: their breathing improved dramatically after the endotracheal tube was removed.

No AI could have rescued this situation. AI would not have seen the kiss in the parking lot, recognised its import or noted the fury with which the nurse had opened the suture pack. Under AI’s care, the patient would likely have suffered severe lung complications.

Charlotte Blease, a non-physician researcher who in Dr Bot envisions the future of healthcare, would likely dismiss my anecdote as another example of what she calls anecdata: “a personal story and not a hard factual generalisation”. With her background in health information and philosophy, she approaches healthcare not from the granular level of personal experience but from the highest regions of abstraction, where concepts, studies and generalisations reign. The air up there is a little rarefied but, to her, the light is brilliant and clear.

From her high altitude she concludes that AI can and should replace doctors, at least in primary care. AI, she says, would be more adept than doctors at picking up subtle signs and symptoms, and making correct diagnoses. It would filter out the “noise”, as she calls it, that causes so much variation among doctors, thereby improving consistency in care. It would help weed out unwanted biases. AI would even be more empathetic than doctors. Nor can doctors effectively compete against AI; their very “human-ness”, she explains, is the problem.

Her idea of AI replacing doctors is dangerous, but how to argue against it? One might challenge her generalisations with more anecdotes. For example, according to Blease, studies show that patients have an easier time talking to AI than to human doctors, and so reveal more. Yet I once had a patient with a history of drug abuse who denied having used cocaine in the previous 24 hours. Recent cocaine use has important anaesthetic implications and can be dangerous. I suspected the patient was lying.
To scare the truth out of him, I told him that if he was lying, he would die. The patient’s face turned pale, his eyes grew wide, and he admitted to taking the drug a few hours before. AI has no intuitive ability to suspect, nor the capacity to act on that intuition with the necessary severity.

Yet this method of argumentation accomplishes too little, too late. Instead, one must battle Blease on her level. Two long-standing trends from philosophy and social science have converged in healthcare in the form of AI. This poses a danger to patients, with ramifications extending far beyond medicine.

In the Middle Ages, theologians observed two opposing ways to acquire knowledge. In the first, people generalise from individual cases to create ideas. A part of something – a body, a painting, or a society – is extracted for consideration, then related to a similar part in thousands of other cases, generating a distinct concept expressed as a word. This tendency towards abstraction comes easily to people, especially at university. It is how we get overarching concepts such as depression, impressionism and fascism. Far more difficult is acquiring knowledge in the opposite direction, when a person must return from the general concept to the individual case, a process called judgement. Unlike abstraction, judgement, as in the two anaesthetic cases described above, takes time and experience to develop.

As an informaticist and philosopher with no immediate professional experience in clinical medicine, Blease speaks from the world of abstract ideas, the world of “big data”, studies drawn from that data, and concepts drawn from those studies. AI is also a product of that world. Because AI lacks intuition, suspicion, instinct, presentiment and feeling, it lacks judgement in the human sense. It can only work with abstractions – that is, with words. It can never get behind the words. It can never get deep inside matters.

And so, AI risks making the same mistakes that inexperienced doctors who lack judgement sometimes make. For example, AI could be quick to apply a diagnosis created relatively recently, such as fibromyalgia or ADHD, without pausing to reflect on whether that diagnosis might be the product of overzealous conceptualisers who abstracted from thousands of cases to create it. This could then contribute to the larger trend towards overdiagnosis and over-treatment. Unlike a doctor with judgement, AI might not hesitate to apply that diagnosis so long as the diagnostic criteria, expressed in words, are met. In another example, AI could be quick to diagnose depression, a single concept drawn from thousands of individual cases and then reflexively applied to every individual one. This, too, could lead to overzealous diagnosis and therapy. AI could also vacuum up from the internet innumerable studies and treatment recommendations without suspecting that the studies might be fake – a growing problem.

The second long-standing trend in philosophy and social science involves identifying the causes of events. When an event occurred in the Middle Ages, people typically blamed God or the Devil. In the 19th century, history replaced religion among the educated as an explanation for the cause of events – for example, the notion that Stonewall Jackson’s death “caused” the American South to lose its civil war. Yet no one knew where to stop the train of causes.
If the soldier who shot Jackson forgot to wear his glasses that evening, was that then the cause of the South’s loss?

In the 20th century, social science displaced history as the most popular way to explain the “cause” of social phenomena. In studies and surveys, researchers applied the scientific method to problems such as crime and poverty, drawing from thousands of individual cases and creating many abstract concepts in the process. But error was built into the results. When dealing with human beings, an almost infinite number of variables must be accounted for, which makes a perfect social science experiment impossible. Social scientists purposely exclude those variables when conducting their experiments, but in doing so they adopt a stance of wilful ignorance. This explains social science’s relative lack of predictive power. In some quarters, science has become the new force to explain “causes”. An example is the popular belief that neurotransmitter imbalance, more than a person’s social situation, “causes” unhappiness or feelings of loneliness. This method is far too simplistic.

Blease occupies the third phase. Drawing on social science research and concepts, such as racism and sexism, she tries to explain the “cause” of certain medical events. For example, she cites a study finding that black patients are more likely to get better care from black doctors than from white ones, with racial bias a possible “cause”. Informed by this study, would AI, in its future primary care role, steer a black patient towards a black surgeon, even if a particular white surgeon had more experience managing that patient’s problem? In another example, would AI push “gender” (another social science concept) reassignment surgery, having referred to published studies claiming that untreated gender dysphoria is more likely to “cause” suicide, although doctors have judged some of those studies to be flawed?

Thirty years ago, a social scientist wrote in a peer-reviewed journal that sexism pervades the operating room. She had observed several Caesarean sections being performed, and noted that the operating table was always tilted to the left during those surgeries, causing the patient’s blood to spill on to the nurse’s shoes rather than on to the doctor’s. The “cause” of this abomination, she said, was sexism.

But at the time, the table was tilted to the left in all Caesarean sections because the pregnant uterus was thought to compress the vena cava, the vein that returns blood to the heart and lies slightly to the right of the spine. Because most of these surgeries are done under spinal anaesthesia, which lowers blood pressure, vena cava compression risked lowering it even further. The leftward tilt was thought to prevent that. The mostly male, right-handed surgeons delivered babies while standing on the patient’s right side, to achieve the best angle of approach. The assisting nurse, at the time likely female, stood on the left side to help. Therefore, what appeared to be sexism was rooted in physiology. Having noted this journal article, would AI recommend that an anaesthesiologist put the obstetric patient in a neutral position, to avoid “bias”, even at the risk of the mother’s life?

Blease ominously predicts a two-tiered healthcare system emerging as a consequence of AI. In the upper tier, she says, the wealthy will enjoy AI medicine, while in the lower tier, the poor will get stuck with human doctors.
I agree with Blease that a two-tiered system is emerging, but I believe the tiers will be reversed. The poor will get stuck with AI medicine, built on the sand of abstract concepts and unreal causes; the wealthy will enjoy human doctors, warts and all, exercising judgement. I hope to wrangle my way into the latter.

Dr Bot: Why Doctors Can Fail Us – and How AI Could Save Lives
Charlotte Blease
Yale University Press, 352pp, £18.99