There is hype in the air these days around AI. There is discussion among all walks of life about what it is, what it means, and how it will change not only society but the individual human beings who make up society. Some speak of Skynet and Terminators while others speak of the blessings and benedictions this technology will bring to the human race. However, one thing appears certain: it is coming. For if one country puts a “pause” on AI research, as some have called for, then other nations will merely develop it first. Better our guys than theirs, or so I have heard it put. There is a line from the original Matrix movie in which the evil Agent Smith is holding the protagonist Neo down on a subway track as one can hear the train approaching. “You hear that, Mr. Anderson?” Agent Smith intones, “that is the sound of inevitability.” Barring catastrophe, AI appears to be barreling down the track, and there are those who believe this moment will be something like the invention of the printing press, the Copernican Revolution, and the development of the Internet all rolled into one.
There is no doubt that just as automation and the industrial revolution devalued the need for pure blue-collar brawn in the work force and set a premium on more intellectual tasks, AI appears poised to devalue introductory white-collar jobs (performing research, drafting documents, reviewing and collating documents, paralegal work, etc.), as the technology will simply be, and already is, vastly more capable and efficient at accomplishing these types of activities. Just as grocery clerks, fast food staff and bank tellers have begun to disappear, replaced by self-check-out kiosks and ATMs, so will these critical entry-level, white-collar jobs disappear. In 1940, 5% of US adults had earned at least a bachelor’s degree; by 2015 that percentage had risen to 33%. (1) Interestingly, the percentage of Americans living and working on farms in 1900 was 39.3%, already down significantly from the 18th and 19th centuries, while in 2000 only 1.1% of Americans lived and worked on farms. (2) Office jobs and assembly lines had replaced the farmer’s hale strength and ingenuity. Now, as AI seems poised to do in 5 seconds what an entire office of entry-level college graduates takes an hour to do, it appears we may be on the verge of another massive societal disruption, except this time it will be much quicker and far more focused on white-collar employment.
The effects of this level of disruption on various sectors of the economy are difficult to predict; however, I would like to focus this essay, and possibly subsequent pieces, on what I know best: emergency medicine. What will the effects of the AI revolution be for emergency physicians, staff, departments, and, crucially, patients in the next 1 to 10 years? How will medicine generally, and emergency medicine specifically, adapt to such rapid and incredible change? Is emergency medicine AI-proof, or shall we too fade into obscurity as our calling in life is subsumed by the overwhelming technological onslaught that lies just beyond the bend in the road ahead? As with all difficult questions, I suspect the answer is potentially both concerning and hopeful, depending upon how we incorporate this powerful new technology and adapt it to work in concert with our human lives. Perhaps the only thing that may be said with any certainty is that great changes must be expected.
Perhaps to better grasp the question, one should first ask what AI is today, what it might be in a few years and, importantly, what it isn’t now and can never be. It would be remiss for me not to mention as well that I am not in any way an expert on these matters-merely an interested observer. As such, I may get a few things wrong regarding the current state of the AI world, but I am open to all criticism and corrections. If I’ve got it wrong, let me know!
So far as I can tell, and to put robotics aside just for the moment, the major advances of AI thus far have been in LLMs, large language models (GPT-4, Claude, etc.), and in RAG (Retrieval-Augmented Generation) models (OpenEvidence, etc.). The main difference is that RAG models retrieve data from some sort of knowledge database (OpenEvidence obtains data from the Mayo Clinic Platform, the New England Journal of Medicine and the JAMA network, for instance), whereas LLMs answer queries based on whatever they were “taught” during training. Both are powerful resources; however, it would appear to me that for medical purposes the RAG model is superior, as it can incorporate up-to-date published scientific research into its responses, whereas an LLM would need to have that information incorporated into its training in order to include it in its responses. That said, there are studies looking at clinicians using both models to try to determine which would be more helpful and accurate in an actual clinical setting. (3)
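For readers curious about what “retrieval augmented” actually means mechanically, the loop can be sketched in a few lines. This is a deliberately toy illustration, not OpenEvidence’s or any vendor’s actual pipeline: the miniature corpus, the keyword-overlap scoring, and the prompt format are all invented stand-ins for what a real system does with indexed journal articles and semantic search.

```python
# Toy sketch of the RAG loop: retrieve relevant passages from a knowledge
# base, then hand them to the language model alongside the question, so the
# answer can draw on literature the model was never trained on.

def tokenize(text):
    """Lowercase and split into word tokens, stripping basic punctuation."""
    return set(text.lower().replace(",", " ").replace(".", " ").replace("?", " ").split())

# Hypothetical knowledge base; a real system would index full journal articles.
CORPUS = [
    "Unprovoked deep venous thrombosis may warrant evaluation for occult malignancy.",
    "First-line treatment for uncomplicated DVT is anticoagulation.",
    "Community-acquired pneumonia is commonly treated with empiric antibiotics.",
]

def retrieve(question, corpus, k=2):
    """Rank passages by simple keyword overlap and return the top k.
    Real systems use vector embeddings, but the idea is the same."""
    q_tokens = tokenize(question)
    ranked = sorted(corpus, key=lambda doc: len(q_tokens & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Assemble the augmented prompt the LLM would actually receive."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What work-up is indicated for an unprovoked DVT?", CORPUS)
print(prompt)
```

The key point is visible in the last step: the model is not asked the bare question but a question bundled with freshly retrieved literature, which is why a RAG system can cite papers published long after the underlying model was trained.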
The obvious upshot to all this is that in a world where it is truly impossible for any physician to be completely up to date with all the scientific literature, AI is making it possible to be a query or two away from the answer to almost any medical question that might arise in the course of one’s clinical work. Rather than sifting through UpToDate for ten minutes to find the answer to whatever one’s highly specific medical question might be, or flipping through the already outdated medical textbook that’s been parked on the shelf above your computer for three years, all one has to do is pose the question to the AI and wait approximately 5 to 10 seconds. And just like that, the relevant papers are searched, the results are collated, and summary recommendations are made based upon the available literature that the RAG model has access to. It is really quite remarkable, and in the emergency department, where I am often looking for answers to highly specific questions and where I have perhaps thirty seconds to obtain the information I need, this is game-changing.
Of course, patients will be (and likely already are) quite capable of doing this at home as well. Many of my colleagues have somewhat sarcastic coffee mugs that state something to the effect of “Don’t confuse your Google search with my medical degree”. This witticism is referring to the not uncommon issue that arises when a patient has typed their symptoms into the Google search engine and has been terrified by the search results. They often present concerned about a very real, though very unlikely, medical condition that they learned of about an hour prior to coming to the emergency department. Nowadays, however, with the advent of RAG AI models, those search results may be far superior to what a traditional search engine was able to put out. An instrument like OpenEvidence is tapping directly into the medical literature and giving a specific answer rather than providing an interlocutor with a list of different websites that are related to the question. A knowledgeable patient who is able to ask smart and specific questions may be far more likely to accurately self-diagnose with such a powerful tool on hand.
In fact, as an interesting aside, there is a real concern among the traditional search engines that AI will change the entire financial model of the Internet itself as people are now beginning to use AI so much that traditional search engine queries are decreasing. Why use a search engine that provides links to a whole host of websites when AI will just give you the answer? The problem for the search engines is that they traditionally earned their keep via advertising sales built into their search algorithms. If people are bypassing search engines altogether, what will happen to the traditional financial model that has been the status quo for so long? (4) I suspect that the way we use the Internet will change and the Internet will adapt in kind while the practice of emergency medicine, pari passu, will also adapt in kind as the effects of AI become more prevalent and folded into our day-to-day work.
Returning to the question at hand, physicians may soon see their monopoly on medical science, and their consequent gate-keeper status, markedly diminished. This leveling effect will be seen not only between physicians and informed patients but also among physicians themselves. The science of emergency medicine will be not only equally available to all of us physicians but, importantly, easily available, eroding the academic differences between physicians (the greater or lesser “funds of knowledge” learned over the course of our training and our careers). An obvious question arises: what would drive a young college graduate considering a career in medicine to spend another four years in medical school and then another 3 to 6 years in post-graduate residency training to learn all the hard science of his or her specialty when anyone with access to a decent medical AI model could have all that information at their fingertips in seconds? Perhaps there is the beginning of an answer in the preceding thoughts regarding patient access to the scientific literature. For access is one thing; understanding what it all means, and what the next steps in diagnosis and treatment should be, is quite another.
After all, even if the fruits of years of intense study of the human body, and of all the ways in which it may break down, fail or succumb to illness or injury, may now be found in a quick AI query, understanding the answer often requires a baseline level of knowledge. Any tool, even a tool as powerful as AI, is only as good as the intelligence using it. A pharmaceutical antidote, a CT scan, or a scalpel may be good or bad depending on the user. Consider a drug. Perhaps a physician wants to know whether this drug would be the best for this particular patient as opposed to an alternative therapy. To even ask the question implies a certain fund of knowledge regarding the patient’s medical condition and test results, the drug’s mechanism of action and adverse effects, the potential effects of each therapy on that particular patient (perhaps thinking about comorbidities or allergies), and the patient’s goals of care, desires and fears. The science may be at every physician’s fingertips, there for the taking, but the application of that science to each unique patient encounter-that may be the future of medicine.
This will likely require a change in how we teach medicine as much as in how we practice it. The basic science will still need to be taught, for to conduct cutting-edge research, whether it be bench science or clinical research, or to practice at the bedside, one must be able to ask cutting-edge questions. But I can envision a future in which the more subtle art of medicine begins to take precedence. I can see medical school being at least a year shorter, with the length of residency training varying by specialty. Medical school would evolve to focus more on the art of medicine-on human-centered concepts such as the imagination; empathy and pursuing medicine within the context of each patient’s unique life; listening and establishing a type of presence in the room; and shared decision-making, with physicians acting as a kind of translator of the science as options are weighed-all while highlighting the importance of intuition, the imagination, and the physical exam. I would like to briefly consider each of these ideas within the context of AI and then conclude with some final thoughts on the irreplaceable human being.
For those of you who are long-time readers of my Substack, these themes will appear familiar and, for me personally, therein lies the potential upside of AI. Generally speaking, I have often found myself as a technological Luddite, and so it is with some surprise that I have found myself being faintly, or at least potentially, optimistic on the subject of AI in medicine. After all, if it can, by harnessing the power of our science and making it easily available, liberate physicians to focus more on the human being in front of us, then I am cautiously hopeful that it may be a boon for both physicians and patients alike.
Allow me to first consider the imagination, a word that is unfortunately loaded, fraught with modern connotations of whims of fancy and fantasy. However, in this context I mean to use the word as the poet Coleridge understood it. In his Biographia Literaria (1817), he famously distinguishes imagination from fancy. Very briefly, he describes our imagination as being split between primary imagination and secondary imagination. Primary imagination is given to all of us and is the way we organize and understand our sensory inputs of the world into something coherent and understandable. This happens effortlessly and is a passive power. Secondary imagination is the active process (by the individual, artist, or creator) of taking those experiences or memories and recombining them into a new form that sheds light on the universe and on humanity. By doing so, the secondary imagination blends multiple concepts into entirely new forms but forms that still bear back upon reality and the human condition and thus can tell us something important about the universal. Fancy, on the other hand, recombines experiences and memories into something entirely novel but not real. It doesn’t reflect back upon the real world in the same way as secondary imagination does.
An analogy Coleridge uses is that secondary imagination blends ideas into something entirely new whereas fancy is like the mixing of particulate matter, in which both elements retain their identities intact. The one, secondary imagination, is more like the creation of an alloy, and the other (fancy) is like the mixing of two different particulate solids (salt and sugar mixed together, or various types of rock mixed together to form gravel). An example of fancy might be for Coleridge the idea of a mythical gryphon, in which components of an eagle and a lion are combined into a fanciful new form that does not actually exist in nature (does not reflect back upon the world in an original way so that we are able to see the real world in a novel yet honest manner) and in which the two component parts retain their identities. Secondary imagination, in contrast, might be a poem in which experiences and memories of sensations gleaned from the primary imagination are recombined into the artistry of the poem to give the reader a new way of seeing the world that they might not otherwise have had, and one that is also true. The second half of Coleridge’s own Kubla Khan is often cited as a good example of this type of secondary imagination.
How does this digression reflect back upon the meaning I intend to impart on imagination within the world of medicine and its relationship to AI? In brief, if we accept Coleridge’s terminology, we can think of physicians as obtaining information through a history of present illness and physical exam with every patient encounter. Primary imagination sensory experiences are related to the physician by the patient, and direct observations are obtained via the physical exam. These concepts are then recombined using the secondary imagination, not as a matter of fancy, but rather blended into a new, but still real, vision of the patient that, when unified, helps to shed light on the potential diagnoses one might consider in that specific scenario. In short, we use the secondary imagination every day in bedside medicine even if we don’t think of Coleridge’s concept of the secondary imagination while we are doing so.
Imagination in this manner is different from the mere collation and provision of scientific data that a RAG AI model might provide. It might be argued that diagnostic aids being developed with AI could be just as good at this, but the proper information, obtained by a human physician, must still be entered as an input into the AI model in order to obtain the proper output (the right diagnosis). This can only be done by asking the right questions, which is itself an exercise of the imagination.
Suppose a patient presents to the emergency department with a chief complaint of unilateral leg pain and swelling that began after a long car ride. A subsequent ultrasound of the affected limb confirms a deep venous thrombosis (DVT) and the patient is admitted to the hospital for a hypercoagulable work-up to try and determine if that patient has a genetic predisposition to developing a DVT. Suppose, however, that an astute second physician asks what the patient means by a “long car ride” and realizes that she only meant a twenty-minute ride to the grocery store. Suddenly, the work-up has changed. Could it be that the patient has an occult malignancy that set her up to develop a DVT? The work-up has changed on account of new information gleaned due to merely asking the right question. Asking the right questions is a form of using Coleridge’s secondary imagination to form new lines of inquiry based upon the information gleaned in the history of present illness and physical exam (i.e., information obtained through the “primary imagination”). Medicine requires an active imagination and it is not clear to me that AI is capable, in its current state, of imagining in this way.
This leads naturally to thinking about empathy-the ability to place oneself in another’s shoes. It seems that emotional openness, one might say a type of empathic imagination, is in operation when this is effectively done, and this is critical to the creation of an effective physician-patient relationship. Empathy requires understanding a patient’s unique problem within the context of his or her individual life. Every patient is a moment in time, and this changes not only between patients (of course) but within the same patient on different days. Perhaps your patient has decided to quit his fentanyl addiction between his last visit and today’s. Perhaps he has instead started using fentanyl again. He is the same patient but also he is not the same patient. The conditions have changed, and a good physician must keep the fluid nature of a patient’s life in mind. This requires a kind of presence in the room and an ability to listen more and to speak less. It requires both imaginative questioning and sustained attention while listening. AI cannot be empathetic because it is not a human being. It has no body because it is nobody. While it may be able to “listen” it does not maintain a presence in the room like another human being does. Furthermore, the remarkable and natural protean characteristic of the human condition, seen in both patients and physicians, calls to mind the great Latin aphorism “si duo faciunt idem, non est idem.” If two people do the same thing, it is not the same. Subtle distinctions easily captured by the astute physician often fall through the cracks of ill-fitting, computer-based algorithms, leading to less-than-ideal outcomes.
Shared decision making, the end-result of an optimal physician-patient relationship, must therefore take the context of the patient’s life into consideration. AI can present the hard science, but it has no moral or ethical ability to make treatment or even diagnostic recommendations beyond these rigorous scientific parameters. This choice must be made by the patient, and many factors besides the science of the case will weigh on a person’s mind. Perhaps a possible treatment is worse than the disease, at least insofar as that particular patient is concerned. Physicians could conceivably and quickly obtain all the treatment options available through AI, but it is the patient, perhaps in conjunction with the physician helping to translate the science into something they can understand, who must make the ultimate decision about what to do. AI can provide hard data, but it cannot make hard choices for the patient. It has no deep and corporeal understanding of what the patient might have to suffer through with another round of chemotherapy as opposed to comfort care and hospice. How could AI make any sense of a patient who chose to forego a potential treatment option? Not being human, it cannot make human choices. It can only crunch data and answer hard, scientific and logical questions-a very powerful skill indeed, but not a skill that has any bearing on a mother’s love for her children or an elderly man’s decision to die with dignity at home. There is no code that can understand the depth of the human soul.
Intuition, the art of the subconscious, gleaned and honed in the crucible of experience, is also something that an AI program seems unable to tap into. Intuition, like reason, is not perfect but it can, like reason, be improved upon over time and it can be quite good. It is the slight glance, the aspect of a gait, the tremor of a finger, or the faint smell of tobacco that prompts the astute clinician into asking the perfect question that seals the diagnosis. Perhaps it is a paucity of the imagination on my part, but I have difficulty envisioning AI tapping into these deep wells of human knowledge.
Finally, it should go without saying that one thing AI does not possess is a human body. There is a corporeality to the human condition. What does AI know of physical pain, let alone psychological distress? How can it recommend decisions for patients without an innate sense of humanity? Robotics does not seem to be as advanced as the LLM/RAG AI programs are, but even if it were to catch up quickly, it is hard to imagine a robot performing a subtle physical exam or gently registering the fear and pain of a child with appendicitis. Perhaps in the future the physical exam could be entirely bypassed, but we are not there yet, and I’m not certain that I would ever like to be. The importance of human touch should not be discounted among human beings.
And human beings are not rational actors. We lie. We often don’t know what’s important and what isn’t. We often have illogical desires and addictions. We do not operate by the rules. And, most importantly, unlike the other animals, we know that we must one day die. There is a sense of the mortal and a hope of the transcendent that lies ineluctably within us, and AI might be able to “know” this but it can never, by virtue of its non-corporeality, “feel” it. There is a real problem of pain at work here, as AI is unable to feel or understand human pain, physical or otherwise. There is a mortal complexity among men that has always made the sagest among us also the most capable in navigating these illogical and often counter-productive desires. A computer-based AI system cannot code for an organism that often acts irrationally and against its own health; that makes “illogical” decisions based upon factors an AI model cannot really understand; that lies; that has a body; that senses its own mortality; and that senses transcendence in a finite world.
With all that said, given AI’s incredible strengths and inherent weaknesses, what can we ultimately determine about AI in emergency medicine? It seems certain that this will be a game-changing technology that alters the way the science of medicine is taught to physicians and brought to bear upon patients. However, this apparent supremacy of the science will, I believe, lead to a perceived and real increase in the importance of the art of medicine. All of the elements of emergency medicine that I am constantly harping on about in my Substack are now suddenly, in my estimation, rising in importance again. For the cold hard science is now a known entity, and it will be, if not already is, known by the computer with far more rapidity and assurance than the most brilliant of physicians. To know the complexity of the conditio humana, to appreciate the intricacy and beauty of, say, the contrapuntal fugue rather than to merely understand the mathematical underpinnings that give it existence, will be the task of future physicians. We will need our Shakespeare, Bach and Michelangelo as much as we will need our Gray or our Tintinalli, which of course begs the question-haven’t we always?