Accessible AI: Beyond Clinical Care


Much has been made in recent years of the potential for artificial intelligence, or AI, to act as an assistant to workers in an array of professions. Once a concept confined to the likes of a phone’s autocorrect feature, in the past year AI has firmly made the leap to center stage with the introduction of tools built on deep artificial neural networks, like ChatGPT by OpenAI and Bard by Google. And nowhere has AI shown more promise as a copilot than in the medical industry.



Although the average patient may not be aware of the extent to which AI is being used in their care, they are mostly positive about its potential to improve their own outcomes. Offering a sense of scale for AI’s penetration into healthcare, one article in the New England Journal of Medicine noted, “There is virtually no area in medicine and care delivery that is not already being touched by AI.” 



Whether it’s AI-enabled robots or using machine learning in the drug discovery process (an approach that’s akin to precision medicine, a topic I covered in some depth here), ever-more-sophisticated algorithms and artificial neural networks are routinely put forward as the answer to intractable problems like healthcare workforce shortages in the face of an aging worldwide population. Some even envision robots one day replacing physicians…but that’s likely still a ways off.


Because so many articles have been written about the clinical applications of AI, I thought it would be interesting to take a slightly different approach. No survey of AI in medicine would be complete without at least a passing word about clinical integrations, so I’ll cover those first; then I’d like to take a detour and explore the inroads AI has made into non-clinical areas of medicine.


Setting a Baseline


Everyone’s entry point to AI is different, but one thing is becoming clear: those who refuse to familiarize themselves with these monumental innovations will likely face an uphill battle once the tech takes hold – and that hill will only get steeper in time. Since this is a complicated topic, let’s start by defining some key terms.


The Stanford Institute for Human-Centered Artificial Intelligence (HAI) has put together a handy glossary of AI-related terms that I think does the trick, and I’ll mostly be drawing upon it to establish a baseline of understanding here (along with some other insights from the likes of Amazon Web Services (AWS) and IBM).


Aside from generally agreeing with how the HAI folks define the most relevant terms, it’s fitting to use their glossary since it was John McCarthy, a computer scientist who spent much of his career as a professor at Stanford, who coined the term “Artificial Intelligence.” Here are some useful definitions from their website, augmented by those from a few other sources:


Artificial Intelligence: Defined by emeritus Stanford Professor John McCarthy in 1955 as “the science and engineering of making intelligent machines,” the website states that “today, we emphasize machines that can learn, at least somewhat like human beings do.” I could add more detail than this, of course, but in the interest of keeping this post as straightforward as possible, this should suffice.


Machine Learning: A sub-field of AI, machine learning, or ML, “is the part of AI studying how computer agents can improve their perception, knowledge, thinking, or actions based on experience or data. For this, ML draws from computer science, statistics, psychology, neuroscience, economics and control theory.” ML algorithms can be categorized as supervised, semi-supervised, unsupervised, and reinforcement learning algorithms. Again, much more detail could be added here, but this is enough for now.


Supervised Versus Unsupervised Learning: Although I won’t delve much into the differences between supervised and unsupervised learning here, a few foundational points are in order (a short code sketch follows the list):


  • Supervised learning involves using ground truth to help an algorithm learn a function that, “given a sample of data and desired outputs, best approximates the relationship between input and output observable in the data.” In other words, supervised learning uses a training set of data to teach models to “yield the desired output.”


  • The term “supervised learning” comes from the fact that humans are relied upon to provide accurate “labels” upon which the model can learn to detect relevant patterns in the data. In this context, a labeled dataset is one “that the model can learn from to make correct decisions.” According to the IBM blog, “To put it simply, supervised learning uses labeled input and output data, while an unsupervised learning algorithm does not.”


  • In contrast to supervised learning, unsupervised learning doesn’t rely on labeled training data. And as the name implies, unsupervised learning doesn’t depend on human supervision to work effectively but works on its own to find patterns in unlabeled data. As defined on the Google Cloud website, “Unsupervised learning in artificial intelligence is a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabeled data and allowed to discover patterns and insights without any explicit guidance or instruction.”
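
To make the supervised/unsupervised distinction concrete, here’s a minimal sketch in Python using scikit-learn and synthetic toy data (the dataset and models are my own illustrative choices, not anything from the sources above):

```python
# A minimal sketch of supervised vs. unsupervised learning,
# using scikit-learn and synthetic toy data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy dataset: 200 points drawn from 2 clusters, with known labels.
X, y = make_blobs(n_samples=200, centers=2, random_state=42)

# Supervised: the model trains on (input, label) pairs.
clf = LogisticRegression().fit(X, y)
print("Supervised predictions:", clf.predict(X[:5]))

# Unsupervised: the model sees only inputs and must discover
# structure (here, clusters) on its own, with no labels.
km = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
print("Discovered cluster IDs:", km.labels_[:5])
```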


Deep Learning: This one will take a little longer to define, but stick with me. First of all, it’s technically correct to categorize deep learning (DL) as a subset of ML; however, some people take the distinction further, referring to deep learning as a completely separate approach that stands in contrast to ML. 


The AWS definition is instructive here. As they put it, “Deep learning algorithms emerged in an attempt to make traditional machine learning techniques more efficient. Traditional machine learning methods require significant human effort to train the software” whereas with deep learning, less supervision is necessary.


Another major advantage of using deep learning over other types of machine learning is that it can deal more effectively with unstructured data. In a 2021 blog post, IBM reported that unstructured data represents “over 80% of all enterprise data, while 95% of businesses prioritize unstructured data management.”


According to AWS, “Machine learning methods find unstructured data, such as text documents, challenging to process because the training dataset can have infinite variations. On the other hand, deep learning models can comprehend unstructured data and make general observations without manual feature extraction.”    


Deep learning marks a departure in terms of its ability to handle data complexity. The previously cited Stanford glossary classifies deep learning as the “use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, a little like the hierarchically organized neurons in human brains. It is currently the most successful ML approach, usable for all types of ML, with better generalization from small data and better scaling to big data and compute budgets.”
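
To illustrate the “multi-layer” idea, here’s a minimal sketch of a small neural network with two hidden layers, again using scikit-learn and toy data (real deep learning work typically involves frameworks like PyTorch or TensorFlow and far larger networks; this is just a scale model):

```python
# A minimal sketch of a multi-layer (artificial) neural network,
# using scikit-learn's MLPClassifier on synthetic toy data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 16 units each: a (very) shallow "deep" network.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print("Test accuracy:", net.score(X_test, y_test))
```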



The Case for AI in Healthcare


Now that we’ve got some definitions out of the way, let’s examine the technological landscape in which we now find ourselves. While AI seems to have come out of nowhere to the average person, the concept has, as we’ve seen, been around in some form or fashion since at least the 1950s. Although not categorized as AI at the time (because the term hadn’t yet been coined), an automated checkers program was written in 1951 by the British computer scientist Christopher Strachey, who would later become director of the Programming Research Group at the University of Oxford.


If I had to guess, I’d say that aside from the natural language processing used by call centers, another way most people first became familiar with AI was through text prediction technology (that pesky autocorrect feature on our phones). While these technologies have been in wide use for a while, the medical industry has been slower to adopt them into the mainstream.


But that trend is changing. In the past decade, there has been a steadily growing desire on the part of healthcare practitioners to use AI-enabled technology. A 2021 article in the journal Nature, for instance, surveyed doctors in Australia and New Zealand practicing in four specialties – ophthalmology, dermatology, radiology, and radiation oncology – about their attitudes concerning AI in healthcare. Most survey respondents (71.0%) predicted that AI would improve their field, with the top three ranked potential advantages of AI being “improved patient access to disease screening,” “improved diagnostic confidence,” and “reduced time spent by specialists on monotonous tasks.”


In their recent “Digital Doctor Survey,” a survey of 3,428 physicians across 20 markets that seeks to clarify “digital behaviour and future trends among physicians,” Ipsos found similar results:


“More than two in three (68%) of the physicians we surveyed globally are excited about the role of artificial intelligence (AI) in the future of healthcare, with improved accuracy of diagnosis perceived as a key benefit of AI solutions.”


As enthusiastic as many physicians are about trusting AI to improve care, these numbers stand somewhat in contrast to patient sentiment. For instance, a February 2023 study by the Pew Research Center found that six-in-ten U.S. adults would feel uncomfortable if their doctor relied on AI to diagnose diseases and recommend treatments, with only 39% saying they’d feel comfortable with this scenario. In some ways this contrast bodes well for physicians, ensuring that for the foreseeable future, at least, they will be relied upon to verify, sometimes overrule, and in most cases work in partnership with algorithms to improve care outcomes.


Clinical Integrations of AI


Before we discuss non-clinical uses of AI, I thought it would be good to first touch on the other end of the spectrum – clinical applications of AI – to give us a sense of perspective. Although among the most compelling applications of artificial intelligence, clinical AI integrations often come with a hefty price tag which, depending on the use case, can make for a tricky cost-benefit proposition.


For example, a recent JAMA article focused on whether deploying AI-enabled clinical decision-support tools in three disciplines – dermatology, dentistry, and ophthalmology – could be deemed “cost-effective.” The authors set out to analyze data from three Markov models that were used in previous cost-effectiveness studies and that “were adapted to compare AI vs standard of care to detect melanoma on skin photographs, dental caries on radiographs, and diabetic retinopathy on retina fundus imaging.”


The study’s conclusions were decidedly mixed, with the authors stating that “AI used as a decision-support system came with limited and use case-specific cost-effectiveness advantages, which were sensitive not only to the costs assigned to AI but also the subsequent therapy paths assumed after the diagnosis.” 


Translation: the value of AI in clinical problem-solving depends heavily on the setting in which it’s deployed. To estimate what using AI as a clinical decision support (CDS) tool costs when stacked up against physicians working without such diagnostic support, the study’s authors adopted “a payer perspective.” To do this, they looked at how much payers in three different countries (the U.S. in the case of dermatology, Germany in the case of dental care, and Brazil in the ophthalmological use case) would pay out on a fee-for-service basis for the two types of care approaches.


As I alluded to earlier, the results of the study were mixed. When dermatologists used AI-based CDS tools versus making decisions in the absence of such tools, the fees paid by payers were similar, as were the health outcomes. This was, in part, because AI was found more likely to be cost-effective at a lower “willingness to pay” (WTP) level, where WTP is defined as the additional cost a decision-maker is willing to take on to gain an additional quality-adjusted life-year (QALY) or tooth retention–year. Increasing WTP, on the other hand, increased the uncertainty of whether AI could be considered cost-effective.
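
To see how a WTP threshold works mechanically, here’s a stylized calculation (the numbers below are invented for illustration and are not figures from the study): a strategy is typically judged cost-effective when its incremental cost-effectiveness ratio (ICER) falls at or below the decision-maker’s WTP per QALY.

```python
# Stylized cost-effectiveness arithmetic (invented numbers,
# not figures from the JAMA study).
ai_cost, ai_qalys = 1200.0, 8.05    # hypothetical AI-assisted pathway
soc_cost, soc_qalys = 1000.0, 8.00  # hypothetical standard of care

# Incremental cost-effectiveness ratio: extra dollars per extra QALY.
icer = (ai_cost - soc_cost) / (ai_qalys - soc_qalys)  # = 4000.0

for wtp in (2500, 5000, 50000):  # willingness-to-pay thresholds ($/QALY)
    verdict = "cost-effective" if icer <= wtp else "not cost-effective"
    print(f"At WTP ${wtp:,}/QALY: AI is {verdict} (ICER = ${icer:,.0f}/QALY)")
```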


Contrast these results with those found when a similar strategy was applied to dentistry and ophthalmology: for dentistry, using AI-enabled decision support tools had more clinical utility and led to lower costs, while diagnosis with AI in ophthalmology meant higher costs at a similar level of utility. These data points support the idea that injecting today’s highly sophisticated algorithms into every clinical context might not always lead to optimal outcomes.


Generative AI


We haven’t yet discussed the elephant in the room – generative pre-trained transformer (GPT) models – so let’s get to it now. Tools built on neural network models, like ChatGPT and Bard, are beginning to play a more significant role not just in society in general, but also in patient care. Although these tools have so far mostly been employed in research studies examining capabilities such as AI-assisted medical diagnosis, I’d hazard a guess that we’ll soon see them used with increasing regularity across medicine, including in clinical practice.


As we talked about earlier, language models and ML algorithms have been around for years. Built on this foundation, generative AI, or “gen AI,” is a related technology that has “the ability to create human-like text and content (images, music, and more), and answer questions in a conversational manner.” Anyone who’s used ChatGPT can attest to how lifelike some interactions with the software can be.


Just when you thought we’d finished defining terms, here’s one more. Georgetown University’s Center for Security and Emerging Technology provides an instructive definition when it comes to generative AI:


“Generative AI is a broad term that can be used for any AI system whose primary function is to generate content. This is in contrast to AI systems that perform other functions, such as classifying data…grouping data (e.g., identifying customer segments with similar purchasing behavior), or choosing actions (e.g., steering an autonomous vehicle).”


At present, while incredibly impressive, gen AI has proven to be somewhat error-prone. In addition, there are valid concerns that the data these models have been trained on is subject to bias, causing model output to reflect similar biases. For this reason and others, keeping humans in the training loop – and, in many cases, the deployment loop – remains an important component of ensuring GPT model effectiveness.




While on the surface gen AI may seem like a novelty without many applications in the real world, this capability actually creates all sorts of opportunities for physicians to be freed of mundane, repetitive tasks – or tasks for which they don’t have enough time – and focus on higher-level critical thinking activities. In the spirit of freeing up mental bandwidth, a recent report by McKinsey & Company made clear that the real genius of gen AI is that it can automate “tedious and error-prone operational work, bringing years of clinical data to a clinician’s fingertips in seconds, and by modernizing health systems infrastructure.”


In the category of “tasks for which physicians don’t have enough time,” one could place interactions with patients. Because of this, many health systems are eyeing gen AI to act as a sort of cyborg counterpart, or sophisticated chatbot, that can converse more naturally with patients (and for as long as a patient wants) than their hectic clinical schedules allow. As an example of this, Dr. Gege Gatt provided a glimpse into the future of patient/physician interactions at an NHS innovation event in the UK back in March.


Starting at the 9:20 mark of the video, listen to how Dr. Gatt describes an “automated pathway” interaction between three fictional characters. The two main characters are a patient named Harry and a medical chatbot (or, as he calls it, an “IAPT Virtual Assistant”) named Sally.


If you can get past the tongue-in-cheek references to the ’80s movie “When Harry Met Sally,” it’s a fascinating look at how, starting with scheduling a primary care visit (what folks in the UK call a “general practitioner,” or GP, appointment), Sally guides Harry through filling out pre-assessment paperwork as an interactive, back-and-forth conversation (as opposed to filling out forms). Once the care pathway is set up, the virtual assistant periodically reminds Harry of his upcoming appointments, allowing Harry to interrupt her at any point to ask questions. The scenario ends with Sally working with the patient to arrange transportation to the healthcare site.
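
Under the hood, a pathway like this can be modeled as a simple sequence of states the assistant walks the patient through. Here’s a heavily simplified sketch (every step name and prompt below is hypothetical, not the actual IAPT assistant, which would also layer an LLM and scheduling/EHR integrations on top):

```python
# A heavily simplified sketch of an "automated pathway" chatbot,
# modeled as an ordered sequence of steps. All step names and
# prompts are hypothetical.
PATHWAY = [
    ("book_gp_visit", "Let's book your GP appointment. Which day works?"),
    ("pre_assessment", "I'll walk you through a few pre-assessment questions."),
    ("reminders", "I'll remind you before each upcoming appointment."),
    ("transport", "Would you like help arranging transport to the clinic?"),
]

def run_pathway(answer_fn):
    """Walk the patient through each step, collecting their answers."""
    record = {}
    for step, prompt in PATHWAY:
        record[step] = answer_fn(prompt)  # the patient replies at each step
    return record

# Example: a scripted "patient" that simply acknowledges each prompt.
transcript = run_pathway(lambda prompt: f"(Harry replies to: {prompt!r})")
for step, reply in transcript.items():
    print(step, "->", reply)
```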


Relying on algorithms to chart care pathways, as in the example above, will become increasingly important as the clinical care workforce shortage reaches its peak in the years to come.


Operations Applications


While not all medical providers may be in a position to deploy top-shelf AI that acts as a competent clinical assistant, there are plenty of less rigorous use cases for AI that are attainable right now. Some of these IT solutions are brand new, while others have been around for years and are currently getting a boost from technological breakthroughs; whatever the case, there is little excuse for those in the medical field not to start familiarizing themselves with at least some of them.


For years now, AI has been used in non-clinical applications from scheduling to billing to a range of other repetitive office tasks. Billing provides a particularly interesting use case for AI, since so much of it – from manual coding to determining patient eligibility – chews up time that could otherwise be spent on higher-level tasks. According to the American Institute of Healthcare Compliance, tasks like eligibility confirmation can be streamlined using automation:


“AI-powered systems can integrate with insurance databases and instantly verify patient eligibility, coverage limits, and pre-authorization requirements. This real-time eligibility verification reduces claim rejections and denials due to coverage issues, enabling physicians to provide timely and appropriate care while maximizing revenue potential.”
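
As a sketch of what such an integration might look like in code (the endpoint, payload fields, and response shape here are entirely hypothetical; real integrations typically go through payer APIs or X12 270/271 eligibility transactions via a clearinghouse):

```python
# A hypothetical sketch of automated eligibility verification.
# The endpoint and field names are invented for illustration.
import requests

def verify_eligibility(member_id: str, payer_url: str, cpt_code: str) -> dict:
    """Ask a (hypothetical) payer service whether a procedure is covered."""
    resp = requests.post(
        f"{payer_url}/eligibility",  # hypothetical endpoint
        json={"member_id": member_id, "procedure_code": cpt_code},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"eligible": true, "prior_auth_required": false}

# Usage against a hypothetical payer sandbox:
# result = verify_eligibility("M12345", "https://payer.example.com", "99213")
# if result.get("prior_auth_required"):
#     start_prior_auth_workflow(result)  # hypothetical downstream helper
```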


Also of note is how AI can be brought to bear on patient scheduling and workflows. In a 2022 study that appeared in the journal BMC Health Services Research, the authors performed a prospective content validity assessment to figure out the most relevant features of an “AI-enhanced care pathway planning and scheduling system.”


As part of the study, surveys were distributed to clinical professionals at five university hospitals in three European countries (Finland, Italy, and Spain). Interestingly, the authors found that the top facets for AI-enhanced patient care pathways included the following: “risk assessment, patient profiling, and the use of shared resources (e.g., personnel, time) at the patient and unit levels.”


I find these results interesting because the identified features align well with AI technology already in use. The article goes on to say the following:


“In the literature, AI-enhanced scheduling systems have been used to identify modifiable risk factors and to stratify patients into high- and low-risk groups to optimize preventive measures in advance…In addition, intelligent digital services have been used to predict the duration of surgery (DOS)…and the postoperative length of stay…to optimize resource management with a high degree of accuracy.”
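
As a deliberately simplified sketch of the risk-stratification idea described above (the features, data, and 0.5 cutoff are all invented for illustration), a model trained on historical outcomes can score incoming patients and split them into high- and low-risk groups for scheduling purposes:

```python
# A simplified sketch of patient risk stratification for scheduling.
# Features, data, and the 0.5 threshold are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy features per patient: [age, comorbidity_count, prior_admissions]
X_hist = rng.normal(loc=[65, 2, 1], scale=[10, 1, 1], size=(500, 3))
y_hist = (X_hist[:, 1] + X_hist[:, 2] + rng.normal(size=500) > 3).astype(int)

model = RandomForestClassifier(random_state=0).fit(X_hist, y_hist)

# Score five new patients and stratify them for preventive follow-up.
new_patients = rng.normal(loc=[65, 2, 1], scale=[10, 1, 1], size=(5, 3))
risk = model.predict_proba(new_patients)[:, 1]
for r, group in zip(risk, np.where(risk >= 0.5, "high-risk", "low-risk")):
    print(f"predicted risk = {r:.2f} -> {group}")
```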


Making Practices More Efficient


The purpose of this blog post isn’t to comment on how high or low the price barrier to entry is for these technologies; however, I will say that the close match between the features this cohort of healthcare practitioners prioritized and AI technologies that have existed in the field for some time likely speaks to how readily they might be put to use in a range of practice settings.


Speaking of price, these AI integrations are all about saving time and money for physicians, hospital administrators, and back office staff. A recent IBM blog post highlighted a few of these applications – such as those that improve administrative workflow, act as virtual nursing assistant chatbots, reduce dosage errors, and prevent fraud – and noted that practices are using them to great effect right now.


Also worth noting in the improved-workflow space is our old friend CDS. While we touched on integrations of AI into decision support tools earlier in the post, AI will likely come to bear on CDS in another way in the near future: clarifying the messages CDS software communicates to the physicians who use it. In discussing the many ways AI will soon impact healthcare, Kevin B. Johnson, MD, MS, of the University of Pennsylvania notes in a recent JAMA article that AI can streamline the information that CDS conveys:


“Looking at the literature now, we’re just starting to see people getting beyond some of the more commonly talked-about areas to questions like ‘How can we take clinical decision support and critique it before we actually release it?’ So, making sure that the messages are clear, making sure that they’re succinct, making sure that the references we’ve chosen match the concern that we might want to have in terms of education is now something that we can do.”
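
As a sketch of what this kind of pre-release critique could look like in code (the prompt and the `complete` callable below are purely hypothetical stand-ins for whatever LLM API a health system might use):

```python
# A hypothetical sketch of using an LLM to critique a CDS alert
# before release. `complete` stands in for any LLM completion API.
CRITIQUE_PROMPT = """You are reviewing a clinical decision support alert.
Rewrite it to be clear and succinct, and flag any cited reference
that does not match the alert's educational intent.

Alert: {alert}
References: {refs}"""

def critique_cds_alert(alert: str, refs: list, complete) -> str:
    """Ask the LLM for a clearer, shorter version of the alert."""
    return complete(CRITIQUE_PROMPT.format(alert=alert, refs="; ".join(refs)))

# Usage with a placeholder backend (hypothetical callable):
# revised = critique_cds_alert(
#     "Possible interaction: warfarin + NSAID. Consider alternatives...",
#     ["Example Reference, 2020"],
#     complete=my_llm_backend,
# )
```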


Looking Ahead


In addition to implementing non-clinical uses of AI, another good place for healthcare professionals to focus when it comes to gaining fluency with AI is data aggregation. When the time comes to train LLMs and other forms of advanced AI on their data, hospitals and practices will want to make sure that their data – likely a mix of structured and unstructured data, which I discussed earlier in this post – is collected in a place and format accessible to machine learning algorithms.


As generative AI models get better at parsing unstructured data, it becomes incumbent on healthcare entities to compile as much information on their patients as possible. This need must be balanced against the requirement that the data be relatively current and not reflect antiquated practices.
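
As a minimal sketch of what “collected in a place and format accessible to machine learning algorithms” might mean in practice (the record shapes below are invented; real systems would typically use FHIR resources, a data warehouse, or similar), structured fields and free-text notes can be gathered into a single, consistently keyed store:

```python
# A minimal sketch of aggregating structured and unstructured patient
# data into one ML-friendly store. Record shapes are invented.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    structured: dict = field(default_factory=dict)  # labs, vitals, codes
    notes: list = field(default_factory=list)       # free-text clinical notes

store = {}

def ingest(patient_id, structured=None, note=None):
    """Merge new structured fields and/or a note into the patient's record."""
    rec = store.setdefault(patient_id, PatientRecord(patient_id))
    if structured:
        rec.structured.update(structured)
    if note:
        rec.notes.append(note)

ingest("p001", structured={"hba1c": 6.9, "bp": "128/82"})
ingest("p001", note="Patient reports improved energy; continue metformin.")
print(store["p001"])
```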


One group focused on the need for more robust data collection is the American College of Cardiology. In a recent article published by the organization, titled “What Are the Potential Uses and Challenges of AI in Syncope Management?”, the authors highlight that compiling a larger volume of data – so that machine learning algorithms can better learn to manage syncope, “a complex medical condition where adverse cardiovascular conditions, although rare, may occur” – is not only desirable but fast becoming a requirement.


Suffice it to say that watching this innovative space, where new techniques become antiquated almost before they see the light of day, will be fascinating. I’ll make sure to report back when I see interesting applications of AI to population health management and other large-scale approaches to improving patient care.


Have you found interesting AI use cases that I didn’t mention? Are you a healthcare professional who uses AI in your day-to-day work? Whether you’re a tech pro or just gaining an interest in the field, drop me a line and let me know your thoughts.
