Accessible AI: Beyond Clinical Care

Photo courtesy of Free Stock photos by Vecteezy

Much has been made in recent years of the potential for artificial intelligence, or AI, to act as an assistant to workers in an array of professions. Once a concept confined to the likes of a phone's autocorrect feature, in the past year AI has firmly made the leap to center stage with the introduction of tools built on deep artificial neural networks, like ChatGPT by OpenAI and Bard by Google. And nowhere has AI shown more promise as a copilot than in the medical industry.

Although the average patient may not be aware of the extent to which AI is being used in their care, many are optimistic about its potential to improve their own outcomes. Offering a sense of scale for AI's penetration into healthcare, one article in the New England Journal of Medicine noted, "There is virtually no area in medicine and care delivery that is not already being touched by AI."

Whether it's AI-enabled robots or the use of machine learning in the drug discovery process (an approach akin to precision medicine, a topic I covered in some depth here), ever-more-sophisticated algorithms and artificial neural networks are routinely put forward as the answer to intractable problems like healthcare workforce shortages in the face of an aging worldwide population. Some even envision robots one day replacing physicians…but that's likely still a ways off.

Because so many articles have been written about the clinical applications of AI, I thought it would be interesting to take a slightly different approach. While any survey of AI in medicine wouldn’t be complete without at least a passing word about clinical integrations of AI, after doing so, I’d like to take a detour and explore the inroads AI has made into non-clinical areas of medicine.

Setting a Baseline

Everyone’s entry point to AI is different, but one thing is becoming clear: those who refuse to familiarize themselves with these monumental innovations will likely face an uphill battle once the tech takes hold – and that hill will only get steeper in time. Since this is a complicated topic, let’s start by defining some key terms.

The Stanford University Human-Centered Artificial Intelligence (HAI) group has put together a handy glossary of terms related to AI that I think does the trick, and I’ll mostly be drawing upon it for establishing a baseline of understanding here (along with some other insights from the likes of Amazon Web Services (AWS) and IBM).

Aside from generally agreeing with how the HAI folks define the most relevant terms, it’s fitting to use their glossary since it was a professor of computer science at Stanford named John McCarthy who coined the term “Artificial Intelligence.” Here are some useful definitions from their website, augmented by those from a few other sources:

Artificial Intelligence: Defined by emeritus Stanford Professor John McCarthy in 1955 as “the science and engineering of making intelligent machines,” the website states that “today, we emphasize machines that can learn, at least somewhat like human beings do.” I could add more detail than this, of course, but in the interest of keeping this post as straightforward as possible, this should suffice.

Machine Learning: A sub-field of AI, machine learning, or ML, “is the part of AI studying how computer agents can improve their perception, knowledge, thinking, or actions based on experience or data. For this, ML draws from computer science, statistics, psychology, neuroscience, economics and control theory.” ML algorithms can be categorized as supervised, semi-supervised, unsupervised, and reinforcement learning algorithms. Again, much more in the way of details could be added here, but this is enough for now. 

Supervised Versus Unsupervised Learning: Although I won’t delve much into the differences between supervised and unsupervised learning here, a few foundational points are in order: 

  • Supervised learning involves utilizing a certain ground truth to help an algorithm learn a function that, “given a sample of data and desired outputs, best approximates the relationship between input and output observable in the data.” In other words, supervised learning involves using a training set of data to teach models to “yield the desired output.”

  • The term "supervised learning" comes from the fact that humans are relied upon to provide accurate "labels" – the known correct outputs attached to each training example – from which the model can learn to detect relevant patterns in the data. In this context, labeled data can be defined as "a labeled set of data that the model can learn from to make correct decisions." According to the IBM blog, "To put it simply, supervised learning uses labeled input and output data, while an unsupervised learning algorithm does not."

  • In contrast to supervised learning, unsupervised learning doesn’t rely on training datasets or labels. And as the name implies, unsupervised learning doesn’t depend on human supervision to work effectively but works on its own to find patterns in unlabeled data. As defined on the Google Cloud website, “Unsupervised learning in artificial intelligence is a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabeled data and allowed to discover patterns and insights without any explicit guidance or instruction.”  
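To make the distinction concrete, here's a minimal pure-Python sketch – a toy illustration, not a production ML pipeline – showing a supervised nearest-centroid classifier that learns from labeled examples, next to an unsupervised clustering pass that groups the same points without any labels:

```python
from statistics import mean

# --- Supervised: labeled examples (feature, label) teach the model a mapping ---
labeled = [(1.0, "low"), (1.2, "low"), (0.9, "low"),
           (5.1, "high"), (4.8, "high"), (5.3, "high")]

# "Training": compute one centroid per label using the human-provided labels.
centroids = {}
for lab in {"low", "high"}:
    centroids[lab] = mean(x for x, l in labeled if l == lab)

def predict(x):
    # Classify a new point by its nearest labeled centroid.
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

print(predict(1.1))  # a point near the "low" examples
print(predict(5.0))  # a point near the "high" examples

# --- Unsupervised: same features, no labels; discover the groups on our own ---
unlabeled = [x for x, _ in labeled]
# One-dimensional 2-means: start from the min and max as provisional centers.
c1, c2 = min(unlabeled), max(unlabeled)
for _ in range(10):  # alternate assignment and update steps
    g1 = [x for x in unlabeled if abs(x - c1) <= abs(x - c2)]
    g2 = [x for x in unlabeled if abs(x - c1) > abs(x - c2)]
    c1, c2 = mean(g1), mean(g2)

print(sorted(g1), sorted(g2))  # two clusters found without any labels
```

The supervised half needs the human-provided labels to learn its mapping; the unsupervised half recovers essentially the same grouping from the raw numbers alone.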

Deep Learning: This one will take a little longer to define, but stick with me. First of all, it’s technically correct to categorize deep learning (DL) as a subset of ML; however, some people take the distinction further, referring to deep learning as a completely separate approach that stands in contrast to ML. 

The AWS definition is instructive here. As they put it, “Deep learning algorithms emerged in an attempt to make traditional machine learning techniques more efficient. Traditional machine learning methods require significant human effort to train the software” whereas with deep learning, less supervision is necessary.

Another major advantage of using deep learning over other types of machine learning is that it can deal more effectively with unstructured data. In a 2021 blog post, IBM reported that unstructured data  represents “over 80% of all enterprise data, while 95% of businesses prioritize unstructured data management.”

According to AWS, “Machine learning methods find unstructured data, such as text documents, challenging to process because the training dataset can have infinite variations. On the other hand, deep learning models can comprehend unstructured data and make general observations without manual feature extraction.”    

Deep learning marks a departure in terms of its ability to handle data complexity. The previously cited Stanford glossary classifies deep learning as the “use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, a little like the hierarchically organized neurons in human brains. It is currently the most successful ML approach, usable for all types of ML, with better generalization from small data and better scaling to big data and compute budgets.”
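As a sketch of what "large multi-layer (artificial) neural networks that compute with continuous (real number) representations" means mechanically, here's a toy forward pass in pure Python. The weights are hard-coded for illustration; in a real deep learning system they would be learned from data, and the layers would be far larger:

```python
def relu(v):
    # A common nonlinearity: negative values are clipped to zero.
    return [max(0.0, x) for x in v]

def layer(weights, biases, inputs):
    # One dense layer: each output neuron is a weighted sum of all inputs plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    # Two hidden layers of continuous (real-number) units, then one output unit.
    h1 = relu(layer([[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0], x))
    h2 = relu(layer([[1.0, -1.0], [0.4, 0.6]], [0.0, 0.1], h1))
    return layer([[0.7, 0.2]], [0.05], h2)[0]

print(forward([1.0, 2.0]))
```

Stacking many such layers – rather than hand-engineering features – is what lets deep models build up hierarchical representations of raw, unstructured inputs.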


The Case for AI in Healthcare

Now that we've got some definitions out of the way, let's examine the technological landscape in which we now find ourselves. While AI seems to have come out of nowhere to the average person, the concept has, as we've seen, been around in some form or fashion since at least the 1950s. Although not categorized as AI at the time (because the term hadn't yet been invented), an automated checkers program was written in 1951 by British computer scientist Christopher Strachey, who would later become director of the Programming Research Group at the University of Oxford.

If I had to guess, I'd say that aside from natural language processing used by call centers, another way most people first became familiar with AI was through text prediction technology (that pesky autocorrect feature on our phones). While these technologies have been in wide use for a while, the medical industry has been slower to adopt them into the mainstream.

But that trend is changing. In the past decade, there has been a steadily growing desire on the part of healthcare practitioners to use AI-enabled technology. A 2021 article in the journal Nature, for instance, surveyed doctors in Australia and New Zealand who practice in four specialties – ophthalmology, dermatology, radiology, and radiation oncology – about their attitudes concerning AI in healthcare. Most survey respondents (71.0%) predicted that AI would improve their field, with the top three ranked potential advantages of AI being "improved patient access to disease screening," "improved diagnostic confidence," and "reduced time spent by specialists on monotonous tasks."

In their recent “Digital Doctor Survey,” a survey of 3,428 physicians across 20 markets that seeks to clarify “digital behaviour and future trends among physicians,”  Ipsos found similar results:

“More than two in three (68%) of the physicians we surveyed globally are excited about the role of artificial intelligence (AI) in the future of healthcare, with improved accuracy of diagnosis perceived as a key benefit of AI solutions.”

As enthusiastic as many physicians are to trust AI in improving care, these numbers stand somewhat in contrast to patient sentiments. For instance, a February 2023 study by the Pew Research Center found that six-in-ten U.S. adults would feel uncomfortable if their doctor relied on AI to diagnose diseases and recommend treatments, with only 39% saying they’d feel comfortable with this scenario. In some ways this contrast bodes well for physicians, ensuring that for the foreseeable future, at least, they will be relied upon to verify, sometimes overrule, and in most cases work in partnership with algorithms in improving care outcomes.

Clinical Integrations of AI

Before we discuss non-clinical uses of AI, I thought it would be good to first touch on the other end of the spectrum – clinical applications of AI – to give us a sense of perspective. Although among the most compelling applications of artificial intelligence, clinical AI integrations often come with a hefty price tag which, depending on the use case, can make for a tricky cost-benefit proposition.

For example, a recent JAMA article focused on whether deploying AI-enabled clinical decision-support tools in three disciplines – dermatology, dentistry, and ophthalmology – could be deemed "cost-effective." The authors set out to analyze data from three Markov models which were used in previous cost-effectiveness studies, and which "were adapted to compare AI vs standard of care to detect melanoma on skin photographs, dental caries on radiographs, and diabetic retinopathy on retina fundus imaging."

The study’s conclusions were decidedly mixed, with the authors stating that “AI used as a decision-support system came with limited and use case-specific cost-effectiveness advantages, which were sensitive not only to the costs assigned to AI but also the subsequent therapy paths assumed after the diagnosis.” 

Translation: the value of AI in clinical problem-solving is highly dependent on the setting in which it's deployed. To estimate what using AI as a clinical decision support (CDS) tool costs compared with physicians working without such diagnostic support, the study's authors adopted "a payer perspective." To do this, they looked at how much payers in three different countries (the U.S. in the case of dermatology, Germany in the case of dental care, and Brazil in the ophthalmological use case) would pay out on a fee-for-service basis for the two types of care approaches.

As I alluded to earlier, the results of the study were mixed. When the dermatologists used AI-based CDS tools versus making decisions in the absence of such tools, the fees paid by payers were similar, as were the health outcomes. This was, in part, because AI was found more likely to be cost-effective at a lower “willingness to pay” (WTP) level (where WTP is defined as the additional costs a decision-maker is willing to take on for gaining an additional quality-adjusted life-year (QALY) or tooth retention–year). Increasing WTP, on the other hand, increased the uncertainty of whether AI could be considered cost-effective.
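To make the cost-effectiveness logic concrete, here's a toy calculation (all numbers are hypothetical and not drawn from the JAMA study): the incremental cost-effectiveness ratio (ICER) – extra cost divided by extra QALYs gained – is compared against the decision-maker's WTP threshold:

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_std) / (qaly_new - qaly_std)

def cost_effective(cost_new, cost_std, qaly_new, qaly_std, wtp):
    """An intervention is deemed cost-effective if its ICER is at or below WTP."""
    return icer(cost_new, cost_std, qaly_new, qaly_std) <= wtp

# Hypothetical numbers: AI-assisted diagnosis costs $1,200 vs. $1,000 for
# standard of care, and yields 0.02 additional QALYs per patient.
ratio = icer(1200, 1000, 1.52, 1.50)  # ≈ $10,000 per QALY gained
print(ratio)
print(cost_effective(1200, 1000, 1.52, 1.50, wtp=50000))
```

Note how the verdict flips with the threshold: at a WTP of $50,000 per QALY this hypothetical AI tool looks cost-effective, but at $5,000 it does not – which is exactly the sensitivity to WTP the study describes.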

Contrast these results with those found when a similar strategy was applied to dentistry and ophthalmology: for dentistry, using AI-enabled decision support tools had more clinical utility and led to lower costs, while diagnosis with AI in ophthalmology equated to higher costs but a similar level of utility. These data points serve to support the idea that injecting highly sophisticated algorithms as they now exist into every clinical context might not always lead to optimal outcomes.

Generative AI

We haven't yet discussed the elephant in the room – generative pre-trained transformer (GPT) models – so let's get to it now. Neural network models like ChatGPT and Bard are beginning to play a more significant role not just in society in general, but also in patient care. Although to date these tools have mostly been employed in research studies examining capabilities such as AI-assisted medical diagnosis, I'd hazard a guess that we'll soon see them used with increasing regularity across medicine, including in clinical practice.

As we talked about earlier, large language models and ML algorithms have been around for years. Built on this foundational knowledge, generative AI, or “gen AI,” is a related technology that has “the ability to create human-like text and content (images, music, and more), and answer questions in a conversational manner.” Anyone who’s used ChatGPT can attest to how lifelike some interactions with the software can be.

Just when you thought we’d finished defining terms, here’s one more. Georgetown University’s Center for Security and Emerging Technology provides an instructive definition when it comes to generative AI:

“Generative AI is a broad term that can be used for any AI system whose primary function is to generate content. This is in contrast to AI systems that perform other functions, such as classifying data…grouping data (e.g., identifying customer segments with similar purchasing behavior), or choosing actions (e.g., steering an autonomous vehicle).”
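As a crude illustration of a system "whose primary function is to generate content," here's a toy word-level Markov chain in Python – nothing remotely like the transformer models discussed in this post, but it shows generation (producing new sequences) rather than classification or grouping:

```python
import random

corpus = ("the patient scheduled the appointment and "
          "the pharmacist confirmed the appointment")

# Build a bigram table: for each word, which words follow it in the corpus?
words = corpus.split()
table = {}
for a, b in zip(words, words[1:]):
    table.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    # Walk the table, sampling a plausible next word at each step.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = table.get(out[-1])
        if not nxt:
            break  # dead end: no observed successor for this word
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 6))
```

Every sentence it emits is new content statistically consistent with its training text – the same basic idea, scaled up by many orders of magnitude, that underlies large language models.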

At present, while incredibly impressive, gen AI has proven to be somewhat error-prone. In addition, there are valid concerns about the data these models have been trained on being subject to bias, thereby causing the model output to reflect similar biases. For this reason and others, keeping humans in the training and, in many cases, deployment loop remains an important component of ensuring GPT model effectiveness.


While on the surface gen AI may seem like a novelty without many applications in the real world, this capability actually creates all sorts of opportunities for physicians to be freed of mundane, repetitive tasks – or tasks for which they don’t have enough time – and focus on higher-level critical thinking activities. In the spirit of freeing up mental bandwidth, a recent report by McKinsey & Company made clear that the real genius of gen AI is that it can automate “tedious and error-prone operational work, bringing years of clinical data to a clinician’s fingertips in seconds, and by modernizing health systems infrastructure.”

In the category of “tasks for which physicians don’t have enough time,” one could place interactions with patients. Because of this, many health systems are eyeing gen AI to act as a sort of cyborg counterpart, or sophisticated chatbot, that can converse more naturally with patients (and for as long as a patient wants) than their hectic clinical schedules allow. As an example of this, Dr. Gege Gatt provided a glimpse into the future of patient/physician interactions at an NHS innovation event in the UK back in March.

Starting at the 9:20 mark of the video, listen to how Dr. Gatt describes an “automated pathway” interaction between three fictional characters. The two main characters are a patient named Harry and a medical chatbot (or, as he calls it, an “IAPT Virtual Assistant”) named Sally.

If you can get past the tongue-in-cheek references to the 80s movie “When Harry Met Sally,” it’s a fascinating look at how, starting with scheduling a PCP visit (what folks in the UK call “general practitioner” or GP appointments), Sally guides Harry through how to fill out pre-assessment paperwork in the form of an interactive, back-and-forth conversation (as opposed to filling out forms). Once the care pathway is set up, the virtual assistant periodically reminds Harry of his upcoming appointments, allowing Harry to interrupt her at any point to ask questions. The scenario ends with Sally working with the patient to arrange transportation to the healthcare site.

Relying on algorithms to chart care pathways, as in the example above, will become increasingly important as the clinical care workforce shortage deepens in the years to come.

Operations Applications

While not all medical providers may be in a position to deploy top-shelf AI that acts as a competent clinical assistant, there are plenty of less rigorous use cases for AI that are attainable right now. Some of these IT solutions are brand new, while others have been around for years and are currently getting a boost from technological breakthroughs; whatever the case, there is seemingly no excuse for those in the medical field not to begin familiarizing themselves with at least some of them.

For years now, AI has been used in non-clinical applications from scheduling to billing to a range of other repetitive office tasks. Billing provides a particularly interesting use case for AI, since so much of it – from manual coding to determining patient eligibility – chews up time that could otherwise be spent on higher-level tasks. According to the American Institute of Healthcare Compliance, tasks like eligibility confirmation can be streamlined using automation:

“AI-powered systems can integrate with insurance databases and instantly verify patient eligibility, coverage limits, and pre-authorization requirements. This real-time eligibility verification reduces claim rejections and denials due to coverage issues, enabling physicians to provide timely and appropriate care while maximizing revenue potential.”
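As a toy sketch of the kind of rule-based check such a system automates – the data model, member IDs, and rules here are invented for illustration, not any real payer API – consider:

```python
# Hypothetical, simplified coverage records; real systems query payer databases.
coverage = {
    "member-001": {"active": True, "coverage_limit": 5000.0, "used": 4200.0,
                   "preauth_required": {"MRI", "surgery"}},
}

def verify_eligibility(member_id, procedure, estimated_cost):
    """Return (eligible, reason) for a proposed procedure."""
    plan = coverage.get(member_id)
    if plan is None or not plan["active"]:
        return False, "no active coverage on file"
    if plan["used"] + estimated_cost > plan["coverage_limit"]:
        return False, "coverage limit would be exceeded"
    if procedure in plan["preauth_required"]:
        return False, "pre-authorization required before scheduling"
    return True, "eligible"

print(verify_eligibility("member-001", "X-ray", 300.0))
print(verify_eligibility("member-001", "MRI", 700.0))
```

Running checks like these automatically at scheduling time, rather than discovering problems after a claim is filed, is what reduces the rejections and denials the quote describes.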

Also of note is how AI can be brought to bear on patient scheduling and workflows. In a 2022 study that appeared in the journal BMC Health Services Research, the authors performed a prospective content validity assessment to figure out the most relevant features of an “AI-enhanced care pathway planning and scheduling system.”

As part of the study, surveys were distributed to clinical professionals at five university hospitals in three European countries (Finland, Italy, and Spain). Interestingly, the authors found that the top facets for patient care pathways enhanced by AI included the following: "risk assessment, patient profiling, and the use of shared resources (e.g., personnel, time) at the patient and unit levels."

I find these results interesting because the identified features align well with AI technology already in use. The article goes on to say the following:

“In the literature, AI-enhanced scheduling systems have been used to identify modifiable risk factors and to stratify patients into high- and low-risk groups to optimize preventive measures in advance…In addition, intelligent digital services have been used to predict the duration of surgery (DOS)…and the postoperative length of stay…to optimize resource management with a high degree of accuracy.”

Making Practices More Efficient

The purpose of this blog post isn't to comment on how high or low the barrier to entry is in terms of price when it comes to these technologies; however, I will say that the top features this cohort of healthcare practitioners identified so closely match AI technologies that have existed in the field for some time that they could likely be put to reasonable use in a range of practice settings.

Speaking of price, these AI integrations are all about saving time and money for physicians, hospital administrators, and back office staff. A recent IBM blog post highlighted a few of these applications – such as those that improve administrative workflow, act as virtual nursing assistant chatbots, reduce dosage error, and prevent fraud – and noted that practices are using them to great effect right now.

Also worth noting in the improved-workflow space is our old friend CDS. While we touched on integrations of AI into decision support tools earlier in the post, AI will likely come to bear on CDS in another way in the near future: clarifying the messages CDS software communicates to the physicians who use it. In discussing the many ways AI will soon impact healthcare, Kevin B. Johnson, MD, MS, of the University of Pennsylvania notes in a recent JAMA article that AI can streamline the information that CDS conveys:

“Looking at the literature now, we’re just starting to see people getting beyond some of the more commonly talked-about areas to questions like ‘How can we take clinical decision support and critique it before we actually release it?’ So, making sure that the messages are clear, making sure that they’re succinct, making sure that the references we’ve chosen match the concern that we might want to have in terms of education is now something that we can do.”

Looking Ahead

In addition to implementing non-clinical uses of AI, another good place for healthcare professionals to focus when it comes to gaining fluency with AI is on data aggregation. When the time comes for training LLMs and other forms of advanced AI on their data, hospitals and practices will want to make sure that their data – likely a mix of structured and unstructured data, which I defined earlier in this post – is collected in a place and format accessible to machine learning algorithms.

As generative AI models get better at parsing unstructured data, it becomes incumbent on healthcare entities to compile as much information on their patients as possible. That need must be balanced against the requirement that the data be relatively current and not reflect antiquated practices.

One group that's focused on the need for more robust data collection is the American College of Cardiology. In a recent article published by the organization titled "What Are the Potential Uses and Challenges of AI in Syncope Management?", the authors highlight how compiling a larger volume of data – so that machine learning algorithms can better learn to manage syncope, "a complex medical condition where adverse cardiovascular conditions, although rare, may occur" – is not only desirable, but fast becoming a requirement.

Suffice it to say that watching this innovative space, where new techniques become antiquated almost before they see the light of day, will be fascinating. I'll make sure to report back when I see interesting applications of AI to population health management and other large-scale approaches to improving patient care.

Have you found interesting AI use cases that I didn't mention? Are you a healthcare professional who uses AI in your day-to-day work? Whether you're a tech pro or just gaining interest in the field, drop me a line and let me know your thoughts.

Pharmacists and Population Health


In all the time I’ve been writing about population health, I’ve somehow managed not to discuss pharmacists or pharmaceuticals very much. Partly that’s because I associate the two topics closely and, apart from briefly mentioning efforts to make medications more affordable, I haven’t devoted much thought to how medications – or those who distribute them – fit into a population health framework.

Like many people, I've always seen the pharmacist as standing somewhat apart from the medical establishment. Yes, many pharmacists work in clinical settings (around 34% according to a 2019 national workforce study), but I'd argue that most people, for most of their lives, only see pharmacists in the community pharmacy setting, and even then they mainly deal with pharmacy technicians, not the actual pharmacist. If they had a role to play in population health management, went my logic up till now, it would've already happened.

But as it turns out, there's a lot more to the pharmacist profession than simply dispensing medications. After digging into how pharmacists actually interact with clinicians and the general public, how their role evolved practically overnight during the COVID pandemic, and how much potential they have to move the needle on population health, I have come to see pharmacists and pharmacy technicians in an exciting new light.

Location, Location, Location

As it turns out, my own perception that you can’t swing a recently-deceased cat without hitting a pharmacy is actually quite accurate to most people’s experience in the U.S. In fact, a study published last year in the Journal of the American Pharmacists Association (JAPhA) reported that “Across the overall U.S. population, 48.1% lived within 1 mile of any pharmacy, 73.1% within 2 miles, 88.9% within 5 miles, and 96.5% within 10 miles.”

Now that’s what I call patient access! If you’re new to this blog or the concept of population health in general, accessibility of care is of central importance when it comes to improving patient outcomes. If you create too many obstacles for patients to seek care (long waits, exorbitant costs, too little time with their primary care physician), they often won’t comply with preventative care strategies. This in turn often results in patients presenting at the hospital with preventable health issues, and puts further strain on an already overstressed healthcare system.

Conversely, opening up availability of medical services to the maximum number of patients (or potential patients dealing with risk factors) puts healthcare providers in the best position to maintain healthy populations. Although most pharmacists in the U.S. aren't as highly trained as the average clinician – which, as we'll soon see, can pose very real challenges for physicians delegating clinical services to pharmacists – they are still generally very knowledgeable about how to deploy medications to improve patient outcomes.

So the upside of there being so many pharmacies is that, unless you live in Alaska, South Dakota, North Dakota, or Montana – where there are fewer pharmacies per capita – you shouldn't have much trouble speaking either directly to a pharmacist or to a trained pharmacy technician. The downside is that while you might be able to ask a pharmacist or technician more questions than you could your PCP, given the time constraints placed on the latter, you often won't have the assurance that A) the practitioner is as highly trained as your primary care doctor, or B) that the pharmacist can readily access your medical information through an electronic health record – unless, that is, they work within a healthcare network alongside prescribing providers as coordinated members of a patient's care team. Limited access to the EHR, as the linked article notes, "makes ensuring that patients are receiving proper medications and monitoring for efficacy and safety a challenge, if not impossible in some situations."

These barriers, both regulatory and technological, present tough challenges to creating a fertile environment in which PCPs and pharmacists can become more equal partners in value-based care delivery. That said, I still believe that the framework is in place for pharmacists to begin taking a more active role in population health management. The fact that most Americans can conveniently receive care from a trained pharmacy employee (in both clinical and community settings) positions pharmacists and their staff members to greatly improve the health of the communities they serve.


Your Friendly Neighborhood Pharmacist (and Tech)

Before we delve too far into the specifics of how pharmacy teams can more tightly coordinate care with PCPs to become key value-based care delivery partners, let's talk about another commonly held perception: their trustworthiness.

On top of being highly accessible, pharmacists are generally considered honest and ethical by the public. According to a series of Gallup polls run over the course of many years, pharmacists have consistently ranked near the top of the most trusted professions. In parallel with this trend, the rates of pharmacists seeking advanced degrees are increasing; indeed, according to the aforementioned workforce study, “the proportion of licensed pharmacists whose highest degree is a Doctor of Pharmacy (PharmD) degree was 53.5% in 2019 compared to 37.8% in 2014 and 21.6% in 2009.”

Also of note, in 2019 "36.2% of licensed pharmacists held a BS degree as their highest degree compared to 52% in 2014 and 66.3% in 2009," a clear indication that pharmacists are looking to arm themselves with more knowledge at an ever-increasing rate. I can't prove that the two data points are correlated, but it certainly does seem that achieving ever-higher levels of education has made a favorable impression on the public. Since a discussion of regulatory barriers could fill a whole separate blog post, I won't go into it much here; suffice it to say, however, that if pharmacists were allowed to practice at the top of their license, many could better tailor their care to the patients who live and work in the communities they serve.

Stepping Up in Challenging Times

Remember that part above about clinicians delegating authority to pharmacists, and how important it may be to the future of population health? Well, this theory was put to the test during the COVID-19 pandemic, which proved to be a watershed moment for the pharmacy profession. The fact that pharmacists became an integral partner not just in dispensing COVID-19 vaccines, but also in testing people for the illness, was lost on neither physicians nor patients.

Up until then, many people viewed their local pharmacist as someone who stood behind a counter dispensing medications ordered by their doctors; but when COVID hit, these benign figures snapped into action, becoming indispensable foot soldiers in a life-and-death battle, administering needed vaccines directly to members of their community when the need exceeded the capacity of primary care physicians to do so.

This dramatic shift in course is borne out in the numbers. A report published last year surveyed more than 3,000 patients, 1,000 pharmacists, and 500 providers (the latter including physicians and nurse practitioners) to better understand the expanding role of pharmacists. The following results begin to tell a compelling story:

  • A majority of pharmacists anticipate transitioning from transactional care to taking on more direct patient care responsibilities.
  • More than half (53.3%) of pharmacists agreed their current training and education are sufficient to manage patients. However, pharmacists did identify opportunities for additional training in chronic disease education, diagnosing, and prescribing.
  • Nearly half of patients would find it helpful to have routine testing and medical visits done from home. Many pharmacists are equipped to interface with patients using telepharmacy technology, and of the pharmacists who use this technology, more than a third told the report authors that “it gives them more time to interact with patients, allowing them to provide crucial information and support when – and where – patients need it most.”  

So if pharmacists could step up during a public health emergency to ensure patient backlogs didn’t build up in clinical settings, might they not also be able to play a more active role in preventative healthcare more generally?

Pharmacists and Population Health

I was reading an article by the American Society of Health-System Pharmacists (ASHP) on how pharmacists can influence population health, and I happened across my new favorite definition of population health management:

“While population health refers to the outcomes of a defined group of individuals and the…drivers and determinants of health, population health management refers to the infrastructure that enables programs to target defined populations and employ a variety of interventions aimed at slowing disease progression, improving health-related outcomes, and decreasing total cost of care.”

There is a growing consensus within the pharmacy field that participation in population health should be a priority for pharmacists. In fact, as part of their 2030 Patient Advancement Initiative, the ASHP puts forth several recommendations pertaining to population health:

  • Pharmacists in all care settings should be included as integral members of the healthcare team and share accountability for patient outcomes and population health.
  • Pharmacists should use health information technologies to advance their role in patient care and population health.
  • Pharmacy should establish standards for the application of artificial intelligence (AI) in the various steps of the medication-use process, including prescribing, reviewing medication orders, and assessing medication-use patterns in populations.
  • Pharmacists should be leaders in federal and state legislative and regulatory policy development related to improving individual and population health outcomes. 

These statements are, of course, only aspirational and don’t necessarily signal progress in any of the specified areas. Still, the ambitiousness of these goals speaks volumes about where ASHP sees the future of pharmacy with respect to population health.

In the document “FAQ: Getting started with population health management” I quoted earlier in the post, the ASHP elaborates on the above goals, stating that, “Identification of barriers, both clinical and non-clinical, that prevent patients from achieving better therapeutic outcomes is a key to successful and sustained care” and providing the following examples of ways pharmacists can participate in population health management: “performing comprehensive medication management, transitions of care services, wellness promotion classes, monitoring for high-risk or specialty medications, and employee health assessments.”

All of the above functions are important, and most share a common feature: they involve collaborating closely with other members of a patient’s care team, particularly the patient’s primary care provider.

Photo courtesy of Free Stock photos by Vecteezy

Partnering with Clinicians

The term “comprehensive medication management” (CMM), also called “medication optimization” or, more broadly, “medication management services,” can mean many things to many people. But in this context, a definition that appeared in a 2021 article in the journal American Health & Drug Benefits seems to fit best: “when a patient’s medication has been optimized by the healthcare team and the patient uses the regimen in the ideal manner to improve health outcomes.”

It’s also important to note that CMM is often delivered in clinical settings, as noted in this definition offered by the American Association of Colleges of Pharmacy (AACP):

“CMM is a patient-centered approach to optimizing medication use and improving patient health outcomes that is delivered by a clinical pharmacist working in collaboration with the patient and other health care providers.”

Key here is the term “clinical pharmacist”; I couldn’t find many examples of CMM happening widely in community or retail pharmacy practice, at least not without the supervision of a more highly trained physician (a point I’ll elaborate on in a minute). One of the case studies highlighted in the AACP article illustrates the potential effectiveness of a CMM approach in improving the health outcomes of defined patient populations, a key feature of population health. As described in the article, the Monroe Clinic, a center serving rural Southern Wisconsin, sought to solve a problem: chronic disease patients there were waiting up to six weeks to see their PCPs and, as a result, many ended up in the emergency department.

To solve this issue, the pharmacy team at the Monroe Clinic approached their PCP colleagues about allowing the pharmacists to meet with patients for chronic disease management. As part of this effort, the pharmacists provided their clinical colleagues with a CMM framework that would relieve the primary care doctors’ caseload and allow the latter to see patients who had more acute issues. The approach is paying dividends and, according to the article, one of the pharmacists has been offered a full-time role managing the CMM process.

As in the example of the Monroe Clinic, CMM is an important step forward when it comes to pharmacists becoming key players in population health management. But in my research, I didn’t turn up many ways – aside from being part of an integrated specialty pharmacy within an accountable care organization (ACO) – that pharmacists are directly being reimbursed for implementing CMM or other sorts of value-based care.

Chronic Care Management

One potential exception to pharmacists not being reimbursed for value-based care exists in the form of something called “medication management.” For those pharmacists working in community practice and who deal with Medicare beneficiaries with two or more chronic health conditions, medication management offers an avenue for reimbursement through a program formalized in 2015 by the Centers for Medicare & Medicaid Services (CMS) called Chronic Care Management, or CCM.

Not to be confused with CMM, CCM is defined by CMS in the following way:

“CCM is care coordination services done outside of the regular office visit for patients with two or more chronic conditions expected to last at least 12 months or until the death of the patient, and that place the patient at significant risk of death, acute exacerbation/decompensation, or functional decline. These services are typically not face-to-face and allow eligible practitioners to bill for at least 20 minutes or more of care coordination services per month.” 

As I mentioned above, medication management is an integral component of a comprehensive care plan for chronic care management and, according to the ASHP, pharmacists can engage in CCM services in the following ways (and please note, “QHP” here stands for “qualified healthcare provider”):

“Under CMS guidelines, pharmacists are recognized as “clinical staff” who can provide CCM services within their scope of practice under general supervision of a QHP. The pharmacist can be directly employed, or under contract (independent contractor) or leased employment of the qualified billing healthcare professional that is providing CCM services. The QHP must be able to provide general supervision of CCM services provided by a pharmacist but is not required to be physically present or colocated (must be available by phone). Under CCM guidelines, a pharmacist cannot consent a patient, develop a comprehensive care plan, or bill for CCM services.”

As key as pharmacists are to a successful CCM program, most are not considered billing providers and, as stated above, must work under the supervision of a physician or other qualified provider. ASHP lists the following as QHPs who can bill for CCM services: physicians, nurse practitioners, physician assistants, clinical nurse specialists, and certified nurse midwives, with only one practitioner per patient being eligible to receive payments for CCM services per calendar month. The document goes on to say the following:

“CCM services may also be furnished by clinical staff that meet Medicare’s “incident to” rules. Licensed clinical staff include the following: clinical pharmacists, RN, LSCSW, LPN, MAs or CMAs. The time spent by clinical staff members furnishing CCM services directed by a QHP counts towards the time thresholds. Non-clinical staff time cannot be counted.”

Since most pharmacists aren’t as highly trained as physicians, their avenues for participation in CCM are limited to collecting data, maintaining and informing updates for the care plan, providing 24/7 access to care, and documenting CCM services, and, again, their work for the most part is subject to the supervision of a physician.
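Taken together, the CCM rules described above amount to a simple eligibility check: sum the care-coordination minutes logged by clinical staff in a calendar month and compare the total against the billing threshold. Here is a minimal sketch of that logic in Python; the data model and role names are hypothetical, and the actual billing requirements are defined by CMS.

```python
# Hypothetical sketch of the CCM time-threshold logic; role names and
# record structure are illustrative, not an actual CMS specification.

CLINICAL_ROLES = {"pharmacist", "rn", "lscsw", "lpn", "ma", "cma"}
CCM_MINIMUM_MINUTES = 20  # base CCM threshold per calendar month

def ccm_minutes(entries):
    """Sum care-coordination minutes that count toward the CCM threshold.

    Only time logged by clinical staff (per Medicare's "incident to"
    rules) counts; non-clinical staff time is excluded.
    """
    return sum(e["minutes"] for e in entries if e["role"] in CLINICAL_ROLES)

def meets_ccm_threshold(entries):
    """True if the month's countable minutes reach the billing threshold."""
    return ccm_minutes(entries) >= CCM_MINIMUM_MINUTES

month_log = [
    {"role": "pharmacist", "minutes": 15},  # counts (clinical staff)
    {"role": "rn", "minutes": 10},          # counts
    {"role": "scheduler", "minutes": 30},   # non-clinical: excluded
]
print(ccm_minutes(month_log))          # 25
print(meets_ccm_threshold(month_log))  # True
```

Note how the scheduler's 30 minutes are excluded: as the ASHP guidance states, non-clinical staff time cannot be counted toward the threshold.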

CCM in Practice

A real-world example of pharmacists acting as highly effective practitioners of so-called medication therapy disease management (MTDM) appeared in a 2018 article in the journal ClinicoEconomics and Outcomes Research. The authors noted that the approach “has shown improvement in clinical outcomes in patients with certain chronic diseases.” One of these diseases, diabetes mellitus (DM), affects over 30 million people in the U.S. and represents an enormous cost burden for those who suffer from the disease; indeed, states the article, “the cost burden for an individual with DM is more than twice as high as it is for an individual without DM, while the total direct and indirect cost burden of DM in 2012 was estimated to be $245 billion.” 

Additionally, studies suggest that when healthcare teams treating diabetes patients include a pharmacist rather than other healthcare professionals like nurses, doing so is associated with “greater reductions in patients’ blood glucose levels.” Incredibly, despite this clear advantage, at least one study noted that fewer than 10% of U.S. adults with DM have had the factors contributing to their illness adequately controlled. The limited uptake of pharmacist-led MTDM is attributed primarily to a lack of data on the approach’s impact on both health care utilization and overall cost of care. 

There are any number of other ways that pharmacists can enhance their role in population health, from becoming members of transitions of care teams to working with patient-centered medical homes. But since reimbursement pathways haven’t kept pace with innovation, population health management care approaches have been unevenly adopted throughout the pharmacy industry. It will be interesting to see how areas like medication management and CCM evolve, and if other payers adopt these proven ways for pharmacists to positively influence patient outcomes.

Precision Medicine and Precision Public Health: A Genomic Approach to Improved Outcomes

Photo courtesy of Free Stock photos by Vecteezy

While researching a separate blog post on artificial intelligence in healthcare, I kept running into articles about precision medicine, a relatively new field that holds much promise for optimizing patient outcomes. As I took a detour down the precision medicine rabbit hole, I soon happened upon a related approach to public health that I’d like to explore in more depth here. It’s called “precision public health” (PPH) and despite its name, the concept is very much in line with the principles of population health, a subject I discuss often on this blog.

If you’ve heard of precision medicine, you might be wondering how a discipline whose purpose is to map out care plans based on a given patient’s genetics (i.e. medicine practiced on a molecular level) can possibly tie into public health, a field that, as the CDC Foundation notes, concerns itself “with protecting the health of entire populations.”

Aren’t the scales of these two endeavors too dissimilar to admit of any overlap?

You can be forgiven for wondering about this seeming paradox, because at first I couldn’t quite grasp it either. But stick with me and I’ll walk you through the main ideas so that by the end of this post, you’ll be armed with enough information to not be caught off guard when someone brings up PPH in conversation.

Defining Precision Medicine

Although the concept of precision medicine has only been in the public consciousness for about twenty years, it has made major inroads into healthcare delivery in the United States. With an attendant boom in genomics over the past few decades, the field of precision medicine has matured at a rapid pace. Its rise to prominence has been so swift, in fact, that in 2015, President Obama unveiled the Precision Medicine Initiative, a major push to move precision medicine into common clinical practice.

As traditionally defined, precision medicine uses information about an individual’s genome, environment, and lifestyle “to guide decisions related to their medical management” with an overall goal of providing “a precise approach for the prevention, diagnosis and treatment of disease.” The term is used interchangeably with “personalized medicine,” “personalized care,” or “individualized medicine” and, according to the American Cancer Society, “Precision medicine is a way health care providers can offer and plan specific care for their patients, based on the particular genes, proteins, and other substances in a person’s body.”

According to the Brookings Institution, precision medicine “uses personal information, such as DNA sequences, to prevent, diagnose, or treat disease. From targeting late-stage cancers to curing rare genetic diseases, precision medicine is poised to impact millions of people within the next decade.” The American Medical Association goes still further in its assessment of the field, stating that “Precision medicine is a tailored approach to health care that accounts for the individual variability in the genes, environment and lifestyle of each person.”

The emphasis here is mine, and in a bit we’ll delve into how placing more emphasis on a person’s “environment and lifestyle” can represent an area of difference between precision medicine and its cousin, precision public health.

Precision Medicine: A Brief History

As the name would suggest, precision medicine is all about tailoring medical care pathways to individual patients. Throughout most of history, medical treatments were aimed at helping the average patient. Without a detailed understanding of genetic differences between patients, a one-size-fits-all approach predominated. This makes sense given the limited tools available to physicians over the past few centuries; as a consequence, however, some patients benefited from treatments while others did not.

It’s an oversimplification to call this a scattershot approach, but that description isn’t too far off the mark. Thankfully, with the advent of the Human Genome Project (HGP) — an initiative that aimed to map the genetic makeup of humans and other organisms — researchers were able to unlock how disease states operate on a molecular level. The project, which ran from October of 1990 to April of 2003 (with the final 8% of the human genome having been sequenced just last year), resulted in sequencing “around 20,000 genes of those that make up the blueprint of our bodies.”

This advance represented a giant leap forward in human understanding, but gaps in our knowledge remained. Aside from the remaining 8% of the human genome left to be sequenced that I mentioned above, many in the scientific community were concerned about the fact that nearly 70% of the genome referenced during the HGP came from one single man who lived in the Buffalo, New York area and had his genes sequenced as part of the project.

Although the man was likely of mixed race, it didn’t sit well with some that the other HGP participants used to make reference genomes were mostly of European descent. The lack of diversity in reference data prompted a host of questions, including whether variations from the reference genome should actually be considered abnormal, and how much one genome could be relied upon to provide an accurate picture of variation among every person on the globe. Some work-arounds were employed, but evidently none of them provided the accuracy sought by experts.

Fast forward to earlier this year, when it was reported that, in an effort to create a more accurate standard measurement for the human genome, scientists had created what they called the “pangenome.”

The New York Times article linked to above states that “the new ‘pangenome’ incorporates near-complete genetic sequences from 47 men and women of diverse origins, including African Americans, Caribbean Islanders, East Asians, West Africans and South Americans.” Of particular note, the article goes on to say that the “revamped genome map represents a crucial tool for scientists and clinicians hoping to identify genetic variations associated with disease. It also promises to deliver treatments that can benefit all people, regardless of their race, ethnicity or ancestry…”

Photo courtesy of Free Stock photos by Vecteezy

Precision medicine offers tremendous benefits, because, as a recent article in the Harvard Public Health magazine points out, “not everyone benefits equally from the same intervention. Someone at increased genetic risk for breast cancer, for instance, might benefit far more from regular mammograms than a neighbor at low risk.” This realization, says the author, is driving the development of the field.

Interestingly, with such a strong push to include non-clinical social determinants of health (SDOH) in patients’ care decisions, it is perhaps not surprising that healthcare practitioners are turning to precision medicine and the related — though more encompassing — field of “precision health” to help tailor healthcare delivery that optimizes for positive patient outcomes. But before we delve into the related but separate topic of precision public health, let’s first gain a better understanding of how genomic data is collected and analyzed.

Big Data and Precision Medicine

While advances in precision medicine represent a welcome milestone in the fight for health equity, mapping variation in the human genome at an ever more fine-grained level has led to growing uneasiness on a number of fronts. A 2017 paper that appeared in the Journal of Community Genetics encapsulates many of these concerns. Chief among them are the following items:

  • How to report genetic variants of uncertain significance (i.e. how to deal with incidental findings unrelated to a given test’s clinical indication)
  • The need to expand the availability of genetic counseling services
  • Training healthcare providers to interpret and better understand results from next-generation sequencing (NGS) technologies
  • How best to deal with the ethical tensions inherent in practicing in this new clinical environment     

Aside from these complications, there is the monumental question of how, in the era of electronic health records and wearable sensors, clinicians can best collect the avalanche of data the new technology has occasioned. This question is hard enough to answer on its own, but perhaps an even bigger challenge lies in how to analyze the treasure trove of new data to find appropriate clinical applications. With data rolling in at ever increasing rates from multiple platforms in a variety of geographic locations, how do physicians and researchers keep it all straight?

Enter big data analysis, a field that has been undergoing an explosion in recent years. According to an article in the International Journal of Molecular Sciences, “Big Data analytics uncovers hidden patterns, unknown correlations, and other insights through examining large-scale various data sets.” The article also states the following:

“While integration and manipulation of diverse genomic data and comprehensive electronic health records (EHRs) on a Big Data infrastructure exhibit challenges, they also provide a feasible opportunity to develop an efficient and effective approach to identify clinically actionable genetic variants for individualized diagnosis and therapy.”

Put another way, as a result of this new wellspring of patient information, the need for big data analytics to search for patterns that can lead to effective medical solutions is paramount. And by “effective medical solutions,” I’m not just referring to treating disease states that have already manifested; indeed, precision medicine can also be deployed to predict illnesses before they occur.
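To make the idea of “clinically actionable” findings concrete, here is a toy sketch in Python that cross-references a patient’s observed variants against a curated knowledge base of gene-drug guidance. The gene names, variant IDs, and recommendations below are placeholders I invented for illustration, not real associations; real pipelines draw on curated clinical databases and far richer variant annotations.

```python
# Toy sketch of flagging clinically actionable variants. All gene names,
# variant IDs, and recommendations here are invented placeholders.

ACTIONABLE = {
    ("GENE_A", "var1"): "reduce dose of drug X",
    ("GENE_B", "var7"): "avoid drug Y",
}

def actionable_findings(patient_variants):
    """Return curated recommendations matching the patient's variants."""
    return [
        (gene, variant, ACTIONABLE[(gene, variant)])
        for gene, variant in patient_variants
        if (gene, variant) in ACTIONABLE
    ]

patient = [("GENE_A", "var1"), ("GENE_C", "var2")]
print(actionable_findings(patient))
# [('GENE_A', 'var1', 'reduce dose of drug X')]
```

The real analytical challenge, of course, is building and maintaining that knowledge base and integrating it with EHR data at scale, which is exactly where the big data infrastructure described above comes in.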

If you want to really be dazzled by precision medicine’s promise of enhancing patient outcomes, consider this outlook on the discipline’s potential to head off diseases before they appear:

“Other genome-based technology platforms [for example, assays for RNA, proteins, metabolites] are also increasingly being used to classify disease states (as diagnostic tests) and to predict future clinical outcomes (as prognostic tests). Together, these approaches form the basis for 1) a new molecular taxonomy of disease, 2) provide more precise ways to screen for and to detect disease at its earliest molecular manifestations, often pre-clinically, and 3) allow the selection of certain drugs guided by a patient’s underlying genetic makeup.”

The authors go on to write the following: “Given that a disease’s evolution from baseline risk to clinical signs and symptoms often occurs over many years, it is likely, in the future, periodic molecular and digital profiling will shift health care strategies from acute intervention and disease management to a focus on assessing health and proactive management of disease risks and prevention.”

I don’t know about you, but I find the prospect of detecting illnesses and diseases before they show up on our doorstep to be nothing short of mind-blowing. New avenues for treatment like personalized drug therapy, creating “digital twins” to help researchers better administer treatments to a range of people, and pursuing mutation-targeted therapies for cancer will likely soon supplant older, less focused treatments and usher in a new era of medicine. 

Precision Health Versus Precision Public Health

If we can get back to definitions for a minute, I’d like to offer an observation: it seems to me that the main difference between precision medicine and its closely related cousin “precision public health” is one of scale. Precision medicine applies to situations where individuals, or perhaps small patient cohorts, are targeted with healthcare interventions that conform to their individual, often molecular-level circumstances. Precision public health, on the other hand, takes a solution that’s designed to target an individual’s biological makeup and scales it up to address health problems shared by entire patient populations.  

It should be noted that, as is often the case with relatively new medical fields, there doesn’t seem to be a consensus on what exactly the term “precision public health” encompasses. More to the point, multiple sources use the terms “precision population health” and “precision public health” in a way that leads me to believe that they can be used interchangeably, at least for now.

An article in the Journal of Translational Medicine makes this point directly:

“Reframing and broadening precision medicine beyond ‘omics’ (e.g., genomics, proteomics, metabolomics) has been referred to as ‘precision population health’ or ‘precision public health.'”    

Since there doesn’t seem to be a unifying term for this new medical pursuit, for the duration of this blog post I’ll use “precision public health” to denote a wide-scale application of genomics to healthcare. This seems to be the favored term in the research I’ve read, so we’ll go with it.

That being said, however, we should pause to quickly examine a sort of “halfway term” that exists between the two poles of precision medicine and PPH, namely “precision health.”

Precision health can be thought of as an approach to patient care that encompasses more of the factors that impact a person’s health outcomes. According to the CDC, precision health is broader than precision medicine, taking into account not just genetic factors, “but also approaches that occur outside the setting of a doctor’s office or hospital, such as disease prevention and health promotion activities.”

Sound familiar? (Hint: this is what SDOH is all about). The CDC makes the further point that, when precision health is used to elucidate approaches that public health practitioners can take to help improve the outcomes of distinct patient populations, this is called “precision public health.”

Photo courtesy of Free Stock photos by Vecteezy

A Public Health Approach

With that slight detour out of the way, let me add one final flourish before we discuss the nascent field of PPH: the term “population health” should be kept in mind as an adjunct to any broad approach that applies molecular-level testing to large patient cohorts. Allow me to explain.

Population health has been defined as “the health outcomes of a group of individuals, including the distribution of such outcomes within the group” and includes considerations of “health outcomes, patterns of health determinants, and policies and interventions that link these two.” Since groups targeted by public health initiatives can be thought of as discrete patient populations, it seems reasonable that the word “population” would be employed when discussing the utilization of genomics to better allocate healthcare resources.

Put another way, although the use of the term “population health” isn’t widespread in the literature focusing on PPH, I think the concept bears keeping in mind as clinicians begin to employ risk stratification and other tactics based on genetic testing that divide patients into groups.

As if to prove my point, the authors of a 2019 article in Frontiers in Public Health noted the following when it comes to genomic data:

“One of the many potential roles of PPH is to use population level data to better identify how individuals can be aggregated into larger groups. This could be achieved using the increased knowledge derived from precision medicine about the biological pathways involved in disease. Such an approach may be critical to ensuring that evidence-based research methodologies can still inform decision-making in the context of increasingly smaller target groups for therapies and diagnostics.”
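As a concrete illustration of aggregating individuals into larger groups, here is a minimal risk-stratification sketch in Python. It assumes each patient already carries an individual-level risk score (however derived, e.g. informed by genomic and clinical data); the tier names and thresholds are arbitrary choices for the example, not anything drawn from the literature cited above.

```python
# Illustrative risk stratification: bucket patients into tiers by an
# individual-level risk score. Thresholds and tier names are arbitrary.

def stratify(patients, thresholds=(0.2, 0.6)):
    """Group (patient_id, risk_score) pairs into low/medium/high tiers."""
    low, high = thresholds
    tiers = {"low": [], "medium": [], "high": []}
    for pid, score in patients:
        if score < low:
            tiers["low"].append(pid)
        elif score < high:
            tiers["medium"].append(pid)
        else:
            tiers["high"].append(pid)
    return tiers

cohort = [("p1", 0.1), ("p2", 0.45), ("p3", 0.9)]
print(stratify(cohort))
# {'low': ['p1'], 'medium': ['p2'], 'high': ['p3']}
```

Once patients are aggregated this way, interventions (and evidence-gathering) can be targeted at the tier rather than the individual, which is the population-level move the Frontiers authors describe.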

Remember the AMA quote from earlier about how precision medicine accounts not just for the individual variability in genes, but also for a person’s environment and lifestyle? With all due respect to the AMA, in my judgment this is actually where the fields of precision medicine and PPH begin to diverge. The authors of a 2020 paper that appeared in PLOS Medicine neatly encapsulate the field of PPH in a way that I find useful:

“Big data enable the potential for more ‘precision’ in medicine and public health. In theory, more data at the individual level can help redefine the meaning of healthy and the progression from health to disease, helping to uncover preventable disease risk factors and allowing more precision diagnostic and prognostic information. At the population level, big data can help integrate multiple social and environmental risk factors such as air pollution, neighborhood walkability, and access to healthy food.”

It’s this last part that distinguishes PPH from its antecedents: taking genetic-level data and combining it with SDOH information like the air pollution a patient experiences on a daily basis. Doing so helps physicians create a more complete picture not just of a person’s current health, but of the health risks they face in the future.

Put succinctly, the authors of the PLOS Medicine article write, “If precision medicine is about delivering the right intervention to the right individual at the right time, PPH can be simply viewed as delivering the right intervention to the right population at the right time.” 
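A crude way to picture this layering of individual and population-level data is to combine an individual risk score with normalized area-level SDOH measures into a single composite. The factors and weights below are entirely made up, purely to show the shape of the idea; a real model would be statistically fitted, not hand-weighted.

```python
# Illustrative only: the weights and SDOH factor names are invented.

def combined_risk(individual_risk, sdoh):
    """Blend an individual-level risk score with area-level SDOH measures.

    sdoh values are assumed normalized to [0, 1], where higher
    air_pollution and lower walkability/food_access are worse.
    """
    environment = (
        0.5 * sdoh["air_pollution"]
        + 0.3 * (1 - sdoh["walkability"])
        + 0.2 * (1 - sdoh["food_access"])
    )
    return 0.7 * individual_risk + 0.3 * environment

# A moderate individual risk in a high-pollution, low-walkability,
# low-food-access area yields a higher composite score.
print(round(combined_risk(0.5, {"air_pollution": 1.0,
                                "walkability": 0.0,
                                "food_access": 0.0}), 2))  # 0.65
```

The point of the sketch is simply that the same individual risk score produces different composite scores in different neighborhoods, which is precisely the “right intervention to the right population” framing.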

Of course, employing health surveillance in this way addresses health disparities in a more comprehensive way than has thus far been possible. One can imagine innumerable applications for this technology in fighting health inequities, including fine-tuning social vulnerability indices (which I wrote about in a previous blog post).

Although the rapid pace of development is exciting, significant hurdles to full-fledged adoption of PPH exist. Realities like a lack of incentives for health entities to share electronic health information and entrenched regulatory barriers represent just part of the range of challenges ahead. But despite these obstacles, I firmly believe that, as the use of artificial intelligence opens up new avenues for health care delivery, PPH will flourish. For this reason, it is a field worth keeping a close eye on. 

Public Health and Urban Planning

Photo courtesy of Free Stock photos by Vecteezy

The disciplines of public health and urban planning have a long tradition of working in symbiosis. Ancient Romans practiced sound public health when they built army barracks far from swamps to prevent insects from spreading diseases among the troops. During the medieval period in Europe, monasteries were models of cleanliness, built mostly on the outskirts of densely-packed towns or in the countryside. These structures “had fresh running water, ‘lavers’ (wash rooms), flush ‘reredorters’ (latrines) connected to sewers, clean towels and a compulsory bath four times a year.”

While much progress has been made in the years since, the infrastructure of cities and towns still plays a major role in people’s health outcomes. This realization has, in recent years, occasioned a resurgence in collaborations between public health practitioners and urban planning experts. According to Whitney Austin Gray, PhD, a faculty member of the Urban & Regional Planning program and Executive Director of Research and Innovation at Georgetown University, the fields of public health and urban planning in the U.S. began drifting apart in the 1950s, when the public health community “started focusing more on the individual, and in doing so, looked at pharmaceutical interventions” to fix public health problems. 

Only recently, Dr. Gray says, have experts in public health and urban planning rediscovered each other. In so doing, they have begun looking at chronic diseases on a population level which, in her words, “urban planners are the best suited to do.”

This reinvestment in city planning as a way to help solve public health issues is necessary in a country like the U.S., which is so heavily dependent on cars. Indeed, says Leslie Meehan, Deputy Commissioner for Population Health at the Tennessee Department of Health, “when you look outside of the coastal areas, there’s not a lot of public transit and so what we are finding is that housing which is close to where people want to work, where the grocery stores are, where the parks are, where the schools are,” is getting more and more expensive.

Nowhere is this situation more apparent than in the Northern Virginia suburbs of Washington, DC. As in most affluent parts of the U.S., some residents of these suburbs live in comfortable conditions while others live in relative squalor. To highlight this dichotomy and the preconditions that often lead to it, I’d like to focus on the supply of adequate, affordable housing in Fairfax County, Virginia as an important cause of divergent health outcomes on a county level.

A Social Determinants Lens

Before we get into specific strategies for making housing more widely accessible in Fairfax County, Virginia, we should first set a foundation for how to look at these issues through a social determinants of health (SDOH) lens. As defined by the World Health Organization, SDOH are “the non-medical factors that influence health outcomes. They are the conditions in which people are born, grow, work, live, and age, and the wider set of forces and systems shaping the conditions of daily life.”

From a public health perspective, healthcare providers are becoming more active in addressing the social drivers of negative health outcomes, including how to manage the housing needs of their patient populations. The author of an article called “Social Determinants of Health 101 for Health Care: Five plus Five,” asserts that accountable care organizations (ACOs) “are responding to nonmedical needs of patients such as transportation, housing, and food with the assumption that outcomes and cost will improve.” Adding to this, author and social epidemiologist Steven H. Woolf notes that “Good health requires access to high-quality housing and transportation, clean air and drinking water, and stores that sell healthy food.”

Photo courtesy of Free Stock photos by Vecteezy

Relatedly, as of 2023, Medicaid section 1115 waivers have been approved in states like California, Arizona, and Arkansas to help people afford housing. Although Virginia does not appear to have any such active waivers (with the possible exception of the “VA Building Independence Waiver,” which expires in June of 2023), an interesting past example of how the state attempted to address the SDOH of its Medicaid enrollees came in 2016, when Virginia’s Department of Medical Assistance Services (DMAS) submitted a §1115 waiver application for a program called the “Virginia Delivery System Transformation Section 1115 Demonstration.”

This proposed demonstration project sought authority to implement two initiatives: Medicaid Managed Long-term Services and Supports (MLTSS) and the Delivery System Reform Incentive Payment (DSRIP) programs. Alignment of these two programs aimed to “strengthen and integrate Virginia Medicaid’s community delivery structure and accelerate payment reforms toward value-based purchasing.”

The first phase of Virginia’s DSRIP proposal was meant to support the creation of high-performing, integrated partnerships known as Virginia Integration Partners (VIPs). The VIPs would partner with managed care organizations to improve care for high-cost enrollees, providing resources in the community to help “connect the VIP beneficiaries to housing, employment, and nutrition supports.” While it looks like this waiver never came to fruition, it marks one possible avenue of collaboration between, as the waiver application words it, “medical, behavioral health, and long-term services and support providers” that would be “anchored by strong care navigation and community supports.”

An Uneven Distribution of Resources

Access to safe, affordable housing varies widely depending on one’s zip code in Fairfax County, Virginia. To bring the challenge into sharper focus, it is helpful to examine a few relevant housing-related data points. Since one’s income often determines the level of access one has to safe housing, it will help to start there.

According to the U.S. Census Bureau, for the 2017-2021 reporting period the median household income in Fairfax County was $133,974. By contrast, the real median household income in the United States for the same period was $70,784. Relatedly, as of 2020, out of a population of 1,150,305 people, 72,584 Fairfax County residents lived in poverty. This equates to a poverty rate of about 6.3%. Given that the median poverty rate of all counties in the U.S. was 12.8% in 2020, at first glance this would suggest that Fairfax is doing relatively well compared to most counties.
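The underlying arithmetic is easy to verify; a quick sketch using the census figures cited above:

```python
# Poverty rate derived from the 2020 census figures cited above.
residents_in_poverty = 72_584
total_population = 1_150_305

poverty_rate = residents_in_poverty / total_population * 100
print(f"Fairfax County poverty rate: {poverty_rate:.1f}%")  # 6.3%
```

Roughly half the national county median of 12.8%, consistent with the comparison above.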

But when one looks closer, a different picture begins to emerge. Because housing affordability plays a major role in people’s ability to thrive in their day-to-day lives, this variable must also be taken into account when assessing the area’s standard of living. According to one report, “Fairfax County has a median home value of $569,100 and a median rent of $2,033 — about twice as high as the national averages of $244,900 and $1,163, respectively.”

The problem of home affordability has grown to nearly unsustainable levels in recent years. Indeed, according to a needs assessment published by the Fairfax County Department of Management and Budget in 2022 (PDF version here), “the Washington-Arlington-Alexandria area saw inflation in housing-related costs of 5.4% from May 2021 to May 2022, and a rise of 23.5% over ten years.”

This trend has caused a rift among those who can afford housing in the county versus those who can’t, and is most pronounced among renters:

“While the proportion of Fairfax County homeowners who are cost burdened has steadily decreased over time, the proportion of cost burdened renters continues to rise. Cost burden occurs when 30% or more of household income is spent on housing costs. Additionally, when 50% of household income is spent on housing, this household is considered severely housing cost burdened.”

A County in Crisis

When looked at this way, it begins to make sense why some are saying that Fairfax County is in a “severe housing affordability crisis.” A number of factors often contribute to the relative lack of housing opportunities for lower socioeconomic residents, but one key consideration is the problem of racism. Despite past reports by entities like the Fairfax County Office of Human Rights and Equity Programs that detail discriminatory lending practices, my research shows that quantifying housing inequality on a county level remains a somewhat elusive goal.

That being said, we know that on a national level, housing inequality impacts people of color more acutely than it does their white counterparts (as I explored in a previous blog post). As defined by the foundational article “Levels of Racism: A Theoretic Framework and a Gardener’s Tale” by Camara Phyllis Jones, MD, MPH, PhD, racism can be thought of as taking one of three forms: institutionalized, personally mediated, or internalized racism. When looked at through this prism, although it partakes of all three forms in one way or another, the housing crisis in Fairfax County most closely aligns with Dr. Jones’ definition of institutionalized racism:

“In this framework, institutionalized racism is defined as differential access to the goods, services, and opportunities of society by race. Institutionalized racism is normative, sometimes legalized, and often manifests as inherited disadvantage…With regard to material conditions, examples include differential access to quality education, sound housing, gainful employment, appropriate medical facilities, and a clean environment.”

Photo courtesy of Free Stock photos by Vecteezy

To put this challenge of institutionalized racism as it pertains to housing inequality into context (PDF version here), it is instructive to look at our example of Fairfax County, Virginia. In Fairfax, the split between those who can and cannot afford to own a home breaks down along racial lines: “White residents own homes at a rate greater than their share of the population, while residents of all other races and Hispanic residents own homes at rates less than their share of the population.”

The report makes the point that because homeownership is a key element of wealth accumulation, the ability of racial and ethnic minorities in the county to improve their financial situation is limited. An important subset of this demographic is immigrants, who make up a substantial part of the population of Fairfax County – indeed, nearly 350,000 foreign-born persons call the county home, which as of 2021 represented about 30% of the county’s population. For those immigrants who are considered undocumented, a number that is hard to know but which was estimated to be 269,000 statewide in 2016, the prospect of attaining affordable, safe housing presents substantial challenges.

Statistics are one thing, however, and personal accounts are quite another. The accounts of participants in one study, which aimed to understand the impact the Deferred Action for Childhood Arrivals (DACA) program had on the “health access, status, and behaviors” of Asian and Pacific Islander (API) undocumented young adults, bring this issue into focus. One study participant noted her frustration that even though DACA deferred her and her family’s deportation, which was a positive, it ignored other issues that contributed to her loved ones’ worsening health:

“People would always talk about the ‘American Dream.’ What is it? You have people living here in the United States but there’s no American dream for them. But I would kind of like lay awake there crying and wish like, ‘why can’t I have that house? Why can’t I have that white picket fence? Why can’t I have the green grass? Why can’t my family have that benefit of getting healthcare?'”

Solutions to the Home Affordability Issue

So what can be done to help people struggling to afford a home in Fairfax County, Virginia? Thankfully, positive steps are already being taken to alleviate the county’s housing dilemma. In 2019, for instance, the Fairfax County Board of Supervisors endorsed something called the Communitywide Housing Strategic Plan. This plan identified ways to develop affordable housing, and one of its goals was to build a minimum of 5,000 new affordable housing units within the next 15 years. Two years later the Board of Supervisors approved doubling that number to 10,000 net new units.

Further, as part of the 2023 Fairfax Countywide Strategic Report, it was noted that the Fairfax Board of Supervisors recently adopted something called Indicators of Community Success. These guidelines will help the county focus on improving not just access to “affordable and quality housing,” but on other residential issues such as “access to amenities that promote healthy neighborhoods; flexibility and adaptability of land use rules; and a mission to prevent and end homelessness.”

While all of this is definitely a step in the right direction, I would argue that efforts should be increased when it comes to building new housing. A recent exploration by U.S. News & World Report gave Fairfax County a poor housing capacity rating, finding that only 4% of the county’s homes were vacant, compared to the national median of 16.3%. A new regional fair housing plan put together “by a team of representatives from eight localities, including Fairfax County, and a few partner groups” seems to be on the right track.

In part, the plan calls on the county to “change zoning and land use policies to expand access to fair housing,” along with lowering income requirements for residents. The comment period for this plan recently closed, so it will be interesting to keep up with this story to see the direction affordable housing takes in Fairfax County.

Tools for Addressing Inequality

I saw an interesting tweet recently that got me thinking about the paradox that sometimes exists in our most highly-tuned healthcare interventions: sometimes the very tools deployed to help people can end up making things worse.

Let me say that I admire anyone who attempts to alleviate healthcare problems and the upstream negative social determinants of health (SDOH) that contribute to them, so this post is not meant as a criticism. Rather, I want to explore some of the sophisticated tools used by researchers to gain a better understanding of how they work, and also to examine potential gaps that might unintentionally lead to imprecise results that can, in turn, affect how healthcare is delivered in the real world.

Accounting for Social Risk

It’s conventional wisdom nowadays that devoting an equal amount of healthcare resources to every person in the U.S. would not result in favorable health outcomes for all involved. There are many reasons for this, but two primary factors stand out: 1) people at the lower end of the socioeconomic spectrum suffer disproportionately poor health outcomes because they face a broader array of negative SDOH, and thus require a higher investment of financial resources to stay healthy, and 2) socioeconomically disadvantaged people often have less access to quality healthcare.

As important as ready access to quality care is, however, it’s only one of many factors that contribute to a person’s health. According to a recent report issued by the Assistant Secretary for Planning and Evaluation’s Office of Health Policy, “Studies estimate that clinical care impacts only 20 percent of county-level variation in health outcomes, while social determinants of health (SDOH) affect as much as 50 percent. Within SDOH, socioeconomic factors such as poverty, employment, and education have the largest impact on health outcomes.” 

Put another way, although the care people receive in clinical settings is important, it plays a relatively small part in their overall quality of life. To make up for this gap, some experts propose adjusting value-based performance measures to account for social risk factors. But critics point out that this approach often results in forcing vulnerable populations to accept lower standards of care and doesn’t really fix the problem. Another approach, which is currently being taken up by public (and some private) insurers, is to adjust payments to healthcare providers who administer care to underserved groups, a concept called social risk adjustment.

How Social Risk Adjustment Works

Adjusting payments has to be done very carefully, because doing so without considering every nuance can counterintuitively reinforce health inequities. This point was made by Robert Saunders, PhD, during a recent webinar convened by the Patient-Centered Primary Care Collaborative, when he presented evidence that people living in underserved areas (as measured by a social vulnerability index, a concept I’ll cover a little further down) often face obstacles to accessing healthcare.

This reality can lead to a situation where, as Dr. Saunders notes, “healthcare utilization may not be as high as it needs to be,” and people’s current social needs aren’t being met by the healthcare system. In Dr. Saunders’ example, what this means is that Medicare expenditures for such a population aren’t always closely associated with their reported need. Dr. Saunders added the following: 

“If you use the traditional approach to risk adjustment where we plug in variables into a model looking at the past, so looking at the association between that factor and healthcare utilization, you may artificially lower the amount of money that is going to populations that may have high social needs, so you have to be very thoughtful about how you use it.”

There’s also the challenge of paying practitioners during a given year for social needs addressed in a previous year, which can result in a misalignment of resources. When physicians aren’t given the tools they need to mitigate patients’ social needs, it can lead to burnout. In resource-strapped areas this can be especially true, and it can lead to dire consequences. So how can such a circumstance be avoided? 

To a large extent, insurers can improve the situation by disbursing prospective payments, also known as capitation payments, that more accurately anticipate patients’ needs from year to year. This practice is already in use by payers like the Centers for Medicare & Medicaid Services (CMS): over two-thirds of Medicaid beneficiaries are enrolled in risk-based managed care plans. In this program, CMS provides upfront payments to healthcare plans for anticipated costs of contracted services, the latter of which are influenced by each beneficiary’s risk factors. Similarly, the benchmarks Medicare Advantage utilizes for rate-setting are adjusted for medical risk. According to the Better Medicare Alliance, “CMS risk-adjusts the capitated payments to Medicare Advantage plans based on an enrollee’s ‘risk score’ – a measure of the expected costs associated with a person’s care.”
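To make the mechanics concrete, the core of a capitated, risk-adjusted payment is a base rate scaled by a risk score. This is a minimal sketch, not CMS’s actual rate-setting model (which involves many more factors); the base rate and risk scores below are illustrative values, not real figures:

```python
# Simplified sketch of risk-adjusted capitation: a base per-member
# payment scaled by each enrollee's risk score (1.0 = average expected cost).
# The base rate and scores here are illustrative, not actual CMS figures.

def capitated_payment(base_rate: float, risk_score: float) -> float:
    """Monthly payment to a plan for one enrollee."""
    return base_rate * risk_score

base_rate = 850.00  # hypothetical per-member-per-month benchmark

enrollees = {"average": 1.0, "healthy": 0.6, "complex_needs": 2.3}
for label, score in enrollees.items():
    print(f"{label}: ${capitated_payment(base_rate, score):,.2f}/month")
```

The point of the scaling is that plans are paid more up front for enrollees expected to cost more, which is what makes prospective payment viable for high-need populations.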

An example of social risk payment adjustments can be found in the Maryland Primary Care Program (MDPCP) HEART Payment Playbook. The Maryland Department of Health (MDH) collaborated with CMS to create a healthcare transformation program called the MDPCP. The MDPCP HEART payment program offers additional support to MDPCP participants who provide care to Medicare enrollees who are contending with complex social and environmental factors. The program “aims to improve health outcomes and lower costs in this targeted group of high-need individuals” by collecting social needs screening data and using HEART funds to develop and implement interventions.

Tools to Measure Deprivation

As I alluded to above, there is a substantial amount of money in play when it comes to government agencies and insurers allocating extra resources to help those most in need. So how do entities tasked with distributing this money decide where to spend it? Enter the deprivation index, a tool that helps assess a geographic population’s risk based on a range of factors.

Before we launch into an examination of particular social deprivation measurement tools, I’d like to start with a definition to give us proper context. The Centers for Disease Control and Prevention (CDC) describes any “locally sensitive” area deprivation index as “an ideal measure to identify and screen for the health care and social services needs and to advance the integration of social determinants of health with clinical treatment and disease prevention.”

There are a range of deprivation indices in existence, but no consensus on exactly how many. According to one 2022 study there are “sixty deprivation indices in seventeen countries,” while, during a recent Health Affairs podcast episode, guest Dr. Meera Kotagal identified 44 distinct deprivation indices. In my research, the index most often used to measure social needs is the Area Deprivation Index (ADI), a version of which the University of Wisconsin (UW) released in 2018.

In the years since its inception, the UW index has been “refined, adapted, and validated” to the Census Block Group (CBG) neighborhood level by a team at the University of Wisconsin-Madison. Given that Census blocks are “the smallest geographic area for which the Bureau of the Census collects and tabulates decennial census data,” some researchers consider it to be the superior index.
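To illustrate how an index like this is typically constructed, the sketch below standardizes a few ACS-style indicators across geographies and averages them into a single score per block group. The indicators, data, and equal weighting are my own simplifications for illustration, not UW’s actual methodology:

```python
from statistics import mean, pstdev

# Illustrative ACS-style indicators per census block group (not real data).
# Higher values indicate more deprivation on each indicator.
block_groups = {
    "BG-001": {"pct_poverty": 4.0, "pct_no_hs_diploma": 5.0, "pct_unemployed": 3.0},
    "BG-002": {"pct_poverty": 22.0, "pct_no_hs_diploma": 18.0, "pct_unemployed": 9.0},
    "BG-003": {"pct_poverty": 11.0, "pct_no_hs_diploma": 9.0, "pct_unemployed": 5.0},
}

indicators = ["pct_poverty", "pct_no_hs_diploma", "pct_unemployed"]

def deprivation_scores(data: dict) -> dict:
    """Z-score each indicator across geographies, then average into one score."""
    z = {bg: 0.0 for bg in data}
    for ind in indicators:
        values = [data[bg][ind] for bg in data]
        mu, sigma = mean(values), pstdev(values)
        for bg in data:
            z[bg] += (data[bg][ind] - mu) / sigma
    return {bg: total / len(indicators) for bg, total in z.items()}

scores = deprivation_scores(block_groups)
# Rank block groups from least to most deprived.
for bg in sorted(scores, key=scores.get):
    print(bg, round(scores[bg], 2))
```

In practice, scores like these are usually converted to percentile ranks (the UW ADI reports national percentiles from 1 to 100) so that geographies can be compared on a common scale.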

Others, however, have pushed back on the effectiveness of the UW ADI. Rounding back to the tweet I cited earlier in this post and the study to which it links, the authors used the UW ADI to examine social risk among Medicare beneficiaries and found that “community-level social risk explained little variation in health care spending, was negatively correlated with spending conditional on demographics and clinical characteristics, and was poorly correlated with self-reported social risk factors.”

Crucially, the authors stated that “Conditional on demographic and clinical characteristics, ADI remained significantly associated with spending, but the direction of the association reversed, with every 1-point increase in ADI associated with a $11.08 decrease in spending.” Echoing Dr. Saunders here, the authors seem to be saying that just because a given population is in greater need of medical services, that doesn’t mean they will automatically seek medical care — and this factor should inform any deprivation index attuned to a small geographic area. People may stay away from medical practitioners because of perceived racism or any number of other factors or, as mentioned above, they may experience physical or geographical challenges when trying to access medical care.    

In any discussion of deprivation indices, it’s important to note the spectrum they cover: while some focus on wide geographical areas like one designed specifically for the state of Utah, others, like the first iteration of the U.S. Census Bureau’s Multidimensional Deprivation Index (MDI), have examined deprivation on the county level. Interestingly, the MDI was revised in 2021 using the UW ADI to allow the Bureau to investigate deprivation in a more fine-grained way because, as the authors acknowledged, “even mid-sized counties have a significant amount of heterogeneity that is not captured by county-level measures.”

Although differences exist across these and other indices, what many of them have in common is that they calculate socioeconomic disadvantage using data from the American Community Survey (ACS). The U.S. Census Bureau website states that “The American Community Survey (ACS) is an ongoing survey that provides vital information on a yearly basis about our nation and its people.”

The survey, which is distinct from the census in that it is much more up to date, collects dozens of indicators – including some the census excludes, like education, employment, internet access, and transportation. Several prominent ADIs utilize what’s called ACS 5-Year Estimates, which represent data collected over a five-year period. According to the U.S. Census Bureau, the main advantage of using multi-year estimates “is the increased statistical reliability of the data for less populated areas and small population subgroups.” Also of note, the five-year estimates are available down to the block group level for the entire country.

It should also be noted that different indices incorporate varying numbers of indicators into their formulation; for example, the UW ADI “includes factors for the theoretical domains of income, education, employment, and housing quality,” while the Robert Graham Center’s social deprivation index (SDI) uses seven demographic characteristics collected by the ACS. 

Deprivation Indices in Action

So why all the hubbub about deprivation indices? Sure they provide insight into where socially vulnerable people live, but how do they affect day-to-day life? As with many topics having to do with social deprivation, the U.S. Census Bureau provides us with an answer: “Local communities depend on information from the American Community Survey, as well as the decennial census, to decide where schools, highways, hospitals, and other important services are needed. The data collected through the American Community Survey and the 2020 Census help determine how to distribute more than $675 billion of federal spending each year.”

Clearly these tools, as advanced as they are, still have a way to go in accounting for the specific needs of individual patients. As Dr. Saunders of Duke advises, they should be used with a great deal of care. Looking forward, it will be interesting to see how these tools progress in the era of artificial intelligence as it becomes more important than ever to keep patients at the center of the care process.

Improving Population Health Through Housing

For a while now I’ve been meaning to write a post about the connection between housing instability and health. Of late, this topic has drawn a good deal of coverage in high-profile news outlets, and organizations across the healthcare spectrum are taking notice. The idea is buoyed by the notion that addressing negative social determinants of health (SDOH) is a worthy aim not just because it’s the right thing to do, but because there’s a solid business case to be made for adopting such an approach. 

I’ve written multiple posts about the interplay between insurance and population health, but in this post I’d like to discuss how health insurance companies are investing in housing to improve the health outcomes of some beneficiaries. At first it might sound counterintuitive that payers, who are in the business of making money, would shell out significant amounts of cash to either fix up substandard housing or subsidize their beneficiaries’ accommodations outright. But a deeper look shows how this strategy is paying dividends, both in terms of better health outcomes and enhanced bottom lines.

Photo courtesy of Free Stock photos by Vecteezy

Housing Instability

The connection between housing instability and poor health outcomes has been known for some time. Indeed, as a 2017 Health Affairs article put it, “Access to health care has been shown to improve health, and housing instability is correlated with poor access to health care.” The authors of that same study found that those receiving US Department of Housing and Urban Development (HUD) housing assistance had “a lower uninsurance rate and a lower rate of unmet need due to cost” when compared with those who hadn’t received housing assistance.

Put another way, when people are given financial help, they don’t have to worry as much about making tradeoffs between healthcare visits and paying rent each month. Ensuring access to healthcare is a central tenet of population health, and the idea of alleviating the difficult choice between healthcare and housing seems to have finally caught on with insurers across the country.

But before we launch into a discussion about payers and subsidized housing, I’d like to clarify what I mean by the term “housing instability” and its synonym, “housing insecurity.” While it may be tempting to think of housing instability as equivalent to homelessness, it is actually a separate, if related, term. The Office of Disease Prevention and Health Promotion defines it this way:

“Housing instability encompasses a number of challenges, such as having trouble paying rent, overcrowding, moving frequently, or spending the bulk of household income on housing.”

So being housing unstable means that the costs associated with housing are burdensome to a given person or group of people. If left unmitigated, the factors leading to this turmoil can precipitate homelessness. While the problem of housing instability is multifaceted, high housing costs relative to income is a primary culprit. The U.S. Department of Health and Human Services frames the issue of expensive housing this way:

“Households are considered to be cost burdened if they spend more than 30 percent of their income on housing and severely cost burdened if they spend more than 50 percent of their income on housing. Cost-burdened households have little left over each month to spend on other necessities such as food, clothing, utilities, and health care.”
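These thresholds translate directly into a simple classification rule; a minimal sketch using the 30% and 50% cutoffs above (the income and rent figures are illustrative):

```python
def cost_burden_status(monthly_income: float, monthly_housing_cost: float) -> str:
    """Classify a household using the 30%/50% cost-burden thresholds."""
    share = monthly_housing_cost / monthly_income
    if share > 0.50:
        return "severely cost burdened"
    if share > 0.30:
        return "cost burdened"
    return "not cost burdened"

# A hypothetical household earning $5,000/month and paying $2,033 in rent
# (the Fairfax County median cited earlier) spends about 41% of income on housing.
print(cost_burden_status(5000, 2033))  # cost burdened
```

At a $2,033 median rent, a household would need a monthly income above roughly $6,777 (about $81,000 a year) to stay under the 30% threshold, which helps explain how quickly cost burden accumulates in high-rent counties.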

Substandard Housing

Not to belabor the topic of housing instability, but I want to zoom in a little further on its component parts to give us a deeper appreciation for what insurers are up against in combating it. For starters, substandard housing is one cause of housing instability that can lead to sub-optimal health. I’ve written about this topic in the past, and how some community organizations have banded together to overcome it.

The National Center for Healthy Housing calculates that nearly six million U.S. homes fall into the substandard category. To get a better handle on how this problem is quantified, it’s instructive to consult the American Housing Survey (AHS), which is sponsored by HUD and conducted by the U.S. Census Bureau. The AHS rates housing as “severely inadequate” based on eight criteria:

  • Unit does not have hot and cold running water.
  • Unit does not have a bathtub or shower.
  • Unit does not have a flush toilet.
  • Unit shares plumbing facilities.
  • Unit was cold for 24 hours or more and more than two breakdowns of the heating equipment have occurred that lasted longer than 6 hours.
  • Electricity is not used.
  • Unit has exposed wiring, not every room has working electrical plugs, and the fuses have blown more than twice.

For the eighth criterion, the habitation in question must show evidence of at least five of the following six structural conditions:

  • Unit has had outside water leaks in the past 12 months.
  • Unit has had inside water leaks in the past 12 months.
  • Unit has holes in the floor.
  • Unit has open cracks wider than a dime.
  • Unit has an area of peeling paint larger than 8 by 11 inches.
  • Rats have been seen recently in the unit.
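Taken together, the criteria above amount to a screening rule: any one of the first seven conditions qualifies a unit on its own, while the eighth requires at least five of the six structural problems. A sketch of that rule (the field names are my own shorthand, not actual AHS variable names):

```python
# Screen a unit against the AHS "severely inadequate" criteria listed above.
# Keys are illustrative shorthand, not actual AHS variable names.

def severely_inadequate(unit: dict) -> bool:
    # First seven criteria: any one is disqualifying on its own.
    single_criteria = [
        "no_hot_cold_water", "no_bathtub_or_shower", "no_flush_toilet",
        "shared_plumbing", "heating_failures", "no_electricity", "wiring_hazards",
    ]
    if any(unit.get(k, False) for k in single_criteria):
        return True
    # Eighth criterion: at least five of the six structural problems.
    structural = [
        "outside_leaks", "inside_leaks", "floor_holes",
        "open_cracks", "peeling_paint", "rats_seen",
    ]
    return sum(unit.get(k, False) for k in structural) >= 5

unit = {"outside_leaks": True, "inside_leaks": True, "floor_holes": True,
        "open_cracks": True, "peeling_paint": True}
print(severely_inadequate(unit))  # True: five structural problems
```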

An issue brief by the Robert Wood Johnson Foundation states that “Substandard housing such as water leaks, poor ventilation, dirty carpets and pest infestation can lead to an increase in mold, mites and other allergens associated with poor health.” In addition, note the brief’s authors, “Concentration of substandard housing in less advantaged neighborhoods further compounds racial and ethnic as well as socioeconomic disparities in health,” a phenomenon often referred to as “housing inequity.”

To this latter point, it should come as no surprise that substandard housing affects some demographic groups more than others. A 2016 article from the Brookings Institution titled “Time for justice: Tackling race inequalities in health and housing” notes the following:

“Substandard housing conditions such as pest infestation, lead paint, faulty plumbing, and overcrowding disproportionately affect black families and lead to health problems such as asthma, lead poisoning, heart disease, and neurological disorders. Blacks are 1.7 times more likely than the rest of the population to occupy homes with severe physical problems. Concentrated housing inequity also disproportionately exposes black communities to environmental pollutants and isolates black populations from essential health resources…” 

Overcrowded Housing

In addition to substandard housing, overcrowding is another factor leading to housing instability. Sometimes referred to as a form of “hidden housing instability,” overcrowding affected 3.7 million people in 2019. Robust recent data on the number of children living in overcrowded homes in the U.S. is sparse, but it has been established that “living in a crowded home can negatively affect academic performance, educational outcomes, behavioral health, and physical health.”

Although there doesn’t seem to be a consensus on what constitutes overcrowding, a 2016 study by HUD’s Office of Policy Development and Research that examined various interventions to combat homelessness used an outcome measure called “persons per room” to track crowding. Drawing on “information collected from the adult respondent about the number of rooms in the housing unit (not counting kitchens, hallways, and bathrooms) and the number of people living in the housing unit,” this formulation considered a housing situation crowded when more than one person occupied a single room.
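The persons-per-room measure reduces to a one-line check; a minimal sketch following HUD’s definition above:

```python
def is_crowded(persons: int, rooms: int) -> bool:
    """Crowded if more than one person per room, per HUD's 2016 measure.

    `rooms` excludes kitchens, hallways, and bathrooms.
    """
    return persons / rooms > 1

print(is_crowded(persons=5, rooms=4))  # True: 1.25 persons per room
print(is_crowded(persons=3, rooms=3))  # False: exactly 1 person per room
```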

Because the parameters of overcrowding aren’t well-defined, measuring overcrowding or, as it’s sometimes called, “doubled-up” homelessness, is challenging. In addition to HUD’s definition above, a range of organizations quantify the concept differently, taking different variables into account. One measure that seems to fill in a few gaps found in other approaches was put forward by the authors of a 2021 study called “Quantifying Doubled-Up Homelessness: Presenting a New Measure Using U.S. Census Microdata.” The authors’ rationale for arriving at an overall number is as follows:

“(W)e defined doubled-up homeless persons as poor or near-poor individuals in a poor or near-poor household (at or below 125% of a geographically adjusted poverty threshold) who met the following conditions: a relative that the household head does not customarily take responsibility for (based on age and relationship); or a nonrelative who is not a partner and not formally sharing in household costs (not roomers/roommates). Single adult children and relatives over 65 may be seen as a householder’s responsibility, so such cases are included only if the household is overcrowded — an arrangement that we believe, based on the literature and feedback from experts working in the homelessness response system, provides evidence of economic hardship and involuntary doubling up.”  

Homelessness and the U.S. Population

Now let’s focus on the separate but related concept of homelessness. According to the HUD Exchange, “individuals who lack resources and support networks to obtain permanent housing meet HUD’s definition of homeless.” As with housing instability in the U.S., homelessness has not worsened significantly in recent years, thanks to a raft of aid programs. It bears saying, however, that this trend will likely be influenced by those programs winding down with the impending end of the Public Health Emergency (PHE) and the expiration of the eviction moratorium. Because data from after the end of the PHE won’t be available for some time, it’s worth noting trends in this population over the past few years.

According to a December 2022 report produced by HUD, there was “a .3% increase in the number of people experiencing homelessness from 2020 to 2022.” This number includes a drop in the number of veterans experiencing homelessness between 2020 and 2022 of 11%, and, during the same period, a decrease in the number of families with children experiencing homelessness by 6%.

As mentioned above, while these numbers have been heading in the right direction, some experts worry that with the Supreme Court’s ruling that overturned the eviction moratorium in 2021, housing instability and homelessness may once again be on the rise. Similar to those experiencing housing instability, this burden falls unevenly, with certain underserved populations experiencing homelessness at rates much higher than the rest of the country.

Indeed, the Centers for Disease Control and Prevention (CDC) reports that “People who are Black or African American and those who are American Indian or Alaska Native have higher rates of homelessness.” Similarly, older Americans are projected to see their housing situation deteriorate significantly. In a recent article, the Los Angeles Times cited a 2019 University of Pennsylvania study predicting that “the U.S. population of people 65 and older experiencing homelessness will nearly triple from 40,000 to 106,000 by 2030, resulting in a public health crisis as their age-related medical problems multiply.”

Home ownership, which can serve as a rough proxy for housing stability, does not break evenly along demographic lines. The Pew Research Center notes that “Nationwide, about 58% of households headed by Black or African American adults rent their homes, as do nearly 52% of Hispanic- or Latino-led households…By contrast, roughly a quarter of households led by non-Hispanic White adults (27.9%) are rentals, as are just under 40% of Asian-led households.” In other words, Black and Hispanic households are considerably more likely than White households to rent rather than own, leaving them more exposed to the instability and rising costs that renting can bring.

A Nationwide Problem

Issues like substandard housing, overcrowding, homelessness, and other challenges related to ensuring that everyone in the U.S. has adequate housing are set against a housing crisis that’s persisted more or less since the Great Recession of 2007-2009. While the owner-occupied share of homes is substantial at 64.6%, that still leaves roughly a third of U.S. households renting. Estimates of the housing shortfall in the States range from 1.5 to 5 million homes. For myriad reasons the U.S. homeowner vacancy rate, which tracks the share of homeowner units that are vacant and available for sale, is at near-historic lows, which translates into rising prices.

While there is an argument to be made that, financially speaking, it makes more sense to rent than to buy in a hot housing market like the one much of the country is still experiencing, in the long run, as the Brookings Institution puts it, “Owning one’s home…provides greater stability and predictability of housing expenditures than renting.”

Given the shockwaves that the COVID-19 pandemic sent through most sectors of the economy, many have faced housing instability over the past three years. As bad as things got, however, if not for government interventions like the eviction moratorium instituted by the CDC in September of 2020, the situation could have been worse. The moratorium helped many renters by giving them a reprieve from losing their accommodations, and as we’ll soon see, maintaining stable housing has a direct impact on positive health outcomes.

A Role for Insurers

When taken as a whole, the above issues negatively affect access to quality healthcare. And when people can no longer keep themselves healthy, everyone loses — including insurers. Key to the success of any payer is maintaining balanced risk pools, which have been defined as “groups of individuals across the medical complexity spectrum, which allow both private and public payers to potentially offset the cost of sicker individuals with higher medical expenses with premiums from healthier individuals with lower utilization rates.”

Large and balanced risk pools often lead to more predictable — and thus more stable — premiums. But if the balance between sick and healthy beneficiaries is thrown off, a phenomenon called “adverse selection” can occur, in which sicker individuals disproportionately enroll while healthier ones opt out. When this happens, it becomes hard for payers to manage financial risk and keep premiums at manageable levels without sacrificing profit.

After all, healthier people are not as likely to need expensive treatments, which equates to fewer claims for the insurance company to pay out. By extension, this helps keep costs down, resulting in lower overall premiums. So with the aim of reducing healthcare costs by addressing SDOH related to unstable housing, some health insurance carriers have begun to proactively offer housing support to their lower-income beneficiaries.
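
To see why this matters financially, here is a toy model (with entirely made-up numbers) of how a pool’s break-even premium responds when healthier members leave:

```python
# Toy illustration of why adverse selection pushes premiums up.
# Break-even premium = expected claims per member, ignoring overhead and profit.
# All dollar figures are invented for illustration only.

def break_even_premium(members):
    """members: list of (annual_expected_claims, member_count) tuples."""
    total_claims = sum(cost * n for cost, n in members)
    total_members = sum(n for _, n in members)
    return total_claims / total_members

# A balanced pool: many low-cost members offset a few high-cost ones.
balanced = [(2_000, 900), (30_000, 100)]
print(break_even_premium(balanced))   # 4800.0

# Adverse selection: half the healthy members exit; the sick remain,
# so the same claims are spread over fewer premium payers.
skewed = [(2_000, 450), (30_000, 100)]
print(break_even_premium(skewed))     # ~7090.9
```

The jump from $4,800 to roughly $7,100 per member, with no change in anyone’s health, is the spiral insurers are trying to avoid when they invest in keeping their broader membership healthy.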

An example of this can be seen with insurance giants Humana and UnitedHealth. In recent years, these companies have begun to invest heavily in housing. According to a recent article in Forbes, in reaction to their own Medicaid data showing a link between housing instability and sub-optimal health outcomes, “In 2022, UnitedHealth invested $100 million in building affordable housing in parts of the country where they operate. That’s in addition to the $700 million they’ve already invested in the past decade, creating a total of nearly 20,000 homes — so far — for low-income residents.” Interestingly, the housing projects UnitedHealth is developing include on-site health services for nearby residents and access to public transportation, among other services. 

For its part, Humana has invested $90 million in affordable housing since 2021. Billed as “Humana’s Bold Goal,” the company has dedicated itself to improving the health of the communities it serves by 20% “by addressing the health of the whole person.” For the time being, its approach seems more limited in scope than UnitedHealth’s in that it aims to stabilize housing-insecure beneficiaries for 90 days, providing services like job training and behavioral health support. In addition, Humana has explored promoting permanent supportive housing programs through various funding models in the “pay for success” mold. These models resemble healthcare value-based payment models in that they are outcomes-based approaches that “align payment for support services to priority objectives” and can potentially establish sustainable funding for these programs.

With housing issues mounting, these are just a few ways insurers are looking to both improve health outcomes and control healthcare costs. Medicare and Medicaid have also entered the subsidized housing market, using rebates and Section 1115 waivers to offer stable living conditions to their beneficiaries. I may focus a future blog post on how public payers are changing the housing landscape of the United States. Let me know in the comments if you know of any other similar approaches taken by insurers either in this country or around the world.

PHM and Rural Healthcare — Part 2

In a post last month, I explored the current state of rural healthcare in America. Building on that theme, I now want to look at how the concept of population health management (PHM) can play a key role in improving health outcomes in rural areas. While it might not be the first setting one thinks of when considering a population health approach, when done with an eye toward leveraging existing resources and forging strong bonds between community and clinical partners, PHM can be a dynamic way to deliver quality care.

Why Population Health in Rural Settings?

I always like to start with definitions when possible, and I think that since the term “population health” is so flexible, we need to be specific when applying it to various situations. Since we’ll be discussing rural America in this post, I think this definition by the CDC works well:

“CDC views population health as an interdisciplinary, customizable approach that allows health departments to connect practice to policy for change to happen locally. This approach utilizes non-traditional partnerships among different sectors of the community – public health, industry, academia, health care, local government entities, etc. – to achieve positive health outcomes. Population health ‘brings significant health concerns into focus and addresses ways that resources can be allocated to overcome the problems that drive poor health conditions in the population.’”

The emphasis here is mine. To me, the aspects highlighted in this definition contribute to effective population health management in rural settings: a focus on care change happening locally; healthcare entities actively partnering with community-based organizations, sometimes with the assistance of bridge organizations that navigate patients between clinical and behavioral health centers, the latter of which usually provide mental health and substance abuse disorder treatment services; and allocating scarce healthcare resources to those patient populations that will most benefit from them.

So why should we consider applying a population health approach in rural settings? On the face of it, there are a raft of challenges that might make such an enterprise appear daunting: for starters, rural hospitals are on average far smaller than their urban counterparts, and they also have much less money at their disposal. A report by the Center for Healthcare Quality and Payment Reform provides a few details that point in this direction:

  • Most urban hospitals have over 200 inpatient beds, whereas most rural hospitals have 25 or fewer beds.
  • One-half of urban hospitals have expenses of more than $250 million, whereas only 2% of rural hospitals are that large.
  • One-half of rural hospitals have total expenses of less than $35 million, compared to only 4% of urban hospitals.

In addition to these numbers, the American Hospital Association points out that “59% of the decline in the number of U.S. community hospitals between 2015 and 2019 were rural hospitals,” and an article out of the Leonard Davis Institute of Health Economics at Penn notes that “rural hospitals typically have less than half the median profit margins of urban hospitals.” As a consequence, the same Penn article highlights the fact that in excess of “130 facilities have closed since 2010” leaving “about 2,250 remaining rural hospitals out of about 5,000 facilities nationwide.”

All this leaves aside the stark reality of a physician shortage in the U.S. While this shortfall is hitting the nation as a whole, it is most pronounced in rural areas, a fact that makes coordination of care – a hallmark of population health management – much more challenging. Indeed, according to data provided by the National Rural Health Association, the supply of primary care physicians in rural areas falls far short of that in urban areas: 39.8 versus 53.3 physicians per 100,000 people, respectively. Further, as an article by the Association of American Medical Colleges mentions, “while 20% of the U.S. population lives in rural communities, only 11% of physicians practice in such areas.”

This being the case, I’ll ask again: why should we consider applying a population health approach in less densely inhabited settings? Answers vary, but many believe that, despite the drawbacks mentioned above, rural areas hold some relative advantages over their urban counterparts. In a report titled “Advancing Population Health in Rural Places: Key Lessons and Policy Opportunities” by the Rural Policy Research Institute (RUPRI), the authors highlight eight resources (framed as areas of “capital”) that make less populated areas a favorable training ground for PHM solutions.

While some of these “areas of capital” may strain credulity – pointing to rural areas as more walkable than urban ones, for instance, might be a bit far-fetched – by and large the authors have a point. Most importantly for our purposes, two themes run throughout many of these areas of emphasis that redound to population health’s advantage: 1) healthcare entities’ familiarity with community partners and 2) an emphasis on local solutions. If I were to boil these ideas down still further, the common denominator here seems to be a strong sense of trust.

Building Clinical-Social Bonds

This idea of relatively deeper bonds of trust existing between residents and institutions in rural settings is an interesting one, and one that’s borne out by the numbers. For instance, a 2018 Pew Research survey found that, while inhabitants of urban, suburban, and rural areas report a nearly equal familiarity with at least some of their neighbors (53% vs. 49% vs. 47%, respectively), and despite the fact that adults in all three geographic categories attest to similar levels of attachment to their local community, “about six-in-ten of those in the suburbs (62%) and in rural communities (61%) say they have a neighbor they would trust with a set of keys to their home, compared with about half (48%) in urban areas.”

In a similar vein, a 2021 report by the Survey Center of American Life found that “Americans in more densely inhabited places are…much less willing to leave their doors unlocked,” with just 35 percent of those residing in big cities being willing to leave the doors of their homes unlocked versus 69 percent in rural settings.

While these examples may seem superficial at first glance, I think they reinforce a larger point that once a personal bond is formed, this relationship often rests on a more solid foundation in rural areas than it does in others (at least insofar as people self-report their personal inclinations). To this point, the authors of a study called “Growing Up in Rural America,” which was produced by the Johns Hopkins University Project Muse program, noted that, when it comes to social bonds between rural residents, “the importance of local social relationships and working collectively on common issues and the limited number of neighbors makes developing these relationships easier.”

For instance, the RUPRI authors point out that rural counties often boast infrastructure that facilitates greater communication between healthcare entities and community-based organizations, a key partnership when trying to open up care access to all segments of society. “For example,” the report tells us, “a grocery store could collaborate with the hospital on promoting healthy diet changes. Similarly, a school could be a common point of contact for informing community residents about how the hospital and other community organizations can address needs such as hunger and housing.”

Building on this idea, the same report argues that deep reserves of political capital exist in many rural communities such that “the influence that individuals and organizations hold…can be used to achieve population health goals. Unlike in larger communities,” the report goes on to say, “rural leaders of population health activities are likely to be individuals that community members know personally. This familiarity can facilitate trust and community buy-in throughout the process.”

The bottom line is this: we can expect stronger social and institutional bonds to exist in less populated places, and these relationships can persuade folks otherwise disinclined to seek medical or behavioral care to do so.

Who Pays for Population Health?

With all this talk of building reserves of trust and strategically realigning healthcare resources in rural settings, it’s important to identify how population health approaches are funded. I should start by saying that none of the research I’ve done suggests that implementing this type of care delivery model is easy or straightforward, no matter what environment you’re talking about. And arguably the biggest barrier a site or system will face is cost.

Plainly stated, it costs money to implement electronic health record platforms and closed-loop communications systems. It costs money to allocate staff time to working closely with community health partners. It costs money (not to mention time) to change workflows that more tightly coordinate care between providers.

To my mind, if we proceed with our eyes open to this reality, we’ll be better equipped to identify strategies that work in the real world. With this in mind, let’s focus on a few trends that have begun to emerge that allow population health management to flourish in rural communities where non-clinical social determinants of health (SDOH) play a substantial role in people’s health outcomes.

Private and Public Insurance

More and more these days, insurers are better able to reimburse healthcare providers for addressing SDOH. For example, in a report called “Advancing Population Health in Rural Places: Key Lessons and Policy Opportunities,” which was produced by RUPRI, the authors note the following:

“Medicare Advantage (MA) plans can now pay for services addressing some social determinants of health, such as transportation (including to grocery stores), meal kits, and telehealth; and Medicaid can use managed care organizations, State plan amendments, and waivers to do the same.”

While a range of insurers can support such an effort, rural settings often feature a unique payer mix that could present roadblocks for the faint of heart. In their 2019 Rural Report, the American Hospital Association noted that “Rural hospitals are more likely to serve a population that relies on Medicare and Medicaid. However, these programs reimburse less than the cost of providing care, making rural hospitals especially vulnerable to policy changes in payment of services.” 

The difference in reimbursement levels between public and private insurers is pretty staggering. In one Kaiser Family Foundation issue brief, the authors reviewed the findings of 19 studies that compared Medicare and private health insurance payment rates for both physician services and hospital care. I was bowled over by the results: “Private insurers,” note the authors, “paid nearly double Medicare rates for all hospital services (199% of Medicare rates, on average), ranging from 141% to 259% of Medicare rates across the reviewed studies.”

Of the eight studies considered in the issue brief that compared private insurance to Medicare payment rates for inpatient hospital services, “Private insurance payment rates for inpatient hospital services averaged 189% of Medicare rates across studies.” This gulf in reimbursement levels helps illustrate the challenges facing CMS when it tries to entice rural healthcare practitioners into entering risk-based contracts, some of which are built around population health management: in an environment where healthcare providers are getting lower payments for their services, it’s not immediately clear why most would want to risk being further penalized for not meeting quality benchmarks. 
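
To put those percentages in concrete dollar terms, a quick back-of-the-envelope calculation (the $10,000 Medicare payment is a made-up figure; the ratios are the KFF averages quoted above):

```python
# Quick arithmetic with the KFF averages quoted above.
# The Medicare payment amount is hypothetical, for illustration only.
medicare_payment = 10_000   # hypothetical Medicare payment, in dollars

all_hospital_pct = 199      # private payers: 199% of Medicare, all hospital services
inpatient_pct = 189         # private payers: 189% of Medicare, inpatient services

print(medicare_payment * all_hospital_pct // 100)  # 19900
print(medicare_payment * inpatient_pct // 100)     # 18900
```

In other words, for every $10,000 Medicare pays a hospital, a private insurer pays roughly double, which is why a payer mix weighted toward public programs leaves rural hospitals with so little financial slack.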

Left out of this discussion so far have been private health insurance companies. While accounting for relatively fewer beneficiaries in rural America, it’s noteworthy that these companies often take their cues from CMS. This is nowhere more true than when it comes to employing risk-based models that foreground accountable care. Leaving aside the fact that some private insurers collaborate with CMS to administer Medicare, e.g. through Medicare Advantage plans, private insurers have followed CMS’ lead in structuring value-based reimbursement models.

Of note, in 2021 the Health Care Payment Learning & Action Network (LAN) reported that “Private payers covered 62 percent of the lives represented in the LAN’s data…Additionally, more payments made to providers by private payers (11.1 percent) were tied to two-sided risk models in 2019. The report shows that 53.5 percent of payments were from fee-for-service, too.”    

In addition to the preponderance of government-funded healthcare in rural counties, the composition of patient populations in such places also presents challenges. This patient mix often involves older, less affluent, and sicker patients than other settings. Added to this is the fact that there are simply fewer patients to pay for services, which makes providing quality care, and ensuring access to that care equally across demographic segments, not as straightforward as in other, more well-funded settings.

Indeed, in a document titled “Report of the Council on Medical Service: Addressing Payment and Delivery in Rural Hospitals,” the American Medical Association notes the following:

“Low patient volume represents a persistent challenge to the financial viability of rural hospitals. There is a minimum level of cost needed to maintain the staff and equipment required to provide a particular type of service, whether it be an ED, a laboratory, or a primary care clinic. As a result, the average cost per service will be higher at a hospital that has fewer patients.”

Bridge Organizations

On the topic of insurer-funded population health, in recent years a consensus has been forming around the idea that gaps exist between clinical care and community services, the latter of which is commonly classified as a mix of mental health and substance abuse disorder treatment, otherwise known as behavioral health.

In response to this, CMS has begun making a significant push to facilitate connections between clinical and community-based health organizations. Funding so-called “bridge organizations” to connect these two entities, CMS is signaling its willingness to support this type of care coordination both financially and logistically.

One high-profile example of this effort is a care delivery model called the Accountable Health Communities Model (AHCM). Initiated by CMS in 2017, the AHCM tests “whether systematically identifying and addressing the health-related social needs of Medicare and Medicaid beneficiaries’ (sic) through screening, referral, and community navigation services will impact health care costs and reduce health care utilization.” In other words, by addressing non-clinical SDOH, this CMS Innovation Model seeks to lower overall costs and improve health outcomes for Medicare and Medicaid populations served by participating organizations. 

Of the thirty-two original participants in the program, twenty-eight organizations remained as of 2021. Throughout its existence, the model has sought to address non-clinical, health-related social needs such as “housing instability, food insecurity, utility needs, interpersonal violence, and transportation needs” along two different tracks:

  • Assistance Track – Provide community service navigation services to assist high-risk beneficiaries with accessing services to address health-related social needs
  • Alignment Track – Encourage partner alignment to ensure that community services are available and responsive to the needs of the beneficiaries

The results of the program have been mixed, but offer some cause for optimism. A 2020 report noted that most data collected on program participants up to that point had been for Medicare fee-for-service beneficiaries, whereas the authors speculated that the program would yield the biggest value to Medicaid patients. Since not much data was available on Medicaid patients at the time of the report, however, evaluation of the program’s true reach remains elusive:

“The early impact analysis, which focused on Medicare FFS beneficiaries in the Assistance Track, shows reductions in the number of ED visits, although impacts on other outcomes were not statistically significant. The lack of statistical significance is attributable partially to the relatively few Medicare beneficiaries exposed to the Assistance Track intervention in the first year…Future analyses will incorporate data for Medicaid beneficiaries, who comprise more than 70% of the navigation-eligible sample.”

State-Level Patient Navigation

Although not affiliated with the AHCM, a state-level initiative in the AHCM mold that’s yielding encouraging results is Colorado’s Regional Accountable Entities (RAE) program. Designed to provide bridging services between clinical and community-based health organizations to keep patients out of the hospital system, the RAE program is an outgrowth of a reform effort within the state’s Medicaid program, which is called Health First Colorado.

From its inception in 2018, this restructuring effort promised to have a significant impact on Colorado residents living in rural areas, since three quarters of a million people reside in rural-designated areas of the state. About 300,000 of these Coloradans are enrolled in Medicare, Medicaid, or both. Indeed, states an article on the Colorado Health Institute’s website, “Rural Colorado counties have higher rates of public insurance enrollment than their urban counterparts (38.7 percent compared to 33.5 percent).”

Like the AHCM, the RAE program – which includes seven distinct entities and is part of a larger, two-part restructuring of Colorado’s Medicaid program called the “Accountable Care Collaborative” – is turbocharging integration of primary care and behavioral health services on behalf of its Medicaid population. The program’s core goals align with two major aims of population health management:

“The RAEs’ responsibilities include ensuring Health First Colorado members have access to primary care and behavioral health services, coordinating members’ care and monitoring data to ensure members are receiving quality care.”

The emphasis here is mine. In addition to prioritizing access to care and data monitoring, bonus payments are used to incentivize primary care doctors affiliated with the RAEs to improve the care they provide. This risk-sharing aspect works to hold the private organizations that coordinate the efforts of the state’s RAEs (and administer Medicaid services) accountable for providing quality care. Accountable care is another hallmark of value-based payment models, which in turn are closely associated with population health management.     

Nonprofit Hospitals

Besides payers, tax-exempt hospitals can also be drivers of population health adoption. In order to keep their tax-exempt status under Section 501(c)(3) and Revenue Ruling 69-545, hospital organizations must meet certain requirements that address the health needs of the communities in which they operate.

Among other requirements, the IRS mandates that every three years these tax-exempt organizations conduct a Community Health Needs Assessment, or CHNA. Further, after completing a CHNA, these organizations must develop a plan to remediate identified problems.

I wrote about CHNAs in a prior post so I won’t go into much depth here, but suffice it to say that CHNAs “Take into account input from persons who represent the broad interests of the community served by the hospital facility, including those with special knowledge of or expertise in public health.” In addition to the views of public health authorities, insights from representatives of “medically underserved, low-income, and minority populations” are also taken into account. 

Although CHNAs – along with an associated community benefit mechanism called Community Health Implementation Plans (CHIP) – should be essential tools in any nonprofit hospital’s PHM arsenal, some feel they’re not living up to the hype. As the authors of a 2020 scoping study pointed out, while CHNAs and CHIP hold great promise for incentivizing population health adoption, some articles “continue to suggest that non-profit hospitals should take a larger role in population health improvement and to use community benefit as cornerstone of such work.”

In 2021, roughly 85% of American hospitals were designated “community hospitals” (defined as “All nonfederal, short-term general, and specialty hospitals whose facilities and services are available to the public”). Of these, 2,978 were nonprofit, compared with 1,235 for-profit hospitals and 944 in the “state/local government hospital” category. With the majority of hospitals thus mandated to segment out and prioritize care delivery to the distinct patient populations they serve, nonprofit hospitals should act as an incubator for effective population health management.

There is much more ground to cover when it comes to PHM in rural America, and it’s a subject to which I’ll likely return in the future. I’ll leave it there for now, but will continue to learn all I can about this fascinating topic.

PHM and Rural Healthcare — Part 1

With the recent passage of the bipartisan fiscal year 2023 omnibus appropriations legislation, and considering its positive impact on rural healthcare, I figured this was a good time to focus on rural population health. In this first of a two-part series, I’ll focus on the state of healthcare in rural America, and in the second part I’ll look at why a population health approach in rural settings is being touted by many as a viable solution to improving overall health.

Among many other provisions in the $1.7 trillion appropriations bill, two in particular bolster rural health: an extension of the telehealth flexibilities put in place during the COVID-19 public health emergency (PHE) that ensure continued access to care, and new rules that, while soon allowing states to resume disenrolling Medicaid beneficiaries who no longer qualify during the coverage redetermination process, bolster support for other underserved groups.

I’ll come to why these two aspects of the new law are of particular note when it comes to rural population health a little further on. But first, let’s establish a baseline understanding of what healthcare looks like in more rural areas. 

Rural Health in the U.S. 

For starters, let’s define what we mean by “rural.” The authority for this in the United States is the U.S. Census Bureau, which considers rural areas to be “any population, housing, or territory NOT in an urban area.” This is obviously tied to their definition of an urban area, which the Bureau has broken down into two parts:

  • “Urbanized Areas” have a population of 50,000 or more.
  • “Urban Clusters” have a population of at least 2,500 and less than 50,000.
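
As a rough sketch, the population thresholds above can be expressed as a simple classifier (note that the Bureau’s actual delineation also accounts for housing, density, and territory, not population alone):

```python
# Rough sketch of the Census Bureau's two-part urban definition quoted above.
# The real delineation considers density and territory, not just population.

def classify_area(population: int) -> str:
    if population >= 50_000:
        return "Urbanized Area"
    if population >= 2_500:
        return "Urban Cluster"
    return "Rural"

print(classify_area(120_000))  # Urbanized Area
print(classify_area(8_000))    # Urban Cluster
print(classify_area(900))      # Rural
```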

Now that we’ve established what separates an urban from a rural environment, let’s look at health in these areas by the numbers. Depending on the source, between 15% and 19% of the U.S. population lives in rural counties. Despite these differing population estimates, there is a broad consensus that rural residents tend to be older, sicker, and less affluent than their urban counterparts. As of 2015, the median age in rural settings was 51, whereas it was 45 in more urban places.

Educational attainment also tracks with rural health disparities, with districts populated by those with lower academic achievement often experiencing worse health outcomes than higher-achieving districts. According to the County Health Rankings & Roadmaps, which is put together by the University of Wisconsin Population Health Institute and the Robert Wood Johnson Foundation (and which I discussed in a previous blog post), rural counties are “disproportionately represented among counties with school funding deficits, particularly those with large deficits. On an annual basis, 70% of counties with deficits of more than -$4,500 per student are rural.”

A key marker of healthcare access, the uninsured rate can also help explain the health disparities that exist between urban and rural counties. Between 2010 and 2019, the uninsured rate for rural residents was about 2-3 percentage points higher than that of urban residents. Provisions of the American Rescue Plan (which I wrote about in a previous post) and Medicaid expansion have improved things somewhat, but uninsured rates remain disproportionately higher in states that haven’t yet expanded Medicaid. And perhaps most significantly of all, between 1999 and 2019 the age-adjusted death rate in rural areas worsened from 7% above that in urban areas to 20% higher.

As with other underserved groups, the COVID-19 pandemic shone a spotlight on rural health disparities and, in many cases, made them even worse. Rural health researchers with the National COVID Cohort Collaborative (N3C) examined data collected during much of the pandemic, and found that health outcomes in rural areas were far inferior to those in urban centers. The authors found the following:

“In rural communities that are near urban areas, people with COVID-19 were 18% more likely to be hospitalized, and those who lived far from urban areas were 29% more likely to be hospitalized. Mortality rates showed an even sharper disparity. After adjustments, rural residents — no matter how near they lived to urban areas — were about 36% more likely than urban residents to die within 90 days after COVID-19 hospitalization.”

These numbers are unbelievably high, and they bear some exploration. But before we go any further, let’s define what we mean by “health disparities.” There are a number of definitions out there, but the one I prefer comes from the U.S. Department of Health and Human Services’ (HHS) Healthy People 2030 initiative: 

“(A) particular type of health difference that is closely linked with social, economic, and/or environmental disadvantage. Health disparities adversely affect groups of people who have systematically experienced greater obstacles to health based on their racial or ethnic group; religion; socioeconomic status; gender; age; mental health; cognitive, sensory, or physical disability; sexual orientation or gender identity; geographic location; or other characteristics historically linked to discrimination or exclusion.”

Populations in Rural America

Although some folks may have a preconceived notion that rural America is monocultural and its landscape reflects a bygone era, that is a misconception. It’s true that, as mentioned above, rural areas have higher poverty rates, older populations, and, on average, less diversity than their urban counterparts. But it’s also true that these areas are becoming more racially and ethnically diverse by the year, with people of color making up 24% of rural America in 2020. This represents an increase of 3.5 percentage points between 2010 and 2020.

In other words, a number of subpopulations reside outside the borders of most cities, many of which can be seen to one degree or another in urban settings as well. This segmentation is an important consideration when applying a population health approach to improving health outcomes, which we’ll further explore in a subsequent blog post. For now, let’s be content to say that individual subpopulations like military veterans or people in certain age groups can be targeted for healthcare interventions. 

So with limited resources and, on average, a higher disease burden to contend with, how do rural healthcare providers keep these populations healthy? For some insight, I’ll spotlight two provisions in the new omnibus appropriations bill, extended telehealth flexibilities and an altered Medicaid coverage redetermination process, to see how the federal government is helping rural residents continue to fight the pandemic and improve health outcomes.

Telehealth and Rural Residents

Gaining a better understanding of how these two provisions from the omnibus bill work will provide a backdrop to how creative thinking can be employed to shore up healthcare gaps in rural America. First off, let’s look at telehealth. During the early part of the public health emergency (PHE), when social distancing was key and it was advisable to avoid elective procedures in medical settings, telehealth became an important mechanism for delivering care.

A report by the Assistant Secretary for Planning and Evaluation noted that “The use of telehealth services surged during the COVID-19 pandemic. A 2020 study found that telehealth use during the initial COVID-19 peak (March to April 2020) increased from less than 1 percent of visits to as much as 80 percent in places where the pandemic prevalence was high, and a recent ASPE report found that Medicare telehealth utilization increased 63-fold between 2019 and 2020.”

As the PHE stretched on, telehealth became a significant component of healthcare: telehealth utilization peaked at more than 32% of Medicare claims in April 2020, then came down to between 13% and 17% by July of 2021. This is still orders of magnitude higher than its usage before the pandemic started. According to the Harvard Business Review, “The investments that have already been made in technology and training were only possible due to the waiver of a mountain of federal regulations that have hampered telehealth adoption for decades.”

The bottom line is this: telehealth adoption has opened up care access to underserved groups in general, and to rural populations in particular — especially as it pertains to Medicare and Medicaid beneficiaries. For the Medicare population, the omnibus spending legislation extends by two years certain telehealth flexibilities. Among other provisions, during the PHE (which is still ongoing at the time of this writing), beneficiaries are permitted to remain in their homes for telehealth visits that are reimbursed by Medicare; previously, beneficiaries needed to travel to a healthcare facility in person for these services to be covered.

In addition, telehealth visits have been authorized to take place on smartphones for those enrolled in Medicare, as opposed to being restricted to only taking place using equipment with both audio and video capability. This is key for rural residents, who on average have lower levels of access to broadband internet than do their urban counterparts. Indeed, according to the Federal Communications Commission’s 2020 Broadband Deployment Report, “22.3% of Americans in rural areas and 27.7% of Americans in Tribal lands lack coverage from fixed terrestrial 25/3 Mbps broadband, as compared to only 1.5% of Americans in urban areas.”

For Medicaid enrollees during the PHE, telehealth extensions have been similarly effective, and they’ve been ubiquitous:

“All 50 states and DC expanded coverage and/or access to telehealth services in Medicaid. States have broad authority to cover telehealth in Medicaid and CHIP without federal approval, including flexibilities for allowable populations, services and payment rates, providers, technology, and managed care requirements.”

Although these allowances may be tied to either the federal or state PHEs, states are planning to enshrine some Medicaid telehealth flexibilities into law.

Medicaid Coverage Redetermination

Another change enacted during the PHE that impacted rural residents was a change in Medicaid coverage redetermination requirements. At the beginning of the pandemic in March of 2020, Congress enacted the Families First Coronavirus Response Act (FFCRA), a piece of legislation that I wrote about in a previous blog post. Many of the provisions of this law, including enhanced federal funding for Medicaid paired with maintenance of effort (MOE) protections, were put in place with the understanding that they’d be phased out at the end of the PHE.

More specifically, for anyone enrolled in Medicaid as of March 18, 2020, or for anyone newly enrolled between that date and the end of the national PHE, the FFCRA made it so states could not disenroll any beneficiary. CMS permitted states to “suspend or continue income checks or redeterminations during the emergency,” and conduct regular renewals. But importantly, the MOE prevented states from dropping Medicaid coverage during the PHE. Put another way, it mandated that these enrollees be given continuous eligibility unless they moved out of state or requested voluntary termination from the program.

Since nearly a quarter of individuals under the age of 65 who reside in rural areas are covered by Medicaid, and since 22 percent of them are dually enrolled in Medicaid and Medicare, the FFCRA had an outsized impact on rural residents. That said, the Consolidated Appropriations Act, 2023 makes changes to “the continuous enrollment condition and availability of the temporary increase in the Federal Medical Assistance Percentage (FMAP) under section 6008 of the Families First Coronavirus Response Act” that decouple the end of the continuous enrollment condition from the end of the PHE, ending that condition on March 31, 2023.
This will allow states to terminate Medicaid enrollment of those beneficiaries who no longer meet Medicaid eligibility requirements. While in many ways this move is suboptimal, the coverage redetermination process restart will likely result in savings that will in turn be put toward funding other Medicaid coverage priorities such as “making permanent the postpartum coverage state option and requiring all states to establish 12 months continuous eligibility for children.”

The kind of creativity and flexibility demonstrated by the administration in extending telehealth allowances and amending the Medicaid coverage redetermination process during the PHE should be used when thinking about how to apply a population health mindset to rural healthcare delivery. I’ll examine how this might work in my next blog post.

Top Posts of 2022

To round out the year, I thought I’d choose my favorite blog posts in a “Best of 2022” post. This year brought so many interesting aspects of U.S. healthcare to light, from the unclear future of multiple plans meant to help folks through the pandemic, to the U.S. Government Accountability Office’s research highlighting how maternal outcomes worsened during the pandemic. So here, without further ado, and in no particular order, are my picks for top blog posts of the year:

Benchmarking APMs

How do insurance plans structure arrangements that encourage healthcare providers to deliver care more efficiently year after year, while also not sacrificing quality? That’s where benchmarks come in. In this post I get into the nitty-gritty of how benchmarking works, and how benchmarks function in different payment models.

Maternal Mortality and PHM part 1

The U.S. leads all wealthy nations in terms of maternal mortality rates. This statistic is hard to swallow, especially since two in three such deaths are preventable. In part one of a two-part series, I explore why maternal and infant health in the U.S. is substandard when compared to other peer nations.

Maternal Mortality and PHM part 2

In this second installment of a two-part series, I look at how population-level care strategies might optimize maternal and newborn health. 

What is a Health Insurance Exchange? 

So what is a health insurance exchange anyway? We’ve all heard about them, but admittedly I never looked into them in much depth. In this installment of my “What Is…?” series, I put a magnifying glass up to what these exchanges are, and how to maintain coverage when legislation like ARPA and FFCRA expires.

Mental Health and PHM

Counter to the widely-accepted view that mental and physical wellbeing are inextricably linked, care for mental and physical conditions is often not coordinated between healthcare providers. Indeed, behavioral health services (which include both mental health and substance use treatment services) are often located in geographically separate locations from physicians’ offices. In this post, I look at the challenges of matching people up with appropriate mental healthcare in such a system.

Place-Based Care in the UK

In this deep-dive piece, I delve into an approach to PHM undertaken by National Health Service (NHS) England, namely its so‐called “Place‐Based Systems of Care” program. Somewhat related to the piece linked to above about mental health and PHM, the Brits have decided that co-locating care along geographic lines makes the most sense when rationing limited healthcare resources. 

Helping Children with Asthma

Since I live in the Washington, DC area, this post is particularly close to my heart. In it, I examine an ongoing program called the Healthy Housing Virtual Home Visiting (VHV) Program, which is a collaboration of medical and housing experts. The group works to repair homes in low socioeconomic areas of Washington, DC to cut down on the level of allergens and improve the health of children with asthma.

Involving Specialists in PHM

This post looks at a Harvard Business Review article that makes a compelling argument for health systems to recruit specialists to become entry points into the population health management ecosystem. In some instances, specialists may stand a better chance at enrolling patients into early disease management programs that help address negative social determinants of health.

Benchmarking APMs

With the release of the CY 2023 Medicare Physician Fee Schedule Final Rule on November 1st, I saw a flurry of online activity about how, after much consideration, CMS seemed not to have accepted many of the comments offered during the open comment period. In consequence, they appeared not to have changed much of anything from their Proposed Rule back in July.

One Twitter thread on this topic caught my attention in particular, and made me curious about how alternative payment models (APMs) – and specifically population-based payment (PBP) models that require plans and providers to manage total cost of care – will be financed going forward.

I’ve been studying APMs for a while now, but until I saw this tweet, I never drilled down into the particulars of how CMS decides how to incentivize plans and providers that rely on PBPs to perform at high levels, a process called benchmarking. Here is my take on how the process currently works, and how it might work going forward.

Benchmarking Basics

Before we delve too far into the complexity of scoring healthcare providers and reevaluating performance standards, it might be good to cover some basics. For starters, although the term “benchmarking” emerged in the 1800s during the industrialization of weapons manufacture, the concept can be applied to a range of disciplines where results of one endeavor are compared to those of another. For example, some common continuous improvement tools like client surveys and SWOT analyses make use of benchmarking.

Philip de Vroe, a.k.a. the Finance Storyteller, has a good primer video on benchmarking, which he describes as “Making meaningful comparisons to others, and identifying opportunities to improve.” Deciding which peer groups to compare your performance against and focusing on the drivers of that performance are key elements of benchmarking. This process is one way, de Vroe says, to identify a current leader in a given field, zero in on any gaps between you and the market leader, and take action to eliminate the gap.

Although benchmarking has been around in healthcare for a while (for example, the federal government has been keeping track of total health care spending in the U.S. via the National Health Expenditure Accounts since 1960), the complexity with which insurers now deploy the concept when entering into shared-risk arrangements with providers is relatively new. Benchmarking has evolved into a solution for quantifying and addressing areas like cost, care quality, and other gaps on the individual physician, practice, hospital, or health system level.

Although much of this blog post is devoted to examining benchmarks assigned to large groups of physicians, hospitals, and health systems, on the individual practitioner level, benchmarks can act as guardrails that help clinicians monitor and improve key metrics in the clinical, operational, financial, and equity spaces, among others. Apart from keeping track of the day-to-day operations of clinicians, the term “benchmarking” is also flexible enough to describe efforts such as the Physician Practice Benchmark Survey, an ongoing initiative of the American Medical Association to track “the practice arrangements and payment methodologies of physicians who take care of patients for at least 20 hours per week and don’t work for the federal government.”

A Role for ACOs

Speaking of payment methodologies, benchmarking has become a key way insurers gauge the provision of value-based, cost-effective care with respect to payment arrangements like MIPS, advanced APMs, and a subset of APMs called accountable care organizations (ACOs). I’ve mentioned ACOs a few times in past blog posts (here, here, and here), but I’d like to elaborate on them a bit more and examine how incentive structures and attendant bonuses and penalties are derived from one contract period to the next.

The concept of the ACO in all its complexity deserves its own blog post, but for our purposes here, we can think of ACOs as “groups of doctors, hospitals, and other health care providers, who come together voluntarily to give coordinated high-quality care to their Medicare patients.” Best described as Medicare population-based payment models, or PBPs (I wrote about PBPs in a past blog post), ACOs incentivize providers to efficiently manage total cost of care, i.e. to provide the right care at the right time and avoid duplication of effort – a concept called “coordinated care.” Any cost savings realized without sacrificing quality are shared with the ACO.

It’s important to note that over the past decade or so, CMS has increasingly pushed affiliated physicians to enter into risk-sharing arrangements, with an ultimate goal of 100 percent of traditional Medicare (TM) beneficiaries being covered under an accountable care relationship by 2030. One of the main vehicles for delivering this result, if it is to be achieved, will be ACOs.

From the outset, however, ACOs have experienced mixed results, with “fierce debates over the ability of these organizations to meet their performance goals, as well as any unintended consequences that could adversely impact members of the health supply network.” Overall momentum in the adoption of APMs has slowed in recent years, and savings generated by ACO models haven’t kept pace with expectations. 

Still, ACOs and other APMs likely represent the best chance U.S. healthcare has of decoupling from the fee-for-service (FFS) payment model and becoming more efficient at delivering high-quality care. Physician-led ACOs are leading the way when it comes to delivering cost savings coupled with improved outcomes, often producing better results than hospital-led ACOs. A key factor in their success appears to be longevity of program participation mixed with both a nimbleness to learn how care partners achieve efficiency levels, and a willingness to implement these approaches.

So how do insurance plans structure arrangements that encourage providers to find new efficiencies in care delivery while also not sacrificing quality? That’s where benchmarks come in.

Benchmarking ACOs

As population-based payment models like ACOs have matured, so too have their methods of motivating healthcare providers to provide top-notch services while also being mindful of costs. This concept is best encapsulated in the Institute for Healthcare Improvement’s Triple Aim, where a balance is struck between the patient experience of care, healthcare outcomes, and reducing per capita costs. One useful definition of benchmarks as they relate to ACOs has been provided by the National Association of ACOs (NAACOS):

“ACO performance is measured using a multi-step process that evaluates an ACO’s effectiveness in lowering expenditures for a group of assigned beneficiaries against a financial benchmark reflective of historical costs. Benchmarks are initially established for new ACOs, updated during agreement periods and reset or rebased when ACOs enter subsequent agreement periods.”

As things stand now, one issue that challenges broad uptake of the ACO model is this “rebasing” process. We’ll get more into rebasing a little further on, but for now, suffice it to say that rebasing occurs when benchmarks are adjusted as a result of an organization’s past success or failure at controlling spending. I’m using the term “success” loosely here, because as we’ll soon see, physicians who manage to lower costs while maintaining quality care are often “penalized” by having the bar for success raised ever higher, a phenomenon called “ratcheting”.

Although often a net positive in the short run because it helps lower program spending, in time this upward adjustment of standards can dampen plan or provider incentives to participate in an ACO as it becomes increasingly harder to identify new efficiencies. In consequence, new opportunities to save money dry up, leading to lower levels of shared savings. This vicious spiral often leads to plans or providers being less inclined to participate in ACOs over the long haul.

Despite this suboptimal playing field, CMS has persisted with the rebasing process. Reasons for this are varied, but the authors of a 2021 paper discussing the merits of the Medicare Advantage (MA) program versus PBPs explain it this way:

“Medicare can save money if the benchmark is set below what would otherwise have been spent, if Medicare keeps a large enough share of any savings, or if any efficiencies in care delivery spill over to populations outside the PBP model. Higher benchmarks induce plan or provider participation but increase program expenditures. Lower benchmarks may reduce available benefits in MA or reduce plan participation in MA or provider participation in voluntary ACO models.”

We’ll talk about spillover effects in a little bit, but suffice it to say that relying on the efficiencies you’ve realized in caring for ACO beneficiaries to spill over to non-ACO patients may not be an optimal strategy for building an enduring framework for value-based care delivery.

This brings up an interesting point: hospitals and health systems administer care to patients belonging to a patchwork of different insurance carriers (and often to patients who have no coverage at all). So does that mean they cater to ACO beneficiaries differently than everyone else? And if so, how do they pay their contracted providers within the ACO versus non-ACO practitioners? The paper quoted above provides a useful explanation of how this works: 

“ACOs typically operate on budget-based versions of PBP, where FFS payment is used to pay all claims, but bonuses (or penalties) are paid to ACOs at the end of the year based on accrued FFS spending relative to a benchmark.” 

So if I’m understanding this correctly, providers who work with ACO beneficiaries are paid using a FFS model, and the rewards and deductions are handled separately. It’s an interesting idea, but isn’t one of the core purposes of ACOs to transition medicine away from the FFS model?

The short answer, as far as I can tell, is yes; however, until FFS no longer dominates the reimbursement landscape, hospital and health system executives and their insurer partners have to work within the system to effect change. To do this, many ACOs predicated on budget-based payment systems like bundled payment, capitation, and shared savings arrangements base bonus payments on projected FFS spending. This configuration obviates the need for ACOs to contract separately with non-ACO providers while maintaining the overall value-based incentive structure.
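To make the budget-based mechanics above concrete, here is a minimal sketch of a year-end ACO settlement: claims are paid fee-for-service all year, then accrued spending is reconciled against the benchmark. The function name, the 50% sharing rate, and the quality multiplier are illustrative assumptions, not CMS’s actual formula.

```python
# Hypothetical sketch of a budget-based ACO settlement. Claims are paid
# fee-for-service during the year; the bonus or penalty is computed separately.
# Names and rates are illustrative assumptions, not CMS's actual methodology.

def settle_aco_year(total_ffs_spending: float,
                    benchmark: float,
                    sharing_rate: float = 0.5,
                    quality_score: float = 1.0) -> float:
    """Return the bonus (positive) or penalty (negative) owed to the ACO."""
    savings = benchmark - total_ffs_spending  # positive if spending came in under benchmark
    # Shared savings are scaled by the sharing rate and a quality multiplier;
    # a two-sided model would likewise share losses when savings are negative.
    return savings * sharing_rate * quality_score

# An ACO that accrued $95M in FFS claims against a $100M benchmark,
# at a 50% sharing rate and a perfect quality score:
bonus = settle_aco_year(95_000_000, 100_000_000)
print(bonus)  # 2500000.0
```

The point of the sketch is the separation the quoted paper describes: the FFS payment stream is untouched, and the value-based incentive lives entirely in this end-of-year reconciliation.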

External Empirical Benchmarking

As the above example illustrates, there are any number of approaches one can take in tracking an ACO’s performance over time. But even so, there are three forms of benchmarking that are commonly used: empirical benchmarks, bidding-based benchmarks, and administratively set benchmarks. Since empirical benchmarking currently dominates the APM landscape, let’s focus on it for the remainder of this post.

For PBP in Medicare, empirical benchmarking has proven attractive to many ACOs up to this point. It’s important to note that benchmarking for MA programs is slightly different from benchmarking for ACOs, with empirical benchmarks for MA programs called “external” benchmarks while those for ACOs are known as “circular” benchmarks. Despite these differences, however, there are a number of similarities in the ways MA programs and ACOs chart performance and, as a result, the two are often discussed almost interchangeably.

In both approaches, CMS seeks to save money by sharing in the savings when MA plans or ACOs spend below their benchmarks, or by charging them when they overspend. As mentioned earlier, CMS can also benefit financially from changes in practice patterns brought about by the MA plans or ACOs that spill over to other non-attributed patients. Although there have been some successes in using benchmarking, it has been argued that, particularly in the case of the Medicare Shared Savings Program (MSSP), an ACO program, results have been skewed for a variety of reasons.

Starting with MA benchmarking, an external sector must first be chosen to provide a status quo that risk-based contracts aim to beat. These benchmarks are set using “observed spending.” The “external sector” against which MA programs are measured has typically been the TM population; in other words, the benchmark factors in spending on non-MA-attributed Medicare patients in each county. Specifically, the benchmark is a multiple of the average spending in the TM sector for each county in a given plan’s service area, often with a slight discount built in. This approach differs from the one taken by ACOs because in the MA configuration, a given MA plan’s historical spending patterns are not factored into the benchmark.

In the early days of benchmarking the MA program, the vast majority of Medicare patients didn’t fall under the auspices of an alternative payment model. For this reason, comparing their performance to TM beneficiaries provided ample opportunities to out-perform the benchmark. At that time, benchmarks were set at 95% of spending in the TM system. Despite the passage of time, and even though legislation has been enacted to bolster program participation, the tenet of basing benchmarks on TM has endured for both MA programs and ACOs.
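The county-based “external” benchmark described above can be sketched in a few lines. The 95% discount comes from the text; the enrollment-weighted averaging across a plan’s counties, along with all names and numbers, are my own illustrative assumptions.

```python
# Hedged sketch of an "external" empirical benchmark in the early-MA style:
# a discounted average of traditional Medicare (TM) spending across the
# counties in a plan's service area. The plan's own historical spending
# plays no role, which is what distinguishes it from an ACO benchmark.

def external_benchmark(county_tm_avg_spending: dict,
                       county_enrollment: dict,
                       discount: float = 0.95) -> float:
    """Enrollment-weighted, discounted TM average across a plan's counties."""
    total_enrollees = sum(county_enrollment.values())
    weighted_avg = sum(
        county_tm_avg_spending[county] * county_enrollment[county]
        for county in county_tm_avg_spending
    ) / total_enrollees
    return weighted_avg * discount

# Two hypothetical counties: average TM spending per beneficiary, and
# how many of the plan's enrollees live in each.
counties = {"County A": 11_000.0, "County B": 13_000.0}
enrollment = {"County A": 3_000, "County B": 1_000}
print(external_benchmark(counties, enrollment))
```

Note how the result moves only with TM spending in the service area: if the plan lowers its own costs, its benchmark is unaffected, which is exactly why shrinking TM populations make these benchmarks volatile.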

This may prove to be a problem going forward, however, because when the TM population against which an MA plan is benchmarked shrinks too much, the benchmarks can fluctuate unpredictably, thereby undermining a core purpose of these payment models, which is to stabilize revenue. As a result, plans may find it too hard to endure such vicissitudes and choose to drop out of the program at the end of their contract period.

If, on the other hand, plans persist and remain a part of the payment model, this can become a problem not just for participating physicians but for patients, particularly if patients in TM prove less costly over a given time period. In this instance, lower benchmarks mean lower levels of benefits and higher premiums charged to plan beneficiaries. If costs pass a threshold beyond which patients cannot afford their care, this becomes an access to care issue. So as more MA programs enter the market, it becomes less useful to rely on external empirical benchmarks to chart performance.

Circular Empirical Benchmarking

Now let’s turn to empirical benchmarking with regard to ACOs, a practice that has been termed “circular” benchmarking. As mentioned above, like those of MA programs, ACO benchmarks are based partly on TM spending in the ACO’s service area. But unlike “external” MA benchmarks, circular ones also take into account an ACO’s historical spending, blending it with TM spending in a given market to arrive at a hybrid number. Adjustments in this kind of benchmark are informed by either projected or actual TM spending growth.

The component of the benchmark derived from the ACO’s historical spending patterns comes into focus when it’s time for a new contract period to begin. As the authors of the 2021 paper put it, “When an ACO transitions to a new contract period, the ACO-specific component of the benchmark is rebased such that the spending in the performance period of the first contract period contributes to the baseline of the next contract period. The regional component of the benchmark rises with regional TM spending and receives increasing weight over time, up to 50%.”

This creates a circular pattern in the sense that historical spending feeds directly into the calculation of subsequent years’ new benchmarks. So one major issue with circular benchmarking is the “ratchet” effect I touched on earlier. Another issue arises from the regional component in this scenario. Specifically, if one ACO (or a small group of ACOs) dominates a certain market, they can basically dictate where the benchmark is set, discouraging wider participation.
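The circularity and the ratchet are easier to see in a toy calculation. This sketch blends the ACO’s own historical spending with regional TM spending, caps the regional weight at 50% per the quote above, and rebases the baseline on the ACO’s performance each period. The weight schedule and all dollar figures are assumptions for illustration.

```python
# Illustrative sketch of a "circular" ACO benchmark: a weighted blend of the
# ACO's historical spending and regional TM spending, with the regional weight
# growing over time up to a 50% cap. Not CMS's actual MSSP formula.

def circular_benchmark(aco_historical: float,
                       regional_tm: float,
                       regional_weight: float) -> float:
    regional_weight = min(regional_weight, 0.5)  # regional share capped at 50%
    return aco_historical * (1 - regional_weight) + regional_tm * regional_weight

# Rebasing: the ACO's own performance-period spending feeds the next contract
# period's baseline, so every success "ratchets" the benchmark downward.
baseline = 10_000.0   # hypothetical per-beneficiary historical spending
regional = 10_500.0   # hypothetical regional TM spending per beneficiary
for period, weight in enumerate([0.35, 0.5], start=1):
    benchmark = circular_benchmark(baseline, regional, weight)
    actual = benchmark * 0.97  # suppose the ACO spends 3% below its benchmark
    print(f"period {period}: benchmark={benchmark:.2f}, actual={actual:.2f}")
    baseline = actual          # rebase on the ACO's own performance
```

Running the loop shows the dynamic the text describes: each period of savings lowers the next baseline, so the ACO must keep finding new efficiencies just to stay under an ever-tightening bar.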

Aside from what’s been mentioned above, there are additional advantages to using empirical benchmarks, which include the following:

  • When ACOs stay in a given market instead of dropping out due to incurring too many penalties, the regional component of empirical benchmarks exposes ACOs to competitive pressures that force them to emulate the success of other ACOs in their market, which could improve overall savings (although there’s a fine balance, as mentioned above, and dominance by one ACO or a small cadre of ACOs can have undue influence on a given region).

  • Empirical benchmarks are flexible in the sense that they can be adjusted for forces that affect spending outside an insurer’s or provider’s control such as shifts in the economy, novel technologies coming online, and changes to care standards.

Despite these positive aspects, I have to wonder if empirical benchmarks will be around for the long haul. As I mentioned earlier, CMS has set a goal of 100 percent of TM beneficiaries being covered under an accountable care relationship by 2030. And with around 68% of beneficiaries enrolled in both Medicare Parts A and B currently in MA or attributed to an ACO or direct contracting entity, empirical benchmarks may cease to be useful in the near future as more Medicare patients are siphoned away from TM.

There’s a lot left to be said about benchmarking, including an examination of types of APMs and their varying levels of success with respect to using benchmarks. A particularly interesting example is the aforementioned MSSP, which has been called an “off ramp” from FFS. But I’ll reserve evaluation of various benchmarking configurations for another time, and will continue learning all I can about this fascinating topic.