Can Artificial Intelligence and Machine Learning Reduce Health Care Costs?

By Theresa Hush | January 29, 2020

All the experts’ 2020 health care technology predictions have one trend in common: more Artificial Intelligence. AI and its subset, Machine Learning, are tagged as the winning ticket to advances not only in clinical medicine and research, but also in administration and management. The hype promotes so many potential applications for AI that one of its key claims demands an answer: Can AI reduce health care costs?

I’m referring particularly to Artificial Intelligence beyond clinical medicine and new medical technology. AI focused on clinical medicine, from genomics to the latest radiology and cardiology diagnostic capabilities, uses algorithms to create better diagnostics and targeted therapies. AI can now measure ejection fraction more precisely and find tumors that human eyes would miss.

Clinical AI is advancing at a fast pace, partly because it is specialized, but also because its decision-makers and decision-making process sit apart from the quagmire of health care operations and finance. Clinical AI is very likely to help produce better diagnoses and decisions in health care. But those advances won’t come cheap, because new technology is expensive, and so is precision.

It’s AI in the “other world” of health care that many believe can affect the bottom line. That world is more complicated—and undergoing a seismic shift that mandates living under financial risk, being held accountable for a patient population, and staying competitive against other providers. The business engine of health care needs solutions for affordability. Let’s examine whether Artificial Intelligence tools would make a good investment.

In Health Care Operations, the Current AI Focus Is Problem Identification

Current AI applications in health care operations tend to focus on identifying problems that will lead to losses under actual or potential Value-Based payment contracts, claims denials, or reduced revenues. AI in this context is often Machine Learning: finding patterns in huge data sets of patients. Typical examples include the following (see the sketch after this list):

  • Predicting high-risk or high-cost patients
    • for hospitalization, ER use, or other patterns of high utilization, especially associated with patient history, prior usage, lab values, social determinants, and other factors
    • for suicides or mental health crises that might require hospitalization
    • for patients with emerging clinical crises
    • for patients at risk of readmission
  • Raising patient risk scores, such as HCC scores, for revenue purposes or risk-related programs through diagnosis capture technology
  • Reducing workload and eliminating duplication and gaps in patient registration and eligibility data
  • Improving accounts receivable (AR) results through better billing processes
  • Developing patient populations, defined by patient characteristics, gender, and specific conditions, for use in population health management

For most of these objectives, the AI solution is to ask which patients, populations, or processes are a problem to be fixed.
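To make the first use case above concrete, here is a minimal sketch of a readmission-risk model, assuming a scikit-learn workflow. All field names and data are hypothetical stand-ins for EMR and claims extracts; the point is the shape of the approach, not a production model.

```python
# Hedged sketch: predicting 30-day readmission risk from historical
# patient features. Data and field names are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for the predictors named above: prior usage,
# lab values, and a social-determinants flag.
patients = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "prior_admissions": rng.poisson(0.8, n),
    "a1c": rng.normal(6.5, 1.5, n),
    "lives_alone": rng.integers(0, 2, n),
})

# Synthetic outcome, loosely tied to the features so the model has
# something to learn.
logit = (0.03 * patients["age"] + 0.6 * patients["prior_admissions"]
         + 0.3 * patients["a1c"] + 0.5 * patients["lives_alone"] - 6)
patients["readmitted"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = patients.drop(columns="readmitted")
y = patients["readmitted"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # per-patient risk score
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```

Note what this produces: a ranked list of at-risk patients. That is problem identification, not an intervention.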

There are shortcomings to using AI to identify problems and risks associated with patients. For one, the data is often incomplete, because the patient may be seeing multiple provider entities or because the number of patients is small. Second, even the most comprehensive EMRs are subject to variable or insufficient clinical documentation for a patient. Since clinical and transactional patient data feed AI, gaps in either can produce poor results on risk.

Third, AI algorithms can be faulty, especially when they are not really machine-learning-based and instead incorporate human biases and assumptions. In short, some “Artificial Intelligence” may be a masquerade. For example, we don’t know for certain, based on large-scale studies, which assumptions about what creates risk are true. While we can identify “Frequent Flyer” patients with multiple conditions and risk factors, others in the high-cost group simply represent one-time traumas or episodes and are not high risk going forward.
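To see the confound in data terms, here is a hedged sketch, with an invented claims layout, that splits the high-cost tail into recurrent utilizers versus patients whose spend came from a single episode:

```python
# Hypothetical claims lines: patient, episode, and cost (all invented).
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 3, 3, 4],
    "episode_id": [10, 11, 12, 20, 30, 31, 40],
    "cost":       [8000, 12000, 9000, 95000, 7000, 6500, 500],
})

per_patient = claims.groupby("patient_id").agg(
    total_cost=("cost", "sum"),
    episodes=("episode_id", "nunique"),
)

# Everyone above an illustrative spend cutoff looks "high cost" ...
high_cost = per_patient[per_patient["total_cost"] >= 20_000].copy()

# ... but a single costly episode (patient 2) is not the same forward
# risk as repeated utilization (patient 1).
high_cost["recurrent"] = high_cost["episodes"] > 1
print(high_cost)
```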

Furthermore, a “risk score” and true risk are not necessarily equivalent. The former is a method of adjusting payment so that providers do not feel disadvantaged, but no studies validate such risk scores against real patient risk.

We also don’t know to what extent AI has baked in biases related to gender, race, and socio-economic status.

But all those issues are secondary to the big one: Identifying the problem of cost, even if real, is not the same as creating a solution to that problem.

If the Issue Is Cost, What Is the Problem We Want AI to Solve?

In looking at the list of AI applications, it is immediately apparent that we are not using AI to solve the problems we identify. In fact, there are few developed use cases that incorporate testing of population health solutions. Instead, providers are commonly using historical efforts—patient letters, calls, case managers, care coordinators, efforts that didn’t work in health plans—to reach out to patients, and then to assign those patients to some process defined by the provider. Does it work? That’s a question too few are asking.

Organizations may have used AI to help calculate costs and identify areas of cost overruns, but it is entirely possible that those same organizations have underperformed in reducing costs through interventions. One provider group that did study its Frequent Flyer efforts, believed to be very successful, was disappointed to discover through a randomized trial that the effects were illusory.
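In its simplest form, that kind of randomized evaluation compares an outcome rate between the intervention and control arms. Below is a minimal sketch using a two-proportion z-test; the counts are invented for illustration.

```python
# Hedged sketch: did the outreach program reduce 90-day admissions?
# Compare randomized arms with a two-proportion z-test. Counts invented.
import numpy as np
from scipy.stats import norm

admits_treat, n_treat = 118, 1000  # intervention arm
admits_ctrl, n_ctrl = 126, 1000    # usual-care arm

p_treat = admits_treat / n_treat
p_ctrl = admits_ctrl / n_ctrl
p_pool = (admits_treat + admits_ctrl) / (n_treat + n_ctrl)

# Standard error under the pooled null of no program effect.
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
z = (p_treat - p_ctrl) / se
p_value = 2 * norm.sf(abs(z))

print(f"admission rate: {p_treat:.3f} vs {p_ctrl:.3f}, p = {p_value:.2f}")
# A gap this small is well within chance: the "successful" program may
# be doing nothing, which is exactly the trap the trial exposed.
```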

Predictive analytics, the most common AI application in health care operations, has one primary purpose: informing smart solutions. The issue of cost for each patient predicted as high risk revolves around this question: what does the patient do next? The answer to that question involves a medical event and a response. The response depends not only on the particulars of the patient’s conditions, environment, and available options, but also on how engaged that patient has become through the provider’s intervention (most likely in population health). Yet AI designed to improve the engagement of either patients or providers hardly features in most providers’ initial forays.

Artificial Intelligence—drawing on patient feedback and preferences data in addition to claims history, and incorporating physician engagement measures—should be used to refine specific population health techniques and then test them, to ensure that they are effective.
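As a sketch of what drawing on that feedback and engagement data could mean in practice, the snippet below joins hypothetical survey and engagement columns onto claims-derived features. The column names are assumptions, not a real schema.

```python
# Hedged sketch: widen the modeling table beyond claims history with
# patient-reported preferences and engagement measures (all invented).
import pandas as pd

claims_features = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "prior_admissions": [3, 0, 1],
})
feedback = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "prefers_phone_outreach": [1, 0, 1],      # from patient surveys
    "portal_logins_90d": [0, 14, 2],          # patient engagement proxy
    "pcp_engagement_score": [0.4, 0.9, 0.6],  # physician-side measure
})

modeling_table = claims_features.merge(feedback, on="patient_id", how="left")
print(modeling_table)
# With these columns, a model can ask not just who is high risk but who
# is likely to respond to which technique -- a hypothesis that still has
# to be tested, as in the randomized comparison above.
```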

Four Questions to Ask Before Leveraging AI to Manage Financial Risk

AI requires a thoughtful process of determining what questions we want answered about managing costs under financial risk. It would be wasteful to chase less-than-optimal solutions because of faulty assumptions about patients or, for that matter, providers.

1. What kind of Value-Based arrangements make sense for us?

The provider’s participation in financial risk is a strategic decision, and the factors that weighed into that decision should inform the priorities for AI. Right off the bat, organizations with a large primary care provider base will have a different financial risk structure than organizations where specialty care predominates. These factors will determine whether the risk takes the form of global or partial capitation (or a derivative, such as medical episode bundled payments), or of bundled payments tied to specific specialty procedures or conditions.

2. In what part of that Value-Based arrangement are we most likely to be vulnerable to excessive costs, and what will drive those costs? What will be the target of our first initiative?

Providers with global or capitated risk will generally focus on patients with high disease or utilization risk. But the solutions involve remedies to those issues, which could be one or more of these typical approaches:

  • Outreach to ensure referrals for co-occurring conditions that lead to ER use or admissions, such as mental health or substance abuse;
  • Patient engagement for better management of core conditions, through measured efforts to define and meet goals and avoid crises or hospital use, or through shared decision-making.

AI can be used to calculate the ideal staffing and processes that will create better engagement (as measured by patient response).
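One way to frame that calculation is as a small linear program: choose hours per outreach role to maximize expected engaged patients within a budget. The engagement rates, costs, and limits below are invented; in practice they would come from the measured patient responses just described.

```python
# Hedged sketch: staffing allocation as a linear program. All rates,
# costs, and capacity limits are invented for illustration.
from scipy.optimize import linprog

# Hours of: [care coordinator, health coach, call-center outreach]
engaged_per_hour = [0.9, 0.7, 0.3]  # measured patient response per hour
cost_per_hour = [65, 45, 20]        # fully loaded hourly cost
budget = 50_000
max_hours = [600, 800, 2000]        # hiring/capacity limit per role

# linprog minimizes, so negate the engagement objective to maximize it.
result = linprog(
    c=[-r for r in engaged_per_hour],
    A_ub=[cost_per_hour],
    b_ub=[budget],
    bounds=list(zip([0, 0, 0], max_hours)),
)
print("hours per role:", result.x.round(1))
print("expected engaged patients:", round(-result.fun, 1))
```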

Providers with specialty risk may focus on the variation in procedure costs, complications, and outcomes. Some organizations may wish to apply AI to determine which patients have fared better or worse, testing methods that may improve outcomes and costs. Others may evaluate the variation in individual components of episodic costs, such as imaging, type of anesthesia, and post-procedure rehab.
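Even a simple cross-provider comparison can surface where episode costs vary most. A hedged sketch with invented bundle figures:

```python
# Hedged sketch: cost variation by episode component across providers.
# Figures are invented; a real analysis would use bundled-claims data.
import pandas as pd

episodes = pd.DataFrame({
    "provider":   ["A", "A", "B", "B", "C", "C"],
    "imaging":    [900, 1100, 2400, 2600, 1000, 950],
    "anesthesia": [1200, 1250, 1300, 1280, 2100, 2050],
    "rehab":      [3000, 3200, 3100, 2900, 5200, 5400],
})

by_provider = episodes.groupby("provider").mean()
spread = by_provider.max() - by_provider.min()
print(by_provider)
print("cross-provider spread per component:")
print(spread)
# Wide spread on a component (here, rehab and imaging) points to where
# standardization would pay off first.
```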

3. How do we ensure that the AI tools provide an unbiased, valid mechanism for testing and evaluating interventions to reduce costs?

Diversity will be essential to guide both problem identification and solutions through AI, drawing on different levels of providers and staff as well as a sampling of patients. As provider organizations begin to adopt more financial risk arrangements under Value-Based Health Care, AI must also be applied to ensure that providers are not dumping high-risk patients, avoiding the provision of necessary care, or engaging in other spurious practices to improve their financial position under risk.

4. How will we share results with providers, especially results on cost variation? How will we help providers gain additional skills to facilitate their changing roles with patients, as guides in setting health goals and making decisions?

Once both analytics and AI produce data-based results, including which solutions are working and which providers are doing well with them, sharing that data within a constructive context is a critical next step. Provider organizations will need to rebuild trust with physicians who are accustomed to being “scored” in order for the process to be effective. AI can help identify physicians to participate as mentors and collaborators in reviewing data, and can test processes for engaging physicians that weigh their motivation and aspirations against their time constraints.

For Machine-Learning initiatives and conclusions to be accepted by providers, there must also be transparency in the data all the way down to each individual patient. The algorithms should be built to analyze the data, not just to provide aggregate or single provider results.
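For a linear risk model, that patient-level transparency can be as simple as decomposing the score into coefficient-times-feature contributions; tree-based models would need an explainer library such as SHAP. A minimal sketch on synthetic data:

```python
# Hedged sketch: per-patient transparency for a linear model. The data
# is synthetic and the outcome is a placeholder; only the decomposition
# pattern matters here.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "prior_admissions": rng.poisson(1.0, 500),
    "a1c": rng.normal(6.5, 1.5, 500),
    "lives_alone": rng.integers(0, 2, 500),
})
y = rng.random(500) < 0.15  # placeholder outcome for illustration

model = LogisticRegression().fit(X, y)

# For one flagged patient: which features pushed the score up, and by
# how much (in log-odds), shown feature by feature.
patient = X.iloc[[0]]
contributions = pd.Series(model.coef_[0] * patient.to_numpy()[0],
                          index=X.columns)
print(contributions.sort_values(ascending=False))
```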

Artificial Intelligence, like all the health care technology that has preceded it, is no panacea. Most real solutions for reducing health care costs haven’t been tested. AI will be an essential tool for targeting both problems and solutions in the future. If carefully designed in conjunction with a strategic direction, AI can illuminate the path to lower cost in a way that is both compelling and non-threatening for stakeholder physicians, and it can improve engagement and results for patients. Equally important, AI and Machine Learning applications must include transparency to avoid harm to patients and deterioration of physician engagement.

Founded in 2002, Roji Health Intelligence guides health care systems, providers and patients on the path to better health through Solutions that help providers improve their value and succeed in Risk.

Image: Robynne Hu
