
Commentary

How Payers Are Using AI to Address Big Data Challenges

Keeping members healthy and health care costs in check are never-ending challenges for payers, challenges often exacerbated by incomplete, fragmented member data.

Accurate, holistic member data is essential for payers to properly assess patients’ health status, accurately perform risk adjustment, and proactively identify gaps in care, but such data is rarely easy to harness. Much of the challenge revolves around volume, as the typical hospital generates 50 petabytes of data per year—equivalent to about 11,000 4K movies.

Adding to the difficulty is that much of this data is unstructured or semi-structured, meaning it is trapped in the notes sections of electronic health record (EHR) systems and not readily available for analysis. This unstructured data contains important information pertaining, for example, to patients’ symptoms, disease progression, lifestyle factors, and lab tests.

Payers’ ability to deliver convenient, seamless access to patient data has grown in importance as a result of the Interoperability and Patient Access final rule from the US Centers for Medicare & Medicaid Services (CMS), which requires payers to provide patients access to their own health data. While initial compliance with the rule is likely to focus on structured data, payers will ultimately need to include unstructured data for patients to fully realize the benefits of data access.

Natural Language Processing Technology Automates Data Capture

To gain a more comprehensive picture of patient health from unstructured data, payers routinely resort to expensive chart reviews in which clinical professionals manually comb through patient records in search of nuggets of useful information. More recently, payers have turned to artificial intelligence-powered tools such as natural language processing (NLP) to overcome the limitations of time-consuming, manual searches through mountains of data.

NLP automates text mining by simulating the human ability to understand a natural language, enabling the analysis of unlimited amounts of text-based data without fatigue in a consistent, unbiased manner. Essentially, NLP allows computers to understand the nuanced meaning of clinical language within a given body of text, such as identifying the difference between a patient who is a smoker, a patient who says she quit smoking five years ago, and a patient who says she is trying to quit.
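The smoking-status distinction above can be illustrated with a deliberately simplified sketch. Real clinical NLP systems rely on trained language models with negation and temporality handling; the patterns and function name below are hypothetical, shown only to make the idea concrete.

```python
import re

def smoking_status(note: str) -> str:
    """Classify a note snippet's smoking status.

    A toy rule-based sketch; production clinical NLP uses trained
    models and much richer negation/temporality logic.
    """
    text = note.lower()
    # Order matters: "trying to quit" also contains "quit",
    # so check the more specific pattern first.
    if re.search(r"trying to quit|attempting to quit", text):
        return "attempting-to-quit"
    if re.search(r"quit smoking|former smoker|stopped smoking", text):
        return "former"
    if re.search(r"\bsmoker\b|currently smok", text):
        return "current"
    return "unknown"
```

Even this crude version shows why ordering and context matter: the same word, "smoking," carries three different clinical meanings depending on its surroundings.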

By uncovering these previously hidden insights, payers gain hard evidence to feed predictive models that enable them to improve risk adjustment, reduce costs, and enhance patient care. Because health plans are realizing such value from NLP, for most payers it is more a question of “how,” as opposed to “if,” they should leverage the technology.

The following are three use cases that describe examples of payers using NLP to solve fundamental business challenges.

Improving Risk Adjustment

Risk adjustment is the process by which comorbidities are accounted for to ensure that appropriate funds are made available to care for patients with greater health needs. This information is captured through hierarchical condition categories, or HCC codes, which rely on accurate clinical coding to assign the most appropriate risk category to patients. The difference between an incorrect HCC code and the correct one can cost health plans millions of dollars per year. Therefore, to mitigate the risk of inappropriate risk adjustment, payers employ large teams of clinicians to perform chart reviews every year.
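The mechanics described above can be sketched in a few lines: diagnosis codes map to condition categories, and each category contributes a coefficient to the member's risk score. The code-to-HCC mappings and weights below are illustrative placeholders, not real CMS-HCC values.

```python
# Minimal sketch of HCC-style risk scoring. Mappings and
# coefficients are made up for illustration only.
ICD_TO_HCC = {
    "E11.9": "HCC19",  # type 2 diabetes (hypothetical mapping)
    "I50.9": "HCC85",  # heart failure (hypothetical mapping)
}

HCC_COEFFICIENT = {
    "HCC19": 0.302,  # illustrative weights, not CMS values
    "HCC85": 0.331,
}

def risk_score(diagnosis_codes, base=1.0):
    # Each HCC counts once, no matter how many codes map to it,
    # which is why a missed diagnosis changes the score directly.
    hccs = {ICD_TO_HCC[c] for c in diagnosis_codes if c in ICD_TO_HCC}
    return base + sum(HCC_COEFFICIENT[h] for h in hccs)
```

The sketch makes the business stakes visible: a diagnosis that chart reviewers miss simply never enters the set of HCCs, and the member's score, and the associated funding, drops accordingly.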

Independence Blue Cross uses NLP to augment its chart reviewers, establishing two clear goals for the program: first, to speed up chart review, enabling reviewers to comb through more documents per hour, and second, to increase the capture of diagnoses that otherwise might have been missed by chart reviewers. The payer reports that its clinician chart-review teams have found the NLP tool to be invaluable to their workflows, streamlining chart reviews and freeing teams to be more productive and efficient.

In the initial pilot, the NLP system identified features for HCC codes with over 90% accuracy while processing per-patient documents ranging from 45 to 100 pages. Now in production, the system enables Independence Blue Cross to process millions of documents per hour, which is particularly valuable given how complex and detailed patient health records often are.

Driving Predictive, Preventive Models

Data scientists are increasingly valuable members of a payer’s workforce, as they use member data to drive predictive models aimed at identifying patients at risk of disease progression and more costly health care management. Illustrating the importance of data science, in a late 2020 PwC survey of health care executives, nearly 75% of respondents said their organizations would invest more in predictive modeling in 2021.

One payer used NLP to develop a model predicting patient risk of developing diabetic foot ulcers, which, if untreated, can lead to amputations and, in turn, escalating costs. Clues that may signal patient risk are often found in unstructured notes in EHRs, such as body mass index data, lifestyle factors, comments on medications, and documented foot diseases. The model has identified 155 at-risk patients who could be proactively managed, which potentially translates to between $1.5 million and $3.5 million in annual savings from prevented amputations.
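A model of this kind typically turns NLP-extracted note findings into binary features feeding a classifier. The sketch below uses a logistic function with invented feature names and weights; the payer's actual model and features are not public, so everything here is a hypothetical illustration of the pattern.

```python
import math

# Illustrative only: weights and feature names are made up; a real
# model would be trained on the payer's own member data.
WEIGHTS = {
    "bmi_over_30": 0.8,
    "smoker": 0.6,
    "neuropathy_documented": 1.4,
    "prior_foot_disease": 1.7,
}
BIAS = -3.0

def ulcer_risk(features: dict) -> float:
    """Return a probability in (0, 1) that a member is at risk,
    given binary flags extracted from clinical notes by NLP."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if features.get(name))
    return 1.0 / (1.0 + math.exp(-z))
```

The point of pairing NLP with such a model is that flags like documented neuropathy often exist only in free-text notes; without text mining, the model would never see its strongest predictors.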

Identifying Social Determinants of Health

The COVID-19 pandemic has once again highlighted the essential role social determinants of health (SDoH) play in patients’ overall health, with data from the US Centers for Disease Control and Prevention (CDC) showing that low-income communities and communities of color are at higher risk of serious illness if infected with coronavirus.

Payers have long appreciated the value of SDoH but have had difficulty obtaining accurate information pertaining to specific members, as much SDoH data—for example, information on a member’s housing, transportation, and employment—remains trapped in unstructured sources such as admissions, discharge, and progress notes. While payers now routinely use community or population SDoH data to predict risk, being able to isolate each individual member’s SDoH data holds much more power.

An academic medical center has used NLP to search the case notes of patients with prostate cancer to identify those with or at risk of social isolation. Prostate cancer by its nature affects a population at elevated risk of social isolation, so NLP facilitates outreach to the patients most likely to miss appointments and suffer unchecked disease progression.
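At its simplest, surfacing social isolation from case notes means flagging phrases that structured fields never capture. The cue list and function below are a hypothetical starting point, not the medical center's actual method, which would involve trained models and negation handling.

```python
# Toy SDoH flagger: scan free-text notes for phrases suggesting
# social isolation. Cue list is illustrative, not clinically validated.
ISOLATION_CUES = (
    "lives alone",
    "no family nearby",
    "recently widowed",
    "no transportation",
)

def flags_social_isolation(note: str) -> bool:
    text = note.lower()
    return any(cue in text for cue in ISOLATION_CUES)
```

A member flagged this way can be routed to outreach programs; the same pattern extends to housing, food security, and employment cues mentioned earlier.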

This type of use case has been seen more frequently among providers, which have historically had greater access to unstructured data, but it is increasingly possible for payers. With the interoperability mandates and guidance from Micky Tripathi, national coordinator at the Office of the National Coordinator for Health Information Technology, payers should be adopting NLP to handle incoming unstructured data within the next year.

To make the right decisions in risk adjustment, care gap closure, and patient outreach, payers need accurate, reliable data. While much of that high-quality patient data does exist, it is too often unstructured and beyond payers’ immediate grasp. With AI-powered technologies such as NLP, payers can finally unlock the full value of their data.

Disclaimer: The views and opinions expressed are those of the author(s) and do not necessarily reflect the official policy or position of the Population Health Learning Network or HMP Global, their employees, and affiliates. Any content provided by our bloggers or authors are of their opinion and are not intended to malign any religion, ethnic group, club, association, organization, company, individual, or anyone or anything.
