AI in behavioral health care: When artificial intelligence became real
Not long ago, the concept of artificial intelligence (AI) in health care may have seemed like the stuff of which science fiction books and movies are made. Today, however, AI is fast becoming business as usual in an industry already steeped in data and analytics.
AI amps up the game, and Beacon Health Options has become a willing player, using this cutting-edge technology to improve people’s health at lower cost. The concept is simple: better prediction and identification of the people most at risk, and stronger connections to services before their condition escalates, thereby preventing avoidable emergency or inpatient care.
For some, the devil is in the distrust of AI, sometimes referred to as the “black box” challenge. Beacon’s Christina Mainelli, Executive Vice President and Chief Growth Officer, was featured in a recent CIO Magazine article explaining how Beacon overcame that distrust to embrace AI as part of a better business practice.
Go by the data to improve lives
The numbers are impressive. For example, Beacon found that AI predictions are 220 percent more accurate than traditional analytics at identifying people at high risk of a psychiatric event, according to Ms. Mainelli.
What’s the difference between the two – traditional analytics and AI – that leads to this greater accuracy?
While traditional analytics base predictions primarily on a handful of past performance metrics, such as a history of high inpatient admissions, machine learning-based algorithms can consider thousands of variables for each member. This capacity allows a more tailored prediction of rising risk, recognizing that one member’s risk factors are not the same as another’s. Those variables extend beyond structured data, such as claims, to unstructured data, such as narrative clinical notes.
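To make the contrast concrete, here is a minimal sketch of the two approaches. Everything here is hypothetical for illustration – the feature names, threshold, and weights are invented, and this is not Beacon’s actual model, which considers thousands of variables rather than four.

```python
import math

def rule_based_flag(member):
    # Traditional analytics: a single past-performance threshold
    # (hypothetical rule: two or more inpatient admissions last year)
    return member["inpatient_admissions_last_year"] >= 2

def ml_style_risk(features, weights, bias=-2.0):
    # Sketch of a learned model: a logistic score over many variables.
    # In practice the weights would be fit from historical claims and
    # clinical-note features; these numbers are made up for illustration.
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical member: admissions, ER visits, missed appointments, Rx count
member_features = [1, 2, 1, 4]
weights = [0.8, 0.4, 0.3, 0.1]

risk = ml_style_risk(member_features, weights)  # a probability between 0 and 1
```

The rule-based flag would miss this member (only one admission), while the multi-variable score can still surface them as rising risk – the “wider net” the article describes.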
This wider net draws the highest-risk people into case management. More specifically, predictions based on a wider array of data sources add members to case management who were previously uncaptured and not typically identified through traditional analytics.
Finally, AI supports continuous quality improvement, refining interventions based on what proves most effective. Traditional analytics models are static; they rarely evolve because new data points are introduced only in limited ways. Case managers using those models therefore do not benefit from a continuous quality improvement approach – one that continually refreshes the identification algorithms and supports better decision-making about people’s care.
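The idea of a model that keeps refreshing can be sketched as online learning: each time a new outcome is observed, the model’s weights shift slightly toward it. This is a generic illustration of the technique, not Beacon’s pipeline, and the numbers are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def refresh(weights, features, outcome, lr=0.1):
    # One online-logistic-regression update: nudge each weight in the
    # direction of the prediction error for the newly observed outcome.
    # outcome: 1 if the member went on to have a psychiatric event, else 0
    pred = sigmoid(sum(w * x for w, x in zip(weights, features)))
    return [w + lr * (outcome - pred) * x for w, x in zip(weights, features)]

# Start from a blank model, then learn from one new observed case
weights = [0.0, 0.0, 0.0]
weights = refresh(weights, [1.0, 2.0, 0.0], outcome=1)
```

A static model, by contrast, would keep the original weights forever – which is exactly the limitation the paragraph above describes.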
The result is that Beacon case managers and Beacon providers can prioritize the members who most need case management, based on each person’s predicted level of risk.
It’s a matter of trust
So what’s the issue? According to the article, health care organizations distrust AI because they don’t always understand how it reaches its conclusions; in turn, a wrong decision based on “bad” information can lead to a serious adverse health outcome. Indeed, a survey of 200 health care organizations found that this “black box” challenge was the main reason for not readily adopting AI.
Ms. Mainelli said that Beacon met that distrust head on by taking three steps to test the efficacy of AI:
- Convene a multidisciplinary team – clinicians, IT professionals, providers and more – to determine basic guidelines and the AI algorithms
- Conduct experiments to test whether algorithms run on data from a specific timeframe did, in fact, accurately predict psychiatric events for those people
- Devise enhanced interventions for the identified people and follow up with those recommendations
The circle is complete when the validated clinical interventions become part of Beacon’s business model.
“Artificial intelligence gets to the very core of Beacon’s mission – helping our members live their lives to the fullest potential,” commented Ms. Mainelli for today’s blog. “These fine-tuned predictions of who is at risk for a psychiatric admission or a visit to the ER deliver a whole new level of person-centered care. We can intervene with those people and connect them to appropriate care before they experience an adverse event.”
The same survey that identified the black box challenge as a main obstacle to accepting AI also suggests AI has a healthy future, with 37 percent of respondents stating that they are already using it. Beacon, in its quest to solve the most urgent and complex mental health challenges through clinical innovation and care integration, is already part of that future.