Predicting Outcomes


Could deep learning help to predict patient outcomes?

By: Cortney Gensemer

Did you know that when your email sorts incoming messages as spam or not spam, it is relying on an artificial intelligence (AI) algorithm to do so? AI researchers across the country are interested in whether similar tools can be used to predict which patients are at high risk of a poor health outcome based on data in their electronic health record (EHR).

MUSC AI researcher Jihad Obeid, M.D., SmartState Endowed Chair in Biomedical Informatics Associated with Clinical Effectiveness and Patient Safety, is particularly interested in using algorithms to decipher important clues in EHR clinical notes about a patient’s condition or potential outcome. Instead of trying to pre-identify keywords or terms to help the algorithm search the notes, he is using deep learning, a type of AI in which the computer learns from scratch by analyzing patient data in “training sets” and then applies what it has learned to new patients.


Deep learning is a class of AI that uses many layers of artificial neural networks, a technology inspired by how the human brain processes information. Although artificial neural networks have been around for many decades, these techniques have become much more powerful in recent years due to advances in computing technology.


“Deep learning models are built on research in cognitive neuroscience that enabled biologists to understand how the brain works,” explained Obeid. “Research has shown that some neurons in the visual cortex recognize just the edges of an image, then at a deeper level they can start recognizing eyes and ears, and then with humans we can start recognizing faces. So you can train artificial neural networks to do that as well.”
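The layered processing Obeid describes can be illustrated with a toy sketch. The network below is a minimal two-layer example in pure Python with hand-picked (not learned) weights, invented purely to show how each layer transforms the previous layer’s output; real deep learning models have millions of learned parameters across many layers.

```python
def relu(x):
    """A common neuron activation: pass positive signals, zero out the rest."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of the
    previous layer's outputs, adds a bias, and applies the activation."""
    return [relu(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# A tiny network: 3 inputs -> 2 hidden neurons -> 1 output neuron.
# Weights are illustrative placeholders, not trained values.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.0, 1.0]]
out_b = [0.0]

x = [1.0, 2.0, 3.0]
h = layer(x, hidden_w, hidden_b)  # first layer: detects simple features
y = layer(h, out_w, out_b)        # deeper layer: combines them
```

Training would adjust the weights so that, as in the visual cortex analogy, early layers pick up simple patterns and deeper layers combine them into more abstract ones.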

Pattern recognition and prediction

These neural networks are not limited to analyzing images; they can also look for patterns in other types of data, including text. Obeid is using this technology to identify medical conditions in clinical text. In one project, he is using deep learning to identify altered mental status (AMS) in patients with pulmonary embolism, and in another he is looking at new ways to predict which patients are at risk for future suicide attempts.


Identification of AMS is critical in patients with pulmonary embolism, because it is a strong indicator of a poor outcome. To help physicians identify AMS earlier, Obeid is using emergency department clinical notes at MUSC to train and test an algorithm to look for patterns in the notes that suggest AMS. This algorithm will enable physicians to know which patients are at higher risk and to make more timely and appropriate treatment decisions. Obeid has trained the algorithm on MUSC emergency data but will study whether it is also effective when applied at another institution, a field of research known as “transfer learning.” The project is currently funded by a pilot grant from the Delaware Clinical and Translational Research ACCEL program, and Obeid hopes to test the algorithm at Christiana Care Health System, a hospital in Delaware affiliated with that program. Transfer learning is a key component of expanding the project, as it will help ensure that the algorithm is not biased toward a particular patient population and can be applied with similar results in various clinical settings.
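Obeid’s models are deep neural networks trained on full clinical notes, which are not shown in this article. As a much simpler stand-in, the sketch below learns per-word scores from a handful of invented, non-clinical toy “notes” to convey the general idea: the model discovers which terms distinguish positive from negative examples, rather than relying on a pre-identified keyword list.

```python
from collections import Counter

def train(notes, labels):
    """Learn a score per word: words appearing more often in positive
    (AMS-labeled) notes than in negative ones get positive weight."""
    pos, neg = Counter(), Counter()
    for note, label in zip(notes, labels):
        (pos if label else neg).update(note.lower().split())
    vocab = set(pos) | set(neg)
    return {w: pos[w] - neg[w] for w in vocab}

def predict(model, note):
    """Classify a new note: a positive total score flags possible AMS."""
    score = sum(model.get(w, 0) for w in note.lower().split())
    return score > 0

# Invented toy examples for illustration only -- not real clinical text.
notes = ["patient confused and disoriented on arrival",
         "patient alert and oriented no distress",
         "lethargic and disoriented unable to answer",
         "alert oriented breathing comfortably"]
labels = [1, 0, 1, 0]

model = train(notes, labels)
print(predict(model, "patient appears disoriented"))  # → True
```

A deep learning model differs from this sketch in that it can capture word order and context, but the workflow (train on labeled notes, then score new ones) is the same; testing the trained model on another institution’s notes is the transfer-learning question the article describes.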


In another project, Obeid is working on a deep learning approach to identify patients with suicidal behavior and predict which patients are at risk of a future suicide attempt. First, the algorithm was trained on data from patient records identified using a known set of ICD-10 (diagnosis) codes related to suicidal behavior. Once it had learned to recognize patterns in the clinical notes, it accurately identified new patient records with noted suicidal behavior (98% accuracy). The same algorithm was then trained to identify future risk of suicidal behavior (up to a month before such behavior occurs). When given fresh data to analyze, it proved to be almost 80% accurate in predicting suicide attempts before they occurred.
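The accuracy figures above come from evaluating a trained model on held-out records it never saw during training. How that evaluation works can be sketched in a few lines; the records, labels, and the trivial rule-based `toy_predict` stand-in below are all hypothetical, invented only to show how an accuracy score is computed.

```python
def accuracy(predict, records, labels):
    """Fraction of held-out records the model labels correctly.
    (In Python, True == 1 and False == 0, so booleans compare to labels.)"""
    correct = sum(predict(r) == y for r, y in zip(records, labels))
    return correct / len(records)

# Hypothetical stand-in for a trained model: flag any record that
# mentions a prior attempt. A real deep learning model is far richer.
def toy_predict(record):
    return "prior attempt" in record.lower()

# Invented held-out records and true labels, for illustration only.
held_out = ["notes mention prior attempt last month",
            "routine follow-up, no concerns",
            "patient denies ideation",
            "history of prior attempt, mood low",
            "stable on current medication"]
truth = [1, 0, 0, 1, 1]

print(accuracy(toy_predict, held_out, truth))  # → 0.8
```

Reporting accuracy on fresh, held-out data rather than on the training set is what makes figures like the 98% and 80% above meaningful.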


“Our results are quite competitive with results from other models reported in the literature,” said Obeid. “Improving the precision of these algorithms could lead to better follow-up of patients who are at risk for future suicide attempts and help mental health professionals provide the necessary care to prevent future self-harm.”

Explainable AI

We may trust AI to filter our email, but will physicians trust AI predictions enough to base their treatment decisions on them? Only if researchers can find a way to make AI’s predictions credible.


“An important point before you have people follow the recommendation from AI is that you have to have explainable AI,” said Obeid. “You have to tell the physician why this model is giving you a high probability for intentional self-harm.”

For example, an AI-based alert would be more convincing if it pointed to areas in the clinical notes that led to that prediction. Such explainable AI will be essential if physicians are to incorporate AI insights into their practice. One day, physicians might be able to download AI apps specific to their specialty from, for example, Epic’s App Orchard and plug them into the EHR to assist them in their care of patients.
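One simple form of the explainability Obeid describes is to report which words in a note pushed the model toward its prediction. The sketch below does this for a linear word-weight model; the weights and example note are invented for illustration and are not from any real clinical model.

```python
def explain(weights, note, top_n=3):
    """Rank the words in a note by how strongly each pushed the model
    toward a positive prediction, so a clinician can see *why*."""
    words = set(note.lower().split())
    contrib = sorted(((weights.get(w, 0.0), w) for w in words), reverse=True)
    return [w for score, w in contrib[:top_n] if score > 0]

# Hypothetical learned word weights (illustrative placeholders).
weights = {"hopeless": 2.1, "overdose": 1.8, "denies": -1.5, "stable": -0.9}

note = "patient reports feeling hopeless after recent overdose"
print(explain(weights, note))  # → ['hopeless', 'overdose']
```

For deep neural networks, whose internal weights are not directly readable this way, researchers use dedicated attribution techniques to produce similar word-level highlights, but the goal is the same: show the physician the evidence behind the alert.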


Using AI in healthcare has many important applications. For example, these techniques may be used to help doctors make more informed decisions when caring for patients, to identify participants for clinical trials, or to help public health officials address emerging epidemics. The research being done at MUSC is at the forefront of data science and AI applications in medicine.