Quality, Equity, and AI in Emergency Cardiac Care
Maame Yaa (Maya) Yiadom, MD, and a team of emergency medicine physician-researchers tested an AI model against human practice in identifying patients with acute coronary syndrome (ACS). Their findings expose disparities in age-based ACS screening and highlight the delicate interplay between human expertise and AI algorithms in the pursuit of precision emergency care.
Time is of the essence in identifying patients who present to the emergency department (ED) with acute coronary syndrome (ACS), commonly known as a heart attack. Within the first 10 minutes of arrival, an early electrocardiogram (ECG) is critical to uncovering an ST-elevation myocardial infarction, or STEMI.
Traditionally, non-clinical registration staff use internationally accepted criteria such as age and chief complaint, along with human judgment, to screen for patients with possible ACS who need an early ECG. However, statistics show that Black, Native American, Pacific Islander, and Alaska Native individuals experience heart attacks at much younger ages than their White or Asian counterparts. Any screening based on age can therefore introduce bias and inequity at this critical decision point and become a barrier to accessing care.
In a five-year retrospective study of nearly 280,000 ED visits, funded by the Stanford Institute for Human-Centered Artificial Intelligence, Yiadom, an emergency medicine associate professor, compared the efficacy of different screening methods.
Yiadom and team compared:
- The traditional staff-administered clinical screening protocol.
- A predictive diagnostic AI screening model.
- Human observation augmented by the predictive AI model.
Yiadom found that screening by protocol systematically underdiagnosed young, Black, Native American, Alaska Native, Native Hawaiian/Pacific Islander, and Hispanic patients. The predictive AI model occasionally missed ACS and STEMI cases but identified an additional 11.1% of patients compared with conventional practice.
The best-performing option was human observation bolstered by the AI model, which showed the highest sensitivity in detecting ACS while remaining consistent and equitable.
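To make the comparison concrete, here is a minimal Python sketch of how sensitivity can be scored for each screening strategy against confirmed outcomes; the column names and toy values are hypothetical illustrations, not the study’s data or code.

```python
import pandas as pd

def sensitivity(flagged: pd.Series, has_acs: pd.Series) -> float:
    """Fraction of confirmed ACS cases that a screening strategy flagged for an early ECG."""
    true_positives = (flagged & has_acs).sum()
    return true_positives / has_acs.sum()

# Hypothetical toy data: one confirmed-outcome label and one flag per screening arm.
visits = pd.DataFrame({
    "has_acs":       [True,  True,  True,  False, False, True],
    "protocol_flag": [True,  False, False, False, True,  True],
    "ai_flag":       [True,  True,  False, False, False, True],
    "human_ai_flag": [True,  True,  True,  False, False, True],
})

for arm in ("protocol_flag", "ai_flag", "human_ai_flag"):
    print(arm, round(sensitivity(visits[arm], visits["has_acs"]), 2))

# Equity can be probed the same way by adding demographic columns and
# comparing per-group sensitivities, e.g. with visits.groupby("age_band").
```

On this toy data the ordering mirrors the study’s qualitative finding (protocol, then AI alone, then human plus AI), but the numbers are illustrative only.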
AI plus humans edged out the AI model alone, which Yiadom attributes to unanticipated biases in the algorithm-based model: admissions staff were picking up on nuances and correcting for the model’s overly simple paradigm. “When it comes to AI-based models, simplicity can be detrimental in trying to achieve precision medicine,” said Yiadom.
The initial research study was performed in silico, within a computer, but with a keen focus on clinical needs and limitations. “Too often AI models are built without real-world testing in a clinical setting by technical teams who don’t understand how it will impact patient care,” notes Yiadom. “My team’s advantage is our intimate knowledge of the clinical care environment.”
Next Steps
Yiadom recently received a $3.8 million R01 grant from the National Institutes of Health’s National Heart, Lung, and Blood Institute for the next phase of the project.
Real-world testing of the current model will be done in partnership with Vanderbilt University Medical Center, where the model will be exposed to live patient data, running silently in the background as patients are admitted and storing its results for further analysis. In addition, a higher-performing and more equitable model will be developed at Stanford using data from three diverse patient populations: Vanderbilt (Tennessee), Beaumont (Michigan), and Cooper Hospital (New Jersey).
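As a rough sketch of what “running silently in the background” can look like, the Python snippet below scores each arriving patient and stores the prediction for later analysis without returning anything to the intake workflow; the model callable, feature names, and log file are hypothetical placeholders, not details of the Vanderbilt deployment.

```python
import csv
from datetime import datetime, timezone

def silent_screen(predict_acs_risk, visit_features, log_path="silent_mode_log.csv"):
    """Score an ED arrival in the background and store the result for later
    analysis, without surfacing anything to registration staff or clinicians."""
    risk = predict_acs_risk(visit_features)  # hypothetical model callable
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # time the visit was scored
            visit_features.get("visit_id"),          # de-identified visit key
            round(risk, 4),                          # stored model output
        ])
    return None  # silent mode: nothing changes in the live intake workflow

# Example call with made-up inputs:
# silent_screen(model.predict_risk, {"visit_id": "V001", "age": 48, "chief_complaint": "chest pain"})
```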
In the subsequent phase of the project, the refined model will be activated to work in real time during ED arrival intake. Yiadom shares, “At this stage our primary interest is ensuring the effectiveness, safety, and reliability of the AI model, much like we do for drug and medical device development with rigorous scientific testing; but also equity. This is a new frontier in medicine.”
Updated Spring 2024