Reading People’s Faces

Stanford is exploring whether a video algorithm can predict the likelihood of hospital admission from patients’ visual cues, in a project co-led by site principal investigator Ryan Ribeira, MD.

The goal of the project is to create a tool that can assist telemedicine physicians and emergency department (ED) staff in making informed decisions about patient admissions based on visual cues. The algorithm, currently in the experimental phase, could potentially predict a patient’s likelihood of admission to the hospital.

The research team began gathering data by recording telemedicine encounters with patient consent. When the focus expanded to include ED presentations, the team used a cell phone to capture preset activities mirroring real-life triage scenarios. Patients followed prescribed activities, such as reading fixed statements and performing specific hand and head movements, contributing to a diverse dataset.

The team then built a database of more than 500 filtered data points drawn from both telemedicine and ED cases. An extensive video cleanup process was needed to address potential biases from variables such as how many wires were connected to the patient or whether they wore street clothes or a hospital gown.
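One way to picture this cleanup step is as metadata-based filtering of the recorded encounters. The sketch below is purely illustrative: the field names (`wires_visible`, `clothing`) and thresholds are invented assumptions, not the study's actual criteria.

```python
# Hypothetical sketch: drop recorded encounters whose confounding visual
# cues (visible wires, ambiguous attire) could bias the algorithm.
# Field names and rules are illustrative assumptions, not the study's.

def filter_encounters(encounters):
    """Keep encounters whose confounding cues fall within acceptable bounds."""
    kept = []
    for e in encounters:
        if e.get("wires_visible", 0) > 2:       # too many leads in frame
            continue
        if e.get("clothing") == "unknown":      # ambiguous attire label
            continue
        kept.append(e)
    return kept

sample = [
    {"id": 1, "wires_visible": 0, "clothing": "street"},
    {"id": 2, "wires_visible": 4, "clothing": "gown"},     # excluded: wires
    {"id": 3, "wires_visible": 1, "clothing": "unknown"},  # excluded: attire
]
print([e["id"] for e in filter_encounters(sample)])  # [1]
```

Filtering on recorded metadata like this keeps nuisance variables from masquerading as clinical signal when the model trains.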

After mapping areas for algorithm consideration, the team built a deep learning algorithm to “read” the videos, allowing the algorithm to identify patterns and draw its own associations and conclusions without predefined criteria.

The preliminary results are promising. In a head-to-head comparison, members of the study team reviewed the same videos the algorithm used and made their own calls on whether each patient should be admitted. The algorithm performed on par with, if not better than, its human counterparts. This early success suggests the algorithm may be effective at predicting patient outcomes from visual cues.
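A head-to-head comparison of this kind can be framed as scoring both raters with the same metric against actual admission outcomes. The sketch below uses simple accuracy; all the predictions and outcomes are fabricated for illustration and bear no relation to the study's results.

```python
# Hypothetical evaluation: score algorithm and clinician predictions
# against actual admission outcomes. All values here are invented.

def accuracy(predictions, outcomes):
    """Fraction of cases where a rater's admit/no-admit call matched reality."""
    return sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)

actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = admitted, 0 = discharged
algorithm = [1, 0, 1, 0, 0, 0, 1, 0]
clinician = [1, 0, 0, 0, 0, 1, 1, 0]

print(accuracy(algorithm, actual))   # 0.875
print(accuracy(clinician, actual))   # 0.625
```

In practice such a comparison would use more cases and likely sensitivity/specificity alongside accuracy, since missing an admission is costlier than over-triaging.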

Updated Spring 2024