Dr. Matthew Strehlow: Artificial intelligence is moving fast, some say too fast, and it's already starting to reshape the way we practice medicine.
To help us make sense of it all, I'm joined by Dr. Christian Rose. He's an emergency physician, an assistant professor, and a clinical informaticist at Stanford. His work focuses on building AI tools that can actually help doctors without disrupting how they work, and he's deeply engaged in the ethical side of using AI in real time care.
Dr. Rose, welcome and thank you for joining us.
Dr. Christian Rose: Thanks for having me. So glad to be here.
Dr. Matthew Strehlow: Let's start big picture. All this innovation comes with some big questions. Where are we headed? What's actually working in the real world, and what should make us stop and think?
Dr. Christian Rose: When I think about artificial intelligence, I mostly think of it just as a way that we can use data and information technologies to mimic parts of what humans do or what we think humans use intelligence for.
There are many different forms artificial intelligence can take. It can be sorting through data and looking for patterns, but it can also be writing an Atlantic article, a New Yorker article, or a scientific research paper.
So artificial intelligence is all the forms of mimicking human intelligence, which is also a little limiting, because we don't even think about the ways in which humans don't work well, and the things that machines could do better than us, or differently than us, on their own anyway.
Dr. Matthew Strehlow: I hear the terms generative AI and predictive AI. Can you help us understand these terms?
Dr. Christian Rose: When we talk about generative AI, what we mean is that it's generating something. People saw DALL-E; I think that was the earliest version I had seen broadly, where friends were saying, look at this generative AI, it'll make a picture when I describe what I want. It will generate something.
Then we got GPT and ChatGPT, and we were like, oh, maybe now it will generate text for me. Generate a summary. It will make something for me.
That is to be distinguished from the predictive part, which is: models take data and make a prediction. The prediction a generative model is making is, “What does Matt want this sentence to reflect, and what is the idea he's trying to get at?”
And it looks through multidimensional vector space and says, I think I've got a target on where I'm trying to get to and gives you essentially the string through that multidimensional space.
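That token-by-token prediction can be sketched with a toy bigram model. The tiny corpus and greedy decoding below are purely illustrative inventions, nothing like a real large language model, but they show the core move of predicting the most likely next word:

```python
from collections import Counter, defaultdict

# Toy illustration of "predicting the next word": a bigram model
# counts which word follows which in a tiny corpus, then generates
# text by always picking the most likely continuation.
corpus = "the patient is stable the patient is sick the patient is stable".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_words):
    """Greedily extend `start` by the most probable next word."""
    out = [start]
    for _ in range(n_words):
        counts = following.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 3))  # "the patient is stable" ("stable" follows "is" 2-to-1)
```

A real model replaces these bigram counts with a neural network operating over high-dimensional vectors, which is the "multidimensional space" Dr. Rose describes.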
The reason I think it's important to think about how we use these terms is I've seen papers that refer to generative AI when they're talking about predictive AI.
They mention predictive AI when they're actually just talking about observations: not making predictions, but sorting different data points.
That gets to the subset of machine learning, which is a subset of artificial intelligence and then deep learning and neural networks, which are a subset of that, and other things.
So a lot of these are new terms in emergency medicine, and newish terms in medicine, though this is actually the third wave of artificial intelligence in the medical sphere. There were waves in the sixties and in the eighties and nineties, and then we have this most recent third wave of AI in medicine.
We've been using tools like this for a long time, and it's been at the forefront of where people thought medicine was going again for about 60 years. We just didn't call it that. Data science and predictive analytics have been around for a long time. All of us have benefited from it in some way. And medicine has been using that and trying to allocate resources like liver transplants, which is regression analyses and predictions.
Dr. Matthew Strehlow: So if it's predicting something, some outcome, you generally think of it as a predictive model. If it's generating new content, you think of it as a generative model.
But there is a lot of overlap and complexity in how these terms are used and which models are doing what exactly. The other thing that I really took home from that is that I've now realized I've been using predictive AI for maybe 20 years, because I'm always misspelling things in Google, and it was always saying, “did you mean,” and you're telling me that's AI. So, I'm really facile with AI.
I've been doing that for...
Dr. Christian Rose: Yeah, you have been. And I would say a lot of our emergency physician colleagues have been too. Part of predictive AI is, you know, regression analysis, like when we used our TI-83 Plus to put in data points and get a curve and a best-fit line. Obviously these technologies have become extremely robust. The amount of data they can process is wild, and the way they present it back to us has gotten much easier to navigate.
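The TI-83 Plus best-fit line Dr. Rose mentions is ordinary least squares, which can be written out in a few lines. The data points here are invented for illustration:

```python
# Least-squares best-fit line, the same calculation a TI-83 Plus
# performs for linear regression: the slope and intercept that
# minimize squared vertical error.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data points, e.g. hours since symptom onset vs. a lab value.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # 1.99 0.09
```

Everything from that calculator exercise up to large clinical risk models builds on this same idea of fitting parameters to minimize error.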
Dr. Matthew Strehlow: So where are we at now? What AI tools are being used in the ED today?
Dr. Christian Rose: I have been doing informatics work since college. My senior thesis was a little bit around space medicine, and that got me interested in the informatics space, which got me to questions like: how do people place alerts or orders, and how do they get alerted to mistakes they might be making?
And I'm going to be honest, very little has changed, I feel, even though I've been paying attention now for, what is that, 17 years? 2008 until 2025. We're still addressing the same sorts of problems we see over and over again in emergency medicine. The tools I've seen people working on are things like Qventus, which, back when I was in residency, was trying to understand how people arrive in the ED so you can predict demand and have resources available. Queuing theory is an artificial intelligence methodology for trying to predict who's going to come in.
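The queuing-theory idea has a classic closed form. As a sketch, assuming patient arrivals are Poisson at rate λ and the department serves at a single combined rate μ (a gross simplification of any real ED), the M/M/1 model predicts average occupancy:

```python
# M/M/1 queue sketch: with Poisson arrivals at rate lam and
# exponential service at rate mu, utilization is rho = lam / mu and
# the long-run average number in the system is rho / (1 - rho).
def mm1_avg_in_system(lam, mu):
    rho = lam / mu
    if rho >= 1:
        raise ValueError("unstable queue: arrivals exceed service capacity")
    return rho / (1 - rho)

# Hypothetical numbers: 4 arrivals/hour against 5 patients/hour capacity.
print(mm1_avg_in_system(4, 5))    # ~4 patients in the department on average
# Occupancy blows up as utilization approaches 1:
print(mm1_avg_in_system(4.5, 5))  # ~9 patients at 90% utilization
```

Real EDs violate every M/M/1 assumption, which is why commercial tools layer machine learning on top, but the blow-up near full utilization is the core insight.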
That gets you to the next logical space, which I've seen many, many companies try to work on: triage, improving ESI scores when someone comes in, and not leaving that up to a human at the front desk who might be biased and might not really know that patient population X presents slightly differently than patient population Y.
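A triage score of this kind is often a classifier under the hood. As a minimal sketch, a logistic model with entirely invented weights (not drawn from ESI or any validated tool) might map a few vitals to an acuity probability:

```python
import math

# Toy sketch of a triage risk score: a logistic model maps a few
# vital-sign features to a probability of high acuity. The weights
# below are invented for illustration, not clinically derived.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def high_acuity_probability(heart_rate, systolic_bp, spo2):
    # Invented weights: tachycardia, hypotension, and hypoxia raise risk.
    z = 0.04 * (heart_rate - 80) + 0.05 * (90 - systolic_bp) + 0.3 * (94 - spo2)
    return sigmoid(z)

stable = high_acuity_probability(heart_rate=75, systolic_bp=120, spo2=99)
shocky = high_acuity_probability(heart_rate=130, systolic_bp=80, spo2=88)
print(f"stable: {stable:.2f}, shocky: {shocky:.2f}")
```

A deployed model would learn its weights from thousands of encounters and far more features; this only shows the shape of the calculation that replaces, or augments, the front-desk judgment.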
And that still tends to be one of the biggest problems people focus on right now in emergency medicine, along with who is most likely to be sick from certain types of injuries, motor vehicle accidents or penetrating trauma. So a lot of the tools are, to me, a lot of the same things we thought about before.
To ultimately target and treat at risk patients who we might otherwise fail.
Dr. Matthew Strehlow: I think that we've all felt the pain of patient flow. We've all felt the pain of patients being triaged into difficult areas for us to manage them because they're under triaged and things like that.
Dr. Christian Rose: And who needs to go where? Flight transport, I think, is a good example. One of our awesome colleagues, Dr. Rice, works on this. You have a critical resource, and you worry about who needs to get sent; we don't want to overwhelm systems. And there's a burden there on us that we worry about. So sometimes we like to come up with a machine learning or artificial intelligence model to make that easier for us.
Dr. Matthew Strehlow: When you see these models being rolled out, on the front lines, actually in the emergency department, how do you see clinicians, or emergency medicine staff responding to them?
Dr. Christian Rose: I'm going to flip it back to you. How have you seen people responding to them?
Dr. Matthew Strehlow: I try to shut my eyes. I try to shut my eyes a lot and ask the experts like yourself. You know, I think the response falls into two camps. Right? There are people that I would call over-adopters and people that are under-adopters, or at least over-excited and under-excited. And I think we as emergency medicine providers have a reasonable dose of skepticism around these tools.
I think our health systems may not have enough skepticism around them.
I don't know if they necessarily fully understand the complexity and the expertise that go into running the emergency department at the clinical level. I do see that kind of difference. So, I would say most clinicians are responding to them right now with what I would say is some skepticism, but I'm not sure that the health systems at large are doing that.
Dr. Christian Rose: Let's just think of our tension points.
There's the population health needs for emergency medicine. I know you think a lot about this in your work too. There's what a community needs, what a hospital needs.
If we subdivide down from that: the larger community and the distribution of resources; then the hospital, and what it can handle at any point in time. That funnels down to the provider, and the provider's needs are often different from the patient's. Each of those represents a point at which we're trying to find alignment, an overused word in many ways, but you're trying to say, how do we meet all of these goals at once?
I feel for our colleagues. The amount of things we try to hold at once while making a decision is really tough, and rarely does any one tool capture all of them. So I think some of the skepticism is that you feel competing demands: the patient in front of you, and always your responsibility to them; the next patient to come in; what the situation will look like in five minutes; and then the larger health system and how resources get distributed.
The reason the skepticism comes up with these models is, to your point, some people might like them because they say, “Oh, this helped me turn off. I was overthinking, or I'm so fatigued trying to hold all of these elements at once, to be a population health scientist and a primary care physician at the same time. This helps me limit it and say this patient could go to A, B, or C destination, and those are all reasonable outcomes.” It helps you offload some of that cognitive burden, which is one of the things we're researching at Stanford.
I think that when I see people and I look at the literature from how they've used most of the to-date deployed technologies, they've just had very small effects. It's often hard for anything in medicine to show a large-scale improvement in something that we already do kind of well.
And that's not to say medicine is perfect right now or doesn't need to change. It's just quite hard to put in the resources up front to build a model, to find the data points, to do all of the cleaning, to deploy it in a way that's authentic to the patient and physician experience. Then to see the results and see if that's actually changed in a formative way, the outcome of that individual patient. That's really hard.
Again, each of those transitional points is an area of research and is an area of need for someone to take ownership of how well the tool functions.
Some people find, “Oh my God, it's helped me with something that was truly burdensome,” and others say, “I didn't practice this way. I practiced bread-and-butter, decision-rule-based medicine to begin with, and now this has just added 14 additional considerations.”
The EHR was supposed to make it easier for us to place orders, prevent medical errors, and make it so the physicians have less administrative burden of trying to get the work done.
And it turns out the EHR has just doubled that and made it way worse, such that 50% of our time is spent trying to click through and place an order and find the right order in the right context.
So, there's hope that some of these tools will make that better. It's just that humans are human and the way we engage is often unique to each person's current problem.
And those are often a moving target too. So, some people on the one end of the bell curve, they're like, I love it. I would use it every day. Some people are like, it messed up twice, so I don't trust it anymore. I'll never use it again.
Then you have everyone in the middle who's like, it helps some of the time with some of the questions I had, and I'm sort of ambivalent. I would say most people fall into that and most of the implementations all the way back to CPOE have been sort of the same result.
There are very few things that it's just like, “This is the way to do it.” and then we've changed wholesale.
Dr. Matthew Strehlow: Yeah, I agree with you. I think the hype train's a little out in front of where we actually are. But I know you're a basketball player; I'm a basketball player. I teach my kids to throw the ball not where the player is, but where they're going to be.
And when I look at where we're going to be a few years down the road, the combination of the EHR with AI, and integrated electronic health records across multiple systems and situations, has the potential to unlock things that could be really impactful in our care.
I do worry though about the downsides, right? And the risks of this. And I think there are so many that it's like we can't cover them all.
But tell me, what are some of the ones that really would keep you up at night? What are the biggest risks of us adopting AI in emergency care?
Or at least let's say early adoption.
Dr. Christian Rose: Well, let's also think - the risks to whom?
Let's break it down and talk about at least two: the risk to patients themselves and the care we provide, and the risk to us as physicians. The EHR hurt us a lot; it caused a lot of burnout and made it so that patients don't feel like they get time with their doctors. So there was a harm to the patient-physician relationship because of technology.
It's rare that we have a problem or need an AI model for the average. You know, the ones where you hear hoof beats and think horses. We usually need it for the, “I hear hoof beats, and I'm worried it might be a zebra. Let me not miss the zebra.”
It has been really difficult to have data that supports decision making for patients that truly takes into account all of their needs and not just their clinical scenario, but what they go home to, who their doctor is, how to get to them and deliver that care.
So while you might diagnose stuff, diagnosis is only a part of what we do, in fact a small part of what we do. The risk is that you automate some of that, and a lot of people get a version of care that's cookie-cutter in ways and lacks the deep knowledge you get from having a human in the loop who can still steer the ship for them. That can leave people at the margins even more marginalized, a doubling down of the negative effects for people who aren't usually part of the health system. People who don't have access might get access, but a differential version of it.
That's one of the big, low-hanging-fruit versions of the risk: you automate away some of the human elements because we just don't have the humans, and people get a version that's not quite the version they expected, and it leaves them feeling left at the margins.
Dr. Matthew Strehlow: I was going to say, I've heard you write and speak on bias. And I think you know, as you talk about people that are on the margins, that's one of the things that you've really talked about and opened up my thinking around.
Can you explain to people, just an overview, of why AI can actually exacerbate bias? Because I think we think about humans being biased. But I know that people that are in this space like yourself talk a lot about machines being biased. So, can you explain, how does that happen? Are there things we can do and what's being done?
Dr. Christian Rose: Okay. There are certain hospitals we all know that have had EHRs for a long time, the early adopters. The VA is one of those, and so are a lot of academic centers. When we think of academic centers and where they're located, they're not generally representative of a lot of the country: a liberal coastal city versus a smaller rural city in the middle of the country, with different access to healthcare. We'll just start there.
So different people we know present to different types of hospitals and different hospitals have different access to digital versions of the patient information.
The data that's been collected that we use, that we throw into the models then is from places where people have access to the highest quality of academic care on average.
And San Francisco patients look a little bit different or respond a little different to people in Houston, which is a little bit different than Indianapolis, or New York City.
That's fine if you know it and are aware of it. But it also means that anything operating at scale doesn't have a great way of handling the patient population that goes to the rural hospital that's still on paper charts, or maybe just started on an EMR.
That EMR doesn't actually send data to the big companies that build the models, and so those patients always live in the periphery. If they ever present to one of the major centers, the model doesn't really know how to deal with them. So to me it's a problem of certain places having more data points, which doesn't accurately represent the whole picture. I'm not saying this is true of every person in every model; some friends of ours talk about building hyperlocal algorithms and deploying them just at their own shop.
But if you didn't have any data to begin with, and every patient you see is a sort of edge case, you're going to marginalize people. You're going to keep those who weren't part of the data stuck at the fringe, because we don't have a good way of dealing with that.
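The mechanism is easy to demonstrate numerically. In this sketch every number and the "lab value" itself are invented: a decision threshold tuned on one population's distribution produces a flood of false alarms on a population whose healthy baseline sits somewhere else:

```python
import random

random.seed(0)

# Invented illustration of training-data shift: a threshold "model"
# tuned where healthy values cluster around 5 and sick around 8
# (so a cutoff of 6.5 separates them well) is applied to a
# population whose healthy baseline is 7, and flags most of them.
def sample(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

threshold = 6.5  # works well where healthy ~5 and sick ~8
healthy_elsewhere = sample(7.0, 1000)
false_alarm_rate = sum(v > threshold for v in healthy_elsewhere) / 1000
print(f"healthy patients flagged as sick: {false_alarm_rate:.0%}")  # roughly 70%
```

The hyperlocal recalibration Dr. Rose alludes to amounts to re-estimating that threshold on the local population's own data.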
I do want to just briefly state though, that it's important to remember that humans themselves do things and make decisions that are based on our experiences. And physicians may move around the country and go to places.
There's a real opportunity for artificial intelligence to take out some of the unconscious biases humans have anyway. Humans make these mistakes too, so you can flip the script and say, hey, can I develop a tool that mostly just alerts someone to the fact that they might be missing something, or that they might have prematurely closed their differential? People are working on differential builders. So there are ways to mitigate the same problem with the same tool, by framing the question more as “what might I be missing here?” rather than “how do I get this done as quickly as possible?” There are just many ways those biases can seep in.
There are also many ways to address them, and I don't want to leave the impression that it will all ultimately lead to this. It's just something we need to be aware of: not everyone is seen. The squeaky wheel gets the grease; if something squeaks, you've heard of it, it's chronically a problem for you in one place. It just might not be for other people.
Dr. Matthew Strehlow: Yeah, I see. I read an article the other day about the rise of black lung again, right? You can see how those patients, coming back with a disease we've kind of forgotten in most of the country, probably aren't plugged into the databases that are building our AI models. So those models may not work well when such a patient comes in with a cough somewhere else, where they're starting to implement them.
So there's that risk. But I love that you mentioned some of the positives there. You said, hey, there is this risk, but if we're attentive to that risk of bias, we can actually use these tools to alleviate some of the biases we already have.
Dr. Christian Rose: Yep.
Dr. Matthew Strehlow: Tell me about some other things that you think are potential positives and that you're excited about with AI and healthcare.
Dr. Christian Rose: I worry about not setting realistic goals. I think things are blowing up, and I actually am very excited. It's just that we are probably in the trough of disillusionment right now, because people were super hyped when they first heard of GPT, and almost no business has seen a large-scale return on investment for what they've put into their AI models.
All over, people have put up billions of dollars, and they're not sure when they're going to see their money again. That is a normal experience too. That is the normal, well-known, well-described Gartner hype cycle, which has been around for decades; I'm trying to remember when it was originally defined.
So it's important not to become disillusioned about long-term formative change, and to remember: okay, what's our goal, and where are we setting our sights? A lot of the reason I spend time thinking about what can go wrong is mostly to try to avoid the trough of disillusionment.
Thinking about bias, thinking about the worries, is important. And while I'm actually quite excited, I do think the history of science and of technological innovation also says that our jobs will probably get… I think there's a lot of reason to be worried that they might get worse. The risk to physician practice is very real right now.
I'm sure you and our audience have heard people worry that they're just going to get replaced by AI tools. That you'll loop the doctor out, that maybe we'll up-train advanced practice providers to do the hands-on work while the AI model does all of the critical thinking.
That's a reasonable thought, but that has never been the case in automation. Whether it's bank tellers, autopilots in planes, or the loom, automation has changed industries far more than it has ever wholesale knocked out a major part of one.
So I recognize that people should be worried; that is one of the major risks of these technologies to our work. Maybe we will become even more data checkers than we currently are, and we won't get to spend as much time doing the analytic thinking, talking to a friend and trying to figure out the really interesting case in front of you, because it's answered in a second by putting it into one of the many tools that are available.
Dr. Matthew Strehlow: Yeah. I think it's a challenge, because if we say, oh, we can take away some of that thinking, that knowledge, that expertise, one problem is that AI models currently are not great at fact-checking themselves. Right? And the hardest thing to do is figure out when your senior resident is wrong. It's not hard to figure out when your junior resident or the medical student is wrong. It's hard with the senior resident, because they're right 90 percent of the time, 95, 99 percent of the time.
So it actually takes more expertise to fact-check the model. I do think it's going to be tough to find that balance: okay, we're going to put this AI into the loop, but then we also need somebody with real expertise. And how do you have those people if you've relied on AI for most of it? So much of expertise is built on experience, study, and perseverance, and our system may just not be designed for that.
But on the positive side, most of the world doesn't operate like where you and I are operating. I do a lot of global health, and most of the world faces a massive healthcare workforce shortage that's getting worse every day. That's true in the United States, and it's even more true globally.
So things we can do to alleviate that workforce shortage are just imperative. Otherwise, millions of people will die, potentially annually, just from a lack of healthcare workforce. So I do see so much opportunity there to augment our practice. But I always worry about trust. Trust gets talked about a lot, whether you're automating planes or automating driving.
I'm wondering if you can talk a little bit about your views on whether our patients and our providers are going to trust AI models. Not whether they should, but whether they will. Most of the time, I read that people won't trust them. But my concern is always that people will trust them, and they'll trust them too soon.
What do you think?
Dr. Christian Rose: I think there are a couple of ways to frame it. Trust is a huge issue in all of healthcare. Back to our bias discussion: the reason some people never showed up was simply trust, trust in the ability to go to the ED and be provided the care they needed.
There's lots of people today who don't trust coming to the healthcare system because of concerns about deportation or what happens, even where we work. And lots of our colleagues work endlessly and tirelessly to try to make people feel like there's trust in the healthcare system.
I think then, with the tools, a lot of this comes from use. Like any tool: do I trust my stethoscope, or the ultrasound, to work? There are artifacts all the time on the ultrasound, but we don't have long-term discussions about the windowing and what an artifact meant. We go, oh, that didn't work that well, I'm not really sure what I'm seeing, so I get another view, or I ask for support.
In the history of technology implementations, things we've been developing against since the sixties (this actually isn't as new as we may perceive), a part of trust is having champions, people to answer the questions, people ready to go to bat not for the technology, but for us and our patients together, such that we can trust that when there's a problem, or we feel there's an inaccuracy, there's something to do about it.
The biggest discussion I hear about trust on the physician and patient side is that you'll get an autonomous version of care, people won't be able to get out of the loop once they're in it, and they'll become hugely mistrustful. They'll say, I've got to go get my human friend, I've got to go call my friend. Which we've all experienced too: someone calls you at three in the morning, a friend you haven't seen in 10 years, saying, I'm at an ED, my dad's having a blank, what is it? Does that make sense based on what's going on?
Trust in healthcare is a huge issue that people in our department work a lot on, yourself included. I am pro champions of the system. I am pro testing things heavily up front and involving all the stakeholders in these tools. That's sort of what our paper was about: the EHR was a hammer looking for nails, and there were a lot of reasons to just get people up and running fast, which left a lot of people feeling like the thing was made without them and their best interests in mind.
Again, AI has been around for a while. It's just finally graduated and come of age enough to answer some of the difficult questions we have had a hard time doing at scale before.
I think that in order to have trust, you need to feel like you are able to push back and stop a system, and not feel like it's Kafkaesque, like once I'm put into the AI, that's it, I'm stuck there. Because then you'll avoid it forever.
And I think that goes for both the patient and provider side. Providers don't want to feel like they're always going to be pigeonholed. One of the risks is that maybe you're just going to do a ton of testing, because in order for a machine to know what's going on, it's going to recommend the CT, the angio, all the blood work, whatever.
So you'll be given a recommendation you can't really fulfill, and then you'll feel let down by it and won't have any recourse for your patient, or, if you are a patient, any way to get anything else. So you'll just feel let down and distrustful of the system.
I think from what I've seen, the trust comes from the ability to feel like you're an engaged member and that there's someone you can call up when you need a hand.
And it's all happening so fast right now that I think many people feel like it's a computer versus us.
Dr. Matthew Strehlow: So, there's a lot coming our way. Right? And I think you mentioned that physicians are feeling skeptical and overwhelmed at times. What advice can you give them? What do you say to those folks that come to you and express skepticism?
Dr. Christian Rose: Parts of innovation and informatics are all just what every emergency physician friend of ours does anyway. We process information, we try to build tools. Most of the EM physicians I meet are as good or better than me at all of these things, and they didn't necessarily do an informatics fellowship.
That means engaging with the technology, experimenting with it wherever you can, trying stuff out to see what you find. When ultrasound was first used in medicine, I think it was on an eyeball, because it's a fluid-filled sac and ultrasound works really well in it. Then people started trying it on other things. This is the way medicine advances, and it has been doing that since the stethoscope.
So really what that means to me is just, you don't have to do it on clinical care necessarily. But you can practice and see how different models work. Free versions of almost all of these things are accessible right now before they eventually get more expensive. Ask questions, try stuff out, think about the cases you encounter, and then look to see if there's a way for these technologies to sort of help.
I think that's actually normal academic emergency medicine, and just emergency medicine: trial and error, not doing anything inherently dangerous, but trying to see where the future is going to lie. The only way you get there is by taking your first steps and first movements toward the end destination you were talking about earlier.
Dr. Matthew Strehlow: Well, that's a great place for us to leave it, but we always like to close with the question that's a little bit on the lighter side. So, you in my mind, are kind of the epitome of a Renaissance person. And it's hard for me to find anything that you don't know about or think about. So, if you weren't in medicine, what would you be doing?
Dr. Christian Rose: Oh my God. I mean…well, when we come up with these in our heads, it's…you know, we're kids for the most part.
And I think, for me, my head is still in space, and I would love to be an astronaut and get to look back on the earth like Carl Sagan and enjoy the pale blue dot that is the world that we live in.
Dr. Matthew Strehlow: Our colleague Anil Menon has been selected to go to space next year: an emergency medicine physician and a proud grad of Stanford. So don't give up on your dreams, Christian.
On that note, that's a wrap for today's episode.
Huge thanks to Dr. Rose for sharing his insight and experience. Clearly, we've only scratched the surface of a fast moving and fascinating field. Keep tuning in for more on this topic in the future. For those listening, be sure to check out the show notes for links to Dr. Rose's latest work and studies. If you like today's episode, don't forget to subscribe, leave a review and share it with a colleague.
And as always, we want to hear from you. Send us your questions, ideas, or feedback at the link in the description. Thanks for tuning in. We'll see you next time.
And until then, keep taking care of anyone, anything at any time.
Dr. Matthew Strehlow: I think you need to speak in an actuarial conference. You can just be like, "Hey, I know what you guys are facing, but man, don't worry about it. It's all good."
Dr. Christian Rose: Thanks. Yeah. Real life goal there. Actuarial Conference Europe 2026, baby.
Dr. Matthew Strehlow: That's right. Line. It. Up.