Smarter health: The ethics of AI in health care


Source: WBUR · Reading time: about 6 minutes

'AI can actually predict pretty accurately when people are gonna die,' Dr. Steven Lin says. Listen to this episode of On Point Radio for more:

An algorithm doesn't raise its hand and pledge the Hippocratic oath. Clinicians do. We are the ones who pledge the oath. That means we're responsible for the algorithm as well.

What should the doctor do? Hypothetical Helen is not a real patient, but her scenario is based on very real technology that's currently in use at Stanford Hospital. CHAKRABARTI: If you're a computer scientist, the tool is a highly complex algorithm that scours a patient's electronic health record and calculates that person's chance of dying within a specified period of time. If you're in hospital administration, it's the advance care planning, or ACP model. For the rest of us, and for Helen, it's a death predictor.
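To make the idea concrete, here is a purely illustrative sketch of the kind of tool described above: a classifier trained on electronic-health-record features to estimate the probability of death within one year. Every feature name, data point, and label below is hypothetical; the actual Stanford ACP model is far more complex and is not public.

```python
# Minimal, hypothetical sketch of an EHR-based one-year mortality predictor.
# Features and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical EHR features per patient:
# [days in hospital, prior surgeries, age, number of chronic conditions]
X_train = np.array([
    [2,  0, 35, 0],
    [14, 2, 70, 3],
    [5,  1, 50, 1],
    [21, 3, 80, 4],
    [1,  0, 28, 0],
    [10, 1, 62, 2],
])
# 1 = died within a year (hypothetical outcome labels)
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# A patient like "Helen": long stay, prior surgery, several comorbidities.
helen = np.array([[12, 1, 40, 2]])
risk = model.predict_proba(helen)[0, 1]  # estimated P(death within 1 year)
print(f"Estimated one-year mortality risk: {risk:.0%}")
```

In a real deployment, a score like this would trigger the alert Helen's doctor sees, prompting an advance care planning conversation rather than any automatic decision.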

CHAKRABARTI: Along with a diagnostic ultrasound of her urinary system, her history of bladder disease, the difficulty she had breathing following a previous surgery, and the number of days she spent in the hospital this time, all of that together puts the 40-year-old mom of three at very high risk of dying in the next year, according to the model. It also surprises her doctor.

DR. LIN [Tape]: But what do we do with that data? And so one potential use case for this is to really improve our rates of advance care planning conversations.

The guide suggests that doctors ask for a patient's permission to have the conversation, talk about uncertainties, and frame them as wishes. Such as: Helen, I wish we were not in this situation, but I am worried that your time may be as short as one year. It also suggests asking patients whether their family knows about their priorities and wishes.

CHAKRABARTI: Finally, as a once and future patient myself, my mind wanders back to that searingly human moment in hypothetical Helen's case. The moment when the doctor first sees that alert, when she looks at Helen lying in bed, still hooked up to medical monitors, wanting to go home. WILSON: And I think it would be important for me to know some of the baked-in equity issues in algorithms. For example, I want to be clear about the kind of coding and how the algorithm arrived at that decision. So that's an unqualified yes for me.

So that's where the training data came from for the death predictor. But now, as we've talked about, it's being used on every patient, no matter who they are, who comes into Stanford Hospital. So, Professor Wilson, what ethical questions does that raise for you? COHEN: I often say as a middle aged white guy living in Boston, I am like dead center in most data sets, in terms of what they predict and how well they do prediction. But that's very untrue for other people out there in the world. And the further we go away from the training data, from the algorithm, the more questions we might have about how good the data is in predicting other kinds of people.

So one thing to think about is that when the AI looks at the data set, it really doesn't see black and white necessarily, unless that's coded in or it's asked to look at it. It looks at a lot of different variables. And the variables it looks at may influence the prediction, weighted more strongly in some directions than others compared to human decision makers. So what we really want to know is performance. That's my question.
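Cohen's point — that a model can look accurate overall while performing much worse for patients unlike those in its training data — is exactly what a subgroup performance check is meant to surface. The sketch below uses entirely synthetic data (the groups, labels, and noise levels are made up) to show why reporting one overall accuracy number can hide a large gap.

```python
# Synthetic illustration of subgroup performance evaluation.
# We simulate a model that is informative for a well-represented group
# but much noisier for an under-represented one, then compare AUCs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["well_represented", "under_represented"], n, p=[0.9, 0.1])
y_true = rng.integers(0, 2, n)  # hypothetical outcomes

# Model scores: low noise for the majority group, high noise for the minority,
# mimicking a model trained mostly on majority-group data.
noise_sd = np.where(group == "well_represented", 0.3, 1.5)
y_score = y_true + rng.normal(0.0, noise_sd)

aucs = {}
for g in ["well_represented", "under_represented"]:
    mask = group == g
    aucs[g] = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{g}: AUC = {aucs[g]:.2f}")
```

The design choice here is to stratify the evaluation metric by group rather than trust a single aggregate number — the same logic Wilson raises about wanting to know how an algorithm arrived at its decision for patients like, and unlike, its training population.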

CHAKRABARTI: Well, I will say, in the more than three dozen conversations and interviews that we had in the course of developing this series, this one thing came up again and again. And I just want to play a little bit of tape from someone else who reflected those same concerns. CHAKRABARTI: That's Dr. Vindell Washington. And by the way, we're going to hear a lot more from him in episode four of this series.

WILSON: Yes, absolutely. I mean, we already see those kinds of lapses in care, particularly for Black patients and Latino patients. In terms of pain management, in terms of certain kinds of treatment options, we already see disparities there. And so in some ways, this kind of information could provide cover for those biases, right? Like, Oh, I'm not biased. I'm just going where the data leads me to go.

Do we need their permission to use information about their death process in order to build a model like this one? Or can we say, you know what, it's your patient record. We're going to de-identify the data. We're not going to be able to point the finger at you, but you're going to have participated in the building of this thing that you might have strong feelings about.

DR. RICHARD SHARP [Tape]: And the focus of the work that we're doing in bioethics is really going out to patients, making them aware of these trends that are beginning, and asking them what they think about these developments. CHAKRABARTI: And here's the point that Dr. Sharp makes that I think is most fascinating. Computer scientists and physicians, as I noted earlier, essentially have different viewpoints or mindsets regarding ethical considerations. So Dr. Sharp says it's the computer scientists and technologists that need to adopt medicine's ethical standards.



