Every so often these days, a study comes out declaring that artificial intelligence is better at diagnosing health problems than a human doctor. These studies are enticing because America’s health care system is woefully broken and everyone is searching for solutions. AI presents a potential opportunity to make doctors more efficient by handling much of their administrative busywork and, in doing so, giving them time to see more patients, thus driving down the ultimate cost of care. There is also the possibility that real-time translation could improve access for non-native English speakers. For tech companies, the opportunity to serve the healthcare industry could be quite lucrative.
But in practice, it seems we are nowhere near replacing doctors with artificial intelligence, or even truly augmenting them. The Washington Post spoke with multiple experts, including physicians, to see how early tests of AI are going, and the results were not assuring.
Here is one excerpt of Christopher Sharp, a clinical professor at Stanford Medicine, using GPT-4o to draft a recommendation for a patient who contacted his office:
Sharp randomly selects a patient query. It reads: “Ate a tomato and my lips are itchy. Any recommendations?”
The AI, which uses a version of OpenAI’s GPT-4o, drafts a reply: “I’m sorry to hear that your lips are itchy. Sounds like you may be having a mild allergic reaction to the tomato.” The AI recommends avoiding tomatoes, taking an oral antihistamine, and using a topical steroid cream.
Sharp stares at his screen for a moment. “Clinically, I don’t agree with all the aspects of that answer,” he says.
“Avoiding tomatoes, I’d wholly agree with. On the other hand, topical creams like a mild hydrocortisone on the lips would not be something I would recommend. Lips are very thin tissue, so we are very careful about using steroid creams.

“I would take that part away.”
Here is another, from Stanford medicine and data science professor Roxana Daneshjou:
She opens her laptop to ChatGPT and types in a test patient question. “Dear doctor, I have been breastfeeding and I think I developed mastitis. My breast has been red and painful.” ChatGPT responds: Use hot packs, perform massage, and do extra nursing.
But that is wrong, says Daneshjou, who is also a dermatologist. In 2022, the Academy of Breastfeeding Medicine recommended the opposite: cold compresses, abstaining from massage, and avoiding overstimulation.
The problem with tech optimists pushing AI into fields like healthcare is that it is not the same as making consumer software. We already know that Microsoft’s Copilot 365 assistant is buggy, but a small mistake in your PowerPoint presentation is not a big deal. Making mistakes in healthcare can kill people. Daneshjou told the Post that she red-teamed ChatGPT with 80 others, including both computer scientists and physicians, posing medical questions to it, and found that it offered dangerous answers twenty percent of the time. “Twenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,” she said.
Of course, proponents will say that AI can augment a doctor’s work, not replace it, and that doctors should always check the outputs. And it is true that the Post story interviewed a physician at Stanford who said that two-thirds of doctors there with access to a recording platform use AI to record and transcribe patient meetings, so they can look patients in the eye during a visit rather than looking down and taking notes. But even there, OpenAI’s Whisper technology appears to insert completely fabricated information into some recordings. Sharp said that Whisper erroneously inserted into one transcript that a patient attributed a cough to exposure to their child, which the patient never said. One striking example of training-data bias that Daneshjou found in testing was that an AI transcription tool assumed a Chinese patient was a computer programmer without the patient ever offering that information.
AI could potentially help in healthcare, but if its outputs must be carefully checked, how much time is it really saving doctors? Furthermore, patients have to trust that their doctor is actually checking what the AI produces, and hospital systems will have to put safeguards in place to make sure this is happening, or else complacency may seep in.
Fundamentally, generative AI is just a word prediction machine, searching large amounts of data without truly understanding the underlying concepts it returns. It is not “intelligent” in the same sense as a real human, and it is especially unable to understand the circumstances unique to each individual; it returns information it has generalized and seen before.
“I think this is one of those promising technologies, but it’s just not there yet,” said Adam Rodman, an internal medicine doctor and AI researcher at Beth Israel Deaconess Medical Center. “I’m worried that we will further degrade what we do by putting hallucination-prone AI into high-stakes patient care.”
Next time you visit your doctor, it might be worth asking whether they use AI in their workflow.