The AI boom has already begun to creep into the medical field in the form of AI-based visit summaries and patient case analysis. Now, new research shows how AI training techniques similar to those used in ChatGPT can be used to train surgical robots to act on their own.
Researchers from Johns Hopkins University and Stanford University have built a training model using video recordings of human-controlled robotic arms performing surgical tasks. By having the model learn to imitate movements shown on video, the researchers believe they can reduce the need to program each individual movement an action requires. From The Washington Post:
Robots have learned how to handle needles, tie knots, and sew wounds on their own. Moreover, the trained robots went beyond simple imitation, correcting their mistakes without being asked to do so – for example, picking up a dropped needle. Scientists have already begun the next phase of work: combining all the different skills into complete surgical procedures performed on animal cadavers.
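The training approach described above is broadly a form of imitation learning, often called behavior cloning: a policy is fit to recorded (observation, action) pairs from human demonstrations and then queried on new observations. Below is a minimal, purely illustrative sketch of the idea, using a toy nearest-neighbor policy in place of the neural networks a real system would train; all names and data here are hypothetical, not taken from the study.

```python
# Behavior cloning in miniature: "train" on demonstrated
# (observation, action) pairs, then act on unseen observations
# by generalizing from the demonstrations.

def train_policy(demonstrations):
    """Store demonstrations; a real system would fit a neural network."""
    return list(demonstrations)

def act(policy, observation):
    """Return the action whose recorded observation is closest (1-nearest neighbor)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(policy, key=lambda pair: sq_dist(pair[0], observation))
    return action

# Toy demonstrations: observation = (needle_x, needle_y), action = a motion label.
demos = [
    ((0.0, 0.0), "grasp"),
    ((1.0, 0.0), "move_left"),
    ((0.0, 1.0), "move_down"),
]

policy = train_policy(demos)
print(act(policy, (0.1, 0.05)))  # prints "grasp": nearest demo is at (0, 0)
```

The "self-correction" the researchers observed, such as retrieving a dropped needle, falls out of the same mechanism: if recovery behaviors appear in the demonstration data, the policy can reproduce them when it lands in a similar situation, without anyone programming a recovery routine explicitly.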
Robots have been used in the operating room for years – in 2018, the viral "surgery on a grape" videos highlighted how robotic arms can aid surgical procedures by providing a high degree of precision. Nearly 876,000 robot-assisted surgeries were performed in 2020. Robotic instruments can reach places and perform tasks in the body where a surgeon's hands could never fit, and they do not suffer from tremors. Thin, precise instruments can avoid nerve damage. But these robots are manually guided by a surgeon with a controller, and the surgeon always remains responsible.
What worries skeptics of more autonomous robots is that AI models like ChatGPT are not "intelligent" but simply mimic what they have seen before, without understanding the underlying concepts they are dealing with. Diseases present in near-infinite variety across an equally varied range of human hosts, so what happens when an AI model encounters a scenario it has never seen before? Something can go wrong in a split second during surgery, and the AI may not have been trained to respond.
At the very least, autonomous robots used in surgical procedures would need FDA approval. In cases where doctors use AI to summarize their patient visits and make recommendations, FDA approval is not required because the doctor is technically supposed to review and verify any information the AI produces. That is worrying, because there is already evidence that AI bots can give bad recommendations, or hallucinate and insert information into visit transcripts that was never spoken. How often will a tired and overwhelmed doctor approve everything an AI produces without examining it closely?
It is reminiscent of recent reports of soldiers in Israel relying on artificial intelligence to identify attack targets without examining the information closely. "Soldiers who were poorly trained in using the technology attacked human targets without corroborating the (AI) predictions at all," the Washington Post story reads. "At certain times the only corroboration required was that the target was a male." Things can go awry when humans become complacent and are not sufficiently in the loop.
Healthcare is a high-risk industry, certainly higher-stakes than the consumer market. If Gmail incorrectly summarizes an email, it is not the end of the world. An AI system incorrectly diagnosing a health problem, or making an error during surgery, is a far more serious matter. Who is responsible in that case? The Post interviewed the director of robotic surgery at the University of Miami, and this is what he had to say:
“The stakes are very high, because this is a matter of life and death,” he said. Each patient’s anatomy is different, as is the way the disease behaves in patients.
“I look at (images from) CT scans and MRIs and then perform surgery” by controlling the robotic arms, Parekh said. “If you want the robot to do the surgery itself, it will have to understand all the imaging, how to read CT scans and MRIs.” In addition, robots will need to learn how to perform keyhole surgery, or laparoscopic surgery, which uses very small incisions.
It’s hard to take seriously the idea that AI will be infallible when no technology is ever perfect. This autonomous technology is certainly interesting from a research point of view, but the backlash from a botched surgery performed by an autonomous robot would be enormous. Who do you penalize when something goes wrong, and who has their medical license revoked? Humans aren’t infallible either, but at least patients have peace of mind knowing they’ve gone through years of training and can be held accountable if something goes wrong. AI models are primitive simulations of humans, sometimes behave unpredictably, and have no moral compass.
Another concern is whether over-reliance on autonomous robots to perform surgeries might eventually cause doctors' own abilities and knowledge to atrophy – similar to how leaning on dating apps to facilitate dating erodes the relevant social skills.
If doctors are tired and overworked — one reason researchers suggest this technology could be valuable — then perhaps the systemic problems causing the shortages should be addressed instead. It has been widely reported that the United States suffers from a severe shortage of doctors because of increasing inaccessibility to the field. The country is on track to face a shortage of 10,000 to 20,000 surgeons by 2036, according to the Association of American Medical Colleges.