AI helps prevent medical errors in real-world clinics

There is much talk about the potential of AI in health care, but most studies so far have been stand-ins for the actual practice of medicine: simulated scenarios that predict what the impact of AI could be in medical settings.

But in one of the first real-world tests of an AI tool working side by side with clinicians, researchers in Kenya showed that AI can reduce medical errors by as much as 16%.

In a study posted on OpenAI's website and being submitted to a scientific journal, researchers from OpenAI and Penda Health, a network of primary care clinics in Nairobi, showed that an AI tool can serve as a powerful assistant to busy clinicians who cannot be expected to know everything about every medical condition. Penda Health employs clinicians trained in basic health care for four years, the equivalent of physician assistants in the U.S. The health group, which operates 16 primary care clinics in Nairobi, Kenya, has its own guidelines to help clinicians navigate symptoms, diagnoses, and treatments. But the sheer breadth of that material is a challenge for any practitioner.

That is where AI comes in. "We feel it acutely because we serve such a wide range of people and circumstances," says Dr. Robert Korom, chief medical officer at Penda. "So one of the biggest things is the breadth of the tool."

Previously, Korom says, he and his colleague Dr. Sarah Kiptinness, head of medical services, had to write individual guidelines for each scenario clinicians might commonly encounter: for example, guides for uncomplicated malaria cases, for malaria cases in adults, or for situations where patients have low platelets. AI is ideal for gathering all of this knowledge and surfacing it under the right circumstances.

Korom and his team built the first versions of the AI tool as a reference that shadowed the clinician. If the clinician had a question about a diagnosis or a treatment protocol, he or she could press a button that pulled up a block of relevant text compiled by the AI system to support decision-making. But clinicians used the feature in only about half of visits, Korom says, because they did not always have time to read the text, or because they often found they did not need the extra guidance.

So Penda improved the tool, called AI Consult, so that it runs in the background of visits, essentially shadowing clinicians' decisions and prompting them only when they take questionable or inappropriate actions, such as overprescribing antibiotics.

"It's as if you have an expert there," says Korom, similar to how a senior attending physician assesses a medical resident's care plan. "In some respects, that's how [this AI tool] works. It's a safety net: it doesn't dictate what the care is, but gives corrective nudges and feedback when needed."

Penda collaborated with OpenAI on a study of AI Consult to document its impact on reducing errors, both in making diagnoses and in prescribing treatments. Clinicians who used the AI Consult tool, across roughly 20,000 patient visits, reduced diagnostic errors by 16% and treatment errors by 13% compared with Penda providers who saw a similar number of patients without it.

The fact that the study involved thousands of patients in a real-world setting sets a powerful precedent for how AI could be used effectively to deliver and improve health care, says Dr. Isaac Kohane, professor of biomedical informatics at Harvard Medical School, who reviewed the study. "We need many more of these kinds of prospective studies, as opposed to the retrospective studies where [researchers] look at large observational datasets and predict [health outcomes] with AI. This is what I was waiting for."

The research not only showed that AI can help reduce medical errors, and therefore improve the quality of care patients receive; the clinicians involved also came to regard the tool as a useful partner in their medical education. That came as a surprise to OpenAI's Karan Singhal, Health AI lead, who led the study. "It was a learning tool for [those who used it] and helped them teach themselves and understand a broader breadth of care practices they needed to know," says Singhal. "That was a bit of a surprise, because it wasn't what we set out to study."

Kiptinness says AI Consult served as an important confidence builder, helping clinicians gain experience more efficiently. "Many of our clinicians now feel that AI Consult has to stay, to help them have more confidence in patient care and to improve the quality of care."

Clinicians receive immediate feedback through a green, yellow, and red light system that evaluates their clinical actions, and the company gets automatic assessments of their strengths and weaknesses. "In the future, we want to give more individualized feedback, such as: 'You are good at managing obstetric cases, but pediatrics is an area you need to look at,'" says Kiptinness. "We have many ideas for customized training guides based on the AI feedback."
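The loop described above, a background check that maps each clinical action to a green, yellow, or red signal and nudges the clinician only when something looks off, can be sketched in code. This is purely illustrative: the function names, the hand-written rules, and the drug and diagnosis lists are invented for the example; Penda's actual AI Consult relies on a language model, not fixed rules, and its implementation is not public.

```python
# Hypothetical sketch of a traffic-light feedback loop like the one the
# article describes. All names and rules here are invented for illustration.
from enum import Enum

class Severity(Enum):
    GREEN = "no concerns"      # action consistent with guidelines; stay silent
    YELLOW = "possible issue"  # worth a second look
    RED = "likely error"       # prompt the clinician before proceeding

# Toy rule standing in for the model's judgment: flag an antibiotic
# prescription that is not paired with a diagnosis that warrants it.
def check_action(diagnosis: str, prescription: str) -> Severity:
    antibiotics = {"amoxicillin", "ciprofloxacin", "azithromycin"}
    bacterial = {"bacterial pneumonia", "urinary tract infection"}
    if prescription.lower() in antibiotics:
        if diagnosis.lower() in bacterial:
            return Severity.GREEN
        return Severity.RED  # e.g. antibiotics for a viral illness
    return Severity.GREEN

def nudge(diagnosis: str, prescription: str) -> str:
    """Return the feedback string shown to the clinician, if any."""
    sev = check_action(diagnosis, prescription)
    if sev is Severity.GREEN:
        return "green: no feedback shown"
    return f"{sev.name.lower()}: review {prescription!r} for {diagnosis!r}"

print(nudge("viral upper respiratory infection", "amoxicillin"))
print(nudge("bacterial pneumonia", "amoxicillin"))
```

The key design point the article highlights is that the green path stays silent: the tool interrupts only on yellow or red, which is why clinicians tolerated it running on every visit.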

Such co-piloting can be a practical and powerful way to incorporate AI into health care delivery, especially in areas with high needs and few providers. The findings have "shifted what we expect as the standard of care within Penda," says Korom. "We probably wouldn't want our clinicians to be completely without this."

The results also set the stage for more meaningful studies of AI in health care that move the practice from theory to reality. Dr. Ethan Goh, executive director of the Stanford AI Research and Science Evaluation network and associate editor of the journal BMJ Digital Health & AI, anticipates that the study will inspire similar ones at other institutions, including in the U.S. "The more places such findings replicate, the stronger the signal becomes in terms of how much value [from AI-based systems] we can capture," he says. "Maybe today we just catch mistakes, but what if tomorrow we can go further, and AI suggests accurate plans before a doctor makes mistakes to begin with?"

Tools such as AI Consult could further expand access to health care, by putting them in the hands of non-medical people such as social workers, or by offering more specialized care in areas where such expertise is not available. "How far can we push this?" says Korom.

The key, he says, would be to develop, as Penda did, a highly customized model that accurately reflects the workflows of providers and patients in a given setting. Penda's AI Consult, for example, focused on the types of diseases most likely to occur in Kenya and the symptoms clinicians there are most likely to see. If such factors are taken into account, he says, "I think there's a lot of potential there."
