As artificial intelligence and machine learning gain traction in radiology, how can facilities appropriately evaluate new tools? How can one assess the efficacy and validity of a vendor’s application? These questions and more were answered on a webinar hosted by SIIM and moderated by Eliot L. Siegel, MD, FSIIM, Professor of Radiology, University of Maryland School of Medicine, and Bradley J. Erickson, MD, PhD, CIIP, FSIIM, Professor and Associate Chair of Research, Department of Radiology, Mayo Clinic, Rochester. Panelists included Nina Kottler, MD, MS, VP of Clinical Operations, Radiology Partners, along with vendor perspectives from Chad McClennan, President & Chief Executive Officer, Koios Medical; Morris Panner, Chief Executive Officer, Ambra Health; and Jeff Sorenson, President & Chief Executive Officer, TeraRecon.
What is the right regulatory balance that maximizes positive patient outcomes? How can patients and physicians understand their rights and what happens to our data in this new world? Panner discussed how employees at Memorial Sloan Kettering Cancer Center protested an exclusive cancer data-sharing deal with AI startup Paige.ai. “We need to know how our data is being used to be able to make informed decisions about that use,” said Panner.
One of the initial questions asked by the moderators focused on how AI tools can be incorporated into clinical care workflows. McClennan shared that many gaps and areas for improvement remain between the detection and diagnosis paths, and that it is up to vendors to fill them. As a physician and AI tool developer, Kottler emphasized that workflow is key: even the best tools available to a physician will be of no use unless a change management process integrates them into the workflow.
The panel also discussed how a consumer can determine whether an AI or machine learning tool is high-quality. Kottler shared her practice’s strategies, including treating applications like candidates with resumes: Is it applicable to your patient population? Is it flexible? Does it test well in a pilot? Additionally, she recommended that when testing an application, you include everyone it will involve, such as referring physicians, analysts, and project managers.
McClennan noted that the FDA’s recommendations are also critical to take into consideration. Sorenson further highlighted the importance of a pilot, giving physicians the chance to try out and become familiar with the product.
“When rolling out AI, it’s key to remember that the diagnostic and clinical brain is still human. AI is augmenting what a clinician is doing. You aren’t bringing in a system that’s bypassing that judgment – just offering more options. Technology both requires more from clinicians and enables them to do more,” shared Panner.
How does your facility select and determine the efficacy of AI applications? Comment below to participate in the discussion!