Jan 28, 2019 / by Heather Landi
Artificial intelligence-enabled clinical decision support (CDS) has the potential to equip clinicians with the actionable information they need to enhance overall health and improve outcomes. However, regulatory issues, gaps in product labeling and patient privacy concerns need to be addressed before AI can be safely and widely adopted.
In a recent report, a working group at the Duke-Margolis Center for Health Policy examined the potential benefits and challenges of incorporating AI into CDS software, particularly software that supports clinical diagnosis, as well as the barriers that may be preventing the development and adoption of this software.
Improved CDS could help reduce diagnostic errors, the Duke-Margolis team noted. Diagnostic errors account for almost 60% of all medical errors and an estimated 40,000 to 80,000 deaths each year, according to the National Academies of Sciences, Engineering, and Medicine, which also estimates that “nearly every American will experience a diagnostic error in their lifetime, sometimes with devastating consequences.”
AI-enabled diagnostic support software—a subset of CDS software—has the potential to augment clinicians’ intelligence, support their decision-making processes, help them arrive at the correct diagnosis faster, reduce the unnecessary testing and treatment that can result from misdiagnosis, and reduce pain and suffering by enabling treatment to start earlier, the working group wrote.
Stakeholders will need to address several key issues that are delaying innovation in and adoption of AI-enabled diagnostic support software, the researchers wrote, including the need to demonstrate the value of the software to provider systems. Developers will need to show clinical and economic evidence using data from a population representative of the health system, according to the working group.
“This evidence will include the effect of the software on patient outcomes, care quality, total costs of care, and workflow; the usability of the software and its effectiveness at delivering the right information in a way that clinicians find useful and trustworthy; and the potential for reimbursement for use of these products by payers,” the researchers wrote.
Clinicians and regulators also need effective ways to assess the patient risks these products pose; developers’ ability to explain how the software works and how its algorithms have been trained will significantly affect how they view those risks. Product labeling may need to be reconsidered, and the risks and benefits of continuous-learning versus locked models must be discussed, the working group noted.