Caution Is Required When Clinically Implementing AI Models: What the COVID-19 Pandemic Taught Us About Regulation and Validation

Keerthi B. Harish, Yindalon Aphinyanaphongs

2021 · DOI: 10.54111/0001/ss1

TLDR

Concerns about the implementation of clinical artificial intelligence models are raised, using COVID-19 as an important case study, and it is asserted that machine learning models should be robustly vetted by facilities using local data to ensure that emerging technology does patients more good than harm.

Abstract

The novelty of COVID-19 ushered in an expansion of artificial intelligence models designed to close clinical knowledge gaps, especially with regard to diagnosis and prognostication. These models emerged within a unique regulatory context that largely defers governance of clinical decision support tools. As such, we raise three concerns about the implementation of clinical artificial intelligence models, using COVID-19 as an important case study. First, flawed data underlying model development lead to flawed clinical resources. Second, models developed within a single geographic and temporal context face challenges in generalizability across clinical environments. Third, failure to implement ongoing local monitoring leads to diminishing utility as diseases and the populations they affect inevitably change. Experience with this pandemic has informed our assertion that machine learning models should be robustly vetted by facilities using local data to ensure that emerging technology does patients more good than harm.
