ABSTRACT
Societal trust in artificial intelligence (AI) and the development of trustworthy AI systems and ecosystems are critical for the progress and implementation of AI technology in medicine. With the growing use of AI in a variety of medical and imaging applications, it is more vital than ever to make these systems dependable and trustworthy. This article considers fourteen core principles aimed at moving the field closer to systems that are accurate, resilient, fair, explainable, safe, and transparent: toward trustworthy AI.
Subjects
Artificial Intelligence, Ecosystem, Diagnostic Imaging, Humans

ABSTRACT
Almost 1 in 10 individuals suffers from one of many rare diseases (RDs). The average time to diagnosis for an RD patient can be as long as 7 years. Artificial intelligence (AI)-based positron emission tomography (PET), if implemented appropriately, has tremendous potential to advance the diagnosis of RDs. Patient advocacy groups must be active stakeholders in the AI ecosystem if we are to avoid potential issues related to the implementation of AI in health care. AI medical devices must not only be RD-aware at each stage of their conceptualization and life cycle but also be trained on diverse, augmented datasets representative of the end-user population, including RD patients. Failure to do so leads to potential harm and unsustainable deployment of AI-based medical devices (AIMDs) in clinical practice.