ABSTRACT
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Subjects
Artificial Intelligence, Radiology, Medical Societies, Humans, Canada, Europe, New Zealand, United States, Australia
ABSTRACT
As the role of artificial intelligence (AI) in clinical practice evolves, governance structures are needed to oversee the implementation, maintenance, and monitoring of clinical AI algorithms to enhance quality, manage resources, and ensure patient safety. This article establishes a framework for the infrastructure required for clinical AI implementation and presents a road map for governance. The road map answers four key questions: Who decides which tools to implement? What factors should be considered when assessing an application for implementation? How should applications be implemented in clinical practice? Finally, how should tools be monitored and maintained after clinical implementation? Among the many challenges to implementing AI in clinical practice, devising flexible governance structures that can quickly adapt to a changing environment will be essential to ensuring quality patient care and meeting practice improvement objectives.
Subjects
Artificial Intelligence, Radiology, Humans, Radiography, Algorithms, Quality of Health Care
ABSTRACT
Artificial intelligence (AI)-based technologies are the most rapidly growing field of innovation in healthcare, with the promise of achieving substantial improvements in the delivery of patient care across all disciplines of medicine. Recent advances in imaging technology, along with the marked expansion of readily available advanced health information and data, offer a unique opportunity for interventional radiology (IR) to reinvent itself as a data-driven specialty. Additionally, the growth of AI-based applications in diagnostic imaging is expected to have downstream effects on all image-guidance modalities. Therefore, the Society of Interventional Radiology Foundation has called upon 13 key opinion leaders in the field of IR to develop research priorities for clinical applications of AI in IR. The objectives of the assembled research consensus panel were to assess the availability and understand the applicability of AI for IR, estimate current needs and clinical use cases, and assemble a list of research priorities for the development of AI in IR. Individual panel members proposed consensus statements, and all participants voted on them to rank them according to their overall impact on IR. The results identified the top priorities for the IR research community and provide organizing principles for innovative academic-industrial research collaborations that will leverage both clinical expertise and cutting-edge technology to benefit patient care in IR.
Subjects
Artificial Intelligence, Interventional Radiology, Consensus, Humans, Research, Medical Societies
ABSTRACT
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: (1) new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; (2) automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; (3) new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures and federated machine learning methods; (4) machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and (5) validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.
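Priority (5) above concerns validated image de-identification ahead of data sharing. As a purely illustrative sketch of the kind of tag scrubbing involved (not a complete implementation of the DICOM PS3.15 confidentiality profiles, and not taken from the roadmap itself), removing direct identifiers with pydicom might look like the following; the tag list, file paths, and function name are assumptions.

```python
# Illustrative de-identification sketch using pydicom (assumed example).
# A production pipeline would follow the DICOM PS3.15 Basic Confidentiality
# Profile and also handle dates, UIDs, and burned-in annotations.
from pathlib import Path

import pydicom

# Simplified list of direct identifiers to blank; not an exhaustive profile.
IDENTIFYING_TAGS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
    "AccessionNumber",
]


def deidentify_file(src: Path, dst: Path) -> None:
    """Read a DICOM file, blank direct identifiers, drop private tags, save a copy."""
    ds = pydicom.dcmread(src)
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:
            ds.data_element(keyword).value = ""  # blank rather than delete, keeps structure
    ds.remove_private_tags()  # vendor-specific private tags often carry identifiers
    ds.save_as(dst)


if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    deidentify_file(Path("incoming/ct_slice.dcm"), Path("deidentified/ct_slice.dcm"))
```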
Subjects
Artificial Intelligence, Biomedical Research, Diagnostic Imaging, Computer-Assisted Image Interpretation, Algorithms, Humans, Machine Learning
Subjects
Radiation-Induced Neoplasms, Radiation Doses, Radiation Exposure, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Radiation Exposure/adverse effects, Child, Risk Assessment, Risk Factors, Radiation Protection/methods
ABSTRACT
PURPOSE: We assessed the changing use of prebiopsy prostate magnetic resonance imaging in Medicare beneficiaries. MATERIALS AND METHODS: Men who underwent prostate biopsy were identified in 5% Medicare RIFs (Research Identifiable Files) from October 2010 through September 2015. We evaluated the rate of prebiopsy prostate magnetic resonance imaging, defined as any pelvic MRI 6 months or less before biopsy with a prostate indication diagnosis code. Temporal changes were determined, as well as variation by geography and among populations. RESULTS: In male Medicare beneficiaries the prebiopsy magnetic resonance imaging use rate increased from 0.1% in 2010 to 0.7% in 2011, to 1.2% in 2012, to 2.9% in 2013, to 4.7% in 2014 and to 10.3% in 2015. In 2015 the prebiopsy prostate magnetic resonance imaging rate varied significantly by patient age, including 5.7% for age greater than 80 years vs 8.4% to 9.3% for other age ranges (p = 0.040), as well as by race, including 5.8% in African American vs 10.1% in Caucasian men (p = 0.009), and by geographic region, ranging from 6.3% in the Midwest to 12.5% in the Northeast (p < 0.001). The rate was highest in Wyoming at 25.0%, New York at 23.7% and Minnesota at 20.5%, but it was less than 1% in 10 states. CONCLUSIONS: Historical Medicare claims provide novel insights into the dramatically increasing adoption of magnetic resonance imaging prior to prostate biopsy. Following earlier minimal use, utilization increased sharply beginning in 2013, exceeding 10% in 2015. However, substantial racial and geographic variation exists in adoption. Continued educational, research and policy efforts are warranted to optimize the role of prebiopsy magnetic resonance imaging and minimize sociodemographic and geographic disparities.
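The percentages above are simple utilization proportions: among biopsies in a given year or subgroup, the fraction preceded by a qualifying MRI within 6 months. As a hedged illustration of how such rates and a subgroup comparison could be tabulated from a de-identified claims extract (the column names, toy values, and chi-square comparison are assumptions, not the study's actual analysis code):

```python
# Illustrative sketch: pre-biopsy MRI rates by year and by subgroup from a
# de-identified claims extract. Column names and values are invented.
import pandas as pd
from scipy.stats import chi2_contingency

# One row per biopsy; 'prebiopsy_mri' flags a prostate-indication pelvic MRI
# performed within 6 months before the biopsy date.
claims = pd.DataFrame({
    "year": [2013, 2013, 2014, 2015, 2015, 2015],
    "race": ["White", "Black", "White", "White", "Black", "White"],
    "prebiopsy_mri": [0, 0, 1, 1, 0, 1],
})

# Annual utilization rate (% of biopsies preceded by MRI).
annual_rate = claims.groupby("year")["prebiopsy_mri"].mean().mul(100).round(1)
print(annual_rate)

# Subgroup comparison (e.g., by race) using a chi-square test on counts.
table = pd.crosstab(claims["race"], claims["prebiopsy_mri"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square p-value: {p_value:.3f}")
```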
Subjects
Magnetic Resonance Imaging/trends, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Aged, Aged 80 and over, Needle Biopsy, Humans, Male, Medicare, Prostate/pathology, Prostatic Neoplasms/pathology, United States
ABSTRACT
Despite the surge in artificial intelligence (AI) development for health care applications, particularly for medical imaging applications, there has been limited adoption of such AI tools into clinical practice. During a 1-day workshop in November 2022, co-organized by the ACR and the RSNA, participants outlined experiences and problems with implementing AI in clinical practice, defined the needs of various stakeholders in the AI ecosystem, and elicited potential solutions and strategies related to the safety, effectiveness, reliability, and transparency of AI algorithms. Participants included radiologists from academic and community radiology practices, informatics leaders responsible for AI implementation, regulatory agency employees, and specialty society representatives. The major themes that emerged fell into two categories: (1) AI product development and (2) implementation of AI-based applications in clinical practice. In particular, participants highlighted key aspects of AI product development, including clear clinical task definitions; well-curated data from diverse geographic, economic, and health care settings; standards and mechanisms to monitor model reliability; and transparency regarding model performance, both in controlled and real-world settings. For implementation, participants emphasized the need for strong institutional governance; systematic evaluation, selection, and validation methods conducted by local teams; seamless integration into the clinical workflow; performance monitoring and support by local teams; performance monitoring by external entities; and alignment of incentives through credentialing and reimbursement. Participants predicted that clinical implementation of AI in radiology will continue to be limited until the safety, effectiveness, reliability, and transparency of such tools are more fully addressed.
Subjects
Artificial Intelligence, Radiology, Humans, United States, Reproducibility of Results, Diagnostic Imaging, Medical Societies, Patient Safety
ABSTRACT
PURPOSE: To evaluate the real-world performance of two FDA-cleared artificial intelligence (AI)-based computer-aided triage and notification (CADt) detection devices and compare their performance with the manufacturer-reported performance testing in the instructions for use. MATERIALS AND METHODS: Clinical performance of two FDA-cleared CADt large-vessel occlusion (LVO) devices was retrospectively evaluated at two separate stroke centers. Consecutive "code stroke" CT angiography examinations were included and assessed for patient demographics, scanner manufacturer, presence or absence of a CADt result, the CADt result itself, and LVO in the internal carotid artery (ICA), horizontal middle cerebral artery (MCA) segment (M1), Sylvian MCA segments after the bifurcation (M2), precommunicating part of the cerebral artery, postcommunicating part of the cerebral artery, vertebral artery, and basilar artery vessel segments. The original radiology report served as the reference standard, and a study radiologist extracted the above data elements from the imaging examination and radiology report. RESULTS: At hospital A, the CADt algorithm manufacturer reports assessment of the intracranial ICA and MCA with a sensitivity of 97% and specificity of 95.6%. Real-world performance of 704 cases included 79 in which no CADt result was available. Sensitivity and specificity in the ICA and M1 segments were 85.3% and 91.9%. Sensitivity decreased to 68.5% when M2 segments were included and to 59.9% when all proximal vessel segments were included. At hospital B, the CADt algorithm manufacturer reports a sensitivity of 87.8% and specificity of 89.6%, without specifying the vessel segments. Real-world performance of 642 cases included 20 cases in which no CADt result was available. Sensitivity and specificity in the ICA and M1 segments were 90.7% and 97.9%. Sensitivity decreased to 76.4% when M2 segments were included and to 59.4% when all proximal vessel segments were included. DISCUSSION: Real-world testing of two CADt LVO detection algorithms identified gaps in the detection and communication of potentially treatable LVOs when considering vessels beyond the intracranial ICA and M1 segments and in cases with absent and uninterpretable data.
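The segment-dependent sensitivities reported above follow from ordinary confusion-matrix arithmetic: an examination counts as truly positive if the radiology report describes an LVO in any segment of the grouping under consideration, and the device call is compared against that label. The sketch below illustrates the calculation under those assumptions; the data structure and example values are hypothetical, not the study data.

```python
# Illustrative sketch: per-segment-group sensitivity/specificity of a CADt
# LVO device against the radiology report reference standard.
from dataclasses import dataclass


@dataclass
class Exam:
    report_lvo_segments: set  # segments with LVO per the radiology report
    cadt_positive: bool       # device flagged the exam as LVO-positive


def sens_spec(exams, segment_group):
    """Sensitivity/specificity treating an exam as positive if the report
    describes an LVO in any segment of `segment_group`."""
    tp = fp = tn = fn = 0
    for e in exams:
        truth = bool(e.report_lvo_segments & segment_group)
        if truth and e.cadt_positive:
            tp += 1
        elif truth:
            fn += 1
        elif e.cadt_positive:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity


exams = [
    Exam({"M1"}, True),
    Exam({"M2"}, False),  # missed M2 occlusion lowers sensitivity
    Exam(set(), False),
    Exam({"ICA"}, True),
]
print(sens_spec(exams, {"ICA", "M1"}))        # narrow segment definition
print(sens_spec(exams, {"ICA", "M1", "M2"}))  # broader definition
```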
Subjects
Artificial Intelligence, Stroke, Humans, Triage, Retrospective Studies, Stroke/diagnostic imaging, Algorithms, Computers
ABSTRACT
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. Key points: • The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety. • Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance. • AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
ABSTRACT
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Subjects
Artificial Intelligence, Radiology, Humans, United States, Medical Societies, Europe, Canada, New Zealand, Australia
ABSTRACT
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. © The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Subjects
Artificial Intelligence, Radiology, Humans, Canada, Radiography, Automation
ABSTRACT
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Subjects
Artificial Intelligence, Radiology, Humans, Canada, Medical Societies, Europe
ABSTRACT
The concept of primary healthcare is now regarded as crucial for enhancing access to healthcare services in low-income and middle-income countries (LMICs). Technological advancements that have made many medical imaging devices smaller, lighter, portable, and more affordable, together with infrastructure advancements in power supply, Internet connectivity, and artificial intelligence, are all increasing the feasibility of point-of-care imaging (POCI) in LMICs. Although providing imaging services at the same time as the clinic visit represents a paradigm shift from the way imaging care is typically provided in high-income countries, where patients are usually directed to dedicated imaging centres, a POCI model is often the only way to provide timely access to imaging care for many patients in LMICs. To address the growing burden of non-communicable diseases such as cancer and heart disease, bringing advanced imaging tools to the point of care will be necessary. Strategies tailored to the countries' specific needs, including training, safety, and quality, will be of the utmost importance.
ABSTRACT
RATIONALE AND OBJECTIVES: To assess key trends, strengths, and gaps in validation studies of Food and Drug Administration (FDA)-regulated imaging-based artificial intelligence/machine learning (AI/ML) algorithms. MATERIALS AND METHODS: We audited publicly available details of regulated AI/ML algorithms in imaging from 2008 until April 2021. We reviewed 127 regulated software products (118 AI/ML) to classify information related to their parent company, subspecialty, body area and specific anatomy type, imaging modality, date of FDA clearance, indications for use, target pathology (such as trauma) and findings (such as fracture), technique (CAD triage, CAD detection and/or characterization, CAD acquisition or improvement, and image processing/quantification), product performance, and the presence, type, strength, and availability of clinical validation data. Pertaining to validation data, where available, we recorded the number of patients or studies included; sensitivity, specificity, accuracy, and/or receiver operating characteristic area under the curve; and information on ground-truthing of use cases. Data were analyzed with pivot tables and charts for descriptive statistics and trends. RESULTS: We noted an increasing number of FDA-regulated AI/ML algorithms from 2008 to 2021. Seventeen (17/118) regulated AI/ML algorithms posted no validation claims or data. Just 9/118 reviewed AI/ML algorithms had validation dataset sizes of over 1000 patients. The most common types of AI/ML were image processing/quantification (IPQ; n = 59/118) and triage (CADt; n = 27/118). Brain, breast, and lungs dominated the targeted body regions of interest. CONCLUSION: Insufficient public information on validation datasets for several FDA-regulated AI/ML algorithms makes it difficult to justify clinical applications, since their generalizability and presence of bias cannot be inferred.
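The pivot-table descriptive analysis described in the methods can be reproduced with a few lines of standard tooling. The sketch below is an assumed illustration (column names and toy rows are invented, not the authors' audit spreadsheet):

```python
# Illustrative sketch: tabulate cleared imaging AI/ML algorithms by clearance
# year and technique category, and summarize validation cohort sizes.
import pandas as pd

audit = pd.DataFrame({
    "clearance_year": [2017, 2019, 2019, 2020, 2020, 2021],
    "technique": ["CADt", "IPQ", "CADt", "IPQ", "CADe/x", "IPQ"],
    "validation_patients": [300, None, 1200, 450, None, 80],
})

# Count pivot: number of cleared algorithms per year and technique category.
counts = pd.crosstab(audit["clearance_year"], audit["technique"])
print(counts)

# Fraction of algorithms with a validation cohort above 1000 patients
# (missing cohort sizes count as not exceeding the threshold).
large_cohort = (audit["validation_patients"] > 1000).mean()
print(f"fraction with >1000-patient validation data: {large_cohort:.2f}")
```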
Assuntos
Algoritmos , Inteligência Artificial , Humanos , Aprendizado de Máquina , Curva ROC , Estados Unidos , United States Food and Drug AdministrationRESUMO
PURPOSE: The ACR Data Science Institute conducted its first annual survey of ACR members to understand how radiologists are using artificial intelligence (AI) in clinical practice and to provide a baseline for monitoring trends in AI use over time. METHODS: The ACR Data Science Institute sent a brief electronic survey to all ACR members via email. Invitees were asked for demographic information about their practice and whether and how they were currently using AI as part of their clinical work. They were also asked to evaluate the performance of AI models in their practices and to assess future needs. RESULTS: Approximately 30% of radiologists are currently using AI as part of their practice. Large practices were more likely to use AI than smaller ones, and of those using AI in clinical practice, most were using AI to enhance interpretation, most commonly detection of intracranial hemorrhage, pulmonary emboli, and mammographic abnormalities. Of practices not currently using AI, 20% plan to purchase AI tools in the next 1 to 5 years. CONCLUSION: The survey results indicate a modest penetrance of AI in clinical practice. Information from the survey will help researchers and industry develop AI tools that will enhance radiological practice and improve quality and efficiency in patient care.
Subjects
Artificial Intelligence, Radiology, Data Science, Humans, Radiologists, Surveys and Questionnaires
ABSTRACT
The pace of regulatory clearance of artificial intelligence (AI) algorithms for radiology continues to accelerate, and numerous algorithms are becoming available for use in clinical practice. End users of AI in radiology should be aware that AI algorithms may not work as expected when used beyond the institutions in which they were trained, and model performance may degrade over time. In this article, we discuss why regulatory clearance alone may not be enough to ensure that AI will be safe and effective in all radiological practices, and we review strategies and available resources for evaluating AI models before clinical use and for monitoring their performance after deployment to ensure efficacy and patient safety.
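One concrete form of the post-deployment monitoring discussed here is to track, over a rolling window of cases, how often the AI output agrees with the final radiologist interpretation and to alert when agreement falls below a pre-specified floor. The sketch below is a minimal assumed implementation of that idea (raw agreement as the metric, invented thresholds), not a vendor-specified or society-endorsed protocol:

```python
# Illustrative sketch: monitor post-deployment agreement between an AI tool's
# output and the final radiologist interpretation, flagging possible drift.
from collections import deque


class PerformanceMonitor:
    """Rolling agreement monitor over the most recent `window` cases."""

    def __init__(self, window: int = 500, agreement_floor: float = 0.90):
        self.results = deque(maxlen=window)  # True if AI agreed with the radiologist
        self.agreement_floor = agreement_floor

    def record_case(self, ai_positive: bool, radiologist_positive: bool) -> None:
        self.results.append(ai_positive == radiologist_positive)

    def agreement(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def drift_detected(self) -> bool:
        # Only alert once the window holds enough cases to be meaningful.
        full_window = len(self.results) == self.results.maxlen
        return full_window and self.agreement() < self.agreement_floor


monitor = PerformanceMonitor(window=3, agreement_floor=0.9)  # tiny window for the demo
for ai_call, radiologist_call in [(True, True), (False, True), (False, True)]:
    monitor.record_case(ai_call, radiologist_call)
print(monitor.agreement(), monitor.drift_detected())
```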
Subjects
Artificial Intelligence, Radiology, Algorithms, Humans, Radiography
ABSTRACT
A core principle of ethical data sharing is maintaining the security and anonymity of the data, and care must be taken to ensure that medical records and images cannot be reidentified and traced back to patients, or misconstrued as a breach of the trust between health care providers and patients. Once those principles have been observed, those seeking to share data must take the appropriate steps to curate the data in a way that organizes the clinically relevant information so as to be useful to the data sharing party, assesses the ensuing value of the data set and its annotations, and informs the data sharing contracts that will govern use of the data. Embarking on a data sharing partnership engenders a host of ethical, practical, technical, legal, and commercial challenges that require a thoughtful, considered approach. In 2019 the ACR convened a Data Sharing Workgroup to develop philosophies around best practices in the sharing of health information. This is Part 2 of a report on the workgroup's efforts in exploring these issues.
Subjects
Information Dissemination, Trust, Delivery of Health Care, Humans
ABSTRACT
Radiology is at the forefront of the artificial intelligence transformation of health care across multiple areas, from patient selection to study acquisition to image interpretation. Needing large data sets to develop and train these algorithms, developers enter contractual data sharing agreements involving data derived from health records, usually with postacquisition curation and annotation. In 2019 the ACR convened a Data Sharing Workgroup to develop philosophies around best practices in the sharing of health information. The workgroup identified five broad domains of activity important to collaboration using patient data: privacy, informed consent, standardization of data elements, vendor contracts, and data valuation. This is Part 1 of a Report on the workgroup's efforts in exploring these issues.
Subjects
Artificial Intelligence, Privacy, Delivery of Health Care, Humans, Information Dissemination, Informed Consent
ABSTRACT
The rapid development of artificial intelligence (AI) has led to its widespread use in multiple industries, including healthcare. AI has the potential to be a transformative technology that will significantly impact patient care. In particular, AI has a promising role in radiology, in which computers are indispensable and new technological advances are often sought out and adopted early in clinical practice. We present an overview of the basic definitions of common terms, the development of an AI ecosystem in imaging, and its value in mitigating the challenges of implementation in clinical practice.