Results 1 - 20 of 80
2.
J Am Coll Radiol ; 21(7): 1119-1129, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38354844

ABSTRACT

Despite the surge in artificial intelligence (AI) development for health care applications, particularly for medical imaging applications, there has been limited adoption of such AI tools into clinical practice. During a 1-day workshop in November 2022, co-organized by the ACR and the RSNA, participants outlined experiences and problems with implementing AI in clinical practice, defined the needs of various stakeholders in the AI ecosystem, and elicited potential solutions and strategies related to the safety, effectiveness, reliability, and transparency of AI algorithms. Participants included radiologists from academic and community radiology practices, informatics leaders responsible for AI implementation, regulatory agency employees, and specialty society representatives. The major themes that emerged fell into two categories: (1) AI product development and (2) implementation of AI-based applications in clinical practice. In particular, participants highlighted key aspects of AI product development to include clear clinical task definitions; well-curated data from diverse geographic, economic, and health care settings; standards and mechanisms to monitor model reliability; and transparency regarding model performance, both in controlled and real-world settings. For implementation, participants emphasized the need for strong institutional governance; systematic evaluation, selection, and validation methods conducted by local teams; seamless integration into the clinical workflow; performance monitoring and support by local teams; performance monitoring by external entities; and alignment of incentives through credentialing and reimbursement. Participants predicted that clinical implementation of AI in radiology will continue to be limited until the safety, effectiveness, reliability, and transparency of such tools are more fully addressed.


Subject(s)
Artificial Intelligence , Radiology , Humans , United States , Reproducibility of Results , Diagnostic Imaging , Societies, Medical , Patient Safety
3.
Can Assoc Radiol J ; 75(2): 226-244, 2024 May.
Article in English | MEDLINE | ID: mdl-38251882

ABSTRACT

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.


Subject(s)
Artificial Intelligence , Radiology , Societies, Medical , Humans , Canada , Europe , New Zealand , United States , Australia
4.
Insights Imaging ; 15(1): 16, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38246898

ABSTRACT

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Key points
• The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety.
• Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance.
• AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.

5.
J Am Coll Radiol ; 21(8): 1292-1310, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38276923

ABSTRACT

Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.


Subject(s)
Artificial Intelligence , Radiology , Humans , United States , Societies, Medical , Europe , Canada , New Zealand , Australia
6.
Radiol Artif Intell ; 6(1): e230513, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38251899

ABSTRACT

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.


Subject(s)
Artificial Intelligence , Radiology , Humans , Canada , Radiography , Automation
7.
J Med Imaging Radiat Oncol ; 68(1): 7-26, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38259140

ABSTRACT

Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.


Subject(s)
Artificial Intelligence , Radiology , Humans , Canada , Societies, Medical , Europe
8.
J Am Coll Radiol ; 21(4): 617-623, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37843483

ABSTRACT

PURPOSE: Medical imaging accounts for 85% of digital health's venture capital funding. As funding grows, it is expected that artificial intelligence (AI) products will increase commensurately. The study's objective was to project the number of new AI products given the statistical association between historical funding and FDA-approved AI products. METHODS: The study used data from the ACR Data Science Institute for the number of FDA-approved AI products (2008-2022) and data from Rock Health for AI funding (2013-2022). Employing a 6-year lag between funding and product approval, we used linear regression to estimate the association between the number of new products approved in a given year and the lagged funding (ie, product-year funding). Using this statistical relationship, we forecasted the number of new FDA-approved products. RESULTS: The results show that there are 11.33 (95% confidence interval: 7.03-15.64) new AI products for every $1 billion in funding, assuming a 6-year lag between funding and product approval. In 2022 there were 69 new FDA-approved products associated with $4.8 billion in funding. In 2035, product-year funding is projected to reach $30.8 billion, resulting in 350 new products that year. CONCLUSIONS: FDA-approved AI products are expected to grow from 69 in 2022 to 350 in 2035 given the expected funding growth in the coming years. AI is likely to change the practice of diagnostic radiology as new products are developed and integrated into practice. As more AI products are integrated, it may incentivize increased investment for future AI products.


Subject(s)
Artificial Intelligence , Capital Financing , Academies and Institutes , Data Science , Investments
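The projection above is straightforward linear-regression arithmetic: roughly 11.33 new FDA-approved products per $1 billion of funding from six years earlier. A minimal sketch of that calculation (the slope and funding figures come from the abstract; the function name is my own, and the published model presumably includes an intercept term, so this slope-only sketch will not reproduce the reported figures exactly):

```python
def projected_products(lagged_funding_billions: float,
                       products_per_billion: float = 11.33) -> float:
    """Expected number of new FDA-approved AI products in a year,
    given venture funding (in $B) from six years earlier."""
    return products_per_billion * lagged_funding_billions

# 2022: ~$4.8B of product-year funding
print(round(projected_products(4.8)))   # 54 from the slope alone, vs. 69 observed
# 2035 projection: $30.8B of product-year funding
print(round(projected_products(30.8)))  # 349, close to the reported 350
```

The gap between the slope-only estimate for 2022 (54) and the 69 observed approvals is consistent with an intercept in the fitted model.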
9.
J Am Coll Radiol ; 21(2): 329-340, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37196818

ABSTRACT

PURPOSE: To evaluate the real-world performance of two FDA-approved artificial intelligence (AI)-based computer-aided triage and notification (CADt) detection devices and compare them with the manufacturer-reported performance testing in the instructions for use. MATERIALS AND METHODS: Clinical performance of two FDA-cleared CADt large-vessel occlusion (LVO) devices was retrospectively evaluated at two separate stroke centers. Consecutive "code stroke" CT angiography examinations were included and assessed for patient demographics, scanner manufacturer, presence or absence of CADt result, CADt result, and LVO in the internal carotid artery (ICA), horizontal middle cerebral artery (MCA) segment (M1), Sylvian MCA segments after the bifurcation (M2), precommunicating part of the cerebral artery, postcommunicating part of the cerebral artery, vertebral artery, and basilar artery vessel segments. The original radiology report served as the reference standard, and a study radiologist extracted the above data elements from the imaging examination and radiology report. RESULTS: At hospital A, the CADt algorithm manufacturer reports assessment of intracranial ICA and MCA with sensitivity of 97% and specificity of 95.6%. Real-world performance of 704 cases included 79 in which no CADt result was available. Sensitivity and specificity in ICA and M1 segments were 85.3% and 91.9%. Sensitivity decreased to 68.5% when M2 segments were included and to 59.9% when all proximal vessel segments were included. At hospital B the CADt algorithm manufacturer reports sensitivity of 87.8% and specificity of 89.6%, without specifying the vessel segments. Real-world performance of 642 cases included 20 cases in which no CADt result was available. Sensitivity and specificity in ICA and M1 segments were 90.7% and 97.9%. Sensitivity decreased to 76.4% when M2 segments were included and to 59.4% when all proximal vessel segments were included.
DISCUSSION: Real-world testing of two CADt LVO detection algorithms identified gaps in the detection and communication of potentially treatable LVOs when considering vessels beyond the intracranial ICA and M1 segments and in cases with absent and uninterpretable data.


Subject(s)
Artificial Intelligence , Stroke , Humans , Triage , Retrospective Studies , Stroke/diagnostic imaging , Algorithms , Computers
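The sensitivity and specificity figures reported above are standard confusion-matrix ratios. A minimal sketch with hypothetical counts (the numbers below are invented for illustration, not the study's actual data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of actual LVOs the CADt device flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of LVO-negative exams correctly passed."""
    return tn / (tn + fp)

# Illustrative counts only: 29 detected LVOs, 5 missed LVOs,
# 570 correctly negative exams, 21 false alarms.
print(f"sensitivity = {sensitivity(29, 5):.1%}")    # 85.3%
print(f"specificity = {specificity(570, 21):.1%}")  # 96.4%
```

Note that the denominator choice matters: including or excluding the exams for which no CADt result was available (79 at hospital A, 20 at hospital B) changes both metrics, which is part of the gap the discussion above highlights.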
10.
J Am Coll Radiol ; 20(9): 821-822, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37467870
11.
J Am Coll Radiol ; 20(9): 828-835, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37488026

ABSTRACT

Artificial intelligence (AI)-based solutions are increasingly being incorporated into radiology workflows. Implementation of AI comes along with cybersecurity risks and challenges that practices should be aware of and mitigate for a successful and secure deployment. In this article, these cybersecurity issues are examined through the lens of the "CIA" triad framework: confidentiality, integrity, and availability. We discuss the implications of implementation configurations and development approaches on data security and confidentiality and the potential impact that the insertion of AI can have on the truthfulness of data, access to data, and the cybersecurity attack surface. Finally, we provide a checklist to address important security considerations before deployment of an AI application, and discuss future advances in AI addressing some of these security concerns.

12.
J Am Coll Radiol ; 20(8): 730-737, 2023 08.
Article in English | MEDLINE | ID: mdl-37498259

ABSTRACT

In this white paper, the ACR Pediatric AI Workgroup of the Commission on Informatics educates the radiology community about the health equity issue of the lack of pediatric artificial intelligence (AI), improves the understanding of relevant pediatric AI issues, and offers solutions to address the inadequacies in pediatric AI development. In short, the design, training, validation, and safe implementation of AI in children require careful and specific approaches that can be distinct from those used for adults. On the eve of widespread use of AI in imaging practice, the group invites the radiology community to align and join Image IntelliGently (www.imageintelligently.org) to ensure that the use of AI is safe, reliable, and effective for children.


Subject(s)
Artificial Intelligence , Radiology , Adult , Humans , Child , Societies, Medical , Radiology/methods , Radiography , Diagnostic Imaging/methods
13.
Curr Probl Diagn Radiol ; 52(5): 322-326, 2023.
Article in English | MEDLINE | ID: mdl-37069020

ABSTRACT

OBJECTIVES: To achieve consensus on the performance, interpretation, and reporting of multiple sclerosis (MS) imaging according to up-to-date guidelines using the Peer Learning Methodology. MATERIALS AND METHODS: We utilized the Peer Learning Methodology to engage our clinical and radiology colleagues, review the current guidelines, and achieve consensus on imaging techniques and reporting standards. After implementing changes, we collected radiologist feedback on the impact of the optimized images on their interpretation. RESULTS: Survey responders indicated a strong preference for the new protocol in terms of overall image quality, individual lesion conspicuity, and confidence in the ability to detect an MS lesion. The new protocol was preferred for both MS diagnosis and MS surveillance in 25 of 28 responses. CONCLUSION: The Peer Learning Methodology is an effective tool to standardize and improve MR imaging quality, interpretation, and reporting for multiple sclerosis in accordance with current guidelines.


Subject(s)
Magnetic Resonance Imaging , Radiology , Humans , Radiography , Magnetic Resonance Imaging/methods , Consensus
14.
Respir Res ; 24(1): 49, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36782326

ABSTRACT

BACKGROUND: Interstitial lung abnormalities (ILA) are CT findings suggestive of interstitial lung disease (ILD) in individuals without a prior diagnosis or suspicion of ILD. Previous studies have demonstrated that ILA are associated with clinically significant outcomes including mortality. The aim of this study was to determine the prevalence of ILA in a large CT lung cancer screening program and the association with clinically significant outcomes including mortality, hospitalizations, cancer and ILD diagnosis. METHODS: This was a retrospective study of individuals enrolled in a CT lung cancer screening program from 2012 to 2014. Baseline and longitudinal CT scans were scored for ILA per Fleischner Society guidelines. The primary analyses examined the association between baseline ILA and mortality, all-cause hospitalization, and incidence of lung cancer. Kaplan-Meier plots were generated to visualize the associations between ILA and lung cancer and all-cause mortality. Cox proportional hazards regression models were used to test for this association in both univariate and multivariable models. RESULTS: 1699 subjects met inclusion criteria. 41 (2.4%) had ILA and 101 (5.9%) had indeterminate ILA on baseline CTs. ILD was diagnosed in 10 (24.4%) of 41 with ILA on baseline CT with a mean time from baseline CT to diagnosis of 4.47 ± 2.72 years. On multivariable modeling, the presence of ILA remained a significant predictor of death, HR 3.87 (2.07, 7.21; p < 0.001) when adjusted for age, sex, BMI, pack years and active smoking, but not of lung cancer and all-cause hospital admission. Approximately 50% with baseline ILA had progression on the longitudinal scan. CONCLUSIONS: ILA identified on baseline lung cancer screening exams are associated with all-cause mortality. In addition, a significant proportion of patients with ILA are subsequently diagnosed with ILD and have CT progression on longitudinal scans. TRIAL REGISTRATION NUMBER: ClinicalTrials.gov; No.: NCT04503044.


Subject(s)
Lung Diseases, Interstitial , Lung Neoplasms , Humans , Early Detection of Cancer/adverse effects , Lung/diagnostic imaging , Lung Diseases, Interstitial/diagnostic imaging , Lung Diseases, Interstitial/epidemiology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/complications , Retrospective Studies
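The survival analysis described above rests on the Kaplan-Meier product-limit estimator: at each event time, the survival probability is multiplied by (1 − deaths/at-risk). A self-contained sketch with toy data (real analyses would use a statistics package such as lifelines or R's survival, and the multivariable hazard ratios come from a separate Cox model not shown here):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g., death) occurred, 0 if censored
    Returns a list of (event_time, survival_probability) points.
    """
    survival, curve = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)                       # n_i
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e)  # d_i
        survival *= 1 - deaths / at_risk
        curve.append((t, survival))
    return curve

# Toy cohort of 3: events at t=1 and t=3, one subject censored at t=2.
# Survival drops to 2/3 after the first event, then to 0 at the last.
print(kaplan_meier([1, 2, 3], [1, 0, 1]))
```

Censored subjects (here the one at t=2) contribute to the at-risk count up to their censoring time but never trigger a drop in the curve, which is what distinguishes this estimator from a naive fraction-surviving calculation.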
16.
Abdom Radiol (NY) ; 48(4): 1526-1535, 2023 04.
Article in English | MEDLINE | ID: mdl-36801958

ABSTRACT

In 2017, our tertiary hospital-based imaging practice transitioned from score-based peer review to the peer learning methodology for learning and improvement. In our subspecialized practice, peer learning submissions are reviewed by domain experts, who then provide feedback to individual radiologists, curate cases for group learning sessions, and develop associated improvement initiatives. In this paper, we share lessons learned from our abdominal imaging peer learning submissions with the assumption that trends in our practice likely mimic others', and hope that other practices can avoid future errors and elevate the level of the quality of their own performance. Adoption of a nonjudgmental and efficient method to share peer "learning opportunities" and "great calls" has increased participation in this activity and increased transparency into our practice, thus allowing for visualization of trends in performance. Peer learning allows us to bring our own individual knowledge and practices together for group review in a collegial and safe environment. We learn from each other and decide how to improve together.


Subject(s)
Peer Review , Radiologists , Humans , Clinical Competence , Quality Assurance, Health Care
17.
Eur J Radiol Open ; 9: 100441, 2022.
Article in English | MEDLINE | ID: mdl-36193451

ABSTRACT

Radiology is integral to cancer care. Compared to molecular assays, imaging has its advantages. Imaging as a noninvasive tool can assess the entirety of a tumor unbiased by sampling error and is routinely acquired at multiple time points in oncological practice. Imaging data can be digitally post-processed for quantitative assessment. The ever-increasing application of artificial intelligence (AI) to clinical imaging is challenging radiology to become a discipline with competence in data science, which plays an important role in modern oncology. Beyond streamlining certain clinical tasks, the power of AI lies in its ability to reveal previously undetected or even imperceptible radiographic patterns that may be difficult to ascertain by the human sensory system. Here, we provide a narrative review of the emerging AI applications relevant to the oncological imaging spectrum and elaborate on emerging paradigms and opportunities. We envision that these technical advances will change radiology in the coming years, leading to the optimization of imaging acquisition and discovery of clinically relevant biomarkers for cancer diagnosis, staging, and treatment monitoring. Together, they pave the road for future clinical translation in precision oncology.

18.
Eur J Radiol Open ; 9: 100433, 2022.
Article in English | MEDLINE | ID: mdl-35909389

ABSTRACT

Cancer therapy has evolved from being broadly directed towards tumor types, to highly specific treatment protocols that target individual molecular subtypes of tumors. With the ever-increasing data on imaging characteristics of tumor subtypes and advancements in imaging techniques, it is now often possible for radiologists to differentiate tumor subtypes on imaging. Armed with this knowledge, radiologists may be able to provide specific information that can obviate the need for invasive methods to identify tumor subtypes. Different tumor subtypes also differ in their patterns of metastatic spread. Awareness of these differences can direct radiologists to relevant anatomical sites to screen for early metastases that may otherwise be difficult to detect during cursory inspection. Likewise, this knowledge will help radiologists to interpret indeterminate findings in a more specific manner.

19.
Curr Probl Diagn Radiol ; 51(5): 686-690, 2022.
Article in English | MEDLINE | ID: mdl-35623936

ABSTRACT

Peer learning is a model of continuous feedback, learning, and improvement that is now well-recognized as a method to address radiologist errors. The peer learning conference is the most public facing cornerstone of any peer learning program, and is critical in establishing and maintaining the "Just Culture" that allows the program to thrive. We describe here our 5-step approach to organizing and moderating peer learning conferences for continued growth and participation over the past 4 years, including: achieving group buy-in, setting expectations, preparing the conference, moderating the conference, and post-conference documentation.


Subject(s)
Radiology , Documentation , Feedback , Humans , Radiologists
20.
J Am Coll Radiol ; 19(7): 891-900, 2022 07.
Article in English | MEDLINE | ID: mdl-35483438

ABSTRACT

PURPOSE: Deploying external artificial intelligence (AI) models locally can be logistically challenging. We aimed to use the ACR AI-LAB software platform for local testing of a chest radiograph (CXR) algorithm for COVID-19 lung disease severity assessment. METHODS: An externally developed deep learning model for COVID-19 radiographic lung disease severity assessment was loaded into the AI-LAB platform at an independent academic medical center, which was separate from the institution in which the model was trained. The data set consisted of CXR images from 141 patients with reverse transcription-polymerase chain reaction-confirmed COVID-19, which were routed to AI-LAB for model inference. The model calculated a Pulmonary X-ray Severity (PXS) score for each image. This score was correlated with the average of a radiologist-based assessment of severity, the modified Radiographic Assessment of Lung Edema score, independently interpreted by three radiologists. The associations between the PXS score and patient admission and intubation or death were assessed. RESULTS: The PXS score deployed in AI-LAB correlated with the radiologist-determined modified Radiographic Assessment of Lung Edema score (r = 0.80). PXS score was significantly higher in patients who were admitted (4.0 versus 1.3, P < .001) or intubated or died within 3 days (5.5 versus 3.3, P = .001). CONCLUSIONS: AI-LAB was successfully used to test an external COVID-19 CXR AI algorithm on local data with relative ease, showing generalizability of the PXS score model. For AI models to scale and be clinically useful, software tools that facilitate the local testing process, like the freely available AI-LAB, will be important to cross the AI implementation gap in health care systems.


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , COVID-19/diagnostic imaging , Edema , Humans , Tomography, X-Ray Computed/methods
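The r = 0.80 reported above is a Pearson correlation between the model's PXS score and the radiologist-averaged modified Radiographic Assessment of Lung Edema score. The coefficient itself takes only a few lines to compute (the paired values below are invented purely to illustrate the calculation and do not come from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented PXS vs. mRALE pairs showing a strong positive association
pxs   = [1.0, 2.5, 3.0, 4.5, 5.5, 6.0]
mrale = [2.0, 3.0, 5.0, 6.0, 9.0, 8.0]
print(round(pearson_r(pxs, mrale), 2))  # 0.96
```

In practice one would reach for numpy.corrcoef or scipy.stats.pearsonr, which also return a p-value in the SciPy case; the bare formula is shown here only to make the reported statistic concrete.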