1.
Clin Imaging ; 112: 110210, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38850710

ABSTRACT

BACKGROUND: Clinical adoption of AI applications requires that stakeholders see value in their use. AI-enabled opportunistic CT screening (OS) capitalizes on incidentally detected findings within CT examinations for potential health benefit. This study evaluates primary care providers' (PCP) perspectives on OS. METHODS: A survey was distributed to US Internal and Family Medicine residencies. It assessed familiarity with AI and OS, perspectives on potential value and costs, communication of results, and technology implementation. RESULTS: 62% of respondents (n = 71) were in Family Medicine, and 64.8% practiced in community hospitals. Although 74.6% of respondents had heard of AI/machine learning, 95.8% had little-to-no familiarity with OS. The majority reported little-to-no trust in AI. Reported concerns included AI accuracy (74.6%) and unknown liability (73.2%). 78.9% of respondents reported that OS applications would require radiologist oversight. 53.5% preferred that OS results be included in a separate "screening" section within the radiology report, accompanied by condition risks and management recommendations. The majority of respondents reported that results would likely affect clinical management for all queried applications, and that atherosclerotic cardiovascular disease risk, abdominal aortic aneurysm, and liver fibrosis should be included within every CT report regardless of the reason for examination. 70.5% felt that PCP practices are unlikely to pay for OS. Added costs to the patient (91.5%), added costs to the healthcare provider (77.5%), and unknown liability (74.6%) were the most frequently reported concerns. CONCLUSION: PCP preferences and concerns around AI-enabled OS offer insights into its perceived clinical value and costs. As AI applications grow, feedback from end users should be incorporated into the development of such technology to optimize implementation and adoption. Increasing stakeholder familiarity with AI may be a critical first step toward implementation.


Subject(s)
Tomography, X-Ray Computed , Humans , Primary Health Care , Surveys and Questionnaires , Attitude of Health Personnel , Mass Screening , United States , Male , Female , Artificial Intelligence , Incidental Findings
2.
J Imaging Inform Med ; 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38558368

ABSTRACT

In recent years, the role of artificial intelligence (AI) in medical imaging has become increasingly prominent: in 2023, the majority of FDA-approved AI applications were in imaging and radiology. The surge in AI model development to tackle clinical challenges underscores the necessity of preparing high-quality medical imaging data. Proper data preparation is crucial because it fosters the creation of standardized and reproducible AI models while minimizing biases. Data curation transforms raw data into a valuable, organized, and dependable resource and is fundamental to the success of machine learning and analytical projects. Given the plethora of tools available for the different stages of data curation, it is crucial to stay informed about the most relevant tools within specific research areas. In the current work, we propose a descriptive outline of the different steps of data curation and furnish, for each stage, a compilation of tools collected from a survey of members of the Society for Imaging Informatics in Medicine (SIIM). This collection has the potential to enhance the decision-making process for researchers as they select the most appropriate tool for their specific tasks.
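
As a concrete illustration of one early curation stage, the minimal sketch below inventories a folder of DICOM files and flags headers that still carry identifying metadata before de-identification; the folder path, output file, and choice of tags are assumptions for illustration, not drawn from the article.

```python
# Minimal sketch of one curation step: inventory DICOM headers and flag files
# that still carry identifying metadata prior to de-identification.
# The folder path, output file, and tag list are illustrative assumptions.
from pathlib import Path
import csv
import pydicom

PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "InstitutionName"]

def inventory_dicom(root: str, out_csv: str) -> None:
    rows = []
    for path in Path(root).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, faster
        rows.append({
            "file": str(path),
            "modality": getattr(ds, "Modality", ""),
            "study_uid": getattr(ds, "StudyInstanceUID", ""),
            "phi_present": any(str(getattr(ds, t, "")).strip() for t in PHI_TAGS),
        })
    if rows:
        with open(out_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

# inventory_dicom("/data/raw_ct", "curation_inventory.csv")
```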

4.
Radiol Artif Intell ; 6(3): e230227, 2024 May.
Article in English | MEDLINE | ID: mdl-38477659

ABSTRACT

The Radiological Society of North America (RSNA) has held artificial intelligence competitions to tackle real-world medical imaging problems at least annually since 2017. This article examines the challenges and processes involved in organizing these competitions, with a specific emphasis on the creation and curation of high-quality datasets. Collecting diverse and representative medical imaging data requires addressing issues of patient privacy and data security. Furthermore, ensuring quality and consistency in the data, which includes expert labeling and accounting for varied patient and imaging characteristics, necessitates substantial planning and resources. Overcoming these obstacles requires meticulous project management and adherence to strict timelines. The article also highlights the potential of crowdsourced annotation to advance medical imaging research. Through the RSNA competitions, effective global engagement has been achieved, resulting in innovative solutions to complex medical imaging problems, with the potential to transform health care by enhancing diagnostic accuracy and patient outcomes. Keywords: Use of AI in Education, Artificial Intelligence © RSNA, 2024.
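
As a small illustration of how crowdsourced annotations are often consolidated, the sketch below takes a majority vote across annotators for each image; the data structure, labels, and agreement threshold are hypothetical and not the RSNA challenges' actual adjudication process.

```python
# Majority-vote consolidation of crowdsourced image-level labels.
# Data structure, labels, and the rater threshold are illustrative assumptions.
from collections import Counter

def majority_vote(annotations: dict[str, list[str]], min_raters: int = 3) -> dict[str, str]:
    """annotations maps image_id -> list of labels from different annotators."""
    consensus = {}
    for image_id, labels in annotations.items():
        if len(labels) < min_raters:
            continue  # too few reads: defer to expert adjudication
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) > 0.5:
            consensus[image_id] = label
    return consensus

print(majority_vote({"img1": ["hemorrhage", "hemorrhage", "normal"]}))
```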


Subject(s)
Artificial Intelligence , Radiology , Humans , Diagnostic Imaging/methods , Societies, Medical , North America
5.
Acad Radiol ; 31(2): 417-425, 2024 02.
Article in English | MEDLINE | ID: mdl-38401987

ABSTRACT

RATIONALE AND OBJECTIVES: Innovation is a crucial skill for physicians and researchers, yet traditional medical education does not provide instruction or experience to cultivate an innovative mindset. This study evaluates the effectiveness of a novel course, implemented in an academic radiology department training program over a 5-year period, designed to educate future radiologists on the fundamentals of medical innovation. MATERIALS AND METHODS: A pre- and post-course survey and examination were administered to residents who participated in the innovation course (MESH Core) from 2018 to 2022. Respondents were first evaluated on their subjective comfort level, understanding, and beliefs regarding innovation-related topics using a 5-point Likert-scale survey. Respondents also completed a 21-question multiple-choice exam to test their objective knowledge of innovation-related topics. RESULTS: Thirty-eight residents participated in the survey (response rate 95%). Resident understanding, comfort, and beliefs regarding innovation-related topics improved significantly (P < .0001) on all nine Likert-scale questions after the course. After the course, a significant majority of residents agreed or strongly agreed that technological innovation should be a core competency of the residency curriculum and that a workshop to prototype their ideas would be beneficial. Performance on the course exam improved significantly (48% vs 86%, P < .0001). The overall course experience was rated 5 out of 5 by all participants. CONCLUSION: MESH Core demonstrates long-term success in educating future radiologists on the basic concepts of medical technological innovation. Years later, residents have used the knowledge and experience gained from MESH Core to successfully pursue their own inventions and innovative projects. This model may serve as an approach for other institutions seeking to implement training in this domain.
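
The abstract does not state which statistical test was used; as one hedged possibility for paired ordinal pre/post data, the sketch below applies a Wilcoxon signed-rank test to made-up Likert scores.

```python
# Illustrative paired pre-/post-course comparison of Likert responses using a
# Wilcoxon signed-rank test. The choice of test is an assumption (not stated
# in the abstract) and the scores below are invented.
from scipy.stats import wilcoxon

pre  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]   # hypothetical 5-point Likert scores
post = [4, 5, 4, 4, 5, 4, 3, 5, 4, 4]

stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```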


Subject(s)
Education, Medical, Graduate , Internship and Residency , Humans , Education, Medical, Graduate/methods , Clinical Competence , Curriculum , Radiologists , Hospitals
6.
Skeletal Radiol ; 53(2): 377-383, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37530866

ABSTRACT

PURPOSE: To develop a deep learning model to distinguish rheumatoid arthritis (RA) from osteoarthritis (OA) using hand radiographs and to evaluate the effects of changing pretraining and training parameters on model performance. MATERIALS AND METHODS: A convolutional neural network was retrospectively trained on 9714 hand radiograph exams from 8387 patients obtained from 2017 to 2021 at seven hospitals within an integrated healthcare network. Performance was assessed using an independent test set of 250 exams from 146 patients. Binary discriminatory capacity (no arthritis versus arthritis; RA versus not RA) and three-way classification (no arthritis versus OA versus RA) were evaluated. The effects of additional pretraining using musculoskeletal radiographs, using all views as opposed to only the posteroanterior view, and varying image resolution on model performance were also investigated. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa coefficient were used to evaluate diagnostic performance. RESULTS: For no arthritis versus arthritis, the model achieved an AUC of 0.975 (95% CI: 0.957, 0.989). For RA versus not RA, the model achieved an AUC of 0.955 (95% CI: 0.919, 0.983). For three-way classification, the model achieved a kappa of 0.806 (95% CI: 0.742, 0.866) and accuracy of 87.2% (95% CI: 83.2%, 91.2%) on the test set. Increasing image resolution increased performance up to 1024 × 1024 pixels. Additional pretraining on musculoskeletal radiographs and using all views did not significantly affect performance. CONCLUSION: A deep learning model can be used to distinguish no arthritis, OA, and RA on hand radiographs with high performance.
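
The evaluation metrics named in the abstract can be computed with scikit-learn; the sketch below does so on toy labels and predictions, not the study data.

```python
# Metrics from the abstract computed on made-up predictions: AUC for the
# binary tasks, Cohen's kappa and accuracy for the three-way task.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score, accuracy_score

# Binary task: arthritis (1) vs no arthritis (0), with model probabilities.
y_true_bin = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob_bin = np.array([0.1, 0.4, 0.8, 0.9, 0.65, 0.2, 0.7, 0.3])
print("AUC:", roc_auc_score(y_true_bin, y_prob_bin))

# Three-way task: 0 = no arthritis, 1 = OA, 2 = RA.
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2])
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
print("Accuracy:", accuracy_score(y_true, y_pred))
```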


Subject(s)
Arthritis, Rheumatoid , Deep Learning , Osteoarthritis , Humans , Retrospective Studies , Radiography , Osteoarthritis/diagnostic imaging , Arthritis, Rheumatoid/diagnostic imaging
8.
Sci Rep ; 13(1): 189, 2023 01 05.
Article in English | MEDLINE | ID: mdl-36604467

ABSTRACT

Non-contrast head CT (NCCT) is extremely insensitive for early (< 3-6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative, 90 stroke-positive middle cerebral artery territory only infarcts), with sensitivity 96% (specificity 72%) for the model versus 61-66% (specificity 90-92%) for the experts; model infarct volume estimates also strongly correlated with those of diffusion MRI (r2 > 0.98). When this 150 CT test set was expanded to include a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative, 270 stroke-positive mixed territory infarcts), model sensitivity was 97%, specificity 99%, for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.
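
Below is a minimal sketch of the two kinds of numbers reported, detection sensitivity/specificity and the r² between model and diffusion-MRI infarct volumes, computed on illustrative values rather than the study data.

```python
# Sensitivity/specificity from a binary confusion matrix and r^2 between
# model and diffusion-MRI infarct volumes. All values are toy examples.
import numpy as np

def sens_spec(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")

model_vol = np.array([12.0, 55.0, 80.0, 3.0])   # mL, hypothetical
mri_vol   = np.array([10.5, 57.0, 78.0, 2.5])
r = np.corrcoef(model_vol, mri_vol)[0, 1]
print(f"r^2={r**2:.3f}")
```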


Subject(s)
Deep Learning , Stroke , Humans , Tomography, X-Ray Computed , Stroke/diagnostic imaging , Magnetic Resonance Imaging , Infarction, Middle Cerebral Artery
9.
AJR Am J Roentgenol ; 220(2): 236-244, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36043607

ABSTRACT

BACKGROUND. CT-based body composition (BC) measurements have historically been too resource intensive to analyze for widespread use and have lacked robust comparison with traditional weight metrics for predicting cardiovascular risk. OBJECTIVE. The aim of this study was to determine whether BC measurements obtained from routine CT scans by use of a fully automated deep learning algorithm could predict subsequent cardiovascular events independently from weight, BMI, and additional cardiovascular risk factors. METHODS. This retrospective study included 9752 outpatients (5519 women and 4233 men; mean age, 53.2 years; 890 patients self-reported their race as Black and 8862 self-reported their race as White) who underwent routine abdominal CT at a single health system from January 2012 through December 2012 and who were given no major cardiovascular or oncologic diagnosis within 3 months of undergoing CT. Using publicly available code, fully automated deep learning BC analysis was performed at the L3 vertebral body level to determine three BC areas (skeletal muscle area [SMA], visceral fat area [VFA], and subcutaneous fat area [SFA]). Age-, sex-, and race-normalized reference curves were used to generate z scores for the three BC areas. Subsequent myocardial infarction (MI) or stroke was determined from the electronic medical record. Multivariable-adjusted Cox proportional hazards models were used to determine hazard ratios (HRs) for MI or stroke within 5 years after CT for the three BC area z scores, with adjustment for normalized weight, normalized BMI, and additional cardiovascular risk factors (smoking status, diabetes diagnosis, and systolic blood pressure). RESULTS. In multivariable models, age-, race-, and sex-normalized VFA was associated with subsequent MI risk (HR of highest quartile compared with lowest quartile, 1.31 [95% CI, 1.03-1.67], p = .04 for overall effect) and stroke risk (HR of highest compared with lowest quartile, 1.46 [95% CI, 1.07-2.00], p = .04 for overall effect). In multivariable models, normalized SMA, SFA, weight, and BMI were not associated with subsequent MI or stroke risk. CONCLUSION. VFA derived from fully automated and normalized analysis of abdominal CT examinations predicts subsequent MI or stroke in Black and White patients, independent of traditional weight metrics, and should be considered an adjunct to BMI in risk models. CLINICAL IMPACT. Fully automated and normalized BC analysis of abdominal CT has promise to augment traditional cardiovascular risk prediction models.
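
A hedged sketch of the two analysis steps the abstract describes: normalizing visceral fat area to a reference-group z score and fitting a multivariable Cox proportional hazards model for time to MI or stroke. The column names, toy data, and use of the lifelines package are assumptions for illustration.

```python
# Reference-group z scores followed by a Cox proportional hazards fit.
# lifelines is one common package for this; data and columns are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "vfa_cm2":  [110, 250, 90, 300, 180, 60, 270, 150, 260, 100],
    "stratum":  ["F50s", "F50s", "M60s", "M60s", "F50s",
                 "M60s", "F50s", "M60s", "F50s", "M60s"],  # age/sex/race reference group
    "years_to_event": [5.0, 2.1, 5.0, 1.4, 4.2, 5.0, 3.3, 5.0, 5.0, 2.8],
    "event":   [0, 1, 0, 1, 0, 0, 1, 0, 0, 1],   # 1 = MI or stroke during follow-up
    "smoker":  [0, 1, 0, 1, 1, 0, 0, 0, 1, 0],
})

# z score of visceral fat area within each reference stratum
df["vfa_z"] = df.groupby("stratum")["vfa_cm2"].transform(lambda x: (x - x.mean()) / x.std())

cph = CoxPHFitter()
cph.fit(df[["years_to_event", "event", "vfa_z", "smoker"]],
        duration_col="years_to_event", event_col="event")
cph.print_summary()  # hazard ratios are exp(coef)
```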


Subject(s)
Cardiovascular Diseases , Deep Learning , Stroke , Male , Humans , Female , Middle Aged , Retrospective Studies , Risk Factors , Outpatients , Body Composition , Tomography, X-Ray Computed/methods , Cardiovascular Diseases/diagnostic imaging
10.
Radiology ; 306(2): e220101, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36125375

ABSTRACT

Background Adrenal masses are common, but radiology reporting and recommendations for management can be variable. Purpose To create a machine learning algorithm to segment adrenal glands on contrast-enhanced CT images and classify glands as normal or mass-containing and to assess algorithm performance. Materials and Methods This retrospective study included two groups of contrast-enhanced abdominal CT examinations (development data set and secondary test set). Adrenal glands in the development data set were manually segmented by radiologists. Images in both the development data set and the secondary test set were manually classified as normal or mass-containing. Deep learning segmentation and classification models were trained on the development data set and evaluated on both data sets. Segmentation performance was evaluated with use of the Dice similarity coefficient (DSC), and classification performance with use of sensitivity and specificity. Results The development data set contained 274 CT examinations (251 patients; median age, 61 years; 133 women), and the secondary test set contained 991 CT examinations (991 patients; median age, 62 years; 578 women). The median model DSC on the development test set was 0.80 (IQR, 0.78-0.89) for normal glands and 0.84 (IQR, 0.79-0.90) for adrenal masses. On the development reader set, the median interreader DSC was 0.89 (IQR, 0.78-0.93) for normal glands and 0.89 (IQR, 0.85-0.97) for adrenal masses. Interreader DSC for radiologist manual segmentation did not differ from automated machine segmentation (P = .35). On the development test set, the model had a classification sensitivity of 83% (95% CI: 55, 95) and specificity of 89% (95% CI: 75, 96). On the secondary test set, the model had a classification sensitivity of 69% (95% CI: 58, 79) and specificity of 91% (95% CI: 90, 92). Conclusion A two-stage machine learning pipeline was able to segment the adrenal glands and differentiate normal adrenal glands from those containing masses. © RSNA, 2022 Online supplemental material is available for this article.
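
The Dice similarity coefficient used to score the segmentations reduces to a simple overlap ratio between binary masks, as in the toy sketch below (arrays are illustrative, not study data).

```python
# Dice similarity coefficient (DSC) between two binary masks.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[22:42, 22:42] = 1
print(f"DSC = {dice(pred, truth):.3f}")
```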


Subject(s)
Machine Learning , Tomography, X-Ray Computed , Humans , Female , Middle Aged , Tomography, X-Ray Computed/methods , Retrospective Studies , Algorithms , Adrenal Glands
11.
J Med Imaging (Bellingham) ; 10(6): 061405, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38162316

ABSTRACT

Picture archiving and communication systems (PACS), which digitally acquire, archive, transmit, and display medical images, ultimately enabled the transition from an analog, film-based operation to a digital workflow, revolutionizing radiology. This article briefly traces early-generation systems to present-day PACS, noting challenges along with key technological advances and benefits. Thoughts on future PACS evolution are discussed, including the promise of integrating artificial intelligence applications.

12.
J Med Imaging (Bellingham) ; 9(Suppl 1): S12210, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36259081

ABSTRACT

To commemorate the SPIE Medical Imaging 50th anniversary, this article provides a brief review of the Picture Archiving and Communication Systems (PACS) and Informatics conferences. Important topics and advances, contributing researchers from both academia and industry, and key papers are noted.

13.
PLoS One ; 17(4): e0267213, 2022.
Article in English | MEDLINE | ID: mdl-35486572

ABSTRACT

A standardized, objective evaluation method is needed to compare machine learning (ML) algorithms as these tools become available for clinical use. We therefore designed, built, and tested an evaluation pipeline with the goal of normalizing performance measurement of independently developed algorithms, using a common test dataset of our clinical imaging. Three vendor applications for detecting solid, part-solid, and ground-glass lung nodules in chest CT examinations were assessed in this retrospective study using our data-preprocessing and algorithm assessment chain. The pipeline included tools for image cohort creation and de-identification; report and image annotation for ground-truth labeling; server partitioning to receive vendor "black box" algorithms and to enable model testing on our internal clinical data (100 chest CTs with 243 nodules) from within our security firewall; model validation and result visualization; and performance assessment calculating algorithm recall, precision, and receiver operating characteristic (ROC) curves. Algorithm true positives, false positives, false negatives, recall, and precision for detecting lung nodules were as follows: Vendor-1 (194, 23, 49, 0.80, 0.89); Vendor-2 (182, 270, 61, 0.75, 0.40); Vendor-3 (75, 120, 168, 0.32, 0.39). The AUCs for detection of solid (0.61-0.74), ground-glass (0.66-0.86), and part-solid (0.52-0.86) nodules varied among the three vendors. Our ML model validation pipeline enabled testing of multi-vendor algorithms within the institutional firewall. The wide variation in algorithm performance for detection and classification of lung nodules justifies the need for a standardized, objective ML algorithm evaluation process.
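
The per-vendor recall and precision follow directly from the reported true-positive, false-positive, and false-negative counts, as the short sketch below shows; small differences from the published values may reflect rounding.

```python
# Recall and precision recomputed from the reported TP/FP/FN counts.
vendors = {
    "Vendor-1": (194, 23, 49),
    "Vendor-2": (182, 270, 61),
    "Vendor-3": (75, 120, 168),
}
for name, (tp, fp, fn) in vendors.items():
    recall = tp / (tp + fn)        # sensitivity for nodule detection
    precision = tp / (tp + fp)     # positive predictive value
    print(f"{name}: recall={recall:.2f}, precision={precision:.2f}")
```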


Subject(s)
Lung Neoplasms , Algorithms , Humans , Lung Neoplasms/diagnosis , Machine Learning , Retrospective Studies , Tomography, X-Ray Computed/methods
14.
Sci Rep ; 12(1): 2154, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140277

ABSTRACT

Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify its volume on magnetic resonance imaging (MRI) has important treatment implications. We developed a machine learning model that used the apparent diffusion coefficient and diffusion-weighted imaging series. It was trained on 6,657 MRI studies from Massachusetts General Hospital (MGH; Boston, USA). All studies were labeled positive or negative for infarct (classification annotation), with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while avoiding the extensive time required to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at MGH, consecutive stroke team activations for 6 months at a hospital that did not provide training data (Brigham and Women's Hospital [BWH]; Boston, USA), and an international site (Diagnósticos da América SA [DASA]; Brazil). The model results were compared to radiologist ground-truth interpretations. The model performed better when trained on classification and segmentation annotations (area under the receiver operating characteristic curve [AUROC] 0.995 [95% CI 0.992-0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR 0.642-0.861]) than on segmentation annotations alone (AUROC 0.982 [95% CI 0.972-0.990] and Dice coefficient 0.776 [IQR 0.584-0.857]). The model accurately identified infarcts for MGH stroke team activations (AUROC 0.964 [95% CI 0.943-0.982], 381 studies), BWH stroke team activations (AUROC 0.981 [95% CI 0.966-0.993], 247 studies), and at DASA (AUROC 0.998 [95% CI 0.993-1.000], 171 studies). The model accurately segmented infarcts, with Pearson correlations between model output and ground-truth volumes of 0.968 to 0.986 across the three scenarios. Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
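
One common way to train on the mix of study-level labels and voxel-level masks described here is a joint loss in which the segmentation term is applied only to studies that have masks; the PyTorch sketch below illustrates that idea generically and is not necessarily the authors' exact formulation.

```python
# Generic mixed-annotation loss: classification loss on every study,
# segmentation loss only where a mask exists. Model outputs are stand-ins.
import torch
import torch.nn.functional as F

def mixed_loss(cls_logits, seg_logits, labels, masks, has_mask, seg_weight=1.0):
    """
    cls_logits: (B,) study-level logits; seg_logits: (B, 1, H, W) pixel logits
    labels: (B,) 0/1 infarct labels; masks: (B, 1, H, W) 0/1 ground-truth masks
    has_mask: (B,) bool flags for studies with segmentation annotations
    """
    loss = F.binary_cross_entropy_with_logits(cls_logits, labels.float())
    if has_mask.any():
        loss = loss + seg_weight * F.binary_cross_entropy_with_logits(
            seg_logits[has_mask], masks[has_mask].float())
    return loss

# Toy usage with random tensors in place of network outputs.
B, H, W = 4, 32, 32
loss = mixed_loss(torch.randn(B), torch.randn(B, 1, H, W),
                  torch.randint(0, 2, (B,)), torch.randint(0, 2, (B, 1, H, W)),
                  torch.tensor([True, False, True, False]))
print(loss.item())
```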

15.
Radiol Artif Intell ; 4(1): e210080, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35146434

ABSTRACT

Body composition on chest CT scans encompasses a set of important imaging biomarkers. This study developed and validated a fully automated analysis pipeline for multi-vertebral level assessment of muscle and adipose tissue on routine chest CT scans. This study retrospectively trained two convolutional neural networks on 629 chest CT scans from 629 patients (55% women; mean age, 67 years ± 10 [standard deviation]) obtained between 2014 and 2017 prior to lobectomy for primary lung cancer at three institutions. A slice-selection network was developed to identify an axial image at the level of the fifth, eighth, and 10th thoracic vertebral bodies. A segmentation network (U-Net) was trained to segment muscle and adipose tissue on an axial image. Radiologist-guided manual-level selection and segmentation generated ground truth. The authors then assessed the predictive performance of their approach for cross-sectional area (CSA) (in centimeters squared) and attenuation (in Hounsfield units) on an independent test set. For the pipeline, median absolute error and intraclass correlation coefficients for both tissues were 3.6% (interquartile range, 1.3%-7.0%) and 0.959-0.998 for the CSA and 1.0 HU (interquartile range, 0.0-2.0 HU) and 0.95-0.99 for median attenuation. This study demonstrates accurate and reliable fully automated multi-vertebral level quantification and characterization of muscle and adipose tissue on routine chest CT scans. Keywords: Skeletal Muscle, Adipose Tissue, CT, Chest, Body Composition Analysis, Convolutional Neural Network (CNN), Supervised Learning Supplemental material is available for this article. © RSNA, 2022.
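
Below is a worked sketch of the two quantities the pipeline reports per tissue, cross-sectional area in cm² and median attenuation in HU, computed from a binary mask, a CT slice, and the DICOM pixel spacing; the arrays and spacing are illustrative.

```python
# Cross-sectional area (cm^2) and median attenuation (HU) from a binary
# tissue mask, an HU image, and pixel spacing. Toy data, not study values.
import numpy as np

def area_and_attenuation(mask: np.ndarray, hu_image: np.ndarray, pixel_spacing_mm):
    """mask: binary tissue mask; hu_image: same-shape CT slice in Hounsfield units."""
    row_mm, col_mm = pixel_spacing_mm
    pixel_area_cm2 = (row_mm * col_mm) / 100.0      # mm^2 -> cm^2
    csa_cm2 = mask.sum() * pixel_area_cm2
    median_hu = float(np.median(hu_image[mask > 0]))
    return csa_cm2, median_hu

mask = np.zeros((512, 512), dtype=np.uint8); mask[200:300, 150:350] = 1
hu = np.random.default_rng(0).normal(40, 10, size=(512, 512))  # muscle-like HU values
print(area_and_attenuation(mask, hu, pixel_spacing_mm=(0.8, 0.8)))
```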

16.
J Neurosurg Spine ; : 1-11, 2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35213829

ABSTRACT

OBJECTIVE: Cancer patients with spinal metastases may undergo surgery without clear assessments of prognosis, thereby impacting the optimal palliative strategy. Because the morbidity of surgery may adversely impact recovery and initiation of adjuvant therapies, evaluation of risk factors associated with mortality risk and complications is critical. Evaluation of body composition of cancer patients as a surrogate for frailty is an emerging area of study for improving preoperative risk stratification. METHODS: To examine the associations of muscle characteristics and adiposity with postoperative complications, length of stay, and mortality in patients with spinal metastases, the authors designed an observational study of 484 cancer patients who received surgical treatment for spinal metastases between 2010 and 2019. Sarcopenia, muscle radiodensity, visceral adiposity, and subcutaneous adiposity were assessed on routinely available 3-month preoperative CT images by using a validated deep learning methodology. The authors used k-means clustering analysis to identify patients with similar body composition characteristics. Regression models were used to examine the associations of sarcopenia, frailty, and clusters with the outcomes of interest. RESULTS: Of 484 patients enrolled, 303 had evaluable CT data on muscle and adiposity (mean age 62.00 ± 11.91 years; 57.8% male). The authors identified 2 clusters with significantly different body composition characteristics and mortality risks after spine metastases surgery. Patients in cluster 2 (high-risk cluster) had lower muscle mass index (mean ± SD 41.16 ± 7.99 vs 50.13 ± 10.45 cm2/m2), lower subcutaneous fat area (147.62 ± 57.80 vs 289.83 ± 109.31 cm2), lower visceral fat area (82.28 ± 48.96 vs 239.26 ± 98.40 cm2), higher muscle radiodensity (35.67 ± 9.94 vs 31.13 ± 9.07 Hounsfield units [HU]), and significantly higher risk of 1-year mortality (adjusted HR 1.45, 95% CI 1.05-2.01, p = 0.02) than individuals in cluster 1 (low-risk cluster). Decreased muscle mass, muscle radiodensity, and adiposity were not associated with a higher rate of complications after surgery. Prolonged length of stay (> 7 days) was associated with low muscle radiodensity (mean 30.87 vs 35.23 HU, 95% CI 1.98-6.73, p < 0.001). CONCLUSIONS: Body composition analysis shows promise for better risk stratification of patients with spinal metastases under consideration for surgery. Those with lower muscle mass and subcutaneous and visceral adiposity are at greater risk for inferior outcomes.
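
A minimal sketch of the clustering step in the methods follows: k-means on standardized body-composition features to define phenotypic clusters, using scikit-learn on hypothetical values; cluster membership would then be tested against 1-year mortality.

```python
# k-means on scaled body-composition features (hypothetical values).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = np.array([
    # muscle_index_cm2_m2, subcut_fat_cm2, visceral_fat_cm2, muscle_HU
    [41.2, 148.0,  82.0, 35.7],
    [50.1, 290.0, 239.0, 31.1],
    [43.0, 160.0,  95.0, 34.0],
    [52.3, 310.0, 260.0, 30.2],
    [39.8, 130.0,  70.0, 36.5],
    [49.5, 275.0, 230.0, 32.0],
])

X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # cluster labels would then be compared against 1-year mortality
```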

17.
Acad Radiol ; 29(2): 236-244, 2022 02.
Article in English | MEDLINE | ID: mdl-33583714

ABSTRACT

OBJECTIVE: To assess the impact of using a computer-assisted reporting and decision support (CAR/DS) tool at the radiologist point of care on ordering-provider compliance with recommendations for adrenal incidentaloma workup. METHODS: Abdominal CT reports describing adrenal incidentalomas (2014-2016) were retrospectively extracted from the radiology database. Exclusion criteria were history of cancer, suspected functioning adrenal tumor, dominant nodule size < 1 cm or ≥ 4 cm, myelolipomas, cysts, and hematomas. Multivariable logistic regression models were employed to predict follow-up imaging (FUI) and hormonal screening orders as a function of patient age and sex, nodule size, and CAR/DS use. CAR/DS reports were compared with conventional reports regarding ordering-provider compliance with, and the frequency and completeness of, guideline-warranted recommendations for FUI and hormonal screening of adrenal incidentalomas, using the chi-square test. RESULTS: Of 174 patients (mean age 62.4 years; 51.1% women) with adrenal incidentalomas, 62% (108/174) received CAR/DS-based recommendations versus 38% (66/174) unassisted recommendations. CAR/DS use was an independent predictor of provider compliance with both FUI (odds ratio [OR] = 2.47, p = 0.02) and hormonal screening (OR = 2.38, p = 0.04). CAR/DS reports recommended FUI (97.2%, 105/108) and hormonal screening (87.0%, 94/108) more often than conventional reports (69.7% [46/66] and 3.0% [2/66], respectively; both p < 0.0001). CAR/DS recommendations more frequently included instructions for FUI timing, protocol, and modality than conventional reports (all p < 0.001). CONCLUSION: Ordering providers were at least twice as likely to comply with report recommendations for FUI and hormonal evaluation of adrenal incidentalomas generated using CAR/DS versus unassisted reporting. CAR/DS-directed recommendations were also more adherent to guidelines than those generated without the tool.
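
Below is a hedged sketch of a multivariable logistic regression of the kind described, predicting follow-up-imaging orders from age, sex, nodule size, and CAR/DS use and reporting odds ratios; the statsmodels formula, column names, and simulated data are assumptions for illustration.

```python
# Multivariable logistic regression with odds ratios from the fitted
# coefficients. Data and column names are simulated, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "fui_ordered": rng.integers(0, 2, n),      # 1 = follow-up imaging ordered
    "age": rng.normal(62, 12, n),
    "female": rng.integers(0, 2, n),
    "nodule_cm": rng.uniform(1.0, 3.9, n),
    "cards_used": rng.integers(0, 2, n),        # 1 = CAR/DS-assisted report
})

model = smf.logit("fui_ordered ~ age + female + nodule_cm + cards_used", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)              # OR = exp(coefficient)
print(pd.concat([odds_ratios.rename("OR"), model.pvalues.rename("p")], axis=1))
```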


Subject(s)
Adrenal Gland Neoplasms , Adrenal Gland Neoplasms/diagnostic imaging , Computers , Female , Follow-Up Studies , Humans , Incidental Findings , Male , Middle Aged , Retrospective Studies , Tomography, X-Ray Computed
18.
Radiol Artif Intell ; 3(6): e210152, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870224

ABSTRACT

Artificial intelligence (AI) tools are rapidly being developed for radiology and other clinical areas. These tools have the potential to dramatically change clinical practice; however, for these tools to be usable and function as intended, they must be integrated into existing radiology systems. In a collaborative effort between the Radiological Society of North America, radiologists, and imaging-focused vendors, the Imaging AI in Practice (IAIP) demonstrations were developed to show how AI tools can generate, consume, and present results throughout the radiology workflow in a simulated clinical environment. The IAIP demonstrations highlight the critical importance of semantic and interoperability standards, as well as orchestration profiles for successful clinical integration of radiology AI tools. Keywords: Computer Applications-General (Informatics), Technology Assessment © RSNA, 2021.
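
Purely as a hypothetical illustration of the integration problem the demonstrations address, the sketch below packages an AI result and posts it to an imagined orchestrator endpoint; the URL, payload schema, and field names are invented, and real deployments would rely on the DICOM/HL7/FHIR-based standards the article emphasizes.

```python
# Hypothetical sketch only: package an AI result and hand it to a results
# orchestrator. Endpoint, schema, and field names are invented for
# illustration; production integration would use standards-based messaging.
import requests

result = {
    "study_instance_uid": "1.2.840.113619.2.55.3.1234",   # example UID
    "algorithm": "lung-nodule-detector",
    "version": "1.0.0",
    "findings": [{"label": "solid nodule", "probability": 0.91,
                  "series_number": 3, "slice_index": 57}],
}

# response = requests.post("https://orchestrator.example.org/ai-results",
#                          json=result, timeout=10)
# response.raise_for_status()
```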

19.
J Digit Imaging ; 34(6): 1424-1429, 2021 12.
Article in English | MEDLINE | ID: mdl-34608591

ABSTRACT

With vast interest in machine learning, more investigators are proposing to assemble large imaging datasets for machine learning applications. We aim to delineate the roadblocks to exam retrieval that may present themselves and lead to significant time delays. This HIPAA-compliant, institutional review board-approved, retrospective clinical study required identification and retrieval of all outpatient and emergency patients undergoing abdominal and pelvic computed tomography (CT) at three affiliated hospitals in the year 2012. If a patient had multiple abdominal CT exams, the first exam was selected for retrieval (n = 23,186). Our experience in attempting to retrieve 23,186 abdominal CT exams yielded 22,852 valid CT abdomen/pelvis exams and identified four major categories of challenges when retrieving large datasets: cohort selection and processing, retrieving DICOM exam files from PACS, data storage, and non-recoverable failures. The retrieval took 3 months of project time and a minimum of 300 person-hours shared among the primary investigator (a radiologist), a data scientist, and a software engineer. Exam selection and retrieval may take significantly longer than planned. We share our experience so that other investigators can anticipate and plan for these challenges. We also hope to help institutions better understand the demands that may be placed on their infrastructure by large-scale medical imaging machine learning projects.
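
As an illustration of one retrieval step of the kind the article describes, the sketch below issues a DICOM C-FIND study query with pynetdicom; the PACS host, port, AE titles, and patient identifiers are placeholders, and the institution's actual workflow may have differed.

```python
# Illustrative DICOM C-FIND query for a patient's 2012 CT studies using
# pynetdicom. Host, port, AE titles, and identifiers are placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345678"            # placeholder
query.ModalitiesInStudy = "CT"
query.StudyDate = "20120101-20121231"
query.StudyInstanceUID = ""             # requested return key

ae = AE(ae_title="RESEARCH_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)
assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_SCP")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)   # candidate study for C-MOVE retrieval
    assoc.release()
```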


Subject(s)
Machine Learning , Tomography, X-Ray Computed , Abdomen , Humans , Radiography , Retrospective Studies
20.
Radiol Artif Intell ; 3(4): e200184, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350408

ABSTRACT

PURPOSE: To develop a deep learning model for detecting brain abnormalities on MR images. MATERIALS AND METHODS: In this retrospective study, a deep learning approach using T2-weighted fluid-attenuated inversion recovery images was developed to classify brain MRI findings as "likely normal" or "likely abnormal." A convolutional neural network model was trained on a large, heterogeneous dataset collected from two different continents and covering a broad panel of pathologic conditions, including neoplasms, hemorrhages, infarcts, and others. Three datasets were used. Dataset A consisted of 2839 patients, dataset B consisted of 6442 patients, and dataset C consisted of 1489 patients and was only used for testing. Datasets A and B were split into training, validation, and test sets. A total of three models were trained: model A (using only dataset A), model B (using only dataset B), and model A + B (using training datasets from A and B). All three models were tested on subsets from dataset A, dataset B, and dataset C separately. The evaluation was performed by using annotations based on the images, as well as labels based on the radiology reports. RESULTS: Model A trained on dataset A from one institution and tested on dataset C from another institution reached an F1 score of 0.72 (95% CI: 0.70, 0.74) and an area under the receiver operating characteristic curve of 0.78 (95% CI: 0.75, 0.80) when compared with findings from the radiology reports. CONCLUSION: The model shows relatively good performance for differentiating between likely normal and likely abnormal brain examination findings by using data from different institutions. Keywords: MR-Imaging, Head/Neck, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms © RSNA, 2021. Supplemental material is available for this article.
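
Below is a generic sketch of a binary "likely normal" versus "likely abnormal" CNN classifier for single-channel FLAIR input, built from a torchvision ResNet; this is an architectural illustration under stated assumptions, not the authors' published model.

```python
# Generic binary classifier for single-channel FLAIR slices, adapted from a
# torchvision ResNet. Architecture choice and input size are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)                  # train from scratch or load weights
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel FLAIR
model.fc = nn.Linear(model.fc.in_features, 2)          # likely normal vs likely abnormal

flair_batch = torch.randn(4, 1, 256, 256)              # toy batch of FLAIR slices
logits = model(flair_batch)
print(logits.shape)                                    # torch.Size([4, 2])
```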
