Results 1 - 9 of 9
1.
Radiol Artif Intell ; 6(3): e230240, 2024 May.
Article in English | MEDLINE | ID: mdl-38477660

ABSTRACT

Purpose: To evaluate the robustness of an award-winning bone age deep learning (DL) model to extensive variations in image appearance. Materials and Methods: In December 2021, the DL bone age model that won the 2017 RSNA Pediatric Bone Age Challenge was retrospectively evaluated using the RSNA validation set (1425 pediatric hand radiographs; internal test set in this study) and the Digital Hand Atlas (DHA) (1202 pediatric hand radiographs; external test set). Each test image underwent seven types of transformations (rotations, flips, brightness, contrast, inversion, laterality marker, and resolution) to represent a range of image appearances, many of which simulate real-world variations. Computational "stress tests" were performed by comparing the model's predictions on baseline and transformed images. Mean absolute differences (MADs) of predicted bone ages compared with radiologist-determined ground truth on baseline versus transformed images were compared using Wilcoxon signed rank tests. The proportion of clinically significant errors (CSEs) was compared using McNemar tests. Results: There was no evidence of a difference in MAD of the model on the two baseline test sets (RSNA = 6.8 months, DHA = 6.9 months; P = .05), indicating good model generalization to external data. Except for the RSNA dataset images with an appended radiologic laterality marker (P = .86), there were significant differences in MAD for both the DHA and RSNA datasets among other transformation groups (rotations, flips, brightness, contrast, inversion, and resolution). There were significant differences in proportion of CSEs for 57% of the image transformations (19 of 33) performed on the DHA dataset. Conclusion: Although an award-winning pediatric bone age DL model generalized well to curated external images, it had inconsistent predictions on images that had undergone simple transformations reflective of several real-world variations in image appearance.
Keywords: Pediatrics, Hand, Convolutional Neural Network, Radiography. Supplemental material is available for this article. © RSNA, 2024. See also commentary by Faghani and Erickson in this issue.
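The stress-test comparison described above (paired per-image errors on baseline versus transformed images, compared with a Wilcoxon signed rank test) can be sketched as follows. The model outputs here are simulated stand-ins, not the study's predictions, and the error magnitudes are illustrative assumptions:

```python
import numpy as np
from scipy.stats import wilcoxon

def mad(predicted, ground_truth):
    """Mean absolute difference (months) between predictions and ground truth."""
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(ground_truth))))

# Simulated per-image bone ages (months): baseline vs. one transformation group.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(24, 180, size=100)          # radiologist-determined ages
baseline_pred = ground_truth + rng.normal(0, 7, 100)   # model on original images
rotated_pred = ground_truth + rng.normal(2, 9, 100)    # model on rotated images

baseline_err = np.abs(baseline_pred - ground_truth)
rotated_err = np.abs(rotated_pred - ground_truth)

# Paired, non-parametric comparison of per-image errors, as in the study design.
stat, p = wilcoxon(baseline_err, rotated_err)
print(f"baseline MAD={mad(baseline_pred, ground_truth):.1f} mo, "
      f"rotated MAD={mad(rotated_pred, ground_truth):.1f} mo, P={p:.3g}")
```

A significant P value here would flag the transformation group as one the model is not robust to, mirroring the article's per-transformation analysis.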


Subjects
Age Determination by Skeleton , Deep Learning , Child , Humans , Algorithms , Neural Networks, Computer , Radiography , Retrospective Studies , Age Determination by Skeleton/methods
2.
Skeletal Radiol ; 53(3): 445-454, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37584757

ABSTRACT

OBJECTIVE: The purpose of this systematic review was to summarize the results of original research studies evaluating the characteristics and performance of deep learning models for detection of knee ligament and meniscus tears on MRI. MATERIALS AND METHODS: We searched PubMed for studies published as of February 2, 2022 for original studies evaluating development and evaluation of deep learning models for MRI diagnosis of knee ligament or meniscus tears. We summarized study details according to multiple criteria including baseline article details, model creation, deep learning details, and model evaluation. RESULTS: 19 studies were included with radiology departments leading the publications in deep learning development and implementation for detecting knee injuries via MRI. Among the studies, there was a lack of standard reporting and inconsistently described development details. However, all included studies reported consistently high model performance that significantly supplemented human reader performance. CONCLUSION: From our review, we found radiology departments have been leading deep learning development for injury detection on knee MRIs. Although studies inconsistently described DL model development details, all reported high model performance, indicating great promise for DL in knee MRI analysis.


Subjects
Anterior Cruciate Ligament Injuries , Artificial Intelligence , Ligaments, Articular , Meniscus , Humans , Anterior Cruciate Ligament Injuries/diagnostic imaging , Ligaments, Articular/diagnostic imaging , Ligaments, Articular/injuries , Magnetic Resonance Imaging/methods , Meniscus/diagnostic imaging , Meniscus/injuries
3.
Radiol Artif Intell ; 5(2): e220062, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37035428

ABSTRACT

Purpose: To evaluate the performance and usability of code-free deep learning (CFDL) platforms in creating DL models for disease classification, object detection, and segmentation on chest radiographs. Materials and Methods: Six CFDL platforms were evaluated in this retrospective study (September 2021). Single- and multilabel classifiers were trained for thoracic pathologic conditions using Guangzhou pediatric and NIH-CXR14 (ie, National Institutes of Health ChestX-ray14) datasets, and external testing was performed using subsets of NIH-CXR14 and Stanford CheXpert datasets, respectively. Pneumonia detection and pneumothorax segmentation models were trained using the Radiological Society of North America (RSNA) Pneumonia and Society for Imaging Informatics in Medicine (SIIM) Pneumothorax datasets, respectively. Model performance was evaluated using F1 scores. Usability was evaluated based on feasibility of image uploading and model training, ease of use, and cost. Results: NIH-CXR14 and CheXpert datasets contained 112 120 (mean age, 47 years ± 17 [SD]; 63 340 male patients) and 151 522 images (mean age, 61 years ± 18; 88 931 male patients), respectively. The other datasets did not report demographics (Guangzhou, 5826 images; RSNA, 26 683 images; SIIM, 15 301 images). Six platforms offered single-label classifiers, four multilabel classifiers, five object detection models, and one segmentation model. Guangzhou pneumonia classifiers demonstrated good internal (F1, 0.93-0.99) and poor external (F1, 0.39-0.44) performance. Multilabel NIH-CXR14 classifiers showed poor internal and external performance (F1, 0.00-0.36 and 0.00-0.76, respectively). NIH-CXR14 single-label classifiers performed poorly (F1, 0.00, all). The single successfully trained pneumonia detection model had an F1 score of 0.48. No segmentation model was successfully trained. Platform usability was limited, with all requiring some type of coded solution. 
Conclusion: CFDL platforms demonstrated limited performance and usability for chest radiograph analysis. Keywords: Artificial Intelligence, Automated Machine Learning, Chest Radiographs, Deep Learning, Code-Free Deep Learning, Pneumonia, Pneumothorax, Radiology. Supplemental material is available for this article. © RSNA, 2023.
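The F1 scoring used to evaluate the classifiers above can be illustrated with a minimal sketch. The label vectors are hypothetical, not drawn from the study's datasets:

```python
def f1_score(y_true, y_pred):
    """Binary F1 = 2 * precision * recall / (precision + recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives -> F1 is defined as 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical pneumonia labels (1 = pneumonia) for an internal test subset.
internal_true = [1, 1, 1, 0, 0, 0, 1, 0]
internal_pred = [1, 1, 1, 0, 0, 0, 1, 1]
print(f"internal F1 = {f1_score(internal_true, internal_pred):.2f}")
```

Running the same metric on an external subset, as the study did, is what exposes the internal-versus-external performance gap (e.g., F1 of 0.93-0.99 internally falling to 0.39-0.44 externally).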

5.
Acad Radiol ; 30(5): 971-974, 2023 05.
Article in English | MEDLINE | ID: mdl-35965155

ABSTRACT

RATIONALE AND OBJECTIVES: With a track record of innovation and unique access to digital data, radiologists are distinctly positioned to usher in a new medical era of artificial intelligence (AI). MATERIALS AND METHODS: In this Perspective piece, we summarize AI initiatives that academic radiology departments should consider related to the traditional pillars of education, research, and clinical excellence, while also introducing a new opportunity for engagement with industry. RESULTS: We provide early successful examples of each as well as suggestions to guide departments towards future success. CONCLUSION: Our goal is to assist academic radiology leaders in bringing their departments into the AI era and realizing its full potential in our field.


Subjects
Radiology Department, Hospital , Radiology , Humans , Artificial Intelligence , Radiology/education , Radiologists , Forecasting
6.
Radiology ; 306(2): e220505, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36165796

ABSTRACT

Background: Although deep learning (DL) models have demonstrated expert-level ability for pediatric bone age prediction, they have shown poor generalizability and bias in other use cases. Purpose: To quantify generalizability and bias in a bone age DL model, measured by performance on external versus internal test sets and performance differences between demographic groups, respectively. Materials and Methods: The winning DL model of the 2017 RSNA Pediatric Bone Age Challenge, trained on 12 611 pediatric hand radiographs from two U.S. hospitals, was retrospectively evaluated. The DL model was tested from September 2021 to December 2021 on an internal validation set and an external test set of pediatric hand radiographs with diverse demographic representation. Images reporting ground-truth bone age were included for study. Mean absolute difference (MAD) between ground-truth bone age and the model-predicted bone age was calculated for each set. Generalizability was evaluated by comparing MAD between internal and external evaluation sets with use of t tests. Bias was evaluated by comparing MAD and clinically significant error rate (rate of errors changing the clinical diagnosis) between demographic groups with use of t tests or analysis of variance and χ2 tests, respectively (statistically significant difference defined as P < .05). Results: The internal validation set had images from 1425 individuals (773 boys), and the external test set had images from 1202 individuals (mean age, 133 months ± 60 [SD]; 614 boys). The bone age model generalized well to the external test set, with no difference in MAD (6.8 months in the validation set vs 6.9 months in the external set; P = .64). Model predictions would have led to clinically significant errors in 194 of 1202 images (16%) in the external test set. The MAD was greater for girls than boys in the internal validation set (P = .01) and in the subcategories of age and Tanner stage in the external test set (P < .001 for both). Conclusion: A deep learning (DL) bone age model generalized well to an external test set, although clinically significant sex-, age-, and sexual maturity-based biases in DL bone age were identified. © RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Larson in this issue.
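A minimal sketch of the generalizability analysis described above: compute MAD per test set, compare the per-image error distributions with an unpaired t test, and tally the clinically significant error (CSE) rate. The error distributions and the 12-month CSE threshold are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_ind

CSE_THRESHOLD = 12.0  # hypothetical: months of error assumed to change the diagnosis

rng = np.random.default_rng(1)
internal_err = np.abs(rng.normal(0, 8.5, 1425))  # simulated per-image errors (months)
external_err = np.abs(rng.normal(0, 8.6, 1202))

# Generalizability: unpaired (Welch) t test on per-image absolute errors.
t, p = ttest_ind(internal_err, external_err, equal_var=False)

# Bias/safety signal: fraction of external errors large enough to be clinically significant.
cse_rate = float(np.mean(external_err > CSE_THRESHOLD))

print(f"internal MAD={internal_err.mean():.1f} mo, "
      f"external MAD={external_err.mean():.1f} mo, P={p:.2f}, "
      f"external CSE rate={cse_rate:.0%}")
```

A non-significant P with similar MADs corresponds to the study's "generalized well" finding; the same comparison run within demographic subgroups (sex, age, Tanner stage) is how the bias was detected.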


Subjects
Deep Learning , Male , Female , Humans , Child , Infant , Retrospective Studies , Radiography
7.
Radiol Artif Intell ; 4(5): e220081, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204536

ABSTRACT

Purpose: To evaluate code and data sharing practices in original artificial intelligence (AI) scientific manuscripts published in the Radiological Society of North America (RSNA) journals suite from 2017 through 2021. Materials and Methods: A retrospective meta-research study was conducted of articles published in the RSNA journals suite from January 1, 2017, through December 31, 2021. A total of 218 articles were included and evaluated for code sharing practices, reproducibility of shared code, and data sharing practices. Categorical comparisons were conducted using Fisher exact tests with respect to year and journal of publication, author affiliation(s), and type of algorithm used. Results: Of the 218 included articles, 73 (34%) shared code, with 24 (33% of code sharing articles and 11% of all articles) sharing reproducible code. Radiology and Radiology: Artificial Intelligence published the most code sharing articles (48 [66%] and 21 [29%], respectively). Twenty-nine articles (13%) shared data, and 12 of these articles (41% of data sharing articles) shared complete experimental data by using only public domain datasets. Four of the 218 articles (2%) shared both code and complete experimental data. Code sharing rates were statistically higher in 2020 and 2021 compared with earlier years (P < .01) and were higher in Radiology and Radiology: Artificial Intelligence compared with other journals (P < .01). Conclusion: Original AI scientific articles in the RSNA journals suite had low rates of code and data sharing, emphasizing the need for open-source code and data to achieve transparent and reproducible science. Keywords: Meta-Analysis, AI in Education, Machine Learning. Supplemental material is available for this article. © RSNA, 2022.
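The categorical comparison with Fisher exact tests used above can be sketched on a 2×2 table of code-sharing counts by publication period. The counts here are illustrative, not the study's exact breakdown:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: [shared code, did not share code]
# rows: articles from 2017-2019 vs. articles from 2020-2021.
table = [[15, 60],
         [58, 85]]

# Fisher exact test is appropriate for small-count categorical comparisons.
odds_ratio, p = fisher_exact(table)
print(f"odds ratio={odds_ratio:.2f}, P={p:.4f}")
```

An odds ratio below 1 for the earlier-years row, with a small P value, would correspond to the study's finding that code sharing rates rose significantly in 2020-2021.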

8.
AJR Am J Roentgenol ; 219(6): 869-878, 2022 12.
Article in English | MEDLINE | ID: mdl-35731103

ABSTRACT

Fractures are common injuries that can be difficult to diagnose, with missed fractures accounting for most misdiagnoses in the emergency department. Artificial intelligence (AI) and, specifically, deep learning have shown a strong ability to accurately detect fractures and augment the performance of radiologists in proof-of-concept research settings. Although the number of real-world AI products available for clinical use continues to increase, guidance for practicing radiologists in the adoption of this new technology is limited. This review describes how AI and deep learning algorithms can help radiologists to better diagnose fractures. The article also provides an overview of commercially available U.S. FDA-cleared AI tools for fracture detection as well as considerations for the clinical adoption of these tools by radiology practices.


Subjects
Fractures, Bone , Radiology , Humans , Artificial Intelligence , Radiologists , Algorithms , Radiography , Fractures, Bone/diagnostic imaging
9.
Acad Radiol ; 2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35105524

ABSTRACT

RATIONALE AND OBJECTIVES: The introduction of AI in radiology has prompted both excitement and hesitation within the field. We performed a systematic review of original studies evaluating the attitudes of radiologists, radiology trainees, and medical students towards AI in radiology. MATERIALS AND METHODS: We searched PubMed for studies published as of August 24, 2021 for original studies evaluating attitudes of radiologists (attendings and trainees) and medical students towards AI in radiology. We summarized the baseline article characteristics and performed thematic analysis of the questions asked in each study. RESULTS: Nineteen studies were included evaluating attitudes across different levels of training (medical students, radiology trainees, and radiology attendings) with representation from nearly every continent. Medical students and radiologists alike favored increased educational initiatives, and displayed interest in learning about and implementing AI solutions themselves, despite reporting of a current gap in formal AI training. There was general optimism about the role of AI in radiology, although radiologists and trainees had greater consensus than medical students. CONCLUSION: Although there is interest in incorporating AI into medical education and optimism among radiologists towards AI, medical students are more divided in their views. We propose that outreach to and AI education for medical students may help improve their attitudes towards the potentially transformative technology of AI for radiology.
