Results 1 - 10 of 10
3.
Radiology ; 310(1): e230764, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38165245

ABSTRACT

While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, wide implementation of AI-supported data acquisition methods in clinical practice will require establishing trusted and reliable results. This will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.


Subject(s)
Artificial Intelligence, Commerce, Humans, Radionuclide Imaging, Physical Examination, Radiologists
4.
Radiology ; 309(3): e230860, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38085079

ABSTRACT

Background: Chest radiography remains the most common radiologic examination, and interpretation of its results can be difficult.

Purpose: To explore the potential benefit of artificial intelligence (AI) assistance in the detection of thoracic abnormalities on chest radiographs by evaluating the performance of radiologists with different levels of expertise, with and without AI assistance.

Materials and Methods: Patients who underwent both chest radiography and thoracic CT within 72 hours between January 2010 and December 2020 in a French public hospital were screened retrospectively. Radiographs were randomly included until reaching 500 radiographs, with about 50% of radiographs having abnormal findings. A senior thoracic radiologist annotated the radiographs for five abnormalities (pneumothorax, pleural effusion, consolidation, mediastinal and hilar mass, lung nodule) based on the corresponding CT results (ground truth). A total of 12 readers (four thoracic radiologists, four general radiologists, four radiology residents) read half the radiographs without AI and half the radiographs with AI (ChestView; Gleamer). Changes in sensitivity and specificity were measured using paired t tests.

Results: The study included 500 patients (mean age, 54 years ± 19 [SD]; 261 female, 239 male), with 522 abnormalities visible on 241 radiographs. On average, for all readers, AI use resulted in an absolute increase in sensitivity of 26% (95% CI: 20, 32), 14% (95% CI: 11, 17), 12% (95% CI: 10, 14), 8.5% (95% CI: 6, 11), and 5.9% (95% CI: 4, 8) for pneumothorax, consolidation, nodule, pleural effusion, and mediastinal and hilar mass, respectively (P < .001). Specificity increased with AI assistance (3.9% [95% CI: 3.2, 4.6], 3.7% [95% CI: 3, 4.4], 2.9% [95% CI: 2.3, 3.5], and 2.1% [95% CI: 1.6, 2.6] for pleural effusion, mediastinal and hilar mass, consolidation, and nodule, respectively), except in the diagnosis of pneumothorax (-0.2%; 95% CI: -0.36, -0.04; P = .01). The mean reading time was 81 seconds without AI versus 56 seconds with AI (31% decrease, P < .001).

Conclusion: AI-assisted chest radiography interpretation resulted in absolute increases in sensitivity for all radiologists of various levels of expertise and reduced the reading times; specificity increased with AI, except in the diagnosis of pneumothorax. © RSNA, 2023. Supplemental material is available for this article.
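
The comparison above rests on paired per-reader measurements. As a rough illustration of that design (not the authors' analysis code, and with hypothetical numbers standing in for the 12 readers), a paired t test on per-reader sensitivity could look like this in Python:

    # Minimal sketch of a paired t test across readers, as described in the
    # Materials and Methods. All values below are hypothetical placeholders.
    from scipy import stats

    # Hypothetical per-reader sensitivity (%) for one abnormality, one value per reader.
    sens_without_ai = [55, 60, 48, 52, 63, 58, 45, 50, 61, 57, 49, 54]
    sens_with_ai    = [82, 85, 74, 79, 88, 83, 70, 77, 86, 81, 75, 80]

    # Paired t test across readers (each reader serves as their own control).
    t_stat, p_value = stats.ttest_rel(sens_with_ai, sens_without_ai)

    # Mean absolute gain in sensitivity across readers.
    mean_gain = sum(w - wo for w, wo in zip(sens_with_ai, sens_without_ai)) / len(sens_with_ai)
    print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")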


Subject(s)
Lung Diseases, Pleural Effusion, Pneumothorax, Humans, Male, Female, Middle Aged, Artificial Intelligence, Retrospective Studies, Radiography, Thoracic/methods, Radiography, Sensitivity and Specificity, Radiologists
5.
Eur Radiol ; 33(11): 8241-8250, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37572190

ABSTRACT

OBJECTIVES: To assess whether a computer-aided detection (CADe) system could serve as a learning tool for radiology residents in chest X-ray (CXR) interpretation.

METHODS: Eight radiology residents were asked to interpret 500 CXRs for the detection of five abnormalities, namely pneumothorax, pleural effusion, alveolar syndrome, lung nodule, and mediastinal mass. After interpreting 150 CXRs, the residents were divided into two groups of equivalent performance and experience. Subsequently, group 1 interpreted 200 CXRs from the "intervention dataset" using a CADe system as a second reader, while group 2 served as a control by interpreting the same CXRs without the use of CADe. Finally, the two groups interpreted another 150 CXRs without the use of CADe. The sensitivity, specificity, and accuracy before, during, and after the intervention were compared.

RESULTS: Before the intervention, the median individual sensitivity, specificity, and accuracy of the eight radiology residents were 43% (range: 35-57%), 90% (range: 82-96%), and 81% (range: 76-84%), respectively. With the use of CADe, residents from group 1 had a significantly higher overall sensitivity (53% [n = 431/816] vs 43% [n = 349/816], p < 0.001), specificity (94% [n = 3206/3428] vs 90% [n = 3127/3477], p < 0.001), and accuracy (86% [n = 3637/4244] vs 81% [n = 3476/4293], p < 0.001) compared with the control group. After the intervention, there were no significant differences between group 1 and group 2 in overall sensitivity (44% [n = 309/696] vs 46% [n = 317/696], p = 0.666), specificity (90% [n = 2294/2541] vs 90% [n = 2285/2542], p = 0.642), or accuracy (80% [n = 2603/3237] vs 80% [n = 2602/3238], p = 0.955).

CONCLUSIONS: Although it improved radiology residents' performance in interpreting CXRs, the CADe system alone did not appear to be an effective learning tool and should not replace teaching.

CLINICAL RELEVANCE STATEMENT: Although the use of artificial intelligence improves radiology residents' performance in chest X-ray interpretation, artificial intelligence cannot be used alone as a learning tool and should not replace dedicated teaching.

KEY POINTS:
• With CADe as a second reader, residents had a significantly higher sensitivity (53% vs 43%, p < 0.001), specificity (94% vs 90%, p < 0.001), and accuracy (86% vs 81%, p < 0.001) than residents without CADe.
• After access to the CADe system was removed, residents' sensitivity (44% vs 46%, p = 0.666), specificity (90% vs 90%, p = 0.642), and accuracy (80% vs 80%, p = 0.955) returned to the level of the group that did not use CADe.
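
For context, the intervention-phase comparison reduces to proportions computed from the counts quoted in the abstract (for example, 431/816 ≈ 53%). A minimal sketch, using those counts but a generic chi-square test rather than the authors' exact statistical method, is shown below:

    # Illustrative reconstruction, not the authors' analysis code: compare the
    # intervention-phase sensitivities of the two groups with a chi-square test
    # on a 2x2 table built from the counts reported in the abstract.
    from scipy.stats import chi2_contingency

    # Group 1 (CADe-assisted): 431 detected / 816 abnormalities.
    # Group 2 (control):       349 detected / 816 abnormalities.
    table = [[431, 816 - 431],
             [349, 816 - 349]]

    chi2, p_value, dof, expected = chi2_contingency(table)
    sens_group1 = 431 / 816  # ~0.53
    sens_group2 = 349 / 816  # ~0.43
    print(f"sensitivity: {sens_group1:.2f} vs {sens_group2:.2f}, p = {p_value:.4f}")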


Subject(s)
Artificial Intelligence, Internship and Residency, Humans, X-Rays, Radiography, Thoracic, Radiography
6.
Diagn Interv Imaging ; 104(7-8): 330-336, 2023.
Article in English | MEDLINE | ID: mdl-37095034

ABSTRACT

PURPOSE: The purpose of this study was to compare the performance of an artificial intelligence (AI) solution to that of a senior general radiologist for bone age assessment.

MATERIALS AND METHODS: Anteroposterior hand radiographs of eight boys and eight girls from each one-year age interval between 5 and 17 years were retrospectively collected from four different radiology departments. Two board-certified pediatric radiologists with knowledge of the sex and chronological age of the patients independently estimated the Greulich and Pyle bone age to determine the standard of reference. A senior general radiologist not specialized in pediatric radiology (further referred to as "the reader") then determined the bone age with knowledge of the sex and chronological age. The results of the reader were then compared to those of the AI solution using the mean absolute error (MAE) in age estimation.

RESULTS: The study dataset included a total of 206 patients (102 boys with a mean chronological age of 10.9 ± 3.7 [SD] years; 104 girls with a mean chronological age of 11 ± 3.7 [SD] years). For both sexes, the AI algorithm showed a significantly lower MAE than the reader (P < 0.007). In boys, the MAE was 0.488 years (95% confidence interval [CI]: 0.28-0.44; r2 = 0.978) for the AI algorithm and 0.771 years (95% CI: 0.64-0.90; r2 = 0.94) for the reader. In girls, the MAE was 0.494 years (95% CI: 0.41-0.56; r2 = 0.973) for the AI algorithm and 0.673 years (95% CI: 0.54-0.81; r2 = 0.934) for the reader.

CONCLUSION: The AI solution estimates the Greulich and Pyle bone age better than a general radiologist does.
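
The headline metric here is the mean absolute error between estimated and reference bone age. A small illustrative sketch follows, with hypothetical values and one common way of computing r2, which may differ from the authors' exact definition:

    # Minimal sketch of MAE and r^2 between estimated and reference bone age.
    # The arrays below are hypothetical examples, not the study data.
    import numpy as np

    reference_age = np.array([6.0, 8.5, 10.0, 12.5, 15.0])   # consensus bone age (years)
    estimated_age = np.array([6.4, 8.1, 10.5, 12.0, 15.6])   # AI or reader estimate (years)

    mae = np.mean(np.abs(estimated_age - reference_age))

    # Coefficient of determination of the estimates against the reference.
    ss_res = np.sum((reference_age - estimated_age) ** 2)
    ss_tot = np.sum((reference_age - reference_age.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot

    print(f"MAE = {mae:.3f} years, r^2 = {r_squared:.3f}")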


Subject(s)
Age Determination by Skeleton, Artificial Intelligence, Child, Male, Female, Humans, Adolescent, Child, Preschool, Retrospective Studies, Age Determination by Skeleton/methods, Algorithms
7.
Eur J Radiol ; 154: 110447, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35921795

ABSTRACT

PURPOSE: To appraise the performance of an AI solution trained to detect and localize skeletal lesions and to compare it with routine radiological interpretation.

METHODS: We retrospectively collected all radiographic examinations, with the associated radiologists' reports, performed after a traumatic injury of the limbs or pelvis during 3 consecutive months (January to March 2017) in a private imaging group of 14 centers. Each examination was analyzed by an AI solution (BoneView, Gleamer), and its results were compared with those of the radiologists' reports. In case of discrepancy, the examination was reviewed by a senior skeletal radiologist to settle on the presence of fractures, dislocations, elbow effusions, and focal bone lesions (FBL). The lesion-wise sensitivity of the AI and of the radiologists' reports was compared for each lesion type. This study received IRB approval (CRM-2106-177).

RESULTS: A total of 4774 examinations were included in the study. Lesion-wise sensitivity was 73.7% for the radiologists' reports vs. 98.1% for the AI (+24.4 points) for fracture detection, 63.3% vs. 89.9% (+26.6 points) for dislocation detection, 84.7% vs. 91.5% (+6.8 points) for elbow effusion detection, and 16.1% vs. 98.1% (+82 points) for FBL detection. The specificity of the radiologists' reports was always 100%, whereas AI specificity was 88%, 99.1%, 99.8%, and 95.6% for fractures, dislocations, elbow effusions, and FBL, respectively. The negative predictive value (NPV) was 99.5% for fractures, 99.8% for dislocations, and 99.9% for elbow effusions and FBL.

CONCLUSION: AI has the potential to prevent diagnostic errors by detecting lesions that were initially missed in the radiologists' reports.
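
As a reading aid, the per-lesion-type metrics reported above (sensitivity, specificity, NPV) can all be derived from a simple confusion matrix. The sketch below uses hypothetical counts, not the study data:

    # Minimal sketch of the detection metrics quoted in the abstract, computed
    # from confusion-matrix counts for one lesion type. Counts are hypothetical.
    def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Return sensitivity, specificity, and NPV for one lesion type."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts for fracture detection on one dataset.
    print(detection_metrics(tp=510, fp=55, tn=4100, fn=10))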


Subject(s)
Deep Learning, Fracture Dislocation, Fractures, Bone, Joint Dislocations, Algorithms, Elbow, Fractures, Bone/diagnostic imaging, Humans, Radiologists, Retrospective Studies, X-Rays
8.
Skeletal Radiol ; 51(11): 2129-2139, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35522332

ABSTRACT

OBJECTIVE: We aimed to perform an external validation of an existing commercial AI software program (BoneView™) for the detection of acute appendicular fractures in pediatric patients.

MATERIALS AND METHODS: In this retrospective study, anonymized radiographic examinations of the extremities, with or without fractures, from pediatric patients (aged 2-21 years) were included. Three hundred examinations (150 with fractures and 150 without fractures) were included, comprising 60 examinations per body part (hand/wrist, elbow/upper arm, shoulder/clavicle, foot/ankle, leg/knee). The ground truth was defined by experienced radiologists. A deep learning algorithm interpreted the radiographs for fracture detection; its diagnostic performance was compared against the ground truth, and receiver operating characteristic (ROC) analysis was performed. Statistical analyses included sensitivity per patient (the proportion of patients for whom all fractures were identified), sensitivity per fracture (the proportion of fractures identified by the AI among all fractures), specificity per patient, and false-positive rate per patient.

RESULTS: There were 167 boys and 133 girls with a mean age of 10.8 years. For all fractures, sensitivity per patient (average [95% confidence interval]) was 91.3% [85.6, 95.3], specificity per patient was 90.0% [84.0, 94.3], sensitivity per fracture was 92.5% [87.0, 96.2], and the false-positive rate per patient in patients who had no fracture was 0.11. The patient-wise area under the ROC curve was 0.93 for all fractures. AI diagnostic performance was consistently high across all anatomical locations and fracture types except for avulsion fractures (sensitivity per fracture, 72.7% [39.0, 94.0]).

CONCLUSION: The BoneView™ deep learning algorithm provides high overall diagnostic performance for appendicular fracture detection in pediatric patients.
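
The distinction between sensitivity per patient and sensitivity per fracture is easy to miss. The toy example below (hypothetical patients, not the study data) shows how the two definitions given in the abstract diverge:

    # Minimal sketch contrasting per-patient and per-fracture sensitivity.
    patients = [
        # (fractures present, fractures detected by the AI) for each fractured patient
        (1, 1),
        (2, 2),
        (2, 1),   # one of two fractures missed
        (1, 0),   # fracture missed entirely
    ]

    # Per patient: all fractures in the patient must be identified.
    sens_per_patient = sum(1 for present, found in patients if found == present) / len(patients)
    # Per fracture: fraction of individual fractures identified.
    sens_per_fracture = sum(found for _, found in patients) / sum(present for present, _ in patients)

    print(f"sensitivity per patient = {sens_per_patient:.2f}")    # 0.50
    print(f"sensitivity per fracture = {sens_per_fracture:.2f}")  # 4/6 ~= 0.67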


Subject(s)
Deep Learning, Fractures, Bone, Algorithms, Child, Female, Fractures, Bone/diagnostic imaging, Humans, Male, ROC Curve, Retrospective Studies, Sensitivity and Specificity
9.
Radiology ; 302(3): 627-636, 2022 03.
Article in English | MEDLINE | ID: mdl-34931859

ABSTRACT

Background: Missed fractures are a common cause of diagnostic discrepancy between the initial radiographic interpretation and the final read by board-certified radiologists.

Purpose: To assess the effect of artificial intelligence (AI) assistance on the diagnostic performance of physicians for fracture detection on radiographs.

Materials and Methods: This retrospective diagnostic study used a multi-reader, multi-case methodology based on an external multicenter data set of 480 examinations obtained between July 2020 and January 2021, with at least 60 examinations per body region (foot and ankle, knee and leg, hip and pelvis, hand and wrist, elbow and arm, shoulder and clavicle, rib cage, and thoracolumbar spine). Fracture prevalence was set at 50%. The ground truth was determined by two musculoskeletal radiologists, with discrepancies resolved by a third. Twenty-four readers (radiologists, orthopedists, emergency physicians, physician assistants, rheumatologists, family physicians) were presented the whole validation data set (n = 480), with and without AI assistance, separated by a washout period of at least 1 month. The primary analysis had to demonstrate superiority of sensitivity per patient and noninferiority of specificity per patient at a -3% margin with AI aid. Stand-alone AI performance was also assessed using receiver operating characteristic curves.

Results: A total of 480 patients were included (mean age, 59 years ± 16 [standard deviation]; 327 women). The sensitivity per patient was 10.4% higher (95% CI: 6.9, 13.9; P < .001 for superiority) with AI aid (4331 of 5760 readings, 75.2%) than without AI (3732 of 5760 readings, 64.8%). The specificity per patient with AI aid (5504 of 5760 readings, 95.6%) was noninferior to that without AI aid (5217 of 5760 readings, 90.6%), with a difference of +5.0% (95% CI: +2.0, +8.0; P = .001 for noninferiority). AI shortened the average reading time by 6.3 seconds per examination (95% CI: -12.5, -0.1; P = .046). The gain in sensitivity per patient was significant in all regions (+8.0% to +16.2%; P < .05) except the shoulder and clavicle (+4.2%; P = .12) and the spine (+2.6%; P = .52).

Conclusion: AI assistance improved the sensitivity and may even improve the specificity of fracture detection by radiologists and nonradiologists, without lengthening reading time. Published under a CC BY 4.0 license. Online supplemental material is available for this article. See also the editorial by Link and Pedoia in this issue.
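
The study design hinges on a superiority test for sensitivity and a noninferiority test for specificity at a -3% margin. The sketch below restates that decision logic using the confidence bounds quoted in the abstract; it is an illustrative reconstruction, not the authors' analysis code:

    # Minimal sketch of the superiority / noninferiority decision logic.
    SENS_DIFF_CI = (6.9, 13.9)    # 95% CI for sensitivity difference (aided - unaided), in %
    SPEC_DIFF_CI = (2.0, 8.0)     # 95% CI for specificity difference (aided - unaided), in %
    NONINFERIORITY_MARGIN = -3.0  # prespecified margin, in %

    # Superiority: the lower confidence bound for the sensitivity gain must exceed 0.
    sensitivity_superior = SENS_DIFF_CI[0] > 0
    # Noninferiority: the lower confidence bound for the specificity difference
    # must exceed the -3% margin.
    specificity_noninferior = SPEC_DIFF_CI[0] > NONINFERIORITY_MARGIN

    print(f"sensitivity superior with AI: {sensitivity_superior}")
    print(f"specificity noninferior at -3% margin: {specificity_noninferior}")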


Subject(s)
Artificial Intelligence, Diagnostic Errors/prevention & control, Fractures, Bone/diagnostic imaging, Quality Improvement, Radiographic Image Interpretation, Computer-Assisted/methods, Datasets as Topic, Female, Humans, Male, Middle Aged, Retrospective Studies, Sensitivity and Specificity
10.
Radiology ; 300(1): 120-129, 2021 07.
Article in English | MEDLINE | ID: mdl-33944629

ABSTRACT

Background: The interpretation of radiographs suffers from an ever-increasing workload in emergency and radiology departments, while missed fractures represent up to 80% of diagnostic errors in the emergency department.

Purpose: To assess the performance of an artificial intelligence (AI) system designed to aid radiologists and emergency physicians in the detection and localization of appendicular skeletal fractures.

Materials and Methods: The AI system was previously trained on 60 170 radiographs obtained in patients with trauma. The radiographs were randomly split into 70% training, 10% validation, and 20% test sets. Between 2016 and 2018, 600 adult patients in whom multiview radiographs had been obtained after a recent trauma, with or without one or more fractures of the shoulder, arm, hand, pelvis, leg, or foot, were retrospectively included from 17 French medical centers. Radiographs of a quality precluding human interpretation or containing only obvious fractures were excluded. Six radiologists and six emergency physicians were asked to detect and localize fractures with (n = 300) and without (n = 300) the aid of software highlighting boxes around AI-detected fractures. Aided and unaided sensitivity, specificity, and reading times were compared by means of paired Student t tests after averaging the performance of each reader.

Results: A total of 600 patients (mean age ± standard deviation, 57 years ± 22; 358 women) were included. The AI aid improved the sensitivity of physicians by 8.7% (95% CI: 3.1, 14.2; P = .003 for superiority) and the specificity by 4.1% (95% CI: 0.5, 7.7; P < .001 for noninferiority), reduced the average number of false-positive fractures per patient by 41.9% (95% CI: 12.8, 61.3; P = .02) in patients without fractures, and reduced the mean reading time by 15.0% (95% CI: -30.4, 3.8; P = .12). Finally, the stand-alone performance of a newer release of the AI system was greater than that of all unaided readers, including skeletal expert radiologists, with an area under the receiver operating characteristic curve of 0.94 (95% CI: 0.92, 0.96). 

Conclusion: The artificial intelligence aid provided a gain in sensitivity (8.7% increase) and specificity (4.1% increase) without loss of reading speed. © RSNA, 2021. Online supplemental material is available for this article.
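
The training pipeline above mentions a 70%/10%/20% training/validation/test split of the 60 170 radiographs. A minimal sketch of such a random split over placeholder identifiers (not the authors' data-handling code) follows:

    # Illustrative random 70/10/20 split of placeholder identifiers.
    import random

    radiograph_ids = list(range(60_170))  # placeholder identifiers for the training corpus
    random.seed(42)
    random.shuffle(radiograph_ids)

    n = len(radiograph_ids)
    n_train = int(0.70 * n)
    n_val = int(0.10 * n)

    train_ids = radiograph_ids[:n_train]
    val_ids = radiograph_ids[n_train:n_train + n_val]
    test_ids = radiograph_ids[n_train + n_val:]

    print(len(train_ids), len(val_ids), len(test_ids))  # 42119, 6017, 12034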


Subject(s)
Artificial Intelligence, Fractures, Bone/diagnostic imaging, Physicians/statistics & numerical data, Radiographic Image Interpretation, Computer-Assisted/methods, Radiologists/statistics & numerical data, Adolescent, Adult, Aged, Aged, 80 and over, Cross-Sectional Studies, Emergency Service, Hospital, Female, Humans, Male, Middle Aged, Reproducibility of Results, Retrospective Studies, Sensitivity and Specificity, Young Adult