1.
Radiology; 312(2): e240272, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39162628

ABSTRACT

Background: Radiology practices handle a high volume of unremarkable chest radiographs, and artificial intelligence (AI) could improve workflow by providing an automatic report.

Purpose: To estimate the proportion of unremarkable chest radiographs for which AI can correctly exclude pathology (ie, specificity) without increasing diagnostic errors.

Materials and Methods: In this retrospective study, consecutive chest radiographs in unique adult patients (≥18 years of age) were obtained January 1-12, 2020, at four Danish hospitals. Exclusion criteria included insufficient radiology reports or AI output error. Two thoracic radiologists, blinded to AI output, labeled chest radiographs as "remarkable" or "unremarkable" based on predefined unremarkable findings (reference standard). Radiology reports were classified similarly. A commercial AI tool was adapted to output a chest radiograph "remarkableness" probability, which was used to calculate specificity at different AI sensitivities. Chest radiographs with findings missed by AI and/or the radiology report were graded by one thoracic radiologist as critical, clinically significant, or clinically insignificant. Paired proportions were compared using the McNemar test.

Results: A total of 1961 patients were included (median age, 72 years [IQR, 58-81 years]; 993 female), with one chest radiograph per patient. The reference standard labeled 1231 of 1961 chest radiographs (62.8%) as remarkable and 730 of 1961 (37.2%) as unremarkable. At 99.9%, 99.0%, and 98.0% sensitivity, the AI had a specificity of 24.5% (179 of 730 radiographs [95% CI: 21, 28]), 47.1% (344 of 730 radiographs [95% CI: 43, 51]), and 52.7% (385 of 730 radiographs [95% CI: 49, 56]), respectively. With AI fixed at a sensitivity similar to that of the radiology reports (87.2%), 2.2% (27 of 1231 radiographs) of AI misses and 1.1% (14 of 1231 radiographs) of report misses were classified as critical (P = .01); 4.1% (51 of 1231) and 3.6% (44 of 1231) were classified as clinically significant (P = .46); and 6.5% (80 of 1231) and 8.1% (100 of 1231) were classified as clinically insignificant (P = .11), respectively. At sensitivities of 95.4% or greater, the AI tool had 1.1% or fewer critical misses.

Conclusion: A commercial AI tool used off-label could correctly exclude pathology in 24.5%-52.7% of all unremarkable chest radiographs at a sensitivity of 98% or greater. The AI had equal or lower rates of critical misses than radiology reports at sensitivities of 95.4% or greater. These results should be confirmed in a prospective study.

© RSNA, 2024. Supplemental material is available for this article. See also the editorial by Yoon and Hwang in this issue.
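The operating-point analysis described in this abstract, fixing a target sensitivity on the "remarkableness" probabilities and reading off the resulting specificity, can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name and threshold-selection logic are assumptions.

```python
import math

def specificity_at_sensitivity(probs, labels, target_sensitivity):
    """Specificity at the loosest threshold that meets a target sensitivity.

    probs:  predicted probability that a radiograph is "remarkable"
    labels: 1 = remarkable (reference standard), 0 = unremarkable
    """
    # Sort positive-class scores descending: lowering the threshold
    # admits one more true positive at a time.
    pos_scores = sorted((p for p, y in zip(probs, labels) if y == 1),
                        reverse=True)
    # Smallest number of true positives that meets the target sensitivity.
    k = math.ceil(target_sensitivity * len(pos_scores))
    threshold = pos_scores[k - 1]  # call "remarkable" if prob >= threshold
    negatives = [p for p, y in zip(probs, labels) if y == 0]
    true_negatives = sum(1 for p in negatives if p < threshold)
    return true_negatives / len(negatives), threshold
```

On the study's data, raising the target sensitivity from 98.0% to 99.9% lowered specificity from 52.7% to 24.5%; the sketch reproduces that trade-off in miniature on any scored data set.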


Subject(s)
Artificial Intelligence; Radiography, Thoracic; Humans; Radiography, Thoracic/methods; Female; Aged; Male; Retrospective Studies; Middle Aged; Aged, 80 and over; Sensitivity and Specificity; Denmark; Diagnostic Errors/statistics & numerical data
2.
Radiology; 308(3): e231236, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37750768

ABSTRACT

Background: Commercially available artificial intelligence (AI) tools can assist radiologists in interpreting chest radiographs, but their real-life diagnostic accuracy remains unclear.

Purpose: To evaluate the diagnostic accuracy of four commercially available AI tools for detection of airspace disease, pneumothorax, and pleural effusion on chest radiographs.

Materials and Methods: This retrospective study included consecutive adult patients who underwent chest radiography at one of four Danish hospitals in January 2020. Two thoracic radiologists (or three, in cases of disagreement), who had access to all previous and subsequent imaging, independently labeled chest radiographs for the reference standard. Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated. Sensitivity and specificity were additionally stratified according to severity of findings, number of findings on chest radiographs, and radiographic projection. The χ2 and McNemar tests were used for comparisons.

Results: The data set comprised 2040 patients (median age, 72 years [IQR, 58-81 years]; 1033 female), of whom 669 (32.8%) had target findings. The AI tools demonstrated AUCs ranging from 0.83 to 0.88 for airspace disease, 0.89 to 0.97 for pneumothorax, and 0.94 to 0.97 for pleural effusion. Sensitivities ranged from 72% to 91% for airspace disease, 63% to 90% for pneumothorax, and 62% to 95% for pleural effusion. Negative predictive values ranged from 92% to 100% for all target findings. For airspace disease, pneumothorax, and pleural effusion, specificity was high for chest radiographs with normal or single findings (range, 85%-96%, 99%-100%, and 95%-100%, respectively) and markedly lower for chest radiographs with four or more findings (range, 27%-69%, 96%-99%, and 65%-92%, respectively) (P < .001). AI sensitivity was lower for vague airspace disease (range, 33%-61%) and small pneumothorax or pleural effusion (range, 9%-94%) than for larger findings (range, 81%-100%; P value range, > .99 to < .001).

Conclusion: Current-generation AI tools showed moderate to high sensitivity for detecting airspace disease, pneumothorax, and pleural effusion on chest radiographs. However, they produced more false-positive findings than radiology reports, and their performance decreased for smaller target findings and when multiple findings were present.

© RSNA, 2023. Supplemental material is available for this article. See also the editorial by Yanagawa and Tomiyama in this issue.
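Both this study and the first compare paired proportions with the McNemar test, which considers only discordant pairs (cases one reader flagged and the other missed). A minimal exact version, assuming the discordant-pair counts are available (the abstracts report only the marginal miss rates, not the discordant counts), might look like:

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar P value.

    b: cases missed by reader A but caught by reader B
    c: cases missed by reader B but caught by reader A
    Concordant pairs (both correct or both wrong) do not enter the statistic.
    """
    n = b + c
    k = min(b, c)
    # Exact binomial test with p = 0.5 on the discordant pairs:
    # sum the tail of Binomial(n, 0.5) up to the smaller count, then double.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With equal discordant counts the statistic gives P = 1.0, consistent with the intuition that neither reader misses more than the other.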


Subject(s)
Deep Learning; Pleural Effusion; Pneumothorax; Adult; Humans; Female; Aged; Artificial Intelligence; Pneumothorax/diagnostic imaging; Retrospective Studies; Radiography, Thoracic/methods; Sensitivity and Specificity; Pleural Effusion/diagnostic imaging
3.
Radiology; 307(3): e222268, 2023 May.
Article in English | MEDLINE | ID: mdl-36880947

ABSTRACT

Background: Automated interpretation of normal chest radiographs could alleviate the workload of radiologists. However, the performance of such an artificial intelligence (AI) tool relative to clinical radiology reports has not been established.

Purpose: To perform an external evaluation of a commercially available AI tool for (a) the number of chest radiographs reported autonomously, (b) the sensitivity of AI detection of abnormal chest radiographs, and (c) the performance of AI compared with that of clinical radiology reports.

Materials and Methods: In this retrospective study, consecutive posteroanterior chest radiographs from adult patients at four hospitals in the capital region of Denmark were obtained in January 2020, including images from emergency department patients, in-hospital patients, and outpatients. Three thoracic radiologists labeled chest radiographs for the reference standard, based on chest radiograph findings, into the following categories: critical, other remarkable, unremarkable, or normal (no abnormalities). AI classified chest radiographs as high-confidence normal (normal) or not high-confidence normal (abnormal).

Results: A total of 1529 patients were included for analysis (median age, 69 years [IQR, 55-69 years]; 776 women), with 1100 (72%) classified by the reference standard as having abnormal radiographs, 617 (40%) as having critically abnormal radiographs, and 429 (28%) as having normal radiographs. For comparison, clinical radiology reports were classified based on their text, and insufficient reports were excluded (n = 22). The sensitivity of AI was 99.1% (95% CI: 98.3, 99.6; 1090 of 1100 patients) for abnormal radiographs and 99.8% (95% CI: 99.1, 99.9; 616 of 617 patients) for critical radiographs. The corresponding sensitivities for radiologist reports were 72.3% (95% CI: 69.5, 74.9; 779 of 1078 patients) and 93.5% (95% CI: 91.2, 95.3; 558 of 597 patients), respectively. The specificity of AI, and hence the potential autonomous reporting rate, was 28.0% of all normal posteroanterior chest radiographs (95% CI: 23.8, 32.5; 120 of 429 patients), or 7.8% (120 of 1529 patients) of all posteroanterior chest radiographs.

Conclusion: Of all normal posteroanterior chest radiographs, 28% were autonomously reported by AI with a sensitivity for any abnormality higher than 99%. This corresponded to 7.8% of the entire posteroanterior chest radiograph production.

© RSNA, 2023. Supplemental material is available for this article. See also the editorial by Park in this issue.
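The sensitivities above are reported with 95% CIs (eg, 99.1% [98.3, 99.6] for 1090 of 1100 patients). A Wilson score interval, one common choice for proportions close to 100% (the abstract does not state which interval the authors used), reproduces figures of that order:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (z = 1.96 gives ~95% coverage)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half
```

For example, wilson_ci(1090, 1100) gives roughly (0.983, 0.995), close to the interval reported for abnormal radiographs; unlike the naive normal approximation, it never produces bounds above 1 for proportions near 100%.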


Subject(s)
Artificial Intelligence; Radiography, Thoracic; Adult; Humans; Female; Aged; Retrospective Studies; Radiography, Thoracic/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiologists