Results 1 - 6 of 6
1.
Radiology; 307(4): e222176, 2023 May.
Article in English | MEDLINE | ID: mdl-37129490

ABSTRACT

Background: Automation bias (the propensity for humans to favor suggestions from automated decision-making systems) is a known source of error in human-machine interactions, but its implications regarding artificial intelligence (AI)-aided mammography reading are unknown.

Purpose: To determine how automation bias can affect inexperienced, moderately experienced, and very experienced radiologists when reading mammograms with the aid of an AI system.

Materials and Methods: In this prospective experiment, 27 radiologists read 50 mammograms and provided their Breast Imaging Reporting and Data System (BI-RADS) assessment assisted by a purported AI system. Mammograms were obtained between January 2017 and December 2019 and were presented in two randomized sets. The first was a training set of 10 mammograms, with the correct BI-RADS category suggested by the AI system. The second was a set of 40 mammograms in which an incorrect BI-RADS category was suggested for 12 mammograms. Reader performance, degree of bias in BI-RADS scoring, perceived accuracy of the AI system, and reader confidence in their own BI-RADS ratings were assessed using analysis of variance (ANOVA) and repeated-measures ANOVA followed by post hoc tests, and Kruskal-Wallis tests followed by the Dunn post hoc test.

Results: The percentage of correctly rated mammograms by inexperienced (mean, 79.7% ± 11.7 [SD] vs 19.8% ± 14.0; P < .001; r = 0.93), moderately experienced (mean, 81.3% ± 10.1 vs 24.8% ± 11.6; P < .001; r = 0.96), and very experienced (mean, 82.3% ± 4.2 vs 45.5% ± 9.1; P = .003; r = 0.97) radiologists was significantly affected by the correctness of the AI prediction of BI-RADS category. Inexperienced radiologists were significantly more likely to follow the suggestions of the purported AI when it incorrectly suggested a higher BI-RADS category than the actual ground truth, compared with both moderately (mean degree of bias, 4.0 ± 1.8 vs 2.4 ± 1.5; P = .044; r = 0.46) and very (mean degree of bias, 4.0 ± 1.8 vs 1.2 ± 0.8; P = .009; r = 0.65) experienced readers.

Conclusion: Inexperienced, moderately experienced, and very experienced radiologists reading mammograms are all prone to automation bias when supported by an AI-based system. This and other effects of human-machine interaction must be considered to ensure safe deployment and accurate diagnostic performance when combining human readers and AI.

© RSNA, 2023. Supplemental material is available for this article. See also the editorial by Baltzer in this issue.


Subjects
Artificial Intelligence, Breast Neoplasms, Humans, Female, Prospective Studies, Mammography, Automation, Breast Neoplasms/diagnostic imaging, Retrospective Studies
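The statistical workflow described in this abstract (repeated-measures comparisons within readers, plus Kruskal-Wallis and Dunn post hoc tests across experience groups) can be illustrated on data of the same shape. The sketch below is not the authors' analysis code: the synthetic values, column names, group sizes, and the use of statsmodels and scikit-posthocs are assumptions for illustration only.

```python
# Minimal sketch of the comparisons described in the abstract.
# Synthetic data and library choices are assumptions, not the authors' pipeline.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(0)

# Per-reader percentage of correctly rated mammograms, split by whether the
# purported AI suggested the correct or an incorrect BI-RADS category.
readers = [f"r{i}" for i in range(27)]
long = pd.DataFrame({
    "reader": readers * 2,
    "ai_suggestion": ["correct"] * 27 + ["incorrect"] * 27,
    "pct_correct": np.concatenate([
        rng.normal(81, 9, 27),   # with correct AI suggestions
        rng.normal(30, 14, 27),  # with incorrect AI suggestions
    ]),
})

# Repeated-measures ANOVA: the same readers rated both suggestion conditions.
print(AnovaRM(long, depvar="pct_correct", subject="reader",
              within=["ai_suggestion"]).fit())

# Degree of bias per reader, grouped by experience level (synthetic groups).
bias = pd.DataFrame({
    "experience": ["inexperienced"] * 9 + ["moderate"] * 9 + ["very"] * 9,
    "degree_of_bias": np.concatenate([
        rng.normal(4.0, 1.8, 9), rng.normal(2.4, 1.5, 9), rng.normal(1.2, 0.8, 9),
    ]),
})

# Kruskal-Wallis across the three groups, then Dunn post hoc pairwise tests.
groups = [g["degree_of_bias"].values for _, g in bias.groupby("experience")]
print(kruskal(*groups))
print(sp.posthoc_dunn(bias, val_col="degree_of_bias", group_col="experience"))
```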
2.
Eur Radiol; 31(10): 7960-7968, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33860828

ABSTRACT

OBJECTIVES: To examine the various roles of radiologists in different steps of developing artificial intelligence (AI) applications.

MATERIALS AND METHODS: Through a case study of eight companies developing AI applications for radiology in different regions (Europe, Asia, and North America), we conducted 17 semi-structured interviews and collected data from documents. Based on systematic thematic analysis, we identified various roles of radiologists. We describe how each role manifests across the companies and what factors influence how and when these roles emerge.

RESULTS: We identified 9 roles that radiologists play in different steps of developing AI applications: (1) problem finder (in 4 companies); (2) problem shaper (in 3 companies); (3) problem dominator (in 1 company); (4) data researcher (in 2 companies); (5) data labeler (in 3 companies); (6) data quality controller (in 2 companies); (7) algorithm shaper (in 3 companies); (8) algorithm tester (in 6 companies); and (9) AI researcher (in 1 company).

CONCLUSIONS: Radiologists can play a wide range of roles in the development of AI applications. How actively they are engaged and how they interact with the development teams vary significantly across the cases. Radiologists need to engage proactively in the development process and embrace new roles.

KEY POINTS:
• Radiologists can play a wide range of roles during the development of AI applications.
• Both radiologists and developers need to be open to new roles and ways of interacting during the development process.
• The availability of resources, time, expertise, and trust shapes how actively radiologists engage in the development process.


Subjects
Artificial Intelligence, Radiology, Algorithms, Humans, Radiography, Radiologists
3.
Eur Radiol; 31(8): 6021-6029, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33587154

ABSTRACT

OBJECTIVES: To offer an overview of existing AI training programs for radiologists, critically examine them, and suggest avenues for their further development.

METHODS: Deductive thematic analysis of 100 training programs offered in 2019 and 2020 (until June 30). We analyzed the publicly available data about the training programs in terms of their "contents," "target audience," "instructors and offering agents," and "legitimization strategies."

RESULTS: Many AI training programs are offered to radiologists, yet most of them (80%) are short, stand-alone sessions that are not part of a longer-term learning trajectory. The training programs mainly (around 85%) focus on the basic concepts of AI and are offered in a passive mode. Professional institutions and commercial companies are active in offering the programs (91%), whereas academic institutes are only marginally involved.

CONCLUSIONS: There is a need to develop systematic training programs that are pedagogically integrated into the radiology curriculum. Future training programs should focus more on learning how to work with AI in daily practice and be further specialized and customized to the contexts of radiology work.

KEY POINTS:
• Most AI training programs are short, stand-alone sessions that focus on the basics of AI.
• The content of training programs focuses on medical and technical topics; managerial, legal, and ethical topics are only marginally addressed.
• Professional institutions and commercial companies are active in offering AI training; academic institutes are only marginally involved.


Subjects
Artificial Intelligence, Radiology, Forecasting, Humans, Radiography, Radiologists
4.
Eur Radiol; 31(4): 1805-1811, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32945967

ABSTRACT

OBJECTIVES: Why is there a major gap between the promises of AI and its applications in the domain of diagnostic radiology? To answer this question, we systematically review and critically analyze the AI applications in the radiology domain.

METHODS: We systematically analyzed these applications based on their focal modality and anatomic region as well as their stage of development, technical infrastructure, and approval.

RESULTS: We identified 269 AI applications in the diagnostic radiology domain, offered by 99 companies. We show that AI applications are primarily narrow in terms of tasks, modality, and anatomic region. The majority of the available AI functionalities focus on supporting "perception" and "reasoning" in the radiology workflow.

CONCLUSIONS: We contribute by (1) offering a systematic framework for analyzing and mapping the technological developments in the diagnostic radiology domain, (2) providing empirical evidence regarding the landscape of AI applications, and (3) offering insights into the current state of AI applications. Accordingly, we discuss the potential impacts of AI applications on radiology work and highlight future possibilities for developing these applications.

KEY POINTS:
• Many AI applications are being introduced to the radiology domain, and their number and diversity are growing rapidly.
• Most of the AI applications are narrow in terms of modality, body part, and pathology.
• Many applications focus on supporting "perception" and "reasoning" tasks.


Subjects
Artificial Intelligence, Radiology, Forecasting, Radiography, Workflow
5.
Neuroradiology; 62(10): 1265-1278, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32318774

ABSTRACT

PURPOSE: To conduct a systematic review of the possibilities of artificial intelligence (AI) in neuroradiology by performing an objective, systematic assessment of available applications, and to analyse the potential impacts of AI applications on the work of neuroradiologists.

METHODS: We identified AI applications offered on the market during the period 2017-2019. We systematically collected and structured the information in a relational database and coded for the characteristics of the applications, their functionalities in the radiology workflow, and their potential impacts in terms of 'supporting', 'extending', and 'replacing' radiology tasks.

RESULTS: We identified 37 AI applications in the domain of neuroradiology from 27 vendors, together offering 111 functionalities. The majority of functionalities 'support' radiologists, especially in the detection and interpretation of image findings. The second-largest group of functionalities 'extends' the possibilities of radiologists by providing quantitative information about pathological findings. A small but noticeable portion of functionalities seek to 'replace' certain radiology tasks.

CONCLUSION: Artificial intelligence in neuroradiology is not only in the stage of development and testing but also available for clinical practice. The majority of functionalities support radiologists or extend their tasks. None of the applications can replace the entire radiology profession, but a few can do so for a limited set of tasks. Scientific validation of the AI products lags behind their regulatory approval.


Subjects
Artificial Intelligence, Neuroimaging, Humans
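The coding approach described in this abstract (a relational database of applications, with each functionality mapped to a workflow step and to a 'supporting', 'extending', or 'replacing' impact) can be mirrored in a simple schema. The sketch below is illustrative only; the table names, column names, and example rows are assumptions, not the database design actually used by the authors.

```python
# Illustrative relational coding scheme for AI applications and their
# functionalities. Schema and example rows are assumptions for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE application (
    app_id     INTEGER PRIMARY KEY,
    vendor     TEXT NOT NULL,
    name       TEXT NOT NULL,
    approved   INTEGER,            -- regulatory approval (0/1)
    validated  INTEGER             -- peer-reviewed validation (0/1)
);
CREATE TABLE functionality (
    func_id    INTEGER PRIMARY KEY,
    app_id     INTEGER REFERENCES application(app_id),
    workflow   TEXT,               -- e.g. 'detection', 'interpretation', 'quantification'
    impact     TEXT CHECK (impact IN ('support', 'extend', 'replace'))
);
""")

# Example rows (hypothetical vendor and application names).
con.execute("INSERT INTO application VALUES (1, 'ExampleVendor', 'StrokeTool', 1, 0)")
con.executemany(
    "INSERT INTO functionality (app_id, workflow, impact) VALUES (?, ?, ?)",
    [(1, "detection", "support"), (1, "quantification", "extend")],
)

# Count functionalities per impact category, as reported in the review.
for row in con.execute("SELECT impact, COUNT(*) FROM functionality GROUP BY impact"):
    print(row)
```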
6.
Sci Rep; 13(1): 9230, 2023 Jun 7.
Article in English | MEDLINE | ID: mdl-37286665

ABSTRACT

Various studies have shown that medical professionals are prone to follow incorrect suggestions offered by algorithms, especially when they have limited inputs with which to interrogate and interpret such suggestions and when they hold an attitude of relying on them. We examine the effect of correct and incorrect algorithmic suggestions on the diagnostic performance of radiologists when (1) they have no, partial, or extensive informational inputs for explaining the suggestions (study 1) and (2) they are primed to hold a positive, negative, ambivalent, or neutral attitude towards AI (study 2). Our analysis of 2760 decisions made by 92 radiologists conducting 15 mammography examinations shows that radiologists' diagnoses follow both incorrect and correct suggestions, despite variations in the explainability inputs and attitudinal priming interventions. We identify and explain various pathways through which radiologists navigate the decision process and arrive at correct or incorrect decisions. Overall, the findings of both studies show the limited effect of explainability inputs and attitudinal priming in overcoming the influence of (incorrect) algorithmic suggestions.


Subjects
Breast Neoplasms, Radiologists, Humans, Female, Pilot Projects, Algorithms, Mammography, Artificial Intelligence, Breast Neoplasms/diagnostic imaging
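The reported decision count is internally consistent if every reader rated every examination in both studies; the abstract does not state this design explicitly, so the decomposition below is an assumption.

```python
# Consistency check on the reported 2760 decisions, assuming each of the
# 92 radiologists rated all 15 examinations in both studies (an assumption,
# not stated explicitly in the abstract).
radiologists, examinations, studies = 92, 15, 2
assert radiologists * examinations * studies == 2760
```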