Results 1 - 20 of 26
1.
Eur Radiol ; 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38466390

ABSTRACT

OBJECTIVES: To evaluate an artificial intelligence (AI)-assisted double reading system for detecting clinically relevant missed findings on routinely reported chest radiographs. METHODS: A retrospective study was performed in two institutions, a secondary care hospital and a tertiary referral oncology centre. Commercially available AI software performed a comparative analysis of chest radiographs and radiologists' authorised reports using a deep learning algorithm and a natural language processing algorithm, respectively. The AI-detected discrepant findings between images and reports were assessed for clinical relevance by an external radiologist, as part of the commercial service provided by the AI vendor. The selected missed findings were subsequently returned to the institution's radiologist for final review. RESULTS: In total, 25,104 chest radiographs of 21,039 patients (mean age 61.1 years ± 16.2 [SD]; 10,436 men) were included. The AI software detected discrepancies between imaging and reports in 21.1% (5289 of 25,104). After review by the external radiologist, 0.9% (47 of 5289) of cases were deemed to contain clinically relevant missed findings. The institution's radiologists confirmed 35 of 47 missed findings (74.5%) as clinically relevant (0.1% of all cases). Missed findings consisted of lung nodules (71.4%, 25 of 35), pneumothoraces (17.1%, 6 of 35) and consolidations (11.4%, 4 of 35). CONCLUSION: The AI-assisted double reading system was able to identify missed findings on chest radiographs after report authorisation. The approach required an external radiologist to review the AI-detected discrepancies. The number of clinically relevant missed findings by radiologists was very low. CLINICAL RELEVANCE STATEMENT: The AI-assisted double reader workflow was shown to detect diagnostic errors and could be applied as a quality assurance tool. Although clinically relevant missed findings were rare, there is potential impact given the common use of chest radiography. KEY POINTS: • A commercially available double reading system supported by artificial intelligence was evaluated to detect reporting errors in chest radiographs (n=25,104) from two institutions. • Clinically relevant missed findings were found in 0.1% of chest radiographs and consisted of unreported lung nodules, pneumothoraces and consolidations. • Applying AI software as a secondary reader after report authorisation can assist in reducing diagnostic errors without interrupting the radiologist's reading workflow. However, the number of AI-detected discrepancies was considerable and required review by a radiologist to assess their relevance.
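As an illustration of the screening cascade reported above, a minimal sketch that reproduces the quoted percentages from the abstract's counts (the counts are from the abstract; the function and variable names are illustrative, not part of the study):

```python
# Reproduce the reported double-reading cascade from the counts in the abstract.

def rate(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

total_radiographs = 25_104
ai_discrepancies = 5_289      # AI flagged a mismatch between image and report
external_relevant = 47        # deemed clinically relevant by the external radiologist
confirmed_relevant = 35       # confirmed by the institution's radiologists

print(rate(ai_discrepancies, total_radiographs))    # 21.1 -> % of radiographs flagged
print(rate(external_relevant, ai_discrepancies))    # 0.9  -> % of flagged cases deemed relevant
print(rate(confirmed_relevant, external_relevant))  # 74.5 -> % of those confirmed
print(rate(confirmed_relevant, total_radiographs))  # 0.1  -> % of all radiographs
```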

2.
Eur Radiol ; 33(6): 4249-4258, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36651954

ABSTRACT

OBJECTIVES: Only a few published artificial intelligence (AI) studies for COVID-19 imaging have been externally validated. Assessing the generalizability of developed models is essential, especially when considering clinical implementation. We report the development of the International Consortium for COVID-19 Imaging AI (ICOVAI) model and perform independent external validation. METHODS: The ICOVAI model was developed using multicenter data (n = 1286 CT scans) to quantify disease extent and assess COVID-19 likelihood using the COVID-19 Reporting and Data System (CO-RADS). A ResUNet model was modified to automatically delineate lung contours and infectious lung opacities on CT scans, after which a random forest predicted the CO-RADS score. After internal testing, the model was externally validated on a multicenter dataset (n = 400) by independent researchers. CO-RADS classification performance was calculated using linearly weighted Cohen's kappa and segmentation performance using Dice Similarity Coefficient (DSC). RESULTS: Regarding internal versus external testing, segmentation performance of lung contours was equally excellent (DSC = 0.97 vs. DSC = 0.97, p = 0.97). Lung opacities segmentation performance was adequate internally (DSC = 0.76), but significantly worse on external validation (DSC = 0.59, p < 0.0001). For CO-RADS classification, agreement with radiologists on the internal set was substantial (kappa = 0.78), but significantly lower on the external set (kappa = 0.62, p < 0.0001). CONCLUSION: In this multicenter study, a model developed for CO-RADS score prediction and quantification of COVID-19 disease extent was found to have a significant reduction in performance on independent external validation versus internal testing. The limited reproducibility of the model restricted its potential for clinical use. The study demonstrates the importance of independent external validation of AI models. KEY POINTS: • The ICOVAI model for prediction of CO-RADS and quantification of disease extent on chest CT of COVID-19 patients was developed using a large sample of multicenter data. • Performance on internal testing was substantial; however, it was significantly reduced on external validation, performed by independent researchers. The limited generalizability of the model restricts its potential for clinical use. • Results of AI models for COVID-19 imaging on internal tests may not generalize well to external data, demonstrating the importance of independent external validation.
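The two evaluation metrics named in the abstract, the Dice Similarity Coefficient for segmentation overlap and linearly weighted Cohen's kappa for CO-RADS agreement, can be computed as in the following minimal sketch (the toy arrays are illustrative and not study data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denominator = pred.sum() + truth.sum()
    return 2 * np.logical_and(pred, truth).sum() / denominator if denominator else 1.0

# Toy binary masks standing in for predicted and reference lung-opacity segmentations.
pred_mask = np.array([[0, 1, 1], [0, 1, 0]])
truth_mask = np.array([[0, 1, 1], [1, 1, 0]])
print(f"DSC = {dice_similarity(pred_mask, truth_mask):.2f}")

# Linearly weighted Cohen's kappa for ordinal CO-RADS scores (1-5).
model_scores = [1, 2, 3, 4, 5, 3, 2]
reader_scores = [1, 2, 4, 4, 5, 2, 2]
print(f"kappa = {cohen_kappa_score(model_scores, reader_scores, weights='linear'):.2f}")
```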


Subject(s)
Artificial Intelligence; COVID-19; Humans; Reproducibility of Results; Tomography, X-Ray Computed; Algorithms; Retrospective Studies
3.
Eur Radiol ; 32(12): 8191-8199, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35652937

ABSTRACT

BACKGROUND: We explored perceptions and preferences regarding the conversion of in-person to virtual conferences as necessitated by travel and in-person meeting restrictions. METHODS: A 16-question online survey to assess preferences regarding virtual conferences during the COVID-19 pandemic and future perspectives on this subject was disseminated internationally between June and August 2020. FINDINGS: A total of 508 responses were received from 73 countries. The largest number of responses came from Italy and the USA. The majority of respondents had already attended a virtual conference (80%) and would like to attend future virtual meetings (97%). The ideal duration of such an event was 2-3 days (42%). The preferred time format was a 2-4-h session (43%). Most respondents also noted that they would like a significant fee reduction and the option to attend a conference partly in person and partly online. Respondents indicated educational sessions as the most valuable sections of virtual meetings. A reported positive factor of the virtual meeting format was the ability to re-watch lectures on demand. On the other hand, the absence of networking and human contact was recognized as a significant loss. Respondents expressed a preference to attend future conferences in person for networking purposes, but only under safer conditions. CONCLUSIONS: Respondents appreciated the opportunity to attend the main radiological congresses online and found it a good opportunity to stay updated without having to travel. However, in general, they would prefer these conferences to be structured differently. The lack of networking opportunities was the main reason for preferring an in-person meeting. KEY POINTS: • Respondents appreciated the opportunity to attend the main radiological meetings online, considering it a good opportunity to stay updated without having to travel. • In the future, congresses are likely to offer both in-person and online attendance options, making them more accessible to a larger audience. • Respondents indicated that networking represents the most valuable advantage of in-person conferences compared to online ones.


Subject(s)
COVID-19; Radiology; Humans; Pandemics; Surveys and Questionnaires; Radiologists
4.
Radiology ; 299(1): E204-E213, 2021 04.
Article in English | MEDLINE | ID: mdl-33399506

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19 infection, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19-positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will ideally lead to prediction models that can demonstrate sustained performance across populations and health care systems.
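The abstract describes study-level chest radiograph labels being set by three annotators with majority vote adjudication; a minimal, generic sketch of such an adjudication step (label values and study identifiers are illustrative, not taken from RICORD):

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    """Return the most frequent label; ties are escalated for adjudication."""
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "needs_adjudication"  # no majority -> send to an adjudicating reader
    return counts[0][0]

# Three annotations per study; category names are illustrative placeholders.
study_annotations = {
    "study_001": ["typical", "typical", "indeterminate"],
    "study_002": ["negative", "atypical", "typical"],
}
for study_id, votes in study_annotations.items():
    print(study_id, "->", majority_vote(votes))
```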


Subject(s)
COVID-19/diagnostic imaging; Databases, Factual/statistics & numerical data; Global Health/statistics & numerical data; Lung/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans; Internationality; Radiography, Thoracic; Radiology; SARS-CoV-2; Societies, Medical; Tomography, X-Ray Computed/statistics & numerical data
5.
Eur Radiol ; 31(10): 7960-7968, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33860828

ABSTRACT

OBJECTIVES: To examine the various roles of radiologists in different steps of developing artificial intelligence (AI) applications. MATERIALS AND METHODS: Through the case study of eight companies active in developing AI applications for radiology, in different regions (Europe, Asia, and North America), we conducted 17 semi-structured interviews and collected data from documents. Based on systematic thematic analysis, we identified various roles of radiologists. We describe how each role happens across the companies and what factors impact how and when these roles emerge. RESULTS: We identified 9 roles that radiologists play in different steps of developing AI applications: (1) problem finder (in 4 companies); (2) problem shaper (in 3 companies); (3) problem dominator (in 1 company); (4) data researcher (in 2 companies); (5) data labeler (in 3 companies); (6) data quality controller (in 2 companies); (7) algorithm shaper (in 3 companies); (8) algorithm tester (in 6 companies); and (9) AI researcher (in 1 company). CONCLUSIONS: Radiologists can play a wide range of roles in the development of AI applications. How actively they are engaged and the way they are interacting with the development teams significantly vary across the cases. Radiologists need to become proactive in engaging in the development process and embrace new roles. KEY POINTS: • Radiologists can play a wide range of roles during the development of AI applications. • Both radiologists and developers need to be open to new roles and ways of interacting during the development process. • The availability of resources, time, expertise, and trust are key factors that impact how actively radiologists play roles in the development process.


Subject(s)
Artificial Intelligence; Radiology; Algorithms; Humans; Radiography; Radiologists
6.
Eur Radiol ; 31(8): 6021-6029, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33587154

ABSTRACT

OBJECTIVES: To offer an overview of existing AI training programs for radiologists, critically examine them, and suggest avenues for their further development. METHODS: Deductive thematic analysis of 100 training programs offered in 2019 and 2020 (until June 30). We analyzed publicly available data about the training programs based on their "contents," "target audience," "instructors and offering agents," and "legitimization strategies." RESULTS: Many AI training programs are offered to radiologists, yet most of them (80%) are short, stand-alone sessions that are not part of a longer-term learning trajectory. The training programs mainly (around 85%) focus on basic concepts of AI and are offered in a passive mode. Professional institutions and commercial companies are active in offering the programs (91%), whereas academic institutions are only marginally involved. CONCLUSIONS: There is a need to develop systematic training programs that are pedagogically integrated into the radiology curriculum. Future training programs need to focus more on learning how to work with AI in practice and be further specialized and customized to the contexts of radiology work. KEY POINTS: • Most AI training programs are short, stand-alone sessions that focus on the basics of AI. • The content of training programs focuses on medical and technical topics; managerial, legal, and ethical topics are only marginally addressed. • Professional institutions and commercial companies are active in offering AI training; academic institutions are only marginally involved.


Subject(s)
Artificial Intelligence; Radiology; Forecasting; Humans; Radiography; Radiologists
7.
Eur Radiol ; 31(9): 7058-7066, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33744991

ABSTRACT

OBJECTIVES: Radiologists' perception is likely to influence the adoption of artificial intelligence (AI) into clinical practice. We investigated the knowledge of and attitude towards AI among radiologists and residents in Europe and beyond. METHODS: Between April and July 2019, a survey on fear of replacement, knowledge, and attitude towards AI was accessible to radiologists and residents. The survey was distributed through several radiological societies, author networks, and social media. Independent predictors of fear of replacement and a positive attitude towards AI were assessed using multivariable logistic regression. RESULTS: The survey was completed by 1,041 respondents from 54 mostly European countries. Most respondents were male (n = 670, 65%), median age was 38 (24-74) years, n = 142 (35%) were residents, and n = 471 (45%) worked in an academic center. Basic AI-specific knowledge was associated with fear (adjusted OR 1.56, 95% CI 1.10-2.21, p = 0.01), while intermediate AI-specific knowledge (adjusted OR 0.40, 95% CI 0.20-0.80, p = 0.01) or advanced AI-specific knowledge (adjusted OR 0.43, 95% CI 0.21-0.90, p = 0.03) was inversely associated with fear. A positive attitude towards AI was observed in 48% (n = 501) and was associated with only having heard of AI, intermediate (adjusted OR 11.65, 95% CI 4.25-31.92, p < 0.001), or advanced AI-specific knowledge (adjusted OR 17.65, 95% CI 6.16-50.54, p < 0.001). CONCLUSIONS: Limited AI-specific knowledge levels among radiology residents and radiologists are associated with fear, while intermediate to advanced AI-specific knowledge levels are associated with a positive attitude towards AI. Additional training may therefore improve clinical adoption. KEY POINTS: • Forty-eight percent of radiologists and residents have an open and proactive attitude towards artificial intelligence (AI), while 38% fear replacement by AI. • Intermediate and advanced AI-specific knowledge levels may enhance adoption of AI in clinical practice, while rudimentary knowledge levels appear to be inhibitive. • AI should be incorporated in radiology training curricula to help facilitate its clinical adoption.
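The adjusted odds ratios quoted above are the output of multivariable logistic regression; a minimal sketch of how such ORs and 95% CIs are typically derived with statsmodels, using synthetic survey-style data (variable names and effect sizes are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
# Synthetic data: AI knowledge level (0 = basic, 1 = intermediate, 2 = advanced),
# age, and a binary "fear of replacement" outcome.
knowledge = rng.integers(0, 3, n)
age = rng.integers(24, 75, n)
true_logit = -0.2 + 0.45 * (knowledge == 0) - 0.9 * (knowledge >= 1) - 0.01 * (age - 40)
fear = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = pd.DataFrame({
    "intermediate": (knowledge == 1).astype(int),
    "advanced": (knowledge == 2).astype(int),
    "age": age,
})
X = sm.add_constant(X)
model = sm.Logit(fear, X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals.
odds_ratios = np.exp(model.params).rename("OR")
conf_int = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))
```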


Subject(s)
Artificial Intelligence; Radiology; Adult; Fear; Humans; Male; Radiologists; Surveys and Questionnaires
8.
Eur Radiol ; 31(11): 8797-8806, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33974148

ABSTRACT

OBJECTIVES: Currently, hurdles to implementation of artificial intelligence (AI) in radiology are a much-debated topic but have not been investigated in the community at large. Controversy also exists about whether and to what extent AI should be incorporated into radiology residency programs. METHODS: Between April and July 2019, an international survey on AI and its impact on the profession and training was conducted. The survey was accessible to radiologists and residents and distributed through several radiological societies. Relationships of independent variables with opinions, hurdles, and education were assessed using multivariable logistic regression. RESULTS: The survey was completed by 1041 respondents from 54 countries. A majority (n = 855, 82%) expects that AI will change the radiology field within 10 years. The most frequently expected roles of AI in clinical practice were second reader (n = 829, 78%) and workflow optimization (n = 802, 77%). Ethical and legal issues (n = 630, 62%) and lack of knowledge (n = 584, 57%) were mentioned most often as hurdles to implementation. Expert respondents added lack of labelled images and generalizability issues. A majority (n = 819, 79%) indicated that AI should be incorporated into residency programs, while there was less support for recognizing imaging informatics and AI as a subspecialty (n = 241, 23%). CONCLUSIONS: Broad community demand exists for incorporation of AI into residency programs. Based on the results of the current study, integration of AI education seems advisable for radiology residents, including issues related to data management, ethics, and legislation. KEY POINTS: • There is broad demand from the radiological community to incorporate AI into residency programs, but there is less support to recognize imaging informatics as a radiological subspecialty. • Ethical and legal issues and lack of knowledge are recognized as major bottlenecks for AI implementation by the radiological community, while the shortage in labeled data and IT-infrastructure issues are less often recognized as hurdles. • Integrating AI education in radiology curricula including technical aspects of data management, risk of bias, and ethical and legal issues may aid successful integration of AI into diagnostic radiology.


Subject(s)
Artificial Intelligence; Radiology; Humans; Motivation; Radiologists; Surveys and Questionnaires
9.
Eur Radiol ; 30(10): 5525-5532, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32458173

ABSTRACT

OBJECTIVE: The objective was to identify barriers and facilitators to the implementation of artificial intelligence (AI) applications in clinical radiology in The Netherlands. MATERIALS AND METHODS: Using an embedded multiple case study, an exploratory, qualitative research design was followed. Data collection consisted of 24 semi-structured interviews from seven Dutch hospitals. The analysis of barriers and facilitators was guided by the recently published Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework for new medical technologies in healthcare organizations. RESULTS: Among the most important facilitating factors for implementation were the following: (i) pressure for cost containment in the Dutch healthcare system, (ii) high expectations of AI's potential added value, (iii) presence of hospital-wide innovation strategies, and (iv) presence of a "local champion." Among the most prominent hindering factors were the following: (i) inconsistent technical performance of AI applications, (ii) unstructured implementation processes, (iii) uncertain added value for clinical practice of AI applications, and (iv) large variance in acceptance and trust of direct (the radiologists) and indirect (the referring clinicians) adopters. CONCLUSION: In order for AI applications to contribute to the improvement of the quality and efficiency of clinical radiology, implementation processes need to be carried out in a structured manner, thereby providing evidence on the clinical added value of AI applications. KEY POINTS: • Successful implementation of AI in radiology requires collaboration between radiologists and referring clinicians. • Implementation of AI in radiology is facilitated by the presence of a local champion. • Evidence on the clinical added value of AI in radiology is needed for successful implementation.


Subject(s)
Artificial Intelligence/trends; Radiography/trends; Radiologists; Radiology/trends; Data Collection; Humans; Netherlands; Program Development; Program Evaluation; Qualitative Research
10.
Radiology ; 293(2): 436-440, 2019 11.
Article in English | MEDLINE | ID: mdl-31573399

ABSTRACT

This is a condensed summary of an international multisociety statement on ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how to best deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes. This article is a simultaneous joint publication in Radiology, Journal of the American College of Radiology, Canadian Association of Radiologists Journal, and Insights into Imaging. Published under a CC BY-NC-ND 4.0 license. Online supplemental material is available for this article.


Subject(s)
Artificial Intelligence/ethics; Radiology/ethics; Canada; Consensus; Europe; Humans; Radiologists/ethics; Societies, Medical; United States
11.
Can Assoc Radiol J ; 70(4): 329-334, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31585825

ABSTRACT

This is a condensed summary of an international multisociety statement on ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how to best deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes.


Subject(s)
Artificial Intelligence/ethics; Radiology/ethics; Canada; Consensus; Europe; Humans; Radiologists/ethics; Societies, Medical; United States
13.
J Digit Imaging ; 29(4): 443-9, 2016 08.
Article in English | MEDLINE | ID: mdl-26847202

ABSTRACT

The growing use of social media is transforming the way health care professionals (HCPs) are communicating. In this changing environment, it could be useful to outline the usage of social media by radiologists in all its facets and on an international level. The main objective of the RANSOM survey was to investigate how radiologists are using social media and what their attitude towards them is. The second goal was to discern differences in tendencies between American and European radiologists. An international survey was launched on SurveyMonkey (https://www.surveymonkey.com) asking questions about the platforms they prefer, about the advantages, disadvantages, and risks, and about the main incentives and barriers to using social media. A total of 477 radiologists participated in the survey, of whom 277 were from Europe and 127 from North America. The results show that 85% of all survey participants are using social media, mostly for a mixture of private and professional reasons. Facebook is the most popular platform for general purposes, whereas LinkedIn and Twitter are more popular for professional usage. The most important reason for not using social media is an unwillingness to mix private and professional matters. Eighty-two percent of all participants are aware of the educational opportunities offered by social media. The survey results underline the need to increase radiologists' skills in using social media efficiently and safely. There is also a need to create clear guidelines regarding the online and social media presence of radiologists to maximize the potential benefits of engaging with social media.


Subject(s)
Attitude of Health Personnel; Radiologists/statistics & numerical data; Social Media/statistics & numerical data; Europe; Humans; North America; Radiologists/psychology; Surveys and Questionnaires; United States
14.
Radiol Cardiothorac Imaging ; 5(2): e220163, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37124638

ABSTRACT

Purpose: To evaluate the diagnostic efficacy of artificial intelligence (AI) software in detecting incidental pulmonary embolism (IPE) at CT and shorten the time to diagnosis with use of radiologist reading worklist prioritization. Materials and Methods: In this study with historical controls and prospective evaluation, regulatory-cleared AI software was evaluated to prioritize IPE on routine chest CT scans with intravenous contrast agent in adult oncology patients. Diagnostic accuracy metrics were calculated, and temporal end points, including detection and notification times (DNTs), were assessed during three time periods (April 2019 to September 2020): routine workflow without AI, human triage without AI, and worklist prioritization with AI. Results: In total, 11 736 CT scans in 6447 oncology patients (mean age, 63 years ± 12 [SD]; 3367 men) were included. Prevalence of IPE was 1.3% (51 of 3837 scans), 1.4% (54 of 3920 scans), and 1.0% (38 of 3979 scans) for the respective time periods. The AI software detected 131 true-positive, 12 false-negative, 31 false-positive, and 11 559 true-negative results, achieving 91.6% sensitivity, 99.7% specificity, 99.9% negative predictive value, and 80.9% positive predictive value. During prospective evaluation, AI-based worklist prioritization reduced the median DNT for IPE-positive examinations to 87 minutes (vs routine workflow of 7714 minutes and human triage of 4973 minutes). The radiologists' miss rate for IPE was significantly reduced from 44.8% (47 of 105 scans) without AI to 2.6% (one of 38 scans) when assisted by the AI tool (P < .001). Conclusion: AI-assisted workflow prioritization of IPE on routine CT scans in oncology patients showed high diagnostic accuracy and significantly shortened the time to diagnosis in a setting with a backlog of examinations. Keywords: CT, Computer Applications, Detection, Diagnosis, Embolism, Thorax, Thrombosis. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Elicker in this issue.
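The diagnostic accuracy figures reported above follow directly from the confusion-matrix counts in the abstract; a minimal sketch of that calculation:

```python
# Confusion-matrix counts reported in the abstract for the IPE detection software.
tp, fn, fp, tn = 131, 12, 31, 11_559

metrics = {
    "sensitivity": tp / (tp + fn),  # ~91.6%
    "specificity": tn / (tn + fp),  # ~99.7%
    "PPV": tp / (tp + fp),          # positive predictive value, ~80.9%
    "NPV": tn / (tn + fn),          # negative predictive value, ~99.9%
}
for name, value in metrics.items():
    print(f"{name}: {100 * value:.1f}%")
```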

15.
Sci Rep ; 13(1): 9230, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37286665

ABSTRACT

Various studies have shown that medical professionals are prone to follow the incorrect suggestions offered by algorithms, especially when they have limited inputs to interrogate and interpret such suggestions and when they have an attitude of relying on them. We examine the effect of correct and incorrect algorithmic suggestions on the diagnosis performance of radiologists when (1) they have no, partial, and extensive informational inputs for explaining the suggestions (study 1) and (2) they are primed to hold a positive, negative, ambivalent, or neutral attitude towards AI (study 2). Our analysis of 2760 decisions made by 92 radiologists conducting 15 mammography examinations shows that radiologists' diagnoses follow both incorrect and correct suggestions, despite variations in the explainability inputs and attitudinal priming interventions. We identify and explain various pathways through which radiologists navigate through the decision process and arrive at correct or incorrect decisions. Overall, the findings of both studies show the limited effect of using explainability inputs and attitudinal priming for overcoming the influence of (incorrect) algorithmic suggestions.


Subject(s)
Breast Neoplasms; Radiologists; Humans; Female; Pilot Projects; Algorithms; Mammography; Artificial Intelligence; Breast Neoplasms/diagnostic imaging
16.
PLoS One ; 18(5): e0285121, 2023.
Article in English | MEDLINE | ID: mdl-37130128

ABSTRACT

BACKGROUND: Recently, artificial intelligence (AI)-based applications for chest imaging have emerged as potential tools to assist clinicians in the diagnosis and management of patients with coronavirus disease 2019 (COVID-19). OBJECTIVES: To develop a deep learning-based clinical decision support system for automatic diagnosis of COVID-19 on chest CT scans. Secondarily, to develop a complementary segmentation tool to assess the extent of lung involvement and measure disease severity. METHODS: The Imaging COVID-19 AI initiative was formed to conduct a retrospective multicentre cohort study including 20 institutions from seven different European countries. Patients with suspected or known COVID-19 who underwent a chest CT were included. The dataset was split at the institution level to allow external evaluation. Data annotation was performed by 34 radiologists/radiology residents and included quality control measures. A multi-class classification model was created using a custom 3D convolutional neural network. For the segmentation task, a UNET-like architecture with a backbone Residual Network (ResNet-34) was selected. RESULTS: A total of 2,802 CT scans were included (2,667 unique patients, mean [standard deviation] age = 64.6 [16.2] years, male/female ratio 1.3:1). The distribution of classes (COVID-19/Other type of pulmonary infection/No imaging signs of infection) was 1,490 (53.2%), 402 (14.3%), and 910 (32.5%), respectively. On the external test dataset, the diagnostic multiclassification model yielded high micro-average and macro-average AUC values (0.93 and 0.91, respectively). The model provided the likelihood of COVID-19 vs other cases with a sensitivity of 87% and a specificity of 94%. The segmentation performance was moderate, with a Dice similarity coefficient (DSC) of 0.59. An imaging analysis pipeline was developed that returned a quantitative report to the user. CONCLUSION: We developed a deep learning-based clinical decision support system that could become an efficient concurrent reading tool to assist clinicians, utilising a newly created European dataset including more than 2,800 CT scans.
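The micro- and macro-averaged AUC values quoted above are standard one-vs-rest summaries for a three-class classifier; a minimal sketch with scikit-learn on synthetic predictions (the class encoding and the synthetic data are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
# Classes: 0 = COVID-19, 1 = other pulmonary infection, 2 = no imaging signs of infection.
y_true = rng.integers(0, 3, size=200)

# Synthetic class probabilities made mildly informative so the AUCs are above chance.
logits = rng.normal(size=(200, 3)) + 1.5 * np.eye(3)[y_true]
y_prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

y_true_bin = label_binarize(y_true, classes=[0, 1, 2])
micro_auc = roc_auc_score(y_true_bin, y_prob, average="micro")
macro_auc = roc_auc_score(y_true_bin, y_prob, average="macro")
print(f"micro-average AUC = {micro_auc:.2f}, macro-average AUC = {macro_auc:.2f}")
```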


Subject(s)
COVID-19; Deep Learning; Humans; Female; Male; Middle Aged; COVID-19/diagnostic imaging; Artificial Intelligence; Lung/diagnostic imaging; COVID-19 Testing; Cohort Studies; SARS-CoV-2; Tomography, X-Ray Computed/methods
17.
Radiol Clin North Am ; 59(6): 955-966, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34689880

ABSTRACT

The potential of artificial intelligence (AI) in radiology goes far beyond image analysis. AI can be used to optimize all steps of the radiology workflow by supporting a variety of nondiagnostic tasks, including order entry support, patient scheduling, resource allocation, and improving the radiologist's workflow. This article discusses several principal directions of using AI algorithms to improve radiological operations and workflow management, with the intention of providing a broader understanding of the value of applying AI in the radiology department.


Subject(s)
Artificial Intelligence; Diagnostic Imaging/methods; Image Interpretation, Computer-Assisted/methods; Radiology/methods; Workflow; Humans
18.
Eur J Radiol ; 136: 109566, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33556686

ABSTRACT

PURPOSE: We aimed to systematically analyse how the radiology community discusses the concept of artificial intelligence (AI), perceives its benefits, and reflects on its limitations. METHODS: We conducted a qualitative, systematic discourse analysis on 200 social-media posts collected over a period of five months (April-August 2020). RESULTS: The discourse on AI is active, albeit often referring to AI as an umbrella term and lacking precision on the context (e.g. research, clinical) and the temporal focus (e.g. current AI, future AI). The discourse is also somewhat split between optimism and pessimism. The latter considers a wider range of social, ethical and legal factors than the former, which tends to focus on concrete technologies and their functionalities. CONCLUSIONS: Further precision in the discourse could lead to more constructive conversations around AI. The split between optimism and pessimism calls for a constant exchange and synthesis between the two perspectives. Practical conversations (e.g. business models) remain rare, but may be crucial for an effective implementation of AI in clinical practice.


Subject(s)
Artificial Intelligence; Radiology; Forecasting; Humans; Radiography
19.
Radiol Artif Intell ; 3(6): e210027, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870218

ABSTRACT

PURPOSE: To determine whether deep learning algorithms developed in a public competition could identify lung cancer on low-dose CT scans with a performance similar to that of radiologists. MATERIALS AND METHODS: In this retrospective study, a dataset consisting of 300 patient scans was used for model assessment; 150 patient scans were from the competition set and 150 were from an independent dataset. Both test datasets contained 50 cancer-positive scans and 100 cancer-negative scans. The reference standard was set by histopathologic examination for cancer-positive scans and imaging follow-up for at least 2 years for cancer-negative scans. The test datasets were applied to the three top-performing algorithms from the Kaggle Data Science Bowl 2017 public competition: grt123, Julian de Wit and Daniel Hammack (JWDH), and Aidence. Model outputs were compared with an observer study of 11 radiologists that assessed the same test datasets. Each scan was scored on a continuous scale by both the deep learning algorithms and the radiologists. Performance was measured using multireader, multicase receiver operating characteristic analysis. RESULTS: The area under the receiver operating characteristic curve (AUC) was 0.877 (95% CI: 0.842, 0.910) for grt123, 0.902 (95% CI: 0.871, 0.932) for JWDH, and 0.900 (95% CI: 0.870, 0.928) for Aidence. The average AUC of the radiologists was 0.917 (95% CI: 0.889, 0.945), which was significantly higher than grt123 (P = .02); however, no significant difference was found between the radiologists and JWDH (P = .29) or Aidence (P = .26). CONCLUSION: Deep learning algorithms developed in a public competition for lung cancer detection in low-dose CT scans reached performance close to that of radiologists. Keywords: Lung, CT, Thorax, Screening, Oncology. Supplemental material is available for this article. © RSNA, 2021.
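The per-algorithm AUC values with 95% CIs reported above are standard ROC summaries; the study itself used multireader, multicase ROC analysis, which is more involved, but a minimal sketch of a single AUC with a percentile-bootstrap confidence interval on synthetic suspicion scores looks like this (all data below are synthetic):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic continuous suspicion scores: 50 cancer-positive and 100 cancer-negative scans,
# mirroring the composition of each test set described in the abstract.
y_true = np.array([1] * 50 + [0] * 100)
y_score = np.concatenate([rng.normal(1.2, 1.0, 50), rng.normal(0.0, 1.0, 100)])

auc = roc_auc_score(y_true, y_score)

# Percentile bootstrap over cases for a 95% confidence interval.
boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
low, high = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI: {low:.3f}-{high:.3f})")
```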

20.
Insights Imaging ; 11(1): 33, 2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32128639

ABSTRACT

OBJECTIVES: Podcasts are audio recordings distributed via the Internet. We review the availability of podcasts on the topic of radiology. METHODS: A search for podcasts relating to radiology was performed using search engines and free public websites that either hosted or distributed podcasts. Only English language podcast series were included, and video podcasts were excluded. Data was gathered by manually interrogating the metadata on the primary hosting platform and related websites. RESULTS: Forty-one podcast series met the inclusion criteria. The earliest was from 2005. In total, 56.1% of podcasts were defined as active and 43.9% inactive at the time of publication. Number of episodes for each podcast series ranged from 1 to 269 with 56.1% of podcasts having ≤ 10 episodes. There was a wide variation in podcast series' frequency/schedules. The most common subject topic was 'radiology current affairs' (43.9%), with the least common 'exam revision' (7.3%) and 'radiography' (7.3%). The majority of podcasts were targeted at radiologists (87.8%) and originated from the USA (70.1%). Podcast hosts consisted of doctors (63.4%), other professionals (29.3%) or unknown (7.3%). Additional supplementary media or information as show notes were provided by 26.8% of radiology podcast series. CONCLUSIONS: This gives a new insight into the world of 'radiology podcasting'. To the authors' knowledge, this is the first review in the literature and highlights the increasing availability of podcasting in radiology.
