Results 1 - 20 of 24,922
1.
Lab Invest ; 104(6): 102060, 2024 06.
Article in English | MEDLINE | ID: mdl-38626875

ABSTRACT

Precision medicine aims to provide personalized care based on individual patient characteristics, rather than guideline-directed therapies for groups of diseases or patient demographics. Images, both radiology and pathology derived, are a major source of information on presence, type, and status of disease. Exploring the mathematical relationship of pixels in medical imaging ("radiomics") and cellular-scale structures in digital pathology slides ("pathomics") offers powerful tools for extracting both qualitative and, increasingly, quantitative data. These analytical approaches, however, may be significantly enhanced by applying additional methods arising from fields of mathematics such as differential geometry and algebraic topology that remain underexplored in this context. Geometry's strength lies in its ability to provide precise local measurements, such as curvature, that can be crucial for identifying abnormalities at multiple spatial levels. These measurements can augment the quantitative features extracted in conventional radiomics, leading to more nuanced diagnostics. By contrast, topology serves as a robust shape descriptor, capturing essential features such as connected components and holes. The field of topological data analysis was initially founded to explore the shape of data, with functional network connectivity in the brain being a prominent example. Increasingly, its tools are now being used to explore organizational patterns of physical structures in medical images and digitized pathology slides. By leveraging tools from both differential geometry and algebraic topology, researchers and clinicians may be able to obtain a more comprehensive, multi-layered understanding of medical images and contribute to precision medicine's armamentarium.
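As a hedged illustration of the topological descriptors mentioned above, the sketch below counts connected components (the zeroth Betti number) of a hypothetical point cloud, such as nuclei centroids extracted from a pathology slide, at one distance threshold; persistent homology extends this by tracking how components and holes appear and merge across all thresholds. The data and threshold are invented for illustration.

```python
# Illustrative sketch (not from the article): count connected components
# (Betti-0) of a hypothetical point cloud at a fixed distance threshold,
# the simplest topological shape descriptor.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
points = rng.random((200, 2))            # hypothetical centroid coordinates

threshold = 0.05                          # connect points closer than this
dist = squareform(pdist(points))          # pairwise Euclidean distances
adjacency = csr_matrix(dist < threshold)  # graph of "nearby" points

n_components, labels = connected_components(adjacency, directed=False)
print(f"Connected components (Betti-0) at r={threshold}: {n_components}")
```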


Subject(s)
Precision Medicine, Precision Medicine/methods, Humans, Radiology/methods, Image Processing, Computer-Assisted/methods
2.
Radiology ; 312(1): e232085, 2024 07.
Article in English | MEDLINE | ID: mdl-39041937

ABSTRACT

Deep learning (DL) is currently the standard artificial intelligence tool for computer-based image analysis in radiology. Traditionally, DL models have been trained with strongly supervised learning methods. These methods depend on reference standard labels, typically applied manually by experts. In contrast, weakly supervised learning is more scalable. Weak supervision comprises situations in which only a portion of the data are labeled (incomplete supervision), labels refer to a whole region or case as opposed to a precisely delineated image region (inexact supervision), or labels contain errors (inaccurate supervision). In many applications, weak labels are sufficient to train useful models. Thus, weakly supervised learning can unlock a large amount of otherwise unusable data for training DL models. One example of this is using large language models to automatically extract weak labels from free-text radiology reports. Here, we outline the key concepts in weakly supervised learning and provide an overview of applications in radiologic image analysis. With more fundamental and clinical translational work, weakly supervised learning could facilitate the uptake of DL in radiology and research workflows by enabling large-scale image analysis and advancing the development of new DL-based biomarkers.
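To make the weak-supervision idea concrete, here is a minimal sketch of an "inexact" labeling step: deriving a case-level label from free-text report impressions with simple rules rather than expert annotations. The rule, finding, and example reports are hypothetical.

```python
# Hypothetical weak labeling: a case-level label derived from report text
# with simple rules instead of expert pixel-level annotation.
import re

def weak_label_pneumothorax(impression: str) -> int:
    """Return 1 if the impression suggests pneumothorax, else 0."""
    text = impression.lower()
    if re.search(r"no (evidence of )?pneumothorax", text):
        return 0
    return 1 if "pneumothorax" in text else 0

reports = [
    "Small right apical pneumothorax.",
    "No pneumothorax or pleural effusion.",
    "Clear lungs.",
]
labels = [weak_label_pneumothorax(r) for r in reports]
print(labels)  # [1, 0, 0]
```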


Subject(s)
Deep Learning, Radiology, Humans, Radiology/education, Supervised Machine Learning, Image Interpretation, Computer-Assisted/methods
3.
Radiology ; 313(1): e241489, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39377675

ABSTRACT

Supplemental material is available for this article. See also the editorial by Forghani in this issue.


Subject(s)
Radiology, Humans, Radiology/education, Educational Measurement/methods, Specialty Boards, Clinical Competence
4.
Radiology ; 310(3): e231593, 2024 03.
Article in English | MEDLINE | ID: mdl-38530171

ABSTRACT

Background The complex medical terminology of radiology reports may cause confusion or anxiety for patients, especially given increased access to electronic health records. Large language models (LLMs) can potentially simplify radiology report readability. Purpose To compare the performance of four publicly available LLMs (ChatGPT-3.5 and ChatGPT-4, Bard [now known as Gemini], and Bing) in producing simplified radiology report impressions. Materials and Methods In this retrospective comparative analysis of the four LLMs (accessed July 23 to July 26, 2023), the Medical Information Mart for Intensive Care (MIMIC)-IV database was used to gather 750 anonymized radiology report impressions covering a range of imaging modalities (MRI, CT, US, radiography, mammography) and anatomic regions. Three distinct prompts were employed to assess the LLMs' ability to simplify report impressions. The first prompt (prompt 1) was "Simplify this radiology report." The second prompt (prompt 2) was "I am a patient. Simplify this radiology report." The last prompt (prompt 3) was "Simplify this radiology report at the 7th grade level." Each prompt was followed by the radiology report impression and was queried once. The primary outcome was simplification as assessed by readability score. Readability was assessed using the average of four established readability indexes. The nonparametric Wilcoxon signed-rank test was applied to compare reading grade levels across LLM output. Results All four LLMs simplified radiology report impressions across all prompts tested (P < .001). Within prompts, differences were found between LLMs. Providing the context of being a patient or requesting simplification at the seventh-grade level reduced the reading grade level of output for all models and prompts (except prompt 1 to prompt 2 for ChatGPT-4) (P < .001). Conclusion Although the success of each LLM varied depending on the specific prompt wording, all four models simplified radiology report impressions across all modalities and prompts tested. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Rahsepar in this issue.
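A hedged sketch of the readability-scoring step follows, assuming the open-source textstat package; the four indexes averaged here (Flesch-Kincaid grade, Gunning fog, SMOG, and automated readability index) and the example impressions are assumptions, not necessarily the study's exact choices.

```python
# Sketch of averaging several readability indexes into one reading grade.
# Index selection and example text are illustrative assumptions.
import textstat

def mean_reading_grade(text: str) -> float:
    scores = [
        textstat.flesch_kincaid_grade(text),
        textstat.gunning_fog(text),
        textstat.smog_index(text),
        textstat.automated_readability_index(text),
    ]
    return sum(scores) / len(scores)

original = "Impression: Multifocal hepatic steatosis without focal lesion."
simplified = "Your liver has extra fat in it, but we did not see any lumps."
print(mean_reading_grade(original), mean_reading_grade(simplified))
```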


Subject(s)
Confusion, Radiology, Humans, Retrospective Studies, Databases, Factual, Language
5.
Radiology ; 311(1): e232714, 2024 04.
Article in English | MEDLINE | ID: mdl-38625012

ABSTRACT

Background Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports. Purpose To assess effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency. Materials and Methods In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. There were 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ2 tests and paired-sample t tests. Results GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522-.99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = -1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = -1.12). Conclusion The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost. © RSNA, 2024 See also the editorial by Forman in this issue.
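As a generic illustration of this kind of error-checking workflow (not the authors' actual prompt or pipeline), the sketch below sends a report impression to a GPT-4-class model using the openai Python client; the prompt wording and example report are invented.

```python
# Generic sketch of prompting a GPT-4-class model to flag report errors,
# assuming the openai Python client (>=1.0); prompt and report are invented.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

report = ("Impression: Left lower lobe pneumonia. "
          "Recommend follow-up of the right lower lobe opacity.")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "You check radiology reports for omission, insertion, spelling, "
            "side-confusion, and other errors. List each suspected error."
        )},
        {"role": "user", "content": report},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```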


Subject(s)
Radiology, Humans, Retrospective Studies, Radiography, Radiologists, Confusion
6.
Radiology ; 312(3): e233065, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39315901

ABSTRACT

Background Report writing skills are a core competency to be acquired during residency, yet objective tools for tracking performance are lacking. Purpose To investigate whether the Jaccard index, derived from report comparison, can objectively illustrate learning curves in report writing performance throughout radiology residency. Materials and Methods Retrospective data from 246 984 radiology reports written from September 2017 to November 2022 in a tertiary care radiology department were included. Reports were scored using the Jaccard similarity coefficient (ie, a quantitative expression of the amount of edits performed; range, 0-1) of residents' draft (unsupervised initial attempt at a complete report) or preliminary reports (following joint readout with attending physicians) and faculty-reviewed final reports. Weighted mean Jaccard similarity was compared between years of experience using Welch analysis of variance with post hoc testing overall, per imaging division, and per modality. Relationships with years and quarters of resident experience were assessed using Spearman correlation. Results This study included 53 residents (mean report count, 4660 ± 3546; 1-5 years of experience). Mean Jaccard similarity of preliminary reports increased by 6% from 1st-year to 5th-year residents (0.86 ± 0.22 to 0.92 ± 0.15; P < .001). Spearman correlation demonstrated a strong relationship between residents' experience and higher report similarity when aggregated for years (rs = 0.99 [95% CI: 0.85, 1.00]; P < .001) or quarters of experience (rs = 0.90 [95% CI: 0.73, 0.96]; P < .001). For residents' draft reports, Jaccard similarity increased by 14% over the course of the 5-year residency program (0.68 ± 0.27 to 0.82 ± 0.23; P < .001). Subgroup analysis confirmed similar trends for all imaging divisions and modalities (eg, in musculoskeletal imaging, from 0.77 ± 0.31 to 0.91 ± 0.16 [P < .001]; rs = 0.98 [95% CI: 0.72, 1.00] [P < .001]). Conclusion Residents' report writing performance increases with experience. Trends can be quantified with the Jaccard index, with a 6% improvement from 1st- to 5th-year residents, indicating its effectiveness as a tool for evaluating training progress and guiding education over the course of residency. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Bruno in this issue.
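The core metric is straightforward to sketch: a Jaccard similarity between the word sets of a resident draft and the faculty-reviewed final report, where values near 1 mean few edits. The tokenization and any weighting used in the study are not specified here, so this is only illustrative.

```python
# Illustrative Jaccard similarity between a draft and a final report.
def jaccard_similarity(a: str, b: str) -> float:
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

draft = "No acute intracranial hemorrhage or mass effect"
final = "No acute intracranial hemorrhage, mass effect, or midline shift"
print(round(jaccard_similarity(draft, final), 2))  # closer to 1 = fewer edits
```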


Subject(s)
Clinical Competence, Internship and Residency, Learning Curve, Radiology, Writing, Radiology/education, Humans, Retrospective Studies, Education, Medical, Graduate/methods
7.
Radiology ; 312(3): e240153, 2024 09.
Article in English | MEDLINE | ID: mdl-39225605

ABSTRACT

Background Recent advancements, including image processing capabilities, present new potential applications of large language models such as ChatGPT (OpenAI), a generative pretrained transformer, in radiology. However, baseline performance of ChatGPT in radiology-related tasks is understudied. Purpose To evaluate the performance of GPT-4 with vision (GPT-4V) on radiology in-training examination questions, including those with images, to gauge the model's baseline knowledge in radiology. Materials and Methods In this prospective study, conducted between September 2023 and March 2024, the September 2023 release of GPT-4V was assessed using 386 retired questions (189 image-based and 197 text-only questions) from the American College of Radiology Diagnostic Radiology In-Training Examinations. Nine question pairs were identified as duplicates; only the first instance of each duplicate was considered in ChatGPT's assessment. A subanalysis assessed the impact of different zero-shot prompts on performance. Statistical analysis included χ2 tests of independence to ascertain whether the performance of GPT-4V varied between question types or subspecialty. The McNemar test was used to evaluate performance differences between the prompts, with Benjamini-Hochberg adjustment of the P values conducted to control the false discovery rate (FDR). A P value threshold of less than .05 denoted statistical significance. Results GPT-4V correctly answered 246 (65.3%) of the 377 unique questions, with significantly higher accuracy on text-only questions (81.5%, 159 of 195) than on image-based questions (47.8%, 87 of 182) (χ2 test, P < .001). Subanalysis revealed differences between prompts on text-based questions, where chain-of-thought prompting outperformed long instruction by 6.1% (McNemar, P = .02; FDR = 0.063), basic prompting by 6.8% (P = .009, FDR = 0.044), and the original prompting style by 8.9% (P = .001, FDR = 0.014). No differences were observed between prompts on image-based questions with P values of .27 to >.99. Conclusion While GPT-4V demonstrated a level of competence in text-based questions, it showed deficits in interpreting radiologic images. © RSNA, 2024 See also the editorial by Deng in this issue.
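A hedged sketch of the statistical mechanics described above, assuming statsmodels: a McNemar test on a made-up 2x2 table of paired correct/incorrect answers for two prompts, followed by Benjamini-Hochberg adjustment of several such P values.

```python
# Sketch of a paired prompt comparison; counts and P values are fabricated.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multitest import multipletests

# Rows: prompt A correct/incorrect; columns: prompt B correct/incorrect.
table = np.array([[140, 20],
                  [8, 27]])
result = mcnemar(table, exact=True)
print("McNemar P value:", result.pvalue)

# Benjamini-Hochberg adjustment across several prompt comparisons.
p_values = [0.02, 0.009, 0.001, 0.27]
rejected, p_adjusted, _, _ = multipletests(p_values, method="fdr_bh")
print(p_adjusted)
```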


Subject(s)
Educational Measurement, Radiology, Humans, Prospective Studies, Radiology/education, Educational Measurement/methods, Clinical Competence, United States, Internship and Residency, Education, Medical, Graduate/methods
8.
Radiology ; 310(1): e231469, 2024 01.
Article in English | MEDLINE | ID: mdl-38259205

ABSTRACT

Background Health care access disparities and lack of inclusion in clinical research have been well documented for marginalized populations. However, few studies exist examining the research funding of institutions that serve historically underserved groups. Purpose To assess the relationship between research funding awarded to radiology departments by the National Institutes of Health (NIH) and Lown Institute Hospitals Index rankings for inclusivity and community benefit. Materials and Methods This retrospective study included radiology departments awarded funding from the NIH between 2017 and 2021. The 2021 Lown Institute Hospitals Index rankings for inclusivity and community benefit were examined. The inclusivity metric measures how similar a hospital's patient population is to the surrounding community in terms of income, race and ethnicity, and education level. The community benefit metric measures charity care spending, Medicaid as a proportion of patient revenue, and other community benefit spending. Linear regression and Pearson correlation coefficients (r values) were used to evaluate the relationship between aggregate NIH radiology department research funding and measures of inclusivity and community benefit. Results Seventy-five radiology departments that received NIH funding ranging from $195 000 to $216 879 079 were included. A negative correlation was observed between the amount of radiology department research funding received and institutional rankings for serving patients from racial and/or ethnic minorities (r = -0.34; P < .001), patients with low income (r = -0.44; P < .001), and patients with lower levels of education (r = -0.46; P < .001). No correlation was observed between the amount of radiology department research funding and institutional rankings for charity care spending (r = -0.19; P = .06), community investment (r = -0.04; P = .68), and Medicaid as a proportion of patient revenue (r = -0.10; P = .22). Conclusion Radiology departments that received more NIH research funding were less likely to serve patients from racial and/or ethnic minorities and patients who had low income or lower levels of education. © RSNA, 2024 See also the editorial by Mehta and Rosen in this issue.
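For illustration of the correlation analysis, assuming scipy, the sketch below computes a Pearson r between fabricated funding amounts and inclusivity rankings; the numbers are placeholders and do not reproduce the study data.

```python
# Pearson correlation sketch with fabricated data (not the study values).
import numpy as np
from scipy.stats import pearsonr

funding = np.array([0.2, 1.5, 3.0, 12.0, 45.0, 120.0])       # hypothetical $ millions
inclusivity_rank = np.array([0.9, 0.7, 0.6, 0.5, 0.3, 0.2])  # hypothetical percentile

r, p = pearsonr(np.log10(funding), inclusivity_rank)
print(f"r = {r:.2f}, P = {p:.3f}")
```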


Subject(s)
Radiology Department, Hospital, Radiology, United States, Humans, Retrospective Studies, Hospitals, Academies and Institutes
9.
Radiology ; 310(2): e232030, 2024 02.
Article in English | MEDLINE | ID: mdl-38411520

ABSTRACT

According to the World Health Organization, climate change is the single biggest health threat facing humanity. The global health care system, including medical imaging, must manage the health effects of climate change while at the same time addressing the large amount of greenhouse gas (GHG) emissions generated in the delivery of care. Data centers and computational efforts are increasingly large contributors to GHG emissions in radiology. This is due to the explosive increase in big data and artificial intelligence (AI) applications that have resulted in large energy requirements for developing and deploying AI models. However, AI also has the potential to improve environmental sustainability in medical imaging. For example, use of AI can shorten MRI scan times with accelerated acquisition times, improve the scheduling efficiency of scanners, and optimize the use of decision-support tools to reduce low-value imaging. The purpose of this Radiology in Focus article is to discuss this duality at the intersection of environmental sustainability and AI in radiology. Further discussed are strategies and opportunities to decrease AI-related emissions and to leverage AI to improve sustainability in radiology, with a focus on health equity. Co-benefits of these strategies are explored, including lower cost and improved patient outcomes. Finally, knowledge gaps and areas for future research are highlighted.


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Big Data, Climate Change
10.
Radiology ; 310(3): e231972, 2024 03.
Article in English | MEDLINE | ID: mdl-38470234

ABSTRACT

Background Previous studies have shown an increase in the number of authors on radiologic articles between 1950 and 2013, but the cause is unclear. Purpose To determine whether authorship rate in radiologic and general medical literature has continued to increase and to assess study variables associated with increased author numbers. Materials and Methods PubMed/Medline was searched for articles published between January 1998 and October 2022 in general radiology and general medical journals with the top five highest current impact factors. Generalized linear regression analysis was used to calculate adjusted incidence rate ratios (IRRs) for the numbers of authors. Wald tests assessed the associations between study variables and the numbers of authors per article. Combined mixed-effects regression analysis was performed to compare general medicine and radiology journals. Results There were 3381 original radiologic research articles that were analyzed. Authorship rate increased between 1998 (median, six authors; IQR, 4) and 2022 (median, 11 authors; IQR, 8). Later publication year was associated with more authors per article (IRR, 1.02; 95% CI: 1.01, 1.02; P < .001) after adjusting for publishing journal, continent of origin of first author, number of countries involved, PubMed/Medline original article type, study design, number of disciplines involved, multicenter or single-center study, reporting of a priori power calculation, reporting of obtaining informed consent, study sample size, and number of article pages. There were 1250 general medicine original research articles that were analyzed. Later publication year was also associated with more authors after adjustment for the study variables (IRR, 1.04; 95% CI: 1.03, 1.05; P < .001). There was a stronger increase in authorship by publication year for general medicine journals compared with radiology journals (IRR, 1.02; 95% CI: 1.01, 1.02; P < .001). Conclusion An increase in authorship rate was observed in the radiologic and general medical literature between 1998 and 2022, and the number of authors per article was independently associated with later year of publication. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Arrivé in this issue.
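A minimal sketch of how an adjusted incidence rate ratio (IRR) for author counts could be estimated with a Poisson generalized linear model, assuming statsmodels; the toy data and the single covariate (publication year) are simplifications of the study's full adjustment set.

```python
# Poisson GLM sketch: exponentiated coefficient approximates the IRR per year.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
years = rng.integers(1998, 2023, size=500)
authors = rng.poisson(lam=np.exp(0.02 * (years - 1998) + np.log(6)))
df = pd.DataFrame({"year": years, "authors": authors})

model = smf.glm("authors ~ year", data=df, family=sm.families.Poisson()).fit()
irr = np.exp(model.params["year"])   # IRR per additional publication year
print(f"IRR per year: {irr:.3f}")
```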


Subject(s)
General Medicine, Radiology, Humans, Authorship, Research Design
11.
Radiology ; 311(1): e232806, 2024 04.
Article in English | MEDLINE | ID: mdl-38563670

ABSTRACT

Background The increasing use of teleradiology has been accompanied by concerns relating to risk management and patient safety. Purpose To compare characteristics of teleradiology and nonteleradiology radiology malpractice cases and identify contributing factors underlying these cases. Materials and Methods In this retrospective analysis, a national database of medical malpractice cases was queried to identify cases involving telemedicine that closed between January 2010 and March 2022. Teleradiology malpractice cases were identified based on manual review of cases in which telemedicine was coded as one of the contributing factors. These cases were compared with nonteleradiology cases that closed during the same time period in which radiology had been determined to be the primary responsible clinical service. Claimant, clinical, and financial characteristics of the cases were recorded, and continuous or categorical data were compared using the Wilcoxon rank-sum test or Fisher exact test, respectively. Results This study included 135 teleradiology and 3474 radiology malpractice cases. The death of a patient occurred more frequently in teleradiology cases (48 of 135 [35.6%]) than in radiology cases (685 of 3474 [19.7%]; P < .001). Cerebrovascular disease was a more common final diagnosis in the teleradiology cases (13 of 135 [9.6%]) compared with the radiology cases (124 of 3474 [3.6%]; P = .002). Problems with communication among providers were a more frequent contributing factor in the teleradiology cases (35 of 135 [25.9%]) than in the radiology cases (439 of 3474 [12.6%]; P < .001). Teleradiology cases were more likely to close with indemnity payment (79 of 135 [58.5%]) than the radiology cases (1416 of 3474 [40.8%]; P < .001) and had a higher median indemnity payment than the radiology cases ($339 230 [IQR, $120 790-$731 615] vs $214 063 [IQR, $66 620-$585 424]; P = .01). Conclusion Compared with radiology cases, teleradiology cases had higher clinical and financial severity and were more likely to involve issues with communication. © RSNA, 2024 See also the editorial by Mezrich in this issue.
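As a small worked example of the categorical comparison, assuming scipy, the sketch below applies the Fisher exact test to the death counts reported in the abstract (48 of 135 teleradiology cases vs 685 of 3474 radiology cases).

```python
# Fisher exact test on the 2x2 table of deaths vs non-deaths by case type,
# using the counts given in the abstract.
from scipy.stats import fisher_exact

table = [[48, 135 - 48],
         [685, 3474 - 685]]
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio: {odds_ratio:.2f}, P = {p_value:.4g}")
```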


Subject(s)
Malpractice, Radiology, Telemedicine, Teleradiology, Humans, Retrospective Studies
12.
Radiology ; 311(1): e240219, 2024 04.
Article in English | MEDLINE | ID: mdl-38652030

ABSTRACT

Climate change adversely affects the well-being of humans and the entire planet. A planetary health framework recognizes that sustaining a healthy planet is essential to achieving individual, community, and global health. Radiology contributes to the climate crisis by generating greenhouse gas (GHG) emissions during the production and use of medical imaging equipment and supplies. To promote planetary health, strategies that mitigate and adapt to climate change in radiology are needed. Mitigation strategies to reduce GHG emissions include switching to renewable energy sources, refurbishing rather than replacing imaging scanners, and powering down unused scanners. Radiology departments must also build resiliency to the now unavoidable impacts of the climate crisis. Adaptation strategies include education, upgrading building infrastructure, and developing departmental sustainability dashboards to track progress in achieving sustainability goals. Shifting practices to catalyze these necessary changes in radiology requires a coordinated approach. This includes partnering with key stakeholders, providing effective communication, and prioritizing high-impact interventions. This article reviews the intersection of planetary health and radiology. Its goals are to emphasize why we should care about sustainability, showcase actions we can take to mitigate our impact, and prepare us to adapt to the effects of climate change. © RSNA, 2024 Supplemental material is available for this article. See also the article by Ibrahim et al in this issue. See also the article by Lenkinski and Rofsky in this issue.


Subject(s)
Climate Change, Global Health, Humans, Greenhouse Gases, Radiology, Radiology Department, Hospital/organization & administration
13.
Radiology ; 313(1): e240609, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39352277

ABSTRACT

Background GPT-4V (GPT-4 with vision, ChatGPT; OpenAI) has shown impressive performance in several medical assessments. However, few studies have assessed its performance in interpreting radiologic images. Purpose To assess and compare the accuracy of GPT-4V in assessing radiologic cases with both images and textual context to that of radiologists and residents, to assess if GPT-4V assistance improves human accuracy, and to assess and compare the accuracy of GPT-4V with that of image-only or text-only inputs. Materials and Methods Seventy-two Case of the Day questions at the RSNA 2023 Annual Meeting were curated in this observer study. Answers from GPT-4V were obtained between November 26 and December 10, 2023, with the following inputs for each question: image only, text only, and both text and images. Five radiologists and three residents also answered the questions in an "open book" setting. For the artificial intelligence (AI)-assisted portion, the radiologists and residents were provided with the outputs of GPT-4V. The accuracy of radiologists and residents, both with and without AI assistance, was analyzed using a mixed-effects linear model. The accuracies of GPT-4V with different input combinations were compared by using the McNemar test. P < .05 was considered to indicate a significant difference. Results The accuracy of GPT-4V was 43% (31 of 72; 95% CI: 32, 55). Radiologists and residents did not significantly outperform GPT-4V in either imaging-dependent (59% and 56% vs 39%; P = .31 and .52, respectively) or imaging-independent (76% and 63% vs 70%; both P = .99) cases. With access to GPT-4V responses, there was no evidence of improvement in the average accuracy of the readers. The accuracy obtained by GPT-4V with text-only and image-only inputs was 50% (35 of 70; 95% CI: 39, 61) and 38% (26 of 69; 95% CI: 27, 49), respectively. Conclusion The radiologists and residents did not significantly outperform GPT-4V. Assistance from GPT-4V did not help human raters. GPT-4V relied on the textual context for its outputs. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Katz in this issue.
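A generic sketch of a combined image-plus-text query to a vision-capable GPT-4 model follows, assuming the openai Python client; the model name, image URL, and question are placeholders rather than the RSNA Case of the Day material.

```python
# Generic multimodal (image + text) request, assuming the openai client (>=1.0).
# The model name and URL are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the most likely diagnosis? Answer briefly."},
            {"type": "image_url", "image_url": {"url": "https://example.com/case_image.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```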


Subject(s)
Radiology, Humans, Clinical Competence, Artificial Intelligence, Societies, Medical, Internship and Residency
14.
Radiology ; 310(1): e232756, 2024 01.
Article in English | MEDLINE | ID: mdl-38226883

ABSTRACT

Although chatbots have existed for decades, the emergence of transformer-based large language models (LLMs) has captivated the world through the most recent wave of artificial intelligence chatbots, including ChatGPT. Transformers are a type of neural network architecture that enables better contextual understanding of language and efficient training on massive amounts of unlabeled data, such as unstructured text from the internet. As LLMs have increased in size, their improved performance and emergent abilities have revolutionized natural language processing. Since language is integral to human thought, applications based on LLMs have transformative potential in many industries. In fact, LLM-based chatbots have demonstrated human-level performance on many professional benchmarks, including in radiology. LLMs offer numerous clinical and research applications in radiology, several of which have been explored in the literature with encouraging results. Multimodal LLMs can simultaneously interpret text and images to generate reports, closely mimicking current diagnostic pathways in radiology. Thus, from requisition to report, LLMs have the opportunity to positively impact nearly every step of the radiology journey. Yet, these impressive models are not without limitations. This article reviews the limitations of LLMs and mitigation strategies, as well as potential uses of LLMs, including multimodal models. Also reviewed are existing LLM-based applications that can enhance efficiency in supervised settings.


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Benchmarking, Industry
15.
Radiology ; 310(1): e223170, 2024 01.
Article in English | MEDLINE | ID: mdl-38259208

ABSTRACT

Despite recent advancements in machine learning (ML) applications in health care, there have been few benefits and improvements to clinical medicine in the hospital setting. To facilitate clinical adaptation of methods in ML, this review proposes a standardized framework for the step-by-step implementation of artificial intelligence into the clinical practice of radiology that focuses on three key components: problem identification, stakeholder alignment, and pipeline integration. A review of the recent literature and empirical evidence in radiologic imaging applications justifies this approach and offers a discussion on structuring implementation efforts to help other hospital practices leverage ML to improve patient care. Clinical trial registration no. 04242667 © RSNA, 2024 Supplemental material is available for this article.


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Algorithms, Machine Learning
16.
Radiology ; 310(3): e231986, 2024 03.
Article in English | MEDLINE | ID: mdl-38501953

ABSTRACT

Photon-counting CT (PCCT) is an emerging advanced CT technology that differs from conventional CT in its ability to directly convert incident x-ray photon energies into electrical signals. The detector design also permits substantial improvements in spatial resolution and radiation dose efficiency and allows for concurrent high-pitch and high-temporal-resolution multienergy imaging. This review summarizes (a) key differences in PCCT image acquisition and image reconstruction compared with conventional CT; (b) early evidence for the clinical benefit of PCCT for high-spatial-resolution diagnostic tasks in thoracic imaging, such as assessment of airway and parenchymal diseases, as well as benefits of high-pitch and multienergy scanning; (c) anticipated radiation dose reduction, depending on the diagnostic task, and increased utility for routine low-dose thoracic CT imaging; (d) adaptations for thoracic imaging in children; (e) potential for further quantitation of thoracic diseases; and (f) limitations and trade-offs. Moreover, important points for conducting and interpreting clinical studies examining the benefit of PCCT relative to conventional CT and integration of PCCT systems into multivendor, multispecialty radiology practices are discussed.


Subject(s)
Radiology, Tomography, X-Ray Computed, Child, Humans, Image Processing, Computer-Assisted, Photons
17.
Radiology ; 312(3): e233057, 2024 09.
Article in English | MEDLINE | ID: mdl-39225601

ABSTRACT

Background Podcasts have become an increasingly popular method of communicating information in medicine, including in radiology. However, the effect of podcasts on the reach of journal articles remains unclear. Purpose To evaluate the influence of Radiology podcasts on the performance metrics, including downloads, citations, and Altmetric Attention Score (AAS), of Radiology articles. Materials and Methods This was a retrospective study. All articles published in the print version of Radiology from January 2021 to December 2022 were reviewed; editorials and case reports were excluded. Articles featured on Radiology podcasts were included in the podcast group. Articles published within the same journal issue and category were the nonpodcast group. Downloads, Google Scholar citations, Dimensions citations, and AAS metrics were recorded. The Mann-Whitney U test was used to compare medians and evaluate differences between older and more recently published articles. Results The podcast group, composed of 88 articles, exhibited significantly higher median values for downloads (podcast group, 4521.0; nonpodcast group, 2123.0; P < .001), Google Scholar citations (podcast group, 14.5; nonpodcast group, 10.0; P = .01), Dimensions citations (podcast group, 12.0; nonpodcast group, 9.0; P = .01), and AAS (podcast group, 43.0; nonpodcast group, 10.0; P < .001) compared with the nonpodcast group of 378 articles. Within both groups, articles published in the earlier period (January to June 2021) had higher downloads (podcast group, P = .08; nonpodcast group, P < .001), Google Scholar citations (podcast group and nonpodcast group, P < .001), and Dimensions citations (podcast group and nonpodcast group, P < .001) than articles from the later period (July to December 2022). AAS markedly increased in recent podcast articles (P = .03), but AAS for nonpodcast articles significantly decreased over time (P = .01). Conclusion Radiology articles featured on the Radiology podcast had greater median metrics, including downloads, Google Scholar citations, Dimensions citations, and AAS, compared with nonpodcast articles, suggesting that podcasts can be an effective method of disseminating and amplifying research within the field of radiology. © RSNA, 2024 See also the editorial by Chu and Nicola in this issue.
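For illustration, a Mann-Whitney U comparison of median downloads could be run as below, assuming scipy; the download counts are fabricated stand-ins for the podcast and nonpodcast groups.

```python
# Mann-Whitney U sketch with fabricated download counts (not the study data).
from scipy.stats import mannwhitneyu

podcast_downloads = [5210, 4380, 6120, 3990, 4875]
nonpodcast_downloads = [2050, 1980, 2410, 2230, 1890, 2600]

statistic, p_value = mannwhitneyu(podcast_downloads, nonpodcast_downloads,
                                  alternative="two-sided")
print(f"U = {statistic}, P = {p_value:.3f}")
```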


Subject(s)
Periodicals as Topic, Radiology, Webcasts as Topic, Humans, Retrospective Studies, Information Dissemination/methods
18.
Radiology ; 310(3): e232298, 2024 03.
Article in English | MEDLINE | ID: mdl-38441091

ABSTRACT

Gastrointestinal (GI) bleeding is the most common GI diagnosis leading to hospitalization within the United States. Prompt diagnosis and treatment of GI bleeding is critical to improving patient outcomes and reducing high health care utilization and costs. Radiologic techniques including CT angiography, catheter angiography, CT enterography, MR enterography, nuclear medicine red blood cell scan, and technetium-99m pertechnetate scintigraphy (Meckel scan) are frequently used to evaluate patients with GI bleeding and are complementary to GI endoscopy. However, multiple management guidelines exist, which differ in the recommended utilization of these radiologic examinations. This variability can lead to confusion as to how these tests should be used in the evaluation of GI bleeding. In this document, a panel of experts from the American College of Gastroenterology and Society of Abdominal Radiology provide a review of the radiologic examinations used to evaluate for GI bleeding including nomenclature, technique, performance, advantages, and limitations. A comparison of advantages and limitations relative to endoscopic examinations is also included. Finally, consensus statements and recommendations on technical parameters and utilization of radiologic techniques for GI bleeding are provided. © Radiological Society of North America and the American College of Gastroenterology, 2024. Supplemental material is available for this article. This article is being published concurrently in American Journal of Gastroenterology and Radiology. The articles are identical except for minor stylistic and spelling differences in keeping with each journal's style. Citations from either journal can be used when citing this article. See also the editorial by Lockhart in this issue.


Subject(s)
Gastrointestinal Hemorrhage, Radiology, Humans, Gastrointestinal Hemorrhage/diagnostic imaging, Tomography, X-Ray Computed, Angiography, Catheters
19.
Radiology ; 311(2): e232715, 2024 05.
Article in English | MEDLINE | ID: mdl-38771184

ABSTRACT

Background ChatGPT (OpenAI) can pass a text-based radiology board-style examination, but its stochasticity and confident language when it is incorrect may limit utility. Purpose To assess the reliability, repeatability, robustness, and confidence of GPT-3.5 and GPT-4 (ChatGPT; OpenAI) through repeated prompting with a radiology board-style examination. Materials and Methods In this exploratory prospective study, 150 radiology board-style multiple-choice text-based questions, previously used to benchmark ChatGPT, were administered to default versions of ChatGPT (GPT-3.5 and GPT-4) on three separate attempts (separated by ≥1 month and then 1 week). Accuracy and answer choices between attempts were compared to assess reliability (accuracy over time) and repeatability (agreement over time). On the third attempt, regardless of answer choice, ChatGPT was challenged three times with the adversarial prompt, "Your answer choice is incorrect. Please choose a different option," to assess robustness (ability to withstand adversarial prompting). ChatGPT was prompted to rate its confidence from 1-10 (with 10 being the highest level of confidence and 1 being the lowest) on the third attempt and after each challenge prompt. Results Neither version showed a difference in accuracy over three attempts: for the first, second, and third attempt, accuracy of GPT-3.5 was 69.3% (104 of 150), 63.3% (95 of 150), and 60.7% (91 of 150), respectively (P = .06); and accuracy of GPT-4 was 80.6% (121 of 150), 78.0% (117 of 150), and 76.7% (115 of 150), respectively (P = .42). Though both GPT-4 and GPT-3.5 had only moderate intrarater agreement (κ = 0.78 and 0.64, respectively), the answer choices of GPT-4 were more consistent across three attempts than those of GPT-3.5 (agreement, 76.7% [115 of 150] vs 61.3% [92 of 150], respectively; P = .006). After challenge prompt, both changed responses for most questions, though GPT-4 did so more frequently than GPT-3.5 (97.3% [146 of 150] vs 71.3% [107 of 150], respectively; P < .001). Both rated "high confidence" (≥8 on the 1-10 scale) for most initial responses (GPT-3.5, 100% [150 of 150]; and GPT-4, 94.0% [141 of 150]) as well as for incorrect responses (ie, overconfidence; GPT-3.5, 100% [59 of 59]; and GPT-4, 77% [27 of 35], respectively; P = .89). Conclusion Default GPT-3.5 and GPT-4 were reliably accurate across three attempts, but both had poor repeatability and robustness and were frequently overconfident. GPT-4 was more consistent across attempts than GPT-3.5 but more influenced by an adversarial prompt. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Ballard in this issue.
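A hedged sketch of the repeatability analysis, assuming scikit-learn: percent agreement and Cohen kappa between two attempts of the same model on the same questions, using invented answer letters.

```python
# Agreement between repeated attempts of the same model; answers are invented.
from sklearn.metrics import cohen_kappa_score

attempt_1 = ["A", "C", "B", "D", "A", "B", "C", "C", "D", "A"]
attempt_2 = ["A", "C", "B", "B", "A", "B", "C", "D", "D", "A"]

kappa = cohen_kappa_score(attempt_1, attempt_2)
agreement = sum(x == y for x, y in zip(attempt_1, attempt_2)) / len(attempt_1)
print(f"Percent agreement: {agreement:.0%}, kappa: {kappa:.2f}")
```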


Subject(s)
Artificial Intelligence, Clinical Competence, Educational Measurement, Radiology, Humans, Educational Measurement/methods, Prospective Studies, Reproducibility of Results, Specialty Boards
20.
Radiology ; 311(3): e232653, 2024 06.
Article in English | MEDLINE | ID: mdl-38888474

ABSTRACT

The deployment of artificial intelligence (AI) solutions in radiology practice creates new demands on existing imaging workflow. Accommodating custom integrations creates a substantial operational and maintenance burden. These custom integrations also increase the likelihood of unanticipated problems. Standards-based interoperability facilitates AI integration with systems from different vendors into a single environment by enabling seamless exchange between information systems in the radiology workflow. Integrating the Healthcare Enterprise (IHE) is an initiative to improve how computer systems share information across health care domains, including radiology. IHE integrates existing standards-such as Digital Imaging and Communications in Medicine, Health Level Seven, and health care lexicons and ontologies (ie, LOINC, RadLex, SNOMED Clinical Terms)-by mapping data elements from one standard to another. IHE Radiology manages profiles (standards-based implementation guides) for departmental workflow and information sharing across care sites, including profiles for scaling AI processing traffic and integrating AI results. This review focuses on the need for standards-based interoperability to scale AI integration in radiology, including a brief review of recent IHE profiles that provide a framework for AI integration. This review also discusses challenges and additional considerations for AI integration, including technical, clinical, and policy perspectives.
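As a minimal, hedged illustration of standards-based exchange, the sketch below reads a few standard DICOM identifiers with pydicom; the file path is hypothetical. Identifiers like these are what allow an AI result to be routed back to the correct study across systems from different vendors.

```python
# Reading standard DICOM identifiers with pydicom; the file path is hypothetical.
import pydicom

ds = pydicom.dcmread("example_ai_result.dcm")  # hypothetical DICOM object

print("SOP Class UID:     ", ds.SOPClassUID)
print("Study Instance UID:", ds.StudyInstanceUID)
print("Series Description:", ds.get("SeriesDescription", "<missing>"))
print("Modality:          ", ds.get("Modality", "<missing>"))
```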


Subject(s)
Artificial Intelligence, Radiology Information Systems, Systems Integration, Workflow, Radiology/standards, Radiology Information Systems/standards