Results 1 - 20 of 53
1.
Radiol Med ; 129(9): 1275-1287, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096356

ABSTRACT

Magnetic resonance imaging (MRI) is an essential tool for evaluating pelvic disorders affecting the prostate, bladder, uterus, ovaries, and/or rectum. Since the diagnostic pathway of pelvic MRI can involve various complex procedures depending on the affected organ, the Reporting and Data System (RADS) is used to standardize image acquisition and interpretation. Artificial intelligence (AI), which encompasses machine learning and deep learning algorithms, has been integrated into both pelvic MRI and the RADS, particularly for prostate MRI. This review outlines recent developments in the use of AI in various stages of the pelvic MRI diagnostic pathway, including image acquisition, image reconstruction, organ and lesion segmentation, lesion detection and classification, and risk stratification, with special emphasis on recent trends in multi-center studies, which can help to improve the generalizability of AI.


Subject(s)
Artificial Intelligence, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Female, Male, Pelvis/diagnostic imaging
3.
Eur Radiol ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198333

ABSTRACT

OBJECTIVES: Large language models like GPT-4 have demonstrated potential for diagnosis in radiology. Previous studies investigating this potential primarily utilized quizzes from academic journals. This study aimed to assess the diagnostic capabilities of GPT-4-based Chat Generative Pre-trained Transformer (ChatGPT) using actual clinical radiology reports of brain tumors and compare its performance with that of neuroradiologists and general radiologists. METHODS: We collected brain MRI reports written in Japanese from preoperative brain tumor patients at two institutions from January 2017 to December 2021. The MRI reports were translated into English by radiologists. GPT-4 and five radiologists were presented with the same textual findings from the reports and asked to suggest differential and final diagnoses. The pathological diagnosis of the excised tumor served as the ground truth. McNemar's test and Fisher's exact test were used for statistical analysis. RESULTS: In a study analyzing 150 radiological reports, GPT-4 achieved a final diagnostic accuracy of 73%, while radiologists' accuracy ranged from 65 to 79%. GPT-4's final diagnostic accuracy using reports from neuroradiologists was higher at 80%, compared to 60% using those from general radiologists. In the realm of differential diagnoses, GPT-4's accuracy was 94%, while radiologists' fell between 73 and 89%. Notably, for these differential diagnoses, GPT-4's accuracy remained consistent whether reports were from neuroradiologists or general radiologists. CONCLUSION: GPT-4 exhibited good diagnostic capability, comparable to that of neuroradiologists in differentiating brain tumors from MRI reports. GPT-4 can provide a second opinion for neuroradiologists on final diagnoses and serve as a guidance tool for general radiologists and residents. CLINICAL RELEVANCE STATEMENT: This study evaluated GPT-4-based ChatGPT's diagnostic capabilities using real-world clinical MRI reports from brain tumor cases, revealing that its accuracy in interpreting brain tumors from MRI findings is competitive with that of radiologists. KEY POINTS: We investigated the diagnostic accuracy of GPT-4 using real-world clinical MRI reports of brain tumors. GPT-4 achieved final and differential diagnostic accuracy comparable with that of neuroradiologists. GPT-4 has the potential to improve the diagnostic process in clinical radiology.
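The statistical comparison here is a paired one: GPT-4 and each radiologist read the same 150 reports, so McNemar's test on the discordant pairs is the appropriate choice. Below is a minimal sketch of that comparison in Python; the per-report correctness arrays are simulated at the reported accuracy rates, not the study's actual data.

```python
# A minimal sketch of McNemar's test on paired diagnostic calls.
# 'gpt4_correct' and 'rad_correct' are hypothetical 0/1 arrays marking
# whether each of the 150 reports was diagnosed correctly.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
gpt4_correct = rng.random(150) < 0.73  # simulated: 73% final-diagnosis accuracy
rad_correct = rng.random(150) < 0.79   # simulated: 79% accuracy for one radiologist

# 2x2 table of paired agreement: rows = GPT-4 correct/incorrect,
# columns = radiologist correct/incorrect.
table = np.array([
    [np.sum(gpt4_correct & rad_correct), np.sum(gpt4_correct & ~rad_correct)],
    [np.sum(~gpt4_correct & rad_correct), np.sum(~gpt4_correct & ~rad_correct)],
])
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"statistic={result.statistic:.0f}, p={result.pvalue:.3f}")
```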

4.
Jpn J Radiol ; 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39031270

ABSTRACT

PURPOSE: The performance of vision-language models (VLMs) with image interpretation capabilities, such as GPT-4 omni (GPT-4o), GPT-4 vision (GPT-4V), and Claude-3, has not been compared and remains unexplored in specialized radiological fields, including nuclear medicine and interventional radiology. This study aimed to evaluate and compare the diagnostic accuracy of various VLMs, including GPT-4 + GPT-4V, GPT-4o, Claude-3 Sonnet, and Claude-3 Opus, using Japanese diagnostic radiology, nuclear medicine, and interventional radiology (JDR, JNM, and JIR, respectively) board certification tests. MATERIALS AND METHODS: In total, 383 questions from the JDR test (358 images), 300 from the JNM test (92 images), and 322 from the JIR test (96 images) from 2019 to 2023 were consecutively collected. The accuracy rates of GPT-4 + GPT-4V, GPT-4o, Claude-3 Sonnet, and Claude-3 Opus were calculated for all questions and for the subset of questions with images. The accuracy rates of the VLMs were compared using McNemar's test. RESULTS: GPT-4o demonstrated the highest accuracy rates across all evaluations with the JDR (all questions, 49%; questions with images, 48%), JNM (all questions, 64%; questions with images, 59%), and JIR tests (all questions, 43%; questions with images, 34%), followed by Claude-3 Opus with the JDR (all questions, 40%; questions with images, 38%), JNM (all questions, 42%; questions with images, 43%), and JIR tests (all questions, 40%; questions with images, 30%). For all questions, McNemar's test showed that GPT-4o significantly outperformed the other VLMs (all P < 0.007), except for Claude-3 Opus in the JIR test. For questions with images, GPT-4o outperformed the other VLMs in the JDR and JNM tests (all P < 0.001), except Claude-3 Opus in the JNM test. CONCLUSION: GPT-4o had the highest success rates both for all questions and for questions with images across the JDR, JNM, and JIR board certification tests.
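The reported threshold of P < 0.007 suggests a correction for multiple pairwise comparisons, although the abstract does not state the scheme. The sketch below runs pairwise McNemar tests among four models with a Bonferroni-corrected alpha; the per-question correctness data and the accuracy rates assumed for Claude-3 Sonnet and GPT-4 + GPT-4V are illustrative, not the study's values.

```python
# Hedged sketch: pairwise McNemar comparisons among four VLMs with a
# Bonferroni-corrected threshold. Accuracy values and the correction
# scheme are illustrative assumptions, not the paper's exact protocol.
from itertools import combinations
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
n_questions = 383  # JDR test, all questions
models = {         # simulated per-question correctness at assumed rates
    "GPT-4o": rng.random(n_questions) < 0.49,
    "Claude-3 Opus": rng.random(n_questions) < 0.40,
    "Claude-3 Sonnet": rng.random(n_questions) < 0.35,
    "GPT-4 + GPT-4V": rng.random(n_questions) < 0.35,
}

alpha = 0.05 / len(list(combinations(models, 2)))  # Bonferroni over 6 pairs
for (name_a, a), (name_b, b) in combinations(models.items(), 2):
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    p = mcnemar(table, exact=True).pvalue
    print(f"{name_a} vs {name_b}: p={p:.4f} "
          f"({'significant' if p < alpha else 'n.s.'} at alpha={alpha:.4f})")
```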

5.
Lancet Digit Health ; 6(8): e580-e588, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38981834

ABSTRACT

BACKGROUND: Chest x-ray is a basic, cost-effective, and widely available imaging method that is used for static assessments of organic diseases and anatomical abnormalities, but its ability to estimate dynamic measurements such as pulmonary function is unknown. We aimed to estimate two major pulmonary functions from chest x-rays. METHODS: In this retrospective model development and validation study, we trained, validated, and externally tested a deep learning-based artificial intelligence (AI) model to estimate forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) from chest x-rays. We included consecutively collected results of spirometry and any associated chest x-rays that had been obtained between July 1, 2003, and Dec 31, 2021, from five institutions in Japan (labelled institutions A-E). Eligible x-rays had been acquired within 14 days of spirometry and were labelled with the FVC and FEV1. X-rays from three institutions (A-C) were used for training, validation, and internal testing, with the testing dataset being independent of the training and validation datasets, and then x-rays from the two other institutions (D and E) were used for independent external testing. Performance for estimating FVC and FEV1 was evaluated by calculating the Pearson's correlation coefficient (r), intraclass correlation coefficient (ICC), mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE) compared with the results of spirometry. FINDINGS: We included 141 734 x-ray and spirometry pairs from 81 902 patients from the five institutions. The training, validation, and internal test datasets included 134 307 x-rays from 75 768 patients (37 718 [50%] female, 38 050 [50%] male; mean age 56 years [SD 18]), and the external test datasets included 2137 x-rays from 1861 patients (742 [40%] female, 1119 [60%] male; mean age 65 years [SD 17]) from institution D and 5290 x-rays from 4273 patients (1972 [46%] female, 2301 [54%] male; mean age 63 years [SD 17]) from institution E. External testing for FVC yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·90 (0·89-0·91) for institution E, ICC of 0·91 (99% CI 0·90-0·92) and 0·89 (0·88-0·90), MSE of 0·17 L² (99% CI 0·15-0·19) and 0·17 L² (0·16-0·19), RMSE of 0·41 L (99% CI 0·39-0·43) and 0·41 L (0·39-0·43), and MAE of 0·31 L (99% CI 0·29-0·32) and 0·31 L (0·30-0·32). External testing for FEV1 yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·91 (0·90-0·91) for institution E, ICC of 0·90 (99% CI 0·89-0·91) and 0·90 (0·90-0·91), MSE of 0·13 L² (99% CI 0·12-0·15) and 0·11 L² (0·10-0·12), RMSE of 0·37 L (99% CI 0·35-0·38) and 0·33 L (0·32-0·35), and MAE of 0·28 L (99% CI 0·27-0·29) and 0·25 L (0·25-0·26). INTERPRETATION: This deep learning model allowed estimation of FVC and FEV1 from chest x-rays, showing high agreement with spirometry. The model offers an alternative to spirometry for assessing pulmonary function, which is especially useful for patients who are unable to undergo spirometry, and might enhance the customisation of CT imaging protocols based on insights gained from chest x-rays, improving the diagnosis and management of lung diseases. Future studies should investigate the performance of this AI model in combination with clinical information to enable more appropriate and targeted use. FUNDING: None.
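For readers unfamiliar with these agreement metrics, the sketch below computes Pearson's r, a two-way random-effects single-measure ICC, MSE, RMSE, and MAE between spirometry values and model estimates. The data are simulated to roughly match the reported external-test error; this is not the study's code.

```python
# Hedged sketch of the agreement metrics used to validate the FVC/FEV1
# model: Pearson's r, ICC(2,1), MSE, RMSE, and MAE between spirometry
# (ground truth) and model estimates. Data here are simulated.
import numpy as np
from scipy.stats import pearsonr

def icc_2_1(x, y):
    """Two-way random-effects, absolute-agreement, single-measure ICC."""
    data = np.column_stack([x, y])          # n subjects x k=2 "raters"
    n, k = data.shape
    grand = data.mean()
    msr = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    msc = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((data - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(2)
fvc_true = rng.normal(3.2, 0.9, 2137)            # spirometry FVC in litres
fvc_pred = fvc_true + rng.normal(0, 0.41, 2137)  # model estimate, ~0.41 L RMSE

r, _ = pearsonr(fvc_true, fvc_pred)
err = fvc_pred - fvc_true
print(f"r={r:.2f}, ICC={icc_2_1(fvc_true, fvc_pred):.2f}, "
      f"MSE={np.mean(err**2):.2f} L^2, RMSE={np.sqrt(np.mean(err**2)):.2f} L, "
      f"MAE={np.mean(np.abs(err)):.2f} L")
```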


Subject(s)
Deep Learning, Humans, Japan, Male, Female, Retrospective Studies, Middle Aged, Aged, Vital Capacity, Lung/diagnostic imaging, Lung/physiology, Forced Expiratory Volume, Thoracic Radiography, Spirometry/methods, Adult, Respiratory Function Tests/methods
6.
Eur Radiol ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995378

ABSTRACT

OBJECTIVES: To compare the diagnostic accuracy of Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V) based ChatGPT, and radiologists in musculoskeletal radiology. MATERIALS AND METHODS: We included 106 "Test Yourself" cases from Skeletal Radiology between January 2014 and September 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, then both generated a diagnosis for each case. Two radiologists (a radiology resident and a board-certified radiologist) independently provided diagnoses for all cases. The diagnostic accuracy rates were determined based on the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists. RESULTS: GPT-4-based ChatGPT significantly outperformed GPT-4V-based ChatGPT (p < 0.001) with accuracy rates of 43% (46/106) and 8% (9/106), respectively. The radiology resident and the board-certified radiologist achieved accuracy rates of 41% (43/106) and 53% (56/106), respectively. The diagnostic accuracy of GPT-4-based ChatGPT was comparable to that of the radiology resident, but was lower than that of the board-certified radiologist although the differences were not significant (p = 0.78 and 0.22, respectively). The diagnostic accuracy of GPT-4V-based ChatGPT was significantly lower than those of both radiologists (p < 0.001 and < 0.001, respectively). CONCLUSION: GPT-4-based ChatGPT demonstrated significantly higher diagnostic accuracy than GPT-4V-based ChatGPT. While GPT-4-based ChatGPT's diagnostic performance was comparable to that of the radiology resident, it did not reach the performance level of the board-certified radiologist in musculoskeletal radiology. CLINICAL RELEVANCE STATEMENT: GPT-4-based ChatGPT outperformed GPT-4V-based ChatGPT and performed comparably to the radiology resident, but it did not reach the level of the board-certified radiologist in musculoskeletal radiology. Radiologists should comprehend ChatGPT's current performance as a diagnostic tool for optimal utilization. KEY POINTS: This study compared the diagnostic performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists in musculoskeletal radiology. GPT-4-based ChatGPT performed comparably to the radiology resident but did not reach the level of the board-certified radiologist. When utilizing ChatGPT, it is crucial to input appropriate descriptions of imaging findings rather than the images.
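Because the accuracy counts are given in the abstract, the chi-square comparisons can be reproduced approximately. The sketch below applies scipy's chi-square test to 2 x 2 correct/incorrect tables; treating the readers as independent samples mirrors the reported analysis, and the computed p values may differ slightly from those in the paper.

```python
# A minimal sketch of the chi-square comparison of accuracy rates on the
# 106 "Test Yourself" cases, using the counts reported in the abstract.
from scipy.stats import chi2_contingency

results = {
    "GPT-4 ChatGPT": 46,
    "GPT-4V ChatGPT": 9,
    "radiology resident": 43,
    "board-certified radiologist": 56,
}
n = 106  # total cases per reader

def compare(name_a, name_b):
    # 2x2 table: rows = reader, columns = correct / incorrect
    table = [[results[name_a], n - results[name_a]],
             [results[name_b], n - results[name_b]]]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"{name_a} vs {name_b}: chi2={chi2:.2f}, p={p:.3g}")

compare("GPT-4 ChatGPT", "GPT-4V ChatGPT")               # reported p < 0.001
compare("GPT-4 ChatGPT", "radiology resident")           # reported p = 0.78
compare("GPT-4 ChatGPT", "board-certified radiologist")  # reported p = 0.22
```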

9.
Diagn Interv Imaging ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918123

ABSTRACT

The rapid advancement of artificial intelligence (AI) in healthcare has revolutionized the industry, offering significant improvements in diagnostic accuracy, efficiency, and patient outcomes. However, the increasing adoption of AI systems also raises concerns about their environmental impact, particularly in the context of climate change. This review explores the intersection of climate change and AI in healthcare, examining the challenges posed by the energy consumption and carbon footprint of AI systems, as well as the potential solutions to mitigate their environmental impact. The review highlights the energy-intensive nature of AI model training and deployment, the contribution of data centers to greenhouse gas emissions, and the generation of electronic waste. To address these challenges, the development of energy-efficient AI models, the adoption of green computing practices, and the integration of renewable energy sources are discussed as potential solutions. The review also emphasizes the role of AI in optimizing healthcare workflows, reducing resource waste, and facilitating sustainable practices such as telemedicine. Furthermore, the importance of policy and governance frameworks, global initiatives, and collaborative efforts in promoting sustainable AI practices in healthcare is explored. The review concludes by outlining best practices for sustainable AI deployment, including eco-design, lifecycle assessment, responsible data management, and continuous monitoring and improvement. As the healthcare industry continues to embrace AI technologies, prioritizing sustainability and environmental responsibility is crucial to ensure that the benefits of AI are realized while actively contributing to the preservation of our planet.
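As a concrete illustration of the training footprint the review discusses, a common back-of-envelope estimate multiplies hardware power draw by training time, a data-center overhead factor (PUE), and the grid's carbon intensity. The sketch below is ours, and every number in it is an illustrative assumption.

```python
# Back-of-envelope sketch of the training carbon footprint discussed in
# the review. Every number here is an illustrative assumption.
def training_co2_kg(gpu_count, gpu_power_kw, hours, pue=1.5,
                    grid_kg_co2_per_kwh=0.4):
    """Estimate CO2 (kg) for a training run: energy draw x data-center
    overhead (PUE) x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# e.g., 8 GPUs at 0.3 kW each, training for 72 hours
print(f"{training_co2_kg(8, 0.3, 72):.0f} kg CO2")  # ~104 kg under these assumptions
```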

10.
Jpn J Radiol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856878

ABSTRACT

Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the 'training set', the data for tuning of parameters referred to as the 'validation (or tuning) set', and the data for the evaluation of models as the 'test set'. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
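A minimal sketch of the terminology the review standardizes, using scikit-learn's splitting utility: the training set fits the model, the validation (tuning) set tunes hyperparameters, and the test set is held out for a single final evaluation, with external test data kept entirely separate. All names and proportions are illustrative.

```python
# Sketch of the data-set splits discussed in the review: train /
# validation (tuning) / internal test, plus an external test set.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 16)    # internal data (e.g., one institution)
y = np.random.rand(1000) > 0.5

# Internal random split: 70% train / 15% validation / 15% internal test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# An external test set should come from a different time period (temporal)
# or a different institution (geographic), never from the split above.
X_external = np.random.rand(200, 16)  # placeholder for another site's data
```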

11.
Am J Cardiol ; 223: 1-6, 2024 07 15.
Article in English | MEDLINE | ID: mdl-38782227

ABSTRACT

We developed and evaluated an artificial intelligence (AI)-based algorithm that uses pre-rotational atherectomy (RA) intravascular ultrasound (IVUS) images to automatically predict regions debulked by RA. A total of 2106 IVUS cross-sections from 60 patients with de novo severely calcified coronary lesions who underwent IVUS-guided RA were consecutively collected. The corresponding pre- and post-RA IVUS images were merged, and the orientations of the debulked segments identified in the merged images were marked on the outer circle of each IVUS image. The AI model was developed based on ResNet (deep residual learning for image recognition). The architecture connected 36 fully connected layers, each corresponding to 1 of the 36 orientations segmented every 10°, to a single feature extractor. In each cross-sectional analysis, our AI model achieved an average sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 81%, 72%, 46%, 90%, and 75%, respectively. In conclusion, the AI-based algorithm can use information from pre-RA IVUS images to accurately predict regions debulked by RA and will assist interventional cardiologists in determining the treatment strategies for severely calcified coronary lesions.
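The described architecture, one shared feature extractor feeding 36 parallel fully connected heads, can be sketched as follows in PyTorch. The choice of a ResNet-18 backbone, the 512-dimensional feature size, and the input shape are our assumptions; the abstract specifies only a ResNet-based extractor with 36 heads, one per 10° sector.

```python
# Hedged sketch of the described architecture: one ResNet feature
# extractor feeding 36 parallel fully connected heads, one per 10-degree
# orientation, each predicting whether that sector was debulked.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DebulkedSectorNet(nn.Module):
    def __init__(self, n_sectors: int = 36):
        super().__init__()
        backbone = resnet18(weights=None)   # backbone choice is an assumption
        backbone.fc = nn.Identity()         # keep the 512-dim features
        self.extractor = backbone
        # one binary-logit head per 10-degree sector
        self.heads = nn.ModuleList(nn.Linear(512, 1) for _ in range(n_sectors))

    def forward(self, x):                   # x: (batch, 3, H, W) IVUS image
        feat = self.extractor(x)
        return torch.cat([head(feat) for head in self.heads], dim=1)  # (batch, 36)

model = DebulkedSectorNet()
logits = model(torch.randn(2, 3, 224, 224))
probs = torch.sigmoid(logits)               # per-sector debulking probability
```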


Subject(s)
Algorithms, Artificial Intelligence, Coronary Atherectomy, Coronary Artery Disease, Interventional Ultrasonography, Humans, Interventional Ultrasonography/methods, Coronary Atherectomy/methods, Male, Female, Aged, Coronary Artery Disease/surgery, Coronary Artery Disease/diagnostic imaging, Vascular Calcification/diagnostic imaging, Vascular Calcification/surgery, Predictive Value of Tests, Middle Aged, Coronary Vessels/diagnostic imaging, Coronary Vessels/surgery, Retrospective Studies
13.
Clin Neuroradiol ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806794

ABSTRACT

PURPOSE: To compare the diagnostic performance among Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V) based ChatGPT, and radiologists in challenging neuroradiology cases. METHODS: We collected 32 consecutive "Freiburg Neuropathology Case Conference" cases from the journal Clinical Neuroradiology between March 2016 and December 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, then both generated a diagnosis for each case. Six radiologists (three radiology residents and three board-certified radiologists) independently reviewed all cases and provided diagnoses. ChatGPT and radiologists' diagnostic accuracy rates were evaluated based on the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists. RESULTS: GPT-4-based and GPT-4V-based ChatGPT achieved accuracy rates of 22% (7/32) and 16% (5/32), respectively. Radiologists achieved the following accuracy rates: three radiology residents 28% (9/32), 31% (10/32), and 28% (9/32); and three board-certified radiologists 38% (12/32), 47% (15/32), and 44% (14/32). GPT-4-based ChatGPT's diagnostic accuracy was lower than that of each radiologist, although not significantly (all p > 0.07). GPT-4V-based ChatGPT's diagnostic accuracy was also lower than that of each radiologist, and significantly lower than that of two of the board-certified radiologists (p = 0.02 and 0.03); the differences from the radiology residents and the remaining board-certified radiologist were not significant (all p > 0.09). CONCLUSION: While GPT-4-based ChatGPT demonstrated relatively higher diagnostic performance than GPT-4V-based ChatGPT, the diagnostic performance of neither GPT-4-based nor GPT-4V-based ChatGPT reached the level of the radiology residents or board-certified radiologists in challenging neuroradiology cases.

14.
AJNR Am J Neuroradiol ; 45(6): 826-832, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38663993

ABSTRACT

BACKGROUND: Intermodality image-to-image translation is an artificial intelligence technique for generating images of one modality from another. PURPOSE: This review was designed to systematically identify and quantify biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES: PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION: This review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS: Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Adherence of medically-focused articles was compared with that of engineering-focused articles, overall using the Mann-Whitney U test and per criterion using the Fisher exact test. DATA SYNTHESIS: Median adherence was 69% for the relevant CLAIM criteria and 38% for the PROBAST questions. CLAIM adherence was lower for engineering-focused articles compared with medically-focused articles (65% versus 73%, P < .001). Engineering-focused studies had higher adherence for model description criteria, and medically-focused studies had higher adherence for data set and evaluation descriptions. LIMITATIONS: Our review is limited by the study design and model heterogeneity. CONCLUSIONS: Nearly all studies revealed critical issues preventing clinical application, with engineering-focused studies showing higher adherence for the technical model description but significantly lower overall adherence than medically-focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.
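The two statistical comparisons described, Mann-Whitney U on overall adherence and Fisher's exact test per criterion, can be sketched as below. The adherence values and 2 x 2 counts are simulated, not the review's data.

```python
# A minimal sketch of the two comparisons in the review: Mann-Whitney U
# on overall CLAIM adherence between engineering- and medically-focused
# articles, and Fisher's exact test per criterion. Data are simulated.
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact

rng = np.random.default_rng(3)
adherence_eng = rng.normal(0.65, 0.10, 60)  # simulated per-article adherence
adherence_med = rng.normal(0.73, 0.10, 42)

u, p = mannwhitneyu(adherence_eng, adherence_med)
print(f"Mann-Whitney U={u:.0f}, p={p:.4f}")

# Per-criterion 2x2 table: rows = article type, cols = adheres / does not.
table = [[40, 20],   # engineering-focused: 40 of 60 adhere (illustrative)
         [36, 6]]    # medically-focused: 36 of 42 adhere (illustrative)
odds, p = fisher_exact(table)
print(f"Fisher's exact: OR={odds:.2f}, p={p:.4f}")
```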


Subject(s)
Neuroimaging, Humans, Neuroimaging/methods, Neuroimaging/standards, Bias, Artificial Intelligence
15.
Jpn J Radiol ; 42(7): 685-696, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38551772

ABSTRACT

The advent of Deep Learning (DL) has significantly propelled the field of diagnostic radiology forward by enhancing image analysis and interpretation. The introduction of the Transformer architecture, followed by the development of Large Language Models (LLMs), has further revolutionized this domain. LLMs now possess the potential to automate and refine the radiology workflow, extending from report generation to assistance in diagnostics and patient care. The integration of multimodal technology with LLMs could elevate these applications to unprecedented levels. However, LLMs come with unresolved challenges such as information hallucinations and biases, which can affect clinical reliability. Meanwhile, legislative and guideline frameworks have yet to catch up with these technological advancements. Radiologists must acquire a thorough understanding of these technologies to leverage LLMs' potential to the fullest while maintaining medical safety and ethics. This review aims to aid in that endeavor.


Subject(s)
Deep Learning, Radiology, Humans, Radiology/methods, Radiologists, Artificial Intelligence, Workflow
16.
Neuroradiology ; 66(6): 955-961, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38407581

ABSTRACT

PURPOSE: Cranial nerve involvement (CNI) influences the treatment strategies and prognosis of head and neck tumors. However, its incidence in skull base chordomas and chondrosarcomas remains to be investigated. This study evaluated the imaging features of chordoma and chondrosarcoma, with a focus on the differences in CNI. METHODS: Forty-two patients (26 and 16 patients with chordomas and chondrosarcomas, respectively) treated at our institution between January 2007 and January 2023 were included in this retrospective study. Imaging features, such as the maximum diameter, tumor location (midline or off-midline), calcification, signal intensity on T2-weighted image, mean apparent diffusion coefficient (ADC) values, contrast enhancement, and CNI, were evaluated and compared using Fisher's exact test or the Mann-Whitney U-test. The odds ratio (OR) was calculated to evaluate the association between the histological type and imaging features. RESULTS: The incidence of CNI in chondrosarcomas was significantly higher than that in chordomas (63% vs. 8%, P < 0.001). An off-midline location was more common in chondrosarcomas than in chordomas (86% vs. 13%; P < 0.001). The mean ADC values of chondrosarcomas were significantly higher than those of chordomas (P < 0.001). Significant associations were identified between chondrosarcomas and CNI (OR = 20.00; P < 0.001), location (OR = 53.70; P < 0.001), and mean ADC values (OR = 1.01; P = 0.002). CONCLUSION: The incidence of CNI and off-midline location in chondrosarcomas was significantly higher than that in chordomas. CNI, tumor location, and the mean ADC can help distinguish between these entities.
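The reported odds ratio for CNI can be reconstructed from the percentages in the abstract (10 of 16 chondrosarcomas versus 2 of 26 chordomas), as the short sketch below shows; the counts are inferred from the reported rates.

```python
# Sketch reproducing the reported odds ratio for cranial nerve
# involvement (CNI): 10/16 chondrosarcomas vs 2/26 chordomas, which
# yields the OR of 20.00 given in the abstract.
from scipy.stats import fisher_exact

#         CNI+  CNI-
table = [[10, 6],    # chondrosarcoma: 63% (10/16)
         [2, 24]]    # chordoma: 8% (2/26)

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")  # OR = 20.00, p < 0.001
```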


Subject(s)
Chondrosarcoma, Chordoma, Skull Base Neoplasms, Humans, Female, Male, Retrospective Studies, Middle Aged, Chordoma/diagnostic imaging, Chordoma/pathology, Adult, Chondrosarcoma/diagnostic imaging, Chondrosarcoma/pathology, Aged, Skull Base Neoplasms/diagnostic imaging, Contrast Media, Adolescent, Magnetic Resonance Imaging/methods
17.
Sci Rep ; 14(1): 2911, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316892

ABSTRACT

This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion-weighted images (DWI), and validated the similarity between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion probing gradient (MPG), respectively. Identical imaging planes were paired to train the image-to-image translation model, which synthesized images for one MPG direction from the DWI; this process was repeated for each of the six MPG directions. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared, and Bland-Altman plots of the original versus synthetic data were evaluated. Although the synthetic data in the test dataset showed larger standard deviations for all values and lower SNR than the original data, the Bland-Altman plots showed similar distributions for the two. In conclusion, synthetic DTI could be generated from conventional DWI with an image-to-image translation model.
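A Bland-Altman analysis, as used here to compare original and synthetic DTI-derived ROI values, plots the per-subject difference against the per-subject mean with the bias and 95% limits of agreement. The sketch below uses simulated values, not the study's measurements.

```python
# A minimal sketch of the Bland-Altman analysis used to compare original
# and synthetic DTI-derived ROI values. Data here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
original = rng.normal(0.7, 0.1, 32)             # e.g., ROI values from original DTI
synthetic = original + rng.normal(0, 0.03, 32)  # synthetic DTI with added noise

mean = (original + synthetic) / 2
diff = synthetic - original
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement

plt.scatter(mean, diff)
plt.axhline(bias, linestyle="-")
plt.axhline(bias + loa, linestyle="--")
plt.axhline(bias - loa, linestyle="--")
plt.xlabel("Mean of original and synthetic")
plt.ylabel("Difference (synthetic - original)")
plt.title("Bland-Altman plot")
plt.show()
```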


Subject(s)
Deep Learning, White Matter, Humans, Corpus Callosum/diagnostic imaging, Signal-to-Noise Ratio, Internal Capsule, Diffusion Magnetic Resonance Imaging/methods
18.
Jpn J Radiol ; 42(1): 3-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37540463

ABSTRACT

In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.


Subject(s)
Artificial Intelligence, Radiology, Humans, Algorithms, Radiologists, Delivery of Health Care
19.
J Magn Reson Imaging ; 59(4): 1341-1348, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37424114

ABSTRACT

BACKGROUND: Although brain activity in Alzheimer's disease (AD) might be evaluated with MRI and PET, the relationships between brain temperature (BT), the index of diffusivity along the perivascular space (ALPS index), and amyloid deposition in the cerebral cortex remain unclear. PURPOSE: To investigate the relationship between metabolic imaging measurements and clinical information in patients with AD and normal controls (NCs). STUDY TYPE: Retrospective analysis of a prospective dataset. POPULATION: 58 participants (78.3 ± 6.8 years; 30 female): 29 AD patients and 29 age- and sex-matched NCs from the Open Access Series of Imaging Studies dataset. FIELD STRENGTH/SEQUENCE: 3T; T1-weighted magnetization-prepared rapid gradient-echo, diffusion tensor imaging with 64 directions, and dynamic ¹⁸F-florbetapir PET. ASSESSMENT: Imaging metrics were compared between AD patients and NCs. These included BT calculated from the diffusivity of the lateral ventricles, the ALPS index reflecting the glymphatic system, and the mean standardized uptake value ratio (SUVR) of amyloid PET in the cerebral cortex, together with clinical information such as age, sex, and Mini-Mental State Examination (MMSE) score. STATISTICAL TESTS: Pearson's or Spearman's correlation and multiple linear regression analyses. P values <0.05 were defined as statistically significant. RESULTS: A significant positive correlation was found between BT and the ALPS index (r = 0.44 for NCs), while significant negative correlations were found between age and the ALPS index (rs = -0.43 for AD and -0.47 for NCs). The SUVR of amyloid PET was not significantly associated with BT (P = 0.81 for AD and 0.21 for NCs) or the ALPS index (P = 0.10 for AD and 0.52 for NCs). In the multiple regression analysis, age was significantly associated with BT, while age, sex, and the presence of AD were significantly associated with the ALPS index. DATA CONCLUSION: Impairment of the glymphatic system measured using MRI was associated with lower BT and aging. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
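The analysis pipeline, bivariate correlations followed by multiple linear regression, can be sketched as below. All data are simulated and the variable names are illustrative; only the cohort size and the direction of the reported associations are taken from the abstract.

```python
# Hedged sketch of the statistics in this study: Pearson/Spearman
# correlations between brain temperature (BT) and the ALPS index, and a
# multiple linear regression of the ALPS index on age, sex, and AD
# status. Data are simulated; column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(5)
n = 58
df = pd.DataFrame({
    "age": rng.normal(78.3, 6.8, n),
    "female": rng.integers(0, 2, n),
    "ad": np.repeat([1, 0], n // 2),    # 29 AD patients, 29 controls
})
df["alps"] = 1.6 - 0.01 * df["age"] - 0.1 * df["ad"] + rng.normal(0, 0.1, n)
df["bt"] = 37 + 0.3 * df["alps"] + rng.normal(0, 0.2, n)

print("Pearson r (BT vs ALPS):", pearsonr(df["bt"], df["alps"])[0].round(2))
print("Spearman rs (age vs ALPS):", spearmanr(df["age"], df["alps"])[0].round(2))

X = sm.add_constant(df[["age", "female", "ad"]])
print(sm.OLS(df["alps"], X).fit().summary().tables[1])
```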


Subject(s)
Alzheimer Disease, Humans, Female, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism, Diffusion Tensor Imaging/methods, Retrospective Studies, Prospective Studies, Access to Information, Positron Emission Tomography/methods, Magnetic Resonance Imaging/methods, Amyloid, Amyloidogenic Proteins, Cerebral Cortex
20.
J Radiat Res ; 65(1): 1-9, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-37996085

ABSTRACT

This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.


Subject(s)
Neoplasms, Radiation Oncology, Image-Guided Radiotherapy, Humans, Artificial Intelligence, Computer-Assisted Radiotherapy Planning/methods, Neoplasms/radiotherapy, Radiation Oncology/methods