2.
Jpn J Radiol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856878

ABSTRACT

Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the 'training set', the data for tuning of parameters referred to as the 'validation (or tuning) set', and the data for the evaluation of models as the 'test set'. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. 
This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
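The three-way data set split the review advocates (training set for model fitting, validation set for tuning, test set for final evaluation, with random splitting as one kind of internal test set) can be sketched in a few lines. The fractions, fixed seed, and toy data below are illustrative assumptions, not drawn from the review:

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and split samples into training, validation (tuning),
    and internal test sets via random splitting."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]                # fits model weights
    val = shuffled[n_train:n_train + n_val]   # tunes hyperparameters
    test = shuffled[n_train + n_val:]         # final evaluation only
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```

Cross-validation and leave-one-out schemes rotate which portion serves as the internal test set; the temporal and geographic external test sets mentioned in the review come from a separate cohort rather than from this shuffle.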

4.
Diagn Interv Imaging ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918123

ABSTRACT

The rapid advancement of artificial intelligence (AI) in healthcare has revolutionized the industry, offering significant improvements in diagnostic accuracy, efficiency, and patient outcomes. However, the increasing adoption of AI systems also raises concerns about their environmental impact, particularly in the context of climate change. This review explores the intersection of climate change and AI in healthcare, examining the challenges posed by the energy consumption and carbon footprint of AI systems, as well as the potential solutions to mitigate their environmental impact. The review highlights the energy-intensive nature of AI model training and deployment, the contribution of data centers to greenhouse gas emissions, and the generation of electronic waste. To address these challenges, the development of energy-efficient AI models, the adoption of green computing practices, and the integration of renewable energy sources are discussed as potential solutions. The review also emphasizes the role of AI in optimizing healthcare workflows, reducing resource waste, and facilitating sustainable practices such as telemedicine. Furthermore, the importance of policy and governance frameworks, global initiatives, and collaborative efforts in promoting sustainable AI practices in healthcare is explored. The review concludes by outlining best practices for sustainable AI deployment, including eco-design, lifecycle assessment, responsible data management, and continuous monitoring and improvement. As the healthcare industry continues to embrace AI technologies, prioritizing sustainability and environmental responsibility is crucial to ensure that the benefits of AI are realized while actively contributing to the preservation of our planet.

5.
Am J Cardiol ; 223: 1-6, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38782227

ABSTRACT

We develop and evaluate an artificial intelligence (AI)-based algorithm that uses pre-rotation atherectomy (RA) intravascular ultrasound (IVUS) images to automatically predict regions debulked by RA. A total of 2106 IVUS cross-sections from 60 patients with de novo severely calcified coronary lesions who underwent IVUS-guided RA were consecutively collected. The 2 identical IVUS images of pre- and post-RA were merged, and the orientations of the debulked segments identified in the merged images were marked on the outer circle of each IVUS image. The AI model was developed based on ResNet (deep residual learning for image recognition). The architecture connected 36 fully connected layers, each corresponding to 1 of the 36 orientations segmented every 10°, to a single feature extractor. In each cross-sectional analysis, our AI model achieved an average sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 81%, 72%, 46%, 90%, and 75%, respectively. In conclusion, the AI-based algorithm can use information from pre-RA IVUS images to accurately predict regions debulked by RA and will assist interventional cardiologists in determining the treatment strategies for severely calcified coronary lesions.
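The per-cross-section metrics reported above all derive from a confusion matrix over the binary per-orientation labels (debulked or not). The sketch below shows that computation only; the labels are invented for illustration, and the ResNet-based model itself is not reproduced:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and accuracy for binary labels
    (1 = orientation debulked, 0 = not)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),   # recall on debulked orientations
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
    }

# Invented labels for six of the 36 ten-degree orientations of one cross-section
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m["sensitivity"], m["accuracy"])
```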


Subject(s)
Algorithms , Artificial Intelligence , Atherectomy, Coronary , Coronary Artery Disease , Ultrasonography, Interventional , Humans , Ultrasonography, Interventional/methods , Atherectomy, Coronary/methods , Male , Female , Aged , Coronary Artery Disease/surgery , Coronary Artery Disease/diagnostic imaging , Vascular Calcification/diagnostic imaging , Vascular Calcification/surgery , Predictive Value of Tests , Middle Aged , Coronary Vessels/diagnostic imaging , Coronary Vessels/surgery , Retrospective Studies
6.
Clin Neuroradiol ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806794

ABSTRACT

PURPOSE: To compare the diagnostic performance of Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V)-based ChatGPT, and radiologists in challenging neuroradiology cases. METHODS: We collected 32 consecutive "Freiburg Neuropathology Case Conference" cases from the journal Clinical Neuroradiology published between March 2016 and December 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT, and the medical history and images into GPT-4V-based ChatGPT; each then generated a diagnosis for every case. Six radiologists (three radiology residents and three board-certified radiologists) independently reviewed all cases and provided diagnoses. The diagnostic accuracy rates of ChatGPT and the radiologists were evaluated against the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and the radiologists. RESULTS: GPT-4-based and GPT-4V-based ChatGPT achieved accuracy rates of 22% (7/32) and 16% (5/32), respectively. The three radiology residents achieved accuracy rates of 28% (9/32), 31% (10/32), and 28% (9/32), and the three board-certified radiologists 38% (12/32), 47% (15/32), and 44% (14/32). The diagnostic accuracy of GPT-4-based ChatGPT was lower than that of each radiologist, although not significantly (all p > 0.07). The diagnostic accuracy of GPT-4V-based ChatGPT was also lower than that of each radiologist, and significantly lower than that of two board-certified radiologists (p = 0.02 and 0.03); the differences were not significant for the radiology residents and one board-certified radiologist (all p > 0.09). CONCLUSION: Although GPT-4-based ChatGPT demonstrated relatively higher diagnostic performance than GPT-4V-based ChatGPT, neither reached the performance level of radiology residents or board-certified radiologists in challenging neuroradiology cases.
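The chi-square comparison of accuracy rates described above can be reproduced with the standard library alone. The abstract does not state whether a continuity correction was applied; Yates' correction, a common choice for small 2x2 tables, is assumed here, and the table below pairs GPT-4V-based ChatGPT (5/32 correct) with one board-certified radiologist (15/32) purely for illustration:

```python
import math

def yates_chi_square(a, b, c, d):
    """Chi-square test with Yates continuity correction for the 2x2 table
    [[a, b], [c, d]], e.g. correct/incorrect diagnoses for two readers.
    Returns (chi2 statistic, two-sided p value)."""
    n = a + b + c + d
    num = n * max(abs(a * d - b * c) - n / 2, 0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    # For 1 degree of freedom the chi-square survival function is erfc
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# 5/32 correct vs. 15/32 correct
chi2, p = yates_chi_square(5, 27, 15, 17)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```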

8.
AJNR Am J Neuroradiol ; 45(6): 826-832, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38663993

ABSTRACT

BACKGROUND: Intermodality image-to-image translation is an artificial intelligence technique that generates images of one modality from those of another. PURPOSE: This review was designed to systematically identify and quantify biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES: PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION: This review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS: Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Adherence of medically-focused articles was compared with that of engineering-focused articles overall using the Mann-Whitney U test and for each criterion using the Fisher exact test. DATA SYNTHESIS: Median adherence was 69% for the relevant CLAIM criteria and 38% for PROBAST questions. CLAIM adherence was lower for engineering-focused articles than for medically-focused articles (65% versus 73%, P < .001). Engineering-focused studies had higher adherence for model description criteria, whereas medically-focused studies had higher adherence for data set and evaluation descriptions. LIMITATIONS: Our review is limited by study design and model heterogeneity. CONCLUSIONS: Nearly all studies revealed critical issues preventing clinical application, with engineering-focused studies showing higher adherence for the technical model description but significantly lower overall adherence than medically-focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.


Subject(s)
Neuroimaging , Humans , Neuroimaging/methods , Neuroimaging/standards , Bias , Artificial Intelligence
9.
Jpn J Radiol ; 42(7): 685-696, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38551772

ABSTRACT

The advent of Deep Learning (DL) has significantly propelled the field of diagnostic radiology forward by enhancing image analysis and interpretation. The introduction of the Transformer architecture, followed by the development of Large Language Models (LLMs), has further revolutionized this domain. LLMs now possess the potential to automate and refine the radiology workflow, extending from report generation to assistance in diagnostics and patient care. The integration of multimodal technology with LLMs could potentially leapfrog these applications to unprecedented levels. However, LLMs come with unresolved challenges such as information hallucinations and biases, which can affect clinical reliability. Despite these issues, the legislative and guideline frameworks have yet to catch up with technological advancements. Radiologists must acquire a thorough understanding of these technologies to leverage LLMs' potential to the fullest while maintaining medical safety and ethics. This review aims to aid in that endeavor.


Subject(s)
Deep Learning , Radiology , Humans , Radiology/methods , Radiologists , Artificial Intelligence , Workflow
10.
Sci Rep ; 14(1): 2911, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316892

ABSTRACT

This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion weighted images, and validated the similarities between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion probing gradient (MPG), respectively. The identical imaging plane was paired for the image-to-image translation model that synthesized one direction of the MPG from DWI. This process was repeated six times in the respective MPG directions. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared. The Bland-Altman plot between the original and synthetic data was evaluated. Although the test dataset showed a larger standard deviation of all values and lower SNR in the synthetic data than in the original data, the Bland-Altman plots showed each plot localizing in a similar distribution. Synthetic DTI could be generated from conventional DWI with an image-to-image translation model.
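The Bland-Altman evaluation used above reduces to the mean difference (bias) between paired measurements and the 95% limits of agreement. A minimal sketch follows; the paired ROI values are invented for illustration and are not from the study:

```python
from statistics import mean, stdev

def bland_altman(original, synthetic):
    """Mean difference (bias) and 95% limits of agreement for paired
    measurements, as in a Bland-Altman analysis."""
    diffs = [o - s for o, s in zip(original, synthetic)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Invented ROI values (e.g. a DTI-derived metric) from original vs. synthetic maps
original = [0.72, 0.68, 0.75, 0.70, 0.66]
synthetic = [0.70, 0.69, 0.73, 0.71, 0.64]
bias, lower, upper = bland_altman(original, synthetic)
print(round(bias, 3))  # 0.008
```

In a full analysis each difference is plotted against the pair's mean; points localizing within the limits of agreement, as reported above, indicate that the original and synthetic maps agree.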


Subject(s)
Deep Learning , White Matter , Humans , Corpus Callosum/diagnostic imaging , Signal-To-Noise Ratio , Internal Capsule , Diffusion Magnetic Resonance Imaging/methods
11.
Neuroradiology ; 66(6): 955-961, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38407581

ABSTRACT

PURPOSE: Cranial nerve involvement (CNI) influences the treatment strategies and prognosis of head and neck tumors. However, its incidence in skull base chordomas and chondrosarcomas remains to be investigated. This study evaluated the imaging features of chordoma and chondrosarcoma, with a focus on the differences in CNI. METHODS: Forty-two patients (26 and 16 patients with chordomas and chondrosarcomas, respectively) treated at our institution between January 2007 and January 2023 were included in this retrospective study. Imaging features, such as the maximum diameter, tumor location (midline or off-midline), calcification, signal intensity on T2-weighted image, mean apparent diffusion coefficient (ADC) values, contrast enhancement, and CNI, were evaluated and compared using Fisher's exact test or the Mann-Whitney U-test. The odds ratio (OR) was calculated to evaluate the association between the histological type and imaging features. RESULTS: The incidence of CNI in chondrosarcomas was significantly higher than that in chordomas (63% vs. 8%, P < 0.001). An off-midline location was more common in chondrosarcomas than in chordomas (86% vs. 13%; P < 0.001). The mean ADC values of chondrosarcomas were significantly higher than those of chordomas (P < 0.001). Significant associations were identified between chondrosarcomas and CNI (OR = 20.00; P < 0.001), location (OR = 53.70; P < 0.001), and mean ADC values (OR = 1.01; P = 0.002). CONCLUSION: The incidence of CNI and off-midline location in chondrosarcomas was significantly higher than that in chordomas. CNI, tumor location, and the mean ADC can help distinguish between these entities.
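The odds ratio reported above for CNI (OR = 20.00) can be recovered from a 2x2 table. The counts below are reconstructed from the reported percentages (63% of 16 chondrosarcomas, 8% of 26 chordomas) and are illustrative only; the Woolf log-scale confidence interval is likewise an assumption, since the abstract does not state the CI method:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with an approximate
    95% confidence interval on the log scale (Woolf method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# CNI present/absent: 10/6 of 16 chondrosarcomas vs. 2/24 of 26 chordomas
or_, lo, hi = odds_ratio_ci(10, 6, 2, 24)
print(round(or_, 2))  # 20.0
```

A confidence interval excluding 1, as here, corresponds to the significant association reported in the abstract.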


Subject(s)
Chondrosarcoma , Chordoma , Skull Base Neoplasms , Humans , Female , Male , Retrospective Studies , Middle Aged , Chordoma/diagnostic imaging , Chordoma/pathology , Adult , Chondrosarcoma/diagnostic imaging , Chondrosarcoma/pathology , Aged , Skull Base Neoplasms/diagnostic imaging , Contrast Media , Adolescent , Magnetic Resonance Imaging/methods
12.
J Magn Reson Imaging ; 59(4): 1341-1348, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37424114

ABSTRACT

BACKGROUND: Although brain activities in Alzheimer's disease (AD) might be evaluated with MRI and PET, the relationships between brain temperature (BT), the index of diffusivity along the perivascular space (ALPS index), and amyloid deposition in the cerebral cortex are still unclear. PURPOSE: To investigate the relationship between metabolic imaging measurements and clinical information in patients with AD and normal controls (NCs). STUDY TYPE: Retrospective analysis of a prospective dataset. POPULATION: 58 participants (78.3 ± 6.8 years; 30 female): 29 AD patients and 29 age- and sex-matched NCs from the Open Access Series of Imaging Studies dataset. FIELD STRENGTH/SEQUENCE: 3T; T1-weighted magnetization-prepared rapid gradient-echo, diffusion tensor imaging with 64 directions, and dynamic 18F-florbetapir PET. ASSESSMENT: Imaging metrics were compared between AD patients and NCs. These included BT calculated from the diffusivity of the lateral ventricles, the ALPS index reflecting the glymphatic system, the mean standardized uptake value ratio (SUVR) of amyloid PET in the cerebral cortex, and clinical information such as age, sex, and MMSE score. STATISTICAL TESTS: Pearson's or Spearman's correlation and multiple linear regression analyses. P values <0.05 were defined as statistically significant. RESULTS: Significant positive correlations were found between BT and the ALPS index (r = 0.44 for NCs), while significant negative correlations were found between age and the ALPS index (rs = -0.43 for AD and -0.47 for NCs). The SUVR of amyloid PET was not significantly associated with BT (P = 0.81 for AD and 0.21 for NCs) or the ALPS index (P = 0.10 for AD and 0.52 for NCs). In the multiple regression analysis, age was significantly associated with BT, while age, sex, and the presence of AD were significantly associated with the ALPS index. DATA CONCLUSION: Impairment of the glymphatic system measured using MRI was associated with lower BT and aging. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
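Pearson's correlation (r) and Spearman's rank correlation (rs), the statistics reported above, can be sketched without external libraries. The data are invented, and this Spearman sketch ignores tie handling:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation of the ranks
    (no tie correction in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson_r(ranks(x), ranks(y))

# Invented paired measurements (e.g. BT vs. ALPS index)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 1.9, 3.5, 3.2, 5.0]
print(round(pearson_r(x, y), 3))  # ≈ 0.901
```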


Subject(s)
Alzheimer Disease , Humans , Female , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/metabolism , Diffusion Tensor Imaging/methods , Retrospective Studies , Prospective Studies , Access to Information , Positron-Emission Tomography/methods , Magnetic Resonance Imaging/methods , Amyloid , Amyloidogenic Proteins , Cerebral Cortex
13.
Jpn J Radiol ; 42(1): 3-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37540463

ABSTRACT

In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.


Subject(s)
Artificial Intelligence , Radiology , Humans , Algorithms , Radiologists , Delivery of Health Care
14.
Neuroradiology ; 66(1): 73-79, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37994939

ABSTRACT

PURPOSE: The noteworthy performance of Chat Generative Pre-trained Transformer (ChatGPT), an artificial intelligence text generation model based on the GPT-4 architecture, has been demonstrated in various fields; however, its potential applications in neuroradiology remain unexplored. This study aimed to evaluate the diagnostic performance of GPT-4-based ChatGPT in neuroradiology. METHODS: We collected 100 consecutive "Case of the Week" cases from the American Journal of Neuroradiology between October 2021 and September 2023. ChatGPT generated a diagnosis from the patient's medical history and imaging findings for each case. The diagnostic accuracy rate was then determined using the published ground truth. Each case was categorized by anatomical location (brain, spine, and head & neck), and brain cases were further divided into central nervous system (CNS) tumor and non-CNS tumor groups. Fisher's exact test was conducted to compare the accuracy rates among the three anatomical locations, as well as between the CNS tumor and non-CNS tumor groups. RESULTS: ChatGPT achieved a diagnostic accuracy rate of 50% (50/100 cases). There were no significant differences between the accuracy rates of the three anatomical locations (p = 0.89). Among the brain cases, the accuracy rate was significantly lower for the CNS tumor group than for the non-CNS tumor group (16% [3/19] vs. 62% [36/58], p < 0.001). CONCLUSION: This study demonstrated the diagnostic performance of ChatGPT in neuroradiology. ChatGPT's diagnostic accuracy varied depending on disease etiology and was significantly lower for CNS tumors than for non-CNS tumors.


Subject(s)
Artificial Intelligence , Neoplasms , Humans , Head , Brain , Neck
16.
J Radiat Res ; 65(1): 1-9, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-37996085

ABSTRACT

This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.


Subject(s)
Neoplasms , Radiation Oncology , Radiotherapy, Image-Guided , Humans , Artificial Intelligence , Radiotherapy Planning, Computer-Assisted/methods , Neoplasms/radiotherapy , Radiation Oncology/methods
18.
Ann Nucl Med ; 37(11): 583-595, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37749301

ABSTRACT

The radiopharmaceutical 2-[fluorine-18]fluoro-2-deoxy-D-glucose (FDG) has been dominantly used in positron emission tomography (PET) scans for over 20 years, and, owing to its vast utility, its applications have expanded and continue to expand into oncology, neurology, cardiology, and infectious/inflammatory diseases. More recently, the addition of artificial intelligence (AI) has enhanced nuclear medicine diagnosis and imaging with FDG-PET, and new radiopharmaceuticals such as prostate-specific membrane antigen (PSMA) and fibroblast activation protein inhibitor (FAPI) have emerged. Nuclear medicine therapy using agents such as [177Lu]-dotatate surpasses conventional treatments in terms of efficacy and side effects. This article reviews recently established evidence of FDG and non-FDG drugs and anticipates the future trajectory of nuclear medicine.

19.
Radiol Med ; 128(10): 1236-1249, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37639191

ABSTRACT

Although there is no universally accepted definition of artificial intelligence (AI), the term refers to computer systems with intelligence similar to that of humans. Deep learning appeared in 2006, and more than 10 years have passed since the third AI boom was triggered by improvements in computing power, algorithm development, and the use of big data. In recent years, the application and development of AI technology in the medical field have intensified internationally. There is no doubt that AI will be used in clinical practice to assist in diagnostic imaging in the future. In qualitative diagnosis, it is desirable to develop an explainable AI that at least presents the basis of the diagnostic process. However, it must be kept in mind that AI is a physician-assisting system, and the final decision should be made by the physician with an understanding of AI's limitations. The aim of this article is to review the application of AI technology in diagnostic imaging based on the PubMed database, focusing particularly on thoracic diagnostic imaging such as lesion detection and qualitative diagnosis, in order to help radiologists and clinicians become more familiar with AI in the thorax.


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Algorithms , Thorax , Diagnostic Imaging
20.
Magn Reson Med Sci ; 22(4): 401-414, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37532584

ABSTRACT

Owing primarily to the excellent soft-tissue contrast it provides, head and neck MRI is widely applied in clinical practice to assess various diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing aspects such as image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients presenting with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.


Subject(s)
Artificial Intelligence , Head , Humans , Head/diagnostic imaging , Neck/diagnostic imaging , Magnetic Resonance Imaging , Neural Networks, Computer