Results 1 - 20 of 49
1.
Lancet Digit Health ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38981834

ABSTRACT

BACKGROUND: Chest x-ray is a basic, cost-effective, and widely available imaging method that is used for static assessments of organic diseases and anatomical abnormalities, but its ability to estimate dynamic measurements such as pulmonary function is unknown. We aimed to estimate two major pulmonary functions from chest x-rays. METHODS: In this retrospective model development and validation study, we trained, validated, and externally tested a deep learning-based artificial intelligence (AI) model to estimate forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) from chest x-rays. We included consecutively collected results of spirometry and any associated chest x-rays that had been obtained between July 1, 2003, and Dec 31, 2021, from five institutions in Japan (labelled institutions A-E). Eligible x-rays had been acquired within 14 days of spirometry and were labelled with the FVC and FEV1. X-rays from three institutions (A-C) were used for training, validation, and internal testing, with the testing dataset being independent of the training and validation datasets, and then x-rays from the two other institutions (D and E) were used for independent external testing. Performance for estimating FVC and FEV1 was evaluated by calculating the Pearson's correlation coefficient (r), intraclass correlation coefficient (ICC), mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE) compared with the results of spirometry. FINDINGS: We included 141 734 x-ray and spirometry pairs from 81 902 patients from the five institutions. 
The training, validation, and internal test datasets included 134 307 x-rays from 75 768 patients (37 718 [50%] female, 38 050 [50%] male; mean age 56 years [SD 18]), and the external test datasets included 2137 x-rays from 1861 patients (742 [40%] female, 1119 [60%] male; mean age 65 years [SD 17]) from institution D and 5290 x-rays from 4273 patients (1972 [46%] female, 2301 [54%] male; mean age 63 years [SD 17]) from institution E. External testing for FVC yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·90 (0·89-0·91) for institution E, ICC of 0·91 (99% CI 0·90-0·92) and 0·89 (0·88-0·90), MSE of 0·17 L² (99% CI 0·15-0·19) and 0·17 L² (0·16-0·19), RMSE of 0·41 L (99% CI 0·39-0·43) and 0·41 L (0·39-0·43), and MAE of 0·31 L (99% CI 0·29-0·32) and 0·31 L (0·30-0·32). External testing for FEV1 yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·91 (0·90-0·91) for institution E, ICC of 0·90 (99% CI 0·89-0·91) and 0·90 (0·90-0·91), MSE of 0·13 L² (99% CI 0·12-0·15) and 0·11 L² (0·10-0·12), RMSE of 0·37 L (99% CI 0·35-0·38) and 0·33 L (0·32-0·35), and MAE of 0·28 L (99% CI 0·27-0·29) and 0·25 L (0·25-0·26). INTERPRETATION: This deep learning model allowed estimation of FVC and FEV1 from chest x-rays, showing high agreement with spirometry. The model offers an alternative to spirometry for assessing pulmonary function, which is especially useful for patients who are unable to undergo spirometry, and might enhance the customisation of CT imaging protocols based on insights gained from chest x-rays, improving the diagnosis and management of lung diseases. Future studies should investigate the performance of this AI model in combination with clinical information to enable more appropriate and targeted use. FUNDING: None.
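As a sketch of how the agreement metrics reported above can be computed (pure Python, using hypothetical FVC values rather than the study's data; the ICC is omitted for brevity):

```python
import math

def agreement_metrics(estimated, measured):
    """Pearson's r, MSE, RMSE, and MAE between model estimates
    and spirometry measurements (both in litres)."""
    n = len(estimated)
    mean_e = sum(estimated) / n
    mean_m = sum(measured) / n
    cov = sum((e - mean_e) * (m - mean_m) for e, m in zip(estimated, measured))
    var_e = sum((e - mean_e) ** 2 for e in estimated)
    var_m = sum((m - mean_m) ** 2 for m in measured)
    mse = sum((e - m) ** 2 for e, m in zip(estimated, measured)) / n
    return {
        "r": cov / math.sqrt(var_e * var_m),
        "mse": mse,
        "rmse": math.sqrt(mse),
        "mae": sum(abs(e - m) for e, m in zip(estimated, measured)) / n,
    }

# Hypothetical FVC values (L): model estimates vs. spirometry
est = [3.1, 2.4, 4.0, 3.6, 2.9]
spiro = [3.0, 2.6, 4.2, 3.5, 3.1]
print(agreement_metrics(est, spiro))
```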

2.
Eur Radiol ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995378

ABSTRACT

OBJECTIVES: To compare the diagnostic accuracy of Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V) based ChatGPT, and radiologists in musculoskeletal radiology. MATERIALS AND METHODS: We included 106 "Test Yourself" cases from Skeletal Radiology between January 2014 and September 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, then both generated a diagnosis for each case. Two radiologists (a radiology resident and a board-certified radiologist) independently provided diagnoses for all cases. The diagnostic accuracy rates were determined based on the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists. RESULTS: GPT-4-based ChatGPT significantly outperformed GPT-4V-based ChatGPT (p < 0.001), with accuracy rates of 43% (46/106) and 8% (9/106), respectively. The radiology resident and the board-certified radiologist achieved accuracy rates of 41% (43/106) and 53% (56/106). The diagnostic accuracy of GPT-4-based ChatGPT was comparable to that of the radiology resident, but was lower than that of the board-certified radiologist, although the differences were not significant (p = 0.78 and 0.22, respectively). The diagnostic accuracy of GPT-4V-based ChatGPT was significantly lower than those of both radiologists (p < 0.001 and < 0.001, respectively). CONCLUSION: GPT-4-based ChatGPT demonstrated significantly higher diagnostic accuracy than GPT-4V-based ChatGPT. While GPT-4-based ChatGPT's diagnostic performance was comparable to that of radiology residents, it did not reach the performance level of board-certified radiologists in musculoskeletal radiology.
CLINICAL RELEVANCE STATEMENT: GPT-4-based ChatGPT outperformed GPT-4V-based ChatGPT and was comparable to radiology residents, but it did not reach the level of board-certified radiologists in musculoskeletal radiology. Radiologists should comprehend ChatGPT's current performance as a diagnostic tool for optimal utilization. KEY POINTS: This study compared the diagnostic performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists in musculoskeletal radiology. GPT-4-based ChatGPT was comparable to radiology residents, but did not reach the level of board-certified radiologists. When utilizing ChatGPT, it is crucial to input appropriate descriptions of imaging findings rather than the images.
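A minimal sketch of the chi-square comparison of two accuracy proportions used in studies like this one (a hand-rolled 2×2 test with no continuity correction; a real analysis would normally use a statistics package):

```python
import math

def chi_square_2x2(a_correct, a_total, b_correct, b_total):
    """Pearson chi-square test (1 df) comparing two diagnostic
    accuracy proportions; returns (statistic, two-sided p value)."""
    table = [[a_correct, a_total - a_correct],
             [b_correct, b_total - b_correct]]
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    # For 1 df, the chi-square survival function is erfc(sqrt(x / 2))
    return stat, math.erfc(math.sqrt(stat / 2))

# Accuracy of GPT-4-based (46/106) vs. GPT-4V-based (9/106) ChatGPT
stat, p = chi_square_2x2(46, 106, 9, 106)
print(round(stat, 1), p < 0.001)  # → 33.6 True
```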

5.
Diagn Interv Imaging ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918123

ABSTRACT

The rapid advancement of artificial intelligence (AI) in healthcare has revolutionized the industry, offering significant improvements in diagnostic accuracy, efficiency, and patient outcomes. However, the increasing adoption of AI systems also raises concerns about their environmental impact, particularly in the context of climate change. This review explores the intersection of climate change and AI in healthcare, examining the challenges posed by the energy consumption and carbon footprint of AI systems, as well as the potential solutions to mitigate their environmental impact. The review highlights the energy-intensive nature of AI model training and deployment, the contribution of data centers to greenhouse gas emissions, and the generation of electronic waste. To address these challenges, the development of energy-efficient AI models, the adoption of green computing practices, and the integration of renewable energy sources are discussed as potential solutions. The review also emphasizes the role of AI in optimizing healthcare workflows, reducing resource waste, and facilitating sustainable practices such as telemedicine. Furthermore, the importance of policy and governance frameworks, global initiatives, and collaborative efforts in promoting sustainable AI practices in healthcare is explored. The review concludes by outlining best practices for sustainable AI deployment, including eco-design, lifecycle assessment, responsible data management, and continuous monitoring and improvement. As the healthcare industry continues to embrace AI technologies, prioritizing sustainability and environmental responsibility is crucial to ensure that the benefits of AI are realized while actively contributing to the preservation of our planet.

6.
Jpn J Radiol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38856878

ABSTRACT

Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the 'training set', the data for tuning of parameters referred to as the 'validation (or tuning) set', and the data for the evaluation of models as the 'test set'. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. 
This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
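The splitting terminology the review advocates maps naturally onto code. A minimal sketch, with hypothetical 70/15/15 proportions:

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=0):
    """Random split into the deep-learning 'training set' (model fitting),
    'validation set' (hyperparameter tuning), and 'test set' (final
    evaluation) -- an *internal* test set in the review's terminology."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    return (items[n_test + n_val:],          # training set
            items[n_test:n_test + n_val],    # validation (tuning) set
            items[:n_test])                  # internal test set

def k_fold_indices(n, k):
    """Internal testing via k-fold cross-validation: yields
    (train_indices, test_indices) for each fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # → 70 15 15
```

External test sets (temporal or geographic) come from data never seen in any of these splits.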

7.
Clin Neuroradiol ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806794

ABSTRACT

PURPOSE: To compare the diagnostic performance among Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V) based ChatGPT, and radiologists in challenging neuroradiology cases. METHODS: We collected 32 consecutive "Freiburg Neuropathology Case Conference" cases from the journal Clinical Neuroradiology between March 2016 and December 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, then both generated a diagnosis for each case. Six radiologists (three radiology residents and three board-certified radiologists) independently reviewed all cases and provided diagnoses. ChatGPT and radiologists' diagnostic accuracy rates were evaluated based on the published ground truth. Chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists. RESULTS: GPT-4 and GPT-4V-based ChatGPTs achieved accuracy rates of 22% (7/32) and 16% (5/32), respectively. Radiologists achieved the following accuracy rates: three radiology residents 28% (9/32), 31% (10/32), and 28% (9/32); and three board-certified radiologists 38% (12/32), 47% (15/32), and 44% (14/32). GPT-4-based ChatGPT's diagnostic accuracy was lower than that of each radiologist, although not significantly (all p > 0.07). GPT-4V-based ChatGPT's diagnostic accuracy was also lower than that of each radiologist, and significantly lower than that of two board-certified radiologists (p = 0.02 and 0.03); the differences were not significant for the radiology residents and the remaining board-certified radiologist (all p > 0.09). CONCLUSION: While GPT-4-based ChatGPT demonstrated relatively higher diagnostic performance than GPT-4V-based ChatGPT, the diagnostic performance of GPT-4 and GPT-4V-based ChatGPTs did not reach the performance level of either radiology residents or board-certified radiologists in challenging neuroradiology cases.

9.
Am J Cardiol ; 223: 1-6, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38782227

ABSTRACT

We develop and evaluate an artificial intelligence (AI)-based algorithm that uses pre-rotation atherectomy (RA) intravascular ultrasound (IVUS) images to automatically predict regions debulked by RA. A total of 2106 IVUS cross-sections from 60 patients with de novo severely calcified coronary lesions who underwent IVUS-guided RA were consecutively collected. The 2 identical IVUS images of pre- and post-RA were merged, and the orientations of the debulked segments identified in the merged images were marked on the outer circle of each IVUS image. The AI model was developed based on ResNet (deep residual learning for image recognition). The architecture connected 36 fully connected layers, each corresponding to 1 of the 36 orientations segmented every 10°, to a single feature extractor. In each cross-sectional analysis, our AI model achieved an average sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of 81%, 72%, 46%, 90%, and 75%, respectively. In conclusion, the AI-based algorithm can use information from pre-RA IVUS images to accurately predict regions debulked by RA and will assist interventional cardiologists in determining the treatment strategies for severely calcified coronary lesions.
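The per-cross-section metrics reported above follow from a confusion matrix over the predicted and actual debulked orientations. A minimal sketch with hypothetical per-orientation labels (not the study's data):

```python
def debulking_metrics(predicted, actual):
    """Sensitivity, specificity, PPV, NPV, and accuracy for binary
    per-orientation predictions (e.g., 36 segments per IVUS
    cross-section, True = debulked by rotational atherectomy)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(predicted),
    }

# Hypothetical labels for one 36-segment cross-section
actual = [True] * 9 + [False] * 27
predicted = [True] * 7 + [False] * 2 + [True] * 5 + [False] * 22
print(debulking_metrics(predicted, actual))
```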


Subjects
Algorithms, Artificial Intelligence, Coronary Atherectomy, Coronary Artery Disease, Interventional Ultrasonography, Humans, Interventional Ultrasonography/methods, Coronary Atherectomy/methods, Male, Female, Aged, Coronary Artery Disease/surgery, Coronary Artery Disease/diagnostic imaging, Vascular Calcification/diagnostic imaging, Vascular Calcification/surgery, Predictive Value of Tests, Middle Aged, Coronary Vessels/diagnostic imaging, Coronary Vessels/surgery, Retrospective Studies
10.
AJNR Am J Neuroradiol ; 45(6): 826-832, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38663993

ABSTRACT

BACKGROUND: Intermodality image-to-image translation is an artificial intelligence technique that synthesizes images of one modality from images of another. PURPOSE: This review was designed to systematically identify and quantify biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES: PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION: This review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS: Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Adherence of medically-focused articles was compared with that of engineering-focused articles overall with the Mann-Whitney U test and for each criterion using the Fisher exact test. DATA SYNTHESIS: Median adherence was 69% for the relevant CLAIM criteria and 38% for PROBAST questions. CLAIM adherence was lower for engineering-focused articles than for medically-focused articles (65% versus 73%, P < .001). Engineering-focused studies had higher adherence for model description criteria, and medically-focused studies had higher adherence for data set and evaluation descriptions. LIMITATIONS: Our review is limited by study design and model heterogeneity. CONCLUSIONS: Nearly all studies revealed critical issues preventing clinical application, with engineering-focused studies showing higher adherence for the technical model description but significantly lower overall adherence than medically-focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.
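A sketch of the Mann-Whitney U comparison of overall CLAIM adherence between article groups (normal approximation without tie correction; the adherence values below are hypothetical):

```python
import math

def ranks(values):
    """Tie-averaged ranks (1-based), as used by rank-based tests."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction); returns (U for sample x, p value)."""
    n1, n2 = len(x), len(y)
    r = ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return u1, math.erfc(abs(z) / math.sqrt(2))

# Hypothetical per-article CLAIM adherence: engineering- vs. medically-focused
eng = [0.55, 0.60, 0.62, 0.65, 0.68]
med = [0.70, 0.72, 0.73, 0.75, 0.80]
u, p = mann_whitney_u(eng, med)
print(u, p < 0.05)  # → 0.0 True
```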


Subjects
Neuroimaging, Humans, Neuroimaging/methods, Neuroimaging/standards, Bias, Artificial Intelligence
11.
Jpn J Radiol ; 42(7): 685-696, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38551772

ABSTRACT

The advent of Deep Learning (DL) has significantly propelled the field of diagnostic radiology forward by enhancing image analysis and interpretation. The introduction of the Transformer architecture, followed by the development of Large Language Models (LLMs), has further revolutionized this domain. LLMs now possess the potential to automate and refine the radiology workflow, extending from report generation to assistance in diagnostics and patient care. The integration of multimodal technology with LLMs could potentially leapfrog these applications to unprecedented levels. However, LLMs come with unresolved challenges such as information hallucinations and biases, which can affect clinical reliability. Despite these issues, legislative and guideline frameworks have yet to catch up with technological advancements. Radiologists must acquire a thorough understanding of these technologies to leverage LLMs' potential to the fullest while maintaining medical safety and ethics. This review aims to aid in that endeavor.


Subjects
Deep Learning, Radiology, Humans, Radiology/methods, Radiologists, Artificial Intelligence, Workflow
12.
Neuroradiology ; 66(6): 955-961, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38407581

ABSTRACT

PURPOSE: Cranial nerve involvement (CNI) influences the treatment strategies and prognosis of head and neck tumors. However, its incidence in skull base chordomas and chondrosarcomas remains to be investigated. This study evaluated the imaging features of chordoma and chondrosarcoma, with a focus on the differences in CNI. METHODS: Forty-two patients (26 and 16 patients with chordomas and chondrosarcomas, respectively) treated at our institution between January 2007 and January 2023 were included in this retrospective study. Imaging features, such as the maximum diameter, tumor location (midline or off-midline), calcification, signal intensity on T2-weighted image, mean apparent diffusion coefficient (ADC) values, contrast enhancement, and CNI, were evaluated and compared using Fisher's exact test or the Mann-Whitney U-test. The odds ratio (OR) was calculated to evaluate the association between the histological type and imaging features. RESULTS: The incidence of CNI in chondrosarcomas was significantly higher than that in chordomas (63% vs. 8%, P < 0.001). An off-midline location was more common in chondrosarcomas than in chordomas (86% vs. 13%; P < 0.001). The mean ADC values of chondrosarcomas were significantly higher than those of chordomas (P < 0.001). Significant associations were identified between chondrosarcomas and CNI (OR = 20.00; P < 0.001), location (OR = 53.70; P < 0.001), and mean ADC values (OR = 1.01; P = 0.002). CONCLUSION: The incidence of CNI and off-midline location in chondrosarcomas was significantly higher than that in chordomas. CNI, tumor location, and the mean ADC can help distinguish between these entities.
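The reported odds ratio of 20.00 for CNI is consistent with counts of 10/16 chondrosarcomas (63%) versus 2/26 chordomas (8%). These illustrative counts, not taken from the paper's tables, can be checked with a hand-rolled two-sided Fisher's exact test:

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sums hypergeometric probabilities no larger than the observed table's."""
    n = a + b + c + d
    r1, c1 = a + b, a + c
    def p_table(x):
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

def odds_ratio(a, b, c, d):
    """Odds ratio for the same 2x2 table."""
    return (a * d) / (b * c)

# CNI present/absent: chondrosarcoma 10/6, chordoma 2/24 (hypothetical
# counts matching the reported 63% vs. 8% incidences)
print(round(odds_ratio(10, 6, 2, 24), 1))  # → 20.0
print(fisher_exact(10, 6, 2, 24) < 0.001)  # → True
```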


Subjects
Chondrosarcoma, Chordoma, Skull Base Neoplasms, Humans, Female, Male, Retrospective Studies, Middle Aged, Chordoma/diagnostic imaging, Chordoma/pathology, Adult, Chondrosarcoma/diagnostic imaging, Chondrosarcoma/pathology, Aged, Skull Base Neoplasms/diagnostic imaging, Contrast Media, Adolescent, Magnetic Resonance Imaging/methods
13.
Sci Rep ; 14(1): 2911, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316892

ABSTRACT

This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion weighted images, and validated the similarities between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion probing gradient (MPG), respectively. The identical imaging plane was paired for the image-to-image translation model that synthesized one direction of the MPG from DWI. This process was repeated six times in the respective MPG directions. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared. The Bland-Altman plot between the original and synthetic data was evaluated. Although the test dataset showed a larger standard deviation of all values and lower SNR in the synthetic data than in the original data, the Bland-Altman plots showed each plot localizing in a similar distribution. Synthetic DTI could be generated from conventional DWI with an image-to-image translation model.
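A Bland-Altman summary (bias and 95% limits of agreement) of the kind evaluated above can be computed as follows; the ROI values are hypothetical, not the study's measurements:

```python
import math

def bland_altman(original, synthetic):
    """Bland-Altman summary: mean difference (bias) and 95% limits of
    agreement (bias ± 1.96 SD of the differences) between paired
    original and synthetic ROI values."""
    diffs = [o - s for o, s in zip(original, synthetic)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical mean FA values per ROI (original vs. synthetic DTI)
orig = [0.71, 0.55, 0.48, 0.62, 0.80]
synth = [0.69, 0.57, 0.47, 0.60, 0.78]
bias, (lo, hi) = bland_altman(orig, synth)
print(round(bias, 3), lo < 0 < hi)  # → 0.01 True
```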


Subjects
Deep Learning, White Matter, Humans, Corpus Callosum/diagnostic imaging, Signal-to-Noise Ratio, Internal Capsule, Diffusion Magnetic Resonance Imaging/methods
14.
J Magn Reson Imaging ; 59(4): 1341-1348, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37424114

ABSTRACT

BACKGROUND: Although brain activities in Alzheimer's disease (AD) might be evaluated with MRI and PET, the relationships between brain temperature (BT), the index of diffusivity along the perivascular space (ALPS index), and amyloid deposition in the cerebral cortex are still unclear. PURPOSE: To investigate the relationship between metabolic imaging measurements and clinical information in patients with AD and normal controls (NCs). STUDY TYPE: Retrospective analysis of a prospective dataset. POPULATION: 58 participants (78.3 ± 6.8 years; 30 female): 29 AD patients and 29 age- and sex-matched NCs from the Open Access Series of Imaging Studies dataset. FIELD STRENGTH/SEQUENCE: 3T; T1-weighted magnetization-prepared rapid gradient-echo, diffusion tensor imaging with 64 directions, and dynamic 18F-florbetapir PET. ASSESSMENT: Imaging metrics were compared between AD patients and NCs. These included BT calculated from the diffusivity of the lateral ventricles, the ALPS index that reflects the glymphatic system, the mean standardized uptake value ratio (SUVR) of amyloid PET in the cerebral cortex, and clinical information, such as age, sex, and MMSE score. STATISTICAL TESTS: Pearson's or Spearman's correlation and multiple linear regression analyses. P values <0.05 were defined as statistically significant. RESULTS: Significant positive correlations were found between BT and the ALPS index (r = 0.44 for NCs), while significant negative correlations were found between age and the ALPS index (rs = -0.43 for AD and -0.47 for NCs). The SUVR of amyloid PET was not significantly associated with BT (P = 0.81 for AD and 0.21 for NCs) or the ALPS index (P = 0.10 for AD and 0.52 for NCs). In the multiple regression analysis, age was significantly associated with BT, while age, sex, and presence of AD were significantly associated with the ALPS index. DATA CONCLUSION: Impairment of the glymphatic system measured using MRI was associated with lower BT and aging. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
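A sketch of Spearman's rank correlation as used for the age-ALPS index association (the values below are hypothetical, chosen to show a monotonic negative trend):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r computed on
    tie-averaged ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical ages and ALPS indices with a negative association
age = [68, 72, 75, 80, 85, 90]
alps = [1.60, 1.55, 1.48, 1.42, 1.35, 1.30]
print(round(spearman_rho(age, alps), 2))  # → -1.0
```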


Subjects
Alzheimer Disease, Humans, Female, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/metabolism, Diffusion Tensor Imaging/methods, Retrospective Studies, Prospective Studies, Access to Information, Positron-Emission Tomography/methods, Magnetic Resonance Imaging/methods, Amyloid, Amyloidogenic Proteins, Cerebral Cortex
15.
Jpn J Radiol ; 42(1): 3-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37540463

ABSTRACT

In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.


Subjects
Artificial Intelligence, Radiology, Humans, Algorithms, Radiologists, Delivery of Health Care
16.
Neuroradiology ; 66(1): 73-79, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37994939

ABSTRACT

PURPOSE: The noteworthy performance of Chat Generative Pre-trained Transformer (ChatGPT), an artificial intelligence text generation model based on the GPT-4 architecture, has been demonstrated in various fields; however, its potential applications in neuroradiology remain unexplored. This study aimed to evaluate the diagnostic performance of GPT-4-based ChatGPT in neuroradiology. METHODS: We collected 100 consecutive "Case of the Week" cases from the American Journal of Neuroradiology between October 2021 and September 2023. ChatGPT generated a diagnosis for each case from the patient's medical history and imaging findings. Then the diagnostic accuracy rate was determined using the published ground truth. Each case was categorized by anatomical location (brain, spine, and head & neck), and brain cases were further divided into central nervous system (CNS) tumor and non-CNS tumor groups. Fisher's exact test was conducted to compare the accuracy rates among the three anatomical locations, as well as between the CNS tumor and non-CNS tumor groups. RESULTS: ChatGPT achieved a diagnostic accuracy rate of 50% (50/100 cases). There were no significant differences between the accuracy rates of the three anatomical locations (p = 0.89). The accuracy rate was significantly lower for the CNS tumor group than for the non-CNS tumor group in the brain cases (16% [3/19] vs. 62% [36/58], p < 0.001). CONCLUSION: This study demonstrated the diagnostic performance of ChatGPT in neuroradiology. ChatGPT's diagnostic accuracy varied depending on disease etiologies, and its diagnostic accuracy was significantly lower for CNS tumors than for non-CNS tumors.


Subjects
Artificial Intelligence, Neoplasms, Humans, Head, Brain, Neck
18.
J Radiat Res ; 65(1): 1-9, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-37996085

ABSTRACT

This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.


Subjects
Neoplasms, Radiation Oncology, Image-Guided Radiotherapy, Humans, Artificial Intelligence, Computer-Assisted Radiotherapy Planning/methods, Neoplasms/radiotherapy, Radiation Oncology/methods
20.
Ann Nucl Med ; 37(11): 583-595, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37749301

ABSTRACT

The radiopharmaceutical 2-[fluorine-18]fluoro-2-deoxy-D-glucose (FDG) has been the dominant tracer in positron emission tomography (PET) scans for over 20 years, and owing to its vast utility its applications have expanded, and continue to expand, into oncology, neurology, cardiology, and infectious/inflammatory diseases. More recently, the addition of artificial intelligence (AI) has enhanced nuclear medicine diagnosis and imaging with FDG-PET, and new radiopharmaceuticals such as prostate-specific membrane antigen (PSMA) ligands and fibroblast activation protein inhibitors (FAPI) have emerged. Nuclear medicine therapy using agents such as [177Lu]-dotatate surpasses conventional treatments in terms of efficacy and side effects. This article reviews recently established evidence of FDG and non-FDG drugs and anticipates the future trajectory of nuclear medicine.
