Results 1 - 4 of 4
1.
Front Bioeng Biotechnol ; 12: 1330713, 2024.
Article in English | MEDLINE | ID: mdl-38361791

ABSTRACT

Over the past 35 years, studies conducted worldwide have revealed a threefold increase in the incidence of thyroid cancer. Strain elastography is a new imaging technique for differentiating benign from malignant thyroid nodules owing to its sensitivity to tissue stiffness. However, the technique has certain limitations, particularly regarding standardization of the compression process, evaluation of results, and the simplifying assumptions built into commercial strain elastography modes. In this work, we propose a novel conditional generative adversarial network (TSE-GAN) for automatically generating thyroid strain elastograms. It adopts a global-to-local architecture to improve the extraction of multi-scale features and develops an adaptive deformable U-net structure in the sub-generator to apply effective deformation. Furthermore, we introduce a Lab-based loss function that induces the network to generate realistic thyroid elastograms conforming to the probability distribution of the target domain. Qualitative and quantitative assessments are conducted on a clinical dataset provided by Shanghai Sixth People's Hospital. Experimental results demonstrate that thyroid elastograms generated by the proposed TSE-GAN outperform those of state-of-the-art image translation methods in meeting the needs of clinical diagnostic applications and providing practical value.
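The abstract names a Lab-based loss but does not give its formulation. As a rough illustration only, the sketch below shows one plausible way such a term could be written in PyTorch, comparing generated and target elastograms in CIELAB space via kornia's color conversion; the function name, input conventions, and the L1 form of the distance are assumptions, not the authors' implementation.

import torch
import kornia.color as KC

def lab_l1_loss(generated_rgb: torch.Tensor, target_rgb: torch.Tensor) -> torch.Tensor:
    # Hypothetical Lab-space loss: both inputs are (N, 3, H, W) RGB tensors in [0, 1].
    gen_lab = KC.rgb_to_lab(generated_rgb)   # differentiable RGB -> CIELAB conversion
    tgt_lab = KC.rgb_to_lab(target_rgb)
    return torch.mean(torch.abs(gen_lab - tgt_lab))   # mean absolute difference in Lab space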

2.
Front Surg ; 11: 1370017, 2024.
Article in English | MEDLINE | ID: mdl-38708363

ABSTRACT

Introduction: The utilization of artificial intelligence (AI) augments intraoperative safety and surgical training. Recognition of parathyroid glands (PGs) is difficult for inexperienced surgeons. The aim of this study was to determine whether deep learning could assist in the identification of PGs on intraoperative videos of patients undergoing thyroid surgery. Methods: In this retrospective study, 50 patients undergoing thyroid surgery between 2021 and 2023 were randomly assigned (7:3 ratio) to a training cohort (n = 35) and a validation cohort (n = 15). The combined datasets included 98 videos with 9,944 annotated frames. An independent test cohort included 15 videos (1,500 frames) from an additional 15 patients. We developed a deep-learning model, Video-Trans-U-HRNet, to segment parathyroid glands in surgical videos and compared it with three advanced medical AI methods on the internal validation cohort. Additionally, we assessed its performance against four surgeons (2 senior and 2 junior) on the independent test cohort, calculating precision and recall metrics for the model. Results: Our model demonstrated superior performance compared to the other AI models on the internal validation cohort. The Dice coefficient and accuracy achieved by our model were 0.760 and 74.7%, respectively, surpassing Video-TransUnet (0.710, 70.1%), Video-SwinUnet (0.754, 73.6%), and TransUnet (0.705, 69.4%). On the external test, our method achieved 89.5% precision, 77.3% recall, and 70.8% accuracy. In the statistical analysis, our model demonstrated results comparable to those of the senior surgeons (senior surgeon 1: χ2 = 0.989, p = 0.320; senior surgeon 2: χ2 = 1.373, p = 0.241) and outperformed the 2 junior surgeons (junior surgeon 1: χ2 = 3.889, p = 0.048; junior surgeon 2: χ2 = 4.763, p = 0.029). Discussion: We introduce an innovative intraoperative video method for identifying PGs, highlighting the potential of AI in the surgical domain. The segmentation method offers surgeons supplementary guidance in locating real PGs, and it may facilitate training and shorten the learning curve associated with the use of this technology.
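As a side note on the metrics quoted above (Dice, accuracy, precision, recall), the following minimal sketch shows how they are conventionally computed from binary segmentation masks. It is a generic illustration with assumed function and variable names, not the study's evaluation code.

import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    # pred and truth are binary masks of the same shape (prediction vs. ground truth).
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # true positives
    fp = np.sum(pred & ~truth)    # false positives
    fn = np.sum(~pred & truth)    # false negatives
    tn = np.sum(~pred & ~truth)   # true negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0,
        "accuracy": (tp + tn) / pred.size,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }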

3.
Comput Med Imaging Graph ; 115: 102394, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38714019

ABSTRACT

Fracture-related infection (FRI) is one of the most devastating complications after fracture surgery in the lower extremities and can lead to extremely high morbidity and medical costs. Therefore, early comprehensive evaluation and accurate diagnosis are critical for appropriate treatment, prevention of complications, and a good prognosis. 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) is one of the most commonly used medical imaging modalities for diagnosing FRI. With the development of deep learning, more neural networks have been proposed and have become powerful computer-aided diagnosis tools in medical imaging. We therefore propose 3DFRINet (Three-Dimensional FRI Network), a fully automated two-stage framework for FRI detection and diagnosis on 18F-FDG PET/CT 3D imaging. The first stage effectively extracts and fuses the features of both modalities to accurately locate the lesion using a dual-branch design and an attention module. The second stage reduces the dimensionality of the image with a maximum intensity projection, which retains the effective features while reducing computational effort and achieving excellent diagnostic performance. The diagnostic performance on lesions reached 91.55% accuracy, 0.9331 AUC, and a 0.9250 F1 score. 3DFRINet surpassed six nuclear medicine experts on every classification metric. Statistical analysis shows that 3DFRINet is equivalent or superior to primary nuclear medicine physicians and comparable to senior nuclear medicine physicians. In conclusion, this study is the first to propose a method based on 18F-FDG PET/CT three-dimensional imaging for FRI localization and diagnosis. The method shows a superior lesion detection rate and diagnostic efficiency and therefore has good prospects for clinical application.
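The second stage described above relies on a maximum intensity projection (MIP) to collapse the 3D PET volume to a 2D image. The sketch below illustrates the generic MIP operation with NumPy; the projection axis, volume shape, and placeholder data are assumptions for illustration rather than details taken from the paper.

import numpy as np

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    # Collapse a 3D volume (e.g. D x H x W) to 2D by keeping the per-pixel maximum along `axis`.
    return volume.max(axis=axis)

# Usage with a random placeholder volume standing in for an 18F-FDG PET scan.
pet_volume = np.random.rand(128, 256, 256)
mip_axial = max_intensity_projection(pet_volume, axis=0)   # resulting shape: (256, 256)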


Subject(s)
Fluorodeoxyglucose F18 , Bone Fractures , Three-Dimensional Imaging , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Three-Dimensional Imaging/methods , Bone Fractures/diagnostic imaging , Radiopharmaceuticals , Female , Male , Middle Aged , Adult , Lower Extremity/diagnostic imaging , Neural Networks, Computer , Aged
4.
Cancer Med ; 13(4): e7065, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38457206

ABSTRACT

INTRODUCTION: Near-infrared autofluorescence imaging (NIFI) can be used to identify parathyroid glands (PGs) during surgery. The purpose of this study was to establish a new model to help surgeons better identify and protect PGs. METHODS: Five hundred and twenty-three NIFI images were selected. The PGs were recorded with NIFI and marked by an artificial intelligence (AI) model, and the recognition rate for PGs was calculated. Differences between AI recognition and recognition by surgeons of different levels of experience were analyzed, and the diagnostic and therapeutic efficacy of the AI model was evaluated. RESULTS: Our model achieved 83.5% precision and 57.8% recall on the internal validation set. The visual recognition rate of the AI model was 85.2% on the internal set and 82.4% on the external set. The PG recognition rate of the AI model was higher than that of junior surgeons (p < 0.05). CONCLUSIONS: This AI model will help surgeons identify PGs and develop their learning ability and self-confidence.
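The comparison with junior surgeons (p < 0.05) is the kind of result typically obtained from a chi-square test on recognition counts. The sketch below shows such a test with SciPy using placeholder counts; the numbers and variable names are hypothetical and not taken from the study.

from scipy.stats import chi2_contingency

# Placeholder 2x2 contingency table: [PGs identified, PGs missed] per reader.
ai_counts = [115, 20]        # hypothetical counts for the AI model
junior_counts = [95, 40]     # hypothetical counts for a junior surgeon

chi2, p_value, dof, expected = chi2_contingency([ai_counts, junior_counts])
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")   # p < 0.05 would indicate a significant difference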


Subject(s)
Deep Learning , Parathyroid Glands , Humans , Parathyroid Glands/diagnostic imaging , Parathyroid Glands/surgery , Parathyroidectomy/methods , Thyroidectomy/methods , Artificial Intelligence , Optical Imaging/methods , Spectroscopy, Near-Infrared/methods