Results 1 - 20 of 60
1.
Acta Radiol ; 65(1): 68-75, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37097830

ABSTRACT

BACKGROUND: Extramural venous invasion (EMVI) is an important prognostic factor of rectal adenocarcinoma. However, accurate preoperative assessment of EMVI remains difficult. PURPOSE: To assess EMVI preoperatively using radiomics, combining different algorithms with clinical factors to establish multiple models and make the most accurate judgment before surgery. MATERIAL AND METHODS: A total of 212 patients with rectal adenocarcinoma between September 2012 and July 2019 were included and assigned to training and validation datasets. Radiomics features were extracted from pretreatment T2-weighted images. Different prediction models (clinical model, logistic regression [LR], random forest [RF], support vector machine [SVM], clinical-LR model, clinical-RF model, and clinical-SVM model) were constructed from radiomics features and clinical factors. The area under the curve (AUC) and accuracy were used to assess the predictive efficacy of the models. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were also calculated. RESULTS: The clinical-LR model exhibited the best diagnostic efficiency, with AUCs of 0.962 (95% confidence interval [CI] = 0.936-0.988) and 0.865 (95% CI = 0.770-0.959), accuracy of 0.899 and 0.828, sensitivity of 0.867 and 0.818, specificity of 0.913 and 0.833, PPV of 0.813 and 0.720, and NPV of 0.940 and 0.897 for the training and validation datasets, respectively. CONCLUSION: The radiomics-based prediction model is a valuable tool for EMVI detection and can assist decision-making in clinical practice.
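The diagnostic indices reported above follow directly from a confusion matrix and the rank formulation of the AUC. A minimal stdlib-only sketch (an illustrative reimplementation, not the authors' code) of how they are computed:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def diagnostic_metrics(labels, preds):
    """Sensitivity, specificity, PPV, NPV, and accuracy from binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(labels),
    }
```

For a probabilistic model such as the clinical-LR model, the binary predictions would come from thresholding the predicted probability at 0.5 or at an ROC-chosen cutoff.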


Subject(s)
Adenocarcinoma , Rectal Neoplasms , Humans , Radiomics , Retrospective Studies , Magnetic Resonance Imaging/methods , Rectal Neoplasms/diagnostic imaging , Rectal Neoplasms/surgery , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/surgery
2.
Hepatobiliary Pancreat Dis Int ; 22(6): 594-604, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36456428

ABSTRACT

BACKGROUND: Although transarterial chemoembolization (TACE) is the first-line therapy for intermediate-stage hepatocellular carcinoma (HCC), it is not suitable for all patients. This study aimed to determine how to select patients who are not suitable for TACE as the first treatment choice. METHODS: A total of 243 intermediate-stage HCC patients treated with TACE at three centers were retrospectively enrolled, of which 171 were used for model training and 72 for testing. Radiomics features were screened using Spearman correlation analysis and the least absolute shrinkage and selection operator (LASSO) algorithm. Subsequently, a radiomics model was established using extreme gradient boosting (XGBoost) with 5-fold cross-validation. The Shapley additive explanations (SHAP) method was used to visualize the radiomics model. A clinical model was constructed using univariate and multivariate logistic regression. The combined model comprising the radiomics signature and clinical factors was then established. This model's performance was evaluated by discrimination, calibration, and clinical application. Generalization ability was evaluated on the testing cohort. Finally, the model was used to analyze overall and progression-free survival of different groups. RESULTS: A third of the patients (81/243) were unsuitable for TACE treatment. The combined model identified TACE-unsuitable cases with high accuracy, at a sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of 0.759, 0.885, and 0.906 [95% confidence interval (CI): 0.859-0.953] in the training cohort and 0.826, 0.776, and 0.894 (95% CI: 0.815-0.972) in the testing cohort, respectively. CONCLUSIONS: The high accuracy of our clinical-radiomics model makes it clinically useful for identifying intermediate-stage HCC patients who are unsuitable for TACE treatment.
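The Spearman screening step described above can be sketched without any ML libraries. The version below is an illustrative reimplementation (not the study's code) that drops each feature strongly rank-correlated with an already-kept one, before a LASSO/XGBoost stage would run; the 0.9 cutoff is an assumption:

```python
def ranks(xs):
    """Average ranks (1-based) with tie handling."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def screen_features(feature_matrix, threshold=0.9):
    """Greedy redundancy filter: keep a feature (row) only if its |rho| with
    every already-kept feature stays below the threshold."""
    kept = []
    for i, col in enumerate(feature_matrix):
        if all(abs(spearman(col, feature_matrix[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```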


Subject(s)
Carcinoma, Hepatocellular , Chemoembolization, Therapeutic , Liver Neoplasms , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/therapy , Chemoembolization, Therapeutic/adverse effects , Chemoembolization, Therapeutic/methods , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/therapy , Retrospective Studies , Vascular Surgical Procedures
3.
BMC Med Imaging ; 21(1): 178, 2021 11 24.
Article in English | MEDLINE | ID: mdl-34819022

ABSTRACT

BACKGROUND: Most existing algorithms have focused on segmentation of public liver CT datasets acquired under regular conditions (no pneumoperitoneum, horizontal supine position). This study instead segmented datasets with unconventional liver shapes and intensities arising from contrast phases, irregular scanning conditions, and different scanning subjects (pigs, and patients with large pathological tumors), which together form the multiple heterogeneity of the datasets used here. METHODS: The multiple heterogeneous datasets used in this paper include: (1) one public contrast-enhanced CT dataset and one public non-contrast CT dataset; (2) a contrast-enhanced dataset with abnormal liver shapes (very long left liver lobes) and large liver tumors with abnormal presentation caused by microvascular invasion; (3) one artificial pneumoperitoneum dataset acquired under pneumoperitoneum in three scanning positions (horizontal/left/right recumbent); (4) two porcine datasets, of Bama and domestic breeds, that contain pneumoperitoneum cases but differ substantially from human anatomy. The study investigated the segmentation performance of 3D U-Net in terms of: (1) generalization between the heterogeneous datasets, via cross-testing experiments; (2) compatibility when hybrid-training on all datasets under different sampling and encoder-layer-sharing schemes. We further investigated encoder-level compatibility by giving each dataset its own convolutions at a given level (dataset-wise convolutions) while sharing the decoder. RESULTS: Models trained on different datasets showed different segmentation performance. Prediction accuracy between the LiTS dataset and the Zhujiang dataset was about 0.955 and 0.958, indicating good mutual generalization, as both are contrast-enhanced clinical patient datasets scanned regularly. For datasets scanned under pneumoperitoneum, the corresponding datasets scanned without pneumoperitoneum showed good generalization. A dataset-wise convolution module at high levels can mitigate the dataset imbalance problem. These results should help researchers design solutions when segmenting such special datasets. CONCLUSIONS: (1) Models trained on regularly scanned datasets generalize well to irregularly scanned ones. (2) Hybrid training is beneficial, but the dataset imbalance problem persists because of the multi-domain heterogeneity. Higher encoder levels encode more domain-specific information than lower levels and were therefore less compatible across our datasets.
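The cross-testing experiments reduce to evaluating every trained model on every dataset with an overlap metric. A small illustrative sketch (Dice coefficient plus a generalization matrix, not the study's pipeline; `predict` is a hypothetical callable):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

def cross_test(models, datasets, predict):
    """Cross-testing matrix: entry [i][j] is the mean Dice of model i on
    dataset j, where each dataset is a list of (image, mask) pairs."""
    return [[sum(dice(predict(m, x), y) for x, y in ds) / len(ds)
             for ds in datasets] for m in models]
```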


Subject(s)
Imaging, Three-Dimensional , Liver Neoplasms/diagnostic imaging , Liver/diagnostic imaging , Machine Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Animals , Contrast Media , Datasets as Topic , Humans , Pneumoperitoneum/diagnostic imaging , Swine
4.
Surg Endosc ; 34(8): 3449-3459, 2020 08.
Article in English | MEDLINE | ID: mdl-31705286

ABSTRACT

BACKGROUND: Understanding the internal anatomy of the liver remains a major challenge in anatomical liver resection. Although virtual hepatectomy and indocyanine green (ICG) fluorescence imaging techniques have been widely used in hepatobiliary surgery, limitations in their application for real-time navigation persist. OBJECTIVE: The aim of the present study was to evaluate the feasibility and clinical utility of the novel laparoscopic hepatectomy navigation system (LHNS), which fuses preoperative three-dimensional (3D) models with ICG fluorescence imaging to achieve real-time surgical navigation. METHODS: We conducted a retrospective review of clinical outcomes for 64 patients who underwent laparoscopic hepatectomy from January 2018 to December 2018: 30 patients who underwent the procedure using the LHNS (LHNS group) and 34 who underwent it without LHNS guidance (non-LHNS group). RESULTS: There was no significant difference in preoperative characteristics between the two groups. The LHNS group had significantly less blood loss (285.0 ± 163.0 mL vs. 391.1 ± 242.0 mL; P = 0.047), a lower intraoperative blood transfusion rate (13.3% vs. 38.2%; P = 0.045), and a shorter postoperative hospital stay (7.8 ± 2.1 days vs. 10.6 ± 3.8 days; P < 0.001) than the non-LHNS group. There was no statistically significant difference in operative time or overall complication rate between the groups. The liver transection line was clearly delineated by the LHNS in 27 patients; the projected boundary was unclear in 2 cases, and in 1 case the boundary was not clearly displayed by ICG fluorescence imaging. CONCLUSIONS: We developed the LHNS to address limitations of current intraoperative imaging systems. The LHNS shows promise as a real-time navigation system for laparoscopic hepatectomy.


Subject(s)
Carcinoma, Hepatocellular/diagnostic imaging , Hepatectomy/methods , Laparoscopy/methods , Liver Neoplasms/diagnostic imaging , Optical Imaging/methods , Surgery, Computer-Assisted/methods , Surgical Navigation Systems , Adult , Aged , Blood Loss, Surgical , Blood Transfusion , Carcinoma, Hepatocellular/surgery , Feasibility Studies , Female , Fluorescence , Humans , Imaging, Three-Dimensional , Indocyanine Green/therapeutic use , Length of Stay , Liver Neoplasms/surgery , Male , Middle Aged , Operative Time , Postoperative Complications , Retrospective Studies
5.
Hum Brain Mapp ; 40(18): 5159-5171, 2019 12 15.
Article in English | MEDLINE | ID: mdl-31423713

ABSTRACT

Although the middle temporal gyrus (MTG) has been parcellated into subregions with distinct anatomical connectivity patterns, whether the structural topography of the MTG can inform functional segregation of this area remains largely unknown. Accumulating evidence suggests that the brain's underlying organization and function can be directly and effectively delineated with resting-state functional connectivity (RSFC) by identifying putative functional boundaries between cortical areas. Here, RSFC profiles were used to explore the functional segregation of the MTG, defining four subregions from anterior to posterior in two independent datasets; this pattern was similar to the MTG parcellation scheme obtained using anatomical connectivity. The functional segregation of the MTG was further supported by whole-brain and specific RSFC and coactivation mapping. Furthermore, fingerprints with 10 predefined networks and meta-analytic functional characterization of each subregion also identified functional distinctions between subregions. The specific connectivity analysis and functional characterization indicated that the bilateral most anterior subregions mainly participated in social cognition and semantic processing; the ventral middle subregions were involved in social cognition in the left hemisphere and auditory processing in the right hemisphere; the bilateral ventro-posterior subregions participated in action observation, whereas the left subregion was also involved in semantic processing; and the dorsal subregions in the superior temporal sulcus were involved in language, social cognition, and auditory processing. Taken together, our findings demonstrate that the MTG shows similar structural and functional topographies and provide more detailed information about the functional organization of the MTG, which may facilitate future clinical and cognitive research on this area.


Subject(s)
Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Nerve Net/physiology , Rest/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Adult , Brain Mapping/methods , Female , Humans , Male , Young Adult
6.
Biomed Eng Online ; 17(1): 63, 2018 May 23.
Article in English | MEDLINE | ID: mdl-29792208

ABSTRACT

OBJECTIVE: In this paper, we investigate the effect of a computer-aided triage system implemented for health-checkup screening of lung lesions, in which tens of thousands of chest X-rays (CXRs) require diagnosis. An automated system with high diagnostic accuracy can reduce the radiologist's workload in scrutinizing these images. METHOD: We present a deep learning model that efficiently separates abnormal from normal cases during mass chest screening and yields a probability confidence for each CXR. Moreover, a convolutional sparse denoising autoencoder is designed to compute the reconstruction error. We employ four publicly available radiology datasets of CXRs, analyze their reports, and utilize their images to mine the correct disease level of the CXRs submitted to the computer-aided triage system. The final decision is obtained by voting across multiple classifiers, assigning each CXR to one of three levels: normal, abnormal, or uncertain. RESULTS: We address grade diagnosis for physical examination and propose several new metric indices. Combining predictors for classification using the area under a receiver operating characteristic curve, we observe that the final decision depends on both the reconstruction-error threshold and the probability value. Our method achieves promising results, with precision of 98.7% and 94.3% on the normal and abnormal cases, respectively. CONCLUSION: The results achieved by the proposed framework show its strength in classifying disease level with high accuracy. This can save radiologists time and effort, allowing them to focus on higher-risk CXRs.
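The three-way decision described above, combining a classifier probability with a reconstruction-error threshold, can be sketched in a few lines. The thresholds here are illustrative placeholders, not the paper's fitted values:

```python
def triage(prob_abnormal, recon_error, p_lo=0.3, p_hi=0.7, err_hi=0.05):
    """Sort one CXR into normal / abnormal / uncertain by combining a
    classifier probability with an autoencoder reconstruction error.
    All thresholds are illustrative assumptions."""
    # High probability or high reconstruction error both flag abnormality.
    if prob_abnormal >= p_hi or recon_error >= err_hi:
        return "abnormal"
    # Confidently low probability with low error: normal.
    if prob_abnormal <= p_lo:
        return "normal"
    # Everything in between is deferred to a radiologist.
    return "uncertain"
```

In a deployed system the "uncertain" bucket is the one a radiologist reviews first, which is how the triage saves reading time.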


Subject(s)
Image Processing, Computer-Assisted/methods , Machine Learning , Radiography, Thoracic , Signal-To-Noise Ratio , Triage/methods , Automation , Humans , Lung/diagnostic imaging , ROC Curve
7.
J Appl Clin Med Phys ; 17(6): 118-127, 2016 11 08.
Article in English | MEDLINE | ID: mdl-27929487

ABSTRACT

This study evaluated the accuracy, consistency, and efficiency of three liver volumetry methods on clinical contrast-enhanced CT images: one interactive method using an in-house-developed 3D Medical Image Analysis (3DMIA) system, one automatic active shape model (ASM)-based segmentation, and one automatic probabilistic atlas (PA)-guided segmentation. Forty-two datasets, including 27 patients with normal livers and 15 with space-occupying liver lesions, were retrospectively included in this study. The three methods - semiautomatic 3DMIA, automatic ASM-based, and automatic PA-based volumetry - achieved an accuracy with VD (volume difference) of -1.69%, -2.75%, and 3.06% in the normal group, respectively, and with VD of -3.20%, -3.35%, and 4.14% in the space-occupying lesion group, respectively. The three methods took 27.63 min, 1.26 min, and 1.18 min on average, compared with 43.98 min for manual volumetry. The high intraclass correlation coefficients between the three methods and the manual method indicated excellent agreement on liver volumetry. Significant differences in segmentation time were observed between the three methods (3DMIA, ASM, and PA) and manual volumetry (p < 0.001), as well as between the automatic volumetries (ASM and PA) and the semiautomatic volumetry (3DMIA) (p < 0.001). The semiautomatic interactive 3DMIA, automatic ASM-based, and automatic PA-based liver volumetry all agreed well with the manual gold standard in both the normal liver group and the space-occupying lesion group. The ASM- and PA-based automatic segmentations offer better efficiency in clinical use.
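The VD figures above follow from voxel counts and the usual volumetry convention; a minimal sketch (the formula is the standard one, assumed rather than quoted from the paper):

```python
def liver_volume_ml(voxel_count, spacing_mm):
    """Volume from a segmentation voxel count and (x, y, z) voxel spacing in mm."""
    sx, sy, sz = spacing_mm
    return voxel_count * sx * sy * sz / 1000.0  # mm^3 -> mL

def volume_difference(auto_ml, manual_ml):
    """VD% relative to the manual gold standard: (auto - manual) / manual * 100."""
    return (auto_ml - manual_ml) / manual_ml * 100.0
```

A negative VD, as for 3DMIA and ASM above, means the method under-segments relative to the manual reference.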


Subject(s)
Algorithms , Image Enhancement/methods , Imaging, Three-Dimensional/methods , Liver Neoplasms/pathology , Liver Neoplasms/radiotherapy , Radiotherapy, Intensity-Modulated/methods , Tomography, X-Ray Computed/methods , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Middle Aged , Organ Size , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Retrospective Studies , Young Adult
8.
Int J Comput Assist Radiol Surg ; 19(7): 1291-1299, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38689146

ABSTRACT

PURPOSE: Surgical action triplet recognition is a clinically significant yet challenging task. It provides surgeons with detailed information about surgical scenarios, thereby facilitating clinical decision-making. However, the high similarity among action triplets presents a formidable obstacle to recognition. To enhance accuracy, prior methods relied on larger models, incurring a considerable computational burden. METHODS: We propose a novel framework known as the Lite and Mega Models (LAM). It comprises a CNN-based fully fine-tuned model (LAM-Lite) and a parameter-efficient fine-tuned model based on a Transformer-architecture foundation model (LAM-Mega). Temporal multi-label data augmentation is introduced for extracting robust class-level features. RESULTS: Our study demonstrates that LAM outperforms prior methods across various parameter scales on the CholecT50 dataset. Using fewer tunable parameters, LAM achieves a mean average precision (mAP) of 42.1%, a 3.6% improvement over the previous state of the art. CONCLUSION: Leveraging an effective structural design and the robust capabilities of the foundation model, our approach strikes a balance between accuracy and computational efficiency. The source code is accessible at https://github.com/Lycus99/LAM .
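The mAP metric quoted above averages per-class average precision. A stdlib-only sketch of the standard computation (not the official challenge evaluator, which may differ in detail, e.g. in tie handling):

```python
def average_precision(labels, scores):
    """AP for one class: precision recorded at each positive hit when
    examples are ranked by descending score, then averaged."""
    ranked = sorted(zip(scores, labels), reverse=True)
    hits, precisions = 0, []
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_class):
    """mAP over (labels, scores) pairs, one pair per triplet class."""
    return sum(average_precision(l, s) for l, s in per_class) / len(per_class)
```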


Subject(s)
Neural Networks, Computer , Humans , Surgery, Computer-Assisted/methods , Algorithms , Clinical Decision-Making/methods
9.
Int J Comput Assist Radiol Surg ; 19(6): 1203-1211, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642295

ABSTRACT

PURPOSE: Specular reflections in endoscopic images not only disturb visual perception but also hamper computer vision algorithm performance. However, the intricate nature and variability of these reflections, coupled with a lack of relevant datasets, pose ongoing challenges for removal. METHODS: We present EndoSRR, a robust method for eliminating specular reflections in endoscopic images. EndoSRR comprises two stages: reflection detection and reflection region inpainting. In the reflection detection stage, we adapt and fine-tune the segment anything model (SAM) using a weakly labeled dataset, achieving an accurate reflection mask. For reflective region inpainting, we employ LaMa, a fast Fourier convolution-based model trained on a 4.5M-image dataset, enabling effective inpainting of arbitrarily shaped reflection regions. Lastly, we introduce an iterative optimization strategy for dual pre-trained models to refine the results of specular reflection removal, named DPMIO. RESULTS: Utilizing the SCARED-2019 dataset, our approach surpasses state-of-the-art methods in both qualitative and quantitative evaluations. Qualitatively, our method excels in accurately detecting reflective regions, yielding more natural and realistic inpainting results. Quantitatively, our method demonstrates superior performance in both segmentation evaluation metrics (IoU, E-measure, etc.) and image inpainting evaluation metrics (PSNR, SSIM, etc.). CONCLUSION: The experimental results underscore the significance of proficient endoscopic specular reflection removal for enhancing visual perception and downstream tasks. The methodology and results presented in this study are poised to catalyze advancements in specular reflection removal, thereby augmenting the accuracy and safety of minimally invasive surgery.
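For context, the classic intensity-based baseline that SAM-based detection improves upon simply flags bright, low-chroma pixels. A small numpy sketch (thresholds illustrative, not from the paper):

```python
import numpy as np

def specular_mask(rgb, brightness_thresh=230, saturation_thresh=30):
    """Classic thresholding baseline for specular-highlight detection
    (not EndoSRR's SAM-based detector): a pixel is flagged when it is
    very bright and nearly colorless."""
    rgb = rgb.astype(np.int32)
    brightness = rgb.max(axis=-1)
    chroma = rgb.max(axis=-1) - rgb.min(axis=-1)  # crude saturation proxy
    return (brightness >= brightness_thresh) & (chroma <= saturation_thresh)
```

The resulting boolean mask is exactly the kind of input an inpainting model such as LaMa consumes; the weakness of this baseline (bright wet tissue also triggers it) is what motivates a learned detector.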


Subject(s)
Endoscopy , Humans , Endoscopy/methods , Algorithms , Image Processing, Computer-Assisted/methods
10.
Comput Assist Surg (Abingdon) ; 29(1): 2329675, 2024 12.
Article in English | MEDLINE | ID: mdl-38504595

ABSTRACT

The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet the real-time computational demands. We propose a novel network SwinD-Net based on Skip connections, incorporating Depthwise separable convolutions and Swin Transformer Blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer Blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it in terms of the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving the inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
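The parameter savings from depthwise separable convolutions are easy to verify by counting weights; a quick sketch of the standard formulas (bias terms omitted):

```python
def conv_params(c_in, c_out, k=3):
    """Weights in a standard k x k convolution layer."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    """Depthwise k x k filter per input channel, then a 1 x 1 pointwise
    projection to c_out channels."""
    return k * k * c_in + c_in * c_out
```

For a 3x3 layer with 64 input and 128 output channels this gives 73,728 versus 8,768 weights, roughly an 8.4x reduction, which is the kind of saving that lets a SwinD-Net-style design stay at 0.52 M parameters.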


Subject(s)
Laparoscopy , Liver , Humans , Liver/diagnostic imaging , Liver/surgery , Software
11.
Article in English | MEDLINE | ID: mdl-39003438

ABSTRACT

PURPOSE: Differentiating pulmonary lymphoma from lung infections using CT images is challenging. Existing deep neural network-based lung CT classification models rely on 2D slices, lacking comprehensive information and requiring manual selection. 3D models that involve chunking compromise image information and struggle with parameter reduction, limiting performance. These limitations must be addressed to improve accuracy and practicality. METHODS: We propose a transformer sequential feature encoding structure to integrate multi-level information from complete CT images, inspired by the clinical practice of using a sequence of cross-sectional slices for diagnosis. We incorporate position encoding and cross-level long-range information fusion modules into the feature extraction CNN network for cross-sectional slices, ensuring high-precision feature extraction. RESULTS: We conducted comprehensive experiments on a dataset of 124 patients, with respective sizes of 64, 20 and 40 for training, validation and testing. The results of ablation experiments and comparative experiments demonstrated the effectiveness of our approach. Our method outperforms existing state-of-the-art methods in the 3D CT image classification problem of distinguishing between lung infections and pulmonary lymphoma, achieving an accuracy of 0.875, AUC of 0.953 and F1 score of 0.889. CONCLUSION: The experiments verified that our proposed position-enhanced transformer-based sequential feature encoding model is capable of effectively performing high-precision feature extraction and contextual feature fusion in the lungs. It enhances the ability of a standalone CNN network or transformer to extract features, thereby improving the classification performance. The source code is accessible at https://github.com/imchuyu/PTSFE .
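The position encoding over a sequence of cross-sectional slices can take the standard sinusoidal form; a numpy sketch of one plausible realization (the paper's exact encoding may differ):

```python
import numpy as np

def slice_position_encoding(num_slices, dim):
    """Standard sinusoidal position encoding, indexed here by slice position
    along the cranio-caudal axis; dim must be even."""
    pos = np.arange(num_slices)[:, None]              # (S, 1)
    i = np.arange(dim // 2)[None, :]                  # (1, D/2)
    angles = pos / (10000.0 ** (2 * i / dim))
    enc = np.zeros((num_slices, dim))
    enc[:, 0::2] = np.sin(angles)                     # even dims: sine
    enc[:, 1::2] = np.cos(angles)                     # odd dims: cosine
    return enc
```

Added to per-slice CNN features before the transformer, this tells the sequence model where in the lung each slice sits.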

12.
Int J Comput Assist Radiol Surg ; 18(1): 149-156, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35984606

ABSTRACT

PURPOSE: CycleGAN and its variants are widely used in medical image synthesis and can work with unpaired data. The most common approach is to use a Generative Adversarial Network (GAN) model to process 2D slices and then concatenate all of these slices into a 3D medical image. Nevertheless, such methods often introduce spatial inconsistencies between contiguous slices. We offer a new model based on CycleGAN to address this problem, which can achieve high-quality conversion from magnetic resonance (MR) to computed tomography (CT) images. METHODS: To achieve spatial consistency in 3D medical images while avoiding memory-heavy 3D convolutions, we reorganized each set of 3 adjacent slices into a 2.5D slice as the input image. Further, we propose a U-Net discriminator network to improve accuracy, which can perceive input objects both locally and globally. The model also uses Content-Aware ReAssembly of Features (CARAFE) upsampling, which has a large field of view and whose content-aware kernels replace a single fixed kernel for all samples. RESULTS: The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) for double U-Net CycleGAN-generated 3D image synthesis are 74.56±10.02, 27.12±0.71, and 0.84±0.03, respectively. Our method achieves better results than state-of-the-art methods. CONCLUSION: The experimental results indicate that our method can convert MR to CT images using unpaired data and achieves better results than state-of-the-art methods. Compared with 3D CycleGAN, it synthesizes better 3D CT images with less computation and memory.
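The 2.5D reorganization is simply a 3-channel stacking of each slice with its two neighbors; a numpy sketch (edge handling by replicating the first and last slices is an assumption):

```python
import numpy as np

def to_25d(volume):
    """Reorganize a 3D volume of shape (S, H, W) into 2.5D inputs of shape
    (S, 3, H, W): each sample stacks a slice with its two neighbors as
    channels, with edge slices replicated."""
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    return np.stack([padded[i:i + 3] for i in range(volume.shape[0])])
```

Each (3, H, W) sample then feeds a 2D generator, giving the network through-plane context without the memory cost of true 3D convolutions.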


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy
13.
Front Neurol ; 14: 1164283, 2023.
Article in English | MEDLINE | ID: mdl-37602256

ABSTRACT

Anatomical network analysis (AnNA) is a systems biological framework based on network theory that enables anatomical structural analysis by incorporating modularity to model structural complexity. The human brain and facial structures exhibit close structural and functional relationships, suggestive of a co-evolved anatomical network. The present study aimed to analyze the human head as a modular entity that comprises the central nervous system, including the brain, spinal cord, and craniofacial skeleton. An AnNA model was built using 39 anatomical nodes from the brain, spinal cord, and craniofacial skeleton. The linkages were identified using peripheral nerve supply and direct contact between structures. The Spinglass algorithm in the igraph software was applied to construct a network and identify the modules of the central nervous system-craniofacial skeleton anatomical network. Two modules were identified. These comprised an anterior module, which included the forebrain, anterior cranial base, and upper-middle face, and a posterior module, which included the midbrain, hindbrain, mandible, and posterior cranium. These findings may reflect the genetic and signaling networks that drive the mosaic central nervous system and craniofacial development and offer important systems biology perspectives for developmental disorders of craniofacial structures.

14.
Quant Imaging Med Surg ; 13(10): 6989-7001, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37869278

ABSTRACT

Background: Surgical action recognition is an essential technology in context-aware-based autonomous surgery, whereas the accuracy is limited by clinical dataset scale. Leveraging surgical videos from virtual reality (VR) simulations to research algorithms for the clinical domain application, also known as domain adaptation, can effectively reduce the cost of data acquisition and annotation, and protect patient privacy. Methods: We introduced a surgical domain adaptation method based on the contrastive language-image pretraining model (SDA-CLIP) to recognize cross-domain surgical action. Specifically, we utilized the Vision Transformer (ViT) and Transformer to extract video and text embeddings, respectively. Text embedding was developed as a bridge between VR and clinical domains. Inter- and intra-modality loss functions were employed to enhance the consistency of embeddings of the same class. Further, we evaluated our method on the MICCAI 2020 EndoVis Challenge SurgVisDom dataset. Results: Our SDA-CLIP achieved a weighted F1-score of 65.9% (+18.9%) on the hard domain adaptation task (trained only with VR data) and 84.4% (+4.4%) on the soft domain adaptation task (trained with VR and clinical-like data), which outperformed the first place team of the challenge by a significant margin. Conclusions: The proposed SDA-CLIP model can effectively extract video scene information and textual semantic information, which greatly improves the performance of cross-domain surgical action recognition. The code is available at https://github.com/Lycus99/SDA-CLIP.
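Using text embeddings as a bridge means each video clip is classified by cosine similarity to one text embedding per action class. A minimal numpy sketch of that CLIP-style matching step (embeddings here are toy values, not SDA-CLIP's learned ones):

```python
import numpy as np

def classify_by_text_anchor(video_emb, class_text_embs):
    """CLIP-style matching: cosine similarity of one video embedding against
    a (num_classes, dim) matrix of per-class text embeddings; returns the
    index of the best-matching class."""
    v = video_emb / np.linalg.norm(video_emb)
    t = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    return int(np.argmax(t @ v))
```

Because the text anchors are domain-agnostic, the same class embeddings can score videos from either the VR or the clinical domain, which is what makes text a bridge between them.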

15.
Curr Med Imaging ; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37936443

ABSTRACT

BACKGROUND: Currently, three-dimensional cephalometric measurements are mainly based on cone beam computed tomography (CBCT), which has the limitations of ionizing radiation, lack of soft tissue information, and lack of standardization in establishing the median sagittal plane. OBJECTIVES: This study investigated magnetic resonance imaging (MRI)-only 3D cephalometric measurement based on the integrated and modular characteristics of the human head. METHODS: A double U-Net CycleGAN was used to synthesize CT images from MRI. This method enabled the synthesis of a CT-like image from MRI, and measurements were made using 3D Slicer registration and fusion. RESULTS: A protocol for generating and optimizing MRI-based synthetic CT was described and found to meet the precision requirements of 3D head measurement, using MRI midline positioning methods reported in neuroscience to establish the median sagittal plane. An MRI-only reference frame and coordinate system were established, enabling an MRI-only cephalometric analysis protocol that combines the dual advantages of soft and hard tissue display. The protocol was devised using data from a single volunteer, and validation data from a larger sample remain to be collected. CONCLUSION: The reported method provides a new protocol for MRI-only cephalometric analysis of craniofacial growth and development, malformation occurrence, treatment planning, and outcomes.

16.
Int J Comput Assist Radiol Surg ; 18(8): 1521-1531, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36787037

ABSTRACT

PURPOSE: Laparoscopic liver resection is a minimally invasive surgery. Augmented reality can map preoperative anatomical information extracted from computed tomography onto the intraoperative liver surface reconstructed from stereo 3D laparoscopy. However, liver surface registration is particularly challenging because the intraoperative surface is only partially visible and undergoes large deformations due to pneumoperitoneum. This study proposes a deep learning-based robust point cloud registration network. METHODS: We propose a low-overlap liver surface registration algorithm combining local mixed features and global features of point clouds. A learned overlap mask filters the non-overlapping region of the point cloud, and a network predicts the overlap-region threshold to regulate the training process. RESULTS: We validated the algorithm on the DePoLL (Deformable Porcine Laparoscopic Liver) dataset. Compared with the baseline method and other state-of-the-art registration methods, our method achieves the lowest target registration error (TRE), 19.9 ± 2.7 mm. CONCLUSION: The proposed point cloud registration method uses the learned overlap mask to filter non-overlapping areas of the point cloud; the extracted overlap-region point cloud is then registered according to the mixed and global features. The method is robust and efficient for low-overlap liver surface registration.
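TRE, the headline metric above, is the residual distance at corresponding target points after applying the estimated transform. A numpy sketch assuming a 4x4 homogeneous rigid transform (the deformable case would warp points instead):

```python
import numpy as np

def target_registration_error(transform, moving_targets, fixed_targets):
    """Mean TRE in the units of the point clouds: apply a 4x4 transform to
    (N, 3) target points on the moving surface and measure the distance to
    their (N, 3) counterparts on the fixed surface."""
    homog = np.hstack([moving_targets, np.ones((len(moving_targets), 1))])
    warped = (transform @ homog.T).T[:, :3]
    return float(np.linalg.norm(warped - fixed_targets, axis=1).mean())
```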


Asunto(s)
Laparoscopía , Cirugía Asistida por Computador , Animales , Algoritmos , Laparoscopía/métodos , Hígado/diagnóstico por imagen , Hígado/cirugía , Cirugía Asistida por Computador/métodos , Porcinos , Tomografía Computarizada por Rayos X/métodos
17.
IEEE J Biomed Health Inform ; 27(10): 4983-4994, 2023 10.
Artículo en Inglés | MEDLINE | ID: mdl-37498758

RESUMEN

Surgical action triplet recognition plays a significant role in helping surgeons with scene analysis and decision-making in computer-assisted surgery. Compared with traditional context-aware tasks such as phase recognition, surgical action triplets, comprising the instrument, verb, and target, offer more comprehensive and detailed information. However, current triplet recognition methods fall short in distinguishing fine-grained subclasses and disregard temporal correlation in action triplets. In this article, we propose a multi-task fine-grained spatial-temporal framework for surgical action triplet recognition named MT-FiST. The proposed method utilizes a multi-label mutual channel loss consisting of diversity and discriminative components. This loss function decouples global task features into class-aligned features, enabling the learning of more local details from the surgical scene. The framework uses partially shared-parameter LSTM units to capture temporal correlations between adjacent frames. We conducted experiments on the CholecT50 dataset proposed in the MICCAI 2021 Surgical Action Triplet Recognition Challenge; our framework was evaluated on the private test set of the challenge to ensure fair comparison. Our model outperformed state-of-the-art models in instrument, verb, target, and action triplet recognition, with mAPs of 82.1% (+4.6%), 51.5% (+4.0%), 45.5% (+7.8%), and 35.8% (+3.1%), respectively. MT-FiST thus boosts the recognition of surgical action triplets in a context-aware surgical assistant system, addressing multi-task recognition through effective temporal aggregation and fine-grained features.
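The mAP figures above are means of per-class average precision over the multi-label outputs. A minimal sketch of that evaluation metric (the step-wise AP definition also used by common toolkits; the challenge's official `ivtmetrics` evaluator has additional details not reproduced here):

```python
import numpy as np

def average_precision(labels, scores):
    """AP for one class: area under the precision-recall curve,
    computed step-wise over the ranking induced by the scores."""
    order = np.argsort(-np.asarray(scores, float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    recall = tp / labels.sum()
    ap, prev_r = 0.0, 0.0
    for p_k, r_k in zip(precision, recall):
        ap += p_k * (r_k - prev_r)   # add precision where recall increases
        prev_r = r_k
    return ap

def mean_ap(label_matrix, score_matrix):
    """mAP over classes (columns), as reported for triplet recognition."""
    return float(np.mean([average_precision(label_matrix[:, c],
                                            score_matrix[:, c])
                          for c in range(label_matrix.shape[1])]))

# A perfect ranking for one class yields an AP of 1.0.
ap = average_precision([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])
```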


Asunto(s)
Cirugía Asistida por Computador , Humanos
18.
Int J Surg ; 109(9): 2598-2607, 2023 Sep 01.
Artículo en Inglés | MEDLINE | ID: mdl-37338535

RESUMEN

BACKGROUND: Augmented reality (AR)-assisted navigation systems are currently valuable techniques for hepatectomy; however, their application and efficacy in laparoscopic pancreatoduodenectomy have not been reported. This study evaluated the intraoperative and short-term advantages of laparoscopic pancreatoduodenectomy guided by an AR-assisted navigation system. METHODS: Eighty-two patients who underwent laparoscopic pancreatoduodenectomy from January 2018 to May 2022 were enrolled and divided into AR and non-AR groups. Baseline clinical features, operation time, intraoperative blood loss, blood transfusion rate, perioperative complications, and mortality were analyzed. RESULTS: AR-guided laparoscopic pancreatoduodenectomy was performed in the AR group (n=41), whereas routine laparoscopic pancreatoduodenectomy was carried out in the non-AR group (n=41). There was no significant difference in baseline data between the two groups (P>0.05). Although the operation time of the AR group was longer than that of the non-AR group (420.15±94.38 vs. 348.98±76.15, P<0.001), the AR group had less intraoperative blood loss (219.51±167.03 vs. 312.20±195.51, P=0.023), a lower blood transfusion rate (24.4 vs. 65.9%, P<0.001), lower rates of postoperative pancreatic fistula (12.2 vs. 46.3%, P=0.002) and bile leakage (0 vs. 14.6%, P=0.026), and a shorter postoperative hospital stay (11.29±2.78 vs. 20.04±11.22, P<0.001) than the non-AR group. CONCLUSION: AR-guided laparoscopic pancreatoduodenectomy offers significant advantages in identifying important vascular structures, minimizing intraoperative damage, and reducing postoperative complications, suggesting that it is a safe, feasible method with a promising future in the clinical setting.


Asunto(s)
Realidad Aumentada , Laparoscopía , Humanos , Pancreaticoduodenectomía/efectos adversos , Pancreaticoduodenectomía/métodos , Estudios Retrospectivos , Pérdida de Sangre Quirúrgica/prevención & control , Laparoscopía/efectos adversos , Laparoscopía/métodos , Complicaciones Posoperatorias/epidemiología , Complicaciones Posoperatorias/etiología , Complicaciones Posoperatorias/prevención & control , Resultado del Tratamiento
19.
Quant Imaging Med Surg ; 13(3): 1619-1630, 2023 Mar 01.
Artículo en Inglés | MEDLINE | ID: mdl-36915332

RESUMEN

Background: Methods combining transformers and convolutional neural networks (CNNs) have achieved impressive results in medical image segmentation. However, most recently proposed combined approaches simply treat the transformer as an auxiliary module that helps extract long-range information and encode global context into convolutional representations; how to optimally combine self-attention with convolution remains under-investigated. Methods: We designed a novel transformer block (MRFormer) that combines a multi-head self-attention layer and a residual depthwise convolutional block as the basic unit to deeply integrate both long-range and local spatial information. The MRFormer block was embedded between the encoder and decoder in the last two layers of a U-Net. This framework (UMRFormer-Net) was applied to three-dimensional (3D) pancreas segmentation, and its ability to capture the characteristic contextual information of the pancreas and surrounding tissues was investigated. Results: Experimental results show that UMRFormer-Net achieved pancreas segmentation accuracy comparable or superior to existing state-of-the-art 3D methods on both the Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) dataset and the public Medical Segmentation Decathlon dataset (self-divided). UMRFormer-Net statistically significantly outperformed existing transformer-related methods and state-of-the-art 3D methods (P<0.05, P<0.01, or P<0.001), with a higher Dice coefficient (85.54% and 77.36%, respectively) and a lower 95% Hausdorff distance (4.05 and 8.34 mm, respectively). Conclusions: UMRFormer-Net obtains better-matched, more accurate boundary and region information in pancreas segmentation, thus improving segmentation accuracy. The code is available at https://github.com/supersunshinefk/UMRFormer-Net.
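The two evaluation metrics quoted above, Dice coefficient and 95% Hausdorff distance, can be sketched in a few lines. This is a brute-force illustration on small arrays and point sets; production evaluations extract surface voxels and use spatial indices, and details of the paper's exact implementation are not given in the abstract.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets (brute-force pairwise distances, for illustration only)."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # and vice versa
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Two overlapping 2D toy masks: |A|=4, |B|=6, |A∩B|=4 -> Dice = 0.8.
a = np.zeros((4, 4), int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1
dsc = dice(a, b)
```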

20.
Med Phys ; 50(10): 6243-6258, 2023 Oct.
Artículo en Inglés | MEDLINE | ID: mdl-36975007

RESUMEN

BACKGROUND: Fusion of computed tomography (CT) and ultrasound (US) images can enhance lesion detection and improve the success rate of liver interventional radiology. Image-based fusion methods face the challenge of registration initialization due to the random scanning pose and limited field of view of US. Existing automatic methods that use vessel geometric information and intensity-based metrics are sensitive to parameters and have a low success rate, while learning-based methods require large numbers of registered datasets for training. PURPOSE: The aim of this study is to provide a fully automatic and robust 3D US-CT registration method, assisted by deep learning-based segmentation, that requires neither registered training data nor user-specified parameters; it can further be used to prepare training samples for the study of learning-based methods. METHODS: We propose a fully automatic CT-3D US registration method built on two improved registration metrics, using 3D U-Net-based multi-organ segmentation of US and CT to assist conventional registration. The rigid transform is searched in the space of all paired vessel bifurcation planes, and the best transform is selected by a segmentation overlap metric that is more closely related to segmentation precision than the Dice coefficient. In the nonrigid registration phase, we propose a hybrid context- and edge-based image similarity metric with a simple mask that removes most noisy US voxels to guide the B-spline transform registration. We evaluated our method on 42 paired CT-3D US datasets scanned with two different US devices at two hospitals, and compared it with existing methods using quantitative measures (target registration error (TRE) and the Jacobian determinant, with paired t-tests) as well as qualitative registration imaging results.
RESULTS: Our method achieves a fully automatic rigid registration TRE of 4.895 mm and a deformable registration TRE of 2.995 mm on average, outperforming state-of-the-art automatic linear methods and nonlinear registration metrics with paired t-test P values less than 0.05. The proposed overlap metric achieves better results than self-similarity description (SSD), edge matching (EM), and block matching (BM), with P values of 1.624E-10, 4.235E-9, and 0.002, respectively. The proposed hybrid edge- and context-based metric outperforms context-only, edge-only, and intensity-statistics-only metrics with P values of 0.023, 3.81E-5, and 1.38E-15, respectively. The 3D US segmentation achieved mean Dice similarity coefficients (DSC) of 0.799, 0.724, and 0.788 and precisions of 0.871, 0.769, and 0.862 for the gallbladder, vessel, and branch vessel, respectively. CONCLUSIONS: Deep learning-based US segmentation can achieve satisfactory results that assist robust conventional rigid registration. The Dice similarity coefficient-based metric and the hybrid context and edge image similarity metric contribute to robust and accurate registration.
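The TRE values reported here are the standard fiducial-based accuracy measure: the mean Euclidean distance between registered source landmarks and their ground-truth targets. A minimal sketch for the rigid case (the landmark sets and transform below are illustrative, not the study's data):

```python
import numpy as np

def target_registration_error(src_landmarks, dst_landmarks, R, t):
    """Mean Euclidean distance (e.g. in mm) between source landmarks
    mapped through the rigid transform (R, t) and their targets."""
    src = np.asarray(src_landmarks, float)
    dst = np.asarray(dst_landmarks, float)
    mapped = src @ R.T + t
    return float(np.linalg.norm(mapped - dst, axis=1).mean())

# Identity transform with targets offset by 3 mm along x -> TRE of 3.0.
src = np.array([[0.0, 0.0, 0.0],
                [10.0, 0.0, 0.0]])
dst = src + np.array([3.0, 0.0, 0.0])
tre = target_registration_error(src, dst, np.eye(3), np.zeros(3))
```

For a deformable registration the same measure applies, with the B-spline transform evaluated at each landmark instead of the rigid map.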


Asunto(s)
Imagenología Tridimensional , Hígado , Imagenología Tridimensional/métodos , Ultrasonografía/métodos , Hígado/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Procesamiento de Imagen Asistido por Computador/métodos