ABSTRACT
Accurate 4D reconstruction of critical organs contributes to visual guidance in X-ray image-guided interventional operations. Current methods estimate intraoperative dynamic meshes by refining a static initial organ mesh from the semantic information in single-frame X-ray images. However, these methods fall short of reconstructing an accurate and smooth organ sequence because the initial mesh and the X-ray images follow distinct respiratory patterns. To overcome this limitation, we propose a novel dual-stage complementary 4D organ reconstruction (DSC-Recon) model for recovering dynamic organ meshes by utilizing preoperative and intraoperative data with different respiratory patterns. DSC-Recon is structured as a dual-stage framework: 1) The first stage introduces a flexible interpolation network, applicable to multiple respiratory patterns, that can generate dynamic shape sequences between any pair of preoperative 3D meshes segmented from CT scans. 2) In the second stage, we present a deformation network that takes the generated dynamic shape sequence as the initial prior and exploits discriminative features (i.e., target organ areas and meaningful motion information) in the intraoperative X-ray images, predicting the deformed mesh through a feature-mapping pipeline integrated into the refinement of the initial shape. Experiments on simulated and clinical datasets demonstrate the superiority of our method over state-of-the-art methods in both quantitative and qualitative aspects.
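The first-stage interpolation between two segmented meshes can be illustrated, in its simplest form, as linear blending of corresponding vertices. This is only a minimal sketch of the idea (the paper's network learns a far richer, respiratory-pattern-aware interpolation), and all names are hypothetical:

```python
import numpy as np

def interpolate_shape_sequence(verts_a, verts_b, n_frames):
    """Linearly blend corresponding vertices of two meshes (same topology)
    to produce an n_frames-long shape sequence from phase A to phase B."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return np.stack([(1.0 - t) * verts_a + t * verts_b for t in ts])

# Example: two 4-vertex "meshes" at opposite respiratory phases
v_a = np.zeros((4, 3))
v_b = np.ones((4, 3))
seq = interpolate_shape_sequence(v_a, v_b, 5)
```

The middle frame sits exactly halfway between the two input phases.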
Subjects
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Humans; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Algorithms; Surgery, Computer-Assisted/methods; Liver/diagnostic imaging; Liver/surgery
ABSTRACT
OBJECTIVE: Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely with X-ray images. However, Tip positioning relies heavily on the physician's experience due to the limitations of X-ray imaging and respiratory interference, which leads to biliary damage, prolonged operation time, and increased X-ray radiation. METHODS: We construct an augmented reality (AR) navigation system for biliary interventional procedures. It includes system calibration, respiratory motion correction and fusion navigation. Firstly, the magnetic and 3D computed tomography (CT) coordinates are aligned through system calibration. Secondly, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment of the two coordinate systems caused by respiratory motion. Thirdly, the virtual biliary tract, liver and Tip from CT are overlaid at the corresponding positions on the patient for dynamic virtual-real fusion. RESULTS: Our system was evaluated on phantoms and patients, achieving average alignment errors of 0.75 ± 0.17 mm and 2.79 ± 0.46 mm, respectively. The navigation experiments conducted on phantoms achieved an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION: Our system can automatically register the Tip to the corresponding location in CT and dynamically overlay the 3D virtual model onto the patient to provide accurate and intuitive AR navigation. SIGNIFICANCE: This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. Our system enables dynamic visualization of virtual models on the patient, reducing the reliance on contrast agents and X-ray usage.
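The system-calibration step (aligning magnetic-tracker coordinates with CT coordinates) is, at its core, a rigid point-set registration. A minimal sketch using the classic SVD-based (Kabsch) solution, assuming paired fiducial points are available in both coordinate systems (names hypothetical; the paper's calibration pipeline is more involved):

```python
import numpy as np

def rigid_calibration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_calibration(pts, pts @ R_true.T + t_true)
```

With noise-free correspondences the transform is recovered exactly, up to numerical precision.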
Subjects
Augmented Reality; Surgery, Computer-Assisted; Humans; Imaging, Three-Dimensional; Liver; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Surgery, Computer-Assisted/methods
ABSTRACT
PURPOSE: This work proposes a robot-assisted augmented reality (AR) surgical navigation system for mandibular reconstruction. The system accurately superimposes the preoperative osteotomy plan of the mandible and fibula onto the real scene. It assists the surgeon in performing osteotomies quickly and safely under the guidance of the robotic arm. METHODS: The proposed system consists of two main modules: the AR guidance module for the mandible and fibula, and the robot navigation module. In the AR guidance module, we propose an AR calibration method based on the spatial registration of an image tracking marker to superimpose the virtual models of the mandible and fibula onto the real scene. In the robot navigation module, the pose of the robotic arm is first calibrated under the tracking of the optical tracking system. The robotic arm can then be positioned at the planned osteotomy site after registration of the computed tomography image to the patient position. The combined guidance of AR and the robotic arm can enhance the safety and precision of the surgery. RESULTS: The effectiveness of the proposed system was quantitatively assessed on cadavers. In the AR guidance module, osteotomies of the mandible and fibula achieved mean errors of 1.61 ± 0.62 and 1.08 ± 0.28 mm, respectively. The mean reconstruction error of the mandible was 1.36 ± 0.22 mm. In the AR-robot guidance module, the mean osteotomy errors of the mandible and fibula were 1.47 ± 0.46 and 0.98 ± 0.24 mm, respectively. The mean reconstruction error of the mandible was 1.20 ± 0.36 mm. CONCLUSIONS: The cadaveric experiments on 12 fibulas and six mandibles demonstrate the proposed system's effectiveness and potential clinical value in reconstructing mandibular defects with a free fibular flap.
Subjects
Augmented Reality; Free Tissue Flaps; Mandibular Reconstruction; Robotics; Surgery, Computer-Assisted; Humans; Mandibular Reconstruction/methods; Surgery, Computer-Assisted/methods; Free Tissue Flaps/surgery; Mandible/diagnostic imaging; Mandible/surgery
ABSTRACT
BACKGROUND: Monocular depth estimation plays a fundamental role in clinical endoscopy surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods. Existing approaches struggle to accurately perceive depth in such settings. METHOD: To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging image frequency domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS: To evaluate the performance of our proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached impressive values of 0.94, 0.82, and 0.84, respectively. These results demonstrate the effectiveness of our approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy achieved remarkable levels of 89.3% and 91.2% for the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS: Overall, the promising results obtained on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopy surgical procedures.
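The frequency-domain step for stabilizing coherent illumination can be caricatured as suppressing low-frequency Fourier components, which carry slowly varying lighting. This is a toy high-pass sketch written purely for illustration; the paper's component-space transformation is more sophisticated, and all names here are assumptions:

```python
import numpy as np

def suppress_low_freq(img, radius=2):
    """Zero out Fourier components within `radius` of the DC term,
    removing slowly varying illumination from the image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    keep = (ys - cy) ** 2 + (xs - cx) ** 2 > radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))

# A perfectly flat (pure-illumination) image is removed entirely
flat = np.full((32, 32), 7.0)
out = suppress_low_freq(flat)
```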
Subjects
Endoscopy, Gastrointestinal; Endoscopy
ABSTRACT
Microwave ablation (MWA) is a minimally invasive procedure for the treatment of liver tumors. Accumulating clinical evidence considers the minimal ablative margin (MAM) a significant predictor of local tumor progression (LTP). In clinical practice, MAM assessment is typically carried out through image registration of pre- and post-MWA images. However, this process faces two main challenges: non-homologous matching between tumor and coagulation with inconsistent image appearance, and tissue shrinkage caused by thermal dehydration. These challenges result in low precision when traditional registration methods are used for MAM assessment. In this paper, we present a local contractive nonrigid registration method using a biomechanical model (LC-BM) to address these challenges and precisely assess the MAM. The LC-BM contains two consecutive parts: (1) local contractive decomposition (LC-part), which reduces incorrect matches between the tumor and coagulation and quantifies the shrinkage in the external coagulation region, and (2) a biomechanical model constraint (BM-part), which compensates for the shrinkage in the internal coagulation region. After quantifying and compensating for tissue shrinkage, the warped tumor is overlaid on the coagulation, and the MAM is then assessed. We evaluated the method using prospectively collected data from 36 patients with 47 liver tumors, comparing LC-BM with 11 state-of-the-art methods. LTP was diagnosed on contrast-enhanced MR follow-up images, serving as the ground truth for tumor recurrence. LC-BM achieved the highest accuracy (97.9%) in predicting LTP, outperforming the other methods. Therefore, our proposed method holds significant potential to improve MAM assessment in MWA surgeries.
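Once the warped tumor is overlaid on the coagulation, the minimal ablative margin reduces to the smallest distance from any tumor-surface point to the coagulation surface. A brute-force sketch on point samples (hypothetical names, illustrative only; the paper's pipeline additionally handles shrinkage compensation first):

```python
import numpy as np

def minimal_ablative_margin(tumor_pts, coag_pts):
    """Smallest tumor-surface-to-coagulation-surface distance (the MAM)."""
    d = np.linalg.norm(tumor_pts[:, None, :] - coag_pts[None, :, :], axis=-1)
    return d.min(axis=1).min()

# Concentric spheres sampled along the same directions: margin = 15 - 10 = 5 mm
phi = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
dirs = np.stack([np.cos(phi), np.sin(phi), np.zeros_like(phi)], axis=1)
mam = minimal_ablative_margin(10.0 * dirs, 15.0 * dirs)
```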
ABSTRACT
Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlapping it with a preoperative CT volume to improve visualization of vital anatomical structures. Therefore, accurate and robust 3D/2D registration of CT volume and x-ray image is highly desired in clinical practice. However, previous registration methods were prone to initial misalignments and struggled with local minima, leading to issues of low accuracy and vulnerability. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and flexible action step size, establishing fast and globally optimal convergence of the registration task. (2) A domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement. (3) A weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluate the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, showing an accurate and fast workflow for CT/x-ray image rigid registration. Significance. The proposed CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.
Subjects
Algorithms; Imaging, Three-Dimensional; X-Rays; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Radiography; Image Processing, Computer-Assisted
ABSTRACT
Orthognathic surgery corrects jaw deformities to improve aesthetics and functions. Due to the complexity of the craniomaxillofacial (CMF) anatomy, orthognathic surgery requires precise surgical planning, which involves predicting postoperative changes in facial appearance. To this end, most conventional methods involve simulation with biomechanical modeling methods, which are labor intensive and computationally expensive. Here we introduce a learning-based framework to speed up the simulation of postoperative facial appearances. Specifically, we introduce a facial shape change prediction network (FSC-Net) to learn the nonlinear mapping from bony shape changes to facial shape changes. FSC-Net is a point transform network weakly-supervised by paired preoperative and postoperative data without point-wise correspondence. In FSC-Net, a distance-guided shape loss places more emphasis on the jaw region. A local point constraint loss restricts point displacements to preserve the topology and smoothness of the surface mesh after point transformation. Evaluation results indicate that FSC-Net achieves 15× speedup with accuracy comparable to a state-of-the-art (SOTA) finite-element modeling (FEM) method.
Subjects
Deep Learning; Orthognathic Surgery; Orthognathic Surgical Procedures; Orthognathic Surgical Procedures/methods; Computer Simulation; Face/diagnostic imaging; Face/surgery
ABSTRACT
Objective. Radiation therapy requires a precise target location. However, respiratory motion increases the uncertainty of the target location. Accurate and robust tracking is significant for improving operation accuracy. Approach. In this work, we propose a tracking framework, Multi3, comprising a multi-template Siamese network, multi-peak detection and multi-feature refinement, for target tracking in ultrasound sequences. Specifically, we use two templates to provide the location and deformation of the target for robust tracking. Multi-peak detection is applied to extend the set of potential target locations, and multi-feature refinement is designed to select an appropriate location as the tracking result by quality assessment. Main results. The proposed Multi3 is evaluated on a public dataset, i.e. the MICCAI 2015 challenge on liver ultrasound tracking (CLUST), and on our clinical dataset provided by the Chinese People's Liberation Army General Hospital. Experimental results show that Multi3 achieves accurate and robust tracking in ultrasound sequences (0.75 ± 0.62 mm and 0.51 ± 0.32 mm tracking errors on the two datasets). Significance. The proposed Multi3 is the most robust method on the CLUST 2D benchmark set, exhibiting potential in clinical practice.
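The multi-peak idea, keeping several high-response candidate locations rather than committing to the single best match, can be sketched with a plain normalized cross-correlation score map as a simplified stand-in for the Siamese network's response map (all names hypothetical):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template over every window of the image."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            out[y, x] = float((wc * t).sum() / denom) if denom > 0 else 0.0
    return out

def top_peaks(score, k=3):
    """Indices of the k highest responses, kept as candidate target locations."""
    flat = np.argsort(score.ravel())[::-1][:k]
    return [tuple(np.unravel_index(i, score.shape)) for i in flat]

# A distinctive patch embedded at (5, 7) is found as the strongest peak
image = np.zeros((20, 20))
patch = np.arange(1.0, 10.0).reshape(3, 3)
image[5:8, 7:10] = patch
score = ncc_map(image, patch)
peaks = top_peaks(score, k=3)
```

In the paper, a refinement step then chooses among such candidates by quality assessment instead of trusting the argmax alone.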
Subjects
Algorithms; Liver; Abdomen; Humans; Liver/diagnostic imaging; Motion; Ultrasonography/methods
ABSTRACT
Building an in vivo three-dimensional (3D) surface model from monocular endoscopy is an effective technology for improving the intuitiveness and precision of clinical laparoscopic surgery. This paper proposes a multi-loss rebalancing-based method for joint estimation of depth and motion from a monocular endoscopy image sequence. Feature descriptors are used to provide supervisory signals for the depth estimation network and the motion estimation network. The depth estimation network incorporates epipolar constraints between sequential frames into the neighborhood spatial information to enhance the accuracy of depth estimation. The reprojection information from depth estimation is used to reconstruct the camera motion by the motion estimation network with a multi-view relative pose fusion mechanism. Relative response loss, feature consistency loss, and epipolar consistency loss functions are defined to improve the robustness and accuracy of the proposed unsupervised learning-based method. Evaluations are implemented on public datasets. The error of motion estimation in three scenes decreased by 42.1%, 53.6%, and 50.2%, respectively, and the average error of 3D reconstruction is 6.456 ± 1.798 mm. This demonstrates the method's capability to generate reliable depth estimation and trajectory reconstruction results for endoscopy images, with meaningful applications in clinical practice.
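The epipolar consistency term penalizes correspondences that violate the two-view constraint x2ᵀ F x1 = 0. A minimal residual computation (illustrative only; the matrix below is the essential matrix for a pure x-translation, and all names are assumptions):

```python
import numpy as np

def epipolar_residuals(pts1, pts2, F):
    """|x2^T F x1| for each correspondence (points given as Nx2 pixel coords)."""
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.abs(np.einsum('ni,ij,nj->n', h2, F, h1))

# Pure translation along x: E = [t]_x with t = (1, 0, 0). Correspondences on
# the same image row satisfy the constraint; points on different rows do not.
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
good = epipolar_residuals(np.array([[3.0, 5.0]]), np.array([[7.0, 5.0]]), E)
bad = epipolar_residuals(np.array([[3.0, 5.0]]), np.array([[7.0, 9.0]]), E)
```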
ABSTRACT
Dental landmark localization is a fundamental step in analyzing dental models for the planning of orthodontic or orthognathic surgery. However, current clinical practice requires clinicians to manually digitize more than 60 landmarks on 3D dental models. Automatic landmark detection methods can release clinicians from the tedious labor of manual annotation and improve localization accuracy. Most existing landmark detection methods fail to capture local geometric contexts, causing large errors and misdetections. We propose an end-to-end learning framework to automatically localize 68 landmarks on high-resolution dental surfaces. Our network hierarchically extracts multi-scale local contextual features along two paths: a landmark localization path and a landmark area-of-interest segmentation path. Higher-level features are learned by combining local-to-global features from the two paths through feature fusion to predict the landmark heatmap and the landmark area segmentation map. An attention mechanism is then applied to the two maps to refine the landmark positions. We evaluated our framework on a real-patient dataset consisting of 77 high-resolution dental surfaces. Our approach achieves an average localization error of 0.42 mm, significantly outperforming related state-of-the-art methods.
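The heatmap representation used for landmark prediction encodes each landmark as a Gaussian blob whose argmax recovers the position. A minimal encode/decode sketch (hypothetical names; the paper additionally refines positions with an attention mechanism over the heatmap and segmentation maps):

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=2.0):
    """Unit-peak Gaussian heatmap centered on a landmark position."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
                  / (2.0 * sigma ** 2))

def decode_landmark(heatmap):
    """Recover the landmark position as the heatmap argmax."""
    return tuple(np.unravel_index(np.argmax(heatmap), heatmap.shape))

hm = gaussian_heatmap((64, 64), (10, 20))
pos = decode_landmark(hm)
```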
ABSTRACT
Facial appearance changes with the movements of bony segments in orthognathic surgery for patients with craniomaxillofacial (CMF) deformities. Conventional biomechanical methods for simulating such changes, such as finite element modeling (FEM), are labor intensive and computationally expensive, preventing them from being used in clinical settings. To overcome these limitations, we propose a deep learning framework to predict post-operative facial changes. Specifically, FC-Net, a facial appearance change simulation network, is developed to predict the point displacement vectors associated with a facial point cloud. FC-Net learns the point displacements of a pre-operative facial point cloud from the bony movement vectors between pre-operative and simulated post-operative bony models. FC-Net is a weakly-supervised point displacement network trained using paired data without strict point-to-point correspondence. To preserve the topology of the facial model during point transform, we employ a local-point-transform loss to constrain the local movements of points. Experimental results on real patient data reveal that the proposed framework can predict post-operative facial appearance changes remarkably faster than a state-of-the-art FEM method with comparable prediction accuracy.
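The local-point-transform constraint can be pictured as penalizing displacement differences between neighboring points, so the surface deforms smoothly and keeps its topology. A brute-force k-nearest-neighbor sketch (an illustrative assumption, not the paper's exact loss):

```python
import numpy as np

def local_point_transform_loss(points, displacements, k=3):
    """Mean squared difference between each point's displacement and
    those of its k nearest neighbors (encourages locally rigid motion)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-neighbors
    nbrs = np.argsort(d2, axis=1)[:, :k]
    diff = displacements[:, None, :] - displacements[nbrs]
    return float((diff ** 2).sum(axis=-1).mean())

rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 3))
uniform = np.tile([1.0, 0.0, 0.0], (20, 1))       # rigid translation
loss_uniform = local_point_transform_loss(pts, uniform)
loss_noisy = local_point_transform_loss(pts, rng.normal(size=(20, 3)))
```

A rigid translation of all points incurs zero penalty, while incoherent per-point motion is penalized.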
ABSTRACT
BACKGROUND: Most existing algorithms have focused on segmentation of several public liver CT datasets scanned under regular conditions (no pneumoperitoneum and horizontal supine position). This study primarily segmented datasets with unconventional liver shapes and intensities caused by contrast phases, irregular scanning conditions, and different scanning subjects (pigs, and patients with large pathological tumors), which together form the multiple heterogeneity of the datasets used in this study. METHODS: The multiple heterogeneous datasets used in this paper include: (1) one public contrast-enhanced CT dataset and one public non-contrast CT dataset; (2) a contrast-enhanced dataset with abnormal liver shapes (very long left liver lobes) and large liver tumors with abnormal presentations caused by microvascular invasion; (3) one artificial pneumoperitoneum dataset scanned under pneumoperitoneum in three positions (horizontal/left/right recumbent); (4) two porcine datasets, of Bama and domestic types, that contain pneumoperitoneum cases but exhibit large anatomical discrepancies from humans. The study aimed to investigate the segmentation performance of 3D U-Net in terms of: (1) generalization ability across multiple heterogeneous datasets via cross-testing experiments; (2) compatibility when hybrid-training on all datasets under different sampling and encoder-layer-sharing schemes. We further investigated the compatibility of the encoder levels by setting a separate level for each dataset (i.e., dataset-wise convolutions) while sharing the decoder. RESULTS: Models trained on different datasets showed different segmentation performance. The cross-testing accuracy between the LiTS and Zhujiang datasets was about 0.955 and 0.958, showing good generalization ability because both are contrast-enhanced clinical patient datasets scanned under regular conditions.
For the datasets scanned under pneumoperitoneum, the corresponding datasets scanned without pneumoperitoneum showed good generalization ability. A dataset-wise convolution module at the higher encoder levels can mitigate the dataset imbalance problem. These experimental results can help researchers devise solutions when segmenting such special datasets. CONCLUSIONS: (1) Models trained on regularly scanned datasets generalize well to irregularly scanned ones. (2) Hybrid training is beneficial, but the dataset imbalance problem always exists due to the multi-domain heterogeneity. The higher encoder levels encoded more domain-specific information than the lower levels and were thus less compatible across our datasets.
Subjects
Imaging, Three-Dimensional; Liver Neoplasms/diagnostic imaging; Liver/diagnostic imaging; Machine Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Animals; Contrast Media; Datasets as Topic; Humans; Pneumoperitoneum/diagnostic imaging; Swine
ABSTRACT
BACKGROUND: Clinically, the total and residual liver volume must be accurately calculated before major hepatectomy. However, liver volume may be influenced by pneumoperitoneum during surgery. Changes in liver volume also affect the accuracy of simulation and augmented reality navigation systems, which are commonly first validated in animal models. In this study, the morphologic changes of porcine livers in vivo under 13 mm Hg pneumoperitoneum pressure were investigated. MATERIALS AND METHODS: Twenty male pigs were scanned with contrast-enhanced computed tomography without pneumoperitoneum and with 13 mm Hg pneumoperitoneum pressure. The surface area and volume of the liver and the lumen diameters of the aorta, inferior vena cava, and portal vein were measured. RESULTS: There were statistically significant differences in the surface area and volume of the liver (P=0.000), transverse diameter of the portal vein (P=0.038), longitudinal diameter of the inferior vena cava (P=0.033), longitudinal diameter of the portal vein (P=0.036), and vascular cross-sectional areas of the inferior vena cava (P=0.028) and portal vein (P=0.038) before and after application of 13 mm Hg pneumoperitoneum pressure. CONCLUSIONS: This study indicates that creating pneumoperitoneum at 13 mm Hg pressure in pigs causes morphologic alterations of the liver, affecting its surface area and volume as well as vessel diameters.
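The pre/post pneumoperitoneum comparisons above are paired measurements on the same animals, for which the paired t statistic is the standard test. A minimal computation on made-up numbers (a synthetic illustration only, not the study's data):

```python
import numpy as np

def paired_t_statistic(pre, post):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = post - pre."""
    d = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Hypothetical liver volumes (mL) before/after insufflation in five animals
pre = [1200.0, 1150.0, 1300.0, 1250.0, 1180.0]
post = [1201.0, 1152.0, 1303.0, 1252.0, 1182.0]
t = paired_t_statistic(pre, post)
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain the P value.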
Subjects
Pneumoperitoneum; Abdomen; Animals; Liver/diagnostic imaging; Male; Portal Vein/diagnostic imaging; Swine; Vena Cava, Inferior/diagnostic imaging
ABSTRACT
PURPOSE: The purpose of this study was to reduce the dependence on surgeon experience during orthognathic surgical planning, which involves virtually simulating the corrective procedure for jaw deformities. METHODS: We introduce a geometric deep learning framework for generating reference facial bone shape models that provide objective guidance in surgical planning. First, we propose a surface deformation network to warp a patient's deformed bone toward a set of normal bones, generating a dictionary of patient-specific normal bony shapes. Subsequently, sparse representation learning is employed to estimate a reference shape model based on the dictionary. RESULTS: We evaluated our method on a clinical dataset containing 24 patients and compared it with a state-of-the-art method that relies on landmark-based sparse representation. Our method yields significantly higher accuracy than the competing method in estimating normal jaws and preserves the midface of the patient's facial bones as well as the conventional approach does. CONCLUSIONS: Experimental results indicate that our method generates accurate shape models that meet clinical standards.
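Estimating a reference shape from a dictionary of patient-specific normal shapes amounts to finding combination weights that best reproduce a target shape. A least-squares stand-in for the sparse representation step (illustrative only; true sparse coding adds a sparsity penalty on the weights):

```python
import numpy as np

def reference_from_dictionary(D, target):
    """Solve min_w ||D w - target||_2 and return the reconstructed shape.
    D: (3 * n_points, n_atoms) matrix of flattened dictionary shapes."""
    w, *_ = np.linalg.lstsq(D, target, rcond=None)
    return D @ w, w

# Synthetic check: a target built from known weights is recovered exactly
rng = np.random.default_rng(2)
D = rng.normal(size=(12, 3))                      # 4 points x 3 coords, 3 atoms
w_true = np.array([0.5, 0.2, 0.3])
recon, w = reference_from_dictionary(D, D @ w_true)
```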
Subjects
Jaw Abnormalities; Orthognathic Surgical Procedures; Humans; Imaging, Three-Dimensional; Jaw; Unsupervised Machine Learning
ABSTRACT
Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape to output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
Subjects
Deep Learning; Orthognathic Surgical Procedures; Bone and Bones; Humans
ABSTRACT
The molecular mechanism of Alzheimer-like cognitive impairment induced by manganese (Mn) exposure has not yet been fully clarified, and there are currently no effective interventions to treat neurodegenerative lesions related to manganism. Protein phosphatase 2A (PP2A) is a major tau phosphatase and was recently identified as a potential therapeutic target molecule for neurodegenerative diseases; its activity is directed by the methylation status of the catalytic C subunit. Methionine is an essential amino acid, and its downstream metabolite S-adenosylmethionine (SAM) participates in transmethylation pathways as a methyl donor. In this study, the neurotoxic mechanism of Mn and the protective effect of methionine were evaluated in Mn-exposed cell and rat models. We show that Mn-induced neurotoxicity is characterized by PP2Ac demethylation accompanied by abnormally decreased LCMT-1 and increased PME-1, which are associated with tau hyperphosphorylation and spatial learning and memory deficits, and that the poor availability of SAM in the hippocampus is likely to determine the loss of PP2Ac methylation. Importantly, maintenance of local SAM levels through continuous supplementation with exogenous methionine, or through specific inhibition of PP2Ac demethylation by ABL127 administration in vitro, can effectively prevent tau hyperphosphorylation to reduce cellular oxidative stress, apoptosis, damage to cell viability, and rat memory deficits in cell or animal Mn exposure models. In conclusion, our data suggest that SAM and PP2Ac methylation may be novel targets for the treatment of Mn poisoning and neurotoxic mechanism-related tauopathies.
Subjects
Manganese Poisoning/metabolism; Manganese/toxicity; Methionine/metabolism; Protein Phosphatase 2/metabolism; Tauopathies/chemically induced; Tauopathies/metabolism; Animals; Cell Line, Tumor; Cognitive Dysfunction/chemically induced; Cognitive Dysfunction/metabolism; Cognitive Dysfunction/pathology; Hippocampus/drug effects; Hippocampus/pathology; Male; Manganese Poisoning/pathology; Methylation/drug effects; Mice; Rats; Rats, Sprague-Dawley; Tauopathies/pathology
ABSTRACT
OBJECTIVE: Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD: In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using a V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; then, the preoperative 3D models are superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins and vessels. RESULTS: The proposed navigation system was tested on four laboratory ex vivo porcine livers and in five in vivo porcine experiments in the operating theatre to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) are 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE: Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
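The reported reprojection error (RPE) measures how far projected 3D model points land from their observed 2D image locations. A pinhole-camera sketch (the intrinsics below are hypothetical and purely illustrative):

```python
import numpy as np

def project(points3d, K, R, t):
    """Pinhole projection of Nx3 points with intrinsics K and pose (R, t)."""
    cam = points3d @ R.T + t
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_error(points3d, points2d, K, R, t):
    """Mean Euclidean distance between projected and observed 2D points."""
    return float(np.linalg.norm(project(points3d, K, R, t) - points2d,
                                axis=1).mean())

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
p3d = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
obs = project(p3d, K, R, t)                       # perfect observations
rpe = reprojection_error(p3d, obs, K, R, t)
```

A point on the optical axis projects to the principal point, and perfect observations give zero RPE.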
Subjects
Augmented Reality; Laparoscopy/methods; Liver/diagnostic imaging; Liver/surgery; Algorithms; Animals; Deep Learning; Depth Perception; Disease Models, Animal; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Laparoscopes; Neoplasms/diagnostic imaging; Reproducibility of Results; Software; Surgery, Computer-Assisted; Swine; Tomography, X-Ray Computed; Video Recording
ABSTRACT
Landmark localization is an important step in quantifying craniomaxillofacial (CMF) deformities and designing treatment plans for reconstructive surgery. However, due to the severity of deformities and defects (partially missing anatomy), it is difficult to automatically and accurately localize a large set of landmarks simultaneously. In this work, we propose two cascaded networks for digitizing 60 anatomical CMF landmarks in cone-beam computed tomography (CBCT) images. The first network is a U-Net that outputs heatmaps for landmark locations and landmark features extracted with a local attention mechanism. The second network is a graph convolution network that takes the features extracted by the first network as input and determines whether each landmark exists via binary classification. We evaluated our approach on 50 sets of CBCT scans of patients with CMF deformities and compared it with state-of-the-art methods. The results indicate that our approach can achieve an average detection error of 1.47 mm with a false positive rate of 19%, outperforming related methods.
ABSTRACT
In this paper, we introduce a method for estimating patient-specific reference bony shape models for planning of reconstructive surgery for patients with acquired craniomaxillofacial (CMF) trauma. We propose an automatic bony shape estimation framework using pre-traumatic portrait photographs and post-traumatic head computed tomography (CT) scans. A 3D facial surface is first reconstructed from the patient's pre-traumatic photographs. An initial estimation of the patient's normal bony shape is then obtained with the reconstructed facial surface via sparse representation using a dictionary of paired facial and bony surfaces of normal subjects. We further refine the bony shape model by deforming the initial bony shape model to the post-traumatic 3D CT bony model, regularized by a statistical shape model built from a database of normal subjects. Experimental results show that our method is capable of effectively recovering the patient's normal facial bony shape in regions with defects, allowing CMF surgical planning to be performed precisely for a wider range of defects caused by trauma.
ABSTRACT
OBJECTIVES: To develop and validate a radiomics-based nomogram for the preoperative prediction of posthepatectomy liver failure (PHLF) in patients with hepatocellular carcinoma (HCC). METHODS: One hundred twelve consecutive HCC patients who underwent hepatectomy were included in the study pool (training cohort: n = 80, validation cohort: n = 32), and another 13 patients were included in a pilot prospective analysis. A total of 713 radiomics features were extracted from portal-phase computed tomography (CT) images. A logistic regression was used to construct a radiomics score (Rad-score). Then a nomogram, including the Rad-score and other risk factors, was built with a multivariate logistic regression model. The discrimination, calibration and clinical utility of the nomogram were evaluated. RESULTS: The Rad-score could predict PHLF with an AUC of 0.822 (95% CI, 0.726-0.917) in the training cohort and of 0.762 (95% CI, 0.576-0.948) in the validation cohort; however, the approach could not completely outmatch the existing methods (CP [Child-Pugh], MELD [Model of End Stage Liver Disease], ALBI [albumin-bilirubin]). The individual predictive nomogram that included the Rad-score, MELD and performance status (PS) showed better discrimination with an AUC of 0.864 (95% CI, 0.786-0.942), which was higher than the AUCs of the conventional methods (nomogram vs CP, MELD, and ALBI at P < 0.001, P < 0.005, and P < 0.005, respectively). In the validation cohort, the nomogram discrimination was also superior to those of the other three methods (AUC: 0.896; 95% CI, 0.774-1.000). The calibration curves showed good agreement in both cohorts, and the decision curve analysis of the entire cohort revealed that the nomogram was clinically useful. A pilot prospective analysis showed that the radiomics nomogram could predict PHLF with an AUC of 0.833 (95% CI, 0.591-1.000). CONCLUSIONS: A nomogram based on the Rad-score, MELD, and PS can predict PHLF.
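A radiomics score of this kind is typically a logistic-regression combination of selected features, and its discrimination is evaluated by AUC. A minimal sketch of both pieces on toy numbers (the weights and data below are invented for illustration; the paper's model selects from 713 candidate features):

```python
import numpy as np

def rad_score(features, weights, intercept):
    """Logistic-regression radiomics score: sigmoid(w . x + b)."""
    z = features @ weights + intercept
    return 1.0 / (1.0 + np.exp(-z))

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Four toy patients: one positive case is ranked below a negative -> AUC 0.75
scores = np.array([0.9, 0.7, 0.6, 0.2])
labels = np.array([1, 0, 1, 0])
a = auc(scores, labels)
s = rad_score(np.array([[0.0, 0.0]]), np.array([0.5, -0.5]), 0.0)
```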