Results 1 - 20 of 25
1.
Pattern Recognit ; 152, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38645435

ABSTRACT

Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
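The structural similarity idea, pulling predictions of same-class voxel pairs together and pushing different-class pairs apart by a margin, resembles a contrastive pairwise loss. A minimal sketch follows; the random pair sampling, Euclidean distance, and margin value are my assumptions for illustration, not the published CSL/SSL formulation.

```python
import torch
import torch.nn.functional as F

def pairwise_structural_loss(probs, labels, n_pairs=1024, margin=1.0):
    """Illustrative pairwise loss: predictions of same-class voxel pairs are
    pulled together, different-class pairs pushed apart by `margin`.
    probs:  (N, C) softmax probabilities for N sampled voxels
    labels: (N,)   ground-truth class indices for the same voxels"""
    n = probs.shape[0]
    i = torch.randint(0, n, (n_pairs,))
    j = torch.randint(0, n, (n_pairs,))
    d = (probs[i] - probs[j]).norm(dim=1)        # distance between predictions
    same = (labels[i] == labels[j]).float()      # 1 if the pair shares a class
    loss = same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)
    return loss.mean()

# toy usage: 500 sampled voxels, 3 classes
probs = torch.softmax(torch.randn(500, 3), dim=1)
labels = torch.randint(0, 3, (500,))
print(pairwise_structural_loss(probs, labels))
```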

2.
Med Image Anal ; 93: 103094, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38306802

ABSTRACT

In orthognathic surgical planning for patients with jaw deformities, it is crucial to accurately simulate the changes in facial appearance that follow the bony movement. Compared with the traditional biomechanics-based methods like the finite-element method (FEM), which are both labor-intensive and computationally inefficient, deep learning-based methods offer an efficient and robust modeling alternative. However, current methods do not account for the physical relationship between facial soft tissue and bony structure, causing them to fall short in accuracy compared to FEM. In this work, we propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to predict facial changes by correlating facial soft tissue changes with bony movement through a point-to-point attentive correspondence matrix. To ensure efficient training, we also introduce a contrastive loss for self-supervised pre-training of the ACMT-Net with a k-Nearest Neighbors (k-NN) based clustering. Experimental results on patients with jaw deformities show that our proposed solution can achieve significantly improved computational efficiency over the state-of-the-art FEM-based method with comparable facial change prediction accuracy.
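The central idea, a point-to-point attentive correspondence matrix that transfers planned bony movement onto facial soft-tissue points, resembles standard cross-attention. The sketch below is only my reading of that mechanism; the feature dimensions and plain scaled dot-product attention are assumptions, not the published ACMT-Net architecture.

```python
import torch

def transfer_bony_movement(face_feat, bone_feat, bone_disp):
    """Cross-attention style correspondence: each facial point attends to bony
    points and aggregates their planned displacement vectors.
    face_feat: (Nf, D) features of facial soft-tissue points
    bone_feat: (Nb, D) features of bony points
    bone_disp: (Nb, 3) planned displacement of each bony point
    returns:   (Nf, 3) predicted displacement of each facial point"""
    d = face_feat.shape[1]
    attn = torch.softmax(face_feat @ bone_feat.T / d**0.5, dim=1)  # (Nf, Nb)
    return attn @ bone_disp

face_feat, bone_feat = torch.randn(2000, 64), torch.randn(1500, 64)
bone_disp = torch.randn(1500, 3)
print(transfer_bony_movement(face_feat, bone_feat, bone_disp).shape)  # (2000, 3)
```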


Subjects
Face, Movement, Humans, Face/diagnostic imaging, Biomechanical Phenomena, Computer Simulation
3.
Oper Neurosurg (Hagerstown) ; 26(1): 46-53, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37811925

ABSTRACT

BACKGROUND AND OBJECTIVE: Computer-aided surgical simulation (CASS) can be used to virtually plan ideal outcomes of craniosynostosis surgery. Our purpose was to create a workflow analyzing the accuracy of surgical outcomes relative to virtually planned fronto-orbital advancement (FOA). METHODS: Patients who underwent FOA using CASS between October 1, 2017, and February 28, 2022, at our center and had postoperative computed tomography within 6 months of surgery were included. Virtual 3-dimensional (3D) models were created and coregistered using each patient's preoperative and postoperative computed tomography data. Three points on each bony segment were used to define the object in 3D space. Each planned bony segment was manipulated to match the actual postoperative outcome. The change in position of the 3D object was measured in translational (X, Y, Z) and rotational (roll, pitch, yaw) aspects to represent differences between planned and actual postoperative positions. The difference in the translational position of several bony landmarks was also recorded. Wilcoxon signed-rank tests were performed to measure significance of these differences from the ideal value of 0, which would indicate no difference between preoperative plan and postoperative outcome. RESULTS: Data for 63 bony segments were analyzed from 8 patients who met the inclusion criteria. Median differences between planned and actual outcomes of the segment groups ranged from -0.3 to -1.3 mm in the X plane; 1.4 to 5.6 mm in the Y plane; 0.9 to 2.7 mm in the Z plane; -1.2° to -4.5° in pitch; -0.1° to 1.0° in roll; and -2.8° to 1.0° in yaw. No significant difference from 0 was found in 21 of 24 segment region/side combinations. Translational differences of bony landmarks ranged from -2.7 to 3.6 mm. CONCLUSION: A high degree of accuracy was observed relative to the CASS plan. Virtual analysis of surgical accuracy in FOA using CASS was feasible.
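The core computation, fitting a rigid transform between a segment's planned and actual landmark points, reading off translations and roll/pitch/yaw, and testing the differences against the ideal value of 0, can be hedged-sketched as below. The Kabsch fit and the "xyz" Euler convention are assumptions for illustration, not the authors' exact protocol, and the numbers are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon
from scipy.spatial.transform import Rotation

def segment_pose_difference(planned_pts, actual_pts):
    """Rigid transform (Kabsch) mapping 3 planned landmark points of a bony
    segment onto their actual postoperative positions; returns translation (mm)
    and roll/pitch/yaw (degrees)."""
    cp, ca = planned_pts.mean(0), actual_pts.mean(0)
    H = (planned_pts - cp).T @ (actual_pts - ca)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ca - R @ cp
    roll, pitch, yaw = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
    return t, (roll, pitch, yaw)

# toy example: X-translation differences of one segment group across patients,
# tested against the ideal value of 0 with a Wilcoxon signed-rank test
dx = np.array([-0.4, -1.1, 0.2, -0.6, -0.9, -0.3, -1.3, 0.1])
print(wilcoxon(dx))
```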


Subjects
Craniosynostoses, Computer-Assisted Surgery, Humans, Pilot Projects, Computer-Assisted Surgery/methods, Craniosynostoses/diagnostic imaging, Craniosynostoses/surgery, Treatment Outcome, Computers
4.
J Oral Maxillofac Surg ; 82(2): 181-190, 2024 02.
Article in English | MEDLINE | ID: mdl-37995761

ABSTRACT

BACKGROUND: Jaw deformity diagnosis requires objective tests. Current methods, like cephalometry, have limitations. However, recent studies have shown that machine learning can diagnose jaw deformities in two dimensions. Therefore, we hypothesized that a multilayer perceptron (MLP) could accurately diagnose jaw deformities in three dimensions (3D). PURPOSE: To examine this hypothesis by focusing on anomalous mandibular position. We aimed to: (1) create a machine learning model to diagnose mandibular retrognathism and prognathism; and (2) compare its performance with traditional cephalometric methods. STUDY DESIGN, SETTING, SAMPLE: An in-silico experiment on deidentified retrospective data. The study was conducted at the Houston Methodist Research Institute and Rensselaer Polytechnic Institute. Included were patient records with jaw deformities and preoperative 3D facial models. Patients with significant jaw asymmetry were excluded. PREDICTOR VARIABLES: The tests used to diagnose mandibular anteroposterior position were: (1) SNB angle; (2) facial angle; (3) mandibular unit length (MdUL); and (4) the MLP model. MAIN OUTCOME VARIABLE: The resultant diagnosis: normal, prognathic, or retrognathic. COVARIATES: None. ANALYSES: A senior surgeon labeled the patients' mandibles as prognathic, normal, or retrognathic, creating a gold standard. Scientists at Rensselaer Polytechnic Institute developed an MLP model to diagnose mandibular prognathism and retrognathism using the 3D coordinates of 50 landmarks. The performance of the MLP model was compared with three traditional cephalometric measurements: (1) SNB, (2) facial angle, and (3) MdUL. The primary metric used to assess performance was diagnostic accuracy. McNemar's exact test assessed the difference between each traditional cephalometric measurement and the MLP. Cohen's kappa measured inter-rater agreement between each method and the gold standard. RESULTS: The sample included 101 patients. The diagnostic accuracies of SNB, facial angle, MdUL, and MLP were 74.3%, 74.3%, 75.3%, and 85.2%, respectively. McNemar's test shows that our MLP performs significantly better than the SNB (P = .027), facial angle (P = .019), and MdUL (P = .031). The agreement between the traditional cephalometric measurements and the surgeon's diagnosis was fair. In contrast, the agreement between the MLP and the surgeon was moderate. CONCLUSION AND RELEVANCE: The performance of the MLP is significantly better than that of the traditional cephalometric measurements.
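A hedged sketch of this kind of evaluation pipeline: a small MLP on flattened 3D landmark coordinates, compared against a stand-in cephalometric rule using accuracy, Cohen's kappa, and an exact McNemar test. The network size, data, and split are placeholders, not the study's actual model or values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from scipy.stats import binomtest

rng = np.random.default_rng(0)
X = rng.normal(size=(101, 150))           # 50 landmarks x 3D coords per patient
y = rng.integers(0, 3, size=101)          # 0=retrognathic, 1=normal, 2=prognathic

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
mlp.fit(X[:80], y[:80])
pred_mlp = mlp.predict(X[80:])
pred_ceph = rng.integers(0, 3, size=21)   # stand-in for an SNB-based diagnosis
truth = y[80:]                            # stand-in for the surgeon's gold standard

print("accuracy:", accuracy_score(truth, pred_mlp))
print("kappa vs gold standard:", cohen_kappa_score(truth, pred_mlp))

# exact McNemar test on discordant pairs (MLP right / cephalometric wrong, and vice versa)
b = int(np.sum((pred_mlp == truth) & (pred_ceph != truth)))
c = int(np.sum((pred_mlp != truth) & (pred_ceph == truth)))
print("McNemar exact p:", binomtest(b, b + c, 0.5).pvalue if b + c else 1.0)
```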


Subjects
Jaw Abnormalities, Angle Class III Malocclusion, Prognathism, Retrognathia, Humans, Prognathism/diagnostic imaging, Retrognathia/diagnostic imaging, Retrospective Studies, Mandible/diagnostic imaging, Mandible/abnormalities, Angle Class III Malocclusion/surgery, Cephalometry/methods
5.
IEEE Trans Med Imaging ; 42(10): 2948-2960, 2023 10.
Article in English | MEDLINE | ID: mdl-37097793

ABSTRACT

Federated learning is an emerging paradigm allowing large-scale decentralized learning without sharing data across different data owners, which helps address the concern of data privacy in medical image analysis. However, the existing methods' requirement for label consistency across clients largely narrows their application scope. In practice, each clinical site may only annotate certain organs of interest, with partial or no overlap with other sites. Incorporating such partially labeled data into a unified federation is an unexplored problem with clinical significance and urgency. This work tackles the challenge by proposing a novel federated multi-encoding U-Net (Fed-MENU) method for multi-organ segmentation. In our method, a multi-encoding U-Net (MENU-Net) extracts organ-specific features through different encoding sub-networks. Each sub-network can be seen as an expert for a specific organ, trained on the corresponding client's data. Moreover, to encourage the organ-specific features extracted by different sub-networks to be informative and distinctive, we regularize the training of the MENU-Net with an auxiliary generic decoder (AGD). Extensive experiments on six public abdominal CT datasets show that our Fed-MENU method can effectively obtain a federated learning model from the partially labeled datasets, with performance superior to models trained by either localized or centralized learning. Source code is publicly available at https://github.com/DIAL-RPI/Fed-MENU.
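The multi-encoding idea, one organ-specific encoder per client plus an organ-specific head and an auxiliary generic decoder used as a training-time regularizer, can be hedged-sketched as a drastically simplified skeleton. Channel sizes, block structure, and head design are my assumptions, not the published MENU-Net; see the repository linked above for the real implementation.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.InstanceNorm3d(cout), nn.ReLU())

class MenuNetSketch(nn.Module):
    """Toy multi-encoding network: one encoder per organ/client, a fused
    organ-specific segmentation head, and an auxiliary generic decoder (AGD)."""
    def __init__(self, n_organs=3, ch=8):
        super().__init__()
        self.encoders = nn.ModuleList([conv_block(1, ch) for _ in range(n_organs)])
        self.decoder = nn.Conv3d(ch * n_organs, n_organs + 1, 1)  # organ-specific head
        self.agd = nn.Conv3d(ch * n_organs, 2, 1)                 # generic fg/bg head

    def forward(self, x):
        feats = torch.cat([enc(x) for enc in self.encoders], dim=1)
        return self.decoder(feats), self.agd(feats)

net = MenuNetSketch()
seg, generic = net(torch.randn(1, 1, 32, 32, 32))
print(seg.shape, generic.shape)
```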


Subjects
Clinical Relevance, Software, Humans
6.
IEEE Trans Med Imaging ; 42(2): 336-345, 2023 02.
Article in English | MEDLINE | ID: mdl-35657829

ABSTRACT

Orthognathic surgery corrects jaw deformities to improve aesthetics and functions. Due to the complexity of the craniomaxillofacial (CMF) anatomy, orthognathic surgery requires precise surgical planning, which involves predicting postoperative changes in facial appearance. To this end, most conventional methods involve simulation with biomechanical modeling methods, which are labor intensive and computationally expensive. Here we introduce a learning-based framework to speed up the simulation of postoperative facial appearances. Specifically, we introduce a facial shape change prediction network (FSC-Net) to learn the nonlinear mapping from bony shape changes to facial shape changes. FSC-Net is a point transform network weakly-supervised by paired preoperative and postoperative data without point-wise correspondence. In FSC-Net, a distance-guided shape loss places more emphasis on the jaw region. A local point constraint loss restricts point displacements to preserve the topology and smoothness of the surface mesh after point transformation. Evaluation results indicate that FSC-Net achieves 15× speedup with accuracy comparable to a state-of-the-art (SOTA) finite-element modeling (FEM) method.
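Because pre- and postoperative meshes lack point-wise correspondence, the distance-guided shape loss can be approximated by a Chamfer-style distance weighted toward the jaw, and the local point constraint by penalizing displacement differences among neighboring points. The sketch below is my own reading of those two terms, with arbitrary weights and neighborhood size, not the published FSC-Net losses.

```python
import torch

def distance_guided_chamfer(pred, target, jaw_center, sigma=30.0):
    """Chamfer-style loss with larger weights for points near the jaw region."""
    d = torch.cdist(pred, target)                               # (Np, Nt)
    w = torch.exp(-((pred - jaw_center).norm(dim=1) / sigma) ** 2)
    return (w * d.min(dim=1).values).mean() + d.min(dim=0).values.mean()

def local_point_constraint(pred, initial, k=8):
    """Penalize displacement differences between each point and its k nearest
    neighbors on the initial mesh to keep the surface smooth."""
    disp = pred - initial
    idx = torch.cdist(initial, initial).topk(k + 1, largest=False).indices[:, 1:]
    return (disp.unsqueeze(1) - disp[idx]).norm(dim=2).mean()

pred, init, tgt = torch.randn(3, 1000, 3).unbind(0)
print(distance_guided_chamfer(pred, tgt, jaw_center=torch.tensor([0., -50., 0.])),
      local_point_constraint(pred, init))
```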


Subjects
Deep Learning, Orthognathic Surgery, Orthognathic Surgical Procedures, Orthognathic Surgical Procedures/methods, Computer Simulation, Face/diagnostic imaging, Face/surgery
7.
IEEE Trans Med Imaging ; 41(11): 3445-3453, 2022 11.
Article in English | MEDLINE | ID: mdl-35759585

ABSTRACT

Domain adaptation techniques have been demonstrated to be effective in addressing label deficiency challenges in medical image segmentation. However, conventional domain adaptation-based approaches often concentrate on matching global marginal distributions between different domains in a class-agnostic fashion. In this paper, we present a dual-attention domain-adaptive segmentation network (DADASeg-Net) for cross-modality medical image segmentation. The key contribution of DADASeg-Net is a novel dual adversarial attention mechanism, which regularizes the domain adaptation module with two attention maps, from the spatial and class perspectives, respectively. Specifically, the spatial attention map guides the domain adaptation module to focus on regions that are challenging to align during adaptation. The class attention map encourages the domain adaptation module to capture class-specific instead of class-agnostic knowledge for distribution alignment. DADASeg-Net shows superior performance in two challenging medical image segmentation tasks.


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Computer-Assisted Image Processing/methods
8.
IEEE Trans Med Imaging ; 41(10): 2856-2866, 2022 10.
Article in English | MEDLINE | ID: mdl-35544487

ABSTRACT

Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network on a down-sampled 3D image to leverage global contextual information to predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.


Subjects
Spiral Cone-Beam Computed Tomography, Anatomic Landmarks, Cephalometry/methods, Cone-Beam Computed Tomography/methods, Humans, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Reproducibility of Results
9.
Int J Comput Assist Radiol Surg ; 17(5): 945-952, 2022 May.
Article in English | MEDLINE | ID: mdl-35362849

ABSTRACT

PURPOSE: Orthognathic surgery requires an accurate surgical plan of how bony segments are moved and how the face passively responds to the bony movement. Currently, finite element method (FEM) is the standard for predicting facial deformation. Deep learning models have recently been used to approximate FEM because of their faster simulation speed. However, current solutions are not compatible with detailed facial meshes and often do not explicitly provide the network with known boundary type information. Therefore, the purpose of this proof-of-concept study is to develop a biomechanics-informed deep neural network that accepts point cloud data and explicit boundary types as inputs to the network for fast prediction of soft-tissue deformation. METHODS: A deep learning network was developed based on the PointNet++ architecture. The network accepts the starting facial mesh, input displacement, and explicit boundary type information and predicts the final facial mesh deformation. RESULTS: We trained and tested our deep learning model on datasets created from FEM simulations of facial meshes. Our model achieved a mean error between 0.159 and 0.642 mm on five subjects. Including explicit boundary types had mixed results, improving performance in simulations with large deformations but decreasing performance in simulations with small deformations. These results suggest that including explicit boundary types may not be necessary to improve network performance. CONCLUSION: Our deep learning method can approximate FEM for facial change prediction in orthognathic surgical planning by accepting geometrically detailed meshes and explicit boundary types while significantly reducing simulation time.
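The key input-construction step, attaching an explicit boundary-type code and the prescribed bony displacement to every facial point before feeding the cloud to a PointNet++-style network, can be illustrated as follows. The three boundary categories and the feature layout are assumptions of this sketch; the paper's actual encoding may differ.

```python
import numpy as np

# Hypothetical boundary types: 0 = free soft tissue, 1 = fixed (attached to
# uncut bone), 2 = moving (attached to a repositioned bony segment).
def build_point_features(points, boundary_type, bony_displacement):
    """points: (N, 3) facial mesh vertices
    boundary_type: (N,) integer boundary code per vertex
    bony_displacement: (N, 3) prescribed displacement (zeros for free/fixed points)
    returns (N, 9): xyz, displacement, one-hot boundary type"""
    one_hot = np.eye(3)[boundary_type]
    return np.concatenate([points, bony_displacement, one_hot], axis=1)

pts = np.random.rand(5000, 3) * 100.0
btype = np.random.randint(0, 3, size=5000)
disp = np.where((btype == 2)[:, None], np.random.randn(5000, 3), 0.0)
features = build_point_features(pts, btype, disp)
print(features.shape)  # (5000, 9)
```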


Subjects
Deep Learning, Orthognathic Surgery, Orthognathic Surgical Procedures, Face/surgery, Finite Element Analysis, Humans, Neural Networks (Computer)
10.
J Oral Maxillofac Surg ; 80(4): 641-650, 2022 04.
Article in English | MEDLINE | ID: mdl-34942153

ABSTRACT

PURPOSE: A facial reference frame is a 3-dimensional Cartesian coordinate system that includes 3 perpendicular planes: midsagittal, axial, and coronal. The order in which one defines the planes matters. The purposes of this study were to determine: 1) which sequence (axial-midsagittal-coronal vs midsagittal-axial-coronal) produced more appropriate reference frames and 2) whether orbital or auricular dystopia influenced the outcomes. METHODS: This study is an ambispective cross-sectional study. Fifty-four subjects with facial asymmetry were included. The facial reference frames of each subject (outcome variable) were constructed using 2 methods (independent variable): axial plane first and midsagittal plane first. Two board-certified orthodontists together blindly evaluated the results using a 3-point categorical scale based on their careful inspection and expert intuition. The covariate for stratification was the presence of orbital or auricular dystopia. Finally, Wilcoxon signed rank tests were performed. RESULTS: The facial reference frames defined by the midsagittal plane first method were statistically significantly different from those defined by the axial plane first method (P = .001). Using the midsagittal plane first method, the reference frames were more appropriately defined in 22 (40.7%) subjects, equivalent in 26 (48.1%), and less appropriately defined in 6 (11.1%). After stratification by orbital or auricular dystopia, the results also showed that the reference frame computed using the midsagittal plane first method was statistically significantly more appropriate in both subject groups, regardless of the presence of orbital or auricular dystopia (27 with orbital or auricular dystopia and 27 without, both P < .05). CONCLUSIONS: The midsagittal plane first sequence improves the facial reference frames compared with the traditional axial plane first approach. However, regardless of the sequence used, clinicians need to judge the correctness of the reference frame before diagnosis or surgical planning.
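As a rough illustration of the "midsagittal plane first" sequence, the sketch below builds an orthonormal frame by taking the midsagittal normal as the left-right axis and deriving the other two axes from it. The choice of input directions and the orthogonalization order are my assumptions for illustration, not the authors' clinical protocol.

```python
import numpy as np

def midsagittal_first_frame(midsagittal_normal, rough_vertical):
    """Build a right-handed facial frame starting from the midsagittal plane.
    midsagittal_normal: approximate left-right direction (normal of the plane)
    rough_vertical:     approximate head-up direction (e.g., from two midline landmarks)
    returns a 3x3 matrix whose rows are the X (right), Y (up), Z (anterior) axes."""
    x = midsagittal_normal / np.linalg.norm(midsagittal_normal)
    # make the vertical axis orthogonal to the midsagittal normal
    y = rough_vertical - np.dot(rough_vertical, x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)              # anterior axis completes the frame
    return np.vstack([x, y, z])

frame = midsagittal_first_frame(np.array([0.98, 0.05, -0.02]),
                                np.array([0.10, 0.99, 0.05]))
print(np.round(frame @ frame.T, 6))   # identity: the axes are orthonormal
```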


Subjects
Anatomic Landmarks, Three-Dimensional Imaging, Computers, Cross-Sectional Studies, Facial Asymmetry, Humans, Three-Dimensional Imaging/methods
11.
Article in English | MEDLINE | ID: mdl-34966912

ABSTRACT

Facial appearance changes with the movements of bony segments in orthognathic surgery of patients with craniomaxillofacial (CMF) deformities. Conventional bio-mechanical methods, such as finite element modeling (FEM), for simulating such changes, are labor intensive and computationally expensive, preventing them from being used in clinical settings. To overcome these limitations, we propose a deep learning framework to predict post-operative facial changes. Specifically, FC-Net, a facial appearance change simulation network, is developed to predict the point displacement vectors associated with a facial point cloud. FC-Net learns the point displacements of a pre-operative facial point cloud from the bony movement vectors between pre-operative and simulated post-operative bony models. FC-Net is a weakly-supervised point displacement network trained using paired data with strict point-to-point correspondence. To preserve the topology of the facial model during point transform, we employ a local-point-transform loss to constrain the local movements of points. Experimental results on real patient data reveal that the proposed framework can predict post-operative facial appearance changes remarkably faster than a state-of-the-art FEM method with comparable prediction accuracy.

12.
Article in English | MEDLINE | ID: mdl-34927176

ABSTRACT

Virtual orthognathic surgical planning involves simulating surgical corrections of jaw deformities on 3D facial bony shape models. Due to the lack of necessary guidance, the planning procedure is highly experience-dependent and the planning results are often suboptimal. A reference facial bony shape model representing normal anatomies can provide objective guidance to improve planning accuracy. Therefore, we propose a self-supervised deep framework to automatically estimate reference facial bony shape models. Our framework is an end-to-end trainable network, consisting of a simulator and a corrector. In the training stage, the simulator maps jaw deformities of a patient bone to a normal bone to generate a simulated deformed bone. The corrector then restores the simulated deformed bone back to normal. In the inference stage, the trained corrector is applied to generate a patient-specific normal-looking reference bone from a real deformed bone. The proposed framework was evaluated using a clinical dataset and compared with a state-of-the-art method that is based on a supervised point-cloud network. Experimental results show that the estimated shape models given by our approach are clinically acceptable and significantly more accurate than those of the competing method.

13.
Article in English | MEDLINE | ID: mdl-34927177

ABSTRACT

Dental landmark localization is a fundamental step to analyzing dental models in the planning of orthodontic or orthognathic surgery. However, current clinical practices require clinicians to manually digitize more than 60 landmarks on 3D dental models. Automatic methods to detect landmarks can release clinicians from the tedious labor of manual annotation and improve localization accuracy. Most existing landmark detection methods fail to capture local geometric contexts, causing large errors and misdetections. We propose an end-to-end learning framework to automatically localize 68 landmarks on high-resolution dental surfaces. Our network hierarchically extracts multi-scale local contextual features along two paths: a landmark localization path and a landmark area-of-interest segmentation path. Higher-level features are learned by combining local-to-global features from the two paths through feature fusion to predict the landmark heatmap and the landmark area segmentation map. An attention mechanism is then applied to the two maps to refine the landmark position. We evaluated our framework on a real-patient dataset consisting of 77 high-resolution dental surfaces. Our approach achieves an average localization error of 0.42 mm, significantly outperforming related state-of-the-art methods.

14.
Med Phys ; 48(12): 7735-7746, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34309844

ABSTRACT

PURPOSE: The purpose of this study was to reduce experience dependence in orthognathic surgical planning, which involves virtually simulating the corrective procedure for jaw deformities. METHODS: We introduce a geometric deep learning framework for generating reference facial bone shape models for objective guidance in surgical planning. First, we propose a surface deformation network to warp a patient's deformed bone to a set of normal bones, generating a dictionary of patient-specific normal bony shapes. Subsequently, sparse representation learning is employed to estimate a reference shape model based on the dictionary. RESULTS: We evaluated our method on a clinical dataset containing 24 patients and compared it with a state-of-the-art method that relies on landmark-based sparse representation. Our method yields significantly higher accuracy than the competing method for estimating normal jaws and preserves the midface of patients' facial bones as well as the conventional approach does. CONCLUSIONS: Experimental results indicate that our method generates accurate shape models that meet clinical standards.
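The second stage, estimating a reference shape as a sparse combination of a patient-specific dictionary of warped normal bones, can be hedged-sketched with an L1-regularized fit. Using scikit-learn's Lasso and a flattened-vertex shape representation are assumptions of this sketch, not the paper's actual sparse representation learning procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_reference_shape(dictionary_shapes, deformed_shape, alpha=0.01):
    """dictionary_shapes: (K, 3N) warped normal bony shapes, flattened
    deformed_shape:     (3N,)   patient's deformed bony shape, flattened
    returns the sparse-combination reference shape and the K coefficients."""
    lasso = Lasso(alpha=alpha, positive=True, max_iter=10000)
    lasso.fit(dictionary_shapes.T, deformed_shape)   # columns = dictionary atoms
    coeffs = lasso.coef_
    return dictionary_shapes.T @ coeffs, coeffs

rng = np.random.default_rng(1)
D = rng.normal(size=(10, 3000))          # 10 warped normal shapes, 1000 vertices
patient = 0.6 * D[2] + 0.4 * D[7] + rng.normal(scale=0.01, size=3000)
ref, w = estimate_reference_shape(D, patient)
print(np.round(w, 2))                    # weights concentrate on atoms 2 and 7
```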


Subjects
Jaw Abnormalities, Orthognathic Surgical Procedures, Humans, Three-Dimensional Imaging, Jaw, Unsupervised Machine Learning
15.
IEEE Trans Med Imaging ; 40(12): 3867-3878, 2021 12.
Article in English | MEDLINE | ID: mdl-34310293

ABSTRACT

Automatic craniomaxillofacial (CMF) landmark localization from cone-beam computed tomography (CBCT) images is challenging, considering that 1) the number of landmarks in the images may change due to varying deformities and traumatic defects, and 2) the CBCT images used in clinical practice are typically large. In this paper, we propose a two-stage, coarse-to-fine deep learning method to tackle these challenges with both speed and accuracy in mind. Specifically, we first use a 3D faster R-CNN to roughly locate landmarks in down-sampled CBCT images that have varying numbers of landmarks. By converting the landmark point detection problem to a generic object detection problem, our 3D faster R-CNN is formulated to detect virtual, fixed-size objects in small boxes with centers indicating the approximate locations of the landmarks. Based on the rough landmark locations, we then crop 3D patches from the high-resolution images and send them to a multi-scale UNet for the regression of heatmaps, from which the refined landmark locations are finally derived. We evaluated the proposed approach by detecting up to 18 landmarks on a real clinical dataset of CMF CBCT images with various conditions. Experiments show that our approach achieves state-of-the-art accuracy of 0.89 ± 0.64mm in an average time of 26.2 seconds per volume.
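The refinement stage, regressing a heatmap per landmark inside a cropped high-resolution patch and reading off a coordinate, can be illustrated with a soft-argmax over the heatmap. Using a soft-argmax rather than the paper's exact post-processing is an assumption of this sketch.

```python
import torch

def heatmaps_to_coordinates(heatmaps, patch_origin, spacing=1.0):
    """heatmaps: (L, D, H, W) one predicted heatmap per landmark in a patch
    patch_origin: (3,) position of the patch's first voxel in image space (mm)
    returns (L, 3) landmark coordinates via a differentiable soft-argmax."""
    L, D, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(L, -1), dim=1).reshape(L, D, H, W)
    zs, ys, xs = torch.meshgrid(torch.arange(D), torch.arange(H),
                                torch.arange(W), indexing="ij")
    grid = torch.stack([zs, ys, xs], dim=-1).float()          # (D, H, W, 3)
    coords = (probs.unsqueeze(-1) * grid).sum(dim=(1, 2, 3))  # expected voxel index
    return coords * spacing + patch_origin

hm = torch.randn(18, 32, 32, 32)
print(heatmaps_to_coordinates(hm, patch_origin=torch.tensor([10., 20., 30.])).shape)
```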


Subjects
Cone-Beam Computed Tomography, Three-Dimensional Imaging
16.
Med Image Anal ; 72: 102095, 2021 08.
Article in English | MEDLINE | ID: mdl-34090256

ABSTRACT

Accurate prediction of facial soft-tissue changes following orthognathic surgery is crucial for improving surgical outcomes. We developed a novel incremental simulation approach using the finite element method (FEM) with a realistic lip sliding effect to improve prediction accuracy in the lip region. First, a lip-detailed mesh is generated based on accurately digitized lip surface points. Second, an improved facial soft-tissue change simulation method is developed by applying a lip sliding effect along with the mucosa sliding effect. Finally, the soft-tissue change initiated by the orthognathic surgery is simulated incrementally to facilitate a natural transition of the facial change and improve the effectiveness of the sliding effects. Our method was quantitatively validated using 35 retrospective clinical data sets by comparing it to the traditional FEM simulation method and the FEM simulation method with the mucosa sliding effect only. The surface deviation error of our method showed significant improvement in the upper and lower lips over the two prior methods. In addition, evaluation results using our lip-shape analysis, which reflects clinicians' qualitative evaluation, also showed that our method significantly improved lip prediction accuracy for the lower lip, and for the upper and lower lips as a whole, compared to the other two methods. In conclusion, prediction accuracy in the clinically critical region, i.e., the lips, significantly improved after applying incremental simulation with a realistic lip sliding effect compared with FEM simulation methods without the lip sliding effect.


Subjects
Lip, Orthognathic Surgery, Cephalometry, Humans, Lip/surgery, Mandible, Maxilla, Retrospective Studies
17.
Med Image Anal ; 71: 102060, 2021 07.
Article in English | MEDLINE | ID: mdl-33957558

ABSTRACT

The dearth of annotated data is a major hurdle in building reliable image segmentation models. Manual annotation of medical images is tedious, time-consuming, and significantly variable across imaging modalities. The annotation burden can be reduced by leveraging an annotation-rich source modality in learning a segmentation model for an annotation-poor target modality. In this paper, we introduce a diverse data augmentation generative adversarial network (DDA-GAN) to train a segmentation model for an unannotated target image domain by borrowing information from an annotated source image domain. This is achieved by generating diverse augmented data for the target domain by one-to-many source-to-target translation. The DDA-GAN uses unpaired images from the source and target domains and is an end-to-end convolutional neural network that (i) explicitly disentangles domain-invariant structural features related to segmentation from domain-specific appearance features, (ii) combines structural features from the source domain with appearance features randomly sampled from the target domain for data augmentation, and (iii) trains the segmentation model with the augmented data in the target domain and the annotations from the source domain. The effectiveness of our method is demonstrated both qualitatively and quantitatively in comparison with the state of the art for segmentation of craniomaxillofacial bony structures via MRI and cardiac substructures via CT.


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Humans, Magnetic Resonance Imaging
18.
IEEE J Biomed Health Inform ; 25(8): 2958-2966, 2021 08.
Article in English | MEDLINE | ID: mdl-33497345

ABSTRACT

Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape to output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
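The training-data simulation strategy, synthesizing deformed bones from a given normal bone, can be hedged-sketched as adding a few smooth, localized displacements to a normal bony point cloud. The Gaussian-bump deformation scheme and its parameters are purely illustrative assumptions, not the paper's simulation procedure.

```python
import numpy as np

def synthesize_deformed_bone(normal_points, n_centers=3, max_shift=8.0, sigma=25.0, seed=0):
    """Create a synthetic 'deformed' bone by adding a few smooth, localized
    displacements (Gaussian bumps) to a normal bony point cloud (units: mm)."""
    rng = np.random.default_rng(seed)
    deformed = normal_points.copy()
    for _ in range(n_centers):
        center = normal_points[rng.integers(len(normal_points))]
        shift = rng.uniform(-max_shift, max_shift, size=3)
        w = np.exp(-np.sum((normal_points - center) ** 2, axis=1) / (2 * sigma**2))
        deformed += w[:, None] * shift        # displacement fades away from the center
    return deformed

normal = np.random.rand(5000, 3) * 120.0
print(np.abs(synthesize_deformed_bone(normal) - normal).max())
```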


Subjects
Deep Learning, Orthognathic Surgical Procedures, Bone and Bones, Humans
19.
IEEE Trans Med Imaging ; 40(1): 274-285, 2021 01.
Article in English | MEDLINE | ID: mdl-32956048

ABSTRACT

An increasing number of studies are leveraging unsupervised cross-modality synthesis to mitigate the limited label problem in training medical image segmentation models. They typically transfer ground truth annotations from a label-rich imaging modality to a label-lacking imaging modality, under an assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle-consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across different modalities to form a shared latent space, where 1) the input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in another domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.


Subjects
Computer-Assisted Image Processing, Skull, Heart/diagnostic imaging
20.
IEEE Trans Biomed Eng ; 68(2): 362-373, 2021 02.
Article in English | MEDLINE | ID: mdl-32340932

ABSTRACT

OBJECTIVE: To estimate a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects due to facial trauma. METHODS: We proposed an automatic facial bone shape estimation framework using pre-traumatic conventional portrait photos and post-traumatic head computed tomography (CT) scans via 3D face reconstruction and a deformable shape model. Specifically, a three-dimensional (3D) face was first reconstructed from the patient's pre-traumatic portrait photos. Second, a correlation model between the skin and bone surfaces was constructed using a sparse representation based on the CT images of training normal subjects. Third, by feeding the reconstructed 3D face into the correlation model, an initial reference shape model was generated. In addition, we refined the initial estimate by applying non-rigid surface matching between the initially estimated shape and the patient's post-traumatic bone based on the adaptive-focus deformable shape model (AFDSM). Furthermore, a statistical shape model, built from the training normal subjects, was utilized to constrain the deformation process and avoid overfitting. RESULTS AND CONCLUSION: The proposed method was evaluated using both synthetic and real patient data. Experimental results show that the patient's abnormal facial bony structure can be recovered using our method, and the estimated reference shape model is considered clinically acceptable by an experienced CMF surgeon. SIGNIFICANCE: The proposed method is well suited to complex CMF defects and can support CMF reconstructive surgical planning.
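The correlation-model step, representing the reconstructed face sparsely over training skin surfaces and reusing the same coefficients to combine the paired bone surfaces, can be hedged-sketched as below. The joint skin-bone coding scheme and the use of Lasso are illustrative assumptions, not the published sparse-representation formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def predict_bone_from_face(train_skins, train_bones, patient_skin, alpha=0.05):
    """train_skins: (K, Ms) flattened skin surfaces of K normal subjects
    train_bones: (K, Mb) corresponding flattened bone surfaces
    patient_skin: (Ms,)  reconstructed 3D face of the patient
    The sparse code fitted on the skin dictionary is reused to combine the
    paired bone surfaces into an initial reference bone estimate."""
    lasso = Lasso(alpha=alpha, positive=True, max_iter=10000)
    lasso.fit(train_skins.T, patient_skin)
    return train_bones.T @ lasso.coef_

rng = np.random.default_rng(2)
skins, bones = rng.normal(size=(20, 900)), rng.normal(size=(20, 1200))
patient_face = 0.5 * skins[3] + 0.5 * skins[11]
print(predict_bone_from_face(skins, bones, patient_face).shape)  # (1200,)
```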


Subjects
Computer-Assisted Image Processing, Statistical Models, Face/diagnostic imaging, Face/surgery, Humans, Three-Dimensional Imaging, X-Ray Computed Tomography