Results 1 - 20 of 66
1.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36572655

ABSTRACT

The time since deposition (TSD) of a bloodstain, i.e., the time of bloodstain formation, is an essential piece of biological evidence in crime scene investigation. The practical usage of some existing microscopic methods (e.g., spectroscopy or RNA analysis technology) is limited, as their performance strongly relies on high-end instrumentation and/or rigorous laboratory conditions. This paper presents a practically applicable deep learning-based method (i.e., BloodNet) for efficient, accurate, and low-cost TSD inference from a macroscopic view, i.e., by using easily accessible bloodstain photos. To this end, we established a benchmark database containing around 50,000 photos of bloodstains with varying TSDs. Capitalizing on such a large-scale database, BloodNet adopted attention mechanisms to learn, from relatively high-resolution input images, localized fine-grained feature representations that were highly discriminative between different TSD periods. Also, visual analysis of the learned deep networks based on the Smooth Grad-CAM tool demonstrated that BloodNet can stably capture the unique local patterns of bloodstains with specific TSDs, suggesting the efficacy of the utilized attention mechanism in learning fine-grained representations for TSD inference. As a paired study for BloodNet, we further conducted a microscopic analysis using Raman spectroscopic data and a machine learning method based on Bayesian optimization. Although the experimental results show that this new microscopic-level approach outperformed the state-of-the-art by a large margin, its inference accuracy is still significantly lower than that of BloodNet, which further justifies the efficacy of deep learning techniques in the challenging task of bloodstain TSD inference. Our code is publicly accessible via https://github.com/shenxiaochenn/BloodNet. Our datasets and pre-trained models can be freely accessed via https://figshare.com/articles/dataset/21291825.
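A minimal PyTorch sketch of the attention-pooling idea described above. This is not the released BloodNet (see the GitHub link for the actual code); the ResNet-18 backbone, the number of TSD periods, and the input size are illustrative assumptions.

```python
# Minimal sketch (not the released BloodNet): a CNN backbone with a simple
# spatial-attention pooling head for classifying bloodstain photos into
# discrete TSD periods. Class count and input size are illustrative.
import torch
import torch.nn as nn
import torchvision.models as models

class AttentionTSDClassifier(nn.Module):
    def __init__(self, num_periods: int = 9):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # B x 512 x H' x W'
        self.attn = nn.Conv2d(512, 1, kernel_size=1)                    # spatial attention map
        self.fc = nn.Linear(512, num_periods)

    def forward(self, x):
        f = self.features(x)                                 # B x 512 x H' x W'
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)   # B x 1 x H'W'
        pooled = (f.flatten(2) * w).sum(dim=-1)              # attention-weighted pooling -> B x 512
        return self.fc(pooled)

model = AttentionTSDClassifier()
logits = model(torch.randn(2, 3, 224, 224))                  # two dummy RGB photos
print(logits.shape)                                          # torch.Size([2, 9])
```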


Subjects
Blood Stains, Bayes Theorem, Machine Learning
2.
BMC Surg; 23(1): 254, 2023 Aug 27.
Article in English | MEDLINE | ID: mdl-37635206

ABSTRACT

BACKGROUND: To investigate the relationship between tongue fat content and the severity of obstructive sleep apnea (OSA), and its effect on the efficacy of uvulopalatopharyngoplasty (UPPP), in a Chinese population. METHODS: Fifty-two participants included in this study were diagnosed with OSA by polysomnography (PSG) and then divided into a moderate group and a severe group according to the apnea-hypopnea index (AHI). For all participants, a series of data was also collected on the morning after they completed PSG, including age, BMI, height, weight, neck circumference, abdominal circumference, magnetic resonance imaging (MRI) of the upper airway, and the Epworth Sleepiness Scale (ESS) score. The relationship between tongue fat content and OSA severity, as well as the association between preoperative tongue fat content and surgical efficacy, was analyzed. Participants underwent UPPP, were followed up 3 months after surgery, and were divided into two groups according to surgical efficacy. RESULTS: There were 7 patients in the moderate OSA group and 45 patients in the severe OSA group. Tongue volume was significantly larger in the severe OSA group than in the moderate OSA group. There was no difference in tongue fat volume or tongue fat rate between the two groups. There was no association among tongue fat content, AHI, obstructive apnea-hypopnea index, obstructive apnea index, and Epworth Sleepiness Scale score (all P > 0.05), but tongue fat content was related to the lowest oxygen saturation (r = -0.335, P < 0.05). There was no significant difference in preoperative tongue fat content between the two surgical efficacy groups. CONCLUSIONS: This study did not show an association between tongue fat content and the severity of OSA in the Chinese group, but it suggested a negative correlation between tongue fat content and the lowest oxygen saturation (LSaO2). Tongue fat content did not influence the surgical efficacy of UPPP in Chinese OSA patients. TRIAL REGISTRATION: This study did not report on a clinical trial; it was retrospectively registered.
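For illustration only, the correlation analysis reported above (tongue fat content vs. lowest oxygen saturation) could be reproduced with SciPy on one's own measurements; the numbers below are synthetic placeholders, not the study data.

```python
# Tiny illustration of a Pearson correlation like the one reported above,
# using synthetic values rather than the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tongue_fat = rng.normal(25, 5, 52)                      # synthetic fat content (%)
lsao2 = 85 - 0.4 * tongue_fat + rng.normal(0, 4, 52)    # synthetic lowest SaO2 (%)
r, p = stats.pearsonr(tongue_fat, lsao2)
print(f"r = {r:.3f}, p = {p:.3g}")
```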


Assuntos
Adiposidade , População do Leste Asiático , Procedimentos Cirúrgicos Otorrinolaringológicos , Apneia Obstrutiva do Sono , Língua , Humanos , Povo Asiático , Polissonografia , Apneia Obstrutiva do Sono/diagnóstico , Apneia Obstrutiva do Sono/cirurgia , Sonolência , Língua/anatomia & histologia , Língua/cirurgia
3.
Bioinformatics; 37(19): 3106-3114, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34237137

ABSTRACT

MOTIVATION: Predicting early in treatment whether a tumor is likely to respond to treatment is one of the most difficult yet important tasks in providing personalized cancer care. Most oropharyngeal squamous cell carcinoma (OPSCC) patients receive standard cancer therapy. However, the treatment outcomes vary significantly and are difficult to predict. Multiple studies indicate that microRNAs (miRNAs) are promising cancer biomarkers for the prognosis of oropharyngeal cancer. The reliable and efficient use of miRNAs for patient stratification and treatment outcome prognosis is still a very challenging task, mainly due to the relatively high dimensionality of miRNAs compared to the small number of observations; the redundancy, irrelevance, and uncertainty in the large number of miRNAs; and the imbalanced patient samples. RESULTS: In this study, a new machine learning-based prognosis model was proposed to stratify subsets of OPSCC patients with low and high risks for treatment failure. The model cascades a two-stage prognostic biomarker selection method and an evidential K-nearest neighbors classifier to address these challenges and improve the accuracy of patient stratification. The model was evaluated on miRNA expression profiles of 150 oropharyngeal tumors, using overall survival and disease-specific survival as the end points of disease treatment outcomes, respectively. The proposed method showed superior performance compared to other advanced machine-learning methods in terms of common performance quantification metrics. The proposed prognosis model can be employed as a supporting tool to identify patients who are likely to fail standard therapy and potentially benefit from alternative targeted treatments. Availability and implementation: Code is available at https://github.com/shenghh2015/mRMR-BFT-outcome-prediction.
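A simplified stand-in for the two-stage pipeline described above (the authors' actual code is at the GitHub link): a mutual-information filter approximates the prognostic biomarker selection stage, and a plain k-nearest-neighbors classifier stands in for the evidential K-nearest neighbors model. The data shapes and ROC-AUC scoring are illustrative assumptions.

```python
# Simplified stand-in for the two-stage prognosis pipeline (not the released code):
# stage 1 filters candidate biomarkers by mutual information, stage 2 classifies
# with a plain k-NN in place of the evidential K-nearest-neighbors model.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 800))          # 150 tumors x 800 miRNA features (synthetic)
y = (rng.random(150) < 0.3).astype(int)  # imbalanced treatment-failure labels (synthetic)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),   # stage 1: keep 20 candidate biomarkers
    KNeighborsClassifier(n_neighbors=5),      # stage 2: nearest-neighbor prognosis
)
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```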

4.
Proc Natl Acad Sci U S A; 116(32): 15855-15860, 2019 Aug 6.
Article in English | MEDLINE | ID: mdl-31332010

ABSTRACT

During the first 2 postnatal years, cortical thickness of the human brain develops dynamically and spatially heterogeneously and likely peaks between 1 and 2 years of age. This striking development renders this period critical for later cognitive outcomes and vulnerable to early neurodevelopmental disorders. However, due to the difficulties in longitudinal infant brain MRI acquisition and processing, our knowledge of the dynamic changes, peak age, and spatial heterogeneities of cortical thickness during infancy remains limited. To fill this knowledge gap, in this study, we discover the developmental regionalization of cortical thickness, i.e., developmentally distinct regions, each composed of a set of codeveloping cortical vertices, for better understanding of the spatiotemporal heterogeneities of cortical thickness development. We leverage an infant-dedicated computational pipeline, an advanced multivariate analysis method (i.e., nonnegative matrix factorization), and a densely sampled longitudinal dataset with 210 serial MRI scans from 43 healthy infants, each scheduled to have 7 longitudinal scans at around 1, 3, 6, 9, 12, 18, and 24 months of age. Our results suggest that, during the first 2 years, whole-brain average cortical thickness increases rapidly, reaches a plateau at about 14 months of age, and then decreases at a slow pace thereafter. More importantly, each discovered region is structurally and functionally meaningful and exhibits a distinctive developmental pattern, with several regions peaking at varied ages while others keep increasing throughout the first 2 postnatal years. Our findings provide valuable references and insights for early brain development.
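A minimal sketch of the regionalization idea with scikit-learn's NMF: a nonnegative vertices-by-scans cortical-thickness matrix is factored into spatial components (candidate co-developing regions) and per-scan weights. The vertex count and component number are placeholders, not the study's pipeline.

```python
# Sketch of NMF-based regionalization: factor a nonnegative thickness matrix
# into spatial components and per-scan weights; assign each vertex to its
# dominant component. Matrix sizes are synthetic placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
thickness = rng.uniform(1.0, 4.5, size=(10242, 210))   # vertices x longitudinal scans

model = NMF(n_components=12, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(thickness)   # 10242 x 12: spatial loading of each component
H = model.components_                # 12 x 210: component weight per scan (age trajectory)

region_label = W.argmax(axis=1)      # assign each vertex to its dominant component
print(region_label.shape, H.shape)
```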


Subjects
Cerebral Cortex/anatomy & histology, Cerebral Cortex/growth & development, Female, Humans, Infant, Magnetic Resonance Imaging, Male
5.
J Magn Reson Imaging; 54(4): 1326-1336, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33998738

ABSTRACT

BACKGROUND: Perivascular spaces (PVSs) are an important component of the brain glymphatic system. While visual rating has been widely used to assess PVSs, computational measures may have higher sensitivity for capturing PVS characteristics under disease conditions. PURPOSE: To compute quantitative and morphological PVS features and to assess their associations with vascular risk factors and cerebral small vessel disease (CSVD). STUDY TYPE: Prospective. POPULATION: One hundred sixty-one middle-aged/later middle-aged subjects (age = 60.4 ± 7.3 years). SEQUENCE: 3D T1-weighted, T2-weighted, and T2-FLAIR sequences, and a susceptibility-weighted multiecho gradient-echo sequence on a 3 T scanner. ASSESSMENT: Automated PVS segmentation was performed on sub-millimeter T2-weighted images. Quantitative and morphological PVS features were calculated in white matter (WM) and basal ganglia (BG) regions, including volume, count, size, length (Lmaj), width (Lmin), and linearity. Visual PVS scores were also acquired for comparison. STATISTICAL TESTS: Simple and multiple linear regression analyses were used to explore the associations among variables. RESULTS: WM-PVS visual score and count were associated with hypertension (β = 0.161, P < 0.05; β = 0.193, P < 0.05), as were BG-PVS rating score, volume, count, and Lmin (β = 0.197, P < 0.05; β = 0.170, P < 0.05; β = 0.200, P < 0.05; β = 0.172, P < 0.05). WM-PVS size was associated with diabetes (β = 0.165, P < 0.05). WM-PVS and BG-PVS measures were associated with CSVD markers, especially white matter hyperintensities (WMHs) (P < 0.05). Multiple regression analysis showed that WM/BG-PVS quantitative measures were widely associated with vascular risk factors and CSVD markers (P < 0.05). Morphological measures were associated with WMH severity in the WM region and with lacunes and microbleeds (P < 0.05) in the BG region. DATA CONCLUSION: These novel PVS measures may capture mild PVS alterations driven by different pathologies. EVIDENCE LEVEL: 2. TECHNICAL EFFICACY: Stage 2.
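An illustrative multiple linear regression in the spirit of the analysis above: a PVS measure is regressed on a vascular risk factor while adjusting for age and sex, using statsmodels. Variable names and values are synthetic placeholders, not the study dataset.

```python
# Illustrative adjusted regression of a PVS measure on a vascular risk factor
# (synthetic data, not the study cohort).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "wm_pvs_count": rng.poisson(30, 161),
    "hypertension": rng.integers(0, 2, 161),
    "age": rng.normal(60.4, 7.3, 161),
    "sex": rng.integers(0, 2, 161),
})
fit = smf.ols("wm_pvs_count ~ hypertension + age + sex", data=df).fit()
print(fit.params["hypertension"], fit.pvalues["hypertension"])
```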


Assuntos
Doenças de Pequenos Vasos Cerebrais , Substância Branca , Idoso , Doenças de Pequenos Vasos Cerebrais/diagnóstico por imagem , Humanos , Imageamento por Ressonância Magnética , Pessoa de Meia-Idade , Estudos Prospectivos , Fatores de Risco , Substância Branca/diagnóstico por imagem
6.
Orthod Craniofac Res; 24 Suppl 2: 108-116, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33711187

ABSTRACT

OBJECTIVE: This study aimed to quantify the 3D asymmetry of the maxilla in patients with unilateral cleft lip and palate (UCP) and investigate the defect factors responsible for the variability of the maxilla on the cleft side using a deep-learning-based CBCT image segmentation protocol. SETTING AND SAMPLE POPULATION: Cone beam computed tomography (CBCT) images of 60 patients with UCP were acquired. The samples in this study consisted of 39 males and 21 females, with a mean age of 11.52 years (SD = 3.27 years; range of 8-18 years). MATERIALS AND METHODS: The deep-learning-based protocol was used to segment the maxilla and defect initially, followed by manual refinement. Paired t-tests were performed to characterize the maxillary asymmetry. A multiple linear regression was carried out to investigate the relationship between the defect parameters and those of the cleft side of the maxilla. RESULTS: The cleft side of the maxilla demonstrated a significant decrease in maxillary volume and length as well as alveolar length, anterior width, posterior width, anterior height and posterior height. A significant increase in maxillary anterior width was demonstrated on the cleft side of the maxilla. There was a close relationship between the defect parameters and those of the cleft side of the maxilla. CONCLUSIONS: Based on the 3D volumetric segmentations, significant hypoplasia of the maxilla on the cleft side existed in the pyriform aperture and alveolar crest area near the defect. The defect structures appeared to contribute to the variability of the maxilla on the cleft side.
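A hedged example of the paired comparison used here: a paired t-test between cleft-side and non-cleft-side maxillary volumes, with synthetic numbers in place of the CBCT measurements.

```python
# Paired t-test sketch for cleft-side vs. non-cleft-side volumes (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
non_cleft = rng.normal(24.0, 3.0, 60)          # cm^3, synthetic
cleft = non_cleft - rng.normal(1.5, 0.8, 60)   # synthetic cleft-side deficit
t, p = stats.ttest_rel(cleft, non_cleft)
print(f"t = {t:.2f}, p = {p:.3g}")
```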


Assuntos
Fenda Labial , Fissura Palatina , Aprendizado Profundo , Tomografia Computadorizada de Feixe Cônico Espiral , Adolescente , Criança , Fenda Labial/diagnóstico por imagem , Fissura Palatina/diagnóstico por imagem , Tomografia Computadorizada de Feixe Cônico , Feminino , Humanos , Masculino , Maxila/diagnóstico por imagem
7.
Neuroimage; 218: 116978, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32447015

ABSTRACT

Perivascular spaces (PVSs) are fluid-filled spaces surrounding penetrating blood vessels in the brain and are an integral pathway of the glymphatic system. A PVS and the enclosed blood vessel are commonly visualized as a single vessel-like complex (denoted as PVSV) in high-resolution MRI images. Quantitative characterization of PVSV morphology in MRI images of healthy subjects may serve as a reference for detecting disease-related PVS and/or blood vessel alterations in patients with brain diseases. To this end, we evaluated the age dependences, spatial heterogeneities, and dynamic properties of PVSV morphological features in 45 healthy subjects (21-55 years old), using an ultra-high-resolution three-dimensional transverse relaxation time weighted MRI sequence (0.41 × 0.41 × 0.4 mm³) at 7T. Quantitative PVSV parameters, including apparent diameter, count, volume fraction (VF), and relative contrast-to-noise ratio (rCNR), were calculated in the white matter and subcortical structures. Dynamic changes were induced by carbogen breathing, which is known to induce vasodilation and increase the blood oxygenation level in the brain. PVSV count and VF significantly increased with age in the basal ganglia (BG), as did rCNR in the BG, midbrain, and white matter (WM). Apparent PVSV diameter also showed a positive association with age in the three brain regions, although it did not reach statistical significance. The PVSV VF and count showed large inter-subject variations, with coefficients of variation ranging from 0.17 to 0.74 after regressing out age and gender effects. Both apparent diameter and VF exhibited significant spatial heterogeneity, which cannot be explained solely by radio-frequency field inhomogeneities. Carbogen breathing significantly increased VF in the BG and WM, and rCNR in the thalamus, BG, and WM, compared to air breathing. Our results are consistent with gradual dilation of PVSs with age in healthy adults. PVSV morphology exhibited spatial heterogeneity and large inter-subject variations and changed during carbogen breathing compared to air breathing.
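A small sketch of the inter-subject variability computation mentioned above: age and sex are regressed out of a PVSV measure, and the coefficient of variation is taken over the adjusted values. The data are synthetic and the linear adjustment scheme is an assumption, not the paper's exact procedure.

```python
# Regress out age/sex from a PVSV measure, then compute the coefficient of
# variation of the adjusted values (synthetic placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
age = rng.uniform(21, 55, 45)
sex = rng.integers(0, 2, 45)
vf = 0.01 + 0.0002 * age + rng.normal(0, 0.002, 45)   # synthetic volume fraction

covariates = np.column_stack([age, sex])
resid = vf - LinearRegression().fit(covariates, vf).predict(covariates)
adjusted = resid + vf.mean()
print("CoV:", adjusted.std() / adjusted.mean())
```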


Assuntos
Vasos Sanguíneos/anatomia & histologia , Sistema Glinfático/anatomia & histologia , Processamento de Imagem Assistida por Computador/métodos , Adulto , Envelhecimento/patologia , Feminino , Voluntários Saudáveis , Humanos , Imageamento por Ressonância Magnética/métodos , Masculino , Pessoa de Meia-Idade , Adulto Jovem
8.
J Magn Reson Imaging; 52(6): 1852-1858, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32656955

ABSTRACT

BACKGROUND: A generative adversarial network could be used for high-resolution (HR) medical image synthesis with reduced scan time. PURPOSE: To evaluate the potential of using a deep convolutional generative adversarial network (DCGAN) for generating HRpre and HRpost images based on their corresponding low-resolution (LR) images (LRpre and LRpost). STUDY TYPE: This was a retrospective analysis of a prospectively acquired cohort. POPULATION: In all, 224 subjects were randomly divided into 200 training subjects and an independent 24-subject test set. FIELD STRENGTH/SEQUENCE: Dynamic contrast-enhanced (DCE) MRI with a 1.5T scanner. ASSESSMENT: Three breast radiologists independently ranked the image datasets, using the DCE images as the ground truth, and reviewed the image quality of both the original LR images and the generated HR images. The BI-RADS category and conspicuity of lesions were also ranked. The inter/intracorrelation coefficients (ICCs) of mean image quality scores, lesion conspicuity scores, and Breast Imaging Reporting and Data System (BI-RADS) categories were calculated between the three readers. STATISTICAL TEST: Wilcoxon signed-rank tests evaluated differences among the multireader ranking scores. RESULTS: The mean overall image quality scores of the generated HRpre and HRpost were significantly higher than those of the original LRpre and LRpost (4.77 ± 0.41 vs. 3.27 ± 0.43 and 4.72 ± 0.44 vs. 3.23 ± 0.43, P < 0.0001, respectively, in the multireader study). The mean lesion conspicuity scores of the generated HRpre and HRpost were significantly higher than those of the original LRpre and LRpost (4.18 ± 0.70 vs. 3.49 ± 0.58 and 4.35 ± 0.59 vs. 3.48 ± 0.61, P < 0.001, respectively, in the multireader study). The ICCs of the image quality scores, lesion conspicuity scores, and BI-RADS categories showed good agreement among the three readers (all ICCs > 0.75). DATA CONCLUSION: DCGAN was capable of generating HR images of the breast from fast pre- and postcontrast LR images and achieved superior quantitative and qualitative performance in a multireader study. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY STAGE: 2. J. MAGN. RESON. IMAGING 2020;52:1852-1858.
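A minimal convolutional generator sketch for 2x super-resolution of single-channel MRI slices, loosely in the spirit of the GAN generator described above; the paper's exact DCGAN architecture and its discriminator are not reproduced here, and the layer sizes are illustrative assumptions.

```python
# Toy super-resolution generator (not the paper's DCGAN): a few convolutions
# followed by pixel-shuffle upsampling of a low-resolution slice.
import torch
import torch.nn as nn

class SRGenerator(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                      # 2x upsampling of the LR input
        )

    def forward(self, lr):
        return self.net(lr)

gen = SRGenerator()
hr = gen(torch.randn(1, 1, 128, 128))
print(hr.shape)   # torch.Size([1, 1, 256, 256])
```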


Assuntos
Mama , Imageamento por Ressonância Magnética , Mama/diagnóstico por imagem , Redes Neurais de Computação , Radiografia , Estudos Retrospectivos
9.
Neuroimage; 198: 114-124, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31112785

ABSTRACT

Reconstruction of accurate cortical surfaces without topological errors (i.e., handles and holes) from infant brain MR images is very important in early brain development studies. However, infant brain MR images typically suffer from extremely low tissue contrast and dynamic imaging appearance patterns. Thus, large numbers of topological errors are inevitable in the segmented infant brain tissue images, which lead to inaccurately reconstructed cortical surfaces with topological errors. To address this issue, inspired by recent advances in deep learning, we propose an anatomically constrained network for topological correction on infant cortical surfaces. Specifically, in our method, we first locate regions of potential topological defects by leveraging a topology-preserving level set method. Then, we propose an anatomically constrained network to correct the candidate voxels in the located regions. Since infant cortical surfaces often contain large and complex handles or holes, it is difficult to completely correct all errors with one-shot correction. Therefore, we further enroll these two steps in an iterative framework to gradually correct large topological errors. To the best of our knowledge, this is the first work to introduce a deep learning approach for topological correction of infant cortical surfaces. We compare our method with state-of-the-art methods on both simulated topological errors and real topological errors in human infant brain MR images. Moreover, we also validate our method on infant brain MR images of macaques. All experimental results show the superior performance of the proposed method.


Assuntos
Mapeamento Encefálico/métodos , Encéfalo/anatomia & histologia , Processamento de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética , Redes Neurais de Computação , Substância Branca/anatomia & histologia , Animais , Artefatos , Encéfalo/diagnóstico por imagem , Humanos , Lactente , Macaca , Reprodutibilidade dos Testes , Substância Branca/diagnóstico por imagem
10.
IEEE Trans Med Imaging; PP, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38801691

ABSTRACT

Tooth instance segmentation of dental panoramic X-ray images represents a task of significant clinical importance. Teeth demonstrate symmetry within the upper and lower jawbones and are arranged in a specific order. However, previous studies frequently overlook this crucial spatial prior information, resulting in misidentification of tooth categories for adjacent or similarly shaped teeth. In this paper, we propose SPGTNet, a spatial prior-guided transformer method designed to leverage both the tooth positional features extracted by CNNs and the long-range contextual information from vision transformers for dental panoramic X-ray image segmentation. Initially, a center-based spatial prior perception module is employed to identify each tooth's centroid, thereby enhancing the spatial prior information for the CNN sequence features. Subsequently, a bi-directional cross-attention module is designed to facilitate the interaction between the spatial prior information of the CNN sequence features and the long-distance contextual features of the vision transformer sequence features. Finally, an instance identification head is employed to derive the tooth segmentation results. Extensive experiments on three public benchmark datasets have demonstrated the effectiveness and superiority of our proposed method in comparison with other state-of-the-art approaches. The proposed method demonstrates the capability to accurately identify and analyze tooth structures, thereby providing crucial information for dental diagnosis, treatment planning, and research.
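An illustrative bi-directional cross-attention block between a CNN token sequence and a transformer token sequence, echoing the module described above. This is a generic PyTorch sketch with assumed dimensions, not the SPGTNet implementation.

```python
# Generic bi-directional cross-attention between two token sequences:
# each stream queries the other and adds the result back as a residual.
import torch
import torch.nn as nn

class BiDirectionalCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.cnn_to_vit = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vit_to_cnn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cnn_tokens, vit_tokens):
        # CNN tokens query long-range context from the transformer tokens...
        cnn_out, _ = self.cnn_to_vit(cnn_tokens, vit_tokens, vit_tokens)
        # ...and transformer tokens query spatial-prior cues from the CNN tokens.
        vit_out, _ = self.vit_to_cnn(vit_tokens, cnn_tokens, cnn_tokens)
        return cnn_tokens + cnn_out, vit_tokens + vit_out

block = BiDirectionalCrossAttention()
c, v = block(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
print(c.shape, v.shape)
```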

11.
J Autism Dev Disord; 53(6): 2475-2489, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35389185

ABSTRACT

Previous studies have demonstrated abnormal brain overgrowth in children with autism spectrum disorder (ASD), but the development of specific brain regions, such as the amygdala and hippocampal subfields in infants, is incompletely documented. To address this issue, we performed the first MRI study of amygdala and hippocampal subfields in infants from 6 to 24 months of age using a longitudinal dataset. A novel deep learning approach, Dilated-Dense U-Net, was proposed to address the challenge of low tissue contrast and small structural size of these subfields. We performed a volume-based analysis on the segmentation results. Our results show that infants who were later diagnosed with ASD had larger left and right volumes of amygdala and hippocampal subfields than typically developing controls.


Assuntos
Transtorno do Espectro Autista , Transtorno Autístico , Criança , Humanos , Lactente , Transtorno do Espectro Autista/diagnóstico por imagem , Hipocampo/diagnóstico por imagem , Encéfalo , Tonsila do Cerebelo/diagnóstico por imagem , Imageamento por Ressonância Magnética/métodos
12.
IEEE Trans Med Imaging; 42(4): 1046-1055, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36399586

ABSTRACT

Adjuvant and salvage radiotherapy after radical prostatectomy requires precise delineations of prostate bed (PB), i.e., the clinical target volume, and surrounding organs at risk (OARs) to optimize radiotherapy planning. Segmenting PB is particularly challenging even for clinicians, e.g., from the planning computed tomography (CT) images, as it is an invisible/virtual target after the operative removal of the cancerous prostate gland. Very recently, a few deep learning-based methods have been proposed to automatically contour non-contrast PB by leveraging its spatial reliance on adjacent OARs (i.e., the bladder and rectum) with much more clear boundaries, mimicking the clinical workflow of experienced clinicians. Although achieving state-of-the-art results from both the clinical and technical aspects, these existing methods improperly ignore the gap between the hierarchical feature representations needed for segmenting those fundamentally different clinical targets (i.e., PB and OARs), which in turn limits their delineation accuracy. This paper proposes an asymmetric multi-task network integrating dynamic cross-task representation adaptation (i.e., DyAdapt) for accurate and efficient co-segmentation of PB and OARs in one-pass from CT images. In the learning-to-learn framework, the DyAdapt modules adaptively transfer the hierarchical feature representations from the source task of OARs segmentation to match up with the target (and more challenging) task of PB segmentation, conditioned on the dynamic inter-task associations learned from the learning states of the feed-forward path. On a real-patient dataset, our method led to state-of-the-art results of PB and OARs co-segmentation. Code is available at https://github.com/ladderlab-xjtu/DyAdapt.


Assuntos
Processamento de Imagem Assistida por Computador , Neoplasias da Próstata , Masculino , Humanos , Processamento de Imagem Assistida por Computador/métodos , Órgãos em Risco , Neoplasias da Próstata/diagnóstico por imagem , Neoplasias da Próstata/radioterapia , Neoplasias da Próstata/cirurgia , Tomografia Computadorizada por Raios X/métodos , Planejamento da Radioterapia Assistida por Computador/métodos , Prostatectomia
13.
IEEE Trans Med Imaging; 42(2): 336-345, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35657829

ABSTRACT

Orthognathic surgery corrects jaw deformities to improve aesthetics and functions. Due to the complexity of the craniomaxillofacial (CMF) anatomy, orthognathic surgery requires precise surgical planning, which involves predicting postoperative changes in facial appearance. To this end, most conventional methods involve simulation with biomechanical modeling methods, which are labor intensive and computationally expensive. Here we introduce a learning-based framework to speed up the simulation of postoperative facial appearances. Specifically, we introduce a facial shape change prediction network (FSC-Net) to learn the nonlinear mapping from bony shape changes to facial shape changes. FSC-Net is a point transform network weakly-supervised by paired preoperative and postoperative data without point-wise correspondence. In FSC-Net, a distance-guided shape loss places more emphasis on the jaw region. A local point constraint loss restricts point displacements to preserve the topology and smoothness of the surface mesh after point transformation. Evaluation results indicate that FSC-Net achieves 15× speedup with accuracy comparable to a state-of-the-art (SOTA) finite-element modeling (FEM) method.
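A hedged sketch of a distance-guided point loss in the spirit of the loss described above: point-wise errors are weighted more heavily near a reference jaw landmark so that the jaw region dominates the objective. The exponential weighting and the jaw_center input are illustrative assumptions, not the FSC-Net loss.

```python
# Distance-guided point loss sketch: weight per-point errors by proximity to a
# reference (jaw) landmark. The weighting scheme is an illustrative assumption.
import torch

def distance_guided_loss(pred, target, jaw_center, sigma=20.0):
    # pred, target: (B, N, 3) facial point sets; jaw_center: (B, 3)
    d = torch.linalg.norm(target - jaw_center[:, None, :], dim=-1)  # distance to jaw
    w = torch.exp(-d / sigma)                                       # emphasize jaw region
    per_point = ((pred - target) ** 2).sum(dim=-1)                  # squared point error
    return (w * per_point).sum() / w.sum()

pred = torch.randn(2, 4096, 3)
target = pred + 0.01 * torch.randn_like(pred)
jaw_center = torch.zeros(2, 3)
print(distance_guided_loss(pred, target, jaw_center))
```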


Assuntos
Aprendizado Profundo , Cirurgia Ortognática , Procedimentos Cirúrgicos Ortognáticos , Procedimentos Cirúrgicos Ortognáticos/métodos , Simulação por Computador , Face/diagnóstico por imagem , Face/cirurgia
14.
Med Image Anal; 83: 102644, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36272236

ABSTRACT

This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.

15.
IEEE Trans Cybern; 52(4): 1992-2003, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32721906

ABSTRACT

Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. As it is practically challenging to capture local and subtle disease-associated abnormalities directly from the whole-brain sMRI, most of those deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, very few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, those methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully utilize the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
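A generic sketch of the attention-guidance idea: a small 3D fully convolutional backbone produces a spatial attention map that re-weights its own features before classification. Layer sizes, input shape, and the sigmoid attention map are illustrative assumptions, not the evaluated architecture.

```python
# Toy attention-guided 3D classifier: an attention map gates the backbone
# features before global pooling and classification.
import torch
import torch.nn as nn

class AttentionGuided3DNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(nn.Conv3d(32, 1, 1), nn.Sigmoid())  # disease-attention map
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        f = self.backbone(x)                   # B x 32 x D' x H' x W'
        a = self.attn(f)                       # B x 1 x D' x H' x W'
        g = (f * a).mean(dim=(2, 3, 4))        # attention-gated global pooling
        return self.head(g), a

net = AttentionGuided3DNet()
logits, attn_map = net(torch.randn(1, 1, 96, 112, 96))
print(logits.shape, attn_map.shape)
```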


Assuntos
Doença de Alzheimer , Disfunção Cognitiva , Doença de Alzheimer/diagnóstico por imagem , Atenção , Encéfalo/diagnóstico por imagem , Disfunção Cognitiva/diagnóstico por imagem , Humanos , Imageamento por Ressonância Magnética/métodos , Neuroimagem/métodos
16.
IEEE Trans Neural Netw Learn Syst; 33(8): 4056-4068, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33656999

ABSTRACT

Accurate prediction of clinical scores (of neuropsychological tests) based on noninvasive structural magnetic resonance imaging (MRI) helps understand the pathological stage of dementia (e.g., Alzheimer's disease (AD)) and forecast its progression. Existing machine/deep learning approaches typically preselect dementia-sensitive brain locations for MRI feature extraction and model construction, potentially leading to undesired heterogeneity between different stages and degraded prediction performance. Besides, these methods usually rely on prior anatomical knowledge (e.g., brain atlas) and time-consuming nonlinear registration for the preselection of brain locations, thereby ignoring individual-specific structural changes during dementia progression because all subjects share the same preselected brain regions. In this article, we propose a multi-task weakly-supervised attention network (MWAN) for the joint regression of multiple clinical scores from baseline MRI scans. Three sequential components are included in MWAN: 1) a backbone fully convolutional network for extracting MRI features; 2) a weakly supervised dementia attention block for automatically identifying subject-specific discriminative brain locations; and 3) an attention-aware multitask regression block for jointly predicting multiple clinical scores. The proposed MWAN is an end-to-end and fully trainable deep learning model in which dementia-aware holistic feature learning and multitask regression model construction are integrated into a unified framework. Our MWAN method was evaluated on two public AD data sets for estimating clinical scores of mini-mental state examination (MMSE), clinical dementia rating sum of boxes (CDRSB), and AD assessment scale cognitive subscale (ADAS-Cog). Quantitative experimental results demonstrate that our method produces superior regression performance compared with state-of-the-art methods. Importantly, qualitative results indicate that the dementia-sensitive brain locations automatically identified by our MWAN method well retain individual specificities and are biologically meaningful.


Assuntos
Doença de Alzheimer , Redes Neurais de Computação , Encéfalo/diagnóstico por imagem , Humanos , Aprendizado de Máquina , Imageamento por Ressonância Magnética/métodos
17.
IEEE Trans Med Imaging; 41(11): 3116-3127, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35635829

ABSTRACT

Accurate tooth identification and delineation in dental CBCT images are essential in clinical oral diagnosis and treatment. Teeth are positioned in the alveolar bone in a particular order, featuring similar appearances across adjacent and bilaterally symmetric teeth. However, existing tooth segmentation methods ignore such specific anatomical topology, which hampers segmentation accuracy. Here we propose a semantic graph-based method to explicitly model the spatial associations between different anatomical targets (i.e., teeth) for their precise delineation in a coarse-to-fine fashion. First, to efficiently control the bilaterally symmetric confusion in segmentation, we employ a lightweight network to roughly separate the teeth into four quadrants. Then, we design a semantic graph attention mechanism to explicitly model the anatomical topology of the teeth in each quadrant, based on which voxel-wise discriminative feature embeddings are learned for the accurate delineation of tooth boundaries. Extensive experiments on a clinical dental CBCT dataset demonstrate the superior performance of the proposed method compared with other state-of-the-art approaches.
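A toy graph-attention layer over tooth nodes, where each tooth embedding attends only to its anatomical neighbors within a quadrant, as one way of encoding the arch topology described above. This is not the paper's semantic graph attention module; the chain adjacency and embedding size are assumptions.

```python
# Toy masked attention over a chain graph of teeth in one quadrant.
import torch
import torch.nn as nn

class ToothGraphAttention(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x, adj):
        # x: (T, dim) tooth embeddings; adj: (T, T) anatomical adjacency mask
        scores = self.q(x) @ self.k(x).t() / x.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ self.v(x)

teeth = torch.randn(8, 64)                           # 8 teeth in one quadrant
adj = torch.eye(8) + torch.diag(torch.ones(7), 1) + torch.diag(torch.ones(7), -1)
print(ToothGraphAttention()(teeth, adj).shape)       # torch.Size([8, 64])
```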


Assuntos
Tomografia Computadorizada de Feixe Cônico Espiral , Dente , Semântica , Dente/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos
18.
IEEE Trans Med Imaging; 41(4): 826-835, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34714743

ABSTRACT

Precise segmentation of teeth from intra-oral scanner images is an essential task in computer-aided orthodontic surgical planning. The state-of-the-art deep learning-based methods often simply concatenate the raw geometric attributes (i.e., coordinates and normal vectors) of mesh cells to train a single-stream network for automatic intra-oral scanner image segmentation. However, since different raw attributes reveal completely different geometric information, the naive concatenation of different raw attributes at the (low-level) input stage may bring unnecessary confusion in describing and differentiating between mesh cells, thus hampering the learning of high-level geometric representations for the segmentation task. To address this issue, we design a two-stream graph convolutional network (i.e., TSGCN), which can effectively handle inter-view confusion between different raw attributes to more effectively fuse their complementary information and learn discriminative multi-view geometric representations. Specifically, our TSGCN adopts two input-specific graph-learning streams to extract complementary high-level geometric representations from coordinates and normal vectors, respectively. Then, these single-view representations are further fused by a self-attention module to adaptively balance the contributions of different views in learning more discriminative multi-view representations for accurate and fully automatic tooth segmentation. We have evaluated our TSGCN on a real-patient dataset of dental (mesh) models acquired by 3D intraoral scanners. Experimental results show that our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
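A toy two-stream sketch: mesh-cell coordinates and normals are embedded by separate streams and fused with learned per-cell weights, echoing the two-stream design above. The MLP streams and softmax gate are simplifications of, not substitutes for, TSGCN's graph-learning streams and self-attention fusion.

```python
# Two-stream fusion sketch: separate embeddings for coordinates and normals,
# fused by a learned per-cell gate before per-cell classification.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, dim: int = 64, num_classes: int = 17):
        super().__init__()
        self.coord_stream = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.normal_stream = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gate = nn.Linear(2 * dim, 2)               # per-cell weights for the two views
        self.classifier = nn.Linear(dim, num_classes)   # e.g., 16 teeth + gingiva

    def forward(self, coords, normals):
        fc, fn = self.coord_stream(coords), self.normal_stream(normals)    # (B, N, dim)
        w = torch.softmax(self.gate(torch.cat([fc, fn], dim=-1)), dim=-1)  # (B, N, 2)
        fused = w[..., :1] * fc + w[..., 1:] * fn
        return self.classifier(fused)                                      # per-cell logits

net = TwoStreamFusion()
print(net(torch.randn(1, 10000, 3), torch.randn(1, 10000, 3)).shape)
```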


Assuntos
Redes Neurais de Computação , Dente , Humanos , Processamento de Imagem Assistida por Computador , Dente/diagnóstico por imagem
19.
IEEE Trans Med Imaging; 41(10): 2856-2866, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35544487

ABSTRACT

Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitalizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network on a down-sampled 3D image to leverage global contextual information to predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.


Assuntos
Tomografia Computadorizada de Feixe Cônico Espiral , Pontos de Referência Anatômicos , Cefalometria/métodos , Tomografia Computadorizada de Feixe Cônico/métodos , Humanos , Processamento de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Reprodutibilidade dos Testes
20.
IEEE Trans Med Imaging; 41(11): 3158-3166, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35666796

ABSTRACT

Accurately segmenting teeth and identifying the corresponding anatomical landmarks on dental mesh models are essential in computer-aided orthodontic treatment. Manually performing these two tasks is time-consuming, tedious, and, more importantly, highly dependent on orthodontists' experience due to the abnormality and large-scale variance of patients' teeth. Some machine learning-based methods have been designed and applied in the orthodontic field to automatically segment dental meshes (e.g., intraoral scans). In contrast, the number of studies on tooth landmark localization is still limited. This paper proposes a two-stage framework based on mesh deep learning (called TS-MDL) for joint tooth labeling and landmark identification on raw intraoral scans. Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with both improved accuracy and efficiency) to label each tooth on the downsampled scan. Guided by the segmentation outputs, our TS-MDL further selects each tooth's region of interest (ROI) on the original mesh to construct a lightweight variant of the pioneering PointNet (i.e., PointNet-Reg) for regressing the corresponding landmark heatmaps. Our TS-MDL was evaluated on a real-clinical dataset, showing promising segmentation and localization performance. Specifically, iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.964 ± 0.054, significantly outperforming the original MeshSegNet. In the second stage, PointNet-Reg achieved a mean absolute error (MAE) of 0.597 ± 0.761 mm in distances between the prediction and ground truth for 66 landmarks, which is superior to other networks for landmark detection. All these results suggest the potential usage of our TS-MDL in orthodontics.
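Small helpers for the two evaluation metrics quoted above: the Dice similarity coefficient for labels and the mean absolute distance between predicted and ground-truth landmarks, applied here to synthetic inputs rather than the study data.

```python
# Dice similarity coefficient and landmark mean absolute distance on synthetic inputs.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def landmark_mae(pred_pts, gt_pts):
    return np.linalg.norm(pred_pts - gt_pts, axis=1).mean()

rng = np.random.default_rng(0)
labels_gt = rng.integers(0, 2, 5000).astype(bool)
labels_pred = labels_gt.copy(); labels_pred[:200] = ~labels_pred[:200]
print("DSC:", dice(labels_pred, labels_gt))
print("MAE (mm):", landmark_mae(rng.normal(size=(66, 3)), rng.normal(size=(66, 3))))
```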


Assuntos
Aprendizado Profundo , Dente , Humanos , Processamento de Imagem Assistida por Computador/métodos , Telas Cirúrgicas , Dente/diagnóstico por imagem , Aprendizado de Máquina