Results 1 - 13 of 13
1.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36433784

ABSTRACT

Biomedical multi-modality data (also called multi-omics data) refer to data that span different types and derive from multiple sources in clinical practice (e.g. gene sequences, proteomics and histopathological images), which can provide comprehensive perspectives on cancers and generally improve the performance of survival models. However, the performance of multi-modality survival models may be hindered by two key issues: (1) how to learn and fuse modality-sharable and modality-individual representations from multi-modality data; (2) how to explore the potential risk-aware characteristics in each risk subgroup, which is beneficial to risk stratification and prognosis evaluation. Additionally, learning-based survival models generally involve numerous hyper-parameters, which require time-consuming tuning and might result in a suboptimal solution. In this paper, we propose an adaptive risk-aware sharable and individual subspace learning method for cancer survival analysis. The proposed method jointly learns sharable and individual subspaces from multi-modality data, while two auxiliary terms (i.e. intra-modality complementarity and inter-modality incoherence) are developed to preserve the complementary and distinctive properties of each modality. Moreover, it is equipped with a grouping co-expression constraint for obtaining risk-aware representations and preserving local consistency. Furthermore, an adaptive-weighted strategy is employed to efficiently estimate crucial parameters during the training stage. Experimental results on three public datasets demonstrate the superiority of our proposed model.
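The sharable-plus-individual decomposition described in this abstract can be illustrated with a toy objective: each modality is approximated by a shared part plus a modality-specific part, and an incoherence penalty discourages overlap between the individual subspaces. All names, shapes, and the exact penalty form below are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def subspace_loss(X, W_s, S, W_i, I, lam=0.1):
    """Toy sharable/individual subspace objective: each modality X[m] is
    approximated by a shared part W_s[m] @ S plus an individual part
    W_i[m] @ I[m]; an incoherence penalty discourages overlap between the
    individual representations of different modalities (illustrative only)."""
    recon = sum(np.linalg.norm(X[m] - W_s[m] @ S - W_i[m] @ I[m], 'fro') ** 2
                for m in range(len(X)))
    incoh = sum(np.linalg.norm(I[a] @ I[b].T, 'fro') ** 2
                for a in range(len(I)) for b in range(a + 1, len(I)))
    return recon + lam * incoh
```

When the data are generated exactly from shared and individual factors, the reconstruction term vanishes and only the incoherence penalty remains.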


Subjects
Machine Learning , Neoplasms , Humans , Neoplasms/genetics , Survival Analysis
2.
Stat Sin ; 33(2): 633-662, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37197479

ABSTRACT

Recent technological advances have made it possible to measure multiple types of many features in biomedical studies. However, some data types or features may not be measured for all study subjects because of cost or other constraints. We use a latent variable model to characterize the relationships across and within data types and to infer missing values from observed data. We develop a penalized-likelihood approach for variable selection and parameter estimation and devise an efficient expectation-maximization algorithm to implement our approach. We establish the asymptotic properties of the proposed estimators when the number of features increases at a polynomial rate in the sample size. Finally, we demonstrate the usefulness of the proposed methods using extensive simulation studies and provide an application to a motivating multi-platform genomics study.
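The penalized-likelihood EM idea can be illustrated on a deliberately simple model. The sketch below assumes a multivariate normal with identity covariance and entries missing at random (NaNs), fills missing entries with the current mean in the E-step, and applies a lasso-style soft threshold to the mean in the M-step. This is a toy illustration of the combination of EM with a sparsity penalty, not the paper's estimator.

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso proximal operator: shrink toward zero, clip small values to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def penalized_em_gaussian(X, lam=0.5, n_iter=50):
    """Toy penalized EM for a Gaussian mean with missing entries (NaNs).
    E-step: replace missing entries with the current mean estimate (identity
    covariance assumed for simplicity). M-step: lasso-penalized mean update.
    An illustration of the penalized-likelihood EM idea only."""
    mu = np.zeros(X.shape[1])
    mask = np.isnan(X)
    for _ in range(n_iter):
        Xf = np.where(mask, mu, X)            # E-step: expected complete data
        mu = soft_threshold(Xf.mean(axis=0), lam)  # M-step: penalized update
    return mu
```

On data with a sparse true mean, the penalty zeroes out the near-zero coordinate while the large coordinate survives (shrunk).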

3.
Hum Brain Mapp ; 38(6): 3081-3097, 2017 06.
Article in English | MEDLINE | ID: mdl-28345269

ABSTRACT

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairment of social interaction, language, behavior, and cognitive functions. To date, many imaging-based methods for ASD diagnosis have been developed. For example, one may extract abundant features from multi-modality images and then derive a discriminant function to map the selected features to the disease label. Most recent works, however, are limited to a single imaging center. To address this, we propose a novel multi-modality multi-center classification (M3CC) method for ASD diagnosis. We treat the classification for each imaging center as one task. By introducing task-task and modality-modality regularizations, we solve the classification for all imaging centers simultaneously. Meanwhile, the optimal feature selection and the modeling of the discriminant functions can be conducted jointly for highly accurate diagnosis. We also present an efficient iterative optimization solution to our formulated problem and further investigate its convergence. Our comprehensive experiments on the ABIDE database show that our proposed method can significantly improve the performance of ASD diagnosis compared to existing methods. Hum Brain Mapp 38:3081-3097, 2017. © 2017 Wiley Periodicals, Inc.


Subjects
Autism Spectrum Disorder/classification , Autism Spectrum Disorder/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Adolescent , Algorithms , Child , Discriminant Analysis , Female , Humans , Male , Pattern Recognition, Automated , Reproducibility of Results
4.
Neuroimage ; 108: 214-24, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25562829

ABSTRACT

The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage, and they used only single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense-stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense-stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that the integration of multi-modality images led to significant performance improvement.
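The two alternating primitives the abstract describes — trainable filters and local neighborhood pooling — can be shown in isolation. In a real multi-modality network, T1, T2, and FA would enter as separate input channels whose per-channel responses are summed; this single-channel numpy sketch is illustrative, not the paper's architecture.

```python
import numpy as np

def conv2d(img, kern):
    """Valid 2-D cross-correlation of a single-channel image with one filter -
    the 'trainable filter' primitive of a CNN layer."""
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def maxpool2(x):
    """2x2 max pooling - the 'local neighborhood pooling' primitive."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```

Stacking conv → pool → conv → pool … yields the hierarchy of increasingly complex, increasingly coarse features the abstract refers to.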


Subjects
Brain Mapping/methods , Brain/anatomy & histology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Anisotropy , Gray Matter , Humans , Infant , White Matter
5.
PeerJ Comput Sci ; 10: e2077, 2024.
Article in English | MEDLINE | ID: mdl-38983227

ABSTRACT

Background: Dyslexia is a neurological disorder that affects an individual's language processing abilities. Early care and intervention can help dyslexic individuals succeed academically and socially. Recent developments in deep learning (DL) approaches motivate researchers to build dyslexia detection models (DDMs). DL approaches facilitate the integration of multi-modality data. However, there are few multi-modality-based DDMs. Methods: In this study, the authors built a DL-based DDM using multi-modality data. A squeeze-and-excitation (SE)-integrated MobileNet V3 model, a self-attention (SA)-based EfficientNet B7 model, and an early-stopping and SA-based bidirectional long short-term memory (Bi-LSTM) model were developed to extract features from magnetic resonance imaging (MRI), functional MRI (fMRI), and electroencephalography (EEG) data. In addition, the authors fine-tuned the LightGBM model using the Hyperband optimization technique to detect dyslexia using the extracted features. Three datasets containing fMRI, MRI, and EEG data were used to evaluate the performance of the proposed DDM. Results: The findings supported the significance of the proposed DDM in detecting dyslexia with limited computational resources. The proposed model outperformed existing DDMs, producing optimal accuracies of 98.9%, 98.6%, and 98.8% for the fMRI, MRI, and EEG datasets, respectively. Healthcare centers and educational institutions can benefit from the proposed model to identify dyslexia in its initial stages. The interpretability of the proposed model could be further improved by integrating vision-transformer-based feature extraction.
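The squeeze-and-excitation mechanism integrated into MobileNet V3 here is itself simple: global-average-pool each channel, pass through two small dense layers, and use the resulting sigmoid gates to reweight channels. The numpy sketch below shows just that mechanism; the weight shapes are illustrative and the surrounding network is omitted.

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map: global average pool
    per channel ('squeeze'), two small dense layers (ReLU then sigmoid) giving
    per-channel gates in (0, 1) ('excitation'), then channel-wise rescaling."""
    z = x.mean(axis=(1, 2))                                         # (C,)
    gates = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0.0) @ w2)))   # (C,)
    return x * gates[:, None, None]
```

Because the gates lie strictly in (0, 1), the block can only attenuate channels; what it learns is *which* channels to attenuate.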

6.
Med Image Anal ; 90: 102977, 2023 12.
Article in English | MEDLINE | ID: mdl-37778101

ABSTRACT

In obstetric sonography, the quality of acquisition of ultrasound scan video is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, fetal ultrasound involves free-hand probe manipulation, which can make it challenging to capture high-quality videos for fetal biometry, especially for less-experienced sonographers. Manually checking the quality of acquired videos would be time-consuming and subjective, and requires a comprehensive understanding of fetal anatomy. Thus, it would be advantageous to develop an automatic quality assessment method to support video standardization and improve the diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework which directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both spatial and temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a proposed key-query memory module in the feature space. To further improve performance, two additional modalities are introduced in training: the sonographer's gaze and the optical flow derived from the video. Two different clinical quality assessment tasks in fetal ultrasound are considered in our experiments, i.e., measurement of the fetal head circumference and of the cerebellar diameter; in both, low-quality videos are detected by their large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
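The detection principle — train a reconstruction model on high-quality data only, then flag samples with large reconstruction error — can be demonstrated with a linear stand-in. The sketch below uses PCA instead of the paper's learned bi-directional reconstruction, and omits the memory module and the gaze/flow modalities entirely.

```python
import numpy as np

def recon_error(train_feats, test_feats, k=2):
    """Fit a k-dimensional PCA subspace to feature vectors of high-quality
    samples only, then score new samples by reconstruction error; large error
    suggests the sample does not resemble the high-quality training data
    (a linear stand-in for learned reconstruction-based quality scoring)."""
    mu = train_feats.mean(axis=0)
    comps = np.linalg.svd(train_feats - mu, full_matrices=False)[2][:k]
    recon = (test_feats - mu) @ comps.T @ comps + mu
    return np.linalg.norm(test_feats - recon, axis=1)
```

A sample lying in the training subspace reconstructs almost exactly; one pushed off the subspace yields a large error and would be flagged as low quality.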


Subjects
Fetus , Ultrasonography, Prenatal , Pregnancy , Female , Humans , Ultrasonography, Prenatal/methods , Fetus/diagnostic imaging , Ultrasonography
7.
Front Oncol ; 13: 1100087, 2023.
Article in English | MEDLINE | ID: mdl-36874136

ABSTRACT

Objectives: Recurrence risk evaluation is clinically significant for patients with locally advanced cervical cancer (LACC). We investigated the ability of a transformer network to stratify recurrence risk in LACC based on computed tomography (CT) and magnetic resonance (MR) images. Methods: A total of 104 patients with pathologically diagnosed LACC between July 2017 and December 2021 were enrolled in this study. All patients underwent CT and MR scanning, and their recurrence status was identified by biopsy. We randomly divided the patients into a training cohort (48 cases, non-recurrence: recurrence = 37:11), a validation cohort (21 cases, non-recurrence: recurrence = 16:5), and a testing cohort (35 cases, non-recurrence: recurrence = 27:8), from which we extracted 1989, 882, and 315 patches for model development, validation, and evaluation, respectively. The transformer network consisted of three modality fusion modules to extract multi-modality and multi-scale information, and a fully connected module to perform recurrence risk prediction. The model's prediction performance was assessed by six metrics: the area under the receiver operating characteristic curve (AUC), accuracy, F1-score, sensitivity, specificity, and precision. Univariate analysis with F-tests and t-tests was conducted for statistical analysis. Results: The proposed transformer network was superior to conventional radiomics methods and other deep learning networks in the training, validation, and testing cohorts. In particular, in the testing cohort, the transformer network achieved the highest AUC of 0.819 ± 0.038, while four conventional radiomics methods and two deep learning networks achieved AUCs of 0.680 ± 0.050, 0.720 ± 0.068, 0.777 ± 0.048, 0.691 ± 0.103, 0.743 ± 0.022 and 0.733 ± 0.027, respectively.
Conclusions: The multi-modality transformer network showed promising performance in recurrence risk stratification of LACC and may be used as an effective tool to help clinicians make clinical decisions.
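The generic primitive behind transformer-based modality fusion is scaled dot-product attention, in which tokens from one modality query tokens from the other (e.g. CT patch features attending to MR patch features). The sketch below shows only this primitive; the paper's three fusion modules, learned projection weights, and multi-scale design are abstracted away, and all shapes are illustrative.

```python
import numpy as np

def cross_attention(queries, context):
    """Scaled dot-product cross-attention: each query token (one modality)
    receives a softmax-weighted convex combination of context tokens (the
    other modality). Projection matrices omitted for brevity."""
    d = queries.shape[1]
    scores = queries @ context.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ context
```

Because each output row is a convex combination of context tokens, fused features always stay within the per-dimension range of the attended modality.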

8.
Artif Intell Med ; 145: 102678, 2023 11.
Article in English | MEDLINE | ID: mdl-37925204

ABSTRACT

Alzheimer's disease (AD) is an irreversible degenerative disease of the central nervous system, while mild cognitive impairment (MCI) is a precursor state of AD. Accurate early diagnosis of AD is conducive to its prevention and early intervention. Although some computational methods have been developed for AD diagnosis, most employ only neuroimaging, ignoring other data (e.g., genetic, clinical) that may carry disease information. In addition, the results of some methods lack interpretability. In this work, we propose a novel method (DANMLP) that joins a dual-attention convolutional neural network (CNN) and a multilayer perceptron (MLP) for computer-aided AD diagnosis by integrating multi-modality data: structural magnetic resonance imaging (sMRI), clinical data (i.e., demographics, neuropsychology), and APOE genetic data. DANMLP consists of four primary components: (1) a patch-CNN for extracting image characteristics from each local patch, (2) a position self-attention block for capturing dependencies between features within a patch, (3) a channel self-attention block for capturing dependencies of inter-patch features, and (4) two MLP networks for extracting the clinical features and outputting the AD classification results, respectively. Compared with other state-of-the-art methods in the 5CV test, DANMLP achieves 93% and 82.4% classification accuracy for the AD vs. MCI and MCI vs. NC tasks on the ADNI database, which is 0.2%-15.2% and 3.4%-26.8% higher than the other five methods, respectively. The individualized visualization of focal areas can also help clinicians in the early diagnosis of AD. These results indicate that DANMLP can be effectively used for diagnosing AD and MCI patients.


Subjects
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/genetics , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods , Diagnosis, Computer-Assisted , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/genetics
9.
Stat Methods Med Res ; 31(7): 1242-1262, 2022 07.
Article in English | MEDLINE | ID: mdl-35301917

ABSTRACT

In modern biomedical classification applications, data are often collected from multiple modalities, ranging from various omics technologies to brain scans. As different modalities provide complementary information, classifiers using multi-modality data usually achieve good classification performance. In many studies, however, due to the high cost of measurements, some modalities are entirely missing for many samples; the training data set is then a block-missing multi-modality data set. In this paper, considering such classification problems, we develop a new weighted nearest neighbors classifier, called the integrative nearest neighbor (INN) classifier. INN effectively harnesses all available information in the training data set and the feature vector of the test data point to predict the class label of the test data point, without deleting or imputing any missing data. Given a test data point, INN determines the weights on the training samples adaptively by minimizing the worst-case upper bound on the estimation error of the regression function over a convex class of functions. Our simulation study shows that INN outperforms common weighted nearest neighbors classifiers that use only complete training samples or only the modalities available in each sample. It also performs better than methods that impute the missing data, even when some modalities are missing not at random. The effectiveness of INN has also been demonstrated by our theoretical studies and a real application from the Alzheimer's Disease Neuroimaging Initiative.
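The core idea — compare a test point to each training sample using only the features both actually observe, rather than deleting or imputing — can be sketched with a plain nearest-neighbor vote. This is a simple stand-in for INN's adaptive weighting (the worst-case-bound optimization is omitted); missing blocks are encoded as NaN.

```python
import numpy as np

def knn_block_missing(X_train, y_train, x_test, k=3):
    """Nearest-neighbor vote where each distance uses only the features
    observed (non-NaN) in both the training sample and the test point,
    normalized by the number of shared features. No deletion or imputation."""
    d = np.empty(len(X_train))
    for i, x in enumerate(X_train):
        shared = ~np.isnan(x) & ~np.isnan(x_test)
        d[i] = (np.mean((x[shared] - x_test[shared]) ** 2)
                if shared.any() else np.inf)
    nn = np.argsort(d)[:k]
    vals, counts = np.unique(y_train[nn], return_counts=True)
    return vals[np.argmax(counts)]
```

Even when the test point and some training samples share only one modality, the classifier still uses whatever overlap exists.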


Subjects
Algorithms , Alzheimer Disease , Alzheimer Disease/diagnostic imaging , Cluster Analysis , Humans , Models, Theoretical , Neuroimaging
10.
Comput Biol Med ; 150: 106116, 2022 11.
Article in English | MEDLINE | ID: mdl-36215848

ABSTRACT

Early detection and treatment of Alzheimer's disease (AD) are significant. Recently, multi-modality imaging data have promoted the development of automatic AD diagnosis. This paper proposes a method based on latent feature fusion to make full use of multi-modality image information. Specifically, we learn a specific projection matrix for each modality by introducing a binary label matrix and local geometry constraints, and then project the original features of each modality into a low-dimensional target space. In this space, we fuse the latent feature representations of the different modalities for AD classification. Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate the proposed method's effectiveness in classifying AD.
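A linear sketch of label-guided latent fusion: for each modality, learn a projection toward the binary label matrix by ridge-regularized least squares, then concatenate the projected representations. The paper's local-geometry constraints are omitted, and all names and shapes are illustrative.

```python
import numpy as np

def latent_fuse(X_mods, Y, ridge=1e-3):
    """For each modality matrix X (n samples x d_m features), fit a projection
    W_m toward the binary label matrix Y (n x classes) by ridge least squares,
    then concatenate the projected (latent) representations of all modalities."""
    latents = []
    for X in X_mods:
        W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
        latents.append(X @ W)
    return np.hstack(latents)
```

The fused representation has one block of class-score columns per modality, on which any classifier can then be trained.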


Subjects
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnostic imaging , Multimodal Imaging/methods , Neuroimaging/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods , Cognitive Dysfunction/diagnosis
11.
Med Image Anal ; 69: 101953, 2021 04.
Article in English | MEDLINE | ID: mdl-33460880

ABSTRACT

Alzheimer's disease (AD) is a complex neurodegenerative disease, and its early diagnosis and treatment have been a major concern of researchers. Multi-modality data representation learning for this disease is gradually becoming an emerging research field, attracting widespread attention. In practice, however, data from multiple modalities are often only partially available, and most existing multi-modal learning algorithms cannot deal with incomplete multi-modality data. In this paper, we propose an Auto-Encoder based Multi-View missing data Completion framework (AEMVC) to learn common representations for AD diagnosis. Specifically, we first map the original complete view to a latent space using an auto-encoder network. Then, the latent representations measuring statistical dependence learned from the complete view are used to complement the kernel matrix of the incomplete view in the kernel space. Meanwhile, the structural information of the original data and the inherent association between views are maintained by graph regularization and Hilbert-Schmidt Independence Criterion (HSIC) constraints. Finally, a kernel-based multi-view method is applied to the learned kernel matrix to acquire the common representations. Experimental results on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets validate the effectiveness of the proposed method.
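The HSIC constraint mentioned here has a compact empirical form that is easy to compute: given two kernel matrices over the same samples, it measures statistical dependence between the two views. The sketch shows the criterion itself; how AEMVC plugs it into the overall objective is not reproduced.

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt Independence Criterion between two n x n
    kernel matrices: trace(K H L H) / (n - 1)^2, with centering matrix
    H = I - (1/n) 11^T. Larger values indicate stronger dependence."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In AEMVC-style objectives, maximizing HSIC between the latent representation's kernel and the original view's kernel keeps the learned representation dependent on (informative about) the original data.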


Subjects
Alzheimer Disease , Neurodegenerative Diseases , Algorithms , Alzheimer Disease/diagnostic imaging , Early Diagnosis , Humans , Neuroimaging
12.
Ann Appl Stat ; 15(1): 64-87, 2021 Mar.
Article in English | MEDLINE | ID: mdl-34354791

ABSTRACT

The biomarker networks measured by different modalities of data (e.g., structural magnetic resonance imaging (sMRI), diffusion tensor imaging (DTI)) may share the same true underlying biological model. In this work, we propose a node-wise biomarker graphical model that leverages the shared mechanism between multi-modality data to provide a more reliable estimation of the target modality network and account for the heterogeneity in networks due to differences between subjects and networks of the external modality. Latent variables are introduced to represent the shared unobserved biological network, and information from the external modality is incorporated to model the distribution of the underlying biological network. We propose an efficient approximation to the posterior expectation of the latent variables that reduces computational cost by at least 50%. The performance of the proposed method is demonstrated by extensive simulation studies and an application constructing a gray-matter brain atrophy network of Huntington's disease using sMRI and DTI data. Compared with alternative methods, the identified network connections are more consistent with the clinical literature, better improve prediction of follow-up clinical outcomes, and separate subjects into clinically meaningful subgroups with different prognoses.

13.
Med Image Anal ; 60: 101630, 2020 02.
Article in English | MEDLINE | ID: mdl-31927474

ABSTRACT

Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities can provide complementary perspectives on complex neurodegenerative diseases. However, at least four common issues are associated with existing fusion methods. First, many existing methods simply concatenate features from each modality without considering the correlations among different modalities. Second, most existing methods make predictions based on a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many existing methods perform feature selection (or reduction) and classifier training in two independent steps, without considering the fact that the two pipelined steps are highly related to each other. Fourth, neuroimaging data are missing for some participants (e.g., missing PET data), due to no-shows or dropout. In this paper, to address these issues, we propose an early AD diagnosis framework via a novel multi-modality latent-space-inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed models outperform other state-of-the-art methods.
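The final ensemble step of such a framework is a majority vote over diversified classifiers trained on latent features. The sketch below shows only that step, with the latent-space projection and the training of the individual SVMs abstracted away as arbitrary callables.

```python
import numpy as np

def ensemble_predict(classifiers, Z):
    """Majority vote over a set of diversified binary classifiers, each a
    callable mapping latent features Z (n x d) to {0, 1} labels. Ties with an
    even number of voters resolve to 0 under the strict '> 0.5' rule."""
    votes = np.stack([clf(Z) for clf in classifiers])   # (n_clf, n_samples)
    return (votes.mean(axis=0) > 0.5).astype(int)
```

With three threshold classifiers, a sample is labeled positive exactly when at least two of the three agree.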


Subjects
Alzheimer Disease/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Neuroimaging/methods , Pattern Recognition, Automated/methods , Aged , Datasets as Topic , Early Diagnosis , Female , Humans , Magnetic Resonance Imaging , Male , Positron-Emission Tomography