Results 1 - 20 of 74
1.
Comput Biol Chem ; 112: 108175, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39191166

ABSTRACT

Cancer drug response (CDR) prediction is an important area of research that aims to personalize cancer therapy, optimizing treatment plans for maximum effectiveness while minimizing potential negative effects. Despite advances in deep learning techniques, effective integration of multi-omics data for drug response prediction remains challenging. In this paper, a regression method using a deep ResNet for CDR (DRN-CDR) prediction is proposed. We aim to explore the potential of restricting the feature set to cancer genes alone for drug response prediction. Here, multi-omics data such as gene expression, mutation, and methylation data were integrated with the molecular structural information of drugs to predict the IC50 values of drugs. Drug features are extracted with a uniform graph convolution network, while cell line features are extracted using a combination of a convolutional neural network and fully connected networks. These features are then concatenated and fed into a deep ResNet to predict IC50 values for drug-cell line pairs. The proposed method yielded a higher Pearson correlation coefficient (rp) of 0.7938 and a lower root mean squared error (RMSE) of 0.92 than comparable methods such as tCNNS, MOLI, DeepCDR, TGSA, NIHGCN, DeepTTA, GraTransDRP, and TSGCNN. Further, when the model is extended to a classification problem to categorize drugs as sensitive or resistant, we achieved AUC and AUPR measures of 0.7623 and 0.7691, respectively. Drugs such as Tivozanib, SNX-2112, CGP-60474, PHA-665752, and Foretinib exhibited low median IC50 values and were found to be effective anti-cancer drugs. Case studies with different TCGA cancer types also revealed the effectiveness of SNX-2112, CGP-60474, Foretinib, Cisplatin, and Vinblastine. This consistent pattern strongly suggests the effectiveness of the model in predicting CDR.
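As an illustration of the architecture described above — a graph-convolution branch for drug structure, a CNN/fully connected branch for cell-line omics, and a residual regression head for IC50 — the following is a minimal PyTorch sketch. Layer sizes, module names such as `DrugGCNBranch`, and the pooling choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DrugGCNBranch(nn.Module):
    """Toy graph-convolution branch: H = ReLU(A_hat X W), then mean-pool over atoms."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat, x):                 # a_hat: (N, N) normalized adjacency, x: (N, in_dim)
        h = torch.relu(a_hat @ self.lin(x))
        return h.mean(dim=0)                     # graph-level drug embedding

class CellLineBranch(nn.Module):
    """1-D CNN over a concatenated omics profile, followed by a fully connected layer."""
    def __init__(self, in_len, hid_dim):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=7, stride=3)
        out_len = (in_len - 7) // 3 + 1
        self.fc = nn.Linear(8 * out_len, hid_dim)

    def forward(self, omics):                    # omics: (in_len,)
        h = torch.relu(self.conv(omics.view(1, 1, -1)))
        return torch.relu(self.fc(h.flatten()))

class ResBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(x + self.fc2(torch.relu(self.fc1(x))))

class CDRRegressor(nn.Module):
    """Concatenate the two embeddings and regress IC50 with a small residual stack."""
    def __init__(self, hid_dim=64, n_blocks=3):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlock(2 * hid_dim) for _ in range(n_blocks)])
        self.out = nn.Linear(2 * hid_dim, 1)

    def forward(self, drug_emb, cell_emb):
        return self.out(self.blocks(torch.cat([drug_emb, cell_emb])))

# Hypothetical shapes: a 30-atom drug graph with 75-dim atom features, a 3000-dim omics profile.
drug_emb = DrugGCNBranch(75, 64)(torch.eye(30), torch.randn(30, 75))
cell_emb = CellLineBranch(3000, 64)(torch.randn(3000))
ic50 = CDRRegressor()(drug_emb, cell_emb)
```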


Subject(s)
Antineoplastic Agents; Humans; Antineoplastic Agents/pharmacology; Antineoplastic Agents/chemistry; Neoplasms/drug therapy; Cell Line, Tumor; Neural Networks, Computer; Deep Learning; Drug Screening Assays, Antitumor; Multiomics
2.
Brain Sci ; 14(8)2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39199511

ABSTRACT

Electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) can objectively reflect a person's emotional state and have been widely studied in emotion recognition. However, effective feature fusion and discriminative feature learning from EEG-fNIRS data remain challenging. To improve the accuracy of emotion recognition, a graph convolution and capsule attention network model (GCN-CA-CapsNet) is proposed. First, EEG-fNIRS signals are collected from 50 subjects whose emotions are induced by video clips. Then, EEG and fNIRS features are extracted and fused by graph convolution with a Pearson correlation adjacency matrix to generate higher-quality primary capsules. Finally, a capsule attention module is introduced to assign different weights to the primary capsules, so that higher-quality primary capsules are selected to generate better classification capsules in the dynamic routing mechanism. We validate the efficacy of the proposed method on our emotional EEG-fNIRS dataset with an ablation study. Extensive experiments demonstrate that the proposed GCN-CA-CapsNet method achieves more satisfactory performance than state-of-the-art methods, with average accuracy gains of 3-11%.
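The graph-convolution step over a Pearson-correlation adjacency matrix mentioned above can be sketched as follows. This is a minimal, hedged illustration in PyTorch; the channel count, feature dimension, and function names (`pearson_adjacency`, `graph_convolve`) are assumptions rather than details taken from the paper.

```python
import torch

def pearson_adjacency(features: torch.Tensor, keep_abs: bool = True) -> torch.Tensor:
    """Adjacency matrix from pairwise Pearson correlations between node feature rows
    (e.g., one row per fused EEG/fNIRS channel)."""
    x = features - features.mean(dim=1, keepdim=True)
    x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
    adj = x @ x.t()                       # cosine of centered rows equals Pearson r
    return adj.abs() if keep_abs else adj

def graph_convolve(adj: torch.Tensor, features: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One symmetrically normalized graph-convolution step: D^-1/2 A D^-1/2 X W."""
    deg = adj.sum(dim=1).clamp(min=1e-8)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return torch.relu(d_inv_sqrt @ adj @ d_inv_sqrt @ features @ weight)

# Hypothetical usage: 36 fused channels with 16 features each, projected to 8 dimensions
# to form the primary-capsule inputs.
feats = torch.randn(36, 16)
capsule_inputs = graph_convolve(pearson_adjacency(feats), feats, torch.randn(16, 8))
```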

3.
Sensors (Basel) ; 24(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123903

ABSTRACT

The manufacturing industry has been operating within a constantly evolving technological environment, underscoring the importance of maintaining the efficiency and reliability of manufacturing processes. Motor-related failures, especially bearing defects, are common and serious issues in manufacturing processes. Bearings provide accurate and smooth movements and play essential roles in mechanical equipment with shafts. Given their importance, bearing failure diagnosis has been extensively studied. However, the imbalance in failure data and the complexity of time series data make diagnosis challenging. Conventional AI models (convolutional neural networks (CNNs), long short-term memory (LSTM), support vector machine (SVM), and extreme gradient boosting (XGBoost)) face limitations in diagnosing such failures. To address this problem, this paper proposes a bearing failure diagnosis model using a graph convolution network (GCN)-based LSTM autoencoder with self-attention. The model was trained on data extracted from the Case Western Reserve University (CWRU) dataset and a fault simulator testbed. The proposed model achieved 97.3% accuracy on the CWRU dataset and 99.9% accuracy on the fault simulator dataset.
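A stripped-down version of the reconstruction idea — an LSTM autoencoder with a self-attention layer, scoring windows by reconstruction error — is sketched below in PyTorch. It omits the graph-convolution stage of the proposed model, and all sizes (window length, hidden width, number of heads) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttnLSTMAutoencoder(nn.Module):
    """LSTM encoder-decoder with self-attention over the encoded sequence;
    the reconstruction error of a vibration window can serve as a fault score."""
    def __init__(self, n_features: int, hidden: int = 64, n_heads: int = 4):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, n_features)
        enc, _ = self.encoder(x)
        ctx, _ = self.attn(enc, enc, enc)      # self-attention over time steps
        dec, _ = self.decoder(ctx)
        return self.out(dec)                   # reconstruction of the input window

model = AttnLSTMAutoencoder(n_features=2)
window = torch.randn(8, 128, 2)                # 8 windows, 128 samples, 2 sensor channels
recon = model(window)
fault_score = (recon - window).pow(2).mean(dim=(1, 2))   # larger error -> more likely faulty
```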

4.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123933

ABSTRACT

With the development of precision sensing instruments and data storage devices, the fusion of multi-sensor data in gearbox fault diagnosis has attracted much attention. However, existing methods have difficulty capturing the local temporal dependencies of multi-sensor monitoring information, and unavoidable noise severely decreases the accuracy of multi-sensor information fusion diagnosis. To address these issues, this paper proposes a fault diagnosis method based on dynamic graph convolutional neural networks and hard threshold denoising (DDGCN). First, considering that the relationships between monitoring data from different sensors change over time, a dynamic graph structure is adopted to model the temporal dependencies of multi-sensor data, and a graph convolutional neural network is then constructed to exchange and extract temporal information across sensors. Second, to limit the influence of noise in practical engineering, a hard threshold denoising strategy is designed and a learnable hard threshold denoising layer is embedded into the graph neural network. Experimental fault datasets from two typical gearbox fault test benches under environmental noise are used to verify the effectiveness of the proposed method in gearbox fault diagnosis. The experimental results show that the proposed DDGCN method achieves an average diagnostic accuracy of up to 99.7% under different levels of environmental noise, demonstrating good noise resistance.
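The learnable hard-threshold denoising layer can be approximated as below. Because a strict hard threshold gives the threshold parameter no gradient, this sketch uses a steep sigmoid gate as a differentiable surrogate; the channel count, initial threshold, and sharpness constant are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class LearnableHardThreshold(nn.Module):
    """Channel-wise thresholding with a trainable threshold tau:
    activations with |x| well below tau are suppressed toward zero."""
    def __init__(self, num_channels: int, init_tau: float = 0.1, sharpness: float = 50.0):
        super().__init__()
        self.tau = nn.Parameter(torch.full((num_channels,), init_tau))
        self.sharpness = sharpness

    def forward(self, x):                      # x: (batch, channels, time)
        tau = self.tau.abs().view(1, -1, 1)    # keep the threshold non-negative
        gate = torch.sigmoid(self.sharpness * (x.abs() - tau))
        return x * gate                        # smooth stand-in for x * 1[|x| > tau]

layer = LearnableHardThreshold(num_channels=4)
denoised = layer(torch.randn(2, 4, 256))       # 2 samples, 4 sensor channels, 256 time steps
```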

5.
PeerJ Comput Sci ; 10: e2216, 2024.
Article in English | MEDLINE | ID: mdl-39145234

ABSTRACT

Piwi-interacting RNA (piRNA) is a type of non-coding small RNA that is highly expressed in mammalian testis. PiRNA has been implicated in various human diseases, but the experimental validation of piRNA-disease associations is costly and time-consuming. In this article, a novel computational method for predicting piRNA-disease associations using a multi-channel graph variational autoencoder (MC-GVAE) is proposed. This method integrates four types of similarity networks for piRNAs and diseases, which are derived from piRNA sequences, disease semantics, piRNA Gaussian Interaction Profile (GIP) kernel, and disease GIP kernel, respectively. These networks are modeled by a graph VAE framework, which can learn low-dimensional and informative feature representations for piRNAs and diseases. Then, a multi-channel method is used to fuse the feature representations from different networks. Finally, a three-layer neural network classifier is applied to predict the potential associations between piRNAs and diseases. The method was evaluated on a benchmark dataset containing 5,002 experimentally validated associations with 4,350 piRNAs and 21 diseases, constructed from the piRDisease v1.0 database. It achieved state-of-the-art performance, with an average AUC value of 0.9310 and an AUPR value of 0.9247 under five-fold cross-validation. This demonstrates the method's effectiveness and superiority in piRNA-disease association prediction.
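Of the four similarity networks listed above, the Gaussian Interaction Profile (GIP) kernel is the one with a simple closed form, sketched below with NumPy. The toy association matrix and the bandwidth convention (gamma scaled by the mean squared profile norm) follow the usual formulation and are given as an assumption, not as the authors' exact code.

```python
import numpy as np

def gip_kernel(interaction: np.ndarray, gamma_prime: float = 1.0) -> np.ndarray:
    """GIP kernel similarity between the rows of a binary association matrix
    (e.g., piRNA x disease); pass the transpose to get the disease-side kernel."""
    norms_sq = (interaction ** 2).sum(axis=1)
    gamma = gamma_prime / max(norms_sq.mean(), 1e-12)        # bandwidth from the mean profile norm
    sq_dist = norms_sq[:, None] + norms_sq[None, :] - 2 * interaction @ interaction.T
    return np.exp(-gamma * np.clip(sq_dist, 0.0, None))

# Hypothetical toy matrix: 5 piRNAs x 3 diseases.
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 0, 1]], dtype=float)
pirna_sim = gip_kernel(A)       # (5, 5) piRNA GIP similarity
disease_sim = gip_kernel(A.T)   # (3, 3) disease GIP similarity
```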

6.
ISA Trans ; 152: 331-357, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38987043

ABSTRACT

Prediction of Remaining Useful Life (RUL) for Rolling Element Bearings (REB) has attracted widespread attention from academia and industry. However, several bottlenecks remain, including the effective utilization of multi-sensor data, the interpretability of prediction models, and prediction across the entire life cycle, all of which limit prediction accuracy. In view of this, we propose a knowledge-based, explainable, life-cycle RUL prediction framework. First, considering the feature fusion of fast-changing signals, the Pearson correlation coefficient matrix and a feature transformation objective function are incorporated into an Improved Graph Convolutional Autoencoder. Furthermore, to integrate the multi-source signals, a Cascaded Multi-head Self-attention Autoencoder with Characteristic Guidance is proposed to construct health indicators. The whole life cycle of the REB is then divided into different stages based on Continuous Gradient Recognition with Outlier Detection. With the development of a Measurement-based Correction Life Formula and a Bidirectional Recursive Gated Dual Attention Unit, accurate life-cycle RUL prediction is achieved. Data from a self-designed test rig and the PHM 2012 prognostic challenge datasets are analyzed with the proposed framework and five existing prediction models. Compared with the strongest of the five, the proposed framework demonstrates significant improvements: for the self-designed test rig data, a 1.66% gain in Corrected Cumulative Relative Accuracy (CCRA) and a 49.00% improvement in Coefficient of Determination (R2); for the PHM 2012 datasets, a 4.04% increase in CCRA and a 120.72% boost in R2.

7.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066156

ABSTRACT

Semi-supervised graph convolutional networks (SSGCNs) have been proven to be effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict the classification performance, and the computational demands of a graph convolution network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves the graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch and the local pixel features from the CNN branch, this method leverages complementary features to fully learn rich spatial-spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in terms of classification efficiency and accuracy across three benchmark data sets.

8.
Brain Sci ; 14(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38790434

ABSTRACT

Functional connectivity (FC) obtained from resting-state functional magnetic resonance imaging has been integrated with machine learning algorithms to deliver consistent and reliable brain disease classification outcomes. However, in classical learning procedures, custom-built specialized feature selection techniques are typically used to filter out uninformative features from FC patterns so that models generalize efficiently on the datasets. The ability of convolutional neural networks (CNN) and other deep learning models to extract informative features from data with grid structure (such as images) has led to the surge in popularity of these techniques. However, the designs of many existing CNN models still fail to exploit the relationships between entities of graph-structured data (such as networks). Therefore, the graph convolution network (GCN) has been suggested as a means of uncovering the intricate structure of brain network data, with the potential to substantially improve classification accuracy. Furthermore, overfitting in classifiers can be largely attributed to the limited number of available training samples. Recently, the generative adversarial network (GAN) has been widely used in the medical field because it can generate synthetic images to cope with data scarcity and patient-privacy constraints. In our previous work, GCN and GAN models were designed to investigate FC patterns for diagnosis tasks, and their effectiveness was tested on the ABIDE-I dataset. In this paper, the models are further applied to FC data derived from more public datasets (ADHD, ABIDE-II, and ADNI) and our in-house dataset (PTSD) to assess how well they generalize across data types. The results of a number of experiments show the strong ability of the GAN to mimic FC data and achieve high performance in disease prediction. When employing the GAN for data augmentation, the diagnostic accuracy across the ADHD-200, ABIDE-II, and ADNI datasets surpasses that of other machine learning models, including results achieved with BrainNetCNN. Specifically, accuracy increased from 67.74% to 73.96% with the GAN in ADHD, from 70.36% to 77.40% in ABIDE-II, and reached 52.84% and 88.56% in ADNI for multiclass and binary classification, respectively. GCN also obtains decent results, with the best accuracy on the ADHD datasets (71.38% for multinomial and 75% for binary classification) and the second-best accuracy on the ABIDE-II dataset (72.28% and 75.16%, respectively). Both GAN and GCN achieved the highest accuracy for the PTSD dataset, reaching 97.76%. However, some limitations remain to be addressed, and both methods offer many opportunities for disease prediction and diagnosis.
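The GAN-based augmentation of FC vectors can be sketched as a plain fully connected GAN, as below. The feature dimension (the upper triangle of a 116 x 116 connectivity matrix), network widths, and training-loop details are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

FC_DIM = 6670          # upper triangle of a 116 x 116 connectivity matrix (assumed atlas size)
NOISE_DIM = 128

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, FC_DIM), nn.Tanh(),         # synthetic FC values scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(FC_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                          # real/fake logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_fc):                        # real_fc: (batch, FC_DIM), values in [-1, 1]
    batch = real_fc.size(0)
    fake_fc = generator(torch.randn(batch, NOISE_DIM))

    # Discriminator update: real -> 1, fake -> 0.
    d_loss = bce(discriminator(real_fc), torch.ones(batch, 1)) + \
             bce(discriminator(fake_fc.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: make fakes look real to the discriminator.
    g_loss = bce(discriminator(fake_fc), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

losses = train_step(torch.rand(16, FC_DIM) * 2 - 1)   # one step on a placeholder batch
```
Synthetic vectors drawn from the trained generator would then simply be appended to the real training set before fitting the classifier.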

9.
Comput Biol Med ; 173: 108361, 2024 May.
Article in English | MEDLINE | ID: mdl-38569236

ABSTRACT

Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the use of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, we can substantially reduce annotation costs. However, this approach results in an incompletely annotated dataset, which presents challenges when training deep learning models. Within the FULFIL algorithm, we employ a Graph Convolution Network (GCN) to discover the relationships between annotated and unannotated nodules and self-adaptively complete the annotation. Meanwhile, a teacher-student framework is employed for self-adaptive learning on the completed annotation dataset. Furthermore, we have designed a Dual-Views loss to leverage different data perspectives, helping the model acquire robust features and enhancing generalization. We carried out experiments using the LUng Nodule Analysis (LUNA) dataset, achieving a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level annotations for nodules, outperforming comparative methods by 7.00%. Experimental comparisons were also conducted to evaluate the performance of our model and human experts on the test dataset; the results demonstrate that our model achieves a level of performance comparable to that of human experts. The comprehensive experimental results demonstrate that FULFIL can effectively leverage an incomplete pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting in lung nodule detection.
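The teacher-student part of the pipeline can be sketched as a standard EMA-teacher pseudo-labeling loop, as below. The backbone is replaced by a tiny MLP, and the confidence threshold, loss weighting, and helper names (`ema_update`, `training_step`) are assumptions, not the FULFIL implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.99) -> None:
    """Exponential-moving-average update of the teacher weights from the student."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def training_step(student, teacher, optimizer, feats, labels, labeled_mask, thr=0.9):
    """Supervised loss on annotated candidates plus a pseudo-label loss on confident
    unannotated ones; assumes each batch contains at least one annotated candidate."""
    logits = student(feats).squeeze(-1)
    with torch.no_grad():
        teacher_prob = torch.sigmoid(teacher(feats).squeeze(-1))

    sup_loss = F.binary_cross_entropy_with_logits(logits[labeled_mask],
                                                   labels[labeled_mask].float())
    confident = (~labeled_mask) & ((teacher_prob > thr) | (teacher_prob < 1 - thr))
    if confident.any():
        pseudo = (teacher_prob > 0.5).float()
        unsup_loss = F.binary_cross_entropy_with_logits(logits[confident], pseudo[confident])
    else:
        unsup_loss = logits.new_zeros(())

    loss = sup_loss + 0.5 * unsup_loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    ema_update(teacher, student)
    return loss.item()

# Hypothetical setup: a small MLP stands in for the detection backbone.
student = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
teacher = copy.deepcopy(student)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
feats, labels = torch.randn(32, 256), torch.randint(0, 2, (32,))
labeled = torch.zeros(32, dtype=torch.bool); labeled[:8] = True   # only 25% annotated
training_step(student, teacher, opt, feats, labels, labeled)
```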


Subject(s)
Deep Learning; Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Solitary Pulmonary Nodule/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Lung/diagnostic imaging
10.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38426320

ABSTRACT

Protein subcellular localization (PSL) is very important for understanding protein function, and protein movement between subcellular niches within cells plays a fundamental role in regulating biological processes. Mass spectrometry-based spatio-temporal proteomics technologies can provide new insights into protein translocation, but identifying reliable translocation events remains challenging due to noise interference and insufficient data mining. We propose a semi-supervised graph convolution network (GCN)-based framework, termed TransGCN, that infers protein translocation events from spatio-temporal proteomics. Based on expanded multiple distance features and joint graph representations of proteins, TransGCN uses the semi-supervised GCN to transfer knowledge from proteins with known PSLs for predicting protein localization and translocation. Our results demonstrate that TransGCN outperforms current state-of-the-art methods in identifying protein translocations, especially in coping with batch effects. It also exhibits excellent accuracy in PSL prediction. TransGCN is freely available on GitHub at https://github.com/XuejiangGuo/TransGCN.
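The semi-supervised element — a GCN whose loss is computed only on proteins with known PSLs while unlabeled proteins still participate in message passing — can be sketched as follows. The graph construction, feature dimension, and class count here are placeholders, not the expanded distance features or joint graph representations used by TransGCN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGCN(nn.Module):
    """Minimal two-layer GCN over a precomputed, symmetrically normalized protein graph."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, a_norm, x):                # a_norm: (N, N), x: (N, in_dim)
        h = torch.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)               # per-protein localization logits

def semi_supervised_loss(logits, labels, labeled_mask):
    """Cross-entropy only on proteins with known localizations; unlabeled proteins
    still shape the predictions through the shared graph convolution."""
    return F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

# Hypothetical sizes: 500 proteins, 40 features, 10 subcellular compartments.
N, D, C = 500, 40, 10
adj = (torch.rand(N, N) > 0.98).float()
adj = ((adj + adj.t()) > 0).float() + torch.eye(N)
d_inv_sqrt = torch.diag(adj.sum(1).pow(-0.5))
a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
labels = torch.randint(0, C, (N,))
mask = torch.rand(N) < 0.3                        # only 30% of PSLs are known
logits = TwoLayerGCN(D, 64, C)(a_norm, torch.randn(N, D))
loss = semi_supervised_loss(logits, labels, mask)
```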


Subject(s)
Coping Skills; Proteomics; Data Mining; Mass Spectrometry; Protein Transport
11.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38494887

ABSTRACT

The early diagnosis of autism spectrum disorder (ASD) has been extensively facilitated by resting-state fMRI (rs-fMRI). With rs-fMRI, the functional brain network (FBN) has gained much attention in diagnosing ASD. As a promising strategy, graph convolutional networks (GCN) provide an attractive approach to simultaneously extract FBN features and facilitate ASD identification, replacing manual feature extraction from the FBN. Previous GCN studies primarily explored the topology and connection weights of the estimated FBNs while focusing on only a single connection pattern. This approach fails to exploit the potential complementary information offered by different connection patterns of FBNs, inherently limiting performance. To enhance diagnostic performance, we propose a multipattern graph convolution network (MPGCN) that integrates multiple connection patterns to improve the accuracy of ASD diagnosis. As an initial endeavor, we integrate information from multiple connection patterns by incorporating multiple graph convolution modules. The effectiveness of the MPGCN approach is evaluated by analyzing rs-fMRI scans from a cohort of 92 subjects sourced from the publicly accessible Autism Brain Imaging Data Exchange database. Notably, the experiments demonstrate that our model achieves an accuracy of 91.1% and an area under the ROC curve of 0.9742. The implementation code is available at https://github.com/immutableJackz/MPGCN.
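The multipattern idea — one graph-convolution module per FBN connection pattern, with the branch outputs fused for the final decision — can be sketched as below. Branch count, ROI count, and the mean-pool/concatenate fusion are illustrative assumptions rather than MPGCN's exact design.

```python
import torch
import torch.nn as nn

class PatternGCN(nn.Module):
    """One graph-convolution module tied to a single FBN connection pattern."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, adj, x):                    # adj: (N, N) pattern matrix, x: (N, in_dim)
        return torch.relu(adj @ self.lin(x))

class MultiPatternClassifier(nn.Module):
    """Run one GCN per connection pattern, pool each branch over nodes,
    concatenate the branch summaries, and classify the subject."""
    def __init__(self, n_patterns, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(PatternGCN(in_dim, hid_dim) for _ in range(n_patterns))
        self.head = nn.Linear(n_patterns * hid_dim, n_classes)

    def forward(self, adjs, x):                   # adjs: list of (N, N) pattern matrices
        summaries = [branch(adj, x).mean(dim=0) for branch, adj in zip(self.branches, adjs)]
        return self.head(torch.cat(summaries))

# Hypothetical usage: 116 ROIs, two connection patterns (e.g., full and partial correlation).
x = torch.randn(116, 116)                         # node features, e.g., connectivity rows
adjs = [torch.rand(116, 116), torch.rand(116, 116)]
logits = MultiPatternClassifier(n_patterns=2, in_dim=116, hid_dim=32)(adjs, x)
```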


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Humans; Autism Spectrum Disorder/diagnostic imaging; Brain/diagnostic imaging; Databases, Factual; ROC Curve
12.
Front Artif Intell ; 7: 1331853, 2024.
Article in English | MEDLINE | ID: mdl-38487743

ABSTRACT

The application of artificial intelligence technology in the medical field has become increasingly prevalent, yet there remains significant room for deeper exploration. Within orthopedics, a field that integrates closely with AI due to its extensive data requirements, rotator cuff injuries are a commonly encountered joint condition. One of the most severe complications following rotator cuff repair surgery is the recurrence of tears, which has a significant impact on both patients and healthcare professionals. To address this issue, we used the EV-GCN algorithm to train a predictive model. We collected the medical records of 1,631 patients who underwent rotator cuff repair surgery at a single center over a span of 5 years. Our model predicted postoperative re-tears before surgery from 62 preoperative variables with an accuracy of 96.93%, and achieved an accuracy of 79.55% on an independent external dataset of 518 cases from other centers, outperforming human doctors in predicting outcomes. Through this methodology, our aim is to use preoperative prediction models to support informed medical decisions during and after surgery, leading to improved treatment effectiveness. This research method and strategy can be applied to other medical fields, and the findings can assist in making healthcare decisions.

13.
Comput Biol Med ; 170: 108048, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38310804

ABSTRACT

Illuminating associations between diseases and genes can help reveal the pathogenesis of syndromes and contribute to treatments, but a large number of associations remain unexplored. To identify novel disease-gene associations, many computational methods have been developed using disease- and gene-related prior knowledge. However, these methods still perform relatively poorly due to limited external data sources and the inevitable noise in the prior knowledge. In this study, we have developed a new method, the Self-Supervised Mutual Infomax Graph Convolution Network (MiGCN), to predict disease-gene associations under the guidance of external disease-disease and gene-gene collaborative graphs. Noise within the collaborative graphs is suppressed by maximizing the mutual information between nodes and their neighbors through a graphical mutual infomax layer. In parallel, node interactions are strengthened by a novel informative message-passing layer that improves the learning ability of the graph neural network. Extensive experiments show that our model improves on the state-of-the-art method by more than 8% in AUC. The datasets, source code, and trained models of MiGCN are available at https://github.com/biomed-AI/MiGCN.
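The node-neighborhood mutual-information objective can be sketched in the style of Deep Graph Infomax: true node embeddings should score higher against a graph summary than embeddings built from shuffled features. This is a generic formulation offered as an assumption about how such an infomax layer can work, not MiGCN's specific layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualInfomax(nn.Module):
    """DGI-style objective: discriminate (node embedding, graph summary) pairs computed
    on the real graph from pairs computed on a corrupted (feature-shuffled) graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.score = nn.Bilinear(hid_dim, hid_dim, 1)

    def embed(self, a_norm, x):
        return torch.relu(a_norm @ self.lin(x))

    def forward(self, a_norm, x):
        pos = self.embed(a_norm, x)                                   # (N, hid)
        neg = self.embed(a_norm, x[torch.randperm(x.size(0))])        # corrupted neighborhoods
        summary = torch.sigmoid(pos.mean(dim=0, keepdim=True)).expand_as(pos)
        logits = torch.cat([self.score(pos, summary), self.score(neg, summary)])
        targets = torch.cat([torch.ones(pos.size(0), 1), torch.zeros(neg.size(0), 1)])
        return F.binary_cross_entropy_with_logits(logits, targets)

a_norm = torch.eye(50)                       # placeholder normalized collaborative graph
loss = MutualInfomax(in_dim=32, hid_dim=64)(a_norm, torch.randn(50, 32))
```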


Subject(s)
Learning; Neural Networks, Computer; Humans; Software; Syndrome
14.
Brief Funct Genomics ; 23(2): 128-137, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-37208992

ABSTRACT

Determining cell types from single-cell transcriptomics data is fundamental for downstream analysis. However, cell clustering and data imputation still face computational challenges due to the high dropout rate, sparsity, and dimensionality of single-cell data. Although some deep learning-based solutions have been proposed to handle these challenges, they still cannot sensibly leverage gene attribute information and cell topology to achieve consistent clustering. In this paper, we present scDeepFC, a deep information fusion-based method for single-cell clustering and data imputation. Specifically, scDeepFC uses a deep auto-encoder (DAE) network and a deep graph convolution network to embed high-dimensional gene attribute information and high-order cell-cell topological information into separate low-dimensional representations, and then fuses them via a deep information fusion network to generate a more comprehensive and accurate consensus representation. In addition, scDeepFC integrates the zero-inflated negative binomial (ZINB) model into the DAE to model dropout events. By jointly optimizing the ZINB loss and the cell graph reconstruction loss, scDeepFC generates a salient embedding representation for clustering cells and imputing missing data. Extensive experiments on real single-cell datasets show that scDeepFC outperforms other popular single-cell analysis methods, and that both gene attribute and cell topology information improve cell clustering.
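The ZINB loss used to model dropout events has a standard closed form, sketched below. The helper name `zinb_nll` and the toy tensors are assumptions; the formula itself is the usual zero-inflated negative binomial negative log-likelihood.

```python
import torch

def zinb_nll(x, mu, theta, pi, eps: float = 1e-8):
    """Negative log-likelihood of a zero-inflated negative binomial.
    x: observed counts, mu: predicted mean, theta: dispersion, pi: dropout probability."""
    log_theta_mu = torch.log(theta + mu + eps)
    # log NB(x | mu, theta)
    log_nb = (torch.lgamma(x + theta) - torch.lgamma(theta) - torch.lgamma(x + 1)
              + theta * (torch.log(theta + eps) - log_theta_mu)
              + x * (torch.log(mu + eps) - log_theta_mu))
    # Zero-inflated mixture: zeros may come from dropout or from the NB itself.
    log_nb_zero = theta * (torch.log(theta + eps) - log_theta_mu)     # log NB(0 | mu, theta)
    log_zero = torch.log(pi + (1 - pi) * torch.exp(log_nb_zero) + eps)
    log_nonzero = torch.log(1 - pi + eps) + log_nb
    return -torch.where(x < 0.5, log_zero, log_nonzero).mean()

# Hypothetical decoder outputs for a batch of 4 cells x 2000 genes.
x = torch.poisson(torch.full((4, 2000), 0.3))
mu = torch.rand(4, 2000) + 0.1
theta = torch.rand(4, 2000) + 0.5
pi = torch.rand(4, 2000) * 0.5
loss = zinb_nll(x, mu, theta, pi)
```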


Subject(s)
Gene Expression Profiling; Single-Cell Gene Expression Analysis; Cluster Analysis; Single-Cell Analysis; Sequence Analysis, RNA
15.
BMC Bioinformatics ; 24(1): 476, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-38097930

ABSTRACT

A growing body of research has consistently demonstrated the intricate correlation between the human microbiome and human well-being. Microbes can affect the efficacy and toxicity of drugs through various pathways and can influence the occurrence and metastasis of tumors. In clinical practice, it is therefore crucial to elucidate the associations between microbes and diseases. Although traditional biological experiments identify these associations accurately, they are time-consuming, expensive, and susceptible to experimental conditions, so conducting extensive biological experiments to screen potential microbe-disease associations is challenging. Computational methods can address these problems, but previous methods make poor use of node features and their prediction accuracy still needs improvement. To address this issue, we propose the DAEGCNDF model for predicting potential associations between microbes and diseases. Our model calculates four similarity features for each microbe and disease, which are fused to obtain a comprehensive feature matrix representing microbes and diseases. The model first uses a graph convolutional network module to extract low-rank features that carry the graph information of microbes and diseases, then uses a deep sparse auto-encoder to extract high-rank features of microbe-disease pairs, and finally splices the low-rank and high-rank features to improve the utilization of node features. Deep Forest is then used to predict potential microbe-disease associations. The experimental results show that combining low-rank and high-rank features improves model performance and that Deep Forest has better classification performance than the baseline models.


Subject(s)
Algorithms; Neoplasms; Humans; Computational Biology/methods
16.
Comput Biol Med ; 167: 107583, 2023 12.
Article in English | MEDLINE | ID: mdl-37890420

ABSTRACT

Accurate and automatic segmentation of medical images is a key step in clinical diagnosis and analysis. Following the successful application of Transformer models in computer vision, researchers have begun to explore Transformers for medical image segmentation, especially in combination with encoder-decoder convolutional neural networks, which has achieved remarkable results. However, most studies have combined Transformers with CNNs at a single scale or processed only the highest-level semantic features, ignoring the rich location information carried by lower-level features. At the same time, for problems such as blurred structural boundaries and heterogeneous textures, most existing methods simply concatenate contour information to capture target boundaries; such methods cannot capture the precise outline of the target and ignore the potential relationship between the boundary and the region. In this paper, we propose TGDAUNet, which consists of a dual-branch backbone of CNNs and Transformers and a parallel attention mechanism, to achieve accurate segmentation of lesions in medical images. First, high-level semantic features from the CNN backbone branch are fused at multiple scales so that high-level and low-level features complement each other's location and spatial information. We further use a polarized self-attention (PSA) module to reduce the impact of redundant information introduced by the multiple scales, to better couple with the features extracted from the Transformer backbone branch, and to establish global contextual long-range dependencies at multiple scales. In addition, we design a Reverse Graph-reasoned Fusion (RGF) module and a Feature Aggregation (FA) module to jointly guide the global context. The FA module aggregates high-level semantic features to generate an initial global predictive segmentation map. The RGF module captures non-significant boundary features in the initial or subsequent global prediction maps through a reverse attention mechanism and establishes a graph reasoning module to explore the potential semantic relationships between boundaries and regions, further refining the target boundaries. Finally, to validate the effectiveness of the proposed method, we compare it with currently popular methods on the CVC-ClinicDB, Kvasir-SEG, ETIS, CVC-ColonDB, and CVC-300 datasets, as well as the skin cancer segmentation datasets ISIC-2016 and ISIC-2017. Extensive experimental results show that our method outperforms the currently popular methods. Source code is released at https://github.com/sd-spf/TGDAUNet.
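The reverse-attention step at the heart of boundary refinement can be sketched as below: the coarse prediction is upsampled, inverted, and used to re-weight encoder features so the refinement branch attends to regions the current prediction misses. This is a plain reverse-attention block offered for illustration; it does not include the graph-reasoning part of the RGF module, and channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttentionRefine(nn.Module):
    """Weight features by 1 - sigmoid(coarse map) and predict a residual correction
    that is added back to the upsampled coarse segmentation logits."""
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, feats, coarse_pred):          # feats: (B, C, H, W), coarse_pred: (B, 1, h, w)
        coarse_up = F.interpolate(coarse_pred, size=feats.shape[-2:],
                                  mode="bilinear", align_corners=False)
        reverse_weight = 1.0 - torch.sigmoid(coarse_up)   # emphasize currently "background" pixels
        residual = self.refine(feats * reverse_weight)
        return coarse_up + residual                        # refined segmentation logits

feats = torch.randn(2, 32, 44, 44)
coarse = torch.randn(2, 1, 11, 11)
refined = ReverseAttentionRefine(32)(feats, coarse)        # (2, 1, 44, 44)
```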


Subject(s)
Neural Networks, Computer; Skin Neoplasms; Humans; Problem Solving; Semantics; Software; Image Processing, Computer-Assisted
17.
Neural Netw ; 168: 161-170, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37757724

ABSTRACT

Graph convolutional networks have been extensively employed in semi-supervised classification tasks. Although some studies have attempted to leverage graph convolutional networks to explore multi-view data, they mostly consider the fusion of features and topology separately, leading to underutilization of the consistency and complementarity of multi-view data. In this paper, we propose an end-to-end joint fusion framework that simultaneously conducts consistent feature integration and adaptive topology adjustment. Specifically, to capture feature consistency, we construct a deep matrix decomposition module that maps data from different views onto a common feature space, obtaining a consistent feature representation. Moreover, we design a more flexible graph convolution that adaptively learns a more robust topology; a dynamic topology can greatly reduce the influence of unreliable information and yields a more adaptive representation. As a result, our method jointly designs an effective feature fusion module and a topology adjustment module and lets the two modules mutually enhance each other, taking full advantage of consistency and complementarity to better capture the intrinsic information. The experimental results indicate that our method surpasses state-of-the-art semi-supervised classification methods.


Subject(s)
Learning; Neural Networks, Computer
18.
Neural Netw ; 168: 105-122, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37748391

ABSTRACT

In recent years, the application of convolutional neural networks (CNNs) and graph convolutional networks (GCNs) to hyperspectral image classification (HSIC) has achieved remarkable results. However, limited labeled samples remain a major challenge when using CNNs and GCNs to classify hyperspectral images. To alleviate this problem, a double-branch fusion network of a CNN and an enhanced graph attention network (CEGAT), based on a key sample selection strategy, is proposed. First, a linear discrimination of spectral inter-class slices (LD_SICS) module is designed to eliminate spectral redundancy in HSIs. Then, a spatial-spectral correlation attention (SSCA) module is proposed to extract spatial and spectral correlation features and assign them attention weights. On the graph attention (GAT) branch, the HSI is segmented into superpixels as input to reduce the number of network parameters. In addition, an enhanced graph attention (EGAT) module is constructed to strengthen the relationships between nodes. Finally, a key sample selection (KSS) strategy is proposed to enable the network to achieve better classification performance with few labeled samples. Compared with other state-of-the-art methods, CEGAT achieves better classification performance under limited labeled samples.


Subject(s)
Neural Networks, Computer; Polymers
19.
Neural Netw ; 167: 551-558, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37696072

ABSTRACT

In the 3D skeleton-based action recognition task, learning rich spatial motion patterns and rich temporal motion patterns from body joints are two foundational yet under-explored problems. In this paper, we propose two methods to address them: (I) a novel glimpse-focus action recognition strategy that captures multi-range pose features from the whole body and key body parts jointly; and (II) a powerful temporal feature extractor, JD-TC, that enriches trajectory features by inferring different inter-frame correlations for different joints. By coupling these two proposals, we develop a powerful skeleton-based action recognition system that extracts rich pose and trajectory features from a skeleton sequence and outperforms previous state-of-the-art methods on three large-scale datasets.


Subject(s)
Learning; Skeleton; Motion (Physics); Recognition, Psychology
20.
BMC Bioinformatics ; 24(1): 363, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37759189

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is a serious developmental disorder of the brain. Recently, various deep learning methods based on functional magnetic resonance imaging (fMRI) data have been developed for the classification of ASD. Among them, graph neural networks, which generalize deep neural network models to graph-structured data, have shown great advantages. However, because the graphs constructed in these methods are homogeneous, the phenotype information of the subjects cannot be fully utilized, which limits the improvement of classification performance. METHODS: To fully utilize the phenotype information, this paper proposes a heterogeneous graph convolutional attention network (HCAN) model to classify ASD. By combining an attention mechanism and a heterogeneous graph convolutional network, the HCAN extracts important aggregated features. The model consists of a multilayer HCAN feature extractor and a multilayer perceptron (MLP) classifier. First, a heterogeneous population graph is constructed from the fMRI and phenotypic data. Then, a multilayer HCAN mines graph-based features from the heterogeneous graph. Finally, the extracted features are fed into an MLP for the final classification. RESULTS: The proposed method is assessed on the Autism Brain Imaging Data Exchange (ABIDE) repository; in total, 871 subjects in the ABIDE I dataset are used for the classification task. The best classification accuracy achieved is 82.9%. Compared to other methods in the literature that use exactly the same subjects, the proposed method surpasses the best reported result. CONCLUSIONS: The proposed method effectively integrates heterogeneous graph convolutional networks with a semantic attention mechanism so that the phenotype features of the subjects can be fully utilized. Moreover, it shows great potential in the diagnosis of brain functional disorders with fMRI data.
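For readers unfamiliar with population graphs built from imaging plus phenotype data, the sketch below shows a common (homogeneous) construction in the spirit of earlier fMRI GCN work: edge weights combine subject-to-subject imaging similarity with agreement on phenotype fields. It is a simplified baseline given as an assumption, not the heterogeneous graph used by HCAN.

```python
import numpy as np

def population_graph(fc_features: np.ndarray, sites: np.ndarray, sexes: np.ndarray,
                     sim_threshold: float = 0.5) -> np.ndarray:
    """Weighted adjacency over subjects: correlation between FC feature vectors,
    scaled by how many phenotype fields (site, sex) two subjects share."""
    n = fc_features.shape[0]
    img_sim = np.corrcoef(fc_features)                 # (n, n) subject-by-subject similarity
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pheno = int(sites[i] == sites[j]) + int(sexes[i] == sexes[j])
            weight = img_sim[i, j] * pheno
            if weight > sim_threshold:
                adj[i, j] = adj[j, i] = weight
    return adj + np.eye(n)                             # self-loops for graph convolution

# Hypothetical toy cohort: 20 subjects with 100-dimensional FC feature vectors.
rng = np.random.default_rng(0)
A = population_graph(rng.standard_normal((20, 100)),
                     sites=rng.integers(0, 3, 20), sexes=rng.integers(0, 2, 20))
```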


Subject(s)
Autism Spectrum Disorder; Autistic Disorder; Humans; Autism Spectrum Disorder/diagnostic imaging; Brain/diagnostic imaging; Neural Networks, Computer; Phenotype