Results 1 - 2 of 2
1.
Front Big Data ; 7: 1410424, 2024.
Article in English | MEDLINE | ID: mdl-39011466

ABSTRACT

With the increasing popularity of Graph Neural Networks (GNNs) for predictive tasks on graph-structured data, research on their explainability is becoming more critical and achieving significant progress. Although many methods have been proposed to explain the predictions of GNNs, their focus is mainly on "how to generate explanations." However, other important research questions, such as "whether the GNN explanations are inaccurate," "what if the explanations are inaccurate," and "how to adjust the model to generate more accurate explanations," have received little attention. Our previous GNN Explanation Supervision (GNES) framework demonstrated its effectiveness in improving the reasonability of local explanations while maintaining or even improving the performance of the backbone GNN model. In many applications, instead of per-sample explanations, we need global explanations that are reasonable and faithful to the domain data. Simply learning to explain GNNs locally is not an optimal path to a global understanding of the model. To improve the explanatory power of the GNES framework, we propose the Global GNN Explanation Supervision (GGNES) technique, which uses a basic trained GNN and a global extension of the loss function used in the GNES framework. This GNN creates local explanations, which are fed to a Global Logic-based GNN Explainer, an existing technique that can learn the global explanation in the form of a logic formula. These two frameworks are then trained iteratively to generate reasonable global explanations. Extensive experiments demonstrate the effectiveness of the proposed model in improving global explanations while keeping performance similar or even increasing the model's predictive power.
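The abstract describes a combined objective: a task loss plus an explanation-supervision term that aligns per-sample explanation masks with a global explanation. The sketch below is a minimal, hypothetical illustration of that idea in NumPy; the function names, the mean-squared alignment term, and the weighting `lam` are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def explanation_loss(local_masks, global_mask):
    """Mean squared deviation of each per-sample edge-importance mask
    from a shared global mask (illustrative alignment term)."""
    local_masks = np.asarray(local_masks, dtype=float)
    return float(np.mean((local_masks - global_mask) ** 2))

def ggnes_objective(pred_loss, local_masks, global_mask, lam=0.5):
    """Hypothetical combined objective: prediction loss plus a weighted
    global explanation-supervision term."""
    return pred_loss + lam * explanation_loss(local_masks, global_mask)

# Toy example: two local masks over three edges, one global mask.
local = [[0.9, 0.1, 0.0], [0.7, 0.3, 0.0]]
global_mask = np.array([0.8, 0.2, 0.0])
loss = ggnes_objective(pred_loss=0.4, local_masks=local,
                       global_mask=global_mask, lam=0.5)
```

In an iterative scheme like the one described, the global mask would itself be refreshed from the logic-based global explainer between training rounds, so the two components supervise each other.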

2.
Article in English | MEDLINE | ID: mdl-36037449

ABSTRACT

Inferring resting-state functional connectivity (FC) from anatomical brain wiring, known as structural connectivity (SC), is of enormous significance in neuroscience for understanding biological neuronal networks and treating mental diseases. Both SC and FC are networks whose nodes are brain regions; in SC, the edges are the physical fiber nerves between the nodes, while in FC, the edges are the nodes' coactivation relations. Despite the importance of SC and FC, until very recently the rapidly growing research on this topic has generally focused on either linear models or computational models that rely heavily on heuristics and simple assumptions about the mapping between FC and SC. However, the relationship between FC and SC is actually highly nonlinear and complex and contains considerable randomness; additional factors, such as the subject's age and health, can also significantly impact the SC-FC relationship and hence cannot be ignored. To address these challenges, we develop a novel SC-to-FC generative adversarial network (SF-GAN) framework for mapping SC to FC, along with additional metafeatures, based on a newly proposed graph neural network-based generative model that is capable of learning the stochasticity. Specifically, a new graph-based conditional generative adversarial network model is proposed, in which edge convolution layers encode the graph patterns in the SC into a graph representation. New edge deconvolution layers are then utilized to decode the representation back to FC. Additional metafeatures from subjects' profile information are integrated into the graph representation with newly designed sparse-regularized layers that automatically select the features that impact FC. Finally, we also propose a new post hoc explainer for SF-GAN, which can identify which subgraphs in SC strongly influence which subgraphs in FC by solving a new multilevel edge-correlation-guided graph clustering problem. The results of experiments conducted to test the new model confirm that it significantly outperforms existing state-of-the-art methods, with additional interpretability for identifying important metafeatures and subgraphs.
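The abstract describes an encoder-decoder pattern over brain networks: edge convolution layers compress an SC adjacency matrix into node-level representations, and edge deconvolution layers expand those representations back into an FC-like edge-weight matrix. The toy NumPy sketch below illustrates only that shape-level flow; the degree-based aggregation, `tanh` nonlinearity, and outer-product decoding are placeholder assumptions, not the paper's actual layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_conv(adj, w):
    """Toy 'edge convolution': aggregate each node's incident edge
    weights, then mix through a weight matrix (illustrative only)."""
    node_feat = adj.sum(axis=1, keepdims=True)  # (n, 1) degree-like feature
    return np.tanh(node_feat @ w)               # (n, d) node embeddings

def edge_deconv(node_emb):
    """Toy 'edge deconvolution': outer-product decoding of node
    embeddings back to a symmetric FC-like edge-weight matrix."""
    fc = node_emb @ node_emb.T
    return (fc + fc.T) / 2

n, d = 4, 3
sc = rng.random((n, n))
sc = (sc + sc.T) / 2                # symmetric SC-like input graph
w = rng.standard_normal((1, d))     # placeholder learned weights
fc_hat = edge_deconv(edge_conv(sc, w))
```

In the full SF-GAN setting, these layers would sit inside a conditional GAN generator, with metafeatures concatenated into the graph representation and a discriminator providing the adversarial signal.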
