Physics-Informed Explainable Continual Learning on Graphs.
IEEE Trans Neural Netw Learn Syst; 35(9): 11761-11772, 2024 Sep.
Article in En | MEDLINE | ID: mdl-38198265
ABSTRACT
Temporal graph learning has attracted great attention for its ability to handle dynamic graphs. Although current methods are reasonably accurate, most of them are unexplainable due to their black-box nature. It remains a challenge to explain how temporal graph learning models adapt to evolving information. Furthermore, as artificial intelligence is increasingly applied in scientific domains such as chemistry and biomedicine, it becomes paramount to deliver not only accurate outcomes but also explanations of the learned models. This transparency helps users understand the decision-making process and instills greater confidence in the resulting models. To address this issue, this article proposes a novel physics-informed explainable continual learning (PiECL) method focused on temporal graphs. The proposed method uses physics-based and mathematical algorithms to quantify the disturbance that new data introduce to previous knowledge, thereby capturing how information changes over time. Because the proposed model is grounded in physical theory, it provides a transparent underlying mechanism for detecting information evolution, thus enhancing explainability. Experimental results on three real-world datasets demonstrate that PiECL can explain the learning process and that the resulting model outperforms other state-of-the-art methods. PiECL shows tremendous potential for explaining temporal graph learning in various scientific contexts.
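The abstract does not specify the underlying formulation, so the Python sketch below is only a loose, hypothetical illustration of quantifying the "disturbance" that new temporal-graph data introduce to previously learned knowledge: it fits diagonal Gaussians to node embeddings before and after a new graph snapshot and reports their symmetric KL divergence as a drift score. The names embed_nodes and disturbance_score, and the toy mean-aggregation embedding, are invented for this sketch and are not part of the PiECL method.

```python
# Illustrative sketch only; not the PiECL algorithm from the paper.
import numpy as np

def embed_nodes(adjacency: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Toy one-hop mean-aggregation embedding (stand-in for a temporal GNN)."""
    degree = adjacency.sum(axis=1, keepdims=True) + 1e-8
    return (adjacency @ features) / degree

def disturbance_score(old_emb: np.ndarray, new_emb: np.ndarray) -> float:
    """Symmetric KL divergence between diagonal-Gaussian fits of old and new
    node embeddings, used here as a generic measure of knowledge disturbance."""
    mu_o, var_o = old_emb.mean(axis=0), old_emb.var(axis=0) + 1e-8
    mu_n, var_n = new_emb.mean(axis=0), new_emb.var(axis=0) + 1e-8
    kl_on = 0.5 * np.sum(np.log(var_n / var_o) + (var_o + (mu_o - mu_n) ** 2) / var_n - 1)
    kl_no = 0.5 * np.sum(np.log(var_o / var_n) + (var_n + (mu_n - mu_o) ** 2) / var_o - 1)
    return float(kl_on + kl_no)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic snapshot at time t0 and a denser, shifted snapshot at t1.
    adj_t0 = (rng.random((50, 50)) < 0.1).astype(float)
    feat_t0 = rng.normal(size=(50, 16))
    adj_t1 = (rng.random((50, 50)) < 0.2).astype(float)
    feat_t1 = feat_t0 + rng.normal(scale=0.5, size=(50, 16))
    score = disturbance_score(embed_nodes(adj_t0, feat_t0),
                              embed_nodes(adj_t1, feat_t1))
    print(f"disturbance between snapshots: {score:.4f}")
```

A larger score signals that the new snapshot shifts the embedding distribution more strongly, which in a continual-learning setting could be surfaced to users as an explanation of when and how the model's knowledge is being updated.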
Full text: 1
Database: MEDLINE
Study type: Prognostic_studies
Language: En
Publication year: 2024
Document type: Article