ABSTRACT
BACKGROUND: Exclusion/occlusion of the left atrial appendage (LAA) is a treatment option for atrial fibrillation (AF) patients at high risk of both stroke and bleeding. As the role of the LAA is not well understood, this study aims to assess its role in flow dynamics in the left atrium (LA). METHODS: Computational fluid dynamics (CFD) simulations were carried out for nine AF patients before and after LAA exclusion. The flow parameters investigated included LA velocities, time-averaged wall shear stress (TAWSS), oscillatory shear index (OSI), relative residence time (RRT), and LA pressure. RESULTS: On average, a decrease in TAWSS (1.82 ± 1.85 Pa to 1.27 ± 0.96 Pa, p < 0.05) and slight increases in OSI (0.16 ± 0.10 to 0.17 ± 0.10, p < 0.05), RRT (1.87 ± 1.84 Pa⁻¹ to 2.11 ± 1.78 Pa⁻¹, p < 0.05), and pressure (-19.2 ± 6.8 mmHg to -15.3 ± 8.3 mmHg, p < 0.05) were observed in the LA after exclusion of the LAA, together with a decrease in low-magnitude velocities. CONCLUSION: Exclusion of the LAA appears to be associated with changes in LA flow dynamics. Further studies are needed to elucidate the clinical implications of these changes.
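The three wall-shear-stress-derived metrics reported above have standard definitions. Below is a minimal sketch, assuming a time series of wall shear stress vectors sampled over one cardiac cycle, of how TAWSS, OSI, and RRT could be computed; the array names and the synthetic signal are illustrative only, not the study's CFD pipeline.

```python
import numpy as np

def hemodynamic_indices(wss, dt):
    """Compute TAWSS, OSI and RRT at one wall point.

    wss : (T, 3) array of wall shear stress vectors over one cardiac cycle [Pa]
    dt  : time step [s]
    """
    period = wss.shape[0] * dt
    mag_integral = np.sum(np.linalg.norm(wss, axis=1)) * dt   # integral of |WSS| over the cycle
    vec_integral = np.linalg.norm(np.sum(wss, axis=0) * dt)   # magnitude of the vector integral

    tawss = mag_integral / period                              # time-averaged WSS [Pa]
    osi = 0.5 * (1.0 - vec_integral / mag_integral)            # oscillatory shear index, in [0, 0.5]
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                    # relative residence time [Pa^-1]
    return tawss, osi, rrt

# Example with a synthetic oscillating WSS signal (0.8 s cycle)
t = np.linspace(0.0, 0.8, 400, endpoint=False)
wss = np.stack([1.5 + np.sin(2 * np.pi * t / 0.8),
                0.3 * np.cos(2 * np.pi * t / 0.8),
                np.zeros_like(t)], axis=1)
print(hemodynamic_indices(wss, t[1] - t[0]))
```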
ABSTRACT
OBJECTIVES: Coronary computed tomography angiography (CCTA) has developed rapidly in the coronary artery disease (CAD) field. However, manual coronary artery tree segmentation and reconstruction are time-consuming and tedious. Deep learning algorithms have been successfully applied to medical image analysis to process extensive data. We therefore aimed to develop a deep learning tool for automatic coronary artery reconstruction and an automated CAD diagnosis model based on a large, single-centre retrospective CCTA cohort. METHODS: Automatic CAD diagnosis consists of two subtasks. The first is a segmentation task, which extracts the region of interest (ROI) from the original images with U-Net. The second is an identification task, which we implemented using 3DNet: the coronary artery tree images and clinical parameters are input into 3DNet, and the CAD diagnosis result is output. RESULTS: We built a coronary artery segmentation model based on CCTA images with the corresponding labelling. The segmentation model had a mean Dice value of 0.771 ± 0.021. Based on this model, we built an automated diagnosis model (classification model) for CAD. The average accuracy and area under the receiver operating characteristic curve (AUC) were 0.750 ± 0.056 and 0.737, respectively. CONCLUSION: Using a deep learning algorithm, we realized rapid classification and diagnosis of CAD from CCTA images in two steps. Our deep learning model can segment the coronary artery quickly and accurately and can deliver a diagnosis of ≥ 50% coronary artery stenosis. Artificial intelligence methods such as deep learning have the potential to considerably improve the efficiency of CCTA image analysis. KEY POINTS: • The deep learning model rapidly achieved a high Dice value (0.771 ± 0.021) in the autosegmentation of coronary arteries using CCTA images. • Based on the segmentation model, we built a CAD autoclassifier with the 3DNet algorithm, which achieved good diagnostic performance (AUC of 0.737). • The deep neural network could be used in the image post-processing of coronary computed tomography angiography to achieve a quick and accurate diagnosis of CAD.
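The Dice value quoted for the segmentation model is the standard overlap metric between a predicted mask and its ground-truth label. A minimal sketch of its computation on binary volumes is shown below; the array names and the toy volumes are assumptions for illustration, not the study's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2 * |A and B| / (|A| + |B|) for binary masks of any shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example on a small 3D volume
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1
target[1:3, 1:3, 2:4] = 1
print(round(dice_coefficient(pred, target), 3))  # partial overlap -> value between 0 and 1
```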
Subjects
Coronary Artery Disease, Coronary Stenosis, Deep Learning, Artificial Intelligence, Computed Tomography Angiography/methods, Pathological Constriction, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Coronary Stenosis/diagnostic imaging, Humans, Retrospective Studies
Subjects
Autoimmune Diseases, Hypertension, Interleukin-17, Humans, Humanized Monoclonal Antibodies/adverse effects, Humanized Monoclonal Antibodies/therapeutic use, Antihypertensive Agents/adverse effects, Antihypertensive Agents/therapeutic use, Autoimmune Diseases/drug therapy, Autoimmune Diseases/immunology, Blood Pressure/drug effects, Hypertension/drug therapy, Hypertension/physiopathology, Hypertension/immunology, Interleukin-17/antagonists & inhibitors, Interleukin-17/immunology, Randomized Controlled Trials as Topic, Risk Assessment, Risk Factors, Treatment Outcome
ABSTRACT
MOTIVATION: The exponential growth of biological network databases has made global network similarity search (NSS) increasingly computation-intensive. Given a query network and a network database, NSS aims to find the networks in the database most similar to the query according to a topological similarity measure of interest. With the advent of big network data, existing search methods may become unsuitable, since some of them return empty answers or impose arbitrary query restrictions. The design of NSS algorithms therefore remains challenging under the dilemma between accuracy and efficiency. RESULTS: We propose a regression-based global NSS method, denoted NSSRF, which boosts search speed without any significant sacrifice in practical performance. Subgraph signatures are heavily involved, as motivated by the nature of the problem. NSSRF has two phases: an offline model-building phase and a similarity query phase. In the offline model-building phase, subgraph signatures and cosine similarity scores are used to train an efficient random forest regression (RFR) model. In the similarity query phase, the trained regression model is queried to return similar networks. We have extensively validated NSSRF on biological pathways and molecular structures; NSSRF demonstrates competitive performance against the state of the art. Remarkably, NSSRF works especially well for large networks, which indicates that the proposed approach can be promising in the era of big data. Case studies further demonstrate the efficiency and uniqueness of NSSRF, returning results that existing state-of-the-art methods miss. AVAILABILITY AND IMPLEMENTATION: The source code of two versions of NSSRF is freely available for download at https://github.com/zhangjiaobxy/nssrfBinary and https://github.com/zhangjiaobxy/nssrfPackage . CONTACT: kc.w@cityu.edu.hk. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
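The offline phase described above pairs signature feature vectors with cosine similarity scores and fits a random forest regressor that is later queried for ranking. The sketch below illustrates that idea with scikit-learn; the signature extraction is replaced by random placeholder vectors, and all names and sizes are assumptions rather than the NSSRF implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Placeholder: each network is summarized by a fixed-length "signature" vector.
# In NSSRF these come from subgraph signatures; here they are random for illustration.
signatures = rng.random((60, 64))

# Offline phase: train a regressor mapping a pair of signatures to their similarity score.
pairs, scores = [], []
for i in range(len(signatures)):
    for j in range(i + 1, len(signatures)):
        pairs.append(np.concatenate([signatures[i], signatures[j]]))
        scores.append(cosine_similarity(signatures[i:i + 1], signatures[j:j + 1])[0, 0])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(np.array(pairs), np.array(scores))

# Query phase: rank database networks against a query by predicted similarity.
query = rng.random(64)
features = np.array([np.concatenate([query, s]) for s in signatures])
top10 = np.argsort(model.predict(features))[::-1][:10]
print(top10)
```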
Subjects
Computational Biology/methods, Theoretical Models, Software, Animals, Humans, Metabolic Networks and Pathways, Protein Conformation
ABSTRACT
Using bibliometric analysis, this study provides an overview of the current state of research and key findings regarding immunotherapy for vasculitis. We gathered literature from the Web of Science (WOS) database covering the last 20 years (2004-2024) pertaining to immunotherapy for vasculitis and used CiteSpace for knowledge mapping. The findings show 572 articles concerning immunotherapy for vasculitis, with faster growth after 2018. The USA, Assistance Publique Hopitaux Paris, and Cornelia M are the country, institution, and author with the highest numbers of publications, respectively. Daxini A (2018) is the most frequently cited reference (26 citations). Prominent universities and developed nations form the strongest research collaborations on immunotherapy for vasculitis. Immune checkpoint inhibitors, Wegener's granulomatosis, and systemic lupus erythematosus are three research hotspots in this field.
Subjects
Bibliometrics, Immunotherapy, Vasculitis, Humans, Biomedical Research/trends, Immune Checkpoint Inhibitors/therapeutic use, Immunotherapy/methods, Systemic Lupus Erythematosus/therapy, Systemic Lupus Erythematosus/immunology, Vasculitis/therapy, Vasculitis/immunology
ABSTRACT
Researchers have proposed to exploit label correlation to alleviate the exponential-size output space of label distribution learning (LDL). In particular, some have designed LDL methods that consider local label correlation. These methods roughly partition the training set into clusters and then exploit local label correlation within each one. Each sample belongs to one cluster and therefore has only one local label correlation. In real-world scenarios, however, training samples may be fuzzy and belong to multiple clusters with blended local label correlations, which challenges these methods. To solve this problem, we propose fuzzy label correlation (FLC) for LDL, in which each sample blends, with fuzzy memberships, multiple local label correlations. First, we propose two types of FLC, i.e., fuzzy-membership-induced label correlation (FC) and joint fuzzy clustering and label correlation (FCC). Then, we put forward LDL-FC and LDL-FCC to exploit these two FLCs, respectively. Finally, we conduct extensive experiments showing that LDL-FC and LDL-FCC statistically outperform state-of-the-art LDL methods.
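One hedged reading of the blending idea above is that each sample's effective label correlation is a fuzzy-membership-weighted mixture of the per-cluster local correlations; the notation below is assumed for illustration and is not taken from the paper.

```latex
% Effective label correlation of sample x_i as a fuzzy blend of K local correlations
\[
  \mathbf{C}_i \;=\; \sum_{k=1}^{K} u_{ik}\, \mathbf{C}_k,
  \qquad
  u_{ik} \ge 0,\quad \sum_{k=1}^{K} u_{ik} = 1,
\]
% where C_k is the local label correlation of cluster k and u_{ik} is the
% fuzzy membership of sample x_i in cluster k.
```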
ABSTRACT
This study demonstrated that fibrinogen is an independent risk factor for 10-year mortality in patients with acute coronary syndrome (ACS), with a U-shaped nonlinear relationship observed between the two. These findings underscore the importance of monitoring fibrinogen levels and of considering long-term anti-inflammatory treatment in the clinical management of patients with ACS.
Subjects
Acute Coronary Syndrome, Fibrinogen, Humans, Acute Coronary Syndrome/mortality, Acute Coronary Syndrome/blood, Fibrinogen/analysis, Male, Female, Prospective Studies, Middle Aged, Aged, Risk Factors, Biomarkers/blood
ABSTRACT
Coronary artery segmentation is crucial for physicians to identify and locate plaques and stenosis on coronary computed tomography angiography (CCTA). However, the low contrast of CCTA images and the intricate structure of coronary arteries make this task challenging. To address these difficulties, we propose a novel model, the DFS-PDS network. This network comprises two subnetworks: a discriminative frequency segment subnetwork (DFS) and a position domain scales subnetwork (PDS). DFS introduces a gated mechanism within the feed-forward network, leveraging the Joint Photographic Experts Group (JPEG) compression algorithm, to discriminatively determine which low- and high-frequency information of the features should be preserved for latent image segmentation. PDS aims to learn the shape prototype by predicting the radius. Additionally, a boundary consistency loss gives the model the ability to keep region and boundary features consistent. During training, both subnetworks are optimized jointly; in the testing stage, the coarse segmentation and radius prediction are produced. A coronary-geometric refinement method then refines the segmentation masks by leveraging the shape prior reconstructed from the radius map, reducing the difficulty of segmenting coronary arteries from complex surrounding structures. The DFS-PDS network is compared with state-of-the-art (SOTA) methods on two coronary artery datasets to evaluate its performance. The experimental results demonstrate that the DFS-PDS network performs better than the SOTA models, including Vnet, nnUnet, DDT, CS2-Net, Unetr, and CAS-Net, in terms of Dice or connectivity evaluation metrics.
Subjects
Coronary Vessels, Humans, Coronary Vessels/diagnostic imaging, Algorithms, Computed Tomography Angiography/methods, Coronary Angiography/methods, Computer-Assisted Image Processing/methods
ABSTRACT
BACKGROUND: Patients undergoing transcatheter aortic valve implantation (TAVI) for bicuspid aortic stenosis (AS) frequently present with ascending aortic (AAo) dilatation, which is left untreated. The objective of this study was to investigate the natural progression and underlying mechanisms of AAo dilatation after TAVI for bicuspid AS. METHODS: Patients with native bicuspid AS and a baseline AAo maximum diameter > 40 mm treated by TAVI, and in whom post-TAVI computed tomography (CT) scans beyond 1 year were available, were included. AAo dilatation was deemed either continuous (≥ 2 mm increase) or stable (< 2 mm increase or decrease). Uni- and multivariate logistic regression analyses were used to identify factors associated with continuous AAo dilatation post-TAVI. RESULTS: A total of 61 patients with a mean AAo maximum diameter of 45.6 ± 3.9 mm at baseline were evaluated. At a median follow-up of 2.9 years, AAo dimensions remained stable in 85% of patients. Continuous AAo dilatation was observed in 15% of patients, at a rate of 1.4 mm/year. Factors associated with continuous AAo dilatation were the raphe length/annulus mean diameter ratio (OR 4.09, 95% CI [1.40-16.7], p = 0.022), TAV eccentricity at the leaflet outflow level (OR 2.11, 95% CI [1.12-4.53], p = 0.031), and the maximum transprosthetic gradient (OR 1.30, 95% CI [0.99-1.73], p = 0.058). CONCLUSIONS: Ascending aortic dilatation in patients undergoing TAVI for bicuspid AS remains stable in the majority of patients. Factors influencing TAV stent frame geometry and function were found to be associated with continuous AAo dilatation after TAVI; this should be confirmed in future larger cohort studies.
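The odds ratios and confidence intervals quoted above come from logistic regression. Below is a minimal sketch of how such estimates could be obtained with statsmodels; the data are simulated and the variable names are assumptions, not the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 61  # cohort size reported in the abstract

# Simulated predictors (illustrative only)
df = pd.DataFrame({
    "raphe_annulus_ratio": rng.normal(0.5, 0.1, n),
    "tav_eccentricity": rng.normal(1.0, 0.2, n),
    "max_gradient": rng.normal(10.0, 4.0, n),
})
logit = -3 + 2 * df["raphe_annulus_ratio"] + rng.normal(0, 1, n)
df["continuous_dilatation"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Multivariable logistic regression: odds ratios with 95% confidence intervals
X = sm.add_constant(df[["raphe_annulus_ratio", "tav_eccentricity", "max_gradient"]])
fit = sm.Logit(df["continuous_dilatation"], X).fit(disp=0)
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```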
ABSTRACT
Aims: Permanent pacemaker implantation and left bundle branch block are common complications after transcatheter aortic valve replacement (TAVR) and are associated with impaired prognosis. This study aimed to develop an artificial intelligence (AI) model for predicting conduction disturbances after TAVR using pre-procedural 12-lead electrocardiogram (ECG) images. Methods and results: We collected pre-procedural 12-lead ECGs of patients who underwent TAVR at West China Hospital between March 2016 and March 2022. A hold-out testing set comprising 20% of the sample was randomly selected. We developed an AI model using a convolutional neural network, trained it using five-fold cross-validation, and tested it on the hold-out testing cohort. We also developed and validated an enhanced model that included additional clinical features. After applying exclusion criteria, we included 1354 ECGs of 718 patients in the study. The AI model predicted conduction disturbances in the hold-out testing cohort with an area under the curve (AUC) of 0.764, accuracy of 0.743, F1 score of 0.752, sensitivity of 0.876, and specificity of 0.624, based solely on pre-procedural ECG images. This performance was better than that of the Emory score (AUC = 0.704) and of the logistic regression (AUC = 0.574) and XGBoost (AUC = 0.520) models built with previously identified high-risk ECG patterns. Adding clinical features increased overall performance, with an AUC of 0.779, accuracy of 0.774, F1 score of 0.776, sensitivity of 0.794, and specificity of 0.752. Conclusion: Artificial intelligence-enhanced ECGs may offer better predictive value than traditionally defined high-risk ECG patterns.
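The discrimination metrics reported above (AUC, accuracy, F1, sensitivity, specificity) can all be derived from predicted probabilities and true labels. A minimal sketch with scikit-learn follows, using a synthetic prediction vector rather than the study data; names and the 0.5 threshold are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 300)                                  # 1 = conduction disturbance
y_prob = np.clip(0.35 * y_true + rng.random(300) * 0.65, 0, 1)    # synthetic model output
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        :", round(roc_auc_score(y_true, y_prob), 3))
print("Accuracy   :", round(accuracy_score(y_true, y_pred), 3))
print("F1 score   :", round(f1_score(y_true, y_pred), 3))
print("Sensitivity:", round(tp / (tp + fn), 3))
print("Specificity:", round(tn / (tn + fp), 3))
```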
ABSTRACT
Ensemble clustering integrates a set of base clustering results to generate a stronger one. Existing methods usually achieve this by relying on a co-association (CA) matrix that measures how many times two samples are grouped into the same cluster across the base clusterings. However, when the constructed CA matrix is of low quality, performance degrades. In this article, we propose a simple yet effective CA matrix self-enhancement framework that can improve the CA matrix to achieve better clustering performance. Specifically, we first extract the high-confidence (HC) information from the base clusterings to form a sparse HC matrix. By propagating the highly reliable information of the HC matrix to the CA matrix and simultaneously complementing the HC matrix according to the CA matrix, the proposed method generates an enhanced CA matrix for better clustering. Technically, the proposed model is formulated as a symmetric constrained convex optimization problem, which is efficiently solved by an alternating iterative algorithm with convergence and global optimality theoretically guaranteed. Extensive experimental comparisons with 12 state-of-the-art methods on ten benchmark datasets substantiate the effectiveness, flexibility, and efficiency of the proposed model in ensemble clustering. The codes and datasets can be downloaded at https://github.com/Siritao/EC-CMS.
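The co-association matrix that the self-enhancement framework starts from is constructed directly from the base clusterings. The sketch below shows that construction plus a simple high-confidence thresholding step; the threshold value and names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def co_association(base_labels):
    """CA[i, j] = fraction of base clusterings that put samples i and j in the same cluster.

    base_labels : (m, n) array, m base clusterings of n samples
    """
    m, n = base_labels.shape
    ca = np.zeros((n, n))
    for labels in base_labels:
        ca += (labels[:, None] == labels[None, :]).astype(float)
    return ca / m

# Three base clusterings of six samples
base = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 2],
    [0, 0, 0, 1, 2, 2],
])
ca = co_association(base)

# High-confidence (HC) information: keep only near-unanimous agreements (threshold assumed)
hc = np.where(ca >= 0.9, ca, 0.0)
print(ca.round(2))
print(hc.round(2))
```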
ABSTRACT
Existing graph clustering networks heavily rely on a predefined yet fixed graph, which can lead to failures when the initial graph fails to accurately capture the data topology structure of the embedding space. In order to address this issue, we propose a novel clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net), which effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance. To begin, we leverage both semantic and topological information by employing a vanilla auto-encoder and a graph convolution network, respectively, to learn a latent feature representation. Subsequently, we utilize the local geometric structure within the feature embedding space to construct an adjacency matrix for the graph. This adjacency matrix is dynamically fused with the initial one using our proposed fusion architecture. To train the network in an unsupervised manner, we minimize the Jeffreys divergence between multiple derived distributions. Additionally, we introduce an improved approximate personalized propagation of neural predictions to replace the standard graph convolution network, enabling EGRC-Net to scale effectively. Through extensive experiments conducted on nine widely-used benchmark datasets, we demonstrate that our proposed methods consistently outperform several state-of-the-art approaches. Notably, EGRC-Net achieves an improvement of more than 11.99% in Adjusted Rand Index (ARI) over the best baseline on the DBLP dataset. Furthermore, our scalable approach exhibits a 10.73% gain in ARI while reducing memory usage by 33.73% and decreasing running time by 19.71%. The code for EGRC-Net will be made publicly available at https://github.com/ZhihaoPENG-CityU/EGRC-Net.
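The adjacency matrix built from the local geometric structure of the embedding space can be illustrated with a simple k-nearest-neighbour graph followed by a convex fusion with the initial graph. The sketch below is only an assumed stand-in for EGRC-Net's construction: the placeholder embedding, the neighbourhood size, and the fixed fusion weight are all assumptions (the paper fuses the graphs with a dedicated learned architecture).

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(3)
Z = rng.random((100, 16))                                  # learned embedding (placeholder)
A_init = (rng.random((100, 100)) < 0.05).astype(float)     # predefined initial graph (placeholder)

# k-NN graph in the embedding space, symmetrized
A_emb = kneighbors_graph(Z, n_neighbors=10, mode="connectivity", include_self=False).toarray()
A_emb = np.maximum(A_emb, A_emb.T)

# Simple convex fusion of the initial and embedding-induced graphs (fixed weight assumed)
alpha = 0.5
A_fused = alpha * A_init + (1.0 - alpha) * A_emb
print(A_fused.shape, A_fused.max())
```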
ABSTRACT
Label distribution learning (LDL) is a novel learning paradigm that assigns each instance a label distribution. Although many specialized LDL algorithms have been proposed, few of them have noticed that the obtained label distributions are generally inaccurate and noisy owing to the difficulty of annotation. Moreover, existing LDL algorithms overlook the fact that the noise in inaccurate label distributions generally depends on the instances. In this article, we identify the instance-dependent inaccurate LDL (IDI-LDL) problem and propose a novel algorithm called low-rank and sparse LDL (LRS-LDL). First, we assume that the inaccurate label distribution consists of the ground-truth label distribution and instance-dependent noise. Then, we learn a low-rank linear mapping from instances to the ground-truth label distributions and a sparse mapping from instances to the instance-dependent noise. In the theoretical analysis, we establish a generalization bound for LRS-LDL. Finally, in the experiments, we demonstrate that LRS-LDL can effectively address the IDI-LDL problem and outperform existing LDL methods.
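One hedged way to write the decomposition described above, with a nuclear-norm penalty encouraging the low-rank mapping and an l1 penalty encouraging the sparse noise mapping, is the following; the notation and penalty choices are assumptions, not the paper's exact objective.

```latex
\[
  \min_{W,\,S}\;
  \bigl\lVert \tilde{D} - XW - XS \bigr\rVert_F^2
  \;+\; \lambda \lVert W \rVert_{*}
  \;+\; \beta \lVert S \rVert_{1},
\]
% X: instance matrix; \tilde{D}: observed (inaccurate) label distributions;
% XW: recovered ground-truth label distributions (low-rank mapping W);
% XS: instance-dependent noise (sparse mapping S).
```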
ABSTRACT
Deep self-expressiveness-based subspace clustering methods have demonstrated their effectiveness. However, existing works consider only attribute information when conducting self-expressiveness, limiting clustering performance. In this paper, we propose a novel adaptive attribute and structure subspace clustering network (AASSC-Net) that simultaneously considers attribute and structure information in an adaptive graph fusion manner. Specifically, we first exploit an auto-encoder to represent input data samples with latent features for the construction of an attribute matrix. We also construct a mixed signed and symmetric structure matrix to capture the local geometric structure underlying the data samples. Then, we perform self-expressiveness on the constructed attribute and structure matrices to learn their affinity graphs separately. Finally, we design a novel attention-based fusion module that adaptively leverages these two affinity graphs to construct a more discriminative affinity graph. Extensive experimental results on commonly used benchmark datasets demonstrate that our AASSC-Net significantly outperforms state-of-the-art methods. In addition, we conduct comprehensive ablation studies to discuss the effectiveness of the designed modules. The code is publicly available at https://github.com/ZhihaoPENG-CityU/AASSC-Net.
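For context, the self-expressiveness applied to the attribute and structure matrices follows the standard subspace-clustering form; a generic statement is given below, where the choice of regularizer and the column-wise data layout are assumptions rather than the paper's exact formulation.

```latex
\[
  \min_{Z}\; \lVert F - FZ \rVert_F^2 + \lambda\, \Omega(Z)
  \quad \text{s.t.} \quad \operatorname{diag}(Z) = 0,
\]
% F: attribute (or structure) representation with samples stored column-wise;
% Z: self-expression coefficient matrix, from which an affinity graph such as
%    (|Z| + |Z|^T)/2 is built; \Omega: a regularizer, e.g. the Frobenius or nuclear norm.
```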
ABSTRACT
This article explores the problem of semisupervised affinity matrix learning, that is, learning an affinity matrix of data samples under the supervision of a small number of pairwise constraints (PCs). Observing that both the matrix encoding the PCs, called the pairwise constraint matrix (PCM), and the empirically constructed affinity matrix (EAM) express the similarity between samples, we assume that both of them are generated from a latent affinity matrix (LAM) that can depict the ideal pairwise relation between samples. Specifically, the PCM can be thought of as a partial observation of the LAM, while the EAM is a fully observed one corrupted with noise/outliers. To this end, we innovatively cast semisupervised affinity matrix learning as the recovery of the LAM guided by the PCM and EAM, which is technically formulated as a convex optimization problem. We also provide an efficient algorithm for solving the resulting model numerically. Extensive experiments on benchmark datasets demonstrate the significant superiority of our method over state-of-the-art ones when used for constrained clustering and dimensionality reduction. The code is publicly available at https://github.com/jyh-learning/LAM.
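A hedged sketch of the recovery idea, treating the PCM as a partial observation of the latent affinity matrix and the EAM as a fully observed but corrupted one, could be written as a convex program of the following form; the exact norms and constraints used in the paper may differ.

```latex
\[
  \min_{L,\,E}\; \lVert L \rVert_{*} + \lambda \lVert E \rVert_{1}
  \quad \text{s.t.} \quad
  \mathcal{P}_{\Omega}(L) = \mathcal{P}_{\Omega}(M_{\mathrm{PC}}),\;\;
  A_{\mathrm{E}} = L + E,\;\; L = L^{\top},
\]
% L: latent affinity matrix (LAM); E: noise/outliers;
% M_PC: pairwise constraint matrix (PCM), observed only on the index set \Omega;
% A_E: empirically constructed affinity matrix (EAM);
% \mathcal{P}_\Omega: projection onto the observed entries.
```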
Subjects
Algorithms, Supervised Machine Learning, Cluster Analysis
ABSTRACT
PURPOSE: Late major bleeding is one of the main complications after transcatheter aortic valve replacement (TAVR). We aimed to develop a deep learning-based risk prediction model for major or life-threatening bleeding complications (MLBCs) after TAVR. PATIENTS AND METHODS: This was a retrospective study including TAVR patients from the West China Hospital of Sichuan University Transcatheter Aortic Valve Replacement Registry (ChiCTR2000033419) between April 17, 2012 and May 27, 2020. A deep learning-based model named BLeNet was developed with 56 features covering baseline, procedural, and post-procedural characteristics. The model was validated with the bootstrap method and evaluated using Harrell's concordance index (c-index), receiver operating characteristic (ROC) curves, calibration curves, and Kaplan-Meier estimates. The Captum interpretation library was applied to identify feature importance. The BLeNet model was compared with the traditional Cox proportional hazards (Cox-PH) model and the random survival forest model on the metrics mentioned above. RESULTS: The BLeNet model significantly outperformed the Cox-PH and random survival forest models in discrimination [optimism-corrected c-index of BLeNet vs Cox-PH vs random survival forest: 0.81 (95% CI: 0.79-0.92) vs 0.72 (95% CI: 0.63-0.77) vs 0.70 (95% CI: 0.61-0.74)] and calibration (integrated calibration index of BLeNet vs Cox-PH vs random survival forest: 0.007 vs 0.015 vs 0.019). In Kaplan-Meier analysis, the BLeNet model performed well in stratifying high- and low-bleeding-risk patients (p < 0.0001). CONCLUSION: Deep learning is a feasible way to build prediction models concerning TAVR prognosis. A dedicated bleeding risk prediction model was developed for TAVR patients to facilitate well-informed clinical decisions.
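Harrell's concordance index used to compare the models above has a standard definition. A minimal sketch of computing it from predicted risk scores is shown below; lifelines is used purely for illustration, and the simulated data and names are assumptions, not the registry or the study's tooling.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
n = 200
risk = rng.random(n)                                          # model-predicted risk (higher = worse)
time_to_event = rng.exponential(scale=1000 * (1.2 - risk))    # synthetic follow-up time in days
event_observed = (rng.random(n) < 0.3).astype(int)            # 1 = bleeding event observed

# concordance_index expects higher scores to indicate longer event-free time,
# so a risk score is passed with a negative sign.
cindex = concordance_index(time_to_event, -risk, event_observed)
print(round(cindex, 3))
```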
ABSTRACT
As a variant of non-negative matrix factorization (NMF), symmetric NMF (SymNMF) can generate the clustering result without additional post-processing, by decomposing a similarity matrix into the product of a clustering indicator matrix and its transpose. However, the similarity matrix in traditional SymNMF methods is usually predefined, resulting in limited clustering performance. Considering that the quality of the similarity graph is crucial to the final clustering performance, we propose a new semisupervised model that simultaneously learns the similarity matrix with supervisory information and generates the clustering results, such that the mutual enhancement effect of the two tasks can produce better clustering performance. Our model fully utilizes the supervisory information in the form of pairwise constraints and propagates it to obtain an informative similarity matrix. The proposed model is finally formulated as a non-negativity-constrained optimization problem. We also propose an iterative method to solve it, with convergence theoretically proven. Extensive experiments validate the superiority of the proposed model when compared with nine state-of-the-art NMF models.
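The SymNMF factorization that this work builds on decomposes a symmetric similarity matrix S into H Hᵀ with H ≥ 0, and cluster labels are read off from the rows of H. Below is a minimal sketch using the well-known multiplicative update of Kuang et al.; it covers only plain SymNMF, not the semisupervised similarity learning proposed in the paper, and all names are assumptions.

```python
import numpy as np

def symnmf(S, k, n_iter=300, beta=0.5, seed=0):
    """Factorize a symmetric non-negative similarity matrix S ~ H H^T with H >= 0.

    Uses the multiplicative update H <- H * (1 - beta + beta * (S H) / (H H^T H)).
    Cluster assignments are the argmax of each row of H.
    """
    rng = np.random.default_rng(seed)
    H = rng.random((S.shape[0], k))
    for _ in range(n_iter):
        numer = S @ H
        denom = H @ (H.T @ H) + 1e-10
        H *= (1.0 - beta) + beta * numer / denom
    return H

# Toy similarity matrix with two obvious blocks
S = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 1.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 1.0, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 1.0],
])
H = symnmf(S, k=2)
print(H.argmax(axis=1))  # e.g. [0 0 0 1 1 1] up to label permutation
```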
ABSTRACT
In this article, we propose a novel model for constrained clustering, namely, dissimilarity propagation-guided graph-Laplacian principal component analysis (DP-GLPCA). By fully utilizing a limited amount of weak supervisory information in the form of pairwise constraints, the proposed DP-GLPCA is capable of capturing both the local and global structures of input samples to exploit their characteristics for excellent clustering. More specifically, we first formulate a convex semisupervised low-dimensional embedding model by incorporating a new dissimilarity regularizer into GLPCA (i.e., an unsupervised dimensionality reduction model), in which both the similarity and dissimilarity between low-dimensional representations are enforced with the constraints to improve their discriminability. An efficient iterative algorithm based on the inexact augmented Lagrange multiplier is designed to solve it with global convergence guaranteed. Furthermore, we innovatively propose to propagate the cannot-link constraints (i.e., dissimilarity) to refine the dissimilarity regularizer to be more informative. The resulting DP model is iteratively solved, and we also prove that it converges to a Karush-Kuhn-Tucker point. Extensive experimental results over nine commonly used benchmark data sets show that the proposed DP-GLPCA can produce much higher clustering accuracy than state-of-the-art constrained clustering methods. Besides, the effectiveness and advantage of the proposed DP model are experimentally verified. To the best of our knowledge, this is the first work to investigate dissimilarity propagation, in contrast to existing pairwise constraint propagation, which propagates similarity. The code is publicly available at https://github.com/jyh-learning/DP-GLPCA.
ABSTRACT
In this paper, we propose a novel classification scheme for the remotely sensed hyperspectral image (HSI), namely SP-DLRR, by comprehensively exploring its unique characteristics, including the local spatial information and low-rankness. SP-DLRR is mainly composed of two modules, i.e., the classification-guided superpixel segmentation and the discriminative low-rank representation, which are iteratively conducted. Specifically, by utilizing the local spatial information and incorporating the predictions from a typical classifier, the first module segments pixels of an input HSI (or its restoration generated by the second module) into superpixels. According to the resulting superpixels, the pixels of the input HSI are then grouped into clusters and fed into our novel discriminative low-rank representation model with an effective numerical solution. Such a model is capable of increasing the intra-class similarity by suppressing the spectral variations locally while promoting the inter-class discriminability globally, leading to a restored HSI with more discriminative pixels. Experimental results on three benchmark datasets demonstrate the significant superiority of SP-DLRR over state-of-the-art methods, especially for the case with an extremely limited number of training pixels.
ABSTRACT
Constrained spectral clustering (SC) based on pairwise constraint propagation has attracted much attention due to its good performance. All the existing methods can generally be cast as the following two steps: a small number of pairwise constraints are first propagated to the whole data under the guidance of a predefined affinity matrix, and the affinity matrix is then refined in accordance with the resulting propagation and finally adopted for SC. Such a stepwise manner, however, overlooks the fact that the two steps depend on each other, i.e., they form a "chicken-and-egg" problem, leading to suboptimal performance. To this end, we propose a joint pairwise constraint propagation (PCP) model for constrained SC that simultaneously learns a propagation matrix and an affinity matrix. In particular, it is formulated as a bounded symmetric graph regularized low-rank matrix completion problem. We also show that the optimized affinity matrix exhibits an ideal appearance under some conditions. Extensive experimental results in terms of constrained SC, semisupervised classification, and propagation behavior validate the superior performance of our model compared with state-of-the-art methods.