Results 1 - 4 of 4
1.
Neural Netw ; 174: 106220, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447427

ABSTRACT

Structured pruning is a representative model compression technology for convolutional neural networks (CNNs), aiming to prune some less important filters or channels of CNNs. Most recent structured pruning methods have established criteria to measure the importance of filters, mainly based on the magnitude of weights or other parameters in CNNs. However, these criteria lack explainability, and relying solely on the numerical values of network parameters is insufficient to assess the relationship between a channel and the model's performance. Moreover, directly applying these pruning criteria for global pruning may lead to suboptimal solutions; it is therefore necessary to complement them with search algorithms that determine the pruning ratio for each layer. To address these issues, we propose ARPruning (Attention-map-based Ranking Pruning), which reconstructs a new pruning criterion for the importance of intra-layer channels and further develops a new local neighborhood search algorithm for determining the optimal inter-layer pruning ratio. To measure the relationship between a channel to be pruned and the model performance, we construct an intra-layer channel importance criterion by considering the attention map for each layer. Then, we propose an automatic pruning strategy search method that can find the optimal solution effectively and efficiently. By integrating the well-designed pruning criterion and search strategy, our ARPruning can not only maintain a high compression rate but also achieve outstanding accuracy. Experiments also show that ARPruning achieves better compression results than state-of-the-art pruning methods. The code can be obtained at https://github.com/dozingLee/ARPruning.
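As an illustration of the general idea (not the authors' implementation; their exact criterion is defined in the paper and repository above), a minimal sketch of attention-based channel ranking might score each channel by the total mass of its activation map and keep only the top-ranked fraction:

```python
import numpy as np

def channel_importance(feature_maps):
    # feature_maps: (N, C, H, W) activations from one conv layer.
    # Per-channel attention map: mean absolute activation over the batch.
    attn = np.abs(feature_maps).mean(axis=0)            # (C, H, W)
    # Importance score: total attention mass of each channel.
    return attn.reshape(attn.shape[0], -1).sum(axis=1)  # (C,)

def prune_mask(feature_maps, ratio):
    # Keep the (1 - ratio) fraction of channels with the highest scores.
    scores = channel_importance(feature_maps)
    k = max(1, int(round(len(scores) * (1.0 - ratio))))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

The mask would then be used to drop the corresponding filters of the layer; choosing `ratio` per layer is the role of the search algorithm described in the abstract.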


Subject(s)
Algorithms; Data Compression; Neural Networks, Computer
2.
Article in English | MEDLINE | ID: mdl-38470597

ABSTRACT

Federated learning (FL) enables collaborative training of machine learning models across distributed medical data sources without compromising privacy. However, applying FL to medical image analysis presents challenges such as high communication overhead and data heterogeneity. This paper proposes novel FL techniques using explainable artificial intelligence (XAI) for efficient, accurate, and trustworthy analysis. A heterogeneity-aware causal learning approach selectively sparsifies model weights based on their causal contributions, significantly reducing communication requirements while retaining performance and improving interpretability. Furthermore, a blockchain provides decentralized quality assessment of client datasets. The assessment scores adjust aggregation weights so that higher-quality data has more influence during training, improving model generalization. Comprehensive experiments show our XAI-integrated FL framework enhances efficiency, accuracy, and interpretability. The causal learning method decreases communication overhead while maintaining segmentation accuracy, and the blockchain-based data valuation mitigates issues from low-quality local datasets. Our framework provides essential model explanations and trust mechanisms, making FL viable for clinical adoption in medical image analysis.
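The abstract does not spell out the sparsification rule, but a weight-times-gradient magnitude score is a common stand-in for a per-weight contribution measure. The sketch below (illustrative only, with an assumed scoring function, not the paper's causal criterion) keeps only the top-scoring fraction of an update before transmission:

```python
import numpy as np

def sparsify_update(weights, grads, keep_frac=0.1):
    # Score each weight's contribution; |w * grad| is a simple stand-in
    # for a causal-contribution measure (the paper's exact criterion
    # is not given in the abstract).
    scores = np.abs(weights * grads)
    k = max(1, int(round(weights.size * keep_frac)))
    # Threshold at the k-th largest score; zero out everything below it.
    thresh = np.partition(scores.ravel(), -k)[-k]
    mask = scores >= thresh
    # Only the surviving entries need to be sent to the server.
    return weights * mask, mask
```

With, say, `keep_frac=0.1`, a client transmits roughly 10% of its parameters per round, which is the source of the communication savings the abstract describes.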

3.
Neural Netw ; 144: 75-89, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34454244

ABSTRACT

Whether sub-optimal local minima and saddle points exist in the highly non-convex loss landscape of deep neural networks has a great impact on the performance of optimization algorithms. In this paper, we theoretically study the existence of non-differentiable sub-optimal local minima and saddle points for deep ReLU networks of arbitrary depth. We prove that non-differentiable saddle points always exist in the loss surface of deep ReLU networks with squared loss or cross-entropy loss under reasonable assumptions. We also prove that deep ReLU networks with cross-entropy loss will have non-differentiable sub-optimal local minima if some outermost samples do not belong to a certain class. Experimental results on real and synthetic datasets verify our theoretical findings.
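The non-differentiability is easy to observe numerically. For a one-parameter toy model f(x) = relu(w·x) with squared loss (my own illustration, not an example from the paper), the one-sided finite-difference derivatives at w = 0 disagree, so the loss has a kink there:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def loss(w, x, y):
    # Squared loss of the one-parameter ReLU model f(x) = relu(w * x).
    return np.mean((relu(w * x) - y) ** 2)

# Toy data: one positive-input and one negative-input sample.
x = np.array([1.0, -1.0])
y = np.array([1.0, 1.0])

eps = 1e-6
# One-sided finite-difference derivatives at w = 0.
left  = (loss(0.0, x, y) - loss(-eps, x, y)) / eps   # ~ +1
right = (loss(eps, x, y) - loss(0.0, x, y)) / eps    # ~ -1
```

Here `left` ≈ +1 while `right` ≈ -1: the loss decreases on both sides of w = 0, so this is a non-differentiable point of the kind the paper analyzes (in this one-dimensional slice, a non-differentiable local maximum).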


Subject(s)
Algorithms; Neural Networks, Computer; Entropy
4.
Nan Fang Yi Ke Da Xue Xue Bao ; 33(3): 436-8, 443, 2013 Mar.
Article in Chinese | MEDLINE | ID: mdl-23529248

ABSTRACT

OBJECTIVE: To detect hFgl2 expression in peripheral blood mononuclear cells in patients with chronic hepatitis B and liver cancer and explore its association with the severity of chronic hepatitis B.

METHODS: The protein expression of hFgl2 in peripheral blood mononuclear cells was detected in 78 patients with chronic hepatitis B (mild, moderate, or severe), chronic severe hepatitis, or liver cancer, with 20 healthy volunteers as controls. The data were analyzed in relation to the patients' alanine aminotransferase (ALT), aspartate aminotransferase (AST), and total bilirubin (TBiL) levels.

RESULTS: hFgl2 protein expression was significantly higher in patients with chronic severe hepatitis and liver cancer than in the healthy volunteers and the patients with chronic hepatitis B, and patients with chronic severe hepatitis had significantly higher hFgl2 protein expression than patients with liver cancer. In severe cases of chronic hepatitis B, hFgl2 protein expression was positively correlated with ALT, AST, and TBiL; these correlations were not found in mild or moderate cases.

CONCLUSIONS: Peripheral blood mononuclear cells express hFgl2 protein, and its expression level increases with the severity of chronic hepatitis B.


Subject(s)
Fibrinogen/metabolism; Hepatitis B, Chronic/blood; Leukocytes, Mononuclear/metabolism; Liver Neoplasms/blood; Case-Control Studies; Hepatitis B, Chronic/classification; Humans; Liver Neoplasms/classification