Efficient and Flexible Method for Reducing Moderate-Size Deep Neural Networks with Condensation.
Chen, Tianyi; Xu, Zhi-Qin John.
Affiliation
  • Chen T; School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, China.
  • Xu ZJ; School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University, Shanghai 200240, China.
Entropy (Basel); 26(7), 2024 Jun 30.
Article in En | MEDLINE | ID: mdl-39056928
ABSTRACT
Neural networks have been applied extensively to a wide variety of tasks, achieving remarkable results. Applying neural networks in the scientific field is an important research direction that is attracting increasing attention. In scientific applications, networks are typically of moderate size, chiefly to keep inference fast, and they are inevitably compared against traditional algorithms. Because these applications demand rapid computation, reducing the size of neural networks is increasingly important. Prior work has found that the power of neural networks stems primarily from their nonlinearity, and theoretical work has shown that under strong nonlinearity, neurons in the same layer tend to behave similarly, a phenomenon known as condensation. Condensation offers an opportunity to reduce a network to a smaller subnetwork with similar performance. In this article, we propose a condensation reduction method to test the feasibility of this idea on practical problems, thereby validating the existing theory. The method currently applies to both fully connected and convolutional networks, with positive results. On a complex combustion acceleration task, we reduced the network to 41.7% of its original size while maintaining prediction accuracy; on the CIFAR10 image classification task, we reduced the network to 11.5% of its original size while maintaining satisfactory validation accuracy. Our method can be applied to most trained neural networks, reducing computational cost and improving inference speed.
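To make the condensation idea concrete, the sketch below merges hidden neurons of a two-layer fully connected network whose incoming (weight, bias) vectors point in nearly the same direction, which is the signature of condensation. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name merge_condensed_neurons, the cosine-similarity threshold, and the PyTorch layer types are all assumptions, and the merge is exact only for positively homogeneous activations such as ReLU, where a neuron's weight norm can be folded into its outgoing weights.

    import torch
    import torch.nn as nn

    def merge_condensed_neurons(fc1: nn.Linear, fc2: nn.Linear,
                                cos_threshold: float = 0.99):
        """Collapse hidden neurons of fc1 whose incoming (weight, bias)
        vectors are nearly parallel, summing their outgoing weights in fc2.
        Exact for ReLU-like (positively homogeneous) activations when the
        vectors are truly parallel; approximate otherwise."""
        W1, b1 = fc1.weight.detach(), fc1.bias.detach()   # (h, in), (h,)
        W2, b2 = fc2.weight.detach(), fc2.bias.detach()   # (out, h), (out,)

        # Fold each neuron's incoming norm into its outgoing weights:
        # a * relu(w.x + b) == (a*n) * relu((w/n).x + b/n) for n > 0.
        v = torch.cat([W1, b1.unsqueeze(1)], dim=1)       # (h, in+1)
        n = v.norm(dim=1).clamp_min(1e-12)
        unit = v / n.unsqueeze(1)
        W2s = W2 * n.unsqueeze(0)

        # Greedy grouping by cosine similarity of the unit vectors.
        reps, group = [], {}
        for i in range(unit.size(0)):
            for g, r in enumerate(reps):
                if torch.dot(unit[i], unit[r]) > cos_threshold:
                    group[i] = g
                    break
            else:
                group[i] = len(reps)
                reps.append(i)

        # Build the reduced layers: one neuron per group, with the
        # group's outgoing weights summed into a single column.
        new_fc1 = nn.Linear(fc1.in_features, len(reps))
        new_fc2 = nn.Linear(len(reps), fc2.out_features)
        with torch.no_grad():
            new_fc1.weight.copy_(unit[reps, :-1])
            new_fc1.bias.copy_(unit[reps, -1])
            new_fc2.bias.copy_(b2)
            new_fc2.weight.zero_()
            for i, g in group.items():
                new_fc2.weight[:, g] += W2s[:, i]
        return new_fc1, new_fc2

Applied to a trained two-layer ReLU network, the returned pair replaces the original layers, and with a threshold near 1 the forward pass is preserved up to the tolerance of the grouping. Extending the same idea to convolutional layers, as the abstract reports, would group filters by the similarity of their flattened kernels.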
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Entropy (Basel) Year: 2024 Document type: Article