Results 1 - 9 of 9
1.
Sensors (Basel); 23(22), 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-38005534

ABSTRACT

As neural networks advance, they are increasingly being applied to structural health monitoring systems (SHMSs). When an SHMS must integrate numerous neural networks, high-performance and low-latency networks are favored. This paper focuses on damage detection based on vibration signals. In contrast to traditional neural network approaches, this study utilizes a stochastic configuration network (SCN). An SCN is an incremental learning network that randomly configures appropriate neurons based on the data and the current error; it is an emerging type of neural network that requires no predefined network structure and is not based on gradient descent. While SCNs define the network structure dynamically, they essentially function as fully connected neural networks, which fail to capture the temporal properties of monitoring data effectively; moreover, they suffer from high inference time and computational cost. To enable faster and more accurate operation within the monitoring system, this paper introduces a stochastic convolutional feature extraction approach that does not rely on backpropagation. Additionally, a random node deletion algorithm is proposed to automatically prune redundant neurons in SCNs, addressing the issue of network node redundancy. Experimental results demonstrate that the feature extraction method improves accuracy by 30% compared to the original SCN, and that the random node deletion algorithm removes approximately 10% of the neurons.
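
To make the workflow concrete, below is a minimal sketch of an SCN-style incremental learner with random node deletion, assuming a toy 1-D regression task. The acceptance test for candidate nodes and the 5% error tolerance for deletion are simplified stand-ins for the paper's supervisory mechanism and redundancy criterion, and the stochastic convolutional feature extraction step is omitted.

```python
# Hedged sketch: SCN-style incremental construction plus random node deletion.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

W, b = [], []                                   # hidden-node parameters

def hidden(X):                                  # sigmoid features of current nodes
    return 1 / (1 + np.exp(-(X @ np.array(W).T + np.array(b))))

residual = y.copy()
beta = None
for _ in range(50):                             # incrementally configure random nodes
    w_c, b_c = rng.uniform(-2, 2, X.shape[1]), rng.uniform(-2, 2)
    W.append(w_c); b.append(b_c)
    H = hidden(X)
    beta_c, *_ = np.linalg.lstsq(H, y, rcond=None)
    if np.mean((y - H @ beta_c) ** 2) < np.mean(residual ** 2):
        beta, residual = beta_c, y - H @ beta_c  # accept the candidate node
    else:
        W.pop(); b.pop()                         # reject it

# Random node deletion: drop a node when the refit error barely changes.
base = np.mean(residual ** 2)
for i in range(len(W) - 1, -1, -1):             # sweep backward so indices stay valid
    if len(W) == 1:
        break
    w_s, b_s = W.pop(i), b.pop(i)
    H = hidden(X)
    beta_t, *_ = np.linalg.lstsq(H, y, rcond=None)
    if np.mean((y - H @ beta_t) ** 2) <= 1.05 * base:
        beta = beta_t                            # redundant node: keep it deleted
    else:
        W.insert(i, w_s); b.insert(i, b_s)       # node was needed: restore it

print(f"{len(W)} nodes kept, final MSE {np.mean((y - hidden(X) @ beta) ** 2):.4f}")
```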

2.
Neural Netw; 172: 106067, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38199151

ABSTRACT

Modern DNNs often include a huge number of parameters that are costly in both computation and memory. Pruning can significantly reduce model complexity and lessen resource demands, and less complex models are also easier to explain and interpret. In this paper, we propose a novel pruning algorithm, Cluster-Restricted Extreme Sparsity Pruning of Redundancy (CRESPR), to prune a neural network into modular units and achieve better pruning efficiency. Using the Hessian matrix, we provide an analytic explanation of why modular structures in a sparse DNN can better maintain performance, especially at extremely high pruning ratios. In CRESPR, each modular unit contains mostly internal connections, which clearly shows how subgroups of input features are processed through a DNN and eventually contribute to classification decisions. Such process-level exposure of internal working mechanisms leads to better interpretability of a black-box DNN model. Extensive experiments were conducted with multiple DNN architectures and datasets, and CRESPR achieves higher pruning performance than current state-of-the-art methods at high and extremely high pruning ratios. Additionally, we show how CRESPR improves model interpretability through a concrete example.
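
As a loose illustration of the cluster-restricted idea, the sketch below prunes a single weight matrix so that the surviving connections concentrate inside modular units. The k-means grouping, the way input neurons are assigned to modules, the cross-module discount, and the 95% sparsity target are all illustrative assumptions rather than the CRESPR selection rules.

```python
# Hedged sketch: magnitude pruning biased toward intra-module connections.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 128))          # trained weights: 128 inputs -> 64 outputs
k, keep_ratio, penalty = 4, 0.05, 0.1       # 4 modules, 95% sparsity, cross-module discount

def kmeans(X, k, iters=20):                 # tiny k-means, enough for the sketch
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

out_lab = kmeans(W, k)                      # group output neurons by their weight rows
# assign each input neuron to the module it connects to most strongly
mass = np.array([np.abs(W[out_lab == j]).sum(0) for j in range(k)])
in_lab = mass.argmax(0)

# Score weights by magnitude, discounting cross-module connections so that
# the survivors form mostly-internal modular units.
same = out_lab[:, None] == in_lab[None, :]
score = np.abs(W) * np.where(same, 1.0, penalty)
mask = score >= np.quantile(score, 1 - keep_ratio)
print(f"sparsity {1 - mask.mean():.2%}, "
      f"intra-module share of kept weights {same[mask].mean():.2%}")
```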


Subject(s)
Algorithms; Neural Networks, Computer
3.
Neural Netw; 171: 229-241, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38101291

ABSTRACT

Deep learning models have been widely used during the last decade due to their outstanding learning and abstraction capacities. However, one of the main challenges any scientist faces when using deep learning models is establishing the network's architecture. Because of this difficulty, data scientists usually build overly complex models; as a result, most of these models are computationally intensive and impose a large memory footprint, generating high costs, contributing to climate change, and hindering their use on computation-limited devices. In this paper, we propose a novel method for constructing dense feed-forward neural networks based on pruning and transfer learning. Its performance has been thoroughly assessed on classification and regression problems. Without any accuracy loss, our approach can reduce the number of parameters by more than 70%. Moreover, when the pruning parameter is chosen carefully, most of the refined models outperform the original ones. We have also verified that our method not only identifies a better network architecture but also facilitates knowledge transfer between the original and refined models. These results show that our construction method helps design models that are not only more efficient but also more effective.
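
The prune-and-transfer idea can be sketched as follows, assuming the method scores hidden units by the norm of their outgoing weights, drops the weak ones, and warm-starts a smaller dense network from the survivors; the threshold tau below stands in for the pruning parameter mentioned in the abstract, and the weights are random placeholders for a trained model.

```python
# Hedged sketch: neuron-level pruning followed by weight transfer.
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.standard_normal((256, 784)) * 0.05   # trained input -> hidden weights
W2 = rng.standard_normal((10, 256)) * 0.05    # trained hidden -> output weights

tau = 1.0                                     # pruning parameter (assumed form)
importance = np.linalg.norm(W2, axis=0)       # outgoing-weight norm per hidden unit
keep = importance >= tau * importance.mean()

# Transfer: the refined, smaller network inherits the surviving weights
# as its initialization and is then fine-tuned on the original task.
W1_small, W2_small = W1[keep], W2[:, keep]
print(f"hidden units: {W1.shape[0]} -> {keep.sum()}, "
      f"parameters in these blocks cut by {1 - keep.sum() / W1.shape[0]:.0%}")
```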


Subject(s)
Climate Change; Neural Networks, Computer; Concept Formation; Knowledge
4.
Fundam Res; 4(4): 941-950, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39156574

ABSTRACT

Neural network pruning is a popular approach to reducing the computational complexity of deep neural networks. In recent years, as growing evidence shows that conventional network pruning methods rely on inappropriate proxy metrics, and as new types of hardware become increasingly available, hardware-aware network pruning, which incorporates hardware characteristics into the pruning loop, has gained growing attention. Both network accuracy and hardware efficiency (latency, memory consumption, etc.) are critical to the success of network pruning, but the conflict between these objectives makes it impossible to find a single optimal solution. Previous studies mostly convert hardware-aware network pruning into a single-objective optimization problem. In this paper, we propose to solve the hardware-aware network pruning problem with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate the problem as a multi-objective optimization problem and propose a novel memetic MOEA, namely HAMP, which combines efficient portfolio-based selection with surrogate-assisted local search to solve it. Empirical studies demonstrate the potential of MOEAs to provide a set of alternative solutions simultaneously, as well as the superiority of HAMP over the state-of-the-art hardware-aware network pruning method.
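
A minimal sketch of the multi-objective formulation is shown below: each individual is a vector of layer-wise pruning ratios, evolved against two objectives that must both be minimized. The accuracy-loss and latency functions are toy proxies, and the survivor selection is a bare Pareto filter; HAMP's portfolio-based selection and surrogate-assisted local search are not reproduced.

```python
# Hedged sketch: evolutionary search over layer-wise pruning ratios
# with two conflicting objectives (accuracy loss vs. latency).
import numpy as np

rng = np.random.default_rng(3)
layers, pop_size, gens = 8, 24, 40

def objectives(r):                  # r: pruning ratio per layer in [0, 0.95]
    acc_loss = np.sum(r ** 2 * np.linspace(1, 2, layers))   # proxy accuracy drop
    latency = np.sum((1 - r) * np.linspace(2, 1, layers))   # proxy inference cost
    return np.array([acc_loss, latency])

def pareto_front(F):                # indices of non-dominated points
    return [i for i, f in enumerate(F)
            if not any((g <= f).all() and (g < f).any() for g in F)]

pop = rng.uniform(0, 0.9, (pop_size, layers))
for _ in range(gens):
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 0.95)
    both = np.vstack([pop, children])
    F = np.array([objectives(r) for r in both])
    survivors = both[pareto_front(F)][:pop_size]
    pad = pop_size - len(survivors)          # refill with random individuals
    if pad > 0:
        survivors = np.vstack([survivors, both[rng.choice(len(both), pad)]])
    pop = survivors

F = np.array([objectives(r) for r in pop])
for i in pareto_front(F):
    print(f"acc-loss proxy {F[i, 0]:.2f} | latency proxy {F[i, 1]:.2f}")
```

Each printed point is one alternative pruned configuration, which matches the abstract's point that an MOEA hands the practitioner a set of trade-off solutions rather than a single compromise.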

5.
J Neural Eng; 20(4), 2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37429288

ABSTRACT

Objective. Neural decoding, an important area of neural engineering, helps to link neural activity to behavior. Deep neural networks (DNNs), which are becoming increasingly popular in many machine learning application fields, show promising performance in neural decoding compared to traditional methods. Various neural decoding applications, such as brain-computer interfaces, require both high decoding accuracy and real-time decoding speed. Pruning methods are used to produce compact DNN models for faster computation. Greedy inter-layer order with Random Selection (GRS) is a recently designed structured pruning method that derives compact DNN models for calcium-imaging-based neural decoding. Although GRS has advantages in terms of detailed structure analysis and consideration of both learned information and model structure during the pruning process, the method is very computationally intensive and is not feasible when large-scale DNN models must be pruned within typical constraints on time and computational resources. Large-scale DNN models arise in neural decoding when large numbers of neurons are involved. In this paper, we build on GRS to develop a new structured pruning algorithm called jump GRS (JGRS) that is designed to efficiently compress large-scale DNN models. Approach. On top of GRS, JGRS implements a 'jump mechanism', which bypasses retraining of intermediate models when model accuracy is relatively insensitive to pruning operations. The design of the jump mechanism is motivated by identifying different phases of the structured pruning process, during the earlier of which retraining can be done infrequently without sacrificing accuracy. The jump mechanism significantly speeds up execution of the pruning process and greatly enhances its scalability. We compare the pruning performance and speed of JGRS and GRS with extensive experiments in the context of neural decoding. Main results. Our results demonstrate that JGRS prunes significantly faster than GRS while producing models that are similarly compact to those generated by GRS. Significance. In our experiments, JGRS achieves on average 9%-20% more compressed models than GRS at 2-8 times the speed (less time required for pruning) across four different initial models on a relevant dataset for neural data analysis.
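
The jump mechanism itself is simple enough to sketch: retraining is skipped while accuracy stays within a tolerance of the last retrained baseline and is triggered only once the cumulative drop exceeds it. The accuracy curve and the assumption that retraining fully recovers the drop are toy stand-ins for a real model and dataset; the GRS group-selection order is not shown.

```python
# Hedged sketch: the "jump" skips retraining until accuracy has
# drifted more than `tolerance` below the last retrained baseline.
n, bonus, steps, retrains = 128, 0.0, 0, 0
tolerance = 0.01

def evaluate(n, bonus):          # stand-in: accuracy decays slowly as channels go
    return 0.90 * (n / 128) ** 0.05 + bonus

baseline = evaluate(n, bonus)
while n > 32:
    n -= 1                       # prune one channel group, no retraining yet
    steps += 1
    acc = evaluate(n, bonus)
    if baseline - acc > tolerance:       # jump ends: retrain and re-anchor
        bonus += baseline - acc          # stand-in: retraining recovers the drop
        acc = evaluate(n, bonus)
        retrains += 1
        baseline = acc

print(f"{steps} pruning steps but only {retrains} retraining rounds; "
      f"final accuracy {acc:.3f}")
```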


Subject(s)
Brain-Computer Interfaces; Neural Networks, Computer; Neurons; Algorithms; Calcium
6.
J Imaging; 8(3), 2022 Mar 4.
Article in English | MEDLINE | ID: mdl-35324619

ABSTRACT

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
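
A minimal sketch of the selective-decay idea on a toy linear model is given below: at every step, the weights that the pruning criterion currently slates for removal (smallest magnitude here) receive an extra decay term that grows over training, so the final hard prune removes weights that are already near zero. The magnitude criterion, the exponential schedule, and all constants are illustrative assumptions rather than SWD's exact formulation.

```python
# Hedged sketch: selective weight decay during training of a linear model.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((512, 50))
true_w = np.where(rng.random(50) < 0.2, rng.standard_normal(50), 0.0)
y = X @ true_w + 0.01 * rng.standard_normal(512)

w = rng.standard_normal(50) * 0.1
lr, wd, target_sparsity = 0.05, 1e-4, 0.8
k = int(target_sparsity * w.size)
for step in range(500):
    grad = 2 / len(X) * X.T @ (X @ w - y)       # MSE gradient
    slated = np.argsort(np.abs(w))[:k]          # weights the criterion would prune
    a = wd * 10 ** (4 * step / 500)             # selective decay grows over training
    decay = np.full_like(w, wd)                 # everyone gets base weight decay
    decay[slated] += a                          # slated weights get much more
    w -= lr * (grad + decay * w)

w[np.argsort(np.abs(w))[:k]] = 0                # final hard prune is now nearly free
print(f"zeroed {np.sum(w == 0)}/{w.size} weights; "
      f"support agreement with ground truth {np.mean((w != 0) == (true_w != 0)):.0%}")
```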

7.
Neural Netw; 147: 103-112, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34998270

ABSTRACT

Neural network pruning can effectively trim over-parameterized neural networks by removing a portion of the network parameters. However, traditional rule-based approaches depend on manual experience, and existing heuristic search methods in discrete search spaces are usually time-consuming and sub-optimal. In this paper, we develop a differentiable multi-pruner and predictor (DMPP) to prune neural networks automatically. The pruner, composed of learnable parameters, generates the pruning ratios of all convolutional layers as a continuous representation of the network. A neural-network-based predictor is employed to predict the performance of different structures, which accelerates the search process. Together, the pruner and predictor enable us to directly employ gradient-based optimization to find a better structure. In addition, a multi-pruner is presented to improve search efficiency, and knowledge distillation is leveraged to improve the performance of the pruned network. To evaluate the effectiveness of the proposed method, extensive experiments are performed on the CIFAR-10, CIFAR-100, and ImageNet datasets with VGGNet and ResNet. Results show that DMPP achieves better performance than many previous state-of-the-art methods.
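
The differentiable search can be sketched as follows: learnable pruner parameters are squashed through a sigmoid into per-layer pruning ratios, and a differentiable predictor scores the resulting structure so the ratios can be updated by plain gradient ascent. The closed-form quadratic predictor and the sensitivity/cost vectors below are stand-ins for DMPP's learned neural predictor; the multi-pruner and knowledge distillation parts are omitted.

```python
# Hedged sketch: gradient-based search over continuous pruning ratios.
import numpy as np

layers = 6
theta = np.zeros(layers)                    # learnable pruner parameters
sens = np.linspace(0.5, 2.0, layers)        # toy per-layer accuracy sensitivity
flops = np.linspace(2.0, 0.5, layers)       # toy per-layer cost saved by pruning
lam, lr = 0.3, 0.5

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for _ in range(300):
    r = sigmoid(theta)                      # pruning ratio per layer
    # stand-in predictor: J(r) = -sum(sens * r^2) + lam * sum(flops * r)
    dJ_dr = -2 * sens * r + lam * flops     # gradient of the objective w.r.t. r
    theta += lr * dJ_dr * r * (1 - r)       # chain rule through the sigmoid

print("learned pruning ratios:", np.round(sigmoid(theta), 2))
```

With this toy objective the ratios settle near lam * flops / (2 * sens) per layer, i.e. cheap-to-prune, insensitive layers end up pruned hardest, which is the behavior a learned predictor is meant to discover.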


Subject(s)
Neural Networks, Computer; Dimethylphenylpiperazinium Iodide; Heuristics
8.
Front Comput Neurosci; 15: 760554, 2021.
Article in English | MEDLINE | ID: mdl-34776916

ABSTRACT

Neural network pruning is critical to alleviating the high computational cost of deep neural networks on resource-limited devices. Conventional network pruning methods compress the network based on hand-crafted rules with a pre-defined pruning ratio (PR), which fails to consider the variety of channels across different layers, thus resulting in a sub-optimal pruned model. To alleviate this issue, this study proposes a genetic wavelet channel search (GWCS) based pruning framework, where the pruning process is modeled as a multi-stage genetic optimization procedure. Its main ideas are twofold: (1) it encodes all the channels of the pretrained network and divides them into multiple search spaces according to the different functional convolutional layers, from concrete to abstract; (2) it develops a fitness function based on wavelet channel aggregation to explore the most representative and discriminative channels at each layer and prune the network dynamically. In the experiments, the proposed GWCS is evaluated on the CIFAR-10, CIFAR-100, and ImageNet datasets with two popular families of deep convolutional neural networks (CNNs), ResNet and VGGNet. The results demonstrate that GWCS outperforms state-of-the-art pruning algorithms in both accuracy and compression rate. Notably, GWCS reduces FLOPs by more than 73.1% when pruning ResNet-32, with a 0.79% accuracy improvement on CIFAR-100.
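
A schematic sketch of the genetic channel search for a single layer is shown below: individuals are binary channel masks evolved with tournament selection, one-point crossover, and bit-flip mutation. The fitness stand-in trades a toy importance score against the fraction of channels kept; the wavelet-aggregation fitness and the stage-wise search spaces of GWCS are not reproduced.

```python
# Hedged sketch: genetic search over binary channel masks for one layer.
import numpy as np

rng = np.random.default_rng(6)
n_ch, pop_size, gens = 64, 30, 60
importance = rng.random(n_ch) ** 3               # skewed toy channel importances

def fitness(mask):
    acc_proxy = importance[mask.astype(bool)].sum() / importance.sum()
    return acc_proxy - 0.4 * mask.mean()         # accuracy proxy vs. compression

pop = rng.integers(0, 2, (pop_size, n_ch))
for _ in range(gens):
    fit = np.array([fitness(m) for m in pop])
    idx = rng.integers(0, pop_size, (pop_size, 2))        # tournament selection
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    cut = rng.integers(1, n_ch, pop_size)                 # one-point crossover
    mates = parents[rng.permutation(pop_size)]
    children = np.where(np.arange(n_ch) < cut[:, None], parents, mates)
    flip = rng.random(children.shape) < 0.02              # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = max(pop, key=fitness)
print(f"kept {best.sum()}/{n_ch} channels, fitness {fitness(best):.3f}")
```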

9.
JMIR Form Res; 4(12): e17265, 2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33350391

ABSTRACT

BACKGROUND: Artificial neural networks have achieved unprecedented success in the medical domain. This success depends on the availability of massive and representative datasets. However, data collection is often prevented by privacy concerns, and people want control over their sensitive information during both training and use. OBJECTIVE: To address security and privacy issues, we propose a privacy-preserving method for the analysis of distributed medical data. The proposed method, termed stochastic channel-based federated learning (SCBFL), enables participants to train a high-performance model cooperatively and in a distributed manner without sharing their inputs. METHODS: We designed, implemented, and evaluated a channel-based update algorithm for a central server in a distributed system. The update algorithm selects the channels corresponding to the most active features in a training loop and then uploads them as the information learned from local datasets. A pruning process, which serves as a model accelerator, was further applied to the algorithm based on the validation set. RESULTS: We constructed a distributed system consisting of 5 clients and 1 server. Our trials showed that the SCBFL method can achieve an area under the receiver operating characteristic curve (AUC-ROC) of 0.9776 and an area under the precision-recall curve (AUC-PR) of 0.9695 with only 10% of channels shared with the server. Compared with the federated averaging algorithm, SCBFL achieved a 0.05388 higher AUC-ROC and a 0.09695 higher AUC-PR. In addition, our experiment showed that the pruning process saves 57% of the time at the cost of only a 0.0047 reduction in AUC-ROC and a 0.0068 reduction in AUC-PR. CONCLUSIONS: In this experiment, our model demonstrated better performance and a higher saturation speed than the federated averaging method, which reveals all parameters of the local models to the server. The saturation rate of performance can be improved by introducing a pruning process, and further improvement can be achieved by tuning the pruning rate.
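
The channel-based update is concrete enough to sketch: each client uploads only the slices of its weight tensor belonging to its most active channels (10% here, matching the shared fraction reported above, with 5 clients as in the experiment), and the server averages each channel over the clients that reported it. Local training, activity estimation, and the pruning accelerator are reduced to random stand-ins.

```python
# Hedged sketch: server-side aggregation of the top-k most active channels.
import numpy as np

rng = np.random.default_rng(7)
n_clients, n_channels, dim, share = 5, 32, 16, 0.10
global_w = np.zeros((n_channels, dim))           # one weight row per channel

# stand-ins for locally trained weights and per-channel activity scores
client_ws = [global_w + rng.normal(0, 0.1, global_w.shape) for _ in range(n_clients)]
activity = [rng.random(n_channels) for _ in range(n_clients)]

k = max(1, int(share * n_channels))              # channels uploaded per client
sum_w = np.zeros_like(global_w)
count = np.zeros(n_channels)
for w, act in zip(client_ws, activity):
    top = np.argsort(act)[-k:]                   # this client's most active channels
    sum_w[top] += w[top]                         # upload only those slices
    count[top] += 1

updated = count > 0                              # average channel-wise on the server
global_w[updated] = sum_w[updated] / count[updated, None]
print(f"channels updated this round: {updated.sum()}/{n_channels} "
      f"({k} uploaded per client)")
```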
