Results 1 - 20 of 20
1.
Network ; 34(1-2): 65-83, 2023.
Article in English | MEDLINE | ID: mdl-36625845

ABSTRACT

This paper proposes a two-phase training method for designing the codewords that map the cluster indices of the input feature vectors to the outputs of new perceptrons with multi-pulse-type activation functions. The proposed method is applied to classifying two types of tachycardia. First, the total number of new perceptrons is initialized to the dimension of the input feature vectors. Next, a set of new perceptrons is designed, each with a single-pulse-type activation function. Then, the new perceptrons with multi-pulse-type activation functions are designed based on those with single-pulse-type activation functions. After that, the codewords are assigned according to the outputs of the new perceptrons with the multi-pulse-type activation functions. Finally, a condition on the codewords is checked. The significance of this work is that zero classification error is guaranteed efficiently by using more than one new perceptron with a multi-pulse-type activation function whenever the feature space can be linearly partitioned into multiple clusters. Computer numerical simulation results show that the proposed method outperforms conventional perceptrons with the sign-type activation function.
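A minimal sketch of the pulse-type activation idea and the resulting codewords, added for illustration only (it is not the paper's implementation); the weights and pulse intervals below are hypothetical, and each unit simply fires 1 when its pre-activation falls inside one of its pulse intervals.

```python
import numpy as np

def multi_pulse_activation(z, intervals):
    """Fire 1 if the pre-activation z lies inside any pulse interval, else 0."""
    return int(any(a <= z <= b for a, b in intervals))

def codeword(x, weights, pulse_intervals):
    """Concatenate the outputs of several pulse-type perceptrons into a codeword."""
    return tuple(multi_pulse_activation(np.dot(w, x), ivals)
                 for w, ivals in zip(weights, pulse_intervals))

# Hypothetical example: two units map 2-D feature vectors to 2-bit codewords,
# so each codeword can index one linearly separated cluster.
weights = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
pulse_intervals = [[(-1.0, 0.5), (2.0, 3.0)], [(0.0, 1.5)]]
print(codeword(np.array([0.2, 0.7]), weights, pulse_intervals))   # -> (1, 1)
```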


Subjects
Algorithms; Neural Networks, Computer; Computer Simulation; Heart Rate
2.
Sensors (Basel) ; 23(23)2023 Nov 22.
Article in English | MEDLINE | ID: mdl-38067692

ABSTRACT

With the advent of autonomous vehicle applications, the importance of LiDAR point cloud 3D object detection cannot be overstated. Recent studies have demonstrated that methods for aggregating features from voxels can accurately and efficiently detect objects in large, complex 3D detection scenes. Nevertheless, most of these methods do not filter background points well and have inferior detection performance for small objects. To ameliorate this issue, this paper proposes an Attention-based and Multiscale Feature Fusion Network (AMFF-Net), which utilizes a Dual-Attention Voxel Feature Extractor (DA-VFE) and a Multi-scale Feature Fusion (MFF) Module to improve the precision and efficiency of 3D object detection. The DA-VFE considers pointwise and channelwise attention and integrates them into the Voxel Feature Extractor (VFE) to enhance key point cloud information in voxels and refine more-representative voxel features. The MFF Module consists of self-calibrated convolutions, a residual structure, and a coordinate attention mechanism, which acts as a 2D Backbone to expand the receptive domain and capture more contextual information, thus better capturing small object locations, enhancing the feature-extraction capability of the network and reducing the computational overhead. We performed evaluations of the proposed model on the nuScenes dataset with a large number of driving scenarios. The experimental results showed that the AMFF-Net achieved 62.8% in the mAP, which significantly boosted the performance of small object detection compared to the baseline network and significantly reduced the computational overhead, while the inference speed remained essentially the same. AMFF-Net also achieved advanced performance on the KITTI dataset.
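For illustration, a small NumPy sketch of the dual-attention idea inside one voxel (point-wise and channel-wise reweighting before aggregation). It is a toy stand-in under assumed shapes, not the DA-VFE code, and the saliency scores are deliberately simplistic.

```python
import numpy as np

def softmax(a, axis):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention_voxel_feature(points):
    """points: (N, C) features of the N points falling inside one voxel."""
    point_w = softmax(points.mean(axis=1, keepdims=True), axis=0)   # attention over points
    chan_w = softmax(points.mean(axis=0, keepdims=True), axis=1)    # attention over channels
    reweighted = points * point_w * chan_w       # emphasise key points and channels
    return reweighted.sum(axis=0)                # (C,) aggregated voxel feature

voxel_points = np.random.rand(8, 16)             # 8 points, 16 feature channels (toy sizes)
print(dual_attention_voxel_feature(voxel_points).shape)   # (16,)
```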

3.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679558

ABSTRACT

Attention refers to the human psychological ability to focus on doing an activity. The attention assessment plays an important role in diagnosing attention deficit hyperactivity disorder (ADHD). In this paper, the attention assessment is performed via a classification approach. First, the single-channel electroencephalograms (EEGs) are acquired from various participants when they perform various activities. Then, fast Fourier transform (FFT) is applied to the acquired EEGs, and the high-frequency components are discarded for performing denoising. Next, empirical mode decomposition (EMD) is applied to remove the underlying trend of the signals. In order to extract more features, singular spectrum analysis (SSA) is employed to increase the total number of the components. Finally, some typical models such as the random forest-based classifier, the support vector machine (SVM)-based classifier, and the back-propagation (BP) neural network-based classifier are used for performing the classifications. Here, the percentages of the classification accuracies are employed as the attention scores. The computer numerical simulation results show that our proposed method yields a higher classification performance compared to the traditional methods without performing the EMD and SSA.
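A hedged sketch of the overall pipeline shape described above: FFT-based removal of high-frequency components, a basic singular spectrum analysis expansion, and a random-forest classifier. The EMD step is omitted, and the epoch data, labels, window length and cutoff are placeholders, so this illustrates the flow rather than the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fft_lowpass(x, fs, cutoff_hz):
    """Discard high-frequency FFT components, as in the denoising step."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

def ssa_components(x, window, n_components):
    """Basic singular spectrum analysis: Hankel embedding + SVD + diagonal averaging."""
    K = len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = []
    for i in range(min(n_components, len(s))):
        Hi = s[i] * np.outer(U[:, i], Vt[i])
        comps.append(np.array([np.mean(np.diag(Hi[:, ::-1], k))
                               for k in range(K - 1, -window, -1)]))
    return np.array(comps)                                      # (n_components, len(x))

# Hypothetical usage: one feature vector per EEG epoch (component energies), toy labels.
fs = 256
epochs = np.random.randn(40, fs * 2)
labels = np.random.randint(0, 2, size=40)
feats = []
for e in epochs:
    clean = fft_lowpass(e, fs, cutoff_hz=40.0)
    comps = ssa_components(clean, window=32, n_components=5)
    feats.append((comps ** 2).mean(axis=1))                     # energy of each component
clf = RandomForestClassifier(n_estimators=100).fit(np.array(feats), labels)
```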


Assuntos
Eletroencefalografia , Redes Neurais de Computação , Humanos , Análise de Fourier , Eletroencefalografia/métodos , Máquina de Vetores de Suporte , Algoritmo Florestas Aleatórias
4.
Sensors (Basel) ; 22(21)2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36365828

ABSTRACT

Recently, deep learning-based image quality enhancement models have been proposed to improve the perceptual quality of distorted synthesized views impaired by compression and the Depth Image-Based Rendering (DIBR) process in a multi-view video system. However, due to the lack of Multi-view Video plus Depth (MVD) data, the training data for quality enhancement models is small, which limits the performance and progress of these models. Augmenting the training data to enhance the synthesized view quality enhancement (SVQE) models is a feasible solution. In this paper, a deep learning-based SVQE model using more synthetic synthesized view images (SVIs) is suggested. To simulate the irregular geometric displacement of DIBR distortion, a random irregular polygon-based SVI synthesis method is proposed based on existing massive RGB/RGBD data, and a synthetic synthesized view database is constructed, which includes synthetic SVIs and the DIBR distortion mask. Moreover, to further guide the SVQE models to focus more precisely on DIBR distortion, a DIBR distortion mask prediction network which predicts the position and variance of DIBR distortion is embedded into the SVQE models. The experimental results on public MVD sequences demonstrate that the PSNR performance of the existing SVQE models, e.g., DnCNN, NAFNet, and TSAN, pre-trained on NYU-based synthetic SVIs improves by 0.51, 0.36, and 0.26 dB on average, respectively, while the MPPSNRr performance also improves by 0.86, 0.25, and 0.24 on average, respectively. In addition, by introducing the DIBR distortion mask prediction network, the SVI quality obtained by the DnCNN and NAFNet pre-trained on NYU-based synthetic SVIs is further enhanced by 0.02 and 0.03 dB on average in terms of the PSNR and 0.004 and 0.121 on average in terms of the MPPSNRr.
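As an illustration of the random irregular polygon idea, here is a hedged sketch that draws one random polygon mask on an existing RGB frame and displaces the pixels inside it. The sizes, displacement and image are hypothetical; the paper's actual synthesis and distortion model are not reproduced.

```python
import numpy as np
from PIL import Image, ImageDraw

def random_polygon_mask(height, width, n_vertices=8, rng=None):
    """Binary mask with one random irregular polygon, mimicking a DIBR distortion region."""
    rng = rng or np.random.default_rng()
    cx, cy = rng.uniform(0, width), rng.uniform(0, height)
    angles = np.sort(rng.uniform(0, 2 * np.pi, n_vertices))
    radii = rng.uniform(0.05, 0.25, n_vertices) * min(height, width)
    pts = [(cx + r * np.cos(a), cy + r * np.sin(a)) for a, r in zip(angles, radii)]
    mask = Image.new("L", (width, height), 0)
    ImageDraw.Draw(mask).polygon(pts, fill=255)
    return np.array(mask) > 0

# Hypothetical usage: corrupt a clean RGB frame inside the mask to build a training pair.
clean = np.uint8(np.random.rand(240, 320, 3) * 255)      # stand-in for a real RGB image
mask = random_polygon_mask(240, 320)
distorted = clean.copy()
distorted[mask] = np.roll(clean, shift=5, axis=1)[mask]   # crude geometric displacement
```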


Assuntos
Compressão de Dados , Aprendizado Profundo , Aumento da Imagem/métodos , Compressão de Dados/métodos
5.
Sensors (Basel) ; 20(11)2020 Jun 06.
Article in English | MEDLINE | ID: mdl-32517226

ABSTRACT

This paper proposes a framework combining complementary ensemble empirical mode decomposition with both independent component analysis and non-negative matrix factorization for estimating the heart rate and the respiratory rate from the photoplethysmography (PPG) signal. After performing complementary ensemble empirical mode decomposition on the PPG signal, a finite number of intrinsic mode functions are obtained. These intrinsic mode functions are then divided into two groups for further analysis via independent component analysis and non-negative matrix factorization. A surrogate cardiac signal related to the heart activity and a surrogate respiratory signal related to the respiratory activity are reconstructed to estimate the heart rate and the respiratory rate, respectively. Finally, different records of signals acquired from the Medical Information Mart for Intensive Care database, downloaded from the PhysioNet Automated Teller Machine (ATM) data bank, are employed to demonstrate the performance of the proposed method. The results show that the proposed method outperforms both the digital filtering approach and the conventional empirical mode decomposition-based methods in terms of reconstructing the surrogate cardiac and respiratory signals from the PPG signal, as well as achieving higher accuracy and higher reliability in estimating the heart rate and the respiratory rate.
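A hedged sketch of the estimation flow, assuming the intrinsic mode functions have already been obtained by a prior (CEEMD) step that is not shown: one IMF group goes through ICA, the other through NMF, and each rate is read off the dominant spectral peak. The grouping index and all parameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA, NMF

def dominant_rate_bpm(signal, fs):
    """Rate (beats/breaths per minute) from the strongest non-DC spectral peak."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spec[1:]) + 1]

def estimate_hr_rr(imfs, fs, split):
    """imfs: (n_imfs, n_samples) from a prior decomposition of the PPG signal.
    The first `split` IMFs are treated as the cardiac group, the rest as respiratory."""
    cardiac_group = imfs[:split].T                       # (n_samples, split)
    resp_group = imfs[split:].T
    # In practice the cardiac-related component would be selected by inspection;
    # here the first independent component is simply taken.
    cardiac_src = FastICA(random_state=0).fit_transform(cardiac_group)[:, 0]
    resp_act = NMF(n_components=1, init="nndsvda", max_iter=500).fit_transform(
        resp_group - resp_group.min())                   # NMF requires non-negative input
    return dominant_rate_bpm(cardiac_src, fs), dominant_rate_bpm(resp_act[:, 0], fs)
```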


Assuntos
Frequência Cardíaca , Fotopletismografia , Taxa Respiratória , Processamento de Sinais Assistido por Computador , Algoritmos , Humanos , Reprodutibilidade dos Testes
6.
Sensors (Basel) ; 20(21)2020 Oct 24.
Article in English | MEDLINE | ID: mdl-33114352

ABSTRACT

This paper aims to develop an activity recognition algorithm to allow parents to monitor their children at home after school. A common method used to analyze electroencephalograms is to use infinite impulse response filters to decompose the electroencephalograms into various brain wave components. However, nonlinear phase distortions will be introduced by these filters. To address this issue, this paper applies empirical mode decomposition to decompose the electroencephalograms into various intrinsic mode functions and categorize them into four groups. In addition, common features used to analyze electroencephalograms are energy and entropy. However, because there are only two features, the available information is limited. To address this issue, this paper extracts 11 different physical quantities from each group of intrinsic mode functions, and these are employed as the features. Finally, this paper uses the random forest to perform activity recognition. It is worth noting that the conventional approach for performing activity recognition is based on a single type of signal, which limits the recognition performance. In this paper, a multi-modal system based on electroencephalograms, image sequences, and motion signals is used for activity recognition. The numerical simulation results show that the percentage accuracies based on three types of signal are higher than those based on two types of signal or the individual signals. This demonstrates the advantages of using the multi-modal approach for activity recognition. In addition, our proposed empirical mode decomposition-based method outperforms the conventional filtering-based method. This demonstrates the advantages of using the nonlinear and adaptive time frequency approach for activity recognition.
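A brief sketch of how physical quantities might be computed from a group of intrinsic mode functions and fed to a random forest. Only a handful of representative features are shown (energy, spectral entropy and simple shape statistics); the paper's exact eleven quantities and the multi-modal fusion are not reproduced, and the data below are toy placeholders.

```python
import numpy as np
from scipy.stats import skew, kurtosis, entropy
from sklearn.ensemble import RandomForestClassifier

def group_features(imf_group):
    """A few representative physical quantities for one group of IMFs (n_imfs, n_samples)."""
    x = imf_group.sum(axis=0)                            # partial reconstruction
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / power.sum()
    return np.array([
        np.sum(x ** 2),                                  # energy
        entropy(p),                                      # spectral entropy
        x.std(), skew(x), kurtosis(x),                   # simple shape statistics
        np.mean(np.abs(np.diff(x))),                     # mean absolute first difference
    ])

# Hypothetical usage: four IMF groups per EEG epoch -> concatenated feature vector.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((30, 4, 3, 256))            # 30 epochs, 4 groups, 3 IMFs each
labels = rng.integers(0, 3, size=30)                     # toy activity labels
X = np.array([np.concatenate([group_features(g) for g in ep]) for ep in epochs])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```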

7.
Sensors (Basel) ; 20(2)2020 Jan 07.
Article in English | MEDLINE | ID: mdl-31936084

ABSTRACT

The novelty and contribution of this paper consist of applying an iterative joint singular spectrum analysis and low-rank decomposition approach for suppressing the spikes in an electroencephalogram. First, the electroencephalogram is filtered by an ideal lowpass filter by removing its discrete Fourier transform coefficients outside the δ, θ, α, β and γ wave bands. Second, singular spectrum analysis is performed on the filtered electroencephalogram to obtain the singular spectrum analysis components, which are sorted according to the magnitudes of their corresponding eigenvalues. The singular spectrum analysis components are sequentially added together, starting from the last component. If the variance of the summed component under unit energy normalization is larger than a threshold value, the summation is terminated. The summed component forms the first scale of the electroencephalogram, and the remaining singular spectrum analysis components are summed separately to form the residue of the electroencephalogram. Next, low-rank decomposition is performed on the residue to obtain both a low-rank component and a sparse component. The low-rank component is added to the previous scale of the electroencephalogram to obtain the next scale. Finally, the above procedure is repeated on the sparse component until the variance of the current scale of the electroencephalogram under unit energy normalization is larger than another threshold value. Computer numerical simulation results show that the spike suppression performance of the proposed method outperforms that of the state-of-the-art methods.
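A hedged sketch of one reading of the grouping rule above: decompose with SSA, then accumulate components from the weakest upward until the unit-energy variance of the running sum exceeds a threshold; the remainder is the residue passed to the low-rank decomposition (not shown here). Window length and threshold are placeholders.

```python
import numpy as np

def ssa_decompose(x, window):
    """All SSA components of x, ordered by decreasing singular value."""
    K = len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Hi = s[i] * np.outer(U[:, i], Vt[i])
        comps.append(np.array([np.mean(np.diag(Hi[:, ::-1], k))
                               for k in range(K - 1, -window, -1)]))
    return np.array(comps)

def first_scale(x, window, var_threshold):
    """Sum SSA components from the weakest upward until the unit-energy variance
    of the running sum exceeds the threshold; the rest form the residue."""
    comps = ssa_decompose(x, window)
    running = np.zeros_like(x)
    for i in range(len(comps) - 1, -1, -1):              # start from the last component
        candidate = running + comps[i]
        normalized = candidate / (np.linalg.norm(candidate) + 1e-12)
        if np.var(normalized) > var_threshold:
            break
        running = candidate
    residue = x - running
    return running, residue
```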


Assuntos
Algoritmos , Eletroencefalografia , Processamento de Sinais Assistido por Computador , Humanos , Fatores de Tempo
8.
Sensors (Basel) ; 18(5)2018 Apr 27.
Article in English | MEDLINE | ID: mdl-29702629

ABSTRACT

This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and least squares distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast-convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into a two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy.
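For reference, a compact sketch of standard multiplicative updates for β-divergence NMF with a fractional β, the basic machinery the deconvolution above builds on; the convolutive (time–frequency deconvolution) structure, the sparsity term and the β estimation from the paper are not reproduced.

```python
import numpy as np

def beta_nmf(V, rank, beta=1.5, n_iter=200, eps=1e-9, rng=None):
    """Multiplicative-update NMF under the beta-divergence (beta may be fractional).
    beta = 0, 1, 2 recover the Itakura-Saito, Kullback-Leibler and least-squares costs."""
    rng = rng or np.random.default_rng(0)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# Hypothetical usage on a magnitude spectrogram V (frequency x time):
V = np.abs(np.random.randn(129, 400)) + 1e-6
W, H = beta_nmf(V, rank=8, beta=1.3)   # W: spectral dictionary, H: temporal activations
```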

9.
IEEE Trans Biomed Eng ; 71(5): 1677-1686, 2024 May.
Article in English | MEDLINE | ID: mdl-38147418

ABSTRACT

Spike sorting is crucial for studying how neurons individually and synergistically encode and decode behaviors. However, existing spike sorting algorithms perform unsatisfactorily in real scenarios, where heavy noise and overlapping samples are common and spikes from different neurons can be similar. To address such challenging scenarios, we propose an automatic spike sorting method in this paper, which integrally combines low-rank and sparse representation (LRSR) into a unified model. In particular, LRSR models spikes through low-rank optimization, uncovering global data structure for handling similar and overlapped samples. To eliminate the influence of embedded noise, LRSR uses a sparse constraint, effectively separating spikes from noise. The optimization is solved using an alternating augmented Lagrange multiplier method. Moreover, we conclude with an automatic spike-sorting framework that employs spectral clustering theory to estimate the number of neurons. Extensive experiments on various simulated and real-world datasets demonstrate that our proposed method, LRSR, can handle spike sorting effectively and efficiently.
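A generic sketch of a low-rank plus sparse split solved with an inexact augmented Lagrange multiplier scheme (robust-PCA style), which is the flavour of optimization described above; it is not the paper's LRSR model, and the parameters are the usual textbook defaults. Spectral clustering (e.g. sklearn's SpectralClustering) would then operate on the recovered low-rank part.

```python
import numpy as np

def soft_threshold(M, tau):
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse_split(X, lam=None, mu=None, n_iter=200):
    """Inexact ALM for: min ||L||_* + lam*||S||_1  subject to  X = L + S."""
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / (np.linalg.norm(X, 2) + 1e-12)
    Y = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Singular value thresholding step for the low-rank part.
        U, s, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(s, 1.0 / mu)) @ Vt
        # Entrywise shrinkage for the sparse (noise/outlier) part.
        S = soft_threshold(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)
    return L, S

# Hypothetical usage: rows are detected spike waveforms, columns are samples.
spikes = np.random.randn(300, 64)
L, S = low_rank_sparse_split(spikes)
# sklearn's SpectralClustering could then cluster the rows of L to sort the spikes.
```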


Assuntos
Potenciais de Ação , Algoritmos , Neurônios , Processamento de Sinais Assistido por Computador , Potenciais de Ação/fisiologia , Humanos , Neurônios/fisiologia , Modelos Neurológicos , Animais , Simulação por Computador
10.
Article in English | MEDLINE | ID: mdl-38640042

ABSTRACT

Multimodal medical image fusion aims to integrate complementary information from different modalities of medical images. Deep learning methods, especially recent vision Transformers, have effectively improved image fusion performance. However, Transformers have limitations in image fusion, such as a lack of local feature extraction and cross-modal feature interaction, resulting in insufficient multimodal feature extraction and integration. In addition, the computational cost of Transformers is high. To address these challenges, in this work we develop an adaptive cross-modal fusion strategy for unsupervised multimodal medical image fusion. Specifically, we propose a novel lightweight cross Transformer based on a cross multi-axis attention mechanism. It includes cross-window attention and cross-grid attention to mine and integrate both local and global interactions of multimodal features. The cross Transformer is further guided by a spatial adaptation fusion module, which allows the model to focus on the most relevant information. Moreover, we design a special feature extraction module that combines multiple gradient residual dense convolutional layers and Transformer layers to obtain local features from coarse to fine and capture global features. The proposed strategy significantly boosts fusion performance while minimizing computational costs. Extensive experiments, including clinical brain tumor image fusion, show that our model can achieve clearer texture details and better visual quality than other state-of-the-art fusion methods.
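A minimal NumPy sketch of cross-modal attention, the core operation behind the cross-window/cross-grid attention described above: queries come from one modality and keys/values from the other. Window partitioning, the adaptation module and the full network are omitted, and the token shapes are assumptions.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_q, feat_kv):
    """feat_q, feat_kv: (n_tokens, d) features from two imaging modalities."""
    d = feat_q.shape[1]
    scores = feat_q @ feat_kv.T / np.sqrt(d)     # similarity between modalities
    return softmax(scores, axis=1) @ feat_kv     # modality-1 tokens enriched by modality-2

# Hypothetical usage: fuse in both directions, then average.
mri_tokens = np.random.randn(64, 32)             # e.g. one 8x8 window of MRI features
pet_tokens = np.random.randn(64, 32)             # the matching window of PET features
fused = 0.5 * (cross_attention(mri_tokens, pet_tokens) +
               cross_attention(pet_tokens, mri_tokens))
```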

11.
Neural Comput Appl ; : 1-23, 2023 May 30.
Article in English | MEDLINE | ID: mdl-37362574

ABSTRACT

In linear registration, a floating image is spatially aligned with a reference image after performing a series of linear metric transformations. Additionally, linear registration is mainly considered a preprocessing version of nonrigid registration. To better accomplish the task of finding the optimal transformation in pairwise intensity-based medical image registration, in this work, we present an optimization algorithm called the normal vibration distribution search-based differential evolution algorithm (NVSA), which is modified from the Bernstein search-based differential evolution (BSD) algorithm. We redesign the search pattern of the BSD algorithm and import several control parameters as part of the fine-tuning process to reduce the difficulty of the algorithm. In this study, 23 classic optimization functions and 16 real-world patients (resulting in 41 multimodal registration scenarios) are used in experiments performed to statistically investigate the problem-solving ability of the NVSA. Nine metaheuristic algorithms are used in the conducted experiments. When compared to the commonly utilized registration methods, such as ANTS, Elastix, and FSL, our method achieves better registration performance on the RIRE dataset. Moreover, we prove that our method can perform well with or without its initial spatial transformation in terms of different evaluation indicators, demonstrating its versatility and robustness for various clinical needs and applications. This study establishes the idea that metaheuristic-based methods can better accomplish linear registration tasks than the frequently used approaches; the proposed method demonstrates promise that it can solve real-world clinical and service problems encountered during nonrigid registration as a preprocessing approach. The source code of the NVSA is publicly available at https://github.com/PengGui-N/NVSA.
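To illustrate search-based linear registration, here is a hedged sketch using plain differential evolution from SciPy over a rigid 2-D transform with a correlation cost. It shows the idea of metaheuristic registration only; the NVSA search pattern, its control parameters and the mutual-information metrics typically used in practice are not reproduced, and the toy image pair is a placeholder.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import differential_evolution

def dissimilarity(params, fixed, moving):
    """Negative correlation between the fixed image and the transformed moving image.
    params = (rotation_rad, shift_y, shift_x): a rigid 2-D transform for illustration."""
    theta, ty, tx = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    warped = affine_transform(moving, R, offset=(ty, tx), order=1)
    a, b = fixed.ravel() - fixed.mean(), warped.ravel() - warped.mean()
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

fixed = np.random.rand(64, 64)
moving = np.roll(fixed, shift=3, axis=1)              # toy misaligned pair
bounds = [(-0.3, 0.3), (-10, 10), (-10, 10)]           # search ranges for the parameters
result = differential_evolution(dissimilarity, bounds, args=(fixed, moving),
                                maxiter=50, seed=0)
print(result.x)                                        # estimated (rotation, shifts)
```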

12.
Multimed Tools Appl ; 81(4): 5743-5760, 2022.
Article in English | MEDLINE | ID: mdl-34975285

ABSTRACT

This paper proposes a method to evaluate the effectiveness of eye massage therapy. The existing approach relies on diagnoses conducted by medical professionals based on measurements acquired by optical instruments, which is very expensive. To address this issue, this paper performs classification between periocular images taken before and after the eye massage therapy. First, median filtering is used to suppress solitary-point noise while preserving the edges of the image without causing significant blurring. Then, the Canny operator is employed to accurately locate the edges. Next, the circle Hough transform (CHT) is used to perform iris segmentation. Finally, various classifiers are used to perform the classification. Computer numerical simulation results show that the proposed method achieves high classification accuracies. This implies that there is a significant difference in the iris before and after the eye massage therapy. In addition, comparisons with the state-of-the-art Daugman method have been performed, and the classification performance achieved by the CHT-based method is better than that achieved by the Daugman method.
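A short OpenCV sketch of the preprocessing and segmentation chain described above (median filtering, Canny edges, circle Hough transform). The thresholds, radii and file name are hypothetical, and note that OpenCV's HoughCircles runs its own internal edge detection.

```python
import cv2
import numpy as np

def segment_iris(gray):
    """Median filtering -> Canny edges -> circle Hough transform, mirroring the steps above."""
    denoised = cv2.medianBlur(gray, 5)                  # suppress solitary-point noise
    edges = cv2.Canny(denoised, 50, 150)                # locate edges (thresholds are guesses)
    circles = cv2.HoughCircles(denoised, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=150, param2=30, minRadius=30, maxRadius=120)
    if circles is None:
        return None, edges
    x, y, r = (int(v) for v in np.round(circles[0, 0]))  # strongest circle ~ iris boundary
    mask = np.zeros_like(gray)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)
    return cv2.bitwise_and(gray, mask), edges             # segmented iris region

gray = cv2.imread("periocular.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if gray is not None:
    iris_region, edge_map = segment_iris(gray)
```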

13.
Article in English | MEDLINE | ID: mdl-33877983

ABSTRACT

Spike sorting technologies support neuroscientists in accessing neural activity with single-neuron or single-action-potential resolution. However, conventional spike sorting technologies perform feature extraction and clustering separately after the spikes are detected. This not only introduces many redundant processes, but also yields lower accuracy and unstable results, especially when noise and/or overlapping spikes exist in the dataset. To address these issues, this paper proposes a unified optimization model integrating feature extraction and clustering for spike sorting. Unlike the widely used combination strategies, i.e., performing principal component analysis (PCA) for spike feature extraction and K-means (KM) for clustering in sequence, this paper finds the solution of the proposed unified model by iteratively performing PCA and KM-like procedures. Subsequently, by embedding the K-means++ strategy in the KM-like initialization and a comparison updating rule in the solving process, the proposed model handles noise and overlapping interference well while enjoying high accuracy and low computational complexity. Finally, an automatic spike sorting method is derived by incorporating the best of the clustering validity indices into the proposed model. Extensive numerical simulation results on both synthetic and real-world datasets confirm that our proposed method outperforms the related state-of-the-art approaches.
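For context, a sketch of the conventional combination the paper improves on: PCA features followed by k-means++-seeded clustering. The paper's unified model, solved by alternating PCA-like and KM-like updates with a comparison updating rule, is not reproduced here; the toy data and parameters are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pca_kmeans_sort(spikes, n_clusters, n_components=3):
    """Conventional baseline: PCA features + k-means++ clustering, run in sequence.
    The paper folds both steps into one model solved by alternating PCA-like
    and KM-like updates; that unified solver is not shown here."""
    feats = PCA(n_components=n_components).fit_transform(spikes)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10).fit(feats)
    return km.labels_, feats

waveforms = np.random.randn(500, 48)          # detected spike snippets (toy data)
labels, feats = pca_kmeans_sort(waveforms, n_clusters=3)
```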


Assuntos
Algoritmos , Processamento de Sinais Assistido por Computador , Potenciais de Ação , Análise por Conglomerados , Humanos , Neurônios
14.
Comput Intell Neurosci ; 2020: 3283890, 2020.
Article in English | MEDLINE | ID: mdl-32788918

ABSTRACT

With the higher-order neighborhood information of a graph network, the accuracy of graph representation learning classification can be significantly improved. However, the current higher-order graph convolutional networks have a large number of parameters and high computational complexity. Therefore, we propose a hybrid lower-order and higher-order graph convolutional network (HLHG) learning model, which uses a weight sharing mechanism to reduce the number of network parameters. To reduce the computational complexity, we propose a novel information fusion pooling layer to combine the high-order and low-order neighborhood matrix information. We theoretically compare the computational complexity and the number of parameters of the proposed model with those of the other state-of-the-art models. Experimentally, we verify the proposed model on large-scale text network datasets using supervised learning and on citation network datasets using semisupervised learning. The experimental results show that the proposed model achieves higher classification accuracy with a small set of trainable weight parameters.
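A small NumPy sketch of one hybrid propagation layer: first-order (1-hop) and higher-order (2-hop) neighbourhood aggregation sharing a single weight matrix, fused by an element-wise pooling. It illustrates the weight-sharing and fusion-pooling idea only and is not the HLHG implementation; the graph and dimensions are toy values.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalised adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def hybrid_layer(A_norm, X, W):
    """Lower-order (1-hop) and higher-order (2-hop) propagation with a shared W,
    fused by element-wise max pooling."""
    low = A_norm @ X @ W                     # 1-hop neighbourhood aggregation
    high = A_norm @ (A_norm @ X) @ W         # 2-hop aggregation, same weights
    return np.maximum(low, high)             # fusion pooling over the two orders

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # toy undirected graph
X = rng.standard_normal((6, 8))              # node features
W = rng.standard_normal((8, 4))              # shared trainable weights
H = np.maximum(hybrid_layer(normalize_adj(A), X, W), 0)   # ReLU after the layer
```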


Assuntos
Classificação/métodos , Redes Neurais de Computação , Aprendizado de Máquina Supervisionado , Conjuntos de Dados como Assunto
15.
Article in English | MEDLINE | ID: mdl-30951475

ABSTRACT

In recent years, the signal processing opportunities offered by multi-channel recording and the high-precision detection provided by the development of new extracellular multielectrodes have been increasing. Hence, designing new spike sorting algorithms is both attractive and challenging. These algorithms are used to distinguish individual neurons' activity from densely and simultaneously recorded neural action potentials with high accuracy. However, since the overlapping phenomenon often inevitably arises in the recorded data, they are not accurate enough in practical situations, especially when the noise level is high. In this paper, a spike feature extraction method based on wavelet packet decomposition and mutual information is proposed. It is a highly accurate semi-supervised solution with a short training phase for automating the spike sorting framework. Evaluations are performed on different public datasets in which the raw data not only suffers from multiple noise levels (from 5% to 20%) but also includes various degrees of overlapping spikes at different times. The clustering results demonstrate the effectiveness of the proposed algorithm. It also achieves good anti-noise performance while ensuring high clustering accuracy (up to 99.76%) compared with state-of-the-art methods.
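A hedged sketch of the feature side of such a pipeline: wavelet-packet energies as candidate features, mutual information for selection, and a classifier trained in a short supervised phase. The wavelet, level, number of kept features and the toy data are assumptions; the paper's exact feature set and semi-supervised procedure are not reproduced.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def wpd_energy_features(spike, wavelet="db4", level=3):
    """Energy of each wavelet-packet node at the given decomposition level."""
    wp = pywt.WaveletPacket(data=spike, wavelet=wavelet, mode="symmetric", maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="freq")])

# Hypothetical usage on detected spike snippets with labels for the short training phase.
rng = np.random.default_rng(0)
spikes = rng.standard_normal((400, 64))
labels = rng.integers(0, 3, size=400)
X = np.array([wpd_energy_features(s) for s in spikes])
mi = mutual_info_classif(X, labels, random_state=0)
keep = np.argsort(mi)[-4:]                               # keep the most informative features
clf = SVC(kernel="rbf").fit(X[:, keep], labels)
```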

16.
IEEE Trans Neural Netw ; 19(6): 938-47, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18541495

ABSTRACT

In this paper, it is found that the weights of a perceptron are bounded for all initial weights if there exists a nonempty set of initial weights for which the weights of the perceptron are bounded. Hence, the boundedness condition of the weights of the perceptron is independent of the initial weights. Also, a necessary and sufficient condition for the weights of the perceptron to exhibit a limit cycle behavior is derived, and the range of the number of updates required for the weights of the perceptron to reach the limit cycle is estimated. Finally, it is suggested that a perceptron exhibiting the limit cycle behavior can be employed for solving a recognition problem when downsampled sets of bounded training feature vectors are linearly separable. Numerical computer simulation results show that the perceptron exhibiting the limit cycle behavior can achieve better recognition performance than a multilayer perceptron.
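A toy sketch that runs the classic fixed-increment perceptron rule on a non-separable set and reports when the weight vector revisits a previously seen state, i.e. enters a limit cycle. It only illustrates the phenomenon the paper analyses; the bounds and conditions derived in the paper are not reproduced.

```python
import numpy as np

def perceptron_limit_cycle(X, y, w0, max_updates=10_000):
    """Run the fixed-increment perceptron rule and report when the weight vector
    first returns to a previously visited state, i.e. enters a limit cycle."""
    w = np.array(w0, dtype=float)
    seen = {tuple(np.round(w, 12)): 0}
    t = 0
    while t < max_updates:
        updated = False
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:                  # misclassified -> update
                w = w + yi * xi
                t += 1
                key = tuple(np.round(w, 12))
                if key in seen:
                    return w, seen[key], t               # state first seen at update seen[key]
                seen[key] = t
                updated = True
        if not updated:
            return w, None, t                            # separable case: converged instead
    return w, None, t

# Toy non-separable (XOR-like) data with a bias input; the weights cycle rather than converge.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])
print(perceptron_limit_cycle(X, y, w0=[0.0, 0.0, 0.0]))
```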


Assuntos
Algoritmos , Relógios Biológicos/fisiologia , Redes Neurais de Computação , Reconhecimento Automatizado de Padrão/métodos , Feminino , Humanos , Masculino , Dinâmica não Linear , Fatores de Tempo , Voz
17.
Article in English | MEDLINE | ID: mdl-24109714

ABSTRACT

Ambulatory electrocardiogram signals can be contaminated with various types of noise. Among these, electrode motion 'em' artifacts are considered particularly undesired as they can be mistaken for ectopic beats. Unfortunately, 'em' noise has proved difficult to tackle using ordinary filtering techniques. In this paper, we explore a novel filtering alternative, and show that it could be considered as a potential candidate for dealing with electrode motion artifacts. The proposed system is composed of two simple parts: a frequency filter and a time window, interconnected in series. The two components are designed such that the overall system operates optimally in the mean square error sense. Experimentation on signals obtained from the MIT-BIH database demonstrates the superiority of the above approach over optimal Fourier filtering.
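A minimal sketch of the two-part structure described above: a frequency-domain filter followed by a time-domain window applied in series. The band edges and the Hann window are placeholders; the jointly MSE-optimal design of the two components in the paper is not reproduced.

```python
import numpy as np

def filter_then_window(x, fs, passband, time_window):
    """Frequency-domain filtering followed by a time-domain window, applied in series."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < passband[0]) | (freqs > passband[1])] = 0.0   # crude ideal band-pass
    filtered = np.fft.irfft(X, n=len(x))
    return filtered * time_window                            # element-wise time weighting

fs = 360
ecg = np.random.randn(fs * 10)                               # stand-in for a noisy ECG strip
window = np.hanning(len(ecg))                                # hypothetical time window
cleaned = filter_then_window(ecg, fs, passband=(0.5, 40.0), time_window=window)
```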


Assuntos
Artefatos , Eletrocardiografia Ambulatorial/instrumentação , Eletrocardiografia Ambulatorial/métodos , Movimento (Física) , Processamento de Sinais Assistido por Computador , Bases de Dados Factuais , Eletrodos , Teste de Esforço , Análise de Fourier , Humanos
18.
PLoS One ; 8(7): e66730, 2013.
Article in English | MEDLINE | ID: mdl-23840865

ABSTRACT

In any diabetic retinopathy screening program, about two-thirds of patients have no retinopathy. However, on average, it takes a human expert about one and a half times longer to decide an image is normal than to recognize an abnormal case with obvious features. In this work, we present an automated system for filtering out normal cases to facilitate a more effective use of grading time. The key aim with any such tool is to achieve high sensitivity and specificity to ensure patients' safety and service efficiency. There are many challenges to overcome, given the variation of images and characteristics to identify. The system combines computed evidence obtained from various processing stages, including segmentation of candidate regions, classification and contextual analysis through Hidden Markov Models. Furthermore, evolutionary algorithms are employed to optimize the Hidden Markov Models, feature selection and heterogeneous ensemble classifiers. In order to evaluate its capability of identifying normal images across diverse populations, a population-oriented study was undertaken comparing the software's output to grading by humans. In addition, population based studies collect large numbers of images on subjects expected to have no abnormality. These studies expect timely and cost-effective grading. Altogether 9954 previously unseen images taken from various populations were tested. All test images were masked so the automated system had not been exposed to them before. This system was trained using image subregions taken from about 400 sample images. Sensitivities of 92.2% and specificities of 90.4% were achieved varying between populations and population clusters. Of all images the automated system decided to be normal, 98.2% were true normal when compared to the manual grading results. These results demonstrate scalability and strong potential of such an integrated computational intelligence system as an effective tool to assist a grading service.


Assuntos
Retinopatia Diabética/diagnóstico , Fundo de Olho , Processamento de Imagem Assistida por Computador/métodos , Programas de Rastreamento/métodos , Algoritmos , Inteligência Artificial , Retinopatia Diabética/patologia , Humanos , Processamento de Imagem Assistida por Computador/economia , Cadeias de Markov , Programas de Rastreamento/economia
19.
ISA Trans ; 51(3): 439-45, 2012 May.
Article in English | MEDLINE | ID: mdl-22265087

ABSTRACT

There are two main contributions of this paper. First, this paper proposes a first-order piecewise finite precision nonlinear dynamical model for characterizing the average queue size of the random early detection (RED) algorithm. Second, this paper proposes a nonconvex integer optimal robust impulsive control strategy for stabilizing the average queue size. The objective of the control strategy is to determine the average queue size so that the average power of the impulsive control force is minimized subject to a constraint on the absolute difference between the actual average queue size and the theoretical average queue size at the equilibrium point. Computer numerical simulation results show that the proposed control strategy is effective and efficient for stabilizing the average queue size.
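For reference, a sketch of the quantity the model characterises: RED's exponentially weighted moving average of the queue size, together with the classic drop-probability profile. The weight and thresholds are the usual textbook placeholders; the paper's piecewise finite-precision model and the impulsive control strategy are not reproduced.

```python
import numpy as np

def red_average_queue(queue_samples, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue size,
    the state variable that RED (and the paper's model) works with."""
    avg = np.zeros(len(queue_samples))
    for k in range(1, len(queue_samples)):
        avg[k] = (1 - w) * avg[k - 1] + w * queue_samples[k]
    return avg

def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED drop profile as a function of the average queue size."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

queue = np.abs(np.random.randn(1000)) * 10      # toy instantaneous queue sizes
avg = red_average_queue(queue)
drops = [red_drop_probability(a) for a in avg]
```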

20.
IEEE Trans Syst Man Cybern B Cybern ; 40(6): 1521-30, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20199935

ABSTRACT

In this paper, an invariant set of the weight of the perceptron trained by the perceptron training algorithm is defined and characterized. The dynamic range of the steady-state values of the weight of the perceptron can be evaluated by finding the dynamic range of the weight of the perceptron inside the largest invariant set. In addition, the necessary and sufficient condition for the forward dynamics of the weight of the perceptron to be injective, as well as the condition for the invariant set of the weight of the perceptron to be attractive, is derived.


Assuntos
Algoritmos , Inteligência Artificial , Técnicas de Apoio para a Decisão , Modelos Teóricos , Reconhecimento Automatizado de Padrão/métodos , Simulação por Computador