Results 1 - 8 of 8
1.
J Med Syst ; 24(3): 183-93, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10984872

ABSTRACT

Spindles are among the most important short-lasting waveforms in the sleep EEG and are the hallmark of so-called Stage 2 sleep. Visual spindle scoring is a tedious task, since a single all-night recording of some 8 hours often contains a thousand spindles. Automated methods for spindle detection typically use some form of fixed spindle amplitude threshold, which copes poorly with inter-subject variability. In this work a spindle detection system was developed that requires no amplitude threshold. The system can be used to decide automatically whether or not a sleep spindle is present in the EEG at a given point in time. An autoassociative multilayer perceptron (A-MLP) network was employed for the decision making. A novel training procedure was developed to remove inconsistencies from the training data, which was found to improve system performance significantly.
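The decision principle behind an autoassociative detector can be illustrated with a toy sketch. The architecture below is an assumption for illustration (a 4-2-4 linear bottleneck trained by plain gradient descent), not the paper's A-MLP: the network learns to reconstruct spindle-like windows, and a detector built this way would declare a spindle when an epoch's reconstruction error is small, with no amplitude threshold on the raw EEG.

```python
import math, random

random.seed(0)

def make_window(phase):
    # 4-sample window of a sinusoid, standing in for a spindle epoch
    return [math.sin(phase + 0.5 * k) for k in range(4)]

data = [make_window(0.1 * i) for i in range(50)]

n_in, n_hid = 4, 2
W1 = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(n_hid)] for _ in range(n_in)]

def forward(x):
    # encode to the 2-unit bottleneck, then decode back to 4 samples
    h = [sum(W1[j][i] * x[i] for i in range(n_in)) for j in range(n_hid)]
    y = [sum(W2[i][j] * h[j] for j in range(n_hid)) for i in range(n_in)]
    return h, y

def mean_error():
    total = 0.0
    for x in data:
        _, y = forward(x)
        total += sum((a - b) ** 2 for a, b in zip(y, x))
    return total / len(data)

initial_loss = mean_error()
lr = 0.05
for _ in range(200):
    for x in data:
        h, y = forward(x)
        err = [y[i] - x[i] for i in range(n_in)]
        # encoder gradient, computed before W2 is updated
        g = [sum(err[i] * W2[i][j] for i in range(n_in)) for j in range(n_hid)]
        for i in range(n_in):
            for j in range(n_hid):
                W2[i][j] -= lr * err[i] * h[j]
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] -= lr * g[j] * x[i]
final_loss = mean_error()
```

After training, reconstruction error on spindle-like windows drops well below its initial value, which is the quantity such a detector would then compare against a tolerance.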


Subjects
Electroencephalography/classification ; Neural Networks, Computer ; Sleep Stages/physiology ; Adult ; Alpha Rhythm/classification ; Artifacts ; Automation ; Beta Rhythm/classification ; Decision Making, Computer-Assisted ; False Positive Reactions ; Female ; Humans ; Male ; Middle Aged ; Pattern Recognition, Automated ; ROC Curve ; Signal Processing, Computer-Assisted ; Sleep, REM/physiology ; Theta Rhythm/classification
2.
Neural Netw ; 13(4-5): 525-31, 2000.
Article in English | MEDLINE | ID: mdl-10946397

ABSTRACT

In our recent studies we have proposed and investigated a centroid-based multilayer perceptron (CMLP) network architecture for modelling purposes. In the CMLP network the first hidden layer is a centroid layer. We have found that the proposed hybrid can provide significant advantages over standard multilayer perceptron networks in complex classification problems, in terms of fast and efficient learning and a compact network structure. Previously the number of units in the centroid layer had been determined empirically. Here we extend our work by introducing a method for determining the minimal number of centroid units for a given problem. The proposed scheme also enables efficient initialization of the centroid units. In addition, we propose an initialization scheme for the MLP part of the CMLP network. Our benchmark simulations show that the proposed methods significantly improve the CMLP scheme.
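The abstract does not spell out how the centroid units are placed, so the sketch below uses a plain k-means pass, an assumed (and common) way of positioning distance-based hidden units on the training data, purely to illustrate what "initializing centroid units" can look like.

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    # start from k distinct training points
    centroids = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # move each centroid to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids

# two well-separated blobs: two centroid units should suffice
points = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(30)]
          + [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(30)])
centroids = kmeans(points, k=2)
```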


Subjects
Computer Simulation ; Neural Networks, Computer ; Computer Systems
3.
J Sleep Res ; 9(4): 327-34, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11386202

ABSTRACT

Sleep spindles are transient EEG waveforms of non-rapid eye movement sleep. There is considerable inter-subject variability in spindle amplitudes, yet automatic spindle detectors have typically relied on a fixed amplitude threshold. The choice of this threshold value is critical to the sensitivity of spindle detection. In this study a method was developed to estimate the optimal recording-specific threshold value for each all-night recording without any visual scoring. The performance of the proposed method was validated on four test recordings, each with a very different number of visually scored spindles. The optimal threshold values for the test recordings were estimated well. The presented method seems very promising for providing information about the sleep spindle amplitudes of individual all-night recordings.
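The sensitivity problem with a fixed threshold can be made concrete with hypothetical numbers. The amplitudes, the 35 µV criterion, and the half-median rule below are all illustrative assumptions, not the paper's estimator:

```python
# peak amplitudes (microvolts, hypothetical) of the same 10 spindle events as
# they appear in two recordings; subject B simply shows lower overall amplitude
subject_a = [42, 38, 55, 47, 60, 39, 44, 51, 36, 49]
subject_b = [a * 0.5 for a in subject_a]

FIXED_THRESHOLD = 35.0  # an assumed fixed spindle amplitude criterion

def detected(amplitudes, threshold):
    return sum(1 for a in amplitudes if a >= threshold)

hits_a = detected(subject_a, FIXED_THRESHOLD)  # finds all of subject A's events
hits_b = detected(subject_b, FIXED_THRESHOLD)  # misses all of subject B's

# a recording-specific threshold (here: half the median peak amplitude, an
# assumed stand-in for the paper's estimate) recovers subject B's spindles
def recording_specific(amplitudes):
    s = sorted(amplitudes)
    return 0.5 * s[len(s) // 2]

hits_b_adaptive = detected(subject_b, recording_specific(subject_b))
```

The fixed criterion detects 10/10 events for subject A and 0/10 for subject B, while the per-recording threshold restores detection for B.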


Subjects
Sleep/physiology ; Electroencephalography ; Humans ; Photic Stimulation ; Sensory Thresholds
4.
IEEE Trans Neural Netw ; 11(3): 795-8, 2000.
Article in English | MEDLINE | ID: mdl-18249805

ABSTRACT

The main advantages of cascade-correlation learning are the abilities to learn quickly and to determine the network size. However, recent studies have shown that in many problems the generalization performance of a cascade-correlation-trained network may not be optimal, and that reaching a given performance level may require a larger network than other training methods. Recent advances in statistical learning theory emphasize the importance of a learning method being able to learn optimal hyperplanes, which has led to advanced learning methods with substantial performance improvements. Based on these advances, we introduce modifications to standard cascade-correlation learning that take the optimal-hyperplane constraints into account. Experimental results demonstrate that the modified cascade-correlation learning obtains considerable gains over the standard algorithm: better generalization, smaller network size, and faster learning.
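For context, the baseline criterion that a hyperplane-constrained variant would modify is standard cascade-correlation's candidate score: the magnitude of the covariance between a candidate unit's outputs and the network's residual errors over the training patterns. A minimal sketch (the candidate outputs and residuals are made-up numbers):

```python
def covariance_score(candidate_outputs, residual_errors):
    # |sum_p (V_p - mean(V)) * (E_p - mean(E))| over training patterns p
    n = len(candidate_outputs)
    v_mean = sum(candidate_outputs) / n
    e_mean = sum(residual_errors) / n
    return abs(sum((v - v_mean) * (e - e_mean)
                   for v, e in zip(candidate_outputs, residual_errors)))

residuals = [0.9, -0.8, 0.7, -0.6, 0.5]

candidates = [
    [0.1, 0.2, 0.1, 0.2, 0.1],    # nearly constant output: low covariance
    [0.9, -0.8, 0.7, -0.6, 0.5],  # tracks the residual: high covariance
    [0.0, 0.1, -0.1, 0.0, 0.1],   # weak, uncorrelated signal
]

scores = [covariance_score(c, residuals) for c in candidates]
best = scores.index(max(scores))  # the candidate to install into the network
```

The candidate whose output co-varies most strongly with the remaining error (here candidate 1) wins and is frozen into the network.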

5.
Int J Neural Syst ; 9(1): 1-9, 1999 Feb.
Article in English | MEDLINE | ID: mdl-10401926

ABSTRACT

A hybrid neural network architecture is investigated for modeling purposes. The proposed hybrid is based on the multilayer perceptron (MLP) network. In addition to the usual hidden layers, the first hidden layer is selected to be an adaptive reference pattern layer. Each unit in this new layer incorporates a reference pattern located somewhere in the space spanned by the input variables. The outputs of these units are the component-wise squared differences between the elements of a reference pattern and the inputs. The reference pattern layer bears some resemblance to the hidden layer of radial basis function (RBF) networks, so the proposed design can be regarded as a hybrid of MLP and RBF networks. The presented benchmark experiments show that the proposed hybrid can provide significant advantages over standard MLPs and RBFs in terms of fast and efficient learning, and a compact network structure.
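The layer's forward pass follows directly from this description: unit j holds a reference pattern r_j and emits the component-wise squared differences between that pattern and the input x (unlike an RBF unit, the components are not summed into a single distance). A minimal sketch with made-up patterns:

```python
def reference_layer(x, patterns):
    # one output vector per unit: (x_i - r_ji)^2 for every input component i
    return [[(xi - ri) ** 2 for xi, ri in zip(x, r)] for r in patterns]

patterns = [[0.0, 0.0], [1.0, 2.0]]
out = reference_layer([1.0, 2.0], patterns)
# unit 0 -> [1.0, 4.0]; unit 1 sits exactly on the input -> [0.0, 0.0]
```

These per-component outputs then feed the ordinary MLP layers above, which are free to weight each component separately.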


Subjects
Neural Networks, Computer ; Feedback
6.
Neural Netw ; 12(4-5): 707-716, 1999 Jun.
Article in English | MEDLINE | ID: mdl-12662678

ABSTRACT

Neural network methods have proven to be powerful tools for modelling nonlinear processes. One crucial part of modelling is the training phase, where the model parameters are adjusted so that the model performs the desired operation as well as possible. Besides parameter estimation, an important problem is selecting a suitable model structure: with a bad structure we risk underfitting, overfitting, or wasting computational resources. One approach to structure learning is to use constructive methods, where training begins with a minimal structure and more parameters are added as needed according to some predefined rule. Constructive solutions have become increasingly attractive in the neural-network literature, where one of the best-known techniques is cascade-correlation (CC) learning. Inspired by CC, we propose and study a similar technique called constructive backpropagation (CBP). We show that CBP is computationally just as efficient as the CC algorithm even though the error never needs to be backpropagated through more than one hidden layer. CBP retains the constructive benefits of CC, but in addition it is simpler to implement and can utilize stochastic optimization routines. Moreover, we show how CBP can be extended to add multiple new units simultaneously and to perform continuous automatic structure adaptation, including both addition and deletion of units. The performance of CBP learning is studied in time series modelling experiments, which demonstrate that CBP can provide significantly better modelling capabilities than CC learning.
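The constructive loop can be sketched in miniature. The code below is an illustrative reading of the idea, not the paper's exact algorithm: hidden units are added one at a time, each new tanh unit (plus its output weight) is trained against the current residual while earlier units stay frozen, so the error is backpropagated through no more than the one new unit.

```python
import math, random

random.seed(2)

# toy regression problem: fit sin(2x) on [-1, 1]
xs = [i / 10 for i in range(-10, 11)]
target = [math.sin(2.0 * x) for x in xs]

units = []  # each installed unit (w, b, v) contributes v * tanh(w*x + b)

def predict(x):
    return sum(v * math.tanh(w * x + b) for (w, b, v) in units)

def sse():
    return sum((predict(x) - t) ** 2 for x, t in zip(xs, target))

initial_error = sse()
for _ in range(3):  # grow three hidden units, one at a time
    residual = [t - predict(x) for x, t in zip(xs, target)]
    w, b, v = random.uniform(-1, 1), 0.0, 0.0
    lr = 0.02
    for _ in range(300):  # train only the new unit against the residual
        for x, r in zip(xs, residual):
            h = math.tanh(w * x + b)
            e = v * h - r          # new unit's error on the residual
            gv = e * h             # gradients computed before any update
            gh = e * v * (1 - h * h)
            v -= lr * gv
            w -= lr * gh * x
            b -= lr * gh
    units.append((w, b, v))        # freeze the trained unit
final_error = sse()
```

Because each unit starts with a zero output weight, training it can only reduce the residual error, so the total error shrinks as the structure grows.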

7.
IEEE Trans Neural Netw ; 10(2): 410-4, 1999.
Article in English | MEDLINE | ID: mdl-18252537

ABSTRACT

Weight initialization in cascade-correlation learning is considered. Most previous studies use so-called candidate training to deal with the initialization problem: several candidate hidden units are first trained, and the one yielding the best value of the covariance criterion is installed into the network. When many candidate units must be trained, the total computational cost of training can become very large. Here we consider a new approach to weight initialization in cascade-correlation learning, based on the concept of stepwise regression. Empirical simulations show that the new method can significantly speed up cascade-correlation learning compared with candidate training. Moreover, the overall performance remained similar to, or was even better than, that of candidate training.
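The abstract names stepwise regression but not the details, so the sketch below is an assumed stand-in for the general idea: instead of gradient-training every candidate, each candidate's untrained output is regressed against the residual in a single least-squares step, and the candidate whose fit leaves the smallest residual sum of squares is kept as the initialization. The data and candidate count are made up.

```python
import math, random

random.seed(3)

xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]]
residual = [0.8, -0.2, 0.6, 0.5]  # current network errors on the patterns

def unit_output(w, x):
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def fit_quality(w):
    # residual sum of squares after a one-step least-squares fit a*h -> r
    h = [unit_output(w, x) for x in xs]
    hh = sum(v * v for v in h)
    a = 0.0 if hh == 0.0 else sum(v * r for v, r in zip(h, residual)) / hh
    return sum((r - a * v) ** 2 for r, v in zip(residual, h))

candidates = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(8)]
best_w = min(candidates, key=fit_quality)   # cheap selection, no training loop

baseline = sum(r * r for r in residual)     # RSS with no new unit at all
best_rss = fit_quality(best_w)
```

Selection costs one regression per candidate instead of a full training run per candidate, which is the claimed source of the speed-up.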

8.
Neural Comput ; 8(3): 583-93, 1996 Apr 01.
Article in English | MEDLINE | ID: mdl-8868569

ABSTRACT

Nonlinear time series modeling with a multilayer perceptron network is presented. An important aspect of this modeling is model selection, i.e., the problem of determining the size as well as the complexity of the model. To address this problem we apply the predictive minimum description length (PMDL) principle as a minimization criterion; in the neural network setting this means minimizing the number of input and hidden units. Three time series modeling experiments are used to examine the usefulness of the PMDL model selection scheme, and a comparison with the widely used cross-validation technique is presented. In our experiments the PMDL and cross-validation schemes yield similar results in terms of model complexity, but the PMDL method was found to be about twice as fast to compute. This is a significant improvement, since model selection in general is very time consuming.
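The predictive flavour of PMDL can be shown in miniature: every competing model must predict the data sequentially, having seen only the past, and its accumulated prediction cost (squared error below, standing in for code length) is the selection criterion. The two toy predictors and the ramp series are illustrative assumptions, not the paper's MLP models.

```python
data = [float(t) for t in range(1, 21)]  # a simple ramp: 1.0, 2.0, ..., 20.0

def pmdl_cost(predictor):
    # sequential prediction: at each step, predict data[t] from data[:t] only
    cost = 0.0
    for t in range(1, len(data)):
        cost += (predictor(data[:t]) - data[t]) ** 2
    return cost

running_mean = lambda past: sum(past) / len(past)  # ignores the trend
last_value = lambda past: past[-1]                 # tracks the trend

cost_mean = pmdl_cost(running_mean)
cost_last = pmdl_cost(last_value)
chosen = "last_value" if cost_last < cost_mean else "running_mean"
```

On trending data the last-value predictor accumulates a far smaller sequential cost (19.0, one unit of squared error per step) than the running mean, so the PMDL-style criterion selects it.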


Subjects
Neural Networks, Computer ; Perception/physiology ; Algorithms ; Models, Neurological ; Nonlinear Dynamics ; Stochastic Processes