Results 1 - 7 of 7
1.
IEEE Trans Neural Netw ; 8(5): 1065-70, 1997.
Article in English | MEDLINE | ID: mdl-18255709

ABSTRACT

In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNNs) with those of input delay neural networks (IDNNs), the subclass of TDNNs whose delays are limited to the inputs. Each class of networks is capable of representing the same set of languages: those embodied by the definite memory machines (DMMs), a subclass of finite-state machines. We demonstrate the close affinity between TDNNs and DMM languages by learning a very large DMM (2048 states) using only a few training examples. Even though both architectures can represent the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNNs, which include delays in hidden layers, should perform well compared to IDNNs on problems in which the output can be expressed as a function of narrow input windows that repeat in time. On the other hand, these general TDNNs should perform poorly when the input windows are wide or there is little repetition. We confirm these hypotheses via a set of simulations and statistical analysis.
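The structural distinction the abstract draws can be sketched as a forward pass over a sliding window of delayed inputs; this toy NumPy version is illustrative only (the window size, layer sizes, and function names are not from the paper):

```python
import numpy as np

def idnn_forward(x, W_in, W_out, delays=3):
    """Input delay neural network: only the *inputs* are delayed.

    x      : (T,) input sequence
    W_in   : (hidden, delays) weights over the input window
    W_out  : (hidden,) output weights
    Returns a (T,) output sequence (zeros until the window fills).
    """
    T = len(x)
    y = np.zeros(T)
    for t in range(delays - 1, T):
        window = x[t - delays + 1 : t + 1]   # current input plus delayed copies
        hidden = np.tanh(W_in @ window)      # one hidden layer, no hidden-state delays
        y[t] = W_out @ hidden
    return y
```

A general TDNN would additionally buffer past *hidden* activations, so its output can depend on hidden-state history rather than on the raw input window alone; this is the architectural difference behind the learning biases the abstract describes.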

2.
Int J Neural Syst ; 6(3): 249-56, 1995 Sep.
Article in English | MEDLINE | ID: mdl-8589862

ABSTRACT

A recurrent learning algorithm based on a finite-difference discretization of continuous equations for neural networks is derived. This algorithm has the simplicity of discrete algorithms while retaining some essential characteristics of the continuous equations. In discrete networks, learning smooth oscillations is difficult if the period of oscillation is too large: the network either grossly distorts the waveforms or is unable to learn at all. We show how the finite-difference formulation can explain and overcome this problem. Formulas for learning time constants and time delays in this framework are also presented.


Subjects
Algorithms , Neural Networks (Computer) , Artificial Intelligence , Computer Simulation
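The discretization idea in the abstract above can be illustrated with a forward-Euler step of assumed leaky-integrator dynamics (the paper's exact continuous equations may differ; this is a standard textbook form):

```python
import numpy as np

def euler_step(y, W, b, tau, dt):
    """One forward-difference step of the continuous equation
        tau * dy/dt = -y + tanh(W @ y + b)
    giving the discrete update
        y_next = y + (dt / tau) * (-y + tanh(W @ y + b)).

    A small dt/tau keeps the discrete trajectory close to the continuous
    one, which is what allows slow, long-period oscillations that a plain
    discrete map distorts.
    """
    return y + (dt / tau) * (-y + np.tanh(W @ y + b))
```

Note that with dt/tau = 1 the update collapses to the purely discrete network y_next = tanh(W @ y + b), the regime in which the abstract says learning long-period oscillations fails.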
5.
Neural Netw ; 12(7-8): 1053-1074, 1999 Oct.
Article in English | MEDLINE | ID: mdl-12662645

ABSTRACT

There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this paper, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to (1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, (2) the developing infant's need to perform subordinate classification (identification) of faces early on, and (3) the infant's low visual acuity at birth. Inspired by de Schonen, Mancini and Liegeois' arguments (1998) [de Schonen, S., Mancini, J., Liegeois, F. (1998). About functional cortical specialization: the development of face recognition. In: F. Simon & G. Butterworth, The development of sensory, motor, and cognitive capacities in early infancy (pp. 103-116). Hove, UK: Psychology Press] that factors like these could bias the visual system to develop a processing subsystem particularly useful for face recognition, and Jacobs and Kosslyn's experiments (1994) [Jacobs, R. A., & Kosslyn, S. M. (1994). Encoding shape and spatial relations: the role of receptive field size in coordinating complementary representations. Cognitive Science, 18(3), 361-368] in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli.
In Model I, when the modules are simple unbiased classifiers, the competition is sufficient to achieve enough of a specialization that damaging one module impairs the model's face recognition more than its object recognition, and damaging the other module impairs the model's object recognition more than its face recognition. However, the model is not completely satisfactory because it requires a search of parameter space. With Model II, we explore biases that lead to more consistent specialization. We bias the modules by providing one with low spatial frequency information and the other with high spatial frequency information. In this case, when the model's task is subordinate classification of faces and superordinate classification of objects, the low spatial frequency network shows an even stronger specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that something resembling a face processing "module" could arise as a natural consequence of the infant's developmental environment without being innately specified.
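The gating mechanism common to both models can be sketched as a softmax gate mediating competition between expert modules; this is a generic mixtures-of-experts forward pass, with layer sizes and names that are illustrative rather than taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, W_gate):
    """Mixture-of-experts forward pass.

    x       : (d,) input stimulus
    experts : list of (W, b) pairs, one per expert module
    W_gate  : (n_experts, d) gating network weights
    The gate's softmax output decides how much each module's answer
    contributes; during learning, this mediates the competition that
    drives modules to specialize on the tasks they perform best.
    """
    gate = softmax(W_gate @ x)                        # competition between modules
    outputs = [np.tanh(W @ x + b) for W, b in experts]
    return sum(g * o for g, o in zip(gate, outputs)), gate
```

"Damaging" one module (for instance, zeroing its weights) then selectively impairs whichever task that module came to specialize in, which is the property the double-dissociation simulations rely on.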

6.
J Cogn Neurosci ; 4(3): 289-98, 1992.
Article in English | MEDLINE | ID: mdl-23964885

ABSTRACT

Four models were compared on repeated explicit memory (fragment cued recall) or implicit memory (fragment completion) tasks (Hayman & Tulving, 1989a). In the experiments, when given explicit instructions to complete fragments with words from a just-studied list (the explicit condition), people showed a dependence relation between the first and the second fragment targeted at the same word. However, when subjects were simply told to complete the (primed) fragments (the implicit condition), stochastic independence between the two fragments resulted. Three distributed models (CHARM, a competitive-learning model, and a back-propagation model) produced dependence, as in the explicit memory test. In contrast, a separate-trace model, MINERVA, showed independence, as in the implicit task. It was concluded that explicit memory is based on a highly interactive network that glues or binds together the features within the items, as do the first three models; the binding accounts for the dependence relation. Implicit memory appears to be based, instead, on separate, non-interacting traces.
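The separate-trace idea can be illustrated with a MINERVA-style retrieval step, in which each studied item is stored as its own trace and a probe is compared against all of them at once (a standard textbook formulation of the model family; the parameter choices here are illustrative):

```python
import numpy as np

def minerva_echo(probe, traces):
    """Separate-trace retrieval in the style of MINERVA.

    probe  : (d,) feature vector (entries in {+1, -1, 0})
    traces : (n, d) matrix, one stored trace per studied item
    Each trace contributes to the echo in proportion to the *cube* of
    its similarity to the probe, so traces remain separate and
    non-interacting -- unlike distributed models, where all items are
    superimposed in a shared set of weights.
    """
    sims = traces @ probe / probe.size   # similarity of the probe to each trace
    activations = sims ** 3              # cubing sharpens the best matches
    return activations @ traces          # the retrieved "echo" content
```

In a distributed model, by contrast, the two fragments of the same word tap one shared bound representation, which is what produces the dependence relation described above.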

7.
Nature ; 340(6233): 468-71, 1989 Aug 10.
Article in English | MEDLINE | ID: mdl-2755509

ABSTRACT

Mechanical stimulation of the body surface of the leech causes a localized withdrawal from dorsal, ventral and lateral stimuli. The pathways from sensory to motor neurons in the reflex include at least one interneuron. We have identified a subset of interneurons contributing to the reflex by intracellular recording, and our analysis of interneuron input and output connections suggests a network in which most interneurons respond to more than one sensory input, most have effects on all motor neurons and in which each form of the behaviour is produced by appropriate and inappropriate effects of many interneurons. To determine whether interneurons of this type can account for the behaviour, or whether additional types are required, model networks were trained by back-propagation to reproduce the physiologically determined input-output function of the reflex. Quantitative comparisons of model and actual connection strengths show that model interneurons are similar to real ones. Consequently, the identified subset of interneurons could control local bending as part of a distributed processing network in which each form of the behaviour is produced by the appropriate and inappropriate effects of many interneurons.


Subjects
Interneurons/physiology , Neurological Models , Animals , Electric Stimulation , Ganglia/physiology , In Vitro Techniques , Leeches , Afferent Neurons/physiology , Reflex
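The modeling approach in the abstract above, training a small network by back-propagation to reproduce a physiologically measured input-output function, can be sketched as follows; the network size, learning rate, and sensory-to-motor mapping here are placeholders, not the paper's actual recordings:

```python
import numpy as np

def train_interneuron_model(X, Y, n_hidden=8, lr=0.1, epochs=2000, seed=0):
    """Fit a one-hidden-layer network by batch gradient descent
    (back-propagation) so that its hidden units can be compared,
    connection by connection, with recorded interneurons.

    X : (n_samples, n_sensory) sensory activation patterns
    Y : (n_samples, n_motor)   measured motor-neuron responses
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):
        H = np.tanh(X @ W1)              # "interneuron" layer
        out = H @ W2                     # "motor neuron" layer (linear)
        err = out - Y
        # back-propagate the mean squared error
        W2 -= lr * H.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
    return W1, W2
```

After training, the model's hidden-unit connection strengths are the quantities the study compares against real interneurons to argue that the identified subset could control local bending as a distributed processing network.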