1.
Clin Med (Lond) ; 11(2): 138-41, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21526694

ABSTRACT

This study aimed to ascertain the value of posters at medical meetings to presenters and delegates. The usefulness of posters to presenters at national and international meetings was evaluated by assessing the numbers of delegates visiting them and the reasons why they visited. Memorability of selected posters was assessed and factors influencing their appeal to expert delegates identified. At both the national and international meetings, very few delegates (< 5%) visited posters. Only a minority read them and fewer asked useful questions. Recall of content was so poor that it prevented identification of factors improving their memorability. Factors increasing posters' visual appeal included their scientific content, pictures/graphs and limited use of words. Few delegates visit posters and those doing so recall little of their content. To engage their audience, researchers should design visually appealing posters by presenting high quality data in pictures or graphs without an excess of words.


Subjects
Audiovisual Aids , Biomedical Research , Congresses as Topic , Gastroenterology , Information Dissemination , Female , Humans , Linear Models , Male , Statistics, Nonparametric , United Kingdom
2.
Neural Comput ; 13(6): 1379-414, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11387050

ABSTRACT

We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state space into distinct regions corresponding to the stability types of the fixed points. Unlike previous studies, we do not assume any special form of connectivity pattern between the neurons, and all free parameters are allowed to vary. We also prove that when both neurons have excitatory self-connections and the mutual interaction pattern is the same (i.e., the neurons either mutually inhibit or mutually excite each other), new attractive fixed points are created through the saddle-node bifurcation. Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of neuron activations as the absolute values of the connection weights grow.


Subjects
Models, Neurological , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Mathematics
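
A fixed-point analysis of this kind can be explored numerically. Below is a minimal Python sketch, not the authors' method: it iterates the two-unit update x <- sigmoid(W x + b) from a grid of initial states and collects the distinct attractive fixed points it converges to. The weight matrix (excitatory self-connections, mutual inhibition) and all constants are illustrative assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def attractive_fixed_points(W, b, n_grid=20, n_iter=5000, tol=1e-10):
        """Iterate x <- sigmoid(W @ x + b) from a grid of initial states and
        collect the distinct points the iteration converges to."""
        found = []
        for x1 in np.linspace(0, 1, n_grid):
            for x2 in np.linspace(0, 1, n_grid):
                x = np.array([x1, x2])
                converged = False
                for _ in range(n_iter):
                    x_next = sigmoid(W @ x + b)
                    if np.max(np.abs(x_next - x)) < tol:
                        converged = True
                        break
                    x = x_next
                if converged and not any(np.allclose(x, f, atol=1e-6) for f in found):
                    found.append(x)
        return found

    # Illustrative parameters: excitatory self-connections, mutual inhibition.
    W = np.array([[8.0, -4.0],
                  [-4.0, 8.0]])
    b = np.array([-2.0, -2.0])
    for p in attractive_fixed_points(W, b):
        print(p)
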
3.
Science ; 291(5506): 987-8, 2001 Feb 09.
Article in English | MEDLINE | ID: mdl-11232583
4.
Neural Comput ; 12(10): 2355-83, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11032038

ABSTRACT

An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to short-term predict the time series. At the same time, a criterion developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored that tests the hypothesis that the reconstructed attractors of model-generated and measured data are the same. Training is stopped when the prediction error is low and the model passes this test. Two other features of the algorithm are (1) the way the state of the system, consisting of delays from the time series, has its dimension reduced by weighted principal component analysis, and (2) the user-adjustable prediction horizon obtained by "error propagation": partially propagating prediction errors to the next time step. The algorithm is first applied to data from an experimentally driven chaotic pendulum, of which two of the three state variables are known. This is a comprehensive example that shows how well the Diks test can distinguish between slightly different attractors. Second, the algorithm is applied to the same problem, but now one of the two known state variables is ignored. Finally, we present a model for the laser data from the Santa Fe time-series competition (set A). It is the first model for these data that is not only useful for short-term predictions but also generates time series with chaotic characteristics similar to those of the measured data.


Subjects
Neural Networks, Computer , Nonlinear Dynamics , Algorithms , Artificial Intelligence , Lasers , Models, Neurological
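
As a rough illustration of the state-construction step described above, the sketch below delay-embeds a scalar series and reduces the delay vectors with ordinary (unweighted) PCA; the noisy sine data, embedding length, and component count are invented, and the Diks test and error propagation are omitted.

    import numpy as np

    def delay_embed(series, n_delays):
        """Stack n_delays consecutive values into one state vector per time step."""
        n = len(series) - n_delays + 1
        return np.stack([series[i:i + n] for i in range(n_delays)], axis=1)

    # Illustrative data: a noisy sine stands in for a measured time series.
    t = np.linspace(0, 100, 5000)
    series = np.sin(t) + 0.05 * np.random.randn(len(t))

    X = delay_embed(series, n_delays=16)        # delay vectors
    X = X - X.mean(axis=0)                      # center before PCA
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:4].T                            # keep 4 principal components
    print(Z.shape)                              # reduced states for the predictor
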
5.
Artif Life ; 6(3): 237-54, 2000.
Article in English | MEDLINE | ID: mdl-11224918

ABSTRACT

We analyze a general model of multi-agent communication in which all agents communicate simultaneously to a message board. A genetic algorithm is used to evolve multi-agent languages for the predator agents in a version of the predator-prey pursuit problem. We show that the resulting behavior of the communicating multi-agent system is equivalent to that of a Mealy finite state machine whose states are determined by the agents' usage of the evolved language. Simulations show that the evolution of a communication language improves the performance of the predators. Increasing the language size (and thus increasing the number of possible states in the Mealy machine) improves the performance even further. Furthermore, the evolved communicating predators perform significantly better than those reported in all previous work on similar prey. We introduce a method for incrementally increasing the language size, which results in an effective coarse-to-fine search that significantly reduces the evolution time required to find a solution. We present some observations on the effects of language size, experimental setup, and prey difficulty on the evolved Mealy machines. In particular, we observe that the start state is often revisited, and that incrementally increasing the language size results in smaller Mealy machines. Finally, a simple rule is derived that provides a pessimistic estimate of the minimum language size that should be used for any multi-agent problem.


Subjects
Communication , Predatory Behavior , Algorithms , Animals , Biological Evolution , Humans , Language , Models, Genetic
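
The Mealy-machine equivalence can be made concrete with a toy example. In a Mealy machine, the emitted output depends on both the current state and the input; the states, inputs, and actions below are invented for illustration and are not the evolved machines from the paper.

    # A minimal Mealy machine: output depends on current state AND input.
    # States/inputs/outputs here are invented purely for illustration.
    transitions = {                      # (state, input) -> next state
        ("s0", "prey_seen"): "s1",
        ("s0", "no_prey"):   "s0",
        ("s1", "prey_seen"): "s1",
        ("s1", "no_prey"):   "s0",
    }
    outputs = {                          # (state, input) -> emitted action
        ("s0", "prey_seen"): "broadcast_position",
        ("s0", "no_prey"):   "explore",
        ("s1", "prey_seen"): "close_in",
        ("s1", "no_prey"):   "return_to_search",
    }

    def run(machine_inputs, state="s0"):
        """Step the Mealy machine over an input sequence, collecting actions."""
        actions = []
        for sym in machine_inputs:
            actions.append(outputs[(state, sym)])
            state = transitions[(state, sym)]
        return actions

    print(run(["no_prey", "prey_seen", "prey_seen", "no_prey"]))
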
8.
Ocul Immunol Inflamm ; 5(1): 67-8, 1997 Mar.
Article in English | MEDLINE | ID: mdl-9145695

ABSTRACT

Although the pathogenesis in most cases of intermediate uveitis is unknown, a small minority of cases is associated with a variety of specific inflammatory etiologies: sarcoidosis; multiple sclerosis; Lyme disease; syphilis; ocular lymphoma; and, as a rare manifestation, Behçet's disease and AIDS. A 61-year-old woman developed pars planitis after cataract surgery. A vitrectomy was performed after ten months, when a white capsular plaque and a hypopyon developed. Propionibacterium acnes was isolated. The intermediate uveitis was not controlled until the later removal of the intraocular lens and capsular remnants. Chronic propionibacterial endophthalmitis may be a cause of intermediate uveitis.


Subjects
Endophthalmitis/complications , Eye Infections, Bacterial , Gram-Positive Bacterial Infections , Pars Planitis/microbiology , Postoperative Complications , Propionibacterium acnes/isolation & purification , Cataract Extraction , Chronic Disease , Female , Humans , Lenses, Intraocular , Middle Aged , Reoperation , Vitrectomy , Vitreous Body/microbiology
9.
IEEE Trans Neural Netw ; 8(1): 98-113, 1997.
Article in English | MEDLINE | ID: mdl-18255614

ABSTRACT

We present a hybrid neural-network approach to human face recognition that compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, while the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. For comparison, we present results using the Karhunen-Loève transform in place of the SOM and a multilayer perceptron (MLP) in place of the convolutional network. We use a database of 400 images of 40 individuals that contains a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
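
As a rough sketch of the first two stages, local image sampling followed by SOM quantization, the code below trains a small self-organizing map on patches drawn from a random stand-in image; the patch size, map size, and training schedule are all invented.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                      # stand-in for a face image

    # Local image sampling: extract overlapping 5x5 patches as vectors.
    patches = np.array([image[i:i+5, j:j+5].ravel()
                        for i in range(0, 60, 2) for j in range(0, 60, 2)])

    # A small 2D SOM: one weight vector per map node.
    map_h, map_w, dim = 8, 8, 25
    weights = rng.random((map_h * map_w, dim))
    coords = np.array([(r, c) for r in range(map_h) for c in range(map_w)])

    for t in range(2000):                              # on-line SOM training
        x = patches[rng.integers(len(patches))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
        lr = 0.5 * (1 - t / 2000)                           # decaying learning rate
        sigma = 3.0 * (1 - t / 2000) + 0.5                  # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                  # neighborhood function
        weights += lr * h[:, None] * (x - weights)

    # Quantize: each patch is represented by its BMU's map coordinates.
    bmus = np.argmin(((patches[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
    print(coords[bmus[:5]])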

10.
IEEE Trans Neural Netw ; 8(5): 1065-70, 1997.
Article in English | MEDLINE | ID: mdl-18255709

ABSTRACT

In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNNs) with input delay neural networks (IDNNs), the subclass of TDNNs with delays limited to the inputs. Each class of networks is capable of representing the same set of languages, those embodied by the definite memory machines (DMMs), a subclass of finite-state machines. We demonstrate the close affinity between TDNNs and DMM languages by learning a very large DMM (2048 states) using only a few training examples. Even though both architectures are capable of representing the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNNs which include delays in hidden layers should perform well, compared to IDNNs, on problems in which the output can be expressed as a function on narrow input windows which repeat in time. On the other hand, these general TDNNs should perform poorly when the input windows are wide, or there is little repetition. We confirm these hypotheses via a set of simulations and statistical analysis.
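
An input delay neural network can be read as a feedforward net applied to a sliding window of past inputs. A minimal forward-pass sketch, with the window length and layer sizes invented:

    import numpy as np

    def idnn_forward(u, W1, b1, W2, b2, window):
        """Forward pass of an input-delay network: at each step t the MLP
        sees the window u[t-window+1 .. t]; earlier steps are zero-padded."""
        u = np.concatenate([np.zeros(window - 1), u])
        outputs = []
        for t in range(len(u) - window + 1):
            x = u[t:t + window]                        # delayed inputs only
            h = np.tanh(W1 @ x + b1)                   # hidden layer
            outputs.append(np.tanh(W2 @ h + b2))       # output layer
        return np.array(outputs)

    rng = np.random.default_rng(1)
    window, hidden = 8, 12                             # illustrative sizes
    W1, b1 = rng.standard_normal((hidden, window)), np.zeros(hidden)
    W2, b2 = rng.standard_normal((1, hidden)), np.zeros(1)
    y = idnn_forward(rng.standard_normal(100), W1, b1, W2, b2, window)
    print(y.shape)                                     # one output per time step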

11.
IEEE Trans Neural Netw ; 8(6): 1507-17, 1997.
Article in English | MEDLINE | ID: mdl-18255751

ABSTRACT

The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task that we consider, we find that the distribution of performance is skewed toward better performance for smoother target functions and toward worse performance for more complex target functions. We propose new guidelines for reporting performance that provide more information about the actual distribution.
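
In that spirit, a small sketch of distribution-aware reporting: alongside the mean and standard deviation, report the median, interquartile range, and extremes over runs. The error values below are invented to show how a few outlying runs distort the mean.

    import numpy as np

    # Invented final-error values from 20 training runs with different seeds;
    # the distribution is skewed, so mean +/- std alone is misleading.
    errors = np.array([0.021, 0.019, 0.023, 0.020, 0.022, 0.018, 0.024, 0.021,
                       0.020, 0.019, 0.150, 0.022, 0.023, 0.021, 0.019, 0.020,
                       0.210, 0.022, 0.021, 0.020])

    print(f"mean   = {errors.mean():.3f}  std = {errors.std(ddof=1):.3f}")
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    print(f"median = {med:.3f}  IQR = [{q1:.3f}, {q3:.3f}]")
    print(f"range  = [{errors.min():.3f}, {errors.max():.3f}]")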

12.
Article in English | MEDLINE | ID: mdl-18255858

ABSTRACT

Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another class of networks which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by

    y(t) = Psi(u(t - n_u), ..., u(t - 1), u(t), y(t - n_y), ..., y(t - 1)),

where u(t) and y(t) represent the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Psi is the mapping performed by a multilayer perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power.
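
The defining equation transcribes directly into code. Below is a minimal sketch in which the mapping Psi is a one-hidden-layer perceptron with random weights; the orders n_u and n_y and all sizes are illustrative, and no training is performed.

    import numpy as np

    def narx_step(u_hist, y_hist, W1, b1, W2, b2):
        """One NARX step: y(t) = Psi(u(t-n_u),...,u(t), y(t-n_y),...,y(t-1)),
        where Psi is a one-hidden-layer MLP."""
        x = np.concatenate([u_hist, y_hist])       # tapped input + output delays
        h = np.tanh(W1 @ x + b1)
        return float(W2 @ h + b2)

    rng = np.random.default_rng(0)
    n_u, n_y, hidden = 3, 2, 10                    # illustrative orders/sizes
    W1 = rng.standard_normal((hidden, n_u + 1 + n_y))
    b1 = np.zeros(hidden)
    W2, b2 = rng.standard_normal(hidden), 0.0

    u = rng.standard_normal(50)                    # input sequence
    y = np.zeros(50)
    for t in range(max(n_u, n_y), 50):
        y[t] = narx_step(u[t - n_u:t + 1], y[t - n_y:t], W1, b1, W2, b2)
    print(y[-5:])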

13.
Behav Sci Law ; 15(4): 469-82, 1997.
Article in English | MEDLINE | ID: mdl-9433749

ABSTRACT

Provision of mental health services to correctional populations places considerable demands on clinical staff to provide efficient and effective means of screening patients for severe mental disorders and other emergent conditions that necessitate immediate intervention. Among the highly problematic behaviors found in correctional settings are forms of acting out (e.g., suicide and aggression towards others) and response style (e.g., motivations to malinger). The current study examined the usefulness of the Personality Assessment Inventory (PAI) in assessing problematic behaviors in a corrections-based psychiatric hospital. As evidence of criterion-related validity, selected PAI scales were compared to (a) evidence of malingering on the Structured Interview of Reported Symptoms (SIRS), (b) suicidal threats and gestures, and (c) ratings of aggression on the Overt Aggression Scale (OAS). In general, the results supported the use of the PAI for the assessment of these problematic behaviors.


Subjects
Forensic Psychiatry/methods , Personality Inventory/standards , Prisoners/psychology , Psychometrics/standards , Adult , Aggression/classification , Analysis of Variance , Chi-Square Distribution , Cross-Sectional Studies , Humans , Male , Malingering/diagnosis , Pilot Projects , Reproducibility of Results , Retrospective Studies , Risk Assessment , Suicide/psychology , Violence
14.
Neural Comput ; 8(4): 675-96, 1996 May 15.
Article in English | MEDLINE | ID: mdl-8624958

ABSTRACT

We propose an algorithm for encoding deterministic finite-state automata (DFAs) in second-order recurrent neural networks with a sigmoidal discriminant function, and we prove that the languages accepted by the constructed network and the DFA are identical. The desired finite-state network dynamics is achieved by programming a small subset of all weights. A worst-case analysis reveals a relationship between the weight strength and the maximum allowed network size which guarantees finite-state behavior of the constructed network. We illustrate the method by encoding random DFAs with 10, 100, and 1000 states. While the theory predicts that the weight strength scales with the DFA size, we empirically find the weight strength to be almost constant for all the random DFAs. These results can be explained by noting that the generated DFAs represent average cases. We empirically demonstrate the existence of extreme DFAs for which the weight strength scales with DFA size.


Subjects
Algorithms , Computer Simulation , Neural Networks, Computer , Automation , Discriminant Analysis , Neurons , Random Allocation , Software
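
The second-order architecture referred to above updates each state neuron from products of state and input activations. The sketch below runs such a forward pass with random weights; it does not implement the paper's DFA-encoding weight programming, and all sizes are illustrative.

    import numpy as np

    def second_order_step(S, I, W, b):
        """Second-order recurrent update:
        S_i(t+1) = sigmoid( sum_{j,k} W[i,j,k] * S_j(t) * I_k(t) + b_i )."""
        pre = np.einsum('ijk,j,k->i', W, S, I) + b
        return 1.0 / (1.0 + np.exp(-pre))

    rng = np.random.default_rng(0)
    n_states, n_symbols = 5, 2                 # illustrative sizes
    W = rng.standard_normal((n_states, n_states, n_symbols))
    b = np.zeros(n_states)

    S = np.zeros(n_states); S[0] = 1.0         # start state
    for sym in [0, 1, 1, 0, 1]:                # a one-hot-encoded input string
        I = np.eye(n_symbols)[sym]
        S = second_order_step(S, I, W, b)
    print("acceptance neuron:", S[0])          # e.g., read acceptance off neuron 0
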
15.
IEEE Trans Neural Netw ; 7(6): 1329-38, 1996.
Article in English | MEDLINE | ID: mdl-18263528

ABSTRACT

It has previously been shown that gradient-descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e., those problems for which the desired output depends on inputs presented at times far in the past. We show that the long-term dependencies problem is lessened for a class of architectures called nonlinear autoregressive models with exogenous inputs (NARX) recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient-descent learning can be more effective in NARX networks than in recurrent neural network architectures that have "hidden states" on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are consistent with this phenomenon. We present experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly and suggest possible ways to loosen these assumptions.

16.
IEEE Trans Neural Netw ; 7(6): 1424-38, 1996.
Article in English | MEDLINE | ID: mdl-18263536

ABSTRACT

This work concerns the effect of noise on the performance of feedforward neural nets. We introduce and analyze various methods of injecting synaptic noise into dynamically driven recurrent nets during training. Theoretical results show that applying a controlled amount of noise during training may improve convergence and generalization performance. We analyze the effects of various noise parameters and predict that the best overall performance can be achieved by injecting additive noise at each time step. Noise contributes a second-order gradient term to the error function which can be viewed as an anticipatory agent that aids convergence. This term appears to find promising regions of weight space in the early stages of training, when the training error is large, and should improve convergence on error surfaces with local minima. The first-order term is a regularization term that can improve generalization. Specifically, it can encourage internal representations where the state nodes operate in the saturated regions of the sigmoid discriminant function. While this effect can improve performance on automata inference problems with binary inputs and target outputs, it is unclear what effect it will have on other types of problems. To substantiate these predictions, we present simulations on learning the dual parity grammar from temporal strings for all noise models, and on learning a randomly generated six-state grammar using the predicted best noise model.
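
The predicted best model, additive noise at each time step, amounts to perturbing the recurrent state inside the training-time forward pass and leaving it unperturbed at test time. A minimal sketch with a vanilla recurrent net and invented sizes:

    import numpy as np

    def noisy_forward(u_seq, Wx, Wh, b, noise_std, rng):
        """Vanilla recurrent forward pass with additive synaptic noise
        injected into the state at every time step (training only)."""
        h = np.zeros(Wh.shape[0])
        states = []
        for u in u_seq:
            h = np.tanh(Wx @ u + Wh @ h + b)
            h = h + rng.normal(0.0, noise_std, size=h.shape)   # inject noise
            states.append(h)
        return np.array(states)

    rng = np.random.default_rng(0)
    n_in, n_hid = 3, 8                                 # illustrative sizes
    Wx = rng.standard_normal((n_hid, n_in)) * 0.5
    Wh = rng.standard_normal((n_hid, n_hid)) * 0.5
    b = np.zeros(n_hid)

    u_seq = rng.standard_normal((20, n_in))
    train_states = noisy_forward(u_seq, Wx, Wh, b, noise_std=0.05, rng=rng)
    test_states = noisy_forward(u_seq, Wx, Wh, b, noise_std=0.0, rng=rng)
    print(train_states.shape, test_states.shape)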

17.
IEEE Trans Neural Netw ; 6(4): 829-36, 1995.
Article in English | MEDLINE | ID: mdl-18263373

ABSTRACT

It is often difficult to predict the optimal neural network size for a particular application. Constructive or destructive methods that add or subtract neurons, layers, connections, etc., might offer a solution to this problem. We prove that one method, recurrent cascade correlation, has fundamental limitations in representation, and thus in its learning capabilities, due to its topology: with monotone (i.e., sigmoid) and hard-threshold activation functions, it cannot represent certain finite state automata. We give a "preliminary" approach to getting around these limitations by devising a simple constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. We illustrate this approach with simulations that learn many examples of regular grammars that the recurrent cascade correlation method is unable to learn.
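
One way to picture the constructive step, adding a neuron while preserving full recurrence, is to grow the recurrent weight matrix by one row and column initialized near zero so existing behavior is barely disturbed. This is a hedged illustration, not the paper's exact procedure:

    import numpy as np

    def add_neuron(Wh, Wx, Wo, init_scale=0.01, rng=None):
        """Grow a fully recurrent network by one hidden neuron: old weights
        are kept, and the new row/column start small so behavior barely changes."""
        rng = rng or np.random.default_rng()
        n = Wh.shape[0]
        Wh2 = np.zeros((n + 1, n + 1)); Wh2[:n, :n] = Wh
        Wh2[n, :] = rng.normal(0, init_scale, n + 1)       # new incoming weights
        Wh2[:n, n] = rng.normal(0, init_scale, n)          # new outgoing weights
        Wx2 = np.vstack([Wx, rng.normal(0, init_scale, Wx.shape[1])])
        Wo2 = np.hstack([Wo, rng.normal(0, init_scale, 1)])
        return Wh2, Wx2, Wo2

    rng = np.random.default_rng(0)
    Wh = rng.standard_normal((4, 4)); Wx = rng.standard_normal((4, 2))
    Wo = rng.standard_normal(4)
    Wh, Wx, Wo = add_neuron(Wh, Wx, Wo, rng=rng)
    print(Wh.shape, Wx.shape, Wo.shape)                    # (5, 5) (5, 2) (5,)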

18.
IEEE Trans Neural Netw ; 5(3): 511-3, 1994.
Article in English | MEDLINE | ID: mdl-18267822

ABSTRACT

We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.

19.
IEEE Trans Neural Netw ; 5(5): 848-51, 1994.
Article in English | MEDLINE | ID: mdl-18267860

ABSTRACT

Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks, no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of the layers, or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly generated 10-state grammar and an 8-state triple-parity grammar. Further simulations indicate that this pruning method can achieve generalization performance superior to that obtained by training with weight decay.
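
One common reading of such a heuristic is magnitude-based pruning followed by retraining. The sketch below zeroes the smallest-magnitude weights in rounds and keeps a mask so pruned connections stay removed; the threshold fraction and sizes are invented, and the retraining step is left as a stub.

    import numpy as np

    def prune_smallest(W, fraction=0.1):
        """Zero out the smallest-magnitude fraction of the remaining weights;
        return the pruned matrix and the mask that keeps them at zero."""
        nonzero = np.abs(W[W != 0])
        k = int(nonzero.size * fraction)
        thresh = np.sort(nonzero)[k]
        mask = np.abs(W) >= thresh
        return W * mask, mask

    rng = np.random.default_rng(0)
    W = rng.standard_normal((10, 10))         # illustrative recurrent weights

    for round_ in range(3):                   # prune / retrain cycles
        W, mask = prune_smallest(W, fraction=0.1)
        # Retraining stub: gradient updates would go here, multiplied by
        # `mask` so that pruned connections stay removed.
        print(f"round {round_}: {int(mask.sum())} weights remain")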

20.
Am J Ophthalmol ; 116(1): 79-83, 1993 Jul 15.
Article in English | MEDLINE | ID: mdl-8328547

ABSTRACT

A series of seven exotropic children (aged 2 to 10 years) had resolution of exotropia after spectacle correction of hyperopia. Their hyperopic correction ranged from 3.00 to 7.00 diopters. Six had intermittent exotropia, which became small-angle esophoria after spectacle correction. In one patient with apparently no fusion, spectacle correction converted constant exotropia to small esotropia in the monofixational range. In all patients, Worth 4-dot and Titmus Stereo Test results, when obtainable, indicated an improvement in binocular sensory status after correction of the hyperopia. We conclude that a trial of spectacle correction is warranted in exotropic children with severe hyperopia and in those with moderate hyperopia and a low accommodative convergence/accommodation ratio or evidence of hypoaccommodation.


Subjects
Exotropia/therapy , Eyeglasses , Hyperopia/therapy , Child , Child, Preschool , Female , Humans , Male , Visual Acuity