Results 1 - 20 of 39
1.
Neural Comput ; 35(11): 1713-1796, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37725706

ABSTRACT

Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences. This is in part due to their versatility, but is compounded by the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We use tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains, with a particular focus on exploring properties of the eigenvalues and eigenvectors corresponding to these matrices. The results presented are relevant to a number of methods in machine learning and data mining, which we describe at various stages. Rather than being a novel academic study in its own right, this text presents a collection of known results, together with some new concepts. Moreover, the tutorial focuses on offering intuition to readers rather than formal understanding and only assumes basic exposure to concepts from linear algebra and probability theory. It is therefore accessible to students and researchers from a wide variety of disciplines.
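
As a minimal illustration of the linear-algebra view taken in this tutorial (a sketch, not code from the paper; NumPy assumed, and the three-state chain is invented), the stationary distribution of a Markov chain can be read off as the left eigenvector of its transition matrix with eigenvalue 1:

```python
import numpy as np

# Invented 3-state transition matrix; row i holds the probabilities of
# moving from state i to each state, so each row sums to 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])

# The stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P (an ordinary eigenvector of P transposed) with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()                     # normalize to a probability distribution

print(pi)                          # stationary distribution
print(pi @ P)                      # equals pi up to numerical error
```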

2.
Neural Comput ; 34(9): 1841-1870, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35896150

ABSTRACT

Many studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this article, we present a model of the generative aspects of episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex reasonably fills in the missing parts based on general semantic information in a process we call semantic completion. The model combines two neural network architectures known from machine learning, the vector-quantized variational autoencoder (VQ-VAE) and the pixel convolutional neural network (PixelCNN). As episodes, we use images of digits (MNIST) and fashion items (Fashion-MNIST) augmented by different backgrounds representing context. The model is able to complete missing parts of a memory trace in a semantically plausible way up to the point where it can generate plausible images from scratch, and it generalizes well to images not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and robustness to noise. Finally, we also model an episodic memory experiment and can reproduce that semantically congruent contexts are always recalled better than incongruent ones, high attention levels improve memory accuracy in both cases, and contexts that are not remembered correctly are more often remembered semantically congruently than completely wrong. This model contributes to a deeper understanding of the interplay between episodic memory and semantic information in the generative process of recalling the past.


Subjects
Episodic Memory, Attention, Mental Recall, Semantics
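
To make the compression step concrete, here is a minimal sketch of vector quantization as used in VQ-VAE-style models (illustrative only, not the paper's network; NumPy assumed, and the codebook and episode are random placeholders). Each feature vector of an episode is replaced by the index of its nearest codebook entry, yielding a compact, necessarily lossy memory trace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "learned" codebook: 8 code vectors of dimension 4.
codebook = rng.normal(size=(8, 4))

def quantize(features, codebook):
    """Return the index of the nearest code vector for each feature vector."""
    distances = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return distances.argmin(axis=1)

episode = rng.normal(size=(5, 4))          # toy episode: 5 encoder feature vectors
trace = quantize(episode, codebook)        # compressed memory trace: 5 integers
reconstruction = codebook[trace]           # lossy decoding from the codebook
```

In the model above, the PixelCNN component provides a prior over such discrete codes, which plausibly is what supports completing missing parts and generating images from scratch.
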
3.
Hippocampus ; 30(6): 638-656, 2020 06.
Article in English | MEDLINE | ID: mdl-31886605

ABSTRACT

The medial temporal lobe (MTL) is well known to be essential for declarative memory. However, a growing body of research suggests that MTL structures might be involved in perceptual processes as well. Our previous modeling work suggests that sensory representations in cortex influence the accuracy of episodic memory retrieved from the MTL. We adopt that model here to show that, conversely, episodic memory can also influence the quality of sensory representations. We model the effect of episodic memory as (a) repeatedly replaying episodes from memory and (b) recombining episode fragments to form novel sequences that are more informative for learning sensory representations than the original episodes. We demonstrate that the performance in visual discrimination tasks is superior when episodic memory is present and that this difference is due to episodic memory driving the learning of a more optimized sensory representation. We conclude that the MTL can, even if it has only a purely mnemonic function, influence perceptual discrimination indirectly.


Subjects
Episodic Memory, Neurological Models, Visual Pattern Recognition/physiology, Photic Stimulation/methods, Temporal Lobe/physiology, Visual Perception/physiology, Psychological Discrimination/physiology, Humans
4.
Neural Comput ; 30(5): 1151-1179, 2018 05.
Article in English | MEDLINE | ID: mdl-29566353

ABSTRACT

The computational principles of slowness and predictability have been proposed to describe aspects of information processing in the visual system. Viewing slowness as a limited special case of predictability, we investigate the relationship between these two principles empirically. On a collection of real-world data sets, we compare the features extracted by slow feature analysis (SFA) to the features of three recently proposed methods for predictable feature extraction: forecastable component analysis, predictable feature analysis, and graph-based predictable feature analysis. Our experiments show that the predictability of the learned features is highly correlated across methods, and thus SFA appears to effectively implement a method for extracting predictable features according to different measures of predictability.
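
For readers unfamiliar with SFA, the linear case reduces to a generalized eigenvalue problem: the slowest directions are the generalized eigenvectors with the smallest eigenvalues of the time-difference covariance relative to the signal covariance. Below is a minimal sketch (NumPy/SciPy assumed; the toy signal is invented and this is not the experiment code of the paper):

```python
import numpy as np
from scipy.linalg import eigh

def linear_sfa(x, n_components=2):
    """Linear slow feature analysis on a signal x of shape (T, D)."""
    x = x - x.mean(axis=0)                  # zero mean
    dx = np.diff(x, axis=0)                 # discrete time derivative
    A = dx.T @ dx / len(dx)                 # covariance of time differences
    B = x.T @ x / len(x)                    # covariance of the signal
    # Generalized eigenvalue problem: small eigenvalues <-> slow directions.
    eigvals, W = eigh(A, B)
    return x @ W[:, :n_components], eigvals[:n_components]

# Toy signal: a slow sine mixed with a faster one plus a little noise.
t = np.linspace(0, 10, 2000)
sources = np.stack([np.sin(t), np.sin(11 * t)], axis=1)
mixed = sources @ np.array([[1.0, 0.5], [0.3, 1.0]]) + 0.01 * np.random.randn(len(t), 2)
slow_features, slowness = linear_sfa(mixed, n_components=1)
```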

5.
Neural Comput ; 30(2): 293-332, 2018 02.
Article in English | MEDLINE | ID: mdl-29220304

ABSTRACT

The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.

6.
PLoS Comput Biol ; 11(5): e1004250, 2015 May.
Article in English | MEDLINE | ID: mdl-25954996

ABSTRACT

Over the last decades, a standard model of the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storing place of the memories. Furthermore, to work properly, CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study, we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 network is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all noise levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view in which the simpler EC-CA1-EC network is sufficient for memory storage.


Subjects
Hippocampus/physiology, Memory/physiology, Neurological Models, Psychological Models, Animals, Hippocampal CA1 Region/physiology, Hippocampal CA3 Region/physiology, Computational Biology, Dentate Gyrus/physiology, Entorhinal Cortex/physiology, Humans, Learning/physiology, Mental Recall/physiology
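
For orientation, the auto-associative storage ascribed to CA3 in the standard model is often illustrated with a Hopfield-style network. The sketch below (NumPy assumed; random patterns and sizes are invented, and this is not the specific model evaluated in the paper) stores binary patterns by Hebbian outer products and completes noisy cues by iterating the recurrent dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

def store(patterns):
    """Hebbian storage of +/-1 patterns in a recurrent weight matrix."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)                  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Iteratively complete a noisy cue toward a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = rng.choice([-1.0, 1.0], size=(5, 100))             # 5 random patterns, 100 units
W = store(patterns)
noisy = patterns[0] * rng.choice([1, 1, 1, 1, -1], size=100)  # flip roughly 20% of bits
print(np.mean(recall(W, noisy) == patterns[0]))               # fraction of bits recovered
```
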
7.
PLoS Comput Biol ; 10(5): e1003564, 2014 May.
Article in English | MEDLINE | ID: mdl-24810948

ABSTRACT

The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world.


Subjects
Biological Clocks/physiology, Brain Waves/physiology, Neurological Models, Nerve Net/physiology, Retinal Neurons/physiology, Visual Cortex/physiology, Visual Perception/physiology, Animals, Computer Simulation, Humans
8.
PLoS One ; 19(6): e0304076, 2024.
Article in English | MEDLINE | ID: mdl-38900733

ABSTRACT

Based on the CRISP theory (Content Representation, Intrinsic Sequences, and Pattern completion), we present a computational model of the hippocampus that allows for online one-shot storage of pattern sequences without the need for a consolidation process. In our model, CA3 provides a pre-trained sequence that is hetero-associated with the input sequence, rather than storing a sequence in CA3. That is, plasticity on a short timescale only occurs in the incoming and outgoing connections of CA3, not in its recurrent connections. We use a single learning rule named Hebbian descent to train all plastic synapses in the network. A forgetting mechanism in the learning rule allows the network to continuously store new patterns while forgetting those stored earlier. We find that a single cue pattern can reliably trigger the retrieval of sequences, even when cues are noisy or missing information. Furthermore, pattern separation in subregion DG is necessary when sequences contain correlated patterns. Besides artificially generated input sequences, the model works with sequences of handwritten digits and natural images. Notably, our model is capable of improving itself without external input, in a process that can be referred to as 'replay' or 'offline-learning', which helps in improving the associations and consolidating the learned patterns.


Subjects
Neurological Models, Neural Networks (Computer), Humans, Neuronal Plasticity, Learning, Hippocampus/physiology, Synapses/physiology
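
As a rough, simplified stand-in for the paper's Hebbian descent rule (the exact rule differs in its details), the sketch below hetero-associates cue and target patterns with an error-driven outer-product update plus exponential forgetting, so that new items can be written in one shot while older ones gradually fade (NumPy assumed; sizes and patterns are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

class HeteroAssociator:
    """One-shot hetero-association with exponential forgetting.

    Simplified illustration only: each (cue, target) pair is written with an
    error-driven outer-product update while older associations decay, so the
    memory continuously makes room for new items.
    """
    def __init__(self, n_in, n_out, lr=1.0, decay=0.9):
        self.W = np.zeros((n_out, n_in))
        self.lr, self.decay = lr, decay

    def store(self, cue, target):
        error = target - self.W @ cue                              # delta-rule-like term
        self.W = self.decay * self.W + self.lr * np.outer(error, cue) / cue.size

    def recall(self, cue):
        return self.W @ cue

mem = HeteroAssociator(n_in=50, n_out=50)
cue, target = rng.normal(size=50), rng.normal(size=50)
mem.store(cue, target)
print(np.corrcoef(mem.recall(cue), target)[0, 1])                 # close to 1 for a fresh item
```
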
9.
Front Artif Intell ; 7: 1354114, 2024.
Article in English | MEDLINE | ID: mdl-38533466

ABSTRACT

In an era where Artificial Intelligence (AI) integration into business processes is crucial for maintaining competitiveness, there is a growing need for structured guidance on designing AI solutions that align with human needs. To this end, we present "technical assistance concerning human-centered AI development" (tachAId), an interactive advisory tool which comprehensively guides AI developers and decision makers in navigating the machine learning lifecycle with a focus on human-centered design. tachAId motivates and presents concrete technical advice to ensure human-centeredness across the phases of AI development. The tool's effectiveness is evaluated through a catalog of criteria for human-centered AI in the form of relevant challenges and goals, derived from existing methodologies and guidelines. Lastly, tachAId and one other comparable advisory tool were examined to determine their adherence to these criteria in order to provide an overview of the human-centered aspects covered by these tools and to allow interested parties to quickly assess whether the tools meet their needs.

10.
Front Psychol ; 14: 1160648, 2023.
Article in English | MEDLINE | ID: mdl-37138984

ABSTRACT

Episodic memory has been studied extensively in the past few decades, but so far little is understood about how it drives future behavior. Here we propose that episodic memory can facilitate learning in two fundamentally different modes: retrieval and replay, which is the reinstatement of hippocampal activity patterns during later sleep or awake quiescence. We study their properties by comparing three learning paradigms using computational modeling based on visually-driven reinforcement learning. Firstly, episodic memories are retrieved to learn from single experiences (one-shot learning); secondly, episodic memories are replayed to facilitate learning of statistical regularities (replay learning); and, thirdly, learning occurs online as experiences arise with no access to memories of past experiences (online learning). We found that episodic memory benefits spatial learning in a broad range of conditions, but the performance difference is meaningful only when the task is sufficiently complex and the number of learning trials is limited. Furthermore, the two modes of accessing episodic memory affect spatial learning differently. One-shot learning is typically faster than replay learning, but the latter may reach a better asymptotic performance. In the end, we also investigated the benefits of sequential replay and found that replaying stochastic sequences results in faster learning as compared to random replay when the number of replays is limited. Understanding how episodic memory drives future behavior is an important step toward elucidating the nature of episodic memory.
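
The contrast between the learning modes can be made concrete in a tabular reinforcement-learning skeleton like the one below (illustrative only, not the paper's visually-driven model; states, actions, and constants are invented). Online learning updates from the current transition only, whereas replay learning re-uses transitions stored in an episodic buffer:

```python
import random
from collections import deque

ALPHA, GAMMA = 0.1, 0.95                  # learning rate and discount factor
ACTIONS = [0, 1, 2, 3]                    # e.g. four movement directions

def online_update(Q, s, a, r, s_next):
    """Online learning: a single Q-learning update from the current transition."""
    best_next = max(Q[(s_next, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def replay_updates(Q, buffer, n_replays=32):
    """Replay learning: re-use stored transitions to extract statistical regularities."""
    for s, a, r, s_next in random.sample(list(buffer), k=min(n_replays, len(buffer))):
        online_update(Q, s, a, r, s_next)

# Toy 5x5 grid-world state space and an episodic memory of past transitions.
Q = {((x, y), a): 0.0 for x in range(5) for y in range(5) for a in ACTIONS}
buffer = deque(maxlen=1000)
buffer.append(((0, 0), 1, 0.0, (0, 1)))   # (state, action, reward, next state)
replay_updates(Q, buffer)
```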

11.
Neurosci Biobehav Rev ; 152: 105200, 2023 09.
Article in English | MEDLINE | ID: mdl-37178943

ABSTRACT

Spatial navigation has received much attention from neuroscientists, leading to the identification of key brain areas and the discovery of numerous spatially selective cells. Despite this progress, our understanding of how the pieces fit together to drive behavior is generally lacking. We argue that this is partly caused by insufficient communication between behavioral and neuroscientific researchers. This has led the latter to under-appreciate the relevance and complexity of spatial behavior, and to focus too narrowly on characterizing neural representations of space, disconnected from the computations these representations are meant to enable. We therefore propose a taxonomy of navigation processes in mammals that can serve as a common framework for structuring and facilitating interdisciplinary research in the field. Using the taxonomy as a guide, we review behavioral and neural studies of spatial navigation. In doing so, we validate the taxonomy and showcase its usefulness in identifying potential issues with common experimental approaches, designing experiments that adequately target particular behaviors, correctly interpreting neural activity, and pointing to new avenues of research.


Subjects
Neurosciences, Spatial Navigation, Humans, Animals, Space Perception, Brain, Spatial Behavior, Hippocampus, Mammals
12.
PLoS Comput Biol ; 7(1): e1001063, 2011 Jan 27.
Article in English | MEDLINE | ID: mdl-21298080

ABSTRACT

Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus layer.


Subjects
Hippocampus/cytology, Memory, Neurogenesis, Neuronal Plasticity, Animals, Rats
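
A toy sketch of the architectural idea behind additive neurogenesis, i.e. appending new units over time while leaving existing ones untouched (NumPy assumed; this only illustrates growth, not the encoding-decoding model, its inputs, or its learning rules):

```python
import numpy as np

rng = np.random.default_rng(3)

class GrowingEncoder:
    """Toy 'additive neurogenesis': new hidden units are appended over time
    while existing units keep their weights, so old representations persist.
    Purely illustrative; not the network studied in the paper."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(scale=0.1, size=(n_hidden, n_in))

    def add_units(self, n_new, n_in):
        new_rows = rng.normal(scale=0.1, size=(n_new, n_in))
        self.W = np.vstack([self.W, new_rows])    # grow, never overwrite

    def encode(self, x):
        return np.tanh(self.W @ x)

net = GrowingEncoder(n_in=20, n_hidden=10)
print(net.encode(rng.normal(size=20)).shape)      # (10,)
net.add_units(5, n_in=20)                         # "neurogenesis" for a new environment
print(net.encode(rng.normal(size=20)).shape)      # (15,)
```
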
13.
Neural Comput ; 23(2): 303-35, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21105830

ABSTRACT

We develop a group-theoretical analysis of slow feature analysis for the case where the input data are generated by applying a set of continuous transformations to static templates. As an application of the theory, we analytically derive nonlinear visual receptive fields and show that their optimal stimuli, as well as the orientation and frequency tuning, are in good agreement with previous simulations of complex cells in primary visual cortex (Berkes and Wiskott, 2005). The theory suggests that side and end stopping can be interpreted as a weak breaking of translation invariance. Direction selectivity is also discussed.


Subjects
Neurological Models, Theoretical Models, Neurons/physiology, Visual Cortex/physiology, Visual Perception/physiology, Animals, Humans
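
For reference, the optimization problem underlying slow feature analysis, on which the group-theoretical analysis builds, can be stated as follows (standard formulation, not quoted from the paper):

```latex
\begin{align}
  \min_{g_j}\; & \Delta(y_j) := \big\langle \dot{y}_j^{\,2} \big\rangle_t ,
      \qquad y_j(t) = g_j\big(\mathbf{x}(t)\big), \\
  \text{s.t.}\; & \langle y_j \rangle_t = 0 \quad \text{(zero mean)}, \\
      & \langle y_j^{2} \rangle_t = 1 \quad \text{(unit variance)}, \\
      & \langle y_i\, y_j \rangle_t = 0 \ \text{for } i < j \quad \text{(decorrelation and order)} .
\end{align}
```
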
14.
Neural Comput ; 23(9): 2289-323, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21671784

ABSTRACT

Primates are very good at recognizing objects independent of viewing angle or retinal position, and they outperform existing computer vision systems by far. But invariant object recognition is only one prerequisite for successful interaction with the environment. An animal also needs to assess an object's position and relative rotational angle. We propose here a model that is able to extract object identity, position, and rotation angles. We demonstrate the model behavior on complex three-dimensional objects under translation and rotation in depth on a homogeneous background. A similar model has previously been shown to extract hippocampal spatial codes from quasi-natural videos. The framework for mathematical analysis of this earlier application carries over to the scenario of invariant object recognition. Thus, the simulation results can be explained analytically even for the complex high-dimensional data we employed.


Subjects
Algorithms, Artificial Intelligence, Neurological Models, Neural Networks (Computer), Recognition (Psychology)/physiology, Visual Perception/physiology, Animals, Humans
15.
PLoS Comput Biol ; 6(8)2010 Aug 19.
Article in English | MEDLINE | ID: mdl-20808883

ABSTRACT

Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.


Subjects
Biological Models, Psychomotor Performance/physiology, Reinforcement (Psychology), Algorithms, Animals, Brain/physiology, Computer Simulation, Humans, Learning/physiology, Neural Networks (Computer), Rats, Visual Perception/physiology
16.
Sci Rep ; 11(1): 2713, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33526840

ABSTRACT

The context-dependence of extinction learning has been well studied and requires the hippocampus. However, the underlying neural mechanisms are still poorly understood. Using memory-driven reinforcement learning and deep neural networks, we developed a model that learns to navigate autonomously in biologically realistic virtual reality environments based on raw camera inputs alone. Neither is context represented explicitly in our model, nor is context change signaled. We find that memory-intact agents learn distinct context representations, and develop ABA renewal, whereas memory-impaired agents do not. These findings reproduce the behavior of control and hippocampal animals, respectively. We therefore propose that the role of the hippocampus in the context-dependence of extinction learning might stem from its function in episodic-like memory and not in context-representation per se. We conclude that context-dependence can emerge from raw visual inputs.

17.
Network ; 20(3): 137-61, 2009.
Article in English | MEDLINE | ID: mdl-19731146

ABSTRACT

Recently we presented a model of additive neurogenesis in a linear, feedforward neural network that performed an encoding-decoding memory task in a changing input environment. Growing the neural network over time allowed the network to adapt to changes in input statistics without disrupting retrieval properties, and we proposed that adult neurogenesis might fulfil a similar computational role in the dentate gyrus of the hippocampus. Here we explicitly evaluate this hypothesis by examining additive neurogenesis in a simplified hippocampal memory model. The model incorporates a divergence in unit number from the entorhinal cortex to the dentate gyrus and sparse coding in the dentate gyrus, both notable features of hippocampal processing. We evaluate two distinct adaptation strategies: neuronal turnover, where the network is of fixed size but units may be deleted and new ones added, and additive neurogenesis, where the network grows over time. We quantify the performance of the network across the full range of adaptation levels, from zero in a fixed network to one in a fully adapting network. We find that additive neurogenesis is always superior to neuronal turnover, as it permits the network to be responsive to changes in input statistics while at the same time preserving representations of earlier environments.


Subjects
Dentate Gyrus/physiology, Learning/physiology, Memory/physiology, Neural Networks (Computer), Neurogenesis, Psychological Adaptation/physiology, Algorithms, Animals, Computer Simulation, Environment, Neurons/physiology, Rats, Temporal Lobe/physiology
18.
PLoS Comput Biol ; 3(6): e112, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17604445

ABSTRACT

Our nervous system can efficiently recognize objects in spite of changes in contextual variables such as perspective or lighting conditions. Several lines of research have proposed that this ability for invariant recognition is learned by exploiting the fact that object identities typically vary more slowly in time than contextual variables or noise. Here, we study the question of how this "temporal stability" or "slowness" approach can be implemented within the limits of biologically realistic spike-based learning rules. We first show that slow feature analysis, an algorithm that is based on slowness, can be implemented in linear continuous model neurons by means of a modified Hebbian learning rule. This approach provides a link to the trace rule, which is another implementation of slowness learning. Then, we show analytically that for linear Poisson neurons, slowness learning can be implemented by spike-timing-dependent plasticity (STDP) with a specific learning window. By studying the learning dynamics of STDP, we show that for functional interpretations of STDP, it is not the learning window alone that is relevant but rather the convolution of the learning window with the postsynaptic potential. We then derive STDP learning windows that implement slow feature analysis and the "trace rule." The resulting learning windows are compatible with physiological data both in shape and timescale. Moreover, our analysis shows that the learning window can be split into two functionally different components that are sensitive to reversible and irreversible aspects of the input statistics, respectively. The theory indicates that irreversible input statistics are not in favor of stable weight distributions but may generate oscillatory weight dynamics. Our analysis offers a novel interpretation for the functional role of STDP in physiological neurons.


Subjects
Action Potentials/physiology, Neurological Models, Nerve Net/physiology, Neuronal Plasticity/physiology, Neurons/physiology, Reaction Time/physiology, Synaptic Transmission/physiology, Computer Simulation
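
As a minimal illustration of the trace rule mentioned above, here is a simple rate-based implementation of slowness learning for one linear unit (Foldiak-style; NumPy assumed, toy input invented, and not the spiking STDP model analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def trace_rule(x, lr=0.01, eta=0.1):
    """Toy trace rule for a single linear unit.

    The postsynaptic trace y_bar is a low-pass-filtered output; weights grow
    toward inputs that co-occur with a slowly varying response, which is one
    simple implementation of slowness learning. Illustrative only."""
    w = rng.normal(scale=0.1, size=x.shape[1])
    y_bar = 0.0
    for x_t in x:
        y = w @ x_t                       # linear unit response
        y_bar = (1 - eta) * y_bar + eta * y
        w += lr * y_bar * x_t             # Hebbian update with the trace
        w /= np.linalg.norm(w)            # keep the weights bounded
    return w

x = rng.normal(size=(5000, 10))           # toy input stream
w = trace_rule(x)
```
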
19.
PLoS Comput Biol ; 3(8): e166, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17784780

ABSTRACT

We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.


Subjects
Head Movements/physiology, Hippocampus/physiology, Neurological Models, Nerve Net/physiology, Afferent Neurons/physiology, Orientation/physiology, Space Perception/physiology, Animals, Computer Simulation, Nerve Net/cytology, Afferent Neurons/classification, Afferent Neurons/cytology, Rats
20.
PLoS One ; 13(10): e0204685, 2018.
Article in English | MEDLINE | ID: mdl-30286147

ABSTRACT

Episodic memories have been suggested to be represented by neuronal sequences, which are stored in and retrieved from the hippocampal circuit. A particular difficulty is that realistic neuronal sequences are strongly correlated with each other, whereas computational memory models generally perform poorly when correlated patterns are stored. Here, we use a computational model to study under which conditions the hippocampal circuit can perform this function robustly. During memory encoding, CA3 sequences in our model are driven by intrinsic dynamics, entorhinal inputs, or a combination of both. These CA3 sequences are hetero-associated with the input sequences, so that the network can retrieve entire sequences based on a single cue pattern. We find that overall memory performance depends on two factors: the robustness of sequence retrieval from CA3 and the circuit's ability to perform pattern completion through the feedforward connectivity, including CA3, CA1 and EC. The two factors, in turn, depend on the relative contribution of the external inputs and recurrent drive on CA3 activity. In conclusion, memory performance in our network model critically depends on the network architecture and dynamics in CA3.


Subjects
Hippocampus/physiology, Memory/physiology, Neural Pathways/physiology, Animals, Computer Simulation, Entorhinal Cortex/physiology, Episodic Memory, Neurological Models, Neurons/physiology, Rats, Temporal Lobe/physiology