Results 1 - 17 of 17
1.
Sci Rep ; 8(1): 9301, 2018 06 18.
Article in English | MEDLINE | ID: mdl-29915205

ABSTRACT

The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information may be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time a mapping between auditory material properties and visual materials ("Metal" and "Wood") in the right temporal region of preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquire the audio-visual mapping for the "Metal" material later than for the "Wood" material, since infants form the visual representation of the "Metal" material only after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that a material's familiarity might facilitate the development of multisensory processing during the first year of life.


Subjects
Auditory Perception/physiology, Visual Perception/physiology, Female, Hemoglobins/metabolism, Humans, Infant, Male, Oxyhemoglobins/metabolism, Near-Infrared Spectroscopy, Time Factors
2.
Appetite ; 116: 493-501, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28572067

ABSTRACT

Because chewing sounds influence perceived food textures, the unpleasant textures of texture-modified diets might be improved by chewing sound modulation. Additionally, since inhomogeneous food properties increase perceived sensory intensity, the effects of chewing sound modulation might depend on inhomogeneity. This study examined the influence of texture inhomogeneity on the effects of chewing sound modulation. Three kinds of nursing care foods in two food process types (minced-like and puréed-like foods, for inhomogeneous and homogeneous textures respectively) were used as sample foods. A pseudo-chewing sound presentation system, driven by electromyogram signals, was used to modulate chewing sounds. Thirty healthy elderly adults took part in the experiment. In two conditions, with and without the pseudo-chewing sound, participants rated the taste, texture, and evoked feelings in response to the sample foods. The results showed that inhomogeneity strongly influenced the perception of food texture. Regarding the effects of the pseudo-chewing sound, taste was less influenced, the perceived food texture tended to change for the minced-like foods, and evoked feelings changed in both food process types. Though there were some food-dependent differences in the effects of the pseudo-chewing sound, the presentation of pseudo-chewing sounds was more effective for foods with an inhomogeneous texture. In addition, the pseudo-chewing sound might have positively influenced feelings.


Subjects
Eating/psychology, Evoked Potentials/physiology, Food Services, Mastication/physiology, Sound, Touch Perception, Aged, Electromyography, Female, Humans, Male, Nursing Care, Surveys and Questionnaires, Taste
3.
Physiol Behav ; 167: 324-331, 2016 12 01.
Article in English | MEDLINE | ID: mdl-27720736

ABSTRACT

Elderly individuals whose ability to chew and swallow has declined are often restricted to unpleasant diets of very soft food, leading to a poor appetite. To address this problem, we aimed to investigate the influence of altered auditory input of chewing sounds on the perception of food texture. The modified chewing sound was reported to influence the perception of food texture in normal foods. We investigated whether the perceived sensations of nursing care foods could be altered by providing altered auditory feedback of chewing sounds, even if the actual food texture is dull. Chewing sounds were generated using electromyogram (EMG) of the masseter. When the frequency properties of the EMG signal are modified and it is heard as a sound, it resembles a "crunchy" sound, much like that emitted by chewing, for example, root vegetables (EMG chewing sound). Thirty healthy adults took part in the experiment. In two conditions (with/without the EMG chewing sound), participants rated the taste, texture and evoked feelings of five kinds of nursing care foods using two questionnaires. When the "crunchy" EMG chewing sound was present, participants were more likely to evaluate food as having the property of stiffness. Moreover, foods were perceived as rougher and to have a greater number of ingredients in the condition with the EMG chewing sound, and satisfaction and pleasantness were also greater. In conclusion, the "crunchy" pseudo-chewing sound could influence the perception of food texture, even if the actual "crunchy" oral sensation is lacking. Considering the effect of altered auditory feedback while chewing, we can suppose that such a tool would be a useful technique to help people on texture-modified diets to enjoy their food.


Subjects
Masseter Muscle/physiology, Mastication/physiology, Sound, Touch Perception/physiology, Adult, Analysis of Variance, Electromyography, Motor Evoked Potentials/physiology, Female, Fourier Analysis, Humans, Male, Middle Aged, Surface Properties, Surveys and Questionnaires, Young Adult
4.
Perception ; 45(10): 1099-114, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27257036

ABSTRACT

Certain systematic relationships are often assumed between information conveyed from multiple sensory modalities; for instance, a small figure and a high pitch may be perceived as more harmonious. This phenomenon, termed cross-modal correspondence, may result from correlations between multi-sensory signals learned in daily experience of the natural environment. If so, we would observe cross-modal correspondences not only in the perception of artificial stimuli but also in the perception of natural objects. To test this hypothesis, we reanalyzed data collected previously in our laboratory examining perceptions of the material properties of wood using vision, audition, and touch. We compared participant evaluations of three perceptual properties (surface brightness, sharpness of sound, and smoothness) of the wood blocks obtained separately via vision, audition, and touch. Significant positive correlations were identified for all properties in the audition-touch comparison, and for two of the three properties in the vision-touch comparison. By contrast, no properties exhibited significant positive correlations in the vision-audition comparison. These results suggest that we learn correlations between multi-sensory signals through experience; however, the strength of this statistical learning is apparently dependent on the particular combination of sensory modalities involved.


Subjects
Auditory Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Wood, Adult, Female, Hearing/physiology, Humans, Male, Physical Stimulation, Touch/physiology, Vision/physiology, Young Adult
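The analysis described in record 4 — correlating mean property evaluations across pairs of modalities over the same wood blocks — can be sketched as follows. This is a minimal illustration with hypothetical data and function names of our own; the study's actual ratings are not reproduced here.

```python
import numpy as np
from itertools import combinations

def cross_modal_correlations(ratings):
    """ratings maps each modality name to an array of mean evaluations of
    one property across the same set of wood blocks; returns the Pearson
    correlation coefficient for every pair of modalities."""
    return {
        (a, b): float(np.corrcoef(ratings[a], ratings[b])[0, 1])
        for a, b in combinations(sorted(ratings), 2)
    }
```

For instance, identical rank orders of blocks under touch and audition would give r = 1.0 for that pair, matching the positive audition-touch correlations the study reports.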
5.
Perception ; 44(2): 198-214, 2015 Feb.
Article in English | MEDLINE | ID: mdl-26561972

ABSTRACT

Temporal phase discrimination is a useful psychophysical task to evaluate how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (ie A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory bindings. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing intersequence frequency separation, or presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants.


Subjects
Auditory Perception/physiology, Psychological Discrimination/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Time Perception/physiology, Visual Perception/physiology, Adult, Humans, Young Adult
6.
Vision Res ; 109: 185-200, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25576379

ABSTRACT

Most research on the multimodal perception of material properties has investigated the perception of material properties of two modalities such as vision-touch, vision-audition, audition-touch, and vision-action. Here, we investigated whether the same affective classifications of materials can be found in three different modalities of vision, audition, and touch, using wood as the target object. Fifty participants took part in an experiment involving the three modalities of vision, audition, and touch, in isolation. Twenty-two different wood types including genuine, processed, and fake were perceptually evaluated using a questionnaire consisting of twenty-three items (12 perceptual and 11 affective). The results demonstrated that evaluations of the affective properties of wood were similar in all three modalities. The elements of "expensiveness, sturdiness, rareness, interestingness, and sophisticatedness" and "pleasantness, relaxed feelings, and liked-disliked" were separately grouped for all three senses. Our results suggest that the affective material properties of wood are at least partly represented in a supramodal fashion. Our results also suggest an association between perceptual and affective properties, which will be a useful tool not only in science, but also in applied fields.


Subjects
Auditory Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Wood, Adult, Psychological Discrimination/physiology, Factor Analysis, Female, Humans, Male, Surface Properties, Young Adult
7.
J Vis ; 14(4)2014 Apr 17.
Article in English | MEDLINE | ID: mdl-24744448

ABSTRACT

Interest in the perception of the material of objects has been growing. While material perception is a critical ability for animals to properly regulate behavioral interactions with surrounding objects (e.g., eating), little is known about its underlying processing. Vision and audition provide useful information for material perception; using only its visual appearance or impact sound, we can infer what an object is made from. However, what material is perceived when the visual appearance of one material is combined with the impact sound of another, and what are the rules that govern cross-modal integration of material information? We addressed these questions by asking 16 human participants to rate how likely it was that audiovisual stimuli (48 combinations of visual appearances of six materials and impact sounds of eight materials), along with visual-only and auditory-only stimuli, fell into each of 13 material categories. The results indicated strong interactions between audiovisual material perceptions; for example, the appearance of glass paired with a pepper sound is perceived as transparent plastic. Ratings of material-category likelihood follow a multiplicative integration rule, in that the categories judged to be likely are consistent with both the visual and auditory stimuli. On the other hand, ratings of material properties, such as roughness and hardness, follow a weighted average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimations for the two types of material judgment, respectively.


Subjects
Auditory Perception/physiology, Form Perception/physiology, Visual Perception/physiology, Adult, Bayes Theorem, Female, Humans, Male, Photic Stimulation/methods, Sound, Surveys and Questionnaires, Young Adult
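The two integration rules reported in record 7 can be given a rough computational sketch. The function names and example numbers below are ours, not the paper's; the weighted-average rule is rendered here in its standard inverse-variance form for independent Gaussian cues.

```python
import numpy as np

def multiplicative_integration(p_vis, p_aud):
    """Combine per-category likelihoods from vision and audition by
    multiplying them and renormalising (independent-cue Bayes rule):
    only categories consistent with both signals remain likely."""
    p = np.asarray(p_vis, dtype=float) * np.asarray(p_aud, dtype=float)
    return p / p.sum()

def weighted_average(v_est, v_var, a_est, a_var):
    """Reliability-weighted average of a visual and an auditory property
    estimate (e.g. roughness); each cue is weighted by its inverse
    variance, the optimal rule for independent Gaussian estimates."""
    w_v = (1.0 / v_var) / (1.0 / v_var + 1.0 / a_var)
    return w_v * v_est + (1.0 - w_v) * a_est
```

With equal visual likelihoods [0.5, 0.5] and auditory likelihoods [0.8, 0.2], the multiplicative rule simply echoes the more informative cue, [0.8, 0.2]; with equally reliable cues, the weighted average reduces to the midpoint of the two estimates.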
8.
Front Psychol ; 3: 61, 2012.
Article in English | MEDLINE | ID: mdl-22408631

ABSTRACT

Using four experiments, this study investigates what amount of delay brings about maximal impairment under delayed visual feedback and whether a critical interval, such as that in audition, also exists in vision. The first experiment measured Grooved Pegboard test performance as a function of visual feedback delays from 120 to 2120 ms in 16 steps. Performance sharply decreased until about 490 ms, then more gradually until 2120 ms, suggesting that two mechanisms were operating under delayed visual feedback. Delayed visual feedback differs from delayed auditory feedback in that the former induces not only temporal but also spatial displacements between motor and sensory feedback, so one of the two mechanisms might be responsible for spatial displacement. The second experiment was hence conducted to provide simultaneous haptic feedback together with delayed visual feedback to inform correct spatial position. The disruption was significantly ameliorated when information about spatial position was provided from a haptic source. The sharp decrease in performance up to approximately 300 ms was followed by an almost flat performance. This is similar to the critical interval found in audition. Accordingly, the mechanism that caused the sharp decrease in performance in experiments 1 and 2 was probably mainly responsible for temporal disparity and is common across different modality-motor combinations, while the other mechanism, which caused a rather gradual decrease in performance in experiment 1, was mainly responsible for spatial displacement. In experiments 3 and 4, the reliability of spatial information from the haptic source was reduced by wearing a glove or using a tool. When the reliability of spatial information was reduced, the data lay between those of experiments 1 and 2, and a gradual decrease in performance partially reappeared. These results further support the notion that two mechanisms operate under delayed visual feedback.

9.
PLoS One ; 6(4): e18309, 2011 Apr 06.
Article in English | MEDLINE | ID: mdl-21494684

ABSTRACT

Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.


Subjects
Audiovisual Aids, Speech Disorders/physiopathology, Speech/physiology, Acoustic Stimulation, Humans, Time Factors
10.
Proc Biol Sci ; 277(1692): 2281-90, 2010 Aug 07.
Article in English | MEDLINE | ID: mdl-20335212

ABSTRACT

The human brain processes different aspects of the surrounding environment through multiple sensory modalities, and each modality can be subdivided into multiple attribute-specific channels. When the brain rebinds sensory content information ('what') across different channels, temporal coincidence ('when') along with spatial coincidence ('where') provides a critical clue. It however remains unknown whether neural mechanisms for binding synchronous attributes are specific to each attribute combination, or universal and central. In human psychophysical experiments, we examined how combinations of visual, auditory and tactile attributes affect the temporal frequency limit of synchrony-based binding. The results indicated that the upper limits of cross-attribute binding were lower than those of within-attribute binding, and surprisingly similar for any combination of visual, auditory and tactile attributes (2-3 Hz). They are unlikely to be the limits for judging synchrony, since the temporal limit of a cross-attribute synchrony judgement was higher and varied with the modality combination (4-9 Hz). These findings suggest that cross-attribute temporal binding is mediated by a slow central process that combines separately processed 'what' and 'when' properties of a single event. While the synchrony performance reflects temporal bottlenecks existing in 'when' processing, the binding performance reflects the central temporal limit of integrating 'when' and 'what' properties.


Subjects
Auditory Perception/physiology, Brain/physiology, Touch/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Humans, Photic Stimulation/methods, Time Factors
11.
Exp Brain Res ; 198(2-3): 245-59, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19499212

ABSTRACT

To see whether there is a difference in temporal resolution of synchrony perception between audio-visual (AV), visuo-tactile (VT), and audio-tactile (AT) combinations, we compared synchrony-asynchrony discrimination thresholds of human participants. Visual and auditory stimuli were, respectively, a luminance-modulated Gaussian blob and an amplitude-modulated white noise. Tactile stimuli were mechanical vibrations presented to the index finger. All the stimuli were temporally modulated by either single pulses or repetitive-pulse trains. The results show that the temporal resolution of synchrony perception was similar for AV and VT (e.g., approximately 4 Hz for repetitive-pulse stimuli), but significantly higher for AT (approximately 10 Hz). Apart from having a higher temporal resolution, however, AT synchrony perception was similar to AV synchrony perception in that participants could select matching features through attention, and a change in the matching-feature attribute had little effect on temporal resolution. The AT superiority in temporal resolution was indicated not only by synchrony-asynchrony discrimination but also by simultaneity judgments. Temporal order judgments were less affected by modality combination than the other two tasks.


Subjects
Auditory Perception, Touch Perception, Visual Perception, Acoustic Stimulation, Analysis of Variance, Differential Threshold, Humans, Photic Stimulation, Physical Stimulation, Psychophysics, Time Factors
12.
Neurosci Lett ; 433(3): 225-30, 2008 Mar 15.
Article in English | MEDLINE | ID: mdl-18281153

ABSTRACT

Our previous findings suggest that audio-visual synchrony perception is based on the matching of salient temporal features selected from each sensory modality through bottom-up segregation or by top-down attention to a specific spatial position. This study examined whether top-down attention to a specific feature value is also effective in selection of cross-modal matching features. In the first experiment, the visual stimulus was a pulse train in which a flash randomly appeared with a probability of 6.25, 12.5 or 25% for every 6.25 ms. Four flash colors randomly appeared with equal probability, and one of them was selected as the target color on each trial. The paired auditory stimulus was a single-pitch pip sequence that had the same temporal structure as the target color flashes, presented in synchrony with the target flashes (synchronous stimulus) or with a 250-ms relative shift (asynchronous stimuli). The task of the participants was synchrony-asynchrony discrimination, with the target color being indicated to the participant by a probe (with-probe condition) or not (without probe). In another control condition, there was no correlation between color and auditory signals (color-shuffled). In the second experiment, the roles of visual and auditory stimuli were exchanged. The results show that the performance of synchrony-asynchrony discrimination was worst for the color/pitch-shuffled condition, but best under the with-probe condition where the observer knew beforehand which color/pitch should be matched with the signal of the other modality. This suggests that top-down, feature-based attention can aid in feature selection for audio-visual synchrony discrimination even when the bottom-up segmentation processes cannot uniquely determine salient features. The observed feature-based selection, however, is not as effective as position-based selection.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Visual Pattern Recognition/physiology, Time Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Cognition/physiology, Discrimination Learning/physiology, Humans, Male, Mental Processes/physiology, Neuropsychological Tests, Photic Stimulation, Probability, Reaction Time/physiology, Space Perception/physiology, Time Factors
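The stimulus construction in record 12 — a flash occurring with a fixed probability in every 6.25-ms bin, each flash taking one of four colors, plus a pip sequence copying the temporal structure of the target-color flashes, optionally shifted by 250 ms — can be sketched as follows. The helper names, the trial duration, and the random seed are illustrative assumptions, not the authors' code.

```python
import numpy as np

BIN_MS = 6.25                   # temporal resolution of the pulse train
LAG_BINS = int(250 / BIN_MS)    # the 250-ms shift used for asynchronous stimuli

def flash_train(n_bins=320, p=0.125, n_colors=4, seed=0):
    """For each bin, a flash occurs with probability p; each flash takes
    one of n_colors colors with equal probability (0 marks no flash)."""
    rng = np.random.default_rng(seed)
    flashes = rng.random(n_bins) < p
    colors = rng.integers(1, n_colors + 1, size=n_bins)
    return np.where(flashes, colors, 0)

def pip_sequence(train, target_color, asynchronous=False):
    """Single-pitch pip sequence with the same temporal structure as the
    target-color flashes, optionally shifted by 250 ms (circularly here,
    for simplicity)."""
    pips = (train == target_color).astype(int)
    return np.roll(pips, LAG_BINS) if asynchronous else pips
```

In the shuffled control condition of the study, the pips would instead follow the flashes of a non-target color, removing the correlation between the target color and the auditory signal.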
13.
Vision Res ; 47(8): 1075-93, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17350068

ABSTRACT

Computationally, audio-visual temporal synchrony detection is analogous to visual motion detection in the sense that both solve the correspondence problem. We examined whether audio-visual synchrony detection is mediated by a mechanism similar to low-level motion sensors, by one similar to a higher-level feature matching process, or by both types of mechanisms as in the case of visual motion detection. We found that audio-visual synchrony-asynchrony discrimination for temporally dense random pulse trains was difficult, whereas motion detection is known to be easy for spatially dense random dot patterns (random dot kinematograms) due to the operation of low-level motion sensors. Subsequent experiments further indicated that the temporal limiting factor of audio-visual synchrony discrimination is the temporal density of salient features not the temporal frequency of the stimulus, nor the physical density of the stimulus. These results suggest that audio-visual synchrony perception is based solely on a salient feature matching mechanism similar to that proposed for high-level visual motion detection.


Subjects
Auditory Perception/physiology, Psychological Discrimination, Visual Perception/physiology, Acoustic Stimulation, Humans, Motion Perception/physiology, Photic Stimulation, Psychophysics
14.
Proc Biol Sci ; 273(1588): 865-74, 2006 Apr 07.
Article in English | MEDLINE | ID: mdl-16618681

ABSTRACT

We examined whether the detection of audio-visual temporal synchrony is determined by a pre-attentive parallel process, or by an attentive serial process using a visual search paradigm. We found that detection of a visual target that changed in synchrony with an auditory stimulus was gradually impaired as the number of unsynchronized visual distractors increased (experiment 1), whereas synchrony discrimination of an attended target in a pre-cued location was unaffected by the presence of distractors (experiment 2). The effect of distractors cannot be ascribed to reduced target visibility nor can the increase in false alarm rates be predicted by a noisy parallel processing model. Reaction times for target detection increased linearly with number of distractors, with the slope being about twice as steep for target-absent trials as for target-present trials (experiment 3). Similar results were obtained regardless of whether the audio-visual stimulus consisted of visual flashes synchronized with amplitude-modulated pips, or of visual rotations synchronized with frequency-modulated up-down sweeps. All of the results indicate that audio-visual perceptual synchrony is judged by a serial process and are consistent with the suggestion that audio-visual temporal synchrony is detected by a 'mid-level' feature matching process.


Subjects
Acoustic Stimulation, Auditory Perception, Hearing/physiology, Visual Perception, Humans, Biological Models, Photic Stimulation, Reaction Time
15.
Exp Brain Res ; 166(3-4): 455-64, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16032402

ABSTRACT

Temporal synchrony is a critical condition for integrating information presented in different sensory modalities. To gain insight into the mechanism underlying synchrony perception of audio-visual signals we examined temporal limits for human participants to detect synchronous audio-visual stimuli. Specifically, we measured the percentage correctness of synchrony-asynchrony discrimination as a function of audio-visual lag while changing the temporal frequency and/or modulation waveforms. Audio-visual stimuli were a luminance-modulated Gaussian blob and amplitude-modulated white noise. The results indicated that synchrony-asynchrony discrimination became nearly impossible for periodic pulse trains at temporal frequencies higher than 4 Hz, even when the lag was large enough for discrimination with single pulses (Experiment 1). This temporal limitation cannot be ascribed to peripheral low-pass filters in either vision or audition (Experiment 2), which suggests that the temporal limit reflects a property of a more central mechanism located at or before cross-modal signal comparison. We also found that the functional behaviour of this central mechanism could not be approximated by a linear low-pass filter (Experiment 3). These results are consistent with a hypothesis that the perception of audio-visual synchrony is based on comparison of salient temporal features individuated from within-modal signal streams.


Subjects
Auditory Perception/physiology, Psychological Discrimination/physiology, Mental Processes/physiology, Visual Perception/physiology, Acoustic Stimulation, Algorithms, Humans, Photic Stimulation
16.
Percept Psychophys ; 67(2): 315-23, 2005 Feb.
Article in English | MEDLINE | ID: mdl-15971694

ABSTRACT

Pitch perception is determined by both place and temporal cues. To explore whether the manner in which these cues are used differs depending on absolute pitch capability, pitch identification experiments with and without pitch references were conducted for subjects with different absolute pitch capabilities and musical experience. Three types of stimuli were used to manipulate place and temporal cues separately: narrowband noises, which provide strong place cues but less salient temporal cues; iterated rippled noises, which provide strong temporal cues but less salient place cues; and sinusoidal tones, which provide both. The results indicated that absolute pitch possessors utilize temporal cues more effectively when they identify musical chroma. With regard to the judgment of height, the results indicated that place cues play an important role for both absolute and nonabsolute pitch possessors.


Subjects
Cues (Psychology), Pitch Perception, Time Perception, Adolescent, Adult, Female, Humans, Music
17.
Nat Neurosci ; 7(7): 773-8, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15195098

ABSTRACT

To perceive the auditory and visual aspects of a physical event as occurring simultaneously, the brain must adjust for differences between the two modalities in both physical transmission time and sensory processing time. One possible strategy to overcome this difficulty is to adaptively recalibrate the simultaneity point from daily experience of audiovisual events. Here we report that after exposure to a fixed audiovisual time lag for several minutes, human participants showed shifts in their subjective simultaneity responses toward that particular lag. This 'lag adaptation' also altered the temporal tuning of an auditory-induced visual illusion, suggesting that adaptation occurred via changes in sensory processing, rather than as a result of a cognitive shift while making task responses. Our findings suggest that the brain attempts to adjust subjective simultaneity across different modalities by detecting and reducing time lags between inputs that likely arise from the same physical events.


Subjects
Physiological Adaptation/physiology, Auditory Perception/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Attention/physiology, Calibration, Humans, Illusions, Photic Stimulation/methods, Psychomotor Performance/physiology, Reaction Time, Time Factors