1.
J Comp Physiol B; 194(3): 383-401, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733409

ABSTRACT

Vocalisations are increasingly being recognised as an important aspect of normal rodent behaviour, yet little is known about how they interact with other spontaneous behaviours such as sleep and torpor, particularly in a social setting. We obtained chronic recordings of the vocal behaviour of adult male and female Djungarian hamsters (Phodopus sungorus) housed under short photoperiod (8 h light, 16 h dark, square wave transitions), in different social contexts. The animals were kept in isolation or in same-sex sibling pairs, separated by a grid which allowed non-physical social interaction. On approximately 20% of days hamsters spontaneously entered torpor, a state of metabolic depression that coincides with the rest phase of many small mammal species in response to actual or predicted energy shortages. Animals produced ultrasonic vocalisations (USVs) with a peak frequency of 57 kHz in both social and asocial conditions, and there was a high degree of variability in vocalisation rate between subjects. Vocalisation rate was correlated with locomotor activity across the 24-h light cycle, occurring more frequently during the dark period when the hamsters were more active and peaking around light transitions. Solitary-housed animals did not vocalise whilst torpid, and socially housed animals remained in torpor even when torpor bouts overlapped with vocalisations. Besides a minor decrease in peak USV frequency when isolated hamsters were re-paired with their siblings, changing social contexts did not influence vocalisation behaviour or structure. In rare instances, temporally overlapping USVs occurred when animals were socially housed and were grouped in a way that could indicate coordination. We did not observe broadband calls (BBCs) contemporaneous with USVs in this paradigm, corroborating their association with physical aggression, which was absent from our experiment. Overall, we find little evidence to suggest a direct social function of hamster USVs. We conclude that understanding the effects of vocalisations on spontaneous behaviours, such as sleep and torpor, will inform the experimental design of future studies, especially where the role of social interactions is investigated.
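A minimal sketch of how the peak frequency of detected USVs could be estimated from a high-sample-rate recording; this is not the authors' pipeline, and the file name and call onset/offset times below are hypothetical placeholders that a real call detector would supply.

```python
# Estimate the peak frequency of detected ultrasonic vocalisations.
# "hamster_usv.wav" and the call times are hypothetical examples.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("hamster_usv.wav")    # hypothetical mono file, fs >= 192 kHz
calls = [(1.20, 1.35), (2.02, 2.10)]           # hypothetical (onset, offset) times, s

peak_freqs = []
for onset, offset in calls:
    segment = audio[int(onset * fs):int(offset * fs)].astype(float)
    f, t, Sxx = spectrogram(segment, fs=fs, nperseg=512, noverlap=384)
    # Peak frequency = frequency bin with the most summed power across the call
    peak_freqs.append(f[np.argmax(Sxx.sum(axis=1))])

print(f"mean peak frequency: {np.mean(peak_freqs) / 1000:.1f} kHz")
```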


Subject(s)
Circadian Rhythm; Phodopus; Photoperiod; Vocalization, Animal; Animals; Vocalization, Animal/physiology; Male; Phodopus/physiology; Female; Circadian Rhythm/physiology; Cricetinae; Motor Activity/physiology; Phenotype; Torpor/physiology; Ultrasonics; Seasons; Social Behavior
2.
Elife; 11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35617119

ABSTRACT

In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
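A minimal sketch, assuming precomputed cochleagrams, of the kind of linear dereverberation model described above: ridge regression from a short history window of a reverberant cochleagram onto the matching anechoic cochleagram, fit per frequency channel. The toy arrays and hyperparameters below stand in for the simulated-room stimuli used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_freq, n_time, n_lags = 30, 5000, 20
anechoic = rng.standard_normal((n_freq, n_time))              # stand-in cochleagram
reverberant = anechoic + 0.5 * np.roll(anechoic, 5, axis=1)   # toy "reverberation"

def lagged_design(cochleagram, n_lags):
    """Stack the previous n_lags time bins of every channel as regressors
    (np.roll wrap-around at the start is ignored in this toy example)."""
    shifted = [np.roll(cochleagram, lag, axis=1) for lag in range(n_lags)]
    return np.concatenate(shifted, axis=0).T                  # (time, lags * freq)

X = lagged_design(reverberant, n_lags)
kernels = []
for ch in range(n_freq):
    model = Ridge(alpha=1.0).fit(X, anechoic[ch])
    # Each kernel is a lag x frequency filter; its negative (inhibitory) part is
    # what the study reports as extending in more reverberant rooms.
    kernels.append(model.coef_.reshape(n_lags, n_freq))
```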


Subject(s)
Auditory Cortex; Speech Perception; Acoustic Stimulation; Adaptation, Physiological; Animals; Auditory Cortex/physiology; Ferrets; Humans; Sound; Speech Perception/physiology
3.
Cereb Cortex; 28(1): 350-369, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-29136122

ABSTRACT

Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order.
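A minimal sketch of an opponent-channel read-out on simulated data: the ILD of each trial is classified from the difference between the mean responses of contralateral-preferring and ipsilateral-preferring populations. Population sizes, tuning, and the logistic-regression classifier are illustrative assumptions, not the study's exact decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_contra, n_ipsi = 400, 60, 20
ilds = rng.choice([-20, -10, 0, 10, 20], size=n_trials)   # dB; positive favours contra ear

# Toy responses: contra-preferring cells grow with positive ILDs, ipsi with negative
contra = rng.poisson(5 + 0.1 * np.clip(ilds, 0, None)[:, None], (n_trials, n_contra))
ipsi = rng.poisson(5 + 0.1 * np.clip(-ilds, 0, None)[:, None], (n_trials, n_ipsi))

# Opponent channel: a single scalar per trial, the difference of population means
opponent_signal = contra.mean(axis=1) - ipsi.mean(axis=1)
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, opponent_signal[:, None], ilds, cv=5).mean()
print(f"ILD classification accuracy: {accuracy:.2f}")
```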


Subject(s)
Auditory Cortex/metabolism; Calcium/metabolism; Neurons/metabolism; Sound Localization/physiology; Acoustic Stimulation/methods; Action Potentials/physiology; Animals; Auditory Pathways/metabolism; Calcium Signaling/physiology; Ear/physiology; Female; Functional Laterality/physiology; Mice, Inbred C57BL; Mice, Transgenic; Signal Processing, Computer-Assisted; Voltage-Sensitive Dye Imaging
4.
Front Comput Neurosci; 10: 24, 2016.
Article in English | MEDLINE | ID: mdl-27047368

ABSTRACT

Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds that elicit pitch, and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain, have made this attempt more difficult. A number of neural networks have been proposed and implemented to describe the potential mechanisms by which pitch may be processed. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch-representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and prove sufficient to identify the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
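A purely illustrative sketch, not the paper's network: winner-take-all competitive learning on idealised periodicity patterns, showing how units can become F0-selective without supervision, even for missing-fundamental stimuli. The "summary autocorrelation" front end, the F0 values, and the harmonic subsets are assumptions standing in for a biologically accurate cochlear model.

```python
import numpy as np

rng = np.random.default_rng(2)
f0s = [100, 150, 200, 250]
lags = np.arange(1, 200) / 20000.0             # lag axis of a toy autocorrelation

def periodicity_pattern(f0, harmonics):
    """Idealised periodicity pattern with peaks at multiples of 1/f0."""
    sig = sum(np.cos(2 * np.pi * f0 * h * lags) for h in harmonics)
    return sig / np.max(np.abs(sig))

# Stimuli with different spectral profiles, including missing fundamentals
harmonic_sets = ([1, 2, 3], [2, 3, 4], [3, 4, 5, 6])
X = np.array([periodicity_pattern(f0, hs) for f0 in f0s for hs in harmonic_sets])
true_f0 = np.repeat(f0s, len(harmonic_sets))

# Unsupervised winner-take-all Hebbian learning
W = 0.01 * rng.standard_normal((len(f0s), X.shape[1]))
for _ in range(200):
    for x in rng.permutation(X):
        winner = np.argmax(W @ x)
        W[winner] += 0.05 * (x - W[winner])    # move the winning unit towards the input

print("winning unit per stimulus:", np.argmax(X @ W.T, axis=1))
print("true F0 per stimulus:     ", true_f0)
```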

5.
Front Psychol; 5: 159, 2014.
Article in English | MEDLINE | ID: mdl-24624101

ABSTRACT

Timing cues are an essential feature of music. To understand how the brain gives rise to our experience of music, we must appreciate how acoustical temporal patterns are integrated over a range of several seconds in order to extract global timing. In music perception, global timing comprises three distinct but often interacting percepts: temporal grouping, beat, and tempo. What directions may we take to further elucidate where and how the global timing of music is processed in the brain? The present perspective addresses this question and describes our current understanding of the neural basis of global timing perception.

6.
Curr Biol; 23(7): 620-5, 2013 Apr 08.
Article in English | MEDLINE | ID: mdl-23523247

ABSTRACT

The neural processing of sensory stimuli involves a transformation of physical stimulus parameters into perceptual features, and elucidating where and how this transformation occurs is one of the ultimate aims of sensory neurophysiology. Recent studies have shown that the firing of neurons in early sensory cortex can be modulated by multisensory interactions [1-5], motor behavior [1, 3, 6, 7], and reward feedback [1, 8, 9], but it remains unclear whether neural activity is more closely tied to perception, as indicated by behavioral choice, or to the physical properties of the stimulus. We investigated which of these properties are predominantly represented in auditory cortex by recording local field potentials (LFPs) and multiunit spiking activity in ferrets while they discriminated the pitch of artificial vowels. We found that auditory cortical activity is informative both about the fundamental frequency (F0) of a target sound and also about the pitch that the animals appear to perceive given their behavioral responses. Surprisingly, although the stimulus F0 was well represented at the onset of the target sound, neural activity throughout auditory cortex frequently predicted the reported pitch better than the target F0.
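A minimal sketch of the stimulus-versus-choice comparison on simulated trials: the presented F0 and, separately, the reported pitch (the behavioural choice) are decoded from the same neural feature, and their cross-validated accuracies are compared. The simulated dependence of the neural signal on choice is an assumption made purely to illustrate the analysis, not a description of the recorded data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials = 500
target_f0 = rng.integers(0, 2, n_trials)               # low vs high target pitch
choice = np.where(rng.random(n_trials) < 0.8, target_f0, 1 - target_f0)  # ~80% correct
neural = 2.0 * choice + 0.5 * target_f0 + rng.normal(0, 1, n_trials)     # toy LFP feature

for label, y in [("stimulus F0", target_f0), ("reported pitch", choice)]:
    acc = cross_val_score(LogisticRegression(), neural[:, None], y, cv=5).mean()
    print(f"decoding {label:14s}: {acc:.2f}")
```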


Subject(s)
Auditory Cortex/physiology; Auditory Perception/physiology; Cues; Ferrets/physiology; Pitch Discrimination/physiology; Synaptic Potentials/physiology; Acoustic Stimulation; Animals; Female
7.
J Acoust Soc Am; 133(1): 365-76, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23297909

ABSTRACT

Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.
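A minimal sketch, assuming textbook formant values, of how artificial vowels of this kind can be synthesised: a glottal pulse train (setting the pitch) passed through second-order formant resonators, with "morphing" achieved by shifting the first and second formant frequencies. This is a generic source-filter recipe, not necessarily the stimulus code used in the study.

```python
import numpy as np
from scipy.signal import lfilter

fs = 48000

def formant_filter(x, freq, bandwidth, fs):
    """One second-order resonator (a single formant)."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    return lfilter([1 - r], [1, -2 * r * np.cos(theta), r ** 2], x)

def artificial_vowel(f0, formants, dur=0.5):
    """Pulse train at f0 Hz shaped by a list of formant frequencies (Hz)."""
    source = np.zeros(int(dur * fs))
    source[::int(fs / f0)] = 1.0                 # glottal pulses -> voice pitch
    y = source
    for freq in formants:
        y = formant_filter(y, freq, bandwidth=80, fs=fs)
    return y / np.max(np.abs(y))

# /u/-like vowel and one morph step with F1 and F2 shifted towards /ε/-like values
u_like = artificial_vowel(f0=200, formants=[430, 1020, 2400, 3300])
morphed = artificial_vowel(f0=200, formants=[530, 1500, 2400, 3300])
```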


Subject(s)
Behavior, Animal; Discrimination, Psychological; Ferrets/psychology; Pitch Discrimination; Speech Acoustics; Voice Quality; Acoustic Stimulation; Animals; Choice Behavior; Cues; Female; Humans; Noise/adverse effects; Perceptual Masking; Psychoacoustics; Sound Spectrography
8.
J Neurosci; 32(39): 13339-42, 2012 Sep 26.
Article in English | MEDLINE | ID: mdl-23015423

ABSTRACT

Experiments in animals have provided an important complement to human studies of pitch perception by revealing how the activity of individual neurons represents harmonic complex and periodic sounds. Such studies have shown that the acoustical parameters associated with pitch are represented by the spiking responses of neurons in A1 (primary auditory cortex) and various higher auditory cortical fields. The responses of these neurons are also modulated by the timbre of sounds. In marmosets, a distinct region on the low-frequency border of primary and non-primary auditory cortex may provide pitch tuning that generalizes across timbre classes.


Subject(s)
Auditory Cortex/physiology; Brain Mapping; Evoked Potentials, Auditory/physiology; Pitch Perception/physiology; Acoustic Stimulation; Animals; Electrophysiology; Humans; Pitch Discrimination
9.
Biol Cybern; 106(11-12): 617-25, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22798035

ABSTRACT

Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.


Subject(s)
Auditory Cortex/physiology; Evoked Potentials/physiology; Sensation/physiology; Space Perception/physiology; Acoustic Stimulation; Animals; Brain Mapping; Humans; Nerve Net/physiology; Psychophysics
10.
J Neurosci; 31(41): 14565-76, 2011 Oct 12.
Article in English | MEDLINE | ID: mdl-21994373

ABSTRACT

We can recognize the melody of a familiar song when it is played on different musical instruments. Similarly, an animal must be able to recognize a warning call whether the caller has a high-pitched female or a lower-pitched male voice, and whether they are sitting in a tree to the left or right. This type of perceptual invariance to "nuisance" parameters comes easily to listeners, but it is unknown whether or how such robust representations of sounds are formed at the level of sensory cortex. In this study, we investigate whether neurons in both core and belt areas of ferret auditory cortex can robustly represent the pitch, formant frequencies, or azimuthal location of artificial vowel sounds while the other two attributes vary. We found that the spike rates of the majority of cortical neurons that are driven by artificial vowels carry robust representations of these features, but the most informative temporal response windows differ from neuron to neuron and across five auditory cortical fields. Furthermore, individual neurons can represent multiple features of sounds unambiguously by independently modulating their spike rates within distinct time windows. Such multiplexing may be critical to identifying sounds that vary along more than one perceptual dimension. Finally, we observed that formant information is encoded in cortex earlier than pitch information, and we show that this time course matches ferrets' behavioral reaction time differences on a change detection task.
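A minimal sketch of the multiplexing idea on simulated data: the same unit's spike counts are decoded in two different response windows, and each window turns out to be informative about a different stimulus feature. The window boundaries and the simulated response profile are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials = 600
timbre = rng.integers(0, 3, n_trials)          # e.g. three formant classes
pitch = rng.integers(0, 3, n_trials)           # e.g. three F0 classes

# Toy unit: the early-window rate carries timbre, the late-window rate carries pitch
windows = {"0-150 ms": rng.poisson(4 + 2 * timbre),
           "150-300 ms": rng.poisson(4 + 2 * pitch)}

for feature_name, feature in [("timbre", timbre), ("pitch", pitch)]:
    for window_name, counts in windows.items():
        acc = cross_val_score(GaussianNB(), counts[:, None], feature, cv=5).mean()
        print(f"{feature_name:6s} decoded from {window_name:10s}: {acc:.2f}")
```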


Subject(s)
Auditory Cortex/physiology; Evoked Potentials, Auditory/physiology; Sound Localization/physiology; Sound; Acoustic Stimulation/methods; Action Potentials/physiology; Animals; Auditory Cortex/cytology; Auditory Pathways/physiology; Bias; Female; Ferrets; Neurons/physiology; Reaction Time/physiology; Spectrum Analysis; Statistics, Nonparametric
11.
Curr Biol; 21(7): R251-3, 2011 Apr 12.
Article in English | MEDLINE | ID: mdl-21481759

ABSTRACT

A recent study shows that expectation about the timing of behaviorally-relevant sounds enhances the responses of neurons in the primary auditory cortex and improves the accuracy and speed with which animals respond to those sounds.


Subject(s)
Acoustic Stimulation; Auditory Cortex/physiology; Auditory Perception/physiology; Animals; Auditory Pathways/physiology; Rats; Sound
12.
Hear Res; 271(1-2): 74-87, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20457240

ABSTRACT

It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures that support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the extent to which a sound is periodic (i.e. its temporal regularity) and the perceived pitch salience, as well as limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and the functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through a cooperation of investigative efforts across species and experimental techniques, and, critically, by examining the responses of single neurons in behaving animals.


Subject(s)
Auditory Cortex/physiology; Pitch Perception/physiology; Acoustic Stimulation; Acoustics; Animals; Behavior, Animal; Humans; Models, Neurological; Periodicity
13.
Neuroscientist; 16(4): 453-69, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20530254

ABSTRACT

We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex might not simply be sensitive to each of these three sound features, but might instead be selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for such selectivity within auditory cortex. Explaining the neural basis of feature invariance thus remains one of the major challenges facing sensory neuroscience in reaching its ultimate goal of understanding how neural firing patterns in the brain give rise to perception.


Subject(s)
Auditory Cortex/physiology; Neurons/physiology; Pitch Perception/physiology; Sound Localization/physiology; Acoustic Stimulation; Animals; Auditory Pathways/physiology; Evoked Potentials, Auditory/physiology
14.
J Neurosci; 30(14): 5078-91, 2010 Apr 07.
Article in English | MEDLINE | ID: mdl-20371828

ABSTRACT

We measured the responses of neurons in auditory cortex of male and female ferrets to artificial vowels of varying fundamental frequency (f(0)), or periodicity, and compared these with the performance of animals trained to discriminate the periodicity of these sounds. Sensitivity to f(0) was found in all five auditory cortical fields examined, with most of those neurons exhibiting either low-pass or high-pass response functions. Only rarely was the stimulus dependence of individual neuron discharges sufficient to account for the discrimination performance of the ferrets. In contrast, when analyzed with a simple classifier, responses of small ensembles, comprising 3-61 simultaneously recorded neurons, often discriminated periodicity changes as well as the animals did. We examined four potential strategies for decoding ensemble responses: spike counts, relative first-spike latencies, a binary "spike or no-spike" code, and a spike-order code. All four codes represented stimulus periodicity effectively, and, surprisingly, the spike count and relative latency codes enabled an equally rapid readout, within 75 ms of stimulus onset. Thus, relative latency codes do not necessarily facilitate faster discrimination judgments. A joint spike count plus relative latency code was more informative than either code alone, indicating that the information captured by each measure was not wholly redundant. The responses of neural ensembles, but not of single neurons, reliably encoded f(0) changes even when stimulus intensity was varied randomly over a 20 dB range. Because trained animals can discriminate stimulus periodicity across different sound levels, this implies that ensemble codes are better suited to account for behavioral performance.
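A minimal sketch of the ensemble read-out comparison on simulated spike data: stimulus periodicity is decoded from either spike counts or relative first-spike latencies of a small ensemble using a simple nearest-class-mean classifier. Ensemble size, tuning, and latency statistics are assumptions for illustration, not the study's recorded values.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_neurons = 300, 12
f0_class = rng.integers(0, 4, n_trials)                    # four periodicity classes

# Toy ensemble: counts rise with f0 for half the cells and fall for the other half
gain = np.where(np.arange(n_neurons) < n_neurons // 2, 1.0, -1.0)
counts = rng.poisson(6 + 1.5 * gain * (f0_class[:, None] - 1.5))
# Toy first-spike latencies (ms), faster for preferred stimuli, then referenced
# to the earliest spike on each trial (a "relative latency" code)
latency = rng.normal(30 - 2 * gain * (f0_class[:, None] - 1.5), 2.0)
relative_latency = latency - latency.min(axis=1, keepdims=True)

for name, X in [("spike count", counts), ("relative latency", relative_latency)]:
    acc = cross_val_score(NearestCentroid(), X, f0_class, cv=5).mean()
    print(f"{name:16s} decoder accuracy: {acc:.2f}")
```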


Subject(s)
Acoustic Stimulation; Auditory Cortex/physiology; Neurons/physiology; Periodicity; Acoustic Stimulation/methods; Action Potentials/physiology; Animals; Auditory Pathways/physiology; Ferrets
15.
J Acoust Soc Am; 126(3): 1321-35, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19739746

ABSTRACT

Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from approximately 20% to 80% across references (200-1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately ten times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
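A minimal sketch, with made-up trial proportions, of how a Weber fraction can be derived from a two-alternative "higher vs lower" task: fit a cumulative-Gaussian psychometric function and express the 75%-correct point relative to the reference F0. The exact threshold criterion used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

reference_f0 = 400.0                                        # Hz
test_f0 = np.array([300, 340, 380, 420, 460, 500], float)   # Hz
p_higher = np.array([0.05, 0.20, 0.40, 0.65, 0.85, 0.95])   # hypothetical data

def psychometric(f, mu, sigma):
    return norm.cdf(f, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, test_f0, p_higher, p0=[400.0, 50.0])
threshold = norm.ppf(0.75, loc=mu, scale=sigma) - mu        # 75% minus 50% point, in Hz
weber_fraction = threshold / reference_f0
print(f"threshold: {threshold:.1f} Hz, Weber fraction: {100 * weber_fraction:.0f}%")
```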


Subject(s)
Ferrets/physiology; Pitch Discrimination; Acoustic Stimulation; Adult; Analysis of Variance; Animals; Auditory Threshold; Discrimination, Psychological; Female; Humans; Male; Neuropsychological Tests; Psychoacoustics; Psychometrics; Speech; Speech Acoustics; Young Adult
17.
J Neurosci; 29(7): 2064-75, 2009 Feb 18.
Article in English | MEDLINE | ID: mdl-19228960

ABSTRACT

Because we can perceive the pitch, timbre, and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from nonspatial attributes. Indeed, recent studies support the existence of anatomically segregated "what" and "where" cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and nonspatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre, and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Although indicating that neural encoding of pitch, location, and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and nonspatial cues at higher cortical levels. Some units exhibited significant nonlinear interactions between particular combinations of pitch, timbre, and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and nonspatial attributes. Such nonlinearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects.
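A minimal sketch of an ANOVA-style variance decomposition on simulated spike counts: a linear model of one unit's response against pitch, timbre, and azimuth (with a pitch x timbre interaction) is fit, and each term's share of the response variance is reported. This is a generic recipe, not the authors' exact analysis code, and the stimulus values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
n = 960
df = pd.DataFrame({
    "pitch": rng.choice([200, 336, 565], n),          # Hz, illustrative values
    "timbre": rng.choice(["a", "e", "i", "u"], n),
    "azimuth": rng.choice([-45, 0, 45], n),           # degrees
})
# Toy unit: mainly timbre-sensitive, with a small pitch x timbre interaction
df["count"] = (rng.poisson(5, n)
               + 3 * (df.timbre == "a")
               + 1 * ((df.timbre == "a") & (df.pitch == 200)))

model = smf.ols("count ~ C(pitch) * C(timbre) + C(azimuth)", data=df).fit()
table = anova_lm(model, typ=2)
share = table["sum_sq"] / table["sum_sq"].sum()       # fraction of variance per term
print(share.round(3))
```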


Subject(s)
Action Potentials/physiology; Auditory Cortex/physiology; Orientation/physiology; Pitch Perception/physiology; Sound Localization/physiology; Space Perception/physiology; Acoustic Stimulation; Animals; Auditory Cortex/anatomy & histology; Auditory Pathways/anatomy & histology; Auditory Pathways/physiology; Brain Mapping; Electrophysiology; Female; Ferrets; Nerve Net/anatomy & histology; Nerve Net/physiology; Neurons/physiology; Nonlinear Dynamics; Signal Processing, Computer-Assisted
18.
J Cogn Neurosci; 20(1): 135-52, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17919084

ABSTRACT

Neurometric analysis has proven to be a powerful tool for studying links between neural activity and perception, especially in visual and somatosensory cortices, but conventional neurometrics are based on a simplistic rate-coding hypothesis that is clearly at odds with the rich and complex temporal spiking patterns evoked by many natural stimuli. In this study, we investigated the possible relationships between temporal spike pattern codes in the primary auditory cortex (A1) and the perceptual detection of subtle changes in the temporal structure of a natural sound. Using a two-alternative forced-choice oddity task, we measured the ability of human listeners to detect local time reversals in a marmoset twitter call. We also recorded responses of neurons in A1 of anesthetized and awake ferrets to these stimuli, and analyzed these responses using a novel neurometric approach that is sensitive to temporal discharge patterns. We found that although spike count-based neurometrics were inadequate to account for behavioral performance on this auditory task, neurometrics based on the temporal discharge patterns of populations of A1 units closely matched the psychometric performance curve, but only if the spiking patterns were resolved at temporal resolutions of 20 msec or better. These results demonstrate that neurometric discrimination curves can be calculated for temporal spiking patterns, and they suggest that such an extension of previous spike count-based approaches is likely to be essential for understanding the neural correlates of the perception of stimuli with a complex temporal structure.
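A minimal sketch of a spike-timing-based neurometric on simulated responses: each trial's spike train is binned at several temporal resolutions, the stimulus (original vs time-reversed segment) is classified with a nearest-centroid decoder, and accuracy is compared across bin widths. The response statistics are toy assumptions; only the analysis logic mirrors the approach described above.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_bins = 200, 50                     # 50 bins of 10 ms = 0.5 s per trial
stimulus = rng.integers(0, 2, n_trials)        # 0 = original call, 1 = reversed

def simulate_trial(stim):
    """Same mean rate for both stimuli; only the timing of a burst differs."""
    profile = np.full(n_bins, 0.2)             # expected spikes per 10 ms bin
    if stim == 0:
        profile[10:15] += 0.6
    else:
        profile[35:40] += 0.6
    return rng.poisson(profile)

responses = np.array([simulate_trial(s) for s in stimulus])

for bin_ms in (10, 20, 50, 500):               # 500 ms = a pure spike-count code
    n_merge = bin_ms // 10
    binned = responses.reshape(n_trials, -1, n_merge).sum(axis=2)
    acc = cross_val_score(NearestCentroid(), binned, stimulus, cv=5).mean()
    print(f"{bin_ms:3d} ms bins: accuracy {acc:.2f}")
```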


Subject(s)
Auditory Perception/physiology; Brain Mapping; Cerebral Cortex/physiology; Discrimination, Psychological/physiology; Evoked Potentials, Auditory/physiology; Acoustic Stimulation; Animals; Attention/physiology; Humans; Models, Neurological; Psychometrics; Reference Values
19.
Brain Res; 1124(1): 126-41, 2006 Dec 08.
Article in English | MEDLINE | ID: mdl-17069776

ABSTRACT

Performance on perceptual tasks requiring the discrimination of brief, temporally proximate or temporally varying sensory stimuli (temporal processing tasks) is impaired in some individuals with developmental language disorder and/or dyslexia. Little is known about how these temporal processes in perception develop and how they relate to language and reading performance in the normal population. The present study examined performance on 8 temporal processing tasks and 5 language/reading tasks in 120 unselected readers who varied in age over a range in which reading and phonological awareness were developing. Performance on all temporal processing tasks except coherent motion detection improved over ages 7 years to adulthood (p<0.01), especially between ages 7 and 13 years. Independent of these age effects, performance on all 8 temporal processing tasks predicted phonological awareness and reading performance (p<0.05), and three auditory temporal processing tasks predicted receptive language function (p<0.05). Furthermore, all temporal processing measures except within-channel gap detection and coherent motion detection predicted unique variance in phonological scores within subjects, whereas only within-channel gap detection performance explained unique variance in orthographic reading performance. These findings partially support the notion of Farmer and Klein (Farmer, M.E., Klein, R.M., 1995. The evidence for a temporal processing deficit linked to dyslexia: A review. Psychon. Bull. Rev. 2, 460-493) that there are separable auditory and visual perceptual contributions to phonological and orthographic reading development. The data are also compatible with the view that the umbrella term "temporal processing" encompasses fundamentally different sensory or cognitive processes that may contribute differentially to language and reading performance, have different developmental trajectories, and be differentially susceptible to pathology.
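A minimal sketch, on simulated scores, of the "unique variance" analysis described above: the change in R^2 when one temporal processing measure is added to a regression that already includes age, predicting phonological awareness. The variable names and simulated relationships are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 120
age = rng.uniform(7, 25, n)                             # years
gap_detection = 0.3 * age + rng.normal(0, 1, n)         # toy temporal processing score
phonological = 0.5 * age + 0.8 * gap_detection + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "gap": gap_detection, "phon": phonological})

base = smf.ols("phon ~ age", data=df).fit()             # age-only model
full = smf.ols("phon ~ age + gap", data=df).fit()       # add the temporal measure
print(f"unique variance explained by gap detection: {full.rsquared - base.rsquared:.3f}")
```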


Subject(s)
Auditory Perception/physiology; Discrimination, Psychological/physiology; Human Development; Reading; Visual Perception/physiology; Adolescent; Adult; Age Factors; Child; Female; Humans; Linear Models; Male; Middle Aged; Phonetics; Time Factors