Results 1 - 20 of 58
1.
PLoS Comput Biol ; 20(2): e1011849, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38315733

ABSTRACT

Sleep deprivation has an ever-increasing impact on individuals and societies. Yet, to date, there is no quick and objective test for sleep deprivation. Here, we used automated acoustic analyses of the voice to detect sleep deprivation. Building on current machine-learning approaches, we focused on interpretability by introducing two novel ideas: the use of a fully generic auditory representation as input feature space, combined with an interpretation technique based on reverse correlation. The auditory representation consisted of a spectro-temporal modulation analysis derived from neurophysiology. The interpretation method aimed to reveal the regions of the auditory representation that supported the classifiers' decisions. Results showed that generic auditory features could be used to detect sleep deprivation successfully, with an accuracy comparable to state-of-the-art speech features. Furthermore, the interpretation revealed two distinct effects of sleep deprivation on the voice: changes in slow temporal modulations related to prosody and changes in spectral features related to voice quality. Importantly, the relative balance of the two effects varied widely across individuals, even though the amount of sleep deprivation was controlled, thus confirming the need to characterize sleep deprivation at the individual level. Moreover, while the prosody factor correlated with subjective sleepiness reports, the voice quality factor did not, consistent with the presence of both explicit and implicit consequences of sleep deprivation. Overall, the findings show that individual effects of sleep deprivation may be observed in vocal biomarkers. Future investigations correlating such markers with objective physiological measures of sleep deprivation could enable "sleep stethoscopes" for the cost-effective diagnosis of the individual effects of sleep deprivation.


Subjects
Sleep Deprivation, Voice, Humans, Sleep, Voice Quality, Wakefulness
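
For readers who want a concrete handle on the input representation, the sketch below computes a simplified spectro-temporal modulation analysis as the 2D Fourier transform of a log-spectrogram. The paper's actual front end is derived from neurophysiological models (and its classifiers and reverse-correlation step are not reproduced here); all parameter values are illustrative assumptions.

```python
# Minimal sketch of a spectro-temporal modulation analysis. Real
# implementations use a cochlear filterbank and modulation-selective
# filters; a 2D FFT of a log-spectrogram is a simplified stand-in.
import numpy as np
from scipy.signal import spectrogram

def modulation_spectrum(audio, sr):
    """Return a 2D modulation spectrum: rows index spectral modulations,
    columns index temporal modulations."""
    # Time-frequency analysis: log-magnitude spectrogram.
    f, t, sxx = spectrogram(audio, fs=sr, nperseg=512, noverlap=384)
    log_spec = np.log(sxx + 1e-10)
    # Remove the mean so the DC component does not dominate.
    log_spec -= log_spec.mean()
    # 2D FFT: axis 0 -> spectral modulations, axis 1 -> temporal modulations.
    return np.abs(np.fft.fftshift(np.fft.fft2(log_spec)))

# Example: a 1-s synthetic vowel-like sound at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
sound = sum(np.sin(2 * np.pi * k * 150 * t) for k in range(1, 20))
print(modulation_spectrum(np.asarray(sound), sr).shape)
```
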
2.
PLoS Comput Biol ; 19(1): e1010307, 2023 01.
Article in English | MEDLINE | ID: mdl-36634121

ABSTRACT

Changes in the frequency content of sounds over time are arguably the most basic form of information about the behavior of sound-emitting objects. In perceptual studies, such changes have mostly been investigated separately, as aspects of either pitch or timbre. Here, we propose a unitary account of "up" and "down" subjective judgments of frequency change, based on a model combining auditory correlates of acoustic cues in a sound-specific and listener-specific manner. To do so, we introduce a generalized version of so-called Shepard tones, allowing symmetric manipulations of spectral information on a fine scale, usually associated with pitch (spectral fine structure, SFS), and on a coarse scale, usually associated with timbre (spectral envelope, SE). In a series of behavioral experiments, listeners reported "up" or "down" shifts across pairs of generalized Shepard tones that differed in SFS, in SE, or in both. We observed the classic properties of Shepard tones for either SFS or SE shifts: subjective judgments followed the smallest log-frequency change direction, with cases of ambiguity and circularity. Interestingly, when both SFS and SE changes were applied concurrently (synergistically or antagonistically), we observed a trade-off between cues. Listeners were encouraged to report when they perceived "both" directions of change concurrently, but this rarely happened, suggesting a unitary percept. A computational model could accurately fit the behavioral data by combining different cues reflecting frequency changes after auditory filtering. The model revealed that cue weighting depended on the nature of the sound. When presented with harmonic sounds, listeners put more weight on SFS-related cues, whereas inharmonic sounds led to more weight on SE-related cues. Moreover, these stimulus-based factors were modulated by inter-individual differences, revealing variability across listeners in the detailed recipe for "up" and "down" judgments. We argue that frequency changes are tracked perceptually via the adaptive combination of a diverse set of cues, in a manner that is in fact similar to the derivation of other basic auditory dimensions such as spatial location.


Subjects
Acoustics, Auditory Perception, Acoustic Stimulation, Cues, Judgment, Pitch Perception
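
The generalized Shepard tones at the heart of this study can be sketched directly: octave-spaced partials (the SFS) under a log-frequency Gaussian spectral envelope (the SE), each shiftable independently. This is a minimal sketch under assumed parameter values, not the paper's exact stimulus code.

```python
# Generalized Shepard tone: shifting `base_freq` moves the spectral fine
# structure (SFS); shifting `env_center_hz` moves the spectral envelope (SE).
import numpy as np

def shepard_tone(base_freq, env_center_hz, sr=44100, dur=0.5,
                 env_sigma_oct=1.5, n_octaves=10):
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    for k in range(n_octaves):
        f = base_freq * 2 ** k          # octave-spaced partial (SFS)
        if f >= sr / 2:
            break
        # Gaussian amplitude weight on a log-frequency axis (SE).
        amp = np.exp(-0.5 * (np.log2(f / env_center_hz) / env_sigma_oct) ** 2)
        tone += amp * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# A pair with antagonistic changes: SFS up by half an octave, SE down.
a = shepard_tone(base_freq=55.0, env_center_hz=960.0)
b = shepard_tone(base_freq=55.0 * 2 ** 0.5, env_center_hz=960.0 / 2 ** 0.5)
```
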
3.
Audiol Neurootol ; 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38498993

ABSTRACT

INTRODUCTION: Difficulty in understanding speech in noise is the most common complaint of people with hearing impairment. Thus, there is a need for tests of speech-in-noise ability in clinical settings, which have to be evaluated for each language. Here, a reference dataset is presented for a quick speech-in-noise test in the French language (Vocale Rapide dans le Bruit, VRB; Leclercq, Renard & Vincent, 2018). METHODS: A large cohort (N=641) was tested in a nationwide multicentric study. The cohort comprised normal-hearing individuals and individuals with a broad range of symmetrical hearing losses. Short everyday sentences embedded in babble noise were presented over a spatial array of loudspeakers. Speech level was kept constant while noise level was progressively increased over a range of signal-to-noise ratios. The signal-to-noise ratio for which 50% of keywords could be correctly reported (Speech Reception Threshold, SRT) was derived from psychometric functions. Other audiometric measures were collected for the cohort, such as audiograms and speech-in-quiet performance. RESULTS: The VRB test was both sensitive and reliable, as shown by the steep slope of the psychometric functions and by the high test-retest consistency across sentence lists. Correlation analyses showed that pure tone averages derived from the audiograms explained 74% of the SRT variance over the whole cohort, but only 29% for individuals with clinically normal audiograms. SRTs were then compared to recent guidelines from the French Society of Audiology (Joly et al., 2021). Among individuals who would not have qualified for hearing aid prescription based on their audiogram or speech intelligibility in quiet, 18.4% were now eligible as they displayed SRTs in noise impaired by 3 dB or more. For individuals with borderline audiograms, between 20 dB HL and 30 dB HL, the prevalence of impaired SRTs increased to 71.4%. Finally, even though five lists are recommended for clinical use, a minute-long screening using only one VRB list detected 98.6% of impaired SRTs. CONCLUSION: The reference data suggest that VRB testing can be used to identify individuals with speech-in-noise impairment.
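
The SRT computation described in the Methods can be illustrated with a simple fit: a logistic psychometric function over signal-to-noise ratio, read off at its 50% point. The data points below are invented for illustration and are not VRB reference values.

```python
# Fit a logistic psychometric function to keyword-report scores as a
# function of SNR, and take its midpoint as the SRT. Illustrative data only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope):
    """Proportion of keywords correct as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

snr_db = np.array([-12, -9, -6, -3, 0, 3])                  # conditions tested
p_correct = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 1.0])   # made-up scores

(srt, slope), _ = curve_fit(logistic, snr_db, p_correct, p0=(-5.0, 1.0))
print(f"SRT = {srt:.1f} dB SNR, slope = {slope:.2f} /dB")
```
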

4.
Proc Natl Acad Sci U S A ; 118(48)2021 11 30.
Article in English | MEDLINE | ID: mdl-34819369

ABSTRACT

To guide behavior, perceptual systems must operate on intrinsically ambiguous sensory input. Observers are usually able to acknowledge the uncertainty of their perception, but in some cases, they critically fail to do so. Here, we show that a physiological correlate of ambiguity can be found in pupil dilation even when the observer is not aware of such ambiguity. We used a well-known ambiguous auditory stimulus, the tritone paradox, which can induce the perception of an upward or downward pitch shift within the same individual. In two experiments, behavioral responses showed that listeners could not explicitly access the ambiguity in this stimulus, even though their responses varied from trial to trial. However, pupil dilation was larger for the more ambiguous cases. The ambiguity of the stimulus for each listener was indexed by the entropy of behavioral responses, and this entropy was also a significant predictor of pupil size. In particular, entropy explained additional variation in pupil size, independent of explicit confidence judgments, in the specific situation we investigated, in which the two measures were decoupled. Our data thus suggest that stimulus ambiguity is implicitly represented in the brain even without explicit awareness of this ambiguity.


Subjects
Auditory Perception/physiology, Awareness/physiology, Pupil/physiology, Acoustic Stimulation/methods, Adult, Female, Humans, Judgment, Male, Uncertainty, Visual Perception/physiology
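
The ambiguity index used here, the entropy of behavioral responses, is straightforward to compute. Below is a minimal sketch with made-up response sequences, assuming binary "up"/"down" reports to repeated presentations of the same tritone pair: entropy is 0 bits for fully consistent responses and 1 bit at chance.

```python
# Shannon entropy of a listener's binary responses as an ambiguity index.
import numpy as np

def response_entropy(responses):
    """Entropy (bits) of a sequence of binary responses (0 = down, 1 = up)."""
    p = np.mean(responses)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

unambiguous = [1, 1, 1, 1, 1, 1, 1, 0]   # mostly "up"
ambiguous   = [1, 0, 1, 1, 0, 0, 1, 0]   # split between "up" and "down"
print(response_entropy(unambiguous))     # ~0.54 bits
print(response_entropy(ambiguous))       # 1.0 bit
```
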
5.
J Acoust Soc Am ; 150(3): 1735, 2021 09.
Article in English | MEDLINE | ID: mdl-34598638

ABSTRACT

Stochastic sounds are useful to probe auditory memory, as they require listeners to learn unpredictable and novel patterns under controlled experimental conditions. Previous studies using white noise or random click trains have demonstrated rapid auditory learning. Here, we explored perceptual learning with a more parametrically variable stimulus. These "tone clouds" were defined as broadband combinations of tone pips at randomized frequencies and onset times. Varying the number of tones covered a perceptual range from individually audible pips to noise-like stimuli. Results showed that listeners could detect and learn repeating patterns in tone clouds. Task difficulty varied depending on the density of tone pips, with sparse tone clouds the easiest. Rapid learning of individual tone clouds was observed for all densities, with a roughly constant benefit of learning irrespective of baseline performance. Variations in task difficulty were correlated to amplitude modulations in an auditory model. Tone clouds thus provide a tool to probe auditory learning in a variety of task-difficulty settings, which could be useful for clinical or neurophysiological studies. They also show that rapid auditory learning operates over a wide range of spectrotemporal complexity, essentially from melodies to noise.


Subjects
Learning, Noise, Acoustic Stimulation, Auditory Perception, Noise/adverse effects, Sound
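
A tone cloud as described, a broadband set of tone pips with randomized frequencies and onset times, can be sketched as follows; density is the free parameter that moves the stimulus from individually audible pips to a noise-like texture. Parameter values are illustrative assumptions.

```python
# Minimal sketch of a "tone cloud": short windowed tone pips with random
# (log-uniform) frequencies and random onset times.
import numpy as np

def tone_cloud(n_pips, dur=1.0, sr=44100, pip_dur=0.03,
               f_lo=200.0, f_hi=8000.0, rng=None):
    rng = np.random.default_rng(rng)
    cloud = np.zeros(int(sr * dur))
    n_pip = int(sr * pip_dur)
    t = np.arange(n_pip) / sr
    ramp = np.hanning(n_pip)                      # smooth each pip's edges
    for _ in range(n_pips):
        f = np.exp(rng.uniform(np.log(f_lo), np.log(f_hi)))
        onset = rng.integers(0, len(cloud) - n_pip)
        cloud[onset:onset + n_pip] += ramp * np.sin(2 * np.pi * f * t)
    return cloud / np.max(np.abs(cloud))

sparse = tone_cloud(20)    # individually audible pips
dense = tone_cloud(600)    # noise-like texture
```
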
6.
J Acoust Soc Am ; 150(3): 1934, 2021 09.
Article in English | MEDLINE | ID: mdl-34598651

ABSTRACT

Learning about new sounds is essential for cochlear-implant and normal-hearing listeners alike, with the additional challenge for implant listeners that spectral resolution is severely degraded. Here, a task measuring the rapid learning of slow or fast stochastic temporal sequences [Kang, Agus, and Pressnitzer (2017). J. Acoust. Soc. Am. 142, 2219-2232] was performed by cochlear-implant (N = 10) and normal-hearing (N = 9) listeners, using electric or acoustic pulse sequences, respectively. Rapid perceptual learning was observed for both groups, with highly similar characteristics. Moreover, for cochlear-implant listeners, an additional condition tested ultra-fast electric pulse sequences that would be impossible to represent temporally when presented acoustically. This condition also demonstrated learning. Overall, the results suggest that cochlear-implant listeners have access to the neural plasticity mechanisms needed for the rapid perceptual learning of complex temporal sequences.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Acoustic Stimulation, Acoustics, Hearing Tests
7.
J Acoust Soc Am ; 143(6): 3665, 2018 06.
Article in English | MEDLINE | ID: mdl-29960504

ABSTRACT

Using a same-different discrimination task, it has been shown that discrimination performance for sequences of complex tones varying just detectably in pitch is less dependent on sequence length (1, 2, or 4 elements) when the tones contain resolved harmonics than when they do not [Cousineau, Demany, and Pressnitzer (2009). J. Acoust. Soc. Am. 126, 3179-3187]. This effect had been attributed to the activation of automatic frequency-shift detectors (FSDs) by the shifts in resolved harmonics. The present study provides evidence against this hypothesis by showing that the sequence-processing advantage found for complex tones with resolved harmonics is not found for pure tones or other sounds supposed to activate FSDs (narrow bands of noise and wide-band noises eliciting pitch sensations due to interaural phase shifts). The present results also indicate that for pitch sequences, processing performance is largely unrelated to pitch salience per se: for a fixed level of discriminability between sequence elements, sequences of elements with salient pitches are not necessarily better processed than sequences of elements with less salient pitches. An ideal-observer model for the same-different binary-sequence discrimination task is also developed in the present study. The model allows the computation of d' for this task using numerical methods.
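
The paper derives d' for same-different binary-sequence discrimination by numerical methods. As an illustration of that general style of ideal-observer computation (not the paper's specific model), the sketch below estimates same-different performance by Monte Carlo, assuming Gaussian internal noise and a differencing decision rule.

```python
# Monte Carlo ideal-observer sketch for a same-different task: respond
# "different" when |x1 - x2| exceeds a criterion, given internal noise of
# unit variance and a sensitivity of d' between the two stimuli.
import numpy as np

def percent_correct_same_different(dprime, criterion=1.0,
                                   n_trials=200_000, rng=0):
    rng = np.random.default_rng(rng)
    # "Same" trials: both observations from the same distribution.
    x1, x2 = rng.normal(0, 1, n_trials), rng.normal(0, 1, n_trials)
    # "Different" trials: means separated by d'.
    y1, y2 = rng.normal(0, 1, n_trials), rng.normal(dprime, 1, n_trials)
    correct_same = np.mean(np.abs(x1 - x2) <= criterion)
    correct_diff = np.mean(np.abs(y1 - y2) > criterion)
    return 0.5 * (correct_same + correct_diff)

for d in (0.5, 1.0, 2.0):
    print(d, percent_correct_same_different(d))
```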

8.
J Acoust Soc Am ; 142(4): 2219, 2017 10.
Article in English | MEDLINE | ID: mdl-29092589

ABSTRACT

The acquisition of auditory memory for temporal patterns was investigated. The temporal patterns were random sequences of irregularly spaced clicks. Participants performed a task previously used to study auditory memory for noise [Agus, Thorpe, and Pressnitzer (2010). Neuron 66, 610-618]. The memory for temporal patterns displayed strong similarities with the memory for noise: temporal patterns were learnt rapidly, in an unsupervised manner, and could be distinguished from statistically matched patterns after learning. There was, however, a qualitative difference from the memory for noise. For temporal patterns, no memory transfer was observed after time reversals, showing that both the time intervals and their order were represented in memory. Remarkably, learning was observed over a broad range of time scales, which encompassed rhythm-like and buzz-like temporal patterns. Temporal patterns present specific challenges to the neural mechanisms of plasticity, because the information to be learnt is distributed over time. Nevertheless, the present data show that the acquisition of novel auditory memories can be as efficient for temporal patterns as for sounds containing additional spectral and spectro-temporal cues, such as noise. This suggests that the rapid formation of memory traces may be a general by-product of repeated auditory exposure.


Subjects
Auditory Perception, Memory, Time Perception, Acoustic Stimulation, Adult, Cues, Female, Humans, Learning, Male, Transfer (Psychology)
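
The stimuli here, random sequences of irregularly spaced clicks, are simple to sketch, along with the time-reversed versions that, per the results, did not inherit the learned memory trace. Parameter values are illustrative assumptions.

```python
# Minimal sketch of a random click train and its time reversal.
import numpy as np

def random_click_train(n_clicks=20, dur=1.0, sr=44100, rng=None):
    rng = np.random.default_rng(rng)
    train = np.zeros(int(sr * dur))
    onsets = np.sort(rng.integers(0, len(train), n_clicks))
    train[onsets] = 1.0                    # unit impulses at random times
    return train

pattern = random_click_train(rng=1)
reversed_pattern = pattern[::-1]           # same intervals, reversed order
```
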
9.
Proc Natl Acad Sci U S A ; 109(17): 6775-80, 2012 Apr 24.
Article in English | MEDLINE | ID: mdl-22493250

ABSTRACT

Auditory scene analysis requires the listener to parse the incoming flow of acoustic information into perceptual "streams," such as sentences from a single talker in the midst of background noise. Behavioral and neural data show that the formation of streams is not instantaneous; rather, streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we investigated the effect of changes induced by voluntary head motion on streaming. We used a telepresence robot in a virtual reality setup to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent source location, and changes in motor or attentional processes. The results showed that self-motion influenced streaming in at least two ways. Right after the onset of movement, self-motion always induced some resetting of perceptual organization to one stream, even when the acoustic scene itself had not changed. Then, after the motion, the prevalent organization was rapidly biased by the binaural cues discovered through motion. Auditory scene analysis thus appears to be a dynamic process that is affected by the active sensing of the environment.


Subjects
Auditory Pathways, Motion, Head Movements, Humans
10.
Proc Biol Sci ; 281(1791): 20141000, 2014 Sep 22.
Article in English | MEDLINE | ID: mdl-25100695

ABSTRACT

Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be 'decoded' from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.


Subjects
Auditory Cortex/physiology, Auditory Perception, Hippocampus/physiology, Learning, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
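
The multi-voxel pattern analysis can be illustrated with a generic decoding sketch: a cross-validated linear classifier applied to trial-by-voxel activity patterns. Synthetic data stand in for the real fMRI responses; this shows the analysis logic, not the study's pipeline.

```python
# Generic MVPA sketch: can a linear classifier distinguish the activity
# patterns evoked by three learnt stimuli? Labels 0-2 index the patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 90, 200
labels = np.repeat([0, 1, 2], n_trials // 3)
# Each stimulus evokes a slightly different mean voxel pattern (synthetic).
means = rng.normal(0, 0.5, size=(3, n_voxels))
activity = means[labels] + rng.normal(size=(n_trials, n_voxels))

scores = cross_val_score(LinearSVC(), activity, labels, cv=5)
print("decoding accuracy:", scores.mean())  # above 1/3 chance if decodable
```
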
11.
Conscious Cogn ; 30: 62-72, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25147080

ABSTRACT

People with schizophrenia are known to exhibit difficulties in the updating of their current belief states even in the light of disconfirmatory evidence. In the present study we tested the hypothesis that people with schizophrenia could also manifest perceptual inflexibility, or difficulties in the updating of their current sensory states. The presence of perceptual inflexibility might contribute to patients' altered perception of reality, to the formation of some delusions, and to their social cognition deficits. Here, we addressed this issue with a protocol of auditory hysteresis, a direct measure of sensory persistence, in a population of stabilized antipsychotic-treated schizophrenia patients and a sample of control subjects. Trials consisted of emotional signals (i.e., screams) and neutral signals (i.e., spectrally-rotated versions of the emotional stimuli) progressively emerging from white noise - Ascending Sequences - or progressively fading into white noise - Descending Sequences. Results showed that patients presented significantly stronger hysteresis effects than control subjects, as evidenced by a higher rate of perceptual reports in Descending Sequences. The present study thus provides direct evidence of perceptual inflexibility in schizophrenia.


Subjects
Auditory Perception/physiology, Emotions/physiology, Perceptual Disorders/physiopathology, Schizophrenia/physiopathology, Adult, Female, Humans, Male
12.
J Acoust Soc Am ; 135(3): 1380-91, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606276

ABSTRACT

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.


Subjects
Cues, Discrimination (Psychology), Pitch Discrimination, Recognition (Psychology), Acoustic Stimulation, Adult, Analysis of Variance, Audiometry, Feedback (Psychology), Female, Humans, Male, Music, Sensory Gating, Signal Detection (Psychology), Singing, Sound Spectrography, Time Factors, Voice Quality, Young Adult
13.
PLoS Comput Biol ; 8(11): e1002759, 2012.
Article in English | MEDLINE | ID: mdl-23133363

ABSTRACT

Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sounds' physical characteristics, and machine-learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Neurological Models, Music, Acoustic Stimulation, Adult, Algorithms, Computational Biology, Female, Humans, Judgment/physiology, Male, Psychophysics, Recognition (Psychology)/physiology, Sound
14.
Cereb Cortex ; 22(4): 838-53, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21709178

ABSTRACT

Lesion studies in monkeys have suggested a modest left hemisphere dominance for processing species-specific vocalizations, the neural basis of which has thus far remained unclear. We used contrast agent-enhanced functional magnetic resonance imaging to map the regions of the rhesus monkey brain involved in processing conspecific vocalizations as well as human speech and emotional sounds. Control conditions included scrambled versions of all 3 stimuli and silence. Compared with silence, all stimuli activated widespread parts of the auditory cortex and subcortical auditory structures with a right hemispheric bias at the level of the auditory core. However, comparing intact with scrambled sounds revealed a leftward bias in the auditory belt and the parabelt. The left-sided dominance was stronger and more robust for human speech than for rhesus vocalizations and hence does not reflect conspecific call selectivity but rather the processing of complex spectrotemporal patterns, such as those present in human speech and in some of the rhesus monkey vocalizations. This was confirmed by regressing brain activity with a model-derived parameter indexing the prevalence of such patterns. Our results indicate that processing of vocal sounds in the lateral belt and parabelt is asymmetric in monkeys, as predicted from lesion studies.


Subjects
Brain Mapping, Brain/blood supply, Brain/physiology, Functional Laterality/physiology, Animal Vocalization/physiology, Wakefulness, Acoustic Stimulation/methods, Analysis of Variance, Animals, Auditory Pathways/blood supply, Auditory Pathways/physiology, Auditory Perception, Eye Movements, Factor Analysis, Female, Humans, Computer-Assisted Image Processing, Macaca mulatta, Magnetic Resonance Imaging, Male, Oxygen/blood, Psychoacoustics, Sound, Sound Spectrography, Attempted Suicide
15.
Adv Exp Med Biol ; 787: 443-51, 2013.
Article in English | MEDLINE | ID: mdl-23716251

ABSTRACT

Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid serial visual presentation" (RSVP) paradigm, which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.


Subjects
Acoustic Stimulation/methods, Pattern Recognition (Physiological)/physiology, Pitch Perception/physiology, Psychoacoustics, Speech Perception/physiology, Voice/physiology, Adult, Humans, Music, Perceptual Masking/physiology, Recognition (Psychology)/physiology, Speech Acoustics, Young Adult
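
The gating procedure, a raised-cosine window of variable duration applied at a random onset time, can be sketched as follows, with illustrative parameter values.

```python
# Extract a short snippet at a random onset and apply a raised-cosine
# (Hann) envelope of the requested duration.
import numpy as np

def gated_snippet(sound, sr, snippet_ms, rng=None):
    rng = np.random.default_rng(rng)
    n = max(2, int(sr * snippet_ms / 1000))
    start = rng.integers(0, len(sound) - n)     # random onset time
    window = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / (n - 1)))
    return sound[start:start + n] * window

sr = 44100
recording = np.random.default_rng(0).normal(size=sr)  # placeholder 1-s sound
snip = gated_snippet(recording, sr, snippet_ms=8)     # an 8-ms gated token
```
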
16.
Adv Exp Med Biol ; 787: 157-64, 2013.
Article in English | MEDLINE | ID: mdl-23716220

ABSTRACT

The context in which a stimulus occurs can influence its perception. We study contextual effects in audition using the tritone paradox, where a pair of complex (Shepard) tones separated by half an octave can be perceived as ascending or descending. While ambiguous in isolation, they are heard with a clear upward or downward change in pitch, when preceded by spectrally matched biasing sequences. We presented these biased Shepard pairs to awake ferrets and obtained neuronal responses from primary auditory cortex. Using dimensionality reduction from the neural population response, we decode the perceived pitch for each tone. The bias sequence is found to reliably shift the perceived pitch of the tones away from its central frequency. Using human psychophysics, we provide evidence that this shift in pitch is present in active human perception as well. These results are incompatible with the standard absolute distance decoder for Shepard tones, which would have predicted the bias to attract the tones. We propose a relative decoder that takes the stimulus history into account and is consistent with the present and other data sets.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Pitch Perception/physiology, Psychoacoustics, Psychophysics/methods, Animals, Electrophysiology, Ferrets, Humans, Neurological Models
17.
Adv Exp Med Biol ; 787: 535-43, 2013.
Article in English | MEDLINE | ID: mdl-23716261

ABSTRACT

Humans and other animals can attend to one of multiple sounds, and follow it selectively over time. The neural underpinnings of this perceptual feat remain mysterious. Some studies have concluded that sounds are heard as separate streams when they activate well-separated populations of central auditory neurons, and that this process is largely pre-attentive. Here, we propose instead that stream formation depends primarily on temporal coherence between responses that encode various features of a sound source. Furthermore, we postulate that only when attention is directed toward a particular feature (e.g., pitch or location) do all other temporally coherent features of that source (e.g., timbre and location) become bound together as a stream that is segregated from the incoherent features of other sources. Experimental neurophysiological evidence in support of this hypothesis will be presented. The focus, however, will be on a computational realization of this idea and a discussion of the insights learned from simulations to disentangle complex sound sources such as speech and music. The model consists of a representational stage of early and cortical auditory processing that creates a multidimensional depiction of various sound attributes such as pitch, location, and spectral resolution. The following stage computes a coherence matrix that summarizes the pair-wise correlations between all channels making up the cortical representation. Finally, the perceived segregated streams are extracted by decomposing the coherence matrix into its uncorrelated components. Questions raised by the model are discussed, especially on the role of attention in streaming and the search for further neural correlates of streaming percepts.


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Neurological Models, Acoustic Stimulation/methods, Acoustics, Animals, Auditory Pathways/physiology, Ferrets, Humans, Pitch Perception/physiology, Sound Localization/physiology, Time Perception/physiology
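
The core computation of the model, a coherence matrix of pairwise correlations between feature channels decomposed into uncorrelated components, can be illustrated on synthetic channel responses. Eigendecomposition stands in for the decomposition step; this is a simplified sketch, not the full cortical model.

```python
# Temporal-coherence sketch: channels driven by the same source envelope
# correlate with each other and load on the same component of the
# coherence matrix, yielding candidate "streams".
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
# Two hypothetical sources with independent temporal envelopes.
src_a, src_b = rng.normal(size=(2, n_samples))
# Six feature channels: three coherent with source A, three with source B.
channels = np.vstack(
    [src_a + 0.3 * rng.normal(size=n_samples) for _ in range(3)]
    + [src_b + 0.3 * rng.normal(size=n_samples) for _ in range(3)])

coherence = np.corrcoef(channels)               # pairwise channel correlations
eigvals, eigvecs = np.linalg.eigh(coherence)    # uncorrelated components
# The two largest components separate the A-channels from the B-channels.
print(np.round(eigvecs[:, -2:], 2))
```
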
18.
J Acoust Soc Am ; 134(1): 464-73, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23862821

ABSTRACT

In noise repetition-detection tasks, listeners have to distinguish trials of continuously running noise from trials in which noise tokens are repeated in a cyclic manner. Recently, it has been shown that using the exact same noise token across several trials ("reference noise") facilitates the detection of repetitions for this token [Agus et al. (2010). Neuron 66, 610-618]. This was attributed to perceptual learning. Here, the nature of the learning was investigated. In experiment 1, reference noise tokens were embedded in trials with or without cyclic presentation. Naïve listeners reported repetitions in both cases, thus responding to the reference noise even in the absence of an actual repetition. Experiment 2, with the same listeners, showed a similar pattern of results even after the design of the experiment was made explicit, ruling out a misunderstanding of the task. Finally, in experiment 3, listeners reported repetitions in trials containing the reference noise, even before ever hearing it presented cyclically. The results show that listeners were able to learn and recognize noise tokens in the absence of an immediate repetition. Moreover, the learning mandatorily interfered with listeners' ability to detect repetitions. It is concluded that salient perceptual changes accompany the learning of noise.


Subjects
Attention, Auditory Perception, Short-Term Memory, Perceptual Masking, Acoustic Stimulation, Adult, Female, Humans, Male, Practice (Psychology), Psychoacoustics, Sound Localization, Young Adult
19.
Curr Biol ; 33(8): R296-R298, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37098329

ABSTRACT

Almost universally, music uses scales consisting of a small number of notes. Could this increase the fitness of melodies for oral transmission? By reproducing the process online, a new study reveals how cognition, sound and culture may interact to shape music.


Subjects
Music, Cognition, Sound, Auditory Perception
20.
JASA Express Lett ; 3(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-37379207

ABSTRACT

Online auditory experiments use the sound delivery equipment of each participant, with no practical way to calibrate sound level or frequency response. Here, a method is proposed to control sensation level across frequencies: embedding stimuli in threshold-equalizing noise. In a cohort of 100 online participants, the noise equalized detection thresholds from 125 to 4000 Hz. Equalization was successful even for participants with atypical thresholds in quiet, due either to poor-quality equipment or to unreported hearing loss. Moreover, audibility in quiet was highly variable, as overall level was uncalibrated, but this variability was much reduced with the noise. Use cases are discussed.


Subjects
Deafness, Speech Perception, Humans, Auditory Threshold/physiology, Speech Perception/physiology, Noise/adverse effects
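
The basic operation behind threshold-equalizing noise, spectrally shaping broadband noise in the frequency domain, can be sketched as below. In the real method the gain curve is calibrated from auditory-filter bandwidths and masking data so that masked thresholds become equal across frequency; here a made-up gain profile stands in for that calibration.

```python
# Shape white noise in the frequency domain with a target spectral gain.
import numpy as np

def shaped_noise(gain_fn, dur=1.0, sr=44100, rng=0):
    rng = np.random.default_rng(rng)
    n = int(sr * dur)
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    spectrum *= gain_fn(freqs)                   # apply target spectral gain
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))

# Illustrative gain curve only: flat below 1 kHz, rising gently above it.
gain = lambda f: 1.0 + 0.5 * np.log2(np.maximum(f, 1000.0) / 1000.0)
ten_like = shaped_noise(gain)
```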