Results 1 - 20 of 95

1.
J Acoust Soc Am ; 151(3): 1557, 2022 03.
Article in English | MEDLINE | ID: mdl-35364949

ABSTRACT

It is not always easy to follow a conversation in a noisy environment. To distinguish between two speakers, a listener must mobilize many perceptual and cognitive processes to maintain attention on a target voice and avoid shifting attention to the background noise. The development of an intelligibility task with long stimuli, the Long-SWoRD test, is introduced. This protocol allows participants to fully benefit from cognitive resources such as semantic knowledge to separate two talkers in a realistic listening environment. Moreover, this task also provides the experimenters with a means to infer fluctuations in auditory selective attention. Two experiments document the performance of normal-hearing listeners in situations where the perceptual separability of the competing voices ranges from easy to hard using a combination of voice and binaural cues. The results show a strong effect of voice differences when the voices are presented diotically. In addition, analyzing the influence of the semantic context on the pattern of responses indicates that semantic information induces a response bias both when the competing voices are distinguishable and when they are indistinguishable from one another.


Subjects
Speech Perception, Speech, Cues (Psychology), Humans, Perceptual Masking, Semantics, Speech Perception/physiology
2.
J Acoust Soc Am ; 149(1): 259, 2021 01.
Article in English | MEDLINE | ID: mdl-33514136

ABSTRACT

The ability to discriminate frequency differences between pure tones declines as the duration of the interstimulus interval (ISI) increases. The conventional explanation for this finding is that pitch representations gradually decay from auditory short-term memory. Gradual decay means that internal noise increases with increasing ISI duration. Another possibility is that pitch representations experience "sudden death," disappearing without a trace from memory. Sudden death means that listeners guess (respond at random) more often when the ISIs are longer. Since internal noise and guessing probabilities influence the shape of psychometric functions in different ways, they can be estimated simultaneously. Eleven amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination task. The frequencies of the first tones were roved, and frequency differences and ISI durations were manipulated across trials. Data were analyzed using Bayesian models that simultaneously estimated internal noise and guessing probabilities. On average across listeners, internal noise increased monotonically as a function of increasing ISI duration, suggesting that gradual decay occurred. The guessing rate decreased with an increasing ISI duration between 0.5 and 2 s but then increased with further increases in ISI duration, suggesting that sudden death occurred but perhaps only at longer ISIs. Results are problematic for decay-only models of discrimination and contrast with those from a study on visual short-term memory, which found that over similar durations, visual representations experienced little gradual decay yet substantial sudden death.
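The logic of estimating internal noise and guessing simultaneously can be sketched with a toy psychometric model (plain Python; the parameter values are illustrative and this is not the study's Bayesian implementation): internal noise flattens the slope of the psychometric function, whereas guessing lowers its upper asymptote, so the two leave distinct signatures in accuracy data.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(delta_f, sigma, guess_rate):
    """Probability correct on a 2I-2AFC frequency-discrimination trial.
    Internal noise sigma (Hz) flattens the slope; guessing on a fraction
    of trials (correct half the time) caps the upper asymptote."""
    d_prime = delta_f / (sigma * math.sqrt(2.0))  # two noisy observations
    p_attended = phi(d_prime / math.sqrt(2.0))    # 2AFC percent correct
    return guess_rate * 0.5 + (1.0 - guess_rate) * p_attended

# Gradual decay (more internal noise at long ISIs) vs. sudden death
# (a higher guessing rate at long ISIs) predict different curve shapes:
shallow = p_correct(delta_f=10.0, sigma=8.0, guess_rate=0.0)   # flatter slope
capped = p_correct(delta_f=1000.0, sigma=2.0, guess_rate=0.2)  # asymptote 0.9
```

Because the slope and the asymptote are separately identifiable, fitting both parameters to accuracy across many frequency differences can disentangle the two accounts, which is the logic of the models described in this abstract.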


Subjects
Short-Term Memory, Music, Pitch Discrimination, Bayes Theorem, Humans, Noise
3.
J Acoust Soc Am ; 147(1): 371, 2020 01.
Article in English | MEDLINE | ID: mdl-32006971

ABSTRACT

Perceptual anchors are representations of stimulus features stored in long-term memory rather than short-term memory. The present study investigated whether listeners use perceptual anchors to improve pure-tone frequency discrimination. Ten amateur musicians performed a two-interval, two-alternative forced-choice frequency-discrimination experiment. In one half of the experiment, the frequency of the first tone was fixed across trials, and in the other half, the frequency of the first tone was roved widely across trials. The durations of the interstimulus intervals (ISIs) and the frequency differences between the tones on each trial were also manipulated. The data were analyzed with a Bayesian model that assumed that performance was limited by sensory noise (related to the initial encoding of the stimuli), memory noise (which increased proportionally to the ISI), fluctuations in attention, and response bias. It was hypothesized that memory-noise variance increased more rapidly during roved-frequency discrimination than fixed-frequency discrimination because listeners used perceptual anchors in the latter condition. The results supported this hypothesis. The results also suggested that listeners experienced more lapses in attention during roved-frequency discrimination.


Subjects
Auditory Perception, Long-Term Memory, Pitch Discrimination, Acoustic Stimulation, Adult, Bayes Theorem, Female, Humans, Male, Psychophysics, Young Adult
4.
Ear Hear ; 40(4): 938-950, 2019.
Article in English | MEDLINE | ID: mdl-30461444

ABSTRACT

OBJECTIVES: The objective of this work was to build a 15-item short-form of the Speech, Spatial and Qualities of Hearing Scale (SSQ) that maintains the three-factor structure of the full form, using a data-driven approach consistent with internationally recognized procedures for short-form building. This included the validation of the new short-form on an independent sample and an in-depth, comparative analysis of all existing, full and short SSQ forms. DESIGN: Data from a previous study involving 98 normal-hearing (NH) individuals and 196 people with hearing impairments (HI), non-hearing-aid wearers, along with results from several other published SSQ studies, were used for developing the short-form. Data from a new and independent sample of 35 NH and 88 HI hearing aid wearers were used to validate the new short-form. Factor and hierarchical cluster analyses were used to check the factor structure and internal consistency of the new short-form. In addition, the new short-form was compared with all other SSQ forms, including the full SSQ, the German SSQ15, the SSQ12, and the SSQ5. Construct validity was further assessed by testing statistical relationships between scores and audiometric factors, including pure-tone threshold averages (PTAs) and left/right PTA asymmetry. Receiver-operating characteristic analyses were used to compare the ability of different SSQ forms to discriminate between NH and HI (HI non-hearing-aid wearers and HI hearing aid wearers) individuals. RESULTS: Compared with all other SSQ forms, including the full SSQ, the new short-form showed negligible cross-loading across the three main subscales and greater discriminatory power between NH and HI subjects (as indicated by a larger area under the receiver-operating characteristic curve), as well as between the main subscales (especially Speech and Qualities). Moreover, the new, 5-item Spatial subscale showed increased sensitivity to left/right PTA asymmetry.
Very good internal consistency and homogeneity and high correlations with the SSQ were obtained for all short-forms. CONCLUSIONS: While maintaining the three-factor structure of the full SSQ, and exceeding the latter in terms of construct validity and sensitivity to audiometric variables, the new 15-item SSQ affords a substantial reduction in the number of items and, thus, in test time. Based on overall scores, Speech subscores, or Spatial subscores, but not Qualities subscores, the 15-item SSQ appears to be more sensitive to differences in self-evaluated hearing abilities between NH and HI subjects than the full SSQ.
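The discriminatory-power comparison above rests on the area under the receiver-operating characteristic curve. As a sketch, the AUC for separating two groups of questionnaire scores equals the Mann-Whitney probability that a random score from one group exceeds a random score from the other (the scores below are hypothetical, not data from the study):

```python
def auc(scores_nh, scores_hi):
    """Area under the ROC curve for separating two groups of scores,
    computed as the Mann-Whitney probability that a randomly drawn NH
    score exceeds a randomly drawn HI score (ties count as 1/2)."""
    wins = 0.0
    for a in scores_nh:
        for b in scores_hi:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(scores_nh) * len(scores_hi))

# Hypothetical SSQ-style overall scores (0-10 scale); higher = better.
nh = [8.9, 9.2, 8.1, 9.5, 8.7]
hi = [5.1, 6.3, 4.8, 7.0, 5.9]
print(auc(nh, hi))  # 1.0: the two groups are perfectly separable
```

An AUC of 0.5 means chance-level separation and 1.0 means perfect separation, so a short-form with a larger AUC than the full SSQ discriminates NH from HI respondents better.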


Subjects
Hearing Aids, Hearing Loss/rehabilitation, Patient-Reported Outcome Measures, Adolescent, Adult, Aged, Aged 80 and over, Pure-Tone Audiometry, Case-Control Studies, Cluster Analysis, Factor Analysis, Female, Hearing Loss/physiopathology, Humans, Male, Middle Aged, ROC Curve, Reproducibility of Results, Speech Perception, Young Adult
5.
J Acoust Soc Am ; 143(6): 3665, 2018 06.
Article in English | MEDLINE | ID: mdl-29960504

ABSTRACT

Using a same-different discrimination task, it has been shown that discrimination performance for sequences of complex tones varying just detectably in pitch is less dependent on sequence length (1, 2, or 4 elements) when the tones contain resolved harmonics than when they do not [Cousineau, Demany, and Pressnitzer (2009). J. Acoust. Soc. Am. 126, 3179-3187]. This effect had been attributed to the activation of automatic frequency-shift detectors (FSDs) by the shifts in resolved harmonics. The present study provides evidence against this hypothesis by showing that the sequence-processing advantage found for complex tones with resolved harmonics is not found for pure tones or other sounds supposed to activate FSDs (narrow bands of noise and wide-band noises eliciting pitch sensations due to interaural phase shifts). The present results also indicate that for pitch sequences, processing performance is largely unrelated to pitch salience per se: for a fixed level of discriminability between sequence elements, sequences of elements with salient pitches are not necessarily better processed than sequences of elements with less salient pitches. An ideal-observer model for the same-different binary-sequence discrimination task is also developed in the present study. The model allows the computation of d' for this task using numerical methods.
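How sensitivity is extracted from a same-different task can be illustrated with a simplified signed-difference decision rule (a textbook sketch, not the paper's full numerical ideal-observer model for binary sequences): if the observer bases the decision on the signed difference of two noisy observations, the difference has twice the variance of a single observation, so the underlying d' is sqrt(2) times the z-score difference between hit and false-alarm rates.

```python
import math

def z(p):
    """Inverse of the standard normal CDF (probit), via bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d_prime_same_different(hit_rate, fa_rate):
    """Sensitivity from same-different data, assuming a criterion on
    the signed difference of two observations: the difference carries
    twice the variance, hence the sqrt(2) scaling of z(H) - z(FA)."""
    return math.sqrt(2.0) * (z(hit_rate) - z(fa_rate))
```

For example, a hit rate of 0.84 against a false-alarm rate of 0.16 yields a d' near 2.8 under this rule; richer sequence stimuli require the kind of numerical evaluation the paper describes.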

6.
J Acoust Soc Am ; 144(4): 2462, 2018 10.
Article in English | MEDLINE | ID: mdl-30404465

ABSTRACT

In order to perceive meaningful speech, the auditory system must recognize different phonemes amidst a noisy and variable acoustic signal. To better understand the processing mechanisms underlying this ability, evoked cortical responses to different spoken consonants were measured with electroencephalography (EEG). Using multivariate pattern analysis (MVPA), binary classifiers attempted to discriminate between the EEG activity evoked by two given consonants at each peri-stimulus time sample, providing a dynamic measure of their cortical dissimilarity. To examine the relationship between representations at the auditory periphery and cortex, MVPA was also applied to modelled auditory-nerve (AN) responses of consonants, and time-evolving AN-based and EEG-based dissimilarities were compared with one another. Cortical dissimilarities between consonants were commensurate with their articulatory distinctions, particularly their manner of articulation, and to a lesser extent, their voicing. Furthermore, cortical distinctions between consonants in two periods of activity, centered at 130 and 400 ms after onset, aligned with their peripheral dissimilarities in distinct onset and post-onset periods, respectively. In relating speech representations across articulatory, peripheral, and cortical domains, the understanding of crucial transformations in the auditory pathway underlying the ability to perceive speech is advanced.
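The time-resolved decoding logic behind such MVPA analyses can be sketched with synthetic data and a leave-one-out nearest-centroid classifier (a stand-in for the study's actual pipeline; everything below, including the injected class difference, is simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG: trials x channels x time, two "consonant" classes.
# The class means differ only in a mid-latency window, mimicking a
# transient cortical distinction. (Toy data, not the study's recordings.)
n_trials, n_chan, n_time = 40, 8, 50
X = rng.normal(0.0, 1.0, (2 * n_trials, n_chan, n_time))
y = np.repeat([0, 1], n_trials)
X[y == 1, :, 20:30] += 1.0  # class difference from sample 20 to 29

def decode_timecourse(X, y):
    """Leave-one-out nearest-centroid decoding at each time sample,
    returning an accuracy-over-time curve (a proxy for the 'cortical
    dissimilarity' trace that MVPA provides)."""
    n = len(y)
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for i in range(n):
            mask = np.arange(n) != i  # hold out trial i
            m0 = X[mask & (y == 0), :, t].mean(axis=0)
            m1 = X[mask & (y == 1), :, t].mean(axis=0)
            pred = int(np.linalg.norm(X[i, :, t] - m1)
                       < np.linalg.norm(X[i, :, t] - m0))
            correct += int(pred == y[i])
        acc[t] = correct / n
    return acc

acc = decode_timecourse(X, y)
```

Accuracy hovers near chance outside the injected window and rises well above it inside, which is the dynamic dissimilarity measure the abstract describes.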


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Speech Perception, Adult, Female, Humans, Male, Phonetics
7.
Ear Hear ; 38(4): 465-474, 2017.
Article in English | MEDLINE | ID: mdl-28169839

ABSTRACT

OBJECTIVES: The goal of this study was to examine whether individuals are using speech intelligibility to determine how much noise they are willing to accept while listening to running speech. Previous research has shown that the amount of background noise that an individual is willing to accept while listening to speech is predictive of his or her likelihood of success with hearing aids. If it were possible to determine the criterion by which individuals make this judgment, then it may be possible to alter this cue, especially for those who are unlikely to be successful with hearing aids, and thereby improve their chances of success with hearing aids. DESIGN: Twenty-one individuals with normal hearing and 21 with sensorineural hearing loss participated in this study. In each group, there were 7 with a low, moderate, and high acceptance of background noise, as determined by the Acceptable Noise Level (ANL) test. (During the ANL test, listeners adjusted speech to their most comfortable listening level, then background noise was added, and they adjusted it to the maximum level that they were "willing to put up with" while listening to the speech.) Participants also performed a modified version of the ANL test in which the speech was fixed at four different levels (50, 63, 75, and 88 dBA), and they adjusted only the level of the background noise. The authors calculated speech intelligibility index (SII) scores for each participant and test level. SII scores ranged from 0 (no speech information is present) to 1 (100% of the speech information is present). The authors considered a participant's results to be consistent with a speech intelligibility-based listening criterion if his or her SIIs remained constant across all of the test conditions. RESULTS: For all but one of the participants with normal hearing, their SIIs remained constant across the entire 38-dB range of speech levels. For all participants with hearing loss, the SII increased with speech level. 
CONCLUSIONS: For most listeners with normal hearing, their ANLs were consistent with the use of speech intelligibility as a listening cue; for listeners with hearing impairment, they were not. Future studies should determine what cues these individuals are using when selecting an ANL. Having a better understanding of these cues may help audiologists design and optimize treatment options for their patients.
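The SII computation referenced above weights per-band audibility by band importance. A toy sketch follows (the real SII, specified in ANSI S3.5, uses standardized importance functions and several level corrections; the band levels and weights here are hypothetical):

```python
def sii(speech_spectrum_db, noise_spectrum_db, band_importance):
    """Toy Speech Intelligibility Index: per-band audibility is the
    speech-to-noise ratio clipped to [-15, +15] dB and mapped to 0-1,
    then weighted by band importance (weights sum to 1)."""
    total = 0.0
    for s, n, w in zip(speech_spectrum_db, noise_spectrum_db, band_importance):
        snr = max(-15.0, min(15.0, s - n))
        audibility = (snr + 15.0) / 30.0
        total += w * audibility
    return total

# Hypothetical four-band example: speech and noise levels in dB,
# plus importance weights for each band.
speech = [50.0, 55.0, 52.0, 48.0]
noise = [40.0, 40.0, 52.0, 63.0]
weights = [0.25, 0.35, 0.25, 0.15]
print(sii(speech, noise, weights))  # about 0.68
```

A listener holding SII constant across speech levels would adjust the noise so that this weighted audibility stays fixed, which is the criterion the study tested.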


Subjects
Sensorineural Hearing Loss/physiopathology, Noise, Speech Perception, Case-Control Studies, Cues (Psychology), Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Humans, Prognosis, Speech Intelligibility, Treatment Outcome
8.
J Acoust Soc Am ; 142(4): 2386, 2017 10.
Article in English | MEDLINE | ID: mdl-29092591

ABSTRACT

To better understand issues of hearing-aid benefit during natural listening, this study examined the added demand placed by the goal of understanding speech over the more typically studied goal of simply recognizing speech sounds. The study compared hearing-aid benefit in two conditions, and examined factors that might account for the observed benefits. In the phonetic condition, listeners needed only identify the correct sound to make a correct response. In the semantic condition, listeners had to understand what they had heard to respond correctly, because the answer did not include any keywords from the spoken speech. Hearing aids provided significant benefit for listeners in the phonetic condition. In the semantic condition, on the other hand, there were large inter-individual differences, with many listeners not experiencing any benefit of aiding. Neither a set of cognitive and linguistic tests, nor age, could explain this variability. Furthermore, analysis of psychometric functions showed that enhancement of the target speech fidelity through improvement of signal-to-noise ratio had a larger impact on listeners' performance in the phonetic condition than in the semantic condition. These results demonstrate the importance of incorporating naturalistic elements in the simulation of multi-talker listening for assessing the benefits of intervention in communication success.


Subjects
Hearing Aids, Sensorineural Hearing Loss, Speech Perception, Aged, Aged 80 and over, Analysis of Variance, Auditory Threshold, Female, Sensorineural Hearing Loss/rehabilitation, Humans, Intelligence Tests, Male, Middle Aged, Neuropsychological Tests, Signal-to-Noise Ratio, Vocabulary
9.
J Neurosci ; 34(37): 12425-43, 2014 Sep 10.
Article in English | MEDLINE | ID: mdl-25209282

ABSTRACT

The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate "auditory objects" with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the "object-related negativity" recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch.


Subjects
Auditory Cortex/physiology, Auditory Evoked Potentials/physiology, Nerve Net/physiology, Physiological Pattern Recognition/physiology, Pitch Perception/physiology, Animals, Brain Mapping, Cues (Psychology), Haplorhini, Humans, Macaca fascicularis, Male, Perceptual Masking
10.
J Neurosci ; 33(25): 10312-23, 2013 Jun 19.
Article in English | MEDLINE | ID: mdl-23785145

ABSTRACT

Many natural sounds are periodic and consist of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0, which plays a key role in the perception of speech and music. "Pitch-selective" neurons have been identified in non-primary auditory cortex of marmoset monkeys. Noninvasive studies point to a putative "pitch center" located in a homologous cortical region in humans. It remains unclear whether there is sufficient spectral and temporal information available at the level of primary auditory cortex (A1) to enable reliable pitch extraction in non-primary auditory cortex. Here we evaluated multiunit responses to HCTs in A1 of awake macaques using a stimulus design employed in auditory nerve studies of pitch encoding. The F0 of the HCTs was varied in small increments, such that harmonics of the HCTs fell either on the peak or on the sides of the neuronal pure tone tuning functions. Resultant response-amplitude-versus-harmonic-number functions ("rate-place profiles") displayed a periodic pattern reflecting the neuronal representation of individual HCT harmonics. Consistent with psychoacoustic findings in humans, lower harmonics were better resolved in rate-place profiles than higher harmonics. Lower F0s were also temporally represented by neuronal phase-locking to the periodic waveform of the HCTs. Findings indicate that population responses in A1 contain sufficient spectral and temporal information for extracting the pitch of HCTs by neurons in downstream cortical areas that receive their input from A1.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Pitch Perception/physiology, Acoustic Stimulation, Animals, Implanted Electrodes, Auditory Evoked Potentials/physiology, Macaca fascicularis, Male, Wakefulness/physiology
11.
PLoS Comput Biol ; 9(11): e1003336, 2013.
Article in English | MEDLINE | ID: mdl-24244142

ABSTRACT

The nature of the neural codes for pitch and loudness, two basic auditory attributes, has been a key question in neuroscience for over a century. A currently widespread view is that sound intensity (subjectively, loudness) is encoded in spike rates, whereas sound frequency (subjectively, pitch) is encoded in precise spike timing. Here, using information-theoretic analyses, we show that the spike rates of a population of virtual neural units with frequency-tuning and spike-count correlation characteristics similar to those measured in the primary auditory cortex of primates contain sufficient statistical information to account for the smallest frequency-discrimination thresholds measured in human listeners. The same population, and the same spike-rate code, can also account for the intensity-discrimination thresholds of humans. These results demonstrate the viability of a unified rate-based cortical population code for both sound frequency (pitch) and sound intensity (loudness), and thus suggest a resolution to a long-standing puzzle in auditory neuroscience.
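The information-theoretic logic, how a population rate code bounds discrimination thresholds, can be sketched with Fisher information for independent Poisson units (note the study modeled spike-count correlations, which this toy version omits, and all tuning parameters here are invented):

```python
import math

def tuning(freq, cf, sigma_oct, peak_rate):
    """Gaussian-on-log-frequency tuning curve (expected spike count)."""
    d_oct = math.log2(freq / cf)
    return peak_rate * math.exp(-0.5 * (d_oct / sigma_oct) ** 2)

def threshold_from_rates(freq, cfs, sigma_oct=0.5, peak_rate=50.0):
    """Frequency-discrimination threshold (Hz) predicted from the Fisher
    information of an independent-Poisson population rate code:
    J(f) = sum_i r_i'(f)^2 / r_i(f), threshold ~ 1 / sqrt(J)."""
    df = freq * 1e-4  # step for the numerical derivative
    J = 0.0
    for cf in cfs:
        r = tuning(freq, cf, sigma_oct, peak_rate)
        dr = (tuning(freq + df, cf, sigma_oct, peak_rate) - r) / df
        if r > 1e-9:
            J += dr * dr / r
    return 1.0 / math.sqrt(J)

# A bank of units with characteristic frequencies from 125 Hz to 8 kHz.
cfs = [125.0 * 2 ** (k / 20.0) for k in range(0, 121)]
thr = threshold_from_rates(1000.0, cfs)
```

Larger populations carry more Fisher information and therefore predict smaller thresholds; the study's contribution was showing that realistic tuning and correlations still leave enough rate-based information to match human thresholds.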


Subjects
Action Potentials/physiology, Auditory Cortex/physiology, Neurological Models, Neurons/physiology, Animals, Computational Biology, Humans, Primates
12.
Proc Natl Acad Sci U S A ; 108(18): 7629-34, 2011 May 03.
Article in English | MEDLINE | ID: mdl-21502495

ABSTRACT

Humans' ability to recognize musical melodies is generally limited to pure-tone frequencies below 4 or 5 kHz. This limit coincides with the highest notes on modern musical instruments and is widely believed to reflect the upper limit of precise stimulus-driven spike timing in the auditory nerve. We tested the upper limits of pitch and melody perception in humans using pure and harmonic complex tones, such as those produced by the human voice and musical instruments, in melody recognition and pitch-matching tasks. We found that robust pitch perception can be elicited by harmonic complex tones with fundamental frequencies below 2 kHz, even when all of the individual harmonics are above 6 kHz, well above the currently accepted existence region of pitch and above the currently accepted limits of neural phase locking. The results suggest that the perception of musical pitch at high frequencies is not constrained by temporal phase locking in the auditory nerve but may instead stem from higher-level constraints shaped by prior exposure to harmonic sounds.


Subjects
Cochlear Nerve/physiology, Music, Pitch Perception/physiology, Acoustic Stimulation, Humans
13.
J Neurosci ; 32(13): 4660-4, 2012 Mar 28.
Article in English | MEDLINE | ID: mdl-22457512

ABSTRACT

When an acoustic signal is temporarily interrupted by another sound, it is sometimes heard as continuing through, even when the signal is actually turned off during the interruption, an effect known as the "auditory continuity illusion." A widespread view is that the illusion can only occur when peripheral neural responses contain no evidence that the signal was interrupted. Here we challenge this view using a combination of psychophysical measures from human listeners and computational simulations with a model of the auditory periphery. The results reveal that the illusion seems to depend more on the overall specific loudness than on the peripheral masking properties of the interrupting sound. This finding indicates that the continuity illusion is determined by the global features, rather than the fine-grained temporal structure, of the interrupting sound, and argues against the view that the illusion arises in the auditory periphery.


Subjects
Hearing, Illusions/psychology, Perceptual Masking, Acoustic Stimulation/methods, Adult, Computer Simulation/statistics & numerical data, Female, Humans, Male, Sound
14.
Adv Exp Med Biol ; 787: 137-45, 2013.
Article in English | MEDLINE | ID: mdl-23716218

ABSTRACT

High-frequency pure tones (>6 kHz), which alone do not produce salient melodic pitch information, provide melodic pitch information when they form part of a harmonic complex tone with a lower fundamental frequency (F0). We explored this phenomenon in normal-hearing listeners by measuring F0 difference limens (F0DLs) for harmonic complex tones and pure-tone frequency difference limens (FDLs) for each of the tones within the harmonic complexes. Two spectral regions were tested. The low- and high-frequency band-pass regions comprised harmonics 6-11 of a 280- or 1,400-Hz F0, respectively; thus, for the high-frequency region, audible frequencies present were all above 7 kHz. Frequency discrimination of inharmonic log-spaced tone complexes was also tested in control conditions. All tones were presented in a background of noise to limit the detection of distortion products. As found in previous studies, F0DLs in the low region were typically no better than the FDL for each of the constituent pure tones. In contrast, F0DLs for the high-region complex were considerably better than the FDLs found for most of the constituent (high-frequency) pure tones. The data were compared with models of optimal spectral integration of information, to assess the relative influence of peripheral and more central noise in limiting performance. The results demonstrate a dissociation in the way pitch information is integrated at low and high frequencies and provide new challenges and constraints in the search for the underlying neural mechanisms of pitch.
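The optimal-integration benchmark against which the F0DLs were compared can be written down directly: if the shift at each harmonic contributes independent information with threshold FDL_i, d-primes add in quadrature, so inverse-squared thresholds sum. A short sketch (the per-harmonic FDLs below are hypothetical, not measured values from the chapter):

```python
import math

def optimal_integration(fdls):
    """Predicted combined difference limen if the shifts at each
    harmonic are integrated optimally and independently: d-primes add
    in quadrature, so inverse-squared thresholds sum."""
    return 1.0 / math.sqrt(sum(1.0 / f ** 2 for f in fdls))

# Hypothetical per-harmonic FDLs (% of component frequency) for
# harmonics 6-11 of one F0.
fdls = [0.8, 1.0, 1.4, 1.9, 2.5, 3.0]
predicted_f0dl = optimal_integration(fdls)
```

The predicted combined threshold is always better than the best single harmonic's threshold; F0DLs that fail to beat the best FDL (as in the low region) imply sub-optimal integration, whereas F0DLs well below the individual FDLs (as in the high region) are consistent with more central limits on the single-tone measurements.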


Subjects
Auditory Pathways/physiology, Hearing/physiology, Neurological Models, Pitch Perception/physiology, Pure-Tone Audiometry, Humans, Music, Noise, Pitch Discrimination/physiology, Young Adult
15.
Adv Exp Med Biol ; 787: 109-18, 2013.
Article in English | MEDLINE | ID: mdl-23716215

ABSTRACT

Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed using shuffled cross correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN. To the extent that performance in F0- and harmonic/inharmonic discrimination tasks depends on TFS cues, it is likely through a more complicated (suboptimal) decoding mechanism, which may involve "spatiotemporal" (place-time) neural representations.
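The cross-correlation quantification can be illustrated in miniature: a normalized correlation between a REF and a TEST response histogram drops as the frequency shift between the stimuli grows. The sketch below uses idealized sinusoidal period histograms in place of real shuffled correlograms (the phase shift and histogram shapes are invented for illustration):

```python
import math

def correlation(x, y):
    """Pearson correlation between two response histograms; a simplified
    stand-in for the shuffled-correlogram coefficients used to compare
    REF (harmonic) and TEST (frequency-shifted) neural responses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Toy period histograms: sinusoidal phase locking, with the TEST
# response shifted by 0.4 rad to mimic a small frequency shift.
bins = 64
ref = [1.0 + math.cos(2.0 * math.pi * k / bins) for k in range(bins)]
test = [1.0 + math.cos(2.0 * math.pi * k / bins + 0.4) for k in range(bins)]
rho = correlation(ref, test)  # equals cos(0.4) for these histograms
```

In this idealized case the coefficient depends only on the phase (i.e., frequency) shift, mirroring the paper's finding that neural TFS cues tracked the frequency shift in Hz regardless of hearing status.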


Subjects
Auditory Threshold/physiology, Cochlear Nerve/physiology, Noise-Induced Hearing Loss/physiopathology, Sensorineural Hearing Loss/physiopathology, Pitch Discrimination/physiology, Acoustic Stimulation/methods, Animals, Chinchilla, Differential Threshold/physiology, Humans, Biological Models, Noise, Psychoacoustics
16.
Adv Exp Med Biol ; 787: 483-9, 2013.
Article in English | MEDLINE | ID: mdl-23716255

ABSTRACT

Many previous studies have shown that a tone that is momentarily interrupted can be perceived as continuous if the interruption is completely masked by noise. It has been suggested that this "continuity illusion" occurs only when peripheral neural responses contain no evidence that the signal was interrupted. In this study, we used a combination of psychophysical measures and computational simulations of peripheral auditory responses to examine whether the continuity illusion can be experienced under conditions where peripheral neural responses contain evidence that the signal did not continue through the masker. Our results provide an example of a salient continuity illusion despite evidence of an interruption in the peripheral representation, indicating that the illusion may depend more on global features of the interrupting sound, such as its long-term specific loudness, than on its fine-grained temporal structure.


Subjects
Auditory Perception/physiology, Illusions/physiology, Physiological Pattern Recognition/physiology, Psychoacoustics, Acoustic Stimulation/methods, Adult, Auditory Threshold/physiology, Female, Humans, Loudness Perception/physiology, Male, Perceptual Masking/physiology, Time Perception/physiology, Young Adult
17.
Adv Exp Med Biol ; 787: 535-43, 2013.
Article in English | MEDLINE | ID: mdl-23716261

ABSTRACT

Humans and other animals can attend to one of multiple sounds, and follow it selectively over time. The neural underpinnings of this perceptual feat remain mysterious. Some studies have concluded that sounds are heard as separate streams when they activate well-separated populations of central auditory neurons, and that this process is largely pre-attentive. Here, we propose instead that stream formation depends primarily on temporal coherence between responses that encode various features of a sound source. Furthermore, we postulate that only when attention is directed toward a particular feature (e.g., pitch or location) do all other temporally coherent features of that source (e.g., timbre and location) become bound together as a stream that is segregated from the incoherent features of other sources. Experimental neurophysiological evidence in support of this hypothesis will be presented. The focus, however, will be on a computational realization of this idea and a discussion of the insights gained from simulations to disentangle complex sound sources such as speech and music. The model consists of a representational stage of early and cortical auditory processing that creates a multidimensional depiction of various sound attributes such as pitch, location, and spectral resolution. The following stage computes a coherence matrix that summarizes the pair-wise correlations between all channels making up the cortical representation. Finally, the perceived segregated streams are extracted by decomposing the coherence matrix into its uncorrelated components. Questions raised by the model are discussed, especially on the role of attention in streaming and the search for further neural correlates of streaming percepts.
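The coherence-matrix stage and its decomposition can be sketched with toy data (a miniature stand-in for the model's cortical representation, using synthetic envelopes rather than actual auditory channels):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 400

# Toy "cortical channels": four channels follow source A's envelope,
# two follow source B's; the two envelopes are uncorrelated.
env_a = rng.random(n_samples)
env_b = rng.random(n_samples)
channels = np.vstack(
    [env_a + 0.1 * rng.standard_normal(n_samples) for _ in range(4)]
    + [env_b + 0.1 * rng.standard_normal(n_samples) for _ in range(2)]
)

# Coherence matrix: pairwise correlations between all channels.
C = np.corrcoef(channels)

# Decompose into uncorrelated components: the leading eigenvector
# weights together the temporally coherent channels of one stream.
eigvals, eigvecs = np.linalg.eigh(C)
stream_weights = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
```

Each leading eigenvector groups channels whose responses rise and fall together, which is how the model extracts segregated streams from the coherence matrix.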


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Neurological Models, Acoustic Stimulation/methods, Acoustics, Animals, Auditory Pathways/physiology, Ferrets, Humans, Pitch Perception/physiology, Sound Localization/physiology, Time Perception/physiology
18.
J Acoust Soc Am ; 133(3): EL188-94, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23464127

ABSTRACT

This study sought to investigate the influence of temporal incoherence and inharmonicity on concurrent stream segregation, using performance-based measures. Subjects discriminated frequency shifts in a temporally regular sequence of target pure tones, embedded in a constant or randomly varying multi-tone background. Depending on the condition tested, the target tones were either temporally coherent or incoherent with, and either harmonically or inharmonically related to, the background tones. The results provide further evidence that temporal incoherence facilitates stream segregation and they suggest that deviations from harmonicity can cause similar facilitation effects, even when the targets and the maskers are temporally coherent.
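A trial of the kind of stimulus described above can be sketched as follows. All parameter values (fundamental, burst duration, rhythm, choice of background partials, jitter range) are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

fs = 16_000           # sample rate in Hz (assumed)
f0 = 100.0            # hypothetical fundamental for the harmonic background
tone_dur = 0.06       # 60-ms tone bursts (illustrative)

def tone(freq, dur, fs):
    """Pure-tone burst with 5-ms raised-cosine onset/offset ramps."""
    t = np.arange(int(dur * fs)) / fs
    ramp = int(0.005 * fs)
    env = np.ones(t.size)
    env[:ramp] = 0.5 * (1.0 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return env * np.sin(2.0 * np.pi * freq * t)

def trial(harmonic=True, coherent=True, n_bursts=8, seed=0):
    """Regular target-tone sequence embedded in a multi-tone background."""
    rng = np.random.default_rng(seed)
    period = int(0.15 * fs)                      # regular 150-ms target rhythm
    out = np.zeros(n_bursts * period)
    target_f = 4 * f0 if harmonic else 4.3 * f0  # on vs. off the harmonic grid
    tgt = tone(target_f, tone_dur, fs)
    for k in range(n_bursts):
        onset = k * period
        out[onset:onset + tgt.size] += tgt
        for h in (2, 3, 6, 7):                   # background partials
            # Coherent: background gated with the target; incoherent: jittered.
            jitter = 0 if coherent else int(rng.integers(0, period // 2))
            seg = 0.5 * tone(h * f0, tone_dur, fs)
            out[onset + jitter:onset + jitter + seg.size] += seg
    return out
```

Crossing the two flags (`harmonic`, `coherent`) yields the four stimulus classes the condition design implies: coherent-harmonic, coherent-inharmonic, incoherent-harmonic, and incoherent-inharmonic.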


Subjects
Auditory Pathways/physiology , Cues , Pitch Discrimination , Time Perception , Acoustic Stimulation , Adult , Audiometry, Pure-Tone , Auditory Threshold , Humans , Judgment , Noise/adverse effects , Perceptual Masking , Psychoacoustics , Signal Detection, Psychological , Sound Spectrography , Time Factors , Young Adult
19.
J Acoust Soc Am ; 133(2): 982-97, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23363115

ABSTRACT

The role of temporal stimulus parameters in the perception of across-frequency synchrony and asynchrony was investigated using pairs of 500-ms tones consisting of a 250-Hz tone and a tone with a higher frequency of 1, 2, 4, or 6 kHz. Subjective judgments suggested veridical perception of across-frequency synchrony but with greater sensitivity to changes in asynchrony for pairs in which the lower-frequency tone was leading than for pairs in which it was lagging. Consistent with the subjective judgments, thresholds for the detection of asynchrony measured in a three-alternative forced-choice task were lower when the signal interval contained a pair with the low-frequency tone leading than a pair with a high-frequency tone leading. A similar asymmetry was observed for asynchrony discrimination when the standard asynchrony was relatively small (≤20 ms) but not for larger standard asynchronies. Independent manipulation of onset and offset ramp durations indicated a dominant role of onsets in the perception of across-frequency asynchrony. A physiologically inspired model, involving broadly tuned monaural coincidence detectors that receive inputs from frequency-selective onset detectors, was able to accurately reproduce the asymmetric distributions of synchrony judgments. The model provides testable predictions for future physiological investigations of responses to broadband stimuli with across-frequency delays.
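The onset-detector and coincidence-detector stages of such a model can be sketched as below. This is a minimal illustration under stated assumptions (a half-wave-rectified envelope derivative as the onset detector, and a lag-scanned correlation as the coincidence stage); it omits the broad tuning and the low-frequency-leading asymmetry that the full model reproduces.

```python
import numpy as np

fs = 1000  # envelope sampling rate (Hz); 1 sample = 1 ms

def onset_response(envelope):
    """Frequency-selective onset detector: half-wave-rectified derivative
    of the channel envelope (offsets are discarded by the rectification)."""
    return np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)

def coincidence(onsets_a, onsets_b, window_ms=20):
    """Coincidence detector: correlate two onset trains over +/- window_ms.
    Returns the peak value and the lag (ms) that aligns train b with train a
    (negative lag = b's onset occurs after a's)."""
    w = int(window_ms * fs / 1000)
    lags = np.arange(-w, w + 1)
    vals = np.array([np.dot(onsets_a, np.roll(onsets_b, lag)) for lag in lags])
    best = int(np.argmax(vals))
    return vals[best], lags[best] * 1000.0 / fs

def gated_envelope(delay_ms, dur_ms=500, total_ms=700):
    """Rectangular envelope of a tone starting delay_ms into the trial."""
    env = np.zeros(int(total_ms * fs / 1000))
    start = int(delay_ms * fs / 1000)
    env[start:start + int(dur_ms * fs / 1000)] = 1.0
    return env

# Low-frequency channel leads; the higher channel starts 10 ms later.
low = onset_response(gated_envelope(50))
high = onset_response(gated_envelope(60))
peak, lag = coincidence(low, high)  # lag recovers the 10-ms asynchrony
```

Because the rectification keeps only onsets, the coincidence output is driven by onset timing alone, mirroring the dominant role of onsets reported in the abstract.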


Subjects
Discrimination, Psychological , Periodicity , Pitch Discrimination , Time Perception , Acoustic Stimulation , Analysis of Variance , Audiometry , Auditory Pathways/physiology , Auditory Threshold , Cues , Humans , Judgment , Models, Biological , Psychoacoustics , Time Factors
20.
J Neurosci ; 31(18): 6759-63, 2011 May 04.
Article in English | MEDLINE | ID: mdl-21543605

ABSTRACT

The mammalian auditory system contains descending neural pathways, some of which project onto the cochlea via the medial olivocochlear (MOC) system. The function of this efferent auditory system is not entirely clear. Behavioral studies in animals with olivocochlear (OC) lesions suggest that the MOC serves to facilitate sound localization in noise. In the current work, noise-induced OC activity (the OC reflex) and sound-localization performance in noise were measured in normal-hearing humans. Consistent with earlier studies, both measures were found to vary substantially across individuals. Importantly, significant correlations were observed between OC-reflex strength and the effect of noise on sound-localization performance; the stronger the OC reflex, the less marked the effect of noise. These results suggest that MOC activation by noise helps to counteract the detrimental effects of background noise on neural representations of direction-dependent spectral features, which are especially important for accurate localization in the up/down and front/back dimensions.


Subjects
Auditory Pathways/physiology , Cochlea/physiology , Neurons, Efferent/physiology , Sound Localization/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Efferent Pathways/physiology , Female , Humans , Male , Noise , Otoacoustic Emissions, Spontaneous/physiology