Results 1 - 20 of 22
1.
J Acoust Soc Am ; 145(4): 2388, 2019 04.
Article in English | MEDLINE | ID: mdl-31046337

ABSTRACT

The ISO-1999 [(2013). International Organization for Standardization, Geneva, Switzerland] standard is the most commonly used approach for estimating noise-induced hearing trauma. However, its insensitivity to noise characteristics limits its practical application. In this study, an automatic classification method using the support vector machine (SVM) was developed to predict hearing impairment in workers exposed to both Gaussian (G) and non-Gaussian (non-G) industrial noises. A recently collected database (N = 2,110) of industrial workers in China was used in the present study. A statistical metric, kurtosis, was used to characterize the industrial noise. In addition to analyzing all the data as one group, the data were broken down into four subgroups based on the level of kurtosis: G/quasi-G, low-kurtosis, middle-kurtosis, and high-kurtosis. The performance of the ISO-1999 and SVM models was compared over these five groups. The results showed that: (1) the SVM model significantly outperformed the ISO-1999 model in all five groups; (2) the ISO-1999 model could not properly predict hearing impairment for the high-kurtosis group, and it is likely to underestimate hearing impairment caused by both G and non-G noise exposures; (3) the SVM model is a potential tool for predicting hearing impairment caused by diverse noise exposures.
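As a rough illustration of the kurtosis metric used to stratify the noise exposures, the sketch below computes sample kurtosis with NumPy and bins a recording into the four named groups. The group cut-offs are illustrative placeholders, not the thresholds used in the study, and the SVM itself is omitted.

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4.
    Gaussian noise has a kurtosis of ~3; impulsive industrial
    noise yields larger values."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    return np.mean((x - mu) ** 4) / var ** 2

def kurtosis_group(k, low=10.0, high=25.0):
    """Assign a recording to one of the four kurtosis groups.
    The cut-offs here are illustrative placeholders only."""
    if k <= 3.5:
        return "G/quasi-G"
    if k <= low:
        return "low-kurtosis"
    if k <= high:
        return "middle-kurtosis"
    return "high-kurtosis"

rng = np.random.default_rng(0)
gaussian = rng.standard_normal(100_000)            # steady (G) noise, kurtosis near 3
impulsive = gaussian.copy()
idx = rng.integers(0, impulsive.size, 300)
impulsive[idx] += rng.standard_normal(300) * 12.0  # rare loud impulses inflate kurtosis
```

A steady Gaussian recording lands in the G/quasi-G bin, while the same recording with a few hundred loud impulses added moves far up the kurtosis scale.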


Subjects
Hearing Loss, Noise-Induced/etiology , Occupational Noise/adverse effects , Support Vector Machine , Acoustic Stimulation/classification , Acoustic Stimulation/standards , Adult , Aged , Female , Hearing Loss, Noise-Induced/prevention & control , Humans , Male , Manufacturing Industry/classification , Manufacturing Industry/standards , Middle Aged , Occupational Noise/prevention & control
2.
Neuron ; 98(2): 405-416.e4, 2018 04 18.
Article in English | MEDLINE | ID: mdl-29673483

ABSTRACT

Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.


Subjects
Acoustic Stimulation/classification , Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Prefrontal Cortex/physiology , Vocalization, Animal/physiology , Adolescent , Adult , Animals , Auditory Cortex/diagnostic imaging , Female , Haplorhini , Humans , Magnetic Resonance Imaging/methods , Male , Prefrontal Cortex/diagnostic imaging , Young Adult
3.
J Neural Eng ; 13(2): 026005, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26824883

ABSTRACT

OBJECTIVE: One of the major drawbacks of EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. APPROACH: We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher-dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial, and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. MAIN RESULTS: The presented approach leads to fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. SIGNIFICANCE: The described methods are a promising new way of classifying BCI data, with a more direct link to the original P300 ERP signal than the conventional and widely used supervised approaches.
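A minimal NumPy sketch of the canonical polyadic decomposition via alternating least squares (ALS) is shown below. This is a generic textbook ALS for a 3-way tensor, not the authors' implementation: it omits the block term decompositions and the ERP-template concatenation scheme described in the abstract, and the channels x time x trials shapes are arbitrary.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=500, seed=0):
    """Canonical polyadic decomposition of a 3-way tensor by ALS.
    Each factor update solves a linear least-squares problem whose
    Gram matrix is the Hadamard product of the other factors' Grams."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Sanity check: recover a synthetic rank-2 tensor (channels x time x trials).
rng = np.random.default_rng(1)
A0 = rng.standard_normal((8, 2))
B0 = rng.standard_normal((50, 2))
C0 = rng.standard_normal((12, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
recon = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - recon) / np.linalg.norm(T)
```

On noiseless low-rank data the reconstruction error drops to near machine precision, which is the standard correctness check for an ALS implementation.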


Subjects
Acoustic Stimulation/classification , Auditory Cortex/physiology , Brain-Computer Interfaces/classification , Acoustic Stimulation/methods , Acoustic Stimulation/standards , Adult , Brain-Computer Interfaces/standards , Calibration , Female , Humans , Male , Young Adult
4.
PLoS Comput Biol ; 10(12): e1003985, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25521593

ABSTRACT

A new approach for the segregation of monaural sound mixtures is presented based on the principle of temporal coherence and using auditory cortical representations. Temporal coherence is the notion that perceived sources emit coherently modulated features that evoke highly-coincident neural response patterns. By clustering the feature channels with coincident responses and reconstructing their input, one may segregate the underlying source from the simultaneously interfering signals that are uncorrelated with it. The proposed algorithm requires no prior information or training on the sources. It can, however, gracefully incorporate cognitive functions and influences such as memories of a target source or attention to a specific set of its attributes so as to segregate it from its background. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses of the physiological mechanisms of this ubiquitous and remarkable perceptual ability, and of its psychophysical manifestations in navigating complex sensory environments.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Models, Neurological , Acoustic Stimulation/classification , Algorithms , Female , Humans , Male , Noise , Speech , Time Factors
5.
Noise Health ; 15(65): 281-7, 2013.
Article in English | MEDLINE | ID: mdl-23771427

ABSTRACT

Previous studies have investigated the effects of chronic auditory stimulation with baroque music on the cardiovascular system; however, the acute effects of different styles of music on cardiac autonomic regulation remain unexamined. This study evaluated the acute effects of baroque and heavy metal music on heart rate variability (HRV) in women. The study was performed in 21 healthy women between 18 and 30 years old. We excluded persons with previous experience with a musical instrument and those who had an affinity with either song style. All procedures were performed in the same sound-proof room. We analyzed HRV in the time domain (the standard deviation of normal-to-normal R-R intervals, the root-mean square of differences between adjacent normal R-R intervals, and the percentage of adjacent R-R intervals differing by more than 50 ms) and the frequency domain (low frequency [LF], high frequency [HF], and the LF/HF ratio). HRV was recorded at rest for 10 min. Subsequently, participants were exposed to baroque or heavy metal music for 5 min through an earphone, rested for a further 5 min, and were then exposed to music again. The sequence of songs was randomized for each individual. A power analysis indicated a minimum sample of 18 subjects. The Shapiro-Wilk test was used to verify the normality of the data; analysis of variance for repeated measures followed by the Bonferroni test was applied to parametric variables, and Friedman's test followed by Dunn's post-test to non-parametric distributions. The time-domain indices did not change. In the frequency domain, LF in absolute units was reduced during heavy metal music stimulation compared to control. Acute exposure to heavy metal music thus affected sympathetic activity in healthy women.
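The three time-domain HRV indices described in the abstract are the standard ones, usually abbreviated SDNN, RMSSD, and pNN50, and they can be computed directly from a series of R-R intervals. This is a generic sketch of those definitions, not the analysis pipeline used in the study.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard HRV time-domain indices from normal-to-normal
    R-R intervals given in milliseconds.

    SDNN  : standard deviation of all NN intervals
    RMSSD : root mean square of successive NN differences
    pNN50 : percentage of successive differences exceeding 50 ms
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)
    return sdnn, rmssd, pnn50

# A perfectly steady 800 ms rhythm has zero variability on all three indices.
sdnn, rmssd, pnn50 = hrv_time_domain([800, 800, 800, 800])
```

Alternating 800/900 ms intervals, by contrast, give an RMSSD of 100 ms and a pNN50 of 100%, since every successive difference is 100 ms.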


Subjects
Acoustic Stimulation/classification , Autonomic Nervous System/physiology , Heart Rate/physiology , Music , Adolescent , Adult , Analysis of Variance , Female , Humans , Time Factors , Young Adult
6.
Exp Brain Res ; 226(2): 253-64, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23411674

ABSTRACT

We searched for evidence that the auditory organization of categories of sounds produced by actions includes a privileged or "basic" level of description. The sound events consisted of single objects (or substances) undergoing simple actions. Performance on sound events was measured in two ways: sounds were directly verified as belonging to a category, or sounds were used to create lexical priming. The category verification experiment measured the accuracy and reaction time to brief excerpts of these sounds. The lexical priming experiment measured reaction time benefits and costs caused by the presentation of these sounds prior to a lexical decision. The level of description of a sound varied in how specifically it described the physical properties of the action producing the sound. Both identification and priming effects were superior when a label described the specific interaction causing the sound (e.g. trickling) in comparison to the following: (1) more general descriptions (e.g. pour, liquid: trickling is a specific manner of pouring liquid), (2) more detailed descriptions using adverbs to provide detail regarding the manner of the action (e.g. trickling evenly). These results are consistent with neuroimaging studies showing that auditory representations of sounds produced by actions familiar to the listener activate motor representations of the gestures involved in sound production.


Subjects
Acoustic Stimulation/classification , Auditory Perception/physiology , Reaction Time/physiology , Sound , Adolescent , Adult , Classification , Female , Humans , Male , Middle Aged , Young Adult
7.
J Neurosci ; 32(38): 13273-80, 2012 Sep 19.
Article in English | MEDLINE | ID: mdl-22993443

ABSTRACT

The formation of new sound categories is fundamental to everyday goal-directed behavior. Categorization requires the abstraction of discrete classes from continuous physical features as required by context and task. Electrophysiology in animals has shown that learning to categorize novel sounds alters their spatiotemporal neural representation at the level of early auditory cortex. However, functional magnetic resonance imaging (fMRI) studies so far did not yield insight into the effects of category learning on sound representations in human auditory cortex. This may be due to the use of overlearned speech-like categories and fMRI subtraction paradigms, leading to insufficient sensitivity to distinguish the responses to learning-induced, novel sound categories. Here, we used fMRI pattern analysis to investigate changes in human auditory cortical response patterns induced by category learning. We created complex novel sound categories and analyzed distributed activation patterns during passive listening to a sound continuum before and after category learning. We show that only after training, sound categories could be successfully decoded from early auditory areas and that learning-induced pattern changes were specific to the category-distinctive sound feature (i.e., pitch). Notably, the similarity between fMRI response patterns for the sound continuum mirrored the sigmoid shape of the behavioral category identification function. Our results indicate that perceptual representations of novel sound categories emerge from neural changes at early levels of the human auditory processing hierarchy.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Learning/physiology , Sound , Acoustic Stimulation/classification , Adult , Analysis of Variance , Auditory Cortex/blood supply , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Normal Distribution , Oxygen/blood , Psychoacoustics , Spectrum Analysis , Young Adult
8.
Neuroreport ; 23(16): 947-51, 2012 Nov 14.
Article in English | MEDLINE | ID: mdl-22989928

ABSTRACT

This study examined the classification of initial dips during passive listening to single words by analysis of vectors of deoxyHb and oxyHb measurements simultaneously derived from near-infrared spectroscopy. The initial dip response during a single-word 1.5-s task in 13 healthy participants was significant only in the language area, which includes the left posterior superior temporal gyrus and angular gyrus. Event-related vectors of responses to comprehended words moved significantly into phase 4, a dip phase, whereas vectors of responses to unknown words moved into a nondip phase (P<0.05). The same results were reproduced after previously unknown words were learnt by the participants. Among the five dip phases, reflecting variations in transient oxygen metabolic regulation during a task, the frequency of occurrence of hypoxic-ischemic initial dips (decreased oxyHb) was around three times that of the canonical dip (increased deoxyHb and oxyHb). Phase classification of event-related vectors enhances the slight amount of oxygen exchange that occurs in word recognition, which has been difficult to detect because of its small amplitude.


Subjects
Acoustic Stimulation/classification , Acoustic Stimulation/methods , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Spectroscopy, Near-Infrared/classification , Spectroscopy, Near-Infrared/methods , Temporal Lobe/physiology , Adult , Brain Mapping/classification , Brain Mapping/methods , Comprehension/physiology , Female , Humans , Male , Young Adult
9.
Neural Netw ; 34: 80-95, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22858421

ABSTRACT

In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve high classification performance over dynamic audio and video repositories. The proposed framework adopts a "Divide and Conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to full-scale re-training or re-configuration. The CNBC framework is therefore particularly suited to dynamically varying databases, to which no conventional static classifier can adapt. In short, it is an entirely novel topology and an unprecedented approach for dynamic, content/data-adaptive, and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among individual audio classes. Experiments demonstrate the high classification accuracy (above 90%) and efficiency of the proposed framework over large and dynamic audio databases.


Subjects
Acoustic Stimulation/classification , Biological Evolution , Neural Networks, Computer , Pattern Recognition, Physiological , Acoustic Stimulation/methods
10.
Clin Neurophysiol ; 123(7): 1300-8, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22197447

ABSTRACT

OBJECTIVE: Conflicting reports of P200 amplitude and latency in schizophrenia have suggested that this component is increased, reduced or does not differ from healthy subjects. A systematic review and meta-analysis were undertaken to accurately describe P200 deficits in auditory oddball tasks in schizophrenia. METHODS: A systematic search identified 20 studies which were meta-analyzed. Effect size (ES) estimates were obtained: P200 amplitude and latency for target and standard tones at midline electrodes. RESULTS: The ES obtained for amplitude (Cz) for standard and target stimuli indicate significant effects in opposite directions: standard stimuli elicit smaller P200 in patients (d = -0.36; 95% CI [-0.26, -0.08]); target stimuli elicit larger P200 in patients (d = 0.48; 95% CI [0.16, 0.82]). A similar effect occurs for latency at Cz, which is shorter for standards (d = -0.32; 95% CI [-0.54, -0.10]) and longer for targets (d = 0.42; 95% CI [0.23, 0.62]). Meta-regression analyses revealed that samples with more males show larger ES for amplitude of target stimuli, while the amount of medication was negatively associated with the ES for the latency of standards. CONCLUSIONS: The results obtained suggest that claims of reduced or augmented P200 in schizophrenia based on the sole examination of standard or target stimuli fail to consider the stimulus effect. SIGNIFICANCE: Quantification of effects for standard and target stimuli is a required first step to understand the nature of P200 deficits in schizophrenia.


Subjects
Acoustic Stimulation/classification , Evoked Potentials, Auditory/physiology , Evoked Potentials/physiology , Schizophrenia/physiopathology , Acoustic Stimulation/methods , Adult , Case-Control Studies , Electroencephalography , Female , Humans , Male , Reaction Time/physiology , Regression Analysis
11.
J Exp Psychol Appl ; 18(1): 52-80, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22122114

ABSTRACT

In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second experimental study, 30 participants classified and described 56 sounds exclusively made by solid objects. The participants were required to concentrate on the actions causing the sounds independent of the sound source. The classifications were analyzed with a specific hierarchical cluster technique that accounted for possible cross-classifications, and the verbalizations were submitted to statistical lexical analyses. The results of the first study highlighted 4 main categories of sounds: solids, liquids, gases, and machines. The results of the second study indicated a distinction between discrete interactions (e.g., impacts) and continuous interactions (e.g., tearing) and suggested that actions and objects were not independent organizational principles. We propose a general structure of environmental sound categorization based on the sounds' temporal patterning, which has practical implications for the automatic classification of environmental sounds.


Subjects
Acoustic Stimulation/classification , Auditory Perception , Environment , Sound , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Middle Aged
12.
Exp Brain Res ; 214(4): 597-605, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21912929

ABSTRACT

Previous research examining cross-modal conflicts in object recognition has often made use of animal vocalizations and images, which may be considered natural and ecologically valid, thus strengthening the association in the congruent condition. The current research tested whether the same cross-modal conflict would exist for man-made object sounds as well as comparing the speed and accuracy of auditory processing across the two object categories. Participants were required to attend to a sound paired with a visual stimulus and then respond to a verification item (e.g., "Dog?"). Sounds were congruent (same object), neutral (unidentifiable image), or incongruent (different object) with the images presented. In the congruent and neutral condition, animals were recognized significantly faster and with greater accuracy than man-made objects. It was hypothesized that in the incongruent condition, no difference in reaction time or error rate would be found between animals and man-made objects. This prediction was not supported, indicating that the association between an object's sound and image may not be that disparate when comparing animals to man-made objects. The findings further support cross-modal conflict research for both the animal and man-made object category. The most important finding, however, was that auditory processing is enhanced for living compared to nonliving objects, a difference only previously found in visual processing. Implications relevant to both the neuropsychological literature and sound research are discussed.


Subjects
Acoustic Stimulation/classification , Auditory Perception/physiology , Conflict, Psychological , Photic Stimulation , Recognition, Psychology/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Reaction Time/physiology , Young Adult
13.
J Cogn Neurosci ; 23(6): 1315-31, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20521860

ABSTRACT

The formation of cross-modal object representations was investigated using a novel paradigm that was previously successful in establishing unimodal visual category learning in monkeys and humans. The stimulus set consisted of six categories of bird shapes and sounds that were morphed to create different exemplars of each category. Subjects learned new cross-modal bird categories using a one-back task. Over time, the subjects became faster and more accurate in categorizing the birds. After 3 days of training, subjects were scanned while passively viewing and listening to trained and novel bird types. Stimulus blocks consisted of bird sounds only, bird pictures only, matching pictures and sounds (cross-modal congruent), and mismatching pictures and sounds (cross-modal incongruent). fMRI data showed unimodal and cross-modal training effects in the right fusiform gyrus. In addition, the left STS showed cross-modal training effects in the absence of unimodal training effects. Importantly, for both the right fusiform gyrus and the left STS, the newly formed cross-modal representation was specific for the trained categories. Learning did not generalize to incongruent combinations of learned sounds and shapes; their response did not differ from the response to novel cross-modal bird types. Moreover, responses were larger for congruent than for incongruent cross-modal bird types in the right fusiform gyrus and STS, providing further evidence that categorization training induced the formation of meaningful cross-modal object representations.


Subjects
Acoustic Stimulation/classification , Auditory Perception/physiology , Brain/physiology , Learning/physiology , Photic Stimulation , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Age Factors , Female , Humans , Male , Neuronal Plasticity/physiology , Photic Stimulation/methods , Teaching/methods , Young Adult
14.
J Neurosci Methods ; 191(1): 110-8, 2010 Aug 15.
Article in English | MEDLINE | ID: mdl-20595034

ABSTRACT

Prior studies of multichannel ECoG from animals showed that beta and gamma oscillations carried perceptual information in both local and global spatial patterns of amplitude modulation when the subjects were trained to discriminate conditioned stimuli (CS). Here the hypothesis was tested that similar patterns could be found in the scalp EEG of human subjects trained to discriminate simultaneous visual-auditory CS. Signals were continuously recorded from 64 equispaced scalp electrodes and band-pass filtered. The Hilbert transform gave the analytic phase, which segmented the EEG into temporal frames, and the analytic amplitude, which expressed the pattern in each frame as a feature vector. Methods applied to the ECoG were adapted to the EEG for a systematic search of the beta-gamma spectrum, the time period after CS onset, and the scalp surface to locate patterns that could be classified with respect to type of CS. Spatial patterns of EEG amplitude modulation were found in all subjects that could be classified with respect to stimulus combination type significantly above chance levels. The patterns were found in the beta range (15-22 Hz) but not in the gamma range. They occurred in three short bursts following CS onset. They were non-local, occupying the entire array. Our results suggest that the scalp EEG can yield information about the timing of episodically synchronized brain activity in higher cognitive function, so that future studies in brain-computer interfacing can be better focused. Our methods may be most valuable for analyzing data from dense arrays with very high spatial and temporal sampling rates.
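The analytic amplitude and phase delivered by the Hilbert transform can be reproduced in a few lines of NumPy. The FFT construction below is the standard one (equivalent to scipy.signal.hilbert); the 18 Hz test tone and 250 Hz sampling rate are assumptions chosen only to land inside the 15-22 Hz beta band mentioned in the abstract.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies
    and double the positive ones, then inverse-transform."""
    x = np.asarray(x, dtype=float)
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 250.0                              # assumed EEG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
beta = np.sin(2 * np.pi * 18.0 * t)     # 18 Hz tone in the beta band
z = analytic_signal(beta)
amplitude = np.abs(z)                   # analytic amplitude -> feature vectors
phase = np.unwrap(np.angle(z))          # analytic phase -> frame segmentation
```

For a pure tone whose frequency falls on an FFT bin, the analytic amplitude is flat at the tone's amplitude, and the slope of the unwrapped phase recovers the tone's frequency.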


Subjects
Brain Mapping/methods , Cerebral Cortex/physiology , Electroencephalography/classification , Electroencephalography/methods , Perception/physiology , Sensation/physiology , Signal Processing, Computer-Assisted , Acoustic Stimulation/classification , Acoustic Stimulation/methods , Adult , Biological Clocks/physiology , Brain Mapping/classification , Cognition/classification , Cognition/physiology , Cortical Synchronization , Discrimination Learning/classification , Discrimination Learning/physiology , Evoked Potentials/physiology , Humans , Male , Pattern Recognition, Automated , Photic Stimulation/methods , Software/classification , Software/standards , Young Adult
15.
J Neurophysiol ; 104(3): 1426-37, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20610781

ABSTRACT

Songbirds, which, like humans, learn complex vocalizations, provide an excellent model for the study of acoustic pattern recognition. Here we examined the role of three basic acoustic parameters in an ethologically relevant categorization task. Female zebra finches were first trained to classify songs as belonging to one of two males and then asked whether they could generalize this knowledge to songs systematically altered with respect to frequency, timing, or intensity. Birds' performance on song categorization fell off rapidly when songs were altered in frequency or intensity, but they generalized well to songs that were changed in duration by >25%. Birds were not deaf to timing changes, however; they detected these tempo alterations when asked to discriminate between the same song played back at two different speeds. In addition, when birds were retrained with songs at many intensities, they could correctly categorize songs over a wide range of volumes. Thus although they can detect all these cues, birds attend less to tempo than to frequency or intensity cues during song categorization. These results are unexpected for several reasons: zebra finches normally encounter a wide range of song volumes but most failed to generalize across volumes in this task; males produce only slight variations in tempo, but females generalized widely over changes in song duration; and all three acoustic parameters are critical for auditory neurons. Thus behavioral data place surprising constraints on the relationship between previous experience, behavioral task, neural responses, and perception. We discuss implications for models of auditory pattern recognition.


Subjects
Acoustic Stimulation/methods , Auditory Perception/physiology , Cues , Discrimination Learning/physiology , Vocalization, Animal/physiology , Acoustic Stimulation/classification , Animals , Female , Finches , Male , Time Factors
16.
Hear Res ; 262(1-2): 26-33, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20123119

ABSTRACT

Frequency-tuning is a fundamental property of auditory neurons. The filter bandwidth of peripheral auditory neurons determines the frequency resolution of an animal's auditory system. Behavioural studies in animals and humans have defined frequency-tuning in terms of the "equivalent-rectangular bandwidth" (ERB) of peripheral filters. In contrast, most physiological studies report the Q [best frequency/bandwidth] of frequency-tuning curves. This study aims to accurately describe the ERB of primary-like and chopper units in the ventral cochlear nucleus, the first brainstem processing station of the central auditory system. Recordings were made from 1020 isolated single units in the ventral cochlear nucleus of anesthetized guinea pigs in response to pure-tone stimuli which varied in frequency and in sound level. Frequency-threshold tuning curves were constructed for each unit and estimates of the ERB determined using methods previously described for auditory-nerve-fibre data in the same species. Primary-like, primary-notch, and sustained- and transient-chopper units showed frequency selectivity almost identical to that recorded in the auditory nerve. Their tuning at pure-tone threshold can be described as a function of best frequency (BF) by ERB = 0.31 * BF^0.5.
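The fitted relation at the end of the abstract is straightforward to evaluate, and the Q measure used in physiological studies follows directly from it as BF/ERB. The helper below takes both BF and ERB in kHz; that unit convention is an assumption that should be checked against the original paper.

```python
def erb_vcn(bf_khz):
    """ERB of ventral cochlear nucleus units at pure-tone threshold,
    using the fitted relation ERB = 0.31 * BF**0.5.
    Units are assumed to be kHz for both BF and ERB."""
    return 0.31 * bf_khz ** 0.5

def q_erb(bf_khz):
    """ERB-based quality factor: Q = BF / ERB = BF**0.5 / 0.31,
    so sharpness of tuning grows with best frequency."""
    return bf_khz / erb_vcn(bf_khz)
```

For example, a 1 kHz unit gets an ERB of 0.31 kHz under this fit, and quadrupling the best frequency doubles the ERB while also doubling Q.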


Subjects
Acoustic Stimulation/classification , Cochlear Nerve/physiology , Cochlear Nucleus/physiology , Guinea Pigs/physiology , Unconsciousness , Animals , Auditory Pathways/physiology , Auditory Threshold/physiology , Evoked Potentials, Auditory/physiology
17.
Hear Res ; 262(1-2): 34-44, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20123120

ABSTRACT

The purpose of this study was to compare cortical brain responses evoked by amplitude modulated acoustic beats of 3 and 6 Hz in tones of 250 and 1000 Hz with those evoked by their binaural beats counterparts in unmodulated tones to indicate whether the cortical processes involved differ. Event-related potentials (ERPs) were recorded to 3- and 6-Hz acoustic and binaural beats in 2000 ms duration 250 and 1000 Hz tones presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to beats-evoked oscillations were determined and compared across beat types, beat frequencies and base (carrier) frequencies. All stimuli evoked tone-onset components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude in response to acoustic than to binaural beats, to 250 than to 1000 Hz base frequency and to 3 Hz than to 6 Hz beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions located mostly to left temporal lobe areas. Differences between estimated sources of potentials to acoustic and binaural beats were not significant. The perceptions of binaural beats involve cortical activity that is not different than acoustic beats in distribution and in the effects of beat- and base frequency, indicating similar cortical processing.


Subjects
Acoustic Stimulation/classification , Acoustics , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Adolescent , Adult , Evoked Potentials/physiology , Female , Hearing/physiology , Humans , Male , Young Adult
18.
Hear Res ; 262(1-2): 19-25, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20138978

ABSTRACT

Although much is understood about the stimulus properties affecting the latency of saccadic eye movements to visual targets, relatively little is known about the properties affecting saccades to auditory targets. This study examined the effect of three primary acoustic features-frequency, intensity, and spatial location-on auditory saccade characteristics in humans, and compared them to visual saccades. Saccade targets were presented from an azimuthal array of speakers and LEDs spanning +/-36 degrees. There was an 'eccentricity effect' for auditory saccades such that latencies decreased by up to 70 ms with eccentricity. This was observed for all frequencies and intensities tested. There was a smaller effect in the opposite direction for visual saccades. Auditory saccades had similar latencies to visual saccades (within 5 ms) for near-midline locations, but were up to 90 ms faster at eccentric locations (+/-36 degrees). Overall, saccadic latencies were shortest for wideband noise and narrowband noises with center frequencies falling within the human speech range. Examination of saccade accuracy showed decreasing accuracy with increasing eccentricity, and a negative correlation between accuracy and latency for auditory stimuli.


Subjects
Acoustic Stimulation/classification , Reaction Time/physiology , Saccades/physiology , Adult , Female , Humans , Male , Middle Aged , Sound
19.
Eur J Neurosci ; 30(2): 339-46, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19614974

ABSTRACT

During speech perception, sound is mapped onto abstract phonological categories. Assimilation of place or manner of articulation in connected speech challenges this categorization. Does assimilation result in categorizations that need to be corrected later on, or does the system get it right immediately? Participants were presented with isolated nasals (/m/ labial, /n/ alveolar, and /n'/ assimilated towards labial place of articulation), extracted from naturally produced German utterances. Behavioural two-alternative forced-choice tasks showed that participants could correctly categorize the /n/s and /m/s. The assimilated nasals were predominantly categorized as /m/, indicative of a perceived change in place. A pitch variation additively influenced the categorizations. Using magnetoencephalography (MEG), we analysed the N100m elicited by the same stimuli without a categorization task. In sharp contrast to the behavioural data, this early, automatic brain response ignored the assimilation in the surface form and reflected the underlying category. As shown by distributed source modelling, phonemic differences were processed exclusively left-laterally (temporally and parietally), whereas the pitch variation was processed in temporal regions bilaterally. In conclusion, explicit categorization draws attention to the surface form - to the changed place and acoustic information. The N100m reflects automatic categorization, which exploits any hint of an underlying feature.


Subjects
Acoustic Stimulation/classification , Auditory Perception/physiology , Choice Behavior/physiology , Phonetics , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Young Adult
20.
J Am Acad Audiol ; 19(4): 348-70, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18795473

ABSTRACT

This article investigates the different acoustic signals that hearing aid users are exposed to in their everyday environment. Binaural microphone signals from recording positions close to the microphone locations of behind-the-ear hearing aids were recorded by 20 hearing aid users during daily life. The recorded signals were acoustically analyzed with regard to narrowband short-term level distributions. The subjects also performed subjective assessments of their own recordings in the laboratory, using several questions from the Glasgow Hearing Aid Benefit Profile (GHABP) questionnaire. Both the questionnaire and the acoustic analysis data show that the importance, problems, and hearing aid benefit, as well as the acoustic characteristics of the individual situations, vary considerably across subjects. Therefore, in addition to a nonlinear hearing aid fitting, further signal classification and signal/situation-adaptive features are highly desirable inside modern hearing aids. These should be compatible with the variability of the individual sound environments of hearing-impaired listeners.
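A narrowband short-term level distribution of the kind analyzed here can be sketched as follows: band-pass filter a recording, compute frame-wise RMS levels in dB, and summarize the resulting distribution with percentiles. This is a minimal illustration assuming numpy and scipy; the window length, filter order, band edges, and percentile choices are our own assumptions, not the article's analysis parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def shortterm_band_levels(x, fs, f_lo, f_hi, win_s=0.125):
    """Short-term RMS levels (dB) of one narrowband of a recording:
    band-pass filter, then one level per analysis window."""
    sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    band = sosfilt(sos, x)
    n = int(win_s * fs)                       # samples per window
    frames = band[: len(band) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(rms + 1e-12)         # dB re full scale

# Summarize the level distribution with exceedance percentiles:
# L90 (exceeded 90% of the time), L50 (median), L10 (exceeded 10%).
rng = np.random.default_rng(0)
x = rng.standard_normal(16000 * 10)           # stand-in: 10 s of noise, 16 kHz
levels = shortterm_band_levels(x, 16000, 500, 707)  # ~half-octave band at 500 Hz
l90, l50, l10 = np.percentile(levels, [10, 50, 90])
```

Computing such distributions per band and per recorded situation is what makes it possible to compare the acoustic characteristics of everyday environments across subjects.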


Subjects
Acoustic Stimulation/classification , Environment , Hearing Aids , Hearing Loss/therapy , Noise , Adolescent , Adult , Aged , Aged, 80 and over , Female , Hearing Loss/psychology , Humans , Male , Middle Aged , Patient Satisfaction , Surveys and Questionnaires , Tape Recording