Results 1 - 2 of 2
1.
Clin EEG Neurosci; 54(6): 620-627, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35410509

ABSTRACT

Speech-sound stimuli have a complex structure, and it is unclear how the brain processes them. An event-related potential (ERP) known as mismatch negativity (MMN) is elicited when an individual's brain detects a rare sound. In this study, MMNs were measured in response to an omitted segment of a complex sound consisting of a Japanese vowel. The results indicated that the latency from onset in the right hemisphere was significantly shorter than that at the frontal midline and in the left hemisphere during left-ear stimulation. Additionally, the latency from omission was longer for stimuli omitted in the latter part of the temporal window of integration (TWI) than for stimuli omitted in the first part of the TWI. The mean peak amplitude was higher in the right hemisphere than at the frontal midline and in the left hemisphere in response to left-ear stimulation. In conclusion, these results suggest that it would be incorrect to assume that the stimuli strictly have the characteristics of speech sounds. However, the interaction effect on the latencies from omission was not significant, suggesting that the detection time for deviance may not be related to the stimulated ear. The effect of the type of deviant stimulus on latency was significant: detection of deviants was delayed when the deviation occurred in the latter part of the TWI, regardless of which ear was stimulated.
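As an illustrative aside (not part of the original citation), the sketch below shows one common way an MMN difference wave and its peak latency and amplitude are extracted: the deviant-minus-standard difference is computed from averaged ERPs and its most negative peak is located in a post-onset window. The simulated data, the 1000 Hz sampling rate, and the 100-250 ms search window are assumptions for illustration only, not values reported in the study.

```python
# Minimal sketch (not the authors' analysis code): omission-MMN difference wave
# and peak latency/amplitude from simulated grand-average ERPs.
import numpy as np

fs = 1000                      # assumed sampling rate in Hz
t = np.arange(-100, 400) / fs  # epoch from -100 ms to +400 ms around stimulus onset

rng = np.random.default_rng(0)
# Simulated grand-average ERPs (microvolts) for standard and omission-deviant trials
erp_standard = rng.normal(0, 0.2, t.size)
erp_deviant = rng.normal(0, 0.2, t.size) - 1.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))

# MMN is conventionally the deviant-minus-standard difference wave
mmn = erp_deviant - erp_standard

# Search for the most negative peak in an assumed 100-250 ms post-onset window
window = (t >= 0.100) & (t <= 0.250)
peak_idx = np.argmin(mmn[window])
peak_latency_ms = t[window][peak_idx] * 1000
peak_amplitude_uv = mmn[window][peak_idx]
print(f"MMN peak: {peak_amplitude_uv:.2f} uV at {peak_latency_ms:.0f} ms")
```

The same peak-picking step, applied per electrode site, is the kind of measurement that underlies the hemisphere and latency comparisons described in the abstract.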


Subject(s)
Evoked Potentials, Auditory , Phonetics , Humans , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Electroencephalography/methods , Sound
2.
Biol Psychol; 151: 107848, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31981583

ABSTRACT

Both stream segregation and temporal integration are considered important for auditory scene analysis in the brain. Several previous studies have indicated that stream segregation may precede temporal integration when both processes are required. In the present study, we utilized mismatch negativity (MMN), which reflects automatic change detection, to systematically estimate the threshold of the frequency difference at which stream segregation occurs prior to temporal integration when these functions occur together during a state of inattention. Electroencephalography (EEG) data were recorded from 22 healthy Japanese men presented with six blocks of alternating high pure tones (high tones) and low pure tones (low tones). Only high tones were omitted, with 5% probability, in all blocks. Our results indicated that, as reflected by omission-MMN elicitation, stream segregation cancels the temporal integration of close sounds when the frequency difference is 1000 Hz or larger.
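As an illustrative aside (not part of the original citation), the sketch below builds one block of an alternating high/low pure-tone sequence in which only high tones are replaced by silence with 5% probability, matching the omission structure described in the abstract. The tone frequencies, durations, and block length are placeholder assumptions; the study varied the frequency difference across blocks.

```python
# Minimal sketch (assumption, not the authors' stimulus code): one block of
# alternating high/low tones with 5% omission applied to high tones only.
import numpy as np

fs = 44100                        # assumed audio sampling rate in Hz
tone_dur = 0.05                   # assumed tone duration, 50 ms
low_hz, high_hz = 500.0, 1500.0   # placeholder frequencies (difference varied per block)
n_pairs = 200                     # assumed number of high/low pairs per block
p_omit = 0.05                     # 5% omission probability for high tones

rng = np.random.default_rng(0)
t = np.arange(int(fs * tone_dur)) / fs

def tone(freq_hz):
    return np.sin(2 * np.pi * freq_hz * t)

silence = np.zeros_like(t)
segments = []
for _ in range(n_pairs):
    # The high tone is replaced by silence on 5% of presentations (omission deviant)
    segments.append(silence if rng.random() < p_omit else tone(high_hz))
    segments.append(tone(low_hz))  # low tones are never omitted

block = np.concatenate(segments)   # one continuous stimulus block
```

Whether the omission elicits an MMN then indicates whether the high and low tones were integrated into one stream or segregated into two, which is the logic of the threshold estimate reported above.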


Subject(s)
Acoustic Stimulation/psychology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors , Acoustic Stimulation/methods , Adult , Electroencephalography , Healthy Volunteers , Humans , Male , Middle Aged , Sound , Young Adult