Results 1 - 16 of 16
1.
Sci Rep ; 14(1): 10194, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702398

ABSTRACT

Paired associative stimulation (PAS) consisting of high-intensity transcranial magnetic stimulation (TMS) and high-frequency peripheral nerve stimulation (known as high-PAS) induces plastic changes and improves motor performance in patients with incomplete spinal cord injury (SCI). Listening to music during PAS may potentially improve mood and arousal and facilitate PAS-induced neuroplasticity via auditory-motor coupling, but the effects have not been explored. This pilot study aimed to determine if the effect of high-PAS on motor-evoked potentials (MEPs) and subjective alertness can be augmented with music. Ten healthy subjects and nine SCI patients received three high-PAS sessions in randomized order (PAS only, PAS with music synchronized to TMS, PAS with self-selected music). MEPs were measured before (PRE), after (POST), 30 min (POST30), and 60 min (POST60) after stimulation. Alertness was evaluated with a questionnaire. In healthy subjects, MEPs increased at POST in all sessions and remained higher at POST60 in PAS with synchronized music compared with the other sessions. There was no difference in alertness. In SCI patients, MEPs increased at POST and POST30 in PAS only but not in other sessions, whereas alertness was higher in PAS with self-selected music. More research is needed to determine the potential clinical effects of using music during high-PAS.


Subjects
Motor Evoked Potentials , Spinal Cord Injuries , Transcranial Magnetic Stimulation , Humans , Spinal Cord Injuries/physiopathology , Spinal Cord Injuries/therapy , Male , Female , Adult , Transcranial Magnetic Stimulation/methods , Middle Aged , Motor Evoked Potentials/physiology , Pilot Projects , Music , Healthy Volunteers , Arousal/physiology , Music Therapy/methods
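As a minimal illustration of how the MEP outcome in this study is typically quantified — percent change in mean amplitude relative to the PRE baseline — here is a sketch with entirely hypothetical amplitude values (not data from the study):

```python
import numpy as np

def mep_change(pre_amplitudes, post_amplitudes):
    """Percent change in mean MEP amplitude relative to the PRE baseline."""
    pre = np.mean(pre_amplitudes)
    post = np.mean(post_amplitudes)
    return 100.0 * (post - pre) / pre

# Hypothetical peak-to-peak amplitudes (mV) for one subject
pre = [0.8, 1.0, 0.9, 1.1]
post60 = [1.3, 1.5, 1.4, 1.6]
print(f"POST60 change: {mep_change(pre, post60):+.1f}%")
```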
2.
Front Psychol ; 14: 1153968, 2023.
Article in English | MEDLINE | ID: mdl-37928563

ABSTRACT

The mere co-presence of an unfamiliar person may significantly modulate an individual's attentive engagement with specific events or situations. To better understand how such social presence affects experiences, we recorded parallel multimodal facial and psychophysiological data from subjects (N = 36) who listened to dramatic audio scenes either alone or facing an unfamiliar person. The stimuli, a selection of 6-s affective sound clips (IADS-2) followed by a 27-min soundtrack extracted from a Finnish episode film, depicted familiar and often intense social situations from the everyday world. Considering the systemic complexity of both the chosen naturalistic stimuli and the expected variations in the experimental social situation, we applied a novel combination of signal analysis methods: inter-subject correlation (ISC) analysis, Representational Similarity Analysis (RSA), and Recurrence Quantification Analysis (RQA), followed by gradient boosting classification. We report our findings for three facial signals, gaze, eyebrow, and smile, that can be linked to socially motivated facial movements. We found that ISC values of pairs, whether calculated on true pairs or on any two individuals who had a partner, were lower than those of the group of single individuals. Thus, the audio stimuli induced more unique responses in subjects who listened in the presence of another person, while individual listeners tended to yield a more uniform response driven by the dramatized audio stimulus alone. Furthermore, our classifier models, trained on recurrence properties of the gaze, eyebrow, and smile signals, demonstrated distinctive differences in the recurrence dynamics of signals from paired subjects and revealed the impact of individual differences on the latter. We showed that the presence of an unfamiliar co-listener, which modifies the social dynamics of dyadic listening tasks, can be detected reliably from visible facial modalities. By applying our analysis framework to a broader range of psychophysiological data, together with content annotations and participants' subjective reports, we expect more detailed dyadic dependencies to be revealed. Our work contributes towards modeling and predicting human social behaviors in specific types of audio-visually mediated, virtual, and live social situations.
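The inter-subject correlation (ISC) measure used in the study above can be sketched as the mean pairwise Pearson correlation across subjects' time-locked signals. A minimal illustration on simulated data (not the authors' pipeline; group sizes and weights are invented):

```python
import numpy as np

def inter_subject_correlation(signals):
    """Mean pairwise Pearson correlation across subjects.

    signals: array of shape (n_subjects, n_timepoints), one facial or
    physiological signal per subject, time-locked to the same stimulus.
    """
    n = signals.shape[0]
    r = np.corrcoef(signals)            # n x n correlation matrix
    upper = r[np.triu_indices(n, k=1)]  # unique subject pairs only
    return upper.mean()

rng = np.random.default_rng(0)
shared = rng.standard_normal(500)                # stimulus-driven component
noise = rng.standard_normal((10, 500))
coupled = 0.8 * shared + 0.6 * noise[:5]         # strongly stimulus-driven group
idiosyncratic = 0.2 * shared + 1.0 * noise[5:]   # weakly stimulus-driven group
print(inter_subject_correlation(coupled) > inter_subject_correlation(idiosyncratic))
```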

3.
Front Aging Neurosci ; 15: 1236971, 2023.
Article in English | MEDLINE | ID: mdl-37731954

ABSTRACT

Background: Understanding healthy brain ageing has become vital as populations are ageing rapidly and age-related brain diseases are becoming more common. In normal brain ageing, speech processing undergoes functional reorganisation involving reductions of hemispheric asymmetry and overactivation in the prefrontal regions. However, little is known about how these changes generalise to other vocal production, such as singing, and how they are affected by associated cognitive demands. Methods: The present cross-sectional fMRI study systematically maps the neural correlates of vocal production across adulthood (N = 100, age 21-88 years) using a balanced 2 × 3 design where tasks varied in modality (speech: proverbs / singing: song phrases) and cognitive demand (repetition / completion from memory / improvisation). Results: In speech production, ageing was associated with decreased left pre- and postcentral activation across tasks and increased bilateral angular and right inferior temporal and fusiform activation in the improvisation task. In singing production, ageing was associated with increased activation in medial and bilateral prefrontal and parietal regions in the completion task, whereas other tasks showed no ageing effects. Direct comparisons between the modalities showed larger age-related activation changes in speech than singing across tasks, including a larger left-to-right shift in lateral prefrontal regions in the improvisation task. Conclusion: The present results suggest that the brain's singing network undergoes differential functional reorganisation in normal ageing compared to the speech network, particularly during a task with high executive demand. These findings are relevant for understanding the effects of ageing on vocal production as well as how singing can support communication in healthy ageing and neurological rehabilitation.

4.
Clin Linguist Phon ; 37(4-6): 345-362, 2023 06 03.
Article in English | MEDLINE | ID: mdl-36106455

ABSTRACT

Accumulating evidence suggests that ultrasound visual feedback increases treatment efficacy for persistent speech sound errors. However, the available evidence is mostly from English. This is a feasibility study of ultrasound visual feedback for treating distortion of the Finnish [r]. We developed a web-based application for auditory-perceptual judgement and investigated the impact of listener experience on perceptual judgement as well as the intra-rater reliability of listeners. Four boys (10-11 years) with distortion of [r] but otherwise typical development took part in eight ultrasound treatment sessions. In total, 117 [r] samples collected at pre- and post-intervention were judged on a visual analogue scale (VAS) by two listener groups: five speech and language therapists (SLTs) and six SLT students. We constructed a linear mixed-effects model with fixed effects for time and listener group and several random effects. Our findings indicate that measurement time had a significant main effect on judgement results, χ2 = 78.82, p < 0.001. The effect of listener group was non-significant, but a significant group × time interaction, χ2 = 6.33, p < 0.012, was observed. We further explored the effect of group with nested models, and the results revealed a non-significant effect of group. The average intra-rater correlation of the 11 listeners was 0.83 for the pre-intervention samples and 0.92 for the post-intervention samples, showing a good to excellent degree of agreement. The Finnish [r] sound can thus be evaluated with VAS, and ultrasound visual feedback is a feasible and promising method for treating distortion of [r]; its efficacy should be further assessed.


Subjects
Sensory Feedback , Speech Perception , Male , Humans , Visual Analog Scale , Reproducibility of Results , Finland , Feasibility Studies , Speech Production Measurement/methods
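The linear mixed-effects analysis described in the abstract above — judgement as a function of measurement time with listener-level random effects — can be sketched roughly as follows. The data, effect sizes, and variable names are invented for illustration; the authors' actual model also included a listener-group fixed effect and further random terms:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for listener in range(11):            # 11 listeners, as in the study
    bias = rng.normal(0, 5)           # random intercept per listener
    for time in ["pre", "post"]:
        for _ in range(10):           # simulated VAS judgements per cell
            base = 40.0 if time == "pre" else 70.0  # hypothetical improvement
            rows.append({"listener": listener, "time": time,
                         "vas": base + bias + rng.normal(0, 8)})
df = pd.DataFrame(rows)

# Fixed effect of measurement time, random intercept for each listener
fit = smf.mixedlm("vas ~ time", df, groups=df["listener"]).fit()
```

With "post" as the alphabetical reference level, the `time[T.pre]` coefficient estimates how much lower the pre-intervention judgements are.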
5.
Front Physiol ; 13: 947184, 2022.
Article in English | MEDLINE | ID: mdl-36160868

ABSTRACT

Circadian rhythms relate to multiple aspects of health and wellbeing, including physical activity patterns. A vulnerable circadian regulation predisposes to circadian misalignment, poor sleep, sleep deprivation, increased sleepiness, and thereby sedentary behavior. Adolescents' circadian regulation is particularly vulnerable and may lead to sedentary behavior. To investigate which factors associate most strongly between physical activity (PA) and circadian behavior, we conducted multimodal circadian rhythm analyses and investigated how individual characteristics of habitual circadian patterns associate with objectively measured PA. We studied 312 adolescents (70% female; 56% with delayed sleep phase, DSP), mean age 16.9 years. Circadian period length, temperature mesor (estimated 24 h midline), and amplitude (difference between mesor and peak) were measured using distally attached thermologgers (iButton 1922L, 3-day measurement). We additionally utilized algorithm-formed clusters of circadian rhythmicity. Sleep duration, timing, DSP, and PA were measured using actigraphs (GeneActiv Original, 10-day measurement). We found that continuous circadian period length was not associated with PA, but a lower mesor and higher amplitude were consistently associated with higher levels of PA, as indicated by mean metabolic equivalent (METmean) and moderate-to-vigorous PA (MVPA), even when controlling for sleep duration. Separate circadian clusters formed by the algorithm reflected correspondingly distinct patterns of PA. Late sleepers and those with DSP were less likely to engage in MVPA than non-DSP adolescents and showed more sedentary behavior. Adolescents who engage in higher levels of PA or in high-intensity PA have better circadian regulation, as measured by different objective methods including distal temperature measurements as well as actigraphy-measured sleep-wake behavior.
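The temperature mesor and amplitude described above are commonly estimated with a cosinor-type fit of a 24-h cosine to the temperature series. A rough sketch on simulated 3-day data (not the authors' exact method; the true parameter values here are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t_hours, mesor, amplitude, acrophase):
    """24-h cosinor model of distal skin temperature."""
    return mesor + amplitude * np.cos(2 * np.pi * (t_hours - acrophase) / 24.0)

# Simulated 3-day temperature series sampled every 10 minutes
t = np.arange(0, 72, 1 / 6)
rng = np.random.default_rng(2)
temp = cosinor(t, 33.0, 1.5, 4.0) + rng.normal(0, 0.3, t.size)

# Least-squares fit recovers mesor (midline) and amplitude (peak - mesor)
(mesor, amplitude, acrophase), _ = curve_fit(cosinor, t, temp, p0=[34, 1, 0])
```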

6.
Front Psychol ; 12: 730924, 2021.
Article in English | MEDLINE | ID: mdl-34966319

ABSTRACT

The neurophysiological properties of rapid eye movement sleep (REMS) are believed to tune down stressor-related emotional responses. While prior experimental findings are controversial, evidence suggests that affective habituation is hindered if REMS is fragmented. To elucidate the topic, we evoked self-conscious negative affect in the participants (N = 32) by exposing them to their own out-of-tune singing in the evening. Affective response to the stressor was measured with skin conductance response and subjectively reported embarrassment. To address possible inter-individual variance toward the stressor, we measured the shame-proneness of participants with an established questionnaire. The stressor was paired with a sound cue to pilot a targeted memory reactivation (TMR) protocol during the subsequent night's sleep. The sample was divided into three conditions: control (no TMR), TMR during slow-wave sleep, and TMR during REMS. We found that pre- to post-sleep change in affective response was not influenced by TMR. However, REMS percentage was associated negatively with overnight skin conductance response habituation, especially in those individuals whose REMS was fragmented. Moreover, shame-proneness interacted with REM fragmentation such that the higher the shame-proneness, the more the affective habituation was dependent on non-fragmented REMS. In summary, the potential of REMS in affective processing may depend on the quality of REMS as well as on individual vulnerability toward the stressor type.

7.
Memory ; 29(8): 1043-1057, 2021 09.
Article in English | MEDLINE | ID: mdl-34309478

ABSTRACT

Laterality effects generally refer to an advantage for verbal processing in the left hemisphere and for non-verbal processing in the right hemisphere, and are often demonstrated in memory tasks in vision and audition. In contrast, their role in haptic memory is less understood. In this study, we examined haptic recognition memory and laterality for letters and nonsense shapes. We used both upper and lower case letters, with the latter designed as more complex in shape. Participants performed a recognition memory task with the left and right hand separately. Recognition memory performance (capacity and bias-free d') was higher and response times were faster for upper case letters than for lower case letters and nonsense shapes. The right hand performed best for upper case letters when it performed the task after the left hand. This right hand/left hemisphere advantage appeared for upper case letters, but not lower case letters, which also had a lower memory capacity, probably due to their more complex spatial shape. These findings suggest that verbal laterality effects in haptic memory are not very prominent, which may be due to the haptic verbal stimuli being processed mainly as spatial objects without reaching robust verbal coding into memory.


Subjects
Functional Laterality , Hand , Auditory Perception , Humans , Reaction Time , Recognition (Psychology)
8.
Laterality ; 25(6): 654-674, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32748691

ABSTRACT

The left hemisphere is known to be generally predominant in verbal processing and the right hemisphere in non-verbal processing. We studied whether verbal and non-verbal lateralization is present in haptics by comparing discrimination performance between letters and nonsense shapes. We addressed stimulus complexity by introducing lower case letters, which are verbally identical to upper case letters but have a more complex shape. The participants performed a same-different haptic discrimination task for upper and lower case letters and nonsense shapes with the left and right hand separately. We used signal detection theory to determine discriminability (d') and criterion (c), and we measured reaction times. Discrimination was better with the left hand for nonsense shapes, close to significantly better with the right hand for upper case letters, and showed no difference between the hands for lower case letters. For lower case letters, the right hand showed a strong bias to respond "different", while the left hand showed faster reaction times. Our results are in agreement with right lateralization for non-verbal material. Complexity of the verbal shape is important in haptics, as lower case letters seem to be processed less verbally and more as spatial shapes than upper case letters.


Subjects
Functional Laterality , Hand , Humans , Reaction Time
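The signal detection measures used in the study above, discriminability (d') and criterion (c), can be computed from hit and false-alarm counts. A minimal sketch; the log-linear correction shown is one common convention, not necessarily the one the authors used, and the counts are hypothetical:

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory: discriminability d' and response criterion c."""
    # Log-linear correction guards against rates of exactly 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts for one hand/stimulus condition
d, c = dprime_and_criterion(hits=40, misses=10,
                            false_alarms=10, correct_rejections=40)
```

A positive c indicates a conservative bias toward responding "different" (or "noise"), while c near zero indicates no bias.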
9.
Front Psychiatry ; 11: 279, 2020.
Article in English | MEDLINE | ID: mdl-32411021

ABSTRACT

Studies of the brain mechanisms supporting social interaction are demanding because real interaction occurs only when persons are in contact; instead, most brain imaging studies scan subjects individually. Here we present a proof-of-concept demonstration of two-person blood oxygenation level dependent (BOLD) imaging of brain activity from two individuals interacting inside the bore of a single MRI scanner. We developed a custom 16-channel (8 + 8 channels) two-helmet coil with two separate receiver-coil pairs providing whole-brain coverage while bringing the participants into a shared physical space and realistic face-to-face contact. Ten subject pairs were scanned with the setup. During the experiment, subjects took turns in tapping each other's lip versus observing and feeling the taps, timed by auditory instructions. Networks of sensorimotor brain areas were engaged alternately in the two subjects during the execution of motor actions as well as during observing and feeling them; these responses were clearly distinguishable from the auditory responses, which occurred similarly in both participants. Even though the signal-to-noise ratio of our coil system was compromised compared with standard 32-channel head coils, our results show that two-person fMRI scanning is feasible for studying the brain basis of social interaction.

10.
Emotion ; 19(1): 53-69, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29504800

ABSTRACT

Recent work has challenged the previously widely accepted belief that affective processing does not require awareness and can be carried out with more limited resources than semantic processing. This debate has focused exclusively on visual perception, even though evidence from both human and animal studies suggests that nonconscious affective processing would be physiologically more feasible in the auditory system. Here we contrast affective and semantic processing of nonverbal emotional vocalizations under different levels of awareness in three experiments, using explicit (two-alternative forced-choice masked affective and semantic categorization tasks, Experiments 1 and 2) and implicit (masked affective and semantic priming, Experiment 3) measures. Identical stimuli and design were used in the semantic and affective tasks. Awareness was manipulated by altering the stimulus-mask signal-to-noise ratio during continuous auditory masking. Stimulus awareness was measured on each trial using a four-point perceptual awareness scale. In the explicit tasks, neither affective nor semantic categorization could be performed in the complete absence of awareness, while both tasks could be performed above chance level when stimuli were consciously perceived. Semantic categorization was faster than affective evaluation. When the stimuli were partially perceived, semantic categorization accuracy exceeded affective evaluation accuracy. In the implicit tasks, neither affective nor semantic priming occurred in the complete absence of awareness, whereas both affective and semantic priming emerged when participants were aware of the primes. We conclude that auditory semantic processing is faster than affective processing, and that both affective and semantic auditory processing are dependent on awareness.


Subjects
Awareness/physiology , Emotions/physiology , Reaction Time/physiology , Adult , Female , Humans , Male , Young Adult
11.
Open J Neurosci ; 3, 2013 Feb 06.
Article in English | MEDLINE | ID: mdl-23956838

ABSTRACT

Previous studies have suggested that the speech motor system mediates the suppression, by silent lipreading, of electromagnetic auditory-cortex responses to pure tones at about 100 ms from sound onset. We used sparse-sampling functional magnetic resonance imaging (fMRI) at 3 Tesla to map auditory-cortex foci of suppressant effects during silent lipreading and covert self-production. Streams of video clips were presented simultaneously with 1/3-octave noise bursts centered at 250 Hz (low frequency, LF) or 2000 Hz (mid-frequency, MF), or during no auditory stimulation. In different conditions, the subjects were asked a) to press a button whenever they lipread the face articulating the same consecutive Finnish vowels /a/, /i/, /o/, and /y/, b) to covertly self-produce vowels while viewing a still-face image, or c) to press a button whenever a circle pictured on top of the lips expanded into an oval shape of the same orientation twice in a row. The regions of interest (ROIs) within the superior temporal lobes of each hemisphere were defined by contrasting MF and LF stimulation against silence. Contrasting the non-linguistic (i.e., expanding circle) vs. linguistic (i.e., lipreading and covert self-production) conditions within these ROIs showed significant suppression of hemodynamic activity to MF sounds in the linguistic conditions in the left-hemisphere first transverse sulcus (FTS) and the right-hemisphere superior temporal gyrus (STG) lateral to Heschl's sulcus (HS). These findings suggest that the speech motor system mediates suppression of auditory-cortex processing of non-linguistic sounds during silent lipreading and covert self-production in the left-hemisphere FTS and the right-hemisphere STG lateral to HS.

12.
PLoS One ; 7(10): e46872, 2012.
Article in English | MEDLINE | ID: mdl-23071654

ABSTRACT

Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally (p = 0.1) replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without a masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at ~100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300-400 ms range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) early (~100 ms) selection bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (~300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds.


Subjects
Attention/physiology , Auditory Cortex/physiology , Psychomotor Performance/physiology , Sound , Acoustic Stimulation , Adult , Analysis of Variance , Auditory Perception/physiology , Auditory Threshold , Brain Mapping , Discrimination (Psychology)/physiology , Auditory Evoked Potentials/physiology , Female , Humans , Magnetoencephalography/methods , Male , Noise , Photic Stimulation , Time Factors , Young Adult
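A notched-noise masker of the kind described in the study above can be approximated by zeroing a band of a white-noise spectrum around the target-tone frequency. A simplified FFT-based sketch (real stimuli would also control bandpass limits, ramps, and presentation levels; all parameter values here are illustrative):

```python
import numpy as np

def notched_noise(duration_s, fs, center_hz, notch_width_hz):
    """White noise with a spectral notch centred on the target-tone frequency."""
    n = int(duration_s * fs)
    noise = np.random.default_rng(3).standard_normal(n)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    in_notch = np.abs(freqs - center_hz) < notch_width_hz / 2
    spectrum[in_notch] = 0.0          # carve out the notch around the tone
    return np.fft.irfft(spectrum, n)

# 1 s masker at 16 kHz with a 200-Hz-wide notch around a 1000-Hz tone
masker = notched_noise(duration_s=1.0, fs=16000,
                       center_hz=1000, notch_width_hz=200)
```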
13.
J Neurosci ; 30(4): 1314-21, 2010 Jan 27.
Article in English | MEDLINE | ID: mdl-20107058

ABSTRACT

Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further examined the temporal dynamics of the suppression to determine whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of the Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Lipreading , Perceptual Masking/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Auditory Pathways/physiology , Brain Mapping , Auditory Evoked Potentials/physiology , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Nerve Net/physiology , Neural Inhibition/physiology , Neuropsychological Tests , Photic Stimulation , Pitch Discrimination/physiology , Reaction Time/physiology , Speech Acoustics , Young Adult
14.
Open Neuroimag J ; 2: 14-9, 2008.
Article in English | MEDLINE | ID: mdl-19018313

ABSTRACT

Hemodynamic activity in occipital, temporal, and parietal cortical areas was recently shown to correlate across subjects during viewing of a 30-minute movie clip; however, most of the frontal cortex lacked between-subject correlations. Here we presented 12 healthy naïve volunteers with the first 72 minutes of a movie ("Crash", 2005, Lions Gate Films) outside of the fMRI scanner to involve the subjects in the plot of the movie, followed by presentation of the last 36 minutes during fMRI scanning. We observed significant between-subject correlation of fMRI activity especially in right-hemisphere frontal cortical areas, in addition to the correlated activity in temporal, occipital, and parietal areas. It is possible that this resulted from the subjects following the plot of the movie and being emotionally engaged in it during fMRI scanning. We further show that probabilistic independent component analysis (ICA) reveals meaningful activations in individual subjects during natural viewing.
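Probabilistic ICA for fMRI, as mentioned above, is typically run with specialized neuroimaging tools; purely as a generic illustration of how ICA unmixes linearly combined signals, here is a FastICA sketch on synthetic data (not the authors' analysis):

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two latent signals
mixing = np.array([[1.0, 0.5], [0.4, 1.2]])
observed = sources @ mixing.T                            # mixed "recordings"

# Recover statistically independent components from the mixtures
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)                  # shape (2000, 2)
```

ICA recovers components only up to permutation and sign, so each recovered column should correlate strongly with one of the original sources.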

15.
Neuroreport ; 19(1): 93-7, 2008 Jan 08.
Article in English | MEDLINE | ID: mdl-18281900

ABSTRACT

To test the feature specificity of adaptation of auditory-cortex magnetoencephalographic N1m responses to phonemes during lipreading, we presented eight healthy volunteers with a simplified sine-wave first-formant (F1) transition shared by /ba/, /ga/, and /da/, and a continuum of second-formant (F2) transitions contained in /ba/ (ascending), /da/ (level), and /ga/ (descending), during lipreading of /ba/ vs. /ga/ vs. a still-face baseline. N1m responses to the F1 transition were suppressed during lipreading; further, visual /ga/ (vs. /ba/) significantly suppressed left-hemisphere N1m responses to the F2 transition contained in /ga/. This suggests that visual speech activates and adapts auditory-cortex neural populations tuned to formant transitions, the basic sound-sweep constituents of phonemes, potentially explaining enhanced speech perception during lipreading.


Subjects
Physiological Adaptation/physiology , Auditory Cortex/physiology , Auditory Evoked Potentials/physiology , Functional Laterality/physiology , Lipreading , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Brain Mapping , Radiation Dose-Response Relationship , Female , Humans , Magnetoencephalography , Male
16.
PLoS One ; 2(9): e909, 2007 Sep 19.
Article in English | MEDLINE | ID: mdl-17878944

ABSTRACT

BACKGROUND: An experienced car mechanic can often deduce what's wrong with a car by carefully listening to the sound of the ailing engine, despite the presence of multiple sources of noise. Indeed, the ability to select task-relevant sounds for awareness, whilst ignoring irrelevant ones, constitutes one of the most fundamental of human faculties, but the underlying neural mechanisms have remained elusive. While most of the literature explains the neural basis of selective attention by means of an increase in neural gain, a number of papers propose enhancement in neural selectivity as an alternative or complementary mechanism. METHODOLOGY/PRINCIPAL FINDINGS: Here, to address the question of whether a pure gain increase alone can explain auditory selective attention in humans, we quantified auditory-cortex frequency selectivity in 20 healthy subjects by masking 1000-Hz tones with a continuous noise masker with parametrically varying frequency notches around the tone frequency (i.e., a notched-noise masker). The task of the subjects was, in different conditions, to selectively attend to occasionally occurring slight increments in tone frequency (1020 Hz) or tones of slightly longer duration, or to ignore the sounds. In line with previous studies, in the ignore condition, the global field power (GFP) of event-related brain responses at 100 ms from stimulus onset to the 1000-Hz tones was suppressed as a function of the narrowing of the notch width. During the selective attention conditions, the suppressant effect of the noise notch width on GFP was decreased, but as a function significantly different from the multiplicative one expected on the basis of a simple gain model of selective attention. CONCLUSIONS/SIGNIFICANCE: Our results suggest that auditory selective attention in humans cannot be explained by a gain model, in which only the neural activity level is increased, but rather that selective attention additionally enhances auditory-cortex frequency selectivity.


Subjects
Attention , Auditory Cortex/physiology , Evoked Potentials , Humans
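The test of a pure multiplicative gain model described above amounts to asking whether attend-condition amplitudes are simply a scaled copy of ignore-condition amplitudes across notch widths. A sketch with entirely hypothetical GFP values (not the study's data or full statistical procedure):

```python
import numpy as np

def gain_model_fit(gfp_ignore, gfp_attend):
    """Least-squares fit of a pure multiplicative gain model: attend = g * ignore.

    Returns the fitted gain and the residual sum of squares, which can be
    compared against alternative (e.g. selectivity-sharpening) accounts.
    """
    g = np.dot(gfp_attend, gfp_ignore) / np.dot(gfp_ignore, gfp_ignore)
    residual = np.sum((gfp_attend - g * gfp_ignore) ** 2)
    return g, residual

# Hypothetical GFP amplitudes across increasing notch widths
ignore = np.array([2.0, 3.0, 4.0, 5.0])
attend_multiplicative = 1.5 * ignore                        # gain model holds
attend_sharpened = ignore + np.array([2.5, 2.0, 1.0, 0.5])  # notch-dependent boost

g1, r1 = gain_model_fit(ignore, attend_multiplicative)
g2, r2 = gain_model_fit(ignore, attend_sharpened)
```

A near-zero residual means a single gain factor explains the attentional effect; a large residual, as in the second case, indicates a notch-width-dependent (non-multiplicative) modulation.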