Results 1 - 20 of 28
1.
Perception; 53(5-6): 317-334, 2024 May.
Article in English | MEDLINE | ID: mdl-38483923

ABSTRACT

Our percept of the world is not solely determined by what we perceive and process at a given moment in time, but also depends on what we processed recently. In the present study, we investigate whether the perceived emotion of a spoken sentence is contingent upon the emotion of an auditory stimulus on the preceding trial (i.e., serial dependence). To this end, participants were exposed to spoken sentences whose emotional affect varied along a prosodic continuum ranging from 'happy' to 'fearful', and they were instructed to rate the emotion of each sentence. We found a positive serial dependence for emotion processing whereby the perceived emotion was biased towards the emotion on the preceding trial. When we introduced 'no-go' trials (i.e., no rating was required), we found a negative serial dependence when participants knew in advance to withhold their response on a given trial (Experiment 2) and a positive serial dependence when participants were informed to withhold their response only after the stimulus presentation (Experiment 3). We therefore established a robust serial dependence for emotion processing in speech and introduce a methodology to disentangle perceptual from post-perceptual processes. This approach can be applied to the vast majority of studies investigating sequential dependencies to separate positive from negative serial dependence.
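
As an editorial illustration of how such a bias is commonly quantified, here is a minimal Python sketch that regresses the trial-by-trial response error on the previous-minus-current stimulus difference. The simulated data, the 0.15 "pull", and all variable names are invented for the example and are not taken from the study.

```python
# Minimal sketch: quantify serial dependence as the bias of the current
# rating toward the stimulus presented on the previous trial.
# Toy simulated data; not the study's stimuli or analysis code.
import numpy as np

rng = np.random.default_rng(0)
n = 500
stim = rng.uniform(0.0, 1.0, n)        # morph level: 0 = 'happy', 1 = 'fearful'
k_true = 0.15                          # simulated attraction toward the previous stimulus
prev = np.roll(stim, 1)
rating = stim + k_true * (prev - stim) + rng.normal(0.0, 0.05, n)

# Regress response error on the previous-minus-current stimulus difference.
# A positive slope indicates attraction to the preceding trial (positive
# serial dependence); a negative slope indicates repulsion (negative SD).
error = rating[1:] - stim[1:]
delta = prev[1:] - stim[1:]
slope, intercept = np.polyfit(delta, error, 1)
print(f"serial-dependence slope: {slope:+.3f}")   # recovers ~ +0.15 here
```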


Subjects
Emotions, Speech Perception, Humans, Female, Male, Adult, Young Adult, Speech Perception/physiology
2.
J Neurosci; 40(5): 1053-1065, 2020 Jan 29.
Article in English | MEDLINE | ID: mdl-31889007

ABSTRACT

Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the auditory speech envelope and lip movements (mouth opening) when listening to a spoken story without visual input (audio-only), and when seeing a silent video of a speaker articulating another story (video-only). In video-only, auditory cortical activity entrained to the absent auditory signal at frequencies <1 Hz more than to the seen lip movements. This entrainment process was characterized by an auditory-speech-to-brain delay of ∼70 ms in the left hemisphere, compared with ∼20 ms in audio-only. Entrainment to mouth opening was found in the right angular gyrus at <1 Hz, and in early visual cortices at 1-8 Hz. These findings demonstrate that the brain can use a silent lip-read signal to synthesize a coarse-grained auditory speech representation in early auditory cortices. Our data indicate the following underlying oscillatory mechanism: seeing lip movements first modulates neuronal activity in early visual cortices at frequencies that match articulatory lip movements; the right angular gyrus then extracts slower features of lip movements, mapping them onto the corresponding speech sound features; this information is fed to auditory cortices, most likely facilitating speech parsing. SIGNIFICANCE STATEMENT: Lip-reading consists of decoding speech based on visual information derived from observation of a speaker's articulatory facial gestures. Lip-reading is known to improve auditory speech understanding, especially when speech is degraded. Interestingly, lip-reading in silence still activates the auditory cortices, even when participants do not know what the absent auditory signal should be. However, it was uncertain what such activation reflected. Here, using magnetoencephalographic recordings, we demonstrate that it reflects fast synthesis of the auditory stimulus rather than mental imagery of unrelated speech or non-speech sounds. Our results also shed light on the oscillatory dynamics underlying lip-reading.
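
For readers unfamiliar with envelope-entrainment measures, below is a minimal sketch of one standard approach: magnitude-squared coherence between a speech envelope and a neural signal. All signals, the ~70 ms lag, and the filter/segment settings are synthetic stand-ins for illustration, not the study's MEG pipeline.

```python
# Minimal sketch: estimate how strongly a (simulated) cortical signal tracks
# a (simulated) speech amplitude envelope via magnitude-squared coherence.
import numpy as np
from scipy.signal import coherence

fs = 200                                  # sampling rate (Hz)
n = 300 * fs                              # 5 minutes of signal
rng = np.random.default_rng(1)

# Stand-in "speech envelope": noise smoothed with a 1-s boxcar, so its
# power is concentrated below ~1 Hz, as with slow prosodic fluctuations.
envelope = np.convolve(rng.normal(size=n), np.ones(fs) / fs, mode="same")

# Stand-in "cortical" signal: the envelope delayed by ~70 ms plus noise.
lag = int(0.07 * fs)
brain = np.roll(envelope, lag) + 0.2 * rng.normal(size=n)

# Coherence quantifies entrainment per frequency band.
f, coh = coherence(envelope, brain, fs=fs, nperseg=20 * fs)
print(f"mean coherence below 1 Hz: {coh[f < 1.0].mean():.3f}")
```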


Subjects
Auditory Cortex/physiology, Lipreading, Speech Perception/physiology, Acoustic Stimulation, Female, Humans, Magnetoencephalography, Male, Visual Pattern Recognition/physiology, Sound Spectrography, Young Adult
3.
Neuroimage; 237: 118168, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34000398

ABSTRACT

Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.


Subjects
Cerebral Cortex/physiology, Multilingualism, Nerve Net/physiology, Psycholinguistics, Recognition (Psychology)/physiology, Speech Perception/physiology, Adult, Brain Mapping, Cerebral Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Semantics, Young Adult
4.
Exp Brain Res; 236(7): 1911-1918, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29696314

ABSTRACT

Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the sight of an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, ambiguous test sentences were rated as more 'happy' when the exposure phase contained 'happy' rather than 'fearful' faces. This auditory shift likely reflects recalibration induced by error minimization of the inter-sensory discrepancy. In line with this view, when the prosody of the exposure sentence was non-ambiguous and congruent with the face (without audiovisual discrepancy), aftereffects went in the opposite direction, likely reflecting adaptation. Our results demonstrate, for the first time, that perception of vocal affect is flexible and can be recalibrated by slightly discrepant visual information.


Subjects
Physiological Adaptation/physiology, Auditory Perception/physiology, Emotions/physiology, Facial Expression, Voice, Acoustic Stimulation, Adolescent, Female, Humans, Male, Photic Stimulation, Psychophysics, Young Adult
5.
Eur J Neurosci; 46(10): 2578-2583, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28976045

ABSTRACT

Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring event-related potentials (ERPs). We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to the suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is resolved differently for the two types of stimuli.
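
P2 suppression of the kind measured here is commonly computed as the difference in mean ERP amplitude, within a post-stimulus window, between auditory-only and audiovisual conditions. The sketch below assumes that logic; the random placeholder epochs and the 150-250 ms window are illustrative, not the study's data or parameters.

```python
# Minimal sketch: P2 suppression as the difference in mean ERP amplitude
# (within an illustrative P2 window) between auditory-only and audiovisual
# trials. Epochs are random placeholders for baseline-corrected data.
import numpy as np

fs = 500                                       # Hz
times = np.arange(-0.1, 0.5, 1 / fs)           # epoch: -100 to 500 ms
rng = np.random.default_rng(2)
epochs_a = rng.normal(size=(80, times.size))   # auditory-only trials
epochs_av = rng.normal(size=(80, times.size))  # audiovisual trials

p2 = (times >= 0.15) & (times <= 0.25)         # assumed 150-250 ms window
erp_a = epochs_a.mean(axis=0)
erp_av = epochs_av.mean(axis=0)
suppression = erp_a[p2].mean() - erp_av[p2].mean()
print(f"P2 suppression (A-only minus AV): {suppression:.3f} µV")
```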


Subjects
Illusions/physiology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Electroencephalography, Evoked Potentials, Female, Humans, Male, Phonetics, Photic Stimulation
6.
J Exp Child Psychol; 129: 157-164, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25258018

ABSTRACT

The correspondence between auditory speech and lip-read information can be detected based on a combination of temporal and phonetic cross-modal cues. Here, we determined the point in developmental time at which children start to effectively use phonetic information to match a speech sound with one of two articulating faces. We presented 4- to 11-year-olds (N = 77) with trisyllabic sine-wave speech replicas of two pseudo-words that were perceived as non-speech and asked them to match the sounds with the corresponding lip-read video. At first, children had no phonetic knowledge about the sounds, and matching was thus based on the temporal cues that are fully retained in sine-wave speech. Next, we trained all children to perceive the phonetic identity of the sine-wave speech and repeated the audiovisual (AV) matching task. Only at around 6.5 years of age did the benefit of having phonetic knowledge about the stimuli become apparent, indicating that AV matching based on phonetic cues presumably develops more slowly than AV matching based on temporal cues.
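
Sine-wave speech of the kind used here replaces the formants of a recording with time-varying sinusoids. The sketch below shows the core synthesis step, summing sinusoids that follow formant tracks; the toy formant trajectories are invented for illustration, as real SWS uses tracks estimated from an actual recording.

```python
# Minimal sketch: synthesize sine-wave speech from (invented) formant tracks
# by summing sinusoids whose frequencies follow each track over time.
import numpy as np

fs = 16000
dur = 1.0
t = np.arange(int(fs * dur)) / fs

# Toy formant trajectories (Hz): three glides standing in for F1-F3 tracks.
f1 = np.linspace(300, 700, t.size)
f2 = np.linspace(2200, 1100, t.size)
f3 = np.full(t.size, 2600.0)

def tone(freq):
    # Integrate instantaneous frequency to get a smoothly varying phase.
    phase = 2 * np.pi * np.cumsum(freq) / fs
    return np.sin(phase)

sws = tone(f1) + 0.5 * tone(f2) + 0.25 * tone(f3)
sws /= np.abs(sws).max()                 # normalize to +/-1 for playback
```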


Subjects
Child Development, Lipreading, Phonetics, Speech Perception, Speech, Child, Preschool Child, Cues (Psychology), Humans
7.
Multisens Res; 37(3): 243-259, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38777333

ABSTRACT

Auditory speech can be difficult to understand, but seeing the articulatory movements of a speaker can drastically improve spoken-word recognition and, in the longer term, helps listeners adapt to acoustically distorted speech. Given that individuals with developmental dyslexia (DD) have sometimes been reported to rely less on lip-read speech than typical readers, we examined lip-read-driven adaptation to distorted speech in a group of adults with DD (N = 29) and a comparison group of typical readers (N = 29). Participants were presented with acoustically distorted Dutch words (six-channel noise-vocoded speech, NVS) in audiovisual training blocks (where the speaker could be seen) interspersed with audio-only test blocks. Results showed that words were more accurately recognized if the speaker could be seen (a lip-read advantage), and that performance steadily improved across subsequent auditory-only test blocks (adaptation). There were no group differences, suggesting that perceptual adaptation to disrupted spoken words is comparable for dyslexic and typical readers. These data open up a research avenue to investigate the degree to which lip-read-driven speech adaptation generalizes across different types of auditory degradation, and across dyslexic readers with decoding versus comprehension difficulties.
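
Noise-vocoded speech of the kind described above divides the signal into frequency bands and uses each band's envelope to modulate band-limited noise. Below is a minimal six-channel sketch; the band edges, filter orders, and 30 Hz envelope cutoff are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch of a 6-channel noise vocoder: split the signal into bands,
# extract each band's envelope, and use it to modulate band-limited noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=6, lo=100.0, hi=6000.0):
    rng = np.random.default_rng(0)
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(speech, dtype=float)
    env_lp = butter(4, 30.0, btype="low", fs=fs, output="sos")  # envelope smoother
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(band, speech)))        # band envelope
        env = sosfiltfilt(env_lp, env).clip(min=0.0)
        noise = rng.normal(size=speech.size)
        out += env * sosfiltfilt(band, noise)                   # carrier: band noise
    return out / np.abs(out).max()

fs = 16000
demo = np.random.default_rng(1).normal(size=fs)  # stand-in for a speech waveform
vocoded = noise_vocode(demo, fs)
```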


Subjects
Dyslexia, Lipreading, Reading, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Dyslexia/physiopathology, Adult, Young Adult, Physiological Adaptation/physiology, Noise, Acoustic Stimulation
8.
Multisens Res; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that help in the assessment of this social trait. Two of these cues are the pitch of the voice and the facial width-to-height ratio (fWHR). Additionally, research has indicated that the content of a spoken sentence itself affects trustworthiness, a finding that has not yet been brought into multisensory research. The current research investigates previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subjects
Face, Trust, Voice, Humans, Female, Voice/physiology, Young Adult, Adult, Face/physiology, Speech Perception/physiology, Pitch Perception/physiology, Facial Recognition/physiology, Cues (Psychology), Adolescent
9.
Dev Cogn Neurosci; 59: 101181, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36549148

ABSTRACT

Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age 9 and was associated with speech comprehension. Furthermore, while the extraction of subtler information provided by syllables matured at age 9, its maintenance in noisy backgrounds progressively matured until adulthood. Altogether, these results highlight distinct behaviorally relevant maturational trajectories for the neuronal signatures of speech perception. In accordance with grain-size proposals, neuromaturational milestones are reached increasingly late for linguistic units of decreasing size, with further delays incurred by noise.


Subjects
Speech Perception, Speech, Humans, Adult, Child, Speech/physiology, Noise, Magnetoencephalography, Linguistics, Speech Perception/physiology
10.
PLoS One; 17(12): e0278986, 2022.
Article in English | MEDLINE | ID: mdl-36580461

ABSTRACT

When listening to distorted speech, does one become a better listener by looking at the face of the speaker or by reading subtitles that are presented along with the speech signal? We examined this question in two experiments in which we presented participants with spectrally distorted speech (4-channel noise-vocoded speech). During short training sessions, listeners received auditorily distorted words or pseudowords that were partially disambiguated by concurrently presented lipread information or text. After each training session, listeners were tested with new degraded auditory words. Learning effects (based on proportions of correctly identified words) were stronger if listeners had trained with words rather than with pseudowords (a lexical boost), and adding lipread information during training was more effective than adding text (a lipread boost). Moreover, the advantage of lipread speech over text training was also found when participants were tested more than a month later. The current results thus suggest that lipread speech may have surprisingly long-lasting effects on adaptation to distorted speech.


Subjects
Speech Perception, Humans, Speech, Reading, Lip, Auditory Perception
11.
Neuropsychologia; 165: 108107, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-34921819

ABSTRACT

We investigated how aging modulates lexico-semantic processes in the visual (seeing written items), auditory (hearing spoken items) and audiovisual (seeing written items while hearing congruent spoken items) modalities. Participants were young and older adults who performed a delayed lexical decision task (LDT) presented in blocks of visual, auditory, and audiovisual stimuli. Event-related potentials (ERPs) revealed differences between young and older adults despite older adults' ability to identify words and pseudowords as accurately as young adults. The observed differences included more focalized lexico-semantic access in the N400 time window in older relative to young adults, stronger re-instantiation and/or more widespread activity of the lexicality effect at the time of responding, and stronger multimodal integration for older relative to young adults. Our results offer new insights into how functional neural differences in older adults can result in efficient access to lexico-semantic representations across the lifespan.


Subjects
Electroencephalography, Semantics, Aged, Aging, Brain, Evoked Potentials, Female, Humans, Male, Regression Analysis, Young Adult
12.
Exp Brain Res; 203(3): 575-582, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20437168

ABSTRACT

Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration, Bertelson et al. 2003). Here, we examined phonetic recalibration while listeners were engaged in a visuospatial or verbal working memory task under different memory load conditions. Phonetic recalibration, like selective speech adaptation, was not affected by a concurrent verbal or visuospatial memory task. This result indicates that phonetic recalibration is a low-level process that does not critically depend on processes used in verbal or visuospatial working memory.


Subjects
Psychological Adaptation, Short-Term Memory, Phonetics, Speech Perception, Analysis of Variance, Humans, Language Tests, Lipreading, Neuropsychological Tests, Space Perception, Visual Perception, Young Adult
13.
Q J Exp Psychol (Hove); 73(6): 957-967, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31931664

ABSTRACT

Humans quickly adapt to variations in the speech signal. Adaptation may surface as recalibration, a learning effect driven by error minimisation between a visual face and an ambiguous auditory speech signal, or as selective adaptation, a contrastive aftereffect driven by the acoustic clarity of the sound. Here, we examined whether these aftereffects occur for vowel identity and voice gender. Participants were exposed to male, female, or androgynous tokens of speakers pronouncing /e/ or /ø/ (embedded in words with a consonant-vowel-consonant structure), or to an ambiguous vowel halfway between /e/ and /ø/ dubbed onto the video of a male or female speaker pronouncing /e/ or /ø/. For both voice gender and vowel identity, we found assimilative aftereffects after exposure to auditorily ambiguous adapter sounds, and contrastive aftereffects after exposure to auditorily clear adapter sounds. This demonstrates that similar principles of adaptation are at play in both dimensions.


Subjects
Facial Recognition/physiology, Social Perception, Speech Perception/physiology, Adult, Female, Humans, Male, Sex Factors, Young Adult
14.
Neuropsychologia; 137: 107305, 2020 Feb 03.
Article in English | MEDLINE | ID: mdl-31838100

ABSTRACT

In two experiments, we investigated the relationship between lexical access processes and processes that are specifically related to making lexical decisions. In Experiment 1, participants performed a standard lexical decision task in which they had to respond as quickly and as accurately as possible to visual (written), auditory (spoken) and audiovisual (written + spoken) items. In Experiment 2, a different group of participants performed the same task but were required to make responses after a delay. Linear mixed-effects models on reaction times and single-trial event-related potentials (ERPs) revealed that ERP lexicality effects started earlier in the visual than in the auditory modality, and that effects were driven by the written input in the audiovisual modality. More negative ERP amplitudes predicted slower reaction times in all modalities in both experiments. However, these predictive amplitudes were mainly observed within the window of the lexicality effect in Experiment 1 (the speeded task), and shifted to post-response-probe time windows in Experiment 2 (the delayed task). The lexicality effects lasted longer in Experiment 1 than in Experiment 2, and in the delayed task, we additionally observed a "re-instantiation" of the lexicality effect related to the delayed response. Delaying the response in an otherwise identical lexical decision task thus allowed us to separate lexical access processes from processes specific to lexical decision.
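
A minimal sketch of the general shape of such an analysis: a linear mixed-effects model predicting single-trial reaction time from single-trial ERP amplitude, with a random intercept per participant. The data are simulated and statsmodels is an assumed tool; the study's exact model terms are not reproduced here.

```python
# Minimal sketch: mixed-effects model of single-trial RT on single-trial
# ERP amplitude, with participant as a random-intercept grouping factor.
# Simulated data only; not the study's variables or model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for subj in range(20):
    subj_offset = rng.normal(0, 50)               # random intercept (ms)
    amp = rng.normal(0, 1, 100)                   # single-trial ERP amplitude (z)
    rt = 600 + subj_offset - 20 * amp + rng.normal(0, 40, 100)  # more negative -> slower
    rows += [{"subject": subj, "amplitude": a, "rt": r} for a, r in zip(amp, rt)]
df = pd.DataFrame(rows)

model = smf.mixedlm("rt ~ amplitude", df, groups=df["subject"]).fit()
print(model.summary())
```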


Subjects
Decision Making/physiology, Evoked Potentials/physiology, Visual Pattern Recognition/physiology, Psycholinguistics, Psychomotor Performance/physiology, Reaction Time/physiology, Reading, Speech Perception/physiology, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Young Adult
15.
Int J Psychophysiol; 155: 78-86, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32561354

ABSTRACT

BACKGROUND: One potentially relevant neurophysiological marker of internalizing problems (anxiety/depressive symptoms) is the late positive potential (LPP), as it is related to the processing of emotional stimuli. For the first time, to our knowledge, we investigated the value of the LPP as a neurophysiological marker for internalizing problems and specific anxiety and depressive symptoms at preschool age. METHOD: At age 4 years, children (N = 84) passively viewed a series of neutral, pleasant, and unpleasant pictures selected from the International Affective Picture System. Affective picture processing was measured via the LPP (recorded with EEG), and mothers reported on child behavior via the Child Behavior Checklist 1½-5 (internalizing, DSM-anxiety, and DSM-affective/depression subscales). Difference scores between the neutral and affective pictures (i.e., neutral-pleasant and neutral-unpleasant) were computed for posterior, central and anterior brain locations in early (300-700 ms), middle (700-1200 ms) and late (1200-2000 ms) time windows. RESULTS: Greater LPP difference scores for pleasant images at the anterior recording site, in the middle time window, were associated with greater internalizing behaviors. Greater DSM-anxiety symptoms were associated with greater LPP difference scores for unpleasant and pleasant images. After correcting for multiple testing, only the association between greater DSM-affective/depression symptoms and greater LPP difference scores for unpleasant images at the anterior recording site (early time window) remained significant. DISCUSSION: Our study has identified a potential neural marker of preschool internalizing problems. Children with larger LPPs to unpleasant images may be at greater risk of internalizing problems, potentially due to increased emotional reactivity.
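
The difference-score computation described in the METHOD section can be sketched as follows. Only the window boundaries and site labels come from the abstract; the ERP arrays are random placeholders for real averaged, baseline-corrected data.

```python
# Minimal sketch: LPP difference scores (neutral minus affective condition)
# per recording site and time window, mirroring the windows named above.
import numpy as np

fs = 250
times = np.arange(0.0, 2.0, 1 / fs)              # 0-2000 ms post-stimulus
windows = {"early": (0.3, 0.7), "middle": (0.7, 1.2), "late": (1.2, 2.0)}
sites = ["posterior", "central", "anterior"]

rng = np.random.default_rng(4)
erp = {cond: rng.normal(size=(len(sites), times.size))
       for cond in ("neutral", "pleasant", "unpleasant")}

for cond in ("pleasant", "unpleasant"):
    for w_name, (t0, t1) in windows.items():
        win = (times >= t0) & (times < t1)
        for i, site in enumerate(sites):
            diff = erp["neutral"][i, win].mean() - erp[cond][i, win].mean()
            print(f"{site:9s} {w_name:6s} neutral-{cond}: {diff:+.2f} µV")
```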


Subjects
Electroencephalography, Evoked Potentials, Anxiety, Anxiety Disorders, Child, Preschool Child, Emotions, Humans
16.
Lang Speech; 52(Pt 2-3): 341-350, 2009.
Article in English | MEDLINE | ID: mdl-19624035

ABSTRACT

Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance with lipread information indicating what the phoneme should be (recalibration). Here, we tested the stability of lipread-induced recalibration over time. Listeners were exposed to an ambiguous sound halfway between /t/ and /p/ that was dubbed onto a face articulating either /t/ or /p/. When tested immediately, listeners exposed to lipread /t/ were more likely to categorize the ambiguous sound as /t/ than listeners exposed to /p/. This aftereffect dissipated quickly with prolonged testing and did not reappear after a 24-hour delay. Recalibration of phonetic categories is thus a fragile phenomenon.


Subjects
Lipreading, Phonetics, Speech Perception, Adolescent, Adult, Analysis of Variance, Humans, Psycholinguistics, Speech, Time Factors, Young Adult
17.
Front Psychol; 10: 658, 2019.
Article in English | MEDLINE | ID: mdl-30967827

ABSTRACT

Although the default state of the world is that we see and hear other people talking, there is evidence that seeing and hearing ourselves rather than someone else may lead to visual (i.e., lip-read) or auditory "self" advantages. We assessed whether there is a "self" advantage for phonetic recalibration (a lip-read driven cross-modal learning effect) and selective adaptation (a contrastive effect in the opposite direction of recalibration). We observed both aftereffects as well as an on-line effect of lip-read information on auditory perception (i.e., immediate capture), but there was no evidence for a "self" advantage in any of the tasks (as additionally supported by Bayesian statistics). These findings strengthen the emerging notion that recalibration reflects a general learning mechanism, and bolster the argument that adaptation depends on rather low-level auditory/acoustic features of the speech signal.

18.
PLoS One; 14(7): e0219744, 2019.
Article in English | MEDLINE | ID: mdl-31310616

ABSTRACT

Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion, where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified auditory percept (/da/). Recent studies have indicated that perception of the incongruent speech stimuli used in McGurk paradigms involves mechanisms of both general and audiovisual-speech-specific mismatch processing, and that general mismatch processing modulates induced theta-band (4-8 Hz) oscillations. Here, we investigated whether the theta modulation merely reflects mismatch processing or, alternatively, audiovisual integration of speech. We used electroencephalographic recordings from two previously published studies using audiovisual sine-wave speech (SWS), a spectrally degraded speech signal sounding nonsensical to naïve perceivers but perceived as speech by informed subjects. Earlier studies have shown that informed, but not naïve, subjects integrate SWS phonetically with visual speech. In an N1/P2 event-related potential paradigm, we found a significant difference in theta-band activity between informed and naïve perceivers of audiovisual speech, suggesting that audiovisual integration modulates induced theta-band oscillations. In a McGurk mismatch negativity (MMN) paradigm, where infrequent McGurk stimuli were embedded in a sequence of frequent audiovisually congruent stimuli, we found no difference between congruent and McGurk stimuli. The infrequent stimuli in this paradigm violate both the general prediction of stimulus content and that of audiovisual congruence. Hence, we found no support for the hypothesis that audiovisual mismatch modulates induced theta-band oscillations. We also did not find any effects of audiovisual integration in the MMN paradigm, possibly due to the experimental design.
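
Induced (non-phase-locked) theta-band activity of the sort analyzed here is often estimated by bandpass filtering single trials and averaging their Hilbert power envelopes afterwards. The sketch below assumes that approach; the placeholder epochs and filter settings are illustrative, not the published studies' pipeline.

```python
# Minimal sketch: induced theta-band (4-8 Hz) power via single-trial
# bandpass filtering and Hilbert envelopes, averaged after power
# extraction so that non-phase-locked ("induced") activity is retained.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500
times = np.arange(-0.2, 0.8, 1 / fs)
rng = np.random.default_rng(5)
epochs = rng.normal(size=(60, times.size))       # trials x samples, one channel

sos = butter(4, [4.0, 8.0], btype="band", fs=fs, output="sos")
theta = sosfiltfilt(sos, epochs, axis=-1)
power = np.abs(hilbert(theta, axis=-1)) ** 2     # per-trial theta power
induced = power.mean(axis=0)                     # average after power extraction

baseline = induced[(times >= -0.2) & (times < 0)].mean()
change = 10 * np.log10(induced / baseline)       # dB change from baseline
```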


Subjects
Auditory Perception, Oscillometry, Speech Perception, Speech/physiology, Visual Perception, Acoustic Stimulation, Cluster Analysis, Electrodes, Electroencephalography, Evoked Potentials, Auditory Evoked Potentials, Humans, Illusions, Language, Male, Phonetics, Photic Stimulation, Computer-Assisted Signal Processing
19.
MethodsX; 6: 428-436, 2019.
Article in English | MEDLINE | ID: mdl-30906698

ABSTRACT

Hyperscanning refers to obtaining simultaneous neural recordings from more than one person (Montague et al., 2002 [1]), which can be used to study interactive situations. In particular, hyperscanning with electroencephalography (EEG) is becoming increasingly popular, since it allows researchers to explore the interactive brain with a high temporal resolution. Notably, there is a 40-year gap between the first instance in which simultaneous measurement of EEG activity was mentioned in the literature (Duane and Behrendt, 1965 [2]) and the first actual description of an EEG hyperscanning setup being implemented (Babiloni et al., 2006 [3]). To date, specific EEG hyperscanning devices have not yet been developed, and EEG hyperscanning setups are not usually described with sufficient detail to be easily reproduced. Here, we offer a step-by-step description of solutions to many of these technological challenges. Specifically, we describe and provide customized implementations of EEG hyperscanning setups using hardware and software from different companies: Brain Products, ANT, EGI, and BioSemi.
• Necessary details to set up a functioning EEG hyperscanning protocol are provided.
• The setups allow independent measures and measures of synchronization between the signals of two different brains.
• An individual electrical ground and reference is obtained in all discussed systems.
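
The paper documents vendor-specific hardware setups; as a generic, hypothetical illustration of one downstream step, the sketch below aligns two separately recorded EEG streams on a shared trigger pulse. The function and arrays are invented for the example and are not part of the published protocols.

```python
# Minimal, hypothetical sketch: align two separately recorded EEG streams
# on a shared trigger pulse so that trials can be compared across brains.
import numpy as np

def align_on_trigger(data_a, trig_a, data_b, trig_b):
    """Trim two (channels x samples) arrays so both start at the first
    sample of their shared trigger and have equal length."""
    start_a = int(np.flatnonzero(trig_a)[0])
    start_b = int(np.flatnonzero(trig_b)[0])
    n = min(data_a.shape[1] - start_a, data_b.shape[1] - start_b)
    return data_a[:, start_a:start_a + n], data_b[:, start_b:start_b + n]

rng = np.random.default_rng(6)
a, b = rng.normal(size=(32, 10000)), rng.normal(size=(32, 10500))
ta, tb = np.zeros(10000), np.zeros(10500)
ta[120], tb[640] = 1, 1          # the shared event lands at different samples
a_aligned, b_aligned = align_on_trigger(a, ta, b, tb)
assert a_aligned.shape == b_aligned.shape
```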

20.
Sci Rep; 7: 42055, 2017 Feb 07.
Article in English | MEDLINE | ID: mdl-28169316

ABSTRACT

Perceiving linguistic input is vital for human functioning, but the process is complicated by the fact that the incoming signal is often degraded. However, humans can compensate for unimodal noise by relying on simultaneous sensory input from another modality. Here, we investigated noise compensation for spoken and printed words in two experiments. In the first, behavioral experiment, we observed that accuracy was modulated by reaction time (RT), bias, and sensitivity, but noise compensation could nevertheless be explained via accuracy differences when controlling for RT, bias, and sensitivity. In the second experiment, we also measured event-related potentials (ERPs) and observed robust electrophysiological correlates of noise compensation starting at around 350 ms after stimulus onset, indicating that noise compensation is most prominent at lexical/semantic processing levels.
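
The bias and sensitivity measures mentioned here are standard signal-detection quantities. Below is a minimal sketch of computing sensitivity (d') and criterion (c) from hit and false-alarm counts, with a common log-linear correction for extreme rates; the counts are illustrative, not the study's data.

```python
# Minimal sketch: signal-detection sensitivity (d') and bias (criterion c)
# from hit and false-alarm counts, with a log-linear correction to avoid
# infinite z-scores at rates of 0 or 1. Counts below are illustrative.
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    hr = (hits + 0.5) / (hits + misses + 1.0)     # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)         # corrected false-alarm rate
    d = norm.ppf(hr) - norm.ppf(far)              # sensitivity
    c = -0.5 * (norm.ppf(hr) + norm.ppf(far))     # response bias
    return d, c

d, c = dprime(hits=42, misses=8, fas=12, crs=38)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```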


Subjects
Auditory Threshold/physiology, Evoked Potentials/physiology, Noise, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Brain/physiology, Female, Humans, Linguistics, Male, Reaction Time/physiology, Semantics, Speech/physiology, Young Adult