Results 1 - 20 of 71
1.
J Vis ; 24(6): 7, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38848099

ABSTRACT

Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this results from higher-order statistics rather than from semantics or layout.
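As an illustrative aside on the 1/f (pink-noise) backgrounds mentioned above, the sketch below shows one common way to generate such an image: spatially white noise is filtered in the Fourier domain so that its amplitude spectrum falls off as 1/f, which preserves natural-scene-like low-level statistics but no higher-order structure. This is a minimal sketch assuming NumPy; image size and normalization are illustrative choices, not the study's stimulus parameters.

```python
import numpy as np

def pink_noise_image(height=512, width=512, seed=0):
    """Generate a grayscale image whose amplitude spectrum falls off as 1/f."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((height, width))   # spatially white noise
    spectrum = np.fft.fft2(white)

    fy = np.fft.fftfreq(height)[:, None]           # spatial frequency, vertical (cycles/pixel)
    fx = np.fft.fftfreq(width)[None, :]            # spatial frequency, horizontal (cycles/pixel)
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                                  # avoid division by zero at the DC component

    pink = np.real(np.fft.ifft2(spectrum / f))     # impose the 1/f amplitude fall-off
    pink -= pink.mean()
    pink /= pink.std()                             # normalize to zero mean, unit contrast
    return pink

background = pink_noise_image()
```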


Subjects
Attention; Photic Stimulation; Reaction Time; Semantics; Humans; Attention/physiology; Male; Female; Young Adult; Adult; Reaction Time/physiology; Photic Stimulation/methods; Pattern Recognition, Visual/physiology; Electroencephalography/methods; Evoked Potentials, Visual/physiology
2.
J Vis ; 24(5): 16, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819806

ABSTRACT

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-object) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
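As an illustration of the kind of SVM-based decoding described above, the sketch below trains a classifier to map eye-movement features in short time windows onto the reported percept and evaluates it with cross-validation. The feature set, window scheme, and placeholder data are assumptions for illustration; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: one row per time window of eye-tracking features (e.g., horizontal slow-phase
#    velocity, gaze dispersion); y: reported percept in that window
#    (0 = integrated, 1 = segregated). Random placeholder data stand in for real features.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 4))
y = rng.integers(0, 2, size=600)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)      # decoding accuracy per cross-validation fold
print(scores.mean())

# Moment-by-moment decoding of new data after fitting:
clf.fit(X, y)
decoded_percept = clf.predict(X[:10])
```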


Subjects
Auditory Perception; Photic Stimulation; Visual Perception; Humans; Auditory Perception/physiology; Visual Perception/physiology; Adult; Male; Female; Photic Stimulation/methods; Young Adult; Eye Movements/physiology; Acoustic Stimulation/methods
3.
Sci Rep ; 14(1): 8858, 2024 04 17.
Article in English | MEDLINE | ID: mdl-38632303

ABSTRACT

It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye-tracking using an abstract alert-signal paradigm. Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When visual salience (contrast) or auditory salience (tone intensity) of the alert was increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert signals counteracts their more efficient attentional guidance. The design of alert signals must be adapted to a "sweet spot" that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.


Subjects
Attention; Visual Perception; Humans; Reaction Time/physiology; Attention/physiology; Visual Perception/physiology; Records; Discrimination, Psychological
4.
Soft Matter ; 20(12): 2831-2839, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38456340

ABSTRACT

Nanoindentation cycles measured with an atomic force microscope on hydrated collagen fibrils exhibit a rate-independent hysteresis with return point memory. This previously unknown energy dissipation mechanism describes in unified form elastoplastic indentation, capillary adhesion, and surface leveling at indentation velocities smaller than 1 µm s⁻¹, where viscous friction is negligible. A generic hysteresis model, based on force-distance data measured during one large approach-retract cycle, predicts the force (output) and the dissipated energy for arbitrary indentation trajectories (input). While both quantities are rate independent, they do depend nonlinearly on indentation history and on indentation amplitude.
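As a worked illustration of the dissipated energy referred to above: for a closed approach-retract cycle, the energy dissipated equals the area enclosed by the force-distance loop, which can be approximated by trapezoidal integration of the two measured branches. This is a minimal sketch under the assumption that both branches are sampled on a common, increasing depth grid; names and units are illustrative, not taken from the paper.

```python
import numpy as np

def dissipated_energy(z, F_approach, F_retract):
    """Area enclosed by the force-distance loop of one indentation cycle.
    z: indentation depth, in increasing order; F_approach / F_retract: forces
    measured on the approach and retract branches at those depths."""
    work_in = np.trapz(F_approach, z)    # work done on the sample during approach
    work_out = np.trapz(F_retract, z)    # work returned by the sample during retraction
    return work_in - work_out            # dissipated energy (J if z in m, F in N)
```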

5.
Psychophysiology ; 61(6): e14545, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38366704

ABSTRACT

The auditory system has an amazing ability to rapidly encode auditory regularities. Evidence comes from the popular oddball paradigm, in which frequent (standard) sounds are occasionally exchanged for rare deviant sounds, which then elicit signs of prediction error based on their unexpectedness (e.g., MMN and P3a). Here, we examine the widely neglected characteristic of deviants being bearers of predictive information themselves. Naive participants listened to sound sequences constructed according to a new, modified version of the oddball paradigm that included two types of deviants following diametrically opposed rules: one deviant sound occurred mostly in pairs (repetition rule), the other mostly in isolation (non-repetition rule). Due to this manipulation, the sound following a first deviant (either the same deviant or a standard) was either predictable or unpredictable based on its conditional probability given the preceding deviant sound. Our behavioral results from an active deviant detection task replicate previous findings that deviant repetition rules (based on conditional probability) can be extracted when behaviorally relevant. Our electrophysiological findings obtained in a passive listening setting indicate that conditional probability also translates into differential processing at the P3a level. However, MMN was confined to global deviants and was not sensitive to conditional probability. This suggests that higher-level processing concerned with stimulus selection and/or evaluation (reflected in P3a), but not lower-level sensory processing (reflected in MMN), considers rarely encountered rules.
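To make the sequence construction concrete, the sketch below generates a sequence in the spirit of the modified oddball design described above: one deviant mostly repeats (occurs in pairs), the other mostly occurs in isolation, so the sound following a first deviant is predictable from the conditional probability attached to that deviant. The probabilities and labels are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def oddball_sequence(n=400, p_deviant=0.1, p_repeat_A=0.9, p_repeat_B=0.1, seed=0):
    """'std' = standard tone; 'devA' mostly occurs in pairs (repetition rule),
    'devB' mostly occurs in isolation (non-repetition rule)."""
    rng = np.random.default_rng(seed)
    seq = []
    while len(seq) < n:
        if rng.random() < p_deviant:
            dev = "devA" if rng.random() < 0.5 else "devB"
            seq.append(dev)
            p_rep = p_repeat_A if dev == "devA" else p_repeat_B
            if rng.random() < p_rep:
                seq.append(dev)      # second deviant of the same type (predictable repeat)
            else:
                seq.append("std")    # deviant followed by a standard
        else:
            seq.append("std")
    return seq[:n]

print(oddball_sequence(40))
```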


Subjects
Auditory Perception; Electroencephalography; Event-Related Potentials, P300; Evoked Potentials, Auditory; Humans; Female; Male; Young Adult; Event-Related Potentials, P300/physiology; Adult; Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Acoustic Stimulation
6.
J Neurophysiol ; 130(4): 1028-1040, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37701952

ABSTRACT

When humans walk, it is important for them to have some measure of the distance they have traveled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contribution of visual cues and nonvisual self-motion cues to distance reproduction when walking on a treadmill through a virtual environment, by separately manipulating the speed of the treadmill belt and of the virtual environment. Using mobile eye tracking, we also investigate how our participants sampled the visual information through gaze. We show that, as predicted, both modalities affected how participants (N = 28) reproduced a distance. Participants weighed nonvisual self-motion cues more strongly than visual cues, corresponding also to their respective reliabilities, but with some interindividual variability. Those who looked more toward those parts of the visual scene that contained cues to speed and distance also tended to weigh visual information more strongly, although this correlation was nonsignificant, and participants generally directed their gaze toward visually informative areas of the scene less than expected. As measured by motion capture, participants adjusted their gait patterns to the treadmill speed but not to the walked distance. In sum, we show in a naturalistic virtual environment how humans use different sensory modalities when reproducing distances and how the use of these cues differs between participants and depends on information sampling.

NEW & NOTEWORTHY: Combining virtual reality with treadmill walking, we measured the relative importance of visual cues and nonvisual self-motion cues for distance reproduction. Participants used both cues but put more weight on self-motion; the weight given to visual cues tended to correlate with looking at visually informative areas. Participants overshot distances, especially when self-motion was slow; they adjusted steps to self-motion cues but not to visual cues. Our work thus quantifies the multimodal contributions to distance reproduction.
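The phrase "corresponding also to their respective reliabilities" refers to reliability-weighted cue combination, in which each cue's weight is proportional to its inverse variance. The sketch below illustrates that standard computation with made-up numbers; it is not the analysis pipeline of the study.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood cue combination: weights are normalized inverse variances."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_variance = 1.0 / np.sum(1.0 / variances)   # lower than either single-cue variance
    return combined, weights, combined_variance

# e.g., a noisy visual distance estimate of 12 m and a reliable self-motion estimate of 10 m
print(combine_cues([12.0, 10.0], [4.0, 1.0]))
```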


Subjects
Motion Perception; Virtual Reality; Humans; Cues; Walking; Gait
7.
Atten Percept Psychophys ; 85(8): 2731-2750, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37532882

ABSTRACT

The human auditory system is believed to represent regularities inherent in auditory information in internal models. Sounds not matching the standard regularity (deviants) elicit prediction error, alerting the system to information not explainable within currently active models. Here, we examine the widely neglected characteristic of deviants bearing predictive information themselves. In a modified version of the oddball paradigm, using higher-order regularities, we set up different expectations regarding the sound following a deviant. Higher-order regularities were defined by the relation of pitch within tone pairs (rather than the absolute pitch of individual tones). In a deviant detection task, participants listened to oddball sequences including two deviant types following diametrically opposed rules: one occurred mostly in succession (high repetition probability) and the other mostly in isolation (low repetition probability). Participants in Experiment 1 were not informed (naïve), whereas in Experiment 2 they were made aware of the repetition rules. Response times significantly decreased from the first to the second deviant when repetition probability was high, albeit more so in the presence of explicit rule knowledge. There was no evidence of a facilitation effect when repetition probability was low. Significantly more false alarms occurred in response to standards following high-repetition-probability deviants than following low-repetition-probability deviants, but only in participants aware of the repetition rules. These findings provide evidence that deviants violating not only lower- but also higher-order regularities can inform predictions about auditory events. More generally, they confirm the utility of this new paradigm for gathering further insights into the predictive properties of the human brain.


Subjects
Electroencephalography; Evoked Potentials, Auditory; Humans; Evoked Potentials, Auditory/physiology; Acoustic Stimulation; Sound; Brain/physiology
8.
Front Psychol ; 14: 1193822, 2023.
Article in English | MEDLINE | ID: mdl-37425183

ABSTRACT

Word stress is demanding for non-native learners of English, partly because speakers from different backgrounds weight perceptual cues to stress, such as pitch, intensity, and duration, differently. Slavic learners of English, particularly those with a fixed-stress language background such as Czech or Polish, have been shown to be less sensitive to stress in their native and non-native languages. In contrast, German learners of English are rarely discussed in a word-stress context. A comparison of these varieties can reveal differences in the foreign-language processing of speakers from two language families. We use electroencephalography (EEG) to explore group differences in word-stress cue perception between Slavic and German learners of English. Slavic and German advanced English speakers were examined in passive multi-feature oddball experiments, in which they were exposed to the word "impact" as an unstressed standard and as deviants stressed on the first or second syllable through higher pitch, intensity, or duration. The results revealed a robust mismatch negativity (MMN) component of the event-related potential (ERP) in both language groups in response to all conditions, demonstrating sensitivity to stress changes in a non-native language. While both groups showed higher MMN responses to stress changes on the second than on the first syllable, this effect was more pronounced for German than for Slavic participants. We argue that such group differences in non-native English word-stress perception, from the current and previous studies, speak in favor of customizable language technologies and diversified English curricula that compensate for non-native perceptual variation.

9.
IEEE Trans Vis Comput Graph ; 29(5): 2220-2229, 2023 May.
Article in English | MEDLINE | ID: mdl-37027735

ABSTRACT

Using a map in an unfamiliar environment requires identifying correspondences between elements of the map's allocentric representation and elements in egocentric views. Aligning the map with the environment can be challenging. Virtual reality (VR) allows learning about unfamiliar environments in a sequence of egocentric views that correspond closely to the perspectives and views experienced in the actual environment. We compared three methods of preparing for localization and navigation tasks performed by teleoperating a robot in an office building: studying a floor plan of the building and two forms of VR exploration. One group of participants studied a floor plan of the building, a second group explored a faithful VR reconstruction of the building from a normal-sized avatar's perspective, and a third group explored the VR from a giant-sized avatar's perspective. All methods contained marked checkpoints. The subsequent tasks were identical for all groups. The self-localization task required indicating the approximate location of the robot in the environment. The navigation task required navigating between checkpoints. Participants took less time to learn with the giant VR perspective and with the floor plan than with the normal VR perspective. Both VR learning methods significantly outperformed the floor plan in the self-localization task. Navigation was performed more quickly after learning in the giant perspective than after learning in the normal perspective or with the building plan. We conclude that the normal perspective, and especially the giant perspective, in VR are viable options for preparing for teleoperation in unfamiliar environments when a virtual model of the environment is available.

10.
Front Neuroergon ; 4: 1196507, 2023.
Article in English | MEDLINE | ID: mdl-38234486

ABSTRACT

Actions in the real world have immediate sensory consequences. Mimicking these in digital environments is within reach, but technical constraints usually impose a certain latency (delay) between user actions and system responses. It is important to assess the impact of this latency on the users, ideally with measurement techniques that do not interfere with their digital experience. One such unobtrusive technique is electroencephalography (EEG), which can capture the users' brain activity associated with motor responses and sensory events by extracting event-related potentials (ERPs) from the continuous EEG recording. Here we exploit the fact that the amplitude of sensory ERP components (specifically, N1 and P2) reflects the degree to which the sensory event was perceived as an expected consequence of an own action (self-generation effect). Participants (N = 24) elicit auditory events in a virtual-reality (VR) setting by entering codes on virtual keypads to open doors. In a within-participant design, the delay between user input and sound presentation is manipulated across blocks. Occasionally, the virtual keypad is operated by a simulated robot instead, yielding a control condition with externally generated sounds. Results show that N1 (but not P2) amplitude is reduced for self-generated relative to externally generated sounds, and P2 (but not N1) amplitude is modulated by delay of sound presentation in a graded manner. This dissociation between N1 and P2 effects maps back to basic research on self-generation of sounds. We suggest P2 amplitude as a candidate read-out to assess the quality and immersiveness of digital environments with respect to system latency.
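For readers unfamiliar with how components such as N1 and P2 are obtained, the sketch below shows the basic epoch-and-average logic for extracting an ERP from a continuous EEG recording: cut a window around each event, subtract the pre-stimulus baseline, and average across events. Sampling rate, window, and baseline are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def erp(eeg, event_samples, fs=500, tmin=-0.1, tmax=0.4):
    """Average event-locked epochs of a single-channel EEG signal.
    eeg: 1-D array (volts); event_samples: sample indices of the events."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue                                  # skip events too close to the recording edges
        epoch = eeg[s - pre: s + post].copy()
        epoch -= epoch[:pre].mean()                   # baseline correction (pre-stimulus mean)
        epochs.append(epoch)
    return np.mean(epochs, axis=0)                    # ERP: average across epochs

# N1 amplitude can then be read out as the most negative value in roughly the
# 80-120 ms post-stimulus window of the returned average; P2 as the following positive peak.
```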

11.
J Acoust Soc Am ; 152(5): 2758, 2022 11.
Article in English | MEDLINE | ID: mdl-36456271

ABSTRACT

Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with "_" denoting a silent gap, and "A" and "B" sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, a systematic comparison of different tone patterns for two-tone sequences was performed in three different experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported by the participants in ABA_low-high-low than in ABA_high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
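A minimal sketch of how an ABA_ sequence of the kind described above can be synthesized: two sine tones of different frequencies arranged as A-B-A followed by a silent gap, repeated. Frequencies, tone duration, and sampling rate are illustrative assumptions rather than the study's stimulus parameters (and onset/offset ramps, which would normally be applied to avoid clicks, are omitted).

```python
import numpy as np

def aba_sequence(f_a=440.0, f_b=554.0, tone_dur=0.125, n_triplets=20, fs=44100):
    """Concatenate A-B-A-silence triplets ('_' = silent gap) into one waveform."""
    t = np.arange(int(tone_dur * fs)) / fs
    tone_a = np.sin(2 * np.pi * f_a * t)
    tone_b = np.sin(2 * np.pi * f_b * t)
    gap = np.zeros_like(t)
    triplet = np.concatenate([tone_a, tone_b, tone_a, gap])   # one ABA_ cycle
    return np.tile(triplet, n_triplets)

waveform = aba_sequence()   # could be written to a WAV file or played back
```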


Subjects
Auditory Perception; Auscultation; Humans; Sound
12.
Front Hum Neurosci ; 15: 734231, 2021.
Article in English | MEDLINE | ID: mdl-34776906

ABSTRACT

When multiple sound sources are present at the same time, auditory perception is often challenged with disentangling the resulting mixture and focusing attention on the target source. It has been repeatedly demonstrated that background (distractor) sound sources are easier to ignore when their spectrotemporal signature is predictable. Prior evidence suggests that this ability to exploit predictability for foreground-background segregation degrades with age. On a theoretical level, this has been related with an impairment in elderly adults' capabilities to detect certain types of sensory deviance in unattended sound sequences. Yet the link between those two capacities, deviance detection and predictability-based sound source segregation, has not been empirically demonstrated. Here we report on a combined behavioral-EEG study investigating the ability of elderly listeners (60-75 years of age) to use predictability as a cue for sound source segregation, as well as their sensory deviance detection capacities. Listeners performed a detection task on a target stream that can only be solved when a concurrent distractor stream is successfully ignored. We contrast two conditions whose distractor streams differ in their predictability. The ability to benefit from predictability was operationalized as performance difference between the two conditions. Results show that elderly listeners can use predictability for sound source segregation at group level, yet with a high degree of inter-individual variation in this ability. In a further, passive-listening control condition, we measured correlates of deviance detection in the event-related brain potential (ERP) elicited by occasional deviations from the same spectrotemporal pattern as used for the predictable distractor sequence during the behavioral task. ERP results confirmed neural signatures of deviance detection in terms of mismatch negativity (MMN) at group level. Correlation analyses at single-subject level provide no evidence for the hypothesis that deviance detection ability (measured by MMN amplitude) is related to the ability to benefit from predictability for sound source segregation. These results are discussed in the frameworks of sensory deviance detection and predictive coding.

13.
PLoS One ; 16(6): e0252370, 2021.
Article in English | MEDLINE | ID: mdl-34086770

ABSTRACT

In multistability, a constant stimulus induces alternating perceptual interpretations. For many forms of visual multistability, the transition from one interpretation to another ("perceptual switch") is accompanied by a dilation of the pupil. Here we ask whether the same holds for auditory multistability, specifically auditory streaming. Two tones were played in alternation, yielding four distinct interpretations: the tones can be perceived as one integrated percept (single sound source), or as segregated with either tone or both tones in the foreground. We found that the pupil dilates significantly around the time a perceptual switch is reported ("multistable condition"). When participants instead responded to actual stimulus changes that closely mimicked the multistable perceptual experience ("replay condition"), the pupil dilated more around such responses than in multistability. This still held when the data were corrected for the pupil response to the stimulus change as such. Hence, active responses to an exogenous stimulus change trigger a stronger or temporally more confined pupil dilation than responses to an endogenous perceptual switch. In another condition, participants randomly pressed the buttons used for reporting multistability. In Study 1, this "random condition" failed to sufficiently mimic the temporal pattern of multistability. By adapting the instructions, in Study 2 we obtained a response pattern more similar to the multistable condition. In this case, the pupil dilated significantly around the random button presses. Albeit numerically smaller, this pupil response was not significantly different from that in the multistable condition. While there are several possible explanations (related, e.g., to the decision to respond), this underlines the difficulty of isolating a purely perceptual effect in multistability. Our data extend previous findings from visual to auditory multistability. They highlight methodological challenges in interpreting such data and suggest possible approaches to meet them, including a novel stimulus to simulate the experience of perceptual switches in auditory streaming.


Subjects
Auditory Perception/physiology; Acoustic Stimulation/methods; Adult; Female; Humans; Male; Pupil/physiology; Sound; Visual Perception/physiology
14.
Vision Res ; 182: 69-88, 2021 05.
Article in English | MEDLINE | ID: mdl-33610002

ABSTRACT

In multistability, perceptual interpretations ("percepts") of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overlaid drifting gratings, which participants perceived as individual gratings passing in front of each other ("segregated") or as a plaid ("integrated"). We varied the enclosed angle ("opening angle") between the gratings (experiments 1 and 2) and stimulus orientation (experiment 2). The relative number of integrated percepts increased monotonically with opening angle. The point of equality, where half of the percepts were integrated, was at a smaller opening angle at onset than during prolonged viewing. The functional dependence of the relative number of integrated percepts on opening angle showed a steeper curve at onset than during prolonged viewing. Dominance durations of integrated percepts were longer at onset than during prolonged viewing and increased with opening angle. The general pattern persisted when stimuli were rotated (experiment 2), despite some perceptual preference for cardinal motion directions over oblique directions. Analysis of eye movements, specifically the slow phase of the optokinetic nystagmus (OKN), confirmed the veridicality of participants' reports and provided a temporal characterization of percept formation after stimulus onset. Together, our results show that the first percept after stimulus onset exhibits a different dependence on stimulus parameters than percepts during prolonged viewing. This underlines the distinct role of the first percept in multistability.
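The OKN-based readout mentioned above rests on a simple principle: after discarding fast phases (saccade-like resets), the remaining slow-phase eye velocity follows the motion of the currently dominant pattern. The sketch below illustrates that principle in a heavily simplified form; the velocity threshold and sampling rate are illustrative assumptions, and the study's actual analysis is more elaborate.

```python
import numpy as np

def slow_phase_velocity(eye_x, fs=1000, fast_thresh=30.0):
    """Median horizontal eye velocity after discarding fast phases.
    eye_x: horizontal eye position in degrees; fast_thresh: deg/s."""
    vel = np.gradient(eye_x) * fs                 # instantaneous velocity in deg/s
    slow = vel[np.abs(vel) < fast_thresh]         # drop saccade-like fast phases
    return np.median(slow)

# In a plaid/rivalry display, the slow-phase velocity (for both axes, if available) can
# then be compared with the component and pattern motion directions to infer which
# percept is currently dominant.
```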


Subjects
Nystagmus, Optokinetic; Vision, Binocular; Humans; Photic Stimulation
16.
J Speech Lang Hear Res ; 62(1): 177-189, 2019 01 30.
Article in English | MEDLINE | ID: mdl-30534994

ABSTRACT

Purpose: For elderly listeners, listening to one voice surrounded by other voices is more challenging than for young listeners. This could be caused by a reduced ability to use acoustic cues, such as slight differences in onset time, for the segregation of concurrent speech signals. Here, we study whether the ability to benefit from onset asynchrony differs between young (18-33 years) and elderly (55-74 years) listeners. Method: We investigated young (normal hearing, N = 20) and elderly (mildly hearing impaired, N = 26) listeners' ability to segregate two vowels with onset asynchronies ranging from 20 to 100 ms. Behavioral measures were complemented by a specific event-related brain potential component, the object-related negativity, indicating the perception of two distinct auditory objects. Results: Elderly listeners' behavioral performance (identification accuracy for the two vowels) was considerably poorer than young listeners'. However, both age groups showed the same amount of improvement with increasing onset asynchrony. Object-related negativity amplitude also increased similarly in both age groups. Conclusion: Both age groups benefit to a similar extent from onset asynchrony as a cue for concurrent speech segregation, during active (behavioral measurement) as well as passive (electroencephalographic measurement) listening.


Subjects
Speech Acoustics; Speech Perception/physiology; Adult; Age Factors; Aged; Analysis of Variance; Audiometry; Auditory Threshold; Cues; Electroencephalography; Female; Humans; Male; Middle Aged; Young Adult
17.
Hear Res ; 370: 120-129, 2018 12.
Article in English | MEDLINE | ID: mdl-30368055

ABSTRACT

A listener who focusses on a sound source of interest must continuously integrate the sounds emitted by the attended source and ignore the sounds emitted by the remaining sources in the auditory scene. Little is known about how the ignored sound sources in the background are mentally represented after the source of interest has formed the perceptual foreground. This is due to a key methodological challenge: the background representation is by definition not overtly reportable. Here we developed a paradigm based on event-related brain potentials (ERPs) to assess the mental representation of background sounds. Participants listened to sequences of three repeatedly presented tones arranged in an ascending order (low, middle, high frequency). They were instructed to detect intensity deviants in one of the tones, creating the perceptual foreground. The remaining two background tones contained timing and location deviants. Those deviants were set up such that mismatch negativity (MMN) components would be elicited in distinct ways if the background was decomposed into two separate sound streams (background segregation) or if it was not further decomposed (background integration). Results provide MMN-based evidence for background segregation and integration in parallel. This suggests that mental representations of background integration and segregation can be concurrently available, and that collecting empirical evidence for only one of these background organization alternatives might lead to erroneous conclusions.


Subjects
Attention; Auditory Cortex/physiology; Auditory Perception; Evoked Potentials, Auditory; Signal Detection, Psychological; Acoustic Stimulation; Adult; Brain Mapping/methods; Electroencephalography; Female; Humans; Male; Perceptual Masking; Reaction Time; Time Factors; Young Adult
18.
Neuroscience ; 389: 19-29, 2018 10 01.
Article in English | MEDLINE | ID: mdl-28735101

ABSTRACT

In everyday listening environments, a main task for our auditory system is to follow one out of multiple speakers talking simultaneously. The present study was designed to find electrophysiological indicators of two central processes involved: segregating the speech mixture into distinct speech sequences corresponding to the two speakers, and then attending to one of these sequences. We generated multistable speech stimuli that were set up to create ambiguity as to whether only one or two speakers are talking. Thereby we were able to investigate three perceptual alternatives (no segregation; segregated with speaker A in the foreground; segregated with speaker B in the foreground) without any confounding stimulus changes. Participants listened to a continuously repeating sequence of syllables, which were uttered alternately by two human speakers, and indicated whether they perceived the sequence as an inseparable mixture or as originating from two separate speakers. In the latter case, they distinguished which speaker was in their attentional foreground. Our data show a long-lasting event-related potential (ERP) modulation starting 130 ms after stimulus onset, which can be explained by the perceptual organization of the two speech sequences into attended foreground and ignored background streams. Our paradigm extends previous work with pure-tone sequences toward speech stimuli and adds the possibility to obtain neural correlates of the difficulty of segregating a speech mixture into distinct streams.


Subjects
Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Electroencephalography; Female; Humans; Male; Psychoacoustics; Young Adult
19.
Vision Res ; 133: 121-129, 2017 04.
Article in English | MEDLINE | ID: mdl-28237813

ABSTRACT

When distinct stimuli are presented to the two eyes, their mental representations alternate in awareness. Here, such "binocular rivalry" was used to investigate whether audio-visual associations bias visual perception. To induce two arbitrary associations, each between a tone and a grating of a specific color and motion direction, observers were required to respond whenever this combination was presented, but not for other tone-grating combinations. After about 20 min of this induction phase, each of the gratings was presented to one eye to induce rivalry, while either of the two tones or no tone was played. Observers were asked to watch the rivaling stimuli and listen to the tones. The observer's dominant percept was assessed throughout by measuring the optokinetic nystagmus (OKN), whose slow phase follows the direction of the currently dominant grating. We found that perception in rivalry was affected by the concurrently played tone. Results suggest a bias towards the grating that had been associated with the concurrently presented tone and prolonged dominance durations for this grating compared to the other. Numerically, conditions without tone fell in between for measures of bias and dominance duration. Our data show that a rapidly acquired arbitrary audio-visual association biases visual perception. Unlike previously reported cross-modal interactions in rivalry, this effect can neither be explained by a pure attentional (dual-task) effect, nor does it require a fixed physical or semantic relation between the auditory and visual stimulus. This suggests that audio-visual associations that are quickly formed by associative learning may affect visual representations directly.


Subjects
Auditory Perception/physiology; Functional Laterality/physiology; Vision, Binocular/physiology; Visual Perception/physiology; Adult; Analysis of Variance; Association Learning/physiology; Female; Humans; Male; Photic Stimulation/methods; Young Adult
20.
J Acoust Soc Am ; 141(1): 265, 2017 01.
Article in English | MEDLINE | ID: mdl-28147594

ABSTRACT

Empirical research on the sequential decomposition of an auditory scene primarily relies on interleaved sound mixtures of only two tone sequences (e.g., ABAB…). This oversimplifies the sound decomposition problem by limiting the number of putative perceptual organizations. The current study used a sound mixture composed of three different tones (ABCABC…) that could be perceptually organized in many different ways. Participants listened to these sequences and reported their subjective perception by continuously choosing one out of 12 visually presented perceptual organization alternatives. Different levels of frequency and spatial separation were implemented to check whether participants' perceptual reports would be systematic and plausible. As hypothesized, while perception switched back and forth in each condition between various perceptual alternatives (multistability), spatial as well as frequency separation generally raised the proportion of segregated and reduced the proportion of integrated alternatives. During segregated percepts, in contrast to the hypothesis, many participants had a tendency to perceive two streams in the foreground, rather than reporting alternatives with a clear foreground-background differentiation. Finally, participants perceived the organization with intermediate feature values (e.g., middle tones of the pattern) segregated in the foreground slightly less often than similar alternatives with outer feature values (e.g., higher tones).
