Results 1 - 20 of 71
1.
Soft Matter ; 20(12): 2831-2839, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38456340

ABSTRACT

Nanoindentation cycles measured with an atomic force microscope on hydrated collagen fibrils exhibit a rate-independent hysteresis with return point memory. This previously unknown energy dissipation mechanism describes in unified form elastoplastic indentation, capillary adhesion, and surface leveling at indentation velocities smaller than 1 µm s⁻¹, where viscous friction is negligible. A generic hysteresis model, based on force-distance data measured during one large approach-retract cycle, predicts the force (output) and the dissipated energy for arbitrary indentation trajectories (input). While both quantities are rate independent, they do depend nonlinearly on indentation history and on indentation amplitude.
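
The abstract does not give the model equations, so the following is only a minimal sketch of one generic rate-independent hysteresis operator with return-point memory, a Prandtl-Ishlinskii superposition of play operators; the thresholds, weights, and indentation trajectory are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def play_operator(x, r, y0=0.0):
    """Rate-independent play (backlash) operator with threshold r."""
    y = y0
    out = np.empty_like(x, dtype=float)
    for i, xi in enumerate(x):
        # Clamp the previous output into the band [xi - r, xi + r]:
        # this is what produces rate independence and return-point memory.
        y = min(max(y, xi - r), xi + r)
        out[i] = y
    return out

def prandtl_ishlinskii(x, thresholds, weights):
    """Weighted superposition of play operators (a generic hysteresis model)."""
    return sum(w * play_operator(x, r) for w, r in zip(weights, thresholds))

# Hypothetical indentation trajectory (nm): a large cycle with smaller sub-cycles.
t = np.linspace(0.0, 4.0 * np.pi, 2000)
indentation = 50.0 * np.abs(np.sin(t)) * (1.0 - 0.3 * np.sin(5.0 * t))

# Illustrative thresholds (nm) and weights (nN/nm); in practice these could be
# identified from the force-distance data of one large approach-retract cycle.
thresholds = [0.0, 5.0, 10.0, 20.0, 30.0]
weights = [0.8, 0.5, 0.3, 0.2, 0.1]

force = prandtl_ishlinskii(indentation, thresholds, weights)

# Dissipated energy = area enclosed by the force-indentation loop.
dissipated = abs(np.trapz(force, indentation))
print(f"dissipated energy (arbitrary units): {dissipated:.1f}")
```

Once fitted to a single large approach-retract cycle, such an operator can then be evaluated on arbitrary indentation trajectories, which is the spirit of the prediction described in the abstract.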

2.
Psychophysiology ; 61(6): e14545, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38366704

ABSTRACT

The auditory system has a remarkable ability to rapidly encode auditory regularities. Evidence comes from the popular oddball paradigm, in which frequent (standard) sounds are occasionally exchanged for rare deviant sounds, which then elicit signs of prediction error based on their unexpectedness (e.g., MMN and P3a). Here, we examine a widely neglected aspect of deviants: they can themselves carry predictive information. Naive participants listened to sound sequences constructed according to a new, modified version of the oddball paradigm that included two types of deviants following diametrically opposed rules: one deviant sound occurred mostly in pairs (repetition rule), the other mostly in isolation (non-repetition rule). Due to this manipulation, the sound following a first deviant (either the same deviant or a standard) was either predictable or unpredictable, depending on the conditional probability associated with the preceding deviant sound. Our behavioral results from an active deviant detection task replicate previous findings that deviant repetition rules (based on conditional probability) can be extracted when behaviorally relevant. Our electrophysiological findings, obtained in a passive listening setting, indicate that conditional probability also translates into differential processing at the P3a level. However, the MMN was confined to global deviants and was not sensitive to conditional probability. This suggests that rarely encountered rules are taken into account by higher-level processing concerned with stimulus selection and/or evaluation (reflected in P3a), but not by lower-level sensory processing (reflected in MMN).
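
The paradigm's exact probabilities are not stated in this abstract; the sketch below (Python, with assumed values) only illustrates how a sequence with two deviant types obeying a repetition rule and a non-repetition rule could be generated and checked.

```python
import random

random.seed(1)

P_DEVIANT = 0.1          # assumed probability that a trial starts a deviant event
P_REPEAT_GIVEN_D1 = 0.8  # deviant 1 mostly occurs in pairs (repetition rule)
P_REPEAT_GIVEN_D2 = 0.2  # deviant 2 mostly occurs in isolation (non-repetition rule)

def make_sequence(n_sounds):
    """Generate a standard/deviant sequence with conditional repetition rules."""
    seq = []
    while len(seq) < n_sounds:
        if random.random() < P_DEVIANT:
            deviant = random.choice(["D1", "D2"])
            seq.append(deviant)
            p_repeat = P_REPEAT_GIVEN_D1 if deviant == "D1" else P_REPEAT_GIVEN_D2
            # The sound after a first deviant is predictable (or not) from the
            # conditional probability associated with that deviant type.
            seq.append(deviant if random.random() < p_repeat else "S")
        else:
            seq.append("S")
    return seq[:n_sounds]

seq = make_sequence(2000)
for d in ("D1", "D2"):
    firsts = [i for i in range(len(seq) - 1)
              if seq[i] == d and (i == 0 or seq[i - 1] != d)]
    repeats = sum(seq[i + 1] == d for i in firsts)
    print(d, "repetition rate after a first occurrence:", round(repeats / len(firsts), 2))
```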


Subjects
Auditory Perception, Electroencephalography, P300 Event-Related Potentials, Auditory Evoked Potentials, Humans, Female, Male, Young Adult, P300 Event-Related Potentials/physiology, Adult, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation
3.
J Vis ; 24(5): 16, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819806

ABSTRACT

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-object) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
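
The study's feature set and classifier settings are not given here; this is a minimal Python sketch of the moment-by-moment pipeline using scikit-learn's SVC, with synthetic eye-movement features and simulated auditory reports standing in for real data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per timepoint.
# Features could be, e.g., smoothed horizontal eye velocity and OKN gain;
# labels are the visual interpretation (0 = integrated, 1 = segregated).
n = 1200
visual_label = (rng.random(n) < 0.5).astype(int)
features = np.c_[visual_label + 0.8 * rng.standard_normal(n),
                 0.5 * visual_label + 0.8 * rng.standard_normal(n)]

# Decode the visual percept from eye data, cross-validated, timepoint by timepoint.
clf = SVC(kernel="rbf", C=1.0)
visual_decoded = cross_val_predict(clf, features, visual_label, cv=5)

# Simulated auditory reports, weakly coupled to the visual percept for illustration.
auditory_report = np.where(rng.random(n) < 0.65, visual_label, 1 - visual_label)

# Moment-by-moment coupling: how often both modalities share an interpretation,
# compared against chance derived from the two marginal distributions.
coupling = np.mean(visual_decoded == auditory_report)
chance = (np.mean(visual_decoded) * np.mean(auditory_report)
          + (1 - np.mean(visual_decoded)) * (1 - np.mean(auditory_report)))
print(f"observed coupling: {coupling:.2f}, chance level: {chance:.2f}")
```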


Subjects
Auditory Perception, Photic Stimulation, Visual Perception, Humans, Auditory Perception/physiology, Visual Perception/physiology, Adult, Male, Female, Photic Stimulation/methods, Young Adult, Eye Movements/physiology, Acoustic Stimulation/methods
4.
J Vis ; 24(6): 7, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38848099

ABSTRACT

Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this advantage results from higher-order statistics rather than from semantics or layout.
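
As a concrete illustration of one of the background types, the sketch below synthesizes a 1/f ("pink") noise image, which preserves only low-level (amplitude-spectrum) statistics; the image size and spectral exponent are assumptions, and this is not the stimulus code used in the study.

```python
import numpy as np

def pink_noise_image(height, width, exponent=1.0, seed=0):
    """Synthesize a noise image with a 1/f**exponent amplitude spectrum."""
    rng = np.random.default_rng(seed)
    # Radial spatial-frequency grid (cycles per image).
    fy = np.fft.fftfreq(height)[:, None]
    fx = np.fft.fftfreq(width)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    amplitude = 1.0 / f**exponent      # 1/f falloff, as in natural scenes
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(height, width))
    spectrum = amplitude * np.exp(1j * phase)
    # Taking the real part enforces a real-valued image (Hermitian-symmetry shortcut).
    img = np.real(np.fft.ifft2(spectrum))
    # Normalize to [0, 1] for display.
    img -= img.min()
    img /= img.max()
    return img

background = pink_noise_image(600, 800)
print(background.shape, background.min(), background.max())
```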


Subjects
Attention, Photic Stimulation, Reaction Time, Semantics, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Reaction Time/physiology, Photic Stimulation/methods, Visual Pattern Recognition/physiology, Electroencephalography/methods, Visual Evoked Potentials/physiology
5.
J Neurophysiol ; 130(4): 1028-1040, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37701952

ABSTRACT

When humans walk, it is important for them to have some measure of the distance they have traveled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contribution of visual cues and nonvisual self-motion cues to distance reproduction when walking on a treadmill through a virtual environment by separately manipulating the speed of the treadmill belt and of the virtual environment. Using mobile eye tracking, we also investigate how our participants sampled the visual information through gaze. We show that, as predicted, both modalities affected how participants (N = 28) reproduced a distance. Participants weighted nonvisual self-motion cues more strongly than visual cues, in line with the cues' respective reliabilities, but with some interindividual variability. Those who looked more toward the parts of the visual scene that contained cues to speed and distance also tended to weight visual information more strongly, although this correlation was nonsignificant, and participants generally directed their gaze toward visually informative areas of the scene less than expected. As measured by motion capture, participants adjusted their gait patterns to the treadmill speed but not to the walked distance. In sum, we show in a naturalistic virtual environment how humans use different sensory modalities when reproducing distances and how the use of these cues differs between participants and depends on information sampling.

NEW & NOTEWORTHY: Combining virtual reality with treadmill walking, we measured the relative importance of visual cues and nonvisual self-motion cues for distance reproduction. Participants used both cues but put more weight on self-motion; the weight on visual cues tended to correlate with looking at visually informative areas. Participants overshot distances, especially when self-motion was slow; they adjusted their steps to self-motion cues but not to visual cues. Our work thus quantifies the multimodal contributions to distance reproduction.
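
The abstract describes weighting visual and self-motion cues in line with their reliabilities; the sketch below shows the standard reliability-weighted (maximum-likelihood) cue-combination rule with made-up variances, not the study's fitted values.

```python
# Standard reliability-weighted cue combination for distance reproduction.
# Reliability = 1 / variance; the combined estimate weights each cue by its
# relative reliability. Numbers below are illustrative, not fitted values.

def combine_cues(d_visual, var_visual, d_selfmotion, var_selfmotion):
    r_v = 1.0 / var_visual
    r_s = 1.0 / var_selfmotion
    w_v = r_v / (r_v + r_s)            # weight on the visual cue
    w_s = 1.0 - w_v                    # weight on nonvisual self-motion
    d_hat = w_v * d_visual + w_s * d_selfmotion
    var_hat = 1.0 / (r_v + r_s)        # the combined estimate is more reliable
    return d_hat, var_hat, w_v

# Example: vision suggests 10 m, self-motion suggests 12 m; self-motion is assumed
# to be the more reliable cue, so it dominates the combined estimate.
d_hat, var_hat, w_v = combine_cues(10.0, var_visual=4.0,
                                   d_selfmotion=12.0, var_selfmotion=1.0)
print(f"combined distance: {d_hat:.2f} m, visual weight: {w_v:.2f}")
```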


Subjects
Motion Perception, Virtual Reality, Humans, Cues (Psychology), Walking, Gait
6.
J Acoust Soc Am ; 152(5): 2758, 2022 11.
Article in English | MEDLINE | ID: mdl-36456271

ABSTRACT

Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with "_" denoting a silent gap and "A" and "B" denoting sine tones that differ in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, a systematic comparison of different tone patterns for two-tone sequences was performed in three different experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported by the participants in ABA_ low-high-low than in ABA_ high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
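
To make the tone patterns concrete, here is a minimal sketch that synthesizes ABAB and ABA_ sequences as audio; the tone duration, level, ramps, and the frequency separation in semitones are assumptions chosen only for illustration.

```python
import numpy as np

SR = 44100          # sample rate (Hz)
TONE_DUR = 0.075    # assumed tone duration (s)
A_FREQ = 440.0      # assumed low-tone frequency (Hz)
SEMITONES = 6       # assumed A-B frequency separation

def tone(freq, dur=TONE_DUR, sr=SR):
    t = np.arange(int(dur * sr)) / sr
    ramp = np.minimum(1.0, t / 0.005)        # 5 ms onset ramp
    env = ramp * ramp[::-1]                  # mirrored for the offset ramp
    return 0.5 * env * np.sin(2.0 * np.pi * freq * t)

def make_sequence(pattern, n_repeats, low=A_FREQ, semitones=SEMITONES):
    """pattern, e.g. 'ABAB' or 'ABA_'; '_' is a silent gap of one tone duration."""
    high = low * 2.0 ** (semitones / 12.0)
    lookup = {"A": tone(low), "B": tone(high), "_": np.zeros(int(TONE_DUR * SR))}
    return np.concatenate([lookup[ch] for ch in pattern * n_repeats])

aba_gap = make_sequence("ABA_", n_repeats=50)   # low-high-low triplets
abab = make_sequence("ABAB", n_repeats=50)
print(aba_gap.shape, abab.shape)
```

Swapping which frequency is assigned to "A" and "B" turns the low-high-low triplets into the high-low-high arrangement compared in the abstract.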


Subjects
Auditory Perception, Auscultation, Humans, Sound
7.
J Vis ; 17(1): 34, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28129418

ABSTRACT

In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience ("no-report paradigms"). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (1008 Hz) or low-pitch (400 Hz) tones. Interstimulus intervals were either fixed per sequence (Experiments 1 and 2) or random with tones alternating (Experiment 3). A horizontally drifting grating was presented to each eye; to induce binocular rivalry, gratings had distinct colors and motion directions. To associate each grating with one tone sequence, a pattern on the grating jumped vertically whenever the respective tone occurred. We found that the direction of the optokinetic nystagmus (OKN)-induced by the visually dominant grating-could be used to decode the tone (high/low) that was perceived in the foreground well above chance. This OKN-based readout improved after observers had gained experience with the auditory task (Experiments 1 and 2) and for simpler auditory tasks (Experiment 3). We found no evidence that the visual stimulus affected auditory multistability. Although decoding performance is still far from perfect, our paradigm may eventually provide a continuous estimate of the currently dominant percept in auditory multistability.
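
A minimal sketch of the decoding idea, under the assumption (for this illustration only) that the rightward-drifting grating is paired with the high-pitch sequence: the sign of the smoothed horizontal slow-phase eye velocity indicates the visually dominant grating and hence the foreground tone. Window length, velocity threshold, and the synthetic data are assumptions.

```python
import numpy as np

def decode_foreground(eye_x, sr=500, win_s=1.0):
    """Decode 'high' vs 'low' foreground from horizontal eye position (deg).

    Assumption for this sketch: rightward drift is paired with the high-pitch
    sequence, so positive mean slow-phase velocity maps to 'high'.
    """
    velocity = np.gradient(eye_x) * sr                 # deg/s
    # Crude removal of fast, saccade-like samples before averaging.
    slow = np.where(np.abs(velocity) < 50.0, velocity, np.nan)
    win = int(win_s * sr)
    labels = []
    for i in range(len(slow) // win):
        v = np.nanmean(slow[i * win:(i + 1) * win])
        labels.append("high" if v > 0 else "low")
    return labels

# Synthetic demo: 10 s of rightward OKN drift followed by 10 s of leftward drift.
sr = 500
drift = np.concatenate([np.full(10 * sr, 5.0), np.full(10 * sr, -5.0)])   # deg/s
eye_x = np.cumsum(drift) / sr + np.random.default_rng(0).normal(0, 0.01, 20 * sr)
print(decode_foreground(eye_x, sr=sr))
```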


Subjects
Auditory Perception/physiology, Optokinetic Nystagmus/physiology, Sound Localization/physiology, Visual Disparity/physiology, Binocular Vision/physiology, Adult, Female, Humans, Male, Young Adult
8.
J Acoust Soc Am ; 141(1): 265, 2017 01.
Article in English | MEDLINE | ID: mdl-28147594

ABSTRACT

Empirical research on the sequential decomposition of an auditory scene primarily relies on interleaved sound mixtures of only two tone sequences (e.g., ABAB…). This oversimplifies the sound decomposition problem by limiting the number of putative perceptual organizations. The current study used a sound mixture composed of three different tones (ABCABC…) that could be perceptually organized in many different ways. Participants listened to these sequences and reported their subjective perception by continuously choosing one out of 12 visually presented perceptual organization alternatives. Different levels of frequency and spatial separation were implemented to check whether participants' perceptual reports would be systematic and plausible. As hypothesized, while perception switched back and forth in each condition between various perceptual alternatives (multistability), spatial as well as frequency separation generally raised the proportion of segregated and reduced the proportion of integrated alternatives. During segregated percepts, in contrast to the hypothesis, many participants had a tendency to perceive two streams in the foreground, rather than reporting alternatives with a clear foreground-background differentiation. Finally, participants perceived the organization with intermediate feature values (e.g., middle tones of the pattern) segregated in the foreground slightly less often than similar alternatives with outer feature values (e.g., higher tones).

9.
Hum Brain Mapp ; 37(2): 704-16, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26583355

ABSTRACT

Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas.


Subjects
Brain/physiology, Semantics, Speech Perception/physiology, Acoustic Stimulation, Adult, Brain Mapping, Cerebrovascular Circulation, Female, Humans, Language Tests, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Oxygen/blood, Reaction Time, Sound Spectrography, Young Adult
10.
J Acoust Soc Am ; 139(4): 1762, 2016 04.
Article in English | MEDLINE | ID: mdl-27106324

ABSTRACT

While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm.


Subjects
Acoustic Stimulation/methods, Audiometry/methods, Auditory Perception, Adolescent, Adult, Bias, Comprehension, Female, Humans, Male, Physiological Pattern Recognition, Pilot Projects, Pitch Perception, Reproducibility of Results, Volition, Young Adult
11.
Dev Neurosci ; 37(2): 172-81, 2015.
Article in English | MEDLINE | ID: mdl-25721916

ABSTRACT

Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds.
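
As a generic illustration of how such an effect is quantified, the sketch below averages a mistuned-minus-tuned difference wave in the reported 360-400 ms window; the sampling rate, trial counts, and synthetic data are assumptions, not the study's recording parameters.

```python
import numpy as np

SR = 250                      # assumed sampling rate (Hz)
EPOCH = (-0.1, 0.6)           # epoch relative to sound onset (s)
times = np.arange(EPOCH[0], EPOCH[1], 1.0 / SR)

rng = np.random.default_rng(0)
n_trials = 300

def simulate_epochs(orn_amp):
    """Synthetic single-trial epochs (trials x time) with an ORN-like deflection."""
    noise = rng.normal(0.0, 5.0, size=(n_trials, times.size))
    orn = orn_amp * np.exp(-((times - 0.38) / 0.02) ** 2)   # peak near 380 ms
    return noise + orn

tuned = simulate_epochs(orn_amp=0.0)
mistuned = simulate_epochs(orn_amp=-2.0)                    # negative displacement

# Grand-average difference wave and mean amplitude in the 360-400 ms window.
diff_wave = mistuned.mean(axis=0) - tuned.mean(axis=0)
window = (times >= 0.36) & (times <= 0.40)
print(f"mean difference 360-400 ms: {diff_wave[window].mean():.2f} µV")
```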


Subjects
Auditory Perception/physiology, Child Development/physiology, Evoked Potentials/physiology, Electroencephalography, Female, Humans, Infant, Newborn Infant, Male
13.
J Neurosci ; 33(20): 8633-9, 2013 May 15.
Article in English | MEDLINE | ID: mdl-23678108

ABSTRACT

The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.


Subjects
Auditory Perception/physiology, Brain/physiology, Hearing/physiology, Sound, Acoustic Stimulation, Adult, Brain Mapping, Electroencephalography, Auditory Evoked Potentials/physiology, Female, Humans, Male, Predictive Value of Tests, Time Factors, Young Adult
14.
Eur J Neurosci ; 39(2): 308-18, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24236753

ABSTRACT

Does temporal regularity facilitate prediction in audition? To test this, we recorded human event-related potentials to frequent standard tones and infrequent pitch deviant tones, pre-attentively delivered within isochronous and anisochronous (20% onset jitter) rapid sequences. Deviant tones were repeated, either with high or low probability. Standard tone repetition sets a first-order prediction, which is violated by deviant tone onset, leading to a first-order prediction error response (Mismatch Negativity). The response to highly probable deviant repetitions is, however, attenuated relative to less probable repetitions, reflecting the formation of higher-order sensory predictions. Results show that temporal regularity is required for higher-order predictions, but does not modulate first-order prediction error responses. Inverse solution analyses (Variable Resolution Electrical Tomography; VARETA) localized the error response attenuation to posterior regions of the left superior temporal gyrus. In a control experiment with a slower stimulus rate, we found no evidence for higher-order predictions, and again no effect of temporal information on first-order prediction error. We conclude that: (i) temporal regularity facilitates the establishing of higher-order sensory predictions, i.e. 'knowing what next', in fast auditory sequences; (ii) first-order prediction error relies predominantly on stimulus feature mismatch, reflecting the adaptive fit of fast deviance detection processes.
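
A minimal sketch of the two stimulus manipulations described above: onset times that are either isochronous or jittered by up to 20% of the stimulus-onset asynchrony, and deviants that repeat with high or low probability. The SOA and probabilities are assumed values, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

SOA = 0.15          # assumed base stimulus-onset asynchrony (s) for a rapid sequence
N = 1000
P_DEVIANT = 0.1

def onsets(isochronous=True, jitter_frac=0.2):
    """Tone onsets: regular, or with uniform onset jitter of up to 20% of the SOA."""
    base = np.arange(N) * SOA
    if isochronous:
        return base
    return base + rng.uniform(-jitter_frac * SOA, jitter_frac * SOA, size=N)

def labels(p_repeat):
    """Standards ('S') with deviants ('D') that repeat with probability p_repeat."""
    seq = []
    i = 0
    while i < N:
        if rng.random() < P_DEVIANT:
            seq.append("D")
            if i + 1 < N and rng.random() < p_repeat:
                seq.append("D")
                i += 1
        else:
            seq.append("S")
        i += 1
    return seq[:N]

iso_onsets = onsets(isochronous=True)
jit_onsets = onsets(isochronous=False)
high_prob = labels(p_repeat=0.8)     # highly probable deviant repetitions
low_prob = labels(p_repeat=0.2)      # less probable repetitions
print(np.diff(iso_onsets)[:3], np.diff(jit_onsets)[:3], high_prob[:12])
```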


Subjects
Psychological Anticipation/physiology, Auditory Perception/physiology, Brain/physiology, Time Perception/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Evoked Potentials, Female, Humans, Male, Probability Learning, Task Performance and Analysis, Time Factors, Young Adult
15.
PLoS Comput Biol ; 9(3): e1002925, 2013.
Article in English | MEDLINE | ID: mdl-23516340

ABSTRACT

Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives-a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
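
The full model is specified in the paper; the toy below only illustrates the core idea of competition between pattern representations whose strengths reflect predictive success, mutual inhibition, adaptation, and noise, which is enough to produce spontaneous switching. All constants are assumptions, and this is not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy competition between two representations, e.g. an "integrated" ABAB pattern
# and a "segregated" A_A_ pattern. Strengths grow with assumed predictive success,
# inhibit each other, adapt while dominant, and receive noise.
dt = 0.01
steps = 20000
predictive_success = np.array([1.0, 0.9])   # assumed average success of each pattern
inhibition, adaptation_rate, noise_sd = 2.0, 0.05, 0.15

strength = np.array([0.5, 0.4])
adaptation = np.zeros(2)
dominant = np.zeros(steps, dtype=int)

for t in range(steps):
    other = strength[::-1]
    drive = predictive_success - inhibition * other - adaptation
    strength += (dt * (-strength + np.maximum(drive, 0.0))
                 + noise_sd * np.sqrt(dt) * rng.standard_normal(2))
    strength = np.clip(strength, 0.0, None)
    winner = int(np.argmax(strength))
    adaptation += dt * adaptation_rate * ((np.arange(2) == winner) - adaptation)
    dominant[t] = winner

switches = np.count_nonzero(np.diff(dominant))
print(f"dominance of pattern 0: {np.mean(dominant == 0):.2f}, switches: {switches}")
```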


Subjects
Auditory Perception/physiology, Neurological Models, Acoustic Stimulation, Adolescent, Adult, Algorithms, Computer Simulation, Female, Humans, Male, Statistical Models, Sound
16.
Brain Topogr ; 27(4): 565-77, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24271978

ABSTRACT

Predictive accounts of perception have received increasing attention in the past 20 years. Detecting violations of auditory regularities, as reflected by the Mismatch Negativity (MMN) auditory event-related potential, is amongst the phenomena seamlessly fitting this approach. Largely based on the MMN literature, we propose a psychological conceptual framework called the Auditory Event Representation System (AERS), which rests on the assumption that auditory regularity violation detection and the formation of auditory perceptual objects are based on the same predictive regularity representations. Based on this notion, a computational model of auditory stream segregation, called CHAINS, has been developed. In CHAINS, the auditory sensory event representation of each incoming sound is evaluated as a possible continuation of likely combinations of the preceding sounds in the sequence, thus providing alternative interpretations of the auditory input. Detecting repeating patterns allows upcoming sound events to be predicted, thus providing a test of, and potential support for, the corresponding interpretation. Alternative interpretations continuously compete for perceptual dominance. In this paper, we briefly describe AERS and deduce some general constraints from this conceptual model. We then go on to illustrate how these constraints are computationally specified in CHAINS.


Subjects
Auditory Perception/physiology, Brain/physiology, Neurological Models, Computer Simulation, Auditory Evoked Potentials, Humans
17.
Psychol Res ; 78(3): 361-78, 2014.
Article in English | MEDLINE | ID: mdl-24553776

ABSTRACT

Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Reaction Time/physiology, Young Adult
18.
J Acoust Soc Am ; 135(3): 1392-405, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606277

ABSTRACT

While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences support stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, separately, each of the interleaved sequences was created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or they could be scrambled or, in case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that the regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.


Subjects
Cues (Psychology), Music, Periodicity, Pitch Perception, Time Perception, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Audiometry, Female, Humans, Male, Pitch Discrimination, Psychoacoustics, Time Factors, Young Adult
19.
Sci Rep ; 14(1): 8858, 2024 04 17.
Article in English | MEDLINE | ID: mdl-38632303

ABSTRACT

It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye-tracking using an abstract alert-signal paradigm. Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When visual salience (contrast) or auditory salience (tone intensity) of the alert were increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert-signals counteracts their more efficient attentional guidance. The design of alert signals must be adapted to a "sweet spot" that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.


Subjects
Attention, Visual Perception, Humans, Reaction Time/physiology, Attention/physiology, Visual Perception/physiology, Records, Psychological Discrimination
20.
Front Psychol ; 14: 1193822, 2023.
Article in English | MEDLINE | ID: mdl-37425183

ABSTRACT

Word stress is demanding for non-native learners of English, partly because speakers from different backgrounds weight perceptual cues to stress, such as pitch, intensity, and duration, differently. Slavic learners of English, particularly those with a fixed-stress language background such as Czech and Polish, have been shown to be less sensitive to stress in their native and non-native languages. In contrast, German English learners are rarely discussed in a word stress context. A comparison of these varieties can reveal differences in the foreign language processing of speakers from two language families. We use electroencephalography (EEG) to explore group differences in word stress cue perception between Slavic and German learners of English. Slavic and German advanced English speakers were examined in passive multi-feature oddball experiments, where they were exposed to the word "impact" as an unstressed standard and as deviants stressed on the first or second syllable through higher pitch, intensity, or duration. The results revealed a robust Mismatch Negativity (MMN) component of the event-related potential (ERP) in both language groups in response to all conditions, demonstrating sensitivity to stress changes in a non-native language. While both groups showed higher MMN responses to stress changes to the second than the first syllable, this effect was more pronounced for German than for Slavic participants. Such group differences in non-native English word stress perception, from the current and previous studies, are argued to speak in favor of customizable language technologies and diversified English curricula that compensate for non-native perceptual variation.
