Results 1 - 20 of 92

1.
Cogn Res Princ Implic ; 9(1): 35, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38834918

ABSTRACT

Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners' language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a "release from masking" from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a "one-man bilingual cocktail party" selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin-English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin-English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the "cocktail party" paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The "one-man bilingual cocktail party" establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin-English bilinguals.
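The target-to-masker ratio (TMR) manipulations described above boil down to rescaling one talker relative to the other before mixing. The sketch below illustrates only that step; it is a minimal, hypothetical example (the function name and peak normalization are the editor's assumptions, not the study's stimulus code).

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Mix target and masker speech at a given target-to-masker ratio (dB).

    Both inputs are 1-D arrays at the same sampling rate; the masker is
    rescaled so the RMS ratio of target to masker equals tmr_db.
    """
    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    # Gain so that 20*log10(rms(target) / rms(gain * masker)) == tmr_db
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    mixture = target + gain * masker
    return mixture / np.max(np.abs(mixture))  # normalize to avoid clipping
```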


Subjects
Attention, Multilingualism, Speech Perception, Humans, Speech Perception/physiology, Adult, Female, Male, Young Adult, Attention/physiology, Perceptual Masking/physiology, Psycholinguistics
2.
bioRxiv ; 2024 May 26.
Article in English | MEDLINE | ID: mdl-38826304

ABSTRACT

Efficient behavior is supported by humans' ability to rapidly recognize acoustically distinct sounds as members of a common category. Within auditory cortex, there are critical unanswered questions regarding the organization and dynamics of sound categorization. Here, we performed intracerebral recordings in the context of epilepsy surgery as 20 patient-participants listened to natural sounds. We built encoding models to predict neural responses using features of these sounds extracted from different layers within a sound-categorization deep neural network (DNN). This approach yielded highly accurate models of neural responses throughout auditory cortex. The complexity of a cortical site's representation (measured by the depth of the DNN layer that produced the best model) was closely related to its anatomical location, with shallow, middle, and deep layers of the DNN associated with core (primary auditory cortex), lateral belt, and parabelt regions, respectively. Smoothly varying gradients of representational complexity also existed within these regions, with complexity increasing along a posteromedial-to-anterolateral direction in core and lateral belt, and along posterior-to-anterior and dorsal-to-ventral dimensions in parabelt. When we estimated the time window over which each recording site integrates information, we found shorter integration windows in core relative to lateral belt and parabelt. Lastly, we found a relationship between the length of the integration window and the complexity of information processing within core (but not lateral belt or parabelt). These findings suggest hierarchies of timescales and processing complexity, and their interrelationship, represent a functional organizational principle of the auditory stream that underlies our perception of complex, abstract auditory information.
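As an illustration of the encoding-model logic summarized above (predicting a recording site's response from each DNN layer's features and taking the best-predicting layer as that site's representational depth), here is a hedged sketch using ridge regression. The data layout, scikit-learn components, and cross-validation settings are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def best_layer_for_site(layer_features, site_response, alphas=np.logspace(-2, 5, 8)):
    """Return the DNN layer whose features best predict one site's response.

    layer_features : dict mapping layer name -> (n_sounds, n_features) array
    site_response  : (n_sounds,) array of the site's mean response per sound
    """
    scores = {}
    for name, X in layer_features.items():
        model = RidgeCV(alphas=alphas)
        # 5-fold cross-validated prediction accuracy (R^2) for this layer
        scores[name] = cross_val_score(model, X, site_response, cv=5, scoring="r2").mean()
    best = max(scores, key=scores.get)
    return best, scores
```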

3.
bioRxiv ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38617227

ABSTRACT

Prior lesion, noninvasive-imaging, and intracranial-electroencephalography (iEEG) studies have documented hierarchical, parallel, and distributed characteristics of human speech processing. Yet, there have not been direct, intracranial observations of the latency with which regions outside the temporal lobe respond to speech, or how these responses are impacted by task demands. We leveraged human intracranial recordings via stereo-EEG to measure responses from diverse forebrain sites during (i) passive listening to /bi/ and /pi/ syllables, and (ii) active listening requiring /bi/-versus-/pi/ categorization. We find that neural response latency increases from a few tens of ms in Heschl's gyrus (HG) to several tens of ms in superior temporal gyrus (STG), superior temporal sulcus (STS), and early parietal areas, and hundreds of ms in later parietal areas, insula, frontal cortex, hippocampus, and amygdala. These data also suggest parallel flow of speech information dorsally and ventrally, from HG to parietal areas and from HG to STG and STS, respectively. Latency data also reveal areas in parietal cortex, frontal cortex, hippocampus, and amygdala that are not responsive to the stimuli during passive listening but are responsive during categorization. Furthermore, multiple regions-spanning auditory, parietal, frontal, and insular cortices, and hippocampus and amygdala-show greater neural response amplitudes during active versus passive listening (a task-related effect). Overall, these results are consistent with hierarchical processing of speech at a macro level and parallel streams of information flow in temporal and parietal regions. These data also reveal regions where the speech code is stimulus-faithful and those that encode task-relevant representations.
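Onset-latency estimates like those reported above are often derived by finding the first post-stimulus time at which a trial-averaged response (for example, the high-gamma envelope) exceeds a baseline-derived threshold for a minimum duration. The sketch below shows one common recipe under assumed threshold and duration settings; it is not the study's actual criterion.

```python
import numpy as np

def onset_latency(hg, times, n_sd=3.0, min_ms=20.0):
    """Estimate response onset latency from a trial-averaged high-gamma trace.

    hg    : (n_timepoints,) trial-averaged high-gamma envelope
    times : (n_timepoints,) time in ms relative to stimulus onset (negative = baseline)
    Returns the first time (ms) at which hg exceeds baseline mean + n_sd * SD
    for at least min_ms, or None if no such period exists.
    """
    baseline = hg[times < 0]
    thresh = baseline.mean() + n_sd * baseline.std()
    dt = np.median(np.diff(times))
    need = int(np.ceil(min_ms / dt))           # samples required above threshold
    above = (hg > thresh) & (times >= 0)
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        if run >= need:
            return times[i - need + 1]         # start of the sustained crossing
    return None
```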

4.
Curr Res Neurobiol ; 6: 100127, 2024.
Article in English | MEDLINE | ID: mdl-38511174

ABSTRACT

The human voice is a critical stimulus for the auditory system that promotes social connection, informs the listener about identity and emotion, and acts as the carrier for spoken language. Research on voice processing in adults has informed our understanding of the unique status of the human voice in the mature auditory cortex and provided potential explanations for mechanisms that underlie voice selectivity and identity processing. There is evidence that voice perception undergoes developmental change starting in infancy and extending through early adolescence. While even young infants recognize the voice of their mother, there is an apparent protracted course of development to reach adult-like selectivity for human voice over other sound categories and recognition of other talkers by voice. Gaps in the literature do not allow for an exact mapping of this trajectory or an adequate description of how voice processing abilities and their neural underpinnings evolve. This review provides a comprehensive account of developmental voice processing research published to date and discusses how this evidence fits with and contributes to current theoretical models proposed in the adult literature. We discuss how factors such as cognitive development, neural plasticity, perceptual narrowing, and language acquisition may contribute to the development of voice processing and its investigation in children. We also review evidence of voice processing abilities in premature birth, autism spectrum disorder, and phonagnosia to examine where and how deviations from the typical trajectory of development may manifest.

5.
bioRxiv ; 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-37905141

ABSTRACT

Speech provides a rich context for exploring human cortical-basal ganglia circuit function, but direct intracranial recordings are rare. We recorded electrocorticographic signals in the cortex synchronously with single units in the subthalamic nucleus (STN), a basal ganglia node that receives direct input from widespread cortical regions, while participants performed a syllable repetition task during deep brain stimulation (DBS) surgery. We discovered that STN neurons exhibited spike-phase coupling (SPC) events with distinct combinations of frequency, location, and timing that indexed specific aspects of speech. The strength of SPC to posterior perisylvian cortex predicted phoneme production accuracy, while that of SPC to perirolandic cortex predicted time taken for articulation. Thus, STN-cortical interactions are coordinated via transient bursts of behavior-specific synchronization that involves multiple neuronal populations and timescales. These results both suggest mechanisms that support auditory-sensorimotor integration during speech and explain why firing-rate based models are insufficient for explaining basal ganglia circuit behavior.
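A minimal sketch of how spike-phase coupling strength can be quantified, assuming a phase-locking-value measure between STN spike times and the phase of a band-limited cortical signal. The band, filter settings, and absence of event-wise statistics here are illustrative simplifications relative to the analyses in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phase_coupling(lfp, fs, spike_times, band=(12.0, 30.0)):
    """Phase-locking value between spikes and a band-limited cortical signal.

    lfp         : (n_samples,) cortical signal (e.g., one ECoG channel)
    fs          : sampling rate in Hz
    spike_times : spike times in seconds
    band        : (low, high) frequency band in Hz (beta band, as an example)
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))     # instantaneous phase
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < len(phase))]
    spike_phases = phase[idx]
    # Phase-locking value: length of the mean resultant vector of spike phases
    return np.abs(np.mean(np.exp(1j * spike_phases)))
```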

6.
Psychon Bull Rev ; 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37884779

ABSTRACT

Communicating with a speaker with a different accent can affect one's own speech. Despite the strength of evidence for perception-production transfer in speech, the nature of transfer has remained elusive, with variable results regarding the acoustic properties that transfer between speakers and the characteristics of the speakers who exhibit transfer. The current study investigates perception-production transfer through the lens of statistical learning across passive exposure to speech. Participants experienced a short sequence of acoustically variable minimal pair (beer/pier) utterances conveying either an accent or typical American English acoustics, categorized a perceptually ambiguous test stimulus, and then repeated the test stimulus aloud. In the canonical condition, /b/-/p/ fundamental frequency (F0) and voice onset time (VOT) covaried according to typical English patterns. In the reverse condition, the F0xVOT relationship reversed to create an "accent" with speech input regularities atypical of American English. Replicating prior studies, F0 played less of a role in perceptual speech categorization in reverse compared with canonical statistical contexts. Critically, this down-weighting transferred to production, with systematic down-weighting of F0 in listeners' own speech productions in reverse compared with canonical contexts that was robust across male and female participants. Thus, the mapping of acoustics to speech categories is rapidly adjusted by short-term statistical learning across passive listening and these adjustments transfer to influence listeners' own speech productions.

7.
Cognition ; 237: 105467, 2023 08.
Article in English | MEDLINE | ID: mdl-37148640

ABSTRACT

Multiple lines of research have developed training approaches that foster category learning, with important translational implications for education. Increasing exemplar variability, blocking or interleaving by category-relevant dimension, and providing explicit instructions about diagnostic dimensions each have been shown to facilitate category learning and/or generalization. However, laboratory research often must distill the character of natural input regularities that define real-world categories. As a result, much of what we know about category learning has come from studies with simplifying assumptions. We challenge the implicit expectation that these studies reflect the process of category learning of real-world input by creating an auditory category learning paradigm that intentionally violates some common simplifying assumptions of category learning tasks. Across five experiments and nearly 300 adult participants, we used training regimes previously shown to facilitate category learning, but here drew from a more complex and multidimensional category space with tens of thousands of unique exemplars. Learning was equivalently robust across training regimes that changed exemplar variability, altered the blocking of category exemplars, or provided explicit instructions of the category-diagnostic dimension. Each drove essentially equivalent accuracy measures of learning generalization following 40 min of training. These findings suggest that auditory category learning across complex input is not as susceptible to training regime manipulation as previously thought.


Subjects
Psychological Generalization, Learning, Adult, Humans, Concept Formation
8.
Cognition ; 238: 105473, 2023 09.
Article in English | MEDLINE | ID: mdl-37210878

ABSTRACT

Statistical learning across passive exposure has been theoretically situated with unsupervised learning. However, when input statistics accumulate over established representations - like speech syllables, for example - there is the possibility that prediction derived from activation of rich, existing representations may support error-driven learning. Here, across five experiments, we present evidence for error-driven learning across passive speech listening. Young adults passively listened to a string of eight beer - pier speech tokens with distributional regularities following either a canonical American-English acoustic dimension correlation or a correlation reversed to create an accent. A sequence-final test stimulus assayed the perceptual weight - the effectiveness - of the secondary dimension in signaling category membership as a function of preceding sequence regularities. Perceptual weight flexibly adjusted according to the passively experienced regularities even when the preceding regularities shifted on a trial-by-trial basis. The findings align with a theoretical view that activation of established internal representations can support learning across statistical regularities via error-driven learning. At the broadest level, this suggests that not all statistical learning need be unsupervised. Moreover, these findings help to account for how cognitive systems may accommodate competing demands for flexibility and stability: instead of overwriting existing representations when short-term input distributions depart from the norms, the mapping from input to category representations may be dynamically - and rapidly - adjusted via error-driven learning from predictions derived from internal representations.
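A toy sketch of the error-driven account described above: the category activated by the dominant dimension (VOT) generates a prediction for the secondary dimension (F0), and the F0 weight is adjusted by the prediction error, so a reversed "accent" sequence drives the weight down. The parameters, cue scaling, and delta-rule form are the editor's assumptions, not the authors' model.

```python
import numpy as np

def update_f0_weight(tokens, w_f0=1.0, lr=0.2):
    """Adjust the perceptual weight on F0 from a short sequence of tokens.

    tokens : iterable of (vot, f0) pairs, each scaled to [-1, 1] where positive
             values are 'pier'-like and negative values are 'beer'-like.
    In the canonical regularity F0 and VOT agree in sign; in the reversed
    ('accent') regularity they conflict, which drives the weight down.
    """
    for vot, f0 in tokens:
        category = np.sign(vot)           # category activated by the dominant cue
        error = f0 - w_f0 * category      # prediction error on the secondary cue
        w_f0 += lr * error * category     # delta-rule update of the F0 weight
    return w_f0

# A canonical sequence keeps the F0 weight high; a reversed sequence lowers it.
canonical = [(+1, +1), (-1, -1)] * 4
reversed_ = [(+1, -1), (-1, +1)] * 4
print(update_f0_weight(canonical), update_f0_weight(reversed_))
```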


Subjects
Speech Perception, Speech, Young Adult, Humans, Speech/physiology, Speech Perception/physiology, Auditory Perception, Language
9.
Elife ; 12, 2023 03 24.
Article in English | MEDLINE | ID: mdl-36961499

ABSTRACT

Humans generate categories from complex regularities evolving across even imperfect sensory input. Here, we examined the possibility that incidental experiences can generate lasting category knowledge. Adults practiced a simple visuomotor task not dependent on acoustic input. Novel categories of acoustically complex sounds were not necessary for task success but aligned incidentally with distinct visuomotor responses in the task. Incidental sound category learning emerged robustly when within-category sound exemplar variability was closely yoked to visuomotor task demands and was not apparent in the initial session when this coupling was less robust. Nonetheless, incidentally acquired sound category knowledge was evident in both cases one day later, indicative of offline learning gains and, nine days later, learning in both cases supported explicit category labeling of novel sounds. Thus, a relatively brief incidental experience with multi-dimensional sound patterns aligned with behaviorally relevant actions and events can generate new sound categories, immediately after the learning experience or a day later. These categories undergo consolidation into long-term memory to support robust generalization of learning, rather than simply reflecting recall of specific sound-pattern exemplars previously encountered. Humans thus forage for information to acquire and consolidate new knowledge that may incidentally support behavior, even when learning is not strictly necessary for performance.


Subjects
Psychological Generalization, Learning, Adult, Humans, Learning/physiology, Psychological Generalization/physiology, Sound, Mental Recall, Long-Term Memory
10.
Psychol Sci ; 34(4): 468-480, 2023 04.
Article in English | MEDLINE | ID: mdl-36791783

ABSTRACT

Categorization has a deep impact on behavior, but whether category learning is served by a single system or multiple systems remains debated. Here, we designed two well-equated nonspeech auditory category learning challenges to draw on putative procedural (information-integration) versus declarative (rule-based) learning systems among adult Hebrew-speaking control participants and individuals with dyslexia, a language disorder that has been linked to a selective disruption in the procedural memory system and in which phonological deficits are ubiquitous. We observed impaired information-integration category learning and spared rule-based category learning in the dyslexia group compared with the neurotypical group. Quantitative model-based analyses revealed reduced use of, and slower shifting to, optimal procedural-based strategies in dyslexia with hypothesis-testing strategy use on par with control participants. The dissociation is consistent with multiple category learning systems and points to the possibility that procedural learning inefficiencies across categories defined by complex, multidimensional exemplars may result in difficulty in phonetic category acquisition in dyslexia.
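The model-based analyses referenced above fit strategy models to each participant's trial-by-trial responses and compare rule-based (single-dimension criterion) against information-integration (boundary over both dimensions) accounts. The sketch below substitutes a simple logistic decision rule and BIC comparison for the full maximum-likelihood decision-bound models, so treat it as a schematic of the comparison rather than the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_strategy_models(stimuli, responses):
    """Compare rule-based vs. information-integration accounts of responses.

    stimuli   : (n_trials, 2) array of the two acoustic dimensions
    responses : (n_trials,) array of 0/1 category responses
    Returns BIC for each model (lower = better fit), using a logistic decision
    rule as a stand-in for the full decision-bound models.
    """
    n = len(responses)

    def bic(X, k_params):
        model = LogisticRegression().fit(X, responses)
        p = model.predict_proba(X)[:, 1]
        log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        return -2 * log_lik + k_params * np.log(n)

    return {
        "RB_dim1": bic(stimuli[:, [0]], 2),  # criterion + slope on dimension 1
        "RB_dim2": bic(stimuli[:, [1]], 2),  # criterion + slope on dimension 2
        "II": bic(stimuli, 3),               # linear boundary over both dimensions
    }
```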


Subjects
Dyslexia, Learning, Adult, Humans, Phonetics
11.
Atten Percept Psychophys ; 85(2): 452-462, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36510102

ABSTRACT

The environment provides multiple regularities that might be useful in guiding behavior if one were able to learn their structure. Statistical learning across simultaneous regularities is important but poorly understood. We investigate learning across two domains: visuomotor sequence learning through the serial reaction time (SRT) task, and incidental auditory category learning via the systematic multimodal association reaction time (SMART) task. Several commonalities raise the possibility that these two learning phenomena may draw on common cognitive resources and neural networks. In each, participants are uninformed of the regularities that they come to use to guide actions, the outcomes of which may provide a form of internal feedback. We used dual-task conditions to compare learning of the regularities in isolation versus when they are simultaneously available to support behavior on a seemingly orthogonal visuomotor task. Learning occurred across the simultaneous regularities, without attenuation even when the informational value of a regularity was reduced by the presence of the additional, convergent regularity. Thus, the simultaneous regularities do not compete for associative strength, as in overshadowing effects. Moreover, the visuomotor sequence learning and incidental auditory category learning do not appear to compete for common cognitive resources; learning across the simultaneous regularities was comparable to learning each regularity in isolation.


Subjects
Learning, Neural Networks (Computer), Humans, Reaction Time, Feedback, Cognition
12.
Trends Hear ; 26: 23312165221118792, 2022.
Article in English | MEDLINE | ID: mdl-36131515

ABSTRACT

Most human auditory psychophysics research has historically been conducted in carefully controlled environments with calibrated audio equipment, and over potentially hours of repetitive testing with expert listeners. Here, we operationally define such conditions as having high 'auditory hygiene'. From this perspective, conducting auditory psychophysical paradigms online presents a serious challenge, in that results may hinge on absolute sound presentation level, reliably estimated perceptual thresholds, low and controlled background noise levels, and sustained motivation and attention. We introduce a set of procedures that address these challenges and facilitate auditory hygiene for online auditory psychophysics. First, we establish a simple means of setting sound presentation levels. Across a set of four level-setting conditions conducted in person, we demonstrate the stability and robustness of this level setting procedure in open air and controlled settings. Second, we test participants' tone-in-noise thresholds using widely adopted online experiment platforms and demonstrate that reliable threshold estimates can be derived online in approximately one minute of testing. Third, using these level and threshold setting procedures to establish participant-specific stimulus conditions, we show that an online implementation of the classic probe-signal paradigm can be used to demonstrate frequency-selective attention on an individual-participant basis, using a third of the trials used in recent in-lab experiments. Finally, we show how threshold and attentional measures relate to well-validated assays of online participants' in-task motivation, fatigue, and confidence. This demonstrates the promise of online auditory psychophysics for addressing new auditory perception and neuroscience questions quickly, efficiently, and with more diverse samples. Code for the tests is publicly available through Pavlovia and Gorilla.
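Rapid online threshold estimates of the kind described above are commonly obtained with an adaptive staircase. The sketch below shows a generic 2-down/1-up procedure (converging near 70.7% correct); the step size, starting level, and stopping rule are assumptions and may differ from the study's procedure.

```python
def two_down_one_up(run_trial, start_db=20.0, step_db=2.0, n_reversals=8, max_trials=200):
    """Estimate a tone-in-noise threshold with a 2-down/1-up staircase.

    run_trial : callable taking a tone level (dB SNR) and returning True for a
                correct response. The rule converges near 70.7% correct.
    Returns the mean level at the last few reversals as the threshold estimate.
    """
    level, streak, direction = start_db, 0, 0
    reversals = []
    for _ in range(max_trials):
        if len(reversals) >= n_reversals:
            break
        if run_trial(level):
            streak += 1
            if streak == 2:                    # two correct in a row: step down
                streak = 0
                if direction == +1:
                    reversals.append(level)    # up -> down counts as a reversal
                direction = -1
                level -= step_db
        else:
            streak = 0                         # any error: step up
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db
    tail = reversals[-4:] or [level]
    return sum(tail) / len(tail)               # mean of the last reversal levels
```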


Subjects
Auditory Perception, Noise, Auditory Threshold, Humans, Psychophysics
13.
J Exp Psychol Hum Percept Perform ; 48(9): 913-925, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35849375

ABSTRACT

Unfamiliar accents can systematically shift speech acoustics away from community norms and reduce comprehension. Yet, limited exposure improves comprehension. This perceptual adaptation indicates that the mapping from acoustics to speech representations is dynamic, rather than fixed. But what drives these adjustments is debated. Supervised learning accounts posit that activation of an internal speech representation via disambiguating information generates predictions about patterns of speech input typically associated with the representation. When actual input mismatches predictions, the mapping is adjusted. We tested two hypotheses of this account across consonants and vowels as listeners categorized speech conveying an English-like acoustic regularity or an artificial accent. Across conditions, signal manipulations impacted which of two acoustic dimensions best conveyed category identity, and predicted which dimension would exhibit the effects of perceptual adaptation. Moreover, the strength of phonetic category activation, as estimated by categorization responses reliant on the dominant acoustic dimension, predicted the magnitude of adaptation observed across listeners. The results align with predictions of supervised learning accounts, suggesting that perceptual adaptation arises from speech category activation, corresponding predictions about the patterns of acoustic input that align with the category, and adjustments in subsequent speech perception when input mismatches these expectations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Phonetics, Speech Perception, Humans, Language, Speech/physiology, Speech Acoustics, Speech Perception/physiology
14.
PLoS Biol ; 20(7): e3001675, 2022 07.
Article in English | MEDLINE | ID: mdl-35900975

ABSTRACT

The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction.
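A hedged sketch of the decoding idea summarized above: cross-validated classification of voice versus nonvoice trials from per-site neural features (for example, mean high-gamma amplitude in a post-stimulus window). The classifier, features, and cross-validation scheme are illustrative choices, not the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_voice(neural_features, is_voice):
    """Cross-validated decoding of voice vs. nonvoice from neural activity.

    neural_features : (n_trials, n_channels) array, e.g., mean high-gamma
                      amplitude per recording site in a post-stimulus window
    is_voice        : (n_trials,) array of 0/1 labels
    Returns mean 5-fold classification accuracy.
    """
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, neural_features, is_voice, cv=5, scoring="accuracy").mean()
```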


Subjects
Auditory Cortex, Speech Perception, Voice, Acoustic Stimulation, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Humans, Magnetic Resonance Imaging, Speech Perception/physiology, Temporal Lobe/physiology
15.
Psychon Bull Rev ; 29(5): 1925-1937, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35524011

ABSTRACT

Cognitive systems face a constant tension of maintaining existing representations that have been fine-tuned to long-term input regularities and adapting representations to meet the needs of short-term input that may deviate from long-term norms. Systems must balance the stability of long-term representations with plasticity to accommodate novel contexts. We investigated the interaction between perceptual biases or priors acquired across the long-term and sensitivity to statistical regularities introduced in the short-term. Participants were first passively exposed to short-term acoustic regularities and then learned categories in a supervised training task that either conflicted or aligned with long-term perceptual priors. We found that the long-term priors had robust and pervasive impact on categorization behavior. In contrast, behavior was not influenced by the nature of the short-term passive exposure. These results demonstrate that perceptual priors place strong constraints on the course of learning and that short-term passive exposure to acoustic regularities has limited impact on directing subsequent category learning.


Subjects
Learning, Humans
16.
J Acoust Soc Am ; 151(2): 992, 2022 02.
Article in English | MEDLINE | ID: mdl-35232077

ABSTRACT

Speech contrasts are signaled by multiple acoustic dimensions, but these dimensions are not equally diagnostic. Moreover, the relative diagnosticity, or weight, of acoustic dimensions in speech can shift in different communicative contexts for both speech perception and speech production. However, the literature remains unclear on whether, and if so how, talkers adjust speech to emphasize different acoustic dimensions in the context of changing communicative demands. Here, we examine the interplay of flexible cue weights in speech production and perception across amplitude and duration, secondary non-spectral acoustic dimensions for phonated Mandarin Chinese lexical tone, across natural speech and whispering, which eliminates fundamental frequency contour, the primary acoustic dimension. Phonated and whispered Mandarin productions from native talkers revealed enhancement of both duration and amplitude cues in whispered, compared to phonated speech. When nonspeech amplitude-modulated noises modeled these patterns of enhancement, identification of the noises as Mandarin lexical tone categories was more accurate than identification of noises modeling phonated speech amplitude and duration cues. Thus, speakers exaggerate secondary cues in whispered speech and listeners make use of this information. Yet, enhancement is not symmetric among the four Mandarin lexical tones, indicating possible constraints on the realization of this enhancement.
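One way to model "amplitude and duration cues only," as in the nonspeech stimuli described above, is to impose a speech token's amplitude envelope on a noise carrier so that duration and amplitude survive while the fundamental frequency does not. The sketch below is a minimal, assumed recipe (envelope extraction via the Hilbert transform plus a low-pass smoother), not the study's stimulus-generation code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulated_noise(speech, fs, env_cutoff_hz=30.0):
    """Replace a speech token with noise carrying only its amplitude envelope.

    The output has the same duration and amplitude contour as the input but no
    fundamental-frequency information, so pitch-based tone cues are removed.
    """
    envelope = np.abs(hilbert(speech))                        # Hilbert envelope
    b, a = butter(2, env_cutoff_hz / (fs / 2), btype="low")   # smooth the envelope
    envelope = filtfilt(b, a, envelope)
    noise = np.random.randn(len(speech))                      # white-noise carrier
    modulated = noise * np.clip(envelope, 0, None)
    return modulated / np.max(np.abs(modulated))              # normalize peak level
```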


Subjects
Speech Perception, Speech, China, Cues, Phonetics, Pitch Perception
17.
J Assoc Res Otolaryngol ; 23(2): 151-166, 2022 04.
Article in English | MEDLINE | ID: mdl-35235100

ABSTRACT

Distinguishing between regular and irregular heartbeats, conversing with speakers of different accents, and tuning a guitar - all rely on some form of auditory learning. What drives these experience-dependent changes? A growing body of evidence suggests an important role for non-sensory influences, including reward, task engagement, and social or linguistic context. This review is a collection of contributions that highlight how these non-sensory factors shape auditory plasticity and learning at the molecular, physiological, and behavioral level. We begin by presenting evidence that reward signals from the dopaminergic midbrain act on cortico-subcortical networks to shape sound-evoked responses of auditory cortical neurons, facilitate auditory category learning, and modulate the long-term storage of new words and their meanings. We then discuss the role of task engagement in auditory perceptual learning and suggest that plasticity in top-down cortical networks mediates learning-related improvements in auditory cortical and perceptual sensitivity. Finally, we present data that illustrates how social experience impacts sound-evoked activity in the auditory midbrain and forebrain and how the linguistic environment rapidly shapes speech perception. These findings, which are derived from both human and animal models, suggest that non-sensory influences are important regulators of auditory learning and plasticity and are often implemented by shared neural substrates. Application of these principles could improve clinical training strategies and inform the development of treatments that enhance auditory learning in individuals with communication disorders.


Subjects
Auditory Cortex, Neuronal Plasticity, Animals, Auditory Cortex/physiology, Auditory Perception/physiology, Neuronal Plasticity/physiology
18.
Cognition ; 222: 104997, 2022 05.
Article in English | MEDLINE | ID: mdl-35007885

ABSTRACT

Categories are often structured by the similarities of instances within the category defined across dimensions or features. Researchers typically assume that there is a direct, linear relationship between the physical input dimensions across which category exemplars are defined and the psychological representation of these dimensions. However, this assumption is not always warranted. Through a set of simulations, we demonstrate that the psychological representations of input dimensions developed through long-term prior experience can place very strong constraints on category learning. We compare the model's behavior to auditory, visual, and cross-modal human category learning and make conclusions regarding the nature of the psychological representations of the dimensions in those studies. These simulations support the conclusion that the nature of psychological representations of input dimensions is a critical aspect to understanding the mechanisms underlying category learning.


Subjects
Learning, Neural Networks (Computer), Humans
19.
J Exp Psychol Learn Mem Cogn ; 48(6): 769-784, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34570548

ABSTRACT

Category learning is fundamental to cognition, but little is known about how it proceeds in real-world environments when learners do not have instructions to search for category-relevant information, do not make overt category decisions, and do not experience direct feedback. Prior research demonstrates that listeners can acquire task-irrelevant auditory categories incidentally as they engage in primarily visuomotor tasks. The current study examines the factors that support this incidental category learning. Three experiments systematically manipulated the relationship of four novel auditory categories with a consistent visual feature (color or location) that informed a simple behavioral keypress response regarding the visual feature. In both an in-person experiment and two online replications with extensions, incidental auditory category learning occurred reliably when category exemplars consistently aligned with visuomotor demands of the primary task, but not when they were misaligned. The presence of an additional irrelevant visual feature that was uncorrelated with the primary task demands neither enhanced nor harmed incidental learning. By contrast, incidental learning did not occur when auditory categories were aligned consistently with one visual feature, but the motor response in the primary task was aligned with another, category-unaligned visual feature. Moreover, category learning did not reliably occur across passive observation or when participants made a category-nonspecific, generic motor response. These findings show that incidental learning of categories is strongly mediated by the character of coincident behavior. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Cognition, Learning, Feedback, Humans, Learning/physiology
20.
Ear Hear ; 43(1): 9-22, 2022.
Article in English | MEDLINE | ID: mdl-34751676

ABSTRACT

Following a conversation in a crowded restaurant or at a lively party poses immense perceptual challenges for some individuals with normal hearing thresholds. A number of studies have investigated whether noise-induced cochlear synaptopathy (CS; damage to the synapses between cochlear hair cells and the auditory nerve following noise exposure that does not permanently elevate hearing thresholds) contributes to this difficulty. A few studies have observed correlations between proxies of noise-induced CS and speech perception in difficult listening conditions, but many have found no evidence of a relationship. To understand these mixed results, we reviewed previous studies that have examined noise-induced CS and performance on speech perception tasks in adverse listening conditions in adults with normal or near-normal hearing thresholds. Our review suggests that superficially similar speech perception paradigms used in previous investigations actually placed very different demands on sensory, perceptual, and cognitive processing. Speech perception tests that use low signal-to-noise ratios and maximize the importance of fine sensory details - specifically by using test stimuli for which lexical, syntactic, and semantic cues do not contribute to performance - are more likely to show a relationship to estimated CS levels. Thus, the current controversy as to whether or not noise-induced CS contributes to individual differences in speech perception under challenging listening conditions may be due in part to the fact that many of the speech perception tasks used in past studies are relatively insensitive to CS-induced deficits.


Subjects
Speech Perception, Speech, Acoustic Stimulation, Adult, Auditory Threshold/physiology, Humans, Individuality, Perceptual Masking, Speech Perception/physiology