Results 1 - 17 of 17
1.
PLoS Biol ; 19(2): e3001142, 2021 02.
Article in English | MEDLINE | ID: mdl-33635855

ABSTRACT

Rhythmic sensory or electrical stimulation will produce rhythmic brain responses. These rhythmic responses are often interpreted as endogenous neural oscillations aligned (or "entrained") to the stimulus rhythm. However, stimulus-aligned brain responses can also be explained as a sequence of evoked responses, which only appear regular due to the rhythmicity of the stimulus, without necessarily involving underlying neural oscillations. To distinguish evoked responses from true oscillatory activity, we tested whether rhythmic stimulation produces oscillatory responses which continue after the end of the stimulus. Such sustained effects provide evidence for true involvement of neural oscillations. In Experiment 1, we found that rhythmic intelligible, but not unintelligible, speech produces oscillatory responses in magnetoencephalography (MEG) which outlast the stimulus at parietal sensors. In Experiment 2, we found that transcranial alternating current stimulation (tACS) leads to rhythmic fluctuations in speech perception outcomes after the end of electrical stimulation. We further report that the phase relation between electroencephalography (EEG) responses and rhythmic intelligible speech can predict the tACS phase that leads to most accurate speech perception. Together, we provide fundamental results for several lines of research, including neural entrainment and tACS, and reveal endogenous neural oscillations as a key underlying principle for speech perception.


Subject(s)
Brain/physiology , Speech Perception/physiology , Adult , Biological Clocks , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Middle Aged , Transcranial Direct Current Stimulation
2.
J Neurosci ; 41(32): 6919-6932, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34210777

ABSTRACT

Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g., TRACE) propose that word recognition is achieved through direct inhibitory connections between units representing candidate words that share segments (e.g., hygiene and hijack share /haidʒ/). Manipulations that increase lexical uncertainty should increase neural responses associated with word recognition when words cannot be uniquely identified. In contrast, predictive-selection accounts (e.g., predictive coding) propose that spoken word recognition involves comparing heard and predicted speech sounds and using prediction error to update lexical representations. Increased lexical uncertainty in words such as hygiene and hijack will increase prediction error, and hence neural activity, only at later time points when different segments are predicted. We collected MEG data from male and female listeners to test these two Bayesian mechanisms and used a competitor priming manipulation to change the prior probability of specific words. Lexical decision responses showed delayed recognition of target words (hygiene) following presentation of a neighboring prime word (hijack) several minutes earlier. However, this effect was not observed with pseudoword primes (higent) or targets (hijure). Crucially, MEG responses in the STG showed greater neural responses for word-primed words after the point at which they were uniquely identified (after /haidʒ/ in hygiene) but not before, while similar changes were again absent for pseudowords. These findings are consistent with accounts of spoken word recognition in which neural computations of prediction error play a central role.
SIGNIFICANCE STATEMENT: Effective speech perception is critical to daily life and involves computations that combine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference). This study specifies the neural mechanisms that support spoken word recognition by testing two distinct implementations of Bayesian perceptual inference. Most established theories propose direct competition between lexical units such that inhibition of irrelevant candidates leads to selection of critical words. Our results instead support predictive-selection theories (e.g., predictive coding): by comparing heard and predicted speech sounds, neural computations of prediction error can help listeners continuously update lexical probabilities, allowing for more rapid word identification.
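The predictive-selection account described above amounts to incremental Bayesian updating of lexical probabilities as each speech segment arrives. The following is a minimal sketch of that updating step, not the authors' model: the candidate words, their prior values, and the use of IPA-like strings to stand in for phoneme sequences are all hypothetical.

```python
# Illustrative sketch of incremental Bayesian word recognition: candidates
# inconsistent with the input heard so far are eliminated, and the priors
# of the remaining candidates are renormalised. All words and prior values
# below are hypothetical examples.

def update_posteriors(priors, heard_prefix):
    """Renormalise probabilities over words consistent with the input so far."""
    consistent = {w: p for w, p in priors.items() if w.startswith(heard_prefix)}
    total = sum(consistent.values())
    return {w: p / total for w, p in consistent.items()}

# Hypothetical lexicon: hijack, hygiene, handle (as rough phoneme strings)
priors = {"haidʒæk": 0.03, "haidʒiːn": 0.05, "hændl": 0.92}

# After the shared onset /haidʒ/, only the two neighbours remain viable:
posteriors = update_posteriors(priors, "haidʒ")
```

A later disambiguating segment would simply repeat the same update on the surviving candidates, which is why uncertainty (and, on a prediction-error account, neural activity) is concentrated at the point where the candidates diverge.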


Subject(s)
Recognition, Psychology/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Bayes Theorem , Comprehension/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Young Adult
3.
J Cogn Neurosci ; 32(3): 403-425, 2020 03.
Article in English | MEDLINE | ID: mdl-31682564

ABSTRACT

Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400-800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.


Subject(s)
Brain/physiology , Comprehension/physiology , Semantics , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Vocabulary , Young Adult
4.
Neuroimage ; 217: 116661, 2020 08 15.
Article in English | MEDLINE | ID: mdl-32081785

ABSTRACT

Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.


Subject(s)
Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Algorithms , Brain Mapping , Cluster Analysis , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Multivariate Analysis , Noise , Parietal Lobe/diagnostic imaging , Parietal Lobe/physiology , Young Adult
5.
Proc Natl Acad Sci U S A ; 113(12): E1747-56, 2016 Mar 22.
Article in English | MEDLINE | ID: mdl-26957596

ABSTRACT

Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
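The single-mechanism claim above, that both immediate prior-knowledge effects and gradual perceptual learning reflect minimization of prediction error, can be illustrated with a toy update rule. This is a scalar sketch under stated assumptions (a linear correction of the prediction by a fixed fraction of the error), not the paper's computational simulation.

```python
# Toy prediction-error minimisation: an internal prediction is repeatedly
# nudged toward the sensory input by a fraction of the prediction error,
# so the error shrinks on every step. The starting value, input value, and
# update rate are illustrative assumptions.

def settle(prediction, sensory_input, rate=0.5, steps=10):
    """Iteratively reduce prediction error; return final estimate and error trace."""
    errors = []
    for _ in range(steps):
        error = sensory_input - prediction   # prediction error
        errors.append(abs(error))
        prediction += rate * error           # correct the prediction
    return prediction, errors

final, errors = settle(prediction=0.0, sensory_input=1.0)
```

On this picture, a strong prior (a starting prediction already near the input) produces small errors and little evoked activity, while learning corresponds to carrying the settled prediction forward to the next stimulus, so the same error-reduction loop accounts for both timescales.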


Subject(s)
Models, Neurological , Phonetics , Speech Intelligibility , Speech Perception/physiology , Temporal Lobe/physiology , Adolescent , Adult , Brain Mapping , Computer Simulation , Electroencephalography , Female , Humans , Learning/physiology , Magnetoencephalography , Male , Multimodal Imaging , Time Factors , Young Adult
6.
Neuroimage ; 126: 164-72, 2016 Feb 01.
Article in English | MEDLINE | ID: mdl-26631816

ABSTRACT

Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.


Subject(s)
Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Magnetoencephalography/methods , Signal Detection, Psychological/physiology , Adult , Female , Humans , Male , Reaction Time/physiology , Time Factors , Young Adult
7.
J Neurosci ; 32(25): 8443-53, 2012 Jun 20.
Article in English | MEDLINE | ID: mdl-22723684

ABSTRACT

A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding in which sensory activity is compared with top-down predictions and only unexplained activity propagated through the cortical hierarchy.


Subject(s)
Knowledge , Speech Perception/physiology , Adolescent , Adult , Analysis of Variance , Cerebral Cortex/physiology , Cues , Data Interpretation, Statistical , Electroencephalography , Frontal Lobe/physiology , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetoencephalography , Psychomotor Performance/physiology , Speech , Young Adult
8.
iScience ; 26(4): 106299, 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37153450

ABSTRACT

People with misophonia have strong aversive reactions to specific "trigger" sounds. Here we challenge this key idea of specificity. Machine learning was used to identify a misophonic profile from a multivariate sound-response pattern. Misophonia could be classified from most sounds (traditional triggers and non-triggers) and, moreover, cross-classification showed that the profile was largely transferable across sounds (rather than idiosyncratic for each sound). By splitting our participants in other ways, we were able to show-using the same approach-a differential diagnostic profile factoring in potential co-morbidities (autism, hyperacusis, ASMR). The broad autism phenotype was classified via aversions to repetitive sounds rather than the eating sounds most easily classified in misophonia. Within misophonia, the presence of hyperacusis and sound-induced pain had widespread effects across all sounds. Overall, we show that misophonia is characterized by a distinctive reaction to most sounds that ultimately becomes most noticeable for a sub-set of those sounds.

9.
Cell Rep ; 42(5): 112422, 2023 05 30.
Article in English | MEDLINE | ID: mdl-37099422

ABSTRACT

Humans use predictions to improve speech perception, especially in noisy environments. Here we use 7-T functional MRI (fMRI) to decode brain representations of written phonological predictions and degraded speech signals in healthy humans and people with selective frontal neurodegeneration (non-fluent variant primary progressive aphasia [nfvPPA]). Multivariate analyses of item-specific patterns of neural activation indicate dissimilar representations of verified and violated predictions in left inferior frontal gyrus, suggestive of processing by distinct neural populations. In contrast, precentral gyrus represents a combination of phonological information and weighted prediction error. In the presence of intact temporal cortex, frontal neurodegeneration results in inflexible predictions. This manifests neurally as a failure to suppress incorrect predictions in anterior superior temporal gyrus and reduced stability of phonological representations in precentral gyrus. We propose a tripartite speech perception network in which inferior frontal gyrus supports prediction reconciliation in echoic memory, and precentral gyrus invokes a motor model to instantiate and refine perceptual predictions for speech.


Subject(s)
Motor Cortex , Speech , Humans , Speech/physiology , Brain Mapping , Frontal Lobe/physiology , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods
10.
Elife ; 9, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33147138

ABSTRACT

Human speech perception can be described as Bayesian perceptual inference, but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative speech representations (e.g., spectrogram or articulatory features). Critically, we found an interaction between speech signal quality and expectations from prior written text on the quality of neural representations; increased signal quality enhanced neural representations of speech that mismatched with prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique neural signature of prediction error computations and is apparent in neural responses within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.


Subject(s)
Brain/physiology , Brain/physiopathology , Magnetoencephalography , Speech Disorders/physiopathology , Speech Perception , Adolescent , Adult , Bayes Theorem , Computer Simulation , Female , Humans , Linear Models , Male , Neurons/physiology , Regression Analysis , Speech , Young Adult
11.
Curr Biol ; 29(12): R582-R584, 2019 06 17.
Article in English | MEDLINE | ID: mdl-31211980

ABSTRACT

What is the nature of the neural code by which the human brain represents spoken language? New research suggests that previous findings of a language-specific code in cortical responses to speech can be explained solely by simple acoustic features.


Subject(s)
Auditory Cortex , Speech Perception , Acoustic Stimulation , Acoustics , Brain , Humans , Language , Speech
12.
PLoS One ; 14(12): e0226288, 2019.
Article in English | MEDLINE | ID: mdl-31881550

ABSTRACT

Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally degraded (vocoded) speech in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; frequency range: 4 Hz, 8 Hz, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
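For concreteness, a sinusoidally amplitude-modulated tone of the general kind used in AM detection and AM-rate discrimination tasks can be generated as below. The carrier frequency, sample rate, and modulation depth are illustrative assumptions, not the study's exact stimulus parameters.

```python
import math

# Sketch of a sinusoidally amplitude-modulated (AM) tone: a slow envelope
# (1 + depth * sin at the AM rate, e.g. 4, 8, or 16 Hz) multiplies a fast
# sinusoidal carrier. Parameter values are illustrative.

def am_tone(duration_s, fs, carrier_hz, am_rate_hz, depth=1.0):
    """Return duration_s seconds of an AM tone sampled at fs Hz."""
    n = int(duration_s * fs)
    return [(1.0 + depth * math.sin(2 * math.pi * am_rate_hz * t / fs))
            * math.sin(2 * math.pi * carrier_hz * t / fs)
            for t in range(n)]

signal = am_tone(duration_s=0.5, fs=16000, carrier_hz=1000, am_rate_hz=8)
```

The temporal envelope here carries the 8 Hz modulation that listeners must detect or discriminate, which is the same slow amplitude information that vocoding preserves in speech.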


Subject(s)
Auditory Perception/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Cues , Female , Humans , Male , Time Perception , Young Adult
13.
Elife ; 5, 2016 09 07.
Article in English | MEDLINE | ID: mdl-27602577

ABSTRACT

We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to 'scenes' comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through the scene. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners, and in those actively detecting appearance events, revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate both active and passive ASA.


Subject(s)
Anticipation, Psychological , Auditory Cortex/physiology , Auditory Perception/physiology , Functional Laterality/physiology , Parietal Lobe/physiology , Acoustic Stimulation , Adolescent , Adult , Attention/physiology , Auditory Cortex/anatomy & histology , Evoked Potentials, Auditory/physiology , Female , Humans , Magnetoencephalography , Male , Parietal Lobe/anatomy & histology , Psychophysics , Reaction Time
14.
J Exp Psychol Hum Percept Perform ; 40(1): 186-99, 2014 Feb.
Article in English | MEDLINE | ID: mdl-23750966

ABSTRACT

An unresolved question is how the reported clarity of degraded speech is enhanced when listeners have prior knowledge of speech content. One account of this phenomenon proposes top-down modulation of early acoustic processing by higher-level linguistic knowledge. Alternative, strictly bottom-up accounts argue that acoustic information and higher-level knowledge are combined at a late decision stage without modulating early acoustic processing. Here we tested top-down and bottom-up accounts using written text to manipulate listeners' knowledge of speech content. The effect of written text on the reported clarity of noise-vocoded speech was most pronounced when text was presented before (rather than after) speech (Experiment 1). Fine-grained manipulation of the onset asynchrony between text and speech revealed that this effect declined when text was presented more than 120 ms after speech onset (Experiment 2). Finally, the influence of written text was found to arise from phonological (rather than lexical) correspondence between text and speech (Experiment 3). These results suggest that prior knowledge effects are time-limited by the duration of auditory echoic memory for degraded speech, consistent with top-down modulation of early acoustic processing by linguistic knowledge.


Subject(s)
Memory, Short-Term/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adolescent , Adult , Humans , Knowledge , Reading , Time Factors , Young Adult
15.
PLoS One ; 8(7): e68928, 2013.
Article in English | MEDLINE | ID: mdl-23840904

ABSTRACT

Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modeling studies suggest this is due to the intrinsic or 'internal' noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the required information for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and activity preceding stimulus presentation could predict both later activity and behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Decision Making/physiology , Noise , Adolescent , Adult , Discrimination, Psychological/physiology , Electroencephalography , Female , Humans , Male , Young Adult
16.
PLoS One ; 7(5): e36929, 2012.
Article in English | MEDLINE | ID: mdl-22606309

ABSTRACT

BACKGROUND: The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. METHODOLOGY/PRINCIPAL FINDINGS: We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. CONCLUSIONS/SIGNIFICANCE: Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions and optimized spacing between sessions.


Subject(s)
Auditory Perception/physiology , Learning/physiology , Teaching/methods , Adolescent , Adult , Humans , Retention, Psychology , Time Factors , Young Adult
17.
PLoS One ; 5(3): e9816, 2010 Mar 23.
Article in English | MEDLINE | ID: mdl-20352121

ABSTRACT

BACKGROUND: Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated how varying amounts of positive feedback affect the learning of frequency discrimination while listeners attempted to discriminate between three identical tones. METHODOLOGY/PRINCIPAL FINDINGS: With this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. CONCLUSIONS/SIGNIFICANCE: This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.


Subject(s)
Auditory Perception/physiology , Intelligence , Motivation , Acoustic Stimulation , Adolescent , Adult , Discrimination Learning/physiology , Feedback , Hearing , Humans , Intelligence Tests , Learning , Models, Statistical , Monte Carlo Method