2.
Article in English | MEDLINE | ID: mdl-36148149

ABSTRACT

Background: Numerous resting-state studies on attention deficit hyperactivity disorder (ADHD) have reported aberrant functional connectivity (FC) between the default-mode network (DMN) and the ventral attention/salience network (VA/SN). In the ADHD literature, this finding has commonly been interpreted as an index of poorer DMN regulation associated with symptoms of mind wandering. However, a competing perspective suggests that dysfunctional organization of the DMN and VA/SN may additionally index increased sensitivity to the external environment. The goal of the current study was to test this latter perspective in relation to auditory distraction by investigating whether adults with ADHD exhibit aberrant FC between DMN, VA/SN, and auditory networks. Methods: Twelve minutes of resting-state fMRI data were collected from two adult groups, ADHD (n = 17) and controls (n = 17), from which the FC between predefined regions comprising the DMN, VA/SN, and auditory networks was analyzed. Results: A weaker anticorrelation between the VA/SN and DMN was observed in ADHD. DMN and VA/SN hubs also exhibited aberrant FC with the auditory network in ADHD. Additionally, participants who displayed a stronger anticorrelation between the VA/SN and auditory network at rest also performed better on a cognitively demanding behavioral task that involved ignoring a distracting auditory stimulus. Conclusion: Results are consistent with the hypothesis that auditory distraction in ADHD is linked to aberrant interactions between DMN, VA/SN, and auditory systems. Our findings support models that implicate dysfunctional organization of the DMN and VA/SN in the disorder and encourage more research into sensory interactions with these major networks.
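
The network-level FC analysis this abstract describes can be pictured with a minimal sketch: average the time series of each network's regions into one network time course, then take the Pearson correlation between network pairs, where a negative r is the anticorrelation discussed above. The region names and random data below are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np

def network_timecourse(roi_ts: dict, rois: list) -> np.ndarray:
    """Average the time series of the ROIs belonging to one network."""
    return np.mean([roi_ts[r] for r in rois], axis=0)

def fc(x: np.ndarray, y: np.ndarray) -> float:
    """Functional connectivity as the Pearson correlation of two time courses."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical data: 360 volumes (12 min at a 2 s TR) per region of interest.
rng = np.random.default_rng(0)
roi_ts = {name: rng.standard_normal(360)
          for name in ["pcc", "mpfc", "r_ains", "dacc", "l_aud", "r_aud"]}

dmn = network_timecourse(roi_ts, ["pcc", "mpfc"])      # default-mode hubs
vasn = network_timecourse(roi_ts, ["r_ains", "dacc"])  # salience hubs
aud = network_timecourse(roi_ts, ["l_aud", "r_aud"])   # auditory regions

print("DMN-VA/SN r:", fc(dmn, vasn))       # negative r = anticorrelation
print("VA/SN-auditory r:", fc(vasn, aud))
```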

3.
Front Psychol ; 13: 967260, 2022.
Article in English | MEDLINE | ID: mdl-36118435

ABSTRACT

This review gives an introductory description of the successive development of data patterns from comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
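
The match/mismatch logic of the ELU model lends itself to a toy illustration. The sketch below is only a caricature of the control flow under stated assumptions: `sltm` is a hypothetical phonology-to-word store, the latencies are the approximate figures quoted above, and the mismatch "repair" is a placeholder nearest-neighbour heuristic, not anything proposed by the model's authors.

```python
from dataclasses import dataclass

@dataclass
class LexicalAccess:
    word: str
    route: str          # "implicit" (rapid match) or "explicit" (WM repair)
    latency_ms: float

def elu_lookup(rambpho_input: str, sltm: dict) -> LexicalAccess:
    """Toy sketch of the ELU match/mismatch flow described in the review."""
    if rambpho_input in sltm:  # phonological match: rapid, implicit access
        return LexicalAccess(sltm[rambpho_input], "implicit", 250.0)
    # Mismatch: explicit WM repair in interaction with SLTM/ELTM, taking
    # seconds; here crudely modelled as the nearest stored form (placeholder).
    best = min(sltm, key=lambda k: sum(a != b for a, b in zip(k, rambpho_input)))
    return LexicalAccess(sltm[best], "explicit", 2000.0)

sltm = {"kat": "cat", "dog": "dog"}
print(elu_lookup("kat", sltm))  # clear input: implicit route, ~100-400 ms
print(elu_lookup("kab", sltm))  # degraded input: explicit repair, seconds
```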

4.
Front Hum Neurosci ; 15: 771711, 2021.
Article in English | MEDLINE | ID: mdl-34916918

ABSTRACT

Cognitive control provides us with the ability to, inter alia, regulate the locus of attention and ignore environmental distractions in accordance with our goals. Auditory distraction is a frequently cited symptom in adults with attention deficit hyperactivity disorder (aADHD), yet few task-based fMRI studies have explored whether the deficits in cognitive control associated with the disorder impede the ability to suppress or compensate for exogenously evoked cortical responses to noise in this population. In the current study, we explored the effects of auditory distraction as a function of working memory (WM) load. Participants completed two tasks: an auditory target detection (ATD) task in which the goal was to actively detect salient oddball tones amidst a stream of standard tones in noise, and a visual n-back task consisting of 0-, 1-, and 2-back WM conditions whilst concurrently ignoring the same tonal signal from the ATD task. Results indicated that our sample of young adults with ADHD (n = 17), compared with typically developed controls (n = 17), had difficulty attenuating auditory cortical responses to the task-irrelevant sound when WM demands were high (2-back). Heightened auditory activity to task-irrelevant sound was associated with both poorer WM performance and symptomatic inattentiveness. In the ATD task, we observed a significant increase in functional communication between auditory and salience networks in aADHD. Because performance outcomes were on par with controls for this task, we suggest that this increased functional connectivity in aADHD was likely an adaptive mechanism for suboptimal listening conditions. Taken together, our results indicate that adults with ADHD are more susceptible to noise interference when they are engaged in a primary task. The ability to cope with auditory distraction appears to be related to the WM demands of the task and thus to the capacity to deploy cognitive control.
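
The group-by-load contrast reported here can be sketched as a paired comparison of auditory-cortex responses to the ignored tones under low versus high WM load, plus an interaction test on the difference scores. All response values below are hypothetical stand-ins for the study's beta estimates.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 17  # participants per group, as in the study

# Hypothetical auditory-cortex responses to the ignored tones (arbitrary units).
adhd_0back, adhd_2back = rng.normal(1.0, 0.3, n), rng.normal(1.4, 0.3, n)
ctrl_0back, ctrl_2back = rng.normal(1.0, 0.3, n), rng.normal(1.0, 0.3, n)

# Within-group load effect: does activity to irrelevant sound grow with load?
t_a, p_a = stats.ttest_rel(adhd_2back, adhd_0back)
t_c, p_c = stats.ttest_rel(ctrl_2back, ctrl_0back)
print(f"aADHD 2- vs 0-back:    t = {t_a:.2f}, p = {p_a:.3f}")
print(f"controls 2- vs 0-back: t = {t_c:.2f}, p = {p_c:.3f}")

# Group x load interaction, tested on the within-participant difference scores.
t_i, p_i = stats.ttest_ind(adhd_2back - adhd_0back, ctrl_2back - ctrl_0back)
print(f"group x load:          t = {t_i:.2f}, p = {p_i:.3f}")
```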

5.
Front Neurosci ; 14: 573254, 2020.
Article in English | MEDLINE | ID: mdl-33100961

ABSTRACT

Under adverse listening conditions, prior linguistic knowledge about the form (i.e., phonology) and meaning (i.e., semantics) of speech helps us to predict what an interlocutor is about to say. Previous research has shown that accurate predictions of incoming speech increase speech intelligibility, and that semantic predictions enhance the perceptual clarity of degraded speech even when exact phonological predictions are possible. In addition, working memory (WM) is thought to have a specific influence over anticipatory mechanisms by actively maintaining and updating the relevance of predicted vs. unpredicted speech inputs. However, the relative impact of deviations from form- and meaning-based expectations on speech processing is incompletely understood. Here, we use MEG to investigate the cortical temporal processing of deviations from the expected form and meaning of final words during sentence processing. Our overall aim was to observe how deviations from the expected form and meaning modulate cortical speech processing under adverse listening conditions and to investigate the degree to which this is associated with WM capacity. Results indicated that different types of deviations are processed differently in the auditory N400 and Mismatch Negativity (MMN) components. In particular, the MMN was sensitive to the type of deviation (form or meaning), whereas the N400 was sensitive to the magnitude of the deviation rather than its type. WM capacity was associated with the ability to process incoming phonological information and with semantic integration.
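
Component effects like the MMN and N400 contrasts described here boil down to averaging sensor data over trials within a post-stimulus time window and comparing conditions. The sketch below does this with plain NumPy on a hypothetical single-channel recording; the channel, onsets, and window choices are illustrative assumptions, not the study's MEG pipeline.

```python
import numpy as np

def mean_window_amplitude(signal: np.ndarray, onsets: np.ndarray,
                          sfreq: float, window: tuple) -> float:
    """Trial-averaged amplitude in a post-stimulus window.

    window is (start, end) in seconds after word onset, e.g. (0.30, 0.50)
    for an N400-like window or (0.10, 0.25) for an MMN-like window.
    """
    lo, hi = (int(round(t * sfreq)) for t in window)
    return float(np.mean([signal[o + lo:o + hi] for o in onsets]))

# Hypothetical recording: 60 s at 250 Hz, final-word onsets every 2 s.
sfreq = 250.0
rng = np.random.default_rng(2)
signal = rng.standard_normal(int(60 * sfreq)) * 1e-6
onsets = (np.arange(2, 58, 2) * sfreq).astype(int)

# Alternate trials stand in for deviant vs. expected final words.
n400 = (mean_window_amplitude(signal, onsets[::2], sfreq, (0.30, 0.50))
        - mean_window_amplitude(signal, onsets[1::2], sfreq, (0.30, 0.50)))
mmn = (mean_window_amplitude(signal, onsets[::2], sfreq, (0.10, 0.25))
       - mean_window_amplitude(signal, onsets[1::2], sfreq, (0.10, 0.25)))
print(f"N400-window effect: {n400:.2e} (deviant - expected)")
print(f"MMN-window effect:  {mmn:.2e}")
```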

6.
Ear Hear ; 40(5): 1140-1148, 2019.
Article in English | MEDLINE | ID: mdl-30624251

ABSTRACT

OBJECTIVES: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance the perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important. DESIGN: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions, while semantic coherence allowed generation of meaning-based predictions. RESULTS: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The perceptual clarity benefit of meaning-based predictions was positively related to verbal fluency but not to working memory performance. CONCLUSIONS: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity to storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as verbal fluency to generate useful meaning-based predictions.
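
Noise-vocoding, the degradation method named in the DESIGN section, is a standard signal-processing recipe: split the signal into frequency bands, keep each band's slow amplitude envelope, discard the fine structure, and use the envelopes to modulate band-limited noise. The sketch below is a generic SciPy implementation; the band count, edge frequencies, and the placeholder input tone are assumptions, not the study's stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech: np.ndarray, fs: float, n_bands: int = 6,
                 f_lo: float = 100.0, f_hi: float = 7000.0) -> np.ndarray:
    """Noise-vocode a signal: per-band envelopes modulate band-limited noise."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    noise = rng.standard_normal(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, speech)))  # band envelope
        out += envelope * sosfiltfilt(sos, noise)             # noise carrier
    return out / np.max(np.abs(out))

# Placeholder input standing in for a recorded sentence (1 s at 16 kHz).
fs = 16000.0
t = np.arange(int(fs)) / fs
speech = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
vocoded = noise_vocode(speech, fs)
```

Fewer bands yield more severe degradation, which is the usual way studies of this kind step through levels of speech quality.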


Subjects
Hearing Loss, Sensorineural/physiopathology , Speech Perception , Aged , Comprehension , Female , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Memory, Short-Term , Middle Aged , Semantics , Severity of Illness Index
7.
J Exp Psychol Hum Percept Perform ; 44(2): 277-285, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28557490

ABSTRACT

The perceptual clarity of speech is influenced by more than just the acoustic quality of the sound; it also depends on contextual support. For example, a degraded sentence is perceived to be clearer when the content of the speech signal is provided as matching text (i.e., form-based predictability) before the degraded sentence is heard. Here, we investigate whether sentence-level semantic coherence (i.e., meaning-based predictability) enhances the perceptual clarity of degraded sentences, and if so, whether the mechanism is the same as that underlying enhancement by matching text. We also ask whether form- and meaning-based predictability are related to individual differences in cognitive abilities. Twenty participants listened to spoken sentences that were either clear or degraded by noise vocoding and rated the clarity of each item. The sentences had either high or low semantic coherence. Each spoken word was preceded by the homologous printed word (matching text) or by a meaningless letter string (nonmatching text). Cognitive abilities were measured with a working memory test. Results showed that perceptual clarity was significantly enhanced both by matching text and by semantic coherence. Importantly, high coherence enhanced the perceptual clarity of the degraded sentences even when they were preceded by matching text, suggesting that the effects of form- and meaning-based predictions on perceptual clarity are independent and additive. However, when working memory capacity, indexed by the Size-Comparison Span Test, was controlled for, only form-based predictions enhanced perceptual clarity, and then only at some sound quality levels, suggesting that prediction effects are to a certain extent dependent on cognitive abilities.
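
The claim that form- and meaning-based effects are independent and additive corresponds to a 2 x 2 design with no interaction between the two factors. A minimal way to check that pattern on rating data is an ordinary least squares model with an interaction term, sketched below on simulated ratings; the effect sizes and rating scale are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 20  # participants, as in the study

# Simulated clarity ratings for the four cells of the 2 x 2 design,
# built with additive (non-interacting) effects of text and coherence.
rows = []
for text in ("matching", "nonmatching"):
    for coherence in ("high", "low"):
        base = 3.0 + 1.2 * (text == "matching") + 0.8 * (coherence == "high")
        for rating in rng.normal(base, 0.5, n):
            rows.append({"text": text, "coherence": coherence, "clarity": rating})
df = pd.DataFrame(rows)

# Additivity predicts a negligible text:coherence interaction coefficient.
model = smf.ols("clarity ~ text * coherence", data=df).fit()
print(model.summary().tables[1])
```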


Subjects
Memory, Short-Term/physiology , Pattern Recognition, Visual/physiology , Phonetics , Psycholinguistics , Reading , Semantics , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult
8.
Int J Audiol ; 55(11): 623-42, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27589015

ABSTRACT

OBJECTIVE: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE: Participants were 200 hard-of-hearing hearing-aid users with a mean age of 60.8 years. Forty-three percent were female, and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITION variables resulted in a single COGNITION factor; and the OUTCOMES variables resulted in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). CONCLUSIONS: All LEVEL 2 factors are important theoretically as well as for clinical assessment.
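
The two-step design, first one factor per test, then higher-level factors per variable class that jointly predict outcomes, can be sketched with off-the-shelf tools. Below, scikit-learn's FactorAnalysis stands in for the LEVEL 2 step and a linear regression for the outcome prediction; all scores are simulated and the structure is a deliberately simplified stand-in for the study's models.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200  # participants, matching the study's sample size

# Simulated scores: two latent abilities drive 4 hearing and 3 cognitive tests.
latent = rng.standard_normal((n, 2))
hearing = latent[:, [0]] @ rng.normal(1.0, 0.2, (1, 4)) + rng.normal(0, 0.5, (n, 4))
cognition = latent[:, [1]] @ rng.normal(1.0, 0.2, (1, 3)) + rng.normal(0, 0.5, (n, 3))

# LEVEL 2-style step: extract one factor per class of test variables.
f_hearing = FactorAnalysis(n_components=1).fit_transform(hearing)
f_cognition = FactorAnalysis(n_components=1).fit_transform(cognition)

# Do the class-level factors jointly predict a (simulated) NO CONTEXT outcome?
outcome = 0.5 * latent[:, 0] + 0.4 * latent[:, 1] + rng.normal(0, 0.6, n)
X = np.hstack([f_hearing, f_cognition])
print("R^2:", LinearRegression().fit(X, outcome).score(X, outcome))
```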


Subjects
Cognition , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/psychology , Hearing Aids , Hearing Disorders/psychology , Hearing Disorders/therapy , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Comprehension , Executive Function , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Male , Memory, Short-Term , Middle Aged , Neuropsychological Tests , Noise/adverse effects , Perceptual Masking
10.
Front Psychol ; 4: 780, 2013.
Article in English | MEDLINE | ID: mdl-24298261
11.
Front Syst Neurosci ; 7: 31, 2013.
Article in English | MEDLINE | ID: mdl-23874273

ABSTRACT

Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model; Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms, both of which depend on WMC, albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.

12.
Eur J Neurosci ; 37(5): 777-85, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23281939

ABSTRACT

Recent human behavioral studies have shown semantic and/or lexical processing for stimuli presented below the auditory perception threshold. Here, we investigated electroencephalographic responses to words, pseudo-words and complex sounds, in conditions where phonological and lexical categorizations were behaviorally successful (categorized stimuli) or unsuccessful (uncategorized stimuli). Data showed a greater decrease in low-beta power at left-hemisphere temporal electrodes for categorized non-lexical sounds (complex sounds and pseudo-words) than for categorized lexical sounds (words), consistent with the signature of a failure in lexical access. Similar differences between lexical and non-lexical sounds were observed for uncategorized stimuli, although these stimuli did not yield evoked potentials or theta activity. The results of the present study suggest that behaviorally uncategorized stimuli were processed at the lexical level, and provide evidence of the neural bases of the results observed in previous behavioral studies investigating auditory perception in the absence of stimulus awareness.
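
The low-beta power decrease reported here is typically computed by band-pass filtering each channel, taking the squared Hilbert envelope as instantaneous power, and comparing post-stimulus power with a baseline. The sketch below does exactly that on a hypothetical single channel; the 13-20 Hz band, sampling rate, and onset time are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x: np.ndarray, fs: float, band: tuple) -> np.ndarray:
    """Instantaneous band power: band-pass filter, then squared Hilbert envelope."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x))) ** 2

# Hypothetical left-temporal EEG channel: 10 s at 500 Hz, stimulus onset at 5 s.
fs = 500.0
rng = np.random.default_rng(5)
eeg = rng.standard_normal(int(10 * fs))

power = band_power(eeg, fs, (13.0, 20.0))  # low-beta, one common convention
onset = int(5 * fs)
baseline, post = power[:onset].mean(), power[onset:].mean()
print(f"low-beta change: {10 * np.log10(post / baseline):.2f} dB")  # < 0 = decrease
```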


Subjects
Beta Rhythm , Sound , Speech Perception/physiology , Speech/physiology , Theta Rhythm , Adult , Auditory Perception , Cerebral Cortex/physiology , Evoked Potentials , Female , Functional Laterality , Humans , Male , Phonetics
13.
Front Psychol ; 2: 176, 2011.
Article in English | MEDLINE | ID: mdl-21845183

ABSTRACT

Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task suggest correct recognition of words in the absence of detection, consistent with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) presented close to the auditory threshold were detected better than non-phonological stimuli (complex sounds). This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.
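
A detection advantage of this kind is usually quantified by fitting a psychometric function to detection rates across presentation levels and comparing the fitted thresholds; a lower threshold for speech than non-speech would reflect the reported effect. The logistic form, levels, and hit rates below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(level_db, threshold, slope):
    """Logistic detection probability as a function of presentation level."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db - threshold)))

# Invented detection rates across levels (dB relative to a nominal threshold).
levels = np.array([-10.0, -6.0, -2.0, 0.0, 2.0, 6.0, 10.0])
hits_speech = np.array([0.08, 0.20, 0.45, 0.62, 0.80, 0.95, 0.99])
hits_sounds = np.array([0.05, 0.12, 0.30, 0.50, 0.70, 0.90, 0.98])

popt_speech, _ = curve_fit(psychometric, levels, hits_speech, p0=(0.0, 0.5))
popt_sounds, _ = curve_fit(psychometric, levels, hits_sounds, p0=(0.0, 0.5))
# A lower fitted threshold for speech indicates a speech detection advantage.
print(f"speech threshold:        {popt_speech[0]:+.2f} dB")
print(f"complex-sound threshold: {popt_sounds[0]:+.2f} dB")
```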

14.
PLoS One ; 6(5): e20273, 2011.
Article in English | MEDLINE | ID: mdl-21655277

ABSTRACT

Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime.
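
The two effects reported, semantic priming and negative repetition priming among slower responders, reduce to per-participant RT differences between prime conditions plus a median split on overall speed. The sketch below computes both on simulated lexical-decision RTs; all numbers are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 30  # hypothetical participants

# Simulated mean lexical-decision RTs (ms) per participant and prime condition.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 3),
    "condition": np.tile(["related", "unrelated", "repeated"], n),
    "rt": rng.normal(650, 60, n * 3),
})

rt = df.pivot_table(index="participant", columns="condition", values="rt")
rt["semantic_priming"] = rt["unrelated"] - rt["related"]   # > 0 = facilitation
rt["repetition_priming"] = rt["related"] - rt["repeated"]  # < 0 = repeated slower

# Median split on overall speed, mirroring the slow-responder contrast reported.
rt["overall"] = rt[["related", "unrelated", "repeated"]].mean(axis=1)
slow = rt[rt["overall"] > rt["overall"].median()]
print(slow[["semantic_priming", "repetition_priming"]].mean())
```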


Subjects
Semantics , Speech , Female , Humans , Male , Young Adult