Results 1 - 20 of 88
1.
Cereb Cortex ; 31(7): 3165-3176, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625498

ABSTRACT

Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment in which 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations, and did so in both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation in this region and stronger connectivity between it and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual-verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.
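The load manipulation above rests on n-back scoring: a trial counts as a target when the current stimulus matches the one presented n positions earlier. A minimal scoring sketch (a hypothetical illustration, not the authors' analysis code):

```python
def nback_score(stimuli, responses, n=2):
    """Tally n-back outcomes: a trial is a target when the current
    stimulus matches the one n positions back."""
    counts = {"hits": 0, "misses": 0, "false_alarms": 0, "correct_rejections": 0}
    for i, responded in enumerate(responses):
        is_target = i >= n and stimuli[i] == stimuli[i - n]
        if is_target:
            counts["hits" if responded else "misses"] += 1
        else:
            counts["false_alarms" if responded else "correct_rejections"] += 1
    return counts

# 2-back over a short stimulus stream; True marks a "match" response.
stream = ["A", "B", "A", "C", "A", "C"]
presses = [False, False, True, False, True, True]
print(nback_score(stream, presses))  # {'hits': 3, 'misses': 0, 'false_alarms': 0, 'correct_rejections': 3}
```

Raising n (the load level) leaves this scoring unchanged; only the memory demand on the participant grows.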


Subject(s)
Auditory Cortex/diagnostic imaging, Deafness/diagnostic imaging, Magnetic Resonance Imaging/methods, Memory, Short-Term/physiology, Sign Language, Visual Perception, Adult, Auditory Cortex/physiology, Deafness/physiopathology, Female, Humans, Male, Neuronal Plasticity/physiology, Photic Stimulation/methods, Reaction Time/physiology, Visual Perception/physiology, Young Adult
2.
J Exp Child Psychol ; 210: 105203, 2021 10.
Article in English | MEDLINE | ID: mdl-34118494

ABSTRACT

Background noise makes listening effortful and may lead to fatigue. This may compromise classroom learning, especially for children with a non-native background. In the current study, we used pupillometry to investigate listening effort and fatigue during listening comprehension under typical (0 dB signal-to-noise ratio [SNR]) and favorable (+10 dB SNR) listening conditions in 63 Swedish primary school children (7-9 years of age) performing a narrative speech-picture verification task. Our sample comprised both native (n = 25) and non-native (n = 38) speakers of Swedish. Results revealed greater pupil dilation, indicating more listening effort, in the typical listening condition compared with the favorable listening condition, and it was primarily the non-native speakers who contributed to this effect (and who also had lower performance accuracy than the native speakers). Furthermore, the native speakers had greater pupil dilation during successful trials, whereas the non-native speakers showed greatest pupil dilation during unsuccessful trials, especially in the typical listening condition. This set of results indicates that whereas native speakers can apply listening effort to good effect, non-native speakers may have reached their effort ceiling, resulting in poorer listening comprehension. Finally, we found that baseline pupil size decreased over trials, which potentially indicates more listening-related fatigue, and this effect was greater in the typical listening condition compared with the favorable listening condition. Collectively, these results provide novel insight into the underlying dynamics of listening effort, fatigue, and listening comprehension in typical classroom conditions compared with favorable classroom conditions, and they demonstrate for the first time how sensitive this interplay is to language experience.
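The pupillometric measures described above can be sketched in a few lines: per-trial dilation is typically expressed relative to a pre-stimulus baseline, with peak dilation indexing effort and baseline drift across trials indexing fatigue. A simplified sketch under assumed data shapes (not the study's actual processing pipeline):

```python
def peak_dilation(trace, baseline_samples):
    """Peak pupil dilation relative to the mean of the pre-stimulus
    baseline samples at the start of the trial trace."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return max(trace[baseline_samples:]) - baseline

def baseline_per_trial(traces, baseline_samples):
    """Mean baseline pupil size for each trial, e.g. to track whether
    baselines shrink over trials (a possible fatigue marker)."""
    return [sum(t[:baseline_samples]) / baseline_samples for t in traces]

# Arbitrary pupil-size samples for one trial; first four are pre-stimulus.
trial = [4.0, 4.1, 3.9, 4.0, 4.3, 4.6, 4.8, 4.5]
print(round(peak_dilation(trial, baseline_samples=4), 3))  # peak 4.8 vs. baseline 4.0
```

Real pipelines also handle blinks, interpolation, and filtering, all omitted here.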


Subject(s)
Speech Perception, Auditory Perception, Child, Fatigue, Humans, Noise, Schools
3.
Eur J Neurosci ; 51(11): 2236-2249, 2020 06.
Article in English | MEDLINE | ID: mdl-31872480

ABSTRACT

Change in linguistic prosody generates a mismatch negativity response (MMN), indicating neural representation of linguistic prosody, while change in affective prosody generates a positive response (P3a), reflecting its motivational salience. However, the neural response to concurrent affective and linguistic prosody is unknown. The present paper investigates the integration of these two prosodic features in the brain by examining the neural response to separate and concurrent processing with electroencephalography (EEG). A spoken pair of Swedish words-['fɑ́ːsɛn] phase and ['fɑ̀ːsɛn] damn-that differed in emotional semantics due to linguistic prosody was presented to 16 subjects with angry and neutral affective prosody using a passive auditory oddball paradigm. Acoustically matched pseudowords-['vɑ́ːsɛm] and ['vɑ̀ːsɛm]-were used as controls. Following the constructionist concept of emotions, which accentuates the conceptualization of emotions based on language, it was hypothesized that concurrent affective and linguistic prosody with the same valence-angry ['fɑ̀ːsɛn] damn-would elicit a unique late EEG signature, reflecting the temporal integration of affective voice with emotional semantics of prosodic origin. Accordingly, linguistic prosody elicited an MMN at 300-350 ms, and affective prosody evoked a P3a at 350-400 ms, irrespective of semantics. Beyond these responses, concurrent affective and linguistic prosody evoked a late positive component (LPC) at 820-870 ms in frontal areas, indicating the conceptualization of affective prosody based on linguistic prosody. This study provides evidence that the brain not only distinguishes between these two functions of prosody but also integrates them based on language and experience.
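The component measures reported above rest on difference waves: subtract the standard-stimulus ERP from the deviant ERP and take the mean amplitude within a latency window (the MMN appears as a negative deflection, the P3a and LPC as positive ones). A schematic sketch with invented amplitudes (names and numbers are illustrative only):

```python
def difference_wave(deviant, standard):
    """Deviant-minus-standard ERP; an MMN shows up as a negative
    deflection in this difference wave."""
    return [d - s for d, s in zip(deviant, standard)]

def mean_amplitude(wave, times_ms, start_ms, stop_ms):
    """Mean amplitude of `wave` for samples with start_ms <= t <= stop_ms."""
    window = [v for v, t in zip(wave, times_ms) if start_ms <= t <= stop_ms]
    return sum(window) / len(window)

times = [250, 300, 350, 400]      # ms after stimulus onset (illustrative)
standard = [0.1, 0.2, 0.1, 0.0]   # invented amplitudes, e.g. microvolts
deviant = [0.0, -1.0, -1.4, 0.2]
diff = difference_wave(deviant, standard)
print(mean_amplitude(diff, times, 300, 350))  # negative: an MMN-like deflection
```

In practice the same windowing is applied per electrode after epoching and artifact rejection, which this sketch omits.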


Subject(s)
Emotions, Speech Perception, Brain Mapping, Electroencephalography, Humans, Linguistics, Semantics
4.
J Exp Child Psychol ; 191: 104733, 2020 03.
Article in English | MEDLINE | ID: mdl-31805463

ABSTRACT

Procedural memory underpins the learning of skills and habits. It is often tested in children and adults with sequence learning on the serial reaction time (SRT) task, which involves manual motor control. However, due to infants' slowly developing control of motor actions, most procedures that require motor control cannot be examined in infancy. Here, we investigated procedural memory using an SRT task adapted for infants. During the task, images appeared at one of three locations on a screen, with the location order following a five-item recurring sequence. Three blocks of recurring sequences were followed by a random-order fourth block and finally another block of recurring sequences. Eye movement data were collected for infants (n = 35) and adults (n = 31). Reaction time was indexed by calculating the saccade latencies for orienting to each image as it appeared. The entire protocol took less than 3 min. Sequence learning in the SRT task can be operationalized as an increase in latencies in the random block as compared with the preceding and following sequence blocks. This pattern was observed in both the infants and adults. This study is the first to report learning in an SRT task in infants as young as 9 months. This SRT protocol is a promising procedure for measuring procedural memory in infants.
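The operationalization described above, latencies rising in the random block relative to the surrounding sequence blocks, can be computed directly. A hypothetical sketch (block structure matches the design; the numbers are invented):

```python
def sequence_learning_index(block_latencies):
    """Sequence learning in the SRT task: mean latency in the random
    fourth block minus the mean of the flanking sequence blocks
    (blocks 3 and 5). Positive values indicate learning.
    block_latencies: five lists of per-trial saccade latencies (ms)."""
    mean = lambda xs: sum(xs) / len(xs)
    random_block = mean(block_latencies[3])  # 0-indexed: the fourth block
    flankers = (mean(block_latencies[2]) + mean(block_latencies[4])) / 2
    return random_block - flankers

# Blocks: sequence, sequence, sequence, random, sequence (invented values).
blocks = [[400] * 5, [380] * 5, [350] * 5, [420] * 5, [355] * 5]
print(sequence_learning_index(blocks))  # 420 - (350 + 355) / 2 = 67.5
```

A value near zero would indicate no disruption by the random block, i.e. no evidence of sequence learning.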


Subject(s)
Child Development/physiology, Memory/physiology, Serial Learning/physiology, Visual Perception/physiology, Adult, Eye-Tracking Technology, Female, Humans, Infant, Male, Young Adult
5.
Ear Hear ; 40(5): 1140-1148, 2019.
Article in English | MEDLINE | ID: mdl-30624251

ABSTRACT

OBJECTIVES: The precision of stimulus-driven information is less critical for comprehension when accurate knowledge-based predictions of the upcoming stimulus can be generated. A recent study in listeners without hearing impairment (HI) has shown that form- and meaning-based predictability independently and cumulatively enhance perceived clarity of degraded speech. In the present study, we investigated whether form- and meaning-based predictability enhanced the perceptual clarity of degraded speech for individuals with moderate to severe sensorineural HI, a group for whom such enhancement may be particularly important. DESIGN: Spoken sentences with high or low semantic coherence were degraded by noise-vocoding and preceded by matching or nonmatching text primes. Matching text primes allowed generation of form-based predictions while semantic coherence allowed generation of meaning-based predictions. RESULTS: The results showed that both form- and meaning-based predictions make degraded speech seem clearer to individuals with HI. The benefit of form-based predictions was seen across levels of speech quality and was greater for individuals with HI in the present study than for individuals without HI in our previous study. However, for individuals with HI, the benefit of meaning-based predictions was only apparent when the speech was slightly degraded. When it was more severely degraded, the benefit of meaning-based predictions was only seen when matching text primes preceded the degraded speech. The benefit in terms of perceptual clarity of meaning-based predictions was positively related to verbal fluency but not working memory performance. CONCLUSIONS: Taken together, these results demonstrate that, for individuals with HI, form-based predictability has a robust effect on perceptual clarity that is greater than the effect previously shown for individuals without HI. 
However, when speech quality is moderately or severely degraded, meaning-based predictability is contingent on form-based predictability. Further, the ability to mobilize the lexicon seems to contribute to the strength of meaning-based predictions. Whereas individuals without HI may be able to devote explicit working memory capacity for storing meaning-based predictions, individuals with HI may already be using all available explicit capacity to process the degraded speech and thus become reliant on explicit skills such as their verbal fluency to generate useful meaning-based predictions.


Subject(s)
Hearing Loss, Sensorineural/physiopathology, Speech Perception, Aged, Comprehension, Female, Hearing Aids, Hearing Loss, Sensorineural/rehabilitation, Humans, Male, Memory, Short-Term, Middle Aged, Semantics, Severity of Illness Index
6.
Ear Hear ; 40(2): 272-286, 2019.
Article in English | MEDLINE | ID: mdl-29923867

ABSTRACT

OBJECTIVES: Speech understanding may be cognitively demanding, but it can be enhanced when semantically related text cues precede auditory sentences. The present study aimed to determine whether (a) providing text cues reduces pupil dilation, a measure of cognitive load, during listening to sentences, (b) repeating the sentences aloud affects recall accuracy and pupil dilation during recall of cue words, and (c) semantic relatedness between cues and sentences affects recall accuracy and pupil dilation during recall of cue words. DESIGN: Sentence repetition following text cues and recall of the text cues were tested. Twenty-six participants (mean age, 22 years) with normal hearing listened to masked sentences. On each trial, a set of four-word cues was presented visually as text preceding the auditory presentation of a sentence whose meaning was either related or unrelated to the cues. On each trial, participants first read the cue words, then listened to a sentence. Following this they spoke aloud either the cue words or the sentence, according to instruction, and finally on all trials orally recalled the cues. Peak pupil dilation was measured throughout listening and recall on each trial. Additionally, participants completed a test measuring the ability to perceive degraded verbal text information and three working memory tests (a reading span test, a size-comparison span test, and a test of memory updating). RESULTS: Cue words that were semantically related to the sentence facilitated sentence repetition but did not reduce pupil dilation. Recall was poorer and there were more intrusion errors when the cue words were related to the sentences. Recall was also poorer when sentences were repeated aloud. Both behavioral effects were associated with greater pupil dilation. Larger reading span capacity and smaller size-comparison span were associated with larger peak pupil dilation during listening. 
Furthermore, larger reading span and greater memory updating ability were both associated with better cue recall overall. CONCLUSIONS: Although sentence-related word cues facilitate sentence repetition, our results indicate that they do not reduce cognitive load during listening in noise with a concurrent memory load. As expected, higher working memory capacity was associated with better recall of the cues. Unexpectedly, however, semantic relatedness with the sentence reduced word cue recall accuracy and increased intrusion errors, suggesting an effect of semantic confusion. Further, speaking the sentence aloud also reduced word cue recall accuracy, probably due to articulatory suppression. Importantly, imposing a memory load during listening to sentences resulted in the absence of formerly established strong effects of speech intelligibility on the pupil dilation response. This nullified intelligibility effect demonstrates that the pupil dilation response to a cognitive (memory) task can completely overshadow the effect of perceptual factors on the pupil dilation response. This highlights the importance of taking cognitive task load into account during auditory testing.


Subject(s)
Cues, Mental Recall/physiology, Pupil/physiology, Speech Perception/physiology, Adolescent, Adult, Auditory Perception, Cognition, Female, Humans, Male, Memory, Memory, Short-Term, Semantics, Signal-To-Noise Ratio, Young Adult
7.
Cereb Cortex ; 28(10): 3540-3554, 2018 10 01.
Article in English | MEDLINE | ID: mdl-28968707

ABSTRACT

Early deafness results in crossmodal reorganization of the superior temporal cortex (STC). Here, we investigated the effect of deafness on cognitive processing. Specifically, we studied the reorganization, due to deafness and sign language (SL) knowledge, of linguistic and nonlinguistic visual working memory (WM). We conducted an fMRI experiment in groups that differed in their hearing status and SL knowledge: deaf native signers, hearing native signers, and hearing nonsigners. Participants performed a 2-back WM task and a control task. Stimuli were signs from British Sign Language (BSL) or moving nonsense objects in the form of point-light displays. We found characteristic WM activations in fronto-parietal regions in all groups. However, deaf participants also recruited bilateral posterior STC during the WM task, independently of the linguistic content of the stimuli, and showed less activation in fronto-parietal regions. Resting-state connectivity analysis showed increased connectivity between frontal regions and STC in deaf compared to hearing individuals. WM for signs did not elicit differential activations, suggesting that SL WM does not rely on modality-specific linguistic processing. These findings suggest that WM networks are reorganized due to early deafness, and that the organization of cognitive networks is shaped by the nature of the sensory inputs available during development.


Subject(s)
Deafness/physiopathology, Hearing/physiology, Memory, Short-Term/physiology, Nerve Net/physiopathology, Adult, Deafness/diagnostic imaging, Female, Humans, Language Development, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Neuronal Plasticity/physiology, Psycholinguistics, Reaction Time/physiology, Sign Language, Young Adult
8.
Int J Audiol ; 58(5): 247-261, 2019 05.
Article in English | MEDLINE | ID: mdl-30714435

ABSTRACT

OBJECTIVE: The current update of the Ease of Language Understanding (ELU) model evaluates the predictive and postdictive aspects of speech understanding and communication. DESIGN: The aspects scrutinised concern: (1) signal distortion and working memory capacity (WMC), (2) WMC and early attention mechanisms, (3) WMC and use of phonological and semantic information, (4) hearing loss, WMC and long-term memory (LTM), (5) WMC and effort, and (6) the ELU model and sign language. STUDY SAMPLES: Relevant literature based on our own or others' data was used. RESULTS: Expectations 1-4 are supported, whereas 5-6 are constrained by conceptual issues and empirical data. Further strands of research were addressed, focussing on WMC and contextual use, and on WMC deployment in relation to hearing status. A wider discussion of task demands, concerning, for example, inference-making and priming, is also introduced and related to the overarching ELU functions of prediction and postdiction. Finally, some new concepts and models that have been inspired by the ELU framework are presented and discussed. CONCLUSIONS: The ELU model has been productive in generating empirical predictions/expectations, the majority of which have been confirmed. Nevertheless, new insights and boundary conditions need to be experimentally tested to further shape the model.


Subject(s)
Cognition, Hearing Loss/psychology, Memory, Short-Term, Speech Perception, Attention, Humans, Memory, Long-Term
9.
Neural Plast ; 2018: 2576047, 2018.
Article in English | MEDLINE | ID: mdl-30662455

ABSTRACT

Congenital deafness is often compensated for by early sign language use, leading to typical language development with corresponding neural underpinnings. However, deaf individuals are frequently reported to have poorer numerical abilities than hearing individuals, and it is not known whether the underlying neuronal networks differ between groups. In the present study, adult deaf signers and hearing nonsigners performed digit and letter order tasks during functional magnetic resonance imaging. We found the neuronal networks recruited in the two tasks to be generally similar across groups, with significant activation in the dorsal visual stream for the letter order task, suggesting letter identification and position encoding. For the digit order task, no significant activation was found for either of the two groups. Region of interest analyses on parietal numerical processing regions revealed different patterns of activation across groups. Importantly, deaf signers showed significant activation in the right horizontal portion of the intraparietal sulcus for the digit order task, suggesting engagement of magnitude manipulation during numerical order processing in this group.


Subject(s)
Brain/diagnostic imaging, Deafness/diagnostic imaging, Nerve Net/diagnostic imaging, Adult, Brain/physiopathology, Deafness/congenital, Deafness/physiopathology, Female, Functional Laterality/physiology, Humans, Magnetic Resonance Imaging, Male, Nerve Net/physiopathology, Sign Language, Young Adult
10.
J Deaf Stud Deaf Educ ; 22(4): 404-421, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28961874

ABSTRACT

Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed on five occasions, and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.


Subject(s)
Computer-Assisted Instruction/methods, Education of Hearing Disabled/methods, Literacy, Sign Language, Child, Female, Humans, Male, Reading
11.
J Cogn Neurosci ; 28(1): 20-40, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26351993

ABSTRACT

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Subject(s)
Brain Mapping, Cerebral Cortex/physiopathology, Perception/physiology, Phonetics, Adult, Analysis of Variance, Cerebral Cortex/blood supply, Cues, Deafness/pathology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Photic Stimulation, Psychoacoustics, Reaction Time/physiology, Semantics
12.
Neuroimage ; 124(Pt A): 96-106, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26348556

ABSTRACT

Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear if reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.


Subject(s)
Auditory Cortex/physiopathology, Deafness/physiopathology, Neuronal Plasticity, Sign Language, Adult, Brain Mapping, Echo-Planar Imaging, Female, Humans, Linguistics, Male, Middle Aged
13.
Ear Hear ; 37 Suppl 1: 69S-76S, 2016.
Article in English | MEDLINE | ID: mdl-27355773

ABSTRACT

Everyday listening may be experienced as effortful, especially by individuals with hearing loss. This may be due to internal factors, such as cognitive load, and external factors, such as noise. Even when speech is audible, internal and external factors may combine to reduce cognitive spare capacity, or the ability to engage in cognitive processing of spoken information. A better understanding of cognitive spare capacity and how it can be optimally allocated may guide new approaches to rehabilitation and ultimately improve outcomes. This article presents results of three tests of cognitive spare capacity:

1. Sentence-final Word Identification and Recall (SWIR) test
2. Cognitive Spare Capacity Test (CSCT)
3. Auditory Inference Span Test (AIST)

Results show that noise reduces cognitive spare capacity even when speech intelligibility is retained. In addition, SWIR results show that hearing aid signal processing can increase cognitive spare capacity, and CSCT and AIST results show that increasing load reduces cognitive spare capacity. Correlational evidence suggests that while the effect of noise on cognitive spare capacity is related to working memory capacity, the effect of load is related to executive function. Future studies should continue to investigate how hearing aid signal processing can mitigate the effect of load on cognitive spare capacity, and whether such effects can be enhanced by developing executive skills through training. The mechanisms modulating cognitive spare capacity should be investigated by studying their neural correlates, and tests of cognitive spare capacity should be developed for clinical use in conjunction with developing new approaches to rehabilitation.


Subject(s)
Cognition, Memory, Short-Term, Noise, Speech Perception, Humans
14.
Ear Hear ; 37(1): e26-36, 2016.
Article in English | MEDLINE | ID: mdl-26244401

ABSTRACT

OBJECTIVES: Verbal reasoning performance is an indicator of the ability to think constructively in everyday life and relies on both crystallized and fluid intelligence. This study aimed to determine the effect of functional hearing on verbal reasoning when controlling for age, gender, and education. In addition, the study investigated whether hearing aid usage mitigated the effect and examined different routes from hearing to verbal reasoning. DESIGN: Cross-sectional data on 40- to 70-year-old community-dwelling participants from the UK Biobank resource were accessed. Data consisted of behavioral and subjective measures of functional hearing, assessments of numerical and linguistic verbal reasoning, measures of executive function, and demographic and lifestyle information. Data on 119,093 participants who had completed hearing and verbal reasoning tests were submitted to multiple regression analyses, and data on 61,688 of these participants, who had completed additional cognitive tests and provided relevant lifestyle information, were submitted to structural equation modeling. RESULTS: Poorer performance on the behavioral measure of functional hearing was significantly associated with poorer verbal reasoning in both the numerical and linguistic domains (p < 0.001). There was no association between the subjective measure of functional hearing and verbal reasoning. Functional hearing significantly interacted with education (p < 0.002), showing a trend for functional hearing to have a greater impact on verbal reasoning among those with a higher level of formal education. Among those with poor hearing, hearing aid usage had a significant positive, but not necessarily causal, effect on both numerical and linguistic verbal reasoning (p < 0.005). The estimated effect of hearing aid usage was less than the effect of poor functional hearing. 
Structural equation modeling analyses confirmed that controlling for education reduced the effect of functional hearing on verbal reasoning and showed that controlling for executive function eliminated the effect. However, when computer usage was controlled for, the eliminating effect of executive function was weakened. CONCLUSIONS: Poor functional hearing was associated with poor verbal reasoning in a 40- to 70-year-old community-dwelling population after controlling for age, gender, and education. The effect of functional hearing on verbal reasoning was significantly reduced among hearing aid users and completely overcome by good executive function skills, which may be enhanced by playing computer games.
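"Controlling for" a covariate such as education, as in the regression analyses above, can be sketched as residualization: regress both the predictor and the outcome on the covariate, then relate what is left over. A toy illustration with invented numbers (the study itself used multiple regression and structural equation modeling on UK Biobank data):

```python
def fit_line(x, y):
    """Ordinary least squares for a single predictor: (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def residuals(x, y):
    """What remains of y after removing its linear dependence on x."""
    slope, intercept = fit_line(x, y)
    return [b - (slope * a + intercept) for a, b in zip(x, y)]

# Toy data: education (years), a hearing score, and a reasoning score,
# both constructed to be perfectly collinear with education.
education = [10, 12, 14, 16, 18]
hearing = [2.0, 2.5, 3.0, 3.5, 4.0]
reasoning = [5.0, 6.0, 7.0, 8.0, 9.0]
# After residualizing on education, nothing of the hearing-reasoning
# relation remains: both residual vectors are (numerically) all zero.
print(residuals(education, hearing), residuals(education, reasoning))
```

With real data the residuals would not vanish; whatever association survives residualization is the covariate-adjusted effect.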


Subject(s)
Cognition, Executive Function, Hearing Aids, Hearing Loss/psychology, Intelligence, Adult, Age Factors, Aged, Audiometry, Pure-Tone, Cross-Sectional Studies, Educational Status, Female, Hearing Loss/rehabilitation, Humans, Independent Living, Linear Models, Male, Middle Aged, Regression Analysis, Sex Factors, United Kingdom
15.
Ear Hear ; 37(5): 620-2, 2016.
Article in English | MEDLINE | ID: mdl-27232076

ABSTRACT

Experimental work has shown better visuospatial working memory (VSWM) in profoundly deaf individuals compared to those with normal hearing. Other data, including data from the UK Biobank resource, show poorer VSWM in individuals with poorer hearing. Using the same database, the authors investigated VSWM in individuals who reported profound deafness. Included in this study were 112 participants who were profoundly deaf, 1310 with poor hearing, and 74,635 with normal hearing. All participants performed a card-pair matching task as a test of VSWM. Although variance in VSWM performance was large among profoundly deaf participants, at the group level it was superior to that of participants with both normal and poor hearing. VSWM in adults is related to hearing status, but the association is not linear. Future studies should investigate the mechanism behind enhanced VSWM in profoundly deaf adults.


Subject(s)
Deafness/psychology, Memory, Short-Term, Spatial Processing, Adult, Aged, Case-Control Studies, Female, Hearing Loss/psychology, Humans, Male, Middle Aged, Severity of Illness Index, United Kingdom
16.
Ear Hear ; 37 Suppl 1: 145S-54S, 2016.
Article in English | MEDLINE | ID: mdl-27355764

ABSTRACT

In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
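The SNRs discussed throughout these abstracts express the ratio of speech level to noise level in decibels; from RMS amplitudes the conversion is 20·log10(signal/noise). A small sketch with illustrative values:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels from RMS amplitude levels."""
    return 20 * math.log10(signal_rms / noise_rms)

print(snr_db(1.0, 1.0))   # equal levels: 0.0 dB
print(snr_db(10.0, 1.0))  # speech 10x the noise amplitude: 20.0 dB
# A +10 dB condition corresponds to an amplitude ratio of
# 10 ** (10 / 20), roughly 3.16.
```

Adaptive SRT procedures step this ratio down until a criterion intelligibility is reached, which is why they converge on SNRs lower than the positive values typical of real-life listening.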


Subject(s)
Hearing Aids , Hearing Loss, Conductive/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Mental Recall , Noise , Prosthesis Fitting/methods , Speech Perception , Aged , Denmark , Female , Humans , Male , Middle Aged , Outcome Assessment, Health Care , Signal-To-Noise Ratio , Speech Reception Threshold Test
17.
Ear Hear ; 37 Suppl 1: 5S-27S, 2016.
Article in English | MEDLINE | ID: mdl-27355771

ABSTRACT

The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to the psychologist who described the effects of attention on perception and used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.


Subject(s)
Attention , Cognition , Hearing Loss/psychology , Speech Perception , Auditory Perception , Comprehension , Humans
18.
Mem Cognit ; 44(4): 608-20, 2016 May.
Article in English | MEDLINE | ID: mdl-26800983

ABSTRACT

Working memory (WM) for spoken language improves when the to-be-remembered items correspond to preexisting representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers, as well as to hearing nonsigners. Four different kinds of stimuli were presented: British Sign Language (BSL; familiar to the signers), Swedish Sign Language (SSL; unfamiliar), nonsigns, and nonlinguistic manual actions. The hearing signers performed better with BSL than with SSL, demonstrating a facilitatory effect of preexisting semantic representation. The deaf signers also performed better with BSL than with SSL, but only when WM load was high. No effect of preexisting phonological representation was detected. The deaf signers performed better than the hearing nonsigners with all sign-based materials, but this effect did not generalize to nonlinguistic manual actions. We argue that deaf signers, who are highly reliant on visual information for communication, develop expertise in processing sign-based items, even when those items do not have preexisting semantic or phonological representations. Preexisting semantic representation, however, enhances the quality of the gesture-based representations temporarily maintained in WM by this group, thereby releasing WM resources to deal with increased load. Hearing signers, on the other hand, may make strategic use of their speech-based representations for mnemonic purposes. The overall pattern of results is in line with flexible-resource models of WM.


Subject(s)
Deafness/physiopathology , Memory, Short-Term/physiology , Semantics , Sign Language , Adult , Humans , Middle Aged , Space Perception/physiology , Visual Perception/physiology
19.
Int J Audiol ; 55(11): 623-42, 2016 11.
Article in English | MEDLINE | ID: mdl-27589015

ABSTRACT

OBJECTIVE: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were female and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables resulted in one COGNITION factor only, and the OUTCOMES variables in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). CONCLUSIONS: All LEVEL 2 factors are important theoretically as well as for clinical assessment.


Subject(s)
Cognition , Correction of Hearing Impairment/instrumentation , Correction of Hearing Impairment/psychology , Hearing Aids , Hearing Disorders/psychology , Hearing Disorders/therapy , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold , Comprehension , Executive Function , Female , Hearing , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Male , Memory, Short-Term , Middle Aged , Neuropsychological Tests , Noise/adverse effects , Perceptual Masking
20.
Ear Hear ; 36(1): 82-91, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25166628

ABSTRACT

OBJECTIVES: A hearing aid noise reduction (NR) algorithm reduces the adverse effect of competing speech on memory for target speech for individuals with hearing impairment with high working memory capacity. In the present study, we investigated whether the positive effect of NR could be extended to individuals with low working memory capacity, as well as how NR influences recall performance for target native speech when the masker language is non-native. DESIGN: A sentence-final word identification and recall (SWIR) test was administered to 26 experienced hearing aid users. In this test, target spoken native language (Swedish) sentence lists were presented in competing native (Swedish) or foreign (Cantonese) speech with or without binary masking NR algorithm. After each sentence list, free recall of sentence final words was prompted. Working memory capacity was measured using a reading span (RS) test. RESULTS: Recall performance was associated with RS. However, the benefit obtained from NR was not associated with RS. Recall performance was more disrupted by native than foreign speech babble and NR improved recall performance in native but not foreign competing speech. CONCLUSIONS: Noise reduction improved memory for speech heard in competing speech for hearing aid users. Memory for native speech was more disrupted by native babble than foreign babble, but the disruptive effect of native speech babble was reduced to that of foreign babble when there was NR.


Subject(s)
Algorithms , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Language , Memory, Short-Term , Mental Recall , Noise/prevention & control , Speech Perception , Aged , Female , Hearing Loss, Sensorineural/psychology , Humans , Male , Memory , Middle Aged , Recognition, Psychology