Results 1 - 20 of 1,493
1.
J Neurosci ; 44(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37963763

ABSTRACT

Learning to process speech in a foreign language involves learning new representations for mapping the auditory signal to linguistic structure. Behavioral experiments suggest that even listeners who are highly proficient in a non-native language experience interference from representations of their native language. However, much of the evidence for such interference comes from tasks that may inadvertently increase the salience of native language competitors. Here we tested for neural evidence of proficiency and native language interference in a naturalistic story listening task. We studied electroencephalography responses of 39 native speakers of Dutch (14 male) to an English short story, spoken by a native speaker of either American English or Dutch. We modeled brain responses with multivariate temporal response functions, using acoustic and language models. We found evidence for activation of Dutch language statistics when listening to English, but only when it was spoken with a Dutch accent. This suggests that a naturalistic, monolingual setting decreases the interference from native language representations, whereas an accent in the listener's own native language may increase native language interference by increasing the salience of the native language and activating native language phonetic and lexical representations. Brain responses suggest that such interference stems from words from the native language competing with the foreign language in a single word recognition system, rather than being activated in a parallel lexicon. We further found that secondary acoustic representations of speech (after 200 ms latency) decreased with increasing proficiency. This may reflect improved acoustic-phonetic models in more proficient listeners. Significance Statement: Behavioral experiments suggest that native language knowledge interferes with foreign language listening, but such effects may be sensitive to task manipulations, as tasks that increase metalinguistic awareness may also increase native language interference. This highlights the need for studying non-native speech processing using naturalistic tasks. We measured neural responses unobtrusively while participants listened for comprehension and characterized the influence of proficiency at multiple levels of representation. We found that salience of the native language, as manipulated through speaker accent, affected activation of native language representations: significant evidence for activation of native language (Dutch) categories was only obtained when the speaker had a Dutch accent, whereas no significant interference was found for a speaker with a native (American) accent.
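
As a rough sketch of the multivariate temporal response function (TRF) approach named above, and not the authors' pipeline, the example below fits an encoding model by ridge regression over time-lagged stimulus features; the feature count, lag range, sampling rate, and regularization value are arbitrary placeholders.

```python
import numpy as np

def lagged_design(features, min_lag, max_lag):
    """Stack time-lagged copies of the stimulus features (samples x features)."""
    n, f = features.shape
    lags = range(min_lag, max_lag + 1)
    X = np.zeros((n, f * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0          # zero-pad instead of wrapping
        elif lag < 0:
            shifted[lag:] = 0
        X[:, i * f:(i + 1) * f] = shifted
    return X

def fit_trf(features, eeg, min_lag, max_lag, alpha=1.0):
    """Ridge-regression TRF: predict each EEG channel from lagged stimulus features."""
    X = lagged_design(features, min_lag, max_lag)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)      # (lags * features) x channels

# Toy data: 10 s at 100 Hz, 2 stimulus features (e.g., envelope plus a language predictor), 4 EEG channels
rng = np.random.default_rng(0)
stim = rng.standard_normal((1000, 2))
eeg = rng.standard_normal((1000, 4))
weights = fit_trf(stim, eeg, min_lag=0, max_lag=40)   # 0-400 ms lags at 100 Hz
print(weights.shape)                                   # (82, 4)
```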


Subject(s)
Speech Perception, Speech, Male, Humans, Language, Phonetics, Learning, Brain, Speech Perception/physiology
2.
J Neurosci ; 44(31)2024 Jul 31.
Article in English | MEDLINE | ID: mdl-38839303

ABSTRACT

Complex auditory scenes pose a challenge to attentive listening, rendering listeners slower and more uncertain in their perceptual decisions. How can we explain such behaviors from the dynamics of cortical networks that pertain to the control of listening behavior? We here follow up on the hypothesis that human adaptive perception in challenging listening situations is supported by modular reconfiguration of auditory-control networks in a sample of N = 40 participants (13 males) who underwent resting-state and task functional magnetic resonance imaging (fMRI). Individual titration of a spatial selective auditory attention task maintained an average accuracy of ∼70% but yielded considerable interindividual differences in listeners' response speed and reported confidence in their own perceptual decisions. Whole-brain network modularity increased from rest to task by reconfiguring auditory, cinguloopercular, and dorsal attention networks. Specifically, interconnectivity between the auditory network and cinguloopercular network decreased during the task relative to the resting state. Additionally, interconnectivity between the dorsal attention network and cinguloopercular network increased. These interconnectivity dynamics were predictive of individual differences in response confidence, the degree of which was more pronounced after incorrect judgments. Our findings uncover the behavioral relevance of functional cross talk between auditory and attentional-control networks during metacognitive assessment of one's own perception in challenging listening situations and suggest two functionally dissociable cortical networked systems that shape the considerable metacognitive differences between individuals in adaptive listening behavior.
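
A minimal sketch of a graph-modularity computation of the kind referred to above, assuming a correlation-based connectivity matrix, an arbitrary 15% density threshold, and networkx's greedy modularity partitioning; none of these choices are taken from the study.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(1)

# Toy "functional connectivity": correlations between 20 regional time series
ts = rng.standard_normal((200, 20))          # 200 time points x 20 regions
fc = np.corrcoef(ts.T)                       # 20 x 20 correlation matrix
np.fill_diagonal(fc, 0)

# Keep only the strongest positive connections (arbitrary 15% density threshold)
thr = np.quantile(fc[fc > 0], 0.85)
adj = (fc > thr).astype(int)
G = nx.from_numpy_array(adj)

# Partition into modules and compute Newman modularity Q
modules = community.greedy_modularity_communities(G)
Q = community.modularity(G, modules)
print(f"{len(modules)} modules, Q = {Q:.3f}")
```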


Subject(s)
Attention, Auditory Perception, Magnetic Resonance Imaging, Nerve Net, Humans, Male, Female, Adult, Auditory Perception/physiology, Nerve Net/physiology, Nerve Net/diagnostic imaging, Attention/physiology, Young Adult, Metacognition/physiology, Brain/physiology, Brain/diagnostic imaging, Acoustic Stimulation/methods, Brain Mapping
3.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could serve as an indicator of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions at different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the four noisy conditions, but not under the quiet condition. Brain activation in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in the sensory-motor mapping of sound, especially in noisy conditions, may be a more sensitive measure for age prediction than external behavioral measures.
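
A toy sketch of region-based brain-age prediction in the spirit described above; the regressor (ridge), cross-validation scheme, region count, and synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, KFold
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Toy data: 93 participants x 40 regional activation values, ages 20-70
n_subj, n_regions = 93, 40
activation = rng.standard_normal((n_subj, n_regions))
age = rng.uniform(20, 70, n_subj)
# Inject a weak age signal into a few regions so the toy example is not pure noise
activation[:, :5] += 0.03 * (age[:, None] - age.mean())

# Cross-validated age prediction from regional activations
pred = cross_val_predict(Ridge(alpha=1.0), activation, age,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
r, _ = pearsonr(age, pred)
mae = np.mean(np.abs(age - pred))
print(f"prediction r = {r:.2f}, MAE = {mae:.1f} years")
```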


Subject(s)
Aging, Brain, Comprehension, Noise, Near-Infrared Spectroscopy, Speech Perception, Humans, Adult, Speech Perception/physiology, Male, Female, Near-Infrared Spectroscopy/methods, Middle Aged, Young Adult, Aged, Comprehension/physiology, Brain/physiology, Brain/diagnostic imaging, Aging/physiology, Brain Mapping/methods, Acoustic Stimulation/methods
4.
J Neurosci ; 43(26): 4856-4866, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37127361

ABSTRACT

Listening in noisy environments requires effort (the active engagement of attention and other cognitive abilities) as well as increased arousal. The ability to separately quantify the contribution of these components is key to understanding the dynamics of effort and how it may change across listening situations and in certain populations. We concurrently measured two types of ocular data in young participants (both sexes): pupil dilation (PD; thought to index arousal aspects of effort) and microsaccades (MS; hypothesized to reflect automatic visual exploratory sampling), while they performed a speech-in-noise task under high- (HL) and low- (LL) listening load conditions. Sentences were manipulated so that the behaviorally relevant information (keywords) appeared at the end (Experiment 1) or beginning (Experiment 2) of the sentence, resulting in different temporal demands on focused attention. In line with previous reports, PD effects were associated with increased dilation under load. We observed a sustained difference between HL and LL conditions, consistent with increased phasic and tonic arousal. Importantly, we show that MS rate was also modulated by listening load. This was manifested as a reduced MS rate in HL relative to LL. Critically, in contrast to the sustained difference seen for PD, MS effects were localized in time, specifically during periods when demands on auditory attention were greatest. These results demonstrate that auditory selective attention interfaces with the mechanisms controlling MS generation, establishing MS as an informative measure, complementary to PD, with which to quantify the temporal dynamics of auditory attentional processing under effortful listening conditions. SIGNIFICANCE STATEMENT: Listening effort, reflecting the "cognitive bandwidth" deployed to effectively process sound in adverse environments, contributes critically to listening success. Understanding listening effort and the processes involved in its allocation is a major challenge in auditory neuroscience. Here, we demonstrate that microsaccade rate can be used to index a specific subcomponent of listening effort, the allocation of instantaneous auditory attention, that is distinct from the modulation of arousal indexed by pupil dilation (currently the dominant measure of listening effort). These results reveal the push-pull process through which auditory attention interfaces with the (visual) attention network that controls microsaccades, establishing microsaccades as a powerful tool for measuring auditory attention and its deficits.
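
Microsaccades are commonly detected with a velocity-threshold algorithm (e.g., Engbert & Kliegl); the sketch below is a simplified single-eye version with placeholder sampling rate, threshold multiplier, and minimum duration, and is not the authors' implementation.

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_dur=3):
    """Simplified velocity-threshold microsaccade detection for one eye.

    x, y    : gaze position traces (degrees)
    fs      : sampling rate (Hz)
    lam     : threshold multiplier on the median-based velocity spread
    min_dur : minimum duration in samples
    """
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs        # velocity (deg/s)
    # Median-based estimate of velocity noise, one threshold per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1   # elliptic threshold
    # Collect runs of supra-threshold samples lasting at least min_dur
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_dur:
                events.append((start, i))
            start = None
    return events

# Toy trace: 2 s of fixational drift at 500 Hz with one injected brief ~0.4 deg excursion
fs = 500
rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(2 * fs)) * 0.001
y = np.cumsum(rng.standard_normal(2 * fs)) * 0.001
x[600:605] += np.linspace(0, 0.4, 5)          # brief gaze excursion to trigger a detection
print(detect_microsaccades(x, y, fs))
```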


Subject(s)
Pupil, Speech Perception, Male, Female, Humans, Auditory Perception, Noise, Arousal
5.
J Neurosci ; 43(32): 5856-5869, 2023 08 09.
Article in English | MEDLINE | ID: mdl-37491313

ABSTRACT

Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assess listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants from both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting pupillometric measures may not be as effective for the assessment of listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in the brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful. SIGNIFICANCE STATEMENT: Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most widely used measure but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful. We demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening for different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements is modulated when listening is effortful.
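
A minimal sketch of the two eye-movement measures named above, spatial gaze dispersion and fixation duration; the dispersion definition (RMS distance from the centroid) and the dispersion-based (I-DT-style) fixation criterion are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def gaze_dispersion(x, y):
    """RMS distance of gaze samples from their centroid (one possible dispersion measure)."""
    dx, dy = x - x.mean(), y - y.mean()
    return np.sqrt(np.mean(dx**2 + dy**2))

def fixation_durations(x, y, fs, max_disp=1.5, min_dur=0.1):
    """Greedy dispersion-based fixation segmentation (I-DT style), returning durations in seconds."""
    durations, start, n = [], 0, len(x)
    for i in range(1, n + 1):
        disp = (x[start:i].max() - x[start:i].min()) + (y[start:i].max() - y[start:i].min())
        if i == n or disp > max_disp:
            if (i - start) / fs >= min_dur:
                durations.append((i - start) / fs)
            start = i
    return durations

fs = 60
rng = np.random.default_rng(4)
# Toy scanpath: three 1 s fixations at different screen locations plus small jitter
centers = np.repeat([[0, 0], [5, 2], [-3, 4]], fs, axis=0).astype(float)
gaze = centers + rng.standard_normal(centers.shape) * 0.1
x, y = gaze[:, 0], gaze[:, 1]
print("dispersion:", round(gaze_dispersion(x, y), 2))
print("fixation durations (s):", [round(d, 2) for d in fixation_durations(x, y, fs)])
```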


Subject(s)
Speech Perception, Speech, Male, Female, Humans, Aged, Eye Movements, Speech Perception/physiology, Auditory Perception, Noise, Speech Intelligibility
6.
Eur J Neurosci ; 59(8): 2059-2074, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38303522

ABSTRACT

Linear models are becoming increasingly popular to investigate brain activity in response to continuous and naturalistic stimuli. In the context of auditory perception, these predictive models can be 'encoding', when stimulus features are used to predict brain activity, or 'decoding', when neural features are used to reconstruct the audio stimuli. These linear models are a central component of some brain-computer interfaces that can be integrated into hearing assistive devices (e.g., hearing aids). Such advanced neurotechnologies have been widely investigated when listening to speech stimuli but rarely when listening to music. Recent attempts at neural tracking of music show that reconstruction performance is reduced compared with speech decoding. The present study investigates the performance of stimulus reconstruction and electroencephalogram prediction (decoding and encoding models) based on the cortical entrainment of temporal variations of the audio stimuli for both music and speech listening. Three hypotheses that may explain differences between speech and music stimulus reconstruction were tested to assess the importance of speech-specific acoustic and linguistic factors. While the results obtained with encoding models suggest different underlying cortical processing between speech and music listening, no differences were found in terms of reconstruction of the stimuli or the cortical data. The results suggest that envelope-based linear modelling can be used to study both speech and music listening, despite the differences in the underlying cortical mechanisms.
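
For the 'decoding' direction mentioned above, reconstructing the stimulus envelope from neural data is typically done with a regularized linear backward model over time-lagged channels; the sketch below is a generic version with placeholder lags, regularization, and synthetic data, not the study's pipeline.

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Concatenate time-lagged copies of all EEG channels (samples x channels*lags)."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def backward_model(eeg, envelope, n_lags=32, alpha=10.0):
    """Ridge-regression decoder mapping lagged EEG to the speech/music envelope."""
    X = lag_matrix(eeg, n_lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

rng = np.random.default_rng(5)
fs, dur = 64, 120                                  # 64 Hz, 2 minutes
envelope = np.abs(rng.standard_normal(fs * dur))   # stand-in for an audio envelope
eeg = np.outer(envelope, rng.standard_normal(16)) * 0.1 + rng.standard_normal((fs * dur, 16))

# Train on the first half, correlate the reconstruction with the true envelope on the second half
half = fs * dur // 2
w = backward_model(eeg[:half], envelope[:half])
recon = lag_matrix(eeg[half:], 32) @ w
r = np.corrcoef(recon, envelope[half:])[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```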


Subject(s)
Music, Speech Perception, Auditory Perception/physiology, Speech, Speech Perception/physiology, Electroencephalography, Acoustic Stimulation
7.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activations of the left PT and FOC were noted in all conditions, with the posterior FOC area overlapping in all conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
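
As a simplified illustration of activation likelihood estimation (not the meta-analytic software actually used), the sketch below builds modeled-activation maps from Gaussian kernels around reported foci and combines them as a probabilistic union; the grid size, kernel width, and foci are invented.

```python
import numpy as np

def modeled_activation(shape, foci, sigma_vox):
    """One experiment's modeled-activation (MA) map: max of Gaussians centered on its foci."""
    grid = np.indices(shape).reshape(3, -1).T
    ma = np.zeros(np.prod(shape))
    for focus in foci:
        d2 = ((grid - focus) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma_vox ** 2)))
    return ma.reshape(shape)

def ale_map(experiments, shape, sigma_vox=2.0):
    """Union of MA maps across experiments: ALE = 1 - prod(1 - MA_i)."""
    not_active = np.ones(shape)
    for foci in experiments:
        not_active *= 1 - modeled_activation(shape, foci, sigma_vox)
    return 1 - not_active

# Two toy "experiments" reporting peaks on a small 20x20x20 voxel grid
experiments = [np.array([[5, 5, 5], [12, 10, 8]]),
               np.array([[6, 5, 5]])]
ale = ale_map(experiments, (20, 20, 20))
print("max ALE value:", round(float(ale.max()), 3))
```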


Subject(s)
Speech Perception, Speech, Humans, Speech Perception/physiology, Speech/physiology, Brain Mapping, Likelihood Functions, Motor Cortex/physiology, Cerebral Cortex/physiology, Cerebral Cortex/diagnostic imaging
8.
Dev Neurosci ; : 1-14, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38723615

ABSTRACT

INTRODUCTION: Children with specific language impairment (SLI) have difficulties in different speech and language domains. Electrophysiological studies have documented that auditory processing in children with SLI is atypical and probably caused by delayed and abnormal auditory maturation. During the resting state, or different auditory tasks, children with SLI show low or high beta spectral power, which could be a clinical correlate for investigating brain rhythms. METHODS: The aim of this study was to examine the electrophysiological cortical activity of the beta rhythm while listening to words and nonwords in children with SLI in comparison to typically developing (TD) children. The participants were 50 children with SLI, aged 4 and 5 years, and 50 age-matched TD children. The children were divided into two subgroups according to age: (1) children 4 years of age; (2) children 5 years of age. RESULTS: The older group differed from the younger group in beta auditory processing, with increased values of beta spectral power in the right frontal, temporal, and parietal regions. In addition, children with SLI have higher beta spectral power than TD children in the bilateral temporal regions. CONCLUSION: Complex beta auditory activation in TD and SLI children indicates the presence of early changes in functional brain connectivity.
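
As a generic illustration of beta-band (13-30 Hz) spectral power estimation of the kind referenced above, not the study's pipeline, the sketch below uses Welch's method on a toy EEG segment; the sampling rate and band edges are assumptions.

```python
import numpy as np
from scipy.signal import welch

def beta_power(signal, fs, band=(13.0, 30.0)):
    """Mean power spectral density in the beta band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)   # 2 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 256
rng = np.random.default_rng(6)
t = np.arange(0, 10, 1 / fs)
# Toy EEG: white noise plus a 20 Hz (beta) oscillation
eeg = rng.standard_normal(t.size) + 0.8 * np.sin(2 * np.pi * 20 * t)
print(f"beta power: {beta_power(eeg, fs):.3f} (a.u.)")
```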

9.
Psychol Sci ; 35(5): 455-470, 2024 May.
Article in English | MEDLINE | ID: mdl-38630602

ABSTRACT

It is important for people to feel listened to in professional and personal communications, and yet they can feel unheard even when others have listened well. We propose that this feeling may arise because speakers conflate agreement with listening quality. In 11 studies (N = 3,396 adults), we held constant or manipulated a listener's objective listening behaviors, manipulating only after the conversation whether the listener agreed with the speaker. Across various topics, mediums (e.g., video, chat), and cues of objective listening quality, speakers consistently perceived disagreeing listeners as worse listeners. This effect persisted after controlling for other positive impressions of the listener (e.g., likability). This effect seemed to emerge because speakers believe their views are correct, leading them to infer that a disagreeing listener must not have been listening very well. Indeed, it may be prohibitively difficult for someone to simultaneously convey that they disagree and that they were listening.


Subject(s)
Dissent and Disputes, Humans, Adult, Female, Male, Young Adult, Communication, Speech Perception, Adolescent, Middle Aged
10.
BMC Cancer ; 24(1): 17, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166682

ABSTRACT

BACKGROUND: Although the side effects of chemotherapy are frequently described in research studies, there is little evidence on how common they are in everyday clinical care. This study's goal was to assess the most prevalent short-term side effects experienced by patients with localized breast cancer undergoing chemotherapy based on anthracyclines and taxane-containing treatments, at the medical oncology department of the Mohammed VI University Hospital of Marrakech, Morocco. METHODS: This was a descriptive study. We conducted a listening session at the outpatient department of the hospital with the help of a structured questionnaire. The session engaged 122 women who had undergone cycles of chemotherapy. A chi-square test was used to compare the incidence and relative risk of short-term side effects with both anthracycline- and taxane-containing regimens. RESULTS: The average age of participants was 49.1 years. In both regimens, the findings highlighted the frequency and relative risk of the following adverse effects: systemic symptoms (fever, asthenia and sleep disorder), gastrointestinal toxicity (vomiting, nausea, diarrhoea, constipation, mucositis and loss of appetite), dermatological toxicity (skin reactions on hands/feet, nail toxicity, allergies, alopecia and peripheral edema), neurological toxicity (neuropathy), arthromyalgia and ocular toxicity. CONCLUSIONS: It is crucial for healthcare professionals to be conscious of the significance of these adverse effects, and they must also know how to manage them. The findings likewise highlight the importance of the listening approach in the daily follow-up and monitoring of patients.
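
To illustrate the chi-square comparison and relative-risk calculation mentioned in the methods, a minimal sketch with invented counts (not the study's data) could look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of one side effect (e.g., nausea) under two regimens
#                with effect, without effect
anthracycline = [80, 42]
taxane        = [55, 67]

table = np.array([anthracycline, taxane])
chi2, p, dof, expected = chi2_contingency(table)

# Relative risk of the side effect: anthracycline vs taxane
risk_a = anthracycline[0] / sum(anthracycline)
risk_t = taxane[0] / sum(taxane)
rr = risk_a / risk_t

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, RR = {rr:.2f}")
```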


Subject(s)
Breast Neoplasms, Humans, Female, Middle Aged, Breast Neoplasms/drug therapy, Breast Neoplasms/etiology, Anthracyclines/adverse effects, Morocco/epidemiology, Taxoids/adverse effects, Adjuvant Chemotherapy, Medical Oncology, Hospitals, Antineoplastic Combined Chemotherapy Protocols/adverse effects
11.
Am J Geriatr Psychiatry ; 1: 7-16, 2024 03.
Article in English | MEDLINE | ID: mdl-38993691

ABSTRACT

Introduction: This study investigated a remotely delivered, therapist-facilitated, personalized music listening intervention for community-dwelling older adults experiencing loneliness during the Covid-19 pandemic. We assessed its feasibility and individuals' experiences of social connection and emotional well-being during the intervention. Methods: Ten cognitively unimpaired older adults who endorsed loneliness completed eight weekly sessions with a board-certified music therapist via Zoom. Participants were guided in developing two online personalized music playlists and were asked to listen to playlists for at least one hour daily. Feasibility metrics were attendance, accessibility, and compliance rates. Post-study interview responses were analyzed using a rapid qualitative methodology. Exploratory pre- and post-study measures of loneliness and other aspects of psychological well-being were obtained using validated questionnaires. Results: Ten participants (mean age 75.38 [65 to 85] years, 80% women) were enrolled from March to August 2021. Attendance and compliance rates were 100% and the accessibility rate was 90%. Most participants associated music with positive memories before the program and many reported that the intervention prompted them to reconnect with music or listen to music with greater intention. They cited increased connection from interacting with the music therapist and the music itself, as well as specific positive emotional impacts from integrating music into their daily lives. Median pre- to post-questionnaire measures of psychological function all changed in an improved direction. Discussion: Remotely delivered music therapy may be a promising intervention to promote regular music listening and socioemotional well-being in lonely older adults.


Subject(s)
COVID-19, Loneliness, Music Therapy, Humans, Music Therapy/methods, Aged, Female, Male, COVID-19/psychology, Loneliness/psychology, Pilot Projects, Aged 80 and over, Feasibility Studies, SARS-CoV-2
12.
Eur J Neurol ; 31(5): e16240, 2024 May.
Article in English | MEDLINE | ID: mdl-38332663

ABSTRACT

BACKGROUND AND PURPOSE: Hearing impairment is common following aneurysmal subarachnoid haemorrhage (aSAH). Previous studies have demonstrated that auditory processing disorder (APD) is the primary underlying pathology. Assistive listening devices (ALDs) can be used to manage APD but have not been explored in aSAH. The aim of this study was to assess the benefit of an ALD for patients reporting hearing difficulty after aSAH. METHODS: This was a prospective pilot single-arm intervention study of an ALD for APD following aSAH. Patients who reported subjective hearing difficulty following aSAH were identified from the Wessex Neurological Centre aSAH database. Speech-in-noise was evaluated using the Bamford-Kowal-Bench (BKB) test under 60 and 65 dB noise conditions. BKB performance was compared with and without an ALD. Cognition was assessed using the Addenbrooke's Cognitive Examination-III. RESULTS: Fourteen aSAH patients with self-reported hearing loss were included in the analysis. Under both noise conditions the ALD significantly improved BKB performance (60 dB, Z = -3.30, p < 0.001; 65 dB, Z = -3.33, p < 0.001). There was no relationship between cognition and response to the ALD. CONCLUSIONS: This study demonstrates the marked benefit of ALDs to manage APD following aSAH, regardless of cognitive status. This finding has implications for the management of this common yet disabling deficit which impacts quality of life and employment. A further trial of ALDs in this patient group is needed to test whether these large, short-term benefits can be practically translated to the community for long-term benefit when used at home.
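
The reported within-subject comparison of BKB scores with and without the ALD (Z statistics) is consistent with a paired non-parametric test; a minimal sketch assuming a Wilcoxon signed-rank test on made-up scores is:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)

# Hypothetical BKB percent-correct scores for 14 patients, without and with the ALD
without_ald = np.clip(rng.normal(55, 12, 14), 0, 100)
with_ald = np.clip(without_ald + rng.normal(18, 6, 14), 0, 100)

stat, p = wilcoxon(without_ald, with_ald)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```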


Subject(s)
Hearing Loss, Subarachnoid Hemorrhage, Humans, Subarachnoid Hemorrhage/complications, Subarachnoid Hemorrhage/therapy, Quality of Life, Prospective Studies, Hearing, Hearing Loss/etiology
13.
Dev Sci ; 27(5): e13508, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38616615

ABSTRACT

To learn the meaning of a new word, or to recognize the meaning of a known one, both children and adults benefit from surrounding words, or the sentential context. Most of the evidence from children is based on their accuracy and efficiency when listening to speech in their familiar native accent: they successfully use the words they know to identify other words' referents. Here, we assess how accurately and efficiently 4-year-old children use sentential context to identify referents of known and novel nouns in unfamiliar-accented speech, as compared to familiar-accented speech. In a looking-while-listening task, children showed considerable success in processing unfamiliar-accented speech. Children robustly mapped known nouns produced in an unfamiliar accent to their target referents rather than novel competitors, and they used informative surrounding verbs (e.g., "You can eat the dax") to identify the referents of both known and novel nouns-although there was a processing cost for unfamiliar-accented speech in some cases. This demonstrates that 4-year-olds successfully and rapidly process unfamiliar-accented speech by recruiting the same strategies available to them in familiar-accented speech, revealing impressive flexibility in word recognition and word learning across diverse linguistic environments. RESEARCH HIGHLIGHTS: We examined 4-year-old children's accuracy and processing efficiency in comprehending known and novel nouns embedded in sentences produced in familiar-accented or unfamiliar-accented speech. Children showed limited processing costs for unfamiliar-accented speech and mapped known words to their referents even when these were produced in unfamiliar-accented speech. Children used known verbs to predict the referents of upcoming nouns in both familiar- and unfamiliar-accented speech, but processing costs were evident for unfamiliar-accented speech. Thus, the strategies that support children's word comprehension and word learning in familiar-accented speech are available to them in unfamiliar accents as well.


Subject(s)
Speech Perception, Humans, Preschool Child, Speech Perception/physiology, Male, Female, Speech/physiology, Language Development, Recognition (Psychology)/physiology
14.
Conscious Cogn ; 124: 103747, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39213729

ABSTRACT

Reporting discomfort when noise affects listening suggests that listeners may be aware, at least to some extent, of adverse environmental conditions and their impact on the listening experience. This involves monitoring internal states (effort and confidence). Here we quantified continuous self-report indices that track one's own internal states and investigated age-related differences in this ability. We instructed two groups of young and older adults to continuously report their confidence and effort while listening to stories in fluctuating noise. Using cross-correlation analyses between the time series of fluctuating noise and those of perceived effort or confidence, we showed that (1) participants modified their assessment of effort and confidence based on variations in the noise, with a 4 s lag; and (2) there were no differences between the groups. These findings suggest that this method could be extended to other areas and that the definition of metacognition could be expanded, and they highlight the value of this ability for older adults.
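
A generic sketch of estimating the lag between a fluctuating-noise time series and a continuous self-report trace via cross-correlation, as described above; the sampling rate, segment length, and the injected 4 s lag in the toy data are assumptions, not the study's parameters.

```python
import numpy as np

def best_lag(noise, rating, fs, max_lag_s=10):
    """Lag (s) at which the rating trace correlates most strongly with the noise trace."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(1, max_lag + 1)
    corrs = [np.corrcoef(noise[:-lag], rating[lag:])[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))] / fs, max(corrs)

fs = 2                                                     # 2 Hz noise / rating traces
rng = np.random.default_rng(8)
noise_level = np.repeat(rng.uniform(0, 1, 120), 5 * fs)    # noise level changing every 5 s
true_lag = int(4 * fs)                                     # rating follows the noise with a 4 s delay
rating = np.roll(noise_level, true_lag) + rng.standard_normal(noise_level.size) * 0.2

lag_s, r = best_lag(noise_level, rating, fs)
print(f"estimated lag = {lag_s:.1f} s (r = {r:.2f})")
```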


Subject(s)
Noise, Speech Perception, Humans, Male, Aged, Female, Adult, Young Adult, Speech Perception/physiology, Middle Aged, Metacognition/physiology, Aging/physiology, Auditory Perception/physiology, Self Concept, Aged 80 and over, Age Factors
15.
J Exp Child Psychol ; 249: 106088, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39316884

ABSTRACT

Multi-talker noise impedes children's speech processing and may affect children listening to their second language more than children listening to their first language. Evidence suggests that multi-talker noise also may impede children's memory retention and learning. A total of 80 culturally and linguistically diverse children aged 7 to 9 years listened to narratives in two listening conditions: quiet and multi-talker noise (signal-to-noise ratio +6 dB). Repeated recall (immediate and delayed recall) was measured with a 1-week retention interval. Retention was calculated as the difference in recall accuracy per question between immediate and delayed recall. Working memory capacity was assessed, and the children's degree of school language (Swedish) exposure was quantified. Immediate narrative recall was lower for the narrative encoded in noise than in quiet. During delayed recall, narrative recall was similar for both listening conditions. Children with higher degrees of school language exposure and higher working memory capacity had better narrative recall overall, but these factors were not associated with an effect of listening condition or retention. Multi-talker babble noise does not impair culturally and linguistically diverse primary school children's retention of spoken narratives as measured by multiple-choice questions. Although a quiet listening condition allows for superior encoding compared with a noisy listening condition, details are likely lost during memory consolidation and re-consolidation.

16.
Mem Cognit ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38758512

ABSTRACT

When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248 (1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt (1968) reported; that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear. Results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to only impair memory for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.

17.
Br J Clin Psychol ; 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38946045

ABSTRACT

OBJECTIVES: Characterization of psychotherapy as the "talking cure" de-emphasizes the importance of an active listener for the curative effect of talking. We test whether the working alliance and its benefits emerge from expression of voice per se, or whether active listening is needed. We examine the role of listening in a social identity model of working alliance. METHODS: University student participants in a laboratory experiment spoke about stress management to another person (a confederate student) who either did or did not engage in active listening. Participants reported their perceptions of alliance, key social-psychological variables, and well-being. RESULTS: Active listening led to significantly higher ratings of alliance, procedural justice, social identification, and identity leadership, compared to no active listening. Active listening also led to greater positive affect and satisfaction. Ultimately, an explanatory path model was supported in which active listening predicted working alliance through social identification, identity leadership, and procedural justice. CONCLUSIONS: Listening quality enhances alliance and well-being in a manner consistent with a social identity model of working alliance, and is a strategy for facilitating alliance in therapy.

18.
Proc Natl Acad Sci U S A ; 118(7)2021 02 16.
Article in English | MEDLINE | ID: mdl-33568530

ABSTRACT

Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-tACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-tACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that tACS reduced intrahemispheric connectivity within the auditory cortices and antiphase (interhemispheric phase lag 180°) tACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by tACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
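
Purely as an illustration of the phase-lag manipulation described above (not the study's stimulation protocol), gamma-band waveforms for the two hemispheres with a chosen interhemispheric phase lag can be generated as follows; frequency, amplitude, duration, and sampling rate are placeholders.

```python
import numpy as np

def tacs_waveforms(freq_hz, phase_lag_deg, amplitude_ma, duration_s, fs):
    """Two sinusoidal stimulation waveforms (left/right) with a fixed interhemispheric phase lag."""
    t = np.arange(0, duration_s, 1 / fs)
    left = amplitude_ma * np.sin(2 * np.pi * freq_hz * t)
    right = amplitude_ma * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_lag_deg))
    return t, left, right

# 40 Hz gamma-band waveforms, in-phase (0 deg) vs. antiphase (180 deg)
t, l_in, r_in = tacs_waveforms(40, 0, 1.0, 1.0, 1000)
t, l_anti, r_anti = tacs_waveforms(40, 180, 1.0, 1.0, 1000)
print("in-phase correlation:", round(np.corrcoef(l_in, r_in)[0, 1], 2))       # ~1.0
print("antiphase correlation:", round(np.corrcoef(l_anti, r_anti)[0, 1], 2))  # ~-1.0
```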


Subject(s)
Auditory Perception, Brain/physiology, Functional Laterality, Connectome, Female, Gamma Rhythm, Humans, Magnetic Resonance Imaging, Male, Transcranial Direct Current Stimulation, Young Adult
19.
J Med Internet Res ; 26: e48599, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38289645

ABSTRACT

BACKGROUND: The increased availability of web-based medical information has encouraged patients with chronic pain to seek health care information from multiple sources, such as consultation with health care providers combined with web-based information. The type and quality of information that is available on the web are very heterogeneous in terms of content, reliability, and trustworthiness. To date, no studies have evaluated what information is available about neuromodulation on the web for patients with chronic pain. OBJECTIVE: This study aims to explore the type, quality, and content of web-based information regarding spinal cord stimulation (SCS) for chronic pain that is freely available and targeted at health care consumers. METHODS: The social listening tool Awario was used to search Facebook (Meta Platforms, Inc), Twitter (Twitter, Inc), YouTube (Google LLC), Instagram (Meta Platforms, Inc), blogs, and the web for suitable hits with "pain" and "neuromodulation" as keywords. Quality appraisal of the extracted information was performed using the DISCERN instrument. A thematic analysis through inductive coding was conducted. RESULTS: The initial search identified 2174 entries, of which 630 (28.98%) entries were eventually retained; these could be categorized as web pages, including news and blogs (114/630, 18.1%); Reddit (Reddit, Inc) posts (32/630, 5.1%); Vimeo (Vimeo, Inc) hits (38/630, 6%); or YouTube (Google LLC) hits (446/630, 70.8%). Most posts originated in the United States (519/630, 82.4%). Regarding the content of information, 66.2% (383/579) of the entries discussed (fully or partially) how SCS works. In total, 55.6% (322/579) of the entries did not elaborate on the fact that there may be more than one potential treatment choice, and 47.7% (276/579) did not discuss the influence of SCS on overall quality of life. The inductive coding revealed 4 main themes. The first theme, pain and the burden of pain (1274/8886, 14.34% of coding references), covered pain, pain management, the individual impact of pain, and patient experiences. The second theme, neuromodulation as a treatment approach (3258/8886, 36.66% of coding references), incorporated the background on neuromodulation, patient-centered care, SCS therapy, and risks. Third, several device-related aspects (1722/8886, 19.38% of coding references) were presented. As a final theme, patient benefits and testimonials of treatment with SCS (2632/8886, 29.62% of coding references) were revealed, with subthemes regarding patient benefits, eligibility, and testimonials and expectations. CONCLUSIONS: Health care consumers have access to web-based information about SCS, where details about the surgical procedures, the type of material, working mechanisms, risks, patient expectations, testimonials, and the potential benefits of this therapy are discussed. The reliability, trustworthiness, and correctness of web-based sources should be carefully considered before automatically relying on the content.


Subject(s)
Chronic Pain, Spinal Cord Stimulation, Humans, Chronic Pain/therapy, Internet, Quality of Life
20.
J Res Adolesc ; 34(3): 745-758, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38566546

ABSTRACT

Relational theories of human development explain how stereotypes and their underlying ideologies thwart social connections that are fundamental for individuals to thrive, especially in early adolescence. Intervention research to address this crisis of connection is still emergent, and active listening is one promising strategy to this end; however, its efficacy has not been examined, in part because no validated measures of active listening for this population exist. This validation study is the first to examine whether the behavioral dimensions of one form of active listening can be captured using a coding scheme to assess adolescents' engagement in a live interviewing task (N = 293). Importantly, the measure was developed within the context of a theory-driven intervention to train adolescents in transformative curiosity and listening to enhance connection. Findings indicate that two dimensions underlie the measure as hypothesized, open-ended questions and follow-up questions, with acceptable internal consistency. The measure is sensitive to change in adolescents' questioning skills before and after the intervention. Further, asking follow-up questions was positively related to empathy and also predicted a respondent's perception of their interviewer as a good listener. The effect for asking open-ended questions was moderated by dyad-level tendencies to elicit disclosure from others. The current measure not only examines question asking as a more nuanced behavioral dimension of active listening than previous measures but is also the first to do so among a sample of early adolescents. The measure will be useful in assessing active listening interventions' efficacy to address the crisis of connection.
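
Internal consistency of a multi-item coding scheme is typically quantified with Cronbach's alpha; the sketch below uses invented item scores (not the study's data) and a generic alpha implementation.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(9)
# Hypothetical coded counts for 3 correlated "question-asking" items across 293 adolescents
latent = rng.normal(0, 1, 293)
scores = np.column_stack([latent + rng.normal(0, 0.8, 293) for _ in range(3)])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```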


Subject(s)
Adolescent Behavior, Humans, Adolescent, Female, Male, Adolescent Behavior/psychology, Empathy, Reproducibility of Results, Surveys and Questionnaires, Interpersonal Relations