Results 1 - 5 of 5
1.
bioRxiv; 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37662393

ABSTRACT

Seeing the speaker's face greatly improves our speech comprehension in noisy environments. This is due to the brain's ability to combine the auditory and the visual information around us, a process known as multisensory integration. Selective attention also strongly influences what we comprehend in scenarios with multiple speakers, an effect known as the cocktail-party phenomenon. However, the interaction between attention and multisensory integration is not fully understood, especially when it comes to natural, continuous speech. In a recent electroencephalography (EEG) study, we explored this issue and showed that multisensory integration is enhanced when an audiovisual speaker is attended compared to when that speaker is unattended. Here, we extend that work to investigate how this interaction varies with a person's gaze behavior, which affects the quality of the visual information they have access to. To do so, we recorded EEG from 31 healthy adults as they performed selective attention tasks in several paradigms involving two concurrently presented audiovisual speakers. We then modeled how the recorded EEG related to the audio speech (envelope) of the presented speakers. Crucially, we compared two classes of model: one that assumed underlying multisensory integration (AV) and another that assumed two independent unisensory audio and visual processes (A+V). This comparison revealed evidence of strong attentional effects on multisensory integration when participants were looking directly at the face of an audiovisual speaker. The effect was not apparent when the speaker's face was in the participants' peripheral vision. Overall, our findings suggest a strong influence of attention on multisensory integration when high-fidelity visual (articulatory) speech information is available. More generally, this suggests that the interplay between attention and multisensory integration during natural audiovisual speech is dynamic and adapts to the specific task and environment.
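
For readers unfamiliar with this style of analysis, the sketch below shows one way an AV versus A+V comparison can be set up with lagged ridge regression (temporal response functions). The feature choices (a speech envelope plus a hypothetical lip-movement signal), the lag window, the regularisation value, the train/test split, and the explicit audio x visual interaction predictor are all illustrative assumptions, not the pipeline used in the study.

```python
# Minimal sketch (not the authors' code): compare an additive (A+V) and an
# interactive (AV) forward model of EEG responses to audiovisual speech using
# lagged ridge regression. All feature and parameter choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                              # assumed EEG sampling rate (Hz)
n_samples, n_channels = 60 * fs, 32
lags = np.arange(0, int(0.4 * fs))   # roughly 0-400 ms of stimulus lags

def lagged(stim, lags):
    """Build a [time x lags] design matrix from a 1-D stimulus feature."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:len(stim) - lag]
    return X

def ridge_fit_predict(X_train, y_train, X_test, lam=1e2):
    """Ridge regression: fit on the training data, predict the test EEG."""
    XtX = X_train.T @ X_train + lam * np.eye(X_train.shape[1])
    w = np.linalg.solve(XtX, X_train.T @ y_train)
    return X_test @ w

# Toy stand-ins for the audio envelope, a visual (e.g. lip-movement) feature,
# and the recorded EEG; in practice these would come from the actual stimuli.
audio = rng.standard_normal(n_samples)
visual = rng.standard_normal(n_samples)
eeg = rng.standard_normal((n_samples, n_channels))

half = n_samples // 2
Xa, Xv = lagged(audio, lags), lagged(visual, lags)

# A+V model: the prediction is the sum of two independently fitted
# unisensory models, i.e. no audiovisual interaction is allowed.
pred_A = ridge_fit_predict(Xa[:half], eeg[:half], Xa[half:])
pred_V = ridge_fit_predict(Xv[:half], eeg[:half], Xv[half:])
pred_AplusV = pred_A + pred_V

# AV model: audio and visual features fitted jointly, here with an explicit
# audio x visual interaction term as one simple way to allow non-additive
# (integrative) effects.
Xav = np.hstack([Xa, Xv, lagged(audio * visual, lags)])
pred_AV = ridge_fit_predict(Xav[:half], eeg[:half], Xav[half:])

def mean_corr(pred, true):
    """Average Pearson correlation between predicted and recorded channels."""
    return np.mean([np.corrcoef(pred[:, c], true[:, c])[0, 1]
                    for c in range(true.shape[1])])

print("A+V accuracy:", mean_corr(pred_AplusV, eeg[half:]))
print("AV accuracy: ", mean_corr(pred_AV, eeg[half:]))
```

In this framing, evidence for multisensory integration corresponds to the AV-style model predicting held-out EEG better than the additive A+V combination.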

2.
Neuroimage; 274: 120143, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37121375

ABSTRACT

In noisy environments, our ability to understand speech benefits greatly from seeing the speaker's face. This is attributed to the brain's ability to integrate audio and visual information, a process known as multisensory integration. In addition, selective attention plays an enormous role in what we understand, an effect known as the cocktail-party phenomenon. But how attention and multisensory integration interact remains incompletely understood, particularly in the case of natural, continuous speech. Here, we addressed this issue by analyzing EEG data recorded from participants who undertook a multisensory cocktail-party task using natural speech. To assess multisensory integration, we modeled the EEG responses to the speech in two ways. The first assumed that audiovisual speech processing is simply a linear combination of audio speech processing and visual speech processing (i.e., an A+V model), while the second allowed for the possibility of audiovisual interactions (i.e., an AV model). Applying these models to the data revealed that EEG responses to attended audiovisual speech were better explained by an AV model, providing evidence for multisensory integration. In contrast, unattended audiovisual speech responses were best captured by an A+V model, suggesting that multisensory integration is suppressed for unattended speech. Follow-up analyses revealed some limited evidence for early multisensory integration of unattended AV speech, with no integration occurring at later levels of processing. We take these findings as evidence that the integration of natural audio and visual speech occurs at multiple levels of processing in the brain, each of which can be differentially affected by attention.
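
Once cross-validated prediction accuracies are available for both model classes, the attended versus unattended contrast reduces to a paired comparison across participants. The sketch below uses simulated accuracy values and a simple sign-flip permutation test purely for illustration; the numbers and the choice of test are assumptions, not the statistics reported in the paper.

```python
# Hypothetical sketch of the comparison step: given one cross-validated EEG
# prediction accuracy per participant for the AV and A+V models, test whether
# the AV model wins (i.e. whether there is evidence of integration) in each
# attention condition. The values below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_participants = 20

# Simulated prediction correlations: attended speech shows an AV advantage,
# unattended speech does not (mirroring the pattern the abstract describes).
acc = {
    "attended":   {"AV":  rng.normal(0.12, 0.02, n_participants),
                   "A+V": rng.normal(0.10, 0.02, n_participants)},
    "unattended": {"AV":  rng.normal(0.08, 0.02, n_participants),
                   "A+V": rng.normal(0.08, 0.02, n_participants)},
}

def paired_sign_flip_test(diff, n_perm=10000, rng=rng):
    """One-sided sign-flip permutation test on paired differences (AV minus A+V)."""
    observed = diff.mean()
    flips = rng.choice([-1, 1], size=(n_perm, diff.size))
    null = (flips * diff).mean(axis=1)
    return (null >= observed).mean()

for condition, scores in acc.items():
    diff = scores["AV"] - scores["A+V"]
    p = paired_sign_flip_test(diff)
    print(f"{condition}: mean AV - (A+V) = {diff.mean():+.3f}, p = {p:.4f}")
```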


Subject(s)
Speech Perception, Humans, Speech Perception/physiology, Speech, Attention/physiology, Visual Perception/physiology, Brain/physiology, Acoustic Stimulation, Auditory Perception
3.
BMC Public Health; 23(1): 158, 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36694149

ABSTRACT

BACKGROUND AND AIMS: This systematic review sought to identify, explain and interpret the prominent or recurring themes relating to the barriers and facilitators of reporting and recording of self-harm in young people across different settings, such as healthcare, schools and criminal justice. METHODS: A search strategy was developed to ensure all relevant literature on the reporting and recording of self-harm in young people was obtained. Literature searches were conducted in six databases, and a grey literature search of policy documents and other relevant material was also carried out. Due to the range of available literature, both quantitative and qualitative methodologies were considered for inclusion. RESULTS: Following the completion of the literature searches and sifting, nineteen papers were eligible for inclusion. Facilitators to reporting self-harm across the different settings were found to be recognising self-harm behaviours, using passive screening, training and experience, positive communication, and safe, private information sharing. Barriers to reporting self-harm included confidentiality concerns, negative perceptions of young people, communication difficulties, stigma, staff lacking knowledge around self-harm, and a lack of time, money and resources. Facilitators to recording self-harm across the different settings included being open to discussing what is recorded, services working together and co-ordinated help. Barriers to recording self-harm mainly concerned stigma, what information was recorded, staff's ability to record it, and their length of professional experience. CONCLUSION: The review of the current evidence made it apparent that progress is still needed to improve the reporting and recording of self-harm in young people across the different settings. Future work should concentrate on better understanding the facilitators, whilst aiming to ameliorate the barriers.


Subject(s)
Self-Injurious Behavior, Humans, Adolescent, Self-Injurious Behavior/diagnosis, Social Stigma, Schools
4.
Front Hum Neurosci; 17: 1283206, 2023.
Article in English | MEDLINE | ID: mdl-38162285

ABSTRACT

Seeing the speaker's face greatly improves our speech comprehension in noisy environments. This is due to the brain's ability to combine the auditory and the visual information around us, a process known as multisensory integration. Selective attention also strongly influences what we comprehend in scenarios with multiple speakers, an effect known as the cocktail-party phenomenon. However, the interaction between attention and multisensory integration is not fully understood, especially when it comes to natural, continuous speech. In a recent electroencephalography (EEG) study, we explored this issue and showed that multisensory integration is enhanced when an audiovisual speaker is attended compared to when that speaker is unattended. Here, we extend that work to investigate how this interaction varies with a person's gaze behavior, which affects the quality of the visual information they have access to. To do so, we recorded EEG from 31 healthy adults as they performed selective attention tasks in several paradigms involving two concurrently presented audiovisual speakers. We then modeled how the recorded EEG related to the audio speech (envelope) of the presented speakers. Crucially, we compared two classes of model: one that assumed underlying multisensory integration (AV) and another that assumed two independent unisensory audio and visual processes (A+V). This comparison revealed evidence of strong attentional effects on multisensory integration when participants were looking directly at the face of an audiovisual speaker. The effect was not apparent when the speaker's face was in the participants' peripheral vision. Overall, our findings suggest a strong influence of attention on multisensory integration when high-fidelity visual (articulatory) speech information is available. More generally, this suggests that the interplay between attention and multisensory integration during natural audiovisual speech is dynamic and adapts to the specific task and environment.

5.
J Neurosci; 42(4): 682-691, 2022 Jan 26.
Article in English | MEDLINE | ID: mdl-34893546

ABSTRACT

Humans have the remarkable ability to selectively focus on a single talker in the midst of other competing talkers. The neural mechanisms that underlie this phenomenon remain incompletely understood. In particular, there has been longstanding debate over whether attention operates at an early or late stage in the speech processing hierarchy. One way to better understand this is to examine how attention might differentially affect neurophysiological indices of hierarchical acoustic and linguistic speech representations. In this study, we do so by using encoding models to identify neural correlates of speech processing at various levels of representation. Specifically, we recorded EEG from fourteen human subjects (nine female and five male) during a "cocktail party" attention experiment. Model comparisons based on these data revealed phonetic feature processing for attended, but not unattended, speech. Furthermore, we show that attention specifically enhances isolated indices of phonetic feature processing, but that such attention effects are not apparent for isolated measures of acoustic processing. These results provide new insights into the effects of attention on different prelexical representations of speech, insights that complement recent anatomic accounts of the hierarchical encoding of attended speech. Furthermore, our findings support the notion that, for attended speech, phonetic features are processed at a distinct stage, separate from the processing of the speech acoustics.

SIGNIFICANCE STATEMENT: Humans are very good at paying attention to one speaker in an environment with multiple speakers. However, the details of how attended and unattended speech are processed differently by the brain are not completely clear. Here, we explore how attention affects the processing of the acoustic sounds of speech as well as the mapping of those sounds onto categorical phonetic features. We find evidence of categorical phonetic feature processing for attended, but not unattended, speech. Furthermore, we find evidence that categorical phonetic feature processing is enhanced by attention, but acoustic processing is not. These findings add an important new layer to our understanding of how the human brain solves the cocktail party problem.
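
The same encoding-model logic extends to the acoustic versus phonetic comparison described here: does adding categorical phonetic features on top of an acoustic predictor improve EEG prediction, and does that gain depend on attention? The sketch below shows the general shape of such a comparison; the feature definitions, lag window, regularisation and train/test split are assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch (not the authors' code): does a model with acoustic plus
# phonetic-feature predictors explain EEG better than an acoustic-only model?
# In the actual study this gain would be compared for attended vs unattended speech.
import numpy as np

rng = np.random.default_rng(2)
fs = 64
n_samples, n_channels, n_phonetic = 60 * fs, 32, 19   # 19 phonetic features assumed
lags = np.arange(0, int(0.4 * fs))                    # roughly 0-400 ms of lags

def lagged(X, lags):
    """Concatenate time-lagged copies of a [time x features] design matrix."""
    X = np.atleast_2d(X.T).T          # ensure 2-D: [time x features]
    blocks = []
    for lag in lags:
        shifted = np.zeros_like(X)
        shifted[lag:] = X[:X.shape[0] - lag]
        blocks.append(shifted)
    return np.hstack(blocks)

def ridge(X, y, lam=1e2):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy stand-ins: a speech envelope, a binary phonetic-feature time series, and EEG.
envelope = rng.standard_normal(n_samples)
phonetic = (rng.random((n_samples, n_phonetic)) > 0.9).astype(float)
eeg = rng.standard_normal((n_samples, n_channels))

half = n_samples // 2
X_acoustic = lagged(envelope, lags)
X_combined = np.hstack([X_acoustic, lagged(phonetic, lags)])

def holdout_accuracy(X, y):
    """Fit on the first half, report mean prediction correlation on the second."""
    w = ridge(X[:half], y[:half])
    pred = X[half:] @ w
    return np.mean([np.corrcoef(pred[:, c], y[half:, c])[0, 1]
                    for c in range(y.shape[1])])

# Evidence for phonetic-feature processing = the gain from adding phonetic
# features over acoustics alone.
print("acoustic only:      ", holdout_accuracy(X_acoustic, eeg))
print("acoustic + phonetic:", holdout_accuracy(X_combined, eeg))
```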


Subject(s)
Acoustic Stimulation/methods, Attention/physiology, Phonetics, Speech Perception/physiology, Speech/physiology, Adult, Electroencephalography/methods, Female, Humans, Male, Photic Stimulation/methods, Young Adult