2.
BMC Pediatr ; 24(1): 449, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997661

ABSTRACT

BACKGROUND: Language delay affects toddlers' near- and long-term social communication and learning, and an increasing number of experts are paying attention to it. The development of prosody discrimination is one of the earliest stages of language development, in which key skills for later stages are mastered. Therefore, analyzing the relationship between the brain's discrimination of speech prosody and language abilities may provide an objective basis for the diagnosis of and intervention in language delay. METHODS: In this study, all cases (n = 241) were enrolled from a tertiary women's hospital from 2021 to 2022. We used functional near-infrared spectroscopy (fNIRS) to assess children's neural prosody discrimination abilities, and the Chinese Communicative Development Inventory (CCDI) was used to evaluate their language abilities. RESULTS: Ninety-eight full-term and 108 preterm toddlers were included in the final analyses of the phase I and phase II studies, respectively. The overall rate of abnormal CCDI screening results was 9.2% for full-term and 34.3% for preterm toddlers. Full-term toddlers showed prosody discrimination ability in all channels except channel 5, whereas preterm toddlers showed prosody discrimination ability in channel 6 only. Multifactorial logistic regression analyses showed that prosody discrimination in the right angular gyrus (channel 3) had a statistically significant effect on language delay (odds ratio = 0.301, P < 0.05) in full-term toddlers. A random forest (RF) regression model showed that prosody discrimination reflected by channels and brain regions based on fNIRS data was an important parameter for predicting language delay in preterm toddlers, with prosody discrimination reflected by the right angular gyrus (channel 4) being the most important parameter. The area under the model's receiver operating characteristic (ROC) curve was 0.687. CONCLUSIONS: Neural prosody discrimination ability is positively associated with language development, and assessment of brain prosody discrimination abilities through fNIRS could serve as an objective indicator for early identification of children with language delay in future clinical applications.
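[Editor's illustrative sketch] A minimal, hypothetical example of the kind of analysis described in the abstract: a random forest predicting language delay from per-channel fNIRS prosody-discrimination scores, evaluated with ROC AUC and ranked by feature importance. The data, scikit-learn workflow, and parameters below are placeholders, not the study's pipeline.

```python
# Hypothetical sketch: rank fNIRS channel features for predicting language delay
# and evaluate the model with the area under the ROC curve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 10))    # placeholder per-channel prosody-discrimination scores
y = rng.integers(0, 2, size=108)  # placeholder labels: 1 = abnormal CCDI screening

model = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, scores))

model.fit(X, y)
ranked = sorted(enumerate(model.feature_importances_, start=1), key=lambda t: -t[1])
for channel, importance in ranked[:3]:
    print(f"channel {channel}: importance {importance:.3f}")
```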


Subject(s)
Language Development Disorders , Language Development , Spectroscopy, Near-Infrared , Humans , Female , Male , Child, Preschool , Language Development Disorders/diagnosis , Infant , Speech Perception/physiology , Brain/physiology , Brain/diagnostic imaging
3.
J Acoust Soc Am ; 156(1): 341-349, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38990038

ABSTRACT

Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not for the stationary noise condition. The learning effect was related to the size of masking release: a larger masking release was accompanied by an increased time course of the learning effect and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.


Subject(s)
Acoustic Stimulation , Learning , Noise , Perceptual Masking , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Adult , Young Adult , Male , Speech Perception/physiology , Female , Middle Aged , Speech Reception Threshold Test , Time Factors , Auditory Threshold
4.
Article in Chinese | MEDLINE | ID: mdl-38965850

ABSTRACT

Objectives: To investigate the outcomes of cochlear implantation in Mandarin-speaking cochlear implant (CI) users with single-sided deafness (SSD). Methods: This was a single-center prospective cohort study. Eleven Mandarin-speaking adult SSD patients (6 males, 5 females, aged 24 to 50 years) who underwent cochlear implantation at Capital Medical University Beijing Tongren Hospital from August 2020 to October 2021 were recruited. In a sound field with 7 loudspeakers distributed over 180°, we measured the root-mean-square error (RMSE) of sound-source localization preoperatively and at 1, 3, 6, and 12 months after switch-on to assess improvement in localization. The Mandarin Speech Perception (MSP) test was used in the sound field to measure the speech reception threshold (SRT) of SSD patients for different signal and noise locations in steady-state noise with the CI off and the CI on, reflecting the head shadow effect (SSSDNNH), binaural summation effect (S0N0), and squelch effect (S0NSSD). The Tinnitus Handicap Inventory (THI) and a Visual Analogue Scale (VAS) were used to assess changes in tinnitus severity and tinnitus loudness at each time point. The Speech, Spatial and Qualities of Hearing Scale (SSQ) and the Nijmegen Cochlear Implant Questionnaire (NCIQ) were used to assess the subjective benefits of spatial speech perception and quality of life after cochlear implantation. SPSS 19.0 software was used for statistical analysis. Results: SSD patients showed a significant improvement in hearing thresholds in the poorer ear with the CI on compared with the CI off. Sound-source localization improved significantly, with statistically significant differences in RMSE at each follow-up compared with the preoperative measurement (P<0.05). In the SSSDNNH condition, which reflects the head shadow effect, the SRT in binaural hearing improved by 6.5 dB compared with the unaided condition, a statistically significant difference (t=6.25, P=0.001). However, there was no significant improvement in SRT between the binaural and unaided conditions in the S0N0 and S0NSSD conditions (P>0.05). The total THI score and the scores of its three dimensions decreased significantly (P<0.05). Tinnitus VAS scores were significantly lower in binaural hearing than in the unaided condition (P<0.001). The total SSQ score and the scores of its speech and spatial dimensions improved significantly in binaural hearing compared with the unaided condition (P<0.001). There was no statistical difference in overall NCIQ scores between the preoperative and postoperative assessments (P>0.05); only the self-efficacy subscore showed a significant increase (Z=-2.497, P=0.013). Conclusion: CI can help Mandarin-speaking SSD patients restore binaural hearing to some extent and improve sound localization and speech recognition in noise. In addition, CI in SSD patients can suppress tinnitus, reduce its loudness, and improve subjective perceptions of spatial hearing and quality of life.
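[Editor's illustrative sketch] As a minimal illustration of the RMSE localization metric used above, the snippet below computes the root-mean-square error between loudspeaker azimuths and a listener's responses. The speaker layout and response values are assumed examples, not the study's data.

```python
import numpy as np

def localization_rmse(presented_deg, responded_deg):
    """Root-mean-square error (in degrees) between presented and responded azimuths."""
    err = np.asarray(responded_deg, float) - np.asarray(presented_deg, float)
    return np.sqrt(np.mean(err ** 2))

# Assumed layout: 7 loudspeakers spanning 180 degrees in 30-degree steps.
presented = [-90, -60, -30, 0, 30, 60, 90]
responded = [-60, -60, 0, 0, 30, 90, 60]   # hypothetical listener responses
print(f"RMSE: {localization_rmse(presented, responded):.1f} degrees")
```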


Subject(s)
Cochlear Implantation , Humans , Male , Female , Cochlear Implantation/methods , Adult , Middle Aged , Prospective Studies , Treatment Outcome , Hearing Loss, Unilateral/surgery , Cochlear Implants , Speech Perception , Young Adult , Sound Localization , Tinnitus/surgery , Deafness/surgery , Hearing Aids
5.
Sci Rep ; 14(1): 15194, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956187

ABSTRACT

After a right hemisphere stroke, more than half of the patients are impaired in their capacity to produce or comprehend speech prosody. Yet, despite its social-cognitive consequences for patients, aprosodia following stroke has received scant attention. In this report, we introduce a novel, simple psychophysical procedure which, by combining systematic digital manipulations of speech stimuli and reverse-correlation analysis, allows estimating the internal sensory representations that subtend how individual patients perceive speech prosody, and the level of internal noise that governs behavioral variability in how patients apply these representations. Tested on a sample of N = 22 right-hemisphere stroke survivors and N = 21 age-matched controls, the representation + noise model provides a promising alternative to the clinical gold standard for evaluating aprosodia (MEC): both parameters strongly associate with receptive, and not expressive, aprosodia measured by MEC within the patient group; they have better sensitivity than MEC for separating high-functioning patients from controls; and have good specificity with respect to non-prosody-related impairments of auditory attention and processing. Taken together, individual differences in either internal representation, internal noise, or both, paint a potent portrait of the variety of sensory/cognitive mechanisms that can explain impairments of prosody processing after stroke.
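[Editor's illustrative sketch] A minimal sketch of first-order reverse correlation under assumed task details: a simulated listener chooses which of two randomly pitch-manipulated utterances better matches a target prosody, and the internal representation is estimated as the mean profile of chosen minus rejected stimuli. The stimulus parameterization and noise model are illustrative, not the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_segments = 300, 6
# Random pitch offsets applied to each temporal segment of two stimuli per trial.
profiles = rng.normal(size=(n_trials, 2, n_segments))

true_template = np.array([0.0, 0.2, 0.5, 1.0, 0.5, 0.2])  # assumed internal representation
internal_noise = 1.0                                       # governs response variability

# Simulated decision: pick the stimulus whose profile best matches the template, plus noise.
evidence = profiles @ true_template + internal_noise * rng.normal(size=(n_trials, 2))
choice = evidence.argmax(axis=1)

chosen = profiles[np.arange(n_trials), choice]
rejected = profiles[np.arange(n_trials), 1 - choice]
kernel = chosen.mean(axis=0) - rejected.mean(axis=0)       # first-order reverse-correlation kernel
print(np.round(kernel, 2))
```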


Subject(s)
Speech Perception , Stroke , Humans , Stroke/physiopathology , Stroke/complications , Speech Perception/physiology , Male , Female , Middle Aged , Aged , Noise , Psychophysics/methods , Adult
6.
Sci Rep ; 14(1): 15029, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951556

ABSTRACT

Recent advances in haptic technology could allow haptic hearing aids, which convert audio to tactile stimulation, to become viable for supporting people with hearing loss. A tactile vocoder strategy for audio-to-tactile conversion, which exploits these advances, has recently shown significant promise. In this strategy, the amplitude envelope is extracted from several audio frequency bands and used to modulate the amplitude of a set of vibro-tactile tones. The vocoder strategy allows good consonant discrimination, but vowel discrimination is poor and the strategy is susceptible to background noise. In the current study, we assessed whether multi-band amplitude envelope expansion can effectively enhance critical vowel features, such as formants, and improve speech extraction from noise. In 32 participants with normal touch perception, tactile-only phoneme discrimination with and without envelope expansion was assessed both in quiet and in background noise. Envelope expansion improved performance in quiet by 10.3% for vowels and by 5.9% for consonants. In noise, envelope expansion improved overall phoneme discrimination by 9.6%, with no difference in benefit between consonants and vowels. The tactile vocoder with envelope expansion can be deployed in real-time on a compact device and could substantially improve clinical outcomes for a new generation of haptic hearing aids.
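[Editor's illustrative sketch] A minimal sketch of the tactile-vocoder idea described above: band amplitude envelopes modulating vibro-tactile tones, with multi-band envelope expansion. The band edges, tone frequencies, and expansion exponent below are assumptions for illustration and are not the published strategy's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(fs) / fs
audio = np.random.randn(fs)                       # placeholder for one second of speech

band_edges = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]  # assumed analysis bands (Hz)
tone_freqs = [80, 120, 180, 250]                  # assumed vibro-tactile carrier tones (Hz)
expansion = 2.0                                   # exponent > 1 expands envelope contrasts

tactile = np.zeros_like(audio)
for (lo, hi), f_tone in zip(band_edges, tone_freqs):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, audio)))      # band amplitude envelope
    env = (env / (env.max() + 1e-12)) ** expansion      # amplitude-envelope expansion
    tactile += env * np.sin(2 * np.pi * f_tone * t)     # modulate the tactile tone
tactile /= len(band_edges)                              # simple normalization
```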


Subject(s)
Hearing Aids , Noise , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Adult , Young Adult , Touch/physiology , Acoustic Stimulation/methods , Touch Perception/physiology , Hearing Loss/physiopathology
7.
J Acoust Soc Am ; 156(1): 93-106, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38958486

ABSTRACT

Older adults with hearing loss may experience difficulty recognizing speech in noise due to factors related to attenuation (e.g., reduced audibility and sensation levels, SLs) and distortion (e.g., reduced temporal fine structure, TFS, processing). Furthermore, speech recognition may improve when the amplitude modulation spectrum of the speech and masker are non-overlapping. The current study investigated this by filtering the amplitude modulation spectrum into different modulation rates for speech and speech-modulated noise. The modulation depth of the noise was manipulated to vary the SL of speech glimpses. Younger adults with normal hearing and older adults with normal or impaired hearing listened to natural speech or speech vocoded to degrade TFS cues. Control groups of younger adults were tested on all conditions with spectrally shaped speech and threshold matching noise, which reduced audibility to match that of the older hearing-impaired group. All groups benefitted from increased masker modulation depth and preservation of syllabic-rate speech modulations. Older adults with hearing loss had reduced speech recognition across all conditions. This was explained by factors related to attenuation, due to reduced SLs, and distortion, due to reduced TFS processing, which resulted in poorer auditory processing of speech cues during the dips of the masker.


Subject(s)
Acoustic Stimulation , Auditory Threshold , Cues , Noise , Perceptual Masking , Speech Perception , Humans , Speech Perception/physiology , Aged , Noise/adverse effects , Adult , Young Adult , Male , Female , Middle Aged , Age Factors , Recognition, Psychology , Time Factors , Aging/physiology , Presbycusis/physiopathology , Presbycusis/diagnosis , Presbycusis/psychology , Persons With Hearing Impairments/psychology , Aged, 80 and over , Case-Control Studies , Speech Intelligibility
8.
Elife ; 13, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39038076

ABSTRACT

To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined this with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.


Subject(s)
Music , Humans , Male , Female , Adult , Nerve Net/physiology , Speech/physiology , Auditory Perception/physiology , Epilepsy/physiopathology , Young Adult , Electroencephalography , Cerebral Cortex/physiology , Electrocorticography , Speech Perception/physiology , Middle Aged , Brain Mapping
9.
J Acoust Soc Am ; 156(1): 638-654, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39051718

ABSTRACT

This experimental study investigated whether infants use iconicity in speech and gesture cues to interpret word meanings. Specifically, we tested infants' sensitivity to size sound symbolism and iconic gesture cues and asked whether combining these cues in a multimodal fashion would enhance infants' sensitivity in a superadditive manner. Thirty-six 14-17-month-old infants participated in a preferential looking task in which they heard a spoken nonword (e.g., "zudzud") while observing a small and large object (e.g., a small and large square). All infants were presented with an iconic cue for object size (small or large) (1) in the pitch of the spoken non-word (high vs low), (2) in gesture (small or large), or (3) congruently in pitch and gesture (e.g., a high pitch and small gesture indicating a small square). Infants did not show a preference for congruently sized objects in any iconic cue condition. Bayes factor analyses showed moderate to strong support for the null hypotheses. In conclusion, 14-17-month-old infants did not use iconic pitch cues, iconic gesture cues, or iconic multimodal cues (pitch and gesture) to associate speech sounds with their referents. These findings challenge theories that emphasize the role of iconicity in early language development.


Subject(s)
Cues , Gestures , Speech Perception , Humans , Infant , Male , Female , Acoustic Stimulation , Bayes Theorem , Symbolism , Pitch Perception , Comprehension , Size Perception
10.
JASA Express Lett ; 4(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39051871

ABSTRACT

Since its creation, the coordinate response measure (CRM) corpus has been applied in hundreds of studies to explore the mechanisms of informational masking in multi-talker situations, as well as in speech-in-noise and auditory attention tasks. Here, we present its French version, with content equivalent to the original English version. Furthermore, an evaluation of speech-on-speech intelligibility in French shows informational masking with result patterns similar to the original English data. This validation of the French CRM corpus supports the use of the CRM for intelligibility tests in French, and for comparisons with a foreign language under masking conditions.


Subject(s)
Language , Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Female , Male , Adult , Perceptual Masking/physiology , France , Young Adult , Noise
11.
Trends Hear ; 28: 23312165241258056, 2024.
Article in English | MEDLINE | ID: mdl-39053892

ABSTRACT

This study investigated the morphology of the functional near-infrared spectroscopy (fNIRS) response to speech sounds measured from 16 sleeping infants and how it changes with repeated stimulus presentation. We observed a positive peak followed by a wide negative trough, with the latter being most evident in early epochs. We argue that the overall response morphology captures the effects of two simultaneous, but independent, response mechanisms that are both activated at the stimulus onset: one being the obligatory response to a sound stimulus by the auditory system, and the other being a neural suppression effect induced by the arousal system. Because the two effects behave differently with repeated epochs, it is possible to mathematically separate them and use fNIRS to study factors that affect the development and activation of the arousal system in infants. The results also imply that standard fNIRS analysis techniques need to be adjusted to take into account the possibilities of multiple simultaneous brain systems being activated and that the response to a stimulus is not necessarily stationary.


Subject(s)
Acoustic Stimulation , Arousal , Sleep , Spectroscopy, Near-Infrared , Humans , Spectroscopy, Near-Infrared/methods , Acoustic Stimulation/methods , Infant , Sleep/physiology , Female , Male , Arousal/physiology , Speech Perception/physiology , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Auditory Pathways/physiology , Brain Mapping/methods , Time Factors , Age Factors , Oxyhemoglobins/metabolism
12.
Trends Hear ; 28: 23312165241260621, 2024.
Article in English | MEDLINE | ID: mdl-39053897

ABSTRACT

While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise, or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task) and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual-task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that even when cognitive control demands for listening are minimal, interaction with postural control occurs. Correlational analysis revealed that hearing loss was a better predictor than age of speech identification and postural control.


Subject(s)
Aging , Cognition , Multitasking Behavior , Postural Balance , Speech Perception , Standing Position , Humans , Male , Female , Middle Aged , Speech Perception/physiology , Adult , Aged , Young Adult , Age Factors , Postural Balance/physiology , Multitasking Behavior/physiology , Aging/physiology , Aging/psychology , Acoustic Stimulation , Noise/adverse effects , Speech Intelligibility , Hearing/physiology , Recognition, Psychology
13.
JASA Express Lett ; 4(7), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39046893

ABSTRACT

Although the telephone band (0.3-3 kHz) provides sufficient information for speech recognition, the contribution of the non-telephone band (<0.3 and >3 kHz) is unclear. To investigate its contribution, speech intelligibility and talker identification were evaluated using consonants, vowels, and sentences. The non-telephone band produced relatively good intelligibility for consonants (76.0%) and sentences (77.4%), but not vowels (11.5%). The non-telephone band supported good talker identification only with sentences (74.5%), but not vowels (45.8%) or consonants (10.8%). Furthermore, the non-telephone band cannot produce satisfactory speech intelligibility in noise at the sentence level, suggesting the importance of full-band access in realistic listening.
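[Editor's illustrative sketch] A minimal sketch of how speech could be split into the telephone band (0.3-3 kHz) and its complement for stimuli of the kind described above. Filter types, orders, and sampling rate are assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000
speech = np.random.randn(fs)                      # placeholder for a speech recording

# Telephone band: 0.3-3 kHz bandpass.
sos_tel = butter(6, [300, 3000], btype="bandpass", fs=fs, output="sos")
telephone_band = sosfiltfilt(sos_tel, speech)

# Non-telephone band: energy below 0.3 kHz plus energy above 3 kHz.
sos_lo = butter(6, 300, btype="lowpass", fs=fs, output="sos")
sos_hi = butter(6, 3000, btype="highpass", fs=fs, output="sos")
non_telephone_band = sosfiltfilt(sos_lo, speech) + sosfiltfilt(sos_hi, speech)
```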


Subject(s)
Speech Intelligibility , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Telephone , Adult , Young Adult , Phonetics , Noise
14.
J Parkinsons Dis ; 14(5): 999-1013, 2024.
Article in English | MEDLINE | ID: mdl-39031381

ABSTRACT

Background: Research indicates that people with Parkinson's disease (PwPs) may experience challenges in both peripheral and central auditory processing, although findings are inconsistent across studies. Due to the diversity of auditory measures used, there is a need for standardized, replicable hearing assessments to clarify which aspects of audition are impacted in PwPs and whether they are linked to motor and non-motor symptoms. Objective: To characterize auditory processes and their possible alteration in PwPs. To address this, we collected a comprehensive set of standardized measures of audition using PART, a digital testing platform designed to facilitate replication. Additionally, we examined the relationship between auditory, cognitive, and clinical variables in PwPs. Methods: We included 44 PwPs and 54 age- and education-matched healthy controls. Assessments included detection of diotic and dichotic frequency modulation, temporal gaps, spectro-temporal broad-band modulation, and speech-on-speech masking. Results: We found no statistically significant differences in auditory processing measures between PwPs and the comparison group (ps > 0.07). In PwPs, an auditory processing composite score showed significant medium-sized correlations with cognitive measures (0.39 < r < 0.41, ps < 0.02) and with clinical variables of motor symptom severity, quality of life, depression, and caretaker burden (0.33 < r < 0.52, ps < 0.03). Conclusions: While larger datasets are needed to clarify whether PwPs experience more auditory difficulties than healthy controls, our results underscore the importance of considering auditory processing on the symptomatic spectrum of Parkinson's disease using standardized, replicable methodologies.


It is unknown whether there is a relationship between Parkinson's disease (PD) and hearing ability. While some studies have found hearing difficulties to be associated with PD, other studies failed to replicate these effects. We suggest that a possible reason for these differing findings is differences in how hearing is measured. To clarify the literature, we tested a group of people with Parkinson's (PwPs) on several aspects of hearing using a freely available tablet-based app. We compared PwPs' hearing tests to those of an age- and education-matched group of people without PD. While we found no clear differences between the groups, we did find that better hearing abilities were related to less severe motor symptoms and depression, better reported quality of life, and less reported burden of the disease experienced by the caretaker. We conclude that although there is no solid evidence that hearing is necessarily impaired in PD, measuring hearing in PwPs can provide valuable clinical information. This can inform new approaches to treatment for people living with PD, such as those related to improving hearing.


Subject(s)
Auditory Perception , Parkinson Disease , Humans , Parkinson Disease/physiopathology , Parkinson Disease/complications , Male , Female , Aged , Middle Aged , Auditory Perception/physiology , Auditory Perceptual Disorders/etiology , Auditory Perceptual Disorders/physiopathology , Auditory Perceptual Disorders/diagnosis , Speech Perception/physiology
15.
Cognition ; 250: 105866, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38971020

ABSTRACT

Language experience confers a benefit to voice learning, a concept described in the literature as the language familiarity effect (LFE). What experiences are necessary for the LFE to be conferred is less clear. We contribute empirically and theoretically to this debate by examining within- and across-language voice learning with Cantonese-English bilingual voices in a talker-voice association paradigm. Listeners were trained in Cantonese or English and assessed on their abilities to generalize voice learning at test on Cantonese and English utterances. By testing listeners from four language backgrounds - English Monolingual, Cantonese-English Multilingual, Tone Multilingual, and Non-tone Multilingual groups - we assess whether the LFE and group-level differences in voice learning are due to varying abilities (1) in accessing the relative acoustic-phonetic features that distinguish a voice, (2) in learning at a given rate, or (3) in generalizing learning of talker-voice associations to novel same-language and different-language utterances. These four language background groups allow us to investigate the roles of language-specific familiarity, tone language experience, and generic multilingual experience in voice learning. Differences in performance across listener groups show evidence in support of the LFE and the role of two mechanisms for voice learning: the extraction and association of talker-specific, language-general information that is more robustly generalized across languages, and talker-specific, language-specific information that may be more readily accessible and learnable, but which, due to its language-specific nature, is less able to be extended to another language.


Subject(s)
Learning , Multilingualism , Speech Perception , Voice , Humans , Voice/physiology , Speech Perception/physiology , Female , Male , Learning/physiology , Adult , Young Adult , Language , Recognition, Psychology/physiology , Phonetics
16.
Otolaryngol Pol ; 78(4): 1-6, 2024 Jul 21.
Article in English | MEDLINE | ID: mdl-39041849

ABSTRACT

Introduction: Speech audiometry is a well-established and frequently used test in audiology as well as in the evaluation of cochlear implant recipients' performance. Expanding indications for cochlear implantation call for more refined methods of both assessing and prognosticating the outcome of aural rehabilitation. The variability of speech intelligibility tests and materials requires a standardized protocol to facilitate outcome comparison. Aim: The aim of this review is to analyze how speech audiometry and other speech intelligibility tests are used, and how their results are reported, in patients with cochlear implants in Poland and worldwide. Materials and methods: Protocols from many domestic and foreign health centers were compared, revealing numerous methodological differences. The literature for analysis was selected according to the PRISMA algorithm recommendations, and twenty research papers were chosen for the review. Discussion: Many papers lacked data on the methodology of the tests performed. Many authors point to difficulties in comparing results, especially when a publication lacks basic technical information. Nevertheless, if the right method is applied, results can be compared. In the literature, only one presentation level of the test material is prevalent. Speech audiometry is important for exploring the connections between multiple preoperative and postoperative prognostic aspects of cochlear implantation. Conclusions: Because of the variability in how outcomes of CI patients are presented and reported, a consensus is needed on a system that facilitates comparison of research results. This would provide a simple solution for accurate analysis and selection of the right set of data. A schematic for presenting audiological data used at the authors' health center is proposed as an example.


Subject(s)
Audiometry, Speech , Cochlear Implantation , Cochlear Implants , Speech Perception , Humans , Audiometry, Speech/methods , Poland , Female , Male
17.
Codas ; 36(5): e20240009, 2024.
Article in English | MEDLINE | ID: mdl-39046026

ABSTRACT

PURPOSE: The study aimed to identify (1) whether the age and gender of listeners and the length of vocal stimuli affect emotion discrimination accuracy in voice; and (2) whether the determined level of expression of perceived affective emotions is age and gender-dependent. METHODS: Thirty-two age-matched listeners listened to 270 semantically neutral voice samples produced in neutral, happy, and angry intonation by ten professional actors. The participants were required to categorize the auditory stimulus based on three options and judge the intensity of emotional expression in the sample using a customized tablet web interface. RESULTS: The discrimination accuracy of happy and angry emotions decreased with age, while accuracy in discriminating neutral emotions increased with age. Females rated the intensity level of perceived affective emotions higher than males across all linguistic units. These were: for angry emotions in words (z = -3.599, p < .001), phrases (z = -3.218, p = .001), and texts (z = -2.272, p = .023), for happy emotions in words (z = -5.799, p < .001), phrases (z = -4.706, p < .001), and texts (z = -2.699, p = .007). CONCLUSION: Accuracy in perceiving vocal expressions of emotions varies according to age and gender. Young adults are better at distinguishing happy and angry emotions than middle-aged adults, while middle-aged adults tend to categorize perceived affective emotions as neutral. Gender also plays a role, with females rating expressions of affective emotions in voices higher than males. Additionally, the length of voice stimuli impacts emotion discrimination accuracy.


Subject(s)
Emotions , Speech Perception , Voice , Humans , Female , Male , Adult , Emotions/physiology , Age Factors , Young Adult , Sex Factors , Middle Aged , Speech Perception/physiology , Voice/physiology , Adolescent , Aged
18.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed the mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.


Subject(s)
Gestures , Speech Perception , Humans , Female , Male , Speech Perception/physiology , Young Adult , Adult , Visual Perception/physiology , Electroencephalography , Comprehension/physiology , Acoustic Stimulation , Speech/physiology , Brain/physiology , Photic Stimulation/methods
19.
Sci Rep ; 14(1): 16603, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39025957

ABSTRACT

Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech, so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
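[Editor's illustrative sketch] A minimal sketch of the two frequency-domain tracking measures named above, evoked power and inter-trial phase coherence (ITPC), evaluated at an assumed phrase rate. The sampling rate, trial layout, and phrase rate are placeholder assumptions.

```python
import numpy as np

fs = 250                                   # assumed EEG sampling rate (Hz)
trials = np.random.randn(40, fs * 12)      # placeholder: 40 trials x 12 s, one channel

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(trials.shape[1], d=1 / fs)

# Evoked power: power of the trial-averaged complex spectrum (phase-locked activity).
evoked_power = np.abs(spectra.mean(axis=0)) ** 2
# ITPC: consistency of spectral phase across trials (0 = random, 1 = perfectly aligned).
itpc = np.abs(np.exp(1j * np.angle(spectra)).mean(axis=0))

phrase_rate = 1.0                          # e.g., 1 Hz phrases at a 2 Hz word rate (assumed)
idx = np.argmin(np.abs(freqs - phrase_rate))
print(f"evoked power at {freqs[idx]:.2f} Hz: {evoked_power[idx]:.3f}, ITPC: {itpc[idx]:.3f}")
```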


Subject(s)
Brain , Electroencephalography , Humans , Male , Female , Adult , Brain/physiology , Young Adult , Language , Speech Perception/physiology , Speech/physiology
20.
Int J Pediatr Otorhinolaryngol ; 182: 112020, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38964177

ABSTRACT

BACKGROUND AND OBJECTIVES: Lexical tone presents challenges to cochlear implant (CI) users, especially in noise. Bimodal hearing utilizes residual acoustic hearing on the contralateral side and may offer benefits for tone recognition in noise. The purpose of the present study was to evaluate tone recognition in both steady-state noise and multi-talker babbles by prelingually-deafened, Mandarin-speaking children with unilateral CIs or bimodal hearing. METHODS: Fifty-three prelingually-deafened, Mandarin-speaking children who received CIs participated in this study. Twenty-two of them were unilateral CI users and 31 wore a hearing aid (HA) in the contralateral ear (i.e., bimodal hearing). All subjects were tested for Mandarin tone recognition in quiet and in two types of maskers: speech-spectrum-shaped noise (SSN) and two-talker babbles (TTB) at four signal-to-noise ratios (-6, 0, +6, and +12 dB). RESULTS: While no differences existed in tone recognition in quiet between the two groups, the Bimodal group outperformed the Unilateral CI group under noise conditions. The differences between the two groups were significant at SNRs of 0, +6, and +12 dB in the SSN conditions (all p < 0.05) and at SNRs of +6 and +12 dB in the TTB conditions (both p < 0.01), but not in the other conditions (p > 0.05). The TTB exerted a greater masking effect than the SSN for tone recognition in both the Unilateral CI group and the Bimodal group at all SNRs tested (all p < 0.05). Among demographic and audiometric variables, only age at implantation showed a weak but significant correlation with mean tone recognition performance under the SSN conditions (r = -0.276, p = 0.045). However, when Bonferroni correction was applied to the correlation analysis, this weak correlation was no longer significant. CONCLUSION: Prelingually-deafened children with CIs face challenges in tone perception in noisy environments, especially when the noise fluctuates in amplitude, as multi-talker babbles do. Wearing a HA on the contralateral side, when residual hearing permits, is beneficial for tone recognition in noise.


Subject(s)
Cochlear Implants , Noise , Speech Perception , Humans , Male , Female , Speech Perception/physiology , Child , Child, Preschool , Deafness/surgery , Hearing Aids , Cochlear Implantation/methods , Language