Results 1 - 20 of 58
1.
Cereb Cortex ; 33(10): 6465-6473, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36702477

ABSTRACT

Absolute pitch (AP) is the ability to rapidly label pitch without an external reference. The speed of AP labeling may be related to faster sensory processing. We compared the time needed for auditory processing in AP musicians, non-AP musicians, and nonmusicians (NM) using high-density electroencephalographic recording. Participants responded to pure tones and sung voice. Stimuli evoked a negative deflection peaking at ~100 ms (N1) post-stimulus onset, followed by a positive deflection peaking at ~200 ms (P2). N1 latency was shortest in AP musicians, intermediate in non-AP musicians, and longest in NM. Source analyses showed decreased auditory cortex and increased frontal cortex contributions to N1 for complex tones compared with pure tones. Compared with NM, AP musicians had weaker source currents in left auditory cortex but stronger currents in left inferior frontal gyrus (IFG) during N1, and stronger currents in left IFG during P2. Compared with non-AP musicians, AP musicians exhibited stronger source currents in right insula and left IFG during N1, and stronger currents in left IFG during P2. Non-AP musicians had stronger N1 currents in right auditory cortex than nonmusicians. Currents in left IFG and left auditory cortex were correlated with response times exclusively in AP musicians. Findings suggest that a left frontotemporal network supports rapid pitch labeling in AP.
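The N1 and P2 measures above come down to peak-picking on an averaged evoked waveform within a search window. The following Python sketch illustrates that step on a synthetic ERP; the sampling rate, windows, and toy waveform are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def peak_latency(erp, times, t_min, t_max, polarity=-1):
    """Latency (s) of the most extreme deflection of the given polarity
    within a search window of an averaged ERP waveform."""
    mask = (times >= t_min) & (times <= t_max)
    window = erp[mask] * polarity            # flip sign so the peak is a maximum
    return times[mask][np.argmax(window)]

# Toy average ERP: an N1-like negative peak near 100 ms and a P2-like
# positive peak near 200 ms (amplitudes in volts; all values invented).
fs = 1000.0                                  # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)
erp = (-2e-6 * np.exp(-((times - 0.1) ** 2) / 0.001)
       + 1.5e-6 * np.exp(-((times - 0.2) ** 2) / 0.002))

n1 = peak_latency(erp, times, 0.05, 0.15, polarity=-1)
p2 = peak_latency(erp, times, 0.15, 0.30, polarity=+1)
print(f"N1 latency: {n1 * 1000:.0f} ms, P2 latency: {p2 * 1000:.0f} ms")
```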


Subject(s)
Music , Pitch Perception , Humans , Pitch Perception/physiology , Auditory Perception , Prefrontal Cortex , Reaction Time/physiology , Electroencephalography , Acoustic Stimulation , Pitch Discrimination/physiology , Evoked Potentials, Auditory/physiology
2.
Ear Hear ; 44(3): 460-476, 2023.
Article in English | MEDLINE | ID: mdl-36536499

ABSTRACT

OBJECTIVES: Given the low rates of hearing aid adoption among individuals with hearing loss, it is imperative to better understand the decision-making processes leading to greater hearing aid uptake. A careful analysis of the existing literature on theoretical approaches to studying these processes is needed to help researchers frame hypotheses and methodology in audiology research. Therefore, we conducted a scoping review with two aims. First, we examine theories that have been used in research on hearing aid adoption. Second, we propose additional theories from the behavioral sciences that have not yet been used to examine hearing aid uptake but that can inform future research. DESIGN: We identified peer-reviewed publications whose research was driven by one or more theoretical approaches by searching through PubMed, ProQuest PsycINFO, CINAHL Plus, Web of Science, Scopus, and OVID Medline/Embase/PsycINFO. The publications were examined by two researchers for eligibility. RESULTS: Twenty-three papers were included in the analysis. The most commonly studied theoretical approaches include the Health Belief Model, the Transtheoretical Model of Behavior Change, Self-Determination Theory, and the COM-B Model. Seven other theoretical frameworks based on cognitive psychology and behavioral economics have also appeared in the literature. In addition, we propose considering nudge theory, the framing effect, prospect theory, social learning theory, social identity theory, dual process theories, and affective-based theories of decision making when studying hearing aid adoption. CONCLUSIONS: We conclude that, although a number of theories have been considered in research on hearing aid uptake, there are considerable methodological limitations to their use. Furthermore, the field can benefit greatly from the inclusion of novel theoretical approaches drawn from outside of audiology.


Subject(s)
Audiology , Deafness , Hearing Aids , Hearing Loss , Humans , Hearing Loss/rehabilitation
3.
Cogn Affect Behav Neurosci ; 22(2): 291-303, 2022 04.
Article in English | MEDLINE | ID: mdl-34811708

ABSTRACT

Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through nonverbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to listen to speech clips produced with happy, sad, or neutral expression. These stimuli were presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left presupplementary motor area (pre-SMA) was more active in response to happy and sad stimuli than to neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to video-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by using our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
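Mu event-related desynchronization (ERD), the index used here, is a decrease in 8-13 Hz power during a task relative to a baseline period. A minimal Python sketch of that computation follows; the Welch parameters and band edges are common choices, assumed rather than taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin=8.0, fmax=13.0):
    """Mean power spectral density in the mu band (8-13 Hz by assumption)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

def mu_erd(baseline, task, fs):
    """ERD (%) = (task power - baseline power) / baseline power * 100.
    Negative values indicate desynchronization, i.e., sensorimotor engagement."""
    p_base = band_power(baseline, fs)
    return (band_power(task, fs) - p_base) / p_base * 100.0
```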


Subject(s)
Motor Cortex , Speech Perception , Acoustic Stimulation , Auditory Perception , Emotions , Humans , Speech , Speech Perception/physiology , Visual Perception/physiology
4.
Exp Brain Res ; 240(2): 537-548, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34817643

ABSTRACT

This study aims to clarify unresolved questions from two earlier studies by McGarry et al. Exp Brain Res 218(4): 527-538, 2012 and Kaplan and Iacoboni Cogn Process 8: 103-113, 2007 on human mirror neuron system (hMNS) responsivity to multimodal presentations of actions. These questions are: (1) whether the two frontal areas originally identified by Kaplan and Iacoboni (ventral premotor cortex [vPMC] and inferior frontal gyrus [IFG]) are both part of the hMNS (i.e., whether they respond to execution as well as observation), (2) whether both areas yield effects of biologicalness (biological, control) and modality (audio, visual, audiovisual), and (3) whether the vPMC is preferentially responsive to multimodal input. To resolve these questions, we replicated and extended McGarry et al.'s electroencephalography (EEG) study while incorporating advanced source localization methods. Participants were asked to execute movements (ripping paper) as well as observe those movements across the same three modalities (audio, visual, and audiovisual), all while 64-channel EEG data were recorded. Two frontal sources consistent with those identified in prior studies showed mu event-related desynchronization (mu-ERD) under execution and observation conditions. These sources also showed a greater response to biological movement than to control stimuli as well as a distinct visual advantage, with greater responsivity to visual and audiovisual conditions than to audio conditions. Exploratory analyses of mu-ERD in the vPMC under visual and audiovisual observation conditions suggest that the hMNS tracks the magnitude of visual movement over time.


Subject(s)
Mirror Neurons , Motor Cortex , Electroencephalography/methods , Humans , Mirror Neurons/physiology , Motor Cortex/physiology , Movement/physiology
5.
Ear Hear ; 43(3): 836-848, 2022.
Article in English | MEDLINE | ID: mdl-34623112

ABSTRACT

OBJECTIVES: Understanding speech in noise can be highly effortful. Decreasing the signal-to-noise ratio (SNR) of speech increases listening effort, but it is relatively unclear whether decreasing the level of semantic context does as well. The current study used functional near-infrared spectroscopy (fNIRS) to evaluate two primary hypotheses: (1) listening effort (operationalized as oxygenation of the left lateral prefrontal cortex [PFC]) increases as the SNR decreases, and (2) listening effort increases as context decreases. DESIGN: Twenty-eight younger adults with normal hearing completed the Revised Speech Perception in Noise Test, in which they listened to sentences and reported the final word. These sentences either had an easy SNR (+4 dB) or a hard SNR (-2 dB), and were either low in semantic context (e.g., "Tom could have thought about the sport") or high in context (e.g., "She had to vacuum the rug"). PFC oxygenation was measured throughout using fNIRS. RESULTS: Accuracy on the Revised Speech Perception in Noise Test was worse when the SNR was hard than when it was easy, and worse for sentences low in semantic context than for those high in context. Similarly, oxygenation across the entire PFC (including the left lateral PFC) was greater when the SNR was hard, and left lateral PFC oxygenation was greater when context was low. CONCLUSIONS: These results suggest that activation of the left lateral PFC (interpreted here as reflecting listening effort) increases to compensate for acoustic and linguistic challenges. This may reflect the increased engagement of domain-general and domain-specific processes subserved by the dorsolateral prefrontal cortex (e.g., cognitive control) and inferior frontal gyrus (e.g., predicting the sensory consequences of articulatory gestures), respectively.
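The SNR manipulation above (+4 vs. -2 dB) amounts to scaling the noise relative to the speech before mixing. A minimal sketch, assuming RMS-based level definitions:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that 20*log10(rms(speech)/rms(noise)) equals
    snr_db, then mix. At -2 dB SNR the noise RMS exceeds the speech RMS."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise
```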


Subject(s)
Semantics , Speech Perception , Adult , Female , Humans , Listening Effort , Signal-To-Noise Ratio , Spectroscopy, Near-Infrared , Speech Perception/physiology
6.
Int J Audiol ; 61(10): 799-808, 2022 10.
Article in English | MEDLINE | ID: mdl-34883031

ABSTRACT

OBJECTIVE: To evaluate remote testing as a tool for measuring emotional responses to non-speech sounds. DESIGN: Participants self-reported their hearing status and rated valence and arousal in response to non-speech sounds on an Internet crowdsourcing platform. These ratings were compared with data obtained in a laboratory setting from participants who had confirmed normal or impaired hearing. STUDY SAMPLE: Adults with normal and impaired hearing. RESULTS: In both settings, participants with hearing loss rated pleasant sounds as less pleasant than did their peers with normal hearing. The difference in valence ratings between groups was generally smaller in the remote setting than in the laboratory setting. This difference arose because participants with normal hearing rated sounds as less extreme (less pleasant, less unpleasant) in the remote setting than did their peers in the laboratory setting, whereas no such difference was noted for participants with hearing loss. Ratings of arousal were similar for participants with normal and impaired hearing, and this similarity persisted in both settings. CONCLUSIONS: In both test settings, participants with hearing loss rated pleasant sounds as less pleasant than did their normal-hearing counterparts. Future work is warranted to explain the ratings of participants with normal hearing.


Subject(s)
Hearing Aids , Hearing Loss , Speech Perception , Adult , Emotions , Hearing , Hearing Loss/diagnosis , Hearing Loss/psychology , Hearing Tests , Humans , Speech Perception/physiology
7.
J Cogn Neurosci ; 33(4): 635-650, 2021 04.
Article in English | MEDLINE | ID: mdl-33475449

ABSTRACT

The ability to synchronize movements to a rhythmic stimulus, referred to as sensorimotor synchronization (SMS), is a behavioral measure of beat perception. Although SMS is generally superior when rhythms are presented in the auditory modality, recent research has demonstrated near-equivalent SMS for vibrotactile presentations of isochronous rhythms [Ammirante, P., Patel, A. D., & Russo, F. A. Synchronizing to auditory and tactile metronomes: A test of the auditory-motor enhancement hypothesis. Psychonomic Bulletin & Review, 23, 1882-1890, 2016]. The current study aimed to replicate and extend those findings by incorporating a neural measure of beat perception. Nonmusicians were asked to tap to rhythms or to listen passively while EEG data were collected. Rhythmic complexity (isochronous, nonisochronous) and presentation modality (auditory, vibrotactile, bimodal) were fully crossed. Tapping data were consistent with those observed by Ammirante et al. (2016), revealing near-equivalent SMS for isochronous rhythms across modality conditions and a drop-off in SMS for nonisochronous rhythms, especially in the vibrotactile condition. EEG data revealed a greater degree of neural entrainment for isochronous than for nonisochronous trials, as well as for auditory and bimodal than for vibrotactile trials. These findings led us to three main conclusions. First, isochronous rhythms lead to higher levels of beat perception than nonisochronous rhythms across modalities. Second, beat perception is generally enhanced for auditory presentations of rhythm but still possible under vibrotactile presentation conditions. Finally, exploratory analysis of neural entrainment at harmonic frequencies suggests that beat perception may be enhanced for bimodal presentations of rhythm.
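Neural entrainment of the kind measured here is commonly quantified as the EEG spectral amplitude at the beat frequency and its harmonics (a frequency-tagging approach). The sketch below shows that computation under the assumption of a simple FFT pipeline; the abstract does not specify the exact analysis.

```python
import numpy as np

def entrainment_amplitudes(eeg, fs, beat_hz, n_harmonics=3):
    """EEG spectral amplitude at the beat frequency and its harmonics,
    a common index of neural entrainment to a rhythm."""
    n = len(eeg)
    amplitude = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    targets = [beat_hz * k for k in range(1, n_harmonics + 1)]
    return {f: amplitude[np.argmin(np.abs(freqs - f))] for f in targets}

# e.g., a 120-BPM isochronous rhythm has a 2 Hz beat frequency:
# entrainment_amplitudes(eeg, fs=512.0, beat_hz=2.0)
```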


Subject(s)
Auditory Perception , Movement , Acoustic Stimulation , Humans
8.
J Community Psychol ; 49(2): 588-604, 2021 03.
Article in English | MEDLINE | ID: mdl-33314203

ABSTRACT

Reconnecting Indigenous youth with their cultural traditions has been identified as an essential part of healing the intergenerational effects of forced assimilation policies. Past work suggests that learning the music of one's culture can foster cultural identity and community bonding, which may serve as protective factors for well-being. An 8-week traditional song and dance program was implemented in a school setting for Indigenous youth. An evaluation was conducted using a mixed-method design to determine the impact of the program on 35 youth in the community. A triangulation of qualitative and quantitative data revealed several important themes, including personal development, cultural development, social development, student engagement in school-based programming, and perpetuating cultural knowledge. The program provided students with an opportunity to connect with their cultural traditions through activities that encouraged self and cultural expression. Community responses suggested that this type of programming is highly valued among Indigenous communities.


Subject(s)
Music , Social Identification , Adolescent , Humans , Learning , Schools , Social Change
9.
Exp Brain Res ; 238(4): 825-832, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32130431

ABSTRACT

The perception of an event is strongly influenced by the context in which it occurs. Here, we examined the effect of a rhythmic context on detection of asynchrony in both the auditory and vibrotactile modalities. Using the method of constant stimuli in a two-alternative forced-choice (2AFC) task, we presented participants with pairs of pure tones played either simultaneously or with various levels of stimulus onset asynchrony (SOA). Target stimuli in both modalities were nested within one of three contexts: (i) a regularly occurring, predictable rhythm; (ii) an irregular, unpredictable rhythm; or (iii) no rhythm at all. Vibrotactile asynchrony detection had higher thresholds and showed greater variability than auditory asynchrony detection in general. Asynchrony detection thresholds for auditory targets, but not vibrotactile targets, were significantly reduced when the target stimulus was embedded in a regular rhythm as compared to no rhythm. Embedding within an irregular rhythm produced no such improvement. The observed modality asymmetries are interpreted with regard to the superior temporal resolution of the auditory system and specialized brain circuitry supporting auditory-motor coupling.
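With the method of constant stimuli, a detection threshold is typically read off a psychometric function fitted to proportion correct as a function of SOA. A minimal sketch follows; the SOA values, response proportions, and the 75%-correct criterion are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, mu, sigma):
    """2AFC psychometric function rising from chance (0.5) to 1.0."""
    return 0.5 + 0.5 * norm.cdf(soa, loc=mu, scale=sigma)

soas = np.array([0, 20, 40, 60, 80, 120])                   # SOA in ms (invented)
p_correct = np.array([0.50, 0.55, 0.68, 0.85, 0.94, 0.99])  # invented data

(mu, sigma), _ = curve_fit(psychometric, soas, p_correct, p0=[50.0, 20.0])
# With this parameterization, the 75%-correct threshold is simply mu.
print(f"Asynchrony detection threshold: {mu:.1f} ms")
```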


Subject(s)
Auditory Perception/physiology , Sensory Thresholds/physiology , Time Perception/physiology , Touch Perception/physiology , Adult , Female , Humans , Male , Vibration , Young Adult
10.
Brain Cogn ; 145: 105622, 2020 11.
Article in English | MEDLINE | ID: mdl-32949847

ABSTRACT

Spontaneous motor cortical activity during passive perception of action has been interpreted as a sensorimotor simulation of the observed action. There is currently interest in how sensorimotor simulation can support higher-order cognitive functions, such as memory, but this is relatively unexplored in the auditory domain. In the present study, we examined whether the established memory advantage for vocal melodies over non-vocal melodies is attributable to stronger sensorimotor simulation during perception of vocal relative to non-vocal action. Participants listened to 24 unfamiliar folk melodies presented in vocal or piano timbres. These were encoded during three interference conditions: whispering (vocal-motor interference), tapping (non-vocal motor interference), and no interference. Afterwards, participants heard the original 24 melodies presented among 24 foils and judged whether each melody was old or new. A vocal-memory advantage was found in the no-interference and tapping conditions; however, the advantage was eliminated in the whispering condition. This suggests that sensorimotor simulation during the perception of vocal melodies is responsible for the observed vocal-memory advantage.
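The abstract does not name the recognition index used, but old/new performance in designs like this is often summarized with signal-detection sensitivity (d'). A hedged sketch of that computation:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity for an old/new recognition test. The log-linear
    correction (+0.5 / +1) avoids infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts for 24 old and 24 new melodies:
print(d_prime(19, 5, 6, 18))  # ~1.4
```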


Subject(s)
Memory , Music , Voice , Auditory Perception , Hearing , Humans
11.
Noise Health ; 20(93): 42-46, 2018.
Article in English | MEDLINE | ID: mdl-29676294

ABSTRACT

INTRODUCTION: This study is a follow-up to prior research from our group that attempted to relate noise exposure and hearing thresholds in active performing musicians of the National Ballet of Canada Orchestra. MATERIALS AND METHODS: Exposures obtained in early 2010 were compared with exposures obtained in early 2017 (the present study). In addition, audiometric thresholds obtained in early 2012 were compared with thresholds obtained in early 2017 (the present study). This collection of measurements presents an opportunity to observe regularities in exposure patterns, as well as threshold changes that may be expected in active orchestra musicians over a 5-year span. RESULTS: The pattern of noise exposure across instrument groups, which was consistent over the two time points, reveals the highest exposures among brass, percussion/basses, and woodwinds. However, the average noise exposure across groups and time was consistently below 85 dBA, which suggests no occupational hazard. These observations were corroborated by audiometric thresholds, which were generally (a) in the normal range and (b) unchanged over the 5-year period between measurements. CONCLUSION: Because exposure levels were consistently below 85 dBA and changes in audiometric thresholds were minimal, we conclude that musicians experienced little to no risk of noise-induced hearing loss.
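When average exposures like these are computed, equivalent continuous levels (Leq, in dBA) combine on an energy basis rather than as an arithmetic mean. A small sketch of that calculation, with invented levels:

```python
import numpy as np

def average_leq(levels_dba):
    """Energy-average equivalent continuous sound levels (dBA):
    convert to relative intensity, average, convert back to decibels."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_dba) / 10)))

# Invented rehearsal-by-rehearsal exposures for one instrument group:
print(average_leq([82.0, 84.5, 79.0, 83.0]))  # ~82.5 dBA, below the 85 dBA criterion
```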


Subject(s)
Hearing Loss, Noise-Induced/etiology , Music , Occupational Diseases/etiology , Occupational Exposure/adverse effects , Audiometry, Pure-Tone , Auditory Threshold , Canada , Follow-Up Studies , Hearing Loss, Noise-Induced/diagnosis , Humans , Occupational Diseases/diagnosis , Risk Factors
12.
Ear Hear ; 38(4): 455-464, 2017.
Article in English | MEDLINE | ID: mdl-28085739

ABSTRACT

OBJECTIVES: Children who use cochlear implants (CIs) have characteristic pitch processing deficits leading to impairments in music perception and in understanding emotional intention in spoken language. Music training for normal-hearing children has previously been shown to benefit perception of emotional prosody. The purpose of the present study was to assess whether deaf children who use CIs obtain similar benefits from music training. We hypothesized that music training would lead to gains in auditory processing and that these gains would transfer to emotional speech prosody perception. DESIGN: Study participants were 18 child CI users (ages 6 to 15). Participants received either 6 months of music training (i.e., individualized piano lessons) or 6 months of visual art training (i.e., individualized painting lessons). Measures of music perception and emotional speech prosody perception were obtained pre-, mid-, and post-training. The Montreal Battery for Evaluation of Musical Abilities was used to measure five different aspects of music perception (scale, contour, interval, rhythm, and incidental memory). The emotional speech prosody task required participants to identify the emotional intention of a semantically neutral sentence under audio-only and audiovisual conditions. RESULTS: Music training led to improved performance on tasks requiring the discrimination of melodic contour and rhythm, as well as incidental memory for melodies. These improvements were predominantly found from mid- to post-training. Critically, music training also improved emotional speech prosody perception. Music training was most advantageous in audio-only conditions. Art training did not lead to the same improvements. CONCLUSIONS: Music training can lead to improvements in perception of music and emotional speech prosody, and thus may be an effective supplementary technique for supporting auditory rehabilitation following cochlear implantation.


Subject(s)
Cochlear Implantation , Deafness/rehabilitation , Emotions , Music , Social Perception , Speech Perception , Adolescent , Child , Cochlear Implants , Deafness/physiopathology , Deafness/psychology , Female , Humans , Male , Pitch Perception
13.
Neurocase ; 22(6): 526-537, 2016 12.
Article in English | MEDLINE | ID: mdl-28001646

ABSTRACT

Congenital amusia is a condition in which an individual suffers from a deficit of musical pitch perception and production. Individuals with congenital amusia tend to abstain from musical activities. Here, we present the unique case of Tim Falconer, a self-described musicophile who also suffers from congenital amusia. We describe and assess Tim's attempts to train himself out of amusia through a self-imposed 18-month program of formal vocal training and practice. We tested Tim with respect to music perception and vocal production across seven sessions, including pre- and post-training assessments. We also obtained diffusion-weighted images of his brain to assess connectivity between auditory and motor planning areas via the arcuate fasciculus (AF). Tim's behavioral and brain data were compared with those of normal and amusic controls. While Tim showed temporary gains in his singing ability, he did not reach normal levels, and these gains faded when he was not engaged in regular lessons and practice. Tim did show some sustained gains in the perception of musical rhythm and meter. We propose that Tim's lack of improvement in pitch perception and production tasks is due to a long-standing and likely irreversible reduction in connectivity along the AF fiber tract.


Subject(s)
Auditory Perceptual Disorders/physiopathology , Music , Pitch Perception/physiology , Teaching , Voice , Acoustic Stimulation , Analysis of Variance , Anisotropy , Diffusion Magnetic Resonance Imaging , Functional Laterality , Humans , Male , Middle Aged
14.
Cogn Affect Behav Neurosci ; 15(1): 32-44, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25029995

ABSTRACT

In the present study, we examined the involvement of the extended mirror neuron system (MNS), specifically areas that have a strong functional connection to the core system itself, during emotional and nonemotional judgments about human song. We presented participants with audiovisual recordings of sung melodic intervals (two-tone sequences) and manipulated emotion and pitch judgments while keeping the stimuli identical. Mu event-related desynchronization (ERD) was measured as an index of MNS activity, and a source localization procedure was performed on the data to isolate the brain sources contributing to this ERD. We found that emotional judgments of human song led to greater ERD than did nonemotional pitch distance judgments, control judgments related to the singer's hair, or pitch distance judgments about a synthetic tone sequence. Our findings support and expand recent research suggesting that the extended MNS is involved to a greater extent during emotional than during nonemotional perception of human action.


Subject(s)
Brain Waves/physiology , Emotions/physiology , Evoked Potentials/physiology , Mirror Neurons/physiology , Music/psychology , Pitch Perception/physiology , Adult , Female , Humans , Judgment/physiology , Male , Young Adult
15.
Ear Hear ; 36(2): 217-28, 2015.
Article in English | MEDLINE | ID: mdl-25350404

ABSTRACT

OBJECTIVES: Despite vast amounts of research examining the influence of hearing loss on speech perception, comparatively little is known about its influence on music perception. No standardized test exists to quantify music perception of hearing-impaired (HI) persons in a clinically practical manner. This study presents the Adaptive Music Perception (AMP) test as a tool to assess important aspects of music perception with hearing loss. DESIGN: A computer-driven test was developed to determine the discrimination thresholds of 10 low-level physical dimensions (e.g., duration, level) in the context of perceptual judgments about musical dimensions: meter, harmony, melody, and timbre. In the meter test, the listener is asked to judge whether a tone sequence is duple or triple in meter. The harmony test requires that the listener make judgments about the stability of chord sequences. In the melody test, the listener must judge whether a comparison melody is the same as a standard melody when presented in transposition and in the context of a chordal accompaniment that serves as a mask. The timbre test requires that the listener determine which of two comparison tones differs in timbre from a standard tone (ABX design). Twenty-one HI participants and 19 normal-hearing (NH) participants were recruited to carry out the music tests. Participants were tested twice on separate occasions to evaluate test-retest reliability. RESULTS: The HI group had significantly higher discrimination thresholds than the NH group in 7 of the 10 low-level physical dimensions: frequency discrimination in the meter test, dissonance and intonation perception in the harmony test, melody-to-chord ratio for both melody types in the melody test, and the perception of brightness and spectral irregularity in the timbre test. Small but significant improvement between test and retest was observed in three dimensions: frequency discrimination (meter test), dissonance (harmony test), and attack length (timbre test). All other dimensions showed no session effect. Test-retest reliability was poor (<0.6) for spectral irregularity (timbre test); acceptable (>0.6) for pitch and duration (meter test), dissonance and intonation (harmony test), and melody-to-chord ratio I and II (melody test); and excellent (>0.8) for level (meter test) and attack (timbre test). CONCLUSION: The AMP test revealed differences in a wide range of music perceptual abilities between NH and HI listeners. The recognition of meter was more difficult for HI listeners when the listening task was based on frequency discrimination. The HI group was less sensitive to changes in harmony and had more difficulty distinguishing melodies in a background of music. In addition, thresholds for discriminating timbre were significantly higher for the HI group in the brightness and spectral irregularity dimensions. The AMP test can be used as a research tool to further investigate music perception with hearing aids and to compare the benefit of different music processing strategies for the HI listener. Future testing will involve larger samples and the inclusion of hearing-aided conditions, allowing for the establishment of norms so that the test might be appropriate for use in clinical practice.
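The abstract describes the AMP test as adaptive but does not give its tracking rule. A common choice for adaptive discrimination thresholds is Levitt's 2-down/1-up staircase, which converges on the stimulus difference yielding about 70.7% correct; the sketch below assumes that rule, and `respond` stands in for a hypothetical trial function.

```python
def staircase_2down1up(respond, start, step, n_reversals=8):
    """2-down/1-up adaptive track: the level decreases after two consecutive
    correct responses and increases after any error. `respond(level)` runs
    one trial and returns True if correct. Threshold is estimated as the
    mean of the last six reversal levels."""
    level, correct_run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:                 # two in a row: step down
                correct_run = 0
                if direction == +1:
                    reversals.append(level)      # direction flipped: reversal
                direction = -1
                level -= step
        else:                                    # any error: step up
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / len(reversals[-6:])
```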


Subject(s)
Auditory Perception , Hearing Loss/physiopathology , Hearing Tests/methods , Music , Adult , Aged , Aged, 80 and over , Auditory Threshold , Case-Control Studies , Diagnosis, Computer-Assisted , Female , Hearing Loss/diagnosis , Humans , Male , Middle Aged
16.
Proc Natl Acad Sci U S A ; 108(37): 15510-5, 2011 Sep 13.
Article in English | MEDLINE | ID: mdl-21876156

ABSTRACT

Human song exhibits great structural diversity, yet certain aspects of melodic shape (how pitch is patterned over time) are widespread. These include a predominance of arch-shaped and descending melodic contours in musical phrases, a tendency for phrase-final notes to be relatively long, and a bias toward small pitch movements between adjacent notes in a melody [Huron D (2006) Sweet Anticipation: Music and the Psychology of Expectation (MIT Press, Cambridge, MA)]. What is the origin of these features? We hypothesize that they stem from motor constraints on song production (i.e., the energetic efficiency of their underlying motor actions) rather than being innately specified. One prediction of this hypothesis is that any animals subject to similar motor constraints on song will exhibit similar melodic shapes, no matter how distantly related those animals are to humans. Conversely, animals that do not share similar motor constraints on song will not exhibit convergent melodic shapes. Birds provide an ideal case for testing these predictions, because their peripheral mechanisms of song production have both notable similarities to and differences from human vocal mechanisms [Riede T, Goller F (2010) Brain Lang 115:69-80]. We use these similarities and differences to make specific predictions about shared and distinct features of human and avian song structure and find that these predictions are confirmed by empirical analysis of diverse human and avian song samples.
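As a concrete reading of "arch-shaped" and "descending" contours, the toy classifier below labels a phrase from its pitch sequence (e.g., MIDI note numbers). It is a crude illustrative heuristic, not the paper's analysis method.

```python
def classify_contour(pitches):
    """Label a phrase 'arch' if its highest pitch falls mid-phrase;
    otherwise classify by net pitch movement from first to last note."""
    peak = pitches.index(max(pitches))
    if 0 < peak < len(pitches) - 1:
        return "arch"
    return "descending" if pitches[-1] < pitches[0] else "ascending"

print(classify_contour([60, 64, 67, 64, 60]))  # "arch"
print(classify_contour([67, 65, 64, 62, 60]))  # "descending"
```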


Subject(s)
Motor Activity/physiology , Music , Sparrows/physiology , Vocalization, Animal/physiology , Animals , Humans , Respiration , Sound Spectrography , Vibration
17.
Semin Hear ; 44(2): 188-210, 2023 May.
Article in English | MEDLINE | ID: mdl-37122884

ABSTRACT

Listening effort is a long-standing area of interest in auditory cognitive neuroscience. Prior research has used multiple techniques to shed light on the neurophysiological mechanisms underlying listening during challenging conditions. Functional near-infrared spectroscopy (fNIRS) is growing in popularity as a tool for cognitive neuroscience research, and recent advances give it many potential advantages over other neuroimaging modalities for research related to listening effort. This review introduces the basic science of fNIRS and its uses for auditory cognitive neuroscience. We also discuss its application in recently published studies on listening effort and consider future opportunities for studying effortful listening with fNIRS. After reading this article, the learner will be able to explain how fNIRS works, summarize its uses for listening effort research, and apply this knowledge toward the generation of future research in this area.
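At its core, fNIRS converts changes in detected light attenuation at two (or more) wavelengths into changes in oxy- and deoxyhemoglobin concentration via the modified Beer-Lambert law. The sketch below shows the two-wavelength solve; the extinction-coefficient matrix and differential pathlength factors are placeholders that would come from published tables in practice.

```python
import numpy as np

def mbll(delta_od, eps, distance_cm, dpf):
    """Modified Beer-Lambert law: delta_OD(wavelength) =
    (eps_HbO * dHbO + eps_HbR * dHbR) * source-detector distance * DPF.
    Solves the 2x2 system for [dHbO, dHbR]. `eps` has one row per
    wavelength and one column per chromophore (HbO, HbR)."""
    path = distance_cm * np.asarray(dpf, dtype=float)   # effective path per wavelength
    return np.linalg.solve(np.asarray(eps) * path[:, None], np.asarray(delta_od))

# Placeholder values, not tabulated coefficients:
eps = [[0.15, 0.35],    # 760 nm: [HbO, HbR]
       [0.30, 0.18]]    # 850 nm
print(mbll(delta_od=[0.01, 0.02], eps=eps, distance_cm=3.0, dpf=[6.0, 5.0]))
```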

18.
Sci Rep ; 13(1): 2808, 2023 02 16.
Article in English | MEDLINE | ID: mdl-36797318

ABSTRACT

Prior research has revealed a native-accent advantage, whereby nonnative-accented speech is more difficult to process than native-accented speech. Nonnative-accented speakers also experience more negative social judgments. In the current study, we asked three questions. First, does exposure to nonnative-accented speech increase speech intelligibility or decrease listening effort, thereby narrowing the native-accent advantage? Second, does lower intelligibility or higher listening effort contribute to listeners' negative social judgments of speakers? Third and finally, does increased intelligibility or decreased listening effort with exposure to speech bring about more positive social judgments of speakers? To address these questions, normal-hearing adults listened to a block of English sentences with a native accent and a block with a nonnative accent. We found that once participants were accustomed to the task, intelligibility was greater for nonnative-accented speech and increased similarly with exposure for both accents. However, listening effort decreased only for nonnative-accented speech, soon reaching the level of native-accented speech. In addition, lower intelligibility and higher listening effort were associated with lower ratings of speaker warmth, speaker competence, and willingness to interact with the speaker. Finally, competence ratings increased over time to a similar extent for both accents, with this relationship fully mediated by intelligibility and listening effort. These results offer insight into how listeners process and judge unfamiliar speakers.
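The mediation claim in the final result is typically tested with a product-of-coefficients approach. The sketch below shows the single-mediator version for illustration (the study reports two mediators); the variable names are generic, not the paper's.

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation for one mediator:
    a = effect of X on M; b = effect of M on Y controlling for X;
    indirect effect = a*b; c' = direct effect of X on Y given M.
    'Full' mediation corresponds to c' near zero with a nonzero a*b."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    coef = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0]
    c_prime, b = coef[1], coef[2]
    return a * b, c_prime
```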


Subject(s)
Judgment , Speech Perception , Adult , Humans , Listening Effort , Language , Speech Intelligibility
19.
Front Digit Health ; 5: 1064115, 2023.
Article in English | MEDLINE | ID: mdl-36744277

ABSTRACT

The greying of the world is leading to a rapid acceleration in both the healthcare costs and the caregiver burden associated with dementia. There is an urgent need to develop new, easily scalable modalities of support. This perspective paper presents the theoretical background, rationale, and development plans for a music-based digital therapeutic to manage the neuropsychiatric symptoms of dementia, particularly agitation and anxiety. We begin by presenting the findings of a survey we conducted with key opinion leaders, which highlight the value of a music-based digital therapeutic for treating these symptoms. We then consider the neural substrates of these neuropsychiatric symptoms before going on to evaluate randomized controlled trials on the efficacy of music-based interventions in their treatment. Finally, we present our development plans for the adaptation of an existing music-based digital therapeutic that was previously shown to be efficacious in the treatment of adult anxiety symptoms.

20.
J Voice ; 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36642592

ABSTRACT

OBJECTIVES: Parkinson's disease (PD) is a neurodegenerative disease leading to motor impairments and dystonia across diverse muscle groups, including vocal muscles. The vocal production challenges associated with PD have received considerably less research attention than the primary gross motor symptoms of the disease, despite having a substantial effect on quality of life. Increasingly, people living with PD are discovering group singing as an asset-based approach to community building that is purported to strengthen vocal muscles and improve vocal quality. STUDY DESIGN/METHODS: The present study investigated the impact of community choir on vocal production in people living with PD across two sites. Prior to and immediately following a 12-week community choir at each site, vocal testing comprised a range of vocal-acoustic measures, including lowest and highest achievable pitch, duration of phonation, loudness, jitter, and shimmer. RESULTS: Group singing significantly improved some, though not all, measures of vocal production: lowest pitch (both groups), duration of phonation (both groups), intensity (one group), jitter (one group), and shimmer (both groups). CONCLUSIONS: These findings support community choir as a feasible and scalable complementary approach to managing vocal production challenges associated with PD.
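Jitter and shimmer, two of the measures above, quantify cycle-to-cycle variability in glottal period and peak amplitude, respectively. A minimal sketch of the standard "local" definitions, assuming period and amplitude sequences already extracted by a pitch tracker (e.g., Praat):

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100

def local_shimmer(amplitudes):
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100
```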
