Results 1 - 5 of 5
1.
Sci Adv; 10(20): eadm9797, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38748798

ABSTRACT

Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.


Subject(s)
Language, Music, Speech, Humans, Speech/physiology, Male, Pitch Perception/physiology, Female, Adult, Preregistration Publication
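
A minimal sketch, not the authors' published pipeline, of how the features named in this abstract (pitch height, temporal rate, pitch stability, timbral brightness) could be extracted from a single recording; it assumes the librosa library, and the file names are hypothetical:

```python
# Sketch: compare a song recording and a speech recording on the acoustic
# features named in the abstract. Assumes librosa; "song.wav" and
# "speech.wav" are hypothetical file names, not study materials.
import librosa
import numpy as np

def describe(path):
    y, sr = librosa.load(path)
    # Fundamental frequency via probabilistic YIN (pitch height)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]
    # Temporal rate proxy: detected onsets per second
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    rate = len(onsets) / (len(y) / sr)
    # Pitch stability proxy: mean absolute frame-to-frame f0 change, in
    # semitones (lower = more stable pitch)
    semitones = 12 * np.log2(f0 / f0.mean())
    instability = np.abs(np.diff(semitones)).mean()
    # Timbral brightness proxy: mean spectral centroid
    brightness = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
    return dict(median_f0_hz=float(np.median(f0)), onset_rate=rate,
                mean_abs_pitch_change=instability, centroid_hz=float(brightness))

for name in ("song.wav", "speech.wav"):
    print(name, describe(name))
```

On the abstract's predictions, the song file would be expected to show a higher median f0, a lower onset rate, a smaller mean pitch change, and a spectral centroid similar to the speech file.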
2.
Sci Rep; 14(1): 5501, 2024 Mar 6.
Article in English | MEDLINE | ID: mdl-38448636

ABSTRACT

Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between bottom-up and top-down processes remains to be clarified. In the present study, we investigated the contribution of acoustics and of attention to melodies or sentences to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in the temporal or spectral modulation domain, with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph-theoretical metrics to fMRI connectivity matrices, we found that local clustering, which reflects functional specialisation, increased linearly as spectral or temporal cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global topology remained stable across conditions. These findings suggest that lateralisation for speech and music partially depends on an interplay of acoustic cues and task goals under increased attentional demands.


Subject(s)
Cues (Psychology), Magnetic Resonance Imaging, Humans, Communication, Acoustics, Perception
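
A minimal sketch of the kind of graph-theoretical comparison described above (local clustering vs. global topology), assuming networkx; the connectivity matrix is simulated and the edge threshold is an illustrative assumption, not a value from the study:

```python
# Sketch: local clustering vs. a global topology metric on a thresholded
# connectivity matrix. The matrix below is a random stand-in for a real
# region-by-region fMRI correlation matrix, and 0.2 is an arbitrary threshold.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.normal(size=(90, 100)))  # 90 "regions", 100 timepoints
np.fill_diagonal(corr, 0)

# Binarize: keep edges whose absolute correlation exceeds the threshold
adj = (np.abs(corr) > 0.2).astype(int)
G = nx.from_numpy_array(adj)

local_clustering = nx.average_clustering(G)   # proxy for functional specialisation
global_efficiency = nx.global_efficiency(G)   # proxy for global integration
print(f"mean local clustering: {local_clustering:.3f}")
print(f"global efficiency:     {global_efficiency:.3f}")
```

In the study's logic, the first metric varied with cue degradation while global measures of this second kind stayed stable.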
3.
Ann N Y Acad Sci; 1516(1): 76-84, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35918503

ABSTRACT

Melodic Intonation Therapy (MIT) is a prominent rehabilitation program for individuals with post-stroke aphasia. Our meta-analysis investigated the efficacy of MIT while considering quality of outcomes, experimental design, influence of spontaneous recovery, MIT protocol variant, and level of generalization. An extensive literature search identified 606 studies in major databases and trial registers; of those, 22 studies (129 participants overall) met all eligibility criteria. Multi-level mixed- and random-effects models were used to meta-analyze randomized controlled trial (RCT) and non-RCT data separately. RCT evidence on validated outcomes revealed a small-to-moderate standardized effect of MIT on noncommunicative language expression, with substantial uncertainty. Unvalidated outcomes attenuated MIT's effect size compared to validated tests. MIT's effect size was 5.7 times larger for non-RCT data than for RCT data (g̅_case report = 2.01 vs. g̅_RCT = 0.35 for validated noncommunicative language expression measures). Effect size for non-RCT data decreased with the number of months post-stroke, suggesting confounding by spontaneous recovery. Deviation from the original MIT protocol did not systematically alter benefit from treatment. Progress on validated tests arose mainly from gains in repetition tasks rather than in other domains of verbal expression, such as everyday communication ability. Our results confirm the promising role of MIT in improving trained and untrained performance on unvalidated outcomes, alongside validated repetition tasks, and highlight possible limitations in promoting everyday communication ability.


Subject(s)
Aphasia, Stroke, Aphasia/therapy, Humans, Language, Randomized Controlled Trials as Topic, Speech Therapy/methods, Stroke/therapy
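
A short worked sketch of the effect-size arithmetic reported above, assuming Hedges' g as the standardized mean difference; the group summaries passed to the function are hypothetical, while the final ratio uses the abstract's own values (2.01 / 0.35 ≈ 5.7):

```python
# Sketch: standardized mean difference with Hedges' small-sample correction,
# plus the case-report/RCT ratio reported in the abstract.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction J."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # correction for small samples
    return d * j

# Hypothetical post-treatment scores: MIT group vs. control group
print(round(hedges_g(62.0, 10.0, 12, 55.0, 11.0, 12), 2))  # e.g. 0.64

# The abstract's ratio: g (case reports) / g (RCTs)
print(round(2.01 / 0.35, 1))  # 5.7
```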
4.
Front Psychol; 13: 786899, 2022.
Article in English | MEDLINE | ID: mdl-35529579

ABSTRACT

Music and spoken language share certain characteristics: both consist of sequences of acoustic elements that are combined combinatorially, and these elements partition the same continuous acoustic dimensions (frequency, formant space, and duration). However, the resulting categories differ sharply: scale tones and note durations in small-integer ratios appear in music, while speech uses phonemes, lexical tone, and non-isochronous durations. Why did music and language diverge into the two systems we have today, differing in these specific features? We propose a framework based on information theory and a reverse-engineering perspective, suggesting that the design features of music and language are a response to their differential deployment along three continuous dimensions: the familiar propositional-aesthetic ('goal') and repetitive-novel ('novelty') dimensions, and a dialogic-choric ('interactivity') dimension that is our focus here. Specifically, we hypothesize that music exhibits specializations enhancing coherent production by several individuals concurrently (the 'choric' context), whereas language is specialized for exchange in tightly coordinated turn-taking (the 'dialogic' context). We examine the evidence for our framework, from both humans and non-human animals, and conclude that many proposed design features of music and language follow naturally from their use in distinct dialogic and choric communicative contexts. Furthermore, the hybrid nature of intermediate systems such as poetry, chant, or solo lament follows from their deployment in the less typical interactive context.

5.
Front Psychol; 11: 586723, 2020.
Article in English | MEDLINE | ID: mdl-33362651

ABSTRACT

Vocal music and spoken language both have important roles in human communication, but it is unclear why these two different modes of vocal communication exist. Although similar, speech and song differ in certain design features. One interesting difference lies in the pitch intonation contour, which consists of discrete tones in song vs. gliding intonation contours in speech. Here, we investigated whether vocal phrases consisting of discrete pitches (song-like) or gliding pitches (speech-like) are remembered better, conducting three studies that implemented auditory same-different tasks at three levels of difficulty. We tested two hypotheses: that discrete pitch contours aid auditory memory, independent of musical experience ("song memory advantage hypothesis"), or that greater everyday experience in perceiving and producing speech makes speech intonation easier to remember ("experience advantage hypothesis"). We used closely matched stimuli, controlling for rhythm and timbre, and included a stimulus intermediate between song-like and speech-like pitch contours (with partially gliding and partially discrete pitches). We also assessed participants' musicality to evaluate experience-dependent effects. We found that song-like vocal phrases were remembered better than speech-like vocal phrases, and that intermediate vocal phrases showed an advantage similar to that of song-like vocal phrases. Participants with more musical experience were better at remembering all three types of vocal phrases. The precise roles of absolute and relative pitch perception, and the influence of top-down vs. bottom-up processing, should be clarified in future studies. However, our results suggest that one potential reason for the emergence of discrete pitch, a feature that characterises music across cultures, might be that it enhances auditory memory.
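
The abstract does not spell out its scoring method, but performance in same-different tasks like these is commonly summarized with a signal-detection sensitivity index (d'); a minimal sketch assuming scipy, treating the task like a yes/no design for simplicity, with hypothetical response counts:

```python
# Sketch: sensitivity (d') for a same-different memory task from hit and
# false-alarm counts. The counts below are hypothetical, not study data.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical counts for song-like vs. speech-like vocal phrases
print(f"song-like:   d' = {d_prime(44, 6, 10, 40):.2f}")
print(f"speech-like: d' = {d_prime(36, 14, 16, 34):.2f}")
```

Under the study's finding, song-like (and intermediate) phrases would yield the higher d'.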
