Results 1 - 15 of 15
1.
Neuroimage ; 141: 31-39, 2016 Nov 01.
Article in English | MEDLINE | ID: mdl-27436593

ABSTRACT

The faculty of language depends on the interplay between the production and perception of speech sounds. A relevant open question is whether the dimensions that organize voice perception in the brain are acoustical or depend on properties of the vocal system that produced it. One of the main empirical difficulties in answering this question is to generate sounds that vary along a continuum according to the anatomical properties of the vocal apparatus that produced them. Here we use a mathematical model that offers the unique possibility of synthesizing vocal sounds by controlling a small set of anatomically based parameters. In a first stage, the quality of the synthetic voice was evaluated. Using specific time traces for sub-glottal pressure and tension of the vocal folds, the synthetic voices generated perceptual responses that were indistinguishable from those of real speech. The synthesizer was then used to investigate how the auditory cortex responds to the perception of voice depending on the anatomy of the vocal apparatus. Our fMRI results show that sounds are perceived as human vocalizations when produced by a vocal system that follows a simple relationship between the size of the vocal folds and the vocal tract. We found that these anatomical parameters encode the perceptual vocal identity (male, female, child) and show that the brain areas that respond to human speech also encode vocal identity. On the basis of these results, we propose that this low-dimensional model of the vocal system is capable of generating realistic voices and represents a novel tool to explore voice perception with precise control of the anatomical variables that generate speech. Furthermore, the model provides an explanation of how auditory cortices encode voices in terms of the anatomical parameters of the vocal system.
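The abstract does not spell out the synthesizer's equations. As a rough illustration of the idea of driving voice quality from a few anatomical knobs, here is a toy source-filter sketch in Python; the pitch/fold-size scaling, tube formula and all parameter values are illustrative assumptions, not the authors' model.

```python
# Toy sketch (not the authors' model): two "anatomical" parameters, vocal-fold
# size and vocal-tract length, jointly set pitch and formants of a vowel-like sound.
import numpy as np
from scipy.signal import lfilter

def synth_vowel(fold_size_cm=1.0, tract_length_cm=17.0, dur=0.5, fs=16000):
    """Longer folds -> lower f0; longer tract -> lower formants (assumed scalings)."""
    t = np.arange(int(dur * fs)) / fs
    f0 = 120.0 / fold_size_cm                      # crude pitch scaling (assumption)
    source = np.sign(np.sin(2 * np.pi * f0 * t))   # pulse-like glottal source
    c = 35000.0                                    # speed of sound, cm/s
    formants = [(2 * k - 1) * c / (4 * tract_length_cm) for k in (1, 2, 3)]
    out = np.zeros_like(source)
    for f in formants:                             # bank of second-order resonators
        bw = 80.0
        r = np.exp(-np.pi * bw / fs)
        theta = 2 * np.pi * f / fs
        b, a = [1 - r], [1, -2 * r * np.cos(theta), r ** 2]
        out += lfilter(b, a, source)
    return out / np.max(np.abs(out))

male = synth_vowel(fold_size_cm=1.2, tract_length_cm=17.5)
child = synth_vowel(fold_size_cm=0.7, tract_length_cm=12.0)
```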


Subjects
Auditory Cortex/physiology, Glottis/physiology, Neurological Models, Nerve Net/physiology, Speech Perception/physiology, Speech/physiology, Voice/physiology, Acoustic Stimulation/methods, Adult, Communication Aids for Disabled, Computer Simulation, Female, Humans, Male, Anatomic Models, Speech Acoustics, Voice Quality, Young Adult
2.
medRxiv ; 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38853969

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a neurodegenerative motor neuron disease that causes progressive muscle weakness. Progressive bulbar dysfunction causes dysarthria and, in turn, social isolation, reducing quality of life. The Everything ALS Speech Study obtained longitudinal clinical information and speech recordings from 292 participants. In a subset of 120 participants, we measured speaking rate (SR) and listener effort (LE), a measure of dysarthria severity rated by speech pathologists from recordings. LE intra- and inter-rater reliability was very high (ICC 0.88 to 0.92). LE correlated with other measures of dysarthria at baseline. LE changed over time in participants with ALS (slope 0.77 pts/month; p<0.001) but not in controls (slope 0.005 pts/month; p=0.807). The slope of LE progression was similar in all participants with ALS who had bulbar dysfunction at baseline, regardless of the site of ALS onset. LE could be a remotely collected, clinically meaningful clinical outcome assessment for ALS clinical trials.
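The study's analysis code is not given in the abstract; the sketch below only illustrates the kind of per-participant slope comparison described (LE points/month in ALS vs. controls). The column names, the simple per-subject least-squares fit and the Welch t-test are assumptions for illustration.

```python
# Illustrative sketch (not the study's pipeline): estimate each participant's
# listener-effort (LE) slope in points/month and compare ALS vs. control groups.
import numpy as np
import pandas as pd
from scipy import stats

def le_slope(df_subject):
    """Ordinary least-squares slope of LE vs. months since baseline."""
    months = df_subject["months_from_baseline"].to_numpy()
    le = df_subject["listener_effort"].to_numpy()
    slope, _intercept = np.polyfit(months, le, 1)
    return slope

def compare_groups(df):
    slopes = df.groupby(["participant_id", "group"]).apply(le_slope).reset_index(name="slope")
    als = slopes.loc[slopes["group"] == "ALS", "slope"]
    ctl = slopes.loc[slopes["group"] == "control", "slope"]
    t, p = stats.ttest_ind(als, ctl, equal_var=False)
    return als.mean(), ctl.mean(), p

# Tiny synthetic long-format table, just to show the expected layout.
demo = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "group": ["ALS"] * 4 + ["control"] * 4,
    "months_from_baseline": [0, 6, 0, 6, 0, 6, 0, 6],
    "listener_effort": [20, 25, 30, 36, 10, 10, 12, 11],
})
print(compare_groups(demo))
```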

3.
Phys Rev E ; 106(6-1): 064308, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36671138

ABSTRACT

We set up a simple mathematical model for the dynamics of public interest in terms of media coverage and social interactions. We test the model on a series of events related to violence in the US during 2020, using the volume of tweets and retweets as a proxy of public interest, and the volume of news as a proxy of media coverage. The model successfully fits the data and allows inferring a measure of social sensibility that correlates with human mobility data. These findings suggest the basic ingredients and mechanisms that regulate social responses capable of igniting social mobilizations.
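The abstract does not give the model equations. The sketch below is a generic interest-dynamics toy of the kind described: public interest is fed by media coverage and by social contagion, and decays as attention fades. The functional form, parameters and synthetic news burst are all assumptions.

```python
# Generic sketch (not the authors' model): interest I(t) driven by media coverage
# M(t), a contagion term proportional to M*I, and an attention-decay term.
import numpy as np
from scipy.integrate import odeint

def interest_model(I, t, media, alpha=0.5, beta=1.5, gamma=0.8):
    M = np.interp(t, media["t"], media["volume"])   # news volume as external forcing
    return alpha * M + beta * M * I - gamma * I     # media + social contagion - decay

t = np.linspace(0, 30, 300)                                 # days
media = {"t": t, "volume": np.exp(-((t - 5) ** 2) / 4)}     # a burst of coverage
interest = odeint(interest_model, 0.0, t, args=(media,)).ravel()
```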


Subjects
Social Media, Humans, Communication
4.
PLoS One ; 16(1): e0245167, 2021.
Article in English | MEDLINE | ID: mdl-33411825

ABSTRACT

The folds of the brain pose a particular challenge for the subarachnoid vascular grid. The primitive blood vessels that occupy this space while the brain is still flat have to adapt to an ever-changing geometry while constructing an efficient network. Surprisingly, the result is a non-redundant arterial system that is easily challenged by acute occlusions. Here, we generalize the optimal network-building principles of a flat surface growing into a folded configuration and generate an ideal middle cerebral artery (MCA) configuration that can be directly compared with normal brain anatomy. We then describe how the Sylvian fissure (the fold in which the MCA is buried) is formed during development and use our findings to account for the differences between the ideal and the actual shaping pattern of the MCA. Our results reveal that folding dynamics condition the development of arterial anastomoses, yielding a network without loops and with a poor response to acute occlusions.
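As a loose stand-in for the "optimal, loop-free network" idea (the paper's actual optimization is not given in the abstract), one can build a minimum-total-length tree over a set of points; the coordinates and the use of a minimum spanning tree are illustrative assumptions only.

```python
# Minimal sketch: a minimum spanning tree over cortical "territory" points as a
# proxy for a loop-free vascular network of minimal total length (placeholder data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

points = np.random.default_rng(2).random((30, 2))   # territories on a flat sheet
dist = squareform(pdist(points))                     # pairwise Euclidean distances
tree = minimum_spanning_tree(dist)                   # loop-free network, minimal length
print("total vessel length:", tree.sum())
```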


Subjects
Brain, Cerebral Angiography, Middle Cerebral Artery, Brain/blood supply, Brain/diagnostic imaging, Brain/physiology, Female, Humans, Male, Middle Cerebral Artery/diagnostic imaging, Middle Cerebral Artery/physiology
5.
Sci Rep ; 10(1): 3828, 2020 03 02.
Article in English | MEDLINE | ID: mdl-32123186

ABSTRACT

Silent reading is a cognitive operation that produces verbal content with no vocal output. One relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories and lip dynamics during the reading of consonant-consonant-vowel (CCV) combinations that are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlates with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that these transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require greater laryngeal and/or articulatory effort to be pronounced. Our results support the view that a speech motor code is used for the recognition of infrequent text strings during silent reading.
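The core statistic described here is a correlation between per-item first-fixation durations (silent reading) and consonant-transition durations (overt production). A minimal sketch of that comparison follows; the variable names and synthetic values are placeholders, not the study's data.

```python
# Minimal sketch (not the authors' pipeline): correlate first-fixation durations
# for CCV strings with consonant-to-consonant transition durations when uttered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
transition_ms = rng.uniform(20, 120, size=40)                    # per-CCV articulation cost
first_fixation_ms = 180 + 0.8 * transition_ms + rng.normal(0, 15, 40)

r, p = stats.pearsonr(transition_ms, first_fixation_ms)
print(f"r = {r:.2f}, p = {p:.3g}")
```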


Subjects
Eye Movements, Reading, Adult, Brain/physiology, Female, Humans, Male, Photic Stimulation, Young Adult
6.
Phys Rev E ; 100(2-1): 020102, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31574671

ABSTRACT

Language change involves competition between alternative linguistic forms. The spontaneous evolution of these forms typically results in monotonic growth or decay, as in winner-take-all attractor behaviors. In the case of the Spanish past subjunctive, the spontaneous evolution of its two competing forms (ending in -ra and -se) was perturbed by the appearance of the Royal Spanish Academy in 1713, which enforced the spelling of both forms as perfectly interchangeable variants at a moment when the -ra form was predominant. Time series extracted from a massive corpus of books reveal that this regulation in fact produced a transient renewed interest in the old form -se which, once faded, left -ra again as the dominant form up to the present day. We show that the time series are successfully explained by a two-dimensional linear model that integrates an imitative and a novelty component. The model reveals that the temporal scale over which collective attention fades is inversely proportional to the verb frequency. The integration of the two basic mechanisms of imitation and attention to novelty allows us to understand diverse competing objects, with lifetimes that range from hours for memes and news to decades for verbs, suggesting the existence of a general mechanism underlying cultural evolution.
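The exact two-dimensional linear model is not reproduced in the abstract; the sketch below only illustrates the qualitative mechanism described (an imitation term pulling usage back toward the dominant form, plus a transient novelty variable excited by the 1713 regulation). Equations and parameter values are assumptions, not fitted results.

```python
# Hedged sketch of a 2-D linear competition model: x is the usage share of the
# perturbed form (-se), y a novelty/attention variable that fades with timescale tau.
import numpy as np
from scipy.integrate import odeint

def model(state, t, k_imit=0.05, tau=40.0):
    x, y = state
    dx = -k_imit * x + y    # imitation pulls back toward the -ra attractor,
    dy = -y / tau           #   while fading novelty transiently boosts -se
    return [dx, dy]

years = np.linspace(1713, 2000, 288)
traj = odeint(model, [0.4, 0.05], years - years[0])
se_share = traj[:, 0]       # transient bump, then decay toward the -ra-dominant state
```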

7.
Phys Rev E ; 97(5-1): 052406, 2018 May.
Article in English | MEDLINE | ID: mdl-29906900

ABSTRACT

Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.


Assuntos
Movimento , Fala/fisiologia , Adulto , Feminino , Humanos , Masculino , Adulto Jovem
8.
PLoS One ; 13(3): e0193466, 2018.
Article in English | MEDLINE | ID: mdl-29561853

ABSTRACT

Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, in which our participants improvised onomatopoeias from noisy moving objects presented in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. When we applied the classifier to a list of cross-linguistic onomatopoeias, simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
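The abstract does not specify the classifier or feature set. The sketch below shows the general shape of such a movement-type classifier with a generic model and placeholder phonetic features; nothing here is the paper's actual implementation.

```python
# Illustrative sketch: predict movement type (slide / hit / ring) from simple
# phonetic features of an onomatopoeia. Features and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: e.g. fraction of fricatives, plosives, nasals,
# mean vowel height and backness per onomatopoeia transcription.
X = np.random.default_rng(1).random((90, 5))
y = np.repeat(["slide", "hit", "ring"], 30)          # movement-type labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```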


Subjects
Auditory Perception/physiology, Phonetics, Physics, Sound, Visual Perception/physiology, Voice/physiology, Adult, Female, Humans, Language, Male, Middle Aged, Theoretical Models, Speech Perception/physiology, Young Adult
9.
Front Psychol ; 6: 908, 2015.
Article in English | MEDLINE | ID: mdl-26191020

ABSTRACT

Musical theory has built on the premise that musical structures can refer to something different from themselves (Nattiez and Abbate, 1990). The aim of this work is to statistically corroborate the intuition of musical thinkers and practitioners, going back at least to Plato, that music can express complex human concepts beyond merely "happy" and "sad" (Mattheson and Lenneberg, 1958). To do so, we ask whether musical improvisations can be used to classify the semantic category of the word that triggers them. We investigated two specific domains of semantics: morality and logic. While morality has historically been associated with music, logic concepts, which involve more abstract forms of thought, are more rarely associated with it. We examined musical improvisations inspired by positive and negative morality concepts (e.g., good and evil) and logic concepts (true and false), analyzing the associations between these words and their musical representations in terms of acoustic and perceptual features. We found that music conveys information about valence (good and true vs. evil and false) with remarkable consistency across individuals. This information is carried by several musical dimensions that act in synergy to achieve very high classification accuracy. Positive concepts are represented by music with a more ordered pitch structure and lower harmonic and sensorial dissonance than negative concepts. Music also conveys information indicating whether the word that triggered it belongs to the domain of logic or morality (true vs. good), principally through musical articulation. In summary, improvisations consistently map logic and morality information onto specific musical dimensions, testifying to the capacity of music to accurately convey semantic information in domains related to abstract forms of thought.

10.
Article in English | MEDLINE | ID: mdl-25904860

ABSTRACT

Song production in songbirds is controlled by a network of nuclei distributed across several brain regions, which drives respiratory and vocal motor systems to generate sound. We built a model for birdsong production, whose variables are the average activities of different neural populations within these nuclei of the song system. We focus on the predictions of respiratory patterns of song, because these can be easily measured and therefore provide a validation for the model. We test the hypothesis that it is possible to construct a model in which (1) the activity of an expiratory related (ER) neural population fits the observed pressure patterns used by canaries during singing, and (2) a higher forebrain neural population, HVC, is sparsely active, simultaneously with significant motor instances of the pressure patterns. We show that in order to achieve these two requirements, the ER neural population needs to receive two inputs: a direct one, and its copy after being processed by other areas of the song system. The model is capable of reproducing the measured respiratory patterns and makes specific predictions on the timing of HVC activity during their production. These results suggest that vocal production is controlled by a circular network rather than by a simple top-down architecture.
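The published rate equations are not given in the abstract; the toy sketch below only illustrates the circular-architecture idea described, with an expiratory-related (ER) population receiving a direct drive plus a delayed, reprocessed copy of it. The delay, gains, sigmoid and drive waveform are illustrative assumptions.

```python
# Toy rate-model sketch (not the published model): ER activity integrates a direct
# forebrain drive and a delayed copy of it, and is read out as air-sac pressure.
import numpy as np

def simulate(drive, dt=1e-3, delay_s=0.04, tau=0.02, w_direct=1.0, w_loop=0.8):
    delay = int(delay_s / dt)
    er = np.zeros_like(drive)
    for i in range(1, len(drive)):
        fed_back = drive[i - delay] if i >= delay else 0.0
        inp = w_direct * drive[i] + w_loop * fed_back
        er[i] = er[i - 1] + dt / tau * (-er[i - 1] + np.tanh(inp))
    return er                                   # proxy for the respiratory pressure pattern

t = np.arange(0, 2, 1e-3)
drive = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)   # pulsatile forebrain input
pressure = simulate(drive)
```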

11.
PLoS One ; 8(11): e80373, 2013.
Article in English | MEDLINE | ID: mdl-24244681

ABSTRACT

Current models of human vocal production that capture peripheral dynamics in speech require high-dimensional measurements of the neural activity, which are mapped into equally complex motor gestures. In this work we present a motor description for vowels as points in a discrete, low-dimensional space. We monitor the dynamics of three points in the oral cavity using Hall-effect transducers and magnets, describing the resulting signals during normal utterances in terms of active/inactive patterns that allow a robust vowel classification in an abstract binary space. We use simple matrix algebra to link this representation to the anatomy of the vocal tract and to recent reports of highly tuned neuronal activations for vowel production, suggesting a plausible global strategy for vowel codification and motor production.
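The actual sensor placement and pattern-to-vowel mapping are not given in the abstract; the sketch below only illustrates the idea of a discrete binary vowel code read from three measurement points. The particular assignment of patterns to vowels is purely hypothetical.

```python
# Sketch of a discrete, low-dimensional vowel code: threshold three sensor traces
# into an active/inactive pattern and look up the vowel (illustrative mapping).
import numpy as np

patterns = {
    (0, 0, 0): "a",
    (1, 0, 0): "e",
    (1, 1, 0): "i",
    (0, 0, 1): "o",
    (0, 1, 1): "u",
}

def classify(samples, threshold=0.5):
    """Binarize three oral-cavity sensor readings and look up the vowel."""
    binary = tuple((np.asarray(samples) > threshold).astype(int))
    return patterns.get(binary, "unknown")

print(classify([0.9, 0.8, 0.1]))   # -> "i" under this illustrative mapping
```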


Subjects
Phonetics, Adult, Female, Humans, Language, Male, Mouth/physiology, Speech/physiology, Speech Acoustics
12.
Sci Rep ; 3: 3407, 2013 Dec 03.
Article in English | MEDLINE | ID: mdl-24297083

ABSTRACT

What are the features that impersonators select to elicit a speaker's identity? We built a voice database of public figures (targets) and imitations produced by professional impersonators. They produced one imitation based on their memory of the target (caricature) and another after listening to the target audio (replica). A set of naive participants then judged the identity and similarity of pairs of voices. Identity was better evoked by the caricatures, while the replicas were perceived as closer to the targets in terms of voice similarity. We used these data to map the relevant acoustic dimensions for each task. Our results indicate that speaker identity is mainly associated with vocal tract features, while perception of voice similarity is related to vocal fold parameters. We therefore show how acoustic caricatures emphasize identity features at the cost of losing similarity, which allows drawing an analogy with caricatures in the visual space.

13.
Front Hum Neurosci ; 6: 71, 2012.
Article in English | MEDLINE | ID: mdl-22557952

ABSTRACT

While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In a recent work, we found that taste words were consistently mapped to musical parameters. Bitter is associated with low-pitched and continuous music (legato), salty is characterized by silences between notes (staccato), sour is high-pitched, dissonant and fast, and sweet is consonant, slow and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste words. We developed and implemented an algorithm that exploits a large corpus of classic and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. To test the capability of the produced music to elicit significant associations with the different tastes, the generated musical pieces were judged by a group of non-musicians. Results showed that participants could decode the taste word of each composition well above chance. We also discuss how our findings can be expressed in a performance bridging music and cognitive science.
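The fragment-selection step described above (pick the corpus fragment closest to a taste's region in musical space) can be sketched as a nearest-target search; the feature set, taste targets and corpus values below are placeholders, not the authors' parameterization.

```python
# Hedged sketch of fragment selection: choose the corpus fragment whose musical
# features lie closest to the region associated with a taste word.
import numpy as np

# Each fragment described by (mean pitch, articulation/legato index, dissonance).
corpus = {
    "frag_01": np.array([45.0, 0.9, 0.7]),
    "frag_02": np.array([72.0, 0.3, 0.8]),
    "frag_03": np.array([60.0, 0.2, 0.1]),
}
taste_targets = {
    "bitter": np.array([45.0, 0.9, 0.5]),   # low-pitched, legato
    "sour":   np.array([75.0, 0.4, 0.9]),   # high-pitched, dissonant
    "sweet":  np.array([62.0, 0.7, 0.1]),   # consonant, soft
    "salty":  np.array([55.0, 0.1, 0.4]),   # staccato
}

def pick_fragment(taste):
    target = taste_targets[taste]
    return min(corpus, key=lambda name: np.linalg.norm(corpus[name] - target))

print(pick_fragment("sweet"))   # -> "frag_03" with these illustrative values
```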

14.
Perception ; 40(2): 209-19, 2011.
Article in English | MEDLINE | ID: mdl-21650094

ABSTRACT

Zarlino, one of the most important music theorists of the 16th century, described the minor consonances as 'sweet' (dolci) and 'soft' (soavi) (Zarlino 1558/1983, On the Modes, New Haven, CT: Yale University Press, 1983). Hector Berlioz, in his Treatise on Modern Instrumentation and Orchestration (London: Novello, 1855), speaks of the 'small acid-sweet voice' of the oboe. In line with this tradition of describing musical concepts in terms of taste words, recent empirical studies have found reliable associations between taste perception and low-level sound and musical parameters, such as pitch and phonetic features. Here we investigated whether taste words elicit consistent musical representations by asking trained musicians to improvise on the basis of the four canonical taste words: sweet, sour, bitter, and salty. Our results showed that, even in free improvisation, taste words elicited very reliable and consistent musical patterns: 'bitter' improvisations are low-pitched and legato (without interruption between notes), 'salty' improvisations are staccato (notes sharply detached from each other), 'sour' improvisations are high-pitched and dissonant, and 'sweet' improvisations are consonant, slow, and soft. Interestingly, projections of the improvisations onto musical space (a vector space defined by relevant musical parameters) revealed that improvisations based on different taste words were nearly orthogonal or opposite. Decoding methods could classify binary choices of improvisations (i.e., identify the taste word from the melody) with around 80% accuracy, well above chance. In a second experiment we investigated the mapping from the perception of music to taste words. Fifty-seven non-musicians listened to a subset of the improvisations. We found that listeners classified with high accuracy the taste word that had elicited each improvisation. Our results, furthermore, show that associations between taste and music go beyond basic sensory attributes into the domain of semantics, and open a new avenue of investigation to understand the origins of these consistent taste-music patterns.


Subjects
Music, Taste, Acoustic Stimulation, Adult, Analysis of Variance, Psychological Discrimination, Female, Humans, Male
15.
Phys Rev Lett ; 96(5): 058103, 2006 Feb 10.
Article in English | MEDLINE | ID: mdl-16486997

ABSTRACT

A central aspect of the motor control of birdsong production is the capacity to generate diverse respiratory rhythms, which determine the coarse temporal pattern of song. The neural mechanisms that underlie this diversity of respiratory gestures and the resulting acoustic syllables are largely unknown. We show that the respiratory patterns of the highly complex and variable temporal organization of song in the canary (Serinus canaria) can be generated as solutions of a simple model describing the integration between song control and respiratory centers. This example suggests that subharmonic behavior can play an important role in providing a complex variety of responses with minimal neural substrate.
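The published model is not reproduced in the abstract. As a generic illustration of the subharmonic idea (a periodically forced nonlinear oscillator can respond at a fraction of the driving frequency), here is a minimal sketch; the oscillator, its parameters and the spectral check are assumptions, not the paper's model or regimes.

```python
# Toy illustration of possible subharmonic behavior in a periodically forced
# nonlinear oscillator (generic Duffing-type equation, illustrative parameters).
import numpy as np
from scipy.integrate import odeint

def forced_oscillator(state, t, drive_amp=0.3, drive_freq=1.2):
    x, v = state
    dv = -0.2 * v - x - x ** 3 + drive_amp * np.cos(drive_freq * t)
    return [v, dv]

t = np.linspace(0, 400, 40000)
x = odeint(forced_oscillator, [0.1, 0.0], t)[:, 0]

# Compare the dominant response frequency (rad/s) with the drive frequency;
# locking at a fraction of the drive frequency would indicate a subharmonic.
spec = np.abs(np.fft.rfft(x[len(x) // 2:]))
freqs = np.fft.rfftfreq(len(x) // 2, d=t[1] - t[0]) * 2 * np.pi
print("dominant response frequency:", freqs[np.argmax(spec[1:]) + 1])
```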


Subjects
Canaries/physiology, Biological Models, Nonlinear Dynamics, Respiration, Respiratory Center/physiology, Animal Vocalization, Animals, Computer Simulation, Motor Neurons/physiology