Results 1 - 20 of 5,413

1.
Proc Natl Acad Sci U S A ; 121(22): e2316818121, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38768360

ABSTRACT

In mammals, offspring vocalizations typically encode information about identity and body condition, allowing parents to limit alloparenting and adjust care. But how do these vocalizations mediate parental behavior in species faced with the problem of rearing not one, but multiple offspring, such as domestic dogs? Comprehensive acoustic analyses of 4,400 whines recorded from 220 Beagle puppies in 40 litters revealed litter and individual (within litter) differences in call acoustic structure. By then playing resynthesized whines to mothers, we showed that they provided more care to their litters, and were more likely to carry the emitting loudspeaker to the nest, in response to whine variants derived from their own puppies than from strangers. Importantly, care provisioning was attenuated by experimentally moving the fundamental frequency (fo, perceived as pitch) of their own puppies' whines outside their litter-specific range. Within most litters, we found a negative relationship between puppies' whine fo and body weight. Consistent with this, playbacks showed that maternal care was stronger in response to high-pitched whine variants simulating relatively small offspring within their own litter's range compared to lower-pitched variants simulating larger offspring. We thus show that maternal care in a litter-rearing species relies on a dual assessment of offspring identity and condition, largely based on level-specific inter- and intra-litter variation in offspring call fo. This dual encoding system highlights how, even in a long-domesticated species, vocalizations reflect selective pressures to meet species-specific needs. Comparative work should now investigate whether similar communication systems have convergently evolved in other litter-rearing species.
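_
The acoustic analyses and playback designs above hinge on estimating each whine's fundamental frequency (fo). As a minimal, illustrative sketch only (not the authors' pipeline), fo can be tracked with librosa's pYIN estimator; the file name and the 200-2000 Hz search bounds below are assumptions, not values from the study.

```python
# Illustrative fo estimation for a single whine recording (assumed file and bounds).
import numpy as np
import librosa

def whine_fo_stats(wav_path, fmin=200.0, fmax=2000.0):
    """Return median fo (Hz) and fo range (Hz) for one recording."""
    y, sr = librosa.load(wav_path, sr=None)              # keep native sample rate
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0_voiced = f0[voiced_flag & ~np.isnan(f0)]           # voiced frames only
    return float(np.median(f0_voiced)), float(np.ptp(f0_voiced))

# Hypothetical usage: median_fo, fo_range = whine_fo_stats("puppy_whine.wav")
```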


Subject(s)
Maternal Behavior , Animal Vocalization , Animals , Dogs , Maternal Behavior/physiology , Animal Vocalization/physiology , Female , Body Weight
2.
Proc Natl Acad Sci U S A ; 121(25): e2305948121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38857400

ABSTRACT

For over a century, the evolution of animal play has sparked scientific curiosity. The prevalence of social play in juvenile mammals suggests that play is a beneficial behavior, potentially contributing to individual fitness. Yet evidence from wild animals supporting the long-hypothesized link between juvenile social play, adult behavior, and fitness remains limited. In Western Australia, adult male bottlenose dolphins (Tursiops aduncus) form multilevel alliances that are crucial for their reproductive success. A key adult mating behavior involves allied males using joint action to herd individual females. Juveniles of both sexes invest significant time in play that resembles adult herding-taking turns in mature male (actor) and female (receiver) roles. Using a 32-y dataset of individual-level association patterns, paternity success, and behavioral observations, we show that juvenile males with stronger social bonds are significantly more likely to engage in joint action when play-herding in actor roles. Juvenile males also monopolized the actor role and produced an adult male herding vocalization ("pops") when playing with females. Notably, males who spent more time playing in the actor role as juveniles achieved more paternities as adults. These findings not only reveal that play behavior provides male dolphins with mating skill practice years before they sexually mature but also demonstrate in a wild animal population that juvenile social play predicts adult reproductive success.


Subject(s)
Bottlenose Dolphin , Reproduction , Animal Sexual Behavior , Social Behavior , Animals , Male , Bottlenose Dolphin/physiology , Female , Reproduction/physiology , Animal Sexual Behavior/physiology , Western Australia , Animal Vocalization/physiology , Play and Playthings
3.
Annu Rev Neurosci ; 41: 553-572, 2018 07 08.
Article in English | MEDLINE | ID: mdl-29986164

ABSTRACT

Hearing is often viewed as a passive process: Sound enters the ear, triggers a cascade of activity through the auditory system, and culminates in an auditory percept. Contrary to this passive view, motor-related signals strongly modulate the auditory system from the eardrum to the cortex. Motor modulation of auditory activity is best documented during speech and other vocalizations but can also be detected during a wide variety of other sound-generating behaviors. An influential idea is that these motor-related signals suppress neural responses to predictable movement-generated sounds, thereby enhancing sensitivity to environmental sounds during movement while helping to detect errors in learned acoustic behaviors, including speech and musicianship. Findings in humans, monkeys, songbirds, and mice provide new insights into the circuits that convey motor-related signals to the auditory system, while lending support to the idea that these signals function predictively to facilitate hearing and vocal learning.


Subject(s)
Auditory Pathways/physiology , Hearing/physiology , Movement/physiology , Animal Vocalization/physiology , Acoustic Stimulation , Animals , Humans
4.
Proc Natl Acad Sci U S A ; 120(27): e2300262120, 2023 07 04.
Article in English | MEDLINE | ID: mdl-37364108

ABSTRACT

Human caregivers interacting with children typically modify their speech in ways that promote attention, bonding, and language acquisition. Although this "motherese," or child-directed communication (CDC), occurs in a variety of human cultures, evidence among nonhuman species is very rare. We looked for its occurrence in a nonhuman mammalian species with long-term mother-offspring bonds that is capable of vocal production learning, the bottlenose dolphin (Tursiops truncatus). Dolphin signature whistles provide a unique opportunity to test for CDC in nonhuman animals, because we are able to quantify changes in the same vocalizations produced in the presence or absence of calves. We analyzed recordings made during brief catch-and-release events of wild bottlenose dolphins in waters near Sarasota Bay, Florida, United States, and found that females produced signature whistles with significantly higher maximum frequencies and wider frequency ranges when recorded in the presence of their own dependent calves than in their absence. These differences align with the higher fundamental frequencies and wider pitch ranges seen in human CDC. Our results provide evidence in a nonhuman mammal for changes in the same vocalizations when produced in the presence vs. absence of offspring, and thus strongly support convergent evolution of motherese, or CDC, in bottlenose dolphins. CDC may function to enhance attention, bonding, and vocal learning in dolphin calves, as it does in human children. Our data add to the growing body of evidence that dolphins provide a powerful animal model for studying the evolution of vocal learning and language.
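_
The comparison described here is a paired, within-female contrast of whistle measurements recorded with vs. without a calf. A minimal sketch of that kind of paired test follows; the numbers are hypothetical placeholders and the test choice (Wilcoxon signed-rank) is an assumption, not necessarily the authors' statistic.

```python
# Paired within-female comparison of whistle maximum frequency (kHz); values are hypothetical.
import numpy as np
from scipy.stats import wilcoxon

max_freq_with_calf = np.array([21.3, 19.8, 23.1, 20.5, 22.0])  # kHz, hypothetical
max_freq_without   = np.array([18.9, 18.2, 21.4, 19.7, 20.1])  # kHz, hypothetical

stat, p = wilcoxon(max_freq_with_calf, max_freq_without)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
```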


Subject(s)
Bottlenose Dolphin , Female , Animals , Humans , Animal Vocalization , Mothers , Sound Spectrography , Language Development
5.
Proc Natl Acad Sci U S A ; 120(9): e2219394120, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36802437

ABSTRACT

Vocal fatigue is a measurable form of performance fatigue resulting from overuse of the voice and is characterized by negative vocal adaptation. Vocal dose refers to cumulative exposure of the vocal fold tissue to vibration. Professionals with high vocal demands, such as singers and teachers, are especially prone to vocal fatigue. Failure to adjust habits can lead to compensatory lapses in vocal technique and an increased risk of vocal fold injury. Quantifying and recording vocal dose to inform individuals about potential overuse is an important step toward mitigating vocal fatigue. Previous work established vocal dosimetry methods, that is, processes to quantify vocal fold vibration dose, but relied on bulky, wired devices that are not amenable to continuous use during natural daily activities; these previously reported systems also provide only limited mechanisms for real-time user feedback. This study introduces a soft, wireless, skin-conformal technology that gently mounts on the upper chest to capture vibratory responses associated with vocalization in a manner that is immune to ambient noise. Pairing with a separate, wirelessly linked device supports haptic feedback to the user based on quantitative thresholds in vocal usage. A machine learning-based approach enables precise vocal dosimetry from the recorded data to support personalized, real-time quantitation and feedback. These systems have strong potential to guide healthy behaviors in vocal use.
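_
To make "vocal dose" concrete: one common formulation in the vocal dosimetry literature accumulates a time dose (seconds of phonation) and a cycle dose (number of vibratory cycles) from frame-wise fo and voicing estimates. The sketch below assumes such per-frame estimates are available from a device; it is an illustration of that formulation, not the authors' machine-learning pipeline.

```python
# Minimal vocal-dose accumulation from assumed per-frame voicing flags and fo values (Hz).
import numpy as np

def vocal_doses(f0_hz, voiced, hop_s):
    """Return time dose (s of phonation) and cycle dose (vibratory cycles)."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = np.asarray(voiced, dtype=bool)
    time_dose = voiced.sum() * hop_s                   # total phonation time
    cycle_dose = np.nansum(f0_hz[voiced]) * hop_s      # integral of fo over voiced frames
    return time_dose, cycle_dose

# Hypothetical example: 10 ms frames
f0 = [220, 0, 225, 230, 0, 218]
voiced = [True, False, True, True, False, True]
print(vocal_doses(f0, voiced, hop_s=0.010))
```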


Subject(s)
Singing , Voice Disorders , Voice , Humans , Feedback , Voice Disorders/etiology , Voice/physiology , Vocal Folds/physiology
6.
Proc Natl Acad Sci U S A ; 119(27): e2201275119, 2022 07 05.
Article in English | MEDLINE | ID: mdl-35759672

ABSTRACT

Fine audiovocal control is a hallmark of human speech production and depends on precisely coordinated muscle activity guided by sensory feedback. Little is known about shared audiovocal mechanisms between humans and other mammals. We hypothesized that real-time audiovocal control in bat echolocation uses the same computational principles as human speech. To test the prediction of this hypothesis, we applied state feedback control (SFC) theory to the analysis of call frequency adjustments in the echolocating bat, Hipposideros armiger. This model organism exhibits well-developed audiovocal control to sense its surroundings via echolocation. Our experimental paradigm was analogous to one implemented in human subjects. We measured the bats' vocal responses to spectrally altered echolocation calls. Individual bats exhibited highly distinct patterns of vocal compensation to these altered calls. Our findings mirror typical observations of speech control in humans listening to spectrally altered speech. Using mathematical modeling, we determined that the same computational principles of SFC apply to bat echolocation and human speech, confirming the prediction of our hypothesis.
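_
The core behavioral observation, compensation of call frequency against spectrally altered feedback, can be illustrated with a toy negative-feedback loop. This is a deliberate simplification and not the authors' state feedback control model; the target frequency, shift magnitude, and gain below are hypothetical values.

```python
# Toy compensation loop (simplified stand-in, not the authors' SFC model):
# the vocalizer hears its own call spectrally shifted, compares the perceived
# frequency to an internal target, and partially corrects the next call.
import numpy as np

def simulate_compensation(target_hz=70_000, shift_hz=1_000, gain=0.4, n_calls=20):
    freqs = [float(target_hz)]                  # emitted call frequencies (hypothetical)
    for _ in range(n_calls - 1):
        perceived = freqs[-1] + shift_hz        # spectrally altered feedback
        error = perceived - target_hz           # mismatch with internal target
        freqs.append(freqs[-1] - gain * error)  # partial, gain-weighted correction
    return np.array(freqs)

calls = simulate_compensation()
print(f"final compensation: {calls[0] - calls[-1]:.0f} Hz downward shift")
```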


Subject(s)
Chiroptera , Echolocation , Sensory Feedback , Animal Vocalization , Animals , Auditory Perception/physiology , Chiroptera/physiology , Echolocation/physiology , Sensory Feedback/physiology , Female , Humans , Biological Models , Speech/physiology , Animal Vocalization/physiology
7.
J Neurophysiol ; 131(2): 304-310, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38116612

ABSTRACT

Motor performance is monitored continuously by specialized brain circuits and used adaptively to modify behavior on a moment-to-moment basis and over longer time periods. During vocal behaviors, such as singing in songbirds, internal evaluation of motor performance relies on sensory input from the auditory and vocal-respiratory systems. Sensory input from the auditory system to the motor system, often referred to as auditory feedback, has been well studied in singing zebra finches (Taeniopygia guttata), but little is known about how and where nonauditory sensory feedback is evaluated. Here we show that brief perturbations in air sac pressure cause short-latency neural responses in the higher-order song control nucleus HVC (used as proper name), an area necessary for song learning and song production. Air sacs were briefly pressurized through a cannula in anesthetized or sedated adult male zebra finches, and neural responses were recorded in both nucleus parambigualis (PAm), a brainstem inspiratory center, and HVC, a cortical premotor nucleus. These findings show that song control nuclei in the avian song system are sensitive to perturbations directly targeted to vocal-respiratory, or viscerosensory, afferents and support a role for multimodal sensory feedback integration in modifying and controlling vocal control circuits.

NEW & NOTEWORTHY: This study presents the first evidence of sensory input from the vocal-respiratory periphery directly activating neurons in a motor circuit for vocal production in songbirds. It was previously thought that this circuit relies exclusively on sensory input from the auditory system, but we provide groundbreaking evidence for nonauditory sensory input reaching the higher-order premotor nucleus HVC, expanding our understanding of what sensory feedback may be available for vocal control.


Subject(s)
Finches , Animals , Male , Finches/physiology , Learning/physiology , Brain Stem , Sensory Feedback , Animal Vocalization/physiology
8.
J Neurophysiol ; 131(5): 950-963, 2024 05 01.
Article in English | MEDLINE | ID: mdl-38629163

ABSTRACT

Rare disruptions of the transcription factor FOXP1 are implicated in a human neurodevelopmental disorder characterized by autism and/or intellectual disability with prominent problems in speech and language abilities. Avian orthologues of this transcription factor are evolutionarily conserved and highly expressed in specific regions of songbird brains, including areas associated with vocal production learning and auditory perception. Here, we investigated possible contributions of FoxP1 to song discrimination and auditory perception in juvenile and adult female zebra finches. They received lentiviral knockdowns of FoxP1 in one of two brain areas involved in auditory stimulus processing, HVC (proper name) or CMM (caudomedial mesopallium). Ninety-six females, distributed over different experimental and control groups, were trained to discriminate between two stimulus songs in an operant Go/Nogo paradigm and subsequently tested with an array of stimuli. This made it possible to assess how well they recognized and categorized altered versions of training stimuli and whether localized FoxP1 knockdowns affected the role of different features during discrimination and categorization of song. Although FoxP1 expression was significantly reduced by the knockdowns, neither discrimination of the stimulus songs nor categorization of songs modified in pitch, sequential order of syllables, or by reversed playback was affected. Subsequently, we analyzed the full dataset to assess the impact of the different stimulus manipulations for cue weighing in song discrimination. Our findings show that zebra finches rely on multiple parameters for song discrimination, with relatively more prominent roles for spectral parameters and syllable sequencing.

NEW & NOTEWORTHY: In humans, mutations of the transcription factor FoxP1 are implicated in speech and language problems. In songbirds, FoxP1 has been linked to male song learning and female preference strength. We found that FoxP1 knockdowns in female HVC and caudomedial mesopallium (CMM) did not alter song discrimination or categorization based on spectral and temporal information. However, this large dataset allowed us to validate the greater weighting of spectral over temporal cues for song recognition.
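_
Operant Go/Nogo discrimination performance of the kind described here is conventionally summarized with a signal-detection sensitivity index such as d-prime. The abstract does not state which measure the authors used, so the following is a generic sketch of that standard computation with hypothetical trial counts.

```python
# Standard d-prime for a Go/Nogo discrimination task (generic sketch, hypothetical counts).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    n_go, n_nogo = hits + misses, false_alarms + correct_rejections
    # log-linear correction avoids infinite z-scores at rates of 0 or 1
    hit_rate = (hits + 0.5) / (n_go + 1)
    fa_rate = (false_alarms + 0.5) / (n_nogo + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(hits=85, misses=15, false_alarms=20, correct_rejections=80):.2f}")
```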


Asunto(s)
Señales (Psicología) , Aprendizaje Discriminativo , Pinzones , Factores de Transcripción Forkhead , Técnicas de Silenciamiento del Gen , Vocalización Animal , Animales , Pinzones/fisiología , Factores de Transcripción Forkhead/genética , Factores de Transcripción Forkhead/metabolismo , Femenino , Aprendizaje Discriminativo/fisiología , Vocalización Animal/fisiología , Percepción Auditiva/fisiología , Proteínas Represoras/genética , Proteínas Represoras/metabolismo , Estimulación Acústica
9.
Am Nat ; 203(2): 267-283, 2024 02.
Article in English | MEDLINE | ID: mdl-38306283

ABSTRACT

Vocal production learning (the capacity to learn to produce vocalizations) is a multidimensional trait that involves different learning mechanisms during different temporal and socioecological contexts. Key outstanding questions are whether vocal production learning begins during the embryonic stage and whether mothers play an active role in this through pupil-directed vocalization behaviors. We examined variation in vocal copy similarity (an indicator of learning) in eight species from the songbird family Maluridae, using comparative and experimental approaches. We found that (1) incubating females from all species vocalized inside the nest and produced call types including a signature "B element" that was structurally similar to their nestlings' begging call; (2) in a prenatal playback experiment using superb fairy wrens (Malurus cyaneus), embryos showed a stronger heart rate response to playbacks of the B element than to another call element (A); and (3) mothers that produced slower calls had offspring with greater similarity between their begging call and the mother's B element vocalization. We conclude that malurid mothers display behaviors concordant with pupil-directed vocalizations and may actively influence their offspring's early life through sound learning shaped by maternal call tempo.


Subject(s)
Passeriformes , Songbirds , Animals , Female , Humans , Mothers , Animal Vocalization/physiology , Songbirds/physiology , Learning
10.
Am Nat ; 204(2): 181-190, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39008842

ABSTRACT

Where dramatic sexual displays are involved in attracting a mate, individuals can enhance their performances by manipulating their physical environment. Typically, individuals alter their environment either in preparation for a performance by creating a "stage" or during the display itself by using discrete objects as "props." We examined an unusual case of performative manipulation of an entire stage by male Albert's lyrebirds (Menura alberti) during their complex song and dance displays. We found that males from throughout the species' range shake the entangled forest vegetation of their display platforms, creating a highly conspicuous and stereotypical movement external to their bodies. This "stage shaking" is performed in two different rhythms, with the second rhythm an isochronous beat that matches the beat of the coinciding vocalizations. Our results provide evidence that stage shaking is an integral, and thus likely functional, component of male Albert's lyrebird sexual displays and so highlight an intriguing but poorly understood facet of complex communication.


Asunto(s)
Vocalización Animal , Masculino , Animales , Conducta Sexual Animal , Ambiente , Passeriformes/fisiología , Comunicación Animal
11.
Hum Brain Mapp ; 45(14): e70040, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39394899

ABSTRACT

Growing evidence suggests that conceptual knowledge influences emotion perception, yet the neural mechanisms underlying this effect are not fully understood. Recent studies have shown that brain representations of facial emotion categories in visual-perceptual areas are predicted by conceptual knowledge, but it remains to be seen if auditory regions are similarly affected. Moreover, it is not fully clear whether these conceptual influences operate at a modality-independent level. To address these questions, we conducted a functional magnetic resonance imaging study presenting participants with both facial and vocal emotional stimuli. This dual-modality approach allowed us to investigate effects on both modality-specific and modality-independent brain regions. Using univariate and representational similarity analyses, we found that brain representations in both visual (middle and lateral occipital cortices) and auditory (superior temporal gyrus) regions were predicted by conceptual understanding of emotions for faces and voices, respectively. Additionally, we discovered that conceptual knowledge also influenced supra-modal representations in the superior temporal sulcus. Dynamic causal modeling revealed a brain network showing both bottom-up and top-down flows, suggesting a complex interplay of modality-specific and modality-independent regions in emotional processing. These findings collectively indicate that the neural representations of emotions in both sensory-perceptual and modality-independent regions are likely shaped by each individual's conceptual knowledge.
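_
Representational similarity analysis (RSA), used here to test whether conceptual knowledge predicts neural representations, compares the pairwise dissimilarity structure of activity patterns with that of a model built from conceptual measures. The sketch below is a generic RSA illustration, not the authors' code; the pattern matrices are random placeholders standing in for emotion-by-voxel data and conceptual feature ratings.

```python
# Generic RSA sketch: correlate a neural RDM with a conceptual-model RDM (placeholder data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
neural_patterns = rng.normal(size=(6, 500))   # 6 emotion categories x 500 voxels (hypothetical)
concept_ratings = rng.normal(size=(6, 10))    # 6 categories x 10 conceptual features (hypothetical)

neural_rdm = pdist(neural_patterns, metric="correlation")   # condensed dissimilarity matrix
model_rdm = pdist(concept_ratings, metric="correlation")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho={rho:.2f}, p={p:.3f}")
```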


Asunto(s)
Mapeo Encefálico , Emociones , Imagen por Resonancia Magnética , Humanos , Emociones/fisiología , Femenino , Masculino , Adulto Joven , Adulto , Reconocimiento Facial/fisiología , Percepción Auditiva/fisiología , Encéfalo/fisiología , Encéfalo/diagnóstico por imagen , Formación de Concepto/fisiología , Expresión Facial , Percepción Visual/fisiología
12.
Hum Brain Mapp ; 45(10): e26724, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39001584

ABSTRACT

Music is ubiquitous, both in its instrumental and vocal forms. While speech perception at birth has been at the core of an extensive corpus of research, the origins of the ability to discriminate instrumental or vocal melodies are still not well investigated. In previous studies comparing vocal and musical perception, the vocal stimuli were mainly related to speaking, including language, and not to the non-language singing voice. In the present study, to better compare a melodic instrumental line with the voice, we used singing as the comparison stimulus, reducing the dissimilarities between the two stimuli as much as possible and separating language perception from vocal musical perception. Forty-five newborns were scanned, 10 full-term infants and 35 preterm infants at term-equivalent age (mean gestational age at test = 40.17 weeks, SD = 0.44), using functional magnetic resonance imaging while listening to five melodies played by a musical instrument (flute) or sung by a female voice. To examine dynamic task-based effective connectivity, we employed a psychophysiological interaction of co-activation patterns (PPI-CAPs) analysis, using the auditory cortices as the seed region, to investigate moment-to-moment changes in task-driven modulation of cortical activity during the fMRI task. Our findings reveal condition-specific, dynamically occurring patterns of co-activation (PPI-CAPs). During the vocal condition, the auditory cortex co-activates with the sensorimotor and salience networks, while during the instrumental condition it co-activates with the visual cortex and the superior frontal cortex. Our results show that the vocal stimulus elicits sensorimotor aspects of auditory perception and is processed as a more salient stimulus, while the instrumental condition activates higher-order cognitive and visuo-spatial networks. Common neural signatures for both auditory stimuli were found in the precuneus and posterior cingulate gyrus. Finally, this study adds knowledge on the dynamic brain connectivity underlying newborns' capability for early and specialized auditory processing, highlighting the relevance of dynamic approaches to studying brain function in newborn populations.


Asunto(s)
Percepción Auditiva , Imagen por Resonancia Magnética , Música , Humanos , Femenino , Masculino , Percepción Auditiva/fisiología , Recién Nacido , Canto/fisiología , Recien Nacido Prematuro/fisiología , Mapeo Encefálico , Estimulación Acústica , Encéfalo/fisiología , Encéfalo/diagnóstico por imagen , Voz/fisiología
13.
Biochem Biophys Res Commun ; 732: 150401, 2024 Nov 05.
Article in English | MEDLINE | ID: mdl-39033554

ABSTRACT

The pathophysiology of laryngopharyngeal reflux (LPR) and its impact on the vocal fold is not well understood, but may involve acid damage to vocal fold barrier functions. Two different components encompass vocal fold barrier function: the mucus barrier and tight junctions. Mucus retained on epithelial microprojections protects the inside of the vocal fold by neutralizing acidic damage. Tight junctions control permeability between cells. Here we developed an in vitro experimental system to evaluate acidic injury and repair of vocal fold barrier functions. We first established an in vitro model of rat vocal fold epithelium that could survive at least one week after barrier function maturation. The model enabled repeated evaluation of the course of vocal fold repair processes. Then, an injury experiment was conducted in which vocal fold cells were exposed to a 5-min treatment with acidic pepsin that injured tight junctions and cell surface microprojections. Both of them healed within one day of injury. Comparing vocal fold cells treated with acid alone with cells treated with acidic pepsin showed that acidic pepsin had a stronger effect on intercellular permeability than acid alone, whereas pepsin had little effect on microprojections. This result suggests that the proteolytic action of pepsin has a larger effect on protein-based tight junctions than on phospholipids in microprojections. This experimental system could contribute to a better understanding of vocal fold repair processes after chemical or physical injuries, as well as voice problems due to LPR pathogenesis.


Asunto(s)
Pepsina A , Uniones Estrechas , Pliegues Vocales , Animales , Pepsina A/metabolismo , Pepsina A/farmacología , Pliegues Vocales/efectos de los fármacos , Pliegues Vocales/patología , Pliegues Vocales/metabolismo , Pliegues Vocales/lesiones , Ratas , Uniones Estrechas/metabolismo , Uniones Estrechas/efectos de los fármacos , Ratas Sprague-Dawley , Masculino , Reflujo Laringofaríngeo/metabolismo , Reflujo Laringofaríngeo/tratamiento farmacológico , Reflujo Laringofaríngeo/patología , Concentración de Iones de Hidrógeno
14.
Proc Biol Sci ; 291(2029): 20240659, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39163980

ABSTRACT

Species worldwide are experiencing anthropogenic environmental change, and the long-term impacts on animal cultural traditions such as vocal dialects are often unknown. Our prior studies of the yellow-naped amazon (Amazona auropalliata) revealed stable vocal dialects over an 11-year period (1994-2005), with modest shifts in geographic boundaries and acoustic structure of contact calls. Here, we examined whether yellow-naped amazons maintained stable dialects over the subsequent 11-year time span from 2005 to 2016, culminating in 22 years of study. Over this same period, this species suffered a dramatic decrease in population size that prompted two successive uplists in IUCN status, from vulnerable to critically endangered. In this most recent 11-year time span, we found evidence of geographic shifts in call types, manifesting in more bilingual sites and introgression across the formerly distinct North-South acoustic boundary. We also found greater evidence of acoustic drift, in the form of new emerging call types and greater acoustic variation overall. These results suggest cultural traditions such as dialects may change in response to demographic and environmental conditions, with broad implications for threatened species.


Asunto(s)
Amazona , Vocalización Animal , Animales , Amazona/fisiología , Especies en Peligro de Extinción , Densidad de Población , Conservación de los Recursos Naturales
15.
BMC Neurosci ; 25(1): 31, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965498

ABSTRACT

BACKGROUND: Most vocal learning species exhibit an early critical period during which their vocal control neural circuitry facilitates the acquisition of new vocalizations. Some taxa, most notably humans and parrots, retain some degree of neurobehavioral plasticity throughout adulthood, but both the extent of this plasticity and the neurogenetic mechanisms underlying it remain unclear. Differential expression of the transcription factor FoxP2 in both songbird and parrot vocal control nuclei has been identified previously as a key pattern facilitating vocal learning. We hypothesize that the resilience of vocal learning to cognitive decline in open-ended learners will be reflected in an absence of age-related changes in neural FoxP2 expression. We tested this hypothesis in the budgerigar (Melopsittacus undulatus), a small gregarious parrot in which adults converge on shared call types in response to shifts in group membership. We formed novel flocks of 4 previously unfamiliar males belonging to the same age class, either "young adult" (6 mo to 1 year) or "older adult" (≥ 3 years), and then collected audio recordings over a 20-day learning period to assess vocal learning ability. Following behavioral recording, immunohistochemistry was performed on collected neural tissue to measure FoxP2 protein expression in a parrot vocal learning center, the magnocellular nucleus of the medial striatum (MMSt), and its adjacent striatum. RESULTS: Although older adults show lower vocal diversity (i.e., repertoire size) and higher absolute levels of FoxP2 in the MMSt than young adults, we find similarly persistent downregulation of FoxP2 and equivalent vocal plasticity and vocal convergence in the two age cohorts. No relationship between individual variation in vocal learning measures and FoxP2 expression was detected. CONCLUSIONS: We find neural evidence to support persistent vocal learning in the budgerigar, suggesting resilience to aging in the open-ended learning program of this species. The lack of a significant relationship between FoxP2 expression and individual variability in vocal learning performance suggests that other neurogenetic mechanisms could also regulate this complex behavior.


Asunto(s)
Envejecimiento , Factores de Transcripción Forkhead , Aprendizaje , Vocalización Animal , Animales , Factores de Transcripción Forkhead/metabolismo , Factores de Transcripción Forkhead/genética , Vocalización Animal/fisiología , Masculino , Envejecimiento/fisiología , Envejecimiento/metabolismo , Aprendizaje/fisiología , Melopsittacus/fisiología , Neuronas/metabolismo , Neuronas/fisiología
16.
BMC Neurosci ; 25(1): 48, 2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39367300

ABSTRACT

BACKGROUND: Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates to investigate vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning. RESULTS: We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcasted only novel vocalizations of the two vocalization types to test whether seals could generalize from the trained set of stimuli to only novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning through switching the context from former visual to novel auditory cues, vocal matching and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning. CONCLUSIONS: It has been suggested that harbor seals are vocal learners; however, to date, these observations had not been confirmed in controlled experiments. Here, through three experiments, we could show that harbor seals are capable of both vocal usage and comprehension learning.


Asunto(s)
Comprensión , Aprendizaje , Phoca , Vocalización Animal , Animales , Phoca/fisiología , Vocalización Animal/fisiología , Aprendizaje/fisiología , Comprensión/fisiología , Masculino , Estimulación Acústica , Femenino , Percepción Auditiva/fisiología , Señales (Psicología)
17.
Strahlenther Onkol ; 200(5): 418-424, 2024 May.
Article in English | MEDLINE | ID: mdl-38488899

ABSTRACT

PURPOSE: This study aimed to assess the margin for the planning target volume (PTV) using the Van Herk formula. We then validated the proposed margin by real-time magnetic resonance imaging (MRI). METHODS: An analysis of cone-beam computed tomography (CBCT) data from early glottic cancer patients was performed to evaluate organ motion. Deformed clinical target volumes (CTV) after rigid registration were acquired using the Velocity program (Varian Medical Systems, Palo Alto, CA, USA). Systematic (Σ) and random errors (σ) were evaluated. The margin for the PTV was defined as 2.5 Σ + 0.7 σ according to the Van Herk formula. To validate this margin, we recruited healthy volunteers. Sagittal real-time cine MRI was conducted using the ViewRay system (ViewRay Inc., Oakwood Village, OH, USA). Within the obtained sagittal images, the vocal cord was delineated. The movement of the vocal cord was summed and considered as the internal target volume (ITV). We then assessed the degree of overlap between the ITV and the PTV (vocal cord plus margins) by calculating the volume overlap ratio, represented as (ITV∩PTV)/ITV. RESULTS: CBCTs of 17 early glottic cancer patients were analyzed. Σ and σ were 0.55 and 0.57 mm for left-right (LR), 0.70 and 0.60 mm for anterior-posterior (AP), and 1.84 and 1.04 mm for superior-inferior (SI), respectively. The calculated margin was 1.8 mm (LR), 2.2 mm (AP), and 5.3 mm (SI). Four healthy volunteers participated in the validation. A margin of 3 mm (AP) and 5 mm (SI) was applied to the vocal cord as the PTV. The average volume overlap ratio between ITV and PTV was 0.92 (range 0.85-0.99) without swallowing and 0.77 (range 0.70-0.88) with swallowing. CONCLUSION: By evaluating organ motion using CBCT, the margins were 1.8 mm (LR), 2.2 mm (AP), and 5.3 mm (SI). The margins derived from CBCT agreed well with real-time cine MRI. Given that swallowing during radiotherapy can result in a substantial displacement, it is crucial to consider strategies aimed at minimizing swallowing and related motion.
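_
The reported margins follow directly from the Van Herk recipe and the Σ/σ values given in the abstract, and the validation metric is a simple volume ratio. A short worked sketch reproducing those numbers:

```python
# Reproducing the reported PTV margins from the Van Herk recipe (margin = 2.5*Sigma + 0.7*sigma)
# and illustrating the overlap ratio (ITV∩PTV)/ITV used for validation.
def van_herk_margin(systematic_mm, random_mm):
    return 2.5 * systematic_mm + 0.7 * random_mm

errors_mm = {"LR": (0.55, 0.57), "AP": (0.70, 0.60), "SI": (1.84, 1.04)}
for axis, (Sigma, sigma) in errors_mm.items():
    print(f"{axis}: {van_herk_margin(Sigma, sigma):.1f} mm")
# Prints LR: 1.8 mm, AP: 2.2 mm, SI: 5.3 mm, matching the reported margins.

def overlap_ratio(itv_and_ptv_cc, itv_cc):
    """Fraction of the ITV covered by the PTV."""
    return itv_and_ptv_cc / itv_cc
```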


Asunto(s)
Tomografía Computarizada de Haz Cónico , Glotis , Neoplasias Laríngeas , Imagen por Resonancia Cinemagnética , Humanos , Tomografía Computarizada de Haz Cónico/métodos , Imagen por Resonancia Cinemagnética/métodos , Glotis/diagnóstico por imagen , Masculino , Neoplasias Laríngeas/diagnóstico por imagen , Neoplasias Laríngeas/radioterapia , Persona de Mediana Edad , Femenino , Adulto , Anciano , Movimientos de los Órganos , Sistemas de Computación , Planificación de la Radioterapia Asistida por Computador/métodos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
18.
Cerebellum ; 23(4): 1490-1497, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38285133

ABSTRACT

Dysarthria is disabling in persons with degenerative ataxia. There is limited evidence for speech therapy interventions. In this pilot study, we used the Voice trainer app, which was originally developed for patients with Parkinson's disease, as a feedback tool for vocal control. We hypothesized that patients with ataxic dysarthria would benefit from the Voice trainer app to better control their loudness and pitch, resulting in a lower speaking rate and better intelligibility. This intervention study consisted of five therapy sessions of 30 min within 3 weeks using the principles of the Pitch Limiting Voice Treatment. Patients received real-time visual feedback on loudness and pitch during the exercises. In addition, they were encouraged to practice at home or to use the Voice trainer in daily life. We used observer-rated and patient-rated outcome measures. The primary outcome measure was intelligibility, as measured by the Dutch sentence intelligibility test. Twenty-one out of 25 included patients with degenerative ataxia completed the therapy. We found no statistically significant improvements in intelligibility (p = .56). However, after the intervention, patients spoke more slowly (p = .03) and the pause durations were longer (p < .001). The patients were satisfied with using the app. At the group level, we found no evidence for an effect of the Voice trainer app on intelligibility in degenerative ataxia. Because of the heterogeneity of ataxic dysarthria, a tailor-made rather than a generic intervention seems warranted.


Asunto(s)
Disartria , Aplicaciones Móviles , Entrenamiento de la Voz , Humanos , Proyectos Piloto , Masculino , Femenino , Persona de Mediana Edad , Anciano , Disartria/terapia , Disartria/rehabilitación , Adulto , Logopedia/métodos , Inteligibilidad del Habla/fisiología , Resultado del Tratamiento
19.
Article in English | MEDLINE | ID: mdl-38733407

ABSTRACT

Auditory streaming underlies a receiver's ability to organize complex mixtures of auditory input into distinct perceptual "streams" that represent different sound sources in the environment. During auditory streaming, sounds produced by the same source are integrated through time into a single, coherent auditory stream that is perceptually segregated from other concurrent sounds. Based on human psychoacoustic studies, one hypothesis regarding auditory streaming is that any sufficiently salient perceptual difference may lead to stream segregation. Here, we used the eastern grey treefrog, Hyla versicolor, to test this hypothesis in the context of vocal communication in a non-human animal. In this system, females choose their mate based on perceiving species-specific features of a male's pulsatile advertisement calls in social environments (choruses) characterized by mixtures of overlapping vocalizations. We employed an experimental paradigm from human psychoacoustics to design interleaved pulsatile sequences (ABAB…) that mimicked key features of the species' advertisement call, and in which alternating pulses differed in pulse rise time, which is a robust species recognition cue in eastern grey treefrogs. Using phonotaxis assays, we found no evidence that perceptually salient differences in pulse rise time promoted the segregation of interleaved pulse sequences into distinct auditory streams. These results do not support the hypothesis that any perceptually salient acoustic difference can be exploited as a cue for stream segregation in all species. We discuss these findings in the context of cues used for species recognition and auditory streaming.
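_
The stimuli described here are interleaved ABAB pulse sequences in which alternating pulses differ only in rise time. The sketch below shows how such a sequence could be synthesized; the pulse duration, inter-pulse interval, carrier frequency, and rise times are placeholder values, not the study's exact stimulus parameters.

```python
# Synthesizing an interleaved ABAB pulse sequence with alternating rise times (placeholder values).
import numpy as np

def make_pulse(rise_ms, dur_ms=10.0, fall_ms=2.0, carrier_hz=2200.0, sr=44100):
    n = int(sr * dur_ms / 1000)
    t = np.arange(n) / sr
    rise = np.minimum(t / (rise_ms / 1000), 1.0)                     # linear onset ramp
    fall = np.minimum((dur_ms / 1000 - t) / (fall_ms / 1000), 1.0)   # linear offset ramp
    return rise * fall * np.sin(2 * np.pi * carrier_hz * t)

def interleaved_sequence(n_pairs=10, ipi_ms=25.0, sr=44100):
    gap = np.zeros(int(sr * ipi_ms / 1000))
    pieces = []
    for _ in range(n_pairs):
        pieces += [make_pulse(rise_ms=1.0), gap,    # "A" pulses: fast rise
                   make_pulse(rise_ms=8.0), gap]    # "B" pulses: slow rise
    return np.concatenate(pieces)

stimulus = interleaved_sequence()
```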

20.
J Exp Biol ; 2024 Oct 18.
Article in English | MEDLINE | ID: mdl-39422211

ABSTRACT

While birds' impressive singing abilities are made possible by the syrinx, the upper vocal system (i.e., trachea, larynx, and beak) could also play a role in sound filtration. Yet, we still lack a clear understanding of the range of elongation this system can undertake, especially along the trachea. Here, we used biplanar cineradiography and X-ray Reconstruction of Moving Morphology (XROMM) to record cadavers of 15 bird species from 9 different orders while an operator moved each bird's head in different directions. In all studied species, we found elongation of the trachea to be correlated with neck extension, and significantly greater (18% to 48% over the whole motion; 1.4% to 15.7% for singing positions) than previously reported in a live singing bird (3%). This elongation or compression was not always homogeneous along the entire length of the trachea. Some specimens showed increased lengthening in the rostral part and others in both the rostral and caudal parts of the vocal tract. The diversity of elongation patterns shows that tracheal elongation is more complex than previously thought. Since tracheal lengthening affects sound frequencies, our results contribute to our understanding of the mechanisms involved in complex communication signals, one of the amazing traits we share with birds.
