1.
Commun Psychol ; 2(1): 65, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39242947

ABSTRACT

Human nonverbal vocalizations such as screams and cries often reflect their evolved functions. Although the universality of these putatively primordial vocal signals and their phylogenetic roots in animal calls suggest a strong reflexive foundation, many of the emotional vocalizations that humans produce are under voluntary control. This suggests that, like speech, volitional vocalizations may require auditory input to develop typically. Here, we acoustically analyzed hundreds of volitional vocalizations produced by profoundly deaf adults and typically hearing controls. We show that deaf adults produce unconventional and homogeneous vocalizations of aggression and pain that are unusually high-pitched and unarticulated, and that contain extremely few harsh-sounding nonlinear phenomena compared with those of controls. In contrast, fear vocalizations of deaf adults are relatively acoustically typical. In four lab experiments involving a range of perception tasks with 444 participants, listeners were less accurate in identifying the intended emotions of vocalizations produced by deaf vocalizers than by controls, perceived their vocalizations as less authentic, and reliably detected deafness. Vocalizations of congenitally deaf adults with no auditory experience were the most atypical, suggesting additive effects of auditory deprivation. Vocal learning in humans may thus be required not only for speech but also to acquire the full repertoire of volitional non-linguistic vocalizations.

2.
Sci Total Environ ; 949: 174868, 2024 Nov 01.
Article in English | MEDLINE | ID: mdl-39034006

ABSTRACT

Passive Acoustic Monitoring (PAM), which uses autonomous recording units to study wildlife behaviour and distribution, often requires handling large acoustic datasets collected over extended periods. While these data offer invaluable insights into wildlife, their analysis can be complicated by geophonic sources. A major obstacle to detecting target sounds is wind-induced noise, which can lead to false positives (energy peaks from wind gusts misclassified as biological sounds) or false negatives (wind noise masking biological sounds). Acoustic data dominated by wind noise make the analysis of vocal activity unreliable, compromising the detection of target sounds and, consequently, the interpretation of the results. Our work introduces a straightforward approach for detecting wind-affected recordings using a pre-trained convolutional neural network, facilitating the identification of wind-compromised data. We consider this pre-processing step crucial for the reliable use of PAM data. We implemented it by leveraging YAMNet, a deep learning model for sound classification. We evaluated the ability of YAMNet, used as-is, to detect wind-induced noise, and tested its performance in a transfer learning scenario using our annotated data from the Stony Point Penguin Colony in South Africa. While YAMNet used as-is achieved a precision of 0.71 and a recall of 0.66, both metrics improved markedly after training on our annotated dataset, reaching a precision of 0.91 and a recall of 0.92, a relative increase of >28 %. Our study demonstrates the promising application of YAMNet in bioacoustics and ecoacoustics, addressing the need for wind-noise-free acoustic data. We release open-access code that, combined with YAMNet's efficiency and strong performance, can be run on standard laptops by a broad user base.


Subject(s)
Environmental Monitoring, Neural Networks (Computer), Wind, Environmental Monitoring/methods, Acoustics, South Africa, Noise, Animals
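The transfer learning gain reported in the abstract can be sanity-checked with a few lines. The confusion counts below are invented to reproduce the fine-tuned metrics; only the precision/recall values themselves (0.71/0.66 as-is, 0.91/0.92 after training) come from the abstract.

```python
def precision(tp, fp):
    """Fraction of flagged recordings that truly contain wind noise."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of wind-affected recordings that get flagged."""
    return tp / (tp + fn)

def relative_increase(before, after):
    return (after - before) / before

# Hypothetical confusion counts chosen to reproduce the fine-tuned metrics
tp, fp, fn = 92, 9, 8
print(round(precision(tp, fp), 2), round(recall(tp, fn), 2))  # 0.91 0.92

# Relative improvement over YAMNet used as-is
print(f"precision: +{relative_increase(0.71, 0.91):.0%}")  # +28%
print(f"recall:    +{relative_increase(0.66, 0.92):.0%}")  # +39%
```

The ">28 %" in the abstract corresponds to the precision gain; by the same arithmetic, recall improved even more.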
3.
iScience ; 27(7): 110375, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39055954

ABSTRACT

Baby cries can convey both static information related to individual identity and dynamic information related to the baby's emotional and physiological state. How do these dimensions interact? Are they transmitted independently, or do they compete against one another? Here we show that the universal acoustic expression of pain in distress cries overrides individual differences, at the expense of identity signaling. Our acoustic analyses show that pain cries, compared with discomfort cries, are characterized by a more unstable source, which interferes with the production of identity cues. Machine learning analyses and psychoacoustic experiments reveal that, while the baby's identity remains encoded in pain cries, it is considerably weaker than in discomfort cries. Our results are consistent with the prediction that the cost of failing to signal distress outweighs the cost of weakening cues to identity.

4.
Proc Natl Acad Sci U S A ; 121(22): e2316818121, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38768360

ABSTRACT

In mammals, offspring vocalizations typically encode information about identity and body condition, allowing parents to limit alloparenting and adjust care. But how do these vocalizations mediate parental behavior in species faced with the problem of rearing not one, but multiple offspring, such as domestic dogs? Comprehensive acoustic analyses of 4,400 whines recorded from 220 Beagle puppies in 40 litters revealed litter and individual (within litter) differences in call acoustic structure. By then playing resynthesized whines to mothers, we showed that they provided more care to their litters, and were more likely to carry the emitting loudspeaker to the nest, in response to whine variants derived from their own puppies than from strangers. Importantly, care provisioning was attenuated by experimentally moving the fundamental frequency (fo, perceived as pitch) of their own puppies' whines outside their litter-specific range. Within most litters, we found a negative relationship between puppies' whine fo and body weight. Consistent with this, playbacks showed that maternal care was stronger in response to high-pitched whine variants simulating relatively small offspring within their own litter's range compared to lower-pitched variants simulating larger offspring. We thus show that maternal care in a litter-rearing species relies on a dual assessment of offspring identity and condition, largely based on level-specific inter- and intra-litter variation in offspring call fo. This dual encoding system highlights how, even in a long-domesticated species, vocalizations reflect selective pressures to meet species-specific needs. Comparative work should now investigate whether similar communication systems have convergently evolved in other litter-rearing species.


Subject(s)
Maternal Behavior, Animal Vocalization, Animals, Dogs, Maternal Behavior/physiology, Animal Vocalization/physiology, Female, Body Weight
5.
J Exp Psychol Gen ; 153(2): 511-530, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38010781

ABSTRACT

Across many species, a major function of vocal communication is to convey formidability, with low voice frequencies traditionally considered the main vehicle for projecting large size and aggression. Vocal loudness is often ignored, yet it might explain some puzzling exceptions to this frequency code. Here we demonstrate, through acoustic analyses of over 3,000 human vocalizations and four perceptual experiments, that vocalizers produce low frequencies when attempting to sound large, but loudness is prioritized for displays of strength and aggression. Our results show that, although being loud is effective for signaling strength and aggression, it poses a physiological trade-off with low frequencies because a loud voice is achieved by elevating pitch and opening the mouth wide into a-like vowels. This may explain why aggressive vocalizations are often high-pitched and why open vowels are considered "large" in sound symbolism despite their high first formant. Callers often compensate by adding vocal harshness (nonlinear vocal phenomena) to undesirably high-pitched loud vocalizations, but a combination of low and loud remains an honest predictor of both perceived and actual physical formidability. The proposed notion of a loudness-frequency trade-off thus adds a new dimension to the widely accepted frequency code and requires a fundamental rethinking of the evolutionary forces shaping the form of acoustic signals. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Voice, Humans, Voice Quality, Aggression, Communication, Sound
6.
Behav Res Methods ; 56(6): 5588-5604, 2024 09.
Article in English | MEDLINE | ID: mdl-38158551

ABSTRACT

Formants (vocal tract resonances) are increasingly analyzed not only by phoneticians in speech but also by behavioral scientists studying diverse phenomena such as acoustic size exaggeration and articulatory abilities of non-human animals. This often involves estimating vocal tract length acoustically and producing scale-invariant representations of formant patterns. We present a theoretical framework and practical tools for carrying out this work, including open-source software solutions included in R packages soundgen and phonTools. Automatic formant measurement with linear predictive coding is error-prone, but formant_app provides an integrated environment for formant annotation and correction with visual and auditory feedback. Once measured, formants can be normalized using a single recording (intrinsic methods) or multiple recordings from the same individual (extrinsic methods). Intrinsic speaker normalization can be as simple as taking formant ratios and calculating the geometric mean as a measure of overall scale. The regression method implemented in the function estimateVTL calculates the apparent vocal tract length assuming a single-tube model, while its residuals provide a scale-invariant vowel space based on how far each formant deviates from equal spacing (the schwa function). Extrinsic speaker normalization provides more accurate estimates of speaker- and vowel-specific scale factors by pooling information across recordings with simple averaging or mixed models, which we illustrate with example datasets and R code. The take-home messages are to record several calls or vowels per individual, measure at least three or four formants, check formant measurements manually, treat uncertain values as missing, and use the statistical tools best suited to each modeling context.


Subject(s)
Software, Humans, Phonetics, Speech/physiology, Speech Acoustics, Vocal Cords/physiology, Acoustics
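The single-tube regression described in this abstract can be sketched in a few lines. For a uniform tube closed at one end, formants fall at F_n = (2n − 1)·c/(4L), so regressing measured formants on the odd numbers (2n − 1) yields a slope of c/(4L), from which the apparent vocal tract length L follows. This is an illustrative reimplementation of the idea behind soundgen's estimateVTL, not the package's actual code.

```python
SPEED_OF_SOUND = 35000  # cm/s, warm moist air in the vocal tract

def estimate_vtl(formants_hz):
    """Estimate apparent vocal tract length (cm) from measured formants,
    assuming a uniform closed-open tube: F_n = (2n - 1) * c / (4 * L)."""
    x = [2 * n - 1 for n in range(1, len(formants_hz) + 1)]
    # Least-squares slope through the origin: slope = sum(x*y) / sum(x^2)
    slope = sum(xi * fi for xi, fi in zip(x, formants_hz)) / sum(xi ** 2 for xi in x)
    return SPEED_OF_SOUND / (4 * slope)

# A perfect 17.5 cm tube (with c = 35000 cm/s) has formants at 500, 1500, 2500 Hz
print(estimate_vtl([500, 1500, 2500]))  # 17.5
```

The residuals of this fit, i.e., how far each measured formant deviates from the equal-spacing pattern, are what the abstract describes as the scale-invariant vowel space (the schwa function).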
7.
Curr Biol ; 33(23): R1236-R1237, 2023 12 04.
Article in English | MEDLINE | ID: mdl-38052174

ABSTRACT

Cat purring, the unusual, pulsed vibration that epitomizes comfort, enjoys a special status in the world of vocal communication research. Indeed, it has long been flagged as a rare exception to the dominant theory of voice production in mammals. A new study presents histological and biomechanical evidence that purring can occur passively, without needing muscle vibration in the larynx controlled by an independent neural oscillator.


Subject(s)
Larynx, Vocal Cords, Cats, Animals, Vocal Cords/physiology, Larynx/physiology, Vibration, Animal Vocalization, Communication, Phonation, Mammals
8.
iScience ; 26(11): 108204, 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37908309

ABSTRACT

Humans have evolved voluntary control over vocal production for speaking and singing, while preserving the phylogenetically older system of spontaneous nonverbal vocalizations such as laughs and screams. To test for systematic acoustic differences between these vocal domains, we analyzed a broad, cross-cultural corpus representing over 2 h of speech, singing, and nonverbal vocalizations. We show that, while speech is relatively low-pitched and tonal with mostly regular phonation, singing and especially nonverbal vocalizations vary enormously in pitch and often display harsh-sounding, irregular phonation owing to nonlinear phenomena. The evolution of complex supralaryngeal articulatory spectro-temporal modulation has been critical for speech, yet has not significantly constrained laryngeal source modulation. In contrast, articulation is very limited in nonverbal vocalizations, which predominantly contain minimally articulated open vowels and rapid temporal modulation in the roughness range. We infer that vocal source modulation works best for conveying affect, while vocal filter modulation mainly facilitates semantic communication.

9.
Proc Biol Sci ; 290(2008): 20231029, 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37817600

ABSTRACT

Variation in formant frequencies has been shown to affect social interactions and sexual competition in a range of avian species. Yet, the anatomical bases of this variation are poorly understood. Here, we investigated the morphological correlates of formant production in the vocal apparatus of African penguins. We modelled the geometry of the supra-syringeal vocal tract of 20 specimens to generate a population of virtual vocal tracts with varying dimensions. We then estimated the acoustic response of these virtual vocal tracts and extracted the centre frequency of the first four predicted formants. We demonstrate that: (i) variation in the length and cross-sectional area of vocal tracts strongly affects the formant pattern, (ii) the tracheal region determines most of this variation, and (iii) the skeletal size of penguins does not correlate with trachea length and consequently has relatively little effect on formants. We conclude that in African penguins, while variation in vocal tract geometry generates variation in resonant frequencies supporting the discrimination of conspecifics, such variation does not provide information on the emitter's body size. Overall, our findings advance our understanding of the role of formant frequencies in bird vocal communication.


Subject(s)
Spheniscidae, Animals, Spheniscidae/physiology, Animal Vocalization/physiology, Body Size, Acoustics, Communication
10.
Biology (Basel) ; 12(9)2023 08 31.
Article in English | MEDLINE | ID: mdl-37759590

ABSTRACT

Global biodiversity is in rapid decline, and many seabird species have disproportionately poorer conservation statuses than terrestrial birds. A good understanding of population dynamics is necessary for successful conservation efforts, making noninvasive, cost-effective monitoring tools essential. Here, we set out to investigate whether passive acoustic monitoring (PAM) could be used to estimate the number of animals within a set area of an African penguin (Spheniscus demersus) colony in South Africa. We were able to automate the detection of ecstatic display songs (EDSs) in our recordings, facilitating the handling of large datasets. This allowed us to show that calling rate increased with wind speed and humidity but decreased with temperature, and to highlight apparent abundance differences between nesting habitat types. We then showed that the number of EDSs in our recordings correlated positively with the number of callers counted during visual observations, indicating that density can be estimated from calling rate. Our observations suggest that rising temperatures may adversely affect penguin calling behaviour, with potential negative consequences for population dynamics, underscoring the importance of effective conservation measures. Crucially, this study shows that PAM can be used to monitor this endangered species' populations with minimal disturbance.
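The calibration idea in this abstract, that detected song counts can stand in for visual caller counts, amounts to a correlation plus a simple linear fit. The sketch below uses invented data; the abstract reports only that the correlation is positive, not these values or this exact method.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

eds_counts = [12, 30, 45, 60, 80]  # detected EDSs per recording (invented)
callers = [3, 7, 11, 14, 20]       # visually counted callers (invented)

r = pearson_r(eds_counts, callers)
a, b = fit_line(eds_counts, callers)
print(f"r = {r:.2f}; callers ~ {a:.2f}*EDS + {b:.2f}")
```

Once calibrated against visual counts, the fitted line converts automated song detections into an estimate of caller density with no further observer presence.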
