Results 1 - 20 of 51
1.
J Acoust Soc Am ; 155(1): 274-283, 2024 01 01.
Article in English | MEDLINE | ID: mdl-38215217

ABSTRACT

Echolocating bats and dolphins use biosonar to determine target range, but differences in range discrimination thresholds have been reported for the two groups. Whether these differences represent a true difference in their sensory system capability is unknown. Here, the dolphin's range discrimination threshold as a function of absolute range and echo phase was investigated. Using phantom echoes, the dolphins were trained to echo-inspect two simulated targets and indicate the closer target by pressing a paddle. One target was presented at a time, requiring the dolphin to hold the initial range in memory while comparing it to the second target. Range was simulated by manipulating echo delay while the received echo levels, relative to the dolphins' clicks, were held constant. Range discrimination thresholds were determined at seven different ranges from 1.75 to 20 m. In contrast to bats, range discrimination thresholds increased from 4 to 75 cm across the range of distances tested. To investigate the acoustic features used more directly, discrimination thresholds were determined when the echo was given a random phase shift (±180°). Results for the constant-phase versus the random-phase echoes were quantitatively similar, suggesting that dolphins used the envelope of the echo waveform to determine the difference in range.
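In the phantom-echo paradigm described above, simulated range reduces to the two-way travel-time relation between target distance and echo delay. A minimal sketch of that relation, assuming a nominal seawater sound speed (not a value reported in the abstract):

```python
# Two-way travel-time relation used to simulate target range with phantom
# echoes: a click travels out to a target at range R and back, so
# delay = 2 * R / c. The sound speed is a nominal seawater value.

SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal assumption

def echo_delay(range_m: float, c: float = SOUND_SPEED_SEAWATER) -> float:
    """Two-way echo delay (seconds) for a target at range_m meters."""
    return 2.0 * range_m / c

def delay_shift(range_m: float, delta_m: float) -> float:
    """Change in echo delay (seconds) from moving the target by delta_m."""
    return echo_delay(range_m + delta_m) - echo_delay(range_m)
```

Under this relation, the smallest threshold reported (4 cm) corresponds to a delay difference of roughly 53 microseconds.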


Subject(s)
Bottle-Nosed Dolphin, Chiroptera, Echolocation, Animals, Acoustics, Sound Spectrography
2.
J Neurosci ; 41(1): 73-88, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33177068

ABSTRACT

The capacity for sensory systems to encode relevant information that is invariant to many stimulus changes is central to normal, real-world, cognitive function. This invariance is thought to be reflected in the complex spatiotemporal activity patterns of neural populations, but our understanding of population-level representational invariance remains coarse. Applied topology is a promising tool to discover invariant structure in large datasets. Here, we use topological techniques to characterize and compare the spatiotemporal pattern of coactive spiking within populations of simultaneously recorded neurons in the secondary auditory region caudal medial neostriatum of European starlings (Sturnus vulgaris). We show that the pattern of population spike train coactivity carries stimulus-specific structure that is not reducible to that of individual neurons. We then introduce a topology-based similarity measure for population coactivity that is sensitive to invariant stimulus structure and show that this measure captures invariant neural representations tied to the learned relationships between natural vocalizations. This demonstrates one mechanism whereby emergent stimulus properties can be encoded in population activity, and shows the potential of applied topology for understanding invariant representations in neural populations.

SIGNIFICANCE STATEMENT: Information in neural populations is carried by the temporal patterns of spikes. We applied novel mathematical tools from the field of algebraic topology to quantify the structure of these temporal patterns. We found that, in a secondary auditory region of a songbird, these patterns reflected invariant information about a learned stimulus relationship. These results demonstrate that topology provides a novel approach for characterizing neural responses that is sensitive to invariant relationships that are critical for the perception of natural stimuli.


Subject(s)
Auditory Cortex/physiology, Electrophysiological Phenomena, Songbirds/physiology, Starlings/physiology, Acoustic Stimulation, Algorithms, Animals, Auditory Pathways/cytology, Auditory Pathways/physiology, Conditioning, Operant, Evoked Potentials, Auditory/physiology, Female, Male, Models, Neurological, Neostriatum/cytology, Neostriatum/physiology, Neurons/physiology, Vocalization, Animal/physiology
3.
Proc Biol Sci ; 289(1970): 20212657, 2022 03 09.
Article in English | MEDLINE | ID: mdl-35259983

ABSTRACT

To convey meaning, human language relies on hierarchically organized, long-range relationships spanning words, phrases, sentences and discourse. As the distances between elements (e.g. phonemes, characters, words) in human language sequences increase, the strength of the long-range relationships between those elements decays following a power law. This power-law relationship has been attributed variously to long-range sequential organization present in human language syntax, semantics and discourse structure. However, non-linguistic behaviours in numerous phylogenetically distant species, ranging from humpback whale song to fruit fly motility, also demonstrate similar long-range statistical dependencies. Therefore, we hypothesized that long-range statistical dependencies in human speech may occur independently of linguistic structure. To test this hypothesis, we measured long-range dependencies in several speech corpora from children (aged 6 months-12 years). We find that adult-like power-law statistical dependencies are present in human vocalizations at the earliest detectable ages, prior to the production of complex linguistic structure. These linguistic structures cannot, therefore, be the sole cause of long-range statistical dependencies in language.
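The core analysis in this study is fitting a power law to a dependence statistic measured at increasing element separations. A minimal sketch, using Pearson correlation as an illustrative stand-in for the dependence measure (such studies typically use mutual information between elements):

```python
# Measure a dependence statistic between sequence elements d apart, then
# fit decay(d) ~ d**alpha by linear regression in log-log space.
import numpy as np

def dependence_at_distance(seq: np.ndarray, d: int) -> float:
    """Absolute Pearson correlation between elements separated by d
    (a stand-in for the mutual-information statistic typically used)."""
    x, y = seq[:-d], seq[d:]
    return abs(np.corrcoef(x, y)[0, 1])

def fit_power_law_exponent(distances, values) -> float:
    """Slope of log(value) vs log(distance): alpha in value ~ d**alpha."""
    slope, _ = np.polyfit(np.log(distances), np.log(values), 1)
    return slope
```

A power-law decay appears as a straight line in log-log coordinates, which is what distinguishes it from the exponential decay expected from purely finite-state (Markovian) sequence structure.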


Subject(s)
Language Development, Language, Animals, Drosophila, Humans, Linguistics, Semantics, Speech
4.
PLoS Comput Biol ; 17(9): e1008100, 2021 09.
Article in English | MEDLINE | ID: mdl-34555020

ABSTRACT

Neuronal activity within the premotor region HVC is tightly synchronized to, and crucial for, the articulate production of learned song in birds. Characterizations of this neural activity detail patterns of sequential bursting in small, carefully identified subsets of neurons in the HVC population. The dynamics of HVC are well described by these characterizations, but have not been verified beyond this scale of measurement. There is a rich history of using local field potentials (LFP) to extract information about behavior that extends beyond the contribution of individual cells. These signals have the advantage of being stable over longer periods of time, and they have been used to study and decode human speech and other complex motor behaviors. Here we characterize LFP signals presumptively from the HVC of freely behaving male zebra finches during song production to determine if population activity may yield similar insights into the mechanisms underlying complex motor-vocal behavior. Following an initial observation that structured changes in the LFP were distinct to all vocalizations during song, we show that it is possible to extract time-varying features from multiple frequency bands to decode the identity of specific vocalization elements (syllables) and to predict their temporal onsets within the motif. This demonstrates the utility of LFP for studying vocal behavior in songbirds. Surprisingly, the time frequency structure of HVC LFP is qualitatively similar to well-established oscillations found in both human and non-human mammalian motor areas. This physiological similarity, despite distinct anatomical structures, may give insight into common computational principles for learning and/or generating complex motor-vocal behaviors.


Asunto(s)
Potenciales de Acción/fisiología , Pinzones/fisiología , Corteza Motora/fisiología , Vocalización Animal/fisiología , Animales , Masculino
5.
Neural Comput ; 33(11): 2881-2907, 2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34474477

ABSTRACT

UMAP is a nonparametric graph-based dimensionality reduction algorithm using applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data. The UMAP algorithm consists of two steps: (1) computing a graphical representation of a data set (fuzzy simplicial complex) and (2) through stochastic gradient descent, optimizing a low-dimensional embedding of the graph. Here, we extend the second step of UMAP to a parametric optimization over neural network weights, learning a parametric relationship between data and embedding. We first demonstrate that parametric UMAP performs comparably to its nonparametric counterpart while conferring the benefit of a learned parametric mapping (e.g., fast online embeddings for new data). We then explore UMAP as a regularization, constraining the latent distribution of autoencoders, parametrically varying global structure preservation, and improving classifier accuracy for semisupervised learning by capturing structure in unlabeled data.
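The nonparametric/parametric distinction above can be made concrete with a toy example: instead of optimizing embedding coordinates directly for a fixed data set, learn a map that can embed new data after training. The linear least-squares fit below is an illustrative stand-in, not the UMAP objective; Parametric UMAP replaces the linear map with a neural network and the squared error with the UMAP cross-entropy loss:

```python
# Toy illustration of the parametric idea: learn f(x) = x @ W against a
# fixed 2-D target embedding, then embed new points with no re-optimization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                        # high-dimensional data
target = X[:, :2] + 0.01 * rng.normal(size=(200, 2))  # stand-in 2-D embedding

# Fit the parametric map by least squares (stand-in for SGD on the UMAP loss).
W, *_ = np.linalg.lstsq(X, target, rcond=None)

def embed(x_new: np.ndarray) -> np.ndarray:
    """Embed new data through the learned map -- the 'fast online
    embeddings for new data' benefit mentioned in the abstract."""
    return x_new @ W

Y_new = embed(rng.normal(size=(5, 10)))  # shape (5, 2), no re-optimization
```

The umap-learn package ships this idea as `ParametricUMAP`; the sketch here only conveys why a learned mapping, unlike per-point coordinates, generalizes to unseen data.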

6.
PLoS Comput Biol ; 16(10): e1008228, 2020 10.
Article in English | MEDLINE | ID: mdl-33057332

ABSTRACT

Animals produce vocalizations that range in complexity from a single repeated call to hundreds of unique vocal elements patterned in sequences unfolding over hours. Characterizing complex vocalizations can require considerable effort and a deep intuition about each species' vocal behavior. Even with a great deal of experience, human characterizations of animal communication can be affected by human perceptual biases. We present a set of computational methods for projecting animal vocalizations into low dimensional latent representational spaces that are directly learned from the spectrograms of vocal signals. We apply these methods to diverse datasets from over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. Latent projections uncover complex features of data in visually intuitive and quantifiable ways, enabling high-powered comparative analyses of vocal acoustics. We introduce methods for analyzing vocalizations as both discrete sequences and as continuous latent variables. Each method can be used to disentangle complex spectro-temporal structure and observe long-timescale organization in communication.
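The pipeline shape described above — syllable spectrograms flattened into feature vectors and projected into a low-dimensional latent space — can be sketched as follows. PCA stands in here for the learned embeddings (e.g., UMAP or autoencoders) the paper actually uses, and the synthetic tones are illustrative:

```python
# Syllable spectrograms -> flattened feature vectors -> latent projection.
import numpy as np
from scipy.signal import spectrogram

def syllable_spectrogram(waveform, fs=22050, nperseg=256):
    _, _, sxx = spectrogram(waveform, fs=fs, nperseg=nperseg)
    return np.log1p(sxx)  # log compression, typical for audio

def latent_project(specs, n_dims=2):
    """Project flattened spectrograms to n_dims via PCA (SVD) -- a stand-in
    for the learned latent spaces used in the paper."""
    flat = np.stack([s.ravel() for s in specs])
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return flat @ vt[:n_dims].T

# Synthetic "vocalizations": two tone classes should separate in the latent space.
rng = np.random.default_rng(1)
t = np.arange(0, 0.25, 1 / 22050)
specs = [syllable_spectrogram(np.sin(2 * np.pi * f * t)
                              + 0.05 * rng.normal(size=t.size))
         for f in [1000] * 10 + [4000] * 10]
Z = latent_project(specs)  # shape (20, 2)
```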


Subject(s)
Unsupervised Machine Learning, Vocalization, Animal/classification, Vocalization, Animal/physiology, Algorithms, Animals, Chiroptera/physiology, Cluster Analysis, Computational Biology, Databases, Factual, Humans, Mice, Songbirds/physiology, Sound Spectrography, Voice/physiology
7.
Nano Lett ; 19(9): 6244-6254, 2019 09 11.
Article in English | MEDLINE | ID: mdl-31369283

ABSTRACT

The enhanced electrochemical activity of nanostructured materials is readily exploited in energy devices, but their utility in scalable and human-compatible implantable neural interfaces can significantly advance the performance of clinical and research electrodes. We utilize low-temperature selective dealloying to develop scalable and biocompatible one-dimensional platinum nanorod (PtNR) arrays that exhibit superb electrochemical properties at various length scales, stability, and biocompatibility for high-performance neurotechnologies. PtNR arrays record brain activity with cellular resolution from the cortical surfaces of birds and nonhuman primates. Significantly, we demonstrate strong modulation of surface-recorded single-unit activity by auditory stimuli in European starlings, modulation of local field potentials in the visual cortex by light stimuli in a nonhuman primate, and responses to electrical stimulation in mice. PtNRs record behaviorally and physiologically relevant neuronal dynamics from the surface of the brain with high spatiotemporal resolution, paving the way for less invasive brain-machine interfaces.


Subject(s)
Action Potentials, Biocompatible Materials, Brain-Computer Interfaces, Nanotubes, Neurons/metabolism, Platinum, Visual Cortex/physiology, Animals, Electric Stimulation, Electrodes, Macaca mulatta, Male, Mice, Songbirds
8.
Proc Natl Acad Sci U S A ; 113(5): 1441-6, 2016 Feb 02.
Article in English | MEDLINE | ID: mdl-26787894

ABSTRACT

High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.


Subject(s)
Auditory Cortex/physiology, Neurons/physiology, Animals, Auditory Cortex/cytology
9.
Proc Natl Acad Sci U S A ; 113(6): 1666-71, 2016 Feb 09.
Article in English | MEDLINE | ID: mdl-26811447

ABSTRACT

Humans easily recognize "transposed" musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition.


Subject(s)
Pattern Recognition, Physiological/physiology, Pitch Perception/physiology, Sound Spectrography, Sound, Starlings/physiology, Acoustic Stimulation, Animals, Behavior, Animal, Noise
10.
J Acoust Soc Am ; 144(6): 3575, 2018 12.
Article in English | MEDLINE | ID: mdl-30599667

ABSTRACT

The frequency range of hearing is important for assessing the potential impact of anthropogenic noise on marine mammals. Auditory evoked potentials (AEPs) are commonly used to assess toothed whale hearing, but measurement methods vary across researchers and laboratories. In particular, estimates of the upper-frequency limit of hearing (UFL) can vary due to interactions between the unintended spread of spectral energy to frequencies below the desired test frequency and a sharp decline in hearing sensitivity at frequencies near the UFL. To assess the impact of stimulus bandwidth on UFL measurement, AEP hearing tests were conducted in four bottlenose dolphins (Tursiops truncatus) with normal and impaired hearing ranges. Dolphins were tested at frequencies near the UFL and at a frequency 1/2-octave below the UFL, where hearing sensitivity was better (i.e., threshold was lower). Thresholds were measured using sinusoidal amplitude modulated (SAM) tones and tone-bursts of varying bandwidth. Measured thresholds varied inversely as a function of stimulus bandwidth near the UFL with narrow-band tone-bursts approximating thresholds measured using SAM tones. Bandwidth did not impact measured thresholds where hearing was more sensitive, highlighting how stimulus bandwidth and the rate of decline of hearing sensitivity interact to affect measured threshold near the UFL.
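The bandwidth interaction described above hinges on the stimulus types compared: a SAM tone concentrates its energy at the carrier and two sidebands, while a brief tone-burst spreads energy more broadly. A minimal sketch of SAM tone synthesis, with illustrative parameter values (not the study's):

```python
# Sinusoidal amplitude-modulated (SAM) tone: a carrier at fc whose envelope
# is modulated at fm with depth m. Its spectrum has components only at fc
# and fc +/- fm, hence the narrow spectral spread relative to a tone-burst.
import numpy as np

def sam_tone(fc, fm, duration, fs, m=1.0):
    t = np.arange(int(duration * fs)) / fs
    envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# Illustrative values: 100 kHz carrier, 1 kHz modulation, 20 ms, 1 MHz rate.
s = sam_tone(fc=100_000, fm=1_000, duration=0.02, fs=1_000_000)
```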

11.
Eur J Neurosci ; 41(5): 725-33, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25728189

ABSTRACT

Natural acoustic communication signals, such as speech, are typically high-dimensional with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in vivo patch clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e. the same song drives a similar response each time it is presented) and specific (i.e. responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the subthreshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABAergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling.


Subject(s)
Auditory Cortex/physiology, Auditory Threshold, Evoked Potentials, Auditory, Vocalization, Animal, Animals, Auditory Cortex/cytology, GABAergic Neurons/physiology, Starlings, Synaptic Transmission
12.
J Neurophysiol ; 111(6): 1183-9, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24353301

ABSTRACT

Recognition of natural stimuli requires a combination of selectivity and invariance. Classical neurobiological models achieve selectivity and invariance, respectively, by assigning to each cortical neuron either a computation equivalent to the logical "AND" or a computation equivalent to the logical "OR." One powerful OR-like operation is the MAX function, which computes the maximum over input activities. The MAX function is frequently employed in computer vision to achieve invariance and is considered a key operation in the visual cortex. Here we explore the computations for selectivity and invariance in the auditory system of a songbird, using natural stimuli. We ask two related questions: does the MAX operation exist in the auditory system? Is it implemented by specialized "MAX" neurons, as assumed in vision? By analyzing responses of individual neurons to combinations of stimuli, we systematically sample the space of implemented feature recombination functions. Although we frequently observe the MAX function, we show that the same neurons that implement it also readily implement other operations, including the AND-like response. We then show that sensory adaptation, a ubiquitous property of neural circuits, causes transitions between these operations in individual neurons, violating the fixed neuron-to-computation mapping posited in the state-of-the-art object-recognition models. These transitions, however, accord with predictions of neural-circuit models incorporating divisive normalization and variable polynomial nonlinearities at the spike threshold. Because these biophysical properties are not tied to a particular sensory modality but are generic, the flexible neuron-to-computation mapping demonstrated in this study in the auditory system is likely a general property.
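The operations contrasted above can be sketched numerically. The divisive-normalization form is one standard circuit model of this family (parameters below are illustrative, not fitted values from the paper); raising the polynomial exponent moves the same unit from AND-like toward MAX-like behavior, which is the kind of transition the abstract attributes to adaptation:

```python
# OR-like (MAX) vs AND-like feature combination, plus a divisive
# normalization model in which a polynomial nonlinearity p interpolates
# between the two regimes.
import numpy as np

def max_response(r1, r2):
    """OR-like: response to the pair equals the larger single response."""
    return np.maximum(r1, r2)

def and_response(r1, r2, sigma=1.0):
    """AND-like: strong output only when both inputs are active."""
    return (r1 * r2) / (sigma + r1 + r2)

def normalized_response(r1, r2, p=2.0, sigma=1.0):
    """Divisive normalization with polynomial nonlinearity p; as p grows,
    the output approaches max(r1, r2)."""
    return (r1**p + r2**p) / (sigma + r1**(p - 1) + r2**(p - 1))
```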


Subject(s)
Auditory Perception, Neurons/physiology, Pattern Recognition, Physiological, Adaptation, Physiological, Animals, Brain/cytology, Brain/physiology, Female, Male, Models, Neurological, Starlings/physiology
13.
Anim Cogn ; 17(5): 1023-30, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24526277

ABSTRACT

The sequential patterning of complex acoustic elements is a salient feature of bird song and other forms of vocal communication. For European starlings (Sturnus vulgaris), a songbird species, individual vocal recognition is improved when the temporal organization of song components (called motifs) follows the normal patterns of each singer. This sensitivity to natural motif sequences may underlie observations that starlings can also learn more complex, unnatural motif patterns. Alternatively, it has been proposed that the apparent acquisition of abstract motif patterning rules instead reflects idiosyncrasies of the training conditions used in prior experiments. That is, that motif patterns are learned not by recognizing differences in temporal structures between patterns, but by identifying serendipitous features (e.g., acoustical cues) in the small sets of training and testing stimuli used. Here, we investigate this possibility, by asking whether starlings can learn to discriminate between two arbitrary motif patterns, when unique examples of each pattern are presented on every trial. Our results demonstrate that abstract motif patterning rules can be acquired from trial-unique stimuli and suggest that such training leads to better pattern generalization compared with training with much smaller stimulus subsets.


Subject(s)
Learning/physiology, Starlings/physiology, Vocalization, Animal/physiology, Acoustic Stimulation, Animals, Auditory Perception/physiology, Male, Recognition, Psychology/physiology, Time Factors
14.
bioRxiv ; 2024 Mar 03.
Article in English | MEDLINE | ID: mdl-38464215

ABSTRACT

Studies comparing acoustic signals often rely on pixel-wise differences between spectrograms, as in, for example, mean squared error (MSE). Pixel-wise errors are not representative of perceptual sensitivity, however, and such measures can be highly sensitive to small local signal changes that may be imperceptible. In computer vision, high-level visual features extracted with convolutional neural networks (CNN) can be used to calculate the fidelity of computer-generated images. Here, we propose the auditory perceptual distance (APD) metric based on acoustic features extracted with an unsupervised CNN and validated by perceptual behavior. Using complex vocal signals from songbirds, we trained a Siamese CNN on a self-supervised task using spectrograms rescaled to match the auditory frequency sensitivity of European starlings, Sturnus vulgaris. We define APD for any pair of sounds as the cosine distance between corresponding feature vectors extracted by the trained CNN. We show that APD is more robust to temporal and spectral translation than MSE, and captures the sigmoidal shape of typical behavioral psychometric functions over complex acoustic spaces. When fine-tuned using starlings' behavioral judgments of naturalistic song syllables, the APD model yields even more accurate predictions of perceptual sensitivity, discrimination, and categorization on novel complex (high-dimensional) acoustic dimensions, including diverging decisions for identical stimuli following different training conditions. Thus, the APD model outperforms MSE in robustness and perceptual accuracy, and offers tunability to match experience-dependent perceptual biases.

15.
Nat Commun ; 15(1): 677, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263364

ABSTRACT

Spoken language comprehension requires abstraction of linguistic information from speech, but the interaction between auditory and linguistic processing of speech remains poorly understood. Here, we investigate the nature of this abstraction using neural responses recorded intracranially while participants listened to conversational English speech. Capitalizing on multiple, language-specific patterns where phonological and acoustic information diverge, we demonstrate the causal efficacy of the phoneme as a unit of analysis and dissociate the unique contributions of phonemic and spectrographic information to neural responses. Quantitative higher-order response models also reveal that unique contributions of phonological information are carried in the covariance structure of the stimulus-response relationship. This suggests that linguistic abstraction is shaped by neurobiological mechanisms that involve integration across multiple spectro-temporal features and prior phonological information. These results link speech acoustics to phonology and morphosyntax, substantiating predictions about abstractness in linguistic theory and providing evidence for the acoustic features that support that abstraction.


Subject(s)
Language, Speech, Humans, Linguistics, Acoustics, Speech Acoustics
16.
J Neurophysiol ; 109(7): 1690-703, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23303858

ABSTRACT

Sensory systems are dynamic. They must process a wide range of natural signals that facilitate adaptive behaviors in a manner that depends on an organism's constantly changing goals. A full understanding of the sensory physiology that underlies adaptive natural behaviors must therefore account for the activity of sensory systems in light of these behavioral goals. Here we present a novel technique that combines in vivo electrophysiological recording from awake, freely moving songbirds with operant conditioning techniques that allow control over birds' recognition of conspecific song, a widespread natural behavior in songbirds. We show that engaging in a vocal recognition task alters the response properties of neurons in the caudal mesopallium (CM), an avian analog of mammalian auditory cortex, in European starlings. Compared with awake, passive listening, active engagement of subjects in an auditory recognition task results in neurons responding to fewer song stimuli and a decrease in the trial-to-trial variability in their driven firing rates. Mean firing rates also change during active recognition, but not uniformly. Relative to nonengaged listening, active recognition causes increases in the driven firing rates in some neurons, decreases in other neurons, and stimulus-specific changes in other neurons. These changes lead to both an increase in stimulus selectivity and an increase in the information conveyed by the neurons about the animals' behavioral task. This study demonstrates the behavioral dependence of neural responses in the avian auditory forebrain and introduces the starling as a model for real-time monitoring of task-related neural processing of complex auditory objects.


Subject(s)
Auditory Cortex/physiology, Auditory Perception, Neurons/physiology, Prosencephalon/physiology, Singing, Animals, Auditory Cortex/cytology, Conditioning, Operant, Evoked Potentials, Auditory, Female, Male, Starlings
17.
J Neurophysiol ; 109(3): 721-33, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23155175

ABSTRACT

Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/drug effects, Learning, Neural Inhibition, Singing, Animals, Conditioning, Operant, Evoked Potentials, Auditory, Iontophoresis, Neuronal Plasticity, Pyridazines/administration & dosage, Pyridazines/pharmacology, Starlings/physiology
18.
J Neurosci ; 31(7): 2595-606, 2011 Feb 16.
Article in English | MEDLINE | ID: mdl-21325527

ABSTRACT

Many learned behaviors are thought to require the activity of high-level neurons that represent categories of complex signals, such as familiar faces or native speech sounds. How these complex, experience-dependent neural responses emerge within the brain's circuitry is not well understood. The caudomedial mesopallium (CMM), a secondary auditory region in the songbird brain, contains neurons that respond to specific combinations of song components and respond preferentially to the songs that birds have learned to recognize. Here, we examine the transformation of these learned responses across a broader forebrain circuit that includes the caudolateral mesopallium (CLM), an auditory region that provides input to CMM. We recorded extracellular single-unit activity in CLM and CMM in European starlings trained to recognize sets of conspecific songs and compared multiple encoding properties of neurons between these regions. We find that the responses of CMM neurons are more selective between song components, convey more information about song components, and are more variable over repeated components than the responses of CLM neurons. While learning enhances neural encoding of song components in both regions, CMM neurons encode more information about the learned categories associated with songs than do CLM neurons. Collectively, these data suggest that CLM and CMM are part of a functional sensory hierarchy that is modified by learning to yield representations of natural vocal signals that are increasingly informative with respect to behavior.


Subject(s)
Auditory Perception/physiology, Learning/physiology, Nerve Net/physiology, Prosencephalon/physiology, Vocalization, Animal/physiology, Acoustic Stimulation/methods, Action Potentials/physiology, Animals, Neurons/physiology, Probability, Prosencephalon/cytology, Recognition, Psychology/physiology, Starlings, Statistics, Nonparametric
19.
Nature ; 440(7088): 1204-7, 2006 Apr 27.
Article in English | MEDLINE | ID: mdl-16641998

ABSTRACT

Humans regularly produce new utterances that are understood by other members of the same language community. Linguistic theories account for this ability through the use of syntactic rules (or generative grammars) that describe the acceptable structure of utterances. The recursive, hierarchical embedding of language units (for example, words or phrases within shorter sentences) that is part of the ability to construct new utterances minimally requires a 'context-free' grammar that is more complex than the 'finite-state' grammars thought sufficient to specify the structure of all non-human communication signals. Recent hypotheses make the central claim that the capacity for syntactic recursion forms the computational core of a uniquely human language faculty. Here we show that European starlings (Sturnus vulgaris) accurately recognize acoustic patterns defined by a recursive, self-embedding, context-free grammar. They are also able to classify new patterns defined by the grammar and reliably exclude agrammatical patterns. Thus, the capacity to classify sequences from recursive, centre-embedded grammars is not uniquely human. This finding opens a new range of complex syntactic processing mechanisms to physiological investigation.
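The grammar contrast at the heart of this study — with the song motifs of the actual stimuli abstracted here to symbols 'A' and 'B' — can be made concrete with two recognizers. The finite-state pattern (AB)^n is matched by a regular expression, while the centre-embedded pattern A^n B^n requires matching the count of A's to the count of B's, which no finite-state machine can do for unbounded n:

```python
# Recognizers for the two pattern families contrasted in the study.
import re

def is_finite_state_pattern(s: str) -> bool:
    """(AB)^n, n >= 1 -- recognizable by a regular expression."""
    return re.fullmatch(r"(AB)+", s) is not None

def is_context_free_pattern(s: str) -> bool:
    """A^n B^n, n >= 1 -- the count of A's must equal the count of B's."""
    n = len(s) // 2
    return n >= 1 and s == "A" * n + "B" * n
```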


Subject(s)
Animal Communication, Auditory Perception/physiology, Language, Learning/physiology, Starlings/physiology, Acoustic Stimulation, Animals, Humans, Linguistics, Models, Neurological, Semantics, Stochastic Processes
20.
R Soc Open Sci ; 9(9): 220704, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36177196

ABSTRACT

The acoustic structure of birdsong is spectrally and temporally complex. Temporal complexity is often investigated in a syntactic framework focusing on the statistical features of symbolic song sequences. Alternatively, temporal patterns can be investigated in a rhythmic framework that focuses on the relative timing between song elements. Here, we investigate the merits of combining both frameworks by integrating syntactic and rhythmic analyses of Australian pied butcherbird (Cracticus nigrogularis) songs, which exhibit organized syntax and diverse rhythms. We show that rhythms of the pied butcherbird song bouts in our sample are categorically organized and predictable by the song's first-order sequential syntax. These song rhythms remain categorically distributed and strongly associated with the first-order sequential syntax even after controlling for variance in note length, suggesting that the silent intervals between notes induce a rhythmic structure on note sequences. We discuss the implication of syntactic-rhythmic relations as a relevant feature of song complexity with respect to signals such as human speech and music, and advocate for a broader conception of song complexity that takes into account syntax, rhythm, and their interaction with other acoustic and perceptual features.
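The rhythmic framework described above reduces a song bout to note onset times and characterizes the relative timing between elements. A minimal sketch using inter-onset intervals and adjacent-interval ratios, a common device for detecting categorical rhythms (the specific ratio statistic is an assumption of this sketch, not a method quoted from the abstract):

```python
# Rhythm analysis sketch: note onsets -> inter-onset intervals (IOIs) ->
# adjacent-interval ratios, whose clustering reveals categorical rhythms.
import numpy as np

def inter_onset_intervals(onsets):
    """IOIs (seconds) from a sequence of note onset times."""
    return np.diff(np.asarray(onsets, dtype=float))

def interval_ratios(onsets):
    """r_k = IOI_k / (IOI_k + IOI_{k+1}); an isochronous rhythm gives
    r = 0.5, and a 2:1 rhythm gives r = 2/3 or 1/3."""
    ioi = inter_onset_intervals(onsets)
    return ioi[:-1] / (ioi[:-1] + ioi[1:])
```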
