Results 1-20 of 14,790
1.
Cell ; 186(7): 1307-1308, 2023 03 30.
Article in English | MEDLINE | ID: mdl-37001497

ABSTRACT

Plants are not exactly known to be great conversationalists. In this issue of Cell, a new study highlights that when stressed by desiccation or cutting injury, tomato and tobacco plants can produce airborne ultrasonic emissions. These sounds are loud enough to be heard by insects and can be analytically categorized using trained neural networks, pointing to their potential informative value.


Subject(s)
Solanum lycopersicum, Sound, Plants, Hearing, Nicotiana
2.
Cell ; 186(7): 1328-1336.e10, 2023 03 30.
Article in English | MEDLINE | ID: mdl-37001499

ABSTRACT

Stressed plants show altered phenotypes, including changes in color, smell, and shape. Yet, airborne sounds emitted by stressed plants have not been investigated before. Here we show that stressed plants emit airborne sounds that can be recorded from a distance and classified. We recorded ultrasonic sounds emitted by tomato and tobacco plants inside an acoustic chamber, and in a greenhouse, while monitoring the plant's physiological parameters. We developed machine learning models that succeeded in identifying the condition of the plants, including dehydration level and injury, based solely on the emitted sounds. These informative sounds may also be detectable by other organisms. This work opens avenues for understanding plants and their interactions with the environment and may have significant impact on agriculture.


Subject(s)
Plants, Sound, Stress, Physiological
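The classification step described in this abstract can be caricatured in a few lines. The sketch below is illustrative only, not the authors' pipeline: the two stress classes, their peak frequencies, the magnitude-spectrum features, and the random-forest classifier (the study used trained neural networks on real recordings) are all assumptions.

```python
# Hedged sketch: classifying short synthetic ultrasonic clicks by spectral
# content. Frequencies and class labels are invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

RATE = 250_000   # 250 kHz sampling, enough headroom for 20-60 kHz clicks
N_SAMP = 512     # ~2 ms click window

def synth_click(center_hz, rng):
    """A noisy, exponentially decaying tone burst as a stand-in click."""
    t = np.arange(N_SAMP) / RATE
    tone = np.sin(2 * np.pi * center_hz * t) * np.exp(-t * 4000)
    return tone + 0.3 * rng.standard_normal(N_SAMP)

rng = np.random.default_rng(0)
# Two hypothetical stress classes with different assumed peak frequencies.
X, y = [], []
for label, f0 in [(0, 30_000), (1, 50_000)]:
    for _ in range(200):
        click = synth_click(f0 + rng.normal(0, 2000), rng)
        X.append(np.abs(np.fft.rfft(click)))   # magnitude spectrum as features
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                      test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
accuracy = clf.score(Xte, yte)   # held-out classification accuracy
```

With well-separated spectral peaks the toy classifier separates the classes easily; real recordings are far noisier, which is presumably why the authors needed trained neural networks.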
3.
Cell ; 184(22): 5622-5634.e25, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34610277

ABSTRACT

Disinhibitory neurons throughout the mammalian cortex are powerful enhancers of circuit excitability and plasticity. The differential expression of neuropeptide receptors in disinhibitory, inhibitory, and excitatory neurons suggests that each circuit motif may be controlled by distinct neuropeptidergic systems. Here, we reveal that a bombesin-like neuropeptide, gastrin-releasing peptide (GRP), recruits disinhibitory cortical microcircuits through selective targeting and activation of vasoactive intestinal peptide (VIP)-expressing cells. Using a genetically encoded GRP sensor, optogenetic anterograde stimulation, and trans-synaptic tracing, we reveal that GRP regulates VIP cells most likely via extrasynaptic diffusion from several local and long-range sources. In vivo photometry and CRISPR-Cas9-mediated knockout of the GRP receptor (GRPR) in auditory cortex indicate that VIP cells are strongly recruited by novel sounds and aversive shocks, and GRP-GRPR signaling enhances auditory fear memories. Our data establish peptidergic recruitment of selective disinhibitory cortical microcircuits as a mechanism to regulate fear memories.


Subject(s)
Auditory Cortex/metabolism, Bombesin/metabolism, Fear/physiology, Memory/physiology, Nerve Net/metabolism, Amino Acid Sequence, Animals, Calcium/metabolism, Calcium Signaling, Conditioning, Classical, Gastrin-Releasing Peptide/chemistry, Gastrin-Releasing Peptide/metabolism, Gene Expression Regulation, Genes, Immediate-Early, HEK293 Cells, Humans, Intracellular Space/metabolism, Male, Mice, Inbred C57BL, Receptors, Bombesin/metabolism, Sound, Vasoactive Intestinal Peptide/metabolism
4.
Nature ; 627(8002): 123-129, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38383781

ABSTRACT

Baleen whales (mysticetes) use vocalizations to mediate their complex social and reproductive behaviours in vast, opaque marine environments1. Adapting to an obligate aquatic lifestyle demanded fundamental physiological changes to efficiently produce sound, including laryngeal specializations2-4. Whereas toothed whales (odontocetes) evolved a nasal vocal organ5, mysticetes have been thought to use the larynx for sound production1,6-8. However, there has been no direct demonstration that the mysticete larynx can phonate, or if it does, how it produces the great diversity of mysticete sounds9. Here we combine experiments on the excised larynx of three mysticete species with detailed anatomy and computational models to show that mysticetes evolved unique laryngeal structures for sound production. These structures allow some of the largest animals that ever lived to efficiently produce frequency-modulated, low-frequency calls. Furthermore, we show that this phonation mechanism is likely to be ancestral to all mysticetes and shares its fundamental physical basis with most terrestrial mammals, including humans10, birds11, and their closest relatives, odontocetes5. However, these laryngeal structures set insurmountable physiological limits to the frequency range and depth of their vocalizations, preventing them from escaping anthropogenic vessel noise12,13 and communicating at great depths14, thereby greatly reducing their active communication range.


Subject(s)
Biological Evolution, Whales, Animals, Humans, Whales/physiology, Sound
5.
Nature ; 631(8019): 118-124, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38898274

ABSTRACT

Locating sound sources such as prey or predators is critical for survival in many vertebrates. Terrestrial vertebrates locate sources by measuring the time delay and intensity difference of sound pressure at each ear1-5. Underwater, however, the physics of sound makes interaural cues very small, suggesting that directional hearing in fish should be nearly impossible6. Yet, directional hearing has been confirmed behaviourally, although the mechanisms have remained unknown for decades. Several hypotheses have been proposed to explain this remarkable ability, including the possibility that fish evolved an extreme sensitivity to minute interaural differences or that fish might compare sound pressure with particle motion signals7,8. However, experimental challenges have long hindered a definitive explanation. Here we empirically test these models in the transparent teleost Danionella cerebrum, one of the smallest vertebrates9,10. By selectively controlling pressure and particle motion, we dissect the sensory algorithm underlying directional acoustic startles. We find that both cues are indispensable for this behaviour and that their relative phase controls its direction. Using micro-computed tomography and optical vibrometry, we further show that D. cerebrum has the sensory structures to implement this mechanism. D. cerebrum shares these structures with more than 15% of living vertebrate species, suggesting a widespread mechanism for inferring sound direction.


Subject(s)
Cues, Cyprinidae, Hearing, Sound Localization, Animals, Female, Male, Algorithms, Hearing/physiology, Pressure, Sound, Sound Localization/physiology, Vibration, X-Ray Microtomography, Cyprinidae/physiology, Motion, Reflex, Startle, Particulate Matter
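The core idea in this abstract, that the relative phase of sound pressure and particle motion carries direction, can be sketched as a toy computation. This is purely illustrative: which phase relation maps to which side is an arbitrary assumption here, and the real mechanism involves the swim bladder and inner-ear maculae rather than a dot product.

```python
# Toy sketch: for a travelling wave, the sign of the correlation between
# sound pressure and particle motion flips with source direction.
import numpy as np

t = np.linspace(0, 0.01, 1000)           # 10 ms window
pressure = np.sin(2 * np.pi * 500 * t)   # 500 Hz tone

def inferred_direction(particle_motion, pressure):
    # In-phase (positive correlation) vs. anti-phase decides the side;
    # the left/right assignment is a hypothetical convention.
    return "left" if np.dot(pressure, particle_motion) > 0 else "right"

dir_in_phase = inferred_direction(+pressure, pressure)    # in-phase cue
dir_anti_phase = inferred_direction(-pressure, pressure)  # anti-phase cue
```

The two phase relations yield opposite direction estimates, mirroring the paper's finding that relative phase controls the direction of the startle response.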
6.
EMBO J ; 42(23): e114587, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37800695

ABSTRACT

Our sense of hearing enables the processing of stimuli that differ in sound pressure by more than six orders of magnitude. How to process a wide range of stimulus intensities with temporal precision is an enigmatic phenomenon of the auditory system. Downstream of dynamic range compression by active cochlear micromechanics, the inner hair cells (IHCs) cover the full intensity range of sound input. Yet, the firing rate in each of their postsynaptic spiral ganglion neurons (SGNs) encodes only a fraction of it. As a population, spiral ganglion neurons with their respective individual coding fractions cover the entire audible range. How such "dynamic range fractionation" arises is a topic of current research and the focus of this review. Here, we discuss mechanisms for generating the diverse functional properties of SGNs and formulate testable hypotheses. We postulate that an interplay of synaptic heterogeneity, molecularly distinct subtypes of SGNs, and efferent modulation serves the neural decomposition of sound information and thus contributes to a population code for sound intensity.


Subject(s)
Cochlea, Hair Cells, Auditory, Inner, Hair Cells, Auditory, Inner/physiology, Sound, Synapses/physiology, Spiral Ganglion
7.
Nat Immunol ; 21(7): 724-726, 2020 07.
Article in English | MEDLINE | ID: mdl-32572239
8.
PLoS Biol ; 22(4): e3002586, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38683852

ABSTRACT

Having two ears enables us to localize sound sources by exploiting interaural time differences (ITDs) in sound arrival. Principal neurons of the medial superior olive (MSO) are sensitive to ITD, and each MSO neuron responds optimally to a best ITD (bITD). In many cells, especially those tuned to low sound frequencies, these bITDs correspond to ITDs for which the contralateral ear leads, and are often larger than the ecologically relevant range, defined by the ratio of the interaural distance and the speed of sound. Using in vivo recordings in gerbils, we found that shortly after hearing onset the bITDs were even more contralaterally leading than found in adult gerbils, and travel latencies for contralateral sound-evoked activity clearly exceeded those for ipsilateral sounds. During the following weeks, both these latencies and their interaural difference decreased. A computational model indicated that spike timing-dependent plasticity can underlie this fine-tuning. Our results suggest that MSO neurons start out with a strong predisposition toward contralateral sounds due to their longer neural travel latencies, but that, especially in high-frequency neurons, this predisposition is subsequently mitigated by differential developmental fine-tuning of the travel latencies.


Subject(s)
Acoustic Stimulation, Gerbillinae, Neurons, Superior Olivary Complex, Animals, Neurons/physiology, Superior Olivary Complex/physiology, Sound Localization/physiology, Male, Olivary Nucleus/physiology, Sound, Female
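The "ecologically relevant range" of ITDs mentioned in this abstract is simply the interaural distance divided by the speed of sound. The sketch below works that out; the interaural distances are illustrative round numbers, not measurements from the paper.

```python
# Largest physically possible interaural time difference (ITD): the extra
# travel time for sound arriving along the interaural axis.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C

def max_itd_us(interaural_distance_m):
    """Maximum ITD in microseconds for a given interaural distance."""
    return interaural_distance_m / SPEED_OF_SOUND * 1e6

gerbil_itd = max_itd_us(0.03)   # ~3 cm, an assumed gerbil head width
human_itd = max_itd_us(0.22)    # ~22 cm, an assumed human value
```

For the assumed 3 cm gerbil head this gives roughly 90 µs, which makes the abstract's point concrete: best ITDs larger than this lie outside the range the animal can ever experience acoustically.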
9.
Cell ; 151(1): 41-55, 2012 Sep 28.
Article in English | MEDLINE | ID: mdl-23021214

ABSTRACT

Natural sensory input shapes both structure and function of developing neurons, but how early experience-driven morphological and physiological plasticity are interrelated remains unclear. Using rapid time-lapse two-photon calcium imaging of network activity and single-neuron growth within the unanesthetized developing brain, we demonstrate that visual stimulation induces coordinated changes to neuronal responses and dendritogenesis. Further, we identify the transcription factor MEF2A/2D as a major regulator of neuronal response to plasticity-inducing stimuli directing both structural and functional changes. Unpatterned sensory stimuli that change plasticity thresholds induce rapid degradation of MEF2A/2D through a classical apoptotic pathway requiring NMDA receptors and caspases-9 and -3/7. Knockdown of MEF2A/2D alone is sufficient to induce a metaplastic shift in threshold of both functional and morphological plasticity. These findings demonstrate how sensory experience acting through altered levels of the transcription factor MEF2 fine-tunes the plasticity thresholds of brain neurons during neural circuit formation.


Subject(s)
Brain/embryology, Myogenic Regulatory Factors/metabolism, Neuronal Plasticity, Transcription Factors/metabolism, Xenopus Proteins/metabolism, Xenopus laevis/embryology, Animals, Auditory Perception, Brain/cytology, Caspases/metabolism, MEF2 Transcription Factors, Neurons/metabolism, Receptors, N-Methyl-D-Aspartate/metabolism, Sound, Visual Perception
10.
Proc Natl Acad Sci U S A ; 121(7): e2313549121, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38315846

ABSTRACT

The loss of elastic stability (buckling) can lead to catastrophic failure in the context of traditional engineering structures. Conversely, in nature, buckling often serves a desirable function, such as in the prey-trapping mechanism of the Venus fly trap (Dionaea muscipula). This paper investigates the buckling-enabled sound production in the wingbeat-powered (aeroelastic) tymbals of Yponomeuta moths. The hindwings of Yponomeuta possess a striated band of ridges that snap through sequentially during the up- and downstroke of the wingbeat cycle-a process reminiscent of cellular buckling in compressed slender shells. As a result, bursts of ultrasonic clicks are produced that deter predators (i.e. bats). Using various biological and mechanical characterization techniques, we show that wing camber changes during the wingbeat cycle act as the single actuation mechanism that causes buckling to propagate sequentially through each stria on the tymbal. The snap-through of each stria excites a bald patch of the wing's membrane, thereby amplifying sound pressure levels and radiating sound at the resonant frequencies of the patch. In addition, the interaction of phased tymbal clicks from the two wings enhances the directivity of the acoustic signal strength, suggesting an improvement in acoustic protection. These findings unveil the acousto-mechanics of Yponomeuta tymbals and uncover their buckling-driven evolutionary origin. We anticipate that through bioinspiration, aeroelastic tymbals will encourage novel developments in the context of multi-stable morphing structures, acoustic structural monitoring, and soft robotics.


Subject(s)
Moths, Sound, Animals, Ultrasonics, Acoustics
11.
Proc Natl Acad Sci U S A ; 121(10): e2314017121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408231

ABSTRACT

Motion is the basis of nearly all animal behavior. Evolution has led to some extraordinary specializations of propulsion mechanisms among invertebrates, including the mandibles of the dracula ant and the claw of the pistol shrimp. In contrast, vertebrate skeletal movement is considered to be limited by the speed of muscle, saturating around 250 Hz. Here, we describe the unique propulsion mechanism by which Danionella cerebrum, a miniature cyprinid fish of only 12 mm length, produces high amplitude sounds exceeding 140 dB (re. 1 µPa, at a distance of one body length). Using a combination of high-speed video, micro-computed tomography (micro-CT), RNA profiling, and finite difference simulations, we found that D. cerebrum employ a unique sound production mechanism that involves a drumming cartilage, a specialized rib, and a dedicated muscle adapted for low fatigue. This apparatus accelerates the drumming cartilage at over 2,000 g, shooting it at the swim bladder to generate a rapid, loud pulse. These pulses are chained together to make calls with either bilaterally alternating or unilateral muscle contractions. D. cerebrum use this remarkable mechanism for acoustic communication with conspecifics.


Subject(s)
Animal Communication, Cyprinidae, Animals, X-Ray Microtomography, Sound, Acoustics, Cyprinidae/genetics
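The abstract's headline figure of 140 dB (re 1 µPa) is easy to convert back into pascals, which helps when comparing it with in-air levels. Note the underwater reference pressure of 1 µPa differs from the 20 µPa reference used for sound in air, so underwater dB values are not directly comparable to in-air ones.

```python
# Convert a sound pressure level in dB (re a reference pressure) to pascals.
def spl_to_pascal(db, p_ref=1e-6):
    """Sound pressure in Pa for a given SPL in dB re p_ref (default 1 uPa)."""
    return p_ref * 10 ** (db / 20)

p = spl_to_pascal(140.0)   # the 140 dB re 1 uPa figure from the abstract
```

140 dB re 1 µPa corresponds to 10 Pa of sound pressure at one body length, a strikingly large value for a 12 mm fish.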
12.
PLoS Biol ; 21(8): e3002277, 2023 08.
Article in English | MEDLINE | ID: mdl-37651461

ABSTRACT

The ability to process and act upon incoming sounds during locomotion is critical for survival and adaptive behavior. Despite the established role that the auditory cortex (AC) plays in behavior- and context-dependent sound processing, previous studies have found that auditory cortical activity is on average suppressed during locomotion as compared to immobility. While suppression of auditory cortical responses to self-generated sounds results from corollary discharge, which weakens responses to predictable sounds, the functional role of weaker responses to unpredictable external sounds during locomotion remains unclear. In particular, whether suppression of external sound-evoked responses during locomotion reflects reduced involvement of the AC in sound processing or whether it results from masking by an alternative neural computation in this state remains unresolved. Here, we tested the hypothesis that rather than simple inhibition, reduced sound-evoked responses during locomotion reflect a tradeoff with the emergence of explicit and reliable coding of locomotion velocity. To test this hypothesis, we first used neural inactivation in behaving mice and found that the AC plays a critical role in sound-guided behavior during locomotion. To investigate the nature of this processing, we used two-photon calcium imaging of local excitatory auditory cortical neural populations in awake mice. We found that locomotion had diverse influences on activity of different neurons, with a net suppression of baseline-subtracted sound-evoked responses and neural stimulus detection, consistent with previous studies. Importantly, we found that the net inhibitory effect of locomotion on baseline-subtracted sound-evoked responses was strongly shaped by elevated ongoing activity that compressed the response dynamic range, and that rather than reflecting enhanced "noise," this ongoing activity reliably encoded the animal's locomotion speed. Decoding analyses revealed that locomotion speed and sound are robustly co-encoded by auditory cortical ensemble activity. Finally, we found consistent patterns of joint coding of sound and locomotion speed in electrophysiologically recorded activity in freely moving rats. Together, our data suggest that rather than being suppressed by locomotion, auditory cortical ensembles explicitly encode it alongside sound information to support sound perception during locomotion.


Subject(s)
Auditory Cortex, Animals, Mice, Rats, Hearing, Locomotion, Sound, Perception
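The decoding analyses this abstract refers to can be sketched as a linear readout of running speed from population activity. Everything below is synthetic and illustrative: the study used two-photon imaging and electrophysiology with their own decoders, whereas this sketch invents Gaussian speed tuning and fits an ordinary least-squares readout.

```python
# Hedged sketch of a population decoding analysis: linear readout of
# locomotion speed from simulated neural activity.
import numpy as np

rng = np.random.default_rng(1)
n_t, n_cells = 500, 40
speed = np.abs(rng.normal(0, 10, n_t))      # synthetic running speed (cm/s)
weights = rng.normal(0, 1, n_cells)         # each cell's assumed speed tuning
activity = np.outer(speed, weights) + rng.normal(0, 5, (n_t, n_cells))

# Fit a least-squares linear decoder on a training split, evaluate on test.
train, test = slice(0, 400), slice(400, None)
w, *_ = np.linalg.lstsq(activity[train], speed[train], rcond=None)
pred = activity[test] @ w
r = np.corrcoef(pred, speed[test])[0, 1]    # decoding quality (correlation)
```

High decoded-vs-true correlation on held-out data is the kind of evidence the abstract summarizes as speed being "reliably encoded" by ongoing population activity.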
13.
Proc Natl Acad Sci U S A ; 120(25): e2218951120, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37307440

ABSTRACT

We report a label-free acoustic microfluidic method to confine single, cilia-driven swimming cells in space without limiting their rotational degrees of freedom. Our platform integrates a surface acoustic wave (SAW) actuator and bulk acoustic wave (BAW) trapping array to enable multiplexed analysis with high spatial resolution and trapping forces that are strong enough to hold individual microswimmers. The hybrid BAW/SAW acoustic tweezers employ high-efficiency mode conversion to achieve submicron image resolution while compensating for parasitic system losses to immersion oil in contact with the microfluidic chip. We use the platform to quantify cilia and cell body motion for wildtype biciliate cells, investigating effects of environmental variables like temperature and viscosity on ciliary beating, synchronization, and three-dimensional helical swimming. We confirm and expand upon the existing understanding of these phenomena, for example determining that increasing viscosity promotes asynchronous beating. Motile cilia are subcellular organelles that propel microorganisms or direct fluid and particulate flow. Thus, cilia are critical to cell survival and human health. The unicellular alga Chlamydomonas reinhardtii is widely used to investigate the mechanisms underlying ciliary beating and coordination. However, freely swimming cells are difficult to image with sufficient resolution to capture cilia motion, necessitating that the cell body be held during experiments. Acoustic confinement is a compelling alternative to use of a micropipette, or to magnetic, electrical, and optical trapping that may modify the cells and affect their behavior. Beyond establishing our approach to studying microswimmers, we demonstrate a unique ability to mechanically perturb cells via rapid acoustic positioning.


Subject(s)
Acoustics, Swimming, Humans, Sound, Cilia, Cell Body
14.
Proc Natl Acad Sci U S A ; 120(17): e2218367120, 2023 04 25.
Article in English | MEDLINE | ID: mdl-37068255

ABSTRACT

Italian is sexy, German is rough-but how about Páez or Tamil? Are there universal phonesthetic judgments based purely on the sound of a language, or are preferences attributable to language-external factors such as familiarity and cultural stereotypes? We collected 2,125 recordings of 228 languages from 43 language families, including 5 to 11 speakers of each language to control for personal vocal attractiveness, and asked 820 native speakers of English, Chinese, or Semitic languages to indicate how much they liked these languages. We found a strong preference for languages perceived as familiar, even when they were misidentified, a variety of cultural-geographical biases, and a preference for breathy female voices. The scores by English, Chinese, and Semitic speakers were weakly correlated, indicating some cross-cultural concordance in phonesthetic judgments, but overall there was little consensus between raters about which languages sounded more beautiful, and average scores per language remained within ±2% after accounting for confounds related to familiarity and voice quality of individual speakers. None of the tested phonetic features-the presence of specific phonemic classes, the overall size of phonetic repertoire, its typicality and similarity to the listener's first language-were robust predictors of pleasantness ratings, apart from a possible slight preference for nontonal languages. While population-level phonesthetic preferences may exist, their contribution to perceptual judgments of short speech recordings appears to be minor compared to purely personal preferences, the speaker's voice quality, and perceived resemblance to other languages culturally branded as beautiful or ugly.


Subject(s)
Speech Perception, Voice, Humans, Female, India, Language, Sound, Speech
15.
Proc Natl Acad Sci U S A ; 120(48): e2303562120, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-37988462

ABSTRACT

Eye movements alter the relationship between the visual and auditory spatial scenes. Signals related to eye movements affect neural pathways from the ear through auditory cortex and beyond, but how these signals contribute to computing the locations of sounds with respect to the visual scene is poorly understood. Here, we evaluated the information contained in eye movement-related eardrum oscillations (EMREOs), pressure changes recorded in the ear canal that occur in conjunction with simultaneous eye movements. We show that EMREOs contain parametric information about horizontal and vertical eye displacement as well as initial/final eye position with respect to the head. The parametric information in the horizontal and vertical directions can be modeled as combining linearly, allowing accurate prediction of the EMREOs associated with oblique (diagonal) eye movements. Target location can also be inferred from the EMREO signals recorded during eye movements to those targets. We hypothesize that the (currently unknown) mechanism underlying EMREOs could impose a two-dimensional eye-movement-related transfer function on any incoming sound, permitting subsequent processing stages to compute the positions of sounds in relation to the visual scene.


Subject(s)
Eye Movements, Saccades, Movement, Ocular Physiological Phenomena, Sound
16.
Proc Natl Acad Sci U S A ; 120(29): e2301463120, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37428927

ABSTRACT

Auditory perception is traditionally conceived as the perception of sounds-a friend's voice, a clap of thunder, a minor chord. However, daily life also seems to present us with experiences characterized by the absence of sound-a moment of silence, a gap between thunderclaps, the hush after a musical performance. In these cases, do we positively hear silence? Or do we just fail to hear, and merely judge or infer that it is silent? This longstanding question remains controversial in both the philosophy and science of perception, with prominent theories holding that sounds are the only objects of auditory experience and thus that our encounter with silence is cognitive, not perceptual. However, this debate has largely remained theoretical, without a key empirical test. Here, we introduce an empirical approach to this theoretical dispute, presenting experimental evidence that silence can be genuinely perceived (not just cognitively inferred). We ask whether silences can "substitute" for sounds in event-based auditory illusions-empirical signatures of auditory event representation in which auditory events distort perceived duration. Seven experiments introduce three "silence illusions"-the one-silence-is-more illusion, silence-based warping, and the oddball-silence illusion-each adapted from a prominent perceptual illusion previously thought to arise only from sounds. Subjects were immersed in ambient noise interrupted by silences structurally identical to the sounds in the original illusions. In all cases, silences elicited temporal distortions perfectly analogous to the illusions produced by sounds. Our results suggest that silence is truly heard, not merely inferred, introducing a general approach for studying the perception of absence.


Subject(s)
Illusions, Humans, Noise, Sound, Auditory Perception, Hearing, Acoustic Stimulation/methods
17.
Proc Natl Acad Sci U S A ; 120(42): e2218679120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37812719

ABSTRACT

The ways in which seabirds navigate over very large spatial scales remain poorly understood. While olfactory and visual information can provide guidance over short distances, their range is often limited to 100s km, far below the navigational capacity of wide-ranging animals such as albatrosses. Infrasound is a form of low-frequency sound that propagates for 1,000s km in the atmosphere. In marine habitats, its association with storms and ocean surface waves could in effect make it a useful cue for anticipating environmental conditions that favor or hinder flight or be associated with profitable foraging patches. However, behavioral responses of wild birds to infrasound remain untested. Here, we explored whether wandering albatrosses, Diomedea exulans, respond to microbarom infrasound at sea. We used Global Positioning System tracks of 89 free-ranging albatrosses in combination with acoustic modeling to investigate whether albatrosses preferentially orientate toward areas of 'loud' microbarom infrasound on their foraging trips. We found that in addition to responding to winds encountered in situ, albatrosses moved toward source regions associated with higher sound pressure levels. These findings suggest that albatrosses may be responding to long-range infrasonic cues. As albatrosses depend on winds and waves for soaring flight, infrasonic cues may help albatrosses to identify environmental conditions that allow them to energetically optimize flight over long distances. Our results shed light on one of the great unresolved mysteries in nature, navigation in seemingly featureless ocean environments.


Subject(s)
Birds, Cues, Animals, Birds/physiology, Wind, Smell, Sound
18.
Proc Natl Acad Sci U S A ; 120(46): e2302814120, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37934821

ABSTRACT

Male crickets attract females by producing calls with their forewings. Louder calls travel further and are more effective at attracting mates. However, crickets are much smaller than the wavelength of their call, and this limits their power output. A small group called tree crickets make acoustic tools called baffles which reduce acoustic short-circuiting, a source of dipole inefficiency. Here, we ask why baffling is uncommon among crickets. We hypothesize that baffling may be rare because like other tools they offer insufficient advantage for most species. To test this, we modelled the calling efficiencies of crickets within the full space of possible natural wing sizes and call frequencies, in multiple acoustic environments. We then generated efficiency landscapes, within which we plotted 112 cricket species across 7 phylogenetic clades. We found that all sampled crickets, in all conditions, could gain efficiency from tool use. Surprisingly, we also found that calling from the ground significantly increased efficiency, with or without a baffle, by as much as an order of magnitude. We found that the ground provides some reduction of acoustic short-circuiting but also halves the air volume within which sound is radiated. It simultaneously reflects sound upwards, allowing recapture of a significant amount of acoustic energy through constructive interference. Thus, using the ground as a reflective baffle is an effective strategy for increasing calling efficiency. Indeed, theory suggests that this increase in efficiency is accessible not just to crickets but to all acoustically communicating animals whether they are dipole or monopole sound sources.


Subject(s)
Cricket Sport, Gryllidae, Animals, Female, Phylogeny, Acoustics, Sound, Wings, Animal, Vocalization, Animal, Acoustic Stimulation
19.
J Neurosci ; 44(11)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38331581

ABSTRACT

Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s postevent onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared to the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.


Subject(s)
Eye Movements, Visual Perception, Male, Female, Humans, Visual Perception/physiology, Sensation, Sound, Auditory Perception/physiology
20.
J Neurosci ; 44(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37949655

ABSTRACT

The key assumption of the predictive coding framework is that internal representations are used to generate predictions about what the sensory input will look like in the immediate future. These predictions are tested against the actual input by the so-called prediction error units, which encode the residuals of the predictions. What happens to prediction errors, however, if predictions drawn by different stages of the sensory hierarchy contradict each other? To answer this question, we conducted two fMRI experiments while female and male human participants listened to sequences of sounds: pure tones in the first experiment and frequency-modulated sweeps in the second experiment. In both experiments, we used repetition to induce predictions based on stimulus statistics (stats-informed predictions) and abstract rules disclosed in the task instructions to induce an orthogonal set of (task-informed) predictions. We tested three alternative scenarios: neural responses in the auditory sensory pathway encode prediction error with respect to (1) the stats-informed predictions, (2) the task-informed predictions, or (3) a combination of both. Results showed that neural populations in all recorded regions (bilateral inferior colliculus, medial geniculate body, and primary and secondary auditory cortices) encode prediction error with respect to a combination of the two orthogonal sets of predictions. The findings suggest that predictive coding exploits the non-linear architecture of the auditory pathway for the transmission of predictions. Such non-linear transmission of predictions might be crucial for the predictive coding of complex auditory signals like speech.

Significance Statement: Sensory systems exploit our subjective expectations to make sense of an overwhelming influx of sensory signals. It is still unclear how expectations at each stage of the processing pipeline are used to predict the representations at the other stages. The current view is that this transmission is hierarchical and linear. Here we measured fMRI responses in auditory cortex, sensory thalamus, and midbrain while we induced two sets of mutually inconsistent expectations on the sensory input, each putatively encoded at a different stage. We show that responses at all stages are concurrently shaped by both sets of expectations. The results challenge the hypothesis that expectations are transmitted linearly and provide a normative explanation of the non-linear physiology of the corticofugal sensory system.


Subject(s)
Auditory Cortex, Auditory Pathways, Humans, Male, Female, Auditory Pathways/physiology, Auditory Perception/physiology, Auditory Cortex/physiology, Brain/physiology, Sound, Acoustic Stimulation