Results 1 - 4 of 4
1.
Sci Rep; 14(1): 19181, 2024 08 19.
Article in English | MEDLINE | ID: mdl-39160202

ABSTRACT

How we move our bodies affects how we perceive sound. For instance, head movements help us to better localize the source of a sound and to compensate for asymmetric hearing loss. However, many auditory experiments are designed to restrict head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded freely moving mice for tracking down an ongoing sound source. Over the course of learning, mice navigated to the sound more efficiently. Next, we asked how sound-seeking was affected by hearing loss induced by surgical removal of the malleus from the middle ear. After bilateral hearing loss, sound-seeking performance declined drastically and did not recover. In striking contrast, after unilateral hearing loss, mice were only transiently impaired and recovered their sound-seeking ability over about a week. Throughout recovery, unilateral mice increasingly relied on a movement strategy of sequentially checking potential locations for the sound source. In contrast, the startle reflex (an innate auditory behavior) was preserved after unilateral hearing loss and abolished by bilateral hearing loss, without recovery over time. In sum, mice compensate with body movement for permanent unilateral damage to the peripheral auditory system. Looking forward, this paradigm provides an opportunity to examine how movement enhances perception and enables resilient adaptation to sensory disorders.
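The abstract does not specify how navigation efficiency was quantified. A minimal sketch of one common metric for tasks like this, assuming tracked (x, y) trajectories and a known speaker location (both hypothetical here, not the study's actual analysis), is:

```python
import numpy as np

def path_efficiency(trajectory, source_xy):
    """Navigation efficiency for one hypothetical sound-seeking trial.

    trajectory: (N, 2) array of the animal's (x, y) positions over the trial.
    source_xy:  (2,) location of the speaker.
    Returns the ratio of the straight-line start-to-source distance to the
    distance actually traveled (1.0 = perfectly direct path).
    """
    traj = np.asarray(trajectory, dtype=float)
    direct = np.linalg.norm(np.asarray(source_xy, dtype=float) - traj[0])
    traveled = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    return direct / traveled if traveled > 0 else np.nan
```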


Subject(s)
Sound Localization; Animals; Mice; Sound Localization/physiology; Reflex, Startle/physiology; Hearing Loss/physiopathology; Male; Acoustic Stimulation; Mice, Inbred C57BL; Behavior, Animal; Sound; Female
2.
PLoS One; 17(5): e0266810, 2022.
Article in English | MEDLINE | ID: mdl-35544461

ABSTRACT

Mechanical ventilators are safety-critical devices that help patients breathe, commonly found in hospital intensive care units (ICUs)-yet, the high costs and proprietary nature of commercial ventilators inhibit their use as an educational and research platform. We present a fully open ventilator device-The People's Ventilator: PVP1-with complete hardware and software documentation including detailed build instructions and a DIY cost of $1,700 USD. We validate PVP1 against both key performance criteria specified in the U.S. Food and Drug Administration's Emergency Use Authorization for Ventilators, and in a pediatric context against a state-of-the-art commercial ventilator. Notably, PVP1 performs well over a wide range of test conditions and performance stability is demonstrated for a minimum of 75,000 breath cycles over three days with an adult mechanical test lung. As an open project, PVP1 can enable future educational, academic, and clinical developments in the ventilator space.
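As a hedged illustration of the kind of breath-cycle timing a pressure-control ventilator like PVP1 must generate, here is a minimal sketch; the parameter names and default values are assumptions for illustration, not PVP1's actual firmware:

```python
def breath_setpoints(rate_bpm=20, ie_ratio=0.5, pip_cmh2o=25.0, peep_cmh2o=5.0):
    """Yield (phase, pressure setpoint in cmH2O, duration in s) indefinitely.

    rate_bpm: breaths per minute; ie_ratio: inspiratory/expiratory time ratio.
    A downstream control loop (not shown) would drive the valves toward each
    setpoint for the given duration.
    """
    period = 60.0 / rate_bpm                       # seconds per full breath
    t_insp = period * ie_ratio / (1.0 + ie_ratio)  # inspiratory time
    t_exp = period - t_insp                        # expiratory time
    while True:
        yield ("inspiration", pip_cmh2o, t_insp)
        yield ("expiration", peep_cmh2o, t_exp)

# Example: the first two phases of a 20 breaths/min cycle with a 1:2 I:E ratio.
gen = breath_setpoints()
print(next(gen))  # ('inspiration', 25.0, 1.0)
print(next(gen))  # ('expiration', 5.0, 2.0)
```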


Subject(s)
Intensive Care Units; Ventilators, Mechanical; Adult; Child; Humans; Respiration, Artificial
3.
Elife; 9, 2020 12 08.
Article in English | MEDLINE | ID: mdl-33289631

ABSTRACT

The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.
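The forward-prediction module is described here only at a high level. The sketch below shows the general idea of latency compensation by linear extrapolation of recent keypoints; it is an illustrative assumption about the approach, not the package's actual implementation or API:

```python
import numpy as np

def forward_predict(poses, timestamps, latency_s):
    """Extrapolate keypoint positions forward by the processing latency.

    poses:      (T, K, 2) array of recent pose estimates (T frames, K keypoints).
    timestamps: (T,) acquisition times of those frames, in seconds.
    latency_s:  camera-to-pose latency to compensate for, in seconds.
    Returns a (K, 2) pose predicted at (last timestamp + latency_s) by
    linear extrapolation from the last two frames.
    """
    poses = np.asarray(poses, dtype=float)
    dt = timestamps[-1] - timestamps[-2]
    velocity = (poses[-1] - poses[-2]) / dt  # px/s for each keypoint
    return poses[-1] + velocity * latency_s
```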


Subject(s)
Feedback, Physiological/physiology; Posture/physiology; Animals; Behavior, Animal/physiology; Mice; Neural Networks, Computer; Software
4.
J Acoust Soc Am; 145(3): 1168, 2019 03.
Article in English | MEDLINE | ID: mdl-31067917

ABSTRACT

Speech is perceived as a series of relatively invariant phonemes despite extreme variability in the acoustic signal. To be perceived as nearly identical phonemes, speech sounds that vary continuously over a range of acoustic parameters must be perceptually discretized by the auditory system. Such many-to-one mappings of undifferentiated sensory information onto a finite number of discrete categories are ubiquitous in perception. Although many mechanistic models of phonetic perception have been proposed, they remain largely unconstrained by neurobiological data, and current human neurophysiological methods lack the spatiotemporal resolution to provide such data: speech is too fast, and the neural circuitry involved is too small. This study demonstrates that mice are capable of learning generalizable phonetic categories and can thus serve as a model for phonetic perception. Mice learned to discriminate consonants and generalized consonant identity across novel vowel contexts and speakers, consistent with true category learning. Given the powerful genetic and electrophysiological tools available in mice for probing neural circuits, a mouse model has the potential to substantially advance a mechanistic understanding of phonetic perception.
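As a toy illustration of the many-to-one mapping the abstract describes, the sketch below collapses continuously varying acoustic tokens onto discrete phoneme prototypes. The feature axes, prototype values, and phoneme labels are illustrative assumptions, not the study's stimuli or analysis:

```python
import numpy as np

# Hypothetical category prototypes in a 2-D acoustic space
# (e.g., F2 onset frequency in Hz and voice onset time in ms).
PROTOTYPES = {
    "/b/": np.array([1100.0, 5.0]),
    "/g/": np.array([1600.0, 10.0]),
}

def categorize(token):
    """Map a continuously varying acoustic token to its nearest prototype:
    a many-to-one, continuous-to-discrete mapping."""
    dists = {phoneme: np.linalg.norm(np.asarray(token, dtype=float) - proto)
             for phoneme, proto in PROTOTYPES.items()}
    return min(dists, key=dists.get)

# Tokens from different speakers/vowel contexts collapse to one category.
print(categorize([1080.0, 6.0]))  # -> /b/
print(categorize([1150.0, 4.0]))  # -> /b/
```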
