Results 1 - 5 of 5
1.
bioRxiv ; 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39091759

ABSTRACT

Sound produces surface waves along the cochlea's basilar membrane. To achieve the ear's astonishing frequency resolution and sensitivity to faint sounds, dissipation in the cochlea must be canceled via active processes in hair cells, effectively bringing the cochlea to the edge of instability. But how can the cochlea be globally tuned to the edge of instability with only local feedback? To address this question, we use a discretized version of a standard model of basilar membrane dynamics, but with an explicit contribution from active processes in hair cells. Surprisingly, we find the basilar membrane supports two qualitatively distinct sets of modes: a continuum of localized modes and a small number of collective extended modes. Localized modes sharply peak at their resonant position and are largely uncoupled. As a result, they can be amplified almost independently from each other by local hair cells via feedback reminiscent of self-organized criticality. However, this amplification can destabilize the collective extended modes; avoiding such instabilities places limits on possible molecular mechanisms for active feedback in hair cells. Our work illuminates how and under what conditions individual hair cells can collectively create a critical cochlea.
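The tuning mechanism described in the abstract can be illustrated with a minimal toy discretization (a hedged sketch, not the authors' model; the chain length, stiffness gradient, and gain values below are illustrative assumptions): a chain of damped oscillators with exponentially graded stiffness, where a local "active" gain cancels part of each site's damping and moves the mode spectrum toward the imaginary axis, i.e. the edge of instability.

```python
import numpy as np

def membrane_modes(n=50, coupling=0.05, active_gain=0.0):
    """Eigenvalues of a linear chain of damped oscillators with local feedback."""
    # Exponentially graded stiffness loosely mimics the cochlea's tonotopic map.
    stiffness = np.exp(-np.linspace(0.0, 3.0, n))
    damping = 0.1 - active_gain    # active processes cancel dissipation locally
    # Nearest-neighbor elastic coupling; the discrete Laplacian keeps K positive definite.
    K = np.diag(stiffness) + coupling * (
        2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    # First-order form x = (u, v), dx/dt = A x.
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K, -damping * np.eye(n)]])
    return np.linalg.eigvals(A)

passive = membrane_modes(active_gain=0.0)
tuned = membrane_modes(active_gain=0.099)  # feedback nearly cancels dissipation

# Passive modes decay at rate damping/2; tuning moves them toward the edge
# of instability without crossing it.
assert passive.real.max() < -0.04
assert -0.001 < tuned.real.max() < 0.0
```

In this linear sketch every underdamped mode decays at rate damping/2, so a uniform local gain shifts the whole spectrum toward instability at once; the abstract's point is that real hair cells only have local feedback, which is what makes global tuning nontrivial.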

2.
ArXiv ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39070039


3.
Entropy (Basel) ; 25(3)2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36981323

ABSTRACT

Inference from limited data requires a notion of measure on parameter space, which is most explicit in the Bayesian framework as a prior distribution. Jeffreys prior is the best-known uninformative choice, the invariant volume element from information geometry, but we demonstrate here that this leads to enormous bias in typical high-dimensional models. This is because models found in science typically have an effective dimensionality of accessible behaviors much smaller than the number of microscopic parameters. Any measure which treats all of these parameters equally is far from uniform when projected onto the sub-space of relevant parameters, due to variations in the local co-volume of irrelevant directions. We present results on a principled choice of measure which avoids this issue and leads to unbiased posteriors by focusing on relevant parameters. This optimal prior depends on the quantity of data to be gathered, and approaches Jeffreys prior in the asymptotic limit. However, for typical models, this limit cannot be justified without an impossibly large increase in the quantity of data, exponential in the number of microscopic parameters.
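For a concrete one-parameter example of the construction (this toy only shows the definition of the Jeffreys prior; the paper's point concerns its failure in high-dimensional models): for a Bernoulli rate theta, the Fisher information is I(theta) = 1/(theta(1 - theta)), so the Jeffreys prior sqrt(I) is the unnormalized Beta(1/2, 1/2) density.

```python
import numpy as np

# Jeffreys prior for a single Bernoulli parameter: the square root of the
# Fisher information, here I(theta) = 1/(theta * (1 - theta)).
def jeffreys_bernoulli(theta):
    return np.sqrt(1.0 / (theta * (1.0 - theta)))

theta = np.linspace(0.01, 0.99, 99)
prior = jeffreys_bernoulli(theta)

# The density is minimal at theta = 0.5 and diverges at the boundaries,
# matching Beta(1/2, 1/2) up to normalization.
assert prior.argmin() == 49            # theta[49] == 0.5
assert prior[0] > prior[49] and prior[-1] > prior[49]
```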

4.
Rep Prog Phys ; 86(3)2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36576176

ABSTRACT

Complex models in physics, biology, economics, and engineering are often sloppy, meaning that the model parameters are not well determined by the model predictions for collective behavior. Many parameter combinations can vary over decades without significant changes in the predictions. This review uses information geometry to explore sloppiness and its deep relation to emergent theories. We introduce the model manifold of predictions, whose coordinates are the model parameters. Its hyperribbon structure explains why only a few parameter combinations matter for the behavior. We review recent rigorous results that connect the hierarchy of hyperribbon widths to approximation theory, and to the smoothness of model predictions under changes of the control variables. We discuss recent geodesic methods to find simpler models on nearby boundaries of the model manifold: emergent theories with fewer parameters that explain the behavior equally well. We discuss a Bayesian prior which optimizes the mutual information between model parameters and experimental data, naturally favoring points on the emergent boundary theories and thus simpler models. We introduce a 'projected maximum likelihood' prior that efficiently approximates this optimal prior, and contrast both to the poor behavior of the traditional Jeffreys prior. We discuss how renormalization-group coarse-graining in statistical mechanics introduces a flow of the model manifold, and connect stiff and sloppy directions along the model manifold with relevant and irrelevant eigendirections of the renormalization group. Finally, we discuss recently developed 'intensive' embedding methods, allowing one to visualize the predictions of arbitrary probabilistic models as low-dimensional projections of an isometric embedding, and illustrate our method by generating the model manifold of the Ising model.


Subject(s)
Models, Statistical; Physics; Bayes Theorem; Engineering
5.
Proc Natl Acad Sci U S A ; 115(8): 1760-1765, 2018 02 20.
Article in English | MEDLINE | ID: mdl-29434042

ABSTRACT

We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.


Subject(s)
Models, Statistical; Algorithms; Bayes Theorem
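The prior described in the abstract can be sketched with the classic Blahut-Arimoto algorithm (a hedged toy, not the authors' code: a Bernoulli rate observed through n coin flips, with the parameter grid, n = 2, and iteration count chosen for illustration). Maximizing the mutual information between parameter and data yields a discrete prior with substantial weight on the boundaries of parameter space, as the abstract describes.

```python
import numpy as np
from math import comb

def optimal_prior(thetas, n, iters=2000):
    """Blahut-Arimoto: prior over theta maximizing I(theta; x), x ~ Binomial(n, theta)."""
    # Likelihood matrix P[j, x] = p(x | theta_j) for x = 0..n.
    P = np.array([[comb(n, x) * th**x * (1.0 - th)**(n - x)
                   for x in range(n + 1)] for th in thetas])
    q = np.full(len(thetas), 1.0 / len(thetas))    # start from a uniform prior
    for _ in range(iters):
        px = q @ P                                  # marginal over outcomes
        with np.errstate(divide="ignore"):
            log_ratio = np.where(P > 0, np.log(P) - np.log(px), 0.0)
        kl = (P * log_ratio).sum(axis=1)            # D_KL(p(x|theta) || p(x))
        q = q * np.exp(kl)                          # reweight toward informative thetas
        q /= q.sum()
    return q

thetas = np.linspace(0.0, 1.0, 101)
q = optimal_prior(thetas, n=2)

# Mass concentrates on a few atoms, with the endpoints carrying real weight.
assert q[0] > 0.1 and q[-1] > 0.1
assert np.sort(q)[-9:].sum() > 0.9
```

With only n = 2 observations the optimizer keeps just a handful of distinguishable parameter values; as n grows the number of atoms increases and the prior approaches Jeffreys, the asymptotic limit the abstract warns is rarely reached in practice.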