Results 1 - 2 of 2
1.
Proc Natl Acad Sci U S A ; 121(28): e2320870121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38959033

ABSTRACT

Efficient storage and sharing of massive biomedical data would open them up to wide accessibility across institutions and disciplines. However, compressors tailored for natural photos/videos quickly reach their limits on biomedical data, while emerging deep-learning-based methods demand huge training sets and generalize poorly. Here, we propose to conduct Biomedical data compRession with Implicit nEural Function (BRIEF) by representing the target data with compact neural networks, which are data specific and thus have no generalization issues. Benefiting from the strong representation capability of implicit neural functions, BRIEF achieves 2 to 3 orders of magnitude compression on diverse biomedical data at significantly higher fidelity than existing techniques. Moreover, BRIEF performs consistently across the whole data volume and supports customized, spatially varying fidelity. BRIEF's multifold advantages also enable reliable downstream tasks at low bandwidth. Our approach will facilitate low-bandwidth data sharing and promote collaboration and progress in the biomedical field.


Subject(s)
Information Dissemination , Neural Networks, Computer , Humans , Information Dissemination/methods , Data Compression/methods , Deep Learning , Biomedical Research/methods
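The core mechanism the abstract describes, fitting a compact coordinate-to-value neural network to the data and storing only the network's weights, can be sketched in a few lines. The toy 1-D signal, the tiny hand-rolled NumPy MLP, and all sizes and hyperparameters below are illustrative assumptions, not BRIEF's actual architecture or training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a biomedical data volume: 256 samples of a 1-D trace.
x = np.linspace(0.0, 1.0, 256).reshape(-1, 1)
y = np.sin(2 * np.pi * x)

# Compact implicit neural function: a 1 -> 16 -> 1 tanh MLP mapping
# coordinates to values. Its parameters are the "compressed" representation.
H = 16
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)      # fidelity before fitting

lr = 0.05
for step in range(3000):
    h, pred = forward(x)
    err = pred - y
    # Manual backprop through the two layers (mean-squared-error loss).
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)        # fidelity after fitting

# "Compression": transmit the parameters instead of the raw samples.
n_params = W1.size + b1.size + W2.size + b2.size
ratio = y.size / n_params
print(f"MSE {loss0:.4f} -> {loss:.4f}; "
      f"{y.size} samples vs {n_params} params ({ratio:.1f}x smaller)")
```

Because the network is fit to one specific dataset, there is no train/test generalization gap to worry about, which is the abstract's point about data-specific representations; fidelity is traded off directly against network size.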
2.
Article in English | MEDLINE | ID: mdl-38728128

ABSTRACT

Despite their remarkable performance, deep neural networks remain mostly "black boxes", which suggests inexplicability and hinders their wide application in fields that require rational decision making. Here we introduce HOPE (High-order Polynomial Expansion), a method for expanding a network into a high-order Taylor polynomial around a reference input. Specifically, we derive the high-order derivative rule for composite functions and extend it to neural networks to obtain their high-order derivatives quickly and accurately. From these derivatives we then derive the Taylor polynomial of the neural network, which provides an explicit expression of the network's local interpretations. We combine the Taylor polynomials obtained at different reference inputs to obtain a global interpretation of the neural network. Numerical analysis confirms the high accuracy, low computational complexity, and good convergence of the proposed method. Moreover, we demonstrate HOPE's wide applicability to deep learning tasks, including function discovery, fast inference, and feature selection. We compared HOPE with other XAI methods and demonstrated its advantages. The code is available at https://github.com/HarryPotterXTX/HOPE.git.
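The idea of expanding a network into a Taylor polynomial at a reference input can be illustrated on the simplest possible case: a 1-in/1-out network f(x) = w2·tanh(w1·x + b1) + b2. Since tanh' = 1 - tanh², every derivative of tanh is a polynomial in t = tanh(u), so high-order derivatives, and hence exact Taylor coefficients, follow from a simple recursion rather than repeated numerical differentiation. This is a hedged sketch of the general composite-function derivative rule the abstract describes, not the paper's implementation; the weights and reference input are illustrative:

```python
import numpy as np

# Illustrative one-neuron "network": f(x) = w2 * tanh(w1 * x + b1) + b2
w1, b1, w2, b2 = 1.3, -0.4, 0.8, 0.2

def f(x):
    return w2 * np.tanh(w1 * x + b1) + b2

def tanh_derivative_polys(order):
    """Coefficients (low to high in t) of d^k tanh/du^k as polynomials in t = tanh(u)."""
    polys = [np.array([0.0, 1.0])]            # k = 0: tanh itself, P(t) = t
    for _ in range(order):
        p = polys[-1]
        dp = p[1:] * np.arange(1, len(p))     # dP/dt
        # Chain rule: d/du = (1 - t^2) * d/dt, i.e. dp - t^2 * dp.
        nxt = np.zeros(len(dp) + 2)
        nxt[: len(dp)] += dp
        nxt[2:] -= dp
        polys.append(nxt)
    return polys

def taylor_coeffs(x0, order):
    """Exact Taylor coefficients c_k so that f(x) ~ sum_k c_k (x - x0)^k."""
    t0 = np.tanh(w1 * x0 + b1)
    polys = tanh_derivative_polys(order)
    coeffs, fact = [], 1.0
    for k in range(order + 1):
        if k > 0:
            fact *= k
        # f^(k)(x0) = w2 * w1^k * (d^k tanh/du^k)(u0); the bias b2 enters at k = 0.
        deriv = w2 * w1**k * np.polyval(polys[k][::-1], t0)
        if k == 0:
            deriv += b2
        coeffs.append(deriv / fact)
    return np.array(coeffs)

# Expand at a reference input and check agreement with the network nearby.
x0 = 0.3
c = taylor_coeffs(x0, order=7)
xs = x0 + np.linspace(-0.2, 0.2, 9)
approx = sum(ck * (xs - x0) ** k for k, ck in enumerate(c))
print("max |f - Taylor| near x0:", np.max(np.abs(f(xs) - approx)))
```

The resulting coefficient vector is an explicit local description of the network around x0, the kind of object HOPE aggregates over multiple reference inputs to build a global interpretation; for real multi-layer, multi-input networks the paper's composite-function rule replaces this one-neuron recursion.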
