Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38442060

ABSTRACT

Neural networks are developed to model the behavior of the brain. One crucial question in this field pertains to when and how a neural network can memorize a given set of patterns. There are two mechanisms to store information: associative memory and sequential pattern recognition. In the case of associative memory, the neural network operates with dynamical attractors that are point attractors, each corresponding to one of the patterns to be stored within the network. In contrast, sequential pattern recognition involves the network memorizing a set of patterns and subsequently retrieving them in a specific order over time. From a dynamical perspective, this corresponds to the presence of a continuous attractor or a cyclic attractor composed of the sequence of patterns stored within the network in a given order. Evidence suggests that the brain is capable of simultaneously performing both associative memory and sequential pattern recognition. Therefore, these types of attractors coexist within the neural network, signifying that some patterns are stored as point attractors, while others are stored as continuous or cyclic attractors. This article investigates the coexistence of cyclic attractors and continuous or point attractors in certain nonlinear neural networks, enabling the simultaneous emergence of various memory mechanisms. By selectively grouping neurons, conditions are established for the existence of cyclic attractors, continuous attractors, and point attractors, respectively. Furthermore, each attractor is explicitly represented, and a competitive dynamic emerges among these coexisting attractors, primarily regulated by adjustments to external inputs.
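The associative-memory mechanism described above can be illustrated with a minimal Hopfield-style network, in which each stored pattern becomes a point attractor of the update dynamics. This is a hedged sketch of the general technique, not the specific nonlinear networks analyzed in the article; the pattern values, Hebbian weight rule, and `recall` function are illustrative choices.

```python
import numpy as np

# Two orthogonal +/-1 patterns to store as point attractors.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weights; zero the diagonal so no neuron drives itself.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Iterate the network; it settles on the nearest point attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

# Corrupt one bit of the first pattern; the dynamics repair it.
noisy = patterns[0].copy()
noisy[2] *= -1
print(recall(noisy))  # converges back to the stored pattern
```

Cyclic attractors, by contrast, would require an asymmetric weight matrix so that the state advances through the stored patterns in order rather than settling on one of them.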

2.
IEEE Trans Pattern Anal Mach Intell; 45(12): 14639-14652, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37695973

ABSTRACT

Despite the impressive results achieved by deep-learning-based 3D reconstruction, techniques for directly learning to model 4D human captures with detailed geometry have been less studied. This work presents a novel neural compositional representation for Human 4D Modeling with transformER (H4MER). Specifically, H4MER is a compact and compositional representation for dynamic humans that exploits the human-body prior from the widely used SMPL parametric model. Thus, H4MER can represent a dynamic 3D human over a temporal span with codes for shape, initial pose, motion, and auxiliaries. A simple yet effective linear motion model provides a rough, regularized motion estimate, followed by per-frame compensation of pose and geometry details with the residual encoded in the auxiliary codes. We present a novel Transformer-based feature extractor and conditional GRU decoder to facilitate learning and improve representation capability. Extensive experiments demonstrate that our method is not only effective in recovering dynamic humans with accurate motion and detailed geometry, but also amenable to various 4D-human-related tasks, including monocular video fitting, motion retargeting, 4D completion, and future prediction.
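The idea of a linear motion model plus per-frame residual compensation can be sketched as follows. This is an illustrative toy, not the paper's H4MER implementation: the pose dimensionality, the `linear_motion` / `compensated_pose` functions, and the residual values are all assumptions made for the example.

```python
import numpy as np

def linear_motion(pose0, velocity, t):
    """Coarse, regularized pose estimate: a linear function of frame index t."""
    return pose0 + t * velocity

def compensated_pose(pose0, velocity, residuals, t):
    """Coarse estimate plus the per-frame residual (analogous to the
    detail encoded in the auxiliary codes)."""
    return linear_motion(pose0, velocity, t) + residuals[t]

# Toy setup: a 3-DoF pose tracked over 4 frames.
pose0 = np.zeros(3)
velocity = np.array([0.1, 0.0, -0.05])
residuals = 0.01 * np.arange(12).reshape(4, 3)  # one small residual per frame

print(compensated_pose(pose0, velocity, residuals, 2))
```

The design point is that the linear term regularizes the trajectory globally, while the residuals stay small and only account for what the line cannot capture.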


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Motion; Linear Models