1. Article in English | MEDLINE | ID: mdl-37015405

ABSTRACT

Combining different kinds of sensory information to predict upcoming situations is an innate capability of intelligent beings, and various studies in the Artificial Intelligence field are currently being conducted to transfer this ability to artificial systems. Autonomous vehicles can particularly benefit from combining the multi-modal information produced by the agent's different sensors. This paper proposes a method for video-frame prediction that leverages odometric data and can then serve as a basis for anomaly detection. A Dynamic Bayesian Network framework is adopted, combined with Deep Learning methods to learn an appropriate latent space. First, a Markov Jump Particle Filter is built over the odometric data; this odometry model comprises a set of clusters. Second, the video model is learned: it consists of a Kalman Variational Autoencoder modified to leverage the odometry clusters, focusing its learning attention on features related to the dynamic tasks the vehicle is performing. We call the resulting overall model the Cluster-Guided Kalman Variational Autoencoder. Evaluation is conducted on data from a car moving in a closed environment [1] and on part of the University of Alcalá DriveSet dataset [2], in which several drivers drive along a secondary road in both normal and drowsy states.
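The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the two-stage idea described in the abstract: (1) derive a set of discrete odometry clusters (here a Gaussian mixture is used as a simple stand-in for the Markov Jump Particle Filter), and (2) condition a VAE-style frame model on the active cluster so that latent dynamics can differ per driving regime (a crude analogue of the Cluster-Guided Kalman Variational Autoencoder). All names, dimensions, and the conditioning mechanism (one latent transition per cluster) are hypothetical choices for illustration.

# Illustrative sketch only, NOT the paper's method. Assumptions: odometry is a
# (N, D) tensor such as [speed, yaw_rate]; frames are flattened grayscale images;
# a GaussianMixture stands in for the MJPF's discrete regimes; cluster-specific
# linear latent transitions stand in for cluster-guided switching dynamics.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

K = 4            # assumed number of odometry clusters
LATENT = 16      # assumed latent dimension
FRAME = 64 * 64  # assumed flattened frame size

# --- Stage 1: odometry clusters (stand-in for the MJPF's discrete states) ---
def fit_odometry_clusters(odometry: torch.Tensor) -> GaussianMixture:
    """Fit K Gaussian clusters over low-dimensional odometric samples."""
    gmm = GaussianMixture(n_components=K, covariance_type="full", random_state=0)
    gmm.fit(odometry.numpy())
    return gmm

# --- Stage 2: cluster-conditioned frame VAE (stand-in for the CG-KVAE) ---
class ClusterConditionedVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(FRAME + K, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, LATENT)
        self.to_logvar = nn.Linear(256, LATENT)
        # one latent-transition matrix per odometry cluster, so the predicted
        # dynamics depend on the driving regime the vehicle is in
        self.transitions = nn.ModuleList(
            [nn.Linear(LATENT, LATENT) for _ in range(K)])
        self.decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                     nn.Linear(256, FRAME), nn.Sigmoid())

    def forward(self, frame, cluster_onehot):
        # encode the current frame together with its odometry-cluster label
        h = self.encoder(torch.cat([frame, cluster_onehot], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # predict the next latent state with the active cluster's transition
        k = cluster_onehot.argmax(dim=-1)
        z_next = torch.stack(
            [self.transitions[int(k[i])](z[i]) for i in range(z.size(0))])
        return self.decoder(z_next), mu, logvar

# Minimal usage with random stand-in data (all shapes are assumptions):
odo = torch.randn(500, 2)                        # e.g. [speed, yaw_rate]
gmm = fit_odometry_clusters(odo)
labels = torch.as_tensor(gmm.predict(odo.numpy()))
onehot = torch.nn.functional.one_hot(labels, K).float()
frames = torch.rand(500, FRAME)
model = ClusterConditionedVAE()
pred, mu, logvar = model(frames[:8], onehot[:8])

A training loop would add the usual VAE evidence lower bound (reconstruction term against the next frame plus a KL term on the latent), which is omitted here for brevity.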
