ABSTRACT
We propose a memory-enhanced multi-stage goal-driven network (ME-MGNet) for egocentric trajectory prediction in dynamic scenes. Our key idea is to build a scene layout memory, inspired by human perception, that transfers knowledge from prior experiences to the current scenario in a top-down manner. Specifically, given a test scene, we first perform scene-level matching against the scene layout memory to retrieve trajectories from visually similar scenes in the training data. This is followed by trajectory-level matching and memory filtering to obtain a set of goal features. A multi-stage goal generator then takes these goal features and uses a backward decoder to produce several stage goals. Finally, we integrate the above components with a conditional autoencoder and a forward decoder to produce the final trajectory predictions. Experiments on three public datasets, JAAD, PIE, and KITTI, as well as a new egocentric trajectory prediction dataset, Fuzhou DashCam (FZDC), validate the efficacy of the proposed method.
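To make the described pipeline concrete, the following is a minimal sketch of the retrieve-then-predict flow, not the authors' implementation: all names (`cosine_topk`, `traj_feature`, `goal_generator`, `cvae_encoder`, `forward_decoder`) are hypothetical placeholders, the memory layout is assumed, and simple cosine top-k retrieval stands in for the paper's scene matching and memory filtering.

```python
import numpy as np

def cosine_topk(query, keys, k):
    """Return indices of the k rows in `keys` most cosine-similar to `query`."""
    q = query / (np.linalg.norm(query) + 1e-8)
    K = keys / (np.linalg.norm(keys, axis=1, keepdims=True) + 1e-8)
    return np.argsort(-(K @ q))[:k]

def traj_feature(past_traj):
    """Toy trajectory descriptor: flattened frame-to-frame displacements."""
    return np.diff(past_traj, axis=0).ravel()

def predict_trajectory(scene_feat, past_traj, memory,
                       goal_generator, cvae_encoder, forward_decoder,
                       k_scenes=5, k_goals=10):
    # 1) Scene-level matching: retrieve training scenes whose layout
    #    features are closest to the test scene.
    scene_ids = cosine_topk(scene_feat, memory["scene_feats"], k_scenes)

    # 2) Trajectory-level matching over the retrieved scenes, with a
    #    simple top-k standing in for memory filtering, yields goal features.
    candidates = np.concatenate([memory["traj_feats"][i] for i in scene_ids])
    goal_ids = cosine_topk(traj_feature(past_traj), candidates, k_goals)
    goal_feats = candidates[goal_ids]

    # 3) The multi-stage goal generator's backward decoder produces stage
    #    goals from the final goal back toward the current position.
    stage_goals = goal_generator(goal_feats, past_traj)

    # 4) A conditional autoencoder and a forward decoder roll out the
    #    predicted trajectory, conditioned on the stage goals.
    z = cvae_encoder(past_traj, stage_goals)
    return forward_decoder(z, stage_goals, past_traj)
```

In this sketch, `goal_generator`, `cvae_encoder`, and `forward_decoder` are assumed to be learned networks passed in as callables; the two-level retrieval (scenes first, then trajectories within those scenes) mirrors the top-down transfer described above.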