Results 1 - 3 of 3
1.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15219-15232, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37578915

ABSTRACT

Neuromorphic cameras are an emerging imaging technology with advantages over conventional imaging sensors in several aspects, including dynamic range, sensing latency, and power consumption. However, their signal-to-noise level and spatial resolution still fall behind those of conventional imaging sensors. In this article, we address the denoising and super-resolution problem for modern neuromorphic cameras, employing a 3D U-Net as the backbone neural architecture. The networks are trained and tested on two types of neuromorphic cameras, a dynamic vision sensor and a spike camera, whose pixels generate signals asynchronously: the former responds to perceived light changes, the latter to accumulated light intensity. To collect the datasets for training such networks, we design a display-camera system that records high frame-rate videos at multiple resolutions, providing supervision for denoising and super-resolution. The networks are trained in a noise-to-noise fashion, where both the input and the target of the network are unfiltered noisy data. The output of the networks is evaluated on downstream applications including event-based visual object tracking and image reconstruction. Experimental results demonstrate the effectiveness of improving the quality of neuromorphic events and spikes, and the corresponding improvement to downstream applications with state-of-the-art performance.
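
The noise-to-noise training scheme described in this abstract can be summarized in a few lines: the network maps one unfiltered noisy observation to another, independent noisy observation of the same scene, so no clean target is ever required. Below is a minimal PyTorch sketch of that idea; the tiny 3D encoder-decoder is only a stand-in for the paper's 3D U-Net backbone, and the dummy tensors stand in for voxelized event data (the (B, C, T, H, W) layout is an assumption, not taken from the paper).

import torch
import torch.nn as nn

class Tiny3DUNet(nn.Module):
    """Stand-in for the paper's 3D U-Net backbone (one down/up level only)."""
    def __init__(self, ch=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

def noise2noise_step(model, opt, noisy_a, noisy_b):
    """One noise-to-noise update: both the input and the regression
    target are unfiltered noisy observations of the same signal."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy_a), noisy_b)
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = Tiny3DUNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Two independent noisy recordings of the same scene (dummy data here).
    a = torch.randn(2, 1, 8, 32, 32)
    b = a + 0.1 * torch.randn_like(a)
    print(noise2noise_step(model, opt, a, b))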

2.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8553-8565, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37022447

ABSTRACT

Reconstruction of a high dynamic range image from a single low dynamic range image captured by a conventional RGB camera, which suffers from over- or under-exposure, is an ill-posed problem. In contrast, recent neuromorphic cameras such as the event camera and the spike camera can record high dynamic range scenes in the form of intensity maps, but with much lower spatial resolution and no color information. In this article, we propose a hybrid imaging system (denoted NeurImg) that captures and fuses visual information from a neuromorphic camera with ordinary images from an RGB camera to reconstruct high-quality high dynamic range images and videos. The proposed NeurImg-HDR+ network consists of specially designed modules that bridge the domain gaps in resolution, dynamic range, and color representation between the two types of sensors and images to reconstruct high-resolution, high dynamic range images and videos. We capture a test dataset of hybrid signals on various HDR scenes using the hybrid camera, and analyze the advantages of the proposed fusion strategy by comparing it to state-of-the-art inverse tone mapping methods and to approaches that merge two low dynamic range images. Quantitative and qualitative experiments on both synthetic data and real-world scenarios demonstrate the effectiveness of the proposed hybrid high dynamic range imaging system. Code and dataset can be found at: https://github.com/hjynwa/NeurImg-HDR.
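
The core fusion idea, bridging the resolution and color gaps between the two sensors, can be illustrated with a short sketch. The module below upsamples a low-resolution single-channel intensity map to the RGB frame's resolution and fuses the two feature streams into an HDR estimate. This is only an illustration of the general hybrid-fusion approach, not the NeurImg-HDR+ architecture; all module names and channel sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridHDRFusion(nn.Module):
    """Toy two-branch fusion of an LDR RGB frame and a neuromorphic
    intensity map into a single HDR estimate."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.neuro_branch = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # HDR estimate, 3 channels
        )
    def forward(self, ldr_rgb, intensity_map):
        # Bridge the resolution gap: upsample the low-res intensity map
        # to the RGB frame's resolution before feature fusion.
        up = F.interpolate(intensity_map, size=ldr_rgb.shape[-2:],
                           mode="bilinear", align_corners=False)
        feats = torch.cat([self.rgb_branch(ldr_rgb), self.neuro_branch(up)], dim=1)
        return self.fuse(feats)

# Usage on dummy inputs: a 128x128 RGB frame and a 32x32 intensity map.
hdr = HybridHDRFusion()(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 32, 32))
print(hdr.shape)  # torch.Size([1, 3, 128, 128])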

3.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 8261-8275, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34543190

ABSTRACT

Many visual and robotics tasks in real-world scenarios rely on robust handling of high-speed motion and high dynamic range (HDR) with effectively high spatial resolution and low noise. Such stringent requirements, however, cannot be satisfied by a single imager or imaging modality; they call for multi-modal sensors with complementary advantages. In this paper, we address high-performance imaging by exploring the synergy between traditional frame-based sensors, which offer high spatial resolution and low sensor noise, and emerging event-based sensors, which offer high speed and high dynamic range. We introduce a novel computational framework, termed Guided Event Filtering (GEF), that processes these two streams of input data and outputs a stream of super-resolved yet noise-reduced events. To generate high-quality events, GEF first registers the captured noisy events onto the guidance image plane according to our flow model. It then performs joint image filtering that inherits the mutual structure from both inputs. Lastly, GEF re-distributes the filtered event frame in the space-time volume while preserving the statistical characteristics of the original events. When the guidance images under-perform, GEF falls back on an event self-guiding mechanism that uses neighboring events for guidance. We demonstrate the benefits of GEF by applying the output high-quality events to existing event-based algorithms across diverse application categories, including high-speed object tracking, depth estimation, high frame-rate video synthesis, and super-resolution/HDR/color image restoration.
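
The joint image filtering step at the heart of GEF can be illustrated with a classic guided image filter (He et al.), which smooths an accumulated event-count frame while inheriting edge structure from the frame-based guidance image. The sketch below covers only this filtering step; the flow-based registration and space-time redistribution stages are omitted, and the window radius and epsilon are assumed values, not taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Filter `src` (event-count frame) guided by `guide` (intensity image).
    The output is locally linear in the guidance, so it inherits the
    guidance image's edge structure while smoothing noise in `src`."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1, mode="reflect")
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)      # per-window linear coefficients
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)  # averaged coefficients applied to guide

# Toy usage: a noisy synthetic event frame filtered by a clean guidance image.
guide = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic gradient
events = guide + 0.3 * np.random.randn(64, 64)        # noisy event frame
print(guided_filter(guide, events).std() < events.std())  # noise is reduced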
