Results 1 - 4 of 4
1.
Opt Express ; 28(8): 12108-12120, 2020 Apr 13.
Article in English | MEDLINE | ID: mdl-32403711

ABSTRACT

Light field microscopy (LFM) is an emerging technology for high-speed wide-field 3D imaging that captures the 4D light field of a 3D volume in a single shot. However, its 3D imaging capability comes at the cost of lateral resolution, and in light field deconvolution reconstructions the lateral resolution is not uniform across depth. To address these problems, we propose a snapshot multifocal light field microscopy (MFLFM) imaging method. The underlying concept of the MFLFM is to collect multiple focally shifted light fields simultaneously. We show that focal stacking of those shifted light fields further extends the depth of field (DOF) of the LFM without sacrificing lateral resolution, and that when all differently focused light fields are used jointly in the deconvolution, the MFLFM achieves a high and uniform lateral resolution over a larger DOF. We present a home-built MFLFM system that places a diffractive optical element at the Fourier plane of a conventional LFM, and we analyze its optical performance. Both simulations and proof-of-principle experimental results demonstrate the effectiveness and benefits of the MFLFM. We believe the proposed snapshot MFLFM has the potential to enable high-speed, high-resolution 3D imaging applications.
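
A minimal numpy sketch of the focal-stacking idea follows: each depth slice of the merged volume is taken from the reconstruction whose focal shift lies closest to that depth. The function name and the nearest-plane selection rule are illustrative assumptions; the actual MFLFM instead deconvolves all focally shifted light fields jointly with the system PSF, which this toy rule does not capture.

    import numpy as np

    def focal_stack(volumes, focal_shifts_um, z_coords_um):
        """Toy focal stacking of focally shifted reconstructions.

        volumes         : list of 3D arrays (z, y, x), one per focal shift
        focal_shifts_um : axial focus position of each light field (um)
        z_coords_um     : axial coordinate of each z-slice (um)
        """
        shifts = np.asarray(focal_shifts_um, dtype=float)
        out = np.empty_like(volumes[0])
        for iz, z in enumerate(z_coords_um):
            best = int(np.argmin(np.abs(shifts - z)))  # nearest focal plane
            out[iz] = volumes[best][iz]                # keep its in-focus slice
        return out

    # Toy usage: three reconstructions of an 11-slice volume with focal
    # shifts at -5, 0, and +5 um; slices are spaced 1 um apart.
    vols = [np.random.rand(11, 64, 64) for _ in range(3)]
    merged = focal_stack(vols, [-5.0, 0.0, 5.0], np.arange(-5.0, 6.0))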

2.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15219-15232, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37578915

ABSTRACT

Neuromorphic cameras are an emerging imaging technology with advantages over conventional imaging sensors in several respects, including dynamic range, sensing latency, and power consumption. However, their signal-to-noise level and spatial resolution still fall behind those of conventional imaging sensors. In this article, we address the denoising and super-resolution problem for modern neuromorphic cameras, employing a 3D U-Net as the backbone neural architecture. The networks are trained and tested on two types of neuromorphic cameras, a dynamic vision sensor and a spike camera, whose pixels generate signals asynchronously: the former responds to perceived light changes, while the latter accumulates light intensity. To collect datasets for training such networks, we design a display-camera system that records high frame-rate videos at multiple resolutions, providing supervision for denoising and super-resolution. The networks are trained in a noise-to-noise fashion, where both ends of the network are unfiltered noisy data. The output of the networks is tested on downstream applications, including event-based visual object tracking and image reconstruction. Experimental results demonstrate that the networks improve the quality of neuromorphic events and spikes, and that this yields corresponding improvements in the downstream applications, with state-of-the-art performance.
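
The noise-to-noise training described above can be sketched in a few lines of PyTorch: both the input and the target are independent noisy observations of the same scene, so no clean ground truth is required. The tiny convolutional model below is a stand-in for the paper's 3D U-Net backbone, whose exact architecture is not given here; all shapes and names are assumptions.

    import torch
    import torch.nn as nn

    # Stand-in for the 3D U-Net backbone; any voxel-to-voxel 3D network
    # fits the same training loop.
    model = nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def noise2noise_step(noisy_a, noisy_b):
        """One step: map one unfiltered noisy tensor to another noisy
        observation of the same scene (noise-to-noise supervision)."""
        opt.zero_grad()
        loss = loss_fn(model(noisy_a), noisy_b)
        loss.backward()
        opt.step()
        return loss.item()

    # Toy usage with random stand-in data, shape (batch, ch, t, h, w):
    a, b = torch.randn(2, 1, 8, 32, 32), torch.randn(2, 1, 8, 32, 32)
    print(noise2noise_step(a, b))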

3.
Article in English | MEDLINE | ID: mdl-36315535

ABSTRACT

We present a novel adaptive multimodal intensity-event algorithm that optimizes an overall object-tracking objective under bit-rate constraints for a host-chip architecture. The chip is a computationally resource-constrained device that acquires high-resolution intensity frames and events, while the host is capable of performing computationally expensive tasks. We develop a joint intensity-neuromorphic event rate-distortion compression framework with a quadtree (QT)-based scheme for compressing the intensity and events. The goal of this framework is to optimally allocate bits between the intensity frames and the neuromorphic events so that distortion is minimized at a given communication channel capacity. Data acquisition on the chip is driven by the presence of objects of interest in the scene, as detected by an object detector. The most informative intensity and event data are communicated to the host under the rate constraints so that the best possible tracking performance is obtained. Detection and tracking of objects in the scene are performed on the distorted data at the host. Intensity and events are jointly used in a fusion framework to enhance the quality of the distorted images and thereby improve object detection and tracking performance. The overall system is assessed in terms of the multiple object tracking accuracy (MOTA) score; compared with using the intensity modality alone, using both modalities improves MOTA across different scenarios.
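
One standard way to realize "minimum distortion at a given channel capacity" is Lagrangian rate-distortion optimization: for a multiplier lambda, every block (for example, a quadtree leaf of intensity or event data) independently picks the operating point minimizing D + lambda*R, and lambda is bisected until the total rate meets the budget. The Python sketch below shows that generic mechanism with made-up numbers; it is an assumption about the optimization machinery, not the paper's exact algorithm.

    def pick_points(blocks, lam):
        """Each block is a list of (bits, distortion) operating points;
        for a given multiplier, minimize distortion + lam * bits."""
        return [min(pts, key=lambda p: p[1] + lam * p[0]) for pts in blocks]

    def allocate(blocks, budget_bits, iters=50):
        """Bisect the multiplier until the total rate fits the budget."""
        lo, hi = 0.0, 1e9
        for _ in range(iters):
            lam = 0.5 * (lo + hi)
            rate = sum(p[0] for p in pick_points(blocks, lam))
            if rate > budget_bits:
                lo = lam   # over budget: penalize rate more heavily
            else:
                hi = lam   # feasible: try a smaller penalty
        return pick_points(blocks, hi)

    # Toy usage: two intensity blocks and one event block; the
    # (bits, distortion) points are purely illustrative.
    blocks = [
        [(100, 9.0), (400, 3.0), (900, 1.0)],
        [(100, 5.0), (400, 2.5), (900, 0.8)],
        [(50, 4.0), (200, 1.5)],
    ]
    print(allocate(blocks, budget_bits=800))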

4.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 8261-8275, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34543190

ABSTRACT

Many visual and robotics tasks in real-world scenarios rely on robust handling of high-speed motion and high dynamic range (HDR) with effectively high spatial resolution and low noise. Such stringent requirements cannot be satisfied by a single imager or imaging modality; rather, they call for multimodal sensors with complementary advantages. In this paper, we address high-performance imaging by exploring the synergy between traditional frame-based sensors, which offer high spatial resolution and low sensor noise, and emerging event-based sensors, which offer high speed and high dynamic range. We introduce a novel computational framework, termed Guided Event Filtering (GEF), that processes these two input streams and outputs a stream of super-resolved yet noise-reduced events. To generate high-quality events, GEF first registers the captured noisy events onto the guidance image plane according to our flow model. It then performs joint image filtering that inherits the mutual structure of both inputs. Lastly, GEF redistributes the filtered event frame in the space-time volume while preserving the statistical characteristics of the original events. When the guidance images underperform, GEF falls back on an event self-guiding mechanism that uses neighboring events for guidance. We demonstrate the benefits of GEF by applying the output high-quality events to existing event-based algorithms across diverse application categories, including high-speed object tracking, depth estimation, high frame-rate video synthesis, and super-resolution/HDR/color image restoration.
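
The joint-filtering step can be illustrated with the classic guided image filter (He et al.), which smooths a noisy event-count frame while inheriting edge structure from the intensity image through a local linear model. This is a simplified stand-in for GEF's joint filtering only; it omits the flow-based registration and the space-time redistribution steps described above.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-3):
        """Smooth `src` (an event-count frame) guided by `guide`
        (the intensity image) via the local model src ~ a*guide + b."""
        w = 2 * radius + 1
        mean_g = uniform_filter(guide, w)
        mean_s = uniform_filter(src, w)
        var_g = uniform_filter(guide * guide, w) - mean_g * mean_g
        cov_gs = uniform_filter(guide * src, w) - mean_g * mean_s
        a = cov_gs / (var_g + eps)   # follow guide edges where they correlate
        b = mean_s - a * mean_g
        return uniform_filter(a, w) * guide + uniform_filter(b, w)

    # Toy usage: a random intensity guide and a Poisson event-count frame.
    guide = np.random.rand(64, 64)
    events = np.random.poisson(2.0, (64, 64)).astype(float)
    filtered = guided_filter(guide, events)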
