Results 1 - 20 of 21
1.
Opt Express ; 32(5): 7731-7761, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38439448

ABSTRACT

Non-line-of-sight (NLOS) imaging systems involve the measurement of an optical signal at a diffuse surface. A forward model encodes the physics of these measurements mathematically and can be inverted to generate a reconstruction of the hidden scene. Some existing NLOS imaging techniques rely on illuminating the diffuse surface and measuring the photon time of flight (ToF) of multi-bounce light paths. Alternatively, some methods depend on measuring high-frequency variations caused by shadows cast by occluders in the hidden scene. While forward models for ToF-NLOS and Shadow-NLOS have been developed separately, there has been limited work on unifying these two imaging modalities. Dove et al. introduced a unified mathematical framework capable of modeling both imaging techniques [Opt. Express 27, 18016 (2019); doi:10.1364/OE.27.018016]. The authors utilize this general forward model, known as the two-frequency spatial Wigner distribution (TFSWD), to discuss the implications of reconstruction resolution for combining the two modalities, but only when the occluder geometry is known a priori. In this work, we develop a graphical representation of the TFSWD forward model and apply it to novel experimental setups with potential applications in NLOS imaging. Furthermore, we use this unified framework to explore the potential of combining these two imaging modalities in situations where the occluder geometry is not known in advance.

2.
J Biomed Opt ; 28(6): 066502, 2023 06.
Article in English | MEDLINE | ID: mdl-37351197

ABSTRACT

Significance: Fluorescence lifetime imaging microscopy (FLIM) of the metabolic co-enzyme nicotinamide adenine dinucleotide (phosphate) [NAD(P)H] is a popular method to monitor single-cell metabolism within unperturbed, living 3D systems. However, FLIM of NAD(P)H has not been performed in a light-sheet geometry, which is advantageous for rapid imaging of cells within live 3D samples. Aim: We aim to design, validate, and demonstrate a proof-of-concept light-sheet system for NAD(P)H FLIM. Approach: A single-photon avalanche diode camera was integrated into a light-sheet microscope to achieve optical sectioning and limit out-of-focus contributions for NAD(P)H FLIM of single cells. Results: An NAD(P)H light-sheet FLIM system was built and validated with fluorescence lifetime standards and with time-course imaging of metabolic perturbations in pancreas cancer cells with 10 s integration times. NAD(P)H light-sheet FLIM in vivo was demonstrated with live neutrophil imaging in a larval zebrafish tail wound, also with 10 s integration times. Finally, the theoretical and practical imaging speeds for NAD(P)H FLIM were compared across laser scanning and light-sheet geometries, indicating a 30× to 6× acquisition speed advantage for the light sheet compared to the laser scanning geometry. Conclusions: FLIM of NAD(P)H is feasible in a light-sheet geometry and is attractive for 3D live cell imaging applications, such as monitoring immune cell metabolism and migration within an organism.


Subjects
NAD; Pancreatic Neoplasms; Animals; NAD/metabolism; Zebrafish; Microscopy, Fluorescence/methods; Photons; Optical Imaging/methods
3.
bioRxiv ; 2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36778488

ABSTRACT

Single photon avalanche diode (SPAD) array sensors can increase the imaging speed for fluorescence lifetime imaging microscopy (FLIM) by transitioning from laser scanning to widefield geometries. While a SPAD camera in epi-fluorescence geometry enables widefield FLIM of fluorescently labeled samples, label-free imaging of single-cell autofluorescence is not feasible in an epi-fluorescence geometry because background fluorescence from out-of-focus features masks weak cell autofluorescence and biases lifetime measurements. Here, we address this problem by integrating the SPAD camera in a light sheet illumination geometry to achieve optical sectioning and limit out-of-focus contributions, enabling fast label-free FLIM of single-cell NAD(P)H autofluorescence. The feasibility of this NAD(P)H light sheet FLIM system was confirmed with time-course imaging of metabolic perturbations in pancreas cancer cells with 10 s integration times, and in vivo NAD(P)H light sheet FLIM was demonstrated with live neutrophil imaging in a zebrafish tail wound, also with 10 s integration times. Finally, the theoretical and practical imaging speeds for NAD(P)H FLIM were compared across laser scanning and light sheet geometries, indicating a 30× to 6× frame rate advantage for the light sheet compared to the laser scanning geometry. This light sheet system provides faster frame rates for 3D NAD(P)H FLIM for live cell imaging applications such as monitoring single cell metabolism and immune cell migration throughout an entire living organism.

4.
Article in English | MEDLINE | ID: mdl-36049012

ABSTRACT

Computational approaches to imaging around the corner, or non-line-of-sight (NLOS) imaging, are becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent development towards practical NLOS imaging, Nam et al. [1] demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100× faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge the gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well on real captures despite being exclusively trained on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained using diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, capable of processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, along with the fast capturing system, will accelerate the future development of NLOS imaging for real-world applications that require high-speed imaging.

5.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7841-7853, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34613909

ABSTRACT

Non-Line-of-Sight (NLOS) imaging reconstructs occluded scenes from indirect diffuse reflections. The computational complexity and memory consumption of existing NLOS reconstruction algorithms make them challenging to implement in real time. This paper presents a fast and memory-efficient phasor-field diffraction-based NLOS reconstruction algorithm. In the proposed algorithm, the radial symmetry of the Rayleigh-Sommerfeld diffraction (RSD) kernels, together with the linearity of the Fourier transform, is exploited to reconstruct the Fourier-domain representations of the RSD kernels from a set of kernel bases. Moreover, memory consumption is further reduced by sampling the kernel bases in the radial direction and constructing them at run time. According to the analysis, memory efficiency can be improved by as much as 220×. Experimental results show that, compared with the original RSD algorithm, the reconstruction time of the proposed algorithm is significantly reduced with little impact on the final imaging quality.
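The kernel-base idea rests on the radial symmetry of the RSD kernel. As an illustration only (this is not the paper's algorithm, and the grid parameters are made up), the sketch below stores the kernel exp(ikr)/r once per distinct radius and scatters it back onto a 2-D grid:

```python
import numpy as np

def radial_kernel_2d(n, dx, wavelength, z):
    """Rebuild the n x n Rayleigh-Sommerfeld kernel exp(1j*k*r)/r at plane
    distance z from a 1-D table indexed by radius, exploiting the kernel's
    circular symmetry so only the distinct radii are stored."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n // 2) * dx
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    r = np.round(np.sqrt(xx**2 + yy**2 + z**2), 12)
    radii = np.unique(r)                      # 1-D table of distinct distances
    table = np.exp(1j * k * radii) / radii    # kernel sampled once per radius
    return table[np.searchsorted(radii, r)]   # scatter back to the 2-D grid
```

For an n×n grid the table holds far fewer entries than n², and the same table serves every kernel evaluation at that plane distance; the 220× figure in the abstract refers to the authors' full scheme, not this toy.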

6.
Nat Commun ; 12(1): 6526, 2021 Nov 11.
Article in English | MEDLINE | ID: mdl-34764273

ABSTRACT

Non-Line-Of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method combining specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm that captures and reconstructs live low-latency videos of non-line-of-sight scenes with natural non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that the signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.

7.
Opt Express ; 29(4): 4733-4745, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33726023

ABSTRACT

The development of single-photon counting detectors and arrays has taken tremendous steps in recent years, not least because of various new applications, e.g., LIDAR devices. In this work, a 3D imaging device based on intensity interferometry of real thermal light is presented. Using gated SPAD technology, a basic 3D scene is imaged in a reasonable measurement time. Compared to conventional approaches, the proposed synchronized photon counting allows the use of more light modes to enhance 3D ranging performance. Advantages such as robustness to atmospheric scattering, and autonomy gained by exploiting external light sources, can make this ranging approach interesting for future applications.

8.
Nat Commun ; 11(1): 1645, 2020 04 02.
Article in English | MEDLINE | ID: mdl-32242010

ABSTRACT

Non-line-of-sight (NLOS) imaging recovers objects from diffusely reflected indirect light, using transient illumination devices in combination with a computational inverse method. While capture systems capable of collecting light from the entire NLOS relay surface can be much more light efficient than single-pixel point-scanning detection, current reconstruction algorithms for such systems have computational and memory requirements that prevent real-time NLOS imaging. Existing real-time demonstrations also use retroreflective targets and reconstruct at resolutions far below the hardware limits. The method presented here enables the reconstruction of room-sized scenes from non-confocal, parallel multi-pixel measurements in seconds with reduced memory usage. We anticipate that our method will enable real-time NLOS imaging when used with emerging single-photon avalanche diode array detectors, with resolution limited only by the temporal resolution of the sensor.

9.
Opt Express ; 28(4): 5331-5339, 2020 Feb 17.
Article in English | MEDLINE | ID: mdl-32121756

ABSTRACT

The non-line-of-sight (NLOS) imaging problem has attracted considerable interest in recent years. The objective is to produce images of objects that are hidden around a corner, using the information encoded in the time of flight (ToF) of photons that scatter multiple times after incidence at a given relay surface. Most current methods assume a Lambertian, flat, and static relay surface, with non-moving targets in the hidden scene. Here we show NLOS reconstructions for a relay surface that is non-planar and rapidly changing during data acquisition. Our NLOS imaging system exploits two different detectors to collect the ToF data: one pertaining to the relay surface and another capturing the ToF information of the hidden scene. The system is then able to determine where on the relay surface the multiply scattered photons originated. This step allows us to account for changing relay positions in the reconstruction algorithm. Results show that the reconstructions for a dynamic relay surface are similar to those obtained using a traditional non-dynamic relay surface.

10.
J Biomed Opt ; 25(1): 1-17, 2019 12.
Article in English | MEDLINE | ID: mdl-31833280

ABSTRACT

The excited state lifetime of a fluorophore together with its fluorescence emission spectrum provide information that can yield valuable insights into the nature of a fluorophore and its microenvironment. However, it is difficult to obtain both channels of information in a conventional scheme as detectors are typically configured either for spectral or lifetime detection. We present a fiber-based method to obtain spectral information from a multiphoton fluorescence lifetime imaging (FLIM) system. This is made possible using the time delay introduced in the fluorescence emission path by a dispersive optical fiber coupled to a detector operating in time-correlated single-photon counting mode. This add-on spectral implementation requires only a few simple modifications to any existing FLIM system and is considerably more cost-efficient compared to currently available spectral detectors.


Subjects
Microscopy, Fluorescence, Multiphoton/instrumentation; Optical Fibers; Optical Imaging/instrumentation; Animals; Cattle; Cells, Cultured; Endothelial Cells/cytology; Endothelial Cells/metabolism; Equipment Design; Fluorescent Dyes; Microscopy, Fluorescence, Multiphoton/statistics & numerical data; Optical Imaging/statistics & numerical data; Optical Phenomena
11.
Opt Express ; 27(22): 32587-32608, 2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31684468

ABSTRACT

Time-of-flight (ToF) non-line-of-sight (NLoS) imaging reconstructs images of scenes with light that has undergone diffuse reflections. While ToF light propagation and reconstruction methods have in the past been described by their own dedicated inverse methods, it has recently been shown that ToF light transport can be described as the propagation of a wave, allowing it to be modeled by the same methods that are applied for direct imaging with electromagnetic or sound waves. This wave of fluctuating optical irradiance is called the phasor field (P-field) wave. Here, we perform a series of experiments to show the wave-like behavior of this P-field wave. We design a P-field source and detector and use them to demonstrate interference of P-field waves in a double-slit experiment, as well as mutually independent focusing and imaging of P-field waves and their optical carrier. Besides establishing the properties of P-field waves, our work demonstrates that imaging of ToF signals is possible without any computation, enabling fast and energy-efficient NLoS imaging systems.

12.
Opt Express ; 27(20): 29380-29400, 2019 Sep 30.
Article in English | MEDLINE | ID: mdl-31684674

ABSTRACT

Non-line-of-sight (NLOS) imaging has recently attracted a lot of interest from the scientific community. The goal of this paper is to provide the basis for a comprehensive mathematical framework for NLOS imaging that is directly derived from physical concepts. We introduce the irradiance phasor field (P-field) as an abstract quantity for irradiance fluctuations, akin to the complex envelope of the electric field (E-field) that is used to describe the propagation of electromagnetic energy. We demonstrate that the P-field propagator is analogous to the Huygens-Fresnel propagator that describes the propagation of other waves, and show that NLOS light transport can be described with the processing methods that are available for LOS imaging. We perform simulations to demonstrate the accuracy and validity of the P-field formulation and provide experimental results to demonstrate a Huygens-like P-field summation behavior.

13.
Nature ; 572(7771): 620-623, 2019 08.
Article in English | MEDLINE | ID: mdl-31384042

ABSTRACT

Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications [1-9], existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, relying only on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh-Sommerfeld diffraction integral. Fast solutions to Rayleigh-Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
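At the core of the phasor-field formulation is an ordinary diffraction integral. The sketch below is a deliberately naive direct summation of the Rayleigh-Sommerfeld kernel between two parallel planes (constant and obliquity factors dropped, all parameters invented); the paper instead relies on fast solvers of this integral:

```python
import numpy as np

def rsd_propagate(u0, wavelength, z, dx):
    """Propagate complex field u0 to a parallel plane at distance z by direct
    summation of the Rayleigh-Sommerfeld kernel exp(1j*k*r)/r (obliquity and
    constant factors dropped for brevity; O(N^4), for illustration only)."""
    k = 2 * np.pi / wavelength
    n = u0.shape[0]
    coords = (np.arange(n) - n // 2) * dx
    xs, ys = np.meshgrid(coords, coords, indexing="ij")   # source-plane grid
    u1 = np.zeros_like(u0, dtype=complex)
    for i in range(n):
        for j in range(n):                                # destination pixel
            r = np.sqrt((xs - coords[i])**2 + (ys - coords[j])**2 + z**2)
            u1[i, j] = np.sum(u0 * np.exp(1j * k * r) / r)
    return u1
```

Propagating a point source this way reproduces the expected spherical 1/r falloff; practical implementations use FFT-based convolution to avoid the O(N⁴) cost.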

14.
IEEE Trans Pattern Anal Mach Intell ; 41(7): 1615-1626, 2019 Jul.
Article in English | MEDLINE | ID: mdl-29993536

ABSTRACT

Recent advances in computer vision and inverse light transport theory have resulted in several non-line-of-sight imaging techniques. These techniques use photon time-of-flight information encoded in light after multiple, diffuse reflections to reconstruct a three-dimensional scene. In this paper, we propose and describe two iterative backprojection algorithms, additive error backprojection (AEB) and multiplicative error backprojection (MEB), which aim to improve the reconstruction of the scene under investigation over non-iterative backprojection algorithms. We evaluate the proposed algorithms' performance on simulated and real data (gathered from an experimental setup in which the system must reconstruct an unknown scene). Results show that the proposed iterative algorithms provide better reconstructions than the unfiltered, non-iterative backprojection algorithm for both simulated and physical scenes, but are more sensitive to errors in the light transport model.
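The paper defines its own AEB and MEB update rules; as a generic stand-in for the multiplicative family, the Richardson-Lucy-style sketch below (toy dimensions, not the authors' operators) iterates a nonnegative estimate by backprojecting the ratio of measured to predicted data:

```python
import numpy as np

def multiplicative_backprojection(A, b, iters=500):
    """Toy multiplicative error-feedback iteration (Richardson-Lucy style):
    scale the nonnegative estimate x by the backprojected ratio of measured
    to predicted data until A @ x approaches b."""
    x = np.ones(A.shape[1])
    norm = A.T @ np.ones(A.shape[0])          # backprojection of all-ones
    for _ in range(iters):
        ratio = b / np.maximum(A @ x, 1e-12)  # measured / predicted
        x *= (A.T @ ratio) / norm             # backproject and rescale
    return x
```

An additive variant would instead add a step-size-scaled backprojected residual, A.T @ (b - A @ x), to the estimate; the multiplicative form keeps the estimate nonnegative by construction.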

15.
Rep Prog Phys ; 81(10): 105901, 2018 10.
Article in English | MEDLINE | ID: mdl-29900876

ABSTRACT

Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when set against our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as 3D imaging of scenes hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum generation, and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on 'light-in-flight' imaging, i.e., applications where the key element is the imaging of light itself at frame rates that freeze its motion and therefore extract information that would otherwise be blurred out and lost.

16.
Appl Opt ; 56(31): H51-H56, 2017 Nov 01.
Article in English | MEDLINE | ID: mdl-29091666

ABSTRACT

Lasers and laser diodes are widely used as illumination sources for optical imaging techniques. Time-of-flight (ToF) cameras with laser diodes and range imaging based on optical interferometry systems using lasers are among these techniques, with various applications in fields such as metrology and machine vision. ToF cameras can have imaging ranges of several meters, but offer only centimeter-level depth resolution. On the other hand, range imaging based on optical interferometry has depth resolution on the micrometer and even nanometer scale, but offers very limited (sub-millimeter) imaging ranges. In this paper, we propose a range imaging system based on multi-wavelength superheterodyne interferometry that simultaneously provides sub-millimeter depth resolution and an imaging range of tens to hundreds of millimeters. The proposed setup uses two tunable III-V semiconductor lasers and offers a trade-off between imaging range and resolution. The system is composed entirely of fiber connections except for the scanning head, which enables it to be made into a portable device. We believe the proposed system has the potential to tremendously benefit many fields, such as metrology and computer vision.
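The leverage between range and resolution comes from the synthetic (beat) wavelength Λ = λ₁λ₂/|λ₂ − λ₁|: tuning the two lasers closer together lengthens Λ and hence the unambiguous range, at the cost of amplified phase noise. A quick numerical check with hypothetical (not the paper's) laser settings:

```python
def synthetic_wavelength(lam1, lam2):
    """Beat ('synthetic') wavelength of a two-wavelength heterodyne system."""
    return lam1 * lam2 / abs(lam2 - lam1)

# Hypothetical tunable-laser settings: 1550.00 nm and 1550.02 nm (20 pm apart).
lam_s = synthetic_wavelength(1550e-9, 1550.02e-9)   # ~0.12 m
half_range = lam_s / 2                              # unambiguous depth range, ~60 mm
```

With a 20 pm tuning offset the synthetic wavelength lands in the tens-of-millimeters regime quoted in the abstract; widening the offset shrinks Λ and trades range for finer resolution.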

17.
Opt Express ; 23(16): 20997-1011, 2015 Aug 10.
Article in English | MEDLINE | ID: mdl-26367952

ABSTRACT

By using time-of-flight information encoded in multiply scattered light, it is possible to reconstruct images of objects hidden from the camera's direct line of sight. Here, we present a non-line-of-sight imaging system that uses a single-pixel, single-photon avalanche diode (SPAD) to collect time-of-flight information. Compared to earlier systems, this modification provides significant improvements in terms of power requirements, form factor, cost, and reconstruction time, while maintaining a comparable time resolution. The potential for further size and cost reduction of this technology makes this system a good base for developing a practical system that can be used in real-world applications.

18.
J Opt Soc Am A Opt Image Sci Vis ; 31(5): 957-63, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24979627

ABSTRACT

Imaging through complex media is a well-known challenge, as scattering distorts a signal and invalidates imaging equations. For coherent imaging, the input field can be reconstructed using phase conjugation or knowledge of the complex transmission matrix. However, for incoherent light, wave interference methods are limited to small viewing angles. On the other hand, time-resolved methods do not rely on signal or object phase correlations, making them suitable for reconstructing wide-angle, larger-scale objects. Previously, a time-resolved technique was demonstrated for uniformly reflecting objects. Here, we generalize the technique to reconstruct the spatially varying reflectance of shapes hidden by angle-dependent diffuse layers. The technique is a noninvasive method of imaging three-dimensional objects without relying on coherence. For a given diffuser, ultrafast measurements are used in a convex optimization program to reconstruct a wide-angle, three-dimensional reflectance function. The method has potential use for biological imaging and material characterization.


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Nephelometry and Turbidimetry/methods; Photometry/methods; Light; Scattering, Radiation
19.
Opt Express ; 20(17): 19096-108, 2012 Aug 13.
Article in English | MEDLINE | ID: mdl-23038550

ABSTRACT

We analyze multi-bounce propagation of light in an unknown hidden volume and demonstrate that the reflected light contains sufficient information to recover the 3D structure of the hidden scene. We formulate the forward and inverse theory of secondary scattering using ideas from energy front propagation and tomography. We show that using the Fresnel approximation greatly simplifies this problem and that the inversion can be achieved via a backpropagation process. We study the invertibility, uniqueness, and choices of space-time-angle dimensions using synthetic examples. We show that a 2D streak camera can be used to discover and reconstruct hidden geometry. Using a 1D high-speed time-of-flight camera, we show that our method can be used to recover 3D shapes of objects "around the corner".


Subjects
Algorithms; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Artificial Intelligence; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
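The inversion-by-backpropagation idea in this entry, and in several later ones, can be sketched as ellipsoidal backprojection: every time-resolved sample votes for the hidden points consistent with its path length. The confocal toy below (invented geometry, distances standing in for times, not the paper's energy-front formulation) recovers a hidden point as the candidate collecting the most votes:

```python
import numpy as np

def backproject(transients, scan_pts, bin_edges, candidates):
    """Confocal time-of-flight backprojection: each histogram count at scan
    point s votes for every candidate point v whose round-trip path length
    2*|v - s| falls in that time bin (distance units stand in for time)."""
    votes = np.zeros(len(candidates))
    for s, hist in zip(scan_pts, transients):
        d = 2 * np.linalg.norm(candidates - s, axis=1)  # round-trip distances
        b = np.searchsorted(bin_edges, d)               # matching time bins
        ok = b < len(hist)
        votes[ok] += hist[b[ok]]
    return votes
```

Votes from a point hidden at v pile up at v because every scan position agrees on its round-trip distance; a full reconstruction evaluates votes on a dense voxel grid and then filters the volume to sharpen surfaces.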
20.
Nat Commun ; 3: 745, 2012 Mar 20.
Article in English | MEDLINE | ID: mdl-22434188

ABSTRACT

The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm×40 cm×40 cm of hidden space.
