Results 1 - 20 of 127
1.
Opt Express ; 31(10): 15355-15371, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37157639

ABSTRACT

X-ray tomography is a non-destructive imaging technique that reveals the interior of an object from its projections at different angles. Under sparse-view and low-photon sampling, regularization priors are required to retrieve a high-fidelity reconstruction. Recently, deep learning has been used in X-ray tomography. The prior learned from training data replaces the general-purpose priors in iterative algorithms, achieving high-quality reconstructions with a neural network. Previous studies typically assume the noise statistics of test data are acquired a priori from training data, leaving the network susceptible to a change in the noise characteristics under practical imaging conditions. In this work, we propose a noise-resilient deep-reconstruction algorithm and apply it to integrated circuit tomography. By training the network with regularized reconstructions from a conventional algorithm, the learned prior shows strong noise resilience without the need for additional training with noisy examples, and allows us to obtain acceptable reconstructions with fewer photons in test data. The advantages of our framework may further enable low-photon tomographic imaging where long acquisition times limit the ability to acquire a large training set.
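
A minimal sketch of how such training pairs could be generated, assuming scikit-image's radon/iradon transforms and a total-variation denoiser stand in for the paper's conventional regularized algorithm; the function names, noise model, and parameters are illustrative, not the authors' code.

```python
# Sketch: build (noisy FBP input, regularized target) pairs for training a
# reconstruction network. Assumes scikit-image; filter_name requires a recent version.
import numpy as np
from skimage.transform import radon, iradon
from skimage.restoration import denoise_tv_chambolle

def make_training_pair(phantom, angles_deg, photons_per_ray=100, tv_weight=0.1):
    """phantom: 2-D attenuation map (values kept small so exp(-sino) stays measurable)."""
    sino = radon(phantom, theta=angles_deg)                   # sparse-view forward projection
    counts = np.random.poisson(photons_per_ray * np.exp(-sino))
    noisy_sino = -np.log(np.clip(counts / photons_per_ray, 1e-6, None))
    fbp = iradon(noisy_sino, theta=angles_deg, filter_name='ramp')
    target = denoise_tv_chambolle(fbp, weight=tv_weight)      # regularized "teacher" reconstruction
    return fbp.astype(np.float32), target.astype(np.float32)

angles = np.linspace(0.0, 180.0, 32, endpoint=False)          # placeholder sparse-view geometry
```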

2.
Opt Express ; 30(2): 2247-2264, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35209369

ABSTRACT

Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep-learning-based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets under photon-starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
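
One way the physics-based input ("approximant") might be formed before the network, sketched under an assumed forward model in which the measured intensity is the far-field of the probe-modulated object; the inversion and the regularization constant are illustrative assumptions.

```python
# Sketch: crude physics-based approximant for deep-k-learning-style training.
# Assumed forward model: I = |FFT(probe * object)|^2 (far-field intensity).
import numpy as np

def approximant(intensity, probe):
    """Rough inversion used only as the network input, not as the final estimate."""
    field_mag = np.sqrt(np.maximum(intensity, 0.0))       # measured amplitude, zero phase assumed
    backprop = np.fft.ifft2(np.fft.ifftshift(field_mag))  # propagate back to the object plane
    eps = 1e-3 * np.abs(probe).max()                      # Tikhonov-style probe deconvolution
    est = backprop * np.conj(probe) / (np.abs(probe) ** 2 + eps ** 2)
    return np.angle(est).astype(np.float32)               # phase map fed to the CNN
```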

3.
Opt Express ; 30(13): 23238-23259, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-36225009

ABSTRACT

X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms, which require a prior that is specified in advance, deep reconstruction networks can learn a prior distribution by sampling the training distribution. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both the known physics and the learned prior. Compared with methods that incorporate less physics during training, PGAN can reduce the photon requirement at limited projection angles needed to achieve a given error rate. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
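
The first step, a maximum-likelihood estimate under Poisson statistics, can be illustrated for a generic linear forward operator; the operator, initialization, and step size below are placeholders, and the GAN refinement stage is not shown.

```python
# Sketch: maximum-likelihood estimation under Poisson noise for a linear forward
# operator A (e.g., a projection matrix). A, the step size, and the iteration count
# are placeholders; only the ML step of the two-step pipeline is sketched.
import numpy as np

def poisson_ml_estimate(y, A, n_iter=200, step=1e-3):
    """Gradient ascent on sum(y*log(Ax) - Ax), with x kept non-negative."""
    x = np.full(A.shape[1], y.mean() / max(A.sum(axis=1).mean(), 1e-12))
    for _ in range(n_iter):
        Ax = np.clip(A @ x, 1e-12, None)
        grad = A.T @ (y / Ax - 1.0)       # derivative of the Poisson log-likelihood
        x = np.maximum(x + step * grad, 0.0)
    return x
```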

4.
Proc Natl Acad Sci U S A ; 116(40): 19848-19856, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31527279

ABSTRACT

We present a machine-learning-based method for tomographic reconstruction of dense layered objects, with the range of projection angles limited to [Formula: see text]. Whereas previous approaches to phase tomography generally require two steps, first retrieving phase projections from intensity projections and then performing tomographic reconstruction on the retrieved phase projections, in our work a physics-informed preprocessor followed by a deep neural network (DNN) conducts the three-dimensional reconstruction directly from the intensity projections. We demonstrate this single-step method experimentally in the visible optical domain on a scaled-up integrated circuit phantom. We show that even under conditions of highly attenuated photon fluxes a DNN trained only on synthetic data can be used to successfully reconstruct physical samples disjoint from the synthetic training set. Thus, the need for producing a large number of physical examples for training is ameliorated. The method is generally applicable to tomography with electromagnetic or other types of radiation at all bands.
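
The role of a physics-informed preprocessor can be sketched as a crude physical inversion of the limited-angle data, here a filtered back-projection over the available angular range, whose output a DNN would then refine; the angular range and the use of FBP are assumptions for illustration only.

```python
# Sketch: crude physics-based preprocessing of limited-angle measurements by
# filtered back-projection; the result would be refined by a trained DNN.
import numpy as np
from skimage.transform import iradon

def preprocess(projections, angles_deg):
    """projections: (n_detector, n_angles) measurements over a limited angular range."""
    approx = iradon(projections, theta=angles_deg, filter_name='ramp')
    return approx.astype(np.float32)            # approximant fed to the DNN

limited_angles = np.linspace(-30.0, 30.0, 61)   # placeholder limited angular range
```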

5.
Opt Express ; 29(4): 5316-5326, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33726070

ABSTRACT

Scattering generally worsens the conditioning of inverse problems, with the severity depending on the statistics of the refractive index gradient and contrast. Removing scattering artifacts from images has attracted much work in the literature, including recently the use of static neural networks. S. Li et al. [Optica 5(7), 803 (2018), doi:10.1364/OPTICA.5.000803] trained a convolutional neural network to reveal amplitude objects hidden by a specific diffuser, whereas Y. Li et al. [Optica 5(10), 1181 (2018), doi:10.1364/OPTICA.5.001181] were able to deal with arbitrary diffusers, as long as certain statistical criteria were met. Here, we propose a dynamical machine learning approach for imaging phase objects through arbitrary diffusers. The motivation is to strengthen the correlation among the patterns during training and to reveal phase objects through scattering media. We utilize the on-axis rotation of a diffuser to impart dynamics and use multiple speckle measurements from different angles to form a sequence of images for training. A recurrent neural network (RNN) embedded with the dynamics separates the useful information from the redundancies, thus retrieving quantitative phase information in the presence of strong scattering. In other words, the RNN effectively averages out the effect of the dynamic random scattering media and learns more about the static pattern. The dynamical approach reveals transparent images behind the scattering media from speckle correlations among adjacent measurements in a sequence. This method is also applicable to other imaging applications that involve spatiotemporal dynamics.
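
A minimal PyTorch sketch of the general idea of encoding each speckle frame with a CNN and aggregating the sequence with a recurrent layer; layer sizes, output resolution, and the GRU choice are arbitrary assumptions, not the paper's architecture.

```python
# Sketch: per-frame CNN encoder + GRU over a speckle sequence, decoding a phase map.
# All sizes are arbitrary; this is not the authors' network.
import torch
import torch.nn as nn

class SpeckleSequenceNet(nn.Module):
    def __init__(self, out_size=64, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())           # -> (B*T, 32*8*8)
        self.rnn = nn.GRU(32 * 8 * 8, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, out_size * out_size)
        self.out_size = out_size

    def forward(self, frames):                               # frames: (B, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)                               # final hidden state summarizes the sequence
        return self.decoder(h[-1]).view(b, 1, self.out_size, self.out_size)

net = SpeckleSequenceNet()
pred = net(torch.randn(2, 5, 1, 128, 128))                   # e.g., 5 diffuser angles per sample
```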

6.
Opt Express ; 29(22): 35078-35118, 2021 Oct 25.
Article in English | MEDLINE | ID: mdl-34808951

ABSTRACT

This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from prominent experts in digital holography, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents its author's vision of the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.


Subjects
Holography/methods, Imaging, Three-Dimensional/methods, Algorithms, Animals, High-Throughput Screening Assays, Humans, Lab-On-A-Chip Devices, Microfluidic Analytical Techniques, Tomography, Virtual Reality
7.
Opt Lett ; 46(1): 130-133, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33362033

ABSTRACT

In mask-based lensless imaging, iterative reconstruction methods based on the geometric-optics model produce artifacts and are computationally expensive. We present a prototype lensless camera that uses a deep neural network (DNN) to realize rapid reconstruction for Fresnel zone aperture (FZA) imaging. A deep back-projection network (DBPN) is connected behind a U-Net to provide an error feedback mechanism, which enables self-correction of features and recovers image detail. A diffraction model generates the training data under conditions of broadband incoherent imaging. In the reconstructed results, the blur caused by diffraction is ameliorated, while the computing time is two orders of magnitude shorter than that of traditional iterative image reconstruction algorithms. This strategy could drastically reduce the design and assembly costs of cameras, paving the way for the integration of portable sensors and systems.
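
A sketch of the FZA encoding, treated here as an idealized incoherent measurement in which the sensor image is the scene convolved with the zone-aperture shadow; the zone-plate parameter and the convolution model are simplified assumptions rather than the paper's full diffraction model.

```python
# Sketch: Fresnel zone aperture (FZA) mask and an idealized lensless measurement,
# modeled as convolution of the scene with the mask shadow. Parameters are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def fza_mask(n=256, r1_pixels=12.0):
    """FZA transmission t(r) = 0.5 * (1 + cos(pi * r^2 / r1^2))."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    r2 = x.astype(float) ** 2 + y.astype(float) ** 2
    return 0.5 * (1.0 + np.cos(np.pi * r2 / r1_pixels ** 2))

scene = np.zeros((256, 256)); scene[100:130, 120:160] = 1.0    # toy object
measurement = fftconvolve(scene, fza_mask(), mode='same')      # simulated sensor image
```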

8.
Opt Express ; 28(15): 21578-21600, 2020 Jul 20.
Article in English | MEDLINE | ID: mdl-32752433

ABSTRACT

Imaging with low-dose light is important in various fields, especially when minimizing radiation-induced damage to samples is desirable. The raw image captured at the detector plane is then predominantly a Poisson random process with Gaussian noise added, due to the quantum nature of photo-electric conversion. Under such noisy conditions, highly ill-posed problems such as phase retrieval from raw intensity measurements become prone to strong artifacts in the reconstructions, a situation that deep neural networks (DNNs) have already been shown to improve. Here, we demonstrate that random phase modulation of the optical field, also known as coherent modulation imaging (CMI), in conjunction with the phase extraction neural network (PhENN) and a Gerchberg-Saxton-Fienup (GSF) approximant, further improves the resilience to noise of the phase-from-intensity imaging problem. We offer design guidelines for implementing the CMI hardware with the proposed computational reconstruction scheme and quantify the reconstruction improvement as a function of photon count.
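
A minimal Gerchberg-Saxton-style loop with a known random phase modulator, of the kind that could serve as the approximant fed to the network; the FFT conventions, unit-amplitude phase-object constraint, and iteration count are assumptions.

```python
# Sketch: Gerchberg-Saxton-type approximant with a known random phase modulation
# (coherent modulation imaging). Conventions and iteration count are assumptions.
import numpy as np

def gs_cmi_approximant(intensity, modulator_phase, n_iter=50):
    """intensity: detector-plane counts; modulator_phase: known random phase screen."""
    amp = np.sqrt(np.maximum(intensity, 0.0))
    mod = np.exp(1j * modulator_phase)
    obj = np.ones_like(amp, dtype=complex)            # initial guess
    for _ in range(n_iter):
        det = np.fft.fft2(obj * mod)                  # propagate to the detector plane
        det = amp * np.exp(1j * np.angle(det))        # enforce the measured amplitude
        obj = np.fft.ifft2(det) * np.conj(mod)        # back to the object plane
        obj = np.exp(1j * np.angle(obj))              # enforce a unit-amplitude phase object
    return np.angle(obj)
```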

9.
Opt Express ; 28(18): 26267-26283, 2020 Aug 31.
Article in English | MEDLINE | ID: mdl-32906902

ABSTRACT

Sensing and correction of low-order wavefront aberrations is critical for high-contrast astronomical imaging. State-of-the-art coronagraph systems typically use image-based sensing methods that exploit the rejected on-axis light, such as Lyot-based low-order wavefront sensors (LLOWFS); these methods rely on linear least-squares fitting to recover Zernike basis coefficients from intensity data. However, the dynamic range of linear recovery is limited. We propose the use of deep neural networks with residual learning techniques for non-linear wavefront sensing. The deep residual learning approach extends the usable range of the LLOWFS sensor by more than an order of magnitude compared to the conventional methods and can improve closed-loop control of systems with large initial wavefront error. We demonstrate that the deep learning approach performs well even in the low-photon regimes common to coronagraphic imaging of exoplanets.
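
A sketch of a residual-network regressor mapping a wavefront-sensor image to a vector of Zernike coefficients, using a stock torchvision ResNet-18 as a stand-in; the backbone, input size, and number of modes are assumptions, not the paper's architecture.

```python
# Sketch: residual-network regression from a sensor image to Zernike coefficients.
# A stock ResNet-18 stands in for the paper's network (weights=None needs torchvision >= 0.13;
# older versions use pretrained=False).
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_zernike_regressor(n_modes=12):
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel input
    net.fc = nn.Linear(net.fc.in_features, n_modes)        # regress Zernike coefficients
    return net

model = make_zernike_regressor()
coeffs = model(torch.randn(4, 1, 128, 128))                # (batch, n_modes)
loss = nn.MSELoss()(coeffs, torch.zeros_like(coeffs))      # supervised against known aberrations
```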

10.
Opt Express ; 28(2): 2511-2535, 2020 Jan 20.
Article in English | MEDLINE | ID: mdl-32121939

ABSTRACT

Deep neural networks (DNNs) are efficient solvers for ill-posed problems and have been shown to outperform classical optimization techniques in several computational imaging problems. In supervised mode, DNNs are trained by minimizing a measure of the difference between their actual output and their desired output; the choice of this measure, referred to as the "loss function," strongly impacts performance and generalization ability. In a recent paper [A. Goy et al., Phys. Rev. Lett. 121(24), 243902 (2018)], we showed that DNNs trained with the negative Pearson correlation coefficient (NPCC) as the loss function are particularly well suited to photon-starved phase-retrieval problems, though the reconstructions are manifestly deficient at high spatial frequencies. In this paper, we show that reconstructions by DNNs trained with the default feature loss (defined at VGG layer ReLU-22) contain more fine details; however, grid-like artifacts appear and are enhanced as photon counts become very low. Two additional key findings related to these artifacts are presented here. First, the frequency signature of the artifacts depends on the VGG inner layer upon which the perceptual loss is defined, halving with each MaxPooling2D layer deeper in the VGG. Second, VGG ReLU-12 outperforms all other layers as the defining layer for the perceptual loss.
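
The two losses discussed here can be sketched as follows: an NPCC loss and a perceptual loss truncated at a chosen VGG-16 ReLU layer. The slice indices mapping "ReLU-12" and "ReLU-22" to torchvision's layer ordering are assumptions and should be checked against the installed version.

```python
# Sketch: NPCC loss and a VGG-16 perceptual loss cut at a chosen ReLU layer.
# Assumed torchvision vgg16 ordering: cut_index 4 ~ relu1_2, cut_index 9 ~ relu2_2.
import torch
import torch.nn as nn
from torchvision.models import vgg16

def npcc_loss(pred, target):
    p = pred - pred.mean(dim=(-2, -1), keepdim=True)
    t = target - target.mean(dim=(-2, -1), keepdim=True)
    num = (p * t).sum(dim=(-2, -1))
    den = torch.sqrt((p * p).sum(dim=(-2, -1)) * (t * t).sum(dim=(-2, -1)) + 1e-12)
    return -(num / den).mean()                              # negative Pearson correlation

class PerceptualLoss(nn.Module):
    def __init__(self, cut_index=9):                        # assumed index for relu2_2
        super().__init__()
        self.features = vgg16(weights=None).features[:cut_index].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):                        # single-channel inputs assumed
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return nn.functional.mse_loss(self.features(pred3), self.features(target3))
```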

11.
Opt Express ; 28(16): 24152-24170, 2020 Aug 03.
Article in English | MEDLINE | ID: mdl-32752400

ABSTRACT

Deep learning (DL) has been applied extensively to many computational imaging problems, often leading to superior performance over traditional iterative approaches. However, two important questions remain largely unanswered. First, how well can a trained neural network generalize to objects very different from those seen in training? This is particularly important in practice, since large-scale annotated examples similar to those of interest are often unavailable during training. Second, has the trained neural network learnt the underlying (inverse) physics model, or has it merely done something trivial, such as memorizing the examples or point-wise pattern matching? This pertains to the interpretability of machine-learning-based algorithms. In this work, we use the Phase Extraction Neural Network (PhENN) [Optica 4, 1117-1125 (2017)], a deep neural network (DNN) for quantitative phase retrieval in a lensless phase imaging system, as the standard platform and show that the two questions are related and share a common crux: the choice of the training examples. Moreover, we connect the strength of the regularization effect that a training set imposes on the training process with the Shannon entropy of the images in the dataset: the higher the entropy of the training images, the weaker the regularization effect that can be imposed. We also find that a weaker regularization effect leads to better learning of the underlying propagation model, i.e., the weak object transfer function, applicable to weakly scattering objects under the weak object approximation. Finally, simulation and experimental results show that better cross-domain generalization performance can be achieved if the DNN is trained on a higher-entropy database, e.g., ImageNet, than if the same DNN is trained on a lower-entropy database, e.g., MNIST, as the former allows the underlying physics model to be learned better than the latter.
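
The entropy measure used to compare training databases can be computed from an image's gray-level histogram; the bin count below is arbitrary.

```python
# Sketch: Shannon entropy (in bits) of an image's gray-level histogram, the quantity
# used to compare training databases such as ImageNet and MNIST. Bin count is arbitrary.
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(img.min(), img.max() + 1e-12))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```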

12.
Appl Opt ; 59(11): 3376-3382, 2020 Apr 10.
Article in English | MEDLINE | ID: mdl-32400448

ABSTRACT

Deep-learning-based single-pixel phase imaging is proposed. The method, termed deep ghost phase imaging (DGPI), inherits the advantages of computational ghost imaging, i.e., high-signal-to-noise-ratio phase imaging derived from Fellgett's multiplex advantage and the point-like detection of light diffracted from objects. A deep convolutional neural network is trained to output a desired phase distribution from an input defocused intensity distribution reconstructed by single-pixel imaging theory. Compared to the conventional interferometric and transport-of-intensity approaches to single-pixel phase imaging, DGPI requires neither additional intensity measurements nor explicit approximations. The effects of defocus distance and light level are investigated by numerical simulation, and an optical experiment confirms the feasibility of the DGPI.
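
The single-pixel reconstruction that produces the network's input can be sketched as a conventional ghost-imaging fluctuation correlation; the pattern count and toy object below are illustrative.

```python
# Sketch: ghost-imaging correlation that recovers an intensity distribution from
# single-pixel (bucket) measurements; this is the kind of quantity a network would
# then map to phase. Pattern count and object are illustrative.
import numpy as np

def ghost_image(patterns, bucket):
    """patterns: (M, H, W) illumination patterns; bucket: (M,) single-pixel signals."""
    corr = (bucket[:, None, None] * patterns).mean(axis=0)
    return corr - bucket.mean() * patterns.mean(axis=0)     # fluctuation correlation

rng = np.random.default_rng(0)
pats = rng.random((1000, 64, 64))
obj = np.zeros((64, 64)); obj[20:40, 25:45] = 1.0
bucket = (pats * obj).sum(axis=(1, 2))                      # simulated bucket detector
recon = ghost_image(pats, bucket)
```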

13.
Opt Express ; 26(22): 29340-29352, 2018 Oct 29.
Article in English | MEDLINE | ID: mdl-30470099

ABSTRACT

The phase extraction neural network (PhENN) [Optica 4, 1117 (2017)] is a computational architecture, based on deep machine learning, for lensless quantitative phase retrieval from raw intensity data. PhENN is a deep convolutional neural network trained on examples consisting of pairs of true phase objects and their corresponding intensity diffraction patterns; thereafter, given a test raw intensity pattern, PhENN is capable of reconstructing the original phase object robustly, in many cases even for objects outside the database from which the training examples were drawn. Here, we show that the spatial frequency content of the training examples is an important factor limiting PhENN's spatial frequency response. For example, if the training database is relatively sparse in high spatial frequencies, as most natural scenes are, PhENN's ability to resolve fine spatial features in test patterns will be correspondingly limited. To combat this issue, we propose "flattening" the power spectral density of the training examples before presenting them to PhENN. For phase objects following the statistics of natural scenes, we demonstrate experimentally that this spectral pre-modulation method enhances the spatial resolution of PhENN by a factor of 2.
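
One way the spectral flattening could be realized is by whitening each training image with the dataset's ensemble amplitude spectrum; the averaging scheme, epsilon, and renormalization below are assumptions, not the paper's exact pre-modulation filter.

```python
# Sketch: "flattening" the ensemble power spectral density of training examples by
# whitening with the dataset-average amplitude spectrum. Details are assumptions.
import numpy as np

def flatten_spectra(images, eps=1e-6):
    """images: (N, H, W) training set; returns spectrally pre-modulated copies."""
    spectra = np.fft.fft2(images, axes=(-2, -1))
    mean_amp = np.abs(spectra).mean(axis=0)                 # ensemble amplitude spectrum
    whitened = spectra / (mean_amp + eps)                   # boost under-represented frequencies
    out = np.real(np.fft.ifft2(whitened, axes=(-2, -1)))
    return out / (np.abs(out).max() + eps)                  # renormalize before training
```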

14.
Opt Express ; 26(25): 32532-32553, 2018 Dec 10.
Article in English | MEDLINE | ID: mdl-30645419

ABSTRACT

We propose simultaneous measurement and reconstruction tailoring (SMaRT) for quantitative phase imaging; it is a joint optimization approach to inverse problems wherein minimizing the expected end-to-end error yields optimal design parameters for both the measurement and reconstruction processes. Using simulated and experimentally collected data for a specific scenario, we demonstrate that optimizing the design of the two processes together reduces phase reconstruction error compared with past techniques that consider these two design problems separately. Our results suggest design principles that are at times surprising, and our approach can potentially inspire improved solution methods for other inverse problems in optics as well as the natural sciences.
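
The joint-optimization idea can be illustrated with a toy end-to-end pipeline in which a learnable measurement operator and a reconstruction network are trained together against the expected reconstruction error; the linear measurement, network, and noise level are placeholders, not the paper's imaging system.

```python
# Sketch: joint (end-to-end) optimization of a measurement design and a reconstructor.
# The toy linear measurement and losses are placeholders for the actual optical system.
import torch
import torch.nn as nn

n_pix, n_meas = 64, 32
measure = nn.Linear(n_pix, n_meas, bias=False)              # learnable measurement design
recon = nn.Sequential(nn.Linear(n_meas, 128), nn.ReLU(), nn.Linear(128, n_pix))
opt = torch.optim.Adam(list(measure.parameters()) + list(recon.parameters()), lr=1e-3)

for step in range(200):
    x = torch.randn(16, n_pix)                              # stand-in object ensemble
    y = measure(x) + 0.01 * torch.randn(16, n_meas)         # noisy measurement
    loss = nn.functional.mse_loss(recon(y), x)              # expected end-to-end error
    opt.zero_grad(); loss.backward(); opt.step()
```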

15.
Phys Rev Lett ; 121(24): 243902, 2018 Dec 14.
Article in English | MEDLINE | ID: mdl-30608745

ABSTRACT

The performance of imaging systems at low light intensity is affected by shot noise, which becomes increasingly strong as the power of the light source decreases. In this Letter, we experimentally demonstrate the use of deep neural networks to recover objects illuminated with weak light and demonstrate better performance than the classical Gerchberg-Saxton phase retrieval algorithm for an equivalent signal-to-noise ratio. The prior contained in the training image set can be leveraged by the deep neural network to detect features with a signal-to-noise ratio close to one. We apply this principle to a phase retrieval problem and show successful recovery of the object's most salient features with as little as one photon per detector pixel on average in the illumination beam. We also show that the phase reconstruction is significantly improved by training the neural network with an initial estimate of the object, as opposed to training it with the raw intensity measurement.
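
The photon-starved regime discussed here can be simulated by rescaling an intensity pattern to roughly one photon per detector pixel on average and drawing Poisson counts; the scaling convention is illustrative.

```python
# Sketch: simulating a shot-noise-limited measurement at ~1 photon per pixel on average.
import numpy as np

def photon_limited(intensity, mean_photons_per_pixel=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    scale = mean_photons_per_pixel / max(intensity.mean(), 1e-12)
    return rng.poisson(intensity * scale).astype(np.float32)   # detector counts
```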

16.
Opt Express ; 25(12): 13125-13144, 2017 Jun 12.
Article in English | MEDLINE | ID: mdl-28788849

ABSTRACT

We present a rigorous investigation of resonant coupling between microspheres based on multipole expansions. The microspheres have diameters in the range of several micrometers and can be used to realize various photonic molecule configurations. We reveal and quantify the interactions between the whispering gallery modes inside individual microspheres and the propagation modes of the entire photonic molecule structures. We show that Fano-like resonances in photonic molecules can be engineered by tuning the coupling between the resonant and radiative modes when the structures are illuminated with simple dipole radiation.

17.
J Opt Soc Am A Opt Image Sci Vis ; 34(11): 2025-2033, 2017 Nov 01.
Article in English | MEDLINE | ID: mdl-29091654

ABSTRACT

A sensor pixel integrates optical intensity across its extent, and we explore the role that this integration plays in phase space tomography. The literature is inconsistent in its treatment of this integration: some approaches model the integration explicitly, some are ambiguous about whether it is taken into account, and still others assume pixel values to be point samples of the optical intensity. We show that making a point-sample assumption results in apodization of, and thus systematic error in, the recovered ambiguity function, leading to underestimation of the overall degree of coherence. We explore the severity of this effect using a Gaussian Schell-model source and discuss when this effect, as opposed to noise, is the dominant source of error in the retrieved state of coherence.
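
A one-dimensional illustration of the underlying point: integrating over a finite pixel is a convolution with a rect window, whose transfer function rolls off like a sinc, whereas a point-sample model implicitly assumes a flat transfer function; the oversampling factor and FFT length are arbitrary.

```python
# Sketch: the apodization introduced by finite-pixel integration, shown via the
# transfer function of the pixel's averaging window.
import numpy as np

oversample = 8                                    # fine samples per pixel width
pixel_window = np.ones(oversample) / oversample   # finite-pixel averaging kernel
transfer = np.abs(np.fft.rfft(pixel_window, n=1024))
# 'transfer' decays toward high frequencies (a |sinc|-like profile); ignoring it,
# as a point-sample model does, biases the recovered ambiguity function and the
# inferred degree of coherence downward.
```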

18.
J Opt Soc Am A Opt Image Sci Vis ; 34(9): 1711-1719, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-29036145

ABSTRACT

Integral field spectroscopy (IFS) is a well-established method for measuring spectral intensity data of the form s(x,y,λ), where x, y are spatial coordinates and λ is the wavelength. In most flavors of IFS, there is a trade-off between the (x,y) sampling and the measured wavelength band Δλ. Here we present the first, to our knowledge, attempt to overcome this trade-off by use of computational imaging and measurement diversity. We implement diversity by including a grating in our design, which allows rotation of the dispersed spectra between measurements. The raw intensity data captured from the rotated grating positions are then processed by an inverse algorithm that exploits sparsity in the data. We present simulated results based on spatial-spectral data from an experimental dataset. We used non-overlapping portions of the dataset to train our sparsity priors, in the form of a dictionary, and to test the reconstruction quality. We found that, depending on the level of noise in the measurement, diversity up to a maximum number of measurements is beneficial in terms of reducing error, and yields diminishing returns if even more measurements are taken.

19.
Opt Express ; 24(18): 20069-79, 2016 Sep 05.
Article in English | MEDLINE | ID: mdl-27607616

ABSTRACT

Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
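
The low-rank step at the heart of such denoising can be illustrated with a truncated SVD of a noisy matrix; the rank, matrix shape, and SVD-based completion stand in for the redundant phase-space (WDD) data and the matrix-completion solver the paper actually uses.

```python
# Sketch: low-rank denoising by truncated SVD, a stand-in for the low-rank
# matrix-completion step applied to redundant phase-space data.
import numpy as np

def low_rank_denoise(noisy, rank):
    u, s, vt = np.linalg.svd(noisy, full_matrices=False)
    s[rank:] = 0.0                                           # keep only the leading components
    return (u * s) @ vt

rng = np.random.default_rng(0)
clean = rng.standard_normal((128, 5)) @ rng.standard_normal((5, 128))   # rank-5 ground truth
denoised = low_rank_denoise(clean + 0.5 * rng.standard_normal((128, 128)), rank=5)
```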

20.
Opt Express ; 23(9): 12337-53, 2015 May 04.
Article in English | MEDLINE | ID: mdl-25969319

ABSTRACT

Microsphere-based microscopy systems have garnered considerable recent interest, mainly due to their capacity to focus light and image beyond the diffraction limit. In this paper, we present theoretical foundations for studying the optical performance of such systems by developing a complete theoretical model encompassing illumination, sample interaction, and imaging/collection. Using this model, we show that surface waves play a significant role in focusing and imaging with the microsphere. We also show that by designing a radially polarized convergent beam, we can focus to a spot smaller than the diffraction limit. By exploiting surface waves, we are able to resolve, in simulation, two dipoles spaced 98 nm apart using light at a wavelength of 402.292 nm. Using our model, we also explore the effect of beam geometry and polarization on optical resolution and focal spot size, showing that both greatly affect the shape of the spot.
