ABSTRACT
This paper describes OpenSpyrit, an open access and open source ecosystem for reproducible research in hyperspectral single-pixel imaging, composed of SPAS (a Python single-pixel acquisition software), SPYRIT (a Python single-pixel reconstruction toolkit) and SPIHIM (a single-pixel hyperspectral image collection). The proposed OpenSpyrit ecosystem responds to the need for reproducibility and benchmarking in single-pixel imaging by providing open data and open software. The SPIHIM collection, which is the first open-access FAIR dataset for hyperspectral single-pixel imaging, currently includes 140 raw measurements acquired using SPAS and the corresponding hypercubes reconstructed using SPYRIT. The hypercubes are reconstructed by both inverse Hadamard transformation of the raw data and using the denoised completion network (DC-Net), a data-driven reconstruction algorithm. The hypercubes obtained by inverse Hadamard transformation have a native size of 64 × 64 × 2048 for a spectral resolution of 2.3 nm and a spatial resolution ranging from 182.4 µm to 15.2 µm depending on the digital zoom. The hypercubes obtained using the DC-Net are reconstructed at an increased resolution of 128 × 128 × 2048. The OpenSpyrit ecosystem should constitute a reference to support benchmarking for future developments in single-pixel imaging.
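The inverse Hadamard reconstruction used for the native-size hypercubes can be sketched in a few lines. The following is a toy example on an 8 × 8 image rather than the 64 × 64 native size; all names and sizes are illustrative and do not come from SPYRIT:

```python
import numpy as np

n = 8                      # toy image side (the paper's native size is 64)
N = n * n                  # number of Hadamard patterns
H = np.array([[1.0]])
while H.shape[0] < N:      # Sylvester construction of the Hadamard matrix
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))

x = np.random.default_rng(0).random(N)   # flattened ground-truth image
m = H @ x                                # one raw measurement per pattern
x_rec = (H.T @ m) / N                    # inverse Hadamard transform (H Hᵀ = N I)
print(np.allclose(x, x_rec))             # True
```

Since the Hadamard matrix is orthogonal up to a factor of N, the inverse transform is a single matrix-vector product, which is why this reconstruction is fast enough to serve as a baseline.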
ABSTRACT
We demonstrate a method to image an object using a self-probing approach based on semiconductor high-harmonic generation. On the one hand, ptychography enables high-resolution imaging from the coherent light diffracted by an object. On the other hand, high-harmonic generation from crystals is emerging as a new source of extreme-ultraviolet ultrafast coherent light. We combine these two techniques by performing ptychography measurements with nanopatterned crystals serving as the object as well as the generation medium of the harmonics. We demonstrate that this strong field in situ approach can provide structural information about an object. With the future developments of crystal high harmonics as a compact short-wavelength light source, our demonstration can be an innovative approach for nanoscale imaging of photonic and electronic devices in research and industry.
ABSTRACT
Single-pixel cameras that measure image coefficients have various promising applications, in particular for hyper-spectral imaging. Here, we investigate deep neural networks that, when fed with experimental data, can output high-quality images in real time. Assuming that the measurements are corrupted by mixed Poisson-Gaussian noise, we propose to map the raw data from the measurement domain to the image domain based on a Tikhonov regularization. This step can be implemented as the first layer of a deep neural network, followed by any architecture of layers that acts in the image domain. We also describe a framework for training the network in the presence of noise. In particular, our approach includes an estimation of the image intensity and experimental parameters, together with a normalization scheme that allows varying noise levels to be handled during training and testing. Finally, we present results from simulations and experimental acquisitions with varying noise levels. Our approach yields images with improved peak signal-to-noise ratios, even for noise levels that were not foreseen during the training of the networks, which makes the approach particularly suitable to deal with experimental data. Furthermore, while this approach focuses on single-pixel imaging, it can be adapted for other computational optics problems.
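The Tikhonov step that maps raw data to the image domain can be illustrated with a minimal sketch. The simple λI regularizer and all sizes below are assumptions for illustration only; the paper's formulation also accounts for the Poisson-Gaussian noise statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 64, 32                            # image pixels, measurements (p < n)
A = rng.standard_normal((p, n))          # measurement (pattern) matrix
x_true = rng.standard_normal(n)
m = A @ x_true + 0.01 * rng.standard_normal(p)   # noisy raw data

lam = 1e-2                               # regularization weight (illustrative)
# Tikhonov-regularized mapping from the measurement to the image domain;
# this fixed linear operator could serve as the first layer of a network.
x0 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ m)
print(x0.shape)   # (64,)
```

The point of placing this step first is that everything downstream (the remaining network layers) then operates on an image-shaped tensor rather than on raw coefficients.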
ABSTRACT
Single-pixel imaging acquires an image by measuring its coefficients in a transform domain, thanks to a spatial light modulator. However, as measurements are sequential, only a few coefficients can be measured in real-time applications. Therefore, single-pixel reconstruction is usually an underdetermined inverse problem that requires regularization to obtain an appropriate solution. Combined with a spectral detector, the concept of single-pixel imaging allows for hyperspectral imaging. While each channel can be reconstructed independently, we propose to exploit the spectral redundancy between channels to regularize the reconstruction problem. In particular, we introduce a denoised completion network that includes 3D convolution filters. Contrary to black-box approaches, our network combines the classical Tikhonov theory with the deep learning methodology, leading to an explainable network. Considering both simulated and experimental data, we demonstrate that the proposed approach yields hyperspectral images with higher quantitative metrics than the approaches developed for grayscale images.
ABSTRACT
We propose a computational paradigm where off-the-shelf optical devices can be used to image objects in a scene well beyond their native optical resolution. By design, our approach is generic, does not require active illumination, and is applicable to several types of optical devices. It only requires the placement of a spatial light modulator some distance from the optical system. In this paper, we first introduce the acquisition strategy together with the reconstruction framework. We then conduct practical experiments with a webcam that confirm that this approach can image objects with substantially enhanced spatial resolution compared to the performance of the native optical device. We finally discuss potential applications, current limitations, and future research directions.
ABSTRACT
Time-resolved multispectral imaging has many applications in different fields, which range from characterization of biological tissues to environmental monitoring. In particular, optical techniques, such as lidar and fluorescence lifetime imaging, require imaging at subnanosecond scales over an extended area. In this paper, we demonstrate experimentally a time-resolved multispectral acquisition scheme based on single-pixel imaging. Single-pixel imaging is an emerging paradigm that provides low-cost high-quality images. Here, we use an adaptive strategy that allows acquisition and image reconstruction times to be reduced drastically compared to full basis scans. This adaptive time-resolved multispectral imaging scheme could have significant applications in biological imaging, at scales from macroscopic to microscopic.
ABSTRACT
Compressive sensing is a powerful tool to efficiently acquire and reconstruct an image even in diffuse optical tomography (DOT) applications. In this work, a time-resolved DOT system based on structured light illumination, compressive detection, and multiple view acquisition has been proposed and experimentally validated on a biological tissue-mimicking phantom. The experimental scheme is based on two digital micromirror devices for illumination and detection modulation, in combination with a time-resolved single element detector. We fully validated the method and demonstrated both the imaging and tomographic capabilities of the system, providing state-of-the-art reconstruction quality.
ABSTRACT
In fluorescence diffuse optical tomography (fDOT), the accuracy of reconstructed fluorescence distributions depends strongly on the knowledge of the tissue optical heterogeneities for correct modeling of light propagation. Common approaches are to assume homogeneous optical properties or, when structural information is available, to assign optical properties to various segmented organs, which is likely to result in inaccurate reconstructions. Furthermore, DOT based only on intensity (continuous-wave DOT) is a nonunique inverse problem and, hence, cannot be used to retrieve simultaneously maps of absorption and diffusion coefficients. We propose a method that reconstructs a single parameter from the excitation measurements, which is then used in the fDOT problem to accurately recover the fluorescence distribution.
Subjects
Optical Tomography/methods , Computer-Assisted Image Processing , Fluorescence Spectrometry
ABSTRACT
Color Doppler echocardiography is a widely used noninvasive imaging modality that provides real-time information about intracardiac blood flow. In an apical long-axis view of the left ventricle, color Doppler is subject to phase wrapping, or aliasing, especially during cardiac filling and ejection. When setting up quantitative methods based on color Doppler, it is necessary to correct this wrapping artifact. We developed an unfolded primal-dual network (PDNet) to unwrap (dealias) color Doppler echocardiographic images and compared its effectiveness against two state-of-the-art segmentation approaches based on nnU-Net and transformer models. We trained and evaluated the performance of each method on an in-house dataset and found that the nnU-Net-based method provided the best dealiased results, followed by the primal-dual approach and the transformer-based technique. Notably, the PDNet, which had significantly fewer trainable parameters, performed competitively with respect to the other two methods, demonstrating the high potential of deep unfolding methods. Our results suggest that deep learning (DL)-based methods can effectively remove aliasing artifacts in color Doppler echocardiographic images, outperforming DeAN, a state-of-the-art semiautomatic technique. Overall, our results show that DL-based methods have the potential to effectively preprocess color Doppler images for downstream quantitative analysis.
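The aliasing artifact itself is easy to reproduce. The following toy 1-D sketch (not the PDNet or nnU-Net pipelines; the Nyquist velocity and profile are illustrative) shows velocities wrapping into the Nyquist range and being recovered by phase unwrapping along a smooth profile:

```python
import numpy as np

v_nyq = 0.6                              # Nyquist velocity (m/s), illustrative
v_true = np.linspace(-0.5, 1.5, 101)     # smoothly varying true velocities

# Color Doppler wraps velocities into [-v_nyq, v_nyq) (phase aliasing)
v_alias = (v_true + v_nyq) % (2 * v_nyq) - v_nyq

# Along a smooth profile, aliasing can be undone with 1-D phase unwrapping:
phase = v_alias * np.pi / v_nyq          # map velocities to Doppler phases
v_unwrapped = np.unwrap(phase) * v_nyq / np.pi
print(np.allclose(v_unwrapped, v_true))  # True
```

Real clinical images are 2-D, noisy, and may contain isolated aliased patches without a smooth path back to an unaliased anchor, which is why learning-based dealiasing is needed in practice.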
Assuntos
Deep Learning , Color Doppler Echocardiography , Color Doppler Echocardiography/methods , Heart Ventricles/diagnostic imaging , Thorax , Artifacts
ABSTRACT
In this work, we propose a model-based deep learning reconstruction algorithm for optical projection tomography (ToMoDL), to greatly reduce acquisition and reconstruction times. The proposed method iterates over a data consistency step and an image domain artefact removal step achieved by a convolutional neural network. A preprocessing stage is also included to avoid potential misalignments between the sample center of rotation and the detector. The algorithm is trained using a database of wild-type zebrafish (Danio rerio) at different stages of development to minimise the mean square error for a fixed number of iterations. Using a cross-validation scheme, we compare the results to other reconstruction methods, such as filtered backprojection, compressed sensing and a direct deep learning method where the pseudo-inverse solution is corrected by a U-Net. The proposed method performs equally well or better than the alternatives. For a highly reduced number of projections, only the U-Net method provides images comparable to those obtained with ToMoDL. However, ToMoDL has a much better performance if the amount of data available for training is limited, given that the number of network trainable parameters is smaller.
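The iterate-then-denoise structure described above can be sketched generically. Here the trained CNN is replaced by a fixed smoothing filter, and the operator and sizes are illustrative, so this is only a structural sketch of ToMoDL-style unrolling, not the actual network:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 32, 24
A = rng.standard_normal((p, n)) / np.sqrt(p)   # stand-in projection operator
x_true = rng.standard_normal(n)
y = A @ x_true                                 # simulated measurements

x = np.zeros(n)
step = 0.1                                     # gradient step size
for _ in range(50):
    x = x - step * A.T @ (A @ x - y)           # data-consistency step
    # image-domain artifact removal; a CNN in ToMoDL, a fixed filter here:
    x = np.convolve(x, [0.05, 0.9, 0.05], mode="same")
print(x.shape)   # (32,)
```

In the trained version, the denoising step is a learned network and the number of iterations is fixed, so the whole unrolled chain can be optimized end to end for mean square error.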
Assuntos
Deep Learning , Animals , Zebrafish , Neural Networks (Computer) , Algorithms , Tomography , Computer-Assisted Image Processing/methods , Imaging Phantoms
ABSTRACT
We report on the experimental demonstration of a fast reconstruction method for multiview fluorescence diffuse optical tomography by using a wavelet-based data compression. We experimentally demonstrate that the use of data compression combined with the multiview approach makes it possible to perform a fast reconstruction of high quality. A structured illumination approach, guided by the compression scheme, has been adopted to further reduce the acquisition time. The reconstruction algorithm is based on the finite element method, and hence is suitable for samples of any arbitrary shape.
Assuntos
Data Compression/methods , Optical Tomography/methods , Fluorescent Dyes/chemistry , Lighting , Fluorescence Spectrometry , Time Factors
ABSTRACT
Color Doppler by transthoracic echocardiography creates two-dimensional fan-shaped maps of blood velocities in the cardiac cavities. It is a one-component velocimetric technique since it only returns the velocity components parallel to the ultrasound beams. Intraventricular vector flow mapping (iVFM) is a method to recover the blood velocity vectors from the Doppler scalar fields in an echocardiographic three-chamber view. We improved our iVFM numerical scheme by imposing physical constraints. The iVFM consisted in minimizing regularized Doppler residuals subject to the condition that two fluid-dynamics constraints were satisfied, namely planar mass conservation and free-slip boundary conditions. The optimization problem was solved by using the Lagrange multiplier method. A finite-difference discretization of the optimization problem, written in the polar coordinate system centered on the cardiac ultrasound probe, led to a sparse linear system. The single regularization parameter was determined automatically so that the method remains unsupervised. The physics-constrained method was validated using realistic intracardiac flow data from a patient-specific computational fluid dynamics (CFD) model. The numerical evaluations showed that the iVFM-derived velocity vectors were in very good agreement with the CFD-based original velocities, with relative errors ranging between 0.3% and 12%. We calculated two macroscopic measures of flow in the cardiac region of interest, the mean vorticity and mean stream function, and observed an excellent concordance between physics-constrained iVFM and CFD. The capability of physics-constrained iVFM was finally tested with in vivo color Doppler data acquired in patients routinely examined in the echocardiographic laboratory. The vortex that forms during rapid filling was deciphered.
The physics-constrained iVFM algorithm is ready for pilot clinical studies and is expected to have a significant clinical impact on the assessment of diastolic function.
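The constrained minimization at the core of this scheme, solved with Lagrange multipliers through a linear (KKT) system, can be illustrated on a generic toy problem. The constraint matrix below is a random stand-in, not the actual discretized mass-conservation operator in polar coordinates:

```python
import numpy as np

# Equality-constrained least squares via Lagrange multipliers:
#   minimize ||v - d||^2  subject to  C v = 0
# (a generic stand-in for fitting Doppler data under mass conservation)
rng = np.random.default_rng(1)
n, k = 6, 2
d = rng.standard_normal(n)        # "Doppler" data to be fitted
C = rng.standard_normal((k, n))   # linear constraint (e.g., a divergence stencil)

# Stationarity of the Lagrangian gives the sparse KKT system:
#   [[I, Cᵀ], [C, 0]] [v; λ] = [d; 0]
KKT = np.block([[np.eye(n), C.T], [C, np.zeros((k, k))]])
sol = np.linalg.solve(KKT, np.concatenate([d, np.zeros(k)]))
v = sol[:n]
print(np.allclose(C @ v, 0))   # True: the constraint is satisfied exactly
```

In the actual method, the identity block is replaced by a regularized data-fidelity operator and the system is sparse, so it can be solved efficiently at clinical frame rates.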
Assuntos
Algorithms , Computer-Assisted Image Interpretation , Blood Flow Velocity , Echocardiography/methods , Humans , Hydrodynamics , Computer-Assisted Image Interpretation/methods , Physics
ABSTRACT
A simple and fast time-domain method for localizing inclusions, fluorescent optical probes or absorbers, is presented. The method offers new possibilities for situations where complete tomographic measurements are not permitted by the examined object, for example in endoscopic examination of the human prostate or the oesophagus. Feasibility was assessed in a phantom study conducted on a point-like fluorochrome embedded in a diffusing medium mimicking the optical properties of biological tissues.
Assuntos
Nephelometry and Turbidimetry/instrumentation , Nephelometry and Turbidimetry/methods , Optical Phenomena , Fluorescent Dyes/chemistry , Surface Properties , Time Factors
ABSTRACT
We present a fast reconstruction method for fluorescence optical tomography with structured illumination. Our approach is based on the exploitation of the wavelet transform of the measurements acquired after wavelet-patterned illuminations. This method, validated on experimental data, enables us to significantly reduce the acquisition and computation times with respect to the classical scanning approach. Therefore, it could be particularly suited for in vivo applications.
Assuntos
Lighting/methods , Fluorescence Spectrometry/methods , Optical Tomography/methods , Computer-Assisted Image Processing , Imaging Phantoms , Time Factors
ABSTRACT
PURPOSE: In the context of fluorescence diffuse optical tomography, determining the optimal way to exploit the time-resolved information has been receiving much attention and different features of the time-resolved signals have been introduced. In this article, the authors revisit and generalize the notion of feature, considering the projection of the measurements onto some basis functions. This leads the authors to propose a novel approach based on the wavelet transform of the measurements. METHODS: A comparative study between the reconstructions obtained from the proposed wavelet-based approach and the reconstructions obtained from the reference temporal moments is provided. An inhomogeneous cubic medium is considered. Reconstructions are performed from synthetic measurements assuming Poisson noise statistics. In order to provide fairly comparable reconstructions, the reconstruction scheme is associated with a particular procedure for selecting the regularization parameter. RESULTS: In the noise-free case, the reconstruction quality is shown to be mainly driven by the number of selected features. In the presence of noise, however, the reconstruction quality depends on the type of the features. In this case, the wavelet approach is shown to outperform the moment approach. While the optimal time-resolved reconstruction quality, which is obtained considering the whole set of time samples, is recovered using only eight wavelet functions, it cannot be attained using moments. It is finally observed that the time-resolved information is of limited utility, in terms of reconstruction, when the maximum number of detected photons is lower than 10^5. CONCLUSIONS: The wavelet approach allows for better exploiting the time-resolved information, especially when the number of detected photons is low. However, when the number of detected photons decreases below a certain threshold, the time-resolved information itself is shown to be of limited utility.
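The notion of projecting time-resolved measurements onto basis functions can be illustrated with an orthonormal Haar basis. The toy signal and hand-rolled basis below are for illustration only and are not the authors' implementation:

```python
import numpy as np

def haar_matrix(N):
    """Orthonormal Haar wavelet basis of size N (N a power of two)."""
    if N == 1:
        return np.array([[1.0]])
    H = haar_matrix(N // 2)
    top = np.kron(H, [1.0, 1.0])                 # coarser-scale rows
    bot = np.kron(np.eye(N // 2), [1.0, -1.0])   # finest-scale detail rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

N = 64
t = np.arange(N)
tpsf = np.exp(-((t - 20.0) / 8.0) ** 2)   # toy time-resolved signal

W = haar_matrix(N)
features = W @ tpsf          # "features" = projections onto the basis functions
# Orthonormality preserves signal energy (Parseval), so a handful of large
# coefficients can summarize the temporal information compactly:
print(np.isclose(np.linalg.norm(features), np.linalg.norm(tpsf)))   # True
```

Temporal moments are projections onto monomials in the same sense; the paper's finding is that wavelet projections retain more of the useful time-resolved information under noise.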
Assuntos
Algorithms , Image Enhancement/methods , Computer-Assisted Image Interpretation/methods , Three-Dimensional Imaging/methods , Fluorescence Microscopy/methods , Optical Tomography/methods , Reproducibility of Results , Sensitivity and Specificity
ABSTRACT
We present the effects of using a single-pixel camera approach to extract optical properties with the single-snapshot spatial frequency-domain imaging method. We acquired images of a human hand for spatial frequencies ranging from 0.1 to 0.4 mm⁻¹ with increasing compression ratios using an adaptive basis scan wavelet prediction strategy. In summary, our findings indicate that the extracted optical properties remained usable up to a compression rate of 99% at a spatial frequency of 0.2 mm⁻¹, with errors of 5% in reduced scattering and 10% in absorption.
Assuntos
Data Compression/methods , Optical Imaging/methods , Computer Simulation , Equipment Design , Hand/diagnostic imaging , Humans , Optical Imaging/instrumentation , Imaging Phantoms
ABSTRACT
The enhancement and control of non-linear phenomena at the nanometer scale have a wide range of applications in science and in industry. Among these phenomena, high-harmonic generation in solids is a recent focus of research to realize next-generation petahertz optoelectronic devices or compact all-solid-state EUV sources. Here, we report on the realization of the first nanoscale high-harmonic source. The strong-field regime is reached by confining the electric field of a few-nanojoule femtosecond laser in a single 3D semiconductor waveguide. We reveal a strong competition between the enhancement of coherent harmonics and incoherent fluorescence favored by excitonic processes. However, far from the band edge, clear enhancement of the harmonic emission is reported with robust sustainability, offering a compact nanosource for applications. We illustrate the potential of our harmonic nano-device by performing a coherent diffractive imaging experiment. Ultra-compact UV/X-ray nanoprobes are foreseen to have other applications such as petahertz electronics, nano-tomography or nano-medicine.
ABSTRACT
PURPOSE: Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. METHODS: We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. RESULTS: The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. 
Photon noise in the SPR images (20 mGy dose) had little impact on the proton range accuracy, as comparable results were obtained for the noiseless situation (infinite dose). Root-mean-square range errors averaged over all irradiation angles associated with dual-energy imaging ranged between 0.50 mm and 0.72 mm for the noiseless situation and between 0.51 mm and 0.77 mm for the realistic scenario. CONCLUSIONS: The impact of the dual-energy spectra and of the dose allocation between energy levels on the SPR accuracy and precision, determined through a projection-based dual-energy algorithm, was evaluated to guide the choice of spectra for dual-energy CT for proton therapy. The dose balance between energy levels was not found to be sensitive for the SPR estimation. The optimal pair of dual-energy spectra was material dependent, but on a heterogeneous anthropomorphic phantom there was no significant difference in range accuracy, and the choice of spectra could be driven by the precision, i.e., the energy gap.
Assuntos
Proton Therapy , X-Ray Computed Tomography , Algorithms , Humans , Imaging Phantoms , Protons
ABSTRACT
PURPOSE: Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. METHODS: Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker, and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. RESULTS: We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster.
For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10^5 and when the marker concentration was equal to or larger than 0.03 g·cm⁻³. CONCLUSIONS: The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework, in which projections are solved in parallel.
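The Gauss-Newton iteration underlying RWLS-GN can be sketched on a toy Beer-Lambert-like model. All sizes, values, and the warm start are illustrative, and the regularization and weighting terms of the actual algorithm are omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(4, 2))   # 4 energy bins x 2 basis materials
x_true = np.array([0.5, 1.5])            # material line integrals
y = np.exp(-A @ x_true)                  # Beer-Lambert-like measurements

x = np.array([0.4, 1.4])                 # warm start near the solution
for _ in range(20):
    r = np.exp(-A @ x) - y                       # residual of the forward model
    J = -np.exp(-A @ x)[:, None] * A             # Jacobian: df_i/dx_j = -f_i A_ij
    x = x - np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton update
print(np.allclose(x, x_true))   # True
```

Each update solves a small linear system built from the Jacobian, which is why fast linear solvers directly benefit the full-scale decomposition; the regularized version simply adds a penalty term to the normal equations.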