1.
J Microsc ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38808665

ABSTRACT

We propose a smartphone-based optical sectioning (SOS) microscope based on the HiLo technique, in which a single smartphone replaces both a high-cost illumination source and a camera sensor. We built the SOS from off-the-shelf optical and mechanical cage systems, with 3D-printed adapters to seamlessly integrate the smartphone with the SOS main body. A liquid light guide integrated with the adapter delivers the smartphone's LED light to the digital micromirror device (DMD) with negligible loss. We used an electrically tuneable lens (ETL) instead of a mechanical translation stage to realise low-cost axial scanning. The ETL was conjugated to the back pupil plane (BPP) of the objective lens via a 4f configuration, forming a telecentric design that maintains the lateral magnification across axial positions. The SOS achieves a 571.5 µm telecentric scanning range and an 11.7 µm axial resolution. The broadband smartphone LED torch can effectively excite fluorescent polystyrene (PS) beads. We successfully used the SOS for high-contrast imaging of fluorescent PS beads at different wavelengths and for optical sectioning of multilayer fluorescent PS beads. To our knowledge, the proposed SOS is the first smartphone-based HiLo optical sectioning microscope (£1965), saving around £7035 compared with a traditional HiLo system (£9000). It is a powerful tool for biomedical research in resource-limited areas.
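The HiLo fusion underlying this kind of microscope can be sketched in a few lines: the in-focus low-frequency content is extracted by weighting the uniform-illumination image with the local contrast of the structured-illumination image, and the in-focus high-frequency content comes from a high-pass of the uniform image. This is an illustrative sketch only; the box filter, kernel size `k`, and `eta` weighting are assumptions, not the paper's exact implementation:

```python
import numpy as np

def box_lowpass(img, k):
    """Simple separable box low-pass filter of odd size k (zero-padded edges)."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

def hilo(uniform, structured, k=9, eta=1.0):
    """Minimal HiLo fusion: local contrast of the structured image weights the
    low-pass (in-focus) content; the high-pass comes from the uniform image."""
    ratio = structured / (uniform + 1e-9)          # illumination-normalised image
    mean = box_lowpass(ratio, k)
    contrast = np.sqrt(np.clip(box_lowpass(ratio ** 2, k) - mean ** 2, 0, None))
    lo = box_lowpass(contrast * uniform, k)        # in-focus low frequencies
    hi = uniform - box_lowpass(uniform, k)         # in-focus high frequencies
    return eta * lo + hi
```

In-focus regions show high local speckle contrast and therefore survive the `lo` weighting, while defocused regions blur the speckle and are suppressed.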

2.
Appl Opt ; 63(8): C32-C40, 2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38568625

ABSTRACT

Compressed ultrafast photography (CUP) is a novel two-dimensional (2D) imaging technique for capturing ultrafast dynamic scenes. Effective image reconstruction is essential in CUP systems. However, existing reconstruction algorithms mostly rely on hand-crafted image priors and complex parameter spaces; they are therefore generally time-consuming and yield poor imaging quality, which limits their practical application. In this paper, we propose a novel reconstruction algorithm, to the best of our knowledge, named plug-and-play total-variation FastDVDnet (PnP-TV-FastDVDnet), which exploits an image's spatial features together with its correlation features in the temporal dimension, and therefore offers higher-quality images than previously reported methods. First, we built a forward mathematical model of the CUP system and derived closed-form solutions of the three sub-optimization problems within the plug-and-play framework. Second, we used FastDVDnet, an advanced neural-network-based video denoising algorithm, to solve the denoising sub-problem. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are improved on actual CUP data compared with traditional algorithms. On benchmark and real CUP datasets, the proposed method shows comparable visual results while reducing the running time by 96% relative to state-of-the-art algorithms.
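The plug-and-play idea at the heart of such reconstructions can be illustrated compactly: alternate a gradient step on the data-fidelity term with a call to an off-the-shelf denoiser standing in for the prior's proximal operator. This is a hedged sketch, not the paper's algorithm; the ADMM splitting, the TV term, and FastDVDnet are replaced here by a proximal-gradient loop with a toy moving-average denoiser:

```python
import numpy as np

def pnp_pgd(A, y, denoise, steps=200):
    """Plug-and-play proximal gradient descent: a data-fidelity gradient step
    followed by a denoiser standing in for the prior's proximal map."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
    x = A.T @ y                                     # simple back-projected initialisation
    for _ in range(steps):
        x = x - gamma * A.T @ (A @ x - y)          # gradient step on ||Ax - y||^2 / 2
        x = denoise(x)                              # plug-in denoiser as the prior
    return x

def moving_average(x, k=5):
    """Toy stand-in for a learned video denoiser such as FastDVDnet."""
    return np.convolve(x, np.ones(k) / k, mode="same")
```

Swapping `moving_average` for a stronger denoiser is exactly the "plug-in" step; the outer loop is unchanged.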

3.
J Biomed Opt ; 29(1): 015004, 2024 01.
Article in English | MEDLINE | ID: mdl-38283935

ABSTRACT

Significance: Diffuse correlation spectroscopy (DCS) is a powerful, noninvasive optical technique for measuring blood flow. Traditionally, the blood flow index (BFi) is derived through nonlinear least-squares fitting of the measured intensity autocorrelation function (ACF). However, the fitting process is computationally intensive, susceptible to measurement noise, and easily influenced by optical properties (absorption coefficient µa and reduced scattering coefficient µs') and by scalp and skull thicknesses. Aim: We aim to develop a data-driven method that enables rapid and robust analysis of the temporal ACFs of multiply scattered light. Moreover, the proposed method can be applied across a range of source-detector distances instead of being limited to a single one. Approach: We present a deep learning architecture with one-dimensional convolutional neural networks, called the DCS neural network (DCS-NET), for estimating BFi and the coherence factor (β). DCS-NET was trained and evaluated using simulated DCS data based on a three-layer brain model. We quantified the impact of physiologically relevant optical property variations, layer thicknesses, realistic noise levels, and multiple source-detector distances (5, 10, 15, 20, 25, and 30 mm) on BFi and β estimates from DCS-NET, semi-infinite, and three-layer fitting models. Results: DCS-NET analyses data around 17,000-fold faster than the traditional three-layer fitting model and 32-fold faster than the semi-infinite model. It offers higher intrinsic sensitivity to deep tissues than the fitting methods, shows excellent noise robustness, and is less sensitive to variations of µa and µs' at a source-detector separation of 30 mm. We also demonstrated that DCS-NET extracts relative BFi (rBFi) with a much lower error of 8.35%, whereas the semi-infinite and three-layer fitting models yield rBFi errors of 43.76% and 19.66%, respectively.
Conclusions: DCS-NET can robustly quantify blood flow measurements at considerable source-detector distances, corresponding to much deeper biological tissues. It has excellent potential for hardware implementation, promising continuous real-time blood flow measurements.


Subject(s)
Deep Learning , Hemodynamics , Spectroscopy, Near-Infrared/methods , Regional Blood Flow/physiology , Scalp
4.
Methods Appl Fluoresc ; 11(2)2023 Mar 20.
Article in English | MEDLINE | ID: mdl-36863024

ABSTRACT

This paper reports a bespoke adder-based deep learning network for time-domain fluorescence lifetime imaging (FLIM). Leveraging the l1-norm extraction method, we propose a 1D Fluorescence Lifetime AdderNet (FLAN) that avoids multiplication-based convolutions to reduce computational complexity. Further, we compressed fluorescence decays in the temporal dimension using a log-scale merging technique to discard redundant temporal information, yielding log-scale FLAN (FLAN+LS). FLAN+LS achieves compression ratios of 0.11 and 0.23 relative to FLAN and a conventional 1D convolutional neural network (1D CNN), respectively, while maintaining high accuracy in retrieving lifetimes. We extensively evaluated FLAN and FLAN+LS using synthetic and real data. For synthetic data, a traditional fitting method and other non-fitting, high-accuracy algorithms were compared with our networks; our networks attained low reconstruction errors across different photon-count scenarios. For real data, we used fluorescent bead images acquired with a confocal microscope to validate performance on real fluorophores, and our networks can differentiate beads with different lifetimes. Additionally, we implemented the network architecture on a field-programmable gate array (FPGA) with a post-training quantization technique to shorten the bit-width, thereby improving computing efficiency. FLAN+LS on hardware achieves the highest computing efficiency compared with the 1D CNN and FLAN. We also discuss the applicability of our network and hardware architecture to other time-resolved biomedical applications using photon-efficient, time-resolved sensors.
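The log-scale merging step can be sketched as a rebinning of the decay histogram: bin edges are log-spaced so that early bins, where the decay changes fastest, keep fine resolution while the tail is merged coarsely. This is an illustrative sketch under assumed conventions, not the paper's exact scheme:

```python
import numpy as np

def log_rebin(decay, n_out):
    """Merge a linearly sampled decay histogram into log-spaced bins:
    fine near time zero, progressively coarser along the tail."""
    n = len(decay)
    # unique integer bin edges, log-spaced over [1, n], shifted to start at 0
    edges = np.unique(np.logspace(0, np.log10(n), n_out + 1).astype(int)) - 1
    return np.add.reduceat(decay, edges)
```

Because the edges partition the full histogram, total photon counts are preserved while the temporal dimension shrinks.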

5.
Sensors (Basel) ; 22(19)2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36236390

ABSTRACT

Fluorescence lifetime imaging (FLIM) is a powerful tool that provides unique quantitative information for biomedical research. In this study, we propose a multi-layer-perceptron-based mixer (MLP-Mixer) deep learning (DL) algorithm, named FLIM-MLP-Mixer, for fast and robust FLIM analysis. The FLIM-MLP-Mixer has a simple network architecture yet a powerful ability to learn from data. Compared with traditional fitting and previously reported DL methods, the FLIM-MLP-Mixer shows superior accuracy and calculation speed, as validated using both synthetic and experimental data. All results indicate that the proposed method is well suited to accurately estimating lifetime parameters from measured fluorescence histograms, and it has great potential in various real-time FLIM applications.


Subject(s)
Deep Learning , Algorithms , Fluorescence Resonance Energy Transfer/methods , Microscopy, Fluorescence/methods , Optical Imaging/methods
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1883-1886, 2022 07.
Article in English | MEDLINE | ID: mdl-36085638

ABSTRACT

Convolutional neural networks (CNNs) have shown exceptional performance for fluorescence lifetime imaging (FLIM). However, redundant parameters and complicated topologies make it challenging to implement such networks on embedded hardware for real-time processing. We report a lightweight, quantized neural architecture that offers fast FLIM imaging. Forward propagation is significantly simplified by replacing the matrix multiplications in each convolution layer with additions and by quantizing data to a low bit-width. We first used synthetic 3-D lifetime data with given lifetime ranges and photon counts to verify that correct average lifetimes can be obtained. Afterwards, human prostatic cancer cells incubated with gold nanoprobes were used to validate the feasibility of the network on real-world data. The quantized network yielded a 37.8% compression ratio without performance degradation. Clinical relevance - This neural network can be applied to diagnose cancer early, based on fluorescence lifetime, in a non-invasive way. The approach brings high accuracy and accelerates diagnostic processes for clinicians who are not experts in biomedical signal processing.
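Low-bit-width quantization of the kind referred to here can be sketched as uniform symmetric rounding of a weight tensor to a small number of levels; this is a generic post-training scheme for illustration, not the paper's exact procedure:

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to a given bit-width,
    as used to shrink networks for embedded or FPGA deployment."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale          # snap to the quantization grid
```

The rounding error is bounded by half a quantization step, which is why moderate bit-widths often cost little accuracy.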


Subject(s)
Computers , Data Compression , Humans , Neural Networks, Computer , Optical Imaging , Signal Processing, Computer-Assisted
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1887-1890, 2022 07.
Article in English | MEDLINE | ID: mdl-36086288

ABSTRACT

Wide-field fluorescence lifetime imaging (FLIM) is a promising technique for biomedical and clinical applications. Integration with CMOS single-photon avalanche diode (SPAD) sensor arrays can lead to cheaper and portable real-time FLIM systems. However, the FLIM data obtained by such sensor systems often have sophisticated noise features, and fast tools that efficiently recover lifetime parameters from highly noise-corrupted fluorescence signals are still lacking. This paper proposes a smart wide-field FLIM system containing a 192×128 CMOS SPAD sensor and a field-programmable gate array (FPGA) embedded deep learning (DL) FLIM processor. The processor adopts a hardware-friendly, lightweight neural network for fluorescence lifetime analysis, offering high accuracy against noise, fast speed, and low power consumption. Experimental results demonstrate the proposed system's superior and robust performance, which is promising for many FLIM applications such as FLIM-guided clinical surgeries, cancer diagnosis, and biomedical imaging.


Subject(s)
Optical Imaging , Photons , Computer Systems , Microscopy, Fluorescence/methods
8.
Sensors (Basel) ; 22(10)2022 May 15.
Article in English | MEDLINE | ID: mdl-35632167

ABSTRACT

We present a fast and accurate analytical method for fluorescence lifetime imaging microscopy (FLIM) using the extreme learning machine (ELM). We used extensive metrics to evaluate ELM and existing algorithms. First, we compared these algorithms on synthetic datasets; the results indicate that ELM can achieve higher fidelity, even in low-photon conditions. We then used ELM to retrieve lifetime components from human prostate cancer cells loaded with gold nanosensors, showing that ELM also outperforms iterative fitting and non-fitting algorithms. Compared with a computationally efficient neural network, ELM achieves comparable accuracy with less training and inference time. As ELM involves no back-propagation during training, its training speed is much higher than that of existing neural network approaches. The proposed strategy is promising for edge computing with online training.
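The core of an extreme learning machine is small enough to sketch: a fixed random hidden layer, with only the output weights solved in closed form by least squares, which is exactly why no back-propagation is needed and training is fast. A minimal sketch (the layer size and activation are arbitrary choices, not those of the paper):

```python
import numpy as np

def elm_train(X, y, n_hidden=64, seed=0):
    """Extreme learning machine: random fixed hidden layer, output weights
    solved in closed form by least squares (no back-propagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random biases, never trained
    H = np.tanh(X @ W + b)                        # hidden-layer feature matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Training cost is a single least-squares solve, so retraining online (as suggested for edge computing) is cheap.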


Subject(s)
Algorithms , Neural Networks, Computer , Fluorescence , Humans , Male
9.
Opt Lett ; 46(15): 3612-3615, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34329237

ABSTRACT

Time-of-flight and photometric stereo are two three-dimensional (3D) imaging techniques with complementary properties: the former achieves depth accuracy in discontinuous scenes, while the latter reconstructs object surfaces with fine depth detail and high spatial resolution. In this work, we demonstrate surface reconstruction of complex 3D fields with discontinuities between objects by combining the two imaging methods. Using commercial LEDs, a single-photon avalanche diode camera, and a mobile phone, high-resolution surface reconstruction is achieved with an RMS error of 6% for an object auto-selected from a scene imaged at a distance of 50 cm.

10.
Appl Opt ; 60(5): 1476-1483, 2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33690594

ABSTRACT

A single-shot fluorescence lifetime imaging (FLIM) method based on compressed ultrafast photography (CUP), named space-restricted CUP (srCUP), is proposed. srCUP is suitable for imaging objects moving slowly (<∼150/M mm/s, where M is the magnification of the objective lens) in the field of view, with intensity changing within nanoseconds in a measurement window of around 10 ns. We used synthetic datasets to explore the performance of srCUP compared with CUP and TCUP (a variant of CUP). srCUP not only provides superior reconstruction performance; its reconstruction is also twofold and threefold faster than CUP and TCUP, respectively. Lifetime determination was assessed by estimating lifetime components and the amplitude- and intensity-weighted average lifetimes (τA and τI) from the reconstructed scenes using least-squares fitting with a bi-exponential model. srCUP has the best accuracy and precision for lifetime determination, with a relative bias of less than 7% and a coefficient of variation of less than 7% for τA, and a relative bias of less than 10% and a coefficient of variation of less than 11% for τI.
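The two average-lifetime definitions used here are simple weighted means over the decay components. A quick sketch of the standard formulas (the component amplitudes and lifetimes are hypothetical inputs):

```python
def average_lifetimes(amps, taus):
    """Amplitude-weighted (tau_A) and intensity-weighted (tau_I) average
    lifetimes of a multi-exponential decay sum_i a_i * exp(-t / tau_i)."""
    a_sum = sum(amps)
    i_sum = sum(a * t for a, t in zip(amps, taus))   # each component's integrated intensity
    tau_a = i_sum / a_sum                            # weights: amplitudes a_i
    tau_i = sum(a * t * t for a, t in zip(amps, taus)) / i_sum  # weights: a_i * tau_i
    return tau_a, tau_i
```

For a bi-exponential decay with equal amplitudes and lifetimes 1 ns and 3 ns, this gives τA = 2.0 ns and τI = 2.5 ns, showing how τI emphasises the slower component.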


Subject(s)
Optical Imaging/methods , Photography/methods , Algorithms , Kinetics , Least-Squares Analysis , Models, Chemical , Time Factors
11.
Opt Express ; 28(26): 39299-39310, 2020 Dec 21.
Article in English | MEDLINE | ID: mdl-33379483

ABSTRACT

Compressive ultrafast photography (CUP) has achieved real-time femtosecond imaging based on compressive-sensing methods. However, the reconstruction usually suffers from artifacts caused by strong noise, aberration, and distortion, which limits its applications. We propose a deep compressive ultrafast photography (DeepCUP) method. Numerical simulations on both the MNIST and UCF-101 datasets, compared against other state-of-the-art algorithms, show that DeepCUP achieves superior PSNR and SSIM relative to previous compressed-sensing methods. We also demonstrate the outstanding performance of the proposed method under system errors and noise in comparison with other methods.

12.
Methods Appl Fluoresc ; 8(3): 034001, 2020 Apr 15.
Article in English | MEDLINE | ID: mdl-32235056

ABSTRACT

Facultative intracellular pathogens can live both inside and outside host cells, and it is highly desirable to differentiate their cellular locations for both fundamental research and clinical applications. In this work, we developed an analysis platform that allows users to choose between two analysis models for fluorescence lifetime imaging microscopy (FLIM): amplitude-weighted lifetime (τA) and intensity-weighted lifetime (τI). We applied both models to analyse FLIM images of mouse RAW macrophage cells infected with Shigella sonnei, adherent-invasive E. coli (AIEC), and Lactobacillus. The results show that the fluorescence lifetimes of the bacteria depend on their cellular locations. The τA model is superior for visually differentiating bacteria in extracellular, intracellular, and membrane-bound locations, whereas the τI model shows excellent precision. Both models are fast: an analysis completes within 0.3 s. We also compared the proposed models with a widely used commercial software tool (τC, SPCImage, Becker & Hickl GmbH), obtaining similar τI and τC results. The platform also allows users to perform phasor analysis with great flexibility, pinpointing regions of interest in both lifetime images and phasor plots. This platform holds the disruptive potential of replacing z-stack imaging for identifying intracellular bacteria.
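Phasor analysis, as offered by the platform, maps each pixel's decay histogram to a point (g, s) via its first-harmonic Fourier coefficients; single-exponential decays land on the universal semicircle, which is what makes populations easy to pinpoint visually. A minimal sketch under assumed conventions (bin-midpoint sampling, angular frequency chosen by the user):

```python
import math

def phasor(decay, dt, omega):
    """Map a decay histogram (counts per bin of width dt) to phasor
    coordinates (g, s) at angular frequency omega, using bin midpoints."""
    total = sum(decay)
    g = sum(c * math.cos(omega * (i + 0.5) * dt) for i, c in enumerate(decay)) / total
    s = sum(c * math.sin(omega * (i + 0.5) * dt) for i, c in enumerate(decay)) / total
    return g, s
```

For a single-exponential decay, g = 1/(1 + (ωτ)²) and s = ωτ/(1 + (ωτ)²), so with ωτ = 1 the point sits at (0.5, 0.5), on the semicircle of radius 0.5 centred at (0.5, 0).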


Subject(s)
Bacteria/pathogenicity , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Humans
13.
Opt Express ; 27(24): 35485-35498, 2019 Nov 25.
Article in English | MEDLINE | ID: mdl-31878719

ABSTRACT

Multispectral and 3-D imaging are useful for a wide variety of applications, adding valuable spectral and depth information for image analysis. Single-photon avalanche diode (SPAD) based imaging systems provide photon time-of-arrival information and can be used for imaging with time-correlated single-photon counting techniques. Here we demonstrate an LED-based synchronised illumination system in which temporally structured light relates time-of-arrival to specific wavelengths, thus recovering reflectance information. Cross-correlation of the received multi-peak histogram with a reference measurement yields a time delay, allowing depth to be determined with cm-scale resolution despite the long sequence of relatively wide (∼10 ns) pulses. Using commercial LEDs and a SPAD imaging array, multispectral 3-D imaging is demonstrated across 9 visible wavelength bands.
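The depth-recovery step described above reduces to locating the cross-correlation peak between the reference and received histograms and converting the lag into a round-trip distance. A minimal sketch (the bin width and unit choices are assumptions):

```python
import numpy as np

def depth_from_delay(reference, received, bin_ns):
    """Estimate target distance from the cross-correlation peak between a
    reference timing histogram and the received multi-peak histogram."""
    corr = np.correlate(received, reference, mode="full")
    delay_bins = corr.argmax() - (len(reference) - 1)  # lag of the correlation peak
    c_mm_per_ns = 299.792458                           # speed of light
    return 0.5 * delay_bins * bin_ns * c_mm_per_ns     # halve the round trip, in mm
```

Because the whole multi-peak sequence is correlated at once, the peak lag is well defined even though each individual pulse is wide.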

14.
Opt Express ; 26(14): 17936-17947, 2018 Jul 09.
Article in English | MEDLINE | ID: mdl-30114076

ABSTRACT

Qualitative and quantitative measurements of complex flows demand fast single-shot fluorescence lifetime imaging (FLI) technology with high precision. We present a method, single-shot time-gated fluorescence lifetime imaging using three-frame images (TFI-TGFLI). To our knowledge, it is the first work to combine a three-gate rapid lifetime determination (RLD) scheme with a four-channel framing camera for this purpose. Unlike previously proposed two-gate RLD schemes, TFI-TGFLI provides a wider lifetime range of 0.6 to 13 ns with reasonable precision. The performance of the proposed approach has been examined by both Monte Carlo simulations and toluene-seeded gas-mixing jet diagnosis experiments. The measured average lifetimes over the whole excited areas agree well with the results obtained by a streak camera: 7.6 ns (N2 = 7 L/min; O2 < 0.1 L/min) and 2.6 ns (N2 = 19 L/min; O2 = 1 L/min), with standard deviations of 1.7 ns and 0.8 ns across the lifetime image pixels, respectively. The concentration distributions of the quenchers and fluorescent species were further analyzed and are consistent with the experimental settings.
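The three-gate RLD estimate itself is a one-liner: for a single-exponential decay on a constant background, three equal, contiguous gates G1, G2, G3 satisfy (G1 - G2)/(G2 - G3) = exp(Δt/τ), so the background cancels in the differences. A sketch of that relation (the gate width and counts below are hypothetical):

```python
import math

def rld_three_gate(g1, g2, g3, gate_ns):
    """Three-gate rapid lifetime determination for a single-exponential decay
    on a constant background: tau = gate / ln((G1 - G2) / (G2 - G3))."""
    return gate_ns / math.log((g1 - g2) / (g2 - g3))
```

This closed form is what makes three-gate RLD suitable for single-shot use: no iterative fitting is required per pixel.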

15.
Phys Med Biol ; 62(4): 1632-1636, 2017 02 21.
Article in English | MEDLINE | ID: mdl-28145282

ABSTRACT

This comment clarifies that Poisson noise, rather than Gaussian noise, should be used to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. We also correct an equation in the original paper. As the LSD-LE method is rapid and has the potential to be widely applied not only to diagnostics but to wider bioimaging applications, precise noise models and equations are desirable.


Subject(s)
Least-Squares Analysis , Microscopy, Fluorescence , Normal Distribution
16.
Opt Express ; 24(23): 26777-26791, 2016 Nov 14.
Article in English | MEDLINE | ID: mdl-27857408

ABSTRACT

Analyzing large fluorescence lifetime imaging (FLIM) datasets is becoming overwhelming: the latest FLIM systems easily produce massive amounts of data, making efficient analysis more challenging than ever. In this paper we propose combining a custom-fit variable projection method with a Laguerre-expansion-based deconvolution to analyze bi-exponential data obtained from time-domain FLIM systems. Unlike nonlinear least-squares methods, which require a suitable initial guess from an experienced researcher, the new method is free from manual intervention and hence supports automated analysis. Monte Carlo simulations on synthesized FLIM data demonstrate its performance relative to other approaches. The performance is also illustrated on real-life FLIM data from a study of daisy-pollen autofluorescence and of the endocytosis of gold nanorods (GNRs) in living cells; in the latter, the fluorescence lifetimes of the GNRs are much shorter than the full width at half maximum of the instrument response function. Overall, the proposed method involves simple steps and shows great promise for automated FLIM analysis of large datasets.

17.
Opt Express ; 24(13): 13894-905, 2016 Jun 27.
Article in English | MEDLINE | ID: mdl-27410552

ABSTRACT

Fast deconvolution is an essential step in calibrating instrument responses in large-scale fluorescence lifetime imaging microscopy (FLIM) image analysis. This paper examines a computationally efficient least-squares deconvolution method based on Laguerre expansion (LSD-LE), recently developed for clinical diagnosis applications, and proposes new criteria for selecting Laguerre basis functions (LBFs) that do not rely on mutual orthonormality between the LBFs. Compared with the previously reported LSD-LE, the improved LSD-LE allows a higher laser repetition rate, reducing the acquisition time per measurement. Moreover, we extend it, for the first time, to analyze bi-exponential fluorescence decays for more general FLIM-FRET applications. The proposed method was tested on both synthesized bi-exponential data and real FLIM data from a study of the endocytosis of gold nanorods in HEK293 cells, showing promising results compared with the previously reported constrained LSD-LE.


Subject(s)
Microscopy, Fluorescence/instrumentation , Algorithms , Fluorescence , HEK293 Cells , Humans , Image Processing, Computer-Assisted/methods , Least-Squares Analysis , Microscopy, Fluorescence/methods
18.
Opt Lett ; 41(11): 2561-4, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27244414

ABSTRACT

A novel high-speed fluorescence lifetime imaging (FLIM) analysis method based on artificial neural networks (ANN) has been proposed. In terms of image generation, the proposed ANN-FLIM method does not require iterative searching procedures or initial conditions, and it can generate lifetime images at least 180-fold faster than conventional least squares curve-fitting software tools. The advantages of ANN-FLIM were demonstrated on both synthesized and experimental data, showing that it has great potential to fuel current revolutions in rapid FLIM technologies.

19.
Opt Express ; 24(7): 6899-915, 2016 Apr 04.
Article in English | MEDLINE | ID: mdl-27136986

ABSTRACT

We demonstrate an implementation of a centre-of-mass method (CMM) incorporating background subtraction for use in multifocal fluorescence lifetime imaging microscopy, accurately determining fluorescence lifetimes in live-cell imaging with the Megaframe camera. The inclusion of background subtraction solves one of the major issues associated with centre-of-mass approaches, namely the algorithm's sensitivity to background signal. The algorithm, implemented predominantly in hardware, provides real-time lifetime output and allows the user to condense large amounts of photon data effectively: instead of transferring thousands of photon arrival times, the lifetime is represented by a single value, so the system can collect data up to the limit of pulse pile-up without any constraint from data transfer rates. To evaluate the performance of this new CMM algorithm against existing techniques (rapid lifetime determination and Levenberg-Marquardt fitting), we imaged live MCF-7 human breast carcinoma cells transiently transfected with FRET standards. We show that it offers significant advantages in terms of lifetime accuracy and insensitivity to variability in dark count rate (DCR) between Megaframe camera pixels. Unlike other algorithms, no prior knowledge of the expected lifetime is required. The ability to provide real-time lifetime readout makes this technique extremely useful for a number of applications.
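A software sketch of centre-of-mass lifetime estimation with background subtraction: the mean photon arrival time of a single-exponential decay equals the lifetime, and uniformly distributed background counts (mean arrival window/2) can be removed from the sums before dividing. The hardware version described above accumulates the same sums on the fly; the window length and background-count estimate below are hypothetical:

```python
def cmm_lifetime(timestamps, window_ns, n_background):
    """Centre-of-mass lifetime estimate with background subtraction: remove
    the expected contribution of n_background uniformly distributed counts
    (each with mean arrival window_ns / 2) before taking the mean arrival time."""
    n = len(timestamps)
    return (sum(timestamps) - n_background * window_ns / 2) / (n - n_background)
```

This sketch assumes the measurement window is much longer than the lifetime; otherwise a truncation correction is needed.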

20.
Opt Lett ; 41(8): 1768, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-27082340

ABSTRACT

Table 1 of an earlier paper [Opt. Lett.40, 336 (2015)10.1364/OL.40.000336] contained an incorrect mathematical expression. The error is rectified here.
