Results 1 - 20 of 125

1.
Cell ; 184(12): 3318-3332.e17, 2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34038702

ABSTRACT

Long-term subcellular intravital imaging in mammals is vital to study diverse intercellular behaviors and organelle functions during native physiological processes. However, optical heterogeneity, tissue opacity, and phototoxicity pose great challenges. Here, we propose a computational imaging framework, termed digital adaptive optics scanning light-field mutual iterative tomography (DAOSLIMIT), featuring high-speed, high-resolution 3D imaging, tiled wavefront correction, and low phototoxicity in a compact system. By imaging the entire volume tomographically and simultaneously, we obtained volumetric imaging across 225 × 225 × 16 µm³, with a resolution of up to 220 nm laterally and 400 nm axially, at the millisecond scale, over hundreds of thousands of time points. To establish its capabilities, we investigated large-scale cell migration and neural activities in different species and observed various subcellular dynamics in mammals during neutrophil migration and tumor cell circulation.


Subject(s)
Algorithms , Imaging, Three-Dimensional , Optics and Photonics , Tomography , Animals , Calcium/metabolism , Cell Line, Tumor , Cell Membrane/metabolism , Cell Movement , Drosophila , HeLa Cells , Humans , Larva/physiology , Liver/diagnostic imaging , Male , Mice, Inbred C57BL , Neoplasms/pathology , Rats, Sprague-Dawley , Signal-To-Noise Ratio , Subcellular Fractions/physiology , Time Factors , Zebrafish
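
The "mutual iterative tomography" step in frameworks like this is, at its core, an iterative multiplicative reconstruction. Below is a hedged sketch: a generic Richardson-Lucy-style update with a toy 1D blur standing in for the light-field forward projection, not the authors' DAOSLIMIT code.

```python
import numpy as np

def richardson_lucy(measurement, A, AT, n_iter=50, eps=1e-12):
    """Generic Richardson-Lucy-style iterative reconstruction.

    measurement: observed data (non-negative array)
    A:  forward operator, estimate -> measurement (placeholder here)
    AT: adjoint of A (back-projection)
    """
    estimate = np.ones_like(AT(measurement))      # flat initial guess
    norm = AT(np.ones_like(measurement)) + eps    # sensitivity normalization
    for _ in range(n_iter):
        ratio = measurement / (A(estimate) + eps) # data-consistency ratio
        estimate = estimate * (AT(ratio) / norm)  # multiplicative update
    return estimate

# Toy demo: a symmetric 1D blur kernel, so the operator is self-adjoint.
kernel = np.array([0.25, 0.5, 0.25])
A = lambda x: np.convolve(x, kernel, mode="same")
AT = A
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
recovered = richardson_lucy(A(truth), A, AT)
print(f"recovered peak at {recovered.argmax()}, true peak at {truth.argmax()}")
```
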
2.
Cell ; 180(3): 536-551.e17, 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-31955849

ABSTRACT

Goal-directed behavior requires the interaction of multiple brain regions. How these regions and their interactions with brain-wide activity drive action selection is less well understood. We investigated this question by combining whole-brain volumetric calcium imaging using light-field microscopy with an operant-conditioning task in larval zebrafish. We find that global, recurring dynamics of brain states exhibit pre-motor bifurcations toward mutually exclusive decision outcomes. These dynamics arise from a distributed network displaying trial-by-trial functional connectivity changes, especially between the cerebellum and habenula, which correlate with decision outcome. Within this network, the cerebellum shows particularly strong and predictive pre-motor activity (>10 s before movement initiation), mainly within the granule cells. Turn directions are determined by the difference in neural activity between the ipsilateral and contralateral hemispheres, while the rate of bi-hemispheric population ramping quantitatively predicts decision time on a trial-by-trial level. Our results highlight a cognitive role of the cerebellum and its importance in motor planning.


Subject(s)
Cerebellum/physiology , Decision Making/physiology , Reaction Time/physiology , Zebrafish/physiology , Animals , Behavior, Animal/physiology , Brain Mapping/methods , Cerebrum/physiology , Cognition/physiology , Conditioning, Operant/physiology , Goals , Habenula/physiology , Hot Temperature , Larva/physiology , Motor Activity/physiology , Movement , Neurons/physiology , Psychomotor Performance/physiology , Rhombencephalon/physiology
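
An illustrative, assumption-laden toy of the ramp-to-threshold reading behind the last finding, not the authors' analysis: if pre-motor population activity climbs linearly toward a fixed bound, decision time is inversely proportional to the ramp rate.

```python
import numpy as np

# Ramp-to-threshold sketch: decision is triggered when a linear ramp of
# rate r reaches a fixed bound, so predicted decision time = bound / r.
rng = np.random.default_rng(0)
threshold = 1.0
rates = rng.uniform(0.05, 0.3, size=200)          # trial-by-trial ramp rates
predicted = threshold / rates                      # ramp-to-threshold prediction
observed = predicted + rng.normal(0.0, 1.0, 200)   # noisy stand-in "data"
r = np.corrcoef(predicted, observed)[0, 1]
print(f"predicted vs. observed decision time: r = {r:.2f}")
```
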
3.
Proc Natl Acad Sci U S A ; 121(40): e2402556121, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39320920

ABSTRACT

Fluorescence lifetime imaging microscopy (FLIM) is a powerful imaging technique that enables the visualization of biological samples at the molecular level by measuring the fluorescence decay rate of fluorescent probes. This provides critical information about molecular interactions, environmental changes, and localization within biological systems. However, creating high-resolution lifetime maps using conventional FLIM systems can be challenging, as it often requires extensive scanning that can significantly lengthen acquisition times. This issue is further compounded in three-dimensional (3D) imaging because it demands additional scanning along the depth axis. To tackle this challenge, we developed a computational imaging technique called light-field tomographic FLIM (LIFT-FLIM). Our approach allows for the acquisition of volumetric fluorescence lifetime images in a highly data-efficient manner, significantly reducing the number of scanning steps required compared to conventional point-scanning or line-scanning FLIM imagers. Moreover, LIFT-FLIM enables the measurement of high-dimensional data using low-dimensional detectors, which are typically low cost and feature a higher temporal bandwidth. We demonstrated LIFT-FLIM using a linear single-photon avalanche diode array on various biological systems, showcasing unparalleled single-photon detection sensitivity. Additionally, we expanded the functionality of our method to spectral FLIM and demonstrated its application in high-content multiplexed imaging of lung organoids. LIFT-FLIM has the potential to open up broad avenues in both basic and translational biomedical research.


Subject(s)
Microscopy, Fluorescence , Microscopy, Fluorescence/methods , Animals , Humans , Imaging, Three-Dimensional/methods , Mice , Fluorescent Dyes/chemistry , Tomography/methods
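
For orientation, the quantity FLIM maps per pixel is the decay constant of the fluorescence signal. Here is a minimal sketch of that core computation, assuming an ideal mono-exponential decay and synthetic photon arrivals; real pipelines (including LIFT-FLIM) must also handle the instrument response function, background, and multi-exponential decays.

```python
import numpy as np

# Simulate photon arrival times for a fluorophore with a 2.5 ns lifetime,
# histogram them, and recover tau from the slope of log-counts vs. time:
# I(t) = I0 * exp(-t / tau)  =>  log I(t) = log I0 - t / tau.
rng = np.random.default_rng(1)
tau_true = 2.5e-9
arrivals = rng.exponential(tau_true, size=50_000)
counts, edges = np.histogram(arrivals, bins=256)
centers = 0.5 * (edges[:-1] + edges[1:])

mask = counts > 0                                  # avoid log(0) in empty bins
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
print(f"estimated tau = {-1.0 / slope * 1e9:.2f} ns (true: 2.50 ns)")
```
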
4.
Proc Natl Acad Sci U S A ; 120(31): e2304755120, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37487067

ABSTRACT

Three-dimensional single-pixel imaging (3D SPI) has become an attractive imaging modality for both biomedical research and optical sensing. 3D-SPI techniques generally depend on time-of-flight or stereovision principles to extract depth information from backscattered light. However, existing implementations of these two optical schemes are limited to surface mapping of 3D objects at depth resolutions of, at best, the millimeter level. Here, we report 3D light-field illumination single-pixel microscopy (3D-LFI-SPM), which enables volumetric imaging of microscopic objects with near-diffraction-limited 3D optical resolution. Aimed at 3D space reconstruction, 3D-LFI-SPM optically samples the 3D Fourier spectrum by combining 3D structured light-field illumination with single-element intensity detection. We built a 3D-LFI-SPM prototype that provides an imaging volume of ∼390 × 390 × 3,800 µm³ and achieves 2.7-µm lateral resolution and better than 37-µm axial resolution. Its capability for 3D visualization of label-free optical absorption contrast is demonstrated by imaging single algal cells in vivo. Our approach opens broad perspectives for 3D SPI, with potential applications in various fields such as biomedical functional imaging.
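
A toy 2D analogue of the Fourier-sampling idea: each structured illumination pattern is a Fourier basis function, the single-element detector records one inner product (one Fourier coefficient), and an inverse FFT recovers the scene. The paper's 3D structured light-field illumination is replaced here by 2D complex exponentials and a synthetic object; everything below is illustrative.

```python
import numpy as np

n = 32
scene = np.zeros((n, n))
scene[8:24, 12:20] = 1.0                           # simple synthetic object

ky, kx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
coeffs = np.zeros((n, n), dtype=complex)
for u in range(n):
    for v in range(n):
        pattern = np.exp(-2j * np.pi * (u * ky + v * kx) / n)
        coeffs[u, v] = np.sum(scene * pattern)     # one single-pixel reading

recovered = np.real(np.fft.ifft2(coeffs))          # inverse Fourier transform
print(f"max reconstruction error: {np.abs(recovered - scene).max():.2e}")
```
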

5.
Nano Lett ; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38620069

ABSTRACT

Exciton-polariton systems, composed of light-matter quasi-particles with very small effective masses, readily realize Bose-Einstein condensation. In this work, we constructed an annular trap in a halide perovskite semiconductor microcavity and observed the spontaneous formation of symmetric petal-shaped exciton-polariton condensates in the annular trap at room temperature. We found that the number of petals of the petal-shaped exciton-polariton condensates, which is determined by the orbital angular momentum, depends on the light intensity distribution. Therefore, selective excitation of perovskite microcavity exciton-polariton condensates under all-optical control can be realized by adjusting the light intensity distribution. This could pave the way to room-temperature topological devices, optical cryptographic devices, and new quantum gyroscopes in the exciton-polariton system.

6.
Sensors (Basel) ; 24(17)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39275635

ABSTRACT

In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancing FER accuracy. Our findings show that the model using SA images surpasses state-of-the-art performance, achieving 88.13% ± 7.42% accuracy under the subject-specific evaluation protocol and 91.88% ± 3.25% under the subject-independent evaluation protocol. These results highlight our model's potential in enhancing FER accuracy and robustness, outperforming existing methods. Furthermore, our multimodal fusion approach, integrating SA, AiF, and depth images, demonstrates substantial improvements over unimodal models. The decision-level fusion strategy, particularly using average weights, proved most effective, achieving 90.13% ± 4.95% accuracy under the subject-specific evaluation protocol and 93.33% ± 4.92% under the subject-independent evaluation protocol. This approach leverages the complementary strengths of each modality, resulting in a more comprehensive and accurate FER system.


Subject(s)
Facial Expression , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Automated Facial Recognition/methods , Algorithms , Pattern Recognition, Automated/methods
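
A minimal sketch of the decision-level fusion with average weights that the study found most effective: each modality-specific model produces class probabilities for the same face, and the fused prediction averages them. The probability vectors below are placeholders, not outputs of the actual EfficientNetV2-S + BiGRU models.

```python
import numpy as np

classes = ["anger", "happiness", "neutral", "surprise"]
p_sa    = np.array([0.10, 0.60, 0.20, 0.10])   # sub-aperture branch (stand-in)
p_aif   = np.array([0.05, 0.55, 0.30, 0.10])   # all-in-focus branch (stand-in)
p_depth = np.array([0.20, 0.40, 0.30, 0.10])   # depth-map branch (stand-in)

p_fused = (p_sa + p_aif + p_depth) / 3.0       # equal-weight averaging
print(classes[int(np.argmax(p_fused))])        # -> "happiness"
```
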
7.
Sensors (Basel) ; 24(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676139

ABSTRACT

Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera's main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models, specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil's role, and all theoretical results are validated through a ray tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data, we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
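
For reference, the thin lens relation on which such main-lens and microlens models rest is the textbook identity below (not a result of this paper), linking focal length f, object distance a, and image distance b; the exit-pupil analysis concerns corrections to this idealization.

```latex
% Standard thin lens imaging equation:
% f = focal length, a = object distance, b = image distance.
\[ \frac{1}{f} \;=\; \frac{1}{a} + \frac{1}{b} \]
```
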

8.
Sensors (Basel) ; 24(11)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38894371

ABSTRACT

The rich spatial and angular information in light field images enables accurate depth estimation, which is a crucial aspect of environmental perception. However, the abundance of light field information also leads to high computational costs and memory pressure. Typically, selectively pruning some light field information can significantly improve computational efficiency, but at the expense of reduced depth estimation accuracy in the pruned model, especially in low-texture regions and occluded areas where angular diversity is reduced. In this study, we propose a lightweight disparity estimation model that balances speed and accuracy and enhances depth estimation accuracy in textureless regions. We combine cost matching methods based on absolute difference and correlation to construct cost volumes, improving both accuracy and robustness. Additionally, we develop a multi-scale disparity cost fusion architecture, employing 3D convolutions and a UNet-like structure to handle matching costs at different depth scales. This method effectively integrates information across scales, utilizing the UNet structure for efficient fusion and completion of cost volumes, thus yielding more precise depth maps. Extensive testing shows that our method achieves computational efficiency on par with the most efficient existing methods, yet with double the accuracy. Moreover, our approach achieves accuracy comparable to the current highest-accuracy methods, but with an order-of-magnitude improvement in computational performance.
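
A minimal sketch of the absolute-difference half of the cost-volume construction described above, for two horizontally displaced views: for each candidate disparity d, one view is shifted by d and compared pixel-wise with the other, and the lowest-cost disparity wins. The correlation-based costs, 3D convolutions, and UNet-style fusion of the paper are omitted; the synthetic views are placeholders.

```python
import numpy as np

def ad_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume of shape (max_disp + 1, H, W)."""
    h, w = left.shape
    volume = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        if d == 0:
            volume[d] = np.abs(left - right)
        else:
            # compare left pixel x with right pixel x + d
            volume[d, :, :-d] = np.abs(left[:, :-d] - right[:, d:])
    return volume

left = np.random.rand(64, 64)
right = np.roll(left, 3, axis=1)                     # synthetic 3-px disparity
disparity = ad_cost_volume(left, right, 8).argmin(axis=0)
print(f"median estimated disparity: {np.median(disparity[:, :-8]):.0f}")  # 3
```
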

9.
Small ; 19(10): e2206626, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36642809

ABSTRACT

Direct electrocatalytic reduction of N2 to NH3 under mild conditions is attracting considerable interest but still faces enormous challenges with respect to intrinsic catalytic activity and limited electrocatalytic efficiency. Herein, a photo-enhanced strategy is developed to improve the nitrogen reduction reaction (NRR) activity of Cu single-atom catalysts. Atomically dispersed Cu single atoms supported on TiO2 nanosheets (Cu SAs/TiO2) achieve a faradaic efficiency of 12.88% and an NH3 yield rate of 6.26 µg h⁻¹ mgcat⁻¹ at -0.05 V versus RHE under light irradiation; this yield rate is fivefold higher than that of the purely electrocatalytic NRR process and is remarkably superior to most electrocatalysts of a similar type. The external light field improves the electron transfer between the Cu sites and the TiO2 support, and thus optimizes the accumulation of surface charge on the Cu sites, allowing more electrons to participate in nitrogen fixation. This work reveals an atomic-scale mechanistic understanding of field-effect-enhanced electrochemical performance in catalysts and provides predictive guidelines for the rational design of photo-enhanced electrochemical N2 reduction catalysts.

10.
New Phytol ; 237(6): 1980-1997, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36477856

ABSTRACT

New imaging methodologies with high contrast and molecular specificity allow researchers to analyze dynamic processes in plant cells at multiple scales, from single protein and RNA molecules to organelles and cells, to whole organs and tissues. These techniques produce informative images and quantitative data on molecular dynamics to address questions that cannot be answered by conventional biochemical assays. Here, we review selected microscopy techniques, focusing on their basic principles and applications in plant science, discussing the pros and cons of each technique, and introducing methods for quantitative analysis. This review thus provides guidance for plant scientists in selecting the most appropriate techniques to decipher structures and dynamic processes at different levels, from protein dynamics to morphogenesis.


Subject(s)
Plant Cells , Proteins , Microscopy, Fluorescence/methods , Plants
11.
Sensors (Basel) ; 23(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687936

ABSTRACT

A light field camera can capture light information from various directions within a scene, allowing for the reconstruction of the scene. Light field images inherently contain the depth information of the scene, and depth estimation from light field images has become a popular research topic. This paper proposes an occlusion-aware depth estimation network for light field images. Since light field images contain many views from different viewpoints, identifying the combinations that contribute the most to the depth estimation of the center view is critical to improving depth estimation accuracy. Current methods typically rely on a fixed set of views, such as vertical, horizontal, and diagonal, which may not be optimal for all scenes. To address this limitation, we propose a novel approach that considers all available views during depth estimation while leveraging an attention mechanism to assign weights to each view dynamically. By inputting all views into the network and employing the attention mechanism, we enable the model to adaptively determine the most informative views for each scene, thus achieving more accurate depth estimation. Furthermore, we introduce a multi-scale feature fusion strategy that amalgamates contextual information and expands the receptive field to enhance the network's performance in challenging scenarios, such as textureless and occluded regions.
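
A minimal sketch of the attention idea: features from all views are aggregated with per-view softmax weights rather than a fixed view subset. The per-view features and attention scores below are random placeholders for what the network's convolutional layers would produce.

```python
import numpy as np

n_views, feat_dim = 81, 16                       # a 9 x 9 view grid
view_features = np.random.rand(n_views, feat_dim)

scores = np.random.rand(n_views)                 # stand-in attention logits
weights = np.exp(scores) / np.exp(scores).sum()  # softmax over all views
aggregated = weights @ view_features             # weighted sum of view features
print(aggregated.shape, round(weights.sum(), 6)) # (16,) 1.0
```
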

12.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850722

ABSTRACT

Light field reconstruction and synthesis algorithms are essential for mitigating the low spatial resolution of hand-held plenoptic cameras. Previous light field synthesis algorithms produce blurred regions around depth discontinuities, especially stereo-based algorithms, where no information is available to fill the occluded areas in the light field image. In this paper, we propose a light field synthesis algorithm that uses the focal stack images and the all-in-focus image to synthesize a 9 × 9 sub-aperture-view light field image. Our approach uses depth from defocus to estimate a depth map. Then, we use the depth map and the all-in-focus image to synthesize the sub-aperture views and their corresponding depth maps by mimicking the apparent shifting of the central image according to the depth values. We handle the occluded regions in the synthesized sub-aperture views by filling them with the information recovered from the focal stack images. We also show that, if the depth levels in the image are known, we can synthesize a high-accuracy light field image with just five focal stack images. The accuracy of our approach is compared with three state-of-the-art algorithms, one non-learning and two CNN-based, and the results show that our algorithm outperforms all three in terms of PSNR and SSIM metrics.
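
A minimal sketch of the view-synthesis step: each pixel of the all-in-focus image is forward-warped by a disparity proportional to its depth, mimicking the apparent shift seen from an off-center view. The occlusion filling from the focal stack described in the paper is omitted; holes are simply left at zero.

```python
import numpy as np

def synthesize_view(all_in_focus, disparity, du, dv):
    """Forward-warp the central image to angular offset (du, dv)."""
    h, w = all_in_focus.shape
    view = np.zeros_like(all_in_focus)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.round(ys + dv * disparity).astype(int)    # target rows
    tx = np.round(xs + du * disparity).astype(int)    # target columns
    ok = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)  # stay inside the frame
    view[ty[ok], tx[ok]] = all_in_focus[ys[ok], xs[ok]]
    return view

img = np.random.rand(64, 64)
disp = np.full((64, 64), 2.0)                         # fronto-parallel plane
shifted = synthesize_view(img, disp, du=1, dv=0)      # one view to the right
print(np.allclose(shifted[:, 2:], img[:, :-2]))       # True
```
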

13.
Sensors (Basel) ; 23(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850772

ABSTRACT

We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach sequentially acquires both high-resolution two-dimensional (2D) and light-field images of the specimen. We put forward a matting Laplacian-based depth estimation algorithm that yields nearly realistic 3D surface data, allowing depth values relatively close to the actual surface, together with measurement information, to be computed from the light-field images of specimens. High-reliability area data from the focus measure map and the spatial affinity information of the matting Laplacian are used to estimate nearly realistic depths. This provides a reference for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data with the high-resolution 2D image. The elemental image array is rendered from the 3D model through a simplified direction-reversal calculation method driven by user interaction and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and of the 3D display images.
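
A minimal depth-from-focus sketch related to the focus measure map mentioned above: each pixel takes the index of the focal-stack slice with the strongest local Laplacian response. The matting-Laplacian refinement and reliability weighting of the paper are omitted, and the stack is a random placeholder.

```python
import numpy as np
from scipy.ndimage import laplace

def depth_from_focus(stack):
    """stack: (n_slices, H, W) focal stack -> per-pixel slice index."""
    focus = np.stack([np.abs(laplace(s)) for s in stack])  # focus measure
    return focus.argmax(axis=0)                            # sharpest slice wins

stack = np.random.rand(5, 32, 32)                          # placeholder stack
depth_index = depth_from_focus(stack)
print(depth_index.shape, depth_index.min(), depth_index.max())
```
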

14.
Sensors (Basel) ; 23(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37571595

ABSTRACT

Visual measurement methods are extensively used in fields such as aerospace, biomedicine, agricultural production, and social life, owing to their advantages of high speed, high accuracy, and non-contact operation. However, traditional camera-based measurement systems, relying on the pinhole imaging model, face challenges in achieving three-dimensional measurements with a single camera in a single shot. Moreover, traditional visual systems struggle to meet the requirements of high precision, high efficiency, and compact size simultaneously. With the development of light field theory, the light field camera has garnered significant attention as a novel measurement method. Owing to its special structure, the light field camera enables high-precision three-dimensional measurements with a single camera in a single shot. This paper presents a comprehensive overview of light field camera measurement technologies, including imaging principles, calibration methods, reconstruction algorithms, and measurement applications. Additionally, we explore future research directions and the potential application prospects of the light field camera.

15.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850618

ABSTRACT

Due to the widespread use of light field (LF) imaging in many applications, numerous deep learning algorithms have been proposed to overcome its spatial-angular trade-off: the sensor's limited resolution forces a compromise between angular and spatial sampling. Mitigating this problem requires a method that can fully model the non-local properties of the 4D LF data. Therefore, this paper proposes a different approach to increase the interaction between spatial and angular information for LF image super-resolution (SR). We achieve this by processing the LF sub-aperture images (SAIs) independently to extract the spatial information and the LF macro-pixel image (MPI) to extract the angular information. The MPI, or lenslet LF image, is characterized by its ability to integrate complementary information between different viewpoints (SAIs). In particular, we extract initial features and then process the MPI and SAIs alternately to incorporate angular and spatial information. Finally, the interacted features are added to the initially extracted features to reconstruct the final output. We trained the proposed network to minimize the sum of absolute errors between the reconstructed output and the high-resolution (HR) ground-truth images. Experimental results demonstrate the high performance of our proposed method over state-of-the-art methods on LF SR for small-baseline LF images.
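
A minimal sketch of the two data arrangements the method alternates between: the SAIs group pixels by view, while the MPI groups all views of each spatial sample into one macro-pixel. Converting between them is a pure transpose/reshape; the sizes below are placeholders.

```python
import numpy as np

U, V, H, W = 5, 5, 32, 32                        # angular and spatial sizes
sais = np.random.rand(U, V, H, W)                # (view_u, view_v, y, x)

# SAIs -> MPI: view pixel (u, v) sits inside macro-pixel (y, x)
mpi = sais.transpose(2, 0, 3, 1).reshape(H * U, W * V)

# MPI -> SAIs: invert the same rearrangement
back = mpi.reshape(H, U, W, V).transpose(1, 3, 0, 2)
print(np.array_equal(back, sais))                # True
```
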

16.
Sensors (Basel) ; 23(3)2023 Jan 22.
Article in English | MEDLINE | ID: mdl-36772324

ABSTRACT

The practical usage of V2X communication protocols has begun to emerge in recent years. Data built on sensor information are displayed via onboard units and smart devices. However, perceptually obtaining such data may be counterproductive in terms of visual attention, particularly in the case of safety-related applications. Using the windshield as a display may solve this issue, but switching between 2D information and the 3D reality of traffic may introduce issues of its own. To overcome such difficulties, automotive light field visualization is introduced. In this paper, we investigate the visualization of V2X communication protocols and use cases via projection-based light field technology. Our work is motivated by the abundance of V2X sensor data, the low latency of V2X data transfer, the availability of automotive light field prototypes, the prevailing dominance of non-autonomous and non-remote driving, and the lack of V2X-based light field solutions. As our primary contributions, we provide a comprehensive technological review of light field and V2X communication, a set of recommendations for design and implementation, an extensive discussion and implication analysis, an exploration of utilization based on standardized protocols, and use-case-specific considerations.

17.
Sensors (Basel) ; 23(14)2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37514540

ABSTRACT

We propose a high-quality three-dimensional display system based on a simplified light-field image acquisition method and a custom-trained fully connected deep neural network. The ultimate goal of the proposed system is to acquire and reconstruct light-field images of real-world objects in a general environment at the highest possible quality. The simplified acquisition method captures the three-dimensional information of natural objects in a simple way, with resolution and quality comparable to multicamera-based methods. We trained a fully connected deep neural network model to output the desired viewpoints of the object at the same quality. The custom-trained instant neural graphics primitives model with hash encoding outputs all desired viewpoints within the acquired viewing angle, based on the input perspectives, matched to the pixel density of the display device and the lens array specifications, within a significantly short processing time. Finally, the elemental image array was rendered through pixel rearrangement from the entire set of viewpoints to cover the full field of view and reconstructed as a high-quality three-dimensional visualization on the integral imaging display. The system was implemented successfully, and the displayed visualizations and corresponding evaluation results confirm that the proposed system offers a simple and effective way to acquire light-field images of real objects at high resolution and to present high-quality three-dimensional visualizations on an integral imaging display system.

18.
Entropy (Basel) ; 25(9)2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37761635

ABSTRACT

An abundance of features in the light field has been demonstrated to be useful for saliency detection in complex scenes. However, bottom-up saliency detection models are limited in their ability to explore light field features. In this paper, we propose a light field saliency detection method that focuses on depth-induced saliency, which can more deeply explore the interactions between different cues. First, we localize a rough saliency region based on the compactness of color and depth. Then, the relationships among depth, focus, and salient objects are carefully investigated, and the focus cue of the focal stack is used to highlight the foreground objects. Meanwhile, the depth cue is utilized to refine the coarse salient objects. Furthermore, considering the consistency of color smoothing and depth space, an optimization model referred to as color and depth-induced cellular automata is improved to increase the accuracy of saliency maps. Finally, to avoid interference of redundant information, the mean absolute error is chosen as the indicator of the filter to obtain the best results. The experimental results on three public light field datasets show that the proposed method performs favorably against the state-of-the-art conventional light field saliency detection approaches and even light field saliency detection approaches based on deep learning.

19.
J Exp Biol ; 225(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35166335

ABSTRACT

The skate Leucoraja erinacea has an elaborately shaped pupil, whose characteristics and functions have received little attention. The goal of our study was to investigate the pupil response in relation to natural ambient light intensities. First, we took a recently developed sensory-ecological approach, which gave us a tool for creating a controlled light environment for behavioural work: during a field survey, we collected a series of calibrated natural habitat images from the perspective of the skates' eyes. From these images, we derived a vertical illumination profile using custom-written software for quantification of the environmental light field (ELF). After collecting and analysing these natural light field data, we created an illumination set-up in the laboratory, which closely simulated the natural vertical light gradient that skates experience in the wild and tested the light responsiveness - in particular the extent of dilation - of the skate pupil to controlled changes in this simulated light field. Additionally, we measured pupillary dilation and constriction speeds. Our results confirm that the skate pupil changes from nearly circular under low light to a series of small triangular apertures under bright light. A linear regression analysis showed a trend towards smaller skates having a smaller dynamic range of pupil area (dilation versus constriction ratio around 4-fold), and larger skates showing larger ranges (around 10- to 20-fold). Dilation took longer than constriction (between 30 and 45 min for dilation; less than 20 min for constriction), and there was considerable individual variation in dilation/constriction time. We discuss our findings in terms of the visual ecology of L. erinacea and consider the importance of accurately simulating natural light fields in the laboratory.


Subject(s)
Pupil , Skates, Fish , Animals , Constriction , Light , Photic Stimulation , Pupil/physiology , Skates, Fish/physiology
20.
IEEE Signal Process Mag ; 39(2): 58-72, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35261535

ABSTRACT

Understanding how networks of neurons process information is one of the key challenges in modern neuroscience. A necessary step to achieve this goal is to be able to observe the dynamics of large populations of neurons over a large area of the brain. Light-field microscopy (LFM), a type of scanless microscope, is a particularly attractive candidate for high-speed three-dimensional (3D) imaging. It captures volumetric information in a single snapshot, allowing volumetric imaging at video frame rates. Specific features of imaging neuronal activity using LFM call for the development of novel machine learning approaches that fully exploit priors embedded in physics and optics models. Signal processing theory and wave-optics theory could play a key role in filling this gap and contribute to novel computational methods with enhanced interpretability and generalization by integrating model-driven and data-driven approaches. This paper is devoted to a comprehensive survey of state-of-the-art computational methods for LFM, with a focus on model-based and data-driven approaches.
