1.
Patterns (N Y) ; 4(11): 100843, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38035197

ABSTRACT

This work introduces the EXSCLAIM! toolkit for the automatic extraction, separation, and caption-based natural language annotation of images from scientific literature. EXSCLAIM! is used to show how rule-based natural language processing and image recognition can be leveraged to construct an electron microscopy dataset containing thousands of keyword-annotated nanostructure images. Moreover, it is demonstrated how a combination of statistical topic modeling and semantic word similarity comparisons can be used to increase the number and variety of keyword annotations on top of the standard annotations from EXSCLAIM! With large-scale imaging datasets constructed from scientific literature, users are well positioned to train neural networks for classification and recognition tasks specific to microscopy, tasks often otherwise inhibited by a lack of sufficient annotated training data.
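As a rough illustration of the topic-modeling idea behind the keyword augmentation, the sketch below runs latent Dirichlet allocation over a handful of made-up microscopy captions and surfaces each topic's top words as candidate extra keywords. The captions and topic count are illustrative assumptions, not EXSCLAIM! internals.

```python
# Minimal sketch: surface candidate extra keywords from captions via LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

captions = [  # made-up caption snippets, for illustration only
    "TEM image of gold nanoparticles on carbon support",
    "SEM micrograph of silver nanowires",
    "HRTEM of core-shell nanoparticle lattice fringes",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(captions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# For each topic, list the top words as candidate additional keywords.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```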

2.
Front Neurosci ; 17: 1127537, 2023.
Article in English | MEDLINE | ID: mdl-37152590

ABSTRACT

Tactile sensing is essential for a variety of daily tasks. Inspired by the event-driven nature and sparse spiking communication of biological systems, recent advances in event-driven tactile sensors and Spiking Neural Networks (SNNs) have spurred research in related fields. However, SNN-enabled event-driven tactile learning is still in its infancy due to the limited representation abilities of existing spiking neurons and the high spatio-temporal complexity of event-driven tactile data. In this paper, to improve the representation capability of existing spiking neurons, we propose a novel neuron model called the "location spiking neuron," which enables us to extract features of event-based data in a novel way. Specifically, based on the classical Time Spike Response Model (TSRM), we develop the Location Spike Response Model (LSRM). In addition, based on the most commonly used Time Leaky Integrate-and-Fire (TLIF) model, we develop the Location Leaky Integrate-and-Fire (LLIF) model. Moreover, to demonstrate the representation effectiveness of our proposed neurons and capture the complex spatio-temporal dependencies in event-driven tactile data, we exploit the location spiking neurons to propose two hybrid models for event-driven tactile learning. The first hybrid model combines a fully-connected SNN with TSRM neurons and a fully-connected SNN with LSRM neurons. The second hybrid model fuses a spatial spiking graph neural network with TLIF neurons and a temporal spiking graph neural network with LLIF neurons. Extensive experiments demonstrate the significant improvements of our models over state-of-the-art methods on event-driven tactile learning, including event-driven tactile object recognition and event-driven slip detection. Moreover, compared to counterpart artificial neural networks (ANNs), our SNN models are 10× to 100× more energy-efficient, which shows the superior energy efficiency of our models and may bring new opportunities to the spike-based learning community and neuromorphic engineering. Finally, we thoroughly examine the advantages and limitations of various spiking neurons and discuss the broad applicability and potential impact of this work on other spike-based learning applications.
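For readers unfamiliar with the baseline the paper builds on, here is a minimal sketch of standard time-domain leaky integrate-and-fire (TLIF) dynamics; the location spiking variants (LSRM, LLIF) replace the time axis with a location axis, which this sketch does not attempt. All parameter values are illustrative.

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Standard leaky integrate-and-fire dynamics (the TLIF baseline above).

    dv/dt = (-v + I) / tau; emit a spike and reset when v crosses v_th.
    """
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
train = lif_spikes(rng.uniform(0.0, 2.5, size=1000))
print("firing rate:", train.mean() / 1e-3, "Hz")
```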

3.
Front Neurosci ; 17: 1127574, 2023.
Article in English | MEDLINE | ID: mdl-37139528

ABSTRACT

One of the holy grails of neuroscience is to record the activity of every neuron in the brain while an animal moves freely and performs complex behavioral tasks. While important steps forward have been taken recently in large-scale neural recording in rodent models, single-neuron resolution across the entire mammalian brain remains elusive. In contrast, the larval zebrafish offers great promise in this regard. Zebrafish are a vertebrate model with substantial homology to the mammalian brain, but their transparency allows whole-brain recordings of genetically encoded fluorescent indicators at single-neuron resolution using optical microscopy techniques. Furthermore, zebrafish begin to show a complex repertoire of natural behavior from an early age, including hunting small, fast-moving prey using visual cues. Until recently, work to address the neural bases of these behaviors mostly relied on assays where the fish was immobilized under the microscope objective, and stimuli such as prey were presented virtually. However, significant progress has recently been made in developing brain imaging techniques for zebrafish which are not immobilized. Here we discuss recent advances, focusing particularly on techniques based on light-field microscopy. We also draw attention to several important outstanding issues which remain to be addressed to increase the ecological validity of the results obtained.

4.
Article in English | MEDLINE | ID: mdl-36315535

ABSTRACT

We present a novel adaptive multimodal intensity-event algorithm to optimize an overall objective of object tracking under bit rate constraints for a host-chip architecture. The chip is a computationally resource-constrained device acquiring high-resolution intensity frames and events, while the host is capable of performing computationally expensive tasks. We develop a joint intensity-neuromorphic event rate-distortion compression framework with a quadtree (QT)-based scheme for compressing the intensity and events. The goal of this compression framework is to optimally allocate bits to the intensity frames and neuromorphic events based on the minimum distortion at a given communication channel capacity. The data acquisition on the chip is driven by the presence of objects of interest in the scene, as detected by an object detector. The most informative intensity and event data are communicated to the host under rate constraints so that the best possible tracking performance is obtained. The detection and tracking of objects in the scene are done on the distorted data at the host. Intensity and events are jointly used in a fusion framework to enhance the quality of the distorted images in order to improve the object detection and tracking performance. The performance assessment of the overall system is done in terms of the multiple object tracking accuracy (MOTA) score. Compared with using the intensity modality alone, using both modalities improves the MOTA in different scenarios.
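The core allocation step can be sketched abstractly: given per-stream rate-distortion operating points, pick the pair that minimizes total distortion within the channel budget. The (rate, distortion) tables below are made-up numbers for illustration, not values from the paper.

```python
# Sketch of rate allocation between two streams under a total bit budget.
from itertools import product

intensity_rd = [(2000, 9.0), (5000, 4.0), (9000, 1.5)]  # (bits, distortion)
event_rd     = [(500, 6.0), (1500, 2.5), (3000, 1.0)]
budget_bits  = 8000

# Exhaustively pick the feasible pair with minimum summed distortion.
best = min(
    ((ri + re, di + de)
     for (ri, di), (re, de) in product(intensity_rd, event_rd)
     if ri + re <= budget_bits),
    key=lambda rd: rd[1],
)
print("chosen total rate / distortion:", best)
```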

5.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 8261-8275, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34543190

ABSTRACT

Many visual and robotics tasks in real-world scenarios rely on robust handling of high-speed motion and high dynamic range (HDR) with effectively high spatial resolution and low noise. Such stringent requirements, however, cannot be directly satisfied by a single imager or imaging modality, but rather by multi-modal sensors with complementary advantages. In this paper, we address high-performance imaging by exploring the synergy between traditional frame-based sensors, with high spatial resolution and low sensor noise, and emerging event-based sensors, with high speed and high dynamic range. We introduce a novel computational framework, termed Guided Event Filtering (GEF), to process these two streams of input data and output a stream of super-resolved yet noise-reduced events. To generate high-quality events, GEF first registers the captured noisy events onto the guidance image plane according to our flow model. It then performs joint image filtering that inherits the mutual structure from both inputs. Lastly, GEF re-distributes the filtered event frame in the space-time volume while preserving the statistical characteristics of the original events. When the guidance images under-perform, GEF incorporates an event self-guiding mechanism that resorts to neighbor events for guidance. We demonstrate the benefits of GEF by applying the output high-quality events to existing event-based algorithms across diverse application categories, including high-speed object tracking, depth estimation, high frame-rate video synthesis, and super resolution/HDR/color image restoration.
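The joint-filtering stage can be approximated by a standard He-style guided filter, with the intensity frame as guidance and the registered event frame as input. This is a stand-in sketch for one step; the full GEF pipeline also includes flow-based registration and event re-distribution, which are not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """He-style guided filter: smooth `src` while inheriting `guide` edges."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g      # local variance of the guide
    cov_gs = corr_gs - mean_g * mean_s     # local guide/source covariance
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

guide = np.random.rand(64, 64)                   # stand-in intensity frame
events = guide + 0.3 * np.random.randn(64, 64)   # noisy "event frame"
print(guided_filter(guide, events).shape)
```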

6.
Nat Commun ; 12(1): 6647, 2021 11 17.
Article in English | MEDLINE | ID: mdl-34789724

ABSTRACT

The presence of a scattering medium in the imaging path between an object and an observer is known to severely limit the visual acuity of the imaging system. We present an approach to circumvent the deleterious effects of scattering by exploiting spectral correlations in scattered wavefronts. Our Synthetic Wavelength Holography (SWH) method is able to recover a holographic representation of hidden targets with sub-mm resolution over a nearly hemispheric angular field of view. The complete object field is recorded within 46 ms by monitoring the scattered light return in a probe area smaller than 6 cm × 6 cm. This unique combination of attributes opens up a plethora of new Non-Line-of-Sight imaging applications ranging from medical imaging and forensics to early-warning navigation systems and reconnaissance. Adapting the findings of this work to other wave phenomena will help unlock a wider gamut of applications beyond those envisioned in this paper.
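The synthetic-wavelength idea reduces to a short calculation: two nearby wavelengths define a much longer synthetic wavelength, and the phase of one field times the conjugate of the other encodes depth on that coarser scale. The wavelength and path values below are illustrative, not the paper's.

```python
import numpy as np

# Two closely spaced optical wavelengths (illustrative values, in metres).
lam1, lam2 = 854.985e-9, 855.015e-9
lam_syn = lam1 * lam2 / abs(lam1 - lam2)   # synthetic wavelength, ~24 mm
print(f"synthetic wavelength: {lam_syn*1e3:.1f} mm")

# Given complex fields measured at each wavelength, the synthetic-wavelength
# phase is the phase of one field times the conjugate of the other.
u1 = np.exp(1j * 4 * np.pi * 0.002 / lam1)   # toy return from a 2 mm path
u2 = np.exp(1j * 4 * np.pi * 0.002 / lam2)
phi_syn = np.angle(u1 * np.conj(u2))
depth = phi_syn * lam_syn / (4 * np.pi)
print(f"recovered path offset: {depth*1e3:.3f} mm")   # ≈ 2.000 mm
```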

7.
IEEE Trans Pattern Anal Mach Intell ; 43(7): 2193-2205, 2021 07.
Article in English | MEDLINE | ID: mdl-33886466

ABSTRACT

The poor lateral and depth resolution of state-of-the-art 3D sensors based on the time-of-flight (ToF) principle has limited widespread adoption to a few niche applications. In this work, we introduce a novel sensor concept that provides ToF-based 3D measurements of real world objects and surfaces with depth precision up to 35 µm and point cloud densities commensurate with the native sensor resolution of standard CMOS/CCD detectors (up to several megapixels). Such capabilities are realized by combining the best attributes of continuous wave ToF sensing, multi-wavelength interferometry, and heterodyne interferometry into a single approach. We describe multiple embodiments of the approach, each featuring a different sensing modality and associated tradeoffs.
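Of the ingredients combined here, heterodyne detection is the easiest to sketch: mixing the return with a local oscillator offset by a small frequency moves the optical phase onto a low-frequency beat that an ordinary detector can sample. A toy lock-in style read-out follows, with all values assumed for illustration.

```python
import numpy as np

fs, df, phase = 1e6, 1e4, 1.2   # sample rate, beat frequency, target phase
t = np.arange(4000) / fs        # 40 full beat periods
beat = np.cos(2 * np.pi * df * t + phase)   # detected heterodyne beat

# Lock-in style phase read-out at the known beat frequency.
ref = np.exp(-2j * np.pi * df * t)
recovered = np.angle(2 * np.mean(beat * ref))
print(f"recovered phase: {recovered:.3f} rad")   # ≈ 1.2
```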


Subjects
Algorithms, Three-Dimensional Imaging
8.
Opt Express ; 29(4): 4733-4745, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33726023

ABSTRACT

The development of single-photon counting detectors and arrays has made tremendous steps in recent years, not least because of various new applications, e.g., LIDAR devices. In this work, a 3D imaging device based on real thermal light intensity interferometry is presented. By using gated SPAD technology, a basic 3D scene is imaged in a reasonable measurement time. Compared to conventional approaches, the proposed synchronized photon counting allows the use of more light modes to enhance 3D ranging performance. Advantages like robustness to atmospheric scattering or autonomy by exploiting external light sources can make this ranging approach interesting for future applications.
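Intensity interferometry ranges via the second-order correlation g²(τ) between two photon-count streams; the lag of the bunching peak encodes the path difference. A toy sketch with a deterministic pseudo-thermal intensity and Poisson photon counts (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 100_000
# Deterministic pseudo-thermal intensity, period 16 bins (toy stand-in).
intensity = 1.0 + 0.9 * np.sin(2 * np.pi * np.arange(n_bins) / 16.0)
counts_a = rng.poisson(0.5 * intensity)
counts_b = rng.poisson(0.5 * np.roll(intensity, 5))   # 5-bin path delay

def g2(a, b, max_lag=12):
    """Normalized second-order cross-correlation over non-negative lags."""
    lags = np.arange(max_lag + 1)
    corr = np.array([np.mean(a * np.roll(b, -lag)) for lag in lags])
    return lags, corr / (a.mean() * b.mean())

lags, corr = g2(counts_a, counts_b)
print("bunching peak at lag:", lags[np.argmax(corr)])   # ≈ 5
```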

9.
Sci Rep ; 11(1): 2263, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33500435

ABSTRACT

Optical coherence tomography (OCT) is an optical technique which allows for volumetric visualization of the internal structures of translucent materials. Additional information can be gained by measuring the rate of signal attenuation in depth. Techniques have been developed to estimate the rate of attenuation on a voxel-by-voxel basis. This depth-resolved attenuation analysis gives insight into tissue structure and organization in a spatially resolved way. However, the presence of speckle in the OCT measurement causes the attenuation coefficient image to contain unrealistic fluctuations and makes these images unreliable at the voxel level. While the distribution of speckle in OCT images has been described in the literature, the resulting voxelwise corruption of the attenuation analysis has not. In this work, the estimated depth-resolved attenuation coefficient from OCT data with speckle is shown to be approximately exponentially distributed. After this, a prior distribution for the depth-resolved attenuation coefficient is derived for a simple system using statistical mechanics. Finally, given a set of depth-resolved estimates which were made from OCT data in the presence of speckle, a posterior probability distribution for the true voxelwise attenuation coefficient is derived and a Bayesian voxelwise estimator for the coefficient is given. These results are demonstrated in simulation and validated experimentally.
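A common form of the depth-resolved estimator being analyzed here is a ratio of each voxel's signal to the attenuated tail below it; with multiplicative speckle, the per-voxel estimates fluctuate roughly exponentially, matching the abstract's observation. A sketch under those assumptions (the estimator form and toy A-scan are illustrative, not taken from the paper):

```python
import numpy as np

def depth_resolved_attenuation(a_scan, dz):
    # mu[i] ≈ I[i] / (2 * dz * sum_{j > i} I[j]); assumes the signal is
    # (nearly) fully attenuated within the imaging depth range.
    tail = np.cumsum(a_scan[::-1])[::-1] - a_scan
    return a_scan / (2.0 * dz * np.maximum(tail, 1e-12))

rng = np.random.default_rng(1)
dz = 5e-6                                # 5 µm voxels
z = np.arange(512) * dz
clean = np.exp(-2 * 2000.0 * z)          # true mu = 2000 1/m
speckled = clean * rng.exponential(1.0, size=z.size)   # exponential speckle
mu_hat = depth_resolved_attenuation(speckled, dz)
print("mean estimate (1/m):", np.mean(mu_hat[:300]))   # ≈ 2000; voxels fluctuate
```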

10.
J Synchrotron Radiat ; 28(Pt 1): 309-317, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33399582

ABSTRACT

Ptychography is a rapidly developing scanning microscopy technique which is able to resolve the internal structure of samples at a high resolution beyond the illumination size. The achieved spatial resolution is theoretically dose-limited. A broadband source can provide much higher flux than a monochromatic source; however, this conflicts with the coherence requirements of this coherent diffraction imaging technique. In this paper, a multi-wavelength reconstruction algorithm has been developed to deal with broad bandwidth in ptychography. Compared with the recently developed mixed-state reconstruction approach, this multi-wavelength approach uses a more accurate physical model, and also accounts for the variation of spot size with energy due to the chromatic focusing optics. This method is shown, in both simulation and experiment, to significantly improve the reconstruction as the source bandwidth, illumination size, and scan step size increase. Notably, accurate and detailed knowledge of the energy spectrum of the incident beam is not required in advance for the proposed method. Further, we combine the multi-wavelength and mixed-state approaches to jointly address temporal and spatial partial coherence in ptychography, so that various disadvantageous experimental effects can be handled. The significant relaxation of coherence requirements by our approaches allows the use of high-flux broadband X-ray sources for efficient, high-resolution ptychographic imaging.
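The broadband forward model is the crux: the detector records an incoherent sum of per-wavelength diffraction intensities, each with its own probe and object response. A minimal sketch of that measurement model follows, using random stand-in arrays rather than a full reconstruction.

```python
import numpy as np

# Broadband ptychography forward model: the measured pattern is an
# incoherent sum over spectral components (stand-in probes/objects).
rng = np.random.default_rng(0)
n, n_wavelengths = 128, 3
probes = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          for _ in range(n_wavelengths)]
objects = [np.exp(1j * rng.uniform(0, 0.5, size=(n, n)))
           for _ in range(n_wavelengths)]

intensity = sum(np.abs(np.fft.fft2(p * o)) ** 2
                for p, o in zip(probes, objects))
print(intensity.shape)   # measured broadband diffraction pattern
```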

11.
NMR Biomed ; 34(1): e4405, 2021 01.
Article in English | MEDLINE | ID: mdl-32875668

ABSTRACT

Highly accelerated real-time cine MRI using compressed sensing (CS) is a promising approach to achieve high spatio-temporal resolution and clinically acceptable image quality in patients with arrhythmia and/or dyspnea. However, its lengthy image reconstruction time may hinder its clinical translation. The purpose of this study was to develop a neural network for reconstruction of non-Cartesian real-time cine MRI k-space data faster (<1 min per slice with 80 frames) than graphics processing unit (GPU)-accelerated CS reconstruction, without significant loss in image quality or accuracy in left ventricular (LV) functional parameters. We introduce a perceptual complex neural network (PCNN) that trains on complex-valued MRI signals and incorporates a perceptual loss term to suppress incoherent image details. This PCNN was trained and tested with multi-slice, multi-phase, cine images from 40 patients (20 for training, 20 for testing), where the zero-filled images were used as input and the corresponding CS reconstructed images were used as practical ground truth. The resulting images were compared using quantitative metrics (structural similarity index (SSIM) and normalized root mean square error (NRMSE)) and visual scores (conspicuity, temporal fidelity, artifacts, and noise scores), individually graded on a five-point scale (1, worst; 3, acceptable; 5, best), and LV ejection fraction (LVEF). The mean processing time per slice with 80 frames for PCNN was 23.7 ± 1.9 s for pre-processing (Step 1, same as CS) and 0.822 ± 0.004 s for dealiasing (Step 2, 166 times faster than CS). Our PCNN produced higher data fidelity metrics (SSIM = 0.88 ± 0.02, NRMSE = 0.014 ± 0.004) compared with CS. While all the visual scores were significantly different (P < 0.05), the median scores were all 4.0 or higher for both CS and PCNN. LVEFs measured from CS and PCNN were strongly correlated (R2 = 0.92) and in good agreement (mean difference = -1.4% [2.3% of mean]; limit of agreement = 10.6% [17.6% of mean]). The proposed PCNN is capable of rapid reconstruction (25 s per slice with 80 frames) of non-Cartesian real-time cine MRI k-space data, without significant loss in image quality or accuracy in LV functional parameters.
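A minimal sketch of the kind of objective described here: complex images carried as two channels (real, imaginary), with a pixelwise term plus a perceptual term computed by a fixed feature network. `FeatureNet` and the loss weighting are stand-ins for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Stand-in for a pretrained, frozen feature extractor."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 16, 3, padding=1)
    def forward(self, x):
        return torch.relu(self.conv(x))

feature_net = FeatureNet().eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def pcnn_style_loss(pred, target, w_perc=0.1):
    pixel = torch.mean(torch.abs(pred - target))   # pixelwise L1 term
    perc = torch.mean((feature_net(pred) - feature_net(target)) ** 2)
    return pixel + w_perc * perc                   # assumed weighting

pred = torch.randn(1, 2, 64, 64, requires_grad=True)   # (real, imag) channels
target = torch.randn(1, 2, 64, 64)
print(pcnn_style_loss(pred, target).item())
```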


Subjects
Algorithms, Computer-Assisted Image Processing, Cine Magnetic Resonance Imaging, Neural Networks (Computer), Aged, Data Compression, Female, Humans, Male
12.
Radiol Cardiothorac Imaging ; 2(3): e190205, 2020 Jun 25.
Article in English | MEDLINE | ID: mdl-32656535

ABSTRACT

PURPOSE: To implement an integrated reconstruction pipeline including a graphics processing unit (GPU)-based convolutional neural network (CNN) architecture and test whether it reconstructs four-dimensional non-Cartesian, non-contrast material-enhanced MR angiographic k-space data faster than a central processing unit (CPU)-based compressed sensing (CS) reconstruction pipeline, without significant losses in data fidelity, summed visual score (SVS), or arterial vessel-diameter measurements. MATERIALS AND METHODS: Raw k-space data of 24 patients (18 men and six women; mean age, 56.8 years ± 11.8 [standard deviation]) suspected of having thoracic aortic disease were used to evaluate the proposed reconstruction pipeline derived from an open-source three-dimensional CNN. For training, 4800 zero-filled images and the corresponding CS-reconstructed images from 10 patients were used as input-output pairs. For testing, 6720 zero-filled images from 14 different patients were used as inputs to a trained CNN. Metrics for evaluating the agreement between the CNN and CS images included reconstruction times, structural similarity index (SSIM) and normalized root-mean-square error (NRMSE), SVS (3 = nondiagnostic, 9 = clinically acceptable, 15 = excellent), and vessel diameters. RESULTS: The mean reconstruction time was 65 times and 69 times shorter for the CPU-based and GPU-based CNN pipelines (216.6 seconds ± 40.5 and 204.9 seconds ± 40.5), respectively, than for CS (14 152.3 seconds ± 1708.6) (P < .001). Compared with CS as practical ground truth, CNNs produced high data fidelity (SSIM = 0.94 ± 0.02, NRMSE = 2.8% ± 0.4) and not significantly different (P = .25) SVS and aortic diameters, except at one out of seven locations, where the percentage difference was only 3% (i.e., clinically irrelevant). CONCLUSION: The proposed integrated reconstruction pipeline including a CNN architecture is capable of rapidly reconstructing time-resolved volumetric cardiovascular MRI k-space data, without a significant loss in data quality, thereby supporting clinical translation of these non-contrast-enhanced MR angiograms. Supplemental material is available for this article. © RSNA, 2020.

13.
Opt Express ; 28(12): 17395-17408, 2020 Jun 08.
Article in English | MEDLINE | ID: mdl-32679948

ABSTRACT

Imaging through scattering media is challenging since the signal-to-noise ratio (SNR) of the reflection can be heavily reduced by scatterers. Single-pixel detectors (SPDs) with high sensitivities offer compelling advantages for sensing such weak signals. In this paper, we focus on the use of ghost imaging to resolve 2D spatial information using just an SPD. We prototype a polarimetric ghost imaging system that suppresses backscattering from volumetric media and leverages deep learning for fast reconstructions. In this work, we implement ghost imaging by projecting Hadamard patterns that are optimized for imaging through scattering media. We demonstrate good-quality reconstructions in highly scattering conditions using a 1.6% sampling rate.
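The underlying ghost-imaging measurement model is easy to sketch: each Hadamard pattern yields one single-pixel inner product, and a complete Hadamard basis inverts exactly. The paper instead samples at 1.6% and reconstructs with deep learning; the full-basis version below is for clarity only, with a toy scene.

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                        # scene is n x n
H = hadamard(n * n)           # rows are +/-1 illumination patterns
scene = np.zeros((n, n))
scene[8:24, 12:20] = 1.0      # toy object
x = scene.ravel()

y = H @ x                     # single-pixel measurements, one per pattern
x_rec = (H.T @ y) / (n * n)   # Hadamard matrices satisfy H^T H = N * I
print(np.allclose(x_rec.reshape(n, n), scene))   # exact recovery
```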

14.
Opt Express ; 28(12): 18131-18134, 2020 Jun 08.
Article in English | MEDLINE | ID: mdl-32680013

ABSTRACT

This Feature Issue includes 19 articles that highlight advances in the field of Computational Optical Sensing and Imaging. Many of the articles were presented at the 2019 OSA Topical Meeting on Computational Optical Sensing and Imaging held in Munich, Germany, on June 24-27. Articles featured in the issue cover a broad array of topics, ranging from imaging through scattering media, imaging around corners, and compressive imaging to machine learning for image recovery.

15.
Opt Express ; 28(8): 12108-12120, 2020 Apr 13.
Article in English | MEDLINE | ID: mdl-32403711

ABSTRACT

Light field microscopy (LFM) is an emerging technology for high-speed wide-field 3D imaging that captures the 4D light field of 3D volumes. However, its 3D imaging capability comes at the cost of lateral resolution. In addition, the lateral resolution is not uniform across depth in light field deconvolution reconstructions. To address these problems, we propose a snapshot multifocal light field microscopy (MFLFM) imaging method. The underlying concept of the MFLFM is to collect multiple focally shifted light fields simultaneously. We show that focal stacking of those focally shifted light fields improves the depth-of-field (DOF) of the LFM without sacrificing lateral resolution. Furthermore, if all differently focused light fields are utilized together in the deconvolution, the MFLFM can achieve a high and uniform lateral resolution within a larger DOF. We present a home-built MFLFM system, created by placing a diffractive optical element at the Fourier plane of a conventional LFM, and analyze its optical performance. Both simulations and proof-of-principle experimental results demonstrate the effectiveness and benefits of the MFLFM. We believe that the proposed snapshot MFLFM has the potential to enable high-speed, high-resolution 3D imaging applications.
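The focal-stacking step admits a simple sketch: fuse differently focused frames by selecting, per pixel, the frame with the highest local sharpness. This is a generic focus-metric fusion assumed for illustration, not the paper's deconvolution-based pipeline.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focal_stack(frames, win=7):
    """Fuse frames by per-pixel maximum local Laplacian energy (sharpness)."""
    sharp = np.stack([uniform_filter(laplace(f) ** 2, win) for f in frames])
    best = np.argmax(sharp, axis=0)   # index of sharpest frame per pixel
    return np.take_along_axis(np.stack(frames), best[None], axis=0)[0]

frames = [np.random.rand(64, 64) for _ in range(3)]  # stand-in focal slices
print(focal_stack(frames).shape)
```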

16.
Opt Express ; 28(7): 9027-9038, 2020 Mar 30.
Article in English | MEDLINE | ID: mdl-32225516

ABSTRACT

We introduce a system that exploits the screen and front-facing camera of a mobile device to perform three-dimensional deflectometry-based surface measurements. In contrast to current mobile deflectometry systems, our method can capture surfaces with large normal variation and a wide field of view (FoV). We achieve this by applying automated multi-view panoramic stitching algorithms to produce a large-FoV normal map from a hand-guided capture process, without the need for external tracking systems like robot arms or fiducials. The presented work enables 3D surface measurements of specular objects 'in the wild' with a system accessible to users with little to no technical imaging experience. We demonstrate high-quality 3D surface measurements without the need for a calibration procedure. We provide experimental results with our prototype deflectometry system and discuss applications for computer vision tasks such as object detection and recognition.

17.
Opt Express ; 26(21): 27381-27402, 2018 Oct 15.
Article in English | MEDLINE | ID: mdl-30469808

ABSTRACT

Realizing both high temporal and spatial resolution across a large volume is a key challenge for 3D fluorescence imaging. Towards achieving this objective, we introduce an interferometric multifocus microscopy (iMFM) system, a combination of multifocus microscopy (MFM) with two opposing objective lenses. We show that the proposed iMFM is capable of simultaneously producing multiple focal-plane interferograms, providing axial super-resolution and hence isotropic 3D resolution in a single exposure. We design and simulate the iMFM microscope by employing two special diffractive optical elements. The point spread function of this new iMFM microscope is simulated and the image formation model is given. For reconstruction, we use the Richardson-Lucy deconvolution algorithm with total variation regularization for 3D extended object recovery, and a maximum likelihood estimator (MLE) for single molecule tracking. A method for determining the initial axial position of the molecule is also proposed to improve the convergence of the MLE. We demonstrate both theoretically and numerically that isotropic 3D nanoscopic localization accuracy is achievable with an axial imaging range of 2 µm when tracking a fluorescent molecule in three dimensions, and that the diffraction-limited axial resolution can be improved 3-4 times in single-shot wide-field 3D extended object recovery. We believe that iMFM will be a useful tool for 3D dynamic event imaging that requires both high temporal and spatial resolution.
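For reference, the unregularized Richardson-Lucy update used as the starting point here is a pair of convolutions and a multiplicative correction; the paper's variant adds total-variation regularization to each update, which this sketch omits. The PSF and scene below are toy stand-ins.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    # Plain RL iterations; the TV-regularized variant modifies each
    # multiplicative update with a total-variation term (omitted here).
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        estimate *= fftconvolve(image / (blurred + eps), psf_flip, mode="same")
    return estimate

h = np.hanning(9)
psf = np.outer(h, h); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[30:34, 30:34] = 1.0
blurry = fftconvolve(truth, psf, mode="same")
print(richardson_lucy(blurry, psf).max())   # sharpened peak vs. blurry.max()
```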

18.
Angew Chem Int Ed Engl ; 57(34): 10910-10914, 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-29940088

ABSTRACT

Nonlinear unmixing of hyperspectral reflectance data is one of the key problems in quantitative imaging of painted works of art. The approach presented here interrogates a hyperspectral image cube by first decomposing it into a set of reflectance curves representing pure basis pigments, and second estimating the scattering and absorption coefficients of each pigment in a given pixel to produce estimates of the component fractions. This two-step algorithm uses a deep neural network to qualitatively identify the constituent pigments in any unknown spectrum and, based on the pigment(s) present and Kubelka-Munk theory, estimates the pigment concentration on a per-pixel basis. Using hyperspectral data acquired on a set of mock-up paintings and a well-characterized illuminated folio from the 15th century, the performance of the proposed algorithm is demonstrated for pigment recognition and quantitative estimation of concentration.
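The quantitative step rests on Kubelka-Munk theory: K/S = (1 − R)²/(2R) is approximately linear in pigment concentration, so per-pixel fractions follow from a least-squares fit in K/S space. A sketch with two made-up pigment spectra (not measured data):

```python
import numpy as np

def ks(reflectance):
    """Kubelka-Munk K/S from diffuse reflectance: (1 - R)^2 / (2R)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

# Toy endmember reflectance spectra for two hypothetical pigments.
wl = np.linspace(400, 700, 31)
r_a = 0.2 + 0.6 * np.exp(-((wl - 620) / 60.0) ** 2)   # reddish pigment
r_b = 0.2 + 0.6 * np.exp(-((wl - 460) / 60.0) ** 2)   # bluish pigment

# K-M mixing is linear in K/S: (K/S)_mix = sum_i c_i * (K/S)_i, so the
# per-pixel concentration estimate is a least-squares fit in K/S space.
c_true = np.array([0.7, 0.3])
ks_mix = c_true[0] * ks(r_a) + c_true[1] * ks(r_b)

A = np.stack([ks(r_a), ks(r_b)], axis=1)
c_hat, *_ = np.linalg.lstsq(A, ks_mix, rcond=None)
print("estimated concentrations:", c_hat)   # ≈ [0.7, 0.3]
```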

19.
Biomed Opt Express ; 9(12): 6477-6496, 2018 Dec 01.
Article in English | MEDLINE | ID: mdl-31065444

ABSTRACT

Despite recent advances, high-performance single-shot 3D microscopy remains an elusive task. By introducing designed diffractive optical elements (DOEs), one is capable of converting a microscope into a 3D "kaleidoscope," in which the snapshot image consists of an array of tiles, each focused on a different depth. However, the acquired multifocal microscopic (MFM) image suffers from multiple sources of degradation, which has prevented wider application of MFM. We propose a unifying computational framework which simplifies the imaging system and achieves 3D reconstruction via computation. Our optical configuration omits optical elements for correcting chromatic aberrations and redesigns the multifocal grating to enlarge the tracking area. Our proposed setup features only one single grating in addition to a regular microscope. The aberration correction, along with Poisson and background denoising, is incorporated in our deconvolution-based, fully automated algorithm, which requires no empirical parameter tuning. In experiments, we achieve spatial resolutions of 0.35 µm (lateral) and 0.5 µm (axial), comparable to the resolution achievable with confocal deconvolution microscopy. We demonstrate a 3D video of moving bacteria recorded at 25 frames per second using our proposed computational multifocal microscopy technique.

20.
Opt Express ; 25(25): 31096-31110, 2017 Dec 11.
Article in English | MEDLINE | ID: mdl-29245787

ABSTRACT

Three-dimensional imaging using time-of-flight (ToF) sensors is rapidly gaining widespread adoption in many applications due to their cost-effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose CS-ToF, an imaging architecture to achieve high spatial resolution ToF imaging via optical multiplexing and compressive sensing. Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of captured images results in a linear image formation model. We utilize this property to develop a CS-based technique that is used to recover high-resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves as much as 4× improvement in spatial resolution and 3× improvement for natural scenes. We believe that our proposed CS-ToF architecture provides a simple and low-cost solution to improve the spatial resolution of ToF and related sensors.
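The linearity observation can be made concrete: a ToF pixel's measurement corresponds to a complex phasor z = A·e^(iφ), which optical multiplexing mixes linearly, while depth is decoded afterwards from the phase. The modulation frequency and depth below are illustrative values.

```python
import numpy as np

c = 3e8
f_mod = 20e6                  # illustrative modulation frequency
depth_true = 3.75             # metres, within the unambiguous range c/(2f)
phi = 4 * np.pi * f_mod * depth_true / c

z = 1.0 * np.exp(1j * phi)    # phasor a ToF pixel effectively measures
# Optical multiplexing mixes phasors linearly, so CS recovery can operate
# on z; depth is decoded afterwards from the phase:
depth = np.angle(z) * c / (4 * np.pi * f_mod)
print(f"decoded depth: {depth:.2f} m")   # 3.75 m
```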
