Results 1 - 15 of 15
1.
ACS Nano ; 18(18): 11717-11731, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38651873

ABSTRACT

Evaluating the heterogeneity of extracellular vesicles (EVs) is crucial for unraveling their complex actions and biodistribution. Here, we identify consistent architectural heterogeneity of EVs using cryogenic transmission electron microscopy (cryo-TEM), which has an inherent ability to image biological samples without harsh labeling methods while preserving their native conformation. Imaging EVs isolated using different methodologies from distinct sources, such as cancer cells, normal cells, immortalized cells, and body fluids, we identify a structural atlas of their dominantly consistent shapes. We identify EV architectural attributes by utilizing a segmentation neural network model. In total, 7,576 individual EVs were imaged and quantified by our computational pipeline. Across all 7,576 independent EVs, the average eccentricity was 0.5366 ± 0.2, and the average equivalent diameter was 132.43 ± 67 nm. The architectural heterogeneity was consistent across all sources of EVs, independent of purification techniques, and comprised single spherical, rod-like or tubular, and double shapes. This study will serve as a reference foundation for high-resolution images of EVs and offer insights into their potential biological impact.


Subject(s)
Cryoelectron Microscopy , Extracellular Vesicles , Extracellular Vesicles/chemistry , Extracellular Vesicles/metabolism , Humans , Neural Networks, Computer , Microscopy, Electron, Transmission , Image Processing, Computer-Assisted/methods
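The two descriptors reported above, eccentricity and equivalent diameter, can be computed from a segmented EV mask via central second moments. A minimal pure-Python sketch (the paper's pipeline uses a segmentation neural network on cryo-TEM images; the `shape_descriptors` helper and the synthetic disk below are illustrative assumptions, not the authors' code):

```python
import math

def shape_descriptors(pixels):
    """Eccentricity and equivalent diameter of a binary region,
    computed from normalized central second moments."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    myy = sum((p[0] - cy) ** 2 for p in pixels) / n
    mxx = sum((p[1] - cx) ** 2 for p in pixels) / n
    mxy = sum((p[0] - cy) * (p[1] - cx) for p in pixels) / n
    # eigenvalues of the covariance matrix give the fitted ellipse axes
    common = math.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    l1 = (mxx + myy) / 2 + common   # major-axis variance
    l2 = (mxx + myy) / 2 - common   # minor-axis variance
    ecc = math.sqrt(max(0.0, 1 - l2 / l1)) if l1 > 0 else 0.0
    # diameter of the circle with the same pixel area
    eq_diam = 2 * math.sqrt(n / math.pi)
    return ecc, eq_diam

# a filled disk of radius 10: eccentricity ~0, equivalent diameter ~20 px
disk = [(y, x) for y in range(-10, 11) for x in range(-10, 11)
        if x * x + y * y <= 100]
ecc, d = shape_descriptors(disk)
```

An elongated rod-like mask run through the same function would give an eccentricity approaching 1, matching the shape classes the atlas distinguishes.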
2.
Biomed Opt Express ; 14(8): 4037-4051, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37799697

ABSTRACT

Traditional miniaturized fluorescence microscopes are critical tools for modern biology. Invariably, they struggle to simultaneously image with a high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples is not possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, as real-time visualization is a crucial feature that assists users in identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where shift-varying deconvolution is required to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the use of an iterative reconstruction algorithm. The neural network-based reconstruction method we show here achieves a more than 10,000× increase in reconstruction speed compared to iterative reconstruction. The increased reconstruction speed allows us to visualize the results of our lensless microscope at more than 25 frames per second (fps), while achieving better than 7 µm resolution over a FOV of 10 mm². This ability to reconstruct and visualize samples in real time makes interaction with lensless microscopes far more user-friendly: users can operate them much as they currently do conventional microscopes.
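The headline speedup comes from replacing per-frame iterative optimization with a single feed-forward pass whose cost is paid once, up front. A hypothetical two-unknown toy illustrating the contrast, where a precomputed matrix inverse stands in for the trained network (all values are made up for illustration):

```python
# Toy deconvolution problem: measurement y = A x, recover x.
A = [[2.0, 1.0], [1.0, 3.0]]
y = [5.0, 10.0]          # generated from the true scene x = (1, 3)

# Iterative route: gradient descent on ||A x - y||^2 (hundreds of steps per frame).
x = [0.0, 0.0]
step = 0.1
for _ in range(500):
    r = [A[0][0] * x[0] + A[0][1] * x[1] - y[0],
         A[1][0] * x[0] + A[1][1] * x[1] - y[1]]      # residual A x - y
    g = [A[0][0] * r[0] + A[1][0] * r[1],
         A[0][1] * r[0] + A[1][1] * r[1]]             # gradient A^T r
    x = [x[0] - step * g[0], x[1] - step * g[1]]

# One-shot route: a precomputed linear map (playing the role of the trained
# network) turns reconstruction into a single matrix-vector product.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]
x_fast = [Ainv[0][0] * y[0] + Ainv[0][1] * y[1],
          Ainv[1][0] * y[0] + Ainv[1][1] * y[1]]
```

Both routes land on the same scene; only the per-frame cost differs, which is what makes >25 fps visualization possible.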

3.
Article in English | MEDLINE | ID: mdl-37561613

ABSTRACT

Using millimeter wave (mmWave) signals for imaging has an important advantage: they can penetrate poor environmental conditions such as fog, dust, and smoke that severely degrade optical imaging systems. However, mmWave radars, unlike cameras and LiDARs, suffer from low angular resolution because of small physical apertures and conventional signal processing techniques. Sparse radar imaging, on the other hand, can increase the aperture size while minimizing power consumption and readout bandwidth. This paper presents CoIR, an analysis-by-synthesis method that leverages the implicit neural network bias in convolutional decoders and compressed sensing to perform high-accuracy sparse radar imaging. The proposed system is dataset-agnostic and does not require any auxiliary sensors for training or testing. We introduce a sparse array design that allows for a 5.5× reduction in the number of antenna elements needed compared to conventional MIMO array designs. We demonstrate our system's improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.

4.
Sci Adv ; 9(26): eadg4671, 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37379386

ABSTRACT

Diffraction-limited optical imaging through scattering media has the potential to transform many applications, such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators, but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, static/dynamic scenes captured through static/dynamic aberrations.

5.
bioRxiv ; 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36798295

ABSTRACT

Optical neurotechnologies use light to interface with neurons and can monitor and manipulate neural activity with high spatiotemporal precision over large cortical extents. While there has been significant progress in miniaturizing microscopes for head-mounted configurations, these existing devices are still very bulky and could never be fully implanted. Any viable translation of these technologies to human use will require a much more noninvasive, fully implantable form factor. Here, we leverage advances in microelectronics and heterogeneous optoelectronic packaging to develop a transformative, ultrathin, miniaturized device for bidirectional optical stimulation and recording: the subdural CMOS Optical Probe (SCOPe). By being thin enough to lie entirely within the subdural space of the primate brain, SCOPe defines a path for the eventual human translation of a new generation of brain-machine interfaces based on light.

6.
bioRxiv ; 2023 Dec 12.
Article in English | MEDLINE | ID: mdl-38168235

ABSTRACT

Evaluating the heterogeneity of extracellular vesicles (EVs) is crucial for unraveling their complex actions and biodistribution. Here, we identify consistent architectural heterogeneity of EVs using cryogenic transmission electron microscopy (cryo-TEM), which has an inherent ability to image biological samples without harsh labeling methods while preserving their native conformation. Imaging EVs isolated using different methodologies from distinct sources such as cancer cells, normal cells, and body fluids, we identify a structural atlas of their dominantly consistent shapes. We identify EV architectural attributes by utilizing a segmentation neural network model. In total, 7,576 individual EVs were imaged and quantified by our computational pipeline. Across all 7,576 independent EVs, the average eccentricity was 0.5366, and the average equivalent diameter was 132.43 nm. The architectural heterogeneity was consistent across all sources of EVs, independent of purification techniques, and comprised single spherical (S. Spherical), rod-like or tubular, and double shapes. This study will serve as a reference foundation for high-resolution EV images and offer insights into their potential biological impact.

7.
Optica ; 9(1): 1-16, 2022 Jan 20.
Article in English | MEDLINE | ID: mdl-36338918

ABSTRACT

Lensless imaging provides opportunities to design imaging systems free from the constraints imposed by traditional camera architectures. Thanks to advances in imaging hardware, fabrication techniques, and new algorithms, researchers have recently developed lensless imaging systems that are extremely compact, lightweight, or able to image higher-dimensional quantities. Here we review these recent advances and describe the design principles, and their effects, that one should consider when developing and using lensless imaging systems.

8.
J Opt Soc Am A Opt Image Sci Vis ; 39(10): 1903-1912, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36215563

ABSTRACT

Lensless cameras are ultra-thin imaging systems that replace the lens with a thin passive optical mask and computation. Passive mask-based lensless cameras encode depth information in their measurements over a certain depth range. Early works have shown that this encoded depth can be used to perform 3D reconstruction of close-range scenes. However, these approaches to 3D reconstruction are typically optimization based, requiring strong hand-crafted priors and hundreds of iterations, and the reconstructions suffer from low resolution, noise, and artifacts. In this work, we propose FlatNet3D, a feed-forward deep network that can estimate both depth and intensity from a single lensless capture. FlatNet3D is an end-to-end trainable deep network that directly reconstructs depth and intensity from a lensless measurement using an efficient physics-based 3D mapping stage and a fully convolutional network. Our algorithm is fast and produces high-quality results, which we validate using both simulated and real scenes captured using PhlatCam.

9.
Nat Biomed Eng ; 6(5): 617-628, 2022 05.
Article in English | MEDLINE | ID: mdl-35256759

ABSTRACT

The simple and compact optics of lensless microscopes and the associated computational algorithms allow for large fields of view and the refocusing of the captured images. However, existing lensless techniques cannot accurately reconstruct the typical low-contrast images of optically dense biological tissue. Here we show that lensless imaging of tissue in vivo can be achieved via an optical phase mask designed to create a point spread function consisting of high-contrast contours with a broad spectrum of spatial frequencies. We built a prototype lensless microscope incorporating the 'contour' phase mask and used it to image calcium dynamics in the cortex of live mice (over a field of view of about 16 mm²) and in freely moving Hydra vulgaris, as well as microvasculature in the oral mucosa of volunteers. The low cost, small form factor and computational refocusing capability of in vivo lensless microscopy may open it up to clinical uses, especially for imaging difficult-to-reach areas of the body.


Subject(s)
Microscopy , Optics and Photonics , Algorithms , Animals , Humans , Mice , Microscopy/methods
10.
IEEE Trans Pattern Anal Mach Intell ; 44(4): 1934-1948, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33104508

ABSTRACT

Lensless imaging has emerged as a potential solution for realizing ultra-miniature cameras by eschewing the bulky lens of a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover scenes from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisy, perceptually poor images. In this work, we propose a non-iterative deep learning-based reconstruction approach that results in orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras for which the camera's forward model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into a space of intermediate reconstruction by learning parameters within the forward model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. These stages are trained together end to end. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different types of lensless prototypes: one that uses a separable forward model and another that uses a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adapt to other mask-based lensless cameras.
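For the separable case, an inversion stage of this kind amounts to left- and right-multiplying the measurement by matrices initialized from the calibrated forward model and then refined by training. A toy sketch with a 2×2 orthogonal stand-in mask (`Phi` and the scene `X` are illustrative assumptions; the trainable refinement and enhancement stage are omitted):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Toy separable forward model: measurement Y = Phi X Phi^T.
s = 1 / math.sqrt(2)
Phi = [[s, s], [s, -s]]            # orthogonal stand-in for a calibrated mask matrix
X = [[1.0, 2.0], [3.0, 4.0]]       # 2x2 "scene"
Y = matmul(matmul(Phi, X), transpose(Phi))   # simulated lensless measurement

# Inversion stage: X_hat = W_L Y W_R, with both W initialized to the adjoint
# of the forward model (end-to-end training would refine them further).
X_hat = matmul(matmul(transpose(Phi), Y), Phi)
```

Because `Phi` here is orthogonal, the adjoint already inverts the model exactly; with a real calibrated mask the learned `W` matrices correct the residual mismatch.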

11.
IEEE Trans Biomed Circuits Syst ; 15(6): 1295-1305, 2021 12.
Article in English | MEDLINE | ID: mdl-34951854

ABSTRACT

Emerging optical functional imaging and optogenetics are among the most promising approaches in neuroscience for studying neuronal circuits. Combining both methods in a single implantable device enables all-optical neural interrogation, with immediate applications in freely behaving animal studies. In this paper, we demonstrate such a device, capable of optical neural recording and stimulation over large cortical areas. This implantable surface device exploits lensless computational imaging and a novel packaging scheme to achieve an ultrathin (250 µm thick), mechanically flexible form factor. The core of the device is a custom-designed CMOS integrated circuit containing a 160×160 array of time-gated single-photon avalanche diodes (SPADs) for low-light-intensity imaging and an interspersed array of dual-color (blue and green) flip-chip-bonded micro-LEDs (µLEDs) as light sources. We achieved 60 µm lateral imaging resolution and 0.2 mm³ volumetric precision for optogenetics over a 5.4×5.4 mm² field of view (FoV). The device achieves a 125 fps frame rate and consumes 40 mW of total power.


Subject(s)
Neurosciences , Optogenetics , Animals , Neurons/physiology , Optical Imaging , Photic Stimulation , Prostheses and Implants
12.
Opt Express ; 29(23): 38540-38556, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34808905

ABSTRACT

Conventional continuous-wave amplitude-modulated time-of-flight (CWAM ToF) cameras suffer from a fundamental trade-off between light throughput and depth of field (DoF): a larger lens aperture allows more light collection but yields a significantly shallower DoF. However, both high light throughput, which increases signal-to-noise ratio, and a wide DoF, which enlarges the system's applicable depth range, are valuable for CWAM ToF applications. In this work, we propose EDoF-ToF, an algorithmic method to extend the DoF of large-aperture CWAM ToF cameras by using a neural network to deblur objects outside of the lens's narrow focal region and thus produce an all-in-focus measurement. A key component of our work is the proposed large-aperture ToF training-data simulator, which models the depth-dependent blurs and partial occlusions caused by such apertures. Contrary to conventional image deblurring, where the blur model is typically linear, ToF depth maps are nonlinear functions of scene intensities, resulting in a nonlinear blur model that we also derive for our simulator. Unlike extended DoF for conventional photography, where depth information needs to be encoded (or made depth-invariant) using additional hardware (phase masks, focal sweeping, etc.), ToF sensor measurements naturally encode depth information, allowing a completely software-based solution to extended DoF. We experimentally demonstrate EDoF-ToF increasing the DoF of a conventional ToF system by 3.6×, effectively achieving the DoF of a smaller lens aperture that admits 22.1× less light. Ultimately, EDoF-ToF lets CWAM ToF cameras enjoy the benefits of both high light throughput and a wide DoF.
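The nonlinearity can be seen in miniature: optical blur mixes the raw correlation (I/Q) signals, i.e. complex phasors, so the phase-decoded depth of a blurred pixel is not the intensity-weighted average of the contributing depths. A hypothetical two-point example (the 50 MHz modulation frequency and the 80/20 mixture are assumed values, and depths are taken to lie within the unambiguous range):

```python
import cmath
import math

c = 3e8          # speed of light, m/s
f_mod = 50e6     # assumed CWAM modulation frequency, Hz

def phase(d):
    """Round-trip modulation phase for a point at depth d (meters)."""
    return 4 * math.pi * d * f_mod / c

def depth_from_phase(p):
    return p * c / (4 * math.pi * f_mod)

d1, d2 = 1.0, 2.0                     # two scene points behind one blurred pixel
w1, w2 = 0.8, 0.2                     # their intensity weights in the blur kernel

# Blur acts on the raw I/Q correlations, i.e. on complex phasors:
mixed = w1 * cmath.exp(1j * phase(d1)) + w2 * cmath.exp(1j * phase(d2))
blurred_depth = depth_from_phase(cmath.phase(mixed) % (2 * math.pi))

# A naive *linear* blur of the depth map would instead predict:
linear_depth = w1 * d1 + w2 * d2
```

Here `blurred_depth` comes out near 1.12 m while the linear model predicts 1.2 m, which is why the simulator needs the derived nonlinear blur model rather than ordinary linear deblurring.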

13.
IEEE Trans Pattern Anal Mach Intell ; 42(7): 1618-1629, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32324539

ABSTRACT

We demonstrate a versatile thin lensless camera with a designed phase mask placed less than 2 mm from an imaging CMOS sensor. Using wave optics and phase retrieval methods, we present a general-purpose framework for creating phase masks that achieve desired sharp point spread functions (PSFs) at desired camera thicknesses. From a single 2D encoded measurement, we show reconstruction of high-resolution 2D images, computational refocusing, and 3D imaging. This ability is made possible by our proposed high-performance contour-based PSF. The heuristic contour-based PSF is designed using concepts from signal processing to achieve maximal information transfer to a bit-depth-limited sensor. Owing to the efficient coding, we can use fast linear methods for high-quality image reconstructions and switch to iterative nonlinear methods for higher-fidelity reconstructions and 3D imaging.
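Phase-retrieval mask design of this general kind can be sketched with Gerchberg-Saxton alternating projections: constrain the mask to unit modulus (a pure phase element), constrain the far field to the desired PSF magnitude, and iterate. A minimal 1D sketch under stated assumptions (the 16-sample grid, random target magnitude, and naive DFT are illustrative; the paper's wave-optics framework is more involved):

```python
import cmath
import math
import random

N = 16

def dft(x, inverse=False):
    """Naive O(N^2) DFT, adequate for a 16-point illustration."""
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

random.seed(1)
target = [abs(random.gauss(0, 1)) for _ in range(N)]     # desired |PSF|
scale = math.sqrt(N * N / sum(t * t for t in target))    # match energy (Parseval)
target = [t * scale for t in target]

phase = [0.0] * N
errors = []
for _ in range(100):
    mask = [cmath.exp(1j * p) for p in phase]            # unit-modulus phase mask
    far = dft(mask)                                      # far-field / PSF amplitude
    errors.append(sum((abs(f) - t) ** 2 for f, t in zip(far, target)))
    # project onto the target-magnitude set, transform back, keep only the phase
    far = [t * cmath.exp(1j * cmath.phase(f)) for f, t in zip(far, target)]
    phase = [cmath.phase(b) for b in dft(far, inverse=True)]
```

The classic error-reduction property of this scheme means the magnitude mismatch never increases from one iteration to the next.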

14.
PLoS One ; 14(8): e0219852, 2019.
Article in English | MEDLINE | ID: mdl-31412054

ABSTRACT

Schooling fishes, like flocking birds and swarming insects, display remarkable behavioral coordination. While over 25% of fish species exhibit schooling behavior, nighttime schooling has rarely been observed or reported. This is because vision is the primary modality for schooling, as corroborated by the fact that most fish schools disperse at critically low light levels. Here we report on a large aggregation of the bioluminescent flashlight fish Anomalops katoptron that exhibited nighttime schooling behavior during multiple moon phases, including the new moon. Data were recorded with a suite of low-light imaging devices, including a high-speed, high-resolution scientific complementary metal-oxide-semiconductor (sCMOS) camera. Image analysis revealed nighttime schooling using synchronized bioluminescent flashing displays and demonstrated that school motion synchrony correlates with relative swim speed. A computer model of flashlight fish schooling behavior shows that only a small percentage of individuals need to exhibit bioluminescence for school cohesion to be maintained. Flashlight fish schooling is unique among fishes in that bioluminescence enables schooling in conditions of no ambient light. In addition, some members can still partake in the school while not actively exhibiting their bioluminescence. Image analysis of our field data and model demonstrate that if a small percentage of fish become motivated to change direction, the rest of the school follows. The use of bioluminescence by flashlight fish to enable schooling in shallow water adds an additional ecological application of bioluminescence and suggests that schooling behavior in mesopelagic bioluminescent fishes may also be mediated by luminescent displays.


Subject(s)
Behavior, Animal/physiology , Fishes/physiology , Luminescence , Social Behavior , Swimming , Animals , Computer Simulation , Fishes/anatomy & histology , Models, Biological
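The core modeling claim, that a small luminous minority suffices for cohesion, can be illustrated with a toy heading-alignment simulation. All parameters below (the 10% luminous fraction, the 0.2 turning gain, the update rule itself) are hypothetical stand-ins, not the authors' model:

```python
import math
import random

random.seed(0)
N, STEPS, GAIN = 50, 100, 0.2
luminous = random.sample(range(N), 5)    # only 10% of fish flash
headings = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def resultant_length(angles):
    """Mean resultant length: 1.0 = perfectly aligned school, ~0 = random."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(c, s)

r_start = resultant_length(headings)
for _ in range(STEPS):
    # in darkness, every fish sees only the flashing minority and turns
    # partway toward their mean heading (shortest angular difference)
    target = math.atan2(sum(math.sin(headings[i]) for i in luminous),
                        sum(math.cos(headings[i]) for i in luminous))
    headings = [h + GAIN * math.atan2(math.sin(target - h), math.cos(target - h))
                for h in headings]
r_end = resultant_length(headings)
```

Starting from random headings, alignment rises toward 1.0 even though 90% of the simulated fish never emit light, echoing the model's finding that cohesion survives with few luminous individuals.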
15.
Sci Adv ; 3(12): e1701548, 2017 12.
Article in English | MEDLINE | ID: mdl-29226243

ABSTRACT

Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: as lenses become smaller, they must either collect less light or image a smaller field of view. To break this trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world's tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high-frame-rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
