Results 1 - 4 of 4
1.
Opt Lett ; 49(2): 322-325, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38194559

ABSTRACT

We demonstrate the fabrication of volume holograms using two-photon polymerization with dynamic control of light exposure, a method we refer to as (3 + 1)D printing. Volume holograms recorded by interfering reference and signal beams have a diffraction efficiency that is inversely proportional to the square of the number of superimposed holograms. With (3 + 1)D printing, the refractive index of each voxel is set independently, so by digitally filtering out the undesired interference terms the diffraction efficiency becomes inversely proportional to the number of multiplexed gratings. We experimentally demonstrated this linear dependence by recording M = 50 volume gratings. To the best of our knowledge, this is the first experimental demonstration of distributed volume holograms that overcome the 1/M² limit.
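
A minimal numerical sketch of the scaling contrast described above; the equal-strength, shared-budget assumption and the script itself are illustrative additions, not data or a model from the paper:

```python
import numpy as np

# Illustrative scaling comparison (assumption for illustration, not the paper's data):
# interference-recorded multiplexed holograms share a fixed index-modulation budget
# among M gratings, giving a per-grating efficiency ~ 1/M^2, whereas voxel-by-voxel
# (3 + 1)D printing with digitally filtered cross-terms scales as ~ 1/M.
M = np.arange(1, 51)            # number of multiplexed gratings
eta_interference = 1.0 / M**2   # normalized per-grating efficiency, 1/M^2 regime
eta_printed = 1.0 / M           # normalized per-grating efficiency, 1/M regime

for m in (1, 10, 50):
    idx = m - 1
    print(f"M = {m:2d}: interference {eta_interference[idx]:.4f}  printed {eta_printed[idx]:.4f}")
```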

2.
Nat Commun ; 15(1): 808, 2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38280912

ABSTRACT

A fundamental challenge in neuroengineering is determining a proper artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is downsampling the images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We validated a learning-based approach (the actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images that elicit neuronal responses in silico and ex vivo with higher reliability than those produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses. Ultimately, the framework could also be applied to encoding strategies in other sensory prostheses, such as cochlear or limb prostheses.
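
A loose illustration of the contrast between a learning-free baseline and a parameterized downsampler with a kernel and a contrast gain; the function names, kernel, and array sizes below are hypothetical and do not reproduce the paper's actor-model framework:

```python
import numpy as np

def avg_pool_downsample(img, factor):
    """Learning-free baseline: block-average the image."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def parameterized_downsample(img, kernel, contrast, factor):
    """Hypothetical parameterized alternative: convolve with a kernel that
    would be optimized during training, apply a contrast gain, then subsample."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    out = np.clip(0.5 + contrast * (out - 0.5), 0.0, 1.0)  # contrast adjustment
    return out[::factor, ::factor]

rng = np.random.default_rng(0)
image = rng.random((32, 32))                 # stand-in for a camera frame
kernel = np.ones((3, 3)) / 9.0               # placeholder for learned weights
print(avg_pool_downsample(image, 4).shape)                    # (8, 8)
print(parameterized_downsample(image, kernel, 1.2, 4).shape)  # (8, 8)
```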


Subject(s)
Retina, Visual Prostheses, Mice, Animals, Reproducibility of Results, Retinal Ganglion Cells/physiology, Learning/physiology, Visual Perception/physiology
3.
Curr Opin Biotechnol ; 85: 103054, 2024 02.
Article in English | MEDLINE | ID: mdl-38142647

ABSTRACT

Despite remarkable progress in quantitative phase imaging (QPI) microscopes, their wide adoption is limited by a lack of specificity compared with well-established fluorescence microscopy (FM). In fact, the absence of fluorescent tags prevents the identification of subcellular structures in single cells, making the interpretation of label-free 2D and 3D phase-contrast data challenging. Great effort has been made by many groups worldwide to address and overcome this limitation. Different computational methods have been proposed, and many more are currently under investigation, to achieve label-free microscopic imaging at the single-cell level that can recognize and quantify different subcellular compartments. This route promises to bridge the gap between QPI and FM for real-world applications.


Subject(s)
Microscopy, Quantitative Phase Imaging, Microscopy/methods, Phase-Contrast Microscopy/methods
4.
Nat Comput Sci ; 1(8): 542-549, 2021 Aug.
Article in English | MEDLINE | ID: mdl-38217249

ABSTRACT

Today's heavy machine learning tasks are fueled by large datasets. Computing is performed with power-hungry processors whose performance is ultimately limited by data transfer to and from memory. Optics is a powerful means of communicating and processing information, and there is currently intense interest in optical information processing for realizing high-speed computations. Here we present and experimentally demonstrate an optical computing framework called the scalable optical learning operator, which is based on spatiotemporal effects in multimode fibers, for a range of learning tasks including classifying COVID-19 X-ray lung images, speech recognition, and predicting age from images of faces. The presented framework addresses the energy scaling problem of existing systems without compromising speed. We leverage the simultaneous linear and nonlinear interaction of spatial modes as a computation engine. We show numerically and experimentally that the method can execute several different tasks with accuracy comparable to that of a digital implementation.
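
A toy digital analogue of the scheme, treating fiber propagation as a fixed nonlinear random projection followed by a trainable linear readout; this analogy, and every name and parameter below, is an assumption made for illustration and is not the authors' experimental pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: 200 samples, 64 input features, binary labels.
X = rng.standard_normal((200, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

# Fixed "optical" stage: random linear mixing plus a nonlinearity, standing in
# (very loosely) for mode mixing and nonlinear propagation in the multimode fiber.
# Only the digital readout below is trained.
W_fixed = rng.standard_normal((64, 256)) / np.sqrt(64)
features = np.tanh(X @ W_fixed)

# Trainable digital readout: ridge-regularized least squares.
lam = 1e-2
A = features.T @ features + lam * np.eye(256)
w = np.linalg.solve(A, features.T @ y)

pred = (features @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```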
