1.
Sci Rep ; 12(1): 8446, 2022 May 19.
Article in English | MEDLINE | ID: mdl-35589729

ABSTRACT

Diffractive optical networks unify wave optics and deep learning to all-optically compute a given machine learning or computational imaging task as light propagates from the input to the output plane. Here, we report the design of diffractive optical networks for the classification and reconstruction of spatially overlapping, phase-encoded objects. When two different phase-only objects spatially overlap, the individual object functions are perturbed since their phase patterns are summed. Retrieving the underlying phase images from the overlapping phase distribution alone is a challenging problem whose solution is generally not unique. We show that, through a task-specific training process, passive diffractive optical networks composed of successive transmissive layers can all-optically and simultaneously classify two different randomly selected, spatially overlapping phase images at the input. After training with ~550 million unique combinations of phase-encoded handwritten digits from the MNIST dataset, our blind testing results reveal that the diffractive optical network achieves >85.8% accuracy for the all-optical classification of two overlapping phase images of new handwritten digits. In addition to the all-optical classification of overlapping phase objects, we also demonstrate the reconstruction of these phase images using a shallow electronic neural network that takes the highly compressed output of the diffractive optical network as its input (with, e.g., ~20-65 times fewer pixels) to rapidly reconstruct both phase images, despite their spatial overlap and the related phase ambiguity. The presented phase image classification and reconstruction framework might find applications in, e.g., computational imaging, microscopy, and quantitative phase imaging.
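The forward model underlying such diffractive networks alternates free-space propagation with passive phase-only layers. The sketch below illustrates that model with the scalar angular-spectrum method; the layer phases here are random placeholders (the actual networks use phases learned in training), and the grid, wavelength, and spacing values are illustrative assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field over distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    propagating = arg > 0                               # drop evanescent waves
    kz = 2 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))
    transfer = np.exp(1j * kz * z) * propagating
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def diffractive_network(field, phase_layers, wavelength, dx, z):
    """Free-space propagation interleaved with passive phase-only layers."""
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, z)
        field = field * np.exp(1j * phase)              # phase-only modulation
    return angular_spectrum_propagate(field, wavelength, dx, z)

rng = np.random.default_rng(0)
n = 64
wavelength, dx, z = 0.75e-3, 0.4e-3, 40e-3              # meters, THz-scale (assumed)
inp = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))    # phase-only input object
layers = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]
out = diffractive_network(inp, layers, wavelength, dx, z)
print(out.shape)
```

In a real design, the `phase` arrays would be the trainable parameters, optimized by backpropagating a classification loss through this differentiable propagation model.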

2.
Sci Adv ; 7(13)2021 Mar.
Article in English | MEDLINE | ID: mdl-33771863

ABSTRACT

We demonstrate optical networks composed of diffractive layers trained using deep learning to encode the spatial information of objects into the power spectrum of the diffracted light, which is then used to classify objects with a single-pixel spectroscopic detector. Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework in the terahertz part of the spectrum by optically classifying images of handwritten digits from the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also coupled this diffractive network-based spectral encoding with a shallow electronic neural network, trained to rapidly reconstruct the images of handwritten digits based solely on the spectral power detected at these ten wavelengths, demonstrating task-specific image decompression. This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.
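The electronic readout of such a one-wavelength-per-class encoding reduces to an argmax over ten power measurements. A toy sketch, where the power values are made up for illustration rather than taken from any measured data:

```python
import numpy as np

# Hypothetical single-pixel detector readings at the ten class-assigned
# wavelengths (one per digit 0-9); in the framework above, the trained
# diffractive layers steer most of the diffracted power into the
# wavelength channel of the correct class.
spectral_power = np.array([0.02, 0.05, 0.71, 0.04, 0.03,
                           0.06, 0.02, 0.03, 0.02, 0.02])
predicted_digit = int(np.argmax(spectral_power))  # strongest channel wins
print(predicted_digit)  # → 2
```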

3.
Nat Commun ; 12(1): 37, 2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33397912

ABSTRACT

Recent advances in deep learning have provided non-intuitive solutions to various inverse problems in optics. At the intersection of machine learning and optics, diffractive networks merge wave optics with deep learning to design task-specific elements that all-optically perform functions such as object classification and machine vision. Here, we present a diffractive network that shapes an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. We demonstrate the synthesis of various pulses by designing diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. Our results demonstrate direct pulse shaping in the terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a physical transfer learning approach illustrates pulse-width tunability by replacing part of an existing network with newly trained diffractive layers, demonstrating its modularity. This learning-based diffractive pulse engineering framework can find broad applications in, e.g., communications, ultrafast imaging, and spectroscopy.
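Controlling the amplitude and phase of the input wavelengths independently is equivalent to imposing a spectral transfer function on the pulse; the output waveform is the inverse Fourier transform of the filtered spectrum. A minimal sketch of that relationship, where the Gaussian input pulse and the Gaussian spectral filter are illustrative assumptions, not the transfer functions realized by the diffractive devices:

```python
import numpy as np

# Time grid (ps) and a broadband input pulse (illustrative Gaussian)
n, dt = 1024, 0.05                        # 0.05 ps sampling
t = (np.arange(n) - n // 2) * dt
pulse_in = np.exp(-(t / 0.2) ** 2)        # ~0.2 ps input pulse

# Spectral transfer function the passive layers would implement: here an
# assumed Gaussian amplitude filter that narrows the spectrum
f = np.fft.fftfreq(n, d=dt)               # THz
H = np.exp(-(f / 0.5) ** 2)

spectrum = np.fft.fft(pulse_in)
pulse_out = np.fft.ifft(spectrum * H)     # shaped output waveform

def fwhm(x, y):
    """Full width at half maximum of |y| over the grid x."""
    y = np.abs(y) / np.abs(y).max()
    above = x[y >= 0.5]
    return above.max() - above.min()

# Narrowing the spectrum broadens the pulse (time-bandwidth trade-off)
print(fwhm(t, pulse_in), fwhm(t, pulse_out))
```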

4.
Light Sci Appl ; 8: 112, 2019.
Article in English | MEDLINE | ID: mdl-31814969

ABSTRACT

Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating, and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.

5.
Sci Rep ; 8(1): 15650, 2018 10 23.
Article in English | MEDLINE | ID: mdl-30353033

ABSTRACT

With the advent of sperm sex-sorting methods and computer-aided sperm analysis platforms, comparative 2D motility studies showed that there is no significant difference in the swimming speeds of X-sorted and Y-sorted sperm cells, clarifying earlier misconceptions. However, other differences in their swimming dynamics might have gone undetected, as conventional optical microscopes are limited in revealing the complete 3D motion of free-swimming sperm cells due to poor depth resolution and the trade-off between field-of-view and spatial resolution. Using a dual-view on-chip holographic microscope, we acquired the full 3D locomotion of 235 X-sorted and 289 Y-sorted bovine sperm cells, precisely revealing their 3D translational head motion, the angular velocity of their head spin, and the 3D flagellar motion. Our results confirmed that various motility parameters remain similar between X- and Y-sorted sperm populations; however, we found a statistically significant preference of Y-sorted bovine sperm for helix-shaped 3D swimming trajectories, which also exhibit increased linearity compared to those of X-sorted sperm. Further research on, e.g., the differences in the kinematic response of X-sorted and Y-sorted sperm cells to surrounding chemicals and ions might shed more light on the origins of these results.
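Linearity of a 3D track is conventionally computed as net displacement divided by total path length (1 for a perfectly straight track). A sketch comparing a helical and a straight trajectory; the helix radius and pitch are illustrative values, not the measured bovine-sperm parameters:

```python
import numpy as np

def linearity(points):
    """Net displacement over total path length of an (N, 3) trajectory."""
    steps = np.diff(points, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    net_displacement = np.linalg.norm(points[-1] - points[0])
    return net_displacement / path_length

t = np.linspace(0, 4 * np.pi, 200)
# Helical track (5 µm radius, advancing along z) vs. a straight track
helix = np.stack([5 * np.cos(t), 5 * np.sin(t), 2 * t], axis=1)
straight = np.stack([np.zeros_like(t), np.zeros_like(t), 2 * t], axis=1)

print(linearity(helix), linearity(straight))  # helix well below 1, straight = 1
```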


Subject(s)
Three-Dimensional Imaging, Sex Determination Analysis, Sperm Head/physiology, Sperm Motility/physiology, Sperm Tail/physiology, Spermatozoa/physiology, Animals, Cattle, Female, Male
6.
Science ; 361(6406): 1004-1008, 2018 09 07.
Article in English | MEDLINE | ID: mdl-30049787

ABSTRACT

Deep learning has been transforming our ability to execute advanced inference tasks using computers. Here we introduce a physical mechanism to perform machine learning by demonstrating an all-optical diffractive deep neural network (D2NN) architecture that can implement various functions following the deep learning-based design of passive diffractive layers that work collectively. We created 3D-printed D2NNs that implement the classification of images of handwritten digits and fashion products, as well as the function of an imaging lens, at terahertz frequencies. Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can execute; it will find applications in all-optical image analysis, feature detection, and object classification, and will also enable new camera designs and optical components that perform distinctive tasks using D2NNs.

7.
ACS Nano ; 12(3): 2554-2559, 2018 03 27.
Article in English | MEDLINE | ID: mdl-29522316

ABSTRACT

We present a cost-effective and portable platform based on contact lenses for noninvasively detecting Staphylococcus aureus, which is part of the human ocular microbiome and resides on the cornea and conjunctiva. Using S. aureus-specific antibodies and a surface chemistry protocol that is compatible with human tears, contact lenses are designed to specifically capture S. aureus. After the bacteria are captured on the lens and immediately before imaging, they are tagged with surface-functionalized polystyrene microparticles. These microbeads provide a sufficient signal-to-noise ratio for quantifying the captured bacteria on the contact lens, without any fluorescent labels, by 3D imaging of the curved surface of each lens using a single hologram taken with a lens-free on-chip microscope. After the 3D surface of the contact lens is computationally reconstructed using rotational field transformations and holographic digital focusing, a machine learning algorithm automatically counts the number of beads on the lens surface, revealing the count of the captured bacteria. As a proof of concept, we created a field-portable and cost-effective holographic microscope, weighing 77 g and controlled by a laptop. Using daily contact lenses spiked with bacteria, we demonstrated that this computational sensing platform provides a detection limit of ~16 bacteria/µL. This contact-lens-based wearable sensor can be broadly applied to detect various bacteria, viruses, and analytes in tears using a cost-effective and portable computational imager that might be used even at home by consumers.
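At its core, the automated bead count reduces to connected-component labeling on a thresholded reconstruction of the lens surface. A minimal pure-Python sketch on a synthetic binary image; the image, threshold, and flood-fill counter are illustrative stand-ins, not the trained machine-learning pipeline used in the work above:

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected components of True pixels via iterative flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:                      # consume this whole blob
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Synthetic "reconstructed lens surface" with 3 bright bead signatures
img = np.zeros((16, 16))
img[2:4, 2:4] = 1.0
img[8:11, 9:12] = 1.0
img[13, 5] = 1.0
print(count_blobs(img > 0.5))  # → 3
```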


Subject(s)
Contact Lenses/microbiology, Holography/instrumentation, Machine Learning, Microscopy/instrumentation, Staphylococcus aureus/isolation & purification, Equipment Design, Holography/methods, Humans, Microscopy/methods, Staphylococcal Infections/microbiology
8.
RSC Adv ; 8(64): 36493-36502, 2018 Oct 26.
Article in English | MEDLINE | ID: mdl-35558922

ABSTRACT

Diagnostics based on fluorescence imaging of biomolecules are typically performed in well-equipped laboratories and are, in general, not suitable for remote and resource-limited settings. Here we demonstrate the development of a compact, lightweight, and cost-effective smartphone-based fluorescence microscope capable of detecting signals from fluorescently labeled bacteria. By optimizing a peptide nucleic acid (PNA)-based fluorescence in situ hybridization (FISH) assay, we demonstrate the use of the smartphone-based microscope for rapid identification of pathogenic bacteria. We evaluated the use of both a general nucleic acid stain and species-specific PNA probes, and demonstrated that the mobile platform can detect bacteria with a sensitivity comparable to that of a conventional fluorescence microscope. The PNA-based FISH assay, in combination with the smartphone-based fluorescence microscope, allowed us to qualitatively analyze pathogenic bacteria in contaminated powdered infant formula (PIF) at initial concentrations, prior to cultivation, as low as 10 CFU per 30 g of PIF. Importantly, detection can be done directly on the smartphone screen, without the need for additional image analysis. The assay should be straightforward to adapt for bacterial identification in clinical samples as well. The cost-effectiveness, field-portability, and simplicity of this platform will create various opportunities for its use in resource-limited settings and point-of-care offices, opening up a myriad of additional applications based on other fluorescence-based diagnostic assays.

9.
ACS Nano ; 9(3): 3265-73, 2015 Mar 24.
Article in English | MEDLINE | ID: mdl-25688665

ABSTRACT

Sizing individual nanoparticles and dispersions of nanoparticles provides invaluable information in applications such as nanomaterial synthesis, air and water quality monitoring, virology, and medical diagnostics. Several conventional nanoparticle sizing approaches exist; however, there remains a lack of high-throughput approaches suitable for low-resource and field settings, i.e., methods that are cost-effective, portable, and can measure widely varying particle sizes and concentrations. Here we fill this gap using an unconventional approach that combines holographic on-chip microscopy with vapor-condensed nanolens self-assembly inside a cost-effective hand-held device. By using this approach and capturing time-resolved in situ images of the particles, we optimize the nanolens formation process, resulting in significant signal enhancement for the label-free detection and sizing of individual deeply subwavelength particles (smaller than λ/10) over a 30 mm² sample field-of-view, with an accuracy of ±11 nm. These time-resolved measurements are significantly more reliable than a single measurement at a given time, which was previously used only for nanoparticle detection without sizing. We experimentally demonstrate the sizing of individual nanoparticles as well as viruses, monodisperse samples, and complex polydisperse mixtures, where the sample concentrations can span ~5 orders of magnitude and particle sizes can range from 40 nm to the millimeter scale. We believe that this high-throughput and label-free nanoparticle sizing platform, together with its cost-effective and hand-held interface, will make highly advanced nanoscopic measurements readily accessible to researchers in developing countries and even to citizen scientists, and might be especially valuable for environmental and biomedical applications as well as for higher education and training programs.


Subject(s)
Microscopy/methods, Nanoparticles/chemistry, Particle Size, Cost-Benefit Analysis, Holography, Microscopy/economics, Microscopy/instrumentation, Polystyrenes/chemistry, Volatilization
10.
ACS Nano ; 8(7): 7340-9, 2014 Jul 22.
Article in English | MEDLINE | ID: mdl-24979060

ABSTRACT

Nanostructured optical components, such as nanolenses, direct light at subwavelength scales to enable, among other applications, high-resolution lithography, miniaturization of photonic circuits, and nanoscopic imaging of biostructures. A major challenge in fabricating nanolenses is positioning the lens appropriately with respect to the sample while simultaneously ensuring it adopts the optimal size and shape for the intended use. One application of particular interest is the enhancement of contrast and signal-to-noise ratio in the imaging of nanoscale objects, especially over wide fields-of-view (FOVs), which typically come with limited resolution and sensitivity for imaging nano-objects. Here we present a self-assembly method for fabricating time- and temperature-tunable nanolenses based on the condensation of a polymeric liquid around a nanoparticle, which we apply to the high-throughput on-chip detection of spheroids smaller than 40 nm, rod-shaped particles with diameters smaller than 20 nm, and biofunctionalized nanoparticles, all across an ultralarge FOV of >20 mm². Previous nanoparticle imaging efforts across similar FOVs have detected spheroids no smaller than 100 nm; our results therefore demonstrate the detection of particles >15-fold smaller in volume, which in free space have >240 times weaker Rayleigh scattering compared to the particle sizes detected in earlier wide-field imaging work. This entire platform, with its tunable nanolens condensation and wide-field imaging functions, is also miniaturized into a cost-effective and portable device, which might be especially important for field use, mobile sensing, and diagnostics applications, including, for example, the measurement of viral load in bodily fluids.
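The >15-fold volume and >240-fold scattering figures follow directly from Rayleigh scaling: scattered intensity grows as the sixth power of particle diameter (i.e., as volume squared). A quick arithmetic check comparing a 40 nm particle against the 100 nm detection floor of earlier wide-field work:

```python
# Rayleigh regime: volume ~ d^3, scattered intensity ~ d^6
d_prior, d_new = 100.0, 40.0          # particle diameters in nm
volume_ratio = (d_prior / d_new) ** 3
scattering_ratio = (d_prior / d_new) ** 6
print(volume_ratio, scattering_ratio)  # → 15.625 244.140625
```

These match the abstract's ">15-fold smaller in volume" and ">240 times weaker Rayleigh scattering".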


Subject(s)
Lenses, Nanotechnology/instrumentation, Microspheres, Nanoparticles/chemistry, Polystyrenes/chemistry, Signal-To-Noise Ratio, Temperature, Time Factors, Volatilization