Results 1 - 20 of 216
1.
ACS Sens ; 6(9): 3242-3252, 2021 09 24.
Article in English | MEDLINE | ID: mdl-34467761

ABSTRACT

The emergence of epigenetic gene regulation and its role in disease have motivated a growing field of epigenetic diagnostics for risk assessment and screening. In particular, irregular cytosine DNA base methylation has been implicated in several diseases, yet the methods for detecting these epigenetic marks are limited to lengthy protocols requiring bulky and costly equipment. We demonstrate a simple workflow for detecting methylated CpG dinucleotides in synthetic and genomic DNA samples using methylation-sensitive restriction enzyme digestion followed by loop-mediated isothermal amplification. We additionally demonstrate a cost-effective mobile fluorescence reader comprising a light-emitting diode bundle, a mirror, and optical fibers to transduce fluorescence signals associated with DNA amplification. The workflow can be performed in approximately 1 h, requiring only a simple heat source, and can therefore provide a foundation for distributable point-of-care testing of DNA methylation levels.
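The readout logic of this workflow is simple enough to sketch: a methylation-sensitive restriction enzyme cuts only unmethylated recognition sites, so a template that still amplifies after digestion was methylated. A minimal sketch, using a hypothetical helper `call_methylation` (not from the paper) and assuming a paired undigested control reaction:

```python
def call_methylation(amplified_after_digestion, amplified_undigested):
    """Interpret a paired MSRE-digestion + amplification experiment.

    Methylated CpG sites block methylation-sensitive restriction
    enzymes, so an intact (amplifiable) template after digestion
    indicates methylation, while loss of amplification indicates the
    unmethylated site was cut. The undigested control must amplify
    for the result to be valid.
    """
    if not amplified_undigested:
        return "invalid"  # control failed: no template or inhibited reaction
    return "methylated" if amplified_after_digestion else "unmethylated"
```

The same boolean logic applies whether the amplification readout comes from a benchtop instrument or the mobile fluorescence reader described above.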


Subjects
Nucleic Acids , Fluorescence , Methylation , Molecular Diagnostic Techniques , Nucleic Acid Amplification Techniques
2.
Light Sci Appl ; 10(1): 196, 2021 Sep 24.
Article in English | MEDLINE | ID: mdl-34561415

ABSTRACT

Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (Ni) and output (No), where Ni and No represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥Ni × No, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N and their all-optical transformations are more accurate for N < Ni × No. 
These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces.
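The pseudoinverse-based, data-free design step can be illustrated with a toy model. Below, random complex matrices stand in for the fixed propagation before and after the diffractive surface (the paper uses actual free-space diffraction kernels, which this sketch does not model), and the layer's complex transmission coefficients `t` are solved for in closed form; with N = Ni × No features, the target transform is matched almost exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No = 3, 3          # input / output pixels
N = Ni * No            # diffractive features; N >= Ni*No permits exact solutions

# Fixed (random, for illustration) propagation matrices:
# input field -> layer, and layer -> output field.
C = rng.standard_normal((N, Ni)) + 1j * rng.standard_normal((N, Ni))
B = rng.standard_normal((No, N)) + 1j * rng.standard_normal((No, N))

# Target complex-valued linear transformation.
A = rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))

# The all-optical transform is A_opt = B @ diag(t) @ C, which is linear in t:
# A[i, j] = sum_k B[i, k] * C[k, j] * t[k]  ->  M @ t = vec(A)
M = np.einsum('ik,kj->ijk', B, C).reshape(No * Ni, N)
t = np.linalg.pinv(M) @ A.reshape(-1)   # pseudoinverse-based design

A_opt = B @ np.diag(t) @ C
error = np.linalg.norm(A_opt - A) / np.linalg.norm(A)
```

With fewer features (N < Ni × No) the same pseudoinverse gives only a least-squares approximation, which is the regime where the abstract reports data-driven designs becoming more accurate.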

3.
Front Med (Lausanne) ; 8: 666554, 2021.
Article in English | MEDLINE | ID: mdl-34485323

ABSTRACT

Lyme disease (also known as Lyme borreliosis) is the most common vector-borne disease in the United States, with an estimated 476,000 cases per year. While the long-term impact of Lyme disease on patients has historically been controversial, mounting evidence supports the idea that a substantial number of patients experience persistent symptoms following treatment. The research community has largely lacked the funding needed to properly advance the scientific and clinical understanding of the disease, or to develop and evaluate innovative approaches for prevention, diagnosis, and treatment. Given the many outstanding questions surrounding the diagnosis, clinical presentation, and treatment of Lyme disease, and the underlying molecular mechanisms that trigger persistent disease, there is an urgent need for more support. This review article summarizes progress over the past 5 years in our understanding of Lyme and tick-borne diseases in the United States and highlights remaining challenges.

4.
Osteoarthr Cartil Open ; 3(1)2021 Mar.
Article in English | MEDLINE | ID: mdl-34386778

ABSTRACT

Objective: To describe the characteristics of calcium pyrophosphate (CPP) crystal size and morphology under compensated polarized light microscopy (CPLM). Secondarily, to describe CPP crystals seen only with digital enhancement of CPLM images, confirmed with advanced imaging techniques. Methods: Clinical lab-identified CPP-positive synovial fluid samples were collected from 16 joint aspirates. Four raters used a standardized protocol to describe crystal shape, birefringence strength and color. A crystal expert confirmed CPLM-visualized crystal identification. For crystal measurement, a high-pass linear light filter was used to enhance resolution and line discrimination of digital images. This process identified additional enhanced crystals not seen by raters under CPLM. Single-shot computational polarized light microscopy (SCPLM) provided further confirmation of the enhanced crystals' presence. Results: Of 932 suspected crystals identified by CPLM, 569 met our inclusion criteria, and 293 (51%) were confirmed as CPP crystals. Of 175 unique confirmed crystals, 118 (67%) were rods (median area 3.6 µm² [range, 1.0-22.9 µm²]), and 57 (33%) were rhomboids (median area 4.8 µm² [range, 0.9-16.7 µm²]). Crystals visualized only after digital image enhancement were smaller and less birefringent than CPLM-identified crystals. Conclusions: CPP crystals that are smaller and weakly birefringent are more difficult to identify. There is likely a population of smaller, less birefringent CPP crystals that routinely goes undetected by CPLM. Describing the characteristics of poorly visible crystals may be of use for future development of novel crystal identification methods.
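The digital enhancement step described above is, in essence, linear high-pass spatial filtering. A minimal sketch (a hypothetical FFT-based filter, not the authors' exact pipeline) that suppresses the smooth background while preserving a small, localized feature:

```python
import numpy as np

def high_pass_filter(image, cutoff_frac=0.1):
    """Suppress low spatial frequencies in a 2D image via the FFT.

    cutoff_frac sets the radius of the suppressed low-frequency disk
    as a fraction of the smaller image dimension. A hypothetical
    stand-in for the linear high-pass enhancement described above.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    mask = r > cutoff_frac * min(h, w)   # keep only high frequencies
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# A flat background with one small bright "crystal": the filter removes
# the constant background while the localized feature survives.
img = np.full((64, 64), 10.0)
img[30:34, 30:34] += 5.0
out = high_pass_filter(img)
```

Small, weakly birefringent features that sit just above a bright, slowly varying background become much easier to threshold once that background is removed, which is consistent with the extra crystals the enhancement revealed.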

5.
Nat Commun ; 12(1): 4884, 2021 08 12.
Article in English | MEDLINE | ID: mdl-34385460

ABSTRACT

Pathology is practiced by visual inspection of histochemically stained tissue slides. While the hematoxylin and eosin (H&E) stain is most commonly used, special stains can provide additional contrast to different tissue components. Here, we demonstrate the utility of supervised learning-based computational stain transformation from H&E to special stains (Masson's Trichrome, periodic acid-Schiff and Jones silver stain) using kidney needle core biopsy tissue sections. Based on the evaluation by three renal pathologists, followed by adjudication by a fourth pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis of several non-neoplastic kidney diseases, sampled from 58 unique subjects (P = 0.0095). A second study found that the quality of the computationally generated special stains was statistically equivalent to those which were histochemically stained. This stain-to-stain transformation framework can improve preliminary diagnoses when additional special stains are needed, also providing significant savings in time and cost.


Subjects
Biopsy, Large-Core Needle/methods , Deep Learning , Diagnosis, Computer-Assisted/methods , Kidney Diseases/pathology , Kidney/pathology , Staining and Labeling/methods , Algorithms , Coloring Agents/chemistry , Coloring Agents/classification , Coloring Agents/standards , Diagnosis, Differential , Humans , Kidney Diseases/diagnosis , Pathology, Clinical/methods , Pathology, Clinical/standards , Reference Standards , Reproducibility of Results , Sensitivity and Specificity , Staining and Labeling/standards
6.
Lab Chip ; 21(18): 3550-3558, 2021 09 14.
Article in English | MEDLINE | ID: mdl-34292287

ABSTRACT

Particle agglutination assays are widely adopted immunological tests that are based on antigen-antibody interactions. Antibody-coated microscopic particles are mixed with a test sample that potentially contains the target antigen, as a result of which the particles form clusters, with a size that is a function of the antigen concentration and the reaction time. Here, we present a quantitative particle agglutination assay that combines mobile lens-free microscopy and deep learning for rapidly measuring the concentration of a target analyte; as a proof of concept, we demonstrate high-sensitivity C-reactive protein (hs-CRP) testing using human serum samples. A dual-channel capillary lateral flow device is designed to host the agglutination reaction using 4 µL of serum sample with a material cost of 1.79 cents per test. A mobile lens-free microscope records time-lapsed inline holograms of the lateral flow device, monitoring the agglutination process over 3 min. These captured holograms are processed, and at each frame the number and area of the particle clusters are automatically extracted and fed into shallow neural networks to predict the CRP concentration. 189 measurements using 88 unique patient serum samples were utilized to train, validate and blindly test our platform, which matched the corresponding ground truth concentrations in the hs-CRP range (0-10 µg mL⁻¹) with an R² value of 0.912. This computational sensing platform was also able to successfully differentiate very high CRP concentrations (e.g., >10-500 µg mL⁻¹) from the hs-CRP range. This mobile, cost-effective and quantitative particle agglutination assay can be useful for various point-of-care sensing needs and global health related applications.
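The final regression step above (cluster count and area in, concentration out) is a small supervised-learning problem. A self-contained sketch of a one-hidden-layer ("shallow") network trained by plain gradient descent on synthetic stand-in data; the real system is trained on features measured from holograms, and its architecture may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: normalized particle-cluster count and total
# cluster area (the two features extracted per frame), with an assumed
# monotonic relationship to analyte concentration.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]      # hypothetical "concentration"

# One-hidden-layer regression network.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

losses, lr = [], 0.5
for _ in range(3000):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g2 = err[:, None] / len(X)          # dLoss/dpred (factor 2 folded into lr)
    gW2, gb2 = h.T @ g2, g2.sum(0)
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)   # backpropagate through tanh
    gW1, gb1 = X.T @ g1, g1.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
r2 = 1.0 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

A network this small trains in milliseconds, which is why per-frame features plus a shallow model suit a 3-minute mobile assay better than a large image-to-concentration network would.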


Subjects
Deep Learning , Holography , Agglutination , Humans , Microscopy , Point-of-Care Testing
8.
Light Sci Appl ; 10(1): 155, 2021 Jul 29.
Article in English | MEDLINE | ID: mdl-34326306

ABSTRACT

Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics-processing units (GPUs), removing spatial aliasing artifacts due to spectral undersampling, while closely matching images of the same samples reconstructed using the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be further extended to process 3× undersampled spectral data per A-line, with some performance degradation in the reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance using fewer spectral points per A-line compared to 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
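The spatial aliasing that the network removes arises because spectral undersampling folds deep reflectors past the reduced Nyquist depth. A toy numpy example with an assumed single-reflector interference fringe (using the same 1280/640 point counts quoted above) shows the fold under 2× undersampling:

```python
import numpy as np

n_full, f = 1280, 400          # spectral points per A-line; fringe frequency
k = np.arange(n_full)
fringe = np.cos(2 * np.pi * f * k / n_full)   # single-reflector fringe

# Full sampling: the A-line (FFT magnitude) peaks at the true depth bin.
full_peak = int(np.argmax(np.abs(np.fft.rfft(fringe))))

# 2x spectral undersampling: keep every other spectral point. The
# fringe frequency (0.625 cycles/sample) now exceeds Nyquist, so the
# reflector folds to an aliased depth bin: (1 - 0.625) * 640 = 240.
under = fringe[::2]
under_peak = int(np.argmax(np.abs(np.fft.rfft(under))))
```

Because the fold is a deterministic consequence of the sampling pattern, a network trained on matched full/undersampled pairs can learn to undo it, which is what the framework above exploits.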

9.
ACS Sens ; 6(6): 2403-2410, 2021 06 25.
Article in English | MEDLINE | ID: mdl-34081429

ABSTRACT

Various volatile aerosols have been associated with adverse health effects; however, characterization of these aerosols is challenging due to their dynamic nature. Here, we present a method that directly measures the volatility of particulate matter (PM) using computational microscopy and deep learning. This method was applied to aerosols generated by electronic cigarettes (e-cigs), which vaporize a liquid mixture (e-liquid) that mainly consists of propylene glycol (PG), vegetable glycerin (VG), nicotine, and flavoring compounds. E-cig-generated aerosols were recorded by a field-portable computational microscope, using an impaction-based air sampler. A lensless digital holographic microscope inside this mobile device continuously records the inline holograms of the collected particles. A deep learning-based algorithm is used to automatically reconstruct the microscopic images of e-cig-generated particles from their holograms and rapidly quantify their volatility. To evaluate the effects of e-liquid composition on aerosol dynamics, we measured the volatility of the particles generated by flavorless, nicotine-free e-liquids with various PG/VG volumetric ratios, revealing a negative correlation between the particles' volatility and the volumetric ratio of VG in the e-liquid. For a given PG/VG composition, the addition of nicotine dominated the evaporation dynamics of the e-cig aerosol and the aforementioned negative correlation was no longer observed. We also revealed that flavoring additives in e-liquids significantly decrease the volatility of e-cig aerosol. The presented holographic volatility measurement technique and the associated mobile device might provide new insights on the volatility of e-cig-generated particles and can be applied to characterize various volatile PM.
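The volatility quantification reduces to tracking how fast each reconstructed particle's area shrinks over the time-lapse. A hypothetical metric (the least-squares slope of normalized area versus time; the paper's exact definition may differ):

```python
import numpy as np

def volatility(times_s, areas_um2):
    """Hypothetical volatility metric: least-squares slope of the
    normalized particle area versus time (units 1/s). More negative
    means faster evaporation, i.e., a more volatile particle."""
    a = np.asarray(areas_um2, float) / areas_um2[0]   # normalize to initial area
    slope = np.polyfit(np.asarray(times_s, float), a, 1)[0]
    return float(slope)

# An evaporating droplet shrinks; a non-volatile particle does not.
v_evap = volatility([0, 1, 2, 3], [100, 80, 60, 40])
v_stable = volatility([0, 1, 2, 3], [100, 100, 100, 100])
```

Applied per particle across the reconstructed frames, such a slope would let the negative correlation between volatility and VG ratio reported above be read directly off the fitted values.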


Subjects
Deep Learning , Electronic Nicotine Delivery Systems , Aerosols , Microscopy , Propylene Glycol
10.
Light Sci Appl ; 10(1): 62, 2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33753716

ABSTRACT

Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images that are sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4 NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework and perform 3D image reconstruction of a sample using a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.

11.
Sci Adv ; 7(13)2021 Mar.
Article in English | MEDLINE | ID: mdl-33771863

ABSTRACT

We demonstrate optical networks composed of diffractive layers trained using deep learning to encode the spatial information of objects into the power spectrum of the diffracted light, which is used to classify objects with a single-pixel spectroscopic detector. Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework in the terahertz spectrum to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also coupled this diffractive network-based spectral encoding with a shallow electronic neural network, which was trained to rapidly reconstruct the images of handwritten digits based solely on the spectral power detected at these ten distinct wavelengths, demonstrating task-specific image decompression. This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.
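Since each of the ten detection wavelengths is assigned to one class, the electronic side of the classifier reduces to an argmax over the detected spectral powers. A trivial sketch (the power values below are made up for illustration):

```python
import numpy as np

def classify(spectral_power):
    """Return the class index whose assigned wavelength carries the
    maximum detected power at the single-pixel detector."""
    return int(np.argmax(spectral_power))

# Hypothetical detected powers at the ten class-assigned wavelengths:
powers = np.array([0.10, 0.05, 0.80, 0.10, 0.02,
                   0.03, 0.07, 0.04, 0.06, 0.02])
digit = classify(powers)   # the diffractive network routed most power to class 2
```

All of the learned computation happens optically in the diffractive layers; the detector-side decision rule stays this simple.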

12.
ACS Nano ; 15(4): 6305-6315, 2021 04 27.
Article in English | MEDLINE | ID: mdl-33543919

ABSTRACT

Conventional spectrometers are limited by trade-offs set by size, cost, signal-to-noise ratio (SNR), and spectral resolution. Here, we demonstrate a deep learning-based spectral reconstruction framework using a compact and low-cost on-chip sensing scheme that is not constrained by many of the design trade-offs inherent to grating-based spectroscopy. The system employs a plasmonic spectral encoder chip containing 252 different tiles of nanohole arrays fabricated using a scalable and low-cost imprint lithography method, where each tile has a specific geometry and thus a specific optical transmission spectrum. The illumination spectrum of interest directly impinges upon the plasmonic encoder, and a CMOS image sensor captures the transmitted light without any lenses, gratings, or other optical components in between, making the entire hardware highly compact, lightweight, and field-portable. A trained neural network then reconstructs the unknown spectrum using the transmitted intensity information from the spectral encoder in a feed-forward and noniterative manner. Benefiting from the parallelization of neural networks, the average inference time per spectrum is ∼28 µs, which is much faster compared to other computational spectroscopy approaches. When blindly tested on 14,648 unseen spectra with varying complexity, our deep learning-based system identified 96.86% of the spectral peaks with an average peak localization error, bandwidth error, and height error of 0.19 nm, 0.18 nm, and 7.60%, respectively. This system is also highly tolerant to fabrication defects that may arise during the imprint lithography process, which further makes it ideal for applications that demand cost-effective, field-portable, and sensitive high-resolution spectroscopy tools.

13.
Nat Commun ; 12(1): 950, 2021 02 11.
Article in English | MEDLINE | ID: mdl-33574261

ABSTRACT

The advent of highly sensitive photodetectors and the development of photostabilization strategies have made detecting the fluorescence of single molecules a routine task in many labs around the world. However, to this day, this process requires cost-intensive optical instruments due to the truly nanoscopic signal of a single emitter. Simplifying single-molecule detection would enable many exciting applications, e.g., in point-of-care diagnostic settings, where costly equipment would be prohibitive. Here, we introduce addressable NanoAntennas with Cleared HOtSpots (NACHOS) that are scaffolded by DNA origami nanostructures and can be specifically tailored for the incorporation of bioassays. Single emitters placed in NACHOS are up to 461-fold brighter (89 ± 7-fold on average), enabling their detection with a customary smartphone camera and an 8-US-dollar objective lens. To prove the applicability of our system, we built a portable, battery-powered smartphone microscope and successfully carried out an exemplary single-molecule detection assay for DNA specific to antibiotic-resistant Klebsiella pneumoniae on the road.


Subjects
DNA/chemistry , Microscopy , Nanotechnology , Smartphone , Drug Resistance, Bacterial , Fluorescence , Humans , Klebsiella pneumoniae/drug effects , Male , Nanostructures , Point-of-Care Testing , Serum/chemistry
16.
Nat Commun ; 12(1): 37, 2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33397912

ABSTRACT

Recent advances in deep learning have been providing non-intuitive solutions to various inverse problems in optics. At the intersection of machine learning and optics, diffractive networks merge wave-optics with deep learning to design task-specific elements to all-optically perform various tasks such as object classification and machine vision. Here, we present a diffractive network, which is used to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. We demonstrate the synthesis of various pulses by designing diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. Our results demonstrate direct pulse shaping in the terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a physical transfer learning approach is presented to illustrate pulse-width tunability by replacing part of an existing network with newly trained diffractive layers, demonstrating its modularity. This learning-based diffractive pulse engineering framework can find broad applications in, e.g., communications, ultrafast imaging and spectroscopy.

17.
Light Sci Appl ; 10(1): 14, 2021 Jan 11.
Article in English | MEDLINE | ID: mdl-33431804

ABSTRACT

A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D2NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D2NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
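One plausible form of the pruning step is greedy forward selection: repeatedly add the D2NN whose inclusion most improves ensemble accuracy, and stop when no remaining candidate helps. A sketch on synthetic classifier scores (the paper's actual pruning algorithm may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_samples, n_classes = 20, 500, 10

# Synthetic stand-in: each "model" outputs class scores that weakly
# correlate with the true label, mimicking a pool of independently
# trained classifiers with diverse errors.
labels = rng.integers(0, n_classes, n_samples)
onehot = np.eye(n_classes)[labels]
scores = rng.standard_normal((n_models, n_samples, n_classes)) + 0.7 * onehot

def ensemble_accuracy(member_ids):
    """Accuracy of the score-averaging ensemble over the given members."""
    avg = scores[list(member_ids)].mean(axis=0)
    return float(np.mean(avg.argmax(axis=1) == labels))

# Greedy forward selection.
selected, best, remaining = [], 0.0, set(range(n_models))
while remaining:
    gains = {m: ensemble_accuracy(selected + [m]) for m in remaining}
    m_best = max(gains, key=gains.get)
    if gains[m_best] <= best:
        break                     # no candidate improves the ensemble
    selected.append(m_best)
    best = gains[m_best]
    remaining.discard(m_best)

single_best = max(ensemble_accuracy([m]) for m in range(n_models))
```

Because the first iteration picks the best individual model and later iterations only add strict improvements, the pruned ensemble can never score below the best single model on the selection set, mirroring the >16% inference improvement over individual D2NNs reported above.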

18.
Light Sci Appl ; 10(1): 25, 2021 Jan 28.
Article in English | MEDLINE | ID: mdl-33510131

ABSTRACT

The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances related to the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine-learning tasks through light-matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view and exhibit depth advantages in terms of their statistical inference, learning, and generalization capabilities for different image classification tasks when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.

19.
Nature ; 588(7836): 39-47, 2020 12.
Article in English | MEDLINE | ID: mdl-33268862

ABSTRACT

Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer an opportunity for optical and photonic systems to deliver practical acceleration. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

20.
Article in English | MEDLINE | ID: mdl-33223801

ABSTRACT

Optical machine learning offers advantages in terms of power efficiency, scalability and computation speed. Recently, an optical machine learning method based on Diffractive Deep Neural Networks (D2NNs) has been introduced to execute a function as the input light diffracts through passive layers, designed by deep learning using a computer. Here we introduce improvements to D2NNs by changing the training loss function and reducing the impact of vanishing gradients in the error back-propagation step. Using five phase-only diffractive layers, we numerically achieved a classification accuracy of 97.18% and 89.13% for optical recognition of handwritten digits and fashion products, respectively; using both phase and amplitude modulation (complex-valued) at each layer, our inference performance improved to 97.81% and 89.32%, respectively. Furthermore, we report the integration of D2NNs with electronic neural networks to create hybrid-classifiers that significantly reduce the number of input pixels into an electronic network using an ultra-compact front-end D2NN with a layer-to-layer distance of a few wavelengths, also reducing the complexity of the successive electronic network. Using a 5-layer phase-only D2NN jointly-optimized with a single fully-connected electronic layer, we achieved a classification accuracy of 98.71% and 90.04% for the recognition of handwritten digits and fashion products, respectively. Moreover, the input to the electronic network was compressed by >7.8 times down to 10×10 pixels. Beyond creating low-power and high-frame rate machine learning platforms, D2NN-based hybrid neural networks will find applications in smart optical imager and sensor design.
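The compression arithmetic of the hybrid scheme is easy to check: reducing a 28×28 input to a 10×10 optical output shrinks the electronic network's input by 784/100 = 7.84×, matching the ">7.8 times" figure above. A sketch with a hypothetical, randomly initialized single fully-connected layer (the paper's layer is jointly optimized with the D2NN):

```python
import numpy as np

# The diffractive front end compresses a 28x28 image to a 10x10
# intensity pattern; a single fully-connected electronic layer then
# maps it to 10 class scores.
n_in_raw, n_in_compressed, n_classes = 28 * 28, 10 * 10, 10

# Parameter counts of the single electronic layer (weights + biases).
params_compressed = n_in_compressed * n_classes + n_classes
params_raw = n_in_raw * n_classes + n_classes
compression = n_in_raw / n_in_compressed    # input-pixel reduction factor

# Inference with a hypothetical, randomly initialized layer:
rng = np.random.default_rng(4)
W = rng.standard_normal((n_in_compressed, n_classes)) * 0.01
b = np.zeros(n_classes)
optical_output = rng.uniform(size=n_in_compressed)  # stand-in for the D2NN output
class_scores = optical_output @ W + b
```

A 1,010-parameter electronic back end (versus 7,850 parameters for the same layer on raw pixels) is what makes the hybrid classifier attractive for low-power, high-frame-rate operation.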
