Results 1 - 5 of 5
1.
Opt Express ; 30(2): 1546-1554, 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35209312

ABSTRACT

Deep-brain microscopy is strongly limited by the size of the imaging probe, both in terms of achievable resolution and potential trauma due to surgery. Here, we show that a segment of an ultra-thin multi-mode fiber (cannula) can replace the bulky microscope objective inside the brain. By creating a self-consistent deep neural network that is trained to reconstruct anthropocentric images from the raw signal transported by the cannula, we demonstrate single-cell resolution (< 10 µm), a depth-sectioning resolution of 40 µm, and a field of view of 200 µm, all with green-fluorescent-protein-labelled neurons imaged at depths as large as 1.4 mm from the brain surface. Since ground-truth images at these depths are challenging to obtain in vivo, we propose a novel ensemble method that averages the reconstructed images from disparate deep-neural-network architectures. Finally, we demonstrate dynamic imaging of moving GCaMP-labelled C. elegans worms. Our approach dramatically simplifies deep-brain microscopy.


Subjects
Brain/diagnostic imaging; Machine Learning; Microscopy, Fluorescence/methods; Neuroimaging/methods; Animals; Caenorhabditis elegans/cytology; Cells, Cultured; Green Fluorescent Proteins/metabolism; Image Processing, Computer-Assisted/methods; Mice; Minimally Invasive Surgical Procedures; Neural Networks, Computer; Neurons/cytology; Neurons/metabolism
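
A minimal sketch of the ensemble idea described in record 1, assuming PyTorch and placeholder models: each independently trained network maps the raw cannula signal to a human-viewable image, and the final reconstruction is the pixel-wise average of their outputs. Model classes, weight loading, and tensor shapes are assumptions, not the authors' code.

import torch

@torch.no_grad()
def ensemble_reconstruct(models, raw_signal):
    # models: list of torch.nn.Module, each trained to map the raw cannula
    #         signal to a reconstructed image
    # raw_signal: tensor of shape (1, C, H, W) recorded through the cannula
    recons = []
    for model in models:
        model.eval()
        recons.append(model(raw_signal))      # each prediction: (1, 1, H_img, W_img)
    stacked = torch.stack(recons, dim=0)      # (N_models, 1, 1, H_img, W_img)
    return stacked.mean(dim=0)                # pixel-wise ensemble average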
2.
Appl Opt ; 60(10): B135-B140, 2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33798147

ABSTRACT

We experimentally demonstrate a camera whose primary optic is a cannula/needle (diameter = 0.22 mm, length = 12.5 mm) that acts as a light pipe, transporting light intensity from an object plane (35 cm away) to its opposite end. Deep neural networks (DNNs) are used to reconstruct color and grayscale images with a field of view of 18° and an angular resolution of ∼0.4°. We showed a large effective demagnification of 127×. Most interestingly, we showed that such a camera can achieve close to diffraction-limited performance, with an effective numerical aperture of 0.045, a depth of focus of ∼16 µm, and resolution close to the sensor pixel size (3.2 µm). When trained on images with depth information, the DNN can create depth maps. Finally, we show DNN-based classification of the EMNIST dataset before and after image reconstruction. The former could be useful for imaging with enhanced privacy.
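
A hedged sketch of the privacy-oriented idea in record 2: classifying EMNIST characters directly from raw sensor frames, without first reconstructing a human-viewable image. The small CNN below is an illustrative stand-in, not the paper's architecture, and the input shape is an assumption.

import torch.nn as nn

class RawSensorClassifier(nn.Module):
    # Small CNN that classifies characters directly from raw sensor frames,
    # i.e. before any image reconstruction.
    def __init__(self, n_classes=47):          # EMNIST "balanced" split has 47 classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, raw_frame):               # raw_frame: (B, 1, H, W) sensor data
        return self.head(self.features(raw_frame))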

3.
Optica ; 9(1): 26-31, 2022 Jan 20.
Article in English | MEDLINE | ID: mdl-37377640

ABSTRACT

Many deep-learning approaches to solving computational imaging problems have proven successful by relying solely on data. However, when applied to the raw output of a bare (optics-free) image sensor, these methods fail to reconstruct target images that are structurally diverse. In this work we propose a self-consistent supervised model that learns not only the inverse but also the forward model, better constraining the predictions by encouraging the network to model an ideal bijective imaging system. To do this, we employ cycle consistency alongside traditional reconstruction losses, both of which we show are needed for incoherent optics-free image reconstruction. By eliminating all optics, we demonstrate imaging with the thinnest camera possible.
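
A minimal sketch, assuming PyTorch and placeholder inverse/forward networks, of how cycle-consistency terms can be combined with a traditional reconstruction loss in the spirit of record 3; the MSE choice and the single cycle weight are assumptions, not the paper's exact loss.

import torch.nn.functional as F

def self_consistent_losses(inv_net, fwd_net, sensor, target, w_cycle=1.0):
    # inv_net: inverse model, raw sensor measurement -> image
    # fwd_net: forward model, image -> raw sensor measurement
    recon = inv_net(sensor)                                         # predicted image
    loss_recon = F.mse_loss(recon, target)                          # traditional reconstruction loss
    loss_cycle_img = F.mse_loss(inv_net(fwd_net(target)), target)   # image -> sensor -> image
    loss_cycle_meas = F.mse_loss(fwd_net(recon), sensor)            # sensor -> image -> sensor
    return loss_recon + w_cycle * (loss_cycle_img + loss_cycle_meas)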

4.
Opt Contin ; 1(9): 2091-2099, 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-37378086

ABSTRACT

A solid-glass cannula serves as a micro-endoscope that can deliver excitation light deep inside tissue while also collecting the emitted fluorescence. We then utilize deep neural networks to reconstruct images from the collected intensity distributions. By using a commercially available dual-cannula probe and training a separate deep neural network for each cannula, we effectively double the field of view compared to prior work. We demonstrate ex vivo imaging of fluorescent beads and brain slices, and in vivo imaging from whole brains. We clearly resolve 4 µm beads, with a field of view of 0.2 mm (diameter) from each cannula, and produce images from a depth of ~1.2 mm in the whole brain, currently limited primarily by the labeling. Since no scanning is required, fast widefield fluorescence imaging becomes possible, limited primarily by the brightness of the fluorophores, the collection efficiency of our system, and the frame rate of the camera.
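
A minimal sketch of the dual-cannula scheme described in record 4, assuming PyTorch and placeholder per-cannula networks: each cannula's raw signal is reconstructed by its own trained DNN, and the two 0.2 mm fields of view are placed side by side to cover the doubled field of view. Shapes and the simple concatenation are assumptions for illustration.

import torch

@torch.no_grad()
def dual_cannula_image(net_a, net_b, signal_a, signal_b):
    # net_a, net_b: networks trained separately for cannula A and cannula B
    # signal_a, signal_b: raw intensity patterns collected through each cannula
    fov_a = net_a(signal_a)                    # (1, 1, H, W) image from cannula A
    fov_b = net_b(signal_b)                    # (1, 1, H, W) image from cannula B
    return torch.cat([fov_a, fov_b], dim=-1)   # side-by-side montage of the two FOVs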

5.
OSA Contin ; 3(9): 2423-2428, 2020 Sep 15.
Article in English | MEDLINE | ID: mdl-33364554

ABSTRACT

We demonstrate optics-free imaging of complex color and monochrome QR codes using a bare image sensor and trained artificial neural networks (ANNs). The ANN is trained to interpret the raw sensor data for human visualization. The image sensor is placed at a specified gap (1 mm, 5 mm, or 10 mm) from the QR code. We studied the robustness of our approach by experimentally testing the ANN outputs under perturbations of this gap and of the translational and rotational alignment of the QR code relative to the image sensor. Our demonstration opens up the possibility of using completely optics-free, non-anthropocentric cameras for application-specific imaging of complex, non-sparse objects.
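
An illustrative sketch of the robustness test described in record 5, assuming a trained PyTorch ANN; `capture_raw_frame` and `reconstruction_error` are hypothetical stand-ins for the experimental capture and scoring steps, and the perturbation values are assumptions.

import torch

@torch.no_grad()
def gap_robustness(ann, nominal_gap_mm, perturbations_mm,
                   capture_raw_frame, reconstruction_error):
    # capture_raw_frame(gap_mm): hypothetical stand-in returning a raw sensor frame
    # reconstruction_error(img): hypothetical stand-in scoring the ANN output
    errors = {}
    for dz in perturbations_mm:                               # e.g. [-0.2, -0.1, 0.0, 0.1, 0.2]
        raw = capture_raw_frame(gap_mm=nominal_gap_mm + dz)   # perturbed sensor-to-code gap
        qr_estimate = ann(raw)                                # ANN-interpreted QR image
        errors[dz] = reconstruction_error(qr_estimate)        # e.g. error vs. ground truth
    return errors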
