Results 1 - 4 of 4

1.
J Microsc ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38563195

ABSTRACT

Fibre bundle (FB)-based endoscopes are indispensable in biology and medical science due to their minimally invasive nature. However, resolution and contrast for fluorescence imaging are limited by characteristic features of the FBs, such as low numerical aperture (NA) and individual fibre core sizes. In this study, we improved the resolution and contrast of sample fluorescence images acquired using in-house fabricated high-NA FBs by utilising generative adversarial networks (GANs). In order to train our deep learning model, we built an FB-based multifocal structured illumination microscope (MSIM) based on a digital micromirror device (DMD), which improves the resolution and the contrast substantially compared to basic FB-based fluorescence microscopes. After network training, the GAN model, employing image-to-image translation techniques, effectively transformed wide-field images into high-resolution MSIM images without the need for any additional optical hardware. The results demonstrated that GAN-generated outputs significantly enhanced both contrast and resolution compared to the original wide-field images. These findings highlight the potential of GAN-based models trained using MSIM data to enhance resolution and contrast in wide-field imaging for fibre bundle-based fluorescence microscopy.

Lay Description: Fibre bundle (FB) endoscopes are essential in biology and medicine but suffer from limited resolution and contrast for fluorescence imaging. Here we improved these limitations using high-NA FBs and generative adversarial networks (GANs). We trained a GAN model with data from an FB-based multifocal structured illumination microscope (MSIM) to enhance resolution and contrast without additional optical hardware. Results showed significant enhancement in contrast and resolution, showcasing the potential of GAN-based models for fibre bundle-based fluorescency microscopy.
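The image-to-image translation objective described in the abstract can be illustrated with a minimal numpy sketch of a pix2pix-style loss: an adversarial term pushing generated images toward the MSIM distribution plus an L1 term enforcing pixel-wise fidelity. The generator and discriminator here are deliberately trivial stand-in stubs, and the patch shapes and lambda weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_stub(x):
    """Stand-in for a PatchGAN-style discriminator: probability that x is real."""
    return 1.0 / (1.0 + np.exp(-x.mean()))

def generator_stub(widefield):
    """Stand-in for the GAN generator (a CNN in practice): identity plus a bias."""
    return widefield + 0.1

def pix2pix_loss(widefield, msim_target, lam=100.0):
    """Adversarial term + lambda-weighted L1 reconstruction term."""
    fake = generator_stub(widefield)
    d_fake = discriminator_stub(fake)
    adv = -np.log(d_fake + 1e-12)           # generator wants D(fake) -> 1
    l1 = np.abs(fake - msim_target).mean()  # pixel-wise fidelity to the MSIM target
    return adv + lam * l1

wf = rng.random((64, 64))    # simulated wide-field patch
msim = rng.random((64, 64))  # simulated MSIM ground-truth patch
loss = pix2pix_loss(wf, msim)
```

With a perfect target (the generator's own output) the L1 term vanishes and only the adversarial term remains, which is why training drives the generated images toward the paired MSIM ground truth.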

2.
IEEE J Biomed Health Inform ; 26(11): 5575-5583, 2022 11.
Article in English | MEDLINE | ID: mdl-36054399

ABSTRACT

Precise and quick monitoring of key cytometric features such as cell count, size, morphology, and DNA content is crucial in life science applications. Traditionally, image cytometry relies on visual inspection of hemocytometers, an approach that is error-prone due to operator subjectivity. Recently, deep learning approaches have emerged as powerful tools enabling quick and accurate image cytometry applicable to different cell types. Leading to simpler, compact, and affordable solutions, these approaches have revealed image cytometry as a viable alternative to flow cytometry or Coulter counting. In this study, we demonstrate a modular deep learning system, DeepCAN, providing a complete solution for automated cell counting and viability analysis. DeepCAN employs three neural network blocks, called Parallel Segmenter, Cluster CNN, and Viability CNN, trained for initial segmentation, cluster separation, and viability analysis, respectively. The Parallel Segmenter and Cluster CNN blocks achieve accurate segmentation of individual cells, while the Viability CNN block performs viability classification. A modified U-Net, a well-known deep neural network model for bioimage analysis, is used in Parallel Segmenter, while the LeNet-5 architecture and its modified version, Opto-Net, are used for Cluster CNN and Viability CNN, respectively. We trained the Parallel Segmenter using 15 images of A2780 cells and 5 images of yeast cells, containing 14,742 individual cell images in total. Similarly, 6101 and 5900 A2780 cell images were employed for training the Cluster CNN and Viability CNN models, respectively. A further 2514 individual A2780 cell images were used to test the overall segmentation performance of Parallel Segmenter combined with Cluster CNN, revealing high Precision/Recall/F1-Score values of 96.52%/96.45%/98.06%. The cell counting/viability performance of DeepCAN was tested with A2780 (2514 cells), A549 (601 cells), Colo (356 cells), and MDA-MB-231 (887 cells) cell images, revealing high analysis accuracies of 96.76%/99.02%, 93.82%/95.93%, 92.18%/97.90%, and 85.32%/97.40%, respectively.


Subject(s)
Deep Learning , Ovarian Neoplasms , Humans , Female , Image Processing, Computer-Assisted/methods , Cell Line, Tumor , Neural Networks, Computer
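The counting step that follows segmentation in a DeepCAN-style pipeline can be sketched without the networks themselves: once a binary mask is produced (here a hand-built toy mask standing in for the Parallel Segmenter / Cluster CNN output), connected-component labelling yields the cell count and per-cell sizes. This is a minimal classical illustration, not the paper's method.

```python
import numpy as np
from scipy import ndimage

# Toy binary segmentation mask standing in for network output
mask = np.zeros((20, 20), dtype=bool)
mask[2:5, 2:5] = True      # cell 1 (3x3 = 9 px)
mask[10:14, 10:13] = True  # cell 2 (4x3 = 12 px)
mask[16:18, 3:6] = True    # cell 3 (2x3 = 6 px)

# Connected-component labelling: each isolated blob gets an integer label
labels, n_cells = ndimage.label(mask)

# Per-cell pixel counts, useful for size-based filtering of debris
sizes = ndimage.sum(mask, labels, index=range(1, n_cells + 1))

print(n_cells)         # 3
print(sizes.tolist())  # [9.0, 12.0, 6.0]
```

In a real pipeline the viability classifier would then be applied to each labelled crop; here the labelling alone already gives the count reported per cell line in the abstract.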
3.
PLoS One ; 17(9): e0273990, 2022.
Article in English | MEDLINE | ID: mdl-36084054

ABSTRACT

When combined with computational approaches, fluorescence imaging becomes one of the most powerful tools in biomedical research. Techniques such as structured illumination microscopy (SIM) reconstruction make it possible to achieve resolution beyond the diffraction limit and to improve the performance and flexibility of high-resolution imaging systems. In this study, the hardware and software implementation of an LED-based super-resolution imaging system using SIM with GPU-accelerated parallel image reconstruction is presented. The sample is illuminated with two-dimensional sinusoidal patterns at various orientations and lateral phase shifts generated using a digital micromirror device (DMD). SIM reconstruction is carried out in frequency space using parallel CUDA kernel functions. Furthermore, a general-purpose toolbox for the parallel image reconstruction algorithm is presented, along with an infrastructure that allows users to perform parallel operations on images without writing any CUDA kernel code. The image reconstruction algorithm was run separately on a CPU and a GPU, with two CPU implementations developed for comparison: a single-threaded algorithm and a multi-threaded OpenMP algorithm. SIM reconstruction of 1024 × 1024 px images was achieved in 1.49 s using GPU computation, a speed-up of ∼28× and ∼20× over the single-threaded and multi-threaded OpenMP CPU computations, respectively.


Subject(s)
Lighting , Microscopy , Algorithms , Image Processing, Computer-Assisted/methods , Software
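The band-separation step at the heart of SIM reconstruction can be shown in a 1-D numpy sketch: three images of a sample under sinusoidal illumination with phase shifts 0, 2π/3, and 4π/3 are combined to isolate the unmodulated (wide-field) and modulated (frequency-shifted) components. The toy sample and illumination frequency are illustrative assumptions; a full reconstruction would additionally shift and recombine the bands in Fourier space, which is the part the paper parallelises with CUDA kernels.

```python
import numpy as np

x = np.linspace(0, 1, 512, endpoint=False)
sample = 1.0 + 0.5 * np.sin(2 * np.pi * 7 * x)   # toy fluorescent sample
f_ill = 60.0                                      # illumination frequency (assumed)
phases = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])

# Three raw acquisitions: sample times phase-shifted sinusoidal illumination
raw = np.stack([sample * (1 + np.cos(2 * np.pi * f_ill * x + p))
                for p in phases])

# Averaging the phase-shifted frames cancels the modulation -> wide-field image
widefield = raw.mean(axis=0)

# Complex phase weighting isolates the modulated band: (S/2) * exp(i*2*pi*f_ill*x)
modulated = (raw * np.exp(-1j * phases)[:, None]).mean(axis=0)

assert np.allclose(widefield, sample)             # exact for 3 equispaced phases
assert np.allclose(np.abs(modulated), sample / 2)
```

Because the three phases are equally spaced, the separation is exact; in practice each pixel of each orientation undergoes this same small linear solve, which is why the workload maps so well onto per-pixel GPU kernels.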
4.
Rev Sci Instrum ; 88(1): 013705, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28147654

ABSTRACT

We describe a novel radiation pressure based cantilever excitation method for imaging in dynamic mode atomic force microscopy (AFM). Piezo-excitation is the most common method for cantilever excitation; however, it may cause spurious resonance peaks, so direct excitation of the cantilever plays a crucial role in AFM imaging. A fiber optic interferometer with a 1310 nm laser was used both to excite the cantilever at resonance and to measure its deflection in a commercial low temperature atomic force microscope/magnetic force microscope (AFM/MFM) from NanoMagnetics Instruments. The laser power was modulated at the cantilever's resonance frequency by a digital phase-locked loop (PLL). With the laser beam typically modulated by ∼500 µW, an oscillation amplitude of ∼141.8 nm peak-to-peak is obtained under moderate vacuum at temperatures between 4 and 300 K. We demonstrate the performance of radiation pressure excitation in AFM/MFM by imaging, for the first time, atomic steps in graphite, magnetic domains in CoPt multilayers between 4 and 300 K, and the Abrikosov vortex lattice in a BSCCO(2212) single crystal at 4 K.
