Results 1 - 5 of 5
1.
BMC Bioinformatics; 22(1): 325, 2021 Jun 15.
Article in English | MEDLINE | ID: mdl-34130628

ABSTRACT

BACKGROUND: Automated segmentation of nuclei in microscopic images has been conducted to enhance throughput in pathological diagnostics and biological research. Segmentation accuracy and speed have been significantly enhanced with the advent of convolutional neural networks. A barrier to the broad application of neural networks to nuclei segmentation is the necessity to train the network on a set of application-specific images and image labels. Previous works have attempted to create broadly trained networks for universal nuclei segmentation; however, such networks do not work on all imaging modalities, and the best results are still commonly obtained when the network is retrained on user-specific data. Stochastic optical reconstruction microscopy (STORM)-based super-resolution fluorescence microscopy has opened a new avenue to image nuclear architecture at nanoscale resolution. Due to the large size and discontinuous features typical of super-resolution images, automatic nuclei segmentation can be difficult. In this study, we apply commonly used networks (Mask R-CNN and UNet architectures) to the task of segmenting super-resolution images of nuclei. First, we assess whether networks broadly trained on conventional fluorescence microscopy datasets can accurately segment super-resolution images. Then, we compare the resulting segmentations with those obtained using networks trained directly on our super-resolution data. Finally, we attempt to optimize and compare segmentation accuracy using three different neural network architectures.

RESULTS: The results indicate that super-resolution images are not broadly compatible with neural networks trained on conventional bright-field or fluorescence microscopy images. When the networks were trained on super-resolution data, however, we attained nuclei segmentation accuracies (F1-score) in excess of 0.8, comparable to past results for nuclei segmentation on conventional fluorescence microscopy images. Overall, we achieved the best results with the Mask R-CNN architecture.

CONCLUSIONS: We found that convolutional neural networks are powerful tools capable of accurately and quickly segmenting localization-based super-resolution microscopy images of nuclei. While broadly trained and widely applicable segmentation algorithms are desirable for quick use with minimal input, optimal results are still obtained when the network is both trained and tested on visually similar images. We provide a set of Colab notebooks to disseminate the software to the broad scientific community ( https://github.com/YangLiuLab/Super-Resolution-Nuclei-Segmentation ).
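
The F1-scores above 0.8 reported here refer to object-level nuclei detection accuracy. As a rough, generic illustration (not the code from the linked Colab notebooks), one way to compute such an object-level F1 score from ground-truth and predicted instance label images, matching nuclei at an IoU threshold of 0.5, is sketched below in Python:

# Minimal sketch: object-level F1 for nuclei instance segmentation at IoU >= 0.5.
# Generic helper, not the authors' released code; assumes label images in which
# 0 is background and each nucleus carries a unique integer id.
import numpy as np

def instance_f1(gt_labels: np.ndarray, pred_labels: np.ndarray, iou_thresh: float = 0.5) -> float:
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    matched_gt, matched_pred = set(), set()
    for g in gt_ids:
        g_mask = gt_labels == g
        # Only predictions overlapping this nucleus are IoU candidates.
        for p in np.unique(pred_labels[g_mask]):
            if p == 0 or p in matched_pred:
                continue
            p_mask = pred_labels == p
            iou = np.logical_and(g_mask, p_mask).sum() / np.logical_or(g_mask, p_mask).sum()
            if iou >= iou_thresh:
                matched_gt.add(g)
                matched_pred.add(p)
                break
    tp = len(matched_gt)
    fp = len(pred_ids) - len(matched_pred)
    fn = len(gt_ids) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0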


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Microscopy, Fluorescence
2.
Nat Commun; 11(1): 1899, 2020 Apr 20.
Article in English | MEDLINE | ID: mdl-32313005

ABSTRACT

Genomic DNA is folded into a higher-order structure that regulates transcription and maintains genomic stability. Although progress has been made in understanding the biochemical characteristics of epigenetic modifications in cancer, the in situ higher-order folding of chromatin structure during malignant transformation remains largely unknown. Here, using stochastic optical reconstruction microscopy (STORM) optimized for pathological tissue (PathSTORM), we uncover a gradual decompaction and fragmentation of higher-order chromatin folding throughout all stages of carcinogenesis in multiple tumor types, even prior to tumor formation. Our integrated imaging, genomic, and transcriptomic analyses reveal the functional consequences of this disruption: enhanced transcriptional activity and impaired genomic stability. We also demonstrate the potential of imaging higher-order chromatin disruption to detect high-risk precursors that cannot be distinguished by conventional pathology. Taken together, our findings reveal gradual decompaction and fragmentation of higher-order chromatin structure as an enabling characteristic of early carcinogenesis that facilitates malignant transformation, a finding that may improve cancer diagnosis, risk stratification, and prevention.
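
The abstract does not detail the image-analysis pipeline. Purely as an illustrative sketch of how clustering of chromatin might be quantified from STORM localization coordinates, one common approach is density-based clustering such as DBSCAN; the parameter values below are placeholders, not values from the paper:

# Illustrative sketch only (not the PathSTORM pipeline): group STORM localization
# coordinates (in nm) into nanodomains with DBSCAN and report simple statistics,
# one way to compare compact versus fragmented chromatin organization.
import numpy as np
from sklearn.cluster import DBSCAN

def nanodomain_stats(xy_nm: np.ndarray, eps_nm: float = 50.0, min_samples: int = 10) -> dict:
    labels = DBSCAN(eps=eps_nm, min_samples=min_samples).fit_predict(xy_nm)
    cluster_ids = [c for c in np.unique(labels) if c != -1]  # -1 marks noise points
    sizes = np.array([np.sum(labels == c) for c in cluster_ids])
    return {
        "n_domains": len(cluster_ids),
        "mean_localizations_per_domain": float(sizes.mean()) if len(sizes) else 0.0,
        "fraction_clustered": float(np.sum(labels != -1) / len(labels)) if len(labels) else 0.0,
    }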


Subjects
Carcinogenesis/pathology; Chromatin/pathology; Image Processing, Computer-Assisted; Microscopy, Fluorescence/methods; Neoplasms/diagnostic imaging; Animals; Biophysics; Epigenesis, Genetic; Genome; Heterochromatin; Humans; Male; Mice; Neoplasms/pathology; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Transcriptome
3.
Int J Comput Assist Radiol Surg; 14(2): 203-213, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30291592

ABSTRACT

PURPOSE: In this paper, we present a vein imaging system that combines reflectance-mode visible-spectrum (VIS) images with transmission-mode near-infrared (NIR) images in real time. Clear vessel localization is achieved with this combined NIR-VIS dual-modal imaging.

METHODS: Transmission- and reflectance-mode optical instrumentation is used to combine the VIS and NIR images, and two methods of displaying the combined images are demonstrated. We conducted experiments to determine the system's resolution, alignment accuracy, and depth penetration. Vein counts were taken from the hands of test subjects using the system and compared with vein counts obtained by visual inspection.

RESULTS: The results indicate that the system improves vein detection in the human hand, detecting veins smaller than 0.5 mm in diameter at any working distance and veins 0.25 mm in diameter at an optimal working distance of about 30 cm. The system has also been shown to clearly detect silicone vessels filled with artificial blood, 2, 1, and 0.5 mm in diameter, under 3 mm of tissue. In a study involving 25 human subjects, we demonstrated that vein visibility was significantly increased using our system.

CONCLUSIONS: The results indicate that the device is a high-resolution solution for near-surface venous imaging. This technology can be applied to IV placement, morphological analysis for disease-state detection, and biometric analysis.
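
As a rough illustration of the kind of NIR-VIS fusion described above (not the authors' implementation), the following Python/OpenCV sketch overlays a transmission-mode NIR vein map on a reflectance-mode VIS frame. The file names, blur kernel, and blending weights are placeholders, and the two frames are assumed to be co-registered and equal in size:

# Illustrative sketch only: fuse a reflectance-mode visible (VIS) frame with a
# transmission-mode NIR frame by highlighting dark NIR vein pixels as a green overlay.
import cv2
import numpy as np

vis = cv2.imread("hand_vis.png")                        # BGR reflectance image (placeholder path)
nir = cv2.imread("hand_nir.png", cv2.IMREAD_GRAYSCALE)  # transmission NIR image (placeholder path)

# Veins absorb NIR light and appear dark; invert and stretch contrast to get a vein map.
vein_map = cv2.normalize(255 - nir, None, 0, 255, cv2.NORM_MINMAX)
vein_map = cv2.GaussianBlur(vein_map, (5, 5), 0)

# Build a green overlay weighted by vein intensity and alpha-blend it with the VIS frame.
overlay = np.zeros_like(vis)
overlay[:, :, 1] = vein_map
fused = cv2.addWeighted(vis, 0.7, overlay, 0.3, 0)
cv2.imwrite("hand_fused.png", fused)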


Subjects
Image Processing, Computer-Assisted/methods; Optical Imaging/methods; Veins/diagnostic imaging; Humans
4.
Methods Mol Biol; 1444: 85-95, 2016.
Article in English | MEDLINE | ID: mdl-27283420

ABSTRACT

Intraoperative imaging is an invaluable tool in many surgical procedures. We have developed a wearable stereoscopic imaging and display system called the Integrated Imaging Goggle, which provides real-time multimodal image guidance. With the Integrated Imaging Goggle, wide-field-of-view fluorescence imaging is tracked and registered with intraoperative ultrasound imaging and preoperative tomography-based surgical navigation to provide integrated multimodal imaging capabilities in real time. Here we describe the system instrumentation and the methods of using the Integrated Imaging Goggle to guide surgeries.
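
As an illustrative sketch only (not the Integrated Imaging Goggle software), one way a single registration-and-overlay step for a fluorescence/color frame pair could look is shown below. The identity homography, file names, and threshold are placeholders standing in for a real camera calibration:

# Minimal sketch: warp a fluorescence-camera frame onto the color camera's view with
# a pre-calibrated homography, then overlay the fluorescence signal in pseudocolor.
import cv2
import numpy as np

color = cv2.imread("surgical_field_color.png")                  # BGR reflectance frame (placeholder path)
fluor = cv2.imread("surgical_field_fluor.png", cv2.IMREAD_GRAYSCALE)

H = np.eye(3)  # placeholder: in practice obtained from multi-camera calibration
fluor_reg = cv2.warpPerspective(fluor, H, (color.shape[1], color.shape[0]))

# Pseudocolor the registered fluorescence and blend it only where signal is present.
heat = cv2.applyColorMap(fluor_reg, cv2.COLORMAP_JET)
mask = fluor_reg > 50  # placeholder signal threshold
fused = color.copy()
fused[mask] = cv2.addWeighted(color, 0.4, heat, 0.6, 0)[mask]
cv2.imwrite("guidance_overlay.png", fused)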


Subjects
Carcinoma, Hepatocellular/diagnostic imaging; Liver Neoplasms/diagnostic imaging; Multimodal Imaging/instrumentation; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Animals; Carcinoma, Hepatocellular/surgery; Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Intraoperative Care; Liver Neoplasms/surgery; Multimodal Imaging/methods; Tomography, Optical/methods; Ultrasonography/methods; Wearable Electronic Devices
5.
PLoS One; 10(11): e0141956, 2015.
Article in English | MEDLINE | ID: mdl-26529249

ABSTRACT

We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems, called Integrated Imaging Goggles, for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. Real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized, and tested in surgeries on biological tissues ex vivo. We found that the system can detect fluorescent targets containing indocyanine green at concentrations as low as 60 nM and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken tissue. The Integrated Imaging Goggle is novel in four aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large-FOV and microscopic imaging simultaneously,
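
As a minimal illustrative sketch (not the prototype's code), composing a side-by-side stereoscopic frame from two camera feeds for a goggle display could look like the following; the camera indices and per-eye resolution are assumptions, not the system's specifications:

# Minimal sketch: build a side-by-side stereoscopic frame from left/right cameras
# for a head-mounted display. Press 'q' to quit.
import cv2
import numpy as np

cap_left, cap_right = cv2.VideoCapture(0), cv2.VideoCapture(1)  # assumed camera indices
EYE_W, EYE_H = 960, 1080  # per-eye resolution of a hypothetical 1920x1080 display

while True:
    ok_l, left = cap_left.read()
    ok_r, right = cap_right.read()
    if not (ok_l and ok_r):
        break
    # Resize each eye's view and place the views side by side for the goggle display.
    frame = np.hstack([cv2.resize(left, (EYE_W, EYE_H)),
                       cv2.resize(right, (EYE_W, EYE_H))])
    cv2.imshow("stereo_goggle_feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap_left.release(); cap_right.release(); cv2.destroyAllWindows()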


Subjects
Depth Perception; Image Processing, Computer-Assisted/methods; Surgery, Computer-Assisted/instrumentation; Surgery, Computer-Assisted/methods; Telemedicine/instrumentation; Telemedicine/methods; Animals; Chickens; Humans; Image Processing, Computer-Assisted/instrumentation; Neoplasms/surgery; Neoplasms/veterinary; Poultry Diseases/surgery