Results 1 - 10 of 10
1.
Arterioscler Thromb Vasc Biol ; 44(7): 1584-1600, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38779855

ABSTRACT

BACKGROUND: Analysis of vascular networks is an essential step to unravel the mechanisms regulating the physiological and pathological organization of blood vessels. So far, most analyses are performed on 2-dimensional projections of 3-dimensional (3D) networks, a strategy with several obvious shortcomings: it does not capture the true geometry of the vasculature, and it introduces artifacts in vessel connectivity. These limitations are accepted in the field because manual analysis of 3D vascular networks is a laborious and complex process that is often prohibitive for large volumes. METHODS: To overcome these issues, we developed 3DVascNet, deep learning-based software for automated segmentation and quantification of 3D retinal vascular networks. 3DVascNet performs segmentation with a deep learning model and quantifies vascular morphometric parameters such as vessel density, branch length, vessel radius, and branching point density. We tested the performance of 3DVascNet using a large data set of 3D microscopy images of mouse retinal blood vessels. RESULTS: We demonstrated that 3DVascNet efficiently segments vascular networks in 3D and that its vascular morphometric parameters capture phenotypes previously detected by manual segmentation and quantification in 2 dimensions. In addition, we showed that, despite being trained on retinal images, 3DVascNet generalizes well and successfully segments images from other data sets and organs. CONCLUSIONS: Overall, we present 3DVascNet, freely available software with a user-friendly graphical interface for researchers with no programming experience, which will greatly facilitate the study of vascular networks in 3D in health and disease. Moreover, the source code of 3DVascNet is publicly available, so it can easily be extended by other users for the analysis of other 3D vascular networks.
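The abstract does not show how the morphometric parameters are computed; the sketch below is an illustrative example (not the 3DVascNet implementation) of deriving two of them, vessel density and branching point density, from a binary 3D segmentation mask, assuming a recent scikit-image whose skeletonize handles 3D input.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_density(mask_3d):
    """Fraction of voxels labelled as vessel in the analysed volume."""
    mask_3d = mask_3d.astype(bool)
    return mask_3d.sum() / mask_3d.size

def branching_point_density(mask_3d, voxel_volume_um3=1.0):
    """Branching points per unit volume, estimated from the vessel skeleton:
    a skeleton voxel with more than two skeleton neighbours (26-connectivity)
    is counted as a branching point."""
    skeleton = skeletonize(mask_3d.astype(bool))
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0
    neighbours = ndimage.convolve(skeleton.astype(int), kernel, mode="constant")
    branch_points = skeleton & (neighbours > 2)
    return branch_points.sum() / (mask_3d.size * voxel_volume_um3)
```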


Subjects
Deep Learning, Three-Dimensional Imaging, Retinal Vessels, Software, Animals, Retinal Vessels/diagnostic imaging, Three-Dimensional Imaging/methods, Mice, Inbred C57BL Mice, Computer-Assisted Image Interpretation, Automation, Reproducibility of Results
2.
Article in English | MEDLINE | ID: mdl-38083101

ABSTRACT

In recent years, deep learning models have been extensively applied to the segmentation of microscopy images to efficiently and accurately quantify and characterize cells, nuclei, and other biological structures. However, these are typically supervised models that require large amounts of manually annotated training data to create the ground truth. Since manual annotation of these segmentation masks is difficult and time-consuming, especially in 3D, we sought to develop a self-supervised segmentation method. Our method is based on an image-to-image translation model, the CycleGAN, which we use to learn the mapping from the fluorescence microscopy image domain to the segmentation domain. We exploit the fact that CycleGAN does not require paired data and train the model using synthetic masks instead of manually labeled masks. These masks are created automatically from the approximate shapes and sizes of the nuclei and Golgi, so no manual image segmentation is needed in our proposed approach. The experimental results obtained with the proposed CycleGAN model are compared with two well-known supervised segmentation models: 3D U-Net [1] and Vox2Vox [2]. The CycleGAN model achieved a Dice coefficient of 78.07% for the nuclei class and 67.73% for the Golgi class, a difference of only 1.4% and 0.61% compared with the best results obtained with the supervised models Vox2Vox and 3D U-Net, respectively. Moreover, training and testing the CycleGAN model is about 5.78 times faster than the 3D U-Net model. Our results show that, without any manual annotation effort, we can train a model that performs similarly to supervised models for the segmentation of organelles in 3D microscopy images. Clinical relevance: Segmentation of cell organelles in microscopy images is an important step in extracting features such as the morphology, density, size, shape, and texture of these organelles. These quantitative analyses provide valuable information to classify and diagnose diseases and to study biological processes.
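As a minimal sketch of the evaluation metric reported above, the per-class Dice coefficient can be computed from predicted and ground-truth label volumes as follows; the label encoding in the usage comment is illustrative, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, class_id):
    """Dice coefficient for one class: 2*|P ∩ T| / (|P| + |T|)."""
    p = (pred == class_id)
    t = (target == class_id)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom

# Example usage with label volumes where 1 = nuclei and 2 = Golgi (hypothetical):
# dice_nuclei = dice_coefficient(pred_volume, gt_volume, class_id=1)
# dice_golgi  = dice_coefficient(pred_volume, gt_volume, class_id=2)
```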


Subjects
Cell Nucleus, Masks, Fluorescence Microscopy
3.
PLoS One ; 18(11): e0294793, 2023.
Article in English | MEDLINE | ID: mdl-37976273

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0280998.].

4.
PLoS One ; 18(2): e0280998, 2023.
Article in English | MEDLINE | ID: mdl-36780440

ABSTRACT

Butterflies are increasingly becoming model insects in which basic questions about the diversity of their color patterns are investigated. Some of these color patterns consist of simple spots and eyespots. To accelerate the pace of research on these discrete, circular pattern elements, we trained distinct convolutional neural networks (CNNs) for the detection and measurement of butterfly spots and eyespots in digital images of butterfly wings. We compared the automatically detected and segmented spot/eyespot areas with those annotated manually. These methods were able to identify and distinguish marginal eyespots from spots, as well as distinguish these patterns from less symmetrical patches of color. In addition, the measurements of an eyespot's central area and surrounding rings were comparable with the manual measurements. These CNNs improve on previous methods for eyespot/spot detection and measurement because it is not necessary to mathematically define the feature of interest; all that is needed is to point out the images that contain those features to train the CNN.
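As a small illustration of the measurement step, areas of an eyespot's centre and rings can be read off a segmentation mask once a pixel size is known; the label convention below is hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical label convention: 0 = background, 1 = eyespot centre,
# 2 = inner ring, 3 = outer ring (the paper's actual labels may differ).
def eyespot_areas(label_mask, mm_per_pixel):
    """Return the area (in mm^2) of each labelled eyespot region."""
    areas = {}
    for label in np.unique(label_mask):
        if label == 0:
            continue
        areas[int(label)] = (label_mask == label).sum() * mm_per_pixel ** 2
    return areas
```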


Assuntos
Borboletas , Mariposas , Animais , Pigmentação , Redes Neurais de Computação , Asas de Animais
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 549-552, 2022 07.
Article in English | MEDLINE | ID: mdl-36086569

ABSTRACT

Fluorescence microscopy images of cell organelles enable the study of various complex biological processes. Recently, deep learning (DL) models have been used for the accurate automatic analysis of these images. DL models achieve state-of-the-art performance in many image analysis tasks, such as object classification, segmentation, and detection. However, training a DL model requires a large, manually annotated dataset. Manual annotation of 3D microscopy images is time-consuming and must be performed by specialists in the area, so typically only a few annotated images are available. Recent advances in generative adversarial networks (GANs) have made it possible to translate images, given certain conditions, into realistic-looking synthetic images. Therefore, in this work we explore GAN-based approaches to create synthetic 3D microscopy images. We compare four approaches that differ in the conditions of the input image. The quality of the generated images was assessed visually and with a quantitative objective GAN evaluation metric. The results showed that the GAN is able to generate synthetic images similar to the real ones. Hence, we have presented a GAN-based method to overcome the issue of small annotated datasets in the biomedical imaging field.
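The abstract does not name the quantitative GAN evaluation metric used; the Fréchet Inception Distance (FID) is a common choice, so the sketch below shows that computation from feature statistics purely as an illustration, under the assumption that feature vectors (e.g. Inception activations) have already been extracted for real and generated images.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_fake):
    """FID-style Fréchet distance between two sets of feature vectors
    (rows = samples, columns = features)."""
    mu_r, mu_f = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    cov_mean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(cov_mean):      # discard tiny imaginary parts from numerics
        cov_mean = cov_mean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_mean)
```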


Assuntos
Processamento de Imagem Assistida por Computador , Projetos de Pesquisa , Processamento de Imagem Assistida por Computador/métodos , Microscopia de Fluorescência
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3017-3020, 2021 11.
Article in English | MEDLINE | ID: mdl-34891879

ABSTRACT

Blood vessels provide oxygen and nutrients to all tissues in the human body, and their incorrect organisation or dysfunction contributes to several diseases. Correct organisation of blood vessels is achieved through vascular patterning, a process that relies on endothelial cell polarization and migration against the direction of blood flow. Unravelling the mechanisms governing endothelial cell polarity is therefore essential to study vascular patterning. Cell polarity is defined by a vector from the nucleus centroid to the centroid of the corresponding Golgi complex, here referred to as axial polarity. Currently, axial polarity is computed manually, which is time-consuming and subjective. In this work, we used a deep learning approach to segment nuclei and Golgi in 3D fluorescence microscopy images of mouse retinas and to assign nucleus-Golgi pairs. This approach predicts not only nuclei and Golgi segmentation masks but also a third mask corresponding to joint nuclei and Golgi segmentations, which is used to perform nucleus-Golgi pairing. We demonstrate that our three-mask deep learning approach successfully identifies nucleus-Golgi pairs, outperforming a pairing method based on a cost matrix. Our results pave the way for automated computation of axial polarity in 3D tissues and in vivo.
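Following the definition above, a minimal sketch of computing the axial polarity vector (and an in-plane angle) from the binary masks of one nucleus-Golgi pair could look like this; it illustrates the definition rather than the authors' code.

```python
import numpy as np
from scipy import ndimage

def axial_polarity(nucleus_mask, golgi_mask):
    """Axial polarity vector: Golgi centroid minus nucleus centroid,
    computed from binary 3D masks of one nucleus-Golgi pair."""
    nucleus_c = np.array(ndimage.center_of_mass(nucleus_mask))
    golgi_c = np.array(ndimage.center_of_mass(golgi_mask))
    vector = golgi_c - nucleus_c
    # In-plane polarity angle (degrees) using the last two axes (y, x).
    angle = np.degrees(np.arctan2(vector[-2], vector[-1]))
    return vector, angle
```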


Subjects
Cell Nucleus, Three-Dimensional Imaging, Animals, Golgi Apparatus, Mice, Fluorescence Microscopy
7.
Sci Rep ; 11(1): 19278, 2021 09 29.
Article in English | MEDLINE | ID: mdl-34588507

ABSTRACT

The cell nucleus is a tightly regulated organelle, and its architecture is dynamically orchestrated to maintain normal cell function. Indeed, fluctuations in nuclear size and shape are known to occur during the cell cycle, and alterations in nuclear morphology are hallmarks of many diseases, including cancer. Regrettably, reliable automated tools for cell cycle staging of single cells in in situ images are still limited. It is therefore urgent to establish accurate strategies that combine bioimaging with high-content image analysis for bona fide classification. In this study we developed a supervised machine learning method for interphase cell cycle staging of individual adherent cells using in situ fluorescence images of nuclei stained with DAPI. A Support Vector Machine (SVM) classifier was applied to normalized nuclear features computed from more than 3500 DAPI-stained nuclei. Molecular ground-truth labels were obtained by automatic image processing using fluorescent ubiquitination-based cell cycle indicator (Fucci) technology. An average F1-score of 87.7% was achieved with this framework. Furthermore, the method was validated on distinct cell types, reaching recall values higher than 89%. Our method is a robust approach to identify cells in G1 or S/G2 at the single-cell level, with implications in research and clinical applications.
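The setup described (an SVM over normalized nuclear features, evaluated by F1-score) maps onto a standard scikit-learn pipeline; the sketch below is an assumed configuration with hypothetical feature and label arrays, not the authors' exact classifier settings.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# features: (n_nuclei, n_features) array of nuclear descriptors (area, DAPI
# intensity, texture, ...); labels: 0 = G1, 1 = S/G2 (illustrative encoding).
def evaluate_cell_cycle_classifier(features, labels):
    """Cross-validated F1-score of an SVM trained on standardized nuclear features."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    scores = cross_val_score(model, features, labels, cv=5, scoring="f1")
    return scores.mean()
```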


Subjects
Cell Nucleus/physiology, Computer-Assisted Image Processing, Interphase/physiology, Single-Cell Analysis/methods, Support Vector Machine, Animals, Cell Line, Datasets as Topic, Humans, Intravital Microscopy/methods, Mice, Fluorescence Microscopy/methods
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1428-1431, 2020 07.
Article in English | MEDLINE | ID: mdl-33018258

ABSTRACT

Segmentation of cell nuclei in fluorescence microscopy images provides valuable information about the shape and size of the nuclei, their chromatin texture, and their DNA content. It has many applications, such as cell tracking, counting, and classification. In this work, we extended our recently proposed deep learning approach for nuclei segmentation by adding handcrafted features to its input. These handcrafted features introduce the additional domain knowledge that nuclei are expected to be approximately round; for round shapes, the gradient vectors at border points point toward the center. To convey this information, we compute a map of gradient convergence that the CNN uses as an extra channel alongside the fluorescence microscopy image. We applied our method to a dataset of microscopy images of cells stained with DAPI. Our results show that this approach decreases the number of misdetections and therefore increases the F1-score compared with our previously proposed approach. Moreover, the results show that faster convergence is obtained when handcrafted features are combined with deep learning.
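The paper's exact gradient convergence filter is not given in the abstract; the sketch below is a simplified convergence-index map, assumed for illustration: for each pixel it averages how strongly the gradients at nearby pixels point toward it, producing high values at the centres of round bright nuclei, and the result is stacked with the image as a two-channel CNN input.

```python
import numpy as np

def gradient_convergence_map(image, radius=5):
    """Simplified convergence index: average cosine between the gradient at
    nearby pixels and the direction from those pixels toward the centre pixel."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy) + 1e-8
    ux, uy = gx / mag, gy / mag                      # unit gradient field
    conv = np.zeros_like(image, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            norm = np.hypot(dx, dy)
            tx, ty = -dx / norm, -dy / norm          # neighbour -> centre direction
            # bring the neighbour's unit gradient to the centre pixel's position
            sux = np.roll(ux, shift=(-dy, -dx), axis=(0, 1))
            suy = np.roll(uy, shift=(-dy, -dx), axis=(0, 1))
            conv += sux * tx + suy * ty
            n += 1
    return conv / n

# Stack the convergence map with the fluorescence image as a 2-channel CNN input:
# two_channel_input = np.stack([image, gradient_convergence_map(image)], axis=-1)
```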


Assuntos
Algoritmos , Aprendizado Profundo , Núcleo Celular , Cromatina , Microscopia de Fluorescência
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1432-1435, 2020 07.
Article in English | MEDLINE | ID: mdl-33018259

ABSTRACT

The progression of cells through the cell cycle is a tightly regulated process and is known to be key to maintaining normal tissue architecture and function. Disruption of these orchestrated phases results in alterations that can lead to many diseases, including cancer. Regrettably, reliable automatic tools to evaluate the cell cycle stage of individual cells are still lacking, in particular at interphase. The development of new tools for proper classification is therefore urgently needed and will be of critical importance for cancer prognosis and predictive therapeutic purposes. In this work, we investigated three deep learning approaches for interphase cell cycle staging in microscopy images: 1) joint detection and cell cycle classification of nuclei patches; 2) detection of cell nuclei patches followed by classification of the cycle stage; and 3) detection and segmentation of cell nuclei followed by classification of the cell cycle stage. Our methods were applied to a dataset of microscopy images of nuclei stained with DAPI. The best results (0.908 F1-score) were obtained with approach 3, in which the segmentation step allows for an intensity normalization that takes into account the intensities of all nuclei in a given image. These results show that, for correct cell cycle staging, it is important to consider the relative intensities of the nuclei. Herein, we have developed a new deep learning method for interphase cell cycle staging at the single-cell level, with potential implications in cancer prognosis and therapeutic strategies.
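One way to make nuclear intensities relative to all nuclei in an image, as the best-performing approach requires, is to divide each nucleus's mean DAPI intensity by the per-image median; the sketch below illustrates that idea under this assumption and is not the paper's exact normalization scheme.

```python
import numpy as np
from scipy import ndimage

def relative_nuclear_intensities(dapi_image, instance_labels):
    """Mean DAPI intensity of each segmented nucleus, divided by the median
    over all nuclei in the same image (values are relative, not absolute)."""
    ids = np.unique(instance_labels)
    ids = ids[ids != 0]                                   # drop background label
    means = np.asarray(ndimage.mean(dapi_image, labels=instance_labels, index=ids),
                       dtype=float)
    return dict(zip(ids.tolist(), means / np.median(means)))
```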


Subjects
Cell Nucleus, Deep Learning, Cell Cycle, Cell Division, Interphase
10.
PLoS One ; 13(10): e0205513, 2018.
Article in English | MEDLINE | ID: mdl-30300393

ABSTRACT

PURPOSE: To characterize quantitative optical coherence tomography angiography (OCT-A) parameters in neovascular age-related macular degeneration (nAMD) patients with active disease under treatment and in nAMD patients in remission. DESIGN: Retrospective, cross-sectional study. PARTICIPANTS: One hundred and four patients, of whom 72 were in Group 1 (active nAMD) and 32 in Group 2 (remission nAMD), based on qualitative SD-OCT (spectral-domain OCT) morphology. METHODS: This study was conducted at the Centre Ophtalmologique de l'Odeon between June 2016 and December 2017. Eyes were analyzed using SD-OCT and high-speed (100 000 A-scans/second) 1050-nm wavelength swept-source OCT-A. Speckle noise removal and choroidal neovascularization (CNV) blood flow delineation were performed automatically. Quantitative parameters analyzed included blood flow area (Area), vessel density, fractal dimension (FD), and lacunarity. The OCT-A image algorithms and graphical user interfaces were built as a unified tool in MATLAB. Generalized Additive Models were used to study the association between OCT-A parameters and nAMD remission on structural OCT. The models' performance was assessed by the Akaike Information Criterion (AIC), the Brier score, and the area under the receiver operating characteristic curve (AUC). A p value ≤ 0.05 was considered statistically significant. RESULTS: Area, vessel density, and FD differed between the two groups (p<0.001). Regarding the association with CNV activity, Area alone had the highest AUC (AUC = 0.85; 95% CI: 0.77-0.93), followed by FD (AUC = 0.80; 95% CI: 0.71-0.88). Area again obtained the best values, followed by FD, in the AIC and Brier score evaluations. The multivariate model that included both variables attained the best performance on all assessment criteria. CONCLUSIONS: Blood flow characteristics on OCT-A may be associated with exudative signs on structural OCT. In the future, analysis of quantitative OCT-A parameters could help evaluate CNV activity status and support personalized treatment and follow-up cycles.
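Two of the quantitative parameters above can be estimated directly from a binary CNV flow mask; the sketch below shows vessel density and a standard box-counting estimate of fractal dimension as an illustration only (the paper's MATLAB pipeline may compute these differently).

```python
import numpy as np

def vessel_density(flow_mask):
    """Fraction of pixels flagged as flow inside the delineated CNV region."""
    return flow_mask.astype(bool).mean()

def box_counting_dimension(flow_mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Fractal dimension as the slope of log N(s) versus log(1/s), where N(s)
    is the number of s-by-s boxes containing at least one flow pixel."""
    mask = flow_mask.astype(bool)
    counts = []
    for s in box_sizes:
        h = int(np.ceil(mask.shape[0] / s)) * s
        w = int(np.ceil(mask.shape[1] / s)) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        counts.append(padded.reshape(h // s, s, w // s, s).any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```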


Subjects
Angiography, Choroidal Neovascularization/diagnostic imaging, Choroidal Neovascularization/therapy, Macular Degeneration/diagnostic imaging, Macular Degeneration/therapy, Optical Coherence Tomography/methods, Aged (80 and over), Angiography/methods, Choroidal Neovascularization/physiopathology, Cross-Sectional Studies, Eye/blood supply, Eye/diagnostic imaging, Eye/physiopathology, Female, Humans, Macular Degeneration/physiopathology, Male, Statistical Models, Regional Blood Flow, Remission Induction, Retrospective Studies