Results 1 - 6 of 6
1.
Bioinformatics ; 36(17): 4668-4670, 2020 Nov 01.
Article in English | MEDLINE | ID: mdl-32589734

ABSTRACT

MOTIVATION: Automated bead counting is required for many high-throughput experiments, such as studies of mimicked bacterial invasion processes. However, state-of-the-art algorithms under- or overestimate the number of beads in low-resolution images, and expert knowledge is needed to adjust their parameters. RESULTS: In combination with our image labeling tool, BeadNet enables biologists to easily annotate and process their data, reducing the expertise required by many existing image analysis pipelines. BeadNet outperforms state-of-the-art algorithms in terms of missing, added and total number of beads. AVAILABILITY AND IMPLEMENTATION: BeadNet (software, code and dataset) is available at https://bitbucket.org/t_scherr/beadnet. The image labeling tool is available at https://bitbucket.org/abartschat/imagelabelingtool. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subjects
Deep Learning , Microscopy , Algorithms , Image Processing, Computer-Assisted , Software
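BeadNet's trained network is not reproduced here, but the kind of conventional baseline it is compared against can be sketched in a few lines: threshold the image and count connected components. The synthetic image and threshold choice below are illustrative assumptions, not part of the paper; this is also the approach that degrades once beads touch or resolution drops.

```python
import numpy as np
from scipy import ndimage

def count_beads(image, threshold):
    """Count bright blobs by thresholding and connected-component labelling."""
    mask = image > threshold
    _, n_components = ndimage.label(mask)
    return n_components

# synthetic example: three well-separated bright spots on a dark background
img = np.zeros((64, 64))
for y, x in [(10, 10), (30, 40), (50, 20)]:
    img[y, x] = 1.0
img = ndimage.gaussian_filter(img, sigma=2)

print(count_beads(img, threshold=img.max() * 0.5))  # → 3
```

With touching beads the blobs merge and the count drops, which is exactly the underestimation the abstract describes for low-resolution images.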
2.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation and morphometric profiling are essential to further our understanding of the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis of the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release the challenge models and WSI-level results to foster the development of further methods for biomarker discovery.


Subjects
Algorithms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Cell Nucleus/pathology , Histological Techniques/methods
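The CoNIC models themselves are not shown in the abstract; as an illustration of the cellular-composition half of the task, here is a minimal sketch of counting nuclei per cell type given an instance map and a pixel-wise class map. Assigning each instance the majority class of its pixels is an assumption for this sketch, not the challenge's exact scoring rule.

```python
import numpy as np

def cellular_composition(instance_map, class_map):
    """Count nuclei per cell type by assigning each nucleus instance
    the majority class of its pixels."""
    counts = {}
    for inst in np.unique(instance_map):
        if inst == 0:  # 0 = background
            continue
        pixel_classes = class_map[instance_map == inst]
        cls = int(np.bincount(pixel_classes).argmax())
        counts[cls] = counts.get(cls, 0) + 1
    return counts

# tiny hypothetical example: two nuclei with classes 3 and 1
inst = np.array([[1, 1, 2], [0, 2, 2]])
cls = np.array([[3, 3, 1], [0, 1, 1]])
print(cellular_composition(inst, cls))  # → {3: 1, 1: 1}
```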
3.
IEEE Trans Biomed Eng ; 70(9): 2519-2528, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37028023

ABSTRACT

OBJECTIVE: The scarcity of high-quality annotated data is omnipresent in machine learning. In biomedical segmentation applications in particular, experts must spend much of their time on annotation because of the complexity of the data, so methods that reduce this effort are desirable. METHODS: Self-Supervised Learning (SSL) is an emerging field that improves performance when unannotated data are available. However, thorough studies of SSL for segmentation tasks and small datasets are still lacking. We conduct a comprehensive qualitative and quantitative evaluation of SSL's applicability, with a focus on biomedical imaging. We consider various metrics and introduce multiple novel application-specific measures. All metrics and state-of-the-art methods are provided in a directly applicable software package (https://osf.io/gu2t8/). RESULTS: We show that SSL can lead to performance improvements of up to 10%, which is especially notable for methods designed for segmentation tasks. CONCLUSION: SSL is a sensible approach to data-efficient learning, especially for biomedical applications, where generating annotations requires much effort. Additionally, our extensive evaluation pipeline is vital, since there are significant differences between the various approaches. SIGNIFICANCE: We provide biomedical practitioners with an overview of innovative data-efficient solutions and a novel toolbox for applying new approaches to their own data. Our pipeline for analyzing SSL methods is provided as a ready-to-use software package.


Subjects
Data Accuracy , Machine Learning , Supervised Machine Learning , Image Processing, Computer-Assisted
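The abstract does not name the SSL methods that were evaluated. One widely used pretext task that requires no annotations at all is rotation prediction (Gidaris et al.): every image is rotated by a multiple of 90 degrees and the rotation index serves as a free label for pretraining. The sketch below only builds such a pretext dataset and is an illustration, not the paper's pipeline.

```python
import numpy as np

def rotation_pretext(images):
    """Build a self-supervised pretext dataset: each image is rotated by
    0/90/180/270 degrees; the rotation index is the (free) label."""
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)
    return np.stack(xs), np.array(ys)

imgs = np.random.rand(5, 32, 32)   # five dummy grayscale images
x, y = rotation_pretext(imgs)
print(x.shape, y.shape)            # (20, 32, 32) (20,)
```

A network pretrained to predict `y` from `x` can then be fine-tuned on the few available segmentation annotations.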
4.
PLoS One ; 18(3): e0283828, 2023.
Article in English | MEDLINE | ID: mdl-37000778

ABSTRACT

The analysis of 3D microscopic cell culture images plays a vital role in the development of new therapeutics. While 3D cell cultures offer greater similarity to the human organism than adherent cell cultures, they introduce new challenges for automatic evaluation, such as increased heterogeneity. Deep learning algorithms can outperform conventional analysis methods under such conditions but require large amounts of training data. Due to data size and complexity, manually annotating 3D images to generate large datasets is nearly impossible. We therefore propose a pipeline that combines conventional simulation methods with deep-learning-based optimization to generate large synthetic 3D images of 3D cell cultures in which the labels are known by design. The hybrid procedure helps to keep the generated image structures consistent with the underlying labels. A new approach and an additional measure are introduced to model and evaluate the reduced brightness and quality in deeper image regions. Our analyses show that the deep learning optimization step consistently improves the quality of the generated images. We also demonstrate that a deep learning segmentation model trained on our synthetic data outperforms a classical segmentation method on real image data. The presented synthesis method allows selecting the segmentation model best suited to the user's data, providing an ideal basis for further data analysis.


Subjects
Deep Learning , Humans , Benchmarking , Imaging, Three-Dimensional/methods , Algorithms , Cell Culture Techniques, Three Dimensional , Image Processing, Computer-Assisted/methods
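The paper's actual model for the reduced brightness in deeper image regions is not specified in the abstract. A common simple assumption for simulating this effect is exponential attenuation along the axial (z) axis, sketched below; the decay constant is purely illustrative.

```python
import numpy as np

def apply_depth_attenuation(volume, decay=0.05):
    """Attenuate a z-y-x microscopy stack exponentially along z to mimic
    the brightness loss observed in deeper slices."""
    z = np.arange(volume.shape[0], dtype=float)[:, None, None]
    return volume * np.exp(-decay * z)

stack = np.ones((4, 2, 2))               # dummy 4-slice stack
faded = apply_depth_attenuation(stack, decay=0.5)
# the top slice is untouched; deeper slices get progressively darker
```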
5.
PLoS One ; 16(9): e0257635, 2021.
Article in English | MEDLINE | ID: mdl-34550999

ABSTRACT

When approaching thyroid gland tumor classification, differentiating between samples with and without "papillary thyroid carcinoma-like" nuclei is a daunting task with high inter-observer variability among pathologists. There is therefore increasing interest in using machine learning approaches to provide pathologists with real-time decision support. In this paper, we optimize and quantitatively compare two automated machine learning methods for thyroid gland tumor classification on two datasets, to assist pathologists in deciding between these methods and their parameters. The first method is a feature-based classification rooted in conventional image processing, consisting of cell nucleus segmentation, feature extraction, and subsequent thyroid gland tumor classification with different classifiers. The second is a deep learning-based classification, which directly classifies the input images with a convolutional neural network and does not require cell nucleus segmentation. On the Tharun and Thompson dataset, the feature-based classification achieves an accuracy of 89.7% (Cohen's kappa 0.79), compared to 89.1% (Cohen's kappa 0.78) for the deep learning-based classification. On the Nikiforov dataset, the feature-based classification achieves an accuracy of 83.5% (Cohen's kappa 0.46), compared to 77.4% (Cohen's kappa 0.35) for the deep learning-based classification. Thus, both automated thyroid tumor classification methods can reach the classification level of an expert pathologist. To our knowledge, this is the first study comparing feature-based and deep learning-based classification of samples with and without papillary thyroid carcinoma-like nuclei on two large-scale datasets.


Subjects
Machine Learning , Thyroid Cancer, Papillary/classification , Thyroid Neoplasms/classification , Area Under Curve , Automation , Humans , Image Processing, Computer-Assisted , ROC Curve , Thyroid Cancer, Papillary/pathology , Thyroid Neoplasms/pathology
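The accuracies above are paired with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. It can be computed directly from a confusion matrix; the 2x2 matrix below is hypothetical, chosen to reproduce an accuracy of 0.89 with a kappa of 0.78.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: truth, cols: prediction):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_observed = np.trace(c) / n
    p_chance = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical two-class result: 89 of 100 samples classified correctly
print(round(cohens_kappa([[45, 5], [6, 44]]), 2))  # → 0.78
```

Kappa drops well below accuracy when class marginals are imbalanced, which is why the Nikiforov results (83.5% accuracy, kappa 0.46) look much weaker than the raw accuracy suggests.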
6.
PLoS One ; 15(12): e0243219, 2020.
Article in English | MEDLINE | ID: mdl-33290432

ABSTRACT

The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio remains a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells during training. Furthermore, this representation is notably robust to annotation errors and shows promising results for segmenting microscopy images containing cell types that are underrepresented in, or absent from, the training data. To predict the proposed neighbor distances, we use an adapted U-Net convolutional neural network (CNN) with two decoder paths. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimate in the cost function to re-link tracks with missing segmentation masks over short sequences of frames. Our combined tracking-by-detection method proved its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where, as team KIT-Sch-GE, we achieved multiple top-three rankings, including two top performances, using a single segmentation model across the diverse datasets.


Subjects
Cell Tracking/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , Deep Learning , HeLa Cells , Humans , Microscopy/methods , Optical Imaging/methods
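The exact neighbor-distance definition is given in the paper, not the abstract, but the flavor of a distance-map-style border representation can be sketched with SciPy's Euclidean distance transform: each cell pixel stores its distance to the nearest pixel of any *other* cell, so values shrink toward touching borders. This is a simplified stand-in for the proposed representation, not the paper's formulation.

```python
import numpy as np
from scipy import ndimage

def neighbor_distance_map(labels):
    """For every pixel of each labelled cell, compute the Euclidean distance
    to the nearest pixel belonging to a different cell."""
    out = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels):
        if lab == 0:  # 0 = background
            continue
        others = (labels > 0) & (labels != lab)
        if not others.any():  # single cell: no neighbor to measure against
            continue
        # distance_transform_edt measures distance to the nearest zero,
        # so invert the "other cells" mask
        dist = ndimage.distance_transform_edt(~others)
        out[labels == lab] = dist[labels == lab]
    return out

labels = np.array([[1, 1, 0, 2, 2]])   # two cells separated by one pixel
print(neighbor_distance_map(labels))   # → [[3. 2. 0. 2. 3.]]
```

A network regressing such a target gets a training signal from close cells as well as touching ones, which matches the motivation stated in the abstract.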