Results 1 - 10 of 10
1.
Nat Methods ; 20(7): 1010-1020, 2023 07.
Article in English | MEDLINE | ID: mdl-37202537

ABSTRACT

The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.


Subject(s)
Benchmarking; Cell Tracking; Cell Tracking/methods; Machine Learning; Algorithms
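The challenge's SEG measure scores segmentation as the Jaccard index (intersection over union) of matched reference and predicted objects, where a reference object is matched only by a prediction covering more than half of its pixels. A minimal pure-Python sketch of that convention (illustrative only, not the official evaluation code; objects here are given as pixel-coordinate lists):

```python
def jaccard(ref_pixels, pred_pixels):
    """Jaccard index (IoU) of two objects given as pixel-coordinate sets."""
    ref, pred = set(ref_pixels), set(pred_pixels)
    union = ref | pred
    return len(ref & pred) / len(union) if union else 0.0

def seg_score(ref_objects, pred_objects):
    """Mean Jaccard over reference objects; a reference object is matched
    only by a prediction covering more than half of its pixels, otherwise
    it scores 0 (the SEG matching convention)."""
    scores = []
    for ref in ref_objects:
        ref_set, best = set(ref), 0.0
        for pred in pred_objects:
            pred_set = set(pred)
            if len(ref_set & pred_set) > 0.5 * len(ref_set):
                best = jaccard(ref_set, pred_set)
                break  # the >50% rule makes the match unique
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0
```

For example, a prediction covering 3 of a reference object's 4 pixels is matched (3 > 2) and contributes a Jaccard of 3/4; a prediction covering only half of the pixels is not matched and the object scores 0.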
2.
PLoS Genet ; 16(6): e1008774, 2020 06.
Article in English | MEDLINE | ID: mdl-32555736

ABSTRACT

Cranial neural crest (NC) contributes to the developing vertebrate eye. By multidimensional, quantitative imaging, we traced the origin of the ocular NC cells to two distinct NC populations that differ in the maintenance of sox10 expression, Wnt signalling, and the origin, route, mode and destination of migration. The first NC population migrates to the proximal part of the eye, and the second NC cell group populates the distal (anterior) part. By analysing zebrafish pax6a/b compound mutants presenting with anterior segment dysgenesis, we demonstrate that Pax6a/b guide the two NC populations to distinct proximodistal locations. We further provide evidence that the lens, whose formation is pax6a/b-dependent, and lens-derived TGFß signals contribute to the building of the anterior segment. Taken together, our results reveal multiple roles of Pax6a/b in the control of NC cells during development of the anterior segment.


Subject(s)
Anterior Eye Segment/metabolism; Neural Crest/metabolism; Neurogenesis; PAX6 Transcription Factor/metabolism; Zebrafish Proteins/metabolism; Animals; Anterior Eye Segment/cytology; Anterior Eye Segment/embryology; Cell Movement; Mutation; Neural Crest/cytology; Neural Crest/embryology; Neurons/cytology; Neurons/metabolism; PAX6 Transcription Factor/genetics; Signal Transduction; Transforming Growth Factor beta/metabolism; Zebrafish; Zebrafish Proteins/genetics
3.
Bioinformatics ; 36(17): 4668-4670, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32589734

ABSTRACT

MOTIVATION: An automated counting of beads is required for many high-throughput experiments, such as studying mimicked bacterial invasion processes. However, state-of-the-art algorithms under- or overestimate the number of beads in low-resolution images. In addition, expert knowledge is needed to adjust parameters. RESULTS: In combination with our image labeling tool, BeadNet enables biologists to easily annotate and process their data, reducing the expertise required in many existing image analysis pipelines. BeadNet outperforms state-of-the-art algorithms in terms of missing, added, and total number of beads. AVAILABILITY AND IMPLEMENTATION: BeadNet (software, code and dataset) is available at https://bitbucket.org/t_scherr/beadnet. The image labeling tool is available at https://bitbucket.org/abartschat/imagelabelingtool. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Deep Learning; Microscopy; Algorithms; Image Processing, Computer-Assisted; Software
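BeadNet itself is a deep learning model, but the counting task it addresses reduces, in the classical baseline case, to connected-component labeling of a thresholded image. A minimal sketch of that baseline (illustrative, not BeadNet's code; the binary mask is assumed to come from a prior thresholding step):

```python
from collections import deque

def count_beads(mask):
    """Count connected foreground components (4-connectivity) in a
    binary image given as a list of 0/1 rows."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1                      # new component found
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:                    # flood-fill the component
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count
```

This baseline fails exactly where the paper says classical methods do: touching beads merge into one component and are undercounted, which is the motivation for a learned detector.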
4.
Chemphyschem ; 21(10): 1070-1078, 2020 05 18.
Article in English | MEDLINE | ID: mdl-32142187

ABSTRACT

Dispersed negatively charged silica nanoparticles segregate inside microfluidic water-in-oil (W/O) droplets that are coated with a positively charged lipid shell. We report a methodology for the quantitative analysis of this self-assembly process. By using real-time fluorescence microscopy and automated analysis of the recorded images, kinetic data are obtained that characterize the electrostatically-driven self-assembly. We demonstrate that the segregation rates can be controlled by the installation of functional moieties on the nanoparticle's surface, such as nucleic acid and protein molecules. We anticipate that our method enables the quantitative and systematic investigation of the segregation of (bio)functionalized nanoparticles in microfluidic droplets. This could lead to complex supramolecular architectures on the inner surface of micrometer-sized hollow spheres, which might be used, for example, as cell containers for applications in the life sciences.


Subject(s)
Fatty Acids, Monounsaturated/chemistry; Microfluidic Analytical Techniques; Mineral Oil/chemistry; Nanoparticles/chemistry; Quaternary Ammonium Compounds/chemistry; Silicon Dioxide/chemistry; Water/chemistry; Amines/chemistry; Animals; Cattle; DNA/chemistry; Kinetics; Particle Size; Serum Albumin, Bovine/chemistry; Surface Properties
5.
Klin Monbl Augenheilkd ; 236(12): 1399-1406, 2019 Dec.
Article in German | MEDLINE | ID: mdl-31671462

ABSTRACT

The use of deep neural networks ("deep learning") creates new possibilities in digital image processing. This approach has been widely applied and successfully used for the evaluation of image data in ophthalmology. In this article, the methodological approach of deep learning is examined and compared to the classical approach for digital image processing. The differences between the approaches are discussed and the increasingly important role of training data for model generation is explained. Furthermore, the approach of transfer learning for deep learning is presented with a representative data set from the field of corneal confocal microscopy. In this context, the advantages of the method and the specific problems when dealing with medical microscope data are discussed.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Ophthalmology; Deep Learning; Microscopy, Confocal
6.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Cell Nucleus/pathology; Histological Techniques/methods
7.
PLoS One ; 17(11): e0277601, 2022.
Article in English | MEDLINE | ID: mdl-36445903

ABSTRACT

In biotechnology, cell growth is one of the most important properties for the characterization and optimization of microbial cultures. Novel live-cell imaging methods are leading to an ever better understanding of cell cultures and their development. The key to analyzing acquired data is accurate and automated cell segmentation at the single-cell level. Therefore, we present microbeSEG, a user-friendly Python-based cell segmentation tool with a graphical user interface and OMERO data management. microbeSEG utilizes a state-of-the-art deep learning-based segmentation method and can be used for instance segmentation of a wide range of cell morphologies and imaging techniques, e.g., phase contrast or fluorescence microscopy. The main focus of microbeSEG is a comprehensible, easy, efficient, and complete workflow from the creation of training data to the final application of the trained segmentation model. We demonstrate that accurate cell segmentation results can be obtained within 45 minutes of user time. Utilizing public segmentation datasets or pre-labeling further accelerates the microbeSEG workflow. This opens the door for accurate and efficient data analysis of microbial cultures.


Subject(s)
Data Management; Deep Learning; Software; Workflow; Data Analysis
8.
PLoS One ; 16(9): e0249257, 2021.
Article in English | MEDLINE | ID: mdl-34492015

ABSTRACT

Automatic cell segmentation and tracking makes it possible to gain quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and reduce manual curation time by providing automatic correction of segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or have a vast set of parameters that need either manual tuning or annotated data for parameter tuning. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction. Moreover, no training data are needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentation, including false negatives, over-segmentation, and under-segmentation errors. Our tracking algorithm can correct false negatives, over-segmentation, and under-segmentation errors, as well as a mixture of the aforementioned segmentation errors. On data sets with under-segmentation errors or a mixture of segmentation errors, our approach performs best. Moreover, without requiring additional manual tuning, our approach ranks several times in the top 3 of the 6th edition of the Cell Tracking Challenge.


Subject(s)
Algorithms; Cell Tracking/methods; Computer Graphics; Databases, Factual; Humans
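The frame-to-frame linking step that any such tracking-by-detection scheme builds on can be sketched as a greedy nearest-neighbour assignment of detections between consecutive frames (a deliberate simplification for illustration; the paper's method additionally handles segmentation error correction, and the `max_dist` gating threshold is a hypothetical parameter, not one from the paper):

```python
def link_frames(prev_centroids, curr_centroids, max_dist=10.0):
    """Greedily link detections between two frames by ascending
    centroid distance; returns (prev_index, curr_index) pairs."""
    # enumerate all candidate pairs, sorted by Euclidean distance
    pairs = sorted(
        (((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, i, j)
        for i, p in enumerate(prev_centroids)
        for j, c in enumerate(curr_centroids)
    )
    links, used_prev, used_curr = [], set(), set()
    for dist, i, j in pairs:
        if dist > max_dist:
            break  # remaining pairs are even farther apart
        if i in used_prev or j in used_curr:
            continue  # each detection joins at most one track
        links.append((i, j))
        used_prev.add(i)
        used_curr.add(j)
    return links
```

Detections left unlinked become track ends or new tracks; the abstract's movement-estimation cost and re-linking over missing masks would replace the plain Euclidean distance used here.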
9.
PLoS One ; 16(9): e0257635, 2021.
Article in English | MEDLINE | ID: mdl-34550999

ABSTRACT

When approaching thyroid gland tumor classification, the differentiation between samples with and without "papillary thyroid carcinoma-like" nuclei is a daunting task with high inter-observer variability among pathologists. Thus, there is increasing interest in the use of machine learning approaches to provide pathologists with real-time decision support. In this paper, we optimize and quantitatively compare two automated machine learning methods for thyroid gland tumor classification on two datasets to assist pathologists in decision-making regarding these methods and their parameters. The first method is a feature-based classification originating from common image processing and consists of cell nucleus segmentation, feature extraction, and subsequent thyroid gland tumor classification utilizing different classifiers. The second method is a deep learning-based classification which directly classifies the input images with a convolutional neural network without the need for cell nucleus segmentation. On the Tharun and Thompson dataset, the feature-based classification achieves an accuracy of 89.7% (Cohen's Kappa 0.79), compared to 89.1% (Cohen's Kappa 0.78) for the deep learning-based classification. On the Nikiforov dataset, the feature-based classification achieves an accuracy of 83.5% (Cohen's Kappa 0.46), compared to 77.4% (Cohen's Kappa 0.35) for the deep learning-based classification. Thus, both automated thyroid tumor classification methods can reach the classification level of an expert pathologist. To our knowledge, this is the first study comparing feature-based and deep learning-based classification regarding their ability to classify samples with and without papillary thyroid carcinoma-like nuclei on two large-scale datasets.


Subject(s)
Machine Learning; Thyroid Cancer, Papillary/classification; Thyroid Neoplasms/classification; Area Under Curve; Automation; Humans; Image Processing, Computer-Assisted; ROC Curve; Thyroid Cancer, Papillary/pathology; Thyroid Neoplasms/pathology
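Cohen's kappa, reported alongside accuracy above, corrects raw agreement for the agreement expected by chance, which matters on imbalanced datasets where plain accuracy is inflated. A small self-contained implementation of the standard formula (illustrative, not the authors' evaluation code):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # fraction of samples on which the two raters agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # agreement expected if both raters labeled at random with their
    # own marginal class frequencies
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0
```

Perfect agreement yields 1.0; agreement no better than chance yields 0.0, which is why the reported kappas (0.79 vs. 0.46) separate the two datasets more sharply than the accuracies do.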
10.
PLoS One ; 15(12): e0243219, 2020.
Article in English | MEDLINE | ID: mdl-33290432

ABSTRACT

The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method has proven its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where we achieved as team KIT-Sch-GE multiple top-three rankings, including two top performances using a single segmentation model for the diverse data sets.


Subject(s)
Cell Tracking/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Deep Learning; HeLa Cells; Humans; Microscopy/methods; Optical Imaging/methods
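The neighbor-distance representation above builds on the classical distance map, which assigns each foreground pixel its distance to the nearest background (border) pixel. A minimal multi-source BFS sketch of such a map (4-connected, integer distances; illustrative only, since the paper predicts its distance maps with a CNN rather than computing them geometrically at inference time):

```python
from collections import deque

def border_distance_map(mask):
    """For a binary image (list of 0/1 rows), return a grid where each
    foreground pixel holds its 4-connected distance to the nearest
    background pixel; background pixels hold 0."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # seed the BFS with every background pixel simultaneously
    queue = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[cy][cx] + 1:
                dist[ny][nx] = dist[cy][cx] + 1
                queue.append((ny, nx))
    return dist
```

Thresholding or watershedding such a map is a common way to split touching cells, since distance values dip at the contact line between adjacent objects.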