Results 1-20 of 59
1.
PLoS One ; 18(7): e0282723, 2023.
Article in English | MEDLINE | ID: mdl-37467187

ABSTRACT

Fixed underwater observatories (FUOs), equipped with digital cameras and other sensors, are becoming more commonly used to record different kinds of time series data for marine habitat monitoring. With increasing numbers of campaigns, numbers of sensors and campaign durations, the volume and heterogeneity of the data, ranging from simple temperature time series to series of HD images or video, call for new data science approaches. While some works have been published on the analysis of data from a single campaign, we address the problem of analyzing time series data from two consecutive monitoring campaigns (starting late 2017 and late 2018) in the same habitat. While data from campaigns in two separate years provide an interesting basis for marine biology research, they also present new data science challenges, such as marine image analysis across data from more than one campaign. In this paper, we analyze the polyp activity of two Paragorgia arborea cold-water coral (CWC) colonies using FUO data collected from November 2017 to June 2018 and from December 2018 to April 2019. We successfully apply convolutional neural networks (CNNs) for the segmentation and classification of the coral and its polyp activity. The resulting polyp activity data alone showed interesting temporal patterns, with differences and similarities between the two time periods. A one-month "sleeping" period in spring with almost no activity was observed in both coral colonies, but with a shift of approximately one month. A time series prediction experiment allowed us to predict the polyp activity from the non-image sensor data using recurrent neural networks (RNNs). The results pave the way toward a new multi-sensor monitoring strategy for Paragorgia arborea behaviour.
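For readers who want to experiment with the prediction step, the following is a minimal sketch (not the authors' code) of forecasting a polyp activity value from windows of co-recorded scalar sensor data with a recurrent network; the sensor set, window length and training details are illustrative assumptions.

```python
# Minimal sketch: predict polyp activity from non-image sensor time series
# with a GRU. Sensor names and window length are assumptions, not the paper's.
import torch
import torch.nn as nn

class ActivityRNN(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted activity in [0, 1]

    def forward(self, x):                 # x: (batch, time, n_sensors)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out[:, -1]))  # activity at window end

# Synthetic stand-in: 64 windows of 48 time steps from 3 sensors
# (e.g. temperature, salinity, current speed; assumed, not from the paper).
x = torch.randn(64, 48, 3)
y = torch.rand(64, 1)
model = ActivityRNN(n_sensors=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                       # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```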


Subject(s)
Anthozoa; Animals; Data Science; Ecosystem; Water; Neural Networks, Computer
2.
PLoS One ; 18(2): e0272103, 2023.
Article in English | MEDLINE | ID: mdl-36827378

ABSTRACT

Diatoms represent one of the morphologically and taxonomically most diverse groups of microscopic eukaryotes. Light microscopy-based taxonomic identification and enumeration of frustules, the silica shells of these microalgae, is broadly used in aquatic ecology and biomonitoring. One key step in emerging digital variants of such investigations is segmentation, a task that has been addressed before, but usually in manually captured megapixel-sized images of individual diatom cells with a mostly clean background. In this paper, we applied deep learning-based segmentation methods to gigapixel-sized, high-resolution scans of diatom slides with a realistically cluttered background. This setup requires large slide scans to be subdivided into small images (tiles) to apply a segmentation model to them. This subdivision (tiling), when done using a sliding window approach, often leads to cropping relevant objects at the boundaries of individual tiles. We hypothesized that in the case of diatom analysis, reducing the number of such cropped objects in the training data can improve segmentation performance by allowing for a better discrimination of relevant, intact frustules or valves from small diatom fragments, which are considered irrelevant when counting diatoms. We tested this hypothesis by comparing a standard sliding window / fixed-stride tiling approach with two new approaches we term object-based tile positioning with and without object integrity constraint. With all three tiling approaches, we trained Mask R-CNN and U-Net models with different amounts of training data and compared their performance. Object-based tiling with object integrity constraint led to an improvement in pixel-based precision by 12-17 percentage points without substantially impairing recall when compared with standard sliding window tiling. We thus propose that training segmentation models with object-based tiling schemes can improve diatom segmentation from large gigapixel-sized images but could potentially also be relevant for other image domains.
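The following is a minimal sketch of the object-based tile positioning idea with the object integrity constraint, assuming axis-aligned bounding-box annotations; it illustrates the concept only and is not the paper's implementation.

```python
# Minimal sketch: centre a tile on each annotated object and keep the tile
# only if the object fits inside uncropped (the "integrity constraint").
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)

def object_based_tiles(boxes: List[Box], slide_w: int, slide_h: int,
                       tile: int = 512) -> List[Box]:
    tiles = []
    for (x0, y0, x1, y1) in boxes:
        if x1 - x0 > tile or y1 - y0 > tile:
            continue                      # object cannot fit in one tile
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
        # clamp the tile to the slide boundaries
        tx = min(max(cx - tile // 2, 0), slide_w - tile)
        ty = min(max(cy - tile // 2, 0), slide_h - tile)
        # integrity constraint: the full object must lie inside the tile
        if tx <= x0 and ty <= y0 and x1 <= tx + tile and y1 <= ty + tile:
            tiles.append((tx, ty, tx + tile, ty + tile))
    return tiles

print(object_based_tiles([(100, 120, 300, 260)], slide_w=4096, slide_h=4096))
```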


Subject(s)
Deep Learning; Diatoms; Microscopy; Image Processing, Computer-Assisted/methods
3.
Sensors (Basel) ; 22(14)2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35891060

ABSTRACT

Data augmentation is an established technique in computer vision to foster generalization during training and to deal with low data volumes. Most data augmentation and computer vision research focuses on everyday images such as traffic data. Applying computer vision techniques in domains like the marine sciences has proven less straightforward in the past due to special characteristics such as very low data volumes and class imbalance, caused by costly manual annotation by human domain experts and generally low species abundances. However, the data volume acquired today with moving platforms, which collect large image collections from remote marine habitats like the deep benthos for marine biodiversity assessment and monitoring, makes the use of automatic computer vision detection and classification inevitable. In this work, we investigate the effect of data augmentation in the context of taxonomic classification in underwater (i.e., benthic) images. First, we show that established data augmentation methods (i.e., geometric and photometric transformations) perform differently on marine image collections than on established image collections like the Cityscapes dataset of everyday traffic images. Some of the methods even decrease learning performance when applied to marine image collections. Second, we propose new data augmentation combination policies motivated by our observations, compare their effect to the policies proposed by the AutoAugment algorithm, and show that the proposed augmentation policy outperforms the AutoAugment results for marine image collections. We conclude that for small marine image datasets, background knowledge and heuristics should sometimes be applied to design an effective data augmentation method.
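As an illustration, a hand-designed augmentation combination policy for benthic images might look like the following torchvision sketch; the concrete transforms and parameters are assumptions, not the policy evaluated in the paper.

```python
# Minimal sketch of a heuristic augmentation policy for benthic images.
# The transform choices below are illustrative assumptions.
import torchvision.transforms as T

marine_policy = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),          # no canonical "up" on the seabed
    T.RandomRotation(degrees=180),
    T.ColorJitter(brightness=0.2, contrast=0.2),  # mild: colour carries signal
    T.ToTensor(),
])

# usage (assumed layout): dataset = torchvision.datasets.ImageFolder(
#     "benthic_crops/", transform=marine_policy)
```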


Subject(s)
Deep Learning; Algorithms; Biodiversity; Ecosystem; Humans; Image Processing, Computer-Assisted/methods
5.
BMC Bioinformatics ; 23(1): 267, 2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35804309

ABSTRACT

BACKGROUND: Modern mass spectrometry has revolutionized the detection and analysis of metabolites, but has likewise caused data volumes to skyrocket, with repositories for metabolomics data filling up with thousands of datasets. While there are many software tools for the analysis of individual experiments with a few to dozens of chromatograms, we see a demand for a contemporary software solution capable of processing and analyzing hundreds or even thousands of experiments in an integrative manner with standardized workflows. RESULTS: Here, we introduce MetHoS, an automated web-based software platform for the processing, storage and analysis of large amounts of mass spectrometry-based metabolomics data sets originating from different metabolomics studies. MetHoS is based on Big Data frameworks that enable parallel processing, distributed storage and distributed analysis of even larger data sets across computer clusters in a highly scalable manner. It has been designed to allow the processing and analysis of any number of experiments and samples in an integrative manner. To demonstrate the capabilities of MetHoS, thousands of experiments were downloaded from the MetaboLights database and used to perform large-scale processing, storage and statistical analysis in a proof-of-concept study. CONCLUSIONS: MetHoS is suitable for large-scale processing, storage and analysis of metabolomics data aimed at untargeted metabolomic analyses. It is freely available at: https://methos.cebitec.uni-bielefeld.de/ . Users interested in analyzing their own data are encouraged to apply for an account.
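The general pattern, distributed and integrative statistics over many experiments, can be sketched with PySpark as below; the file layout, column names and aggregation are placeholders for illustration and say nothing about MetHoS internals.

```python
# Minimal PySpark sketch: aggregate feature statistics across many
# metabolomics experiments in parallel. Paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metabolomics-demo").getOrCreate()

# assumed layout: one CSV of detected features (m/z, intensity) per experiment
peaks = (spark.read.csv("peaks/*.csv", header=True, inferSchema=True)
              .withColumn("experiment", F.input_file_name()))

summary = (peaks.groupBy("experiment")
                .agg(F.count("*").alias("n_features"),
                     F.mean("intensity").alias("mean_intensity")))
summary.show()
spark.stop()
```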


Subject(s)
Metabolomics; Software; Electronic Data Processing; Mass Spectrometry; Metabolomics/methods; Workflow
6.
Sci Data ; 9(1): 414, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35840583

ABSTRACT

Underwater images are used to explore and monitor ocean habitats, generating huge datasets with unusual data characteristics that preclude traditional data management strategies. Due to the lack of universally adopted data standards, image data collected from the marine environment are increasing in heterogeneity, preventing objective comparison. The extraction of actionable information thus remains challenging, particularly for researchers not directly involved with the image data collection. Standardized formats and procedures are needed to enable sustainable image analysis and processing tools, as are solutions for image publication in long-term repositories to ensure the reuse of data. The FAIR principles (Findable, Accessible, Interoperable, Reusable) provide a framework for such data management goals. We propose the use of image FAIR Digital Objects (iFDOs) and present an infrastructure environment to create and exploit such FAIR digital objects. We show how these iFDOs can be created, validated, managed and stored, and which data associated with imagery should be curated. The goal is to reduce image management overheads while simultaneously creating visibility for image acquisition and publication efforts.
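A minimal sketch of writing an iFDO-like metadata record as YAML follows; the field names are modelled on published iFDO examples but should be checked against the current specification, and all values are placeholders.

```python
# Minimal sketch: serialize an iFDO-like record. Field names follow published
# iFDO examples but are not guaranteed to match the current spec exactly.
import uuid
import yaml  # pip install pyyaml

ifdo = {
    "image-set-header": {
        "image-set-name": "example-dive-001",            # placeholder
        "image-set-uuid": str(uuid.uuid4()),
        "image-set-handle": "https://example.org/handle/placeholder",
    },
    "image-set-items": {
        "IMG_0001.JPG": {
            "image-datetime": "2022-07-15 10:32:05.000000",
            "image-latitude": 48.1234,                   # placeholder values
            "image-longitude": -16.5678,
        },
    },
}

with open("example.ifdo.yaml", "w") as fh:
    yaml.safe_dump(ifdo, fh, sort_keys=False)
```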

7.
MethodsX ; 8: 101218, 2021.
Article in English | MEDLINE | ID: mdl-34434741

ABSTRACT

The present work describes a new computer-assisted image analysis method for the rapid, simple, objective and reproducible quantification of actively discharged fungal spores, which can serve as a manual for laboratories working in this context. The method can be used with conventional laboratory equipment: bright field microscopes, standard scanners and the open-source software ImageJ. Compared to other computer-assisted conidia quantification methods, the presented method has greater potential for application to large-scale sample quantities. The key to faster quantification is calculating the linear relationship between the gray value and the automatically counted number of conidia, which only has to be performed once at the beginning of the analysis. Afterwards, the gray value is used as the single parameter for quantification. The fast, easy and objective determination of sporulation capacity facilitates quality control of fungal formulations designed for biological pest control.
• Rapid, simple, objective and reproducible quantification of fungal sporulation suitable for large-scale sample quantities.
• Requires only conventional laboratory equipment and open-source software, without technical or computational expertise.
• The number of automatically counted conidia can be correlated with the gray value; after an initial linear fit, the gray value can be applied as the single quantification parameter.
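The calibration idea, fitting the gray value / conidia count relationship once and then quantifying from the gray value alone, can be sketched in a few lines; the numbers below are made up.

```python
# Minimal sketch of the described calibration: one linear fit of mean gray
# value vs. automatically counted conidia, then gray-value-only quantification.
import numpy as np
from sklearn.linear_model import LinearRegression

# calibration samples: mean gray value of a scan vs. conidia counted in ImageJ
gray = np.array([[30.1], [55.4], [80.2], [110.9], [140.3]])   # made-up values
count = np.array([1.2e6, 2.4e6, 3.5e6, 4.9e6, 6.2e6])

fit = LinearRegression().fit(gray, count)   # performed once at the start

# afterwards: quantify new samples from their gray value alone
new_gray = np.array([[72.5], [95.0]])
print(fit.predict(new_gray))
```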

8.
Sensors (Basel) ; 21(4)2021 Feb 06.
Article in English | MEDLINE | ID: mdl-33561961

ABSTRACT

In recent years, an increasing number of cabled Fixed Underwater Observatories (FUOs) have been deployed, many of them equipped with digital cameras recording high-resolution digital image time series for a given period. The manual extraction of quantitative information from these data regarding resident species is necessary to link the image time series information to data from other sensors, but requires computational support to overcome the bottleneck of manual analysis. As a priori knowledge about the objects of interest in the images is almost never available, computational methods are required that do not depend on the posterior availability of a large training data set of annotated images. In this paper, we propose a new strategy for collecting and using training data for machine learning-based observatory image interpretation much more efficiently. The method combines the training efficiency of a special active learning procedure with the advantages of deep learning feature representations. The method is tested on two highly disparate data sets. In our experiments, we show that the proposed method, ALMI, achieves a classification accuracy A > 90% with fewer than N = 258 training samples on one data set, and A > 80% after N = 150 training samples on the other, outperforming the reference method regarding accuracy and the amount of training data required.
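The loop structure of uncertainty-based active learning on fixed deep features can be sketched as follows; ALMI's actual sampling strategy differs, so this only illustrates the general recipe.

```python
# Minimal sketch of an active learning loop on fixed deep features:
# train a cheap classifier, query the most uncertain sample, repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 128))          # stand-in for CNN features
y = (X[:, 0] > 0).astype(int)             # stand-in labels ("oracle")

# seed set with both classes present, then a pool of unlabeled samples
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                        # 20 annotation rounds
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    p = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(p - 0.5)))]  # most uncertain sample
    labeled.append(query)                  # a human would label it here
    pool.remove(query)

print("accuracy:", clf.score(X, y))
```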

9.
Sci Rep ; 11(1): 4606, 2021 02 25.
Article in English | MEDLINE | ID: mdl-33633175

ABSTRACT

Mass Spectrometry Imaging (MSI) is an established and still evolving technique for the spatial analysis of molecular co-location in biological samples. Nowadays, MSI is expanding into new domains such as clinical pathology. In order to increase the value of MSI data, software for visual analysis is required that is intuitive and technique-independent. Here, we present QUIMBI (QUIck exploration tool for Multivariate BioImages), a new tool for the visual analysis of MSI data. QUIMBI is an interactive visual exploration tool that provides the user with a convenient and straightforward visual exploration of morphological and spectral features of MSI data. To improve the overall quality of MSI data by reducing non-tissue-specific signals, and to ensure optimal compatibility with QUIMBI, the tool is combined with the new pre-processing tool ProViM (Processing for Visualization and multivariate analysis of MSI Data), presented in this work. The features of the proposed visual analysis approach for MSI data analysis are demonstrated with two use cases. The results show that the use of ProViM and QUIMBI not only provides a new, fast and intuitive visual analysis, but also allows the detection of new co-location patterns in MSI data that are difficult to find with other methods.


Subject(s)
Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Mass Spectrometry/methods; Animals; Humans; Kidney/anatomy & histology; Male; Mice; Pseudoxanthoma Elasticum/pathology; Skin/pathology; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization/methods; Vibrissae/anatomy & histology
10.
Sci Rep ; 10(1): 14416, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32879374

ABSTRACT

Deep convolutional neural networks are emerging as the state-of-the-art method for supervised classification of images, including in the context of taxonomic identification. Different morphologies and imaging technologies applied across organismal groups lead to highly specific image domains, which need customization of deep learning solutions. Here we provide an example using deep convolutional neural networks (CNNs) for taxonomic identification of the morphologically diverse microalgal group of diatoms. Using a combination of high-resolution slide scanning microscopy, web-based collaborative image annotation and diatom-tailored image analysis, we assembled a diatom image database from two Southern Ocean expeditions. We use these data to investigate the effect of CNN architecture, background masking, data set size and possible concept drift upon image classification performance. Surprisingly, VGG16, a relatively old network architecture, showed the best performance and generalizing ability on our images. In contrast to a previous study, we found that background masking slightly improved performance. In general, training only a classifier on top of convolutional layers pre-trained on extensive, but not domain-specific, image data showed surprisingly high performance (F1 scores around 97%) with relatively few (100-300) examples per class, indicating that domain adaptation to a novel taxonomic group can be feasible with a limited investment of effort.
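The transfer-learning setup described, ImageNet-pre-trained convolutional layers with only a classifier trained on top, can be sketched in Keras as below; input size and class count are assumptions.

```python
# Minimal Keras sketch: frozen VGG16 convolutional base, trainable classifier
# head. The input size and the 50-class head are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # train only the classifier head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(50, activation="softmax"),  # e.g. 50 diatom taxa (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```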

11.
Syst Biol ; 69(6): 1231-1253, 2020 11 01.
Article in English | MEDLINE | ID: mdl-32298457

ABSTRACT

Natural history collections are leading successful large-scale projects of specimen digitization (images, metadata, DNA barcodes), thereby transforming taxonomy into a big data science. Yet, little effort has been directed towards safeguarding and subsequently mobilizing the considerable amount of original data generated during the process of naming 15,000-20,000 species every year. From the perspective of alpha-taxonomists, we provide a review of the properties and diversity of taxonomic data, assess their volume and use, and establish criteria for optimizing data repositories. We surveyed 4113 alpha-taxonomic studies in representative journals for 2002, 2010, and 2018, and found an increasing yet comparatively limited use of molecular data in species diagnosis and description. In 2018, of the 2661 papers published in specialized taxonomic journals, molecular data were widely used in mycology (94%), regularly in vertebrates (53%), but rarely in botany (15%) and entomology (10%). Images play an important role in taxonomic research on all taxa, with photographs used in >80% and drawings in 58% of the surveyed papers. The use of omics (high-throughput) approaches or 3D documentation is still rare. Improved archiving strategies for metabarcoding consensus reads, genome and transcriptome assemblies, and chemical and metabolomic data could help to mobilize the wealth of high-throughput data for alpha-taxonomy. Because long-term (ideally perpetual) data storage is of particular importance for taxonomy, energy footprint reduction via less storage-demanding formats is a priority if their information content suffices for the purpose of taxonomic studies. Whereas taxonomic assignments are quasi-facts for most biological disciplines, they remain hypotheses pertaining to evolutionary relatedness of individuals for alpha-taxonomy. For this reason, an improved reuse of taxonomic data, including machine-learning-based species identification and delimitation pipelines, requires a cyberspecimen approach: linking data via unique specimen identifiers, and thereby making them findable, accessible, interoperable, and reusable for taxonomic research. This poses both qualitative challenges to adapt the existing infrastructure of data centers to a specimen-centered concept and quantitative challenges to host and connect an estimated ≤2 million images produced per year by alpha-taxonomic studies, plus many millions of images from digitization campaigns. Of the 30,000-40,000 taxonomists globally, many are thought to be nonprofessionals, and capturing the data for online storage and reuse therefore requires low-complexity submission workflows and cost-free repository use. Expert taxonomists are the main stakeholders able to identify and formalize the needs of the discipline; their expertise is needed to implement the envisioned virtual collections of cyberspecimens. [Big data; cyberspecimen; new species; omics; repositories; specimen identifier; taxonomy; taxonomic data.]


Subject(s)
Classification; Databases, Factual/standards; Animals; Databases, Factual/trends
12.
Front Artif Intell ; 3: 49, 2020.
Article in English | MEDLINE | ID: mdl-33733166

ABSTRACT

Deep artificial neural networks have become the go-to method for many machine learning tasks. In the field of computer vision, deep convolutional neural networks achieve state-of-the-art performance for tasks such as classification, object detection, or instance segmentation. As deep neural networks become increasingly complex, their inner workings become increasingly opaque, rendering them a "black box" whose decision making process is no longer comprehensible. In recent years, various methods have been presented that attempt to peek inside the black box and to visualize the inner workings of deep neural networks, with a focus on deep convolutional neural networks for computer vision. These methods can serve as a toolbox to facilitate the design and inspection of neural networks for computer vision and the interpretation of the network's decision making process. Here, we present the new tool Interactive Feature Localization in Deep neural networks (IFeaLiD), which provides a novel visualization approach to convolutional neural network layers. The tool interprets neural network layers as multivariate feature maps and visualizes the similarity between the feature vectors of individual pixels of an input image in a heat map display. The similarity display can reveal how the input image is perceived by different layers of the network and how the perception of one particular image region compares to the perception of the remaining image. IFeaLiD runs interactively in a web browser and can process even high-resolution feature maps in real time by using GPU acceleration with WebGL 2. We present examples from four computer vision datasets with feature maps from different layers of a pre-trained ResNet101. IFeaLiD is open source and available online at https://ifealid.cebitec.uni-bielefeld.de.
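The core visualization idea can be sketched in NumPy: treat a layer's output as a multivariate feature map and compute, for one chosen pixel, the cosine similarity of its feature vector to every other pixel. The feature map below is a random stand-in for a real layer activation.

```python
# Minimal sketch: pixel-wise cosine similarity heat map over a feature map.
import numpy as np

h, w, c = 32, 32, 256                     # feature map size (illustrative)
fmap = np.random.rand(h, w, c)            # stand-in for a real layer output

query = fmap[10, 20]                      # feature vector of a chosen pixel
flat = fmap.reshape(-1, c)
sim = flat @ query / (np.linalg.norm(flat, axis=1) * np.linalg.norm(query))
heatmap = sim.reshape(h, w)               # display e.g. with plt.imshow
print(heatmap.shape, heatmap.max())
```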

13.
BMC Bioinformatics ; 20(1): 303, 2019 Jun 04.
Article in English | MEDLINE | ID: mdl-31164082

ABSTRACT

BACKGROUND: The spatial distribution and colocalization of functionally related metabolites is analysed in order to investigate the spatial (and functional) aspects of molecular networks. We propose to use community detection for the analysis of m/z-images, grouping molecules with correlated spatial distributions into communities so that they hint at functional networks or pathway activity. To detect communities, we investigate a spectral approach that optimizes the modularity measure. We present an analysis pipeline and an online interactive visualization tool to facilitate explorative analysis of the results. The approach is illustrated with synthetic benchmark data and two real world data sets (barley seed and glioblastoma section). RESULTS: For the barley sample data set, our approach is able to reproduce the findings of a previous work that identified groups of molecules with distributions that correlate with anatomical structures of the barley seed. The analysis of glioblastoma section data revealed that some molecular compositions are locally focused, indicating the existence of a meaningful separation into at least two areas. This result is in line with prior histological knowledge. In addition to confirming prior findings, the resulting graph structures revealed new subcommunities of m/z-images (i.e., metabolites) with more detailed distribution patterns. Another result of our work is the development of an interactive web tool called GRINE (Analysis of GRaph mapped Image Data NEtworks). CONCLUSIONS: The proposed method was successfully applied to identify molecular communities of laterally co-localized molecules. For both application examples, the detected communities showed inherent substructures that could easily be investigated with the proposed visualization tool. This shows the potential of this approach as a complementary addition to pixel clustering methods.
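A sketch of the core step follows: build a graph of m/z-images weighted by spatial correlation and detect communities by modularity optimization. NetworkX's greedy modularity optimizer stands in here for the spectral approach used in the paper; the data and threshold are illustrative.

```python
# Minimal sketch: correlation graph of m/z images + modularity communities.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
base = rng.random((4, 64 * 64))            # four latent spatial patterns
# 40 synthetic m/z images: ten noisy copies of each pattern
mz_images = np.repeat(base, 10, axis=0) + 0.1 * rng.random((40, 64 * 64))

corr = np.corrcoef(mz_images)              # pairwise spatial correlation
G = nx.Graph()
for i in range(len(corr)):
    for j in range(i + 1, len(corr)):
        if corr[i, j] > 0.5:               # keep strongly co-localized pairs
            G.add_edge(i, j, weight=corr[i, j])

for k, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {k}: m/z images {sorted(community)}")
```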


Subject(s)
Data Visualization; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization/methods; Brain Neoplasms/pathology; Cluster Analysis; Glioblastoma/pathology; Hordeum; Humans; Principal Component Analysis; Seeds/anatomy & histology; Seeds/chemistry
14.
PLoS One ; 14(6): e0218086, 2019.
Article in English | MEDLINE | ID: mdl-31188894

ABSTRACT

The evaluation of large amounts of digital image data is of growing importance for biology, including for the exploration and monitoring of marine habitats. However, only a tiny percentage of the image data collected is evaluated by marine biologists, who manually interpret and annotate the image contents, which can be slow and laborious. In order to overcome the bottleneck in image annotation, two strategies are increasingly proposed: "citizen science" and "machine learning". In this study, we investigated how the combination of citizen science, to detect objects, and machine learning, to classify megafauna, could be used to automate the annotation of underwater images. For this purpose, multiple large data sets of citizen science annotations with different degrees of the common errors and inaccuracies observed in citizen science data were simulated by modifying "gold standard" annotations made by an experienced marine biologist. The parameters of the simulation were determined on the basis of two citizen science experiments. This allowed us to analyze the relationship between the outcome of a citizen science study and the quality of the classifications of a deep learning megafauna classifier. The results show great potential for combining citizen science with machine learning, provided that the participants are informed precisely about the annotation protocol. Inaccuracies in the position of the annotation had the most substantial influence on the classification accuracy, whereas the size of the marking and false positive detections had a smaller influence.
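A minimal sketch of the simulation idea, perturbing gold-standard annotations with position jitter, size error and false positive detections, might look as follows; all error rates are illustrative assumptions, not the parameters estimated in the study.

```python
# Minimal sketch: simulate citizen science errors on circle annotations.
import random

def simulate_citizen(annotations, img_w, img_h,
                     pos_jitter=10, size_scale=0.3, fp_rate=0.1):
    """Perturb (x, y, radius) gold-standard annotations."""
    noisy = []
    for (x, y, r) in annotations:
        noisy.append((x + random.uniform(-pos_jitter, pos_jitter),
                      y + random.uniform(-pos_jitter, pos_jitter),
                      r * random.uniform(1 - size_scale, 1 + size_scale)))
    for _ in range(int(len(annotations) * fp_rate)):   # false positives
        noisy.append((random.uniform(0, img_w),
                      random.uniform(0, img_h),
                      random.uniform(5, 30)))
    return noisy

print(simulate_citizen([(100, 200, 20), (400, 80, 15)], 1024, 768))
```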


Subject(s)
Citizen Science/methods; Deep Learning; Image Processing, Computer-Assisted/statistics & numerical data; Marine Biology/methods; Animals; Aquatic Organisms; Arthropods/anatomy & histology; Arthropods/classification; Cnidaria/anatomy & histology; Cnidaria/classification; Echinodermata/anatomy & histology; Echinodermata/classification; Humans; Imaging, Three-Dimensional; Marine Biology/instrumentation; Mollusca/anatomy & histology; Mollusca/classification; Porifera/anatomy & histology; Porifera/classification
15.
Sci Rep ; 9(1): 6578, 2019 04 29.
Article in English | MEDLINE | ID: mdl-31036904

ABSTRACT

An array of sensors, including an HD camera, mounted on a Fixed Underwater Observatory (FUO) was used to monitor a cold-water coral (Lophelia pertusa) reef in the Lofoten-Vesterålen area from April to November 2015. Image processing and deep learning enabled the extraction of time series describing changes in coral colour and polyp activity (feeding). The image data were analysed together with data from the other sensors from the same period to provide new insights into the short- and long-term dynamics of polyp features. The results indicate that diurnal variations and tidal currents influenced polyp activity by controlling the food supply. On a longer time scale, the coral's tissue colour changed from white in the spring to slightly red during the summer months, which can be explained by a seasonal change in food supply. Our work shows that, using an effective integrative computational approach, image time series are a new and rich source of information for understanding and monitoring the dynamics of underwater environments, due to the high temporal resolution and coverage enabled by FUOs.


Subject(s)
Anthozoa/physiology; Coral Reefs; Feeding Behavior/physiology; Video Recording; Animals; Biodiversity; Color; Deep Learning; Geologic Sediments; Seawater
16.
Bioinformatics ; 35(10): 1802-1804, 2019 05 15.
Article in English | MEDLINE | ID: mdl-30346487

ABSTRACT

MOTIVATION: Live cell imaging plays a pivotal role in understanding cell growth. Yet, there is a lack of visualization alternatives for quick qualitative characterization of colonies. RESULTS: SeeVis is a Python workflow for automated and qualitative visualization of time-lapse microscopy data. It automatically pre-processes the movie frames, finds particles, traces their trajectories and visualizes them in a space-time cube offering three different color mappings to highlight different features. It supports the user in developing a mental model for the data. SeeVis completes these steps in 1.15 s/frame and creates a visualization with a selected color mapping. AVAILABILITY AND IMPLEMENTATION: https://github.com/ghattab/seevis/. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
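A space-time cube of the kind SeeVis produces can be sketched with Matplotlib: particle positions (x, y) are plotted against frame number on the third axis. The trajectories below are synthetic, and the rendering is far simpler than the tool's own.

```python
# Minimal sketch: a space-time cube of particle trajectories.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for tid in range(5):                          # five synthetic trajectories
    t = np.arange(40)                         # frame number = time axis
    x = np.cumsum(rng.normal(0, 1, 40)) + 50  # random-walk positions
    y = np.cumsum(rng.normal(0, 1, 40)) + 50
    ax.plot(x, y, t, label=f"trajectory {tid}")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("frame")
ax.legend()
plt.show()
```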


Subject(s)
Microfluidics; Software; Microscopy; Workflow
17.
PLoS One ; 13(11): e0207498, 2018.
Article in English | MEDLINE | ID: mdl-30444917

ABSTRACT

Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck problem, as tens of thousands of images or more can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as with "traditional" annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections.


Subject(s)
Data Curation/methods; Databases, Factual; Environmental Monitoring/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer; Oceans and Seas
18.
Article in English | MEDLINE | ID: mdl-29541635

ABSTRACT

Time-lapse imaging of cell colonies in microfluidic chambers provides time series of bioimages, i.e., biomovies. They show the behavior of cells over time under controlled conditions. One of the main remaining bottlenecks in this area of research is the analysis of experimental data and the extraction of cell growth characteristics, such as lineage information. The extraction of the cell lineage by human observers is time-consuming and error-prone. Previously proposed methods often fail because they rely on the accurate detection of single cells, which is not possible at high cell densities, with a high diversity of cell shapes and numbers, and in noisy high-resolution images. Our task is to characterize subpopulations in biomovies. In order to shift the analysis of the data from the individual cell level to cellular groups with similar fluorescence, or even subpopulations, we propose to represent the cells by two new abstractions: the particle and the patch. We use a three-step framework: preprocessing, particle tracking, and construction of the patch lineage. First, preprocessing improves the signal-to-noise ratio and spatially aligns the biomovie frames. Second, cell sampling is performed by assuming particles, which represent a part of a cell, a cell, or a group of contiguous cells in space. Particle analysis includes particle tracking, trajectory linking, filtering, and color information. Particle tracking consists of following the spatiotemporal position of a particle and gives rise to coherent particle trajectories over time. Typical tracking problems may occur (e.g., appearance or disappearance of cells, spurious artifacts). They are effectively processed using trajectory linking and filtering. Third, the construction of the patch lineage consists of joining particle trajectories that share common attributes (i.e., proximity and fluorescence intensity) and feature common ancestry. This step is based on patch finding, patch trajectory propagation, patch splitting, and patch merging. The main idea is to group the trajectories of particles together in order to gain spatial coherence. The final result of the proposed method, CYCASP, is the complete graph of the patch lineage. This graph encodes the temporal and spatial coherence of the development of cellular colonies. We present results showing a computation time of less than 5 min for biomovies and simulated films. The method presented here allowed for the separation of colonies into subpopulations and allowed us to interpret the growth of colonies in a timely manner.
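As an illustration of the particle tracking step, the following sketches greedy nearest-neighbour linking of detections between two frames; the real pipeline additionally handles appearing and disappearing particles, trajectory filtering and colour information.

```python
# Minimal sketch: greedy nearest-neighbour linking of particle detections
# between two consecutive frames. A simplification of real trajectory linking.
import numpy as np

def link_frames(prev, curr, max_dist=15.0):
    """Greedily link each detection in `prev` to its nearest match in `curr`."""
    links, taken = [], set()
    for i, p in enumerate(prev):
        d = np.linalg.norm(curr - p, axis=1)
        for j in taken:
            d[j] = np.inf                  # each current detection used once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
            taken.add(j)
    return links

prev = np.array([[10.0, 10.0], [50.0, 40.0]])
curr = np.array([[12.0, 11.0], [49.0, 43.0], [90.0, 90.0]])
print(link_frames(prev, curr))             # [(0, 0), (1, 1)]
```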

19.
Front Genet ; 8: 69, 2017.
Article in English | MEDLINE | ID: mdl-28620411

ABSTRACT

In order to understand gene function in bacterial life cycles, time-lapse bioimaging is applied in combination with different marker protocols in so-called microfluidic chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) a data set-specific sample environment that makes the application of landmark-based alignment almost impossible. We present a computational image registration solution for such microfluidics experiments, which we refer to as ViCAR: (Vi)sual (C)ues based (A)daptive (R)egistration. It consists of (1) the detection of particular polygons (outlined and segmented ones, referred to as visual cues), (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and found that it provides an effective spatial alignment, thereby paving the way to extracting temporal features pertinent to each resulting bacterial colony. Using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average RMSD of 4 × 10⁻² pixels, and superior results compared to a state-of-the-art algorithm.
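The final registration step as described, estimating rotation and translation from three matched reference points, can be sketched with a Kabsch-style least-squares fit; the coordinates below are synthetic.

```python
# Minimal sketch: rigid (rotation + translation) fit from matched 2D points,
# plus the RMSD of the aligned points. Not ViCAR's implementation.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t with dst ≈ src @ R.T + t (Kabsch algorithm)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    return R, dc - R @ sc

src = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 44.0]])  # cue coordinates
theta = np.deg2rad(2.0)                                     # synthetic shift
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([1.5, -0.7])                # shifted frame

R, t = rigid_fit(src, dst)
rmsd = np.sqrt(((src @ R.T + t - dst) ** 2).sum(axis=1).mean())
print("RMSD:", rmsd)   # ~0 for exact correspondences
```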

20.
PLoS One ; 11(6): e0157329, 2016.
Article in English | MEDLINE | ID: mdl-27285611

ABSTRACT

This paper presents a machine learning-based approach for the analysis of photos collected from laboratory experiments conducted to assess the potential impact of water-based drill cuttings on deep-water rhodolith-forming calcareous algae. This pilot study uses imaging technology to quantify and monitor the stress levels of the calcareous alga Mesophyllum engelhartii (Foslie) Adey caused by various degrees of light exposure, flow intensity and amount of sediment. A machine learning-based algorithm was applied to automatically assess the temporal variation of the calcareous algae's size (~ mass) and color. Measured size and color were correlated with the photosynthetic efficiency (maximum quantum yield of charge separation in photosystem II) and the degree of sediment coverage using multivariate regression. The multivariate regression showed correlations between time and calcareous algae sizes, as well as correlations between fluorescence and calcareous algae colors.
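The analysis pattern described, regressing photosynthetic yield on automatically measured size and colour, can be sketched with scikit-learn; the data below are synthetic placeholders.

```python
# Minimal sketch: multivariate linear regression of photosynthetic yield on
# image-derived size and colour features. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
size = rng.uniform(1.0, 5.0, 50)            # algae size from image analysis
redness = rng.uniform(0.2, 0.8, 50)         # mean red-channel fraction
X = np.column_stack([size, redness])
# synthetic response standing in for the measured photosynthetic yield
yield_psii = 0.1 * size + 0.5 * redness + rng.normal(0, 0.02, 50)

model = LinearRegression().fit(X, yield_psii)
print("coefficients:", model.coef_, "R^2:", model.score(X, yield_psii))
```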


Subject(s)
Geologic Sediments; Rhodophyta/physiology; Environmental Monitoring/instrumentation; Equipment Design; Geologic Sediments/analysis; Machine Learning; Photosynthesis; Photosystem II Protein Complex/metabolism; Pilot Projects; Rhodophyta/anatomy & histology; Rhodophyta/radiation effects; Stress, Physiological; Sunlight