1 - 20 of 30
1.
Comput Struct Biotechnol J; 23: 1808-1823, 2024 Dec.
Article En | MEDLINE | ID: mdl-38707543

Today's digital data storage systems typically offer advanced data recovery solutions to address the problem of catastrophic data loss, such as software-based disk sector analysis or physical-level data retrieval methods for conventional hard disk drives. However, DNA-based data storage currently relies solely on the inherent error correction properties of the methods used to encode digital data into strands of DNA. Any error that cannot be corrected utilizing the redundancy added by DNA encoding methods results in permanent data loss. To provide data recovery for DNA storage systems, we present a method to automatically reconstruct corrupted or missing data stored in DNA using fountain codes. Our method exploits the relationships between packets encoded with fountain codes to identify and rectify corrupted or lost data. Furthermore, we present file type-specific and content-based data recovery methods for three file types, illustrating how a fusion of fountain encoding-specific redundancy and knowledge about the data can effectively recover information in a corrupted DNA storage system, both automatically and in a guided manual manner. To demonstrate our approach, we introduce DR4DNA, a software toolkit that contains all methods presented. We evaluate DR4DNA using both in-silico and in-vitro experiments.
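As an illustration of the packet relationships such recovery exploits, here is a minimal Python sketch of a fountain-code peeling decoder. It assumes a toy packet format of (chunk indices, XOR payload) pairs over equal-length chunks and shows only the underlying principle, not DR4DNA's implementation.

```python
def peel_decode(packets, num_chunks):
    """Recover source chunks from fountain packets by iterative peeling.

    packets: list of (chunk_index_list, payload_bytes); each payload is
    the XOR of the listed equal-length chunks (toy format).
    """
    chunks = [None] * num_chunks
    work = [(set(idx), bytearray(data)) for idx, data in packets]
    progress = True
    while progress:
        progress = False
        for idx, data in work:
            # XOR already-recovered chunks out of this packet.
            for i in [j for j in idx if chunks[j] is not None]:
                data[:] = bytes(a ^ b for a, b in zip(data, chunks[i]))
                idx.discard(i)
            if len(idx) == 1:  # a degree-1 packet reveals a new chunk
                chunks[idx.pop()] = bytes(data)
                progress = True
    return chunks

# A corrupted or lost packet could be rebuilt the same way: XOR any
# decoded chunks back into the packets that reference them.
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
c = [b"AA", b"BB", b"CC"]
pkts = [([0], c[0]), ([0, 1], xor(c[0], c[1])), ([1, 2], xor(c[1], c[2]))]
assert peel_decode(pkts, 3) == c
```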

2.
Glob Chang Biol; 30(1): e17056, 2024 Jan.
Article En | MEDLINE | ID: mdl-38273542

Ecosystem functions and services are severely threatened by the unprecedented global loss in biodiversity. To counteract these trends, it is essential to develop systems to monitor changes in biodiversity for planning, evaluating, and implementing conservation and mitigation actions. However, the implementation of monitoring systems suffers from a trade-off between grain (i.e., the level of detail), extent (i.e., the number of study sites), and temporal repetition. Here, we present a networked sensor system for integrated biodiversity monitoring, applied and realized in the Nature 4.0 project, as a solution to these challenges; it considers plants and animals not only as targets of investigation but also, as carriers of sensors, as parts of the modular sensor network. Our networked sensor system consists of three closely interlinked main components with a modular structure: sensors, data transmission, and data storage, which are integrated into pipelines for automated biodiversity monitoring. We present real-world example applications, share our experiences in operating them, and provide the collected data openly. Our flexible, low-cost, and open-source solutions can be applied for monitoring individual and multiple terrestrial plants and animals as well as their interactions. Ultimately, our system can also be applied to area-wide ecosystem mapping tasks, thereby providing an exemplary cost-efficient and powerful solution for biodiversity monitoring. Building upon our experiences in the Nature 4.0 project, we identified ten key challenges that need to be addressed to better understand and counteract the ongoing loss of biodiversity using networked sensor systems. To tackle these challenges, interdisciplinary collaboration, additional research, and practical solutions are necessary to enhance the capability and applicability of networked sensor systems for researchers and practitioners, ultimately helping to ensure the sustainable management of ecosystems and the provision of ecosystem services.


Conservation of Natural Resources; Ecosystem; Animals; Biodiversity; Plants
3.
Sensors (Basel); 23(17), 2023 Aug 25.
Article En | MEDLINE | ID: mdl-37687874

Several areas of wireless networking, such as wireless sensor networks or the Internet of Things, require application data to be distributed to multiple receivers in an area beyond the transmission range of a single node. This can be achieved by using the wireless medium's broadcast property when retransmitting data. Due to the energy constraints of typical wireless devices, a broadcasting scheme that consumes as little energy as possible is highly desirable. In this article, we present a novel multi-hop data dissemination protocol called BTP. It uses a game-theoretical model to construct a spanning tree in a decentralized manner to minimize the total energy consumption of a network by minimizing the transmission power of each node. Although BTP is based on a game-theoretical model, it neither requires information exchange between distant nodes nor time synchronization during its operation, and it inhibits graph cycles effectively. The protocol is evaluated in Matlab and NS-3 simulations and through real-world implementation on a testbed of 75 Raspberry Pis. The evaluation conducted shows that our proposed protocol can achieve a total energy reduction of up to 90% compared to a simple broadcast protocol in real-world experiments.
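To make the objective concrete, the sketch below builds such a tree greedily in a centralized, Prim-style fashion: a node's transmit power must reach its farthest child (power ~ distance^alpha), and the unattached node whose attachment increases total power the least is added next. This is our simplified illustration of the optimization goal; the actual protocol reaches a tree in a decentralized, game-theoretic manner, and the path-loss exponent alpha = 2 is an assumption.

```python
import math

def build_low_power_tree(coords, root=0, alpha=2.0):
    """Greedy broadcast-tree construction (toy, centralized version).

    coords: list of (x, y) node positions; returns a parent map and the
    total transmission energy, modeled as sum(reach_i ** alpha).
    """
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    in_tree = {root}
    parent = {root: None}
    reach = {i: 0.0 for i in range(n)}  # current transmit radius per node
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v in in_tree:
                    continue
                d = dist(u, v)
                # extra power u needs so its radius also covers v
                extra = max(d, reach[u]) ** alpha - reach[u] ** alpha
                if best is None or extra < best[0]:
                    best = (extra, u, v)
        _, u, v = best
        reach[u] = max(reach[u], dist(u, v))
        parent[v] = u
        in_tree.add(v)
    return parent, sum(r ** alpha for r in reach.values())
```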

4.
Nat Commun; 14(1): 628, 2023 Feb 06.
Article En | MEDLINE | ID: mdl-36746948

The extensive information capacity of DNA, coupled with decreasing costs for DNA synthesis and sequencing, makes DNA an attractive alternative to traditional data storage. The processes of writing, storing, and reading DNA exhibit specific error profiles and impose constraints that DNA sequences have to adhere to. We present DNA-Aeon, a concatenated coding scheme for DNA data storage. It supports the generation of variable-sized encoded sequences with a user-defined Guanine-Cytosine (GC) content, homopolymer length limitation, and the avoidance of undesired motifs. It further enables users to provide custom codebooks adhering to additional constraints. DNA-Aeon can correct substitution errors, insertions, deletions, and the loss of whole DNA strands. Comparisons with other codes show better error-correction capabilities of DNA-Aeon at similar redundancy levels with decreased DNA synthesis costs. In-vitro tests indicate high reliability of DNA-Aeon even in the case of skewed sequencing read distributions and high read dropout.


DNA Replication; DNA; Reproducibility of Results; DNA/genetics; Sequence Analysis, DNA; Algorithms
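The sequence constraints mentioned above amount to a validity check during encoding. A small sketch, with illustrative defaults (40-60% GC, homopolymers of at most 3 bases, an EcoRI site as a forbidden motif) standing in for DNA-Aeon's user-configurable settings:

```python
import re

def violates_constraints(seq, gc_min=0.4, gc_max=0.6,
                         max_homopolymer=3, forbidden=("GAATTC",)):
    """Return a reason string if seq breaks a constraint, else None.

    Thresholds and motifs are illustrative placeholders, not the
    tool's defaults.
    """
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not gc_min <= gc <= gc_max:
        return "GC content out of range"
    if re.search(r"(.)\1{%d,}" % max_homopolymer, seq):
        return "homopolymer too long"
    for motif in forbidden:
        if motif in seq:
            return f"undesired motif {motif}"
    return None
```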
5.
Bioinform Adv; 3(1): vbad117, 2023.
Article En | MEDLINE | ID: mdl-38496344

Motivation: There has been rapid progress in the development of error-correcting and constrained codes for DNA storage systems in recent years. However, the processing of raw sequencing data for DNA storage still offers considerable untapped potential for improvement. In particular, constraints can be used as prior information to improve the processing of DNA sequencing data. Furthermore, a workflow tailored to DNA storage codes enables fair comparisons between different approaches while leading to reproducible results. Results: We present RepairNatrix, a read-processing workflow for DNA storage. RepairNatrix supports preprocessing of raw sequencing data for DNA storage applications and can be used to flag and heuristically repair constraint-violating sequences to further increase the recoverability of encoded data in the presence of errors. Compared to a preprocessing strategy without repair functionality, RepairNatrix reduced the number of raw reads required for the successful, error-free decoding of the input files by a factor of 25-35 across different datasets. Availability and implementation: RepairNatrix is available on GitHub: https://github.com/umr-ds/repairnatrix.
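As a loose illustration of heuristic repair (our toy rule, not RepairNatrix's actual logic): a read that violates a homopolymer constraint can be repaired by collapsing over-long runs, since these are a common sequencing artifact.

```python
import re

def collapse_homopolymers(read, max_run=3):
    """Shrink base runs longer than max_run down to max_run bases."""
    return re.sub(r"(.)\1{%d,}" % max_run,
                  lambda m: m.group(1) * max_run, read)

# collapse_homopolymers("ACGGGGGT") -> "ACGGGT"
```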

6.
Vet Pathol; 59(4): 565-577, 2022 Jul.
Article En | MEDLINE | ID: mdl-35130766

The emergence of the coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), prompted rapid research efforts targeting the host range, pathogenesis, and transmission mechanisms, as well as the development of antiviral strategies. Genetically modified mice, rhesus macaques, ferrets, and Syrian golden hamsters have been frequently used in studies of pathogenesis and of the efficacy of antiviral compounds and vaccines. However, alternatives to in vivo experiments, such as immortalized cell lines, primary respiratory epithelial cells cultured at an air-liquid interface, stem/progenitor cell-derived organoids, or tissue explants, have also been used for isolation of SARS-CoV-2, investigation of cytopathic effects, and pathogen-host interactions. Moreover, initial proof-of-concept studies for testing therapeutic agents can be performed with these tools, showing that animal-sparing cell culture methods could significantly reduce the need for animal models in the future, following the 3R principles of replace, reduce, and refine. So far, only a few studies using animal-derived primary cells or tissues have been conducted in SARS-CoV-2 research, although natural infection has been shown to occur in several animal species. Therefore, the need for in-depth investigations of possible interspecies transmission routes and differences in susceptibility to SARS-CoV-2 is urgent. This review gives an overview of studies employing alternative culture systems like primary cell cultures, tissue explants, or organoids for investigations of the pathophysiology and reverse zoonotic potential of SARS-CoV-2 in animals. In addition, future possibilities of SARS-CoV-2 research in animals, including previously neglected methods like the use of precision-cut lung slices, will be outlined.


COVID-19; Rodent Diseases; Animals; Antiviral Agents/therapeutic use; COVID-19/veterinary; Cricetinae; Disease Models, Animal; Ferrets; Lung/pathology; Macaca mulatta; Mice; Rodent Diseases/pathology; SARS-CoV-2
7.
BMC Bioinformatics; 22(1): 406, 2021 Aug 17.
Article En | MEDLINE | ID: mdl-34404355

BACKGROUND: DNA is a promising storage medium for high-density long-term digital data storage. Since DNA synthesis and sequencing are still relatively expensive tasks, the coding methods used to store digital data in DNA should correct errors and avoid unstable or error-prone DNA sequences. Near-optimal rateless erasure codes, also called fountain codes, are particularly interesting for realizing high-capacity and low-error DNA storage systems, as shown by Erlich and Zielinski in their approach based on the Luby transform (LT) code. Since LT is the most basic fountain code, there is large untapped potential for improvement in using near-optimal erasure codes for DNA storage. RESULTS: We present NOREC4DNA, a software framework to use, test, compare, and improve near-optimal rateless erasure codes (NORECs) for DNA storage systems. These codes can effectively be used to store digital information in DNA and cope with the restrictions of the DNA medium. Additionally, they can adapt to possible variable lengths of DNA strands and have nearly zero overhead. We describe the design and implementation of NOREC4DNA. Furthermore, we present experimental results demonstrating that NOREC4DNA can flexibly be used to evaluate the use of NORECs in DNA storage systems. In particular, we show that NORECs that apparently have not yet been used for DNA storage, such as Raptor and Online codes, can achieve significant improvements over the LT codes used in previous work. NOREC4DNA is available on https://github.com/umr-ds/NOREC4DNA. CONCLUSION: NOREC4DNA is a flexible and extensible software framework for using, evaluating, and comparing NORECs for DNA storage systems.


Algorithms; DNA; DNA/genetics; Information Storage and Retrieval; Sequence Analysis, DNA; Software
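The Luby transform itself is compact: sample a degree from a soliton distribution, pick that many random chunks, and XOR them into a packet. The sketch below uses the ideal soliton distribution for brevity; practical NORECs (robust soliton LT, Raptor, Online codes) refine the distribution or add precoding. This is an illustration, not NOREC4DNA's code.

```python
import random
from functools import reduce

def ideal_soliton(k):
    """P(d=1) = 1/k, P(d) = 1/(d*(d-1)) for d = 2..k."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(chunks, num_packets, seed=42):
    """Generate LT fountain packets as (chunk_indices, xor_payload)."""
    rng = random.Random(seed)
    k = len(chunks)
    degrees, weights = list(range(1, k + 1)), ideal_soliton(k)
    packets = []
    for _ in range(num_packets):
        d = rng.choices(degrees, weights=weights)[0]
        idx = rng.sample(range(k), d)
        payload = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                         (chunks[i] for i in idx))
        packets.append((idx, payload))
    return packets
```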
8.
BMC Bioinformatics; 21(1): 526, 2020 Nov 16.
Article En | MEDLINE | ID: mdl-33198651

BACKGROUND: Sequencing of marker genes amplified from environmental samples, known as amplicon sequencing, allows us to resolve some of the hidden diversity and elucidate evolutionary relationships and ecological processes among complex microbial communities. The analysis of large numbers of samples at high sequencing depths generated by high-throughput sequencing technologies requires efficient, flexible, and reproducible bioinformatics pipelines. Only a few existing workflows can be run in a user-friendly, scalable, and reproducible manner on different computing devices using an efficient workflow management system. RESULTS: We present Natrix, an open-source bioinformatics workflow for preprocessing raw amplicon sequencing data. The workflow contains all analysis steps from quality assessment, read assembly, dereplication, chimera detection, split-sample merging, and sequence representative assignment (OTUs or ASVs) to the taxonomic assignment of sequence representatives. The workflow is written using Snakemake, a workflow management engine for developing data analysis workflows, and uses Conda for version control: Snakemake ensures reproducibility, while Conda provides version control of the utilized programs. The encapsulation of rules and their dependencies supports hassle-free sharing of rules between workflows and easy adaptation and extension of existing workflows. Natrix is freely available on GitHub ( https://github.com/MW55/Natrix ) or as a Docker container on DockerHub ( https://hub.docker.com/r/mw55/natrix ). CONCLUSION: Natrix is a user-friendly and highly extensible workflow for processing Illumina amplicon data.


High-Throughput Nucleotide Sequencing; Software; Workflow; Cluster Analysis; DNA, Environmental/genetics; DNA, Environmental/isolation & purification; Data Analysis; Databases, Genetic; Floods; Microbiota/genetics; Reproducibility of Results
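Two of the listed steps, quality filtering and dereplication, reduce to a few lines each. The sketch below is a deliberately simplified stand-in for the corresponding workflow rules, with a threshold chosen by us:

```python
from collections import Counter

def quality_filter(reads, quals, min_mean_q=20):
    """Keep reads whose mean Phred score reaches min_mean_q (toy step)."""
    return [r for r, q in zip(reads, quals) if sum(q) / len(q) >= min_mean_q]

def dereplicate(reads):
    """Collapse identical reads into (sequence, abundance) pairs,
    most abundant first."""
    return Counter(reads).most_common()
```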
9.
PLoS Comput Biol; 16(9): e1008179, 2020 Sep.
Article En | MEDLINE | ID: mdl-32898132

Detection and segmentation of macrophage cells in fluorescence microscopy images is a challenging problem, mainly due to crowded cells, variation in shapes, and morphological complexity. We present a new deep learning approach for cell detection and segmentation that incorporates previously learned nucleus features. A novel fusion of feature pyramids for nucleus detection and segmentation with feature pyramids for cell detection and segmentation is used to improve performance on a microscopic image dataset created by us and provided for public use, containing both nucleus and cell signals. Our experimental results indicate that cell detection and segmentation performance significantly benefit from the fusion of previously learned nucleus features. The proposed feature pyramid fusion architecture clearly outperforms a state-of-the-art Mask R-CNN approach for cell detection and segmentation with relative mean average precision improvements of up to 23.88% and 23.17%, respectively.


Eukaryotic Cells/cytology; Image Processing, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Neural Networks, Computer; Computational Biology; Deep Learning; Humans; Macrophages/cytology; THP-1 Cells
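The fusion idea can be pictured as concatenating the nucleus and cell feature maps at each pyramid level and mixing them with a 1x1 convolution. The PyTorch module below is our minimal sketch of that pattern; the paper's actual architecture (and how the two pyramids are produced) differs.

```python
import torch
import torch.nn as nn

class PyramidFusion(nn.Module):
    """Fuse two feature pyramids level-by-level (concat + 1x1 conv)."""

    def __init__(self, channels=256, levels=4):
        super().__init__()
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * channels, channels, kernel_size=1)
            for _ in range(levels)
        )

    def forward(self, nucleus_feats, cell_feats):
        # Both arguments: lists of [N, C, H, W] tensors, one per level,
        # with matching shapes at each level.
        return [conv(torch.cat([n, c], dim=1))
                for conv, n, c in zip(self.fuse, nucleus_feats, cell_feats)]
```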
10.
IEEE J Biomed Health Inform; 24(11): 3154-3161, 2020 Nov.
Article En | MEDLINE | ID: mdl-32750950

In personalized medicine, a challenging task is to identify the most effective treatment for a patient. In oncology, several computational models have been developed to predict the response to drug therapy. However, the performance of these models depends on multiple factors. This paper presents a new approach, called Q-Rank, to predict the sensitivity of cell lines to anti-cancer drugs. Q-Rank integrates different prediction algorithms and identifies a suitable algorithm for a given application. Q-Rank is based on reinforcement learning methods to rank prediction algorithms on the basis of relevant features (e.g., omics characterization). The best-ranked algorithm is recommended and used to predict the drug response. Our experimental results indicate that Q-Rank outperforms the integrated models in predicting the sensitivity of cell lines to different drugs.


Antineoplastic Agents; Neoplasms; Pharmaceutical Preparations; Algorithms; Antineoplastic Agents/therapeutic use; Humans; Neoplasms/drug therapy; Precision Medicine
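The core selection problem, learning which algorithm to recommend for a given feature context, can be illustrated with an epsilon-greedy bandit. This generic stand-in is not Q-Rank's actual method; contexts are assumed to be hashable (e.g., discretized omics features), and the reward would be the observed prediction accuracy.

```python
import random
from collections import defaultdict

class AlgorithmSelector:
    """Epsilon-greedy selection among prediction algorithms per context."""

    def __init__(self, algorithms, epsilon=0.1):
        self.algorithms = list(algorithms)
        self.epsilon = epsilon
        self.value = defaultdict(float)  # (context, algo) -> mean reward
        self.count = defaultdict(int)

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.algorithms)  # explore
        return max(self.algorithms, key=lambda a: self.value[(context, a)])

    def update(self, context, algo, reward):
        key = (context, algo)
        self.count[key] += 1
        # incremental mean of observed rewards
        self.value[key] += (reward - self.value[key]) / self.count[key]
```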
11.
Bioinformatics; 36(11): 3322-3326, 2020 Jun 01.
Article En | MEDLINE | ID: mdl-32129840

SUMMARY: The development of de novo DNA synthesis, polymerase chain reaction (PCR), DNA sequencing, and molecular cloning gave researchers unprecedented control over DNA and DNA-mediated processes. To reduce the error probabilities of these techniques, DNA composition has to adhere to method-dependent restrictions. To comply with such restrictions, a synthetic DNA fragment is often adjusted manually or by using custom-made scripts. In this article, we present MESA (Mosla Error Simulator), a web application for the assessment of DNA fragments based on the limitations of DNA synthesis, amplification, cloning, and sequencing methods, as well as biological restrictions of host organisms. Furthermore, MESA can be used to simulate errors during synthesis, PCR, storage, and sequencing processes. AVAILABILITY AND IMPLEMENTATION: MESA is available at mesa.mosla.de, with the source code available at github.com/umr-ds/mesa_dna_sim. CONTACT: dominik.heider@uni-marburg.de. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


DNA; Software; DNA/genetics; High-Throughput Nucleotide Sequencing; Polymerase Chain Reaction; Sequence Analysis, DNA
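Error simulation of this kind boils down to applying per-base substitution, insertion, and deletion events. The rates below are placeholders; MESA models method-specific error profiles for synthesis, PCR, storage, and sequencing.

```python
import random

def simulate_errors(seq, sub=0.01, ins=0.005, dele=0.005, seed=None):
    """Apply random substitutions, insertions, and deletions to a DNA
    string at the given per-base rates (illustrative values)."""
    rng = random.Random(seed)
    bases = "ACGT"
    out = []
    for b in seq:
        r = rng.random()
        if r < dele:
            continue  # base deleted
        if r < dele + sub:
            b = rng.choice([x for x in bases if x != b])  # substitution
        out.append(b)
        if rng.random() < ins:
            out.append(rng.choice(bases))  # insertion after this base
    return "".join(out)
```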
12.
Article En | MEDLINE | ID: mdl-27845672

In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.


Binding Sites; Computational Biology/methods; Computer Graphics; Proteins/chemistry; Sequence Alignment/methods; Proteins/genetics; Sequence Analysis, Protein/methods; Software
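A much-reduced CPU sketch of the idea: a simple (1+lambda) evolution strategy over a rigid 2D transform, scoring a candidate by the mean distance from each transformed point to the nearest equally labeled point of the other cloud. The paper's method evolves large populations in parallel on the GPU and operates on 3D binding-site clouds; all parameters here are our choices.

```python
import numpy as np

def fitness(params, pts_a, pts_b, labels_a, labels_b):
    """Mean nearest equally-labeled neighbor distance after applying a
    rigid transform (theta, tx, ty) to pts_a. Arrays are NumPy; every
    label in A is assumed to occur in B."""
    theta, tx, ty = params
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    moved = pts_a @ rot.T + np.array([tx, ty])
    dists = [np.min(np.linalg.norm(pts_b[labels_b == lab] - p, axis=1))
             for p, lab in zip(moved, labels_a)]
    return float(np.mean(dists))

def evolve(pts_a, pts_b, labels_a, labels_b,
           pop=200, gens=100, sigma=0.5, seed=0):
    """(1+lambda) evolution strategy with a slowly shrinking step size."""
    rng = np.random.default_rng(seed)
    best = np.zeros(3)
    best_fit = fitness(best, pts_a, pts_b, labels_a, labels_b)
    for _ in range(gens):
        cand = best + rng.normal(0.0, sigma, size=(pop, 3))
        fits = [fitness(c, pts_a, pts_b, labels_a, labels_b) for c in cand]
        i = int(np.argmin(fits))
        if fits[i] < best_fit:
            best, best_fit = cand[i], fits[i]
        sigma *= 0.98  # anneal the mutation step
    return best, best_fit
```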
13.
IEEE Trans Nanobioscience; 16(8): 708-717, 2017 Dec.
Article En | MEDLINE | ID: mdl-29364123

This paper presents a novel health analysis approach for heart failure prediction. It is based on the use of complex event processing (CEP) technology, combined with statistical approaches. A CEP engine processes incoming health data by executing threshold-based analysis rules. Instead of having to manually set up thresholds, our novel statistical algorithm automatically computes and updates thresholds according to recorded historical data. Experimental results demonstrate the merits of our approach in terms of speed, precision, and recall.


Algorithms; Computational Biology/methods; Heart Failure; Models, Statistical; Heart Failure/diagnosis; Heart Failure/epidemiology; Humans; Machine Learning; Predictive Value of Tests
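Automatically maintained thresholds of this kind can be sketched with Welford's online mean/variance update, flagging readings outside mean ± k·std; the paper's statistical algorithm may differ in detail.

```python
class AdaptiveThreshold:
    """Online mean/variance (Welford); flags values outside k std-devs."""

    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.k * std
```

A CEP rule would then call update() on each incoming reading and raise an event whenever is_anomalous() fires.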
14.
Article En | MEDLINE | ID: mdl-26736783

In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients who had RFA treatments of hepatocellular carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine, and as the evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer-aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images with (DSC = 75.9%) and without (DSC = 78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p < 0.423) between the segmentation results of the subgroups in a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles is reported, and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in clinical practice and need very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This decreases patient stress as well as the risks and costs associated with a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation, as the real needle position is known.


Ablation Techniques/instrumentation; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/surgery; Image Processing, Computer-Assisted/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/surgery; Needles; Radio Waves; Algorithms; Humans; Recurrence; Tomography, X-Ray Computed
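Since the Dice Similarity Coefficient is the evaluation metric here and in the following entries, a short reference implementation for binary masks (assumed non-empty): DSC = 2|A ∩ B| / (|A| + |B|).

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice Similarity Coefficient of two same-shaped binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```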
15.
PLoS One; 9(4): e93389, 2014.
Article En | MEDLINE | ID: mdl-24705281

In this article, we present a graph-based method using a cubic template for volumetric segmentation of vertebrae in magnetic resonance imaging (MRI) acquisitions. The user can define the degree of deviation from a regular cube via a smoothness value Δ. The Cube-Cut algorithm generates a directed graph with two terminal nodes (an s-t-network), where the nodes of the graph correspond to a cubic-shaped subset of the image's voxels. The weightings of the graph's terminal edges, which connect every node with a virtual source s or a virtual sink t, represent the affinity of a voxel to the vertebra (source) and to the background (sink). Furthermore, a set of infinitely weighted non-terminal edges implements the smoothness term. After graph construction, a minimal s-t-cut is calculated within polynomial computation time, splitting the nodes into two disjoint units. Subsequently, the segmentation result is determined from the source set. A quantitative evaluation of a C++ implementation of the algorithm resulted in an average Dice Similarity Coefficient (DSC) of 81.33% and a running time of less than a minute.


Anatomy, Cross-Sectional/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Spine/anatomy & histology; Algorithms; Humans; Organ Size
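The s-t network described above follows a standard graph-cut pattern: terminal edges encode regional affinity, non-terminal edges encode smoothness, and the minimum cut yields the segmentation. The sketch below builds a strongly simplified 2D, 4-connected variant with networkx (seed-derived terminal capacities, one constant smoothness weight instead of the Δ-controlled cubic template, practical only for small images); it illustrates the principle, not the Cube-Cut algorithm.

```python
import networkx as nx
import numpy as np

def cut_segment(image, fg_seeds, bg_seeds, smooth=50.0, big=1e9):
    """Binary segmentation of a 2D float image via an s-t minimum cut."""
    h, w = image.shape
    G = nx.DiGraph()
    fg_mean = np.mean([image[p] for p in fg_seeds])
    bg_mean = np.mean([image[p] for p in bg_seeds])
    for y in range(h):
        for x in range(w):
            v = (y, x)
            # terminal edges: affinity to object (source) / background (sink)
            G.add_edge("s", v, capacity=abs(image[v] - bg_mean))
            G.add_edge(v, "t", capacity=abs(image[v] - fg_mean))
            for dy, dx in ((0, 1), (1, 0)):  # smoothness between neighbors
                u = (y + dy, x + dx)
                if u[0] < h and u[1] < w:
                    G.add_edge(v, u, capacity=smooth)
                    G.add_edge(u, v, capacity=smooth)
    for p in fg_seeds:  # hard constraints for the seeds
        G.add_edge("s", tuple(p), capacity=big)
    for p in bg_seeds:
        G.add_edge(tuple(p), "t", capacity=big)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    mask = np.zeros((h, w), dtype=bool)
    for v in source_side - {"s"}:
        mask[v] = True
    return mask
```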
16.
Comput Med Imaging Graph; 38(4): 285-295, 2014 Jun.
Article En | MEDLINE | ID: mdl-24613389

In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. To this end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, the color or gray value information needed for the approach can be automatically extracted around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time min-cut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms, and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach.


Anatomic Landmarks/pathology; Brain Diseases/pathology; Brain/pathology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; User-Computer Interface; Algorithms; Computer Systems; Feedback; Humans; Reproducibility of Results; Sensitivity and Specificity; Translational Research, Biomedical/methods
17.
PLoS One; 8(5): e63082, 2013.
Article En | MEDLINE | ID: mdl-23671656

Diffusion Tensor Imaging (DTI) and fiber tractography are established methods to reconstruct major white matter tracts in the human brain in-vivo. Particularly in the context of neurosurgical procedures, reliable information about the course of fiber bundles is important to minimize postoperative deficits while maximizing the tumor resection volume. Since routinely used deterministic streamline tractography approaches often underestimate the spatial extent of white matter tracts, a novel approach to improve fiber segmentation is presented here, considering clinical time constraints. To this end, fiber tracking visualization is enhanced with statistical information from multiple tracking applications to determine the uncertainty in reconstruction based on clinical DTI data. After initial deterministic fiber tracking and centerline calculation, new seed regions are generated along the result's midline. Tracking is then applied to all new seed regions, varying in number and applied offset. The number of fibers passing each voxel is computed to model different levels of fiber bundle membership. Experimental results using an artificial dataset of an anatomical software phantom are presented, using the Dice Similarity Coefficient (DSC) as a measure of segmentation quality. Certain parameter combinations proved superior to others, providing significantly improved results with DSCs of 81.02% ± 4.12%, 81.32% ± 4.22%, and 80.99% ± 3.81% for different levels of added noise, compared to the deterministic fiber tracking procedure using the two-ROI approach, with average DSCs of 65.08% ± 5.31%, 64.73% ± 6.02%, and 65.91% ± 6.42%. Whole-brain tractography based on the seed volume generated by the calculated seeds delivers average DSCs of 67.12% ± 0.86%, 75.10% ± 0.28%, and 72.91% ± 0.15%, while the original whole-brain tractography, using the initial ROIs as combined include regions, delivers DSCs of 67.16%, 75.03%, and 75.54%; both are clearly improved upon by the repeated fiber tractography method.


Brain Mapping/methods; Pyramidal Tracts/physiopathology; Adult; Aged; Brain; Brain Neoplasms/physiopathology; Case-Control Studies; Diffusion Tensor Imaging; Female; Glioblastoma/physiopathology; Humans; Male; Middle Aged; Models, Neurological; Neural Pathways; Phantoms, Imaging; Software
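The per-voxel fiber counts used to model bundle membership amount to a small accumulation step; the sketch below assumes streamlines are already mapped to integer voxel indices.

```python
import numpy as np

def fiber_membership(fibers, shape):
    """Count, per voxel, how many streamlines pass through it.

    fibers: iterable of (N_i, 3) integer arrays of voxel indices.
    Each fiber contributes at most once per voxel, even if it
    revisits it."""
    counts = np.zeros(shape, dtype=int)
    for f in fibers:
        for v in set(map(tuple, np.asarray(f))):
            counts[v] += 1
    return counts
```

Thresholding the counts at different levels then yields the membership levels used for the improved segmentation.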
18.
Sci Rep; 3: 1364, 2013.
Article En | MEDLINE | ID: mdl-23455483

Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer - a free platform for biomedical research - provides an alternative to this manual slice-by-slice segmentation process that is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once by drawing boundaries manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was, on average, 61% of the time required for a pure manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm.


Glioblastoma/diagnosis; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Tumor Burden; Glioblastoma/pathology; Humans; Image Processing, Computer-Assisted
19.
Comput Methods Programs Biomed; 110(3): 268-278, 2013 Jun.
Article En | MEDLINE | ID: mdl-23266223

Among all abnormal growths inside the skull, tumors in the sellar region account for approximately 10-15%, and the pituitary adenoma is the most common sellar lesion. Manual segmentation of pituitary adenomas is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, two methods for pituitary adenoma segmentation in the human brain are presented and compared using magnetic resonance imaging (MRI) patient data from the clinical routine. Method A is a graph-based method that sets up a directed and weighted graph and performs a min-cut for optimal segmentation results; Method B is a balloon inflation method that uses balloon inflation forces to detect the pituitary adenoma boundaries. The ground truth of the pituitary adenoma boundaries - for the evaluation of the methods - is manually extracted by neurosurgeons. Comparison is done using the Dice Similarity Coefficient (DSC), a measure of the spatial overlap of different segmentation results. The average DSC over all datasets is 77.5 ± 4.5% for the graph-based method and 75.9 ± 7.2% for the balloon inflation method, showing no significant difference. The overall segmentation time of the implemented approaches was less than 4 s, compared with a manual segmentation that took, on average, 3.9 ± 0.5 min.


Adenoma/pathology; Magnetic Resonance Imaging/methods; Pituitary Neoplasms/pathology; Algorithms; Computer Graphics; Computer Simulation; Humans; Magnetic Resonance Imaging/statistics & numerical data; Models, Anatomic
20.
Sci Rep; 2: 420, 2012.
Article En | MEDLINE | ID: mdl-22639728

We present a scale-invariant, template-based segmentation paradigm that sets up a graph and performs a graph cut to separate an object from the background. Typically, graph-based schemes distribute the nodes of the graph uniformly and equidistantly on the image and use a regularizer to bias the cut towards a particular shape. The strategy of uniform and equidistant nodes does not allow the cut to prefer more complex structures, especially when areas of the object are indistinguishable from the background. We propose a solution by introducing the concept of a "template shape" of the target object, in which the nodes are sampled non-uniformly and non-equidistantly on the image. We evaluate the approach on 2D images where the object's texture and background are similar and large areas of the object have the same gray level appearance as the background. We also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning purposes.


Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Radiographic Image Enhancement/methods; Adenoma/diagnostic imaging; Algorithms; Brain Neoplasms/diagnostic imaging; Humans; Pituitary Neoplasms; Reproducibility of Results; Spine/radiation effects
...