Results 1 - 18 of 18
1.
Sci Rep ; 13(1): 13778, 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37612320

ABSTRACT

Mathematical models of cognition are often memoryless and ignore potential fluctuations of their parameters. However, human cognition is inherently dynamic. Thus, we propose to augment mechanistic cognitive models with a temporal dimension and estimate the resulting dynamics from a superstatistics perspective. Such a model entails a hierarchy between a low-level observation model and a high-level transition model. The observation model describes the local behavior of a system, and the transition model specifies how the parameters of the observation model evolve over time. To overcome the estimation challenges resulting from the complexity of superstatistical models, we develop and validate a simulation-based deep learning method for Bayesian inference, which can recover both time-varying and time-invariant parameters. We first benchmark our method against two existing frameworks capable of estimating time-varying parameters. We then apply our method to fit a dynamic version of the diffusion decision model to long time series of human response time data. Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model. Furthermore, we show that erroneously assuming static or homogeneous parameters hides important temporal information.
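The two-level hierarchy described above lends itself to a compact generative sketch. The following is a minimal, hypothetical illustration (not the authors' code): a Gaussian random walk acts as the transition model driving the drift rate of a simulated diffusion decision process, the observation model; all names and settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(drift, boundary=1.0, dt=1e-3, noise=1.0):
    """Euler-Maruyama simulation of one diffusion decision trial."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, int(x > 0)  # response time, choice

# Transition model: the drift rate follows a random walk across trials.
n_trials, sigma_rw = 200, 0.05
drifts = 1.0 + np.cumsum(rng.normal(0.0, sigma_rw, n_trials))
data = [simulate_trial(v) for v in drifts]
```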

2.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4903-4917, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34767511

ABSTRACT

Comparing competing mathematical models of complex processes is a shared goal among many branches of science. The Bayesian probabilistic framework offers a principled way to perform model comparison and extract useful metrics for guiding decisions. However, many interesting models are intractable with standard Bayesian methods, as they lack a closed-form likelihood function or the likelihood is computationally too expensive to evaluate. In this work, we propose a novel method for performing Bayesian model comparison using specialized deep learning architectures. Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset. Moreover, it requires no hand-crafted summary statistics of the data and is designed to amortize the cost of simulation over multiple models, datasets, and dataset sizes. This makes the method especially effective in scenarios where model fit needs to be assessed for a large number of datasets, so that case-based inference is practically infeasible. Finally, we propose a novel way to measure epistemic uncertainty in model comparison problems. We demonstrate the utility of our method on toy examples and simulated data from nontrivial models from cognitive science and single-cell neuroscience. We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work. We argue that our framework can enhance and enrich model-based analysis and inference in many fields dealing with computational models of natural processes. We further argue that the proposed measure of epistemic uncertainty provides a unique proxy to quantify absolute evidence even in a framework which assumes that the true data-generating model is within a finite set of candidate models.
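As a rough sketch of the simulation-based comparison idea (assumed PyTorch, not the authors' architecture): datasets simulated from each candidate model are labeled with the index of the generating model, and a permutation-invariant classifier learns to output approximate posterior model probabilities.

```python
import torch
import torch.nn as nn

class ModelComparator(nn.Module):
    def __init__(self, data_dim, n_models, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_models))

    def forward(self, x):              # x: (batch, n_obs, data_dim)
        h = self.embed(x).mean(dim=1)  # mean-pool -> permutation invariance
        return self.head(h)            # logits over candidate models

# Training pairs: (simulated dataset, index of the generating model).
net = ModelComparator(data_dim=2, n_models=3)
loss_fn = nn.CrossEntropyLoss()  # softmax of logits ~ model probabilities
```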

4.
IEEE Trans Neural Netw Learn Syst ; 33(4): 1452-1466, 2022 04.
Article in English | MEDLINE | ID: mdl-33338021

ABSTRACT

Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks that we call BayesFlow. The method uses simulations to learn a global estimator for the probabilistic mapping from observed data to underlying model parameters. A neural network pretrained in this way can then, without additional training or optimization, infer full posteriors on arbitrarily many real data sets involving the same model family. In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to modeling scenarios where standard inference techniques with handcrafted summary statistics fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science, and ecology. We argue that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated.


Subject(s)
Learning , Neural Networks, Computer , Bayes Theorem
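A heavily simplified sketch of the BayesFlow idea (assumed PyTorch; a single coupling block stands in for the deep invertible chain): a summary network pools simulated datasets into learned statistics, and a conditional flow is trained so that the mapped parameters follow a standard normal, amortizing posterior inference over future datasets.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, theta_dim, cond_dim, hidden=64):
        super().__init__()
        self.d = theta_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (theta_dim - self.d)))

    def forward(self, theta, cond):
        a, b = theta[:, :self.d], theta[:, self.d:]
        s, t = self.net(torch.cat([a, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                               # keep scales stable
        return torch.cat([a, b * s.exp() + t], dim=-1), s.sum(-1)

summary_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 8))
flow = AffineCoupling(theta_dim=2, cond_dim=8)

def loss(theta, x):                 # x: (batch, n_obs, 1) simulated data
    cond = summary_net(x).mean(1)   # permutation-invariant summary
    z, logdet = flow(theta, cond)
    return (0.5 * z.pow(2).sum(-1) - logdet).mean()  # max-likelihood loss
```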
5.
PLoS Comput Biol ; 17(10): e1009472, 2021 10.
Article in English | MEDLINE | ID: mdl-34695111

ABSTRACT

Mathematical models in epidemiology are an indispensable tool to determine the dynamics and important characteristics of infectious diseases. Apart from their scientific merit, these models are often used to inform political decisions and interventional measures during an ongoing outbreak. However, reliably inferring the epidemic dynamics by connecting complex models to real data is still hard and requires either laborious manual parameter fitting or expensive optimization methods which have to be repeated from scratch for every application of a given model. In this work, we address this problem with a novel combination of epidemiological modeling and specialized neural networks. Our approach entails two computational phases: In an initial training phase, a mathematical model describing the epidemic is used as a coach for a neural network, which acquires global knowledge about the full range of possible disease dynamics. In the subsequent inference phase, the trained neural network processes the observed data of an actual outbreak and infers the parameters of the model in order to realistically reproduce the observed dynamics and reliably predict future progression. With its flexible framework, our simulation-based approach is applicable to a variety of epidemiological models. Moreover, since our method is fully Bayesian, it is designed to incorporate all available prior knowledge about plausible parameter values and returns complete joint posterior distributions over these parameters. Application of our method to the early COVID-19 outbreak phase in Germany demonstrates that we are able to obtain reliable probabilistic estimates for important disease characteristics, such as generation time, fraction of undetected infections, likelihood of transmission before symptom onset, and reporting delays, using a very moderate amount of real-world observations.


Subject(s)
COVID-19/epidemiology , Models, Biological , Neural Networks, Computer , Bayes Theorem , Germany/epidemiology , Humans , Pandemics , Uncertainty
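An illustrative sketch of the training phase described above: an SIR-type simulator acts as the "coach", generating (parameters, epidemic curve) pairs from the prior, from which a neural network can learn the inverse mapping. The model form and parameter ranges here are assumptions, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sir(beta, gamma, n_days=60, N=83e6, I0=100):
    S, I, R, new_cases = N - I0, float(I0), 0.0, []
    for _ in range(n_days):
        inf = beta * S * I / N        # new infections per day
        rec = gamma * I               # recoveries per day
        S, I, R = S - inf, I + inf - rec, R + rec
        new_cases.append(inf)
    return np.array(new_cases)

# Draw training pairs from the prior; the network later inverts this mapping.
priors = [(rng.uniform(0.1, 1.0), rng.uniform(0.05, 0.5)) for _ in range(1000)]
train = [(p, simulate_sir(*p)) for p in priors]
```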
6.
Cancers (Basel) ; 13(13), 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34206336

ABSTRACT

Modern generative deep learning (DL) architectures allow for unsupervised learning of latent representations that can be exploited in several downstream tasks. Within the field of oncological medical imaging, we term these latent representations "digital tumor signatures" and hypothesize that they can be used, in analogy to radiomics features, to differentiate between lesions and normal liver tissue. Moreover, we conjecture that they can be used for the generation of synthetic data, specifically for the artificial insertion and removal of liver tumor lesions at user-defined spatial locations in CT images. Our approach utilizes an implicit autoencoder, an unsupervised model architecture that combines an autoencoder and two generative adversarial network (GAN)-like components. The model was trained on liver patches from 25 or 57 in-house abdominal CT scans, depending on the experiment, demonstrating that only minimal data is required for synthetic image generation. The model was evaluated on a publicly available data set of 131 scans. We show that a PCA embedding of the latent representation captures the structure of the data, providing the foundation for the targeted insertion and removal of tumor lesions. To assess the quality of the synthetic images, we conducted two experiments with five radiologists. For experiment 1, only one rater and the ensemble-rater were marginally above the chance level in distinguishing real from synthetic data. For the second experiment, no rater was above the chance level. To illustrate that the "digital signatures" can also be used to differentiate lesion from normal tissue, we employed several machine learning methods. The best-performing method, a LinearSVM, obtained 95% (97%) accuracy, 94% (95%) sensitivity, and 97% (99%) specificity, depending on whether all data or only normal-appearing patches were used for training of the implicit autoencoder. Overall, we demonstrate that the proposed unsupervised learning paradigm can be utilized for the removal and insertion of liver lesions at user-defined spatial locations and that the digital signatures can be used to discriminate between lesions and normal liver tissue in abdominal CT scans.
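A hedged sketch of the downstream use of the latent codes ("digital tumor signatures") described above: PCA to inspect the embedding structure, then a linear SVM to separate lesion from normal-tissue patches. Here `latents` and `labels` are assumed placeholder outputs of the (not shown) implicit autoencoder.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

latents = np.random.randn(500, 128)       # placeholder latent codes
labels = np.random.randint(0, 2, 500)     # 1 = lesion, 0 = normal liver

embedding = PCA(n_components=2).fit_transform(latents)  # for visualization
clf = LinearSVC(dual=False)
print(cross_val_score(clf, latents, labels, cv=5).mean())
```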

7.
Med Phys ; 48(4): 1893-1908, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33332644

ABSTRACT

PURPOSE: To investigate the feasibility and accuracy of proton dose calculations with artificial neural networks (ANNs) in challenging three-dimensional (3D) anatomies. METHODS: A novel proton dose calculation approach was designed based on the application of a long short-term memory (LSTM) network. It processes the 3D geometry as a sequence of two-dimensional (2D) computed tomography slices and outputs a corresponding sequence of 2D slices that forms the 3D dose distribution. The general accuracy of the approach is investigated in comparison to Monte Carlo reference simulations and pencil beam dose calculations. We consider both artificial phantom geometries and clinically realistic lung cases for three different pencil beam energies. RESULTS: For artificial phantom cases, the trained LSTM model achieved a 98.57% γ-index pass rate ([1%, 3 mm]) in comparison to MC simulations for a pencil beam with initial energy 104.25 MeV. For a lung patient case, we observe pass rates of 98.56%, 97.74%, and 94.51% for an initial energy of 67.85, 104.25, and 134.68 MeV, respectively. Applying the LSTM dose calculation on patient cases that were fully excluded from the training process yields an average γ-index pass rate of 97.85%. CONCLUSIONS: LSTM networks are well suited for proton dose calculation tasks. Further research, especially regarding model generalization and computational performance in comparison to established dose calculation methods, is warranted.


Subject(s)
Proton Therapy , Protons , Algorithms , Humans , Memory, Short-Term , Monte Carlo Method , Phantoms, Imaging , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted
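A minimal sketch (assumed PyTorch, not the study's implementation) of the slice-sequence idea from the abstract above: flattened 2D CT slices enter an LSTM in depth order, and a linear readout predicts the corresponding 2D dose slice at each step. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class Slice2DoseLSTM(nn.Module):
    def __init__(self, slice_pixels, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(slice_pixels, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, slice_pixels)

    def forward(self, ct):         # ct: (batch, n_slices, H*W)
        h, _ = self.lstm(ct)
        return self.readout(h)     # predicted dose, same shape as input

model = Slice2DoseLSTM(slice_pixels=64 * 64)
dose = model(torch.randn(2, 96, 64 * 64))  # 96 slices of 64x64 voxels
```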
8.
IEEE Trans Pattern Anal Mach Intell ; 43(10): 3724-3738, 2021 Oct.
Article in English | MEDLINE | ID: mdl-32175858

ABSTRACT

Image partitioning, or segmentation without semantics, is the task of decomposing an image into distinct segments, or equivalently to detect closed contours. Most prior work either requires seeds, one per segment; or a threshold; or formulates the task as multicut / correlation clustering, an NP-hard problem. Here, we propose an efficient algorithm for graph partitioning, the "Mutex Watershed". Unlike seeded watershed, the algorithm can accommodate not only attractive but also repulsive cues, allowing it to find a previously unspecified number of segments without the need for explicit seeds or a tunable threshold. We also prove that this simple algorithm solves to global optimality an objective function that is intimately related to the multicut / correlation clustering integer linear programming formulation. The algorithm is deterministic, very simple to implement, and has empirically linearithmic complexity. When presented with short-range attractive and long-range repulsive cues from a deep neural network, the Mutex Watershed gives the best results currently known for the competitive ISBI 2012 EM segmentation benchmark.
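A compact, hedged sketch of the Mutex Watershed on a signed-weight graph, simplified from the paper's formulation: edges are visited by decreasing absolute weight; attractive edges merge clusters unless a mutex (repulsive) constraint forbids it, and repulsive edges install such constraints.

```python
class MutexWatershed:
    def __init__(self, n_nodes):
        self.parent = list(range(n_nodes))
        self.mutex = [set() for _ in range(n_nodes)]  # constraints per root

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path compression
            u = self.parent[u]
        return u

    def run(self, edges):  # edges: [(u, v, signed_weight), ...]
        for u, v, w in sorted(edges, key=lambda e: -abs(e[2])):
            ru, rv = self.find(u), self.find(v)
            if ru == rv or rv in self.mutex[ru]:
                continue
            if w > 0:                                  # attractive: merge
                self.parent[rv] = ru
                self.mutex[ru] |= self.mutex[rv]
                for m in self.mutex[rv]:
                    self.mutex[m].discard(rv)
                    self.mutex[m].add(ru)
            else:                                      # repulsive: forbid
                self.mutex[ru].add(rv)
                self.mutex[rv].add(ru)
        return [self.find(i) for i in range(len(self.parent))]
```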

9.
Br J Math Stat Psychol ; 73(1): 23-43, 2020 02.
Article in English | MEDLINE | ID: mdl-30793299

ABSTRACT

Complex simulator-based models with non-standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end-to-end approach for approximate Bayesian computation (ABC) based on fully convolutional neural networks. The method enables users of ABC to derive simultaneously the posterior mean and variance of multidimensional posterior distributions directly from raw simulated data. Once trained on simulated data, the convolutional neural network is able to map real data samples of variable size to the first two posterior moments of the relevant parameters' distributions. Thus, in contrast to other machine learning approaches to ABC, our approach allows us to generate reusable models that can be applied by different researchers employing the same model. We verify the utility of our method on two common statistical models (i.e., a multivariate normal distribution and a multiple regression scenario), for which the posterior parameter distributions can be derived analytically. We then apply our method to recover the parameters of the leaky competing accumulator (LCA) model and benchmark our results against the current state-of-the-art technique, probability density approximation (PDA). Results show that our method exhibits a lower approximation error compared with other machine learning approaches to ABC. It also performs similarly to PDA in recovering the parameters of the LCA model.


Subject(s)
Algorithms , Bayes Theorem , Neural Networks, Computer , Computer Simulation , Humans , Likelihood Functions , Machine Learning , Regression Analysis
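A hedged sketch (assumed PyTorch) of the fully convolutional idea from the abstract above: a 1-D convolutional stack with global average pooling maps raw samples of variable length to the posterior mean and (log-)variance of each model parameter, trained on simulated pairs with a Gaussian negative log-likelihood.

```python
import torch
import torch.nn as nn

class MomentNet(nn.Module):
    def __init__(self, data_dim, n_params, ch=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(data_dim, ch, 3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(ch, 2 * n_params)  # mean and log-variance

    def forward(self, x):               # x: (batch, data_dim, n_samples)
        h = self.conv(x).mean(dim=-1)   # pooling handles variable size
        mu, logvar = self.head(h).chunk(2, dim=-1)
        return mu, logvar

def gaussian_nll(mu, logvar, theta):    # theta: true simulated parameters
    return (0.5 * (logvar + (theta - mu) ** 2 / logvar.exp())).mean()
```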
10.
Invest Radiol ; 54(10): 653-660, 2019 10.
Article in English | MEDLINE | ID: mdl-31261293

ABSTRACT

OBJECTIVES: Gadolinium-based contrast agents (GBCAs) have become an integral part in daily clinical decision making in the last 3 decades. However, there is a broad consensus that GBCAs should be exclusively used if no contrast-free magnetic resonance imaging (MRI) technique is available to reduce the amount of applied GBCAs in patients. In the current study, we investigate the possibility of predicting contrast enhancement from noncontrast multiparametric brain MRI scans using a deep-learning (DL) architecture. MATERIALS AND METHODS: A Bayesian DL architecture for the prediction of virtual contrast enhancement was developed using 10-channel multiparametric MRI data acquired before GBCA application. The model was quantitatively and qualitatively evaluated on 116 data sets from glioma patients and healthy subjects by comparing the virtual contrast enhancement maps to the ground truth contrast-enhanced T1-weighted imaging. Subjects were split into 3 groups: enhancing tumors (n = 47), nonenhancing tumors (n = 39), and patients without pathologic changes (n = 30). The tumor regions were segmented for a detailed analysis of subregions. The influence of the different MRI sequences was determined. RESULTS: Quantitative results of the virtual contrast enhancement yielded a sensitivity of 91.8% and a specificity of 91.2%. T2-weighted imaging, followed by diffusion-weighted imaging, was the most influential sequence for the prediction of virtual contrast enhancement. Analysis of the whole brain showed a mean area under the curve of 0.969 ± 0.019, a peak signal-to-noise ratio of 22.967 ± 1.162 dB, and a structural similarity index of 0.872 ± 0.031. Enhancing and nonenhancing tumor subregions performed worse (except for the peak signal-to-noise ratio of the nonenhancing tumors). The qualitative evaluation by 2 raters using a 4-point Likert scale showed good to excellent (3-4) results for 91.5% of the enhancing and 92.3% of the nonenhancing gliomas. However, despite the good scores and ratings, there were visual deviations between the virtual contrast maps and the ground truth, including a more blurry, less nodular-like ring enhancement, a few low-contrast false-positive enhancements of nonenhancing gliomas, and a tendency to omit smaller vessels. These "features" were also exploited by 2 trained radiologists when performing a Turing test, allowing them to discriminate between real and virtual contrast-enhanced images in 80% and 90% of the cases, respectively. CONCLUSIONS: The introduced model for virtual gadolinium enhancement demonstrates very good quantitative and qualitative performance. Future systematic studies in larger patient collectives with varying neurological disorders need to evaluate whether the introduced virtual contrast enhancement might reduce GBCA exposure in clinical practice.


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Adult , Bayes Theorem , Feasibility Studies , Female , Gadolinium , Humans , Male , Sensitivity and Specificity , Signal-To-Noise Ratio
11.
Int J Comput Assist Radiol Surg ; 14(6): 997-1007, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30903566

ABSTRACT

PURPOSE: Optical imaging is evolving as a key technique for advanced sensing in the operating room. Recent research has shown that machine learning algorithms can be used to address the inverse problem of converting pixel-wise multispectral reflectance measurements to underlying tissue parameters, such as oxygenation. Assessment of the specific hardware used in conjunction with such algorithms, however, has not properly addressed the possibility that the problem may be ill-posed. METHODS: We present a novel approach to the assessment of optical imaging modalities, which is sensitive to the different types of uncertainties that may occur when inferring tissue parameters. Based on the concept of invertible neural networks, our framework goes beyond point estimates and maps each multispectral measurement to a full posterior probability distribution which is capable of representing ambiguity in the solution via multiple modes. Performance metrics for a hardware setup can then be computed from the characteristics of the posteriors. RESULTS: Application of the assessment framework to the specific use case of camera selection for physiological parameter estimation yields the following insights: (1) estimation of tissue oxygenation from multispectral images is a well-posed problem, while (2) blood volume fraction may not be recovered without ambiguity. (3) In general, ambiguity may be reduced by increasing the number of spectral bands in the camera. CONCLUSION: Our method could help to optimize optical camera design in an application-specific manner.


Subject(s)
Machine Learning , Neural Networks, Computer , Optical Imaging/methods , Algorithms , Humans , Uncertainty
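A sketch of the assessment step described above, under stated assumptions: given a trained conditional invertible network (represented here by a hypothetical `inn_inverse` function, which is not a real library call), we draw posterior samples for one multispectral measurement and flag ambiguity when the posterior appears multimodal, e.g. via a simple one- versus two-component mixture comparison.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def posterior_samples(inn_inverse, measurement, n=2000, latent_dim=2):
    z = np.random.randn(n, latent_dim)   # base-distribution samples
    return inn_inverse(z, measurement)   # mapped to tissue parameters

def is_ambiguous(samples):
    """Crude multimodality flag: does a 2-component mixture fit better?"""
    g1 = GaussianMixture(1).fit(samples)
    g2 = GaussianMixture(2).fit(samples)
    return g2.bic(samples) < g1.bic(samples)
```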
12.
Sci Rep ; 6: 25007, 2016 04 27.
Article in English | MEDLINE | ID: mdl-27118379

ABSTRACT

Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, it is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV), we also investigate subcategories like edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed Pearson Correlation, Intra-class Correlation Coefficient (ICC) and Dice score. Virtual raters always lead to an improvement with respect to inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters yields a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.


Subject(s)
Glioma/diagnostic imaging , Neoplasm Grading/methods , Radiology/methods , Humans , Longitudinal Studies , Machine Learning
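An illustrative helper for one of the agreement metrics mentioned above: the Dice score between two binary segmentation masks, as used to compare virtual raters with human raters. This is a sketch, not the study's evaluation code.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap of two binary masks; 1.0 if both are empty."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```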
13.
Histochem Cell Biol ; 141(6): 613-27, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24722686

ABSTRACT

Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty of correctly adjusting a possibly large number of parameters. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to de-noise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.


Subject(s)
Algorithms , Microscopy, Fluorescence/methods , Calibration , HeLa Cells , Humans , Time Factors
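A hedged sketch of the detection pipeline described above: standardize the image so the background is approximately N(0, 1), apply a Gaussian matched filter, and keep pixels whose filter response exceeds a z-threshold derived from a chosen significance level. The background estimate here is deliberately crude; SimpleSTORM's self-calibration is more careful.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def detect_spots(img, psf_sigma=1.3, alpha=1e-6):
    bg_mean, bg_std = np.median(img), img.std()      # crude background model
    standardized = (img - bg_mean) / bg_std
    response = gaussian_filter(standardized, psf_sigma)  # matched filter
    # Filtering unit-variance noise with a normalized 2D Gaussian shrinks
    # its standard deviation to sqrt(integral g^2) = 1 / sqrt(4*pi*sigma^2).
    noise_std = 1.0 / np.sqrt(4 * np.pi * psf_sigma ** 2)
    return response > norm.isf(alpha) * noise_std    # statistical test
```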
14.
Anal Chem ; 85(1): 147-55, 2013 Jan 02.
Article in English | MEDLINE | ID: mdl-23157438

ABSTRACT

Digital staining for the automated annotation of mass spectrometry imaging (MSI) data has previously been achieved using state-of-the-art classifiers such as random forests or support vector machines (SVMs). However, the training of such classifiers requires an expert to label exemplary data in advance. This process is time-consuming and hence costly, especially if the tissue is heterogeneous. In theory, it may be sufficient to only label a few highly representative pixels of an MS image, but it is not known a priori which pixels to select. This motivates active learning strategies in which the algorithm itself queries the expert by automatically suggesting promising candidate pixels of an MS image for labeling. Given a suitable querying strategy, the number of required training labels can be significantly reduced while maintaining classification accuracy. In this work, we propose active learning for convenient annotation of MSI data. We generalize a recently proposed active learning method to the multiclass case and combine it with the random forest classifier. Its superior performance over random sampling is demonstrated on secondary ion mass spectrometry data, making it an interesting approach for the classification of MS images.


Subject(s)
Spectrometry, Mass, Secondary Ion , Algorithms , Animals , Humans , MCF-7 Cells , Mice , Pattern Recognition, Automated , Support Vector Machine , Transplantation, Heterologous
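A simplified sketch of the active learning loop described above: a random forest is fit on the labeled pixels, the most uncertain unlabeled pixels (lowest maximum class probability) are queried next, and the expert's labels are folded back in. Here `oracle_label` stands in for the human annotator and is an assumption, as is the uncertainty-based querying strategy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning(X, init_idx, oracle_label, n_rounds=10, batch=20):
    labeled = list(init_idx)
    labels = {i: oracle_label(i) for i in labeled}
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200)
        clf.fit(X[labeled], [labels[i] for i in labeled])
        uncertainty = 1.0 - clf.predict_proba(X).max(axis=1)
        uncertainty[labeled] = -1.0               # never re-query
        query = np.argsort(uncertainty)[-batch:]  # most uncertain pixels
        for i in query:
            labels[int(i)] = oracle_label(int(i))
            labeled.append(int(i))
    return clf
```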
15.
Bioinformatics ; 27(7): 987-93, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21296750

ABSTRACT

MOTIVATION: Alignment of multiple liquid chromatography/mass spectrometry (LC/MS) experiments is a necessity today, which arises from the need for biological and technical repeats. Due to limits in sampling frequency and poor reproducibility of retention times, current LC systems suffer from missing observations and non-linear distortions of the retention times across runs. Existing approaches for peak correspondence estimation focus almost exclusively on solving the pairwise alignment problem, yielding straightforward but suboptimal results for multiple alignment problems. RESULTS: We propose SIMA, a novel automated procedure for alignment of peak lists from multiple LC/MS runs. SIMA combines hierarchical pairwise correspondence estimation with simultaneous alignment and global retention time correction. It employs a tailored multidimensional kernel function and a procedure based on maximum likelihood estimation to find the retention time distortion function that best fits the observed data. SIMA does not require a dedicated reference spectrum, is robust with regard to outliers, needs only two intuitive parameters and naturally incorporates incomplete correspondence information. In a comparison with seven alternative methods on four different datasets, we show that SIMA yields competitive and superior performance on real-world data. AVAILABILITY: A C++ implementation of the SIMA algorithm is available from http://hci.iwr.uni-heidelberg.de/MIP/Software.


Subject(s)
Algorithms , Chromatography, Liquid/methods , Mass Spectrometry/methods
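A deliberately simplified sketch of the correction step from the abstract above: peaks from two runs are matched within an m/z tolerance window, and a low-order polynomial distortion of retention time is fit to the matches by least squares. SIMA itself uses a tailored multidimensional kernel and maximum-likelihood estimation instead; this toy version only conveys the shape of the problem.

```python
import numpy as np

def fit_rt_distortion(run_a, run_b, mz_tol=0.01, degree=2):
    """run_*: arrays of shape (n_peaks, 2) with columns (m/z, retention time)."""
    pairs = [(pa[1], pb[1]) for pa in run_a for pb in run_b
             if abs(pa[0] - pb[0]) < mz_tol]
    rt_a, rt_b = np.array(pairs).T
    return np.poly1d(np.polyfit(rt_a, rt_b, degree))  # maps run-A RT -> run-B RT
```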
16.
Bioinformatics ; 26(12): 1535-41, 2010 Jun 15.
Article in English | MEDLINE | ID: mdl-20439256

ABSTRACT

MOTIVATION: Time-resolved hydrogen exchange (HX) followed by mass spectrometry (MS) is a key technology for studying protein structure, dynamics and interactions. HX experiments deliver a time-dependent distribution of deuteration levels of peptide sequences of the protein of interest. The robust and complete estimation of this distribution for as many peptide fragments as possible is instrumental to understanding dynamic protein-level HX behavior. Currently, this data interpretation step still is a bottleneck in the overall HX/MS workflow. RESULTS: We propose HeXicon, a novel algorithmic workflow for automatic deuteration distribution estimation at increased sequence coverage. Based on an L1-regularized feature extraction routine, HeXicon extracts the full deuteration distribution, which allows insight into possible bimodal exchange behavior of proteins, rather than just an average deuteration for each time point. Further, it is capable of addressing ill-posed estimation problems, yielding sparse and physically reasonable results. HeXicon makes use of existing peptide sequence information, which is augmented by an inferred list of peptide candidates derived from a known protein sequence. In conjunction with a supervised classification procedure that balances sensitivity and specificity, HeXicon can deliver results with increased sequence coverage. AVAILABILITY: The entire HeXicon workflow has been implemented in C++ and includes a graphical user interface. It is available at http://hci.iwr.uni-heidelberg.de/software.php. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Algorithms , Deuterium Exchange Measurement/methods , Mass Spectrometry/methods , Proteins/chemistry , Deuterium/chemistry , User-Computer Interface
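A hedged toy version of the L1-regularized estimation idea above: the observed isotope pattern is modeled as a sparse non-negative mixture of the undeuterated envelope shifted by 0..D mass units, and a positive Lasso recovers the deuteration distribution. The real HeXicon model is considerably richer; every name here is an illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def deuteration_distribution(observed, envelope, max_d, alpha=1e-3):
    """observed: measured isotope pattern; envelope: undeuterated pattern."""
    n = len(observed)
    basis = np.zeros((n, max_d + 1))
    for d in range(max_d + 1):                   # envelope shifted by d Da
        basis[d:d + len(envelope), d] = envelope[:n - d]
    fit = Lasso(alpha=alpha, positive=True).fit(basis, observed)
    w = fit.coef_
    return w / w.sum() if w.sum() else w         # normalized distribution
```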
17.
Bioinformatics ; 26(1): 77-83, 2010 Jan 01.
Article in English | MEDLINE | ID: mdl-19861354

ABSTRACT

MOTIVATION: The qualitative and quantitative characterization of protein abundance profiles over a series of time points or a set of environmental conditions is becoming increasingly important. Using isobaric mass tagging experiments, mass spectrometry-based quantitative proteomics deliver accurate peptide abundance profiles for relative quantitation. Associated data analysis workflows need to provide tailored statistical treatment that (i) takes the correlation structure of the normalized peptide abundance profiles into account and (ii) allows inference of protein-level similarity. We introduce a suitable distance measure for relative abundance profiles, derive a statistical test for equality and propose a protein-level representation of peptide-level measurements. This yields a workflow that delivers a similarity ranking of protein abundance profiles with respect to a defined reference. All procedures have in common that they operate based on the true correlation structure that underlies the measurements. This optimizes power and delivers more intuitive and efficient results than existing methods that do not take these circumstances into account. RESULTS: We use protein profile similarity screening to identify candidate proteins whose abundances are post-transcriptionally controlled by the Anaphase Promoting Complex/Cyclosome (APC/C), a specific E3 ubiquitin ligase that is a master regulator of the cell cycle. Results are compared with an established protein correlation profiling method. The proposed procedure yields a 50.9-fold enrichment of co-regulated protein candidates and a 2.5-fold improvement over the previous method. AVAILABILITY: A MATLAB toolbox is available from http://hci.iwr.uni-heidelberg.de/mip/proteomics.


Subject(s)
Algorithms , Gene Expression Profiling/methods , Mass Spectrometry/methods , Peptide Mapping/methods , Sequence Analysis, Protein/methods , Amino Acid Sequence , Molecular Sequence Data
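A sketch of the core idea above, under stated assumptions: the distance between two normalized abundance profiles is measured in the metric of their covariance (Mahalanobis-style), so correlated time points are not double-counted when ranking candidates against a reference profile.

```python
import numpy as np

def profile_distance(p, q, cov):
    """Mahalanobis-style distance between two abundance profiles."""
    d = np.asarray(p) - np.asarray(q)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Similarity ranking against a reference profile (illustrative usage):
# scores = sorted(profiles, key=lambda prof: profile_distance(ref, prof, cov))
```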
18.
J Proteome Res ; 8(7): 3558-67, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19469555

ABSTRACT

We show on imaging mass spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when intersample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied in a post hoc smoothing step to reduce noise effects and further improve the classification results. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques.


Subject(s)
Mass Spectrometry/methods , Neoplasms/pathology , Proteomics/methods , Algorithms , Computational Biology/methods , Data Interpretation, Statistical , Gene Expression Profiling/methods , Humans , Image Processing, Computer-Assisted , Markov Chains , Models, Statistical , Oligonucleotide Array Sequence Analysis/methods , Pattern Recognition, Automated , Software
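A hedged sketch of the post hoc smoothing step mentioned above: per-pixel class probabilities from the random forest are smoothed with a vector-valued median filter (each pixel adopts the probability vector in its window that minimizes the summed distance to all others) before re-labeling. Simplified and illustrative, and it omits the Markov Random Field component.

```python
import numpy as np

def vector_median_filter(proba, radius=1):
    """proba: (H, W, C) class probability map; returns smoothed labels."""
    H, W, C = proba.shape
    out = proba.copy()
    for i in range(H):
        for j in range(W):
            win = proba[max(i - radius, 0):i + radius + 1,
                        max(j - radius, 0):j + radius + 1].reshape(-1, C)
            # Summed L1 distance of each candidate vector to all others.
            cost = np.abs(win[:, None, :] - win[None, :, :]).sum(axis=(1, 2))
            out[i, j] = win[np.argmin(cost)]
    return out.argmax(-1)   # smoothed tissue class labels
```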