Results 1 - 3 of 3
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 167-170, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086050

ABSTRACT

Monitoring the evolution of the COVID-19 pandemic constitutes a critical step in sanitary policy design. Yet, assessing the pandemic intensity within the pandemic period remains challenging because of the limited quality of the data made available by public health authorities (notably missing data, outliers, and pseudo-seasonalities), which calls for cumbersome, ad hoc preprocessing (denoising) prior to estimation. Recently, the estimation of the reproduction number, a measure of pandemic intensity, was formulated as an inverse problem combining data-model fidelity and space-time regularity constraints, solved by nonsmooth convex proximal minimization. Though promising, that formulation is not robust to the limited quality of COVID-19 data and provides no confidence assessment. The present work addresses both limitations: First, it discusses solutions that produce a robust assessment of pandemic intensity by accounting for the low quality of the data directly within the inverse problem formulation. Second, exploiting a Bayesian interpretation of the inverse problem formulation, it devises a Monte Carlo sampling strategy, tailored to a nonsmooth log-concave a posteriori distribution, to produce credibility-interval-based estimates of the COVID-19 reproduction number. Clinical relevance: Applied to daily counts of new infections made publicly available by health authorities for around 200 countries, the proposed procedures permit robust assessments of the time evolution of COVID-19 pandemic intensity, updated automatically on a daily basis.


Subject(s)
COVID-19; Pandemics; Bayes Theorem; COVID-19/epidemiology; Humans; Monte Carlo Method; Reproduction
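
The abstract above casts the reproduction-number estimate as the solution of a penalized inverse problem. The sketch below illustrates only that structure, not the authors' algorithm: it fits R(t) to synthetic daily counts by balancing a data-fidelity term against a temporal-smoothness penalty. The serial-interval weights, the quadratic fidelity, and the quadratic penalty are all simplifying assumptions; the paper instead uses a Poisson-type fidelity and nonsmooth regularization solved by proximal methods.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical daily new-infection counts (the paper uses real counts
# published by health authorities; these are synthetic).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=100 * np.exp(0.02 * np.arange(60)))

# Serial-interval weights phi(s): a simple discretized bell-shaped
# profile; the true distribution is an epidemiological assumption.
phi = np.exp(-0.5 * ((np.arange(1, 15) - 5.5) / 2.0) ** 2)
phi /= phi.sum()

def pressure(c, phi):
    """Infectiousness pressure at day t: sum_s phi(s) * c[t - s]."""
    p = np.zeros(len(c))
    for t in range(len(c)):
        s_max = min(t, len(phi))
        p[t] = sum(phi[s - 1] * c[t - s] for s in range(1, s_max + 1))
    return p

p = pressure(counts, phi)

# Inverse-problem sketch: balance fidelity ||counts - R * p||^2 against
# temporal smoothness lam * ||diff(R)||^2. This quadratic version only
# conveys the shape of the functional the abstract refers to.
lam = 50.0
def objective(R):
    fidelity = np.sum((counts[1:] - R[1:] * p[1:]) ** 2)
    smoothness = lam * np.sum(np.diff(R) ** 2)
    return fidelity + smoothness

res = minimize(objective, np.ones(len(counts)), method="L-BFGS-B",
               bounds=[(0.0, 10.0)] * len(counts))
print("estimated R(t), last 5 days:", np.round(res.x[-5:], 2))
```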
2.
IEEE Trans Image Process; 16(3): 824-37, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17357740

ABSTRACT

Problems involving missing data are typical settings where exact inference is intractable as soon as nontrivial interactions occur between the missing variables. Approximations are required, and most are based either on simulation methods or on deterministic variational methods. While variational methods provide fast and reasonable approximate estimates in many scenarios, simulation methods offer better handling of important theoretical issues, such as the accuracy of the approximation and the convergence of the algorithms, but at a much higher computational cost. In this work, we propose a new class of algorithms that combine the main features and advantages of both simulation and deterministic methods, and we consider applications to inference in hidden Markov random fields (HMRFs). These algorithms can be viewed as stochastic perturbations of variational expectation-maximization (VEM) algorithms, which are not tractable for HMRFs. We focus more specifically on one of these perturbations, and we prove its (almost sure) convergence to the same limit set as that of VEM. In addition, experiments on synthetic and real-world images show that the algorithm's performance is very close to, and sometimes better than, that of other existing simulation-based and variational EM-like algorithms.


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Computer Simulation; Data Interpretation, Statistical; Markov Chains; Models, Statistical; Monte Carlo Method
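
To make the "stochastic perturbation of VEM" idea concrete, here is a minimal sketch in the spirit of the abstract: a two-class hidden Markov random field with Gaussian noise, where the deterministic mean-field E-step is perturbed by re-sampling the label field before it enters the neighbor interactions. All model choices (Potts interaction strength, Gaussian noise model, synthetic image) are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic image: left half class 0, right half class 1, plus noise.
H, W = 32, 32
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
y = rng.normal(loc=truth.astype(float), scale=0.8)

beta = 1.0                    # assumed Potts interaction strength
mu = np.array([-0.5, 1.5])    # initial class means
sigma = 1.0

def shift_sum(a):
    """Sum of the 4-neighborhood of each pixel (zero outside borders)."""
    s = np.zeros(a.shape)
    s[1:, :] += a[:-1, :]; s[:-1, :] += a[1:, :]
    s[:, 1:] += a[:, :-1]; s[:, :-1] += a[:, 1:]
    return s

z = (y > 0.5).astype(int)     # crude initial labeling
for _ in range(20):
    # Perturbed E-step: class log-potentials combine the Potts prior,
    # evaluated on the current *sampled* label field (this sampling is
    # the stochastic perturbation of the deterministic mean-field
    # update), with the Gaussian likelihood of the pixel values.
    log_p1 = beta * shift_sum(z) - 0.5 * ((y - mu[1]) / sigma) ** 2
    log_p0 = beta * shift_sum(1 - z) - 0.5 * ((y - mu[0]) / sigma) ** 2
    q1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    z = (rng.random(z.shape) < q1).astype(int)   # redraw the label field
    # M-step: update the Gaussian parameters from the responsibilities.
    mu[1] = np.sum(q1 * y) / np.sum(q1)
    mu[0] = np.sum((1 - q1) * y) / np.sum(1 - q1)
    sigma = np.sqrt(np.sum(q1 * (y - mu[1]) ** 2
                           + (1 - q1) * (y - mu[0]) ** 2) / y.size)

print("estimated class means:", mu.round(2))
print("pixel accuracy vs. truth:", np.mean((q1 > 0.5).astype(int) == truth))
```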
3.
Bioinformatics; 21(7): 1104-11, 2005 Apr 01.
Article in English | MEDLINE | ID: mdl-15531609

ABSTRACT

MOTIVATION: One important aspect of data mining of microarray data is to discover the molecular variation among cancers. In microarray studies, the number n of samples is relatively small compared with the number p of genes per sample (usually in the thousands). It is known that standard statistical methods in classification are efficient (i.e., in the present case, yield successful classifiers) particularly when n is (far) larger than p. This naturally calls for the use of a dimension reduction procedure together with the classification one. RESULTS: In this paper, the question of classification in such a high-dimensional setting is addressed. We view the classification problem as a regression one with few observations and many predictor variables. We propose a new method combining partial least squares (PLS) and Ridge penalized logistic regression. We review the existing methods based on PLS and/or penalized likelihood techniques, outline their interest in some cases, and theoretically explain their sometimes poor behavior. Our procedure is compared with these other classifiers. The predictive performance of the resulting classification rule is illustrated on three data sets: Leukemia, Colon and Prostate.


Subject(s)
Biomarkers, Tumor/metabolism; Gene Expression Profiling/methods; Neoplasm Proteins/metabolism; Neoplasms/genetics; Neoplasms/metabolism; Oligonucleotide Array Sequence Analysis/methods; Pattern Recognition, Automated/methods; Algorithms; Biomarkers, Tumor/genetics; Diagnosis, Computer-Assisted/methods; Humans; Least-Squares Analysis; Models, Genetic; Models, Statistical; Neoplasm Proteins/genetics; Neoplasms/diagnosis; Regression Analysis; Reproducibility of Results; Sensitivity and Specificity
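
As a rough illustration of the two-stage idea in the abstract, the sketch below chains scikit-learn's PLSRegression with an L2 (Ridge) penalized LogisticRegression on synthetic microarray-like data with n much smaller than p. The paper's procedure couples PLS and Ridge penalized logistic regression more tightly than this plain pipeline, so treat this only as a sketch of the ingredients; the data and all parameter values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "microarray-like" data: n samples << p genes, two classes.
rng = np.random.default_rng(2)
n, p = 60, 2000
X = rng.normal(size=(n, p))
coef = np.zeros(p)
coef[:20] = 1.0                      # only 20 informative genes (assumed)
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ coef)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Step 1: PLS reduces the 2000-dimensional input to a few latent
# components chosen to covary with the class labels.
pls = PLSRegression(n_components=3)
pls.fit(X_tr, y_tr)
T_tr, T_te = pls.transform(X_tr), pls.transform(X_te)

# Step 2: L2 (Ridge) penalized logistic regression on the PLS scores.
clf = LogisticRegression(penalty="l2", C=1.0)
clf.fit(T_tr, y_tr)
print("held-out accuracy:", round(clf.score(T_te, y_te), 2))
```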