Results 1 - 20 of 20
1.
Front Public Health ; 12: 1175109, 2024.
Article in English | MEDLINE | ID: mdl-38375340

ABSTRACT

Introduction: Converging evidence suggests that urban living is associated with an increased likelihood of developing mental health and sleep problems. Although these aspects have been investigated in separate streams of research, stress, autonomic reactivity and circadian misalignment can be hypothesized to play a prominent role in the causal pathways underlying the complex relationship between the urban environment and these two health dimensions. This study aims to quantify the momentary impact of environmental stressors on increased autonomic reactivity and circadian rhythm, and thereby on mood and anxiety symptoms and sleep quality, in the context of everyday urban living. Method: The present article reports the protocol for a feasibility study that aims to assess the daily environmental and mobility exposures of 40 participants from the urban area of Jerusalem over 7 days. Every participant will carry a set of wearable sensors while being tracked through space and time with GPS receivers. Skin conductance and heart rate variability will be tracked to monitor participants' stress responses and autonomic reactivity, whereas the electroencephalographic signal will be used for sleep quality tracking. Light exposure, actigraphy and skin temperature will be used for ambulatory circadian monitoring. Geographically explicit ecological momentary assessment (GEMA) will be used to assess participants' perception of the environment, mood and anxiety symptoms, sleep quality and vitality. For each outcome variable (sleep quality and mental health), hierarchical mixed models including random effects at the individual level will be used. In a separate analysis, to control for potential unobserved individual-level confounders, a fixed effect at the individual level will be specified for case-crossover analyses (comparing each participant with themselves).
Conclusion: Recent developments in wearable sensing methods, as employed in our study or with even more advanced methods reviewed in the Discussion, make it possible to gather information on the functioning of neuro-endocrine and circadian systems in a real-world context as a way to investigate the complex interactions between environmental exposures, behavior and health. Our work aims to provide evidence on the health effects of urban stressors and circadian disruptors to inspire potential interventions, municipal policies and urban planning schemes aimed at addressing those factors.
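The case-crossover analysis sketched in the Method section compares each participant with themselves. As an illustration of that logic, the within-person centering below removes all stable individual-level differences before exposure-outcome associations are estimated; the participant IDs, exposure values and mood scores are invented:

```python
# Within-person (fixed-effect) centering: comparing each participant with
# themselves removes stable individual-level confounders, mirroring the
# case-crossover analysis described above. All data below are invented.
from collections import defaultdict

records = [  # (participant_id, momentary_exposure, momentary_mood)
    ("p1", 40, 6), ("p1", 60, 4), ("p1", 50, 5),
    ("p2", 70, 7), ("p2", 90, 5),
]

def within_person_center(rows):
    by_person = defaultdict(list)
    for pid, exposure, mood in rows:
        by_person[pid].append((exposure, mood))
    centered = []
    for pid, obs in by_person.items():
        mean_exp = sum(e for e, _ in obs) / len(obs)
        mean_mood = sum(m for _, m in obs) / len(obs)
        for e, m in obs:
            # deviations from each person's own means carry only the
            # within-individual signal
            centered.append((pid, e - mean_exp, m - mean_mood))
    return centered

print(within_person_center(records))
```

Any between-person difference (for example, one participant being generally more exposed or generally in a better mood) cancels out exactly, which is the point of the fixed-effect specification.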


Subjects
Mental Health , Sleep , Humans , Sleep/physiology , Circadian Rhythm/physiology , Actigraphy , Affect
2.
Nat Biotechnol ; 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38168992

ABSTRACT

Adoption of high-content omic technologies in clinical studies, coupled with computational methods, has yielded an abundance of candidate biomarkers. However, translating such findings into bona fide clinical biomarkers remains challenging. To facilitate this process, we introduce Stabl, a general machine learning method that identifies a sparse, reliable set of biomarkers by integrating noise injection and a data-driven signal-to-noise threshold into multivariable predictive modeling. Evaluation of Stabl on synthetic datasets and five independent clinical studies demonstrates improved biomarker sparsity and reliability compared to commonly used sparsity-promoting regularization methods while maintaining predictive performance; it distills datasets containing 1,400-35,000 features down to 4-34 candidate biomarkers. Stabl extends to multi-omic integration tasks, enabling biological interpretation of complex predictive models, as it homes in on a shortlist of proteomic, metabolomic and cytometric events predicting labor onset, microbial biomarkers of pre-term birth and a pre-operative immune signature of post-surgical infections. Stabl is available at https://github.com/gregbellan/Stabl.
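The noise-injection idea can be sketched in a few lines. This is a deliberately simplified stand-in that uses univariate correlation scores in place of Stabl's regularized multivariable models, on invented data: permuted "decoy" copies of the features set a data-driven threshold that real features must beat.

```python
# Simplified sketch of the noise-injection idea: append permuted "decoy"
# copies of the features, score every column against the outcome, and keep
# only the real features that beat the best decoy. Plain correlation scores
# stand in for the actual method's regularized models; data are invented.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.1 * rng.standard_normal(n)   # only feature 0 is informative

decoys = rng.permuted(X, axis=0)             # destroys any feature-outcome link

def scores(M):
    return np.abs([np.corrcoef(M[:, j], y)[0, 1] for j in range(M.shape[1])])

threshold = scores(decoys).max()             # data-driven signal-to-noise cutoff
selected = np.flatnonzero(scores(X) > threshold)
print(selected)
```

The decoys carry no signal by construction, so their best score estimates the noise floor; only features clearing that floor survive.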

3.
Curr Oncol ; 30(8): 7478-7488, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37623022

ABSTRACT

Phosphaturic mesenchymal tumors (PMTs) are rare neoplasms that can give rise to a multifaceted syndrome, otherwise called tumor-induced osteomalacia (TIO). Localizing these tumors is crucial to cure the derangement of phosphate metabolism, which is often the main reason the patient seeks medical help because of disabling physical and neuromuscular symptoms. Some of these tumors are completely silent and may grow unnoticed unless they become large enough to produce pain or discomfort. FGF-23 can be produced by several benign or malignant PMTs. The phosphate metabolism, radiology and histology of these rare tumors must be collectively assessed by a multidisciplinary team aiming to cure the disease locally and improve patients' quality of life. This narrative review, authored by multiple specialists of a tertiary care hospital center, describes the endocrine, radiological and histological features of these tumors and presents surgical and interventional strategies to manage PMTs.


Subjects
Osteomalacia , Soft Tissue Neoplasms , Humans , Quality of Life , Hospitals , Phosphates
4.
Medicina (Kaunas) ; 59(7)2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37512031

ABSTRACT

Background: Femoral neck fractures are an epidemiologically significant issue with major effects on patients and health care systems, as they account for a large percentage of bone injuries in the elderly. Hip hemiarthroplasty is a common surgical procedure in the treatment of displaced femoral neck fractures. Several surgical approaches may be used to access the hip joint in femoral neck fractures, each with its own benefits and potential drawbacks, but none has consistently been found to be superior to the others. This article aims to systematically review and compare the different approaches in terms of the complication rate at the last follow-up. Methods: An in-depth search of the PubMed, Scopus and Web of Science databases, together with cross-referencing, was carried out for articles comparing different approaches in hemiarthroplasty and reporting detailed data. Results: A total of 97,576 hips were included: 1030 treated with a direct anterior approach, 4131 with an anterolateral approach, 59,110 with a direct lateral approach, and 33,007 with a posterolateral approach. Comparing the different approaches, significant differences were found in both the overall complication rate and the rate of revision surgery performed (p < 0.05). In particular, the posterolateral approach showed a significantly higher complication rate than the lateral approach (8.4% vs. 3.2%, p < 0.001). Furthermore, the dislocation rate in the posterolateral group was significantly higher than in the other three groups considered (p < 0.026). However, the posterolateral group showed less blood loss than the anterolateral group (p < 0.001), a lower intraoperative fracture rate than the direct anterior group (p < 0.035), and a shorter mean operative time than the direct lateral group (p < 0.018).
Conclusions: The posterolateral approach showed a higher complication rate than the direct lateral approach and a higher prosthetic dislocation rate than the other three surgical approaches. On the other hand, patients treated with the posterolateral approach showed better outcomes on other parameters, such as mean operative time, mean blood loss and intraoperative fracture rate. Knowledge of the limitations of each approach and of its most commonly associated complications can guide the choice of surgical technique based on the patient's individual risk.
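Comparisons such as the 8.4% vs. 3.2% complication rates above are typically assessed with a two-proportion z-test. The sketch below reconstructs approximate counts from the reported rates and group sizes, so it illustrates the method rather than reproducing the review's exact analysis:

```python
# Two-proportion z-test, the kind of comparison behind "8.4% vs. 3.2%,
# p < 0.001" above. Counts are reconstructed from the reported rates and
# group sizes, so they are approximations for illustration only.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# posterolateral vs. direct lateral complication rates (approximate counts)
z, p = two_proportion_z(round(0.084 * 33007), 33007, round(0.032 * 59110), 59110)
print(z, p)
```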


Subjects
Arthroplasty, Replacement, Hip , Femoral Neck Fractures , Hemiarthroplasty , Humans , Aged , Arthroplasty, Replacement, Hip/adverse effects , Hemiarthroplasty/adverse effects , Hemiarthroplasty/methods , Femoral Neck Fractures/surgery , Hip Joint , Hip , Treatment Outcome
5.
Environ Int ; 178: 108095, 2023 08.
Article in English | MEDLINE | ID: mdl-37487375

ABSTRACT

The urban environment plays an important role in the mental health of residents. Researchers have mainly focused on residential neighbourhoods as the exposure context, leaving aside the effects of non-residential environments. To take into account the daily experience of urban spaces, a people-based approach focused on mobility paths is needed. Applying this approach, (1) this study investigated whether individuals' momentary mental well-being is related to exposure to micro-urban spaces along the daily mobility paths within the two previous hours; (2) it explored whether these associations differ when environmental exposures are defined considering all location points or only outdoor location points; and (3) it examined the associations between the types of activity and mobility and momentary depressive symptomatology. Using a geographically explicit ecological momentary assessment (GEMA) approach, momentary depressive symptomatology of 216 older adults living in the Ile-de-France region was assessed using smartphone surveys, while participants were tracked with a GPS receiver and an accelerometer for seven days. Exposure to multiple elements of the streetscape was computed within a 25 m street-network buffer around each GPS point over the two hours prior to the questionnaire. Mobility and activity type were documented from a GPS-based mobility survey. We estimated Bayesian generalized mixed effect models with random effects at the individual and day levels and accounted for temporal autocorrelation. We also estimated fixed-effects models. Better momentary mental well-being was observed when residents performed leisure activities or were involved in active mobility and when they were exposed to walkable areas (pedestrian-dedicated paths, open spaces, parks and green areas), water elements, and commerce, leisure and cultural attractors over the previous two hours. These relationships were stronger when exposures were defined based only on outdoor location points rather than all location points, and when we considered within-individual differences rather than between-individual differences.


Subjects
Mental Health , Humans , Aged , Bayes Theorem , Surveys and Questionnaires , France
6.
J Clin Med ; 12(12)2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37373844

ABSTRACT

Modular megaprostheses (MPs) are commonly used after bone-tumor resection, and they can also offer a limb-salvage solution in massive non-oncologic bone defects. The aim of this systematic review of the literature is to provide a comprehensive data collection on the use of MPs in non-oncologic cases and to give an overview of this topic, especially from an epidemiologic point of view. Three databases (PubMed, Scopus, and Web of Science) were searched for relevant articles, and further references were obtained by cross-referencing. Sixty-nine studies met the inclusion criteria, reporting on the use of MPs in non-oncologic cases. A total of 2598 MPs were retrieved. Among these, 1353 (52.1%) were distal femur MPs, 941 (36.2%) were proximal femur MPs, 29 (1.4%) were proximal tibia MPs and 259 (10.0%) were total femur MPs. Megaprostheses were most commonly used to treat periprosthetic fractures (1158 cases, 44.6%), in particular in the distal femur (859, 74.2%). Overall, complications were observed in 513 cases (19.7%). Type I (soft tissue failure) and type IV (infection) complications according to the Henderson classification were the most frequent (158 and 213, respectively). In conclusion, patients with severe post-traumatic deformities and/or significant bone loss who have had previous septic complications should be managed like oncologic patients, not because of the disease itself, but because of the limited therapeutic options available. The benefits of this treatment include relatively short operative times and immediate weight-bearing, making MPs particularly attractive in the lower limb.

7.
Res Sq ; 2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36909508

ABSTRACT

High-content omic technologies coupled with sparsity-promoting regularization methods (SRM) have transformed the biomarker discovery process. However, the translation of computational results into a clinical use-case scenario remains challenging. A rate-limiting step is the rigorous selection of reliable biomarker candidates among a host of biological features included in multivariate models. We propose Stabl, a machine learning framework that unifies the biomarker discovery process with multivariate predictive modeling of clinical outcomes by selecting a sparse and reliable set of biomarkers. Evaluation of Stabl on synthetic datasets and four independent clinical studies demonstrates improved biomarker sparsity and reliability compared to commonly used SRMs at similar predictive performance. Stabl readily extends to double- and triple-omics integration tasks and identifies a sparser and more reliable set of biomarkers than those selected by state-of-the-art early- and late-fusion SRMs, thereby facilitating the biological interpretation and clinical translation of complex multi-omic predictive models. The complete package for Stabl is available online at https://github.com/gregbellan/Stabl.

8.
Ann Stat ; 50(2): 949-986, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36120512

ABSTRACT

Interpolators, i.e., estimators that achieve zero training error, have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum ℓ2-norm ("ridgeless") interpolation least squares regression, focusing on the high-dimensional regime in which the number of unknown parameters p is of the same order as the number of samples n. We consider two different models for the feature distribution: a linear model, where the feature vectors x_i ∈ ℝ^p are obtained by applying a linear transform to a vector of i.i.d. entries, x_i = Σ^{1/2} z_i (with z_i ∈ ℝ^p); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, x_i = φ(W z_i) (with z_i ∈ ℝ^d, W ∈ ℝ^{p×d} a matrix of i.i.d. entries, and φ an activation function acting componentwise on W z_i). We recover, in a precise quantitative way, several phenomena that have been observed in large-scale neural networks and kernel machines, including the "double descent" behavior of the prediction risk and the potential benefits of overparametrization.
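The ridgeless estimator studied here is the minimum-norm least squares solution. A minimal sketch on synthetic Gaussian data shows that, whenever p > n, it interpolates the training data exactly:

```python
# Minimum ℓ2-norm ("ridgeless") least squares: with more parameters than
# samples (p > n), the pseudoinverse solution fits the training data exactly
# while having the smallest norm among all interpolators. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 40                       # overparametrized regime, p > n
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

beta = np.linalg.pinv(X) @ y        # min-norm interpolating solution
train_error = np.linalg.norm(X @ beta - y)
print(train_error)                  # numerically zero: the model interpolates
```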

9.
Cancers (Basel) ; 13(24)2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34944944

ABSTRACT

The aim of this study was to establish the prognostic effect of the proximity of the tumor to the main vessels in patients affected by soft tissue sarcomas (STS) of the thigh. A total of 529 adult patients with deep-seated STS of the thigh and popliteal fossa were included. Vascular proximity was defined on MRI: type 1, >5 mm; type 2, ≤5 mm and >0 mm; type 3, close to the tumor; type 4, enclosed by the tumor. Types 1-2 of proximity to major vessels had a lower local recurrence (LR) rate than types 3-4 (p < 0.001). In type 4, vascular bypass reduced the LR risk. On multivariate analysis, infiltrative histotypes, high FNCLCC grade, radiotherapy administration, and types 3-4 of proximity to major vessels were found to be independent prognostic factors for LR. We observed an increased risk of recurrence, but not poorer survival, the closer the tumor was to the major vessels. When major vessels were found to be surrounded by the tumor on preoperative MRI, vascular resection and bypass reconstruction offered better local control.

10.
Am J Obstet Gynecol MFM ; 2(2): 100100, 2020 05.
Article in English | MEDLINE | ID: mdl-33345966

ABSTRACT

BACKGROUND: Early prediction of preeclampsia is challenging because of poorly understood causes, various risk factors, and likely multiple pathogenic phenotypes of preeclampsia. Statistical learning methods are well-equipped to deal with a large number of variables, such as patients' clinical and laboratory data, and to select the most informative features automatically. OBJECTIVE: Our objective was to use statistical learning methods to analyze all available clinical and laboratory data that were obtained during routine prenatal visits in early pregnancy and to use them to develop a prediction model for preeclampsia. STUDY DESIGN: This was a retrospective cohort study that used data from 16,370 births at Lucile Packard Children's Hospital at Stanford, CA, from April 2014 to January 2018. Two statistical learning algorithms were used to build a predictive model: (1) elastic net and (2) gradient boosting algorithm. Models for all preeclampsia and early-onset preeclampsia (<34 weeks gestation) were fitted with the use of patient data that were available at <16 weeks gestational age. The 67 variables that were considered in the models included maternal characteristics, medical history, routine prenatal laboratory results, and medication intake. The area under the receiver operating characteristic curve, true-positive rate, and false-positive rate were assessed via cross-validation. RESULTS: Using the elastic net algorithm, we developed a prediction model that contained a subset of the most informative features from all variables. The obtained prediction model for preeclampsia yielded an area under the curve of 0.79 (95% confidence interval, 0.75-0.83), sensitivity of 45.2%, and false-positive rate of 8.1%. The prediction model for early-onset preeclampsia achieved an area under the curve of 0.89 (95% confidence interval, 0.84-0.95), true-positive rate of 72.3%, and false-positive rate of 8.8%.
CONCLUSION: Statistical learning methods in a retrospective cohort study automatically identified a set of significant features for prediction and yielded high prediction performance for preeclampsia risk from routine early pregnancy information.
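Of the two learners, the elastic net is simple enough to sketch from scratch. The coordinate-descent toy below runs on synthetic data with arbitrary penalty settings; it illustrates the algorithm, not the study's actual pipeline:

```python
# Minimal elastic net via coordinate descent, the first of the two learners
# named above. From-scratch sketch on synthetic data; penalty values are
# arbitrary illustration choices.
import numpy as np

def soft(x, t):
    return np.sign(x) * max(abs(x) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, iters=200):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r / n
            denom = (X[:, j] ** 2).mean() + alpha * (1 - l1_ratio)
            beta[j] = soft(rho, alpha * l1_ratio) / denom  # L1 + L2 update
    return beta

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * rng.standard_normal(300)
beta = elastic_net(X, y)
print(np.round(beta, 2))
```

The L1 part of the penalty zeroes out uninformative coefficients, which is what makes the fitted model a feature selector as well as a predictor.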


Subjects
Pre-Eclampsia , Child , Female , Gestational Age , Humans , Machine Learning , Pre-Eclampsia/diagnosis , Pregnancy , Retrospective Studies , Risk Factors
11.
Proc Natl Acad Sci U S A ; 115(33): E7665-E7671, 2018 08 14.
Article in English | MEDLINE | ID: mdl-30054315

ABSTRACT

Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter case, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, the SGD dynamics are captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for "averaging out" some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.
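The object of study, a two-layer network trained with plain SGD under mean-field (1/N) output scaling, takes only a few lines to set up. The toy below merely verifies that the empirical risk decreases on invented data; the sizes and learning rate are arbitrary:

```python
# A two-layer network trained with vanilla SGD under mean-field (1/N)
# output scaling, the setting whose scaling limit the paper analyzes.
# Data, sizes and learning rate are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(3)
n, d, hidden = 256, 5, 50
X = rng.standard_normal((n, d))
y = np.tanh(X @ rng.standard_normal(d))          # nonlinear teacher target

W = rng.standard_normal((hidden, d)) / np.sqrt(d)
a = rng.standard_normal(hidden) / hidden         # 1/N mean-field scaling

def risk():
    return np.mean((np.tanh(X @ W.T) @ a - y) ** 2) / 2

initial = risk()
lr = 0.05
for _ in range(5000):
    i = rng.integers(n)                          # one sample per SGD step
    h = np.tanh(W @ X[i])
    err = a @ h - y[i]
    grad_a = err * h                             # gradient for output weights
    grad_W = err * np.outer(a * (1 - h ** 2), X[i])  # backprop through tanh
    a -= lr * grad_a
    W -= lr * grad_W
final = risk()
print(initial, final)
```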

12.
Stat Methods Med Res ; 27(8): 2312-2328, 2018 08.
Article in English | MEDLINE | ID: mdl-27932665

ABSTRACT

Early identification of individuals at risk for chronic diseases is of significant clinical value. Early detection provides the opportunity to slow the pace of a condition, and thus help individuals to improve or maintain their quality of life. Additionally, it can lessen the financial burden on health insurers and self-insured employers. As a solution to mitigate the rise in chronic conditions and related costs, an increasing number of employers have recently begun using wellness programs, which typically involve an annual health risk assessment. Unfortunately, these risk assessments have low detection capability, as they should be low-cost and hence rely on collecting relatively few basic biomarkers. Thus one may ask: how can we select a low-cost set of biomarkers that would be the most predictive of multiple chronic diseases? In this paper, we propose a statistical data-driven method to address this challenge by minimizing the number of biomarkers in the screening procedure while maximizing the predictive power over a broad spectrum of diseases. Our solution uses multi-task learning and group dimensionality reduction from machine learning and statistics. We provide empirical validation of the proposed solution using data from two different electronic medical records systems, with comparisons against a statistical benchmark.
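A heavily simplified sketch of the multi-task idea: score each biomarker jointly across several disease outcomes and keep a small shared panel. The paper's actual method couples multi-task learning with group dimensionality reduction; here a summed squared-correlation score stands in for it, on synthetic data with invented dimensions:

```python
# Sketch of multi-task biomarker selection: a biomarker's group score sums
# its predictive strength over several outcomes, so markers useful for many
# diseases win. A summed squared correlation stands in for the paper's
# richer method; all data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, p, tasks = 300, 30, 3
X = rng.standard_normal((n, p))
# biomarkers 0 and 1 drive all three outcomes; the rest are noise
Y = np.stack([X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.standard_normal(n)
              for _ in range(tasks)], axis=1)

# group score: sum of squared feature-outcome correlations over tasks
corrs = np.array([[np.corrcoef(X[:, j], Y[:, t])[0, 1] for t in range(tasks)]
                  for j in range(p)])
group_score = (corrs ** 2).sum(axis=1)
panel = np.argsort(group_score)[::-1][:2]     # keep a 2-biomarker panel
print(sorted(int(i) for i in panel))
```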


Subjects
Chronic Disease/trends , Models, Statistical , Biomarkers , Costs and Cost Analysis , Forecasting/methods , Logistic Models
13.
Proc Natl Acad Sci U S A ; 113(16): E2218-23, 2016 Apr 19.
Article in English | MEDLINE | ID: mdl-27001856

ABSTRACT

Statistical inference problems arising within signal processing, data mining, and machine learning naturally give rise to hard combinatorial optimization problems. These problems become intractable when the dimensionality of the data is large, as is often the case for modern datasets. A popular idea is to construct convex relaxations of these combinatorial problems, which can be solved efficiently for large-scale datasets. Semidefinite programming (SDP) relaxations are among the most powerful methods in this family and are surprisingly well suited for a broad range of problems where data take the form of matrices or graphs. It has been observed several times that when the statistical noise is small enough, SDP relaxations correctly detect the underlying combinatorial structures. In this paper we develop asymptotic predictions for several detection thresholds, as well as for the estimation error above these thresholds. We study some classical SDP relaxations for statistical problems motivated by graph synchronization and community detection in networks. We map these optimization problems to statistical mechanics models with vector spins and use nonrigorous techniques from statistical mechanics to characterize the corresponding phase transitions. Our results clarify the effectiveness of SDP relaxations in solving high-dimensional statistical problems.

14.
AMIA Annu Symp Proc ; 2015: 329-38, 2015.
Article in English | MEDLINE | ID: mdl-26958164

ABSTRACT

Recently, in response to the rising costs of healthcare services, employers that are financially responsible for the healthcare costs of their workforce have been investing in health improvement programs for their employees. A main objective of these so-called "wellness programs" is to reduce the incidence of chronic illnesses such as cardiovascular disease, cancer, diabetes, and obesity, with the goal of reducing future medical costs. The majority of these wellness programs include an annual screening to detect individuals with the highest risk of developing chronic disease. Once these individuals are identified, the company can invest in interventions to reduce the risk of those individuals. However, capturing many biomarkers per employee creates a costly screening procedure. We propose a statistical data-driven method to address this challenge by minimizing the number of biomarkers in the screening procedure while maximizing the predictive power over a broad spectrum of diseases. Our solution uses multi-task learning and group dimensionality reduction from machine learning and statistics. We provide empirical validation of the proposed solution using data from two different electronic medical records systems, with comparisons to a statistical benchmark.


Subjects
Biomarkers/analysis , Chronic Disease/prevention & control , Electronic Health Records , Health Promotion , Risk Assessment/methods , Chronic Disease/economics , Computer Simulation , Health Care Costs , Health Promotion/economics , Humans , Models, Biological , Predictive Value of Tests
15.
Proc Natl Acad Sci U S A ; 110(21): 8405-10, 2013 May 21.
Article in English | MEDLINE | ID: mdl-23650360

ABSTRACT

Let X0 be an unknown M-by-N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X0, where y_i = Tr(A_i^T X0) and each A_i is an M-by-N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n, M, N) = n/(MN), rank fraction ρ = rank(X0)/min{M, N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ; β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M, N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M-by-N matrix X0 is to be estimated based on direct noisy measurements Y = X0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y − X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ; β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ; β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M-by-N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N-by-N matrices, of various ranks.
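The matrix denoising problem min ||Y − X||_F²/2 + λ||X||_* has a closed-form solution: soft-thresholding of the singular values of Y. A sketch with arbitrary sizes, rank, noise level and threshold:

```python
# Singular-value soft-thresholding, the closed-form solution of the
# unconstrained nuclear-norm denoising problem described above. Sizes,
# rank, noise level and threshold are arbitrary illustration choices.
import numpy as np

def svt(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(6)
M = N = 30
X0 = rng.standard_normal((M, 2)) @ rng.standard_normal((2, N))  # rank 2
Y = X0 + 0.3 * rng.standard_normal((M, N))                      # noisy data

lam = 1.2 * 0.3 * (np.sqrt(M) + np.sqrt(N))  # just above the noise band
X_hat = svt(Y, lam)
err_noisy = np.linalg.norm(Y - X0)
err_denoised = np.linalg.norm(X_hat - X0)
print(err_noisy, err_denoised)
```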

16.
Proc Natl Acad Sci U S A ; 107(47): 20196-201, 2010 Nov 23.
Article in English | MEDLINE | ID: mdl-21076030

ABSTRACT

Which network structures favor the rapid spread of new ideas, behaviors, or technologies? This question has been studied extensively using epidemic models. Here we take a complementary point of view and consider scenarios where individuals' behavior is the result of a strategic choice among competing alternatives. In particular, we study models based on the dynamics of coordination games. Classical results in game theory on this model provide a simple condition for a new action or innovation to become widespread in a network. The present paper characterizes the rate of convergence as a function of the structure of the interaction network. The resulting predictions differ strongly from those provided by epidemic models. In particular, it appears that innovation spreads much more slowly on well-connected network structures dominated by long-range links than on low-dimensional ones dominated, for example, by geographic proximity.


Subjects
Diffusion of Innovation , Models, Theoretical , Social Support , Game Theory , Markov Chains , Monte Carlo Method
17.
Proc Natl Acad Sci U S A ; 106(45): 18914-9, 2009 Nov 10.
Article in English | MEDLINE | ID: mdl-19858495

ABSTRACT

Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately, known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding that makes the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.
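Iterative soft-thresholding, the family of fast algorithms the paper starts from, can be sketched directly. This is the plain version without the belief-propagation-inspired correction term, and the sizes and penalty are arbitrary illustration choices:

```python
# Plain iterative soft-thresholding for compressed sensing: recover a sparse
# vector from a few random linear measurements (without the paper's
# belief-propagation-inspired modification). Sizes and penalty arbitrary.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(7)
n, p, k = 80, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p)
x0[rng.choice(p, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0                                       # noiseless measurements

L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(p)
lam = 0.01
for _ in range(1000):
    x = soft(x + A.T @ (y - A @ x) / L, lam / L) # gradient step + thresholding

rel_err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
print(rel_err)
```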


Subjects
Algorithms , Models, Statistical , Sample Size , Statistics as Topic/methods
18.
Proc Natl Acad Sci U S A ; 104(25): 10318-23, 2007 Jun 19.
Article in English | MEDLINE | ID: mdl-17567754

ABSTRACT

An instance of a random constraint satisfaction problem defines a random subset S (the set of solutions) of a large product space χ^N (the set of assignments). We consider two prototypical problem ensembles (random k-satisfiability and q-coloring of random regular graphs) and study the uniform measure with support on S. As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states ("clusters") and subsequently condenses over the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp, and we determine their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performance of many search/sampling algorithms. Empirical evidence suggests that local Markov chain Monte Carlo strategies are effective up to the clustering phase transition and that belief propagation is effective up to the condensation point. Finally, refined message-passing techniques (such as survey propagation) may also beat this threshold.


Subjects
Algorithms , Models, Theoretical , Phase Transition , Entropy , Markov Chains , Monte Carlo Method
19.
Phys Rev E Stat Nonlin Soft Matter Phys ; 66(4 Pt 2): 046120, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12443272

ABSTRACT

State-of-the-art error-correcting codes are based on large random constructions (random graphs, random permutations, etc.) and are decoded by linear-time iterative algorithms. Because of these features, they are remarkable examples of diluted mean-field spin glasses, from both the static and dynamic points of view. We analyze the behavior of decoding algorithms by mapping them onto statistical-physics models. This allows us to understand the intrinsic (i.e., algorithm-independent) features of this behavior.
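The codes analyzed here are large random constructions; as a minimal concrete stand-in for "decoding a corrupted codeword", the sketch below does syndrome decoding of the tiny Hamming(7,4) code, which corrects any single flipped bit. This is not one of the paper's iterative decoders, just the smallest self-contained decoding example:

```python
# Syndrome decoding for the Hamming(7,4) code: the parity-check syndrome of
# a word with one flipped bit directly spells out the error position.
# A toy stand-in for decoding, far smaller than the random codes discussed.
import numpy as np

# parity-check matrix whose columns are the binary representations of 1..7
H = np.array([[(c >> i) & 1 for c in range(1, 8)] for i in range(3)])

def decode(word):
    syndrome = H @ word % 2
    pos = int(syndrome @ [1, 2, 4])       # 1-based position of the flipped bit
    corrected = word.copy()
    if pos:
        corrected[pos - 1] ^= 1
    return corrected

codeword = np.array([0, 0, 1, 0, 1, 1, 0])   # satisfies H @ c = 0 (mod 2)
received = codeword.copy()
received[4] ^= 1                             # flip one bit in transit
print(decode(received))
```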

20.
Phys Rev Lett ; 88(17): 178701, 2002 Apr 29.
Article in English | MEDLINE | ID: mdl-12005788

ABSTRACT

Randomized search algorithms for hard combinatorial problems exhibit large run-to-run variability in performance. We study the different types of rare events that occur in such out-of-equilibrium stochastic processes and show how they cooperate in determining the final distribution of running times. As a by-product of our analysis, we show how search algorithms can be optimized by random restarts.
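The by-product claim, that random restarts tame heavy-tailed running times, can be checked with a toy runtime model; the distribution below is invented purely for illustration:

```python
# Random restarts against heavy-tailed runtimes: cutting off long runs and
# restarting bounds the damage of the rare, extremely slow runs that
# dominate the mean. The runtime distribution is an invented toy model.
import random

random.seed(8)

def run_time():
    # heavy-tailed model: most runs finish fast, a few are extremely slow
    return 1 if random.random() < 0.9 else 1000

def time_with_restarts(cutoff):
    total = 0
    while True:
        t = run_time()
        if t <= cutoff:
            return total + t
        total += cutoff          # give up at the cutoff and restart

trials = 10000
plain = sum(run_time() for _ in range(trials)) / trials
restarted = sum(time_with_restarts(cutoff=2) for _ in range(trials)) / trials
print(plain, restarted)
```

With 10% of runs costing 1000 steps, the plain mean is near 100 steps, while restarting after 2 steps brings the expected cost down to roughly 1.2 steps per solved instance.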
