Results 1 - 8 of 8
1.
Biom J ; 66(6): e202300185, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39101657

ABSTRACT

There has been growing research interest in developing methodology to evaluate health care providers' performance with respect to a patient outcome. Random and fixed effects models are traditionally used for this purpose. We propose a new method that uses a fusion penalty to cluster health care providers based on quasi-likelihood. Without any a priori knowledge of grouping information, our method provides a desirable data-driven approach for automatically clustering health care providers into different groups based on their performance. Further, the quasi-likelihood is more flexible and robust than the regular likelihood in that no distributional assumption is needed. An efficient alternating direction method of multipliers algorithm is developed to implement the proposed method. We show that the proposed method enjoys the oracle property; namely, it performs as well as if the true group structure were known in advance. The consistency and asymptotic normality of the estimators are established. Simulation studies and an analysis of the national kidney transplant registry data demonstrate the utility and validity of our method.
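The fusion-penalty idea can be illustrated with a toy sketch (not the authors' implementation): replacing the quasi-likelihood with a squared-error loss and assuming providers are ordered so that only adjacent effects are penalized, the ADMM updates reduce to a linear solve, a soft threshold, and a dual step. All settings below are hypothetical.

```python
import numpy as np

def fused_lasso_admm(y, lam, rho=1.0, n_iter=1000):
    """Minimize 0.5*||y - theta||^2 + lam * sum_i |theta_{i+1} - theta_i|
    via ADMM with the splitting D @ theta = z."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)         # (n-1) x n first-difference matrix
    A = np.eye(n) + rho * D.T @ D          # system matrix for the theta-update
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        theta = np.linalg.solve(A, y + rho * D.T @ (z - u))
        w = D @ theta + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u = u + D @ theta - z
    return theta

# Eight providers, two latent performance groups (effects 0 and 5).
y = np.array([0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0])
theta = fused_lasso_admm(y, lam=1.0)
```

The penalty fuses the estimated effects within each group while keeping the two groups apart; the paper's method penalizes all pairwise differences rather than only adjacent ones.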


Subjects
Biometry, Health Personnel, Cluster Analysis, Likelihood Functions, Humans, Health Personnel/statistics & numerical data, Biometry/methods, Kidney Transplantation, Algorithms
2.
Biometrics ; 79(3): 2404-2416, 2023 09.
Article in English | MEDLINE | ID: mdl-36573805

ABSTRACT

Network analysis plays an important role in numerous application domains, including biomedicine. Estimating the number of communities is a fundamental and critical issue in network analysis. Most existing studies assume that the number of communities is known a priori, or lack a rigorous theoretical guarantee of estimation consistency. In this paper, we propose a regularized network embedding model that simultaneously estimates the community structure and the number of communities in a unified formulation. The proposed model equips network embedding with a novel composite regularization term, which pushes each embedding vector toward its community center and encourages similar community centers to collapse into one. A rigorous theoretical analysis establishes asymptotic consistency in terms of community detection and estimation of the number of communities. Extensive numerical experiments on both synthetic networks and a brain functional connectivity network demonstrate the superior performance of the proposed method compared with existing alternatives.


Subjects
Algorithms, Brain
3.
Stat Med ; 42(20): 3685-3698, 2023 09 10.
Article in English | MEDLINE | ID: mdl-37315935

ABSTRACT

There has been growing research interest in developing methodology to evaluate healthcare centers' performance with respect to patient outcomes. Conventional assessments can be conducted using fixed or random effects models, as seen in provider profiling. We propose a new method that uses a fusion penalty to cluster healthcare centers with respect to a survival outcome. Without any prior knowledge of the grouping information, the new method provides a desirable data-driven approach for automatically clustering healthcare centers into distinct groups based on their performance. An efficient alternating direction method of multipliers algorithm is developed to implement the proposed method. The validity of our approach is demonstrated through simulation studies, and its practical application is illustrated by analyzing data from the national kidney transplant registry.


Subjects
Algorithms, Delivery of Health Care, Humans, Proportional Hazards Models, Computer Simulation, Cluster Analysis
4.
Biometrics ; 78(1): 324-336, 2022 03.
Article in English | MEDLINE | ID: mdl-33215685

ABSTRACT

Electronic health records (EHRs) have become a platform for data-driven granular-level surveillance in recent years. In this paper, we make use of EHRs for the early prevention of childhood obesity. The proposed method simultaneously provides smooth disease mapping and outlier information for obesity prevalence, both of which are useful for raising public awareness and facilitating targeted intervention. More precisely, we consider a penalized multilevel generalized linear model. We decompose the regional contribution into smooth and sparse signals, which are automatically identified by a combination of fusion and sparse penalties imposed on the likelihood function. In addition, we weight the proposed likelihood to account for the missingness and potential nonrepresentativeness arising from the EHR data. We develop a novel alternating minimization algorithm that is computationally efficient, easy to implement, and guaranteed to converge. Simulation studies demonstrate the superior performance of the proposed method. Finally, we apply our method to the University of Wisconsin Population Health Information Exchange database.
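The smooth-plus-sparse decomposition can be sketched under simplifying assumptions: a 1D chain of regions, a Gaussian (least-squares) loss in place of the multilevel GLM likelihood, and an ℓ2 fusion penalty on the smooth part (which gives a closed-form update) rather than the paper's fusion penalty. Each alternating-minimization step below is then exact.

```python
import numpy as np

def smooth_sparse_decompose(y, lam_smooth=5.0, lam_sparse=1.0, n_iter=100):
    """Alternating minimization of
    0.5*||y - s - o||^2 + (lam_smooth/2)*||D s||^2 + lam_sparse*||o||_1,
    where D takes first differences along the chain of regions."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + lam_smooth * D.T @ D
    o = np.zeros(n)
    for _ in range(n_iter):
        s = np.linalg.solve(A, y - o)                             # smoothing step
        r = y - s
        o = np.sign(r) * np.maximum(np.abs(r) - lam_sparse, 0.0)  # lasso step
    return s, o

# Smooth regional trend in prevalence plus one outlying region at index 25.
x = np.arange(50)
y = np.sin(2 * np.pi * x / 50)
y[25] += 5.0
s, o = smooth_sparse_decompose(y)
```

The sparse component isolates the outlying region while the smooth component tracks the underlying trend, mirroring the mapping-plus-outlier output described in the abstract.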


Subjects
Electronic Health Records, Pediatric Obesity, Algorithms, Child, Computer Simulation, Humans, Likelihood Functions, Pediatric Obesity/epidemiology
5.
Stat Med ; 40(8): 1901-1916, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33517583

ABSTRACT

In this article, we are interested in capturing heterogeneity in clustered or longitudinal data. Traditionally, such heterogeneity is modeled by either fixed effects (FE) or random effects (RE). In FE models, the degrees of freedom for the heterogeneity equal the number of clusters/subjects minus 1, which can result in a loss of efficiency. In RE models, the heterogeneity across clusters/subjects is described by, for example, a random intercept with one parameter (the variance of the random intercept), which can lead to oversimplification and bias in the estimates of subject-specific effects. Our "fused effects" model stands in between these two approaches: we assume that there is an unknown number of distinct levels of heterogeneity, and we use the fusion penalty approach for estimation and inference. We evaluate and compare the performance of our method against FE and RE models in simulation studies. We apply our method to the Ocular Hypertension Treatment Study to capture the heterogeneity in the progression rate of primary open-angle glaucoma in the left and right eyes of different subjects.


Subjects
Glaucoma, Open-Angle, Glaucoma, Bias, Computer Simulation, Humans, Research Design
6.
Stat Anal Data Min ; 11(5): 203-226, 2018 Oct.
Article in English | MEDLINE | ID: mdl-34386148

ABSTRACT

In this paper, we propose a procedure to find differential edges between two graphs from high-dimensional data. We estimate two matrices of partial correlations and their differences by solving a penalized regression problem. We assume sparsity only on the differences between the two graphs, not on the graphs themselves. Accordingly, we impose an ℓ2 penalty on the partial correlations and an ℓ1 penalty on their differences in the penalized regression problem. We apply the proposed procedure to finding differential functional connectivity between healthy individuals and Alzheimer's disease patients.
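The ℓ2-plus-ℓ1 penalty structure can be sketched for a single node-wise regression (one node regressed on the rest in each group) rather than the full partial-correlation matrices; this is a hedged illustration, not the paper's estimator. The key ingredient is that the prox of the ℓ1 penalty on coefficient differences keeps the pairwise average and soft-thresholds the difference, so shared edges are estimated as exactly equal across groups.

```python
import numpy as np

def prox_diff(b1, b2, t):
    """Prox of t * sum_j |b1_j - b2_j|: keep averages, soft-threshold differences."""
    avg = (b1 + b2) / 2.0
    d = np.sign(b1 - b2) * np.maximum(np.abs(b1 - b2) - 2.0 * t, 0.0)
    return avg + d / 2.0, avg - d / 2.0

def differential_regression(X1, y1, X2, y2, lam_l2=0.1, lam_l1=0.5, n_iter=3000):
    """ISTA on 0.5*||y1-X1 b1||^2 + 0.5*||y2-X2 b2||^2
    + lam_l2*(||b1||^2 + ||b2||^2) + lam_l1*||b1 - b2||_1."""
    p = X1.shape[1]
    b1, b2 = np.zeros(p), np.zeros(p)
    L = max(np.linalg.eigvalsh(X1.T @ X1)[-1],
            np.linalg.eigvalsh(X2.T @ X2)[-1]) + 2 * lam_l2  # step-size bound
    for _ in range(n_iter):
        g1 = X1.T @ (X1 @ b1 - y1) + 2 * lam_l2 * b1
        g2 = X2.T @ (X2 @ b2 - y2) + 2 * lam_l2 * b2
        b1, b2 = prox_diff(b1 - g1 / L, b2 - g2 / L, lam_l1 / L)
    return b1, b2

rng = np.random.default_rng(0)
X1 = rng.standard_normal((100, 5))
X2 = rng.standard_normal((100, 5))
beta = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
beta_diff = beta.copy()
beta_diff[0] += 3.0                      # a single differential edge
b1, b2 = differential_regression(X1, X1 @ beta, X2, X2 @ beta_diff)
diff = b2 - b1
```

Only the coefficient corresponding to the differential edge comes out nonzero in the estimated difference; the shared coefficients are fused.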

7.
Electron J Stat ; 9(2): 2324-2347, 2015.
Article in English | MEDLINE | ID: mdl-27617051

ABSTRACT

In this manuscript, we study the statistical properties of convex clustering. We establish that convex clustering is closely related to single-linkage hierarchical clustering and k-means clustering. In addition, we derive the range of the tuning parameter for convex clustering that yields a non-trivial solution. We also provide an unbiased estimator of the degrees of freedom and a finite-sample bound on the prediction error for convex clustering. We compare convex clustering with some traditional clustering methods in simulation studies.
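A minimal sketch of convex clustering (not the paper's solver): each point gets its own centroid, and a fusion penalty on all pairwise centroid differences pulls centroids together. For simplicity this sketch smooths the Euclidean penalty (a small `eps`) and runs plain gradient descent; all settings are hypothetical.

```python
import numpy as np

def convex_cluster(X, lam, eps=1e-2, step=0.01, n_iter=6000):
    """Gradient descent on a smoothed convex clustering objective:
    0.5*sum_i ||x_i - u_i||^2 + lam*sum_{i<j} sqrt(||u_i - u_j||^2 + eps^2)."""
    U = X.copy()
    n = X.shape[0]
    for _ in range(n_iter):
        G = U - X                              # gradient of the fidelity term
        for i in range(n):
            d = U[i] - U                       # differences to all centroids
            norms = np.sqrt((d ** 2).sum(axis=1) + eps ** 2)
            G[i] += lam * (d / norms[:, None]).sum(axis=0)
        U = U - step * G
    return U

# Two well-separated clusters of four points each in 2D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(4, 2)),
               rng.normal(5.0, 0.1, size=(4, 2))])
U = convex_cluster(X, lam=0.3)
```

At this moderate tuning value the within-cluster centroids fuse while the two clusters stay apart; for a sufficiently large tuning parameter all centroids collapse to the grand mean, the trivial solution whose threshold the abstract characterizes.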

8.
J Comput Graph Stat ; 19(4): 930-946, 2010.
Article in English | MEDLINE | ID: mdl-25878487

ABSTRACT

In this article, we propose a new method for principal component analysis (PCA) whose main objective is to capture natural "blocking" structures in the variables. Beyond selecting different variables for different components, the method also encourages the loadings of highly correlated variables to have the same magnitude. These two features often help in interpreting the principal components. To achieve these goals, a fusion penalty is introduced and the resulting optimization problem is solved by an alternating block optimization algorithm. The method is applied to a number of simulated and real datasets, and it is shown to achieve the stated objectives. The supplemental materials for this article are available online.
