Results 1 - 13 of 13
1.
BMC Med Res Methodol ; 22(1): 324, 2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36526967

ABSTRACT

BACKGROUND: The role of immunological responses to bacterial exposure in disease incidence is increasingly under investigation. With many bacterial species, and many potential antibody reactions to a particular species, the large number of assays required for this type of discovery can be prohibitively expensive. We propose a two-phase group testing design to more efficiently screen numerous antibody effects in a case-control setting. METHODS: Phase 1 uses group testing to select antibodies that are differentially expressed between cases and controls. The selected antibodies go on to Phase 2 individual testing. RESULTS: We evaluate the two-phase group testing design through simulations and example data and find that it substantially reduces the number of assays required relative to standard case-control and group testing designs, while maintaining similar statistical properties. CONCLUSION: The proposed two-phase group testing design can dramatically reduce the number of assays required, while providing comparable results to a case-control design.
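As a rough illustration of the potential savings (not the authors' exact design), the sketch below counts assays under an assumed pool size and an assumed fraction of antibodies carried into Phase 2, and compares this with testing every subject individually for every antibody.

```python
# Back-of-envelope assay counts for a two-phase group testing screen versus a
# standard case-control design. Pool size, number of antibodies, and the
# fraction of antibodies selected in Phase 1 are assumed for illustration.

def assays_standard(n_cases, n_controls, n_antibodies):
    """Standard design: every subject assayed individually for every antibody."""
    return (n_cases + n_controls) * n_antibodies

def assays_two_phase(n_cases, n_controls, n_antibodies, pool_size, frac_selected):
    """Phase 1: pooled assays per antibody in each arm;
    Phase 2: individual assays only for the antibodies carried forward."""
    phase1 = (n_cases // pool_size + n_controls // pool_size) * n_antibodies
    phase2 = (n_cases + n_controls) * int(frac_selected * n_antibodies)
    return phase1 + phase2

if __name__ == "__main__":
    std = assays_standard(200, 200, 1000)
    two = assays_two_phase(200, 200, 1000, pool_size=10, frac_selected=0.05)
    print(f"standard: {std} assays; two-phase: {two} assays")
```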

2.
Stat Med ; 40(17): 3865-3880, 2021 Jul 30.
Article in English | MEDLINE | ID: mdl-33913183

ABSTRACT

Large-scale disease screening is a complicated process in which high costs must be balanced against pressing public health needs. When the goal is screening for infectious disease, one approach is group testing in which samples are initially tested in pools and individual samples are retested only if the initial pooled test was positive. Intuitively, if the prevalence of infection is small, this could result in a large reduction of the total number of tests required. Despite this, the use of group testing in medical studies has been limited, largely due to skepticism about the impact of pooling on the accuracy of a given assay. While there is a large body of research addressing the issue of testing errors in group testing studies, it is customary to assume that the misclassification parameters are known from an external population and/or that the values do not change with the group size. Both of these assumptions are highly questionable for many medical practitioners considering group testing in their study design. In this article, we explore how the failure of these assumptions might impact the efficacy of a group testing design and, consequently, whether group testing is currently feasible for medical screening. Specifically, we look at how incorrect assumptions about the sensitivity function at the design stage can lead to poor estimation of a procedure's overall sensitivity and expected number of tests. Furthermore, if a validation study is used to estimate the pooled misclassification parameters of a given assay, we show that the sample sizes required are so large as to be prohibitive in all but the largest screening programs.
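To make the design-stage issue concrete, the following sketch computes the expected number of tests per person and the overall sensitivity of a two-stage (Dorfman) procedure when the pooled-test sensitivity is allowed to decline with group size; the dilution function and error rates are placeholder assumptions, not estimates from any particular assay, and errors at the two stages are treated as independent.

```python
# Sketch: effect of a group-size-dependent (dilution-affected) sensitivity on a
# two-stage Dorfman design. The dilution model se_pool(k) and the error rates
# are placeholder assumptions; errors at the two stages are taken as independent.

def se_pool(k, se_individual=0.95, decay=0.02):
    """Assumed pooled-test sensitivity, declining linearly with group size k."""
    return max(0.5, se_individual - decay * (k - 1))

def dorfman_summary(p, k, se_ind=0.95, sp=0.98, dilution=True):
    """Expected tests per person and overall sensitivity of Dorfman testing."""
    se_p = se_pool(k, se_ind) if dilution else se_ind
    prob_pool_positive = se_p * (1 - (1 - p) ** k) + (1 - sp) * (1 - p) ** k
    tests_per_person = 1.0 / k + prob_pool_positive   # share of pool test + retest if pool positive
    overall_sensitivity = se_p * se_ind                # a true positive must be caught at both stages
    return tests_per_person, overall_sensitivity

if __name__ == "__main__":
    for dilution in (False, True):
        t, s = dorfman_summary(p=0.02, k=10, dilution=dilution)
        print(f"dilution={dilution}: E[tests/person]={t:.3f}, overall sensitivity={s:.3f}")
```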


Subjects
Mass Screening, Costs and Cost Analysis, Humans, Prevalence
3.
Stat Med ; 37(27): 3991-4006, 2018 Nov 30.
Article in English | MEDLINE | ID: mdl-29984411

ABSTRACT

For the two-sample problem, the Wilcoxon-Mann-Whitney (WMW) test is used frequently: it is simple to explain (a permutation test on the difference in mean ranks), it handles continuous or ordinal responses, it can be implemented for large or small samples, it is robust to outliers, it requires few assumptions, and it is efficient in many cases. Unfortunately, the WMW test is rarely presented with an effect estimate and confidence interval. A natural effect parameter associated with this test is the Mann-Whitney parameter, φ = Pr[X < Y] + 0.5·Pr[X = Y], where X and Y are independent responses from the two groups.
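For reference, a minimal sketch of the usual point estimate of this parameter (with the standard one-half convention for ties) is given below; the confidence intervals compatible with the WMW test that the article focuses on are not reproduced here.

```python
# Point estimate of the Mann-Whitney parameter, phi = Pr[X < Y] + 0.5*Pr[X = Y],
# computed over all pairs from the two samples (ties get the one-half convention).

import numpy as np

def mann_whitney_phi(x, y):
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[None, :]
    return float(np.mean((x < y) + 0.5 * (x == y)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 40)
    y = rng.normal(0.5, 1.0, 50)
    print(f"phi-hat = {mann_whitney_phi(x, y):.3f}")  # values above 0.5 favor larger Y
```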

Subjects
Confidence Intervals, Nonparametric Statistics, Humans, Statistical Models, Reproducibility of Results
4.
Biom J ; 59(6): 1382-1398, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28792074

ABSTRACT

Group testing estimation, which utilizes pooled rather than individual units for testing, has been an ongoing area of research for over six decades. While it is often argued that such methods can yield large savings in terms of resources and/or time, these benefits depend very much on the initial choice of pool sizes. In fact, when poor group sizes are used, the results can be much worse than those obtained using standard techniques. Tools for addressing this problem in the literature have been based on either large sample results or prior knowledge of the parameter being estimated, with little guidance when these assumptions are not met. In this paper, we introduce and study random walk designs for choosing pool sizes when only a small number of tests can be run and prior knowledge is vague. To illustrate these methods, application is made to the estimation of prevalence for two diseases among Australian chrysanthemum crops.


Subjects
Biometry/methods, Chrysanthemum/virology, Plant Diseases/virology, Prevalence, Sample Size, Stochastic Processes
5.
Biometrics ; 72(1): 299-302, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26393800

ABSTRACT

In the context of group testing for screening, McMahan, Tebbs, and Bilder (2012, Biometrics 68, 287-296) proposed a two-stage procedure for a heterogeneous population in the presence of misclassification. In earlier work published in Biometrics, Kim, Hudgens, Dreyfuss, Westreich, and Pilcher (2007, Biometrics 63, 1152-1162) also proposed group testing algorithms for a homogeneous population with misclassification. In both cases, the authors evaluated performance of the algorithms based on the expected number of tests per person, with the optimal design being defined by minimizing this quantity. The purpose of this article is to show that although the expected number of tests per person is an appropriate evaluation criterion for group testing when there is no misclassification, it may be problematic when there is misclassification. Specifically, a valid criterion needs to take into account the amount of correct classification and not just the number of tests. We propose a more suitable objective function that accounts for not only the expected number of tests, but also the expected number of correct classifications. We then show how using this objective function is important for design when considering group testing under misclassification. We also present novel analytical results that characterize the optimal Dorfman (1943) design under misclassification.
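The sketch below illustrates the kind of trade-off involved, computing both the expected number of tests per person and the probability of correct classification for two-stage Dorfman testing; it assumes a common sensitivity and specificity at both stages and independent testing errors, and is an illustration rather than the authors' exact objective function.

```python
# Illustration of an objective that weighs expected tests against correct
# classification for two-stage Dorfman testing with an imperfect assay.
# Assumes the same sensitivity/specificity at both stages and independent
# testing errors; not the authors' exact objective function.

def dorfman_metrics(p, k, se=0.95, sp=0.98):
    q = (1 - p) ** (k - 1)                         # P(other k-1 pool members are negative)
    pool_pos_given_neg = se * (1 - q) + (1 - sp) * q
    pool_pos_given_pos = se
    pool_pos = p * pool_pos_given_pos + (1 - p) * pool_pos_given_neg
    tests = 1.0 / k + pool_pos                     # share of pool test + retest if pool positive
    correct_pos = pool_pos_given_pos * se          # true positive caught at both stages
    correct_neg = (1 - pool_pos_given_neg) + pool_pos_given_neg * sp
    correct = p * correct_pos + (1 - p) * correct_neg
    return tests, correct

if __name__ == "__main__":
    for k in (2, 5, 10, 20):
        t, c = dorfman_metrics(p=0.05, k=k)
        print(f"k={k:2d}: E[tests/person]={t:.3f}, P(correct classification)={c:.4f}")
```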


Subjects
Algorithms, Artifacts, Biometry/methods, Chlamydia Infections/epidemiology, Statistical Data Interpretation, Gonorrhea/epidemiology, HIV Infections/diagnosis, HIV Infections/epidemiology, Mass Screening/methods, Population Surveillance/methods, Risk Assessment/methods, Humans
7.
J Appl Stat ; 50(10): 2228-2245, 2023.
Article in English | MEDLINE | ID: mdl-37434628

ABSTRACT

Group testing study designs have been used since the 1940s to reduce screening costs for uncommon diseases; for rare diseases, all cases are identifiable with substantially fewer tests than the population size. Substantial research has identified efficient designs under this paradigm. However, little work has focused on the important problem of disease screening among clustered data, such as geographic heterogeneity in HIV prevalence. We evaluate designs where we first estimate disease prevalence and then apply efficient group testing algorithms using these estimates. Specifically, we estimate prevalence using individual testing on a fixed-size subset of each cluster and use these prevalence estimates to choose group sizes that minimize the corresponding estimated average number of tests per subject. We compare designs where we estimate cluster-specific prevalences as well as a common prevalence across clusters, use different group testing algorithms, construct groups from individuals within and across clusters, and consider misclassification. For diseases with low prevalence, our results suggest that accounting for clustering is unnecessary. However, for diseases with higher prevalence and sizeable between-cluster heterogeneity, accounting for clustering in study design and implementation improves efficiency. We consider the practical aspects of our design recommendations with two examples with strong clustering effects: (1) identification of HIV carriers in the US population and (2) laboratory screening of anti-cancer compounds using cell lines.
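A minimal sketch of this design idea, under a simplifying perfect-assay assumption and invented cluster prevalences, is given below: each cluster's prevalence is estimated from an individually tested subset, and the Dorfman group size minimizing the estimated expected tests per subject is then chosen for that cluster.

```python
# Sketch of the cluster-specific design idea: estimate each cluster's prevalence
# from an individually tested subset, then pick the Dorfman group size that
# minimizes the estimated expected number of tests per subject in that cluster.
# A perfect assay is assumed; cluster prevalences and the subset size are invented.

import numpy as np

def dorfman_tests_per_subject(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p_hat, k_max=50):
    return min(range(2, k_max + 1), key=lambda k: dorfman_tests_per_subject(p_hat, k))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    true_prevalence = {"cluster A": 0.01, "cluster B": 0.10}  # strong heterogeneity
    subset_size = 100                                         # pilot individuals per cluster
    for name, p in true_prevalence.items():
        p_hat = rng.binomial(subset_size, p) / subset_size
        k = best_group_size(max(p_hat, 1.0 / subset_size))    # guard against p_hat = 0
        print(f"{name}: p_hat={p_hat:.3f}, chosen group size={k}")
```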

8.
Biometrics ; 68(1): 45-52, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21981372

ABSTRACT

Due to the rising cost of laboratory assays, it has become increasingly common in epidemiological studies to pool biospecimens. This is particularly true in longitudinal studies, where the cost of performing multiple assays over time can be prohibitive. In this article, we consider the problem of estimating the parameters of a Gaussian random effects model when the repeated outcome is subject to pooling. We consider different pooling designs for the efficient maximum likelihood estimation of variance components, with particular attention to estimating the intraclass correlation coefficient. We evaluate the efficiencies of different pooling design strategies using analytic and simulation study results. We examine the robustness of the designs to skewed distributions and consider unbalanced designs. The design methodology is illustrated with a longitudinal study of premenopausal women focusing on assessing the reproducibility of F2-isoprostane, a biomarker of oxidative stress, over the menstrual cycle.
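As a minimal sketch of the quantities involved (assumed variance components, not the article's design optimization), the code below computes the intraclass correlation for a Gaussian random-intercept model and the variance of a physically pooled specimen formed by averaging m within-subject samples.

```python
# Minimal sketch: intraclass correlation for a Gaussian random-intercept model
# y_ij = mu + b_i + e_ij, and the variance of a physically pooled specimen formed
# by averaging m within-subject samples. Variance components are assumed values.

def icc(sigma2_between, sigma2_within):
    return sigma2_between / (sigma2_between + sigma2_within)

def pooled_specimen_variance(sigma2_between, sigma2_within, m):
    """Pooling m within-subject specimens averages out within-subject variability."""
    return sigma2_between + sigma2_within / m

if __name__ == "__main__":
    sb2, sw2 = 1.0, 3.0
    print(f"ICC = {icc(sb2, sw2):.2f}")
    for m in (1, 2, 4):
        print(f"m={m}: Var(pooled specimen) = {pooled_specimen_variance(sb2, sw2, m):.2f}")
```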


Subjects
F2-Isoprostanes/blood, Menstrual Cycle/blood, Outcome Assessment (Health Care)/methods, Psychological Stress/blood, Psychological Stress/epidemiology, Biomarkers/blood, Female, Humans, Likelihood Functions, Normal Distribution, Premenopause, Psychological Stress/diagnosis, Treatment Outcome
9.
Stat Med ; 31(22): 2498-512, 2012 Sep 28.
Article in English | MEDLINE | ID: mdl-21805485

ABSTRACT

Measurement error (ME) problems can cause bias or inconsistency of statistical inferences. When investigators are unable to obtain correct measurements of biological assays, special techniques to quantify MEs need to be applied. Sampling based on repeated measurements is a common strategy to allow for ME, and this approach has been well addressed in the literature under parametric assumptions. However, repeated-measures sampling may not be applicable when replication is complicated by cost and/or time constraints. Pooling designs have been proposed as cost-efficient sampling procedures that can help provide valid statistical inference from data subject to ME. We demonstrate that a mixture of both pooled and unpooled data (a hybrid pooled-unpooled design) can support very efficient estimation and testing in the presence of ME. Nonparametric techniques have not been well investigated for analyzing repeated measures data or pooled data subject to ME. We propose and examine both parametric and empirical likelihood methodologies for data subject to ME. We conclude that the likelihood methods based on the hybrid samples are very efficient and powerful. The results of an extensive Monte Carlo study support our conclusions. Real data examples demonstrate the efficiency of the proposed methods in practice.


Subjects
Biomarkers/analysis, Statistical Data Interpretation, Likelihood Functions, Cholesterol/blood, Computer Simulation, Humans, Monte Carlo Method, Myocardial Infarction/blood
10.
J Stat Plan Inference ; 141(8): 2633-2644, 2011 Aug 01.
Article in English | MEDLINE | ID: mdl-21643478

ABSTRACT

The theoretical literature on quantile and distribution function estimation in infinite populations is very rich, and invariance plays an important role in these studies. This is not the case for the commonly occurring problem of estimation of quantiles in finite populations. The latter is more complicated and interesting because an optimal strategy consists not only of an estimator, but also of a sampling design, and the estimator may depend on the design and on the labels of sampled individuals, whereas in i.i.d. sampling, design issues and labels do not exist. We study estimation of finite population quantiles, with emphasis on estimators that are invariant under the group of monotone transformations of the data, and suitable invariant loss functions. Invariance under the finite group of permutations of the sample is also considered. We discuss nonrandomized and randomized estimators, best invariant and minimax estimators, and sampling strategies relative to different classes. Invariant loss functions and estimators in finite population sampling have a nonparametric flavor, and various natural combinatorial questions and tools arise as a result.

11.
Am Stat ; 73(2): 117-125, 2019.
Article in English | MEDLINE | ID: mdl-31814627

ABSTRACT

Group testing has its origin in the identification of syphilis in the U.S. Army during World War II. Much of the theoretical framework of group testing was developed starting in the late 1950s, with continued work into the 1990s. Recently, with the advent of new laboratory and genetic technologies, there has been increasing interest in group testing designs for cost-saving purposes. In this article, we compare different nested designs, including Dorfman, Sterrett, and an optimal nested procedure obtained through dynamic programming. To elucidate these comparisons, we develop closed-form expressions for the optimal Sterrett procedure and provide a concise review of the prior literature for other commonly used procedures. We consider designs where the prevalence of disease is known, and we investigate the robustness of these procedures when the prevalence is incorrectly specified. This article provides a technical presentation that will be of interest to researchers as well as being useful from a pedagogical perspective. Supplementary material for this article is available online.
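As a hedged illustration, the Monte Carlo sketch below compares the expected number of tests per subject under Dorfman two-stage testing and a Sterrett-style procedure (test the pool; if positive, test members one at a time until the first positive is found, then re-pool the remainder), assuming a perfect assay and a known prevalence; the article's closed-form expressions and dynamic-programming design are not reproduced.

```python
# Monte Carlo sketch comparing Dorfman and a Sterrett-style nested procedure
# (perfect assay, known prevalence). The last member of a positive remainder is
# tested even when its status could be inferred -- a small simplification.

import numpy as np

rng = np.random.default_rng(0)

def dorfman_tests(statuses):
    """Test the pool; if positive, retest every member individually."""
    return 1 + (len(statuses) if statuses.any() else 0)

def sterrett_tests(statuses):
    """Test the pool; if positive, test members one at a time until the first
    positive is found, then recurse on the untested remainder."""
    statuses = np.asarray(statuses)
    if statuses.size == 0:
        return 0
    tests = 1
    if not statuses.any():
        return tests
    for i, positive in enumerate(statuses):
        tests += 1
        if positive:
            return tests + sterrett_tests(statuses[i + 1:])
    return tests

def expected_tests_per_subject(procedure, p, k, reps=20000):
    total = sum(procedure(rng.random(k) < p) for _ in range(reps))
    return total / (reps * k)

if __name__ == "__main__":
    p, k = 0.05, 8
    print(f"Dorfman : {expected_tests_per_subject(dorfman_tests, p, k):.3f} tests/subject")
    print(f"Sterrett: {expected_tests_per_subject(sterrett_tests, p, k):.3f} tests/subject")
```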

12.
Electron J Stat ; 13(2): 2624-2657, 2019.
Article in English | MEDLINE | ID: mdl-34267856

ABSTRACT

Estimation of a single Bernoulli parameter using pooled sampling is among the oldest problems in the group testing literature. To carry out such estimation, an array of efficient estimators has been introduced, covering a wide range of situations routinely encountered in applications. More recently, there has been growing interest in using group testing to simultaneously estimate the joint probabilities of two correlated traits using a multinomial model. Unfortunately, basic estimation results, such as the maximum likelihood estimator (MLE), have not been adequately addressed in the literature for such cases. In this paper, we show that finding the MLE for this problem is equivalent to maximizing a multinomial likelihood with a restricted parameter space. A solution using the EM algorithm is presented that is guaranteed to converge to the global maximizer, even on the boundary of the parameter space. Two additional closed-form estimators are presented with the goal of minimizing the bias and/or mean square error. The methods are illustrated with an application to the joint estimation of transmission prevalence for two strains of Potato virus Y by the aphid Myzus persicae.
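For orientation, the sketch below illustrates the classical single-trait estimator referenced in the opening sentence, namely the MLE of a Bernoulli prevalence from pools of common size under a perfect assay; the article's multinomial EM estimator for two correlated traits is considerably more involved and is not shown.

```python
# Classical single-trait illustration: the MLE of a Bernoulli prevalence p from
# n pools of common size k under a perfect assay. If T of the n pools test
# positive, T ~ Binomial(n, 1 - (1-p)^k) and the MLE is 1 - (1 - T/n)^(1/k).
# (The two-trait multinomial EM estimator discussed above is not shown.)

def pooled_mle(n_pools, pool_size, n_positive_pools):
    prop_positive = n_positive_pools / n_pools
    return 1.0 - (1.0 - prop_positive) ** (1.0 / pool_size)

if __name__ == "__main__":
    # Example: 50 pools of size 10, of which 12 tested positive.
    print(f"p-hat = {pooled_mle(50, 10, 12):.4f}")
```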

13.
Am Stat ; 69(1): 45-52, 2015.
Article in English | MEDLINE | ID: mdl-28042146

ABSTRACT

Group testing is an active area of current research and has important applications in medicine, biotechnology, genetics, and product testing. There have been recent advances in design and estimation, but the simple Dorfman procedure introduced by R. Dorfman in 1943 is still widely used in practice. In many practical situations, the exact value of the probability p of being affected is unknown. We present both minimax and Bayesian solutions for the group size problem when p is unknown. When p is unrestricted, we show that the minimax solution for group size is 8, while using a Bayesian strategy with Jeffreys' prior results in a group size of 13. We also present solutions when p is bounded from above. For the practitioner, we propose strong justification for using a group size of between 8 and 13 when a constraint on p is not incorporated, and we provide usable code for computing the minimax group size under a constrained p.
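The sketch below shows the expected-tests-per-person function underlying the group size problem (perfect assay) and checks how fixed group sizes of 8 and 13 compare with the p-specific optimum across a range of prevalences; it does not reproduce the minimax or Bayesian derivations reported above.

```python
# Sketch: expected tests per person for Dorfman testing (perfect assay), and how
# fixed group sizes of 8 and 13 compare with the p-specific optimum. This does
# not reproduce the minimax or Bayesian derivations reported above.

def tests_per_person(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def optimal_k(p, k_max=200):
    return min(range(2, k_max + 1), key=lambda k: tests_per_person(p, k))

if __name__ == "__main__":
    for p in (0.001, 0.01, 0.05, 0.10):
        k_star = optimal_k(p)
        best = tests_per_person(p, k_star)
        ratios = ", ".join(f"k={k}: {tests_per_person(p, k) / best:.2f}x optimal" for k in (8, 13))
        print(f"p={p:<5}: optimal k={k_star:3d} ({best:.3f} tests/person); {ratios}")
```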
