Results 1-20 of 110
1.
Stat Med ; 40(6): 1429-1439, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33314199

ABSTRACT

Interval cancers are cancers detected symptomatically between screens or after the last screen. A mathematical model for the development of interval cancers can provide useful information for evaluating cancer screening. In this regard, a useful quantity is MIC, the mean duration in years of progressive preclinical cancer (PPC) that leads to interval cancers. Estimation of MIC involved extending a previous model to include three negative screens, invoking the multinomial-Poisson transformation to avoid estimating background cancer trends, and varying screening test sensitivity. Simulations show that when the true MIC is 0.5, the method yields a reasonably narrow range of estimated MICs over the range of screening test sensitivities from 0.5 to 1.0. If the lower bound on the screening test sensitivity is 0.7, the method performs considerably better even for larger MICs. The application of the method involved annual lung cancer screening in the Prostate, Lung, Colorectal, and Ovarian trial. Assuming a normal distribution for PPC duration, the estimated MIC (95% confidence interval) ranged from 0.00 (0.00, 0.34) at a screening test sensitivity of 1.0 to 0.54 (0.03, 1.00) at a screening test sensitivity of 0.5. Assuming an exponential distribution for PPC duration, which did not fit as well, the estimated MIC ranged from 0.27 (0.08, 0.49) at a screening test sensitivity of 0.5 to 0.73 (0.32, 1.26) at a screening test sensitivity of 1.0. Based on these results, investigators may wish to investigate more frequent lung cancer screening.


Subjects
Breast Neoplasms, Lung Neoplasms, Early Detection of Cancer, Humans, Lung Neoplasms/diagnosis, Male, Mass Screening, Negative Results
2.
Article in English | MEDLINE | ID: mdl-32206075

ABSTRACT

A key aspect of the article by Lousdal on instrumental variables was a discussion of the monotonicity assumption. However, there was no mention of the history of the development of this assumption. The purpose of this letter is to note that Baker and Lindeman and Imbens and Angrist independently introduced the monotonicity assumption into the analysis of instrumental variables. The letter also places the monotonicity assumption in the context of the method of latent class instrumental variables.

3.
Stat Med ; 38(22): 4453-4474, 2019 09 30.
Article in English | MEDLINE | ID: mdl-31392751

ABSTRACT

Many clinical or prevention studies involve missing or censored outcomes. Maximum likelihood (ML) methods provide a conceptually straightforward approach to estimation when the outcome is partially missing. Methods of implementing ML methods range from the simple to the complex, depending on the type of data and the missing-data mechanism. Simple ML methods for ignorable missing-data mechanisms (when data are missing at random) include complete-case analysis, complete-case analysis with covariate adjustment, survival analysis with covariate adjustment, and analysis via propensity-to-be-missing scores. More complex ML methods for ignorable missing-data mechanisms include the analysis of longitudinal dropouts via a marginal model for continuous data or a conditional model for categorical data. A moderately complex ML method for categorical data with a saturated model and either ignorable or nonignorable missing-data mechanisms is a perfect fit analysis, an algebraic method involving closed-form estimates and variances. A complex and flexible ML method with categorical data and either ignorable or nonignorable missing-data mechanisms is the method of composite linear models, a matrix method requiring specialized software. Except for the method of composite linear models, which can involve challenging matrix specifications, the implementation of these ML methods ranges in difficulty from easy to moderate.
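The simplest method named above, complete-case analysis with covariate adjustment under a missing-at-random mechanism, can be illustrated with a small simulation. This is a hedged sketch with fabricated data, not an analysis from the article: when missingness depends only on an observed covariate, a regression restricted to complete cases still recovers the true coefficients.

```python
# Hypothetical illustration: complete-case regression is valid when the
# outcome is missing at random given an observed covariate. All data are
# simulated; nothing here comes from the article itself.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                          # covariate, always observed
y = 2.0 + 1.5 * x + rng.normal(size=n)          # outcome, true model
observed = rng.random(n) < 1 / (1 + np.exp(-x)) # missingness depends on x only (MAR)

# Complete-case regression of y on x: drop rows where y is missing
xc, yc = x[observed], y[observed]
X = np.column_stack([np.ones(xc.size), xc])
beta = np.linalg.lstsq(X, yc, rcond=None)[0]
print(np.round(beta, 2))  # close to the true values [2.0, 1.5]
```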


Subjects
Bias, Likelihood Functions, Computer Simulation, Statistical Data Interpretation, Humans, Statistical Models, Propensity Score, Randomized Controlled Trials as Topic, Survival Analysis
5.
Stat Med ; 37(4): 507-518, 2018 02 20.
Article in English | MEDLINE | ID: mdl-29164641

ABSTRACT

A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on the true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation.
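The zero-intercept model mentioned above can be sketched in its simplest form. This is a hypothetical illustration with made-up trial-level effect estimates, using plain zero-intercept least squares as a stand-in for the full random effects model described in the abstract:

```python
# Hypothetical sketch: predict the treatment effect on the true endpoint in a
# new trial from its observed surrogate-endpoint effect, using a zero-intercept
# line fitted across historical trials. A simplification of the random effects
# model described in the abstract; all numbers are fabricated.
import numpy as np

# Hypothetical historical trial-level effect estimates
surrogate_effects = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
true_effects      = np.array([0.08, 0.20, 0.35, 0.50, 0.61])

# Zero-intercept least squares: slope = sum(x*y) / sum(x^2)
slope = np.sum(surrogate_effects * true_effects) / np.sum(surrogate_effects ** 2)

# Predict the true-endpoint effect given a new trial's surrogate effect
new_surrogate_effect = 0.30
predicted_true_effect = slope * new_surrogate_effect
print(round(predicted_true_effect, 3))
```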


Subjects
Biomarkers/analysis, Randomized Controlled Trials as Topic/statistics & numerical data, Biostatistics/methods, Computer Simulation, Humans, Likelihood Functions, Linear Models, Meta-Analysis as Topic, Sample Size, Treatment Outcome
7.
Biometrics ; 72(3): 827-34, 2016 09.
Article in English | MEDLINE | ID: mdl-26753781

ABSTRACT

The twin method refers to the use of data from same-sex identical and fraternal twins to estimate the genetic and environmental contributions to a trait or outcome. The standard twin method is the variance component twin method that estimates heritability, the fraction of variance attributed to additive genetic inheritance. The latent class twin method estimates two quantities that are easier to interpret than heritability: the genetic prevalence, which is the fraction of persons in the genetic susceptibility latent class, and the heritability fraction, which is the fraction of persons in the genetic susceptibility latent class with the trait or outcome. We extend the latent class twin method in three important ways. First, we incorporate an additive genetic model to broaden the sensitivity analysis beyond the original autosomal dominant and recessive genetic models. Second, we specify a separate survival model to simplify computations and improve convergence. Third, we show how to easily adjust for covariates by extending the method of propensity scores from a treatment difference to zygosity. Applying the latent class twin method to data on breast cancer among Nordic twins, we estimated a genetic prevalence of 1%, a result with important implications for breast cancer prevention research.


Subjects
Statistical Data Interpretation, Genetic Models, Twin Studies as Topic/statistics & numerical data, Breast Neoplasms/genetics, Female, Gene-Environment Interaction, Genetic Predisposition to Disease, Humans, Prevalence, Scandinavian and Nordic Countries
8.
Stat Med ; 35(1): 147-60, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26239275

ABSTRACT

In some two-arm randomized trials, some participants receive the treatment assigned to the other arm as a result of technical problems, refusal of a treatment invitation, or a choice of treatment in an encouragement design. In some before-and-after studies, the availability of a new treatment changes from one time period to the next. Under assumptions that are often reasonable, the latent class instrumental variable (IV) method estimates the effect of treatment received in the aforementioned scenarios involving all-or-none compliance and all-or-none availability. Key aspects are four initial latent classes (sometimes called principal strata) based on treatment received if in each randomization group or time period, the exclusion restriction assumption (in which randomization group or time period is an instrumental variable), the monotonicity assumption (which drops an implausible latent class from the analysis), and the estimated effect of receiving treatment in one latent class (sometimes called efficacy, the local average treatment effect, or the complier average causal effect). Since its independent formulations in the biostatistics and econometrics literatures, the latent class IV method (which has no well-established name) has gained increasing popularity. We review the latent class IV method from a clinical and biostatistical perspective, focusing on underlying assumptions, methodological extensions, and applications in our fields of obstetrics and cancer research.
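Under the exclusion restriction and monotonicity assumptions described above, with all-or-none compliance the complier average causal effect reduces to the classic instrumental variable (Wald) estimate: the intention-to-treat effect divided by the difference in treatment-receipt proportions between randomized arms. A minimal sketch with hypothetical numbers:

```python
# Hypothetical two-arm trial with noncompliance; all rates are made up.
outcome_rate_treat_arm = 0.12    # event rate among those randomized to treatment
outcome_rate_control_arm = 0.18  # event rate among those randomized to control
received_in_treat_arm = 0.80     # proportion actually treated, treatment arm
received_in_control_arm = 0.05   # proportion actually treated, control arm

# Wald estimate of the complier average causal effect (CACE)
itt_effect = outcome_rate_treat_arm - outcome_rate_control_arm
compliance_diff = received_in_treat_arm - received_in_control_arm
cace = itt_effect / compliance_diff
print(round(cace, 3))  # prints -0.08
```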


Subjects
Biostatistics/methods, Randomized Controlled Trials as Topic/statistics & numerical data, Epidural Analgesia/statistics & numerical data, Biomarkers, Cost-Benefit Analysis, Early Detection of Cancer/statistics & numerical data, Female, Humans, Meta-Analysis as Topic, Statistical Models, Neoplasms/prevention & control, Neoplasms/therapy, Pregnancy, Treatment Outcome
9.
Ann Intern Med ; 172(11): 775-776, 2020 06 02.
Article in English | MEDLINE | ID: mdl-32479148
11.
Clin Trials ; 12(4): 299-308, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25385934

ABSTRACT

BACKGROUND: A surrogate endpoint is an endpoint observed earlier than the true endpoint (a health outcome) that is used to draw conclusions about the effect of treatment on the unobserved true endpoint. A prognostic marker is a marker for predicting the risk of an event given a control treatment; it informs treatment decisions when there is information on anticipated benefits and harms of a new treatment applied to persons at high risk. A predictive marker is a marker for predicting the effect of treatment on outcome in a subgroup of patients or study participants; it provides more rigorous information for treatment selection than a prognostic marker when it is based on estimated treatment effects in a randomized trial. METHODS: We organized our discussion around a different theme for each topic. RESULTS: "Fundamentally an extrapolation" refers to the non-statistical considerations and assumptions needed when using surrogate endpoints to evaluate a new treatment. "Decision analysis to the rescue" refers to the use of decision analysis to evaluate an additional prognostic marker because it is not possible to choose between purely statistical measures of marker performance. "The appeal of simplicity" refers to a straightforward and efficient use of a single randomized trial to evaluate overall treatment effect and treatment effect within subgroups using predictive markers. CONCLUSION: The simple themes provide a general guideline for evaluation of surrogate endpoints, prognostic markers, and predictive markers.


Subjects
Biomarkers/analysis, Outcome Assessment in Health Care, Algorithms, Statistical Data Interpretation, Forecasting, Humans, Outcome Assessment in Health Care/methods
12.
Ann Intern Med ; 170(9): 664-665, 2019 05 07.
Article in English | MEDLINE | ID: mdl-31060070

Subjects
Biostatistics
14.
Stat Med ; 33(22): 3946-59, 2014 Sep 28.
Article in English | MEDLINE | ID: mdl-24825728

ABSTRACT

An important question in the evaluation of an additional risk prediction marker is how to interpret a small increase in the area under the receiver operating characteristic curve (AUC). Many researchers believe that a change in AUC is a poor metric because it increases only slightly with the addition of a marker with a large odds ratio. Because it is not possible on purely statistical grounds to choose between the odds ratio and AUC, we invoke decision analysis, which incorporates costs and benefits. For example, a timely estimate of the risk of later non-elective operative delivery can help a woman in labor decide if she wants an early elective cesarean section to avoid greater complications from possible later non-elective operative delivery. A basic risk prediction model for later non-elective operative delivery involves only antepartum markers. Because adding intrapartum markers to this risk prediction model increases AUC by 0.02, we questioned whether this small improvement is worthwhile. A key decision-analytic quantity is the risk threshold, here the risk of later non-elective operative delivery at which a patient would be indifferent between an early elective cesarean section and usual care. For a range of risk thresholds, we found that an increase in the net benefit of risk prediction requires collecting intrapartum marker data on 68 to 124 women for every correct prediction of later non-elective operative delivery. Because data collection is non-invasive, this test tradeoff of 68 to 124 is clinically acceptable, indicating the value of adding intrapartum markers to the risk prediction model.
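The decision-analytic quantities above can be sketched numerically. The block below is an illustration in the spirit of the abstract, not the authors' exact formulation: it computes a decision-curve-style net benefit at a risk threshold, and takes the reciprocal of the net-benefit gain as a rough count of data collections per additional true-positive-equivalent. All counts and the threshold are hypothetical.

```python
# Hedged sketch: net benefit at risk threshold t, and a rough "test tradeoff"
# as the reciprocal of the net-benefit gain from adding markers. All numbers
# are hypothetical; the authors' exact formulation may differ.
def net_benefit(tp, fp, n, t):
    """True positives minus threshold-weighted false positives, per person."""
    return tp / n - (fp / n) * (t / (1 - t))

n = 1000
t = 0.2  # hypothetical risk threshold (indifference point)

# Hypothetical classification counts for the basic and expanded models
nb_basic    = net_benefit(tp=60, fp=100, n=n, t=t)
nb_expanded = net_benefit(tp=75, fp=105, n=n, t=t)

gain = nb_expanded - nb_basic
tests_per_gain = 1.0 / gain  # data collections per true-positive-equivalent gained
print(round(nb_basic, 4), round(nb_expanded, 4), round(tests_per_gain, 1))
```

If collecting the extra marker data is cheap and non-invasive, a tests-per-gain figure of this size may be acceptable, which mirrors the reasoning in the abstract.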


Subjects
Cesarean Section, Decision Support Techniques, Risk Assessment/methods, Area Under Curve, Female, Humans, Kaplan-Meier Estimate, Pregnancy, ROC Curve
16.
Med Decis Making ; 44(1): 53-63, 2024 01.
Article in English | MEDLINE | ID: mdl-37990924

ABSTRACT

BACKGROUND: The test tradeoff curve helps investigators decide if collecting data for risk prediction is worthwhile when risk prediction is used for treatment decisions. At a given benefit-cost ratio (the number of false-positive predictions one would trade for a true positive prediction) or risk threshold (the probability of developing disease at indifference between treatment and no treatment), the test tradeoff is the minimum number of data collections per true positive to yield a positive maximum expected utility of risk prediction. For example, a test tradeoff of 3,000 invasive tests per true-positive prediction of cancer may suggest that risk prediction is not worthwhile. A test tradeoff curve plots test tradeoff versus benefit-cost ratio or risk threshold. The test tradeoff curve evaluates risk prediction at the optimal risk score cutpoint for treatment, which is the cutpoint of the risk score (the estimated risk of developing disease) that maximizes the expected utility of risk prediction when the receiver-operating characteristic (ROC) curve is concave. METHODS: Previous methods for estimating the test tradeoff required grouping risk scores. Using individual risk scores, the new method estimates a concave ROC curve by constructing a concave envelope of ROC points, taking a slope-based moving average, minimizing a sum of squared errors, and connecting successive ROC points with line segments. RESULTS: The estimated concave ROC curve yields an estimated test tradeoff curve. Analyses of 2 synthetic data sets illustrate the method. CONCLUSION: Estimating the test tradeoff curve based on individual risk scores is straightforward to implement and more appealing than previous estimation methods that required grouping risk scores. 
HIGHLIGHTS: The test tradeoff curve helps investigators decide if collecting data for risk prediction is worthwhile when risk prediction is used for treatment decisions. At a given benefit-cost ratio or risk threshold, the test tradeoff is the minimum number of data collections per true positive to yield a positive maximum expected utility of risk prediction. Unlike previous estimation methods that grouped risk scores, the method uses individual risk scores to estimate a concave ROC curve, which yields an estimated test tradeoff curve.
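The core idea, empirical ROC points from individual risk scores followed by an upper concave envelope, can be sketched as below. This is a rough illustration (a standard concave-hull scan), not the authors' exact slope-based moving-average and sum-of-squared-errors estimator:

```python
# Rough sketch of a concave ROC envelope from individual risk scores.
# Illustrative only; the article's estimator involves additional smoothing.
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC points (FPR, TPR), sweeping the threshold high to low."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tps = np.cumsum(labels)
    fps = np.cumsum(1 - labels)
    tpr = np.concatenate(([0.0], tps / labels.sum()))
    fpr = np.concatenate(([0.0], fps / (len(labels) - labels.sum())))
    return list(zip(fpr, tpr))

def concave_envelope(points):
    """Upper concave envelope of ROC points (monotone-chain style scan)."""
    hull = []
    for p in sorted(set(points)):
        # pop the last point while it does not lie above the chord to p
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            x3, y3 = p
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# toy demo: risk scores and binary outcomes
envelope = concave_envelope(roc_points([0.9, 0.8, 0.7, 0.6, 0.4], [1, 0, 1, 1, 0]))
print(envelope)  # successive segment slopes are non-increasing
```

Connecting successive envelope points with line segments gives a concave ROC curve whose segment slopes map to risk thresholds, which is what the test tradeoff curve is then computed from.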


Subjects
Risk Factors, Humans, ROC Curve
17.
Int J Biostat ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39069742

ABSTRACT

Chen and Heitjan (Sensitivity of estimands in clinical trials with imperfect compliance. Int J Biostat. 2023) used linear extrapolation to estimate the population average causal effect (PACE) from the complier average causal effect (CACE) in multiple randomized trials with all-or-none compliance. For extrapolating from CACE to PACE in this setting and in the paired availability design involving different availabilities of treatment among before-and-after studies, we recommend the sensitivity analysis in Baker and Lindeman (J Causal Inference, 2013) because it is not restricted to a linear model, as it involves various random effect and trend models.

18.
J Natl Cancer Inst ; 116(6): 795-799, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38419575

ABSTRACT

There is growing interest in multicancer detection tests, which identify molecular signals in the blood that indicate a potential preclinical cancer. A key stage in evaluating these tests is a prediagnostic performance study, in which investigators store specimens from asymptomatic individuals and later test stored specimens from patients with cancer and a random sample of controls to determine predictive performance. Performance metrics include rates of cancer-specific true-positive and false-positive findings and a cancer-specific positive predictive value, with the latter compared with a decision-analytic threshold. The sample size trade-off method, which trades imprecise targeting of the true-positive rate for precise targeting of a zero-false-positive rate, can substantially reduce sample size while increasing the lower bound of the positive predictive value. For a 1-year follow-up, with ovarian cancer as the rarest cancer considered, the sample size trade-off method yields a sample size of 163 000 compared with a sample size of 720 000, based on standard calculations. These design and analysis recommendations should be considered in planning a specimen repository and in the prediagnostic evaluation of multicancer detection tests.
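The positive predictive value compared against a decision-analytic threshold follows from Bayes' rule applied to the true-positive rate, false-positive rate, and prevalence. A minimal sketch with hypothetical numbers (not the article's estimates):

```python
# Hedged sketch: cancer-specific positive predictive value from hypothetical
# performance rates and prevalence. All numbers are made up for illustration.
def ppv(prevalence, tpr, fpr):
    """Positive predictive value via Bayes' rule."""
    return (prevalence * tpr) / (prevalence * tpr + (1 - prevalence) * fpr)

# Hypothetical rare cancer: 50 cases per 100,000 over 1 year of follow-up
print(round(ppv(prevalence=0.0005, tpr=0.60, fpr=0.005), 3))  # ≈ 0.057
```

Even a modest false-positive rate dominates the denominator at low prevalence, which is why driving the false-positive rate toward zero raises the lower bound of the PPV, the trade the abstract describes.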


Subjects
Early Detection of Cancer, Neoplasms, Humans, Neoplasms/diagnosis, Neoplasms/blood, Early Detection of Cancer/methods, Tumor Biomarkers/blood, Research Design, Sample Size, Predictive Value of Tests, Female, Ovarian Neoplasms/diagnosis, Ovarian Neoplasms/blood, False-Positive Reactions
19.
J Clin Epidemiol ; : 111508, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39222723

ABSTRACT

OBJECTIVES: The main purpose of using a surrogate endpoint is to estimate the treatment effect on the true endpoint sooner than would be possible by waiting for the true endpoint itself. Based on a meta-regression of historical randomized trials with surrogate and true endpoints, we discuss statistics for applying and evaluating surrogate endpoints. METHODS: We computed statistics from two types of linear meta-regressions for trial-level data: simple random effects and novel random effects with correlations among estimated treatment effects in trials with more than 2 arms. A key statistic is the estimated intercept of the meta-regression line. An intercept that is small or not statistically significant increases confidence when extrapolating to a new treatment because of consistency with a single causal pathway and invariance to labeling of treatments as controls. For a regulator applying the meta-regression to a new treatment, a useful statistic is the 95% prediction interval. For a clinical trialist planning a trial of a new treatment, useful statistics are the surrogate threshold effect proportion, the sample size multiplier adjusted for dropouts, and the novel true endpoint advantage. RESULTS: We illustrate these statistics with surrogate endpoint meta-regressions involving anti-hypertension treatment, breast cancer screening, and colorectal cancer treatment. CONCLUSION: Regulators and trialists should consider using these statistics when applying and evaluating surrogate endpoints.
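A trial-level meta-regression with a 95% prediction interval for a new trial can be sketched in simplified form. Ordinary least squares stands in here for the random effects models described in the abstract, and all trial-level effect estimates are hypothetical:

```python
# Simplified sketch: meta-regression of true-endpoint effects on surrogate
# effects with an approximate 95% prediction interval for a new trial.
# OLS stand-in for the random effects models; all numbers are fabricated.
import numpy as np

x = np.array([0.1, 0.2, 0.35, 0.5, 0.65, 0.8])      # surrogate-endpoint effects
y = np.array([0.05, 0.18, 0.30, 0.46, 0.55, 0.72])  # true-endpoint effects

n = len(x)
slope, intercept = np.polyfit(x, y, 1)  # a small intercept supports extrapolation
resid = y - (intercept + slope * x)
s2 = resid @ resid / (n - 2)            # residual variance

x_new = 0.4  # observed surrogate effect in the new trial
se_pred = np.sqrt(s2 * (1 + 1 / n + (x_new - x.mean()) ** 2 / ((x - x.mean()) ** 2).sum()))
t_crit = 2.776  # t quantile at 0.975 with df = n - 2 = 4
pred = intercept + slope * x_new
lo, hi = pred - t_crit * se_pred, pred + t_crit * se_pred
print(round(pred, 3), round(lo, 3), round(hi, 3))
```

A regulator would ask whether the whole prediction interval, not just the point prediction, lies in a clinically meaningful region.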
