Results 1 - 20 of 102
1.
Pharm Stat; 21(2): 361-371, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34626075

ABSTRACT

Single-arm, non-randomized, one- and two-stage phase II designs have been a mainstay in oncology trials for evaluating response rates or similar endpoints (i.e., tests about a single proportion). Their goal is to screen new therapies that have the potential to move into a randomized phase III trial or a subsequent randomized phase II trial, while maintaining a logistically feasible sample size. However, since the implementation of the Food and Drug Administration's Fast Track Designation, there has been a trend toward randomized phase II clinical trials as a source of stronger evidence for those seeking fast-track approvals. While there are many single- and multi-stage randomized designs for evaluating proportions in this phase II setting, limitations remain in terms of sample size (which directly impacts cost and study duration) or operating characteristics (e.g., maintained type I error). In this article, we propose a new test for comparing two binomial proportions, which is a modification of existing methods (the standard z-test and Jung's test). This approach is contrasted with existing methods via numerical evaluation and further illustrated using a real-world oncology trial. The proposed method demonstrates improvements in efficiency and robustness against deviations from design assumptions. When applied to the existing trial, significant savings with respect to cost and time are illustrated. Our proposed test for comparing binomial proportions provides an efficient and robust alternative in the randomized phase II oncology setting, especially when the control arm has a high response rate.
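For context, the standard (pooled) z-test named above as one of the existing methods can be sketched as follows; this is only the baseline comparator, not the proposed modification, and the function name and example counts are illustrative.

```python
import numpy as np
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled z-test of H0: p1 = p2 vs. H1: p1 > p2 (one-sided)."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled rate under H0
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se
    return z, norm.sf(z)                                  # statistic, one-sided p-value

# Hypothetical phase II counts: 14/30 responses (experimental) vs. 7/30 (control)
print(two_proportion_ztest(14, 30, 7, 30))
```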


Subjects
Clinical Trials, Phase II as Topic; Randomized Controlled Trials as Topic; Research Design; Clinical Trials, Phase III as Topic; Humans; Neoplasms; Sample Size
2.
Pharm Stat; 20(4): 696-709, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33599032

ABSTRACT

In this work, we developed a robust permutation test of the general hypothesis H0: ρc = ρc(0) for the concordance correlation coefficient (ρc). The proposed test is based on an appropriately studentized statistic and is proven to be asymptotically valid in the general setting where the two paired variables are uncorrelated but dependent. This desired property is demonstrated across a range of distributional assumptions and sample sizes in simulation studies, where the test exhibits robust type I error control in all settings tested, even when the sample size is small. We demonstrate the application of the test in two real-world examples involving cardiac output measurements and echocardiographic imaging.
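As a point of reference, Lin's concordance correlation coefficient and a plain permutation of one margin (valid only under full independence) can be sketched as below; the paper's contribution, the studentization of the statistic and the treatment of the general null H0: ρc = ρc(0), is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s_xy = np.cov(x, y, bias=True)[0, 1]
    return 2 * s_xy / (np.var(x) + np.var(y) + (x.mean() - y.mean()) ** 2)

def naive_perm_test(x, y, n_perm=10_000, seed=0):
    """Plain (non-studentized) permutation p-value for H0: rho_c = 0."""
    rng = np.random.default_rng(seed)
    observed = ccc(x, y)
    null_stats = np.array([ccc(x, rng.permutation(y)) for _ in range(n_perm)])
    return np.mean(np.abs(null_stats) >= abs(observed))
```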


Subjects
Biostatistics; Sample Size; Computer Simulation; Humans
3.
Comput Stat Data Anal; 138: 96-106, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31031458

ABSTRACT

The practice of employing empirical likelihood (EL) components in place of parametric likelihood functions in the construction of Bayesian-type procedures has been widely discussed in the modern statistical literature. We rigorously derive the EL prior, a Jeffreys-type prior that asymptotically maximizes the Shannon mutual information between the data and the parameters of interest. The focus of the proposed approach is on an integrated Kullback-Leibler distance between the EL-based posterior and prior density functions; the EL prior density is the density function for which the corresponding posterior is asymptotically negligibly different from the EL. The proposed result can be used to develop a methodology for reducing the asymptotic bias of solutions of general estimating equations and M-estimation schemes by removing the first-order term. This technique is developed in a manner similar to methods employed to reduce the asymptotic bias of maximum likelihood estimates by penalizing the underlying parametric likelihoods with their Jeffreys invariant priors. A real data example related to a study of myocardial infarction illustrates the attractiveness of the proposed technique in practice.

4.
Cancer Immunol Immunother; 65(11): 1339-1352, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27576783

ABSTRACT

Survivin is an anti-apoptotic protein that is highly expressed in many cancers, including malignant gliomas. Preclinical studies established that the conjugated survivin peptide mimic SurVaxM (SVN53-67/M57-KLH) could stimulate an anti-tumor immune response against murine glioma in vivo, as well as against human glioma cells ex vivo. The current clinical study was conducted to test the safety, immunogenicity, and clinical effects of the vaccine. Patients with recurrent malignant glioma whose tumors were survivin-positive, and who were positive for either the HLA-A*02 or HLA-A*03 MHC class I allele, received subcutaneous injections of SurVaxM (500 µg) in Montanide ISA 51 with sargramostim (100 µg) at 2-week intervals. SurVaxM was well tolerated, with mostly grade 1 adverse events (AEs) and no serious adverse events (SAEs) attributable to the study drug. Six patients experienced local injection-site reactions; three patients reported fatigue (grade 1 or 2), and two experienced myalgia (grade 1). Six of eight immunologically evaluable patients developed both cellular and humoral immune responses to the vaccine. The vaccine also stimulated HLA-A*02-, HLA-A*03-, and HLA-A*24-restricted T cell responses. Three patients maintained a partial clinical response or stable disease for more than 6 months. Median progression-free survival was 17.6 weeks and median overall survival was 86.6 weeks from study entry, with seven of nine patients surviving more than 12 months.


Subjects
Brain Neoplasms/therapy; Cancer Vaccines/immunology; Glioma/therapy; Immunotherapy, Active/methods; Inhibitor of Apoptosis Proteins/immunology; Peptides/immunology; T-Lymphocytes/immunology; Adult; Brain Neoplasms/immunology; Brain Neoplasms/mortality; Female; Glioma/immunology; Glioma/mortality; HLA-A2 Antigen/metabolism; HLA-A3 Antigen/metabolism; Humans; Humoral Immunity; Inhibitor of Apoptosis Proteins/genetics; Interferon-gamma/metabolism; Male; Middle Aged; Peptides/genetics; Recurrence; Survival Analysis; Survivin; Treatment Outcome; Vaccines, Subunit
5.
Stat Med; 35(13): 2251-82, 2016 Jun 15.
Article in English | MEDLINE | ID: mdl-26790540

ABSTRACT

The receiver operating characteristic (ROC) curve is a popular tool with applications such as assessing the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast to the AUC, the partial area under the ROC curve (pAUC) considers only the area corresponding to certain specificities (i.e., true negative rates), and it can often be more clinically relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with a plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy-to-implement method to obtain the variance of the nonparametric pAUC estimator. The proposed method is straightforward to apply both for a single biomarker and for the comparison of two correlated biomarkers, because it simply adapts the existing variance estimator for U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by comparing it broadly with previously existing methods, and we develop an empirical likelihood inference method based on the proposed variance estimator with a simple implementation. In an application, we demonstrate that inferences based on the AUC and the pAUC can lead to different decisions about the prognostic ability of the same set of biomarkers.
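A minimal sketch of the nonparametric pAUC estimator described above (a U-statistic with the empirical control quantile plugged in) might look as follows; the variable names and the handling of ties are assumptions, and the proposed variance estimator is not shown.

```python
import numpy as np

def pauc_estimate(controls, cases, t0):
    """Estimate pAUC over false-positive rates in (0, t0].

    Only control observations at or above the empirical (1 - t0) quantile of
    the controls contribute (the plug-in sample quantile step); ties between a
    case and a control are counted with weight 1/2.
    """
    x = np.asarray(controls, float)            # non-diseased marker values
    y = np.asarray(cases, float)               # diseased marker values
    q = np.quantile(x, 1 - t0)
    in_range = x >= q
    greater = (y[:, None] > x[None, :]) & in_range
    tied = (y[:, None] == x[None, :]) & in_range
    return (greater + 0.5 * tied).mean()
```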


Subjects
Area Under Curve; ROC Curve; Statistics, Nonparametric; Biological Variation, Population; Biomarkers/analysis; Data Interpretation, Statistical; Diagnosis; Humans; Models, Statistical
6.
Stat Med; 35(8): 1257-66, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-26526165

ABSTRACT

Simon's optimal two-stage design has been widely used in early-phase clinical trials with binary endpoints in oncology and AIDS research. With this approach, the second-stage sample size is fixed once the trial passes the first stage with sufficient activity. Adaptive designs, such as those of Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response observed in the first stage, and these designs often reduce the expected sample size under the null hypothesis compared with Simon's approach. An unappealing trait of the existing adaptive designs is that their second-stage sample size is not a non-increasing function of the first-stage response rate. In this paper, an efficient search procedure, the branch-and-bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity property is respected. The proposed optimal design is observed to have smaller expected sample sizes than Simon's optimal design, and its maximum total sample size is very close to that of Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early-phase therapeutic development.
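For orientation, the operating characteristics of a given two-stage design with stage-1 cutoff r1 out of n1 and overall cutoff r out of n (the quantities that Simon-type and adaptive searches optimize over) can be computed as in the sketch below; the design parameters in the example are hypothetical, and the branch-and-bound search itself is not reproduced.

```python
from scipy.stats import binom

def two_stage_oc(r1, n1, r, n, p):
    """Early-termination probability, expected sample size, and rejection
    probability for a Simon-type two-stage design at true response rate p."""
    pet = binom.cdf(r1, n1, p)                       # stop after stage 1 if X1 <= r1
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))    # need X1 > r1 and X1 + X2 > r
    expected_n = n1 + (1 - pet) * (n - n1)
    return pet, expected_n, reject

# Hypothetical design r1/n1 = 3/13, r/n = 12/43, evaluated at p = 0.20
print(two_stage_oc(3, 13, 12, 43, 0.20))
```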


Subjects
Clinical Trials, Phase II as Topic/statistics & numerical data; Algorithms; Biostatistics; Clinical Trials, Phase II as Topic/methods; Humans; Research Design; Sample Size
7.
J Biopharm Stat; 25(6): 1320-38, 2015.
Article in English | MEDLINE | ID: mdl-25671781

ABSTRACT

Bioequivalence trials are commonly conducted to assess therapeutic equivalence between a generic and an innovator (brand) formulation. In such trials, drug concentrations are obtained repeatedly over time and are summarized for each subject using a metric such as the area under the concentration-versus-time curve (AUC). The usual practice is then to conduct two one-sided tests using these areas to evaluate average bioequivalence. A major disadvantage of this approach is the loss of information incurred by ignoring the correlation structure between repeated measurements in the computation of the areas. In this article, we propose a general linear model approach that incorporates the within-subject covariance structure for making inferences on mean areas. The model-based method arises naturally from the reparameterization of the AUC as a linear combination of outcome means. We investigate and compare the inferential properties of the proposed method with the traditional two one-sided tests approach using Monte Carlo simulation studies, and we also examine its properties in the event of missing data. Simulations show that the proposed approach is a cost-effective, viable alternative to the traditional method with superior inferential properties; the advantages are particularly apparent in the presence of missing data. A real working example from an asthma study illustrates the approach.
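The reparameterization mentioned above, the trapezoidal AUC as a fixed linear combination of the time-point means, can be sketched as follows; the sampling times and covariance input are placeholders, and this is only the building block, not the full general-linear-model bioequivalence procedure.

```python
import numpy as np

def trapezoid_weights(times):
    """Weights w such that AUC = w @ mu, where mu holds the mean
    concentrations at each sampling time (trapezoidal rule)."""
    t = np.asarray(times, float)
    w = np.zeros_like(t)
    w[:-1] += np.diff(t) / 2.0
    w[1:] += np.diff(t) / 2.0
    return w

times = [0, 0.5, 1, 2, 4, 8, 12]   # placeholder sampling schedule (hours)
w = trapezoid_weights(times)
# If ybar is the vector of sample means and S the estimated within-subject
# covariance matrix from n subjects, then:
#   estimated AUC      = w @ ybar
#   Var(estimated AUC) = w @ S @ w / n
```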


Subjects
Therapeutic Equivalency; Algorithms; Analysis of Variance; Anti-Asthmatic Agents/therapeutic use; Area Under Curve; Asthma/drug therapy; Asthma/physiopathology; Child; Clinical Trials as Topic/statistics & numerical data; Computer Simulation; Confidence Intervals; Cost-Benefit Analysis; Cross-Over Studies; Humans; Linear Models; Models, Statistical; Monte Carlo Method
8.
Pharm Stat; 14(3): 252-61, 2015.
Article in English | MEDLINE | ID: mdl-25832442

ABSTRACT

Comparison of groups in longitudinal studies is often conducted using the area under the outcome-versus-time curve. However, outcomes may be subject to censoring due to a limit of detection, and specific methods that take informative missingness into account need to be applied. In this article, we present a unified model-based method that accounts for both the within-subject variability in the estimation of the area under the curve and the missingness mechanism in the event of censoring. Simulation results demonstrate that the proposed method has a significant advantage over traditionally implemented methods with regard to its inferential properties. A working example from an AIDS study demonstrates the applicability of the approach.


Subjects
Area Under Curve; Data Interpretation, Statistical; Likelihood Functions; Limit of Detection; Longitudinal Studies; Bias; Double-Blind Method; HIV Infections/drug therapy; HIV Protease Inhibitors/therapeutic use; Humans; Models, Statistical; Randomized Controlled Trials as Topic/methods; Treatment Outcome; Viral Load/drug effects
9.
Stata J; 14(2): 304-328, 2014.
Article in English | MEDLINE | ID: mdl-27445642

ABSTRACT

In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that implements a test for symmetry of data distributions and a comparison of K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following three methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method treats tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to construct the likelihood function; a nonparametric Bayesian method is then proposed to compute critical values of the exact tests.
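Method 1) above, the classical Monte Carlo evaluation of an exact-test p-value, amounts to the generic sketch below (in Python rather than Stata, with placeholder arguments); the tabulated-critical-value interpolation and the hybrid Bayesian-type combination are not shown.

```python
import numpy as np

def monte_carlo_pvalue(statistic, simulate_null, observed_stat, n_sim=10_000, seed=0):
    """Approximate P(T >= t_obs | H0) by simulating T under the null.

    `statistic` maps a sample to a scalar; `simulate_null` draws one sample
    from the null model given a generator. The +1 terms keep the estimate
    strictly positive (a standard Monte Carlo p-value correction).
    """
    rng = np.random.default_rng(seed)
    sims = np.array([statistic(simulate_null(rng)) for _ in range(n_sim)])
    return (1 + np.sum(sims >= observed_stat)) / (1 + n_sim)
```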

10.
Commun Stat Simul Comput; 53(2): 799-813, 2024.
Article in English | MEDLINE | ID: mdl-38523867

ABSTRACT

In this note, we introduce a new smooth nonparametric quantile function estimator based on a newly defined generalized expectile function, termed the sigmoidal quantile function estimator. We also introduce a hybrid quantile function estimator that combines the optimal properties of the classic kernel quantile function estimator with those of our new generalized sigmoidal quantile function estimator. The generalized sigmoidal quantile function can estimate quantiles beyond the range of the data, which is important for certain applications with small sample sizes. This extrapolation property is illustrated as a means of improving standard smoothed bootstrap resampling methods.

11.
Commun Stat Theory Methods; 53(6): 2141-2153, 2024.
Article in English | MEDLINE | ID: mdl-38646087

ABSTRACT

In this work, we show that the test of H0: ρs = 0 for Spearman's correlation coefficient found in most statistical software is theoretically incorrect and performs poorly when the bivariate normality assumption is not met or the sample size is small. There is a common misconception that tests of ρs = 0 are robust to deviations from bivariate normality; however, we found that under certain scenarios, violation of this assumption has severe effects on the type I error control of the common tests. To address this issue, we developed a robust permutation test of H0: ρs = 0 based on an appropriately studentized statistic. We show that the test is asymptotically valid in general settings, which is demonstrated by a comprehensive set of simulation studies in which the proposed test exhibits robust type I error control even when the sample size is small. We also demonstrate the application of the test in two real-world examples.
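A sketch of the general idea, studentizing the rank correlation before permuting, is given below; the specific variance estimate used here (a DiCiccio-Romano-style moment estimator applied to the ranks) is an assumption and may differ in detail from the paper's statistic.

```python
import numpy as np
from scipy.stats import rankdata

def studentized_spearman(x, y):
    """Spearman's rho (Pearson correlation of ranks), studentized by a
    moment-based estimate of its asymptotic variance."""
    rx = rankdata(x) - (len(x) + 1) / 2.0
    ry = rankdata(y) - (len(y) + 1) / 2.0
    n = len(rx)
    rho = (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))
    tau2 = np.mean(rx**2 * ry**2) / (np.mean(rx**2) * np.mean(ry**2))
    return np.sqrt(n) * rho / np.sqrt(tau2)

def spearman_perm_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation p-value based on the studentized statistic."""
    rng = np.random.default_rng(seed)
    observed = studentized_spearman(x, y)
    null_stats = np.array([studentized_spearman(x, rng.permutation(y))
                           for _ in range(n_perm)])
    return np.mean(np.abs(null_stats) >= abs(observed))
```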

12.
Am Stat; 78(1): 36-46, 2024.
Article in English | MEDLINE | ID: mdl-38464588

ABSTRACT

Data-driven most powerful tests are statistical hypothesis decision-making tools that deliver the greatest power, for a fixed null hypothesis, among all corresponding data-based tests of a given size. When the underlying data distributions are known, the likelihood ratio principle can be applied to construct most powerful tests. Reversing this notion, we consider the following questions. (a) Given a test statistic T, how can we transform T to improve the power of the test? (b) Can T be used to generate the most powerful test? (c) How does one compare test statistics with respect to the attribute of the desired most powerful decision-making procedure? To examine these questions, we propose a one-to-one mapping of the term "most powerful" to the distributional properties of a given test statistic via matching characterization. This form of characterization has practical applicability and aligns well with the general principle of sufficiency. Our findings indicate that, to improve a given test, we can employ relevant ancillary statistics whose distributions do not change under the tested hypotheses. As an example, the present method is illustrated by modifying the usual t-test under nonparametric settings. Numerical studies based on generated data and a real data set confirm that the proposed approach can be useful in practice.

13.
J Appl Stat; 51(3): 481-496, 2024.
Article in English | MEDLINE | ID: mdl-38370269

ABSTRACT

In this note, we evaluated the type I error control of the commonly used t-test, found in most statistical software packages, for testing H0: ρ = 0 vs. H1: ρ > 0 based on the sample weighted Pearson correlation coefficient. We found that the type I error rate is severely inflated in general, even under bivariate normality. To address this issue, we derived the large-sample variance of the weighted Pearson correlation and, based on this result, proposed an asymptotic test and a set of studentized permutation tests. A comprehensive set of simulation studies covering a range of sample sizes and a variety of underlying distributions was conducted. The studentized permutation test based on Fisher's Z statistic was shown to robustly control the type I error even in small-sample and non-normal settings. The method was demonstrated with example data on country-level preterm birth rates.
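The sample weighted Pearson correlation and Fisher's Z transform that the proposed tests are built on can be sketched as follows; the large-sample variance derivation and the studentized permutation scheme themselves are not reproduced, and the function names are illustrative.

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Sample weighted Pearson correlation with weights normalized to sum to 1."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.asarray(w, float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

def fisher_z(r):
    """Fisher's Z transform, the statistic the studentized permutation test uses."""
    return np.arctanh(r)
```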

14.
iScience; 27(6): 109995, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38868185

ABSTRACT

The canonical mechanism behind tamoxifen's therapeutic effect on estrogen receptor α/ESR1+ breast cancers is inhibition of ESR1-dependent estrogen signaling. Although ESR1+ tumors expressing wild-type p53 have been reported to be more responsive to tamoxifen (Tam) therapy, p53 has not been factored into the choice of this therapy, and the mechanism underlying the role of p53 in the Tam response remains unclear. In a window-of-opportunity trial of patients with newly diagnosed stage I-III ESR1+/HER2/wild-type p53 breast cancer who were randomized to arms with or without Tam prior to surgery, we show that the ESR1-p53 interaction in tumors was inhibited by Tam. This resulted in functional reactivation of p53, leading to transcriptional reprogramming that favors tumor-suppressive signaling, as well as downregulation of oncogenic pathways. These findings, which illustrate the convergence of ESR1 and p53 signaling during Tam therapy, enrich the mechanistic understanding of the impact of p53 on the response to Tam therapy.

15.
Cancer Causes Control; 24(9): 1675-85, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23737027

ABSTRACT

Breast tissues undergo extensive physiologic changes during pregnancy, which may affect breast carcinogenesis. Gestational hypertension, preeclampsia/eclampsia, gestational diabetes, pregnancy weight gain, and nausea and vomiting (N&V) during pregnancy may be indicative of altered hormonal and metabolic profiles and could impact breast cancer risk. Here, we examined associations between these characteristics of a woman's pregnancy and her subsequent breast cancer risk. Participants were parous women recruited to a population-based case-control study (Western New York Exposures and Breast Cancer Study). Cases (n = 960), aged 35-79 years, had incident, primary, histologically confirmed breast cancer. Controls (n = 1,852) were randomly selected from motor vehicle records (< 65 years) or Medicare rolls (≥ 65 years). Women were queried on their lifetime pregnancy experiences. Multivariable-adjusted logistic regression was used to estimate odds ratios (ORs) and 95% confidence intervals (CIs). N&V during pregnancy was inversely associated with breast cancer risk: relative to women who never experienced N&V, ever experiencing N&V was associated with decreased risk (OR 0.69, 95% CI 0.56-0.84), as were greater N&V severity (p trend < 0.001), longer duration (p trend < 0.01), and a larger proportion of affected pregnancies (p trend < 0.0001) among women with ≥ 3 pregnancies. Associations were stronger for more recent pregnancies (< 5 years). Findings did not differ by menopausal status or by breast cancer subtype, including estrogen receptor and HER2 expression status. The other pregnancy characteristics examined were not associated with risk. We observed strong inverse associations between pregnancy N&V and breast cancer risk; replication of these findings and exploration of underlying mechanisms could provide important insight into breast cancer etiology and prevention.


Subjects
Breast Neoplasms/epidemiology; Adult; Aged; Breast Neoplasms/pathology; Case-Control Studies; Female; Humans; Middle Aged; New York/epidemiology; Pregnancy; Pregnancy Complications/epidemiology; Risk Factors; Weight Gain
16.
J Biopharm Stat; 23(5): 1081-90, 2013.
Article in English | MEDLINE | ID: mdl-23957517

ABSTRACT

In areas such as oncology, two-stage designs are often preferred over one-stage designs because they allow the trial to be stopped early when there is evidence of insufficient efficacy, with the associated sample size savings. We present exact two-stage designs based on Barnard's exact test for differences in proportions and compare them to the designs proposed by Kepner (2010) and Jung (2010). In addition, we present tables of decision rules under a variety of assumed realities for use in trial planning. The procedure is recommended for use because of the substantial sample size savings it achieves.
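Barnard's exact test for a single 2x2 look at the data is available in SciPy, as sketched below with hypothetical counts; the paper's contribution, the exact two-stage decision rules built around this test, is not reproduced here.

```python
from scipy.stats import barnard_exact

# Hypothetical single-stage counts: rows = arms, columns = (responders, non-responders)
table = [[9, 11],    # experimental arm: 9/20 responses
         [3, 17]]    # control arm: 3/20 responses
result = barnard_exact(table)   # default two-sided test of equal proportions
print(result.statistic, result.pvalue)
```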


Subjects
Clinical Trials, Phase II as Topic/methods; Data Interpretation, Statistical; Models, Statistical; Randomized Controlled Trials as Topic/methods; Research Design/statistics & numerical data; Clinical Trials, Phase II as Topic/statistics & numerical data; Decision Making; Early Termination of Clinical Trials; Endpoint Determination; Medical Oncology; Randomized Controlled Trials as Topic/statistics & numerical data; Research Design/standards; Sample Size
17.
Comput Stat Data Anal; 57(1): 246-261, 2013 Jan.
Article in English | MEDLINE | ID: mdl-27030785

ABSTRACT

We propose a modified nonparametric Baumgartner-Weiß-Schindler test and investigate its use in testing for trends among K binomial populations. Exact conditional and unconditional approaches to p-value calculation are explored in conjunction with this statistic, as well as with a similar test statistic proposed by Neuhäuser (2006); the unconditional approaches considered include the maximization approach (Basu, 1977), the confidence interval approach (Berger and Boos, 1994), and the E + M approach (Lloyd, 2008). The procedures are compared with regard to actual type I error and power, and examples are provided. The conditional approach and the E + M approach performed well, with the E + M approach having an actual level much closer to the nominal level. These two approaches are generally more powerful than the other p-value calculation approaches in the scenarios considered. The power difference between the conditional and E + M approaches is often small in the balanced case; however, in the unbalanced case, power comparisons based on our proposed test statistic show that the E + M approach has higher power than the conditional approach.

18.
J Stat Plan Inference; 143(2): 334-345, 2013 Feb 01.
Article in English | MEDLINE | ID: mdl-23284224

ABSTRACT

The Wilcoxon rank-sum test and its variants are historically well-known to be very powerful nonparametric decision rules for testing no location difference between two groups given paired data versus a shift alternative. In this article, we propose a new alternative empirical likelihood (EL) ratio approach for testing the equality of marginal distributions given that sampling is from a continuous bivariate population. We show that in various shift alternative scenarios the proposed exact test is superior to the classic nonparametric procedures, which may break down completely or are frequently inferior to the density-based EL ratio test. This is particularly true in the cases where there is a non-constant shift under the alternative or the data distributions are skewed. An extensive Monte Carlo study shows that the proposed test has excellent operating characteristics. We apply the density-based EL ratio test to analyze real data from two medical studies.

19.
Comput Methods Programs Biomed; 240: 107725, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37481906

ABSTRACT

In this paper, we build upon the work of DiCiccio and Romano (2017) by extending their permutation test approach, based on the Pearson correlation coefficient in the continuous case, to ordinal measures of association. We investigate commonly used ordinal measures such as the Spearman correlation, Kendall's tau-b, and gamma, which are widely implemented in commercial and open-source software packages for exact testing routines based on generalized hypergeometric probabilities. Similar to DiCiccio and Romano's method, we apply studentization to correct the test statistic, which yields asymptotically valid inference for testing no ordinal association. We present a comprehensive theoretical framework for our approach, followed by a simulation study. Furthermore, we use toy examples to highlight the differences between the exact tests and the asymptotically valid tests. Our findings align with those of DiCiccio and Romano, indicating that exact permutation tests based on ordinal measures of association are often not exact, whereas the asymptotically correct tests perform well for moderate to large sample sizes.
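For contrast with the studentized approach, the naive permutation test of Kendall's tau-b that the paper cautions about can be sketched as below; the studentized correction itself is not reproduced, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def naive_kendall_perm_test(x, y, n_perm=5_000, seed=0):
    """Plain permutation test of Kendall's tau-b. This is exact only under full
    independence; the paper's point is that a studentized statistic (not shown
    here) is needed for asymptotically valid tests of no ordinal association."""
    rng = np.random.default_rng(seed)
    tau_obs, _ = kendalltau(x, y)
    null_taus = np.array([kendalltau(x, rng.permutation(y))[0]
                          for _ in range(n_perm)])
    return np.mean(np.abs(null_taus) >= abs(tau_obs))
```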


Subjects
Software; Computer Simulation; Probability; Sample Size
20.
Am Stat; 77(1): 35-40, 2023.
Article in English | MEDLINE | ID: mdl-37334071

ABSTRACT

In the paired data setting, the sign test is often described in statistical textbooks as a test for comparing the medians of two marginal distributions. There is an implicit assumption that the median of the differences is equivalent to the difference of the medians when the sign test is employed in this fashion. We demonstrate, however, that given asymmetry in the bivariate distribution of the paired data, there are often scenarios in which the median of the differences is not equal to the difference of the medians. Further, we show that these scenarios lead to a false interpretation of the sign test for its intended use in the paired data setting. We illustrate this false-interpretation concept via theory, a simulation study, and a real-world example based on breast cancer RNA sequencing data obtained from The Cancer Genome Atlas (TCGA).
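The distinction the abstract draws can be seen directly in a small simulation like the one below (an arbitrary skewed, dependent pair chosen for illustration): the median of the paired differences sits near zero while the difference of the marginal medians does not.

```python
import numpy as np

rng = np.random.default_rng(2023)

# A skewed, dependent pair: X is exponential and Y = X * U with U ~ Uniform(0, 2),
# so X - Y = X * (1 - U) is symmetric about 0 even though the margins are not.
x = rng.exponential(scale=1.0, size=200_000)
y = x * rng.uniform(0.0, 2.0, size=200_000)

median_of_differences = np.median(x - y)             # close to 0
difference_of_medians = np.median(x) - np.median(y)  # clearly nonzero
print(median_of_differences, difference_of_medians)
```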
