ABSTRACT
Background: Idiopathic pulmonary fibrosis (IPF) carries significant mortality and unpredictable progression, with limited therapeutic options. Designing trials with patient-meaningful endpoints, enhancing the reliability and interpretability of results, and streamlining the regulatory approval process are of critical importance to advancing clinical care in IPF. Methods: A landmark in-person symposium in June 2023 assembled 43 participants from the US and internationally, including patients with IPF, investigators, and regulatory representatives, to discuss the immediate future of IPF clinical trial endpoints. Patient advocates were central to the discussions, which evaluated endpoints according to regulatory standards and the FDA's 'feels, functions, survives' criteria. Results: Three themes emerged: 1) consensus on endpoints mirroring the lived experiences of patients with IPF; 2) consideration of replacing forced vital capacity (FVC) as the primary endpoint, potentially with composite endpoints that include 'feels, functions, survives' measures or FVC as components; 3) support for simplified, user-friendly patient-reported outcomes (PROs) as either components of primary composite endpoints or key secondary endpoints, supplemented by functional tests as secondary endpoints and novel biomarkers as supportive measures (FDA Guidance for Industry (Multiple Endpoints in Clinical Trials), available at: https://www.fda.gov/media/162416/download). Conclusions: This report, detailing the proceedings of this pivotal symposium, suggests a potential turning point in designing future IPF clinical trials more attuned to outcomes meaningful to patients, and documents the collective agreement across multidisciplinary stakeholders on the importance of anchoring IPF trial endpoints in real patient experiences: how they feel, function, and survive. There is considerable optimism that clinical care in IPF will progress through trials focused on patient-centric insights, ultimately guiding transformative treatment strategies to enhance patients' quality of life and survival.
Subjects
Idiopathic Pulmonary Fibrosis, Patient Advocacy, Humans, Idiopathic Pulmonary Fibrosis/drug therapy, National Institutes of Health (U.S.), Quality of Life, Reproducibility of Results, United States, Vital Capacity, Clinical Trials as Topic
ABSTRACT
Allocating patients to treatment arms during a trial based on the responses observed up to the decision point, and sequentially adapting this allocation, can minimize the expected number of failures or maximize the total benefit to patients. In this study, we developed a Bayesian response-adaptive randomization (RAR) design targeting the endpoint of organ support-free days (OSFD) for patients admitted to the intensive care unit. OSFD is a mixture of mortality and morbidity, assessed as the number of days free of organ support within a predetermined post-randomization time window. In the past, researchers have treated OSFD as an ordinal outcome variable in which the lowest category is death. We propose a novel RAR design for a composite endpoint of mortality and morbidity, such as OSFD, using a Bayesian mixture model with Markov chain Monte Carlo sampling to estimate the posterior probability distribution of OSFD and determine treatment allocation ratios at each interim analysis. Simulations were conducted to compare the performance of the proposed design under various randomization rules and different alpha spending functions. The results show that our RAR design using Bayesian inference allocated more patients to the better-performing arm(s) than other existing adaptive rules while maintaining adequate power and type I error control across a range of plausible clinical scenarios.
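As an editorial illustration of the response-adaptive allocation idea described above (not the authors' exact Bayesian mixture model or trial settings), the Python sketch below uses a Dirichlet-multinomial posterior over ordinal OSFD categories, a Monte Carlo estimate of the probability that each arm has the highest posterior mean OSFD, and allocation probabilities proportional to a tempered version of that probability. The category coding, counts, and tuning constants (gamma, floor) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ordinal OSFD categories: -1 encodes death, 0..28 are organ-support-free days.
categories = np.arange(-1, 29)

def posterior_prob_best(counts_by_arm, n_draws=4000):
    """Monte Carlo estimate of P(arm has the highest mean OSFD | data), using an
    independent Dirichlet(1, ..., 1) prior and multinomial counts for each arm."""
    n_arms = len(counts_by_arm)
    means = np.empty((n_draws, n_arms))
    for a, counts in enumerate(counts_by_arm):
        draws = rng.dirichlet(1.0 + counts, size=n_draws)  # posterior category probabilities
        means[:, a] = draws @ categories                   # posterior mean OSFD per draw
    best = means.argmax(axis=1)
    return np.bincount(best, minlength=n_arms) / n_draws

def rar_allocation(counts_by_arm, gamma=0.5, floor=0.1):
    """Allocation probabilities proportional to P(best)^gamma, floored to keep every arm open."""
    p_best = posterior_prob_best(counts_by_arm)
    w = np.maximum(p_best ** gamma, floor)
    return w / w.sum()

# Illustrative interim data: category counts per arm (death plus 29 day-count categories).
p0 = np.full(30, 1 / 30)
p1 = np.linspace(1, 2, 30); p1 = p1 / p1.sum()             # slightly better OSFD profile
arm0, arm1 = rng.multinomial(60, p0), rng.multinomial(60, p1)
print(rar_allocation([arm0, arm1]))                        # tilts allocation toward arm 1
```

In a real design, the posterior model, allocation rule, and interim schedule would follow the paper's specification and be calibrated by simulation for power and type I error control.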
Subjects
Research Design, Humans, Random Allocation, Bayes Theorem, Probability, Morbidity
ABSTRACT
BACKGROUND: Sleeping late has become a common phenomenon and has harmful effects on health. The purpose of this study was to investigate the association between sleep timing and major adverse cardiovascular events (MACEs) in patients undergoing percutaneous coronary intervention (PCI). METHODS: Sleep onset time, obtained via a sleep factors questionnaire in 426 inpatients, was categorized as before 22:00, 22:00 to 22:59, 23:00 to 23:59, and 24:00 or later. The median follow-up time was 35 months. The endpoints included angina pectoris (AP), new myocardial infarction (MI) or unplanned repeat revascularization, hospitalization for heart failure, cardiac death, nonfatal stroke, all-cause death, and the composite endpoint of all the events mentioned above. Cox proportional hazards regression was applied to analyze the relationship between sleep timing and endpoint events. RESULTS: A total of 64 composite endpoint events (CEEs) were reported, including 36 AP, 15 new MI or unplanned repeat revascularization, 6 hospitalizations for heart failure, 2 nonfatal strokes, and 5 all-cause deaths. Compared with a bedtime of 22:00-22:59, there was a higher incidence of AP in the bedtime ≥ 24:00 group (adjusted HR: 5.089; 95% CI: 1.278-20.260; P = 0.021). In addition, bedtime ≥ 24:00 was also associated with an increased risk of CEEs in univariate Cox regression (unadjusted HR: 2.893; 95% CI: 1.452-5.767; P = 0.003). After multivariable adjustment, bedtime ≥ 24:00 remained associated with an increased risk of CEEs (adjusted HR: 3.156; 95% CI: 1.164-8.557; P = 0.024). CONCLUSION: Late sleeping increased the risk of MACEs and indicated a poor prognosis. It is imperative to instruct patients undergoing PCI to adopt earlier bedtime habits.
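To make the modelling step concrete, the sketch below fits a Cox proportional hazards model with dummy-coded bedtime categories (reference 22:00-22:59) using the lifelines package on synthetic data; the sample size, event rates, and hazard ratios are invented for illustration and do not reproduce the study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 426

# Synthetic follow-up data: bedtime group, time to composite event (months), event indicator.
groups = ["<22:00", "22:00-22:59", "23:00-23:59", ">=24:00"]
bedtime = rng.choice(groups, size=n, p=[0.15, 0.35, 0.30, 0.20])
assumed_hr = {"<22:00": 1.2, "22:00-22:59": 1.0, "23:00-23:59": 1.5, ">=24:00": 3.0}
rate = 0.005 * np.array([assumed_hr[g] for g in bedtime])   # monthly event rate
event_time = rng.exponential(1 / rate)
censor_time = rng.uniform(12, 48, size=n)                   # administrative censoring

df = pd.DataFrame({
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "bedtime": pd.Categorical(bedtime, categories=groups),
})

# Dummy-code bedtime with 22:00-22:59 as the reference category, then fit the Cox model.
X = pd.get_dummies(df["bedtime"], prefix="bed").drop(columns="bed_22:00-22:59")
cox_df = pd.concat([df[["time", "event"]], X.astype(float)], axis=1)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```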
Subjects
Percutaneous Coronary Intervention, Sleep, Humans, Male, Percutaneous Coronary Intervention/adverse effects, Female, Middle Aged, Aged, Time Factors, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/mortality, Risk Factors, Proportional Hazards Models, Follow-Up Studies, Surveys and Questionnaires
ABSTRACT
For randomized clinical trials in which a single, primary, binary endpoint would require an unfeasibly large sample size, composite endpoints (CEs) are widely chosen as the primary endpoint. Despite being commonly used, CEs entail challenges in design and in the interpretation of results. Because the components may be of different clinical relevance and have different effect sizes, they must be chosen carefully. In particular, sample size calculations for composite binary endpoints depend not only on the anticipated effect sizes and event probabilities of the composite components but also on the correlation between them. However, information on the correlation between endpoints is usually not reported in the literature, which can be an obstacle to designing sound future trials. We consider two-arm randomized controlled trials with a primary composite binary endpoint and an endpoint consisting only of the clinically more important component of the CE. We propose a trial design that allows an adaptive modification of the primary endpoint based on blinded information obtained at an interim analysis. Specifically, we consider a decision rule that selects between the CE and its most relevant component as the primary endpoint, choosing the endpoint with the lower estimated required sample size. Additionally, the sample size is reassessed using the estimated event probabilities and correlation, and the expected effect sizes of the composite components. We investigate the statistical power and significance level under the proposed design through simulations. We show that the adaptive design is as powerful as, or more powerful than, designs without adaptive modification of the primary endpoint. Moreover, the target power is achieved even if the correlation is misspecified at the planning stage, while the type I error is maintained. All computations are implemented in R and illustrated by means of a peritoneal dialysis trial.
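A rough sketch of the kind of decision rule described above, under simplifying assumptions: a standard two-proportion sample size formula, a composite event probability derived from the component probabilities and their correlation, and selection of whichever endpoint (the most relevant component or the composite) needs the smaller estimated sample size. The function names and planning values are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(p_ctrl, p_trt, alpha=0.05, power=0.80):
    """Required sample size per arm for a two-sided two-proportion comparison."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_ctrl + p_trt) / 2
    num = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * np.sqrt(p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt))) ** 2
    return int(np.ceil(num / (p_ctrl - p_trt) ** 2))

def composite_prob(p1, p2, rho):
    """P(component 1 or component 2) for two binary components with correlation rho."""
    p12 = rho * np.sqrt(p1 * (1 - p1) * p2 * (1 - p2)) + p1 * p2   # joint probability
    return p1 + p2 - p12

def select_endpoint(p1_ctrl, p1_trt, p2_ctrl, p2_trt, rho_hat):
    """Blinded-interim style rule: keep the endpoint with the smaller estimated sample size."""
    n_component = n_per_arm(p1_ctrl, p1_trt)
    n_composite = n_per_arm(composite_prob(p1_ctrl, p2_ctrl, rho_hat),
                            composite_prob(p1_trt, p2_trt, rho_hat))
    choice = "component" if n_component <= n_composite else "composite"
    return choice, {"component": n_component, "composite": n_composite}

# Illustrative planning values: component 1 is the clinically more important event.
print(select_endpoint(p1_ctrl=0.25, p1_trt=0.18, p2_ctrl=0.20, p2_trt=0.16, rho_hat=0.3))
```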
ABSTRACT
AIM: To assess composite endpoints combining glycaemic control (HbA1c < 7.0%, ≤ 6.5% or < 5.7%) with weight loss (≥ 5%, ≥ 10% or ≥ 15%) and without hypoglycaemia with tirzepatide in type 2 diabetes (T2D). MATERIALS AND METHODS: Data from the phase 3 SURPASS programme were evaluated post hoc by trial. Participants with T2D were randomized to tirzepatide (5, 10 and 15 mg), placebo (SURPASS-1,5), semaglutide 1 mg (SURPASS-2) or titrated basal insulin (SURPASS-3,4). The proportions of participants achieving the composite endpoints were compared between tirzepatide and the respective comparator groups at week 40/52. RESULTS: The proportions of participants achieving an HbA1c value of less than 7.0% with 5% or more weight loss and without hypoglycaemia ranged from 43% to 82% with tirzepatide across the SURPASS-1 to -5 trials versus 4%-5% with placebo, 51% with semaglutide 1 mg and 5% with basal insulin (P < .001 vs. all comparators). The proportions of participants achieving an HbA1c value of less than 7.0% with 10% or more, or 15% or more weight loss and without hypoglycaemia were significantly higher with all tirzepatide doses versus comparators across trials (P < .001 or P < .05). Similar results were observed for all other combinations of endpoints with an HbA1c value of 6.5% or less, or less than 5.7%, with more tirzepatide-treated participants achieving these endpoints versus those in the comparator groups, including semaglutide. CONCLUSIONS: Across the SURPASS-1 to -5 clinical trials, more tirzepatide-treated participants with T2D achieved clinically meaningful composite endpoints, which included reaching glycaemic targets with various degrees of weight loss and without hypoglycaemia, than those in the comparator groups.
Subjects
Type 2 Diabetes Mellitus, Hypoglycemia, Insulins, Humans, Type 2 Diabetes Mellitus/drug therapy, Hypoglycemic Agents/therapeutic use, Glucagon-Like Peptide-1 Receptor/agonists, Glycated Hemoglobin, Weight Loss, Hypoglycemia/drug therapy, Gastric Inhibitory Polypeptide/therapeutic use, Glucose/therapeutic use
ABSTRACT
BACKGROUND: Composite time-to-event endpoints are useful for assessing related outcomes jointly in clinical trials, but the components of the endpoint may have different censoring mechanisms. For example, in the PRagmatic EValuation of evENTs And Benefits of Lipid-lowering in oldEr adults (PREVENTABLE) trial, the composite outcome contains one endpoint that is right censored (all-cause mortality) and two endpoints that are interval censored (dementia and persistent disability). Although Cox regression is an established method for time-to-event outcomes, it is unclear how such models perform under differing component-wise censoring schemes in large clinical trial data. The goal of this article is to conduct a simulation study to investigate the performance of Cox models under different scenarios for composite endpoints with component-wise censoring. METHODS: We simulated data by varying the strength and direction of the association between treatment and outcome for the two component types, the proportion of events arising from each component of the outcome (right censored and interval censored), and the method for including the interval-censored component in the Cox model (upper value or midpoint of the interval). Under these scenarios, we compared treatment effect estimate bias, confidence interval coverage, and power. RESULTS: Based on the simulation study, Cox models generally have adequate power to achieve statistical significance when comparing treatments for composite outcomes with component-wise censoring. We did not observe substantive bias for scenarios under the null hypothesis or when the treatment has a similar relative effect on each component outcome. Performance was similar regardless of whether the upper value or the midpoint of the interval-censored part of the composite outcome was used. CONCLUSION: Cox regression is a suitable method for the analysis of clinical trial data with composite time-to-event endpoints subject to different component-wise censoring mechanisms.
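The sketch below mimics, in simplified form, the type of simulation described: a right-censored fatal component observed exactly, an interval-censored non-fatal component detected only at scheduled visits, and a composite time-to-first-event analyzed with a Cox model using either the midpoint or the upper value of the detection interval. Visit spacing, rates, and effect sizes are illustrative assumptions, and the lifelines package is assumed to be available.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n, visit_gap, tau = 2000, 1.0, 5.0                 # subjects, yearly visits, 5-year follow-up

trt = rng.integers(0, 2, n)
t_death = rng.exponential(1 / (0.05 * np.exp(-0.3 * trt)))      # right-censored component
t_nonfatal = rng.exponential(1 / (0.10 * np.exp(-0.3 * trt)))   # interval-censored component

def composite_data(interval_rule):
    """Composite time-to-first-event using 'midpoint' or 'upper' for the non-fatal interval."""
    visits = np.arange(visit_gap, tau + visit_gap, visit_gap)
    rows = []
    for i in range(n):
        last_seen = min(t_death[i], tau)
        detected = visits[(visits >= t_nonfatal[i]) & (visits <= last_seen)]
        if detected.size:                          # non-fatal event first detected at a visit
            upper = detected[0]
            t = upper - visit_gap / 2 if interval_rule == "midpoint" else upper
            rows.append((t, 1, trt[i]))
        elif t_death[i] <= tau:                    # death observed exactly
            rows.append((t_death[i], 1, trt[i]))
        else:                                      # administratively censored
            rows.append((tau, 0, trt[i]))
    return pd.DataFrame(rows, columns=["time", "event", "trt"])

for rule in ("midpoint", "upper"):
    cph = CoxPHFitter().fit(composite_data(rule), duration_col="time", event_col="event")
    print(rule, "estimated composite HR:", round(float(cph.hazard_ratios_["trt"]), 3))
```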
Subjects
Statistical Models, Humans, Aged, Randomized Controlled Trials as Topic, Proportional Hazards Models, Computer Simulation
ABSTRACT
BACKGROUND: The pathogenesis of atherosclerotic cardiovascular disease is associated with insulin resistance (IR), which serves as a metabolic risk factor. As a novel indicator of IR, the triglyceride-glucose (TyG) index may predict cardiovascular disease outcomes. METHODS: In the current study, a cohort of 157 individuals with de novo coronary lesions who underwent drug-coated balloon (DCB) angioplasty between January 2017 and May 2021 was included. The midterm follow-up outcome was the occurrence of a vessel-oriented composite endpoint (VOCE). Patients were divided into three groups by tertiles of the baseline TyG index, and clinical characteristics and procedural parameters were compared among the groups. A multivariate Cox regression model was built to investigate potential predictors. RESULTS: A higher TyG index indicated an increased risk of VOCE in the adjusted model (HR = 4.0, 95% CI: 1.0-15.4, P = 0.047). The smoothed curve revealed a non-linear association between the TyG index and VOCE. Based on the Kaplan-Meier curves, individuals in the highest TyG index group were more likely to develop a VOCE (log-rank P < 0.05). CONCLUSIONS: The incidence of VOCE was independently and positively associated with an elevated TyG index in individuals with de novo coronary lesions who underwent DCB angioplasty.
Subjects
Balloon Angioplasty, Coronary Artery Disease, Insulin Resistance, Humans, Glucose, Triglycerides, Treatment Outcome, Risk Factors, Blood Glucose/metabolism, Retrospective Studies
ABSTRACT
OBJECTIVE: To explore the predictive value of the proportion of glomerulosclerosis (GS) for the progression of membranous nephropathy with non-nephrotic proteinuria (NNP). METHODS: This was a single-center, retrospective cohort study. Patients with biopsy-proven idiopathic membranous nephropathy were divided into three groups based on the proportion of glomerulosclerosis, and their demographic, clinical, and pathological data were compared. The proportions of primary and secondary endpoints were recorded, and the relationship between GS and the primary outcomes (progression to nephrotic syndrome, complete remission, and persistent NNP) and the renal composite endpoint was analyzed. RESULTS: A total of 112 patients were divided into three groups according to the proportion of glomerulosclerosis. The median follow-up time was 26.5 (13-51) months. There were significant differences among the groups in blood pressure (p < 0.01), renal interstitial lesions (p < 0.0001), and primary endpoints (p = 0.005). Survival analysis showed that prognosis was significantly worse in patients with a high proportion of GS than in those with a middle or low proportion of GS (p < 0.001). Cox multivariate analysis showed that, after adjusting for age, sex, blood pressure, 24-h urinary protein, serum creatinine, treatment scheme, and pathological factors, the risk of the renal composite outcome in the low-proportion group was 0.076 times that in the high-proportion group (p = 0.009, HR = 0.076, 95% CI: 0.011-0.532). CONCLUSION: A high proportion of glomerulosclerosis was an independent risk factor for poor prognosis in patients with membranous nephropathy with non-nephrotic proteinuria.
Subjects
Membranous Glomerulonephritis, Kidney Diseases, Nephrotic Syndrome, Proteinuria, Humans, Cohort Studies, Membranous Glomerulonephritis/complications, Membranous Glomerulonephritis/epidemiology, Membranous Glomerulonephritis/drug therapy, Kidney Diseases/complications, Nephrotic Syndrome/complications, Prognosis, Proteinuria/etiology, Proteinuria/complications, Retrospective Studies, Risk Factors
ABSTRACT
BACKGROUND: Prostate-specific antigen (PSA) response rates are commonly used outcome measures in metastatic castration-resistant prostate cancer (mCRPC) trial reports. PSA response is evaluated by comparing continuous PSA data (e.g., change from baseline) to a threshold (e.g., a 50% reduction); consequently, information in the continuous data is discarded. Recent papers have proposed an augmented approach that retains the conventional response rate but employs the continuous data to improve the precision of estimation. The aim of this study was to determine how much this augmented analysis approach could improve the efficiency of PSA response analyses in practice. METHODS: A literature review identified published prostate cancer trials that included a waterfall plot of continuous PSA data. These continuous data were extracted to enable the conventional and augmented approaches to be compared. RESULTS: Sixty-four articles, reporting results for 78 mCRPC treatment arms, were re-analysed. The median efficiency gain from using the augmented analysis, in terms of the implied increase to the sample size of the original study, was 103.2% (IQR 89.8%-190.9%). CONCLUSIONS: Augmented PSA response analysis requires no additional data to be collected and can be performed easily using available software. It improves the precision of estimation to a degree that is equivalent to a substantial sample size increase. The implication of this work is that prostate cancer trials using PSA response as a primary endpoint could be delivered with fewer participants and, therefore, more rapidly and at reduced cost.
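As a loose illustration of the augmented idea, and not the exact published estimator, the sketch below reads the response probability off a normal model fitted to the continuous change-from-baseline data instead of using the raw dichotomized proportion, and compares delta-method precision against the binomial estimate; the data, the normal working model, and the -50% threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Synthetic waterfall-plot data: percentage change in PSA from baseline for one arm.
pct_change = rng.normal(loc=-40, scale=35, size=60)
threshold = -50.0                                  # >=50% reduction defines a PSA response

# Conventional estimate: dichotomize, then use the binomial proportion.
resp = (pct_change <= threshold).astype(float)
p_bin = resp.mean()
se_bin = np.sqrt(p_bin * (1 - p_bin) / resp.size)

# Augmented-style estimate: fit a normal model to the continuous data and read off
# the probability of crossing the response threshold from the fitted distribution.
mu_hat, sd_hat = pct_change.mean(), pct_change.std(ddof=1)
p_aug = norm.cdf(threshold, loc=mu_hat, scale=sd_hat)

# Delta-method standard error of Phi((c - mu)/sigma) with respect to (mu, sigma).
z = (threshold - mu_hat) / sd_hat
grad = np.array([-norm.pdf(z) / sd_hat, -z * norm.pdf(z) / sd_hat])
cov = np.diag([sd_hat ** 2 / resp.size, sd_hat ** 2 / (2 * resp.size)])
se_aug = float(np.sqrt(grad @ cov @ grad))

print(f"binomial: {p_bin:.3f} (SE {se_bin:.3f})   augmented: {p_aug:.3f} (SE {se_aug:.3f})")
print("implied sample size gain (squared SE ratio):", round((se_bin / se_aug) ** 2, 2))
```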
Subjects
Drug Monitoring/methods, Castration-Resistant Prostatic Neoplasms/drug therapy, Clinical Trials as Topic, Humans, Male, Prostate-Specific Antigen/drug effects, Castration-Resistant Prostatic Neoplasms/immunology, Treatment Outcome
ABSTRACT
In disease settings where study participants are at risk for death and a serious nonfatal event, composite endpoints defined as the time until the earliest of death or the nonfatal event are often used as the primary endpoint in clinical trials. In practice, if the nonfatal event can only be detected at clinic visits and the death time is known exactly, the resulting composite endpoint exhibits "component-wise censoring." The standard method used to estimate event-free survival in this setting fails to account for component-wise censoring. We apply a kernel smoothing method previously proposed for a marker process in a novel way to produce a nonparametric estimator for event-free survival that accounts for component-wise censoring. The key insight that allows us to apply this kernel method is thinking of nonfatal event status as an intermittently observed binary time-dependent variable rather than thinking of time to the nonfatal event as interval-censored. We also propose estimators for the probability in state and restricted mean time in state for reversible or irreversible illness-death models, under component-wise censoring, and derive their large-sample properties. We perform a simulation study to compare our method to existing multistate survival methods and apply the methods on data from a large randomized trial studying a multifactor intervention for reducing morbidity and mortality among men at above average risk of coronary heart disease.
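The key insight above, treating non-fatal event status as an intermittently observed binary time-dependent variable, can be pictured with a generic Nadaraya-Watson kernel smoother over visit-level observations. This is only a toy version of one ingredient of the estimator (the full method also uses the exactly observed death times to obtain event-free survival), with an invented visit schedule, onset distribution, and bandwidth.

```python
import numpy as np

rng = np.random.default_rng(11)

# Intermittently observed binary status: at each clinic visit we record whether the
# non-fatal event has occurred by that visit (1) or not yet (0).
n = 300
visits = np.arange(0.5, 5.5, 0.5)                       # scheduled visit times (years)
onset = rng.exponential(4.0, size=n)                    # true (unobserved) onset times
obs_times = np.tile(visits, n)
obs_status = (obs_times >= np.repeat(onset, visits.size)).astype(float)

def nw_prevalence(t, times, status, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of P(status = 1 at time t) from visit data."""
    w = np.exp(-0.5 * ((times - t) / bandwidth) ** 2)   # Gaussian kernel weights
    return np.sum(w * status) / np.sum(w)

grid = np.linspace(0.5, 5.0, 10)
truth = 1 - np.exp(-grid / 4.0)                         # true onset CDF, for comparison
for t, tr in zip(grid, truth):
    est = nw_prevalence(t, obs_times, obs_status)
    print(f"t = {t:.1f}   kernel estimate = {est:.3f}   truth = {tr:.3f}")
```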
Subjects
Proportional Hazards Models, Computer Simulation, Humans, Male, Probability, Survival Analysis
ABSTRACT
Composite endpoints are very common in clinical research; for example, recurrence-free survival in oncology is defined as the earliest of either death or disease recurrence. Because of the way data are collected in such studies, component-wise censoring is common, where, for example, recurrence is an interval-censored event and death is a right-censored event. However, a common way to analyze such component-wise censored composite endpoints is to treat them as right-censored, with the date at which the non-fatal event was detected serving as the date the event occurred. This approach is known to introduce upward bias when the Kaplan-Meier estimator is applied, but it has a more complex impact on semi-parametric regression approaches. In this article, we compare the performance of the Cox model estimators for right-censored data and the Cox model estimators for interval-censored data in the context of component-wise censored data where the visit process differs across levels of a covariate of interest, a common scenario in observational data. We additionally examine estimators of the cause-specific hazard when applied to the individual components of such component-wise censored composite endpoints. We found that when visit schedules differed according to levels of a covariate of interest, the Cox model estimators for right-censored data and the estimators for cause-specific hazards were increasingly biased as the frequency of visits decreased. The Cox model estimator for interval-censored data, with censoring at the last disease-free date, is recommended for use in the presence of differential visit schedules.
Subjects
Proportional Hazards Models, Bias, Computer Simulation, Humans, Survival Analysis
ABSTRACT
The Finkelstein and Schoenfeld (FS) test is a popular generalized pairwise comparison approach for analyzing prioritized composite endpoints (i.e., components assessed in order of clinical importance). Power and sample size estimation for the FS test, however, is generally done via simulation studies. This simulation approach can be extremely computationally burdensome, and the burden is compounded by an increasing number of components of the composite endpoint and by increasing sample size. Here we propose an analytical solution to calculate power and sample size for commonly encountered two-component hierarchical composite endpoints. The power formulas are derived assuming underlying population-level distributions for each component outcome, providing a computationally efficient and practical alternative to the standard simulation approach. Monte Carlo simulation results demonstrate that the performance of the proposed power formulas is consistent with that of the simulation approach, with generally desirable properties, including robustness to misspecified distributional assumptions. We demonstrate the application of the proposed formulas by calculating power and sample size for the Transthyretin Amyloidosis Cardiomyopathy Clinical Trial.
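For context, the sketch below implements the simulation-based approach that the proposed closed-form formulas are intended to replace: a vectorized Finkelstein-Schoenfeld-style pairwise comparison for a two-component hierarchy (survival first, then a continuous outcome), with power estimated by repeated simulation and a permutation test. The outcome distributions, effect sizes, and test construction are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(2024)

def pairwise_score(time_a, dead_a, y_a, time_b, dead_b, y_b):
    """Hierarchical pairwise comparison: survival first, then a continuous outcome
    (higher is better). Returns wins minus losses for group A over all A-B pairs."""
    Ta, Tb = time_a[:, None], time_b[None, :]
    Da, Db = dead_a[:, None], dead_b[None, :]
    win_surv = Db & (Tb < Ta)             # B known to have died earlier, so A wins
    lose_surv = Da & (Ta < Tb)            # A known to have died earlier, so A loses
    undecided = ~(win_surv | lose_surv)   # survival comparison tied or indeterminate
    Ya, Yb = y_a[:, None], y_b[None, :]
    wins = win_surv | (undecided & (Ya > Yb))
    losses = lose_surv | (undecided & (Ya < Yb))
    return int(wins.sum()) - int(losses.sum())

def simulate_arm(n, death_rate, y_shift, followup=24.0):
    t = rng.exponential(1 / death_rate, n)
    return np.minimum(t, followup), t < followup, rng.normal(y_shift, 1.0, n)

def power_by_simulation(n_per_arm=50, n_sims=200, n_perm=300, alpha=0.05):
    """Monte Carlo power of a permutation test on the pairwise score; this is the kind
    of nested simulation the analytical power formulas are meant to avoid."""
    rejections = 0
    for _ in range(n_sims):
        ta, da, ya = simulate_arm(n_per_arm, death_rate=0.02, y_shift=0.4)
        tb, db, yb = simulate_arm(n_per_arm, death_rate=0.03, y_shift=0.0)
        obs = pairwise_score(ta, da, ya, tb, db, yb)
        time = np.concatenate([ta, tb])
        dead = np.concatenate([da, db])
        y = np.concatenate([ya, yb])
        null = np.empty(n_perm)
        for b in range(n_perm):
            idx = rng.permutation(2 * n_per_arm)
            g1, g2 = idx[:n_per_arm], idx[n_per_arm:]
            null[b] = pairwise_score(time[g1], dead[g1], y[g1], time[g2], dead[g2], y[g2])
        rejections += np.mean(np.abs(null) >= abs(obs)) < alpha
    return rejections / n_sims

print("simulated power:", power_by_simulation())
```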
Subjects
Endpoint Determination, Computer Simulation, Endpoint Determination/methods, Humans, Monte Carlo Method, Sample Size
ABSTRACT
OBJECTIVE: The present study aimed to determine the factors related to relief from rest pain, wound healing, major adverse limb events (MALEs), and prognosis after infrainguinal revascularisation in patients with chronic limb threatening ischaemia (CLTI). METHODS: The data of patients who underwent infrainguinal revascularisation for CLTI between 2010 and 2020 were analysed retrospectively. The endpoint was the composite of relief from rest pain, wound healing, MALE, or death. RESULTS: A total of 234 limbs in 187 patients with CLTI were analysed. Of the 234 limbs, 149 (63.7%) underwent bypass surgery and 85 (36.3%) underwent endovascular therapy (EVT). The event free survival rates with respect to the composite endpoint at two years were 30.4% in the EVT group and 48.5% in the bypass group (p = .005). In the indeterminate subgroup, the two year event free survival rates were 56.7% with bypass surgery and 29.5% with EVT (p = .051). Multivariable analysis revealed that age (hazard ratio [HR] 1.03; 95% confidence interval [CI] 1.01 - 1.05; p < .001), coronary artery disease (CAD) (HR 1.45; 95% CI 1.01 - 2.07; p = .042), haemodialysis (HR 1.74; 95% CI 1.22 - 2.48; p = .002), Wound, Ischaemia and foot Infection stage (HR 1.34; 95% CI 1.07 - 1.68; p = .012), Global Limb Anatomical Staging System stage (HR 1.31; 95% CI 1.01 - 1.72; p = .043), EVT (HR 1.90; 95% CI 1.31 - 2.74; p < .001), Geriatric Nutritional Risk Index (HR 0.98; 95% CI 0.97 - 0.99; p = .021), and non-ambulatory status (HR 1.89; 95% CI 1.31 - 2.74; p < .001) were risk factors for the composite endpoint. CONCLUSION: Bypass surgery was superior to EVT with respect to the composite endpoint of relief from rest pain, wound healing, MALE, or death. Bypass surgery may be considered the treatment of choice, instead of EVT, for patients in the indeterminate group according to the Global Vascular Guidelines preferred revascularisation method.
Subjects
Endovascular Procedures, Peripheral Arterial Disease, Aged, Surgical Amputation, Chronic Limb-Threatening Ischemia, Endovascular Procedures/adverse effects, Humans, Ischemia/etiology, Ischemia/surgery, Limb Salvage/methods, Male, Pain/etiology, Peripheral Arterial Disease/complications, Peripheral Arterial Disease/surgery, Retrospective Studies, Risk Factors, Time Factors, Treatment Outcome, Wound Healing
ABSTRACT
BACKGROUND: In clinical trials, the study interest often lies in comparing a treatment to a control with regard to a time-to-event endpoint. A composite endpoint allows several time-to-event endpoints to be considered at once. Usually, only the time to a patient's first event is analyzed. However, an individual may experience more than one non-fatal event, and including all observed events in the analysis can increase power and provides a more complete picture of the disease. Thus, analytical methods for recurrent events are required. A challenge is that the different event types belonging to the composite are often of different clinical relevance; in this case, weighting the event types according to their clinical relevance is an option. Different weight-based methods for composite time-to-event endpoints have been proposed, but so far there has been no systematic comparison of these methods. METHODS: In this work, we provide a systematic comparison of three methods proposed for weighted composite endpoints in a recurrent event setting combining non-fatal and fatal events of different clinical relevance. We consider an extension of an approach proposed by Wei and Lachin, an approach by Rauch et al., and an approach by Bakal et al. The comparison is based on a simulation study and on a clinical study example. RESULTS: Closed-form test statistics are available for all three approaches. The Wei-Lachin approach and the approach by Rauch et al. show similar results in terms of mean squared error. Confidence intervals are provided for the Wei-Lachin approach. The approach by Bakal et al. is not related to a quantifiable estimand. The relevance weights of the different approaches operate on different levels, that is, either on cause-specific hazard ratios or on event counts. CONCLUSION: The comparison and simulations provided here can help guide applied researchers in choosing an adequate method for the analysis of composite endpoints combining (recurrent) events of different clinical relevance. The approaches by Wei and Lachin and by Rauch et al. can be recommended in scenarios where the composite effect is time-independent. The approach by Bakal et al. should be applied carefully.
Subjects
Eating, Research Design, Computer Simulation, Endpoint Determination/methods, Humans, Proportional Hazards Models
ABSTRACT
BACKGROUND: Unstable angina (UA) is a component of acute coronary syndrome that is only occasionally included in primary composite endpoints in cardiovascular clinical trials. The aim of this paper is to elucidate the potential benefits and disadvantages of including UA in such contexts. SUMMARY: UA comprises <10% of patients with acute coronary syndromes in contemporary settings. Based on its pathophysiological similarity to myocardial infarction (MI), it is ideally suited for inclusion in a composite endpoint along with MI. Adding UA as a component of a primary composite endpoint should increase the number of events and the feasibility of the trial, thus decreasing its size and cost. Furthermore, UA has both economic and quality-of-life implications at the societal and individual levels. However, there are important challenges associated with the use of UA as an endpoint. With the introduction of high-sensitivity troponins, the number of individuals diagnosed with UA has decreased to rather low levels, with a reciprocal increase in the number of MIs. In addition, UA is particularly challenging to define given the subjective assessment of the index symptoms, rendering a high risk of bias. To minimize bias, strict criteria are warranted, and events should be adjudicated by a blinded endpoint adjudication committee. KEY MESSAGES: UA should only be chosen as a component of a primary composite endpoint in cardiovascular trials after thorough evaluation of the pros and cons. If UA is included, appropriate precautions should be taken to minimize possible bias.
Subjects
Acute Coronary Syndrome, Unstable Angina, Clinical Trials as Topic, Myocardial Infarction, Acute Coronary Syndrome/therapy, Humans, Myocardial Infarction/therapy, Quality of Life, Troponin
ABSTRACT
There is increasing interest in the use of the win ratio with composite time-to-event endpoints because of its flexibility in combining component endpoints. Exploring this flexibility further, one interesting question is the impact of a difference in treatment effect across the component endpoints. For example, the active treatment may prolong the time to occurrence of a negative event such as death or ventilation, while also shortening the time to a positive event such as recovery or improvement. Notably, this portrays a situation in which the treatment effect on time to recovery is in a different direction of benefit from that on time to ventilation or death. Under such circumstances, if a single endpoint is used, the benefit gained on the other individual outcomes is not counted and is diminished; as a consequence, the study may need a larger sample size to detect a significant treatment effect. The win ratio can handle such a scenario in a novel way by ranking component events, which differs from the usual composite endpoint approaches such as time-to-first-event. To evaluate how different directions of treatment effect on the component endpoints affect the win ratio analysis, we use a Clayton copula-based bivariate survival simulation to investigate the correlation between the component times-to-event. Through simulation, we found that, compared with marginal models using single endpoints, the win ratio analysis of the composite endpoint performs better, especially when the correlation between the two events is weak. We then applied the methodology to a simulated infectious disease progression study motivated by COVID-19. The application demonstrates that the win ratio approach offers advantages in empirical power over the traditional Cox proportional hazards approach when the treatment effects on the marginal events differ.
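A minimal sketch of the Clayton copula machinery referred to above: correlated uniforms are drawn by conditional inverse sampling and transformed to exponential margins, with the treatment effect acting in opposite directions on the negative event (death or ventilation) and the positive event (recovery). The dependence parameter, baseline rates, and effect sizes are illustrative assumptions, and the win ratio calculation itself is omitted.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)

def clayton_bivariate(n, theta):
    """Draw n pairs (u1, u2) from a Clayton copula with dependence parameter theta > 0
    (Kendall's tau = theta / (theta + 2)), via conditional inverse sampling."""
    u1 = rng.uniform(size=n)
    v = rng.uniform(size=n)
    u2 = (u1 ** (-theta) * (v ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
    return u1, u2

def simulate_components(n, trt, theta=2.0):
    """Correlated component times with opposite directions of treatment benefit."""
    u_neg, u_pos = clayton_bivariate(n, theta)
    rate_neg = 0.05 * np.exp(-0.4 * trt)      # treatment delays the negative event (HR ~0.67)
    rate_pos = 0.10 * np.exp(+0.4 * trt)      # treatment speeds up recovery (HR ~1.49)
    t_negative = -np.log(u_neg) / rate_neg    # inverse-transform to exponential margins
    t_positive = -np.log(u_pos) / rate_pos
    return t_negative, t_positive

# Quick check of the induced association and the marginal treatment effects.
t_neg0, t_pos0 = simulate_components(20000, trt=0)
t_neg1, t_pos1 = simulate_components(20000, trt=1)
print("rank correlation, control arm:", round(spearmanr(t_neg0, t_pos0)[0], 2))
print("median time to negative event (control vs treatment):",
      round(np.median(t_neg0), 1), round(np.median(t_neg1), 1))
print("median time to recovery (control vs treatment):",
      round(np.median(t_pos0), 1), round(np.median(t_pos1), 1))
```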
Subjects
COVID-19, Humans, Endpoint Determination/methods, Computer Simulation
ABSTRACT
BACKGROUND: In renal studies, various outcome endpoints are used with variable definitions, making it nearly impossible to perform meta-analyses and deduce meaningful conclusions. Increasing attention is therefore directed towards standardization of renal outcome reporting. METHODS: A working group was formed to produce a unifying definition of renal outcomes that can be used by all investigators. We propose major adverse renal events (MARE) as the term for a standardized composite of hard renal outcomes and discuss the components for inclusion in MARE based on existing evidence. RESULTS: MARE could include three to five items considered relevant to patients and regulators. The core items of MARE are new onset of kidney injury, that is, persistent albuminuria/proteinuria and/or a decrease in glomerular filtration rate (GFR) to <60 ml/min/1.73 m2; persistent signs of worsening kidney disease; development of end-stage kidney disease with an estimated GFR <15 ml/min/1.73 m2, with or without initiation of kidney replacement therapy; and death from a renal cause. Additionally, patient-reported outcomes should be reported in parallel to MARE as a standard set of primary (or secondary) endpoints in studies on kidney disease of diabetic, hypertensive-vascular, or other origin. CONCLUSIONS: MARE as a reporting standard will enhance the ability to compare studies and thus facilitate meaningful meta-analyses. The resulting standardized endpoints should improve guidelines and help to better individualize the care of patients with kidney disease.
Subjects
Endpoint Determination/standards, Hypertension/etiology, Kidney Diseases/etiology, Chronic Kidney Failure/therapy, Outcome Assessment (Health Care), Renal Replacement Therapy/adverse effects, Clinical Trials as Topic, Glomerular Filtration Rate, Humans, Hypertension/pathology, Kidney Diseases/pathology, Prognosis, Survival Rate
ABSTRACT
In heart failure (HF) trials, efficacy is usually assessed by a composite endpoint including cardiovascular death (CVD) and heart failure hospitalizations (HFHs), which has traditionally been evaluated with a time-to-first-event analysis based on a Cox model. Because a considerable fraction of events is ignored in this way, methods for recurrent events have been suggested, among them the semiparametric proportional rates models of Lin, Wei, Yang, and Ying (LWYY model) and of Mao and Lin (Mao-Lin model). In our work, we apply least false parameter theory to explain the behavior of the composite treatment effect estimates resulting from the Cox model, the LWYY model, and the Mao-Lin model in clinically relevant scenarios parameterized through joint frailty models. These account both for different treatment effects on the two outcomes (CVD, HFHs) and for the positive correlation between their risk rates. For the important setting of beneficial outcome-specific treatment effects, we show that the correlation results in composite treatment effect estimates that decrease with trial duration. The estimate from the Cox model is affected more by this attenuation than the estimates from the recurrent event models, which both behave very similarly. Since the Mao-Lin model turns out to be less sensitive to harmful effects on mortality, we conclude that, among the three investigated approaches, the LWYY model is the most appropriate one for the composite endpoint in HF trials. Our investigations are motivated by, and compared with, empirical results from the PARADIGM-HF trial (ClinicalTrials.gov identifier: NCT01035255), a large multicenter trial including 8399 chronic HF patients.
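To picture the attenuation described above, here is a simplified joint frailty data-generating sketch, not the paper's least false parameter derivation: a shared gamma frailty scales both the HFH rate and the CVD hazard, treatment acts on each outcome separately, and the composite time-to-first-event hazard ratio from a Cox model is re-estimated at increasing follow-up lengths. Only the first HFH per patient is used, all parameter values are invented, and the lifelines package is assumed to be available.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(99)

def simulate_trial(n=4000, theta=0.8, hr_hfh=0.8, hr_cvd=0.85, base_hfh=0.4, base_cvd=0.08):
    """Joint frailty mechanism: a shared gamma frailty Z (mean 1, variance theta) scales
    both the recurrent HFH rate and the CVD hazard; treatment acts on each outcome."""
    trt = rng.integers(0, 2, n)
    z = rng.gamma(shape=1 / theta, scale=theta, size=n)
    rate_hfh = z * base_hfh * np.where(trt == 1, hr_hfh, 1.0)    # events per year
    rate_cvd = z * base_cvd * np.where(trt == 1, hr_cvd, 1.0)
    t_cvd = rng.exponential(1 / rate_cvd)
    t_hfh = rng.exponential(1 / rate_hfh)        # time to first event of the HFH process
    return pd.DataFrame({"trt": trt, "t_cvd": t_cvd, "t_hfh": t_hfh})

def composite_hr(df, followup):
    """Time-to-first-event Cox HR for the composite of CVD and first HFH."""
    first = np.minimum(df["t_cvd"].values, df["t_hfh"].values)
    cox_df = pd.DataFrame({
        "time": np.minimum(first, followup),
        "event": (first <= followup).astype(int),
        "trt": df["trt"],
    })
    cph = CoxPHFitter().fit(cox_df, duration_col="time", event_col="event")
    return float(cph.hazard_ratios_["trt"])

df = simulate_trial()
for followup in (1.0, 2.0, 4.0):
    print(f"follow-up {followup:.0f} years: composite HR = {composite_hr(df, followup):.3f}")
```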
Subjects
Heart Failure, Heart Failure/therapy, Humans, Proportional Hazards Models, Treatment Outcome
ABSTRACT
BACKGROUND: Sample size calculation is a key point in the design of a randomized controlled trial. With time-to-event outcomes, it is often based on the logrank test. We provide a sample size calculation method for a composite endpoint (CE) based on the geometric average hazard ratio (gAHR) for the case in which the proportional hazards assumption can be assumed to hold for the components, but not for the CE. METHODS: The formulae for the required number of events, the sample size, and the power are based on the non-centrality parameter of the logrank test under the alternative hypothesis, which is a function of the gAHR. We use the web platform CompARE for the sample size computations. A simulation study evaluates the empirical power of the logrank test for the CE based on the sample size computed in terms of the gAHR, considering different values of the component hazard ratios, the probabilities of observing the events in the control group, and the degrees of association between the components. We illustrate the sample size computations using two published randomized controlled trials. Their primary CEs are, respectively, progression-free survival (time to disease progression or death) and the composite of bacteriologically confirmed treatment failure or Staphylococcus aureus-related death by 12 weeks. RESULTS: For a target power of 0.80, the simulation study provided mean (± SE) empirical powers equal to 0.799 (±0.004) and 0.798 (±0.004) in the exponential and non-exponential settings, respectively. The power was attained in more than 95% of the simulated scenarios and was always above 0.78, regardless of compliance with the proportional hazards assumption. CONCLUSIONS: The geometric average hazard ratio as an effect measure for a composite endpoint has a meaningful interpretation in the case of non-proportional hazards. Furthermore, it is the natural effect measure when using the logrank test to compare the hazard rates of two groups and should be used instead of the standard hazard ratio.
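As a back-of-the-envelope companion to the method described above, the sketch below uses a Schoenfeld-type calculation with the gAHR in place of a constant hazard ratio: the required number of events is 4(z_{1-alpha/2} + z_{1-beta})^2 / (log gAHR)^2 under 1:1 allocation, and the total sample size divides this by the average probability of observing the composite event. The paper's exact formulae are based on the logrank non-centrality parameter and are implemented in CompARE; the planning values here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def required_events(g_ahr, alpha=0.05, power=0.80):
    """Schoenfeld-type required number of composite events for a 1:1 trial, with the
    geometric average hazard ratio (gAHR) playing the role of the hazard ratio."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(4 * z ** 2 / np.log(g_ahr) ** 2))

def total_sample_size(g_ahr, p_event_ctrl, p_event_trt, alpha=0.05, power=0.80):
    """Total sample size = required events / average probability of observing the CE."""
    d = required_events(g_ahr, alpha, power)
    return d, int(np.ceil(d / ((p_event_ctrl + p_event_trt) / 2)))

# Illustrative planning values: gAHR = 0.75 for the composite, with composite event
# probabilities of 0.55 (control) and 0.45 (treatment) over the planned follow-up.
events, n_total = total_sample_size(g_ahr=0.75, p_event_ctrl=0.55, p_event_trt=0.45)
print(f"required composite events: {events}, total sample size: {n_total}")
```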
Subjects
Research Design, Computer Simulation, Control Groups, Humans, Proportional Hazards Models, Sample Size
ABSTRACT
In a comparative longitudinal clinical study, the timing and occurrence of multiple clinical events of interest are typically collected during the follow-up period. These clinical events are often indicative of disease burden over the study period and provide overall evidence of the benefit/risk of one treatment relative to another. While these clinical events are usually used to form a composite endpoint, only the first occurrence of a composite endpoint event is considered in the primary efficacy analysis. This type of analysis is commonly performed, but it may not be ideal. Most existing methods for analyzing multiple event-time data were developed under certain model assumptions, and these assumptions may greatly affect inferences about the treatment effect. In this paper, we propose a simple, non-parametric estimator of the conditional mean survival time for multiple events to quantify the treatment effect, which has a clinically meaningful interpretation. We use simulation studies to evaluate the performance of the new method. Further, we apply the method to data from a cardiovascular clinical trial as an illustration.