ABSTRACT
BACKGROUND: The benefits and safety of the treatment of mild chronic hypertension (blood pressure, <160/100 mm Hg) during pregnancy are uncertain. Data are needed on whether a strategy of targeting a blood pressure of less than 140/90 mm Hg reduces the incidence of adverse pregnancy outcomes without compromising fetal growth. METHODS: In this open-label, multicenter, randomized trial, we assigned pregnant women with mild chronic hypertension and singleton fetuses at a gestational age of less than 23 weeks to receive antihypertensive medications recommended for use in pregnancy (active-treatment group) or to receive no such treatment unless severe hypertension (systolic pressure, ≥160 mm Hg; or diastolic pressure, ≥105 mm Hg) developed (control group). The primary outcome was a composite of preeclampsia with severe features, medically indicated preterm birth at less than 35 weeks' gestation, placental abruption, or fetal or neonatal death. The safety outcome was small-for-gestational-age birth weight below the 10th percentile for gestational age. Secondary outcomes included composites of serious neonatal or maternal complications, preeclampsia, and preterm birth. RESULTS: A total of 2408 women were enrolled in the trial. The incidence of a primary-outcome event was lower in the active-treatment group than in the control group (30.2% vs. 37.0%), for an adjusted risk ratio of 0.82 (95% confidence interval [CI], 0.74 to 0.92; P<0.001). The percentage of small-for-gestational-age birth weights below the 10th percentile was 11.2% in the active-treatment group and 10.4% in the control group (adjusted risk ratio, 1.04; 95% CI, 0.82 to 1.31; P = 0.76). The incidence of serious maternal complications was 2.1% and 2.8%, respectively (risk ratio, 0.75; 95% CI, 0.45 to 1.26), and the incidence of severe neonatal complications was 2.0% and 2.6% (risk ratio, 0.77; 95% CI, 0.45 to 1.30). 
The incidence of any preeclampsia in the two groups was 24.4% and 31.1%, respectively (risk ratio, 0.79; 95% CI, 0.69 to 0.89), and the incidence of preterm birth was 27.5% and 31.4% (risk ratio, 0.87; 95% CI, 0.77 to 0.99). CONCLUSIONS: In pregnant women with mild chronic hypertension, a strategy of targeting a blood pressure of less than 140/90 mm Hg was associated with better pregnancy outcomes than a strategy of reserving treatment only for severe hypertension, with no increase in the risk of small-for-gestational-age birth weight. (Funded by the National Heart, Lung, and Blood Institute; CHAP ClinicalTrials.gov number, NCT02299414.).
Subjects
Antihypertensive Agents/therapeutic use; Hypertension, Pregnancy-Induced/drug therapy; Hypertension; Pregnancy Outcome; Abruptio Placentae/epidemiology; Abruptio Placentae/prevention & control; Birth Weight; Chronic Disease; Female; Fetal Growth Retardation/epidemiology; Fetal Growth Retardation/prevention & control; Humans; Hypertension/complications; Hypertension/drug therapy; Infant, Newborn; Pre-Eclampsia/epidemiology; Pre-Eclampsia/prevention & control; Pregnancy; Pregnancy Outcome/epidemiology; Premature Birth/epidemiology; Premature Birth/prevention & control
ABSTRACT
BACKGROUND: The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional "frequentist" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to make conclusions. Using this approach, we show that the Bayesian efficacy boundary used in mpRCT is actually quite similar to the frequentist Pocock boundary. METHODS: The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline. 
RESULTS: A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary. CONCLUSIONS: In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative approaches, such as the O'Brien-Fleming boundary. This can be accomplished with either Bayesian or frequentist methods.
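The posterior-probability stopping rules described above can be sketched with a normal-normal update on the log odds ratio, in which the prior acts as a block of imaginary patients' data combined with the actual data. This is a minimal illustration, not the mpRCT's implementation: the N(0, 2²) prior is a placeholder for "neutral, weakly informative", and `theta_hat` and `se` stand for an interim log-odds-ratio estimate and its standard error.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def posterior(theta_hat, se, prior_mean=0.0, prior_sd=2.0):
    """Normal-normal update for the log odds ratio. prior_sd=2.0 is a
    placeholder neutral, weakly informative prior (not the mpRCT's)."""
    w_prior = 1.0 / prior_sd ** 2      # prior precision ("imaginary patients")
    w_data = 1.0 / se ** 2             # data precision (actual patients)
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * theta_hat)
    return post_mean, math.sqrt(post_var)

def monitoring_decision(theta_hat, se):
    """mpRCT-style guidelines: stop for efficacy if P(OR > 1) > 0.99;
    stop for futility if P(OR < 1.2) > 0.95."""
    m, s = posterior(theta_hat, se)
    p_effective = 1.0 - norm_cdf((0.0 - m) / s)       # P(log OR > 0)
    p_below_1_2 = norm_cdf((math.log(1.2) - m) / s)   # P(log OR < log 1.2)
    if p_effective > 0.99:
        return "stop for efficacy"
    if p_below_1_2 > 0.95:
        return "stop for futility"
    return "continue"
```

Tightening `prior_sd` (more imaginary patients) pulls the posterior toward the null and makes both boundaries harder to cross, which is how the prior-to-actual patient ratio discussed above shifts the boundary between Pocock-like and O'Brien-Fleming-like behavior.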
ABSTRACT
Jeffries et al. (2018) investigated testing for a treatment difference in the setting of a randomized clinical trial with a single outcome measured longitudinally over a series of common follow-up times while adjusting for covariates. That paper examined the null hypothesis of no difference at any follow-up time versus the alternative of a difference for at least one follow-up time. We extend those results here by considering multivariate outcome measurements, where each individual outcome is examined at common follow-up times. We consider the case where there is interest in first testing for a treatment difference in a global function of the outcomes (e.g., weighted average or sum) with subsequent interest in examining the individual outcomes, should the global function show a treatment difference. Testing is conducted for each follow-up time and may be performed in the setting of a group sequential trial. Testing procedures are developed to determine follow-up times for which a global treatment difference exists and which individual combinations of outcome and follow-up time show evidence of a difference while controlling for multiplicity in outcomes, follow-up, and interim analyses. These approaches are examined in a study evaluating the effects of tissue plasminogen activator on longitudinally obtained stroke severity measurements.
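The global-then-individual strategy described above can be sketched as a simple gatekeeping procedure. This is an illustrative simplification, not the paper's method: it combines outcome-specific Z-statistics by a weighted sum assuming independent outcomes, and uses a Bonferroni correction for the individual tests rather than the paper's multiplicity control across outcomes, follow-up times, and interim analyses.

```python
import math

def norm_sf(z):
    # Upper-tail probability of a standard normal variate
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def global_then_individual(z_by_outcome, weights=None, alpha=0.05):
    """Gatekeeping sketch: test a weighted-sum global statistic first;
    only if it rejects, test each outcome at a Bonferroni-adjusted level.
    Independence of the outcomes is a simplifying assumption."""
    k = len(z_by_outcome)
    w = weights or [1.0] * k
    t_global = (sum(wi * zi for wi, zi in zip(w, z_by_outcome))
                / math.sqrt(sum(wi * wi for wi in w)))
    if norm_sf(t_global) >= alpha:
        return t_global, []            # global null not rejected: stop here
    hits = [i for i, zi in enumerate(z_by_outcome)
            if norm_sf(zi) < alpha / k]
    return t_global, hits              # indices of outcomes showing a difference
```

In this gatekeeping structure the individual outcomes are examined only after the global function shows a difference, mirroring the testing order described above.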
Subjects
Stroke; Tissue Plasminogen Activator; Humans; Tissue Plasminogen Activator/therapeutic use; Longitudinal Studies; Research Design; Stroke/drug therapy
ABSTRACT
Umbilical cord blood (UCB) transplantation is a potentially curative treatment for patients with refractory severe aplastic anaemia (SAA), but has historically been associated with delayed engraftment and high graft failure and mortality rates. We conducted a prospective phase 2 trial to assess outcome of an allogeneic transplant regimen that co-infused a single UCB unit with CD34+ -selected cells from a haploidentical relative. Among 29 SAA patients [including 10 evolved to myelodysplastic syndrome (MDS)] who underwent the haplo cord transplantation (median age 20 years), 97% had neutrophil recovery (median 10 days), and 93% had platelet recovery (median 32 days). Early myeloid engraftment was from the haplo donor and was gradually replaced by durable engraftment from UCB in most patients. The cumulative incidences of grade II-IV acute and chronic graft-versus-host disease (GVHD) were 21% and 41%, respectively. With a median follow-up of 7·5 years, overall survival was 83% and GVHD/relapse-free survival was 69%. Patient- and transplant-related factors had no impact on engraftment and survival although transplants with haplo-versus-cord killer-cell immunoglobulin-like receptor (KIR) ligand incompatibility had delayed cord engraftment. Our study shows haplo cord transplantation is associated with excellent engraftment and long-term outcome, providing an alternative option for patients with refractory SAA and hypoplastic MDS who lack human leucocyte antigen (HLA)-matched donors.
Subjects
Anemia, Aplastic; Cord Blood Stem Cell Transplantation; Graft vs Host Disease; Hematopoietic Stem Cell Transplantation; Myelodysplastic Syndromes; Adolescent; Adult; Anemia, Aplastic/blood; Anemia, Aplastic/mortality; Anemia, Aplastic/therapy; Child; Child, Preschool; Disease-Free Survival; Female; Follow-Up Studies; Graft vs Host Disease/blood; Graft vs Host Disease/etiology; Graft vs Host Disease/mortality; Humans; Incidence; Leukocyte Count; Male; Middle Aged; Myelodysplastic Syndromes/blood; Myelodysplastic Syndromes/mortality; Myelodysplastic Syndromes/therapy; Platelet Count; Prospective Studies; Survival Rate; Transplantation, Haploidentical
ABSTRACT
BACKGROUND: In a randomized trial comparing mitral-valve repair with mitral-valve replacement in patients with severe ischemic mitral regurgitation, we found no significant difference in the left ventricular end-systolic volume index (LVESVI), survival, or adverse events at 1 year after surgery. However, patients in the repair group had significantly more recurrences of moderate or severe mitral regurgitation. We now report the 2-year outcomes of this trial. METHODS: We randomly assigned 251 patients to mitral-valve repair or replacement. Patients were followed for 2 years, and clinical and echocardiographic outcomes were assessed. RESULTS: Among surviving patients, the mean (±SD) 2-year LVESVI was 52.6±27.7 ml per square meter of body-surface area with mitral-valve repair and 60.6±39.0 ml per square meter with mitral-valve replacement (mean changes from baseline, -9.0 ml per square meter and -6.5 ml per square meter, respectively). Two-year mortality was 19.0% in the repair group and 23.2% in the replacement group (hazard ratio in the repair group, 0.79; 95% confidence interval, 0.46 to 1.35; P=0.39). The rank-based assessment of LVESVI at 2 years (incorporating deaths) showed no significant between-group difference (z score=-1.32, P=0.19). The rate of recurrence of moderate or severe mitral regurgitation over 2 years was higher in the repair group than in the replacement group (58.8% vs. 3.8%, P<0.001). There were no significant between-group differences in rates of serious adverse events and overall readmissions, but patients in the repair group had more serious adverse events related to heart failure (P=0.05) and cardiovascular readmissions (P=0.01). On the Minnesota Living with Heart Failure questionnaire, there was a trend toward greater improvement in the replacement group (P=0.07). 
CONCLUSIONS: In patients undergoing mitral-valve repair or replacement for severe ischemic mitral regurgitation, we observed no significant between-group difference in left ventricular reverse remodeling or survival at 2 years. Mitral regurgitation recurred more frequently in the repair group, resulting in more heart-failure-related adverse events and cardiovascular admissions. (Funded by the National Institutes of Health and Canadian Institutes of Health Research; ClinicalTrials.gov number, NCT00807040.).
Assuntos
Implante de Prótese de Valva Cardíaca , Insuficiência da Valva Mitral/cirurgia , Valva Mitral/cirurgia , Qualidade de Vida , Insuficiência Cardíaca/etiologia , Ventrículos do Coração/anatomia & histologia , Ventrículos do Coração/fisiopatologia , Hospitalização , Humanos , Insuficiência da Valva Mitral/complicações , Insuficiência da Valva Mitral/mortalidade , Recidiva , Reoperação/estatística & dados numéricos , Falha de Tratamento , Função Ventricular Esquerda , Remodelação VentricularRESUMO
BACKGROUND: Among patients undergoing mitral-valve surgery, 30 to 50% present with atrial fibrillation, which is associated with reduced survival and increased risk of stroke. Surgical ablation of atrial fibrillation has been widely adopted, but evidence regarding its safety and effectiveness is limited. METHODS: We randomly assigned 260 patients with persistent or long-standing persistent atrial fibrillation who required mitral-valve surgery to undergo either surgical ablation (ablation group) or no ablation (control group) during the mitral-valve operation. Patients in the ablation group underwent further randomization to pulmonary-vein isolation or a biatrial maze procedure. All patients underwent closure of the left atrial appendage. The primary end point was freedom from atrial fibrillation at both 6 months and 12 months (as assessed by means of 3-day Holter monitoring). RESULTS: More patients in the ablation group than in the control group were free from atrial fibrillation at both 6 and 12 months (63.2% vs. 29.4%, P<0.001). There was no significant difference in the rate of freedom from atrial fibrillation between patients who underwent pulmonary-vein isolation and those who underwent the biatrial maze procedure (61.0% and 66.0%, respectively; P=0.60). One-year mortality was 6.8% in the ablation group and 8.7% in the control group (hazard ratio with ablation, 0.76; 95% confidence interval, 0.32 to 1.84; P=0.55). Ablation was associated with more implantations of a permanent pacemaker than was no ablation (21.5 vs. 8.1 per 100 patient-years, P=0.01). There were no significant between-group differences in major cardiac or cerebrovascular adverse events, overall serious adverse events, or hospital readmissions. 
CONCLUSIONS: The addition of atrial fibrillation ablation to mitral-valve surgery significantly increased the rate of freedom from atrial fibrillation at 1 year among patients with persistent or long-standing persistent atrial fibrillation, but the risk of implantation of a permanent pacemaker was also increased. (Funded by the National Institutes of Health and the Canadian Institutes of Health Research; ClinicalTrials.gov number, NCT00903370.).
Subjects
Atrial Fibrillation/surgery; Catheter Ablation/methods; Heart Valve Diseases/surgery; Mitral Valve/surgery; Aged; Atrial Fibrillation/complications; Atrial Fibrillation/prevention & control; Cardiovascular Diseases/mortality; Catheter Ablation/adverse effects; Electrocardiography, Ambulatory; Female; Heart Valve Diseases/complications; Heart Valve Prosthesis Implantation; Humans; Kaplan-Meier Estimate; Male; Middle Aged; Postoperative Complications; Quality of Life; Secondary Prevention
ABSTRACT
In longitudinal studies comparing two treatments over a series of common follow-up measurements, there may be interest in determining whether there is a treatment difference at any follow-up period, particularly when the treatment effect over time may be non-monotone. To evaluate this question, Jeffries and Geller (2015) examined a number of clinical trial designs that allowed adaptive choice of the follow-up time exhibiting the greatest evidence of treatment difference in a group sequential testing setting with Gaussian data. The methods are applicable when a few measurements are taken at prespecified follow-up periods. Here, we test the intersection null hypothesis of no difference at any follow-up time versus the alternative that there is a difference for at least one follow-up time. Results of Jeffries and Geller (2015) are extended by considering a broader range of modeled data and the inclusion of covariates using generalized estimating equations. Testing procedures are developed to determine a set of follow-up times that exhibit a treatment difference while accounting for multiplicity in follow-up times and interim analyses.
Subjects
Analysis of Variance; Longitudinal Studies; Research Design; Clinical Trials as Topic; Follow-Up Studies; Humans
ABSTRACT
BACKGROUND: Ischemic mitral regurgitation is associated with a substantial risk of death. Practice guidelines recommend surgery for patients with a severe form of this condition but acknowledge that the supporting evidence for repair or replacement is limited. METHODS: We randomly assigned 251 patients with severe ischemic mitral regurgitation to undergo either mitral-valve repair or chordal-sparing replacement in order to evaluate efficacy and safety. The primary end point was the left ventricular end-systolic volume index (LVESVI) at 12 months, as assessed with the use of a Wilcoxon rank-sum test in which deaths were categorized below the lowest LVESVI rank. RESULTS: At 12 months, the mean LVESVI among surviving patients was 54.6±25.0 ml per square meter of body-surface area in the repair group and 60.7±31.5 ml per square meter in the replacement group (mean change from baseline, -6.6 and -6.8 ml per square meter, respectively). The rate of death was 14.3% in the repair group and 17.6% in the replacement group (hazard ratio with repair, 0.79; 95% confidence interval, 0.42 to 1.47; P=0.45 by the log-rank test). There was no significant between-group difference in LVESVI after adjustment for death (z score, 1.33; P=0.18). The rate of moderate or severe recurrence of mitral regurgitation at 12 months was higher in the repair group than in the replacement group (32.6% vs. 2.3%, P<0.001). There were no significant between-group differences in the rate of a composite of major adverse cardiac or cerebrovascular events, in functional status, or in quality of life at 12 months. CONCLUSIONS: We observed no significant difference in left ventricular reverse remodeling or survival at 12 months between patients who underwent mitral-valve repair and those who underwent mitral-valve replacement. Replacement provided a more durable correction of mitral regurgitation, but there was no significant between-group difference in clinical outcomes. 
(Funded by the National Institutes of Health and the Canadian Institutes of Health; ClinicalTrials.gov number, NCT00807040.).
Subjects
Heart Valve Prosthesis Implantation; Mitral Valve Annuloplasty; Mitral Valve Insufficiency/surgery; Mitral Valve/surgery; Aged; Coronary Artery Disease/complications; Female; Humans; Male; Middle Aged; Mitral Valve Insufficiency/complications; Mitral Valve Insufficiency/physiopathology; Myocardial Ischemia/complications; Postoperative Complications; Proportional Hazards Models; Quality of Life; Recurrence; Stroke Volume; Ventricular Function, Left; Ventricular Remodeling
ABSTRACT
Hematopoietic stem cells can be mobilized from healthy donors using single-agent plerixafor without granulocyte colony-stimulating factor and, following allogeneic transplantation, can result in sustained donor-derived hematopoiesis. However, when a single dose of plerixafor is administered at a conventional 240 µg/kg dose, approximately one-third of donors will fail to mobilize the minimally acceptable dose of CD34+ cells needed for allogeneic transplantation. We conducted an open-label, randomized trial to assess the safety and activity of high-dose (480 µg/kg) plerixafor in CD34+ cell mobilization in healthy donors. Subjects were randomly assigned to receive either a high dose or a conventional dose (240 µg/kg) of plerixafor, given as a single subcutaneous injection, in a two-sequence, two-period, crossover design. Each treatment period was separated by a minimum 2-week washout period. The primary endpoint was the peak CD34+ count in the blood, with secondary endpoints of CD34+ cell area under the curve (AUC), CD34+ count at 24 hours, and time to peak CD34+ following the administration of plerixafor. We randomized 23 subjects to the two treatment sequences, and 20 subjects received both doses of plerixafor. Peak CD34+ count in the blood was significantly increased (mean 32.2 versus 27.8 cells/µL, P=0.0009) and CD34+ cell AUC over 24 hours was significantly increased (mean 553 versus 446 h·cells/µL, P<0.0001) following the administration of the 480 µg/kg dose of plerixafor compared with the 240 µg/kg dose. Remarkably, of seven subjects who mobilized poorly (peak CD34+ ≤20 cells/µL) after the 240 µg/kg dose of plerixafor, six achieved higher peak CD34+ cell numbers and all achieved higher CD34+ AUC over 24 hours after the 480 µg/kg dose. No grade 3 or worse drug-related adverse events were observed.
This study establishes that high-dose plerixafor can be safely administered in healthy donors and mobilizes greater numbers of CD34+ cells than conventional-dose plerixafor, which may improve CD34+ graft yields and reduce the number of apheresis procedures needed to collect sufficient stem cells for allogeneic transplantation. (ClinicalTrials.gov, identifier: NCT00322127).
Subjects
Antigens, CD34/metabolism; Hematopoietic Stem Cell Mobilization; Hematopoietic Stem Cells/drug effects; Hematopoietic Stem Cells/metabolism; Heterocyclic Compounds/administration & dosage; Tissue Donors; Adult; Benzylamines; Colony-Forming Units Assay; Cross-Over Studies; Cyclams; Female; Healthy Volunteers; Hematopoietic Stem Cell Mobilization/methods; Hematopoietic Stem Cell Transplantation/adverse effects; Hematopoietic Stem Cell Transplantation/methods; Hematopoietic Stem Cells/cytology; Humans; Male; Middle Aged; Time Factors; Young Adult
ABSTRACT
BACKGROUND: In hematopoietic cell transplantation (HCT), current risk adjustment strategies are based on clinical and disease-related variables. Although patient-reported outcomes (PROs) predict mortality in multiple cancers, they have been less well studied within HCT. Improvements in risk adjustment strategies in HCT would inform patient selection, patient counseling, and quality reporting. The objective of the current study was to determine whether pre-HCT PROs, in particular physical health, predict survival among patients undergoing autologous or allogeneic transplantation. METHODS: In this secondary analysis, the authors studied pre-HCT PROs that were reported by 336 allogeneic and 310 autologous HCT recipients enrolled in the Blood and Marrow Transplant Clinical Trials Network (BMT CTN) 0902 protocol, a study with broad representation of patients who underwent transplantation in the United States. RESULTS: Among allogeneic HCT recipients, the pre-HCT Medical Outcomes Study Short Form-36 Health Survey (SF-36) physical component summary (PCS) scale independently predicted overall mortality (hazard ratio, 1.40 per 10-point decrease; P<.001) and performed at least as well as currently used non-PRO risk indices. Survival probability estimates at 1 year for the first, second, third, and fourth quartiles of the baseline PCS were 50%, 65%, 75%, and 83%, respectively. Early post-HCT decreases in PCS were associated with higher overall and treatment-related mortality. When adjusted for patient variables included in the US Stem Cell Therapeutic Outcomes Database model for transplant center-specific reporting, the SF-36 PCS retained independent prognostic value. CONCLUSIONS: PROs have the potential to improve prognostication in HCT. The authors recommend the routine collection of PROs before HCT and consideration of the incorporation of PROs into risk adjustment for quality reporting.
Subjects
Hematologic Neoplasms/physiopathology; Hematologic Neoplasms/therapy; Hematopoietic Stem Cell Transplantation/methods; Physical Fitness/physiology; Transplantation Conditioning/methods; Adult; Aged; Female; Humans; Male; Middle Aged; Quality of Life; Risk Adjustment; Self Report; Surveys and Questionnaires; Transplantation, Homologous; Treatment Outcome
ABSTRACT
BACKGROUND: The clinical utility of genotype-guided (pharmacogenetically based) dosing of warfarin has been tested only in small clinical trials or observational studies, with equivocal results. METHODS: We randomly assigned 1015 patients to receive doses of warfarin during the first 5 days of therapy that were determined according to a dosing algorithm that included both clinical variables and genotype data or to one that included clinical variables only. All patients and clinicians were unaware of the dose of warfarin during the first 4 weeks of therapy. The primary outcome was the percentage of time that the international normalized ratio (INR) was in the therapeutic range from day 4 or 5 through day 28 of therapy. RESULTS: At 4 weeks, the mean percentage of time in the therapeutic range was 45.2% in the genotype-guided group and 45.4% in the clinically guided group (adjusted mean difference [genotype-guided group minus clinically guided group], -0.2; 95% confidence interval, -3.4 to 3.1; P=0.91). There also was no significant between-group difference among patients with a predicted dose difference between the two algorithms of 1 mg per day or more. There was, however, a significant interaction between dosing strategy and race (P=0.003). Among black patients, the mean percentage of time in the therapeutic range was less in the genotype-guided group than in the clinically guided group. The rates of the combined outcome of any INR of 4 or more, major bleeding, or thromboembolism did not differ significantly according to dosing strategy. CONCLUSIONS: Genotype-guided dosing of warfarin did not improve anticoagulation control during the first 4 weeks of therapy. (Funded by the National Heart, Lung, and Blood Institute and others; COAG ClinicalTrials.gov number, NCT00839657.).
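Published pharmacogenetic warfarin algorithms of the kind evaluated here typically model the square root of the weekly dose as a linear function of clinical and genotype variables. The sketch below only illustrates that structure; every coefficient is an invented placeholder, not the COAG trial's algorithm, and the variable names are assumptions for illustration.

```python
def genotype_guided_weekly_dose(age_decades, height_cm, weight_kg,
                                vkorc1_a_alleles, cyp2c9_variant_alleles,
                                on_amiodarone=False):
    """Hypothetical genotype-guided dose model (all coefficients invented
    for illustration, NOT the trial's algorithm). Models the square root
    of the weekly dose linearly and returns mg of warfarin per week."""
    sqrt_weekly = (5.6
                   - 0.26 * age_decades              # dose falls with age
                   + 0.01 * height_cm
                   + 0.008 * weight_kg
                   - 0.9 * vkorc1_a_alleles          # VKORC1 variant alleles (0-2)
                   - 0.5 * cyp2c9_variant_alleles    # CYP2C9 variant alleles (0-2)
                   - 0.6 * (1 if on_amiodarone else 0))
    return max(sqrt_weekly, 0.0) ** 2
```

A clinically guided counterpart would simply drop the two genotype terms; the trial's comparison was between algorithms of this paired form.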
Subjects
Algorithms; Anticoagulants/administration & dosage; Aryl Hydrocarbon Hydroxylases/genetics; Genotype; Vitamin K Epoxide Reductases/genetics; Warfarin/administration & dosage; Adult; Aged; Anticoagulants/adverse effects; Cytochrome P-450 CYP2C9; Double-Blind Method; Female; Follow-Up Studies; Hemorrhage/chemically induced; Humans; International Normalized Ratio; Male; Pharmacogenetics; Thromboembolism; Treatment Failure; Warfarin/adverse effects
ABSTRACT
In longitudinal studies comparing two treatments with a maximum follow-up time there may be interest in examining treatment effects for intermediate follow-up times. One motivation may be to identify the time period with the greatest treatment difference when there is a non-monotone treatment effect over time; another may be to make the trial more efficient in terms of time to reach a decision on whether a new treatment is efficacious or not. Here, we test the composite null hypothesis of no difference at any follow-up time versus the alternative that there is a difference at at least one follow-up time. The methods are applicable when a few measurements are taken over time, such as in early longitudinal trials or in ancillary studies. Suppose the test statistic Z(t(k)) will be used to test the hypothesis of no treatment effect at a fixed follow-up time t(k). In this context a common approach is to perform a pilot study on N1 subjects, evaluate the treatment effect at the fixed time points t(1), ..., t(K), and choose t* as the value of t(k) for which Z(t(k)) is maximized. Having chosen t*, a second trial can be designed. In a setting with group sequential testing we consider several adaptive alternatives to this approach that treat the pilot and second trial as a seamless, combined entity, and we evaluate their Type I error and power characteristics. The adaptive designs we consider typically have improved power over the common, separate-trial approach.
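The common two-stage approach described above (pick t* from the pilot, then run a confirmatory trial) and a test of the composite null can be sketched as follows. This is an illustration under simplifying assumptions: the multiplicity adjustment over follow-up times uses a Bonferroni bound on the maximum Z-statistic, which ignores the correlation between times and is therefore conservative, unlike the seamless adaptive group sequential designs considered in the text.

```python
import math

def norm_sf(z):
    # Upper-tail probability of a standard normal variate
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def pick_best_followup(z_stats):
    """Stage 1 of the separate-trial approach: choose t* as the
    follow-up time whose pilot Z-statistic is largest."""
    k_star = max(range(len(z_stats)), key=lambda k: z_stats[k])
    return k_star, z_stats[k_star]

def max_test_bonferroni(z_stats, alpha=0.05):
    """Test the composite null (no effect at any follow-up time) using
    the maximum of K statistics with a Bonferroni adjustment."""
    k = len(z_stats)
    p_adj = min(1.0, k * norm_sf(max(z_stats)))
    return p_adj, p_adj < alpha
```

A seamless design would instead carry the pilot data forward into the second stage with a combination rule that preserves the Type I error, which is where the power gain described above comes from.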
Subjects
Clinical Trials as Topic/statistics & numerical data; Biometry; Computer Simulation; Follow-Up Studies; Humans; Longitudinal Studies; Models, Statistical; Pilot Projects; Probability; Quality of Life; Randomized Controlled Trials as Topic/statistics & numerical data; Time Factors; Ventricular Dysfunction, Left/drug therapy; Ventricular Dysfunction, Left/physiopathology
ABSTRACT
Studies show that engaging patients in exercise and/or stress management techniques during hematopoietic cell transplantation (HCT) improves quality of life. The Blood and Marrow Transplant Clinical Trials Network tested the efficacy of training patients to engage in self-directed exercise and stress management during HCT. The study randomized 711 patients at 21 centers to receive 1 of 4 training interventions before HCT: a self-directed exercise program, a self-administered stress management program, both, or neither. Participants completed self-reported assessments at enrollment and up to 180 days after HCT. Randomization was stratified by center and transplant type. There were no differences in the primary endpoints of the Physical Component Summary and Mental Component Summary scales of the Medical Outcomes Study Short Form 36 at day +100 among the groups, based on an intention-to-treat analysis. There also were no differences in overall survival, days of hospitalization through day +100 post-HCT, or in other patient-reported outcomes, including treatment-related distress, sleep quality, pain, and nausea. Patients randomized to training in stress management reported more use of those techniques, but patients randomized to training in exercise did not report more physical activity. Although other studies have reported efficacy of more intensive interventions, brief training in an easy-to-disseminate format for either self-directed exercise or stress management was not effective in our trial.
Subjects
Exercise Therapy; Hematologic Neoplasms/psychology; Hematologic Neoplasms/therapy; Hematopoietic Stem Cell Transplantation; Stress, Psychological/therapy; Transplantation Conditioning; Adolescent; Adult; Aged; Female; Hematologic Neoplasms/immunology; Hematologic Neoplasms/mortality; Humans; Intention to Treat Analysis; Male; Standardized Psychiatric Interview; Middle Aged; Myeloablative Agonists/therapeutic use; Prognosis; Quality of Life; Self Report; Survival Analysis; Transplantation, Homologous
ABSTRACT
PURPOSE: Allogeneic hematopoietic cell transplantation (HCT) improves outcomes for patients with AML harboring an internal tandem duplication mutation of FLT3 (FLT3-ITD). These patients are routinely treated with a FLT3 inhibitor after HCT, but there is limited evidence to support this practice. Accordingly, we conducted a randomized trial of post-HCT maintenance with the FLT3 inhibitor gilteritinib (ClinicalTrials.gov identifier: NCT02997202) to determine whether all such patients benefit or whether detection of measurable residual disease (MRD) could identify those who might benefit. METHODS: Adults with FLT3-ITD AML in first remission underwent HCT and were randomly assigned to placebo or 120 mg once-daily gilteritinib for 24 months after HCT. The primary end point was relapse-free survival (RFS). Secondary end points included overall survival (OS) and the effect of MRD pre- and post-HCT on RFS and OS. RESULTS: Three hundred fifty-six participants were randomly assigned post-HCT to receive gilteritinib or placebo. Although RFS was higher in the gilteritinib arm, the difference was not statistically significant (hazard ratio [HR], 0.679 [95% CI, 0.459 to 1.005]; two-sided P = .0518). However, 50.5% of participants had MRD detectable pre- or post-HCT, and, in a prespecified subgroup analysis, gilteritinib was beneficial in this population (HR, 0.515 [95% CI, 0.316 to 0.838]; P = .0065). Those without detectable MRD showed no benefit (HR, 1.213 [95% CI, 0.616 to 2.387]; P = .575). CONCLUSION: Although the overall improvement in RFS was not statistically significant, RFS was higher for participants with detectable FLT3-ITD MRD pre- or post-HCT who received gilteritinib. To our knowledge, these data are among the first to support the effectiveness of MRD-based post-HCT therapy.
Subjects
Aniline Compounds, Hematopoietic Stem Cell Transplantation, Leukemia, Myeloid, Acute, Mutation, Pyrazines, fms-Like Tyrosine Kinase 3, Humans, fms-Like Tyrosine Kinase 3/genetics, Leukemia, Myeloid, Acute/genetics, Leukemia, Myeloid, Acute/drug therapy, Leukemia, Myeloid, Acute/therapy, Leukemia, Myeloid, Acute/mortality, Male, Female, Middle Aged, Pyrazines/therapeutic use, Adult, Aniline Compounds/therapeutic use, Aged, Tandem Repeat Sequences, Young Adult, Neoplasm, Residual, Protein Kinase Inhibitors/therapeutic use, Maintenance Chemotherapy, Gene Duplication
ABSTRACT
OBJECTIVE: To test whether treatment of mild chronic hypertension (CHTN) in pregnancy is associated with lower rates of unplanned maternal healthcare utilization postpartum. METHODS: This was a secondary analysis of the CHTN and pregnancy (CHAP) study, a prospective, open-label, pragmatic, multicenter, randomized treatment trial of pregnant people with mild chronic hypertension. All patients with a postpartum follow-up assessment were included. The primary outcome was unplanned healthcare utilization, defined as unplanned postpartum clinic visits, Emergency Department or triage visits, or unplanned hospital admissions within six weeks postpartum. Differences in outcomes were compared between study groups (Active Group: blood pressure goal of <140/90 mm Hg; Control Group: blood pressure goal of <160/105 mm Hg), and factors associated with outcomes were examined using logistic regression. RESULTS: A total of 2,293 patients were included, with 1,157 (50.5%) in the active group and 1,136 (49.5%) in the control group. Rates of unplanned maternal postpartum healthcare utilization did not differ between treatment and control groups (20.2% vs 23.3%, p=0.07, aOR 0.84, 95% CI 0.69-1.03). However, Emergency Department or triage/maternity evaluation unit visits were significantly lower in the Active group (10.2% vs 13.2%, p=0.03, aOR 0.76, 95% CI 0.58-0.99). Higher BMI at enrollment and cesarean delivery were associated with higher odds of unplanned postpartum healthcare utilization. CONCLUSION: While treatment of mild CHTN during pregnancy and postpartum was not significantly associated with overall unplanned healthcare resource utilization, it was associated with lower rates of postpartum Emergency Department and triage visits.
ABSTRACT
OBJECTIVE: To evaluate maternal and neonatal outcomes by type of antihypertensive used in participants of the CHAP (Chronic Hypertension in Pregnancy) trial. METHODS: We conducted a planned secondary analysis of CHAP, an open-label, multicenter, randomized trial of antihypertensive treatment compared with standard care (no treatment unless severe hypertension developed) in pregnant patients with mild chronic hypertension (blood pressure 140-159/90-104 mm Hg before 20 weeks of gestation) and singleton pregnancies. We performed three comparisons based on medications prescribed at enrollment: labetalol compared with standard care, nifedipine compared with standard care, and labetalol compared with nifedipine. Although active compared with standard care groups were randomized, medication assignment within the active treatment group was not random but based on clinician or patient preference. The primary outcome was the occurrence of superimposed preeclampsia with severe features, preterm birth before 35 weeks of gestation, placental abruption, or fetal or neonatal death. The key secondary outcome was small for gestational age (SGA) neonates. We also compared medication adverse effects between groups. Relative risks (RRs) and 95% CIs were estimated with log binomial regression to adjust for confounding. RESULTS: Of 2,292 participants analyzed, 720 (31.4%) received labetalol, 417 (18.2%) received nifedipine, and 1,155 (50.4%) received no treatment. The mean gestational age at enrollment was 10.5±3.7 weeks; nearly half of participants (47.5%) identified as non-Hispanic Black; and 44.5% used aspirin. The primary outcome occurred in 217 (30.1%), 130 (31.2%), and 427 (37.0%) in the labetalol, nifedipine, and standard care groups, respectively. 
Risk of the primary outcome was lower among those receiving treatment (labetalol vs standard care: adjusted RR 0.82, 95% CI 0.72-0.94; nifedipine vs standard care: adjusted RR 0.84, 95% CI 0.71-0.99), but there was no significant difference in risk when labetalol was compared with nifedipine (adjusted RR 0.98, 95% CI 0.82-1.18). There were no significant differences in SGA or serious adverse events between participants receiving labetalol and those receiving nifedipine. CONCLUSION: No significant differences in predetermined maternal or neonatal outcomes were detected on the basis of the use of labetalol or nifedipine for treatment of chronic hypertension in pregnancy. CLINICAL TRIAL REGISTRATION: ClinicalTrials.gov, NCT02299414.
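The unadjusted relative risk behind figures like these can be reproduced from the raw counts reported above (labetalol 217/720 vs standard care 427/1155). A minimal sketch in Python (standard library only), using a Wald confidence interval on the log scale; note this is a simple two-by-two calculation, not the trial's log binomial regression, which additionally adjusts for confounders:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted relative risk with a 95% Wald CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # standard error of log(RR) for two independent binomial proportions
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Primary-outcome counts from the abstract: labetalol 217/720, standard care 427/1155
rr, lo, hi = risk_ratio(217, 720, 427, 1155)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # close to the reported adjusted RR 0.82 (0.72-0.94)
```

The unadjusted estimate lands near the adjusted one here, which is expected when randomization balances the confounders well.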
Subjects
Antihypertensive Agents, Hypertension, Labetalol, Nifedipine, Pregnancy Outcome, Humans, Pregnancy, Female, Labetalol/administration & dosage, Labetalol/adverse effects, Labetalol/therapeutic use, Nifedipine/administration & dosage, Nifedipine/adverse effects, Nifedipine/therapeutic use, Antihypertensive Agents/administration & dosage, Antihypertensive Agents/adverse effects, Antihypertensive Agents/therapeutic use, Adult, Hypertension/drug therapy, Infant, Newborn, Pregnancy Complications, Cardiovascular/drug therapy, Hypertension, Pregnancy-Induced/drug therapy, Administration, Oral, Infant, Small for Gestational Age, Pre-Eclampsia/drug therapy, Chronic Disease
ABSTRACT
BACKGROUND: Current dosing practices for warfarin are empiric and result in the need for frequent dose changes as the international normalized ratio (INR) rises above or falls below the therapeutic range. As a result, patients are put at increased risk for thromboembolism, bleeding, and premature discontinuation of anticoagulation therapy. Prior research has identified clinical and genetic factors that can alter warfarin dose requirements, but few randomized clinical trials have examined the utility of using clinical and genetic information to improve anticoagulation control or clinical outcomes among a large, diverse group of patients initiating warfarin. METHODS: The COAG trial is a multicenter, double-blind, randomized trial comparing 2 approaches to guiding warfarin therapy initiation: initiation of warfarin therapy based on algorithms using clinical information plus an individual's genotype using genes known to influence warfarin response ("genotype-guided dosing") versus only clinical information ("clinical-guided dosing") (www.clinicaltrials.gov Identifier: NCT00839657). RESULTS: The COAG trial design is described. The study hypothesis is that, among 1,022 enrolled patients, genotype-guided dosing relative to clinical-guided dosing during the initial dosing period will increase the percentage of time that patients spend in the therapeutic INR range in the first 4 weeks of therapy. CONCLUSION: The COAG trial will determine if genetic information provides added benefit above and beyond clinical information alone.
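The COAG primary end point, percentage of time in the therapeutic INR range, is conventionally computed with the Rosendaal linear-interpolation method. A minimal illustrative sketch, assuming a 2.0-3.0 target range and made-up readings; the trial's exact computation may differ:

```python
def ttr_rosendaal(measurements, low=2.0, high=3.0):
    """Percentage of time in the therapeutic INR range (Rosendaal method).

    measurements: list of (day, inr) pairs sorted by day; INR is assumed
    to change linearly between consecutive measurements.
    """
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(measurements, measurements[1:]):
        span = d1 - d0
        if span <= 0:
            continue
        total += span
        if i0 == i1:
            frac = 1.0 if low <= i0 <= high else 0.0
        else:
            # fraction of the linear segment lying inside [low, high]
            t0 = (low - i0) / (i1 - i0)
            t1 = (high - i0) / (i1 - i0)
            lo_t, hi_t = sorted((t0, t1))
            frac = max(0.0, min(1.0, hi_t) - max(0.0, lo_t))
        in_range += frac * span
    return 100.0 * in_range / total if total else 0.0

# Hypothetical example: four INR checks over 21 days
readings = [(0, 1.5), (7, 2.5), (14, 3.5), (21, 2.5)]
print(ttr_rosendaal(readings))  # 50.0: half of each 7-day interval lies within 2.0-3.0
```

Interpolating between visits, rather than counting only the measured values, is what lets sparse INR checks yield a continuous time-in-range estimate.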
Subjects
Anticoagulants/administration & dosage, Blood Coagulation/genetics, Hemorrhage/chemically induced, Thromboembolism/etiology, Warfarin/administration & dosage, Dose-Response Relationship, Drug, Double-Blind Method, Genotype, Humans, International Normalized Ratio, Treatment Outcome, United States
ABSTRACT
The complexity of standard medical treatment for heart failure is growing, and such therapy typically involves 5 or more different medications. Given this complexity, there is increasing interest in harnessing cardiovascular biomarkers for clinical application to more effectively guide diagnosis, risk stratification, and therapy. Although the direct mechanistic coupling of biologic processes and therapies achieved in cancer treatment remains elusive in heart failure, it may be possible to realize an era of personalized medicine for heart failure treatment in which therapy is optimized and costs are controlled. Recent clinical trials and meta-analyses of biomarkers in heart failure have produced conflicting evidence. In this article, which comprises a summary of discussions from the Global Cardiovascular Clinical Trialists Forum held in Paris, France, we offer a brief overview of the background and rationale for biomarker testing in heart failure, describe opportunities and challenges from a regulatory perspective, and summarize current positions from government agencies in the United States and European Union.