ABSTRACT
Williams-Beuren syndrome (WBS) is a rare genetic condition caused by a chromosomal microdeletion at 7q11.23. It is a multisystem disorder characterized by distinct facies, intellectual disability, and supravalvar aortic stenosis (SVAS). Those with WBS are at increased risk of sudden death, but the underlying mechanisms remain poorly understood. We recently demonstrated autonomic abnormalities in those with WBS that are associated with increased susceptibility to arrhythmia and sudden cardiac death (SCD). A recently introduced method for heart rate variability (HRV) analysis called "heart rate fragmentation" (HRF) correlates with adverse cardiovascular events and death in studies where HRV failed to identify high-risk subjects. Some argue that HRF quantifies nonautonomic cardiovascular modulators. We therefore sought to apply HRF analysis to a WBS cohort to determine 1) whether those with WBS show differences in HRF compared with healthy controls and 2) whether HRF helps characterize HRV abnormalities in those with WBS. Similar to studies of those with coronary artery disease and atherosclerosis, we found significantly higher HRF (4 of 7 metrics) in those with WBS compared with healthy controls. Multivariable analyses showed a weak-to-moderate association between HRF and HRV, suggesting that HRF may reflect characteristics not fully captured by traditional HRV metrics (autonomic markers). We also introduce a new metric inspired by HRF methodology, significant acute rate drop (SARD), which may detect vagal activity more directly.
HRF and SARD may improve on traditional HRV measures in identifying those at greatest risk for SCD, both in those with WBS and in other populations.
NEW & NOTEWORTHY This work is the first to apply heart rate fragmentation analyses to individuals with Williams syndrome and posits that the heart rate fragmentation parameter W3 may enable detection and investigation of phenomena underlying the proarrhythmic short-long-short RR interval sequences known to precede ventricular fibrillation and ventricular tachycardia. It also puts forward a novel method for quantifying sinus arrhythmia and sinus pauses that likely correlate with parasympathetic activity.
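The abstract does not define how HRF is computed. As a point of reference, one widely used HRF metric, the percentage of inflection points (PIP) introduced by Costa and colleagues, can be sketched as follows. This is an illustrative computation, not the authors' exact pipeline; the treatment of zero increments and the choice of denominator are conventions that vary between implementations.

```python
def heart_rate_fragmentation_pip(rr_ms):
    """Percentage of inflection points (PIP) in an RR (NN) interval series.

    An inflection point is a beat where the increment series changes sign,
    i.e., the heart rate switches between acceleration and deceleration.
    Here zero increments are counted as inflections (one common convention).
    """
    deltas = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    inflections = sum(1 for x, y in zip(deltas, deltas[1:]) if x * y <= 0)
    return 100.0 * inflections / len(deltas)
```

A highly fragmented (alternating) series scores near 100, while a smoothly accelerating or decelerating series scores near 0.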
Subjects
Death, Sudden, Cardiac; Heart Rate; Williams Syndrome; Williams Syndrome/physiopathology; Williams Syndrome/genetics; Williams Syndrome/complications; Humans; Death, Sudden, Cardiac/etiology; Female; Male; Adolescent; Adult; Young Adult; Case-Control Studies; Risk Factors; Autonomic Nervous System/physiopathology; Child; Risk Assessment; Arrhythmias, Cardiac/physiopathology; Arrhythmias, Cardiac/genetics; Arrhythmias, Cardiac/diagnosis
ABSTRACT
BACKGROUND: Previous studies reported conflicting results on the relationship between oxytocin use for labor augmentation and the risk of postpartum hemorrhage, probably because it is rather challenging to disentangle oxytocin use from labor dystocia. OBJECTIVE: This study aimed to investigate the independent association between oxytocin use for augmentation and the risk of postpartum hemorrhage by using advanced statistical modeling to control for labor patterns and other covariates. STUDY DESIGN: We used data from 20,899 term, cephalic, singleton pregnancies of patients with spontaneous onset of labor and no previous cesarean delivery from Intermountain Healthcare in Utah in the Consortium on Safe Labor. Presence of postpartum hemorrhage was identified on the basis of a clinical diagnosis. Propensity scores were calculated using a generalized linear mixed model for oxytocin use for augmentation, and covariate balancing generalized propensity score was applied to obtain propensity scores for the duration and total dosage of oxytocin augmentation. A weighted generalized additive mixed model was used to depict dose-response curves between the duration and total dosage of oxytocin augmentation and the outcomes. The average treatment effects of oxytocin use for augmentation on postpartum hemorrhage and estimated blood loss (mL) were assessed by inverse probability weighting of propensity scores. RESULTS: Both the odds of postpartum hemorrhage and the estimated blood loss increased modestly as the duration and/or total dosage of oxytocin used for augmentation increased. However, in comparison with women for whom oxytocin was not used, oxytocin augmentation was not clinically or statistically significantly associated with estimated blood loss (6.5 mL; 95% confidence interval, 2.5-10.3) or postpartum hemorrhage (adjusted odds ratio, 1.02; 95% confidence interval, 0.82-1.24) when rigorously controlling for labor pattern and potential confounders.
The results remained consistent whether or not women with an intrapartum cesarean delivery were included. CONCLUSION: The odds of postpartum hemorrhage and the estimated blood loss increased modestly with increasing duration and total dosage of oxytocin augmentation. However, in comparison with women for whom oxytocin was not used and after controlling for potential confounders, there was no clinically significant association between oxytocin use for augmentation and estimated blood loss or the risk of postpartum hemorrhage.
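The average-treatment-effect step described above can be illustrated with a minimal Horvitz-Thompson-style inverse probability weighting sketch. The study itself estimated propensity scores with a generalized linear mixed model and used weighted generalized additive mixed models for the dose-response curves; the variable names here are hypothetical.

```python
def ipw_ate(outcomes, treated, propensity):
    """Inverse-probability-weighted estimate of the average treatment effect.

    outcomes[i]: observed outcome (e.g., estimated blood loss in mL)
    treated[i]: 1 if oxytocin augmentation was used, else 0
    propensity[i]: estimated P(treated | covariates) from any fitted model
    """
    n = len(outcomes)
    # Weight each observation by the inverse probability of the arm it is in
    treated_mean = sum(y * t / p for y, t, p in zip(outcomes, treated, propensity)) / n
    control_mean = sum(y * (1 - t) / (1 - p) for y, t, p in zip(outcomes, treated, propensity)) / n
    return treated_mean - control_mean
```

With correctly specified propensity scores, this contrast estimates the difference in mean outcome had everyone versus no one received augmentation.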
Subjects
Labor, Obstetric; Oxytocics; Postpartum Hemorrhage; Pregnancy; Humans; Female; United States/epidemiology; Oxytocin/adverse effects; Postpartum Hemorrhage/etiology; Retrospective Studies; Labor, Induced/adverse effects; Oxytocics/adverse effects
ABSTRACT
Consider the choice of outcome for overall treatment benefit in a clinical trial that measures the first time to each of several clinical events. We describe several new variants of the win ratio that incorporate the time spent in each clinical state over the common follow-up, where clinical state means the worst clinical event that has occurred by that time. One version allows restriction so that death during follow-up is most important, while time spent in other clinical states is still accounted for. Three other variants are described: one is based on the average pairwise win time, one creates a continuous outcome for each participant based on expected win times against a reference distribution, and one uses the estimated distributions of clinical state to compare the treatment arms. Finally, a combination testing approach is described to give robust power for detecting treatment benefit across a broad range of alternatives. These new methods are designed to be closer to the overall treatment benefit/harm from a patient's perspective than the ordinary win ratio. The new methods are compared to the composite event approach and the ordinary win ratio. Simulations show that when the overall treatment benefit on death is substantial, the variants based on either the participants' expected win times against a reference distribution or the estimated clinical state distributions have substantially higher power than either the pairwise comparison or composite event methods. The methods are illustrated by re-analysis of the trial Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training (HF-ACTION).
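For contrast with the new variants, the ordinary win ratio can be sketched as follows for a common follow-up window with no interim censoring (a simplifying assumption; real analyses must handle censoring). Each participant is summarized as a tuple of event times ordered by clinical priority, with None meaning the event did not occur during follow-up.

```python
def pair_win(a, b):
    """Compare two participants' (death_time, nonfatal_time) tuples in
    priority order: +1 if a wins, -1 if a loses, 0 if tied.
    Avoiding an event beats having it; a later event beats an earlier one."""
    for ta, tb in zip(a, b):
        if ta is None and tb is not None:
            return 1
        if ta is not None and tb is None:
            return -1
        if ta is not None and tb is not None and ta != tb:
            return 1 if ta > tb else -1
    return 0  # tied on every event in the hierarchy

def win_ratio(treatment, control):
    """Ordinary win ratio: total wins over total losses across all
    treatment-versus-control pairs."""
    results = [pair_win(a, b) for a in treatment for b in control]
    return results.count(1) / results.count(-1)
```

A ratio above 1 favors the treatment arm; the time-spent variants in the abstract refine this by weighting how long each participant spends in each clinical state.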
Subjects
Heart Failure; Humans; Endpoint Determination/methods; Data Interpretation, Statistical
ABSTRACT
BACKGROUND: The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional "frequentist" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to make conclusions. Using this approach, we show that the Bayesian efficacy boundary used in mpRCT is actually quite similar to the frequentist Pocock boundary. METHODS: The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline. 
RESULTS: A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary. CONCLUSIONS: In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative approaches, such as the O'Brien-Fleming boundary. This can be accomplished with either Bayesian or frequentist methods.
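The "imaginary prior patients" reading can be made concrete: a normal prior carrying information equivalent to n0 patients shifts the posterior mean toward the prior by the ratio n0/(n0 + n). The sketch below is illustrative only; the per-patient information scale sigma2 and all numbers are assumptions, not the mpRCT's actual model.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posterior_prob_effective(theta_hat, n, n0, prior_mean=0.0, sigma2=4.0,
                             threshold=0.0):
    """Posterior P(log odds ratio > threshold) when a N(prior_mean, sigma2/n0)
    prior is combined with an estimate theta_hat having variance sigma2/n.

    The posterior mean is the patient-weighted average of prior and data:
    n0 imaginary patients at prior_mean plus n actual patients at theta_hat.
    sigma2 = 4.0 is an illustrative per-patient information scale.
    """
    post_mean = (n0 * prior_mean + n * theta_hat) / (n0 + n)
    post_sd = sqrt(sigma2 / (n0 + n))
    return 1.0 - norm_cdf((threshold - post_mean) / post_sd)
```

A monitoring rule then stops for efficacy when this probability exceeds the chosen cutoff (0.99 in the mpRCT); a larger n0 makes the neutral prior pull an observed benefit toward the null and delays crossing.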
ABSTRACT
Rationale: Immature control of breathing is associated with apnea, periodic breathing, intermittent hypoxemia, and bradycardia in extremely preterm infants. However, it is not clear if such events independently predict worse respiratory outcome. Objectives: To determine if analysis of cardiorespiratory monitoring data can predict unfavorable respiratory outcomes at 40 weeks postmenstrual age (PMA) and other outcomes, such as bronchopulmonary dysplasia at 36 weeks PMA. Methods: The Prematurity-related Ventilatory Control (Pre-Vent) study was an observational multicenter prospective cohort study including infants born at <29 weeks of gestation with continuous cardiorespiratory monitoring. The primary outcome was either "favorable" (alive and previously discharged or inpatient and off respiratory medications/O2/support at 40 wk PMA) or "unfavorable" (either deceased or inpatient/previously discharged on respiratory medications/O2/support at 40 wk PMA). Measurements and Main Results: A total of 717 infants were evaluated (median birth weight, 850 g; gestation, 26.4 wk), 53.7% of whom had a favorable outcome and 46.3% of whom had an unfavorable outcome. Physiologic data predicted unfavorable outcome, with accuracy improving with advancing age (area under the curve, 0.79 at Day 7, 0.85 at Day 28 and 32 wk PMA). The physiologic variable that contributed most to prediction was intermittent hypoxemia with oxygen saturation as measured by pulse oximetry <90%. Models with clinical data alone or combining physiologic and clinical data also had good accuracy, with areas under the curve of 0.84-0.85 at Days 7 and 14 and 0.86-0.88 at Day 28 and 32 weeks PMA. Intermittent hypoxemia with oxygen saturation as measured by pulse oximetry <80% was the major physiologic predictor of severe bronchopulmonary dysplasia and death or mechanical ventilation at 40 weeks PMA. 
Conclusions: Physiologic data are independently associated with unfavorable respiratory outcome in extremely preterm infants.
Subjects
Bronchopulmonary Dysplasia; Infant, Extremely Premature; Infant; Infant, Newborn; Humans; Prospective Studies; Respiration, Artificial; Hypoxia
ABSTRACT
The past 20 years witnessed an invigoration of research on labor progression and a change of thinking regarding normal labor. New evidence is emerging, and more advanced statistical methods are applied to labor progression analyses. Given the wide variations in the onset of active labor and the pattern of labor progression, there is an emerging consensus that the definition of abnormal labor may not be related to an idealized or average labor curve. Alternative approaches to guide labor management have been proposed; for example, using an upper limit of a distribution of labor duration to define abnormally slow labor. Nonetheless, the methods of labor assessment are still primitive and subject to error; more objective measures and more advanced instruments are needed to identify the onset of active labor, monitor labor progression, and define when labor duration is associated with maternal/child risk. Cervical dilation alone may be insufficient to define active labor, and incorporating more physical and biochemical measures may improve accuracy of diagnosing active labor onset and progression. Because the association between duration of labor and perinatal outcomes is rather complex and influenced by various underlying and iatrogenic conditions, future research must carefully explore how to integrate statistical cut-points with clinical outcomes to reach a practical definition of labor abnormalities. Finally, research regarding the complex labor process may benefit from new approaches, such as machine learning technologies and artificial intelligence to improve the predictability of successful vaginal delivery with normal perinatal outcomes.
Subjects
Dystocia; Labor, Obstetric; Child; Female; Humans; Pregnancy; Artificial Intelligence; Delivery, Obstetric; Labor Stage, First
ABSTRACT
Jeffries et al. (2018) investigated testing for a treatment difference in the setting of a randomized clinical trial with a single outcome measured longitudinally over a series of common follow-up times while adjusting for covariates. That paper examined the null hypothesis of no difference at any follow-up time versus the alternative of a difference for at least one follow-up time. We extend those results here by considering multivariate outcome measurements, where each individual outcome is examined at common follow-up times. We consider the case where there is interest in first testing for a treatment difference in a global function of the outcomes (e.g., weighted average or sum) with subsequent interest in examining the individual outcomes, should the global function show a treatment difference. Testing is conducted for each follow-up time and may be performed in the setting of a group sequential trial. Testing procedures are developed to determine follow-up times for which a global treatment difference exists and which individual combinations of outcome and follow-up time show evidence of a difference while controlling for multiplicity in outcomes, follow-up, and interim analyses. These approaches are examined in a study evaluating the effects of tissue plasminogen activator on longitudinally obtained stroke severity measurements.
Subjects
Stroke; Tissue Plasminogen Activator; Humans; Tissue Plasminogen Activator/therapeutic use; Longitudinal Studies; Research Design; Stroke/drug therapy
ABSTRACT
Importance: Preclinical models suggest dysregulation of the renin-angiotensin system (RAS) caused by SARS-CoV-2 infection may increase the relative activity of angiotensin II compared with angiotensin (1-7) and may be an important contributor to COVID-19 pathophysiology. Objective: To evaluate the efficacy and safety of RAS modulation using 2 investigational RAS agents, TXA-127 (synthetic angiotensin [1-7]) and TRV-027 (an angiotensin II type 1 receptor-biased ligand), that are hypothesized to potentiate the action of angiotensin (1-7) and mitigate the action of angiotensin II. Design, Setting, and Participants: Two randomized clinical trials including adults hospitalized with acute COVID-19 and new-onset hypoxemia were conducted at 35 sites in the US between July 22, 2021, and April 20, 2022; last follow-up visit: July 26, 2022. Interventions: A 0.5-mg/kg intravenous infusion of TXA-127 once daily for 5 days or placebo. A 12-mg/h continuous intravenous infusion of TRV-027 for 5 days or placebo. Main Outcomes and Measures: The primary outcome was oxygen-free days, an ordinal outcome that classifies a patient's status at day 28 based on mortality and duration of supplemental oxygen use; an adjusted odds ratio (OR) greater than 1.0 indicated superiority of the RAS agent vs placebo. A key secondary outcome was 28-day all-cause mortality. Safety outcomes included allergic reaction, new kidney replacement therapy, and hypotension. Results: Both trials met prespecified early stopping criteria for a low probability of efficacy. Of 343 patients in the TXA-127 trial (226 [65.9%] aged 31-64 years, 200 [58.3%] men, 225 [65.6%] White, and 274 [79.9%] not Hispanic), 170 received TXA-127 and 173 received placebo. Of 290 patients in the TRV-027 trial (199 [68.6%] aged 31-64 years, 168 [57.9%] men, 195 [67.2%] White, and 225 [77.6%] not Hispanic), 145 received TRV-027 and 145 received placebo.
Compared with placebo, both TXA-127 (unadjusted mean difference, -2.3 [95% CrI, -4.8 to 0.2]; adjusted OR, 0.88 [95% CrI, 0.59 to 1.30]) and TRV-027 (unadjusted mean difference, -2.4 [95% CrI, -5.1 to 0.3]; adjusted OR, 0.74 [95% CrI, 0.48 to 1.13]) resulted in no difference in oxygen-free days. In the TXA-127 trial, 28-day all-cause mortality occurred in 22 of 163 patients (13.5%) in the TXA-127 group vs 22 of 166 patients (13.3%) in the placebo group (adjusted OR, 0.83 [95% CrI, 0.41 to 1.66]). In the TRV-027 trial, 28-day all-cause mortality occurred in 29 of 141 patients (20.6%) in the TRV-027 group vs 18 of 140 patients (12.9%) in the placebo group (adjusted OR, 1.52 [95% CrI, 0.75 to 3.08]). The frequency of the safety outcomes was similar with either TXA-127 or TRV-027 vs placebo. Conclusions and Relevance: In adults with severe COVID-19, RAS modulation (TXA-127 or TRV-027) did not improve oxygen-free days vs placebo. These results do not support the hypotheses that pharmacological interventions that selectively block the angiotensin II type 1 receptor or increase angiotensin (1-7) improve outcomes for patients with severe COVID-19. Trial Registration: ClinicalTrials.gov Identifier: NCT04924660.
Subjects
COVID-19; Receptor, Angiotensin, Type 1; Renin-Angiotensin System; Vasodilator Agents; Adult; Female; Humans; Male; Middle Aged; Angiotensin II/metabolism; Angiotensins/administration & dosage; Angiotensins/therapeutic use; COVID-19/complications; COVID-19/mortality; COVID-19/physiopathology; COVID-19/therapy; Hypoxia/drug therapy; Hypoxia/etiology; Hypoxia/mortality; Infusions, Intravenous; Ligands; Oligopeptides/administration & dosage; Oligopeptides/therapeutic use; Randomized Controlled Trials as Topic; Receptor, Angiotensin, Type 1/administration & dosage; Receptor, Angiotensin, Type 1/therapeutic use; Renin-Angiotensin System/drug effects; SARS-CoV-2; Vasodilator Agents/administration & dosage; Vasodilator Agents/therapeutic use
ABSTRACT
For semi-competing risks data involving a non-terminal event and a terminal event, we derive the asymptotic distributions of the event-specific win ratios under proportional hazards (PH) assumptions for the relevant cause-specific hazard functions of the non-terminal and terminal events, respectively. The win ratios converge to the respective hazard ratios under the PH assumptions and therefore are censoring-free, whether or not the censoring distributions in the two treatment arms are the same. With the asymptotic bivariate normal distributions of the win ratios, confidence intervals and testing procedures are obtained. Through extensive simulation studies and data analysis, we identified proper transformations of the win ratios that yield good control of the type I error rate for various testing procedures while maintaining competitive power. The confidence intervals also have good coverage probabilities. Furthermore, a test for the PH assumptions and a test of equal hazard ratios are developed. The new procedures are illustrated in the clinical trial Aldosterone Antagonist Therapy for Adults With Heart Failure and Preserved Systolic Function (TOPCAT), which evaluated the effects of spironolactone in patients with heart failure and a preserved left ventricular ejection fraction.
Subjects
Heart Failure; Ventricular Function, Left; Adult; Heart Failure/drug therapy; Humans; Mineralocorticoid Receptor Antagonists/therapeutic use; Proportional Hazards Models; Spironolactone/therapeutic use; Stroke Volume
ABSTRACT
BACKGROUND/AIMS: In clinical trials, the primary outcome is often a composite endpoint defined as time to the first occurrence of either death or certain non-fatal events, so a portion of the available data is omitted. In the win ratio approach, priority is given to the clinically more important events, and more of the data are used. However, its power may be low if the treatment effect is predominantly on the non-terminal event. METHODS: We propose event-specific win ratios obtained separately on the terminal and non-terminal events. They can then be used to form global tests such as a linear combination test, the maximum test, or a χ2 test. RESULTS: In simulations, these tests often improve the power of the original win ratio test. Furthermore, when the terminal and non-terminal events experience differential treatment effects, the new tests are often more powerful than the log-rank test for the composite outcome. Whether or not the treatment effect is primarily on the terminal events, the new tests based on the event-specific win ratios can be useful when different types of events are present. The new tests can reject the null hypothesis of no difference in the event distributions in the two treatment arms even when the terminal event shows a detrimental effect and the non-terminal event a beneficial effect. The maximum test and the χ2 test do not have test-estimation coherency, but the maximum test has the coherency that the global null is rejected if and only if the null for one of the event types is rejected. When applied to data from the trial Aldosterone Antagonist Therapy for Adults With Heart Failure and Preserved Systolic Function (TOPCAT), the new tests all reject the null hypothesis of no treatment effect while both the log-rank test used in TOPCAT and the original win ratio approach show non-significant p-values.
CONCLUSION: Whether the treatment effect is primarily on the terminal events or the non-terminal events, the maximum test based on the event-specific win ratios can be a useful alternative for testing treatment effect in clinical trials with time-to-event outcomes when different types of events are present.
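Given standardized log win ratio statistics z1 (terminal event) and z2 (non-terminal event), the maximum and χ2 combination tests can be sketched under an independence assumption. This is a simplification for illustration: the actual procedures use the derived joint bivariate distribution of the event-specific win ratios.

```python
from math import erf, exp, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def max_test_p(z1, z2):
    """p-value of the maximum test: P(max(|Z1|, |Z2|) >= observed) for
    independent standard normal statistics under the null."""
    m = max(abs(z1), abs(z2))
    return 1.0 - (2.0 * norm_cdf(m) - 1.0) ** 2

def chi2_test_p(z1, z2):
    """p-value of the chi-square combination: z1^2 + z2^2 is chi-square with
    2 df under the null, whose survival function is exp(-x/2)."""
    return exp(-(z1 * z1 + z2 * z2) / 2.0)
```

The maximum test preserves the coherency noted above: it rejects the global null exactly when at least one event-specific statistic crosses its adjusted critical value.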
Subjects
Clinical Trials as Topic; Heart Failure; Research Design; Adult; Heart Failure/diagnosis; Heart Failure/drug therapy; Humans
ABSTRACT
BACKGROUND/AIMS: The two-by-two factorial design randomizes participants to receive treatment A alone, treatment B alone, both treatments A and B (AB), or neither treatment (C). When the combined effect of A and B is less than the sum of the A and B effects, called a subadditive interaction, there can be low power to detect the A effect using an overall test, that is, a factorial analysis, which compares the A and AB groups to the C and B groups. Such an interaction may have occurred in the Action to Control Cardiovascular Risk in Diabetes blood pressure trial (ACCORD BP), which simultaneously randomized participants to receive intensive or standard blood pressure control and intensive or standard glycemic control. For the primary outcome of major cardiovascular event, the overall test for efficacy of intensive blood pressure control was nonsignificant. In such an instance, simple effect tests of A versus C and B versus C may be useful since they are not affected by a subadditive interaction, but they can have lower power since they use half the participants of the overall trial. We investigate multiple testing procedures which exploit the overall tests' sample size advantage and the simple tests' robustness to a potential interaction. METHODS: In the time-to-event setting, we use the stratified and ordinary log-rank statistics' asymptotic means to calculate the power of the overall and simple tests under various scenarios. We consider the A and B research questions to be unrelated and allocate a 0.05 significance level to each. For each question, we investigate three multiple testing procedures which allocate the type 1 error in different proportions among the overall and simple effects as well as the AB effect.
The Equal Allocation 3 procedure allocates equal type 1 error to each of the three effects, the Proportional Allocation 2 procedure allocates 2/3 of the type 1 error to the overall A (respectively, B) effect and the remaining type 1 error to the AB effect, and the Equal Allocation 2 procedure allocates equal amounts to the simple A (respectively, B) and AB effects. These procedures are applied to ACCORD BP. RESULTS: Across various scenarios, Equal Allocation 3 had robust power for detecting a true effect. For ACCORD BP, all three procedures would have detected a benefit of intensive glycemia control. CONCLUSIONS: When there is no interaction, Equal Allocation 3 has less power than a factorial analysis. However, Equal Allocation 3 often has greater power when there is an interaction. The R package factorial2x2 can be used to explore the power gain or loss for different scenarios.
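The three error-allocation schemes described above can be written down directly. This Bonferroni-style sketch ignores the correlation between the overall, simple, and AB test statistics that the actual procedures (implemented in the factorial2x2 R package) exploit, so it is conservative.

```python
def alpha_allocation(procedure, alpha=0.05):
    """Significance-level split for one research question (shown for A;
    the B question gets its own 0.05 by symmetry)."""
    if procedure == "Equal Allocation 3":
        # equal type 1 error to the overall, simple, and AB effects
        return {"overall A": alpha / 3, "simple A": alpha / 3, "AB": alpha / 3}
    if procedure == "Proportional Allocation 2":
        # 2/3 of the type 1 error to the overall effect, the rest to AB
        return {"overall A": 2 * alpha / 3, "AB": alpha / 3}
    if procedure == "Equal Allocation 2":
        # equal amounts to the simple and AB effects
        return {"simple A": alpha / 2, "AB": alpha / 2}
    raise ValueError(f"unknown procedure: {procedure}")
```

Each split spends the full 0.05 for its question, trading power on the overall test against robustness to a subadditive interaction.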
Subjects
Research Design; Blood Pressure; Humans; Sample Size
ABSTRACT
There is no established method for processing data from commercially available physical activity trackers. This study aims to develop a standardized approach to defining valid wear time for use in future interventions and analyses. Sixteen African American women (mean age = 62.1 years and mean body mass index = 35.5 kg/m²) wore the Fitbit Charge 2 for 20 days. Method 1 defined a valid day as ≥10 hr of wear time with heart rate data. Method 2 removed minutes without heart rate data, minutes with a heart rate at least 2 SDs below the mean accompanied by ≤2 steps, and nighttime minutes. Linear regression modeled the change in steps per day across weeks. Using Method 1 (n = 292 person-days), participants had 20.5 (SD = 4.3) hr of wear time per day compared with 16.3 (SD = 2.2) hr using Method 2 (n = 282) (p < .0001). With Method 1, participants took 7,436 (SD = 3,543) steps per day compared with 7,298 (SD = 3,501) steps per day with Method 2 (p = .64). The proposed algorithm represents a novel approach to standardizing data generated by physical activity trackers. Future studies are needed to improve the accuracy of physical activity data sets.
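The minute-level filter of Method 2 can be sketched as follows. The record layout and the nighttime window are hypothetical assumptions (the abstract does not specify clock hours); only the heart rate and step thresholds follow the description above.

```python
def method2_valid_minutes(minutes, night_hours=range(0, 6)):
    """Filter minute-level tracker records with a Method-2-style rule:
    drop minutes without heart rate, nighttime minutes, and minutes with a
    heart rate at least 2 SDs below the mean combined with <= 2 steps.

    minutes: list of dicts with keys 'hour', 'hr' (None if missing), 'steps'.
    The dict layout and the 0:00-6:00 nighttime window are assumptions.
    """
    with_hr = [m for m in minutes if m["hr"] is not None]
    mean_hr = sum(m["hr"] for m in with_hr) / len(with_hr)
    sd_hr = (sum((m["hr"] - mean_hr) ** 2 for m in with_hr) / len(with_hr)) ** 0.5
    kept = []
    for m in with_hr:
        if m["hour"] in night_hours:
            continue  # nighttime removed
        if m["hr"] <= mean_hr - 2 * sd_hr and m["steps"] <= 2:
            continue  # likely non-wear: very low heart rate with no stepping
        kept.append(m)
    return kept
```

Valid wear time per day is then simply the count of kept minutes, and a valid day can be defined by a minimum daily total.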
Subjects
Exercise; Fitness Trackers; Algorithms; Body Mass Index; Female; Heart Rate; Humans
ABSTRACT
BACKGROUND: Short- and long-term effects of mobilization regimens in hematopoietic stem cell and granulocyte donors have been well characterized. In this study, we examined the longitudinal hematopoietic changes related to repeat stimulated granulocyte donation. STUDY DESIGN AND METHODS: Complete blood counts for consecutive granulocyte donors between October 1994 and May 2017 were compared with those of unstimulated granulocyte donors. Plateletpheresis donors served as controls. The longitudinal change in precollection white blood cell (WBC) counts for these donor groups was modeled using a linear mixed-effects model. The investigated variables were granulocyte, lymphocyte, and monocyte counts and the granulocyte collection yield. Contrasts were performed to explore the effect of donation number on precollection counts. RESULTS: For the granulocyte-colony-stimulating factor plus dexamethasone (G-CSF/Dex)-stimulated group, the granulocyte and lymphocyte counts decreased by 6.51 × 10⁹/L (-23.1%, p < 0.001) and 0.21 × 10⁹/L (-20.4%, p < 0.001), respectively, between Donation 1 and Donation 20. This effect was still present at the 3- to 4-year interval (b = -0.0008313, SE = 0.00029, p = 0.004). For the unstimulated donor group, between Donation 1 and Donation 20 the lymphocyte count decreased by 0.62 × 10⁹/L (-51.5%, p < 0.001). This effect was only significant up to Year 2 (b = -0.0026, SE = 0.0010, p = 0.013). CONCLUSIONS: Past granulocyte donations were found to have a statistically strong negative effect on precollection granulocyte and lymphocyte counts and decreased granulocyte yield in both the G-CSF/Dex-stimulated donors and the unstimulated donors. In this statistical model, for both groups, the effect of past donations on granulocyte and WBC counts was still detectable 2 years later.
Subjects
Blood Donors/statistics & numerical data; Granulocytes/cytology; Adult; Aged; Aged, 80 and over; Female; Hematopoiesis/physiology; Hematopoietic Stem Cell Mobilization; Humans; Male; Middle Aged; Young Adult
ABSTRACT
OBJECTIVE: To determine if daily respiratory status improved more in extremely low gestational age (GA) premature infants after diuretic exposure compared with those not exposed in modern neonatal intensive care units. STUDY DESIGN: The Prematurity and Respiratory Outcomes Program (PROP) was a multicenter observational cohort study of 835 extremely premature infants, GAs of 23 0/7-28 6/7 weeks, enrolled in the first week of life from 13 US tertiary neonatal intensive care units. We analyzed the PROP study daily medication and respiratory support records of infants ≤34 weeks postmenstrual age. We determined whether there was a temporal association between the administration of diuretics and an acute change in respiratory status in premature infants in the neonatal intensive care unit, using an ordered categorical ranking of respiratory status. RESULTS: Infants in the diuretic-exposed group of PROP had lower mean GA and lower mean birth weight (P < .0001). Compared with infants unexposed to diuretics, the probability (adjusted for infant characteristics including GA, birth weight, sex, and respiratory status before receiving diuretics) that exposed infants were on a higher level of respiratory support was significantly greater (OR > 1) for each day after the initial day of diuretic exposure. CONCLUSIONS: Our analysis did not support the ability of diuretics to substantially improve the extremely premature infant's respiratory status. Further study of both the safety and efficacy of diuretics in this setting is warranted. TRIAL REGISTRATION: Clinicaltrials.gov: NCT01435187.
Subjects
Diuretics/therapeutic use; Infant, Extremely Premature/physiology; Respiratory Distress Syndrome, Newborn/drug therapy; Airway Management/methods; Cohort Studies; Female; Gestational Age; Humans; Infant, Newborn; Intensive Care Units, Neonatal; Male; Respiration; Respiratory Distress Syndrome, Newborn/physiopathology; United States
ABSTRACT
In longitudinal studies comparing two treatments over a series of common follow-up measurements, there may be interest in determining if there is a treatment difference at any follow-up period when there may be a non-monotone treatment effect over time. To evaluate this question, Jeffries and Geller (2015) examined a number of clinical trial designs that allowed adaptive choice of the follow-up time exhibiting the greatest evidence of treatment difference in a group sequential testing setting with Gaussian data. The methods are applicable when a few measurements were taken at prespecified follow-up periods. Here, we test the intersection null hypothesis of no difference at any follow-up time versus the alternative that there is a difference for at least one follow-up time. Results of Jeffries and Geller (2015) are extended by considering a broader range of modeled data and the inclusion of covariates using generalized estimating equations. Testing procedures are developed to determine a set of follow-up times that exhibit a treatment difference that accounts for multiplicity in follow-up times and interim analyses.
Subjects
Analysis of Variance; Longitudinal Studies; Research Design; Clinical Trials as Topic; Follow-Up Studies; Humans
ABSTRACT
We investigate different primary efficacy analysis approaches for a 2-armed randomized clinical trial when interest is focused on a time-to-event primary outcome that is subject to a competing risk. We extend the work of Freidlin and Korn (2005) by considering estimation as well as testing and by simulating the primary and competing events' times from both a cause-specific hazards model and a joint subdistribution-cause-specific hazards model. We show that the cumulative incidence function can provide useful prognostic information for a particular patient but is not advisable for the primary efficacy analysis. Instead, it is preferable to fit a Cox model for the primary event that treats the competing event as independent censoring. This is reasonably robust for controlling type I error and treatment effect bias with respect to the true primary and competing events' cause-specific hazards model, even when there is a shared, moderately prognostic, unobserved baseline frailty for the primary and competing events in that model. However, when it is plausible that a strongly prognostic frailty exists, combining the primary and competing events into a composite event should be considered. Finally, when there is an a priori interest in having both the primary and competing events in the primary analysis, we compare a bivariate approach for establishing overall treatment efficacy to the composite event approach. The ideas are illustrated by analyzing the Women's Health Initiative clinical trials sponsored by the National Heart, Lung, and Blood Institute.
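The cumulative incidence function mentioned above has a standard nonparametric estimator (Aalen-Johansen) that is easy to sketch; the toy data and coding scheme below are hypothetical and this is not the paper's simulation code.

```python
import numpy as np

def cuminc(times, events, cause=1):
    """Aalen-Johansen cumulative incidence for `cause`.
    events: 0 = censored; 1, 2, ... = distinct event causes.
    Assumes no tied event times for simplicity."""
    order = np.argsort(times)
    t, e = np.asarray(times, float)[order], np.asarray(events)[order]
    n = len(t)
    surv, cif = 1.0, 0.0          # overall KM survival, running incidence
    out = []
    for i, (ti, ei) in enumerate(zip(t, e)):
        at_risk = n - i
        if ei == cause:
            cif += surv / at_risk        # S(t-) * d_cause / n_at_risk
        if ei != 0:
            surv *= 1.0 - 1.0 / at_risk  # update overall survival
        out.append((ti, cif))
    return np.array(out)

# Toy data: cause 1 = primary event, cause 2 = competing event, 0 = censored.
res = cuminc([1, 2, 3, 4], [1, 2, 1, 0], cause=1)
cif_primary = res[:, 1]
```

In contrast, the cause-specific Cox model the abstract recommends for the primary efficacy analysis would simply recode cause-2 events as censored and fit a standard proportional hazards model.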
ABSTRACT
In an observational study of the effect of a treatment on a time-to-event outcome, a major problem is accounting for confounding due to unknown or unmeasured factors. We propose including covariates in a Cox model that can partially account for an unknown time-independent frailty that is related to starting or stopping treatment as well as to the outcome of interest. These covariates capture the times at which treatment is started or stopped and so are called treatment choice (TC) covariates. Three such models are developed: first, an interval TC model that assumes a very general form for the respective hazard functions of starting treatment, stopping treatment, and the outcome of interest; second, a parametric TC model that assumes that the log hazard functions for starting treatment, stopping treatment, and the outcome event include the frailty as an additive term; and third, a hybrid TC model that combines attributes of the parametric and interval TC models. As compared with an ordinary Cox model, the TC models are shown to substantially reduce the bias of the estimated hazard ratio for treatment when data are simulated from a realistic Cox model with residual confounding due to the unobserved frailty. The simulations also indicate that the bias decreases or levels off as the sample size increases. A TC model is illustrated by analyzing the Women's Health Initiative Observational Study of hormone replacement for post-menopausal women. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
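As a rough illustration of the interval TC idea, the sketch below bins hypothetical treatment start times into intervals and expands them into indicator covariates that could sit alongside the treatment term in a Cox model. The cutpoints, the coding, and the restriction to start times (ignoring stopping) are all assumptions made here for illustration.

```python
import numpy as np

# Hypothetical treatment start times in years from study entry;
# np.inf marks subjects who never started treatment.
start_time = np.array([0.4, 2.1, np.inf, 5.0, 0.9, np.inf])

# Interval TC covariates: categorize *when* treatment was started, so the
# Cox model can absorb frailty-related differences among early, late,
# and never starters.
cuts = [1.0, 3.0]                      # hypothetical interval cutpoints
group = np.digitize(start_time, cuts)  # 0: <1y, 1: 1-3y, 2: >=3y
group = np.where(np.isinf(start_time), 3, group)  # 3: never started

# One-hot TC covariate columns for the regression design matrix.
tc = np.eye(4, dtype=int)[group]
```

The full interval TC model would do the analogous construction for stopping times as well.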
Subjects
Confounding Factors, Epidemiologic; Observational Studies as Topic/methods; Proportional Hazards Models; Treatment Outcome; Bias; Computer Simulation; Female; Hormone Replacement Therapy; Humans; Incidence; Postmenopause
ABSTRACT
Many folate-related genes have been investigated for possible causal roles in neural tube defects (NTDs) and oral clefts. However, no previous reports have examined the major gene responsible for folate uptake, the proton-coupled folate transporter (SLC46A1). We tested for association between these birth defects and single nucleotide polymorphisms in the SLC46A1 gene. The NTD study population included 549 complete and incomplete case-family triads and 999 controls from Ireland. The oral clefts study population comprised a sample from Utah (495 complete and incomplete case-family triads and 551 controls) and 221 Filipino multiplex cleft families. There was suggestive evidence of increased NTD case risk with the rs17719944 minor allele (odds ratio (OR): 1.29; 95% confidence interval (CI): 1.00-1.67), and decreased maternal risk of an NTD pregnancy with the rs4795436 minor allele (OR: 0.62; 95% CI: 0.39-0.99). In the Utah sample, the rs739439 minor allele was associated with decreased case risk for cleft lip with cleft palate (genotype relative risk (GRR): 0.56; 95% CI: 0.32-0.98). Additionally, the rs2239907 minor allele was associated with decreased case risk for cleft lip with cleft palate in several models, and with cleft palate only in a recessive model (OR: 0.41; 95% CI: 0.20-0.85). These associations did not remain statistically significant after correcting for multiple hypothesis testing. Nominal associations between SLC46A1 polymorphisms and both Irish NTDs and oral clefts in the Utah population suggest some role in the etiology of these birth defects, but further investigation in other populations is needed.
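The case-control odds ratios and Wald confidence intervals reported above follow a standard calculation from a 2x2 table of allele counts. The counts below are hypothetical (chosen only to give an OR near 1.3), not the study's data.

```python
import math

# Hypothetical 2x2 allele counts (not the study's data):
#                minor allele   major allele
# cases      (a)     180        (b)   920
# controls   (c)     260        (d)  1740
a, b, c, d = 180, 920, 260, 1740

orr = (a * d) / (b * c)                      # cross-product odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR) (Woolf)
lo = math.exp(math.log(orr) - 1.96 * se)     # 95% CI lower bound
hi = math.exp(math.log(orr) + 1.96 * se)     # 95% CI upper bound
```

A lower bound at or above 1.0, as for rs17719944, is why the abstract describes the evidence as only "suggestive"; the case-family triads additionally require transmission-based methods not shown here.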
Subjects
Cleft Lip/genetics; Neural Tube Defects/genetics; Polymorphism, Single Nucleotide; Proton-Coupled Folate Transporter/genetics; Alleles; Case-Control Studies; Gene Frequency; Genetic Association Studies; Genotype; Humans; Risk Factors
ABSTRACT
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that the main difference in statistical modeling principles between the original Friedman curve and our average labor curve is whether the data are asked to fit a preconceived model or a sufficiently flexible model is allowed to fit the observed data. An evidence-based approach to constructing a labor curve and establishing normal values should allow the statistical model to fit the observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase, with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor patterns and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and the contemporary obstetric population into account.
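To illustrate the contrast drawn above, the sketch below lets a flexible curve fit averaged dilation data rather than imposing a preconceived functional form; a low-order polynomial stands in for the spline or repeated-measures methods actually used in labor-curve work, and all numbers are hypothetical.

```python
import numpy as np

# Hypothetical averaged labor data: hours since 2 cm vs. cervical dilation (cm).
time = np.array([0, 2, 4, 6, 7, 8, 8.5, 9, 9.5, 10])
dilation = np.array([2.0, 2.6, 3.4, 4.6, 5.6, 7.0, 7.9, 8.8, 9.5, 10.0])

# Flexible fit: let the model follow the observed data instead of forcing
# an S-shaped curve with a built-in deceleration phase.
coef = np.polyfit(time, dilation, deg=3)
fitted = np.polyval(coef, time)
rmse = float(np.sqrt(np.mean((fitted - dilation) ** 2)))
```

A rigid preconceived form would be fit the other way around: fix the shape first, then force the data onto it, which is the practice the abstract argues against.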