Results 1 - 20 of 67
1.
Biostatistics ; 25(1): 220-236, 2023 12 15.
Article in English | MEDLINE | ID: mdl-36610075

ABSTRACT

Trial-level surrogates are useful tools for improving the speed and cost-effectiveness of trials, but surrogates that have not been properly evaluated can produce misleading results. The evaluation procedure is often contextual and depends on the type of trial setting. Many methods have been proposed for trial-level surrogate evaluation, but none, to our knowledge, for the specific setting of platform studies. As platform studies become more popular, methods for surrogate evaluation that use them are needed. These studies also offer a rich data resource for surrogate evaluation that would not normally be available. However, they also raise a set of statistical issues, including heterogeneity of the study population, treatments, implementation, and even potentially the quality of the surrogate. We propose a hierarchical Bayesian semiparametric model for the evaluation of potential surrogates, using nonparametric priors for the distribution of true effects based on Dirichlet process mixtures. The motivation for this approach is to flexibly model the relationship between the treatment effect on the surrogate and the treatment effect on the outcome, and to identify, in a data-driven manner, potential clusters with differential surrogate value, so that treatment effects on the surrogate can be used to reliably predict treatment effects on the clinical outcome. In simulations, we find that our proposed method is superior to a simple, but fairly standard, hierarchical Bayesian method. We demonstrate how our method can be used in a simulated illustrative example (based on the ProBio trial), in which we are able to identify clusters where the surrogate is, and is not, useful. We plan to apply our method to the ProBio trial once it is completed.
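
As a point of reference for the "simple, but fairly standard" comparator mentioned above, trial-level surrogacy is often summarized by the strength of the association between study-level treatment effects on the surrogate and on the clinical outcome. A minimal sketch on simulated study-level effects (all numbers hypothetical; the paper's method uses Dirichlet process mixture priors, not this plain regression):

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 40

# True study-level treatment effects: effect on the surrogate (alpha) and on
# the clinical outcome (beta), linearly related across studies
alpha = rng.normal(0.5, 0.3, n_studies)
beta = 0.2 + 0.9 * alpha + rng.normal(0.0, 0.05, n_studies)

# Observed effects carry within-study estimation error
alpha_hat = alpha + rng.normal(0.0, 0.05, n_studies)
beta_hat = beta + rng.normal(0.0, 0.05, n_studies)

# Trial-level surrogacy: regress estimated clinical effects on estimated
# surrogate effects and summarize the strength of the association (R^2)
X = np.column_stack([np.ones(n_studies), alpha_hat])
coef, *_ = np.linalg.lstsq(X, beta_hat, rcond=None)
resid = beta_hat - X @ coef
r2 = 1 - resid.var() / beta_hat.var()
print(f"slope: {coef[1]:.2f}, trial-level R^2: {r2:.2f}")
```

A high trial-level R^2 suggests effects on the surrogate predict effects on the outcome; the clustering the abstract describes would allow this association to differ across latent subgroups of studies.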


Subject(s)
Clinical Trials as Topic , Humans , Bayes Theorem , Treatment Outcome
2.
Biostatistics ; 24(4): 1017-1030, 2023 10 18.
Article in English | MEDLINE | ID: mdl-36050911

ABSTRACT

When multiple mediators are present, there are additional effects that may be of interest beyond the well-known natural direct effects (NDE) and controlled direct effects (CDE). These effects cross the types of control on the mediators, setting one to a constant level and the other to its natural level, which differs across subjects. We introduce five such estimands for the cross-CDE and cross-NDE when two mediators are measured. We consider both the scenario where one mediator is influenced by the other, referred to as sequential mediators, and the scenario where the mediators do not influence each other. Such estimands may be of interest in immunology, as we discuss in relation to measured immunological responses to SARS-CoV-2 vaccination. We provide identifying expressions for the estimands in observational settings where there is no residual confounding and where the intervention, outcome, and mediators are of arbitrary type. We further provide tight symbolic bounds for the estimands in randomized settings where there may be residual confounding of the outcome-mediator relationship and all measured variables are binary.


Subject(s)
COVID-19 , Models, Statistical , Humans , COVID-19 Vaccines , COVID-19/prevention & control , SARS-CoV-2
3.
Stat Med ; 43(3): 534-547, 2024 02 10.
Article in English | MEDLINE | ID: mdl-38096856

ABSTRACT

There are now many options for doubly robust estimation; however, there is a concerning trend in the applied literature to believe that the combination of a propensity score and an adjusted outcome model automatically results in a doubly robust estimator, and/or to misuse more complex established doubly robust estimators. A simple alternative, canonical-link generalized linear models (GLMs) fit via inverse probability of treatment (propensity score) weighted maximum likelihood estimation followed by standardization (the g-formula) for the average causal effect, is a doubly robust estimation method. Our aim is for the reader not just to be able to use this method, which we refer to as IPTW GLM, for doubly robust estimation, but to fully understand why it has the doubly robust property. For this reason, we define clearly, and in multiple ways, all concepts needed to understand the method and why it is doubly robust. In addition, we want to make very clear that the mere combination of propensity score weighting and an adjusted outcome model does not generally result in a doubly robust estimator. Finally, we hope to dispel the misconception that one can adjust for residual confounding remaining after propensity score weighting by adjusting in the outcome model for what remains 'unbalanced', even when using doubly robust estimators. We provide R code for our simulations and real open-source data examples that can be followed step by step to use, and hopefully understand, the IPTW GLM method. We also compare it to a much better-known, but still simple, doubly robust estimator.
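
The three steps named above (a propensity score model, a weighted canonical-link outcome GLM, then standardization) can be sketched on simulated data. This is an illustrative Python reimplementation under assumed data-generating values, not the paper's R code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated data (assumed values): confounder L, treatment A, binary outcome Y
L = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * L)))
Y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.0 * A + 0.8 * L))))

def fit_logistic(X, y, w=None, n_iter=25):
    """Weighted logistic regression by Newton's method (IRLS)."""
    w = np.ones(len(y)) if w is None else w
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = w * p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (w * (y - p)))
    return beta

# Step 1: propensity score model for treatment given the confounder
Xps = np.column_stack([np.ones(n), L])
ps = 1 / (1 + np.exp(-Xps @ fit_logistic(Xps, A)))
wts = A / ps + (1 - A) / (1 - ps)                 # IPT weights

# Step 2: weighted canonical-link (logit) outcome GLM for Y given A and L
Xout = np.column_stack([np.ones(n), A, L])
beta = fit_logistic(Xout, Y, w=wts)

# Step 3: standardization (g-formula): predict for everyone under A=1 and A=0
pred = lambda a: 1 / (1 + np.exp(-np.column_stack([np.ones(n), np.full(n, a), L]) @ beta))
ate = pred(1).mean() - pred(0).mean()
print(f"doubly robust estimate of the average causal effect: {ate:.3f}")
```

The double robustness comes from combining all three steps; as the abstract stresses, fitting a propensity model and an outcome model side by side, without the weighted fit and the standardization, does not by itself confer the property.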


Subject(s)
Models, Statistical , Humans , Computer Simulation , Data Interpretation, Statistical , Probability , Propensity Score , Linear Models
4.
BMC Pregnancy Childbirth ; 24(1): 25, 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38172881

ABSTRACT

BACKGROUND: To improve future mobile health (mHealth) interventions in resource-limited settings, knowledge of participants' adherence to interactive interventions is needed, but previous studies are limited. We aimed to investigate how women in prevention of mother-to-child transmission of HIV (PMTCT) care in Kenya used, adhered to, and evaluated an interactive text-messaging intervention. METHODS: We conducted a cohort study nested within the WelTel PMTCT trial among 299 pregnant women living with HIV aged ≥ 18 years. They received weekly text messages from their first antenatal care visit until 24 months postpartum asking "How are you?". They were instructed to text within 48 h stating that they were "okay" or had a "problem". Healthcare workers phoned non-responders and problem-responders to manage any issue. We used multivariable-adjusted logistic and negative binomial regression to estimate adjusted odds ratios (aORs), rate ratios (aRRs) and 95% confidence intervals (CIs) to assess associations between baseline characteristics and text responses. Perceptions of the intervention were evaluated through interviewer-administered follow-up questionnaires at 24 months postpartum. RESULTS: The 299 participants sent 15,183 (48%) okay-responses and 438 (1%) problem-responses. There were 16,017 (51%) instances of non-response. The proportion of non-responses increased with time and exceeded 50% around 14 months from enrolment. Most reported problems were health related (84%). Having secondary education was associated with reporting a problem (aOR:1.88; 95%CI: 1.08-3.27) compared to having primary education or less. Younger age (18-24 years) was associated with responding to < 50% of messages (aOR:2.20; 95%CI: 1.03-4.72), compared to being 35-44 years. Women with higher than secondary education were less likely (aOR:0.28; 95%CI: 0.13-0.64), to respond to < 50% of messages compared to women with primary education or less. 
Women who had disclosed their HIV status had a lower rate of non-response (aRR: 0.77; 95% CI: 0.60-0.97). In interviews with 176 women, 167 (95%) agreed or strongly agreed that the intervention had been helpful, mainly by improving access to and communication with their healthcare providers (43%). CONCLUSION: In this observational study, women of younger age, with lower education, and who had not disclosed their HIV status were less likely to adhere to interactive text-messaging. The majority of those still enrolled at the end of the intervention reported that text-messaging had been helpful, mainly by improving access to healthcare providers. Future mHealth interventions aiming to improve PMTCT care should be designed to engage younger women and women with lower education.


Subject(s)
HIV Infections , Text Messaging , Adolescent , Adult , Female , Humans , Pregnancy , Cohort Studies , HIV Infections/drug therapy , HIV Infections/prevention & control , Infectious Disease Transmission, Vertical/prevention & control , Kenya , Young Adult
5.
Br J Cancer ; 128(7): 1278-1285, 2023 03.
Article in English | MEDLINE | ID: mdl-36690722

ABSTRACT

BACKGROUND: Medical advances in the treatment of cancer have allowed the development of multiple approved treatments and prognostic and predictive biomarkers for many types of cancer. Using predictive biomarkers to identify improved treatment strategies among approved treatment options, the study of which is termed comparative effectiveness research, is becoming more common. Randomized controlled trials (RCTs) that incorporate predictive biomarkers into the study design, called prediction-driven RCTs, are needed to rigorously evaluate these treatment strategies. Although researched extensively in the experimental treatment setting, the literature lacks guidance on prediction-driven RCTs in the comparative effectiveness setting. METHODS: Realistic simulations with time-to-event endpoints are used to compare contrasts of clinical utility and to provide examples of simulated prediction-driven RCTs in the comparative effectiveness setting. RESULTS: Our proposed contrast for clinical utility accurately estimates the true clinical utility in the comparative effectiveness setting, while in some scenarios the contrast used in the current literature does not. DISCUSSION: It is important to properly define the contrasts of interest according to the treatment setting. Realistic simulations should be used to choose and evaluate the RCT design(s) able to directly estimate that contrast. In the comparative effectiveness setting, our proposed contrast for clinical utility should be used.


Subject(s)
Neoplasms , Research Design , Humans , Neoplasms/therapy
6.
Stat Med ; 42(12): 1946-1964, 2023 05 30.
Article in English | MEDLINE | ID: mdl-36890728

ABSTRACT

Long-term register data offer unique opportunities to explore causal effects of treatments on time-to-event outcomes in well-characterized populations with minimal loss to follow-up. However, the structure of the data may pose methodological challenges. Motivated by the Swedish Renal Registry and the estimation of survival differences between renal replacement therapies, we focus on the particular case when an important confounder is not recorded in the early period of the register, so that the entry date to the register deterministically predicts confounder missingness. In addition, an evolving composition of the treatment arm populations and suspected improved survival outcomes in later periods lead to informative administrative censoring, unless the entry date is appropriately accounted for. We investigate different consequences of these issues for causal effect estimation following multiple imputation of the missing covariate data. We analyse the performance of different combinations of imputation models and estimation methods for the population average survival. We further evaluate the sensitivity of our results to the nature of censoring and to misspecification of the fitted models. In simulations, we find that an imputation model including the cumulative baseline hazard, the event indicator, the covariates, and interactions between the cumulative baseline hazard and the covariates, followed by regression standardization, leads to the best estimation results overall. Standardization has two advantages over inverse probability of treatment weighting here: it can directly account for the informative censoring by including the entry date as a covariate in the outcome model, and it allows for straightforward variance computation using readily available software.


Subject(s)
Models, Statistical , Humans , Data Interpretation, Statistical , Probability , Survival Analysis , Treatment Outcome
7.
Eur J Epidemiol ; 38(5): 501-509, 2023 May.
Article in English | MEDLINE | ID: mdl-37043152

ABSTRACT

In studies where the outcome is a change-score, it is often debated whether or not the analysis should adjust for the baseline score. When the aim is to make causal inference, it has been argued that the two analyses (adjusted vs. unadjusted) target different causal parameters, which may both be relevant. However, these arguments are not applicable when the aim is to make predictions rather than to estimate causal effects. When the scores are measured with error, there have been attempts to quantify the bias resulting from adjustment for the (mis-)measured baseline score or lack thereof. However, these bias results have been derived under an unrealistically simple model, and assuming that the target parameter is the unadjusted (for the true baseline score) association, thus dismissing the adjusted association as a possibly relevant target parameter. In this paper we address these limitations. We argue that, even if the aim is to make predictions, there are two possibly relevant target parameters; one adjusted for the baseline score and one unadjusted. We consider both the simple case when there are no measurement errors, and the more complex case when the scores are measured with error. For the latter case, we consider a more realistic model than previous authors. Under this model we derive analytic expressions for the biases that arise when adjusting or not adjusting for the (mis-)measured baseline score, with respect to the two possible target parameters. Finally, we use these expressions to discuss when adjustment is warranted in change-score analyses.


Subject(s)
Bias , Humans , Causality
8.
Br J Cancer ; 127(9): 1636-1641, 2022 11.
Article in English | MEDLINE | ID: mdl-35986088

ABSTRACT

BACKGROUND: Providing estimates of uncertainty for statistical quantities is important for statistical inference. When the statistical quantity of interest is a survival curve, which is a function over time, the appropriate type of uncertainty estimate is a confidence band constructed to account for the correlation between points on the curve; we will call this a simultaneous confidence band. This, however, is not the type of confidence band provided in standard software, which is instead constructed by joining the confidence intervals at given time points. METHODS: We show that this type of band does not have desirable joint/simultaneous coverage properties in comparison with simultaneous bands. RESULTS: There are different ways of constructing simultaneous confidence bands, and we find that bands based on the likelihood ratio appear to have the most desirable properties. Although no standard software is available in the three major statistical packages to compute likelihood-based simultaneous bands, we summarise and give code to use available statistical software to construct other forms of simultaneous bands, which we illustrate using a study of colon cancer. CONCLUSIONS: There is a need for more user-friendly statistical software to compute simultaneous confidence bands using the available methods.
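
A small simulation illustrates the coverage problem described above: joining pointwise 95% intervals gives a band whose joint coverage over the whole curve falls well below 95%. This toy uses empirical survival fractions with Wald intervals (a simplifying assumption), not the likelihood-ratio bands the abstract recommends:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 3000
tgrid = np.array([0.5, 1.0, 1.5, 2.0])
true_S = np.exp(-tgrid)                       # true exponential survival curve

covered_pointwise = np.zeros(len(tgrid))
covered_jointly = 0
for _ in range(reps):
    T = rng.exponential(size=n)
    Shat = (T[:, None] > tgrid).mean(axis=0)  # empirical survival at each t
    se = np.sqrt(Shat * (1 - Shat) / n)
    lo, hi = Shat - 1.96 * se, Shat + 1.96 * se
    hit = (lo <= true_S) & (true_S <= hi)     # does each interval cover S(t)?
    covered_pointwise += hit
    covered_jointly += hit.all()              # does the whole band cover the curve?

covered_pointwise /= reps
joint = covered_jointly / reps
print("pointwise coverage:", covered_pointwise.round(3))
print("joint coverage of the joined band:", round(joint, 3))
```

Each pointwise interval covers its own point close to 95% of the time, yet the event that all of them cover simultaneously is noticeably rarer, which is exactly why a genuinely simultaneous band must be wider.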


Subject(s)
Software , Humans , Likelihood Functions , Survival Analysis , Uncertainty , Confidence Intervals
9.
J Biopharm Stat ; 32(6): 858-870, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35574690

ABSTRACT

There have been many strategies to adapt machine learning algorithms to account for right-censored observations in survival data in order to build more accurate risk prediction models. These adaptations have included pre-processing steps such as pseudo-observation transformation of the survival outcome or inverse probability of censoring weighted (IPCW) bootstrapping of the observed binary indicator of an event prior to a time point of interest. These pre-processing steps allow existing or newly developed machine learning methods, which were not specifically developed with time-to-event data in mind, to be applied to right-censored survival data for predicting the risk of experiencing an event. Stacking or ensemble methods can improve risk predictions, but, in general, the combination of pseudo-observation-based algorithms, IPCW bootstrapping, direct IPC weighting of the methods, and methods developed specifically for survival data has not been considered in the same ensemble. In this paper, we propose an ensemble procedure based on the area under the pseudo-observation-based time-dependent ROC curve to optimally stack predictions from any survival or survival-adapted algorithm. The real-data application shows that our proposed method can improve on single survival-based methods such as survival random forests, and on other strategies that use a pre-processing step, such as inverse probability of censoring weighted bagging or pseudo-observations alone.
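
A minimal sketch of the IPCW pre-processing step mentioned above: weights from a Kaplan-Meier estimate of the censoring distribution turn right-censored data into a weighted binary outcome at a horizon tau (simulated data; the distributions and parameter values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau = 2000, 1.0
T = rng.exponential(1.0, n)               # latent event times
C = rng.exponential(2.0, n)               # censoring times
X = np.minimum(T, C)
delta = (T <= C).astype(int)              # 1 = event observed, 0 = censored

# Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t):
# the roles of event and censoring are reversed
order = np.argsort(X)
Xs, ds = X[order], delta[order]
at_risk = np.arange(n, 0, -1)
factors = np.where(ds == 0, 1 - 1 / at_risk, 1.0)   # jumps at censoring times
G_steps = np.cumprod(factors)                        # G just after each sorted time

def G(u):
    """Evaluate G(u-), the censoring survival just before time u."""
    idx = np.searchsorted(Xs, u, side="left") - 1
    return np.where(idx < 0, 1.0, G_steps[np.clip(idx, 0, n - 1)])

# IPCW weights for the binary indicator of an event by time tau: observed
# events get 1/G(event time-), subjects still at risk past tau get 1/G(tau-),
# and subjects censored before tau get weight 0
w = np.zeros(n)
event_by_tau = (X <= tau) & (delta == 1)
known_free = X > tau
w[event_by_tau] = 1 / G(X[event_by_tau])
w[known_free] = 1 / G(tau)

y = event_by_tau.astype(float)
risk_ipcw = (w * y).sum() / w.sum()       # weighted risk of an event by tau
print(f"IPCW risk estimate at tau={tau}: {risk_ipcw:.3f} (true {1 - np.exp(-tau):.3f})")
```

The weighted pairs (y, w) can then be fed to any machine learning method that accepts sample weights, which is the sense in which IPCW lets non-survival learners handle censored data.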


Subject(s)
Algorithms , Random Forest , Humans , Area Under Curve , Probability , ROC Curve , Survival Analysis
10.
Ann Intern Med ; 174(8): 1118-1125, 2021 08.
Article in English | MEDLINE | ID: mdl-33844575

ABSTRACT

Multiple candidate vaccines to prevent COVID-19 have entered large-scale phase 3 placebo-controlled randomized clinical trials, and several have demonstrated substantial short-term efficacy. At some point after demonstration of substantial efficacy, placebo recipients should be offered the efficacious vaccine from their trial, which will occur before longer-term efficacy and safety are known. The absence of a placebo group could compromise assessment of longer-term vaccine effects. However, this study shows that, by continuing follow-up after vaccination of the placebo group, placebo-controlled vaccine efficacy can be mathematically derived by assuming that the benefit of vaccination over time has the same profile for the original vaccine recipients and for the original placebo recipients after their vaccination. Although this derivation provides less precise estimates than would be obtained in a standard trial where the placebo group remains unvaccinated, the proposed approach allows estimation of longer-term effects, including the durability of vaccine efficacy and whether the vaccine eventually becomes harmful for some. Deferred vaccination, if done open-label, may lead to riskier behavior in the unblinded original vaccine group, confounding estimates of long-term vaccine efficacy. Hence, deferred vaccination via blinded crossover, where the vaccine group receives placebo and vice versa, would be the preferred way to assess vaccine durability and potential delayed harm. Deferred vaccination allows placebo recipients timely access to the vaccine when it would no longer be proper to maintain them on placebo, yet still allows important insights about immunologic and clinical effectiveness over time.


Subject(s)
COVID-19 Vaccines/administration & dosage , COVID-19/prevention & control , Clinical Trials, Phase III as Topic/standards , Randomized Controlled Trials as Topic/standards , Clinical Trials, Phase III as Topic/methods , Cross-Over Studies , Double-Blind Method , Drug Administration Schedule , Follow-Up Studies , Humans , Randomized Controlled Trials as Topic/methods , Research Design/standards , SARS-CoV-2 , Treatment Outcome
11.
Am J Epidemiol ; 190(9): 1882-1889, 2021 09 01.
Article in English | MEDLINE | ID: mdl-33728441

ABSTRACT

The test-negative study design is often used to estimate vaccine effectiveness in influenza studies, but it has also been proposed in the context of other infectious diseases, such as cholera, dengue, or Ebola. It was introduced as a variation of the case-control design, in an attempt to reduce confounding bias due to health-care-seeking behavior, and has quickly gained popularity because of its logistical advantages. However, examination of the directed acyclic graphs that describe the test-negative design reveals that, without strong assumptions, the odds ratio estimated under this sampling mechanism is not collapsible over the selection variable, so that the results obtained for the sampled individuals cannot be generalized to the whole population. In this paper, we show that adjustment for severity of disease can reduce this bias and, under certain assumptions, makes it possible to unbiasedly estimate a causal odds ratio. We support our findings with extensive simulations and discuss them in the context of recently published test-negative studies of the effectiveness of cholera vaccines.


Subject(s)
Infections/pathology , Research Design , Severity of Illness Index , Vaccines/therapeutic use , Bias , Case-Control Studies , Cholera/pathology , Cholera/prevention & control , Cholera Vaccines/therapeutic use , Humans , Infection Control/methods , Models, Statistical , Odds Ratio , Patient Acceptance of Health Care/statistics & numerical data , Treatment Outcome
12.
Biostatistics ; 21(2): e33-e46, 2020 04 01.
Article in English | MEDLINE | ID: mdl-30247535

ABSTRACT

Surrogate evaluation is a difficult problem that is made more so by the presence of interference. Our proposed procedure allows for relatively easy evaluation of surrogates for indirect or spill-over clinical effects at the cluster level. Our definition of surrogacy is based on the causal-association paradigm (Joffe and Greene, 2009. Related causal frameworks for surrogate outcomes. Biometrics 65, 530-538), under which surrogates are evaluated by the strength of the association between a causal treatment effect on the clinical outcome and a causal treatment effect on the candidate surrogate. Hudgens and Halloran (2008. Toward causal inference with interference. Journal of the American Statistical Association 103, 832-842) introduced estimators that can be used for many of the marginal causal estimands of interest in the presence of interference. We extend these to consider surrogates not just for direct effects, but for indirect and total effects at the cluster level. We suggest existing estimators that can be used to evaluate biomarkers under our proposed definition of surrogacy. In our motivating setting of a transmission-blocking malaria vaccine, there is expected to be no direct protection to those vaccinated, and predictive surrogates are urgently needed. We use a set of simulated data examples based on the proposed Phase IIb/III trial design of a transmission-blocking malaria vaccine to demonstrate how our definition, proposed criteria, and procedure can be used to identify biomarkers as predictive cluster-level surrogates in the presence of interference on the clinical outcome.


Subject(s)
Biomarkers , Biostatistics/methods , Outcome Assessment, Health Care/methods , Causality , Clinical Trials as Topic , Humans , Malaria/prevention & control , Malaria Vaccines
13.
Transfusion ; 61(2): 464-473, 2021 02.
Article in English | MEDLINE | ID: mdl-33186486

ABSTRACT

BACKGROUND: Recently, plateletpheresis donations using a widely used leukoreduction system (LRS) chamber have been associated with T-cell lymphopenia. However, the clinical health consequences of plateletpheresis-associated lymphopenia are still unknown. STUDY DESIGN AND METHODS: A nationwide cohort study using the SCANDAT3-S database was conducted with all platelet- and plasmapheresis donors in Sweden between 1996 and 2017. A Cox proportional hazards model, using donations as time-dependent exposures, was used to assess the risk of infections associated with plateletpheresis donations using an LRS chamber. RESULTS: A total of 74,408 apheresis donors were included. Among donors with the same donation frequency, plateletpheresis donors using an LRS chamber were at an increased risk of immunosuppression-related infections and common bacterial infections in a dose-dependent manner. Although very frequent donation and infections were both rare in absolute terms, resulting in wide confidence intervals (CIs), the increased risk was significant starting at one-third or less of the allowed donation frequency in a 10-year exposure window, with hazard ratios reaching 10 or more. No plateletpheresis donor who used an LRS chamber experienced a Pneumocystis jirovecii, aspergillus, disseminated mycobacterial, or cryptococcal infection. In a subcohort (n = 42), donations with LRS were associated with low CD4+ T-cell counts (Pearson's R = -0.41; 95% CI, -0.63 to -0.12). CONCLUSION: Frequent plateletpheresis donation using an LRS chamber was associated with CD4+ T-cell lymphopenia and an increased risk of infections. These findings suggest a need to monitor T-lymphocyte counts in frequent platelet donors, to investigate long-term donor health, and for regulators to consider steps to mitigate lymphodepletion in donors.


Subject(s)
Blood Donors , Infections/epidemiology , Leukocyte Reduction Procedures/instrumentation , Lymphopenia/etiology , Plateletpheresis/adverse effects , Adult , Bacterial Infections/epidemiology , Bacterial Infections/etiology , Blood Donors/statistics & numerical data , Databases, Factual , Disease Susceptibility , Female , Follow-Up Studies , Humans , Immunocompromised Host , Infections/etiology , Lymphocyte Count , Lymphopenia/epidemiology , Male , Middle Aged , Mycoses/epidemiology , Mycoses/etiology , Plateletpheresis/instrumentation , Proportional Hazards Models , Retrospective Studies , Risk , Sweden/epidemiology , Young Adult
14.
Stat Med ; 40(19): 4185-4199, 2021 08 30.
Article in English | MEDLINE | ID: mdl-34046930

ABSTRACT

Chronic medical conditions often necessitate regular testing for proper treatment. Regular testing of all afflicted individuals may not be feasible due to limited resources, as is true with HIV monitoring in resource-limited settings. Pooled testing methods have been developed in order to allow regular testing for all while reducing the resource burden. However, the most commonly used methods do not make use of covariate information predictive of treatment failure, which could improve performance. We propose and evaluate four prediction-driven pooled testing methods that incorporate covariate information to improve pooled testing performance. We then compare these methods in the HIV treatment management setting to current methods with respect to testing efficiency, sensitivity, and number of testing rounds, using simulated data and data collected in Rakai, Uganda. Results show that the prediction-driven methods increase efficiency by up to 20% compared with current methods, while maintaining equivalent sensitivity and reducing the number of testing rounds by up to 70%. When predictions were incorrect, the performance of prediction-based matrix methods remained robust. The best performing method using our motivating data from Rakai was a prediction-driven hybrid method, maintaining sensitivity over 96% and efficiency over 75% in likely scenarios. If these methods perform similarly in the field, they may contribute to reducing mortality and transmission in resource-limited settings. Although we evaluate our proposed pooling methods in the HIV treatment setting, they can be applied to any setting that necessitates testing of a quantitative biomarker for a threshold-based decision.
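
The resource-saving logic of pooled testing, and the gain from using covariate predictions, can be sketched with classic two-stage Dorfman pooling (a much simpler scheme than the paper's matrix and hybrid methods; the population and risk values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, reps = 1000, 10, 200
# Hypothetical population: 10% high-risk (p = 0.30), 90% low-risk (p = 0.01),
# with risk scores assumed known from a prediction model
risk = np.where(np.arange(n) < n // 10, 0.30, 0.01)

def dorfman_tests(status, k):
    """Two-stage Dorfman pooling: test pools of size k, then retest every
    member of each positive pool individually."""
    pools = status.reshape(-1, k)
    return pools.shape[0] + pools.any(axis=1).sum() * k

t_dorf = t_pred = 0.0
for _ in range(reps):
    status = rng.random(n) < risk
    # (a) standard Dorfman pooling with randomly formed pools, ignoring risk
    t_dorf += dorfman_tests(status[rng.permutation(n)], k)
    # (b) prediction-driven: test high-risk individuals singly, pool the rest
    high = risk >= 0.1
    t_pred += high.sum() + dorfman_tests(status[~high], k)

t_dorf /= reps
t_pred /= reps
print(f"individual: {n}, Dorfman: {t_dorf:.0f}, prediction-driven: {t_pred:.0f}")
```

Keeping predicted high-risk individuals out of the pools leaves the pooled stratum at low prevalence, so fewer pools test positive and fewer retests are needed, which is the basic intuition behind the prediction-driven methods.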


Subject(s)
HIV Infections , HIV Infections/diagnosis , HIV Infections/drug therapy , Humans , Research Design , Treatment Failure , Uganda/epidemiology
15.
Clin Infect Dis ; 71(4): 1017-1021, 2020 08 14.
Article in English | MEDLINE | ID: mdl-31532827

ABSTRACT

BACKGROUND: After scale-up of antiretroviral therapy (ART), routine annual viral load monitoring has been adopted by most countries, but a reduced frequency of viral load monitoring may offer cost savings in resource-limited settings. We investigated whether viral load monitoring frequency could be reduced while maintaining detection of treatment failure. METHODS: The Rakai Health Sciences Program performed routine, biannual viral load monitoring on 2489 people living with human immunodeficiency virus (age ≥15 years). On the basis of these data, we built a 2-stage simulation model to compare different viral load monitoring schemes. We fit Weibull regression models for time to viral load >1000 copies/mL (treatment failure), and simulated data for 10 000 individuals over 5 years to compare 5 monitoring schemes to the current practice of viral load testing every 6 months and every 12 months. RESULTS: Among the 7 monitoring schemes tested, monitoring every 6 months for all subjects had the fewest months of undetected failure but also the highest number of viral load tests. Adaptive schemes using previous viral load measurements to inform future monitoring significantly decreased the number of viral load tests without markedly increasing the number of months of undetected failure. The best adaptive monitoring scheme resulted in a 67% reduction in viral load measurements while increasing the months of undetected failure by <20%. CONCLUSIONS: Adaptive viral load monitoring based on previous viral load measurements may be optimal for maintaining patient care while reducing costs, allowing more patients to be treated and monitored. Future empirical studies to evaluate differentiated monitoring are warranted.
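
The trade-off described above (fewer tests versus more months of undetected failure) can be sketched with a toy comparison of a fixed 6-month schedule against a coarser adaptive-style schedule; the Weibull failure model and all parameter values are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(4)
n, horizon = 10_000, 60.0                    # months of follow-up

# Hypothetical Weibull time to viral failure (months); many never fail by 60
fail = 120.0 * rng.weibull(0.8, n)

def schedule_stats(fail, test_times, horizon):
    """Tests used and months of undetected failure for a fixed test schedule.

    A failure is detected at the first scheduled test at or after it occurs;
    testing stops at detection. Failures after the last test stay undetected
    until the end of follow-up."""
    tt = np.asarray(test_times, dtype=float)
    idx = np.searchsorted(tt, fail)                       # first test >= failure
    detected = idx < len(tt)
    detect_time = np.where(detected, tt[np.clip(idx, 0, len(tt) - 1)], horizon)
    undetected = np.maximum(0.0, detect_time - fail).sum()
    tests = np.where(detected, idx + 1, len(tt)).sum()
    return tests, undetected

every6 = np.arange(6, 61, 6)                  # test every 6 months
adaptive = np.array([6, 18, 30, 42, 54])      # 6 months, then 12-month gaps
tests_6, undet_6 = schedule_stats(fail, every6, horizon)
tests_a, undet_a = schedule_stats(fail, adaptive, horizon)
print(f"every 6 months: {tests_6} tests, {undet_6:.0f} undetected months")
print(f"adaptive-style: {tests_a} tests, {undet_a:.0f} undetected months")
```

A full evaluation, as in the paper, would let the next interval depend on the observed viral load value rather than on a fixed widened grid, and would weigh the two totals against testing costs.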


Subject(s)
Anti-HIV Agents , HIV Infections , Adolescent , Anti-HIV Agents/therapeutic use , CD4 Lymphocyte Count , Diagnostic Tests, Routine , HIV Infections/diagnosis , HIV Infections/drug therapy , Humans , Treatment Failure , Uganda , Viral Load
16.
Clin Infect Dis ; 71(3): 652-660, 2020 07 27.
Article in English | MEDLINE | ID: mdl-31504347

ABSTRACT

BACKGROUND: Patients living with human immunodeficiency virus (PLWH) with low CD4 counts are at high risk for immune reconstitution inflammatory syndrome (IRIS) and death at antiretroviral therapy (ART) initiation. METHODS: We investigated the clinical impact of IRIS in PLWH with CD4 counts <100 cells/µL starting ART in an international, prospective study in the United States, Thailand, and Kenya. An independent review committee adjudicated IRIS events. We assessed associations between baseline biomarkers, IRIS, immune recovery at week 48, and death by week 48 with Cox models. RESULTS: We enrolled 506 participants (39.3% women). Median age was 37 years, and median CD4 count was 29 cells/µL. Within 6 months of ART, 97 (19.2%) participants developed IRIS and 31 (6.5%) died. Participants with lower hemoglobin at baseline were at higher IRIS risk (hazard ratio [HR], 1.2; P = .004). IRIS was independently associated with an increased risk of death after adjustment for known risk factors (HR, 3.2; P = .031). Being female (P = .004) and having a lower body mass index (BMI; P = .003), higher white blood cell count (P = .005), and higher D-dimer levels (P = .044) were also significantly associated with an increased risk of death. Decision-tree analysis identified hemoglobin <8.5 g/dL as predictive of IRIS, and C-reactive protein (CRP) >106 µg/mL and BMI <15.6 kg/m2 as predictive of death. CONCLUSIONS: For PLWH with severe immunosuppression initiating ART, baseline low BMI and hemoglobin and high CRP and D-dimer levels may be clinically useful predictors of IRIS and death risk.


Subject(s)
HIV Infections , Immune Reconstitution Inflammatory Syndrome , Lymphopenia , Adult , CD4 Lymphocyte Count , Female , HIV , HIV Infections/complications , HIV Infections/drug therapy , Humans , Immune Reconstitution Inflammatory Syndrome/epidemiology , Incidence , Kenya , Lymphopenia/epidemiology , Male , Prospective Studies , Thailand
17.
Epidemiology ; 31(3): 359-364, 2020 05.
Article in English | MEDLINE | ID: mdl-32091429

ABSTRACT

The predictions from an accurate prognostic model can be of great interest to patients and clinicians. When predictions are reported to individuals, they may decide to take action to improve their health, or they may simply be comforted by the knowledge. However, if there is a clearly defined space of actions in the clinical context, a formal decision rule based on the prediction has the potential to have a much broader impact. The use of a prediction-based decision rule should be formalized and preferably compared with the standard of care in a randomized trial to assess its clinical utility; however, evidence is needed to motivate such a trial. We outline how observational data can be used to propose a decision rule based on a prognostic prediction model. We then propose a framework for emulating a prediction-driven trial to evaluate the clinical utility of a prediction-based decision rule in observational data. A split-sample structure is often feasible and useful to develop the prognostic model, define the decision rule, and evaluate its clinical utility. See video abstract at http://links.lww.com/EDE/B656.


Subject(s)
Clinical Decision-Making , Models, Statistical , Prognosis , Clinical Decision-Making/methods , Humans , Observational Studies as Topic , Randomized Controlled Trials as Topic
18.
Biometrics ; 76(4): 1053-1063, 2020 12.
Article in English | MEDLINE | ID: mdl-31868914

ABSTRACT

Many infectious diseases are well prevented by proper vaccination. However, when a vaccine is not completely efficacious, there is great interest in determining how the vaccine effect differs in subgroups conditional on measured immune responses postvaccination and also according to the type of infecting agent (eg, strain of a virus). The former is often called correlate of protection (CoP) analysis, while the latter has been called sieve analysis. We propose a unified framework for simultaneously assessing CoP and sieve effects of a vaccine in a large Phase III randomized trial. We use flexible parametric models treating times to infection from different agents as competing risks and use maximum likelihood estimation to fit the models. The parametric models under competing risks allow for estimation of both cumulative incidence-based contrasts and instantaneous rates. We outline the assumptions with which we can link the observable data to the causal contrasts of interest, propose hypothesis testing procedures, and evaluate our proposed methods in an extensive simulation study.
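The cumulative incidence contrasts mentioned above have a simple nonparametric counterpart in the special case of no censoring before time t, where the cause-specific cumulative incidence reduces to an empirical proportion. A minimal sketch under that assumption (the paper itself uses parametric competing-risks models, not this estimator):

```python
# Cause-specific cumulative incidence at time t, assuming no censoring
# before t: the proportion of subjects with a cause-k event by t.
def cumulative_incidence(times, causes, cause, t):
    n = len(times)
    return sum(1 for ti, ci in zip(times, causes)
               if ti <= t and ci == cause) / n

times  = [1.0, 2.0, 2.5, 3.0, 4.0, 5.0]
causes = [1,   2,   1,   1,   2,   0]   # 0 = no event by end of follow-up
print(cumulative_incidence(times, causes, 1, 3.0))  # 3/6 = 0.5
```

With censoring, this naive proportion is biased and an Aalen-Johansen-type estimator (or, as in the paper, a parametric model) is needed instead.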


Subject(s)
Vaccines , Causality , Computer Simulation , Incidence , Vaccination
19.
BMC Pregnancy Childbirth ; 20(1): 225, 2020 Apr 16.
Article in English | MEDLINE | ID: mdl-32299386

ABSTRACT

BACKGROUND: Social concerns about unintentional HIV status disclosure and HIV-related stigma are barriers to pregnant women's access to prevention of mother-to-child transmission of HIV (PMTCT) care. There is limited quantitative evidence of women's social and emotional barriers to PMTCT care and HIV disclosure. We aimed to investigate how social concerns related to participation in PMTCT care are associated with HIV status disclosure to partners and relatives among pregnant women living with HIV in western Kenya. METHODS: A cross-sectional study, including 437 pregnant women living with HIV, was carried out at enrolment in a multicentre mobile phone intervention trial (WelTel PMTCT) in western Kenya. Women diagnosed with HIV on the day of enrolment were excluded. To investigate social concerns and their association with HIV disclosure we used multivariable logistic regression, adjusted for sociodemographic and HIV-related characteristics, to estimate odds ratios (OR) and 95% confidence intervals (CI). RESULTS: The majority (80%) had disclosed their HIV status to a current partner and 46% to a relative. Older women (35-44 years) had lower odds of disclosure to a partner (OR = 0.15; 95% CI: 0.05-0.44) compared to women 18-24 years. The most common social concern was involuntary HIV status disclosure (reported by 21%). Concern about isolation or lack of support from family or friends was reported by 9%, and was associated with lower odds of disclosure to partners (OR = 0.33; 95% CI: 0.12-0.85) and relatives (OR = 0.37; 95% CI: 0.16-0.85). Concern about separation (reported by 5%; OR = 0.17; 95% CI: 0.05-0.57), and concern about conflict with a partner (reported by 5%; OR = 0.18; 95% CI: 0.05-0.67), was associated with lower odds of disclosure to a partner. CONCLUSIONS: Compared to previous reports from Kenya, our estimated disclosure rate to a partner is higher, suggesting a possible improvement over time in disclosure. Younger pregnant women appear to be more likely to disclose, suggesting possibly decreased stigma and greater openness about HIV among younger couples. Healthcare providers and future interventional studies seeking to increase partner disclosure should consider supporting women regarding their concerns about isolation, lack of support, separation, and conflict with a partner. PMTCT care should be organized to ensure women's privacy and confidentiality.
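The odds ratios with Wald 95% confidence intervals reported above can be illustrated from a 2x2 table. A minimal sketch (the counts below are made up for illustration; the study's estimates come from multivariable logistic regression, not a raw 2x2 table):

```python
import math

def odds_ratio_ci(a, b, c, d):
    # a: exposed with outcome, b: exposed without outcome,
    # c: unexposed with outcome, d: unexposed without outcome.
    or_ = (a * d) / (b * c)
    # Wald 95% CI on the log-odds-ratio scale.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: disclosure by concern status.
or_, lo, hi = odds_ratio_ci(10, 30, 40, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR below 1 with an upper CI bound below 1, as in several estimates above, indicates significantly lower odds of disclosure in the exposed group.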


Subject(s)
Disclosure/statistics & numerical data , HIV Infections/transmission , Infectious Disease Transmission, Vertical/prevention & control , Social Stigma , Adolescent , Adult , Confidentiality , Cross-Sectional Studies , Female , Humans , Kenya , Pregnancy , Sexual Partners/psychology , Young Adult
20.
Biostatistics ; 19(3): 307-324, 2018 07 01.
Article in English | MEDLINE | ID: mdl-28968890

ABSTRACT

An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
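The cross-validation idea above, quantifying the absolute prediction error expected in a new trial, can be sketched as leave-one-trial-out cross-validation. The data and the slope-through-the-origin predictor below are toy stand-ins, not the surrogate models of the paper:

```python
# Leave-one-trial-out cross-validation: fit a predictor of the clinical
# outcome from the surrogate using the held-in trials, then record the
# absolute prediction errors in the held-out trial.
trials = {
    "A": [(0.1, 0.12), (0.3, 0.33)],   # (surrogate, outcome) pairs
    "B": [(0.2, 0.18), (0.4, 0.41)],
    "C": [(0.5, 0.52), (0.6, 0.58)],
}

def fit_slope(pairs):
    # Least-squares slope through the origin: sum(s*y) / sum(s*s).
    num = sum(s * y for s, y in pairs)
    den = sum(s * s for s, _ in pairs)
    return num / den

errors = []
for held_out, held_out_pairs in trials.items():
    train = [p for t, ps in trials.items() if t != held_out for p in ps]
    slope = fit_slope(train)
    errors.extend(abs(y - slope * s) for s, y in held_out_pairs)

# Mean absolute prediction error expected in a new trial setting.
print(round(sum(errors) / len(errors), 3))
```

Because each trial's errors are computed with that trial entirely excluded from fitting, the averaged error estimates how well the surrogate would predict in a genuinely new trial, which is the quantity a PIGS evaluation needs.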


Subject(s)
Biomarkers , Biostatistics/methods , Data Interpretation, Statistical , Models, Statistical , Outcome Assessment, Health Care/methods , Research Design , Child, Preschool , Clinical Trials as Topic , Humans , Risk Assessment/methods , Rotavirus Infections/prevention & control