1.
Stat Med ; 43(12): 2314-2331, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38561927

ABSTRACT

BACKGROUND: Non-inferiority trials comparing different active drugs are often subject to treatment non-adherence. Intention-to-treat (ITT) and per-protocol (PP) analyses have been advocated in such studies but are not guaranteed to be unbiased in the presence of differential non-adherence. METHODS: The REMoxTB trial evaluated two 4-month experimental regimens compared with a 6-month control regimen for newly diagnosed drug-susceptible TB. The primary endpoint was a composite unfavorable outcome of treatment failure or recurrence within 18 months post-randomization. We conducted a simulation study based on REMoxTB to assess the performance of statistical methods for handling non-adherence in non-inferiority trials, including ITT and PP analyses, adjustment for observed adherence, multiple imputation (MI) of outcomes, inverse-probability-of-treatment weighting (IPTW), and a doubly-robust (DR) estimator. RESULTS: When non-adherence differed between trial arms, ITT and PP analyses often resulted in non-trivial bias in the estimated treatment effect, which consequently deflated or inflated the type I error rate. Adjustment for observed adherence led to similar issues, whereas the MI, IPTW and DR approaches corrected bias under most non-adherence scenarios, although they could not always eliminate bias entirely in the presence of unobserved confounding. The IPTW and DR methods were generally unbiased and maintained the desired type I error rate and statistical power. CONCLUSIONS: When non-adherence differs between trial arms, ITT and PP analyses can produce biased estimates of efficacy, potentially leading to inferior treatments being accepted or efficacious regimens being missed. IPTW and the DR estimator are relatively straightforward methods with which to supplement ITT and PP approaches.
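To make the weighting idea concrete, the sketch below estimates an adherence-adjusted risk difference by inverse-probability weighting on synthetic data; all variable names and the data-generating model are hypothetical, and this is not the REMoxTB analysis code.

```python
# Minimal IPTW sketch for differential non-adherence (hypothetical data).
# Adherent patients are re-weighted by the inverse of their modelled probability
# of adhering, so the weighted per-protocol population resembles the full
# randomised population.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),          # 0 = control, 1 = experimental (assumed coding)
    "age": rng.normal(35, 10, n),
    "smear_grade": rng.integers(1, 4, n),
})
p_adhere_true = 1 / (1 + np.exp(-(1.5 - 0.5 * df["arm"] - 0.02 * (df["age"] - 35))))
df["adherent"] = rng.binomial(1, p_adhere_true)
df["unfavorable"] = rng.binomial(1, 0.15 + 0.05 * (1 - df["adherent"]))

# 1) Model adherence given arm and baseline covariates.
fit = smf.logit("adherent ~ arm + age + smear_grade", data=df).fit(disp=0)
df["w"] = df["adherent"].mean() / fit.predict(df)   # stabilised weights

# 2) Weighted risk of the unfavorable outcome per arm, among adherent patients.
adherent = df[df["adherent"] == 1]
risk = adherent.groupby("arm").apply(lambda g: np.average(g["unfavorable"], weights=g["w"]))
risk_difference = risk.loc[1] - risk.loc[0]          # experimental minus control
```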


Subject(s)
Computer Simulation , Intention to Treat Analysis , Humans , Equivalence Trials as Topic , Medication Adherence/statistics & numerical data , Antitubercular Agents/therapeutic use , Antitubercular Agents/administration & dosage , Tuberculosis/drug therapy , Treatment Outcome , Bias , Models, Statistical
2.
Brain ; 146(7): 2717-2722, 2023 07 03.
Article in English | MEDLINE | ID: mdl-36856727

ABSTRACT

An increase in the efficiency of clinical trial conduct has been successfully demonstrated in the oncology field through the use of multi-arm, multi-stage trials, which allow multiple therapeutic candidates to be evaluated simultaneously, with seamless recruitment to phase 3 for those candidates passing an interim signal of efficacy. Replicating this complex innovative trial design in diseases such as Parkinson's disease is appealing, but in addition to the challenges associated with any trial assessing a single potentially disease-modifying intervention in Parkinson's disease, a multi-arm platform trial must also specifically consider the heterogeneous nature of the disease, alongside the desire to test multiple treatments with different mechanisms of action. In a multi-arm trial, treatment arms need to be appropriately stratified so that each is comparable with a shared placebo/standard-of-care arm; in Parkinson's disease, however, there may be a preference to enrich an arm with a subgroup of patients most likely to respond to a specific treatment approach. The solution to this conundrum lies in having clearly defined criteria for inclusion in each treatment arm, as well as an analysis plan that takes account of predefined subgroups of interest, alongside evaluating the impact of each treatment on the broader population of Parkinson's disease patients. Beyond this, there must be robust processes for treatment selection, consensus-derived measures to confirm target engagement, interim assessments of efficacy, and consideration of the infrastructure needed to support recruitment and the long-term funding and sustainability of the platform. This has to incorporate the diverse priorities of clinicians, triallists, regulatory authorities and, above all, the views of people with Parkinson's disease.


Subject(s)
COVID-19 , Parkinson Disease , Humans
3.
Pharm Stat ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38631678

ABSTRACT

Accurate frequentist performance of a method is desirable in confirmatory clinical trials, but is not sufficient on its own to justify the use of a missing data method. Reference-based conditional mean imputation, with variance estimation justified solely by its frequentist performance, has the surprising and undesirable property that the estimated variance becomes smaller as the number of missing observations grows; under jump-to-reference, for example, this occurs because the approach effectively forces the true treatment effect to be exactly zero for patients with missing data.

4.
Biom J ; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but only rarely appropriate to use.
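To make the two simulation designs being contrasted concrete, here is a minimal, hypothetical sketch; the data-generating model and MAR mechanism are ours for illustration, not the paper's.

```python
# Two ways of generating incomplete data in a simulation study of missing-data methods.
import numpy as np

rng = np.random.default_rng(2024)
n, n_sims = 500, 1000

def simulate_complete():
    x = rng.normal(size=n)
    y = 1 + 0.5 * x + rng.normal(size=n)
    return x, y

def impose_mar_missingness(x, y):
    # Probability that y is missing depends only on the observed x (MAR).
    p_miss = 1 / (1 + np.exp(-(-1 + x)))
    return np.where(rng.uniform(size=n) < p_miss, np.nan, y)

# Design A (usually preferable): redraw the complete data AND the missingness
# each repetition. Design B (rarely appropriate, per the paper): draw the complete
# data once, then only redraw the missingness indicators across repetitions.
x_fixed, y_fixed = simulate_complete()
for rep in range(n_sims):
    xa, ya = simulate_complete()
    ya_obs = impose_mar_missingness(xa, ya)             # design A
    yb_obs = impose_mar_missingness(x_fixed, y_fixed)   # design B
    # ... apply the missing-data method under evaluation to each design ...
```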


Subject(s)
Research , Data Interpretation, Statistical , Computer Simulation
5.
Connect Tissue Res ; 64(3): 262-273, 2023 05.
Article in English | MEDLINE | ID: mdl-36524714

ABSTRACT

INTRODUCTION: Rotator cuff tear size affects clinical outcomes following rotator cuff repair and is correlated with the risk of recurrent tendon defects. This study aimed to understand if and how the initial defect size influences the structural and mechanical outcomes of the injured rotator cuff attachment in vivo. METHODS: Full-thickness punch injuries of the infraspinatus tendon-bone attachment were created in Long Evans rats to compare healing outcomes between small and large defects. Biomechanical properties, gross morphology, bone remodeling, and cell and tissue morphology were assessed at 3 and 8 weeks of healing. RESULTS: At the time of injury (no healing), large defects had decreased mechanical properties compared to small defects, and both defect sizes had decreased mechanical properties compared to intact attachments. However, after 8 weeks of healing, the mechanical properties of the two defect groups were not significantly different from each other and had improved significantly relative to no healing, but failed to return to intact levels. Local bone volume at the defect site was higher on average in large than in small defects and increased from 3 to 8 weeks. In contrast, bone quality decreased from 3 to 8 weeks of healing, and these changes were not dependent on defect size. Qualitatively, large defects showed increased collagen disorganization and neovascularization compared to small defects. DISCUSSION: In this study, we showed that neither large nor small defects regenerated the mechanical and structural integrity of the intact rat rotator cuff attachment after 8 weeks of healing in vivo.


Subject(s)
Rotator Cuff Injuries , Rotator Cuff , Rats , Animals , Rats, Long-Evans , Tendons , Bone and Bones , Biomechanical Phenomena , Disease Models, Animal
6.
Stat Med ; 42(7): 1082-1095, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36695043

ABSTRACT

One of the main challenges when using observational data for causal inference is the presence of confounding. A classic approach to account for confounding is the use of propensity score techniques, which provide consistent estimators of the causal treatment effect under four common identifiability assumptions for causal effects, including that of no unmeasured confounding. Propensity score matching is a very popular approach which, in its simplest form, involves matching each treated patient to an untreated patient with a similar estimated propensity score, that is, probability of receiving the treatment. The treatment effect can then be estimated by comparing treated and untreated patients within the matched dataset. When missing data arise, a popular approach is to apply multiple imputation to handle the missingness. The combination of propensity score matching and multiple imputation is increasingly applied in practice. However, in this article we demonstrate that combining multiple imputation and propensity score matching can lead to over-coverage of the confidence interval for the treatment effect estimate. We explore the cause of this over-coverage and, in this context, evaluate the performance of a previously proposed correction to Rubin's rules for multiple imputation, finding that this correction removes the over-coverage.
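The sequence being discussed (impute, match on the propensity score within each imputed dataset, then pool with Rubin's rules) can be sketched roughly as follows on synthetic data; the variance used per imputation is the naive matched-pairs variance, which is exactly where the over-coverage issue arises, and this is not the article's own code.

```python
# Rough sketch: multiple imputation + 1:1 propensity score matching + Rubin's rules.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-(x1 + x2))))
y = 1.0 * treat + x1 + x2 + rng.normal(size=n)
df_missing = pd.DataFrame({"treat": treat, "y": y, "x1": x1, "x2": x2})
df_missing.loc[rng.uniform(size=n) < 0.3, "x2"] = np.nan  # incomplete confounder

def match_and_estimate(df: pd.DataFrame) -> tuple[float, float]:
    ps = LogisticRegression().fit(df[["x1", "x2"]], df["treat"]).predict_proba(
        df[["x1", "x2"]])[:, 1]
    treated, control = df[df["treat"] == 1], df[df["treat"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[df["treat"] == 0].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[df["treat"] == 1].reshape(-1, 1))
    diff = treated["y"].to_numpy() - control.iloc[idx.ravel()]["y"].to_numpy()
    return diff.mean(), diff.var(ddof=1) / len(diff)   # naive matched-pairs variance

m, estimates, variances = 20, [], []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    completed = df_missing.copy()
    completed[["y", "x1", "x2"]] = imp.fit_transform(df_missing[["y", "x1", "x2"]])
    est, var = match_and_estimate(completed)
    estimates.append(est)
    variances.append(var)

# Rubin's rules: total variance = within-imputation + (1 + 1/m) * between-imputation.
q_bar = float(np.mean(estimates))
t_var = float(np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1))
```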


Subject(s)
Propensity Score , Humans , Data Interpretation, Statistical , Causality
7.
Stat Med ; 42(27): 4917-4930, 2023 11 30.
Article in English | MEDLINE | ID: mdl-37767752

ABSTRACT

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta-analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node-splitting and side-splitting in network meta-analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta-analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta-analysis, our global model uses fewer degrees of freedom than the existing design-by-treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta-analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops.
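For intuition about what "independent loops" means here, the fundamental cycle basis of the comparison graph gives one standard way of enumerating them (one independent loop per edge outside a spanning tree); the toy network and use of networkx below are purely illustrative and do not reproduce the authors' algorithm.

```python
# Toy illustration of counting independent loops in a treatment network.
import networkx as nx

# Nodes are treatments, edges are comparisons with direct evidence (hypothetical).
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")])

loops = nx.cycle_basis(G)   # e.g. [['C', 'B', 'A'], ['D', 'C', 'B']]
# Number of independent loops = edges - nodes + connected components = 5 - 4 + 1 = 2,
# i.e. one inconsistency parameter (degree of freedom) per independent loop.
print(loops, len(loops))
```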


Subject(s)
Algorithms , Research Design , Humans , Network Meta-Analysis
8.
Clin Trials ; 20(5): 497-506, 2023 10.
Article in English | MEDLINE | ID: mdl-37277978

ABSTRACT

INTRODUCTION: The ICH E9 addendum outlining the estimand framework for clinical trials was published in 2019 but provides limited guidance on how to handle intercurrent events in non-inferiority studies. Once an estimand is defined, it is also unclear how missing values should be handled using principled analyses in non-inferiority studies. METHODS: Using a tuberculosis clinical trial as a case study, we propose a primary estimand, and an additional estimand, suitable for non-inferiority studies. For estimation, we propose multiple imputation methods that align with these estimands for both primary and sensitivity analysis. We demonstrate estimation using the twofold fully conditional specification multiple imputation algorithm, and then extend and use reference-based multiple imputation for a binary outcome to target the relevant estimands, proposing sensitivity analyses under each. We compare the results from these multiple imputation methods with those from the original study. RESULTS: Consistent with the ICH E9 addendum, estimands can be constructed for a non-inferiority trial that improve on the previously advocated per-protocol/intention-to-treat-type analysis populations, using respectively a hypothetical or a treatment policy strategy to handle relevant intercurrent events. Results from using the twofold multiple imputation approach to estimate the primary hypothetical estimand, and reference-based methods for the additional treatment policy estimand, with sensitivity analyses to handle the missing data, were consistent with the original study's reported per-protocol and intention-to-treat analyses in failing to demonstrate non-inferiority. CONCLUSIONS: Carefully constructed estimands, together with appropriate primary and sensitivity estimators that use all the available information, give a more principled and statistically rigorous approach to analysis and ensure the results are interpreted in terms of the intended estimand.


Subject(s)
Models, Statistical , Research Design , Humans , Algorithms , Data Interpretation, Statistical , Equivalence Trials as Topic
9.
J Chem Phys ; 158(7): 074901, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36813721

ABSTRACT

Soft porous coordination polymers (SPCPs) are materials with exceptional potential because of their ability to incorporate the properties of nominally rigid porous materials like metal-organic frameworks (MOFs) and those of soft matter, such as polymers of intrinsic microporosity (PIMs). This combination could offer the gas adsorption properties of MOFs together with the mechanical stability and processability of PIMs, opening up a space of flexible, highly responsive adsorbing materials. In order to understand their structure and behavior, we present a process for the construction of amorphous SPCPs from secondary building blocks. We then use classical molecular dynamics simulations to characterize the resulting structures based on branch functionalities (f), pore size distributions (PSDs), and radial distribution functions and compare them to experimentally synthesized analogs. In the course of this comparison, we demonstrate that the pore structure of SPCPs is due to both pores intrinsic to the secondary building blocks, and intercolloid spacing between colloid particles. We also illustrate the differences in nanoscale structure based on linker length and flexibility, particularly in the PSDs, finding that stiff linkers tend to produce SPCPs with larger maximum pore sizes.

10.
Stat Med ; 41(25): 5000-5015, 2022 11 10.
Article in English | MEDLINE | ID: mdl-35959539

ABSTRACT

BACKGROUND: Substantive model compatible multiple imputation (SMC-MI) is a relatively novel imputation approach that is particularly useful when the analyst's model includes interactions, non-linearities, and/or partially observed random slope variables. METHODS: Here we thoroughly investigate an SMC-MI strategy based on joint modeling of the covariates of the analysis model (SMC-JM). We provide code to apply the proposed strategy and perform an extensive simulation study to test it in various circumstances. We explore the impact on the results of various factors, including whether the missing data are at the individual or cluster level, whether there are non-linearities, and whether the imputation model is correctly specified. Finally, we apply the imputation methods to the motivating example data. RESULTS: SMC-JM appears to be superior to standard JM imputation, particularly in the presence of large variation in random slopes, non-linearities, and interactions. Results seem robust to slight mis-specification of the imputation model for the covariates. When imputing level 2 data, enough clusters must be observed in order to obtain unbiased estimates of the level 2 parameters. CONCLUSIONS: SMC-JM is preferable to standard JM imputation in the presence of complexities in the analysis model of interest, such as non-linearities or random slopes.


Subject(s)
Models, Statistical , Research Design , Humans , Computer Simulation
11.
Stat Med ; 41(22): 4299-4310, 2022 09 30.
Article in English | MEDLINE | ID: mdl-35751568

ABSTRACT

Factorial trials offer an efficient method to evaluate multiple interventions in a single trial; however, the use of additional treatments can obscure research objectives, leading to inappropriate analytical methods and interpretation of results. We define a set of estimands for factorial trials and describe a framework for applying these estimands, with the aim of clarifying trial objectives and ensuring appropriate primary and sensitivity analyses are chosen. This framework is intended for use in factorial trials where the intent is to conduct "two-trials-in-one" (ie, to separately evaluate the effects of treatments A and B), and comprises four steps: (i) specifying how the additional treatment(s) (eg, treatment B) will be handled in the estimand, and how intercurrent events affecting the additional treatment(s) will be handled; (ii) designating the appropriate factorial estimator as the primary analysis strategy; (iii) evaluating the interaction to assess the plausibility of the assumptions underpinning the factorial estimator; and (iv) performing a sensitivity analysis using an appropriate multiarm estimator to evaluate the extent to which departures from the underlying assumption of no interaction may affect results. We show that adjustment for other factors is necessary for noncollapsible effect measures (such as the odds ratio), and through a trial re-analysis we find that failure to consider the estimand could lead to inappropriate interpretation of results. We conclude that careful use of the estimand framework clarifies research objectives and reduces the risk of misinterpretation of trial results, and should become a standard part of both the protocol and reporting of factorial trials.


Subject(s)
Models, Statistical , Research Design , Data Interpretation, Statistical , Humans , Odds Ratio
12.
Stat Med ; 41(5): 838-844, 2022 02 28.
Article in English | MEDLINE | ID: mdl-35146786

ABSTRACT

Since its inception in 1969, the MSc in medical statistics program has placed a high priority on training students from Africa. In this article, we review how the program has shaped, and in turn been shaped by, two substantial capacity building initiatives: (a) a fellowship program, funded by the UK Medical Research Council, and run through the International Statistical Epidemiology Group at the London School of Hygiene & Tropical Medicine (LSHTM), and (b) the Sub-Saharan capacity building in Biostatistics (SSACAB) initiative, administered through the Developing Excellence in Leadership, Training and Science in Africa (DELTAS) program of the African Academy of Sciences. We reflect on the impact of both initiatives, and the implications for future work in this area.


Subject(s)
Capacity Building , Tropical Medicine , Africa South of the Sahara/epidemiology , Humans , Hygiene , London , Public Health , Tropical Medicine/education
13.
Clin Trials ; 19(5): 522-533, 2022 10.
Article in English | MEDLINE | ID: mdl-35850542

ABSTRACT

BACKGROUND/AIMS: Tuberculosis remains one of the leading causes of death from an infectious disease globally. Both choices of outcome definitions and approaches to handling events happening post-randomisation can change the treatment effect being estimated, but these are often inconsistently described, thus inhibiting clear interpretation and comparison across trials. METHODS: Starting from the ICH E9(R1) addendum's definition of an estimand, we use our experience of conducting large Phase III tuberculosis treatment trials and our understanding of the estimand framework to identify the key decisions regarding how different event types are handled in the primary outcome definition, and the important points that should be considered in making such decisions. A key issue is the handling of intercurrent (i.e. post-randomisation) events (ICEs) which affect interpretation of or preclude measurement of the intended final outcome. We consider common ICEs including treatment changes and treatment extension, poor adherence to randomised treatment, re-infection with a new strain of tuberculosis which is different from the original infection, and death. We use two completed tuberculosis trials (REMoxTB and STREAM Stage 1) as illustrative examples. These trials tested non-inferiority of new tuberculosis treatment regimens versus a control regimen. The primary outcome was a binary composite endpoint, 'favourable' or 'unfavourable', which was constructed from several components. RESULTS: We propose the following improvements in handling the above-mentioned ICEs and loss to follow-up (a post-randomisation event that is not in itself an ICE). First, changes to allocated regimens should not necessarily be viewed as an unfavourable outcome; from the patient perspective, the potential harms associated with a change in the regimen should instead be directly quantified. Second, handling poor adherence to randomised treatment using a per-protocol analysis does not necessarily target a clear estimand; instead, it would be desirable to develop ways to estimate the treatment effects more relevant to programmatic settings. Third, re-infection with a new strain of tuberculosis could be handled with different strategies, depending on whether the outcome of interest is the ability to attain culture negativity from infection with any strain of tuberculosis, or specifically the presenting strain of tuberculosis. Fourth, where possible, death could be separated into tuberculosis-related and non-tuberculosis-related and handled using appropriate strategies. Finally, although some losses to follow-up would result in early treatment discontinuation, patients lost to follow-up before the end of the trial should not always be classified as having an unfavourable outcome. Instead, loss to follow-up should be separated from not completing the treatment, which is an ICE and may be considered as an unfavourable outcome. CONCLUSION: The estimand framework clarifies many issues in tuberculosis trials but also challenges trialists to justify and improve their outcome definitions. Future trialists should consider all the above points in defining their outcomes.


Subject(s)
Reinfection , Research Design , Causality , Humans
14.
J Avian Med Surg ; 36(2): 121-127, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35972864

ABSTRACT

The purpose of this study was to determine the pharmacokinetics of cannabidiol (CBD), a potential treatment option that may alleviate pain in companion animals and humans, in the Hispaniolan Amazon parrot (Amazona ventralis). A pilot study administered a single oral dose of CBD in hemp oil at 10 mg/kg to 2 birds and 20 mg/kg to 2 birds. Because the maximum serum concentrations (Cmax) for these doses were 5.5 and 13 ng/mL, respectively, and the serum half-life was 2 hours for both groups, the doses were considered too low for clinical use in this species. Therefore, a study was designed in which 14 healthy 12-14-year-old parrots of both sexes and weighing 0.24-0.35 kg (mean, 0.28 kg) were enrolled. Seven birds were administered 60 mg/kg CBD PO, and 7 birds were administered 120 mg/kg CBD PO. Blood samples were obtained at time 0 and at 0.5, 1, 2, 3, 4, 6, and 10 hours posttreatment in a balanced incomplete block design. Quantification of plasma CBD concentrations was determined by use of a validated liquid chromatography-mass spectrometry assay. Pharmacokinetic parameters were determined by noncompartmental analysis. The areas under the curve (h·ng/mL) were 518 and 1863, Cmax (ng/mL) were 213 and 562, and times to achieve Cmax (hours) were 0.5 and 4 for the 60 and 120 mg/kg doses, respectively. The serum half-life could not be determined for the 60 mg/kg dose, but was 1.28 hours at 120 mg/kg. Adverse effects were not observed in any bird. The highly variable results and short half-life of the drug in Hispaniolan Amazon parrots, even at high doses, suggest that this drug formulation was inconsistent in achieving targeted concentrations as reported for other animal species.
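For readers unfamiliar with how such noncompartmental parameters are obtained, a minimal sketch follows; the concentration-time values are invented for illustration and are not the parrot data.

```python
# Minimal sketch of noncompartmental calculations: trapezoidal AUC, Cmax, Tmax,
# terminal half-life.
import numpy as np

t = np.array([0, 0.5, 1, 2, 3, 4, 6, 10])             # hours post-dose
c = np.array([0, 150, 320, 560, 480, 350, 180, 60])   # ng/mL (hypothetical)

auc = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)   # linear trapezoidal AUC, h·ng/mL
cmax, tmax = c.max(), t[c.argmax()]

# Terminal half-life from the log-linear slope of the last three time points.
slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
half_life = np.log(2) / -slope
```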


Subject(s)
Amazona , Cannabidiol , Animals , Area Under Curve , Cannabis , Female , Humans , Male , Pilot Projects , Plant Extracts
15.
Clin Infect Dis ; 73(2): 195-202, 2021 07 15.
Article in English | MEDLINE | ID: mdl-32448894

ABSTRACT

BACKGROUND: Using data from the COHERE collaboration, we investigated whether primary prophylaxis for pneumocystis pneumonia (PcP) might be withheld in all patients on antiretroviral therapy (ART) with suppressed plasma human immunodeficiency virus (HIV) RNA (≤400 copies/mL), irrespective of CD4 count. METHODS: We implemented an established causal inference approach whereby observational data are used to emulate a randomized trial. Patients taking PcP prophylaxis were eligible for the emulated trial if their CD4 count was ≤200 cells/µL, in line with existing recommendations. We compared the following 2 strategies for stopping prophylaxis: (1) when CD4 count was >200 cells/µL for >3 months or (2) when the patient was virologically suppressed (2 consecutive HIV RNA measurements ≤400 copies/mL). Patients were artificially censored if they did not comply with these stopping rules. We estimated the risk of primary PcP in patients on ART, using the hazard ratio (HR) to compare the stopping strategies by fitting a pooled logistic model, including inverse probability weights to adjust for the selection bias introduced by the artificial censoring. RESULTS: A total of 4813 patients (10 324 person-years) complied with the eligibility conditions for the emulated trial. With primary PcP diagnosis as an endpoint, the adjusted HR (aHR) indicated a slightly lower, but not statistically significantly different, risk for the strategy based on viral suppression alone compared with the existing guidelines (aHR, 0.8; 95% confidence interval, 0.6-1.1; P = 0.2). CONCLUSIONS: This study suggests that primary PcP prophylaxis might be safely withheld in confirmed virologically suppressed patients on ART, regardless of their CD4 count.
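The estimation step described (a pooled logistic model for the discrete-time hazard, weighted by inverse-probability weights to correct for the artificial censoring) might look roughly like the sketch below; the person-interval data frame and its columns are synthetic placeholders, and this is not the COHERE analysis code.

```python
# Rough sketch of an IP-weighted pooled logistic regression for a discrete-time hazard.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for a person-time dataset: one row per person per interval,
# artificially censored on deviation from the assigned stopping strategy, with
# precomputed inverse-probability-of-censoring weights (ipc_weight).
rng = np.random.default_rng(1)
n_rows = 5000
person_time = pd.DataFrame({
    "strategy": rng.integers(0, 2, n_rows),        # 0/1 stopping strategy
    "interval": rng.integers(0, 12, n_rows),       # follow-up interval index
    "ipc_weight": rng.uniform(0.5, 2.0, n_rows),
})
person_time["pcp_event"] = rng.binomial(1, 0.01, n_rows)

model = smf.glm(
    "pcp_event ~ strategy + interval + I(interval ** 2)",
    data=person_time,
    family=sm.families.Binomial(),
    var_weights=person_time["ipc_weight"],   # inverse-probability weights
).fit()

# Discrete-time hazard ratio comparing stopping strategies; in practice the
# confidence interval would come from a robust or bootstrap variance.
hazard_ratio = float(np.exp(model.params["strategy"]))
```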


Subject(s)
AIDS-Related Opportunistic Infections , HIV Infections , Pneumonia, Pneumocystis , AIDS-Related Opportunistic Infections/prevention & control , CD4 Lymphocyte Count , HIV , HIV Infections/complications , HIV Infections/drug therapy , Humans , Pneumonia, Pneumocystis/prevention & control , Pragmatic Clinical Trials as Topic
16.
Am J Epidemiol ; 190(4): 663-672, 2021 04 06.
Article in English | MEDLINE | ID: mdl-33057574

ABSTRACT

Marginal structural models (MSMs) are commonly used to estimate causal intervention effects in longitudinal nonrandomized studies. A common challenge when using MSMs to analyze observational studies is incomplete confounder data, where a poorly informed analysis method will lead to biased estimates of intervention effects. Despite a number of approaches described in the literature for handling missing data in MSMs, there is little guidance on what works in practice and why. We reviewed existing missing-data methods for MSMs and discussed the plausibility of their underlying assumptions. We also performed realistic simulations to quantify the bias of 5 methods used in practice: complete-case analysis, last observation carried forward, the missingness pattern approach, multiple imputation, and inverse-probability-of-missingness weighting. We considered 3 mechanisms for nonmonotone missing data encountered in research based on electronic health record data. Further illustration of the strengths and limitations of these analysis methods is provided through an application using a cohort of persons with sleep apnea: the research database of the French Observatoire Sommeil de la Fédération de Pneumologie. We recommend careful consideration of 1) the reasons for missingness, 2) whether missingness modifies the existing relationships among observed data, and 3) the scientific context and data source, to inform the choice of the appropriate method(s) for handling partially observed confounders in MSMs.
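Three of the simpler strategies compared (complete-case analysis, last observation carried forward, and the missingness pattern approach) can be illustrated on a toy long-format confounder as below; the data and column names are invented for illustration.

```python
# Toy illustration of simple strategies for a partially observed, time-varying confounder.
import numpy as np
import pandas as pd

long = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "visit": [0, 1, 2, 0, 1, 2],
    "conf":  [3.1, np.nan, 2.4, 5.0, 4.2, np.nan],
})

complete_case = long.dropna(subset=["conf"])                 # drop visits with a missing confounder
locf = long.assign(conf=long.groupby("id")["conf"].ffill())  # last observation carried forward
pattern = long.assign(miss=long["conf"].isna().astype(int),  # missingness pattern approach:
                      conf=long["conf"].fillna(0))           # indicator plus arbitrary fill value
```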


Subject(s)
Computer Simulation , Electronic Health Records/statistics & numerical data , Models, Statistical , Data Interpretation, Statistical , Humans
17.
Eur Respir J ; 57(3)2021 03.
Article in English | MEDLINE | ID: mdl-33093119

ABSTRACT

Real-world data provide the potential for generating evidence on drug treatment effects in groups excluded from trials, but rigorous, validated methodology for doing so is lacking. We investigated whether non-interventional methods applied to real-world data could reproduce results from the landmark TORCH COPD trial. We performed a historical cohort study (2000-2017) of COPD drug treatment effects in the UK Clinical Practice Research Datalink (CPRD). Two control groups were selected from CPRD by applying TORCH inclusion/exclusion criteria and 1:1 matching to TORCH participants, as follows. Control group 1: people with COPD not prescribed fluticasone propionate (FP)-salmeterol (SAL); control group 2: people with COPD prescribed SAL only. FP-SAL exposed groups were then selected from CPRD by propensity score matching to each control group. Outcomes studied were COPD exacerbations, death from any cause and pneumonia. 2652 FP-SAL exposed people were propensity score matched to 2652 FP-SAL unexposed people, while 991 FP-SAL exposed people were propensity score matched to 991 SAL exposed people. Exacerbation rate ratio was comparable to TORCH for FP-SAL versus SAL (0.85, 95% CI 0.74-0.97 versus 0.88, 0.81-0.95) but not for FP-SAL versus no FP-SAL (1.30, 1.19-1.42 versus 0.75, 0.69-0.81). In addition, active comparator results were consistent with TORCH for mortality (hazard ratio 0.93, 0.65-1.32 versus 0.93, 0.77-1.13) and pneumonia (risk ratio 1.39, 1.04-1.87 versus 1.47, 1.25-1.73). We obtained very similar results to the TORCH trial for active comparator analyses, but were unable to reproduce placebo-controlled results. Application of these validated methods for active comparator analyses to groups excluded from randomised controlled trials provides a practical way for contributing to the evidence base and supporting COPD treatment decisions.


Subject(s)
Bronchodilator Agents , Pulmonary Disease, Chronic Obstructive , Administration, Inhalation , Androstadienes , Bronchodilator Agents/therapeutic use , Cohort Studies , Drug Combinations , Fluticasone/therapeutic use , Fluticasone-Salmeterol Drug Combination , Humans , Pulmonary Disease, Chronic Obstructive/drug therapy , Randomized Controlled Trials as Topic , Treatment Outcome
18.
Cladistics ; 37(4): 423-441, 2021 08.
Article in English | MEDLINE | ID: mdl-34478190

ABSTRACT

Neotropical swarm-founding wasps are divided into 19 genera in the tribe Epiponini (Vespidae, Polistinae). They display extensive variation in several colony-level traits that make them an attractive model system for reconstructing the evolution of social phenotypes, including caste dimorphism and nest architecture. Epiponini has been recovered as a solidly monophyletic group in most phylogenetic analyses carried out so far, supported by molecular, morphological and behavioural data. Recent molecular studies, however, propose different relationships among the genera of swarm-founding wasps. This study is based on the most comprehensive sampling of Epiponini to date, analyzed by combining morphological, nesting and molecular data. The resulting phylogenetic hypothesis recovers many of the traditional clades but nevertheless changes how certain behavioural characters, such as nest structure and castes, are inferred to have evolved, and thus requires some re-interpretation. Recovering Angiopolybia as sister to the remaining Epiponini implies that nest envelopes and a casteless system are plesiomorphic in the tribe. Molecular dating points to an early tribal diversification during the Eocene (c. 55-38 Ma), with the major differentiation of current genera concentrated around the Oligocene/Miocene boundary.


Subject(s)
Ovary/physiology , Phylogeny , Social Behavior , Social Evolution , Wasps/anatomy & histology , Wasps/physiology , Animals , Female , Geography , Ovary/anatomy & histology , Reproduction
19.
Health Econ ; 30(12): 3138-3158, 2021 12.
Article in English | MEDLINE | ID: mdl-34562295

ABSTRACT

Cost-effectiveness analyses (CEA) are recommended to include sensitivity analyses which make a range of contextually plausible assumptions about missing data. However, with longitudinal data on, for example, patients' health-related quality of life (HRQoL), the missingness patterns can be complicated because data are often missing both at specific timepoints (interim missingness) and following loss to follow-up. Methods to handle these complex missing data patterns have not been developed for CEA, and must recognize that data may be missing not at random, while accommodating both the correlation between costs and health outcomes and the non-normal distribution of these endpoints. We develop flexible Bayesian longitudinal models that allow the impact of interim missingness and loss to follow-up to be disentangled. This modeling framework enables studies to undertake sensitivity analyses according to various contextually plausible missing data mechanisms, jointly model costs and outcomes using appropriate distributions, and recognize the correlation among these endpoints over time. We exemplify these models in the REFLUX study in which 52% of participants had HRQoL data missing for at least one timepoint over the 5-year follow-up period. We provide guidance for sensitivity analyses and accompanying code to help future studies handle these complex forms of missing data.


Subject(s)
Models, Statistical , Quality of Life , Bayes Theorem , Cost-Benefit Analysis , Data Collection , Data Interpretation, Statistical , Humans , Longitudinal Studies
20.
Acta Paediatr ; 110(1): 72-78, 2021 01.
Article in English | MEDLINE | ID: mdl-32281685

ABSTRACT

AIM: A device for newborn heart rate (HR) monitoring at birth that is compatible with delayed cord clamping and minimises hypothermia risk could have advantages over current approaches. We evaluated a wireless, cap-mounted device (fhPPG) for monitoring neonatal HR. METHODS: A total of 52 infants on the neonatal intensive care unit (NICU) and immediately following birth by elective caesarean section (ECS) were recruited. HR was monitored by electrocardiogram (ECG), pulse oximetry (PO) and the fhPPG device. Success rate, accuracy and time to output HR were compared with ECG as the gold standard. Standardised simulated data assessed the fhPPG algorithm accuracy. RESULTS: Compared to ECG HR, the median bias (and 95% limits of agreement) for the NICU was fhPPG -0.6 (-5.6, 4.9) vs PO -0.3 (-6.3, 6.2) bpm, and for the ECS phase fhPPG -0.5 (-8.7, 7.7) vs PO -0.1 (-7.6, 7.1) bpm. In both settings, fhPPG and PO correlated with paired ECG HRs (both R² = 0.89). The fhPPG HR algorithm during simulations demonstrated a near-linear correlation (n = 1266, R² = 0.99). CONCLUSION: Monitoring infants in the NICU and following ECS using a wireless, cap-mounted device provides accurate HR measurements. This alternative approach could confer advantages compared with current methods of HR assessment and warrants further evaluation at birth.
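A minimal sketch of the agreement statistics reported above (median bias, nonparametric 95% limits of agreement, and R² for paired device and ECG heart rates) is shown below; the paired arrays are made up for illustration and are not the study data.

```python
# Agreement statistics between a candidate HR monitor and the ECG reference.
import numpy as np

rng = np.random.default_rng(0)
hr_ecg = rng.normal(150, 12, size=200)              # beats per minute (hypothetical)
hr_device = hr_ecg + rng.normal(-0.5, 3, size=200)  # hypothetical device readings

diff = hr_device - hr_ecg
bias = np.median(diff)
loa_lower, loa_upper = np.percentile(diff, [2.5, 97.5])  # 95% limits of agreement
r2 = np.corrcoef(hr_device, hr_ecg)[0, 1] ** 2
```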


Subject(s)
Cesarean Section , Electrocardiography , Female , Heart Rate , Humans , Infant, Newborn , Monitoring, Physiologic , Oximetry , Pregnancy