Results 1 - 20 of 379
1.
Am J Epidemiol ; 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38400653

ABSTRACT

Targeted Maximum Likelihood Estimation (TMLE) is increasingly used for doubly robust causal inference, but how missing data should be handled when using TMLE with data-adaptive approaches is unclear. Based on the Victorian Adolescent Health Cohort Study, we conducted a simulation study to evaluate eight missing data methods in this context: complete-case analysis, an extended TMLE incorporating an outcome-missingness model, the missing indicator method for missing covariates, and five multiple imputation (MI) approaches using parametric or machine-learning models. Six scenarios were considered, varying in the exposure/outcome generation models (presence of confounder-confounder interactions) and missingness mechanisms (whether the outcome influenced missingness in other variables, and presence of interaction/non-linear terms in the missingness models). Complete-case analysis and extended TMLE had small biases when the outcome did not influence missingness in other variables. Parametric MI without interactions had large bias when the exposure/outcome generation models included interactions. Parametric MI including interactions performed best in bias and variance reduction across all settings, except when the missingness models included a non-linear term. When choosing a method to handle missing data in the context of TMLE, researchers must consider the missingness mechanism and, for MI, compatibility with the analysis method. In many settings, a parametric MI approach that incorporates interactions and non-linearities is expected to perform well.
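The imputation-model compatibility issue flagged in this abstract can be sketched with a toy simulation (hypothetical data; a single stochastic imputation rather than full MI, for brevity): when the analysis model contains an exposure-covariate interaction that the imputation model omits, the interaction estimate is attenuated.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 20_000

# Hypothetical data: outcome Y depends on exposure A, covariate C,
# and their interaction (true interaction coefficient = 1.5).
A = rng.integers(0, 2, n).astype(float)
C = rng.normal(size=n)
Y = A + C + 1.5 * A * C + rng.normal(size=n)

Y_obs = Y.copy()
Y_obs[rng.random(n) < 0.5] = np.nan      # 50% missing completely at random

def impute_then_fit(include_interaction):
    """Single stochastic regression imputation of Y, then fit the
    analysis model Y ~ A + C + A:C; return the interaction estimate."""
    obs = ~np.isnan(Y_obs)
    cols = [np.ones(n), A, C] + ([A * C] if include_interaction else [])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X[obs], Y_obs[obs], rcond=None)
    resid_sd = (Y_obs[obs] - X[obs] @ beta).std()
    Y_comp = Y_obs.copy()
    Y_comp[~obs] = X[~obs] @ beta + rng.normal(scale=resid_sd, size=(~obs).sum())
    Xa = np.column_stack([np.ones(n), A, C, A * C])   # analysis model
    return np.linalg.lstsq(Xa, Y_comp, rcond=None)[0][3]

coef_compatible = impute_then_fit(True)     # imputation model matches analysis
coef_incompatible = impute_then_fit(False)  # interaction omitted -> attenuated
```

With the interaction omitted, the imputed half of the data carries no A-by-C signal, so the pooled estimate is pulled toward zero; the compatible imputation model recovers the true coefficient.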

2.
Clin Trials ; 21(2): 162-170, 2024 04.
Article in English | MEDLINE | ID: mdl-37904490

ABSTRACT

BACKGROUND: A 2×2 factorial design evaluates two interventions (A versus control and B versus control) by randomising to control, A-only, B-only or both A and B together. Extended factorial designs are also possible (e.g. 3×3 or 2×2×2). Factorial designs often require fewer resources and participants than alternative randomised controlled trials, but they are not widely used. We identified several issues that investigators considering this design need to address, before they use it in a late-phase setting. METHODS: We surveyed journal articles published in 2000-2022 relating to designing factorial randomised controlled trials. We identified issues to consider based on these and our personal experiences. RESULTS: We identified clinical, practical, statistical and external issues that make factorial randomised controlled trials more desirable. Clinical issues are (1) interventions can be easily co-administered; (2) risk of safety issues from co-administration above individual risks of the separate interventions is low; (3) safety or efficacy data are wanted on the combination intervention; (4) potential for interaction (e.g. effect of A differing when B administered) is low; (5) it is important to compare interventions with other interventions balanced, rather than allowing randomised interventions to affect the choice of other interventions; (6) eligibility criteria for different interventions are similar. Practical issues are (7) recruitment is not harmed by testing many interventions; (8) each intervention and associated toxicities is unlikely to reduce either adherence to the other intervention or overall follow-up; (9) blinding is easy to implement or not required. Statistical issues are (10) a suitable scale of analysis can be identified; (11) adjustment for multiplicity is not required; (12) early stopping for efficacy or lack of benefit can be done effectively. 
External issues are (13) adequate funding is available and (14) the trial is not intended for licensing purposes. An overarching issue (15) is that the factorial design should give a lower sample size requirement than alternative designs. Across designs with varying non-adherence, retention, intervention effects and interaction effects, 2×2 factorial designs require a lower sample size than a three-arm alternative when one intervention effect is reduced by no more than 24%-48% in the presence of the other intervention compared with in the absence of the other intervention. CONCLUSIONS: Factorial designs are not widely used; they should be considered more often, with the issues listed here as a guide. Low potential for interaction (at most small to modest) is key, for example, where the interventions have different mechanisms of action or target different aspects of the disease being studied.
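Issue (15), the sample size advantage, can be sketched with a standard normal-approximation calculation (illustrative effect size and SD; assumes no A-by-B interaction):

```python
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test on a difference in means."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * (z * sd / delta) ** 2

n = n_per_group(delta=0.4, sd=1.0)   # roughly 98 per group

# 2x2 factorial: at the margins, every participant contributes to both
# the A-vs-control and B-vs-control comparisons, so (assuming no
# A-by-B interaction) both questions are answered with 2*n participants.
total_factorial = 2 * n

# Three-arm alternative (control, A-only, B-only): each comparison uses
# its own arm plus the shared control, needing roughly 3*n in total.
total_three_arm = 3 * n
```

Under these simplifying assumptions the factorial trial answers both questions with about two-thirds of the three-arm sample size.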


Subject(s)
Research Design , Humans , Sample Size , Randomized Controlled Trials as Topic
3.
Contact Dermatitis ; 90(5): 445-457, 2024 May.
Article in English | MEDLINE | ID: mdl-38382085

ABSTRACT

Frequent use of methylchloroisothiazolinone/methylisothiazolinone (MCI/MI) and MI in cosmetic products has been the main cause of widespread sensitization and allergic contact dermatitis to these preservatives (biocides). Their use in non-cosmetic products is also an important source of sensitization. Less is known about the sensitization rates and use of benzisothiazolinone (BIT), octylisothiazolinone (OIT), and dichlorooctylisothiazolinone (DCOIT), which have never been permitted in cosmetic products in Europe. BIT and OIT have only occasionally been included in routine patch testing. These preservatives are often used together in chemical products and articles. In this study, we review the occurrence of contact allergy to MI, BIT, OIT, and DCOIT over time, based on concomitant patch testing in large studies and on case reports. We review EU legislation, and we discuss the roles of industry, regulators, and dermatology in the prevention of sensitization and the protection of health. The frequency of contact allergy to MI, BIT, and OIT has increased. The frequency of contact allergy to DCOIT is not known because it has seldom been patch-tested. Label information on isothiazolinones in chemical products and articles, irrespective of concentration, is required for assessment of relevance, information to patients, and avoidance of exposure and allergic contact dermatitis.


Subject(s)
Cosmetics , Dermatitis, Allergic Contact , Disinfectants , Thiazoles , Humans , Dermatitis, Allergic Contact/epidemiology , Dermatitis, Allergic Contact/etiology , Dermatitis, Allergic Contact/prevention & control , Cosmetics/adverse effects , Disinfectants/adverse effects , Europe/epidemiology , Preservatives, Pharmaceutical/adverse effects , Patch Tests/adverse effects
4.
Biom J ; 66(1): e2200222, 2024 Jan.
Article in English | MEDLINE | ID: mdl-36737675

ABSTRACT

Although new biostatistical methods are published at a very high rate, many of these developments are not trustworthy enough to be adopted by the scientific community. We propose a framework for thinking about how a piece of methodological work contributes to the evidence base for a method. By analogy with the well-known phases of clinical research in drug development, we propose four phases of methodological research. These cover (I) proposing a new methodological idea while providing, for example, logical reasoning or proofs; (II) providing empirical evidence, first in a narrow target setting; (III) providing evidence in an extended range of settings and for various outcomes, accompanied by appropriate application examples; and (IV) investigations that establish a method as sufficiently well understood to know when it is preferred over others and when it is not, that is, its pitfalls. We suggest basic definitions of the four phases to provoke thought and discussion rather than to devise an unambiguous classification of studies into phases. Too many methodological developments finish before phases III and IV, but we give two examples, with references. Our concept rebalances the emphasis towards studies in phases III and IV, that is, carefully planned method comparison studies and studies that explore the empirical properties of existing methods in a wider range of problems.


Subject(s)
Biostatistics , Research Design
5.
Biom J ; 66(1): e2300085, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37823668

ABSTRACT

For simulation studies that evaluate methods of handling missing data, we argue that generating partially observed data by fixing the complete data and repeatedly simulating the missingness indicators is a superficially attractive idea but is only rarely appropriate.
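The authors' caution can be illustrated with a toy simulation (hypothetical normal data, MCAR missingness): fixing the complete data and resampling only the missingness indicators understates the repeated-sampling variability of even a simple complete-case estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 2000

def complete_case_means(fix_complete_data):
    """One simulation design per call: return the complete-case mean of
    y from each of `reps` replications."""
    y_fixed = rng.normal(size=n)               # drawn once, reused if fixed
    estimates = []
    for _ in range(reps):
        y = y_fixed if fix_complete_data else rng.normal(size=n)
        miss = rng.random(n) < 0.3             # MCAR missingness indicators
        estimates.append(y[~miss].mean())
    return np.array(estimates)

sd_fixed = complete_case_means(True).std()     # data-sampling variability missing
sd_full = complete_case_means(False).std()     # close to 1/sqrt(0.7*n) ~ 0.085
```

The fixed-complete-data design only captures variability from the missingness indicators, so its between-replication spread falls well short of the estimator's true repeated-sampling standard error.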


Subject(s)
Research , Data Interpretation, Statistical , Computer Simulation
6.
Biom J ; 66(1): e2200291, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38285405

ABSTRACT

Multiple imputation (MI) is a popular method for handling missing data. Auxiliary variables can be added to the imputation model(s) to improve MI estimates. However, the choice of which auxiliary variables to include is not always straightforward. Several data-driven auxiliary variable selection strategies have been proposed, but there has been limited evaluation of their performance. Using a simulation study we evaluated the performance of eight auxiliary variable selection strategies: (1, 2) two versions of selection based on correlations in the observed data; (3) selection using hypothesis tests of the "missing completely at random" assumption; (4) replacing auxiliary variables with their principal components; (5, 6) forward and forward stepwise selection; (7) forward selection based on the estimated fraction of missing information; and (8) selection via the least absolute shrinkage and selection operator (LASSO). A complete case analysis and an MI analysis using all auxiliary variables (the "full model") were included for comparison. We also applied all strategies to a motivating case study. The full model outperformed all auxiliary variable selection strategies in the simulation study, with the LASSO strategy the best performing auxiliary variable selection strategy overall. All MI analysis strategies that we were able to apply to the case study led to similar estimates, although computational time was substantially reduced when variable selection was employed. This study provides further support for adopting an inclusive auxiliary variable strategy where possible. Auxiliary variable selection using the LASSO may be a promising alternative when the full model fails or is too burdensome.
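A minimal sketch of one of the simpler strategies evaluated here, selection based on observed-data correlations (strategies 1 and 2), using hypothetical data; the 0.1 threshold is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Hypothetical incomplete variable y and five candidate auxiliaries,
# only the first two of which are actually related to y.
aux = rng.normal(size=(n, 5))
y = 0.8 * aux[:, 0] + 0.5 * aux[:, 1] + rng.normal(size=n)
y[rng.random(n) < 0.3] = np.nan        # 30% of y missing

def select_auxiliaries(y, aux, threshold=0.1):
    """Keep auxiliaries whose observed-data correlation with the
    incomplete variable exceeds the threshold in absolute value."""
    obs = ~np.isnan(y)
    keep = []
    for j in range(aux.shape[1]):
        r = np.corrcoef(y[obs], aux[obs, j])[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return keep

selected = select_auxiliaries(y, aux)   # should find columns 0 and 1
```

In practice the selected columns would then be passed to the imputation model; the abstract's results suggest the inclusive "full model" remains the safer default when computationally feasible.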


Subject(s)
Computer Simulation
7.
Lancet Oncol ; 24(7): 783-797, 2023 07.
Article in English | MEDLINE | ID: mdl-37414011

ABSTRACT

BACKGROUND: Adding docetaxel to androgen deprivation therapy (ADT) improves survival in patients with metastatic, hormone-sensitive prostate cancer, but uncertainty remains about who benefits most. We therefore aimed to obtain up-to-date estimates of the overall effects of docetaxel and to assess whether these effects varied according to prespecified characteristics of the patients or their tumours. METHODS: The STOPCAP M1 collaboration conducted a systematic review and meta-analysis of individual participant data. We searched MEDLINE (from database inception to March 31, 2022), Embase (from database inception to March 31, 2022), the Cochrane Central Register of Controlled Trials (from database inception to March 31, 2022), proceedings of relevant conferences (from Jan 1, 1990, to Dec 31, 2022), and ClinicalTrials.gov (from database inception to March 28, 2023) to identify eligible randomised trials that assessed docetaxel plus ADT compared with ADT alone in patients with metastatic, hormone-sensitive prostate cancer. Detailed and updated individual participant data were requested directly from study investigators or through relevant repositories. The primary outcome was overall survival. Secondary outcomes were progression-free survival and failure-free survival. Overall pooled effects were estimated using an adjusted, intention-to-treat, two-stage, fixed-effect meta-analysis, with one-stage and random-effects sensitivity analyses. Missing covariate values were imputed. Differences in effect by participant characteristics were estimated using adjusted two-stage, fixed-effect meta-analysis of within-trial interactions on the basis of progression-free survival to maximise power. Identified effect modifiers were also assessed on the basis of overall survival. To explore multiple subgroup interactions and derive subgroup-specific absolute treatment effects we used one-stage flexible parametric modelling and regression standardisation. 
We assessed the risk of bias using the Cochrane Risk of Bias 2 tool. This study is registered with PROSPERO, CRD42019140591. FINDINGS: We obtained individual participant data from 2261 patients (98% of those randomised) from three eligible trials (GETUG-AFU15, CHAARTED, and STAMPEDE trials), with a median follow-up of 72 months (IQR 55-85). Individual participant data were not obtained from two additional small trials. Based on all included trials and patients, there were clear benefits of docetaxel on overall survival (hazard ratio [HR] 0·79, 95% CI 0·70 to 0·88; p<0·0001), progression-free survival (0·70, 0·63 to 0·77; p<0·0001), and failure-free survival (0·64, 0·58 to 0·71; p<0·0001), representing 5-year absolute improvements of around 9-11%. The overall risk of bias was assessed to be low, and there was no strong evidence of differences in effect between trials for all three main outcomes. The relative effect of docetaxel on progression-free survival appeared to be greater with increasing clinical T stage (p for interaction=0·0019), higher volume of metastases (p for interaction=0·020), and, to a lesser extent, synchronous diagnosis of metastatic disease (p for interaction=0·077). Taking into account the other interactions, the effect of docetaxel was independently modified by volume and clinical T stage, but not timing. There was no strong evidence that docetaxel improved absolute effects at 5 years for patients with low-volume, metachronous disease (-1%, 95% CI -15 to 12, for progression-free survival; 0%, -10 to 12, for overall survival). The largest absolute improvement at 5 years was observed for those with high-volume, clinical T stage 4 disease (27%, 95% CI 17 to 37, for progression-free survival; 35%, 24 to 47, for overall survival).
INTERPRETATION: The addition of docetaxel to hormone therapy is best suited to patients with poorer prognosis for metastatic, hormone-sensitive prostate cancer based on a high volume of disease and potentially the bulkiness of the primary tumour. There is no evidence of meaningful benefit for patients with metachronous, low-volume disease who should therefore be managed differently. These results will better characterise patients most and, importantly, least likely to gain benefit from docetaxel, potentially changing international practice, guiding clinical decision making, better informing treatment policy, and improving patient outcomes. FUNDING: UK Medical Research Council and Prostate Cancer UK.


Subject(s)
Prostatic Neoplasms , Male , Humans , Docetaxel , Prostatic Neoplasms/pathology , Androgen Antagonists , Disease-Free Survival , Hormones/therapeutic use , Antineoplastic Combined Chemotherapy Protocols/adverse effects , Randomized Controlled Trials as Topic
8.
Stat Med ; 42(8): 1156-1170, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36732886

ABSTRACT

In some clinical scenarios, for example, severe sepsis caused by extensively drug resistant bacteria, there is uncertainty between many common treatments, but a conventional multiarm randomized trial is not possible because individual participants may not be eligible to receive certain treatments. The Personalised Randomized Controlled Trial design allows each participant to be randomized between a "personalised randomization list" of treatments that are suitable for them. The primary aim is to produce treatment rankings that can guide choice of treatment, rather than focusing on the estimates of relative treatment effects. Here we use simulation to assess several novel analysis approaches for this innovative trial design. One of the approaches resembles a network meta-analysis, in which participants with the same personalised randomization list are treated as a trial, and both direct and indirect evidence are used. We evaluate this proposed analysis and compare it with analyses making less use of indirect evidence. We also propose new performance measures, including the expected improvement in outcome if the trial's rankings are used to inform future treatment rather than random choice. We conclude that analysis of a personalized randomized controlled trial can be performed by pooling data from different types of participants and, under the parameters of our simulation, is robust to moderate subgroup-by-intervention interactions. The proposed approach performs well with respect to estimation bias and coverage. It provides an overall treatment ranking list with reasonable precision, and is likely to improve outcome on average if used to determine intervention policies and guide individual clinical decisions.


Subject(s)
Randomized Controlled Trials as Topic , Research Design , Humans , Precision Medicine , Patient Participation
9.
Stat Med ; 42(8): 1188-1206, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36700492

ABSTRACT

When data are available from individual patients receiving either a treatment or a control intervention in a randomized trial, various statistical and machine learning methods can be used to develop models for predicting future outcomes under the two conditions, and thus to predict treatment effect at the patient level. These predictions can subsequently guide personalized treatment choices. Although several methods for validating prediction models are available, little attention has been given to measuring the performance of predictions of personalized treatment effect. In this article, we propose a range of measures that can be used to this end. We start by defining two dimensions of model accuracy for treatment effects, for a single outcome: discrimination for benefit and calibration for benefit. We then amalgamate these two dimensions into an additional concept, decision accuracy, which quantifies the model's ability to identify patients for whom the benefit from treatment exceeds a given threshold. Subsequently, we propose a series of performance measures related to these dimensions and discuss estimating procedures, focusing on randomized data. Our methods are applicable for continuous or binary outcomes, for any type of prediction model, as long as it uses baseline covariates to predict outcomes under treatment and control. We illustrate all methods using two simulated datasets and a real dataset from a trial in depression. We implement all methods in the R package predieval. Results suggest that the proposed measures can be useful in evaluating and comparing the performance of competing models in predicting individualized treatment effect.


Subject(s)
Models, Statistical , Precision Medicine , Randomized Controlled Trials as Topic , Humans , Treatment Outcome , Clinical Decision Rules
10.
Stat Med ; 42(8): 1127-1138, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36661242

ABSTRACT

Bayesian analysis of a non-inferiority trial is advantageous in allowing direct probability statements to be made about the relative treatment difference rather than relying on an arbitrary and often poorly justified non-inferiority margin. When the primary analysis will be Bayesian, a Bayesian approach to sample size determination will often be appropriate for consistency with the analysis. We demonstrate three Bayesian approaches to choosing sample size for non-inferiority trials with binary outcomes and review their advantages and disadvantages. First, we present a predictive power approach for determining sample size using the probability that the trial will produce a convincing result in the final analysis. Next, we determine sample size by considering the expected posterior probability of non-inferiority in the trial. Finally, we demonstrate a precision-based approach. We apply these methods to a non-inferiority trial in antiretroviral therapy for treatment of HIV-infected children. A predictive power approach would be most accessible in practical settings, because it is analogous to the standard frequentist approach. Sample sizes are larger than with frequentist calculations unless an informative analysis prior is specified, because appropriate allowance is made for uncertainty in the assumed design parameters, ignored in frequentist calculations. An expected posterior probability approach will lead to a smaller sample size and is appropriate when the focus is on estimating posterior probability rather than on testing. A precision-based approach would be useful when sample size is restricted by limits on recruitment or costs, but it would be difficult to decide on sample size using this approach alone.
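The expected-posterior-probability approach can be sketched by Monte Carlo (hypothetical design prior and non-inferiority margin; Beta(1, 1) analysis priors assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

def assurance(n_per_arm, margin=0.10, sims=1000, post_draws=2000,
              design_prior=(60, 20)):
    """Expected posterior probability of non-inferiority for a binary
    outcome: average, over trials simulated from a design prior on a
    common response rate, of Pr(p_t - p_c > -margin | data), computed
    under Beta(1, 1) analysis priors."""
    probs = []
    for _ in range(sims):
        p = rng.beta(*design_prior)          # design-prior draw (~0.75)
        x_c = rng.binomial(n_per_arm, p)     # control responses
        x_t = rng.binomial(n_per_arm, p)     # treatment truly equivalent
        pc = rng.beta(1 + x_c, 1 + n_per_arm - x_c, post_draws)
        pt = rng.beta(1 + x_t, 1 + n_per_arm - x_t, post_draws)
        probs.append(np.mean(pt - pc > -margin))
    return float(np.mean(probs))

a_small = assurance(50)     # smaller trial: lower expected probability
a_large = assurance(200)    # larger trial: posterior concentrates near 0
```

Sweeping `n_per_arm` until the expected posterior probability reaches a target (for example 0.9) gives a sample size, illustrating why this approach typically yields smaller samples than a predictive power calculation.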


Subject(s)
Research Design , Child , Humans , Bayes Theorem , Probability , Sample Size , Uncertainty , Equivalence Trials as Topic
11.
Stat Med ; 42(27): 4917-4930, 2023 11 30.
Article in English | MEDLINE | ID: mdl-37767752

ABSTRACT

In network meta-analysis, studies evaluating multiple treatment comparisons are modeled simultaneously, and estimation is informed by a combination of direct and indirect evidence. Network meta-analysis relies on an assumption of consistency, meaning that direct and indirect evidence should agree for each treatment comparison. Here we propose new local and global tests for inconsistency and demonstrate their application to three example networks. Because inconsistency is a property of a loop of treatments in the network meta-analysis, we locate the local test in a loop. We define a model with one inconsistency parameter that can be interpreted as loop inconsistency. The model builds on the existing ideas of node-splitting and side-splitting in network meta-analysis. To provide a global test for inconsistency, we extend the model across multiple independent loops with one degree of freedom per loop. We develop a new algorithm for identifying independent loops within a network meta-analysis. Our proposed models handle treatments symmetrically, locate inconsistency in loops rather than in nodes or treatment comparisons, and are invariant to choice of reference treatment, making the results less dependent on model parameterization. For testing global inconsistency in network meta-analysis, our global model uses fewer degrees of freedom than the existing design-by-treatment interaction approach and has the potential to increase power. To illustrate our methods, we fit the models to three network meta-analyses varying in size and complexity. Local and global tests for inconsistency are performed and we demonstrate that the global model is invariant to choice of independent loops.
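For a single triangle loop, the direct-versus-indirect comparison underlying loop inconsistency can be sketched as follows (hypothetical log odds ratios; this is the classic Bucher-style contrast, which the paper's model generalises via a loop-inconsistency parameter and an algorithm for independent loops):

```python
from math import sqrt
from statistics import NormalDist

def loop_inconsistency(d_ab, se_ab, d_bc, se_bc, d_ac, se_ac):
    """Test inconsistency in one loop A-B-C: the direct A-C estimate
    should equal the indirect one built from A-B plus B-C if the
    consistency assumption holds."""
    w = d_ac - (d_ab + d_bc)                  # loop inconsistency estimate
    se_w = sqrt(se_ab**2 + se_bc**2 + se_ac**2)
    z = w / se_w
    p = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return w, se_w, p

# Hypothetical log odds ratios from a three-treatment network
w, se_w, p = loop_inconsistency(-0.5, 0.15, -0.3, 0.20, -0.4, 0.18)
```

A global test along the lines of the paper would repeat this contrast over independent loops, with one degree of freedom per loop.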


Subject(s)
Algorithms , Research Design , Humans , Network Meta-Analysis
12.
BMC Med Res Methodol ; 23(1): 274, 2023 11 21.
Article in English | MEDLINE | ID: mdl-37990159

ABSTRACT

BACKGROUND: For certain conditions, treatments aim to lessen deterioration over time. A trial outcome could be change in a continuous measure, analysed using a random slopes model with a different slope in each treatment group. A sample size for a trial with a particular schedule of visits (e.g. annually for three years) can be obtained using a two-stage process. First, relevant (co-) variances are estimated from a pre-existing dataset e.g. an observational study conducted in a similar setting. Second, standard formulae are used to calculate sample size. However, the random slopes model assumes linear trajectories with any difference in group means increasing proportionally to follow-up time. The impact of these assumptions failing is unclear. METHODS: We used simulation to assess the impact of a non-linear trajectory and/or non-proportional treatment effect on the proposed trial's power. We used four trajectories, both linear and non-linear, and simulated observational studies to calculate sample sizes. Trials of this size were then simulated, with treatment effects proportional or non-proportional to time. RESULTS: For a proportional treatment effect and a trial visit schedule matching the observational study, powers are close to nominal even for non-linear trajectories. However, if the schedule does not match the observational study, powers can be above or below nominal levels, with the extent of this depending on parameters such as the residual error variance. For a non-proportional treatment effect, using a random slopes model can lead to powers far from nominal levels. CONCLUSIONS: If trajectories are suspected to be non-linear, observational data used to inform power calculations should have the same visit schedule as the proposed trial where possible. Additionally, if the treatment effect is expected to be non-proportional, the random slopes model should not be used. 
A model allowing trajectories to vary freely over time could be used instead, either as a second line analysis method (bearing in mind that power will be lost) or when powering the trial.
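A simplified version of such a power simulation (per-participant OLS slopes followed by a z-test on the group difference, as a stand-in for the random slopes model; all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.0, 1.0, 2.0, 3.0])            # annual visit schedule

def simulate_power(n_per_arm=100, slope_diff=0.5, slope_sd=0.3,
                   resid_sd=1.0, reps=1000, z_crit=1.96):
    """Estimate power for detecting a difference in mean slope between
    arms: fit each participant's OLS slope over the visits, then z-test
    the arm difference. For balanced complete data this tracks the
    random-slopes analysis."""
    sxx = ((t - t.mean()) ** 2).sum()
    hits = 0
    for _ in range(reps):
        slopes = []
        for arm_slope in (0.0, slope_diff):
            b = arm_slope + slope_sd * rng.normal(size=n_per_arm)
            y = b[:, None] * t + resid_sd * rng.normal(size=(n_per_arm, 4))
            # per-participant OLS slope over the 4 visits
            bhat = ((t - t.mean()) * (y - y.mean(axis=1, keepdims=True))
                    ).sum(axis=1) / sxx
            slopes.append(bhat)
        d = slopes[1].mean() - slopes[0].mean()
        se = np.sqrt(slopes[0].var(ddof=1) / n_per_arm
                     + slopes[1].var(ddof=1) / n_per_arm)
        hits += abs(d / se) > z_crit
    return hits / reps

power = simulate_power()
```

Replacing the linear trajectory with a non-linear one, or making `slope_diff` act non-proportionally over time, reproduces the scenarios in which the abstract reports power drifting away from nominal levels.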


Subject(s)
Sample Size , Humans , Computer Simulation
13.
Clin Trials ; 20(3): 269-275, 2023 06.
Article in English | MEDLINE | ID: mdl-36916466

ABSTRACT

BACKGROUND: A common intercurrent event affecting many trials is when some participants do not begin their assigned treatment. For example, in a double-blind drug trial, some participants may not receive any dose of study medication. Many trials use a 'modified intention-to-treat' approach, whereby participants who do not initiate treatment are excluded from the analysis. However, it is not clear (a) the estimand being targeted by such an approach and (b) the assumptions necessary for such an approach to be unbiased. METHODS: Using potential outcome notation, we demonstrate that a modified intention-to-treat analysis which excludes participants who do not begin treatment is estimating a principal stratum estimand (i.e. the treatment effect in the subpopulation of participants who would begin treatment, regardless of which arm they were assigned to). The modified intention-to-treat estimator is unbiased for the principal stratum estimand under the assumption that the intercurrent event is not affected by the assigned treatment arm, that is, participants who initiate treatment in one arm would also do so in the other arm (i.e. if someone began the intervention, they would also have begun the control, and vice versa). RESULTS: We identify two key criteria in determining whether the modified intention-to-treat estimator is likely to be unbiased: first, we must be able to measure the participants in each treatment arm who experience the intercurrent event, and second, the assumption that treatment allocation will not affect whether the participant begins treatment must be reasonable. 
Most double-blind trials will satisfy these criteria, as the decision to start treatment cannot be influenced by the allocation, and we provide an example of an open-label trial where these criteria are likely to be satisfied as well, implying that a modified intention-to-treat analysis which excludes participants who do not begin treatment is an unbiased estimator for the principal stratum effect in these settings. We also give two examples where these criteria will not be satisfied (one comparing an active intervention vs usual care, where we cannot identify which usual care participants would have initiated the active intervention, and another comparing two active interventions in an unblinded manner, where knowledge of the assigned treatment arm may affect the participant's choice to begin or not), implying that a modified intention-to-treat estimator will be biased in these settings. CONCLUSION: A modified intention-to-treat analysis which excludes participants who do not begin treatment can be an unbiased estimator for the principal stratum estimand. Our framework can help identify when the assumptions for unbiasedness are likely to hold, and thus whether modified intention-to-treat is appropriate or not.
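The key identification argument can be checked in a toy potential-outcomes simulation (hypothetical effect sizes; initiation is a baseline trait unaffected by the assigned arm, mirroring the assumption described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

z = rng.integers(0, 2, n)                    # randomised arm
# Whether a participant would begin treatment is a baseline trait here,
# i.e. NOT affected by the assigned arm -- the key assumption.
would_initiate = rng.random(n) < 0.6
y0 = rng.normal(size=n) + 2.0 * (~would_initiate)   # prognosis differs
effect = 1.0                                 # principal stratum effect
y = y0 + effect * z * would_initiate         # observed outcome

# Modified ITT: exclude participants who did not begin treatment.
# Under the assumption above, the excluded subgroup is exchangeable
# across arms, so this targets the principal stratum estimand.
m = would_initiate
mitt = y[m & (z == 1)].mean() - y[m & (z == 0)].mean()
itt = y[z == 1].mean() - y[z == 0].mean()    # diluted by never-initiators
```

Making `would_initiate` depend on `z` (as in the unblinded examples above) breaks the exchangeability and biases `mitt`, which is easy to confirm by editing the simulation.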


Subject(s)
Intention to Treat Analysis , Humans , Double-Blind Method , Clinical Protocols
14.
Clin Trials ; 20(6): 594-602, 2023 12.
Article in English | MEDLINE | ID: mdl-37337728

ABSTRACT

BACKGROUND: The population-level summary measure is a key component of the estimand for clinical trials with time-to-event outcomes. This is particularly the case for non-inferiority trials, because different summary measures imply different null hypotheses. Most trials are designed using the hazard ratio as summary measure, but recent studies suggested that the difference in restricted mean survival time might be more powerful, at least in certain situations. In a recent letter, we conjectured that differences between summary measures can be explained using the concept of the non-inferiority frontier and that for a fair simulation comparison of summary measures, the same analysis methods, making the same assumptions, should be used to estimate different summary measures. The aim of this article is to make such a comparison between three commonly used summary measures: hazard ratio, difference in restricted mean survival time and difference in survival at a fixed time point. In addition, we aim to investigate the impact of using an analysis method that assumes proportional hazards on the operating characteristics of a trial designed with any of the three summary measures. METHODS: We conduct a simulation study in the proportional hazards setting. We estimate difference in restricted mean survival time and difference in survival non-parametrically, without assuming proportional hazards. We also estimate all three measures parametrically, using flexible survival regression, under the proportional hazards assumption. RESULTS: Comparing the hazard ratio assuming proportional hazards with the other summary measures not assuming proportional hazards, relative performance varies substantially depending on the specific scenario. Fixing the summary measure, assuming proportional hazards always leads to substantial power gains compared to using non-parametric methods. 
Fixing the modelling approach to flexible parametric regression assuming proportional hazards, difference in restricted mean survival time is most often the most powerful summary measure among those considered. CONCLUSION: When the hazards are likely to be approximately proportional, reflecting this in the analysis can lead to large gains in power for difference in restricted mean survival time and difference in survival. The choice of summary measure for a non-inferiority trial with time-to-event outcomes should be made on clinical grounds; when any of the three summary measures discussed here is equally justifiable, difference in restricted mean survival time is most often associated with the most powerful test, on the condition that it is estimated under proportional hazards.
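The difference in restricted mean survival time can be estimated non-parametrically as the area under the Kaplan-Meier curve up to a horizon tau; a minimal sketch (hypothetical exponential data, no censoring in the usage example):

```python
import numpy as np

def rmst(time, event, tau):
    """Non-parametric restricted mean survival time: area under the
    Kaplan-Meier curve up to tau. `event` is True for observed events,
    False for censored observations."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    s, last_t, area = 1.0, 0.0, 0.0
    for t, d in zip(time, event):
        if t > tau:
            break
        area += s * (t - last_t)     # survival is constant between steps
        if d:
            s *= 1 - 1 / at_risk     # KM step at an event time
        at_risk -= 1
        last_t = t
    area += s * (tau - last_t)       # final piece up to the horizon
    return area

# Usage: two hypothetical exponential arms, mean survival 2.5 vs 2.0
rng = np.random.default_rng(9)
t_treat = rng.exponential(scale=2.5, size=5000)
t_ctrl = rng.exponential(scale=2.0, size=5000)
ev = np.ones(5000, dtype=bool)       # no censoring in this sketch
diff = rmst(t_treat, ev, tau=3.0) - rmst(t_ctrl, ev, tau=3.0)
```

For exponential survival the true values are scale*(1 - exp(-tau/scale)), so the difference here should be close to 0.19; a parametric estimate under proportional hazards, as in the abstract, would typically be more precise.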


Subject(s)
Research Design , Humans , Computer Simulation , Proportional Hazards Models , Sample Size , Survival Analysis , Time Factors
15.
Contact Dermatitis ; 88(2): 152-153, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36258285

ABSTRACT

We report the case of a dentist presenting with allergic contact dermatitis to methacrylates in a dental bonding agent applied to the dorsum of a gloved hand. The patient presented with localized dermatitis of the dorsum of the non-dominant hand, which can be described as a 'manual tray sign'.


Subject(s)
Dermatitis, Allergic Contact , Dermatitis, Occupational , Eczema , Hand Dermatoses , Humans , Dermatitis, Allergic Contact/diagnosis , Dermatitis, Allergic Contact/etiology , Methacrylates/adverse effects , Dermatitis, Occupational/etiology , Dermatitis, Occupational/complications , Eczema/complications , Torso , Hand Dermatoses/chemically induced , Hand Dermatoses/diagnosis , Hand Dermatoses/complications , Patch Tests/adverse effects
16.
Ann Intern Med ; 175(11): 1560-1571, 2022 11.
Article in English | MEDLINE | ID: mdl-36252247

ABSTRACT

BACKGROUND: To what extent the COVID-19 pandemic and its containment measures influenced mental health in the general population is still unclear. PURPOSE: To assess the trajectory of mental health symptoms during the first year of the pandemic and examine dose-response relations with characteristics of the pandemic and its containment. DATA SOURCES: Relevant articles were identified from the living evidence database of the COVID-19 Open Access Project, which indexes COVID-19-related publications from MEDLINE via PubMed, Embase via Ovid, and PsycInfo. Preprint publications were not considered. STUDY SELECTION: Longitudinal studies that reported data on the general population's mental health using validated scales and that were published before 31 March 2021 were eligible. DATA EXTRACTION: An international crowd of 109 trained reviewers screened references and extracted study characteristics, participant characteristics, and symptom scores at each timepoint. Data were also included for the following country-specific variables: days since the first case of SARS-CoV-2 infection, the stringency of governmental containment measures, and the cumulative numbers of cases and deaths. DATA SYNTHESIS: In a total of 43 studies (331 628 participants), changes in symptoms of psychological distress, sleep disturbances, and mental well-being varied substantially across studies. On average, depression and anxiety symptoms worsened in the first 2 months of the pandemic (standardized mean difference at 60 days, -0.39 [95% credible interval, -0.76 to -0.03]); thereafter, the trajectories were heterogeneous. There was a linear association of worsening depression and anxiety with increasing numbers of reported cases of SARS-CoV-2 infection and increasing stringency in governmental measures. Gender, age, country, deprivation, inequalities, risk of bias, and study design did not modify these associations. 
LIMITATIONS: The certainty of the evidence was low because of the high risk of bias in included studies and the large amount of heterogeneity. Stringency measures and surges in cases were strongly correlated and changed over time. The observed associations should not be interpreted as causal relationships. CONCLUSION: Although an initial increase in average symptoms of depression and anxiety and an association between higher numbers of reported cases and more stringent measures were found, changes in mental health symptoms varied substantially across studies after the first 2 months of the pandemic. This suggests that different populations responded differently to the psychological stress generated by the pandemic and its containment measures. PRIMARY FUNDING SOURCE: Swiss National Science Foundation. (PROSPERO: CRD42020180049).


Subject(s)
COVID-19 , Humans , Anxiety/epidemiology , Anxiety/psychology , COVID-19/epidemiology , Depression/psychology , Mental Health , Pandemics , SARS-CoV-2
17.
Stata J ; 23(1): 24-52, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37461744

ABSTRACT

We describe the command artbin, which offers various new facilities for the calculation of sample size for binary outcome variables that are not otherwise available in Stata. While artbin has been available since 2004, it has not been previously described in the Stata Journal. artbin has been recently updated to include new options for different statistical tests, methods and study designs, improved syntax, and better handling of noninferiority trials. In this article, we describe the updated version of artbin and detail the various formulas used within artbin in different settings.
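The abstract does not reproduce artbin's formulas, and the sketch below is not artbin (which is a Stata command); it is the standard normal-approximation sample-size calculation for comparing two proportions in a superiority trial, one of the simplest settings such commands cover. The function name and defaults are illustrative:

```python
import math
from statistics import NormalDist


def n_per_group(p1, p2, alpha=0.05, power=0.9):
    """Sample size per arm for a two-sided test of two proportions,
    using the pooled-variance normal approximation."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2                  # pooled proportion under H0
    num = (za * math.sqrt(2 * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)
```

For example, detecting 10% versus 20% event risks with 90% power at the two-sided 5% level requires 266 participants per arm under this approximation; artbin additionally supports other tests, unequal allocation, and noninferiority designs.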

18.
Stata J ; 23(1): 3-23, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37155554

ABSTRACT

We describe a new command, artcat, that calculates sample size or power for a randomized controlled trial or similar experiment with an ordered categorical outcome, where analysis is by the proportional-odds model. artcat implements the method of Whitehead (1993, Statistics in Medicine 12: 2257-2271). We also propose and implement a new method that (1) allows the user to specify a treatment effect that does not obey the proportional-odds assumption, (2) offers greater accuracy for large treatment effects, and (3) allows for noninferiority trials. We illustrate the command and explore the value of an ordered categorical outcome over a binary outcome in various settings. We show by simulation that the methods perform well and that the new method is more accurate than Whitehead's method.
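Whitehead's (1993) approach gives a closed-form total sample size for a 1:1 trial analysed by proportional odds: N = 6(z_{1-α/2} + z_{1-β})² / [θ²(1 − Σ p̄ₖ³)], where θ is the log odds ratio and p̄ₖ are the category probabilities averaged over arms. The Python sketch below is my own illustration of that formula, not artcat itself; it derives the treatment-arm probabilities from the control-arm ones by shifting cumulative log-odds:

```python
import math
from statistics import NormalDist


def whitehead_total_n(control_probs, log_or, alpha=0.05, power=0.9):
    """Whitehead-style total sample size (both arms, 1:1 allocation)
    for an ordered categorical outcome under proportional odds."""
    z = NormalDist().inv_cdf
    expit = lambda x: 1 / (1 + math.exp(-x))
    # cumulative control probabilities, excluding the final category
    cum_c = [sum(control_probs[:k + 1]) for k in range(len(control_probs) - 1)]
    # proportional odds: shift each cumulative log-odds by log_or
    cum_t = [expit(math.log(c / (1 - c)) + log_or) for c in cum_c]
    treat_probs = [b - a for a, b in zip([0.0] + cum_t, cum_t + [1.0])]
    pbar = [(pc + pt) / 2 for pc, pt in zip(control_probs, treat_probs)]
    za, zb = z(1 - alpha / 2), z(power)
    return 6 * (za + zb) ** 2 / (log_or ** 2 * (1 - sum(q ** 3 for q in pbar)))
```

As expected, a larger assumed odds ratio yields a smaller required sample size; the article's new method departs from this formula when the proportional-odds assumption is relaxed or the effect is large.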

19.
Am J Epidemiol ; 191(5): 930-938, 2022 03 24.
Article in English | MEDLINE | ID: mdl-35146500

ABSTRACT

Comparative effectiveness research using network meta-analysis can present a hierarchy of competing treatments, from the most to the least preferable option. However, in published reviews, the research question associated with the hierarchy of multiple interventions is typically not clearly defined. Here we introduce the novel notion of a treatment hierarchy question that describes the criterion for choosing a specific treatment over one or more competing alternatives. For example, stakeholders might ask which treatment is most likely to improve mean survival by at least 2 years, or which treatment is associated with the longest mean survival. We discuss the most commonly used ranking metrics (quantities that compare the estimated treatment-specific effects), how the ranking metrics produce a treatment hierarchy, and the type of treatment hierarchy question that each ranking metric can answer. We show that the ranking metrics encompass the uncertainty in the estimation of the treatment effects in different ways, which results in different treatment hierarchies. When using network meta-analyses that aim to rank treatments, investigators should state the treatment hierarchy question they aim to address and employ the appropriate ranking metric to answer it. Following this new proposal will avoid some controversies that have arisen in comparative effectiveness research.
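To make the distinction between ranking metrics concrete, the sketch below (my own illustration, not from the article) computes two commonly used metrics from posterior draws of treatment effects: the probability of being ranked first, which answers "which treatment is most likely to be the best?", and SUCRA, which summarizes the whole rank distribution and so answers a different hierarchy question:

```python
def rank_metrics(draws, higher_better=True):
    """P(best) and SUCRA for each treatment.
    draws: list of posterior iterations, each a dict {treatment: effect}."""
    trts = list(draws[0])
    T = len(trts)
    rank_counts = {t: [0] * T for t in trts}   # rank_counts[t][r]: times t had rank r
    for it in draws:
        order = sorted(trts, key=lambda t: it[t], reverse=higher_better)
        for r, t in enumerate(order):
            rank_counts[t][r] += 1
    n = len(draws)
    p_best = {t: rank_counts[t][0] / n for t in trts}
    # SUCRA = (T - E[rank]) / (T - 1), i.e. mean cumulative ranking probability
    sucra = {t: sum((T - 1 - r) * c for r, c in enumerate(rank_counts[t]))
                / (n * (T - 1))
             for t in trts}
    return p_best, sucra
```

Because P(best) ignores everything below rank one while SUCRA averages over all ranks, the two metrics encode estimation uncertainty differently and can produce different hierarchies from the same draws, which is the article's central point.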


Subject(s)
Benchmarking , Humans , Network Meta-Analysis , Uncertainty
20.
Stat Med ; 41(26): 5203-5219, 2022 11 20.
Article in English | MEDLINE | ID: mdl-36054668

ABSTRACT

Network meta-analysis (NMA) of rare events has attracted little attention in the literature. Until recently, networks of interventions with rare events were analyzed using the inverse-variance NMA approach. However, when events are rare the normal approximations made by this model can be poor and effect estimates are potentially biased. Other methods for the synthesis of such data are the recent extension of the Mantel-Haenszel approach to NMA or the use of the noncentral hypergeometric distribution. In this article, we suggest a new common-effect NMA approach that can be applied even in networks of interventions with extremely low or even zero number of events without requiring study exclusion or arbitrary imputations. Our method is based on the implementation of the penalized likelihood function proposed by Firth for bias reduction of the maximum likelihood estimate to the logistic expression of the NMA model. A limitation of our method is that heterogeneity cannot be taken into account as an additive parameter as in most meta-analytical models. However, we account for heterogeneity by incorporating a multiplicative overdispersion term using a two-stage approach. We show through simulation that our method performs consistently well across all tested scenarios and most often results in smaller bias than other available methods. We also illustrate the use of our method through two clinical examples. We conclude that our "penalized likelihood NMA" approach is promising for the analysis of binary outcomes with rare events especially for networks with very few studies per comparison and very low control group risks.
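A small illustration of why the Firth penalty keeps estimates finite: for a single two-arm study analysed with a saturated logistic model, maximising the Firth-penalized likelihood is equivalent to adding 0.5 to every cell of the 2×2 table. The sketch below shows only this single-table special case, not the article's full penalized-likelihood NMA model:

```python
import math


def firth_log_or(events_t, n_t, events_c, n_c):
    """Log odds ratio from a 2x2 table with the Firth-type correction.
    In the saturated single-table case the penalty amounts to adding
    0.5 to each cell, so the estimate is finite even with zero events."""
    a, b = events_t + 0.5, n_t - events_t + 0.5   # treatment: events, non-events
    c, d = events_c + 0.5, n_c - events_c + 0.5   # control: events, non-events
    return math.log(a * d / (b * c))
```

With zero events in both arms the unpenalized maximum likelihood estimate is undefined, whereas this correction returns a log odds ratio of 0; the article extends the same penalization idea across a whole network without excluding studies or imputing cells.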


Subject(s)
Research Design , Humans , Bias , Computer Simulation , Likelihood Functions , Network Meta-Analysis