2.
BMC Med Res Methodol ; 24(1): 91, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641771

ABSTRACT

Observational data provide invaluable real-world information in medicine, but certain methodological considerations are required to derive causal estimates. In this systematic review, we evaluated the methodology and reporting quality of individual-level patient data meta-analyses (IPD-MAs) conducted with non-randomized exposures, published in 2009, 2014, and 2019, that sought to estimate a causal relationship in medicine. We screened over 16,000 titles and abstracts, reviewed 45 full-text articles out of the 167 deemed potentially eligible, and included 29 in the analysis. Unfortunately, we found that causal methodologies were rarely implemented and that reporting was generally poor across studies. Specifically, only three of the 29 articles used quasi-experimental methods, and no study used G-methods to adjust for time-varying confounding. To address these issues, we propose stronger collaborations between physicians and methodologists to ensure that causal methodologies are properly implemented in IPD-MAs. In addition, we put forward a suggested checklist of reporting guidelines for IPD-MAs that utilize causal methods. This checklist could improve reporting, thereby potentially enhancing the quality and trustworthiness of IPD-MAs, which can be considered one of the most valuable sources of evidence for health policy.


Subject(s)
Medicine , Research Design , Humans , Checklist
3.
BMJ Open ; 14(4): e083453, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38684262

ABSTRACT

INTRODUCTION: Opioid agonist treatment (OAT) tapering involves a gradual reduction in daily medication dose to ultimately reach a state of opioid abstinence. Because of the high risk of relapse and overdose after tapering, this practice is not recommended by clinical guidelines; however, clients may still request to taper off medication. The ideal time to initiate an OAT taper is not known. Ethically, taper plans should acknowledge clients' preferences and autonomy while applying principles of shared, informed decision-making regarding safety and efficacy. Linked population-level data capturing real-world tapering practices provide a valuable opportunity to improve the existing evidence on when to contemplate starting an OAT taper. Our objective is to determine the comparative effectiveness of alternative times from OAT initiation at which a taper can be initiated, with a primary outcome of taper completion, as observed in clinical practice in British Columbia (BC), Canada. METHODS AND ANALYSIS: We propose a population-level retrospective observational study with a linkage of eight provincial health administrative databases in BC, Canada (1 January 2010 to 17 March 2020). Our primary outcomes include taper completion and all-cause mortality during treatment. We propose a 'per-protocol' target trial to compare the effect of different times to taper initiation on the likelihood of taper completion. A range of sensitivity analyses will be used to assess the heterogeneity and robustness of the results, including assessment of effectiveness and safety. ETHICS AND DISSEMINATION: The protocol, cohort creation and analysis plan have been classified and approved as a quality improvement initiative by Providence Health Care Research Ethics Board and the Simon Fraser University Office of Research Ethics. Results will be disseminated to local advocacy groups and decision-makers, national and international clinical guideline developers, presented at international conferences and published in peer-reviewed journals electronically and in print.


Subject(s)
Opiate Substitution Treatment , Opioid-Related Disorders , Humans , British Columbia , Retrospective Studies , Opioid-Related Disorders/drug therapy , Opiate Substitution Treatment/methods , Analgesics, Opioid/administration & dosage , Analgesics, Opioid/therapeutic use , Drug Tapering , Comparative Effectiveness Research , Time Factors , Research Design
4.
Psychol Methods ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38512203

ABSTRACT

Following an extensive simulation study comparing the operating characteristics of three different procedures used for establishing equivalence (the frequentist "TOST," the Bayesian "HDI-ROPE," and the Bayes factor interval null procedure), Linde et al. (2021) conclude with the recommendation that "researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence" (p. 1). We redo the simulation study of Linde et al. (2021) in its entirety but with the different procedures calibrated to have the same predetermined maximum Type I error rate. Our results suggest that, when calibrated in this way, the Bayes factor, HDI-ROPE, and frequentist equivalence tests all have similar (almost exactly equal) Type II error rates. In general, any claim that frequentist testing is empirically better or worse than Bayesian testing seems dubious at best. If one decides which underlying principle to subscribe to in tackling a given problem, then the method follows naturally. Bearing in mind that each procedure can be reverse-engineered from the others (at least approximately), trying to use empirical performance to argue for one approach over another seems like tilting at windmills.
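For readers unfamiliar with the frequentist procedure being compared, a minimal sketch of a two-sample TOST ("two one-sided tests") equivalence test follows; the equivalence margin `delta`, the alpha level, and the simulated data are illustrative assumptions, and the article's calibration to a common maximum Type I error rate is not reproduced here.

```python
# Hypothetical two-sample TOST: equivalence is declared when both one-sided
# t-tests reject at level alpha. `delta` is an assumed equivalence margin.
import numpy as np
from scipy import stats

def tost_two_sample(x, y, delta, alpha=0.05):
    """Return the TOST p value (max of the two one-sided p values)."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / nx + np.var(y, ddof=1) / ny)
    df = nx + ny - 2                      # Welch's df would be a refinement
    p_lower = 1 - stats.t.cdf((diff + delta) / se, df)  # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)      # H0: diff >= +delta
    p = max(p_lower, p_upper)
    return p, p < alpha

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1.0, 100), rng.normal(0.05, 1.0, 100)
p, equivalent = tost_two_sample(x, y, delta=0.5)
print(f"TOST p = {p:.4f}, equivalence declared: {equivalent}")
```

Equivalence is declared only when both one-sided tests reject, that is, when the larger of the two p values falls below alpha.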

5.
Epidemiology ; 35(2): 218-231, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38290142

ABSTRACT

BACKGROUND: Instrumental variable (IV) analysis provides an alternative set of identification assumptions in the presence of uncontrolled confounding when attempting to estimate causal effects. Our objective was to evaluate the suitability of measures of prescriber preference and calendar time as potential IVs to evaluate the comparative effectiveness of buprenorphine/naloxone versus methadone for treatment of opioid use disorder (OUD). METHODS: Using linked population-level health administrative data, we constructed five IVs: prescribing preference at the individual, facility, and region levels (continuous and categorical variables), calendar time, and a binary prescriber's preference IV, for analyzing the association between treatment assignment and treatment discontinuation using both incident-user and prevalent-new-user designs. Using published guidelines, we assessed and compared each IV against the four assumptions for IVs, employing both empirical assessment and content expertise. We evaluated the robustness of results using sensitivity analyses. RESULTS: The study sample included 35,904 incident users (43.3% on buprenorphine/naloxone) initiated on opioid agonist treatment by 1585 prescribers during the study period. While all candidate IVs were strong (A1) according to conventional criteria, expert opinion suggested no evidence against the assumptions of exclusion (A2), independence (A3), monotonicity (A4a), and homogeneity (A4b) for the prescribing preference-based IVs. Some criteria were violated for the calendar time-based IV. We determined that provider-level prescribing preference, measured on a continuous scale, was the most suitable IV for the comparative effectiveness of buprenorphine/naloxone and methadone for the treatment of OUD. CONCLUSIONS: Our results suggest that prescriber's preference measures are suitable IVs in comparative effectiveness studies of treatment for OUD.
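The article evaluates candidate IVs against assumptions A1-A4 rather than reporting an IV regression itself, but a generic two-stage least squares (2SLS) sketch with a preference-style instrument on simulated data shows how such an IV is typically used once deemed suitable; all variable names and the data-generating model are illustrative assumptions.

```python
# Generic 2SLS with a preference-style instrument on simulated data: the
# unmeasured confounder `u` biases naive OLS, while the instrument recovers
# the true effect (1.0) under the standard IV assumptions.
import numpy as np

rng = np.random.default_rng(11)
n = 20000
u = rng.normal(size=n)                      # unmeasured confounder
pref = rng.normal(size=n)                   # prescriber's preference (instrument)
treat = (0.8 * pref + 0.5 * u + rng.normal(size=n) > 0).astype(float)
y = 1.0 * treat + 1.5 * u + rng.normal(size=n)

# Stage 1: regress treatment on the instrument.
Z = np.column_stack([np.ones(n), pref])
treat_hat = Z @ np.linalg.lstsq(Z, treat, rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment values.
X2 = np.column_stack([np.ones(n), treat_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(n), treat]), y, rcond=None)[0]
print(f"naive OLS effect: {naive[1]:.2f}; 2SLS effect: {beta[1]:.2f} (true 1.0)")
```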


Subject(s)
Methadone , Opioid-Related Disorders , Humans , Methadone/therapeutic use , Opioid-Related Disorders/drug therapy , Buprenorphine, Naloxone Drug Combination/therapeutic use , Opiate Substitution Treatment/methods , Health Status , Analgesics, Opioid/therapeutic use
6.
Biostatistics ; 25(2): 354-384, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-36881693

ABSTRACT

Naive estimates of the incidence and infection fatality rate (IFR) of coronavirus disease 2019 suffer from a variety of biases, many of which relate to preferential testing. This has motivated epidemiologists from around the globe to conduct serosurveys that measure the immunity of individuals by testing for the presence of SARS-CoV-2 antibodies in the blood. These quantitative measures (titer values) are then used as a proxy for previous or current infection. However, statistical methods that use these data to their full potential have yet to be developed. Previous researchers have discretized these continuous values, discarding potentially useful information. In this article, we demonstrate how multivariate mixture models can be used in combination with post-stratification to estimate cumulative incidence and IFR in an approximate Bayesian framework without discretization. In doing so, we account for uncertainty from both the estimated number of infections and incomplete deaths data to provide estimates of IFR. This method is demonstrated using data from the Action to Beat Coronavirus serosurvey in Canada.
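As a rough illustration of the core idea, the sketch below fits a univariate two-component Gaussian mixture to simulated log-titers and reads cumulative incidence off the mixture weight instead of thresholding titer values; the article's actual model is multivariate, Bayesian, and post-stratified, none of which is reproduced here.

```python
# Simplified analogue of the mixture idea: log titers come from a background
# component and an infected component; the weight of the infected component,
# not a cutoff on titer values, estimates cumulative incidence.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n, true_prev = 5000, 0.08
infected = rng.random(n) < true_prev
log_titer = np.where(infected,
                     rng.normal(2.0, 0.6, n),   # infected component
                     rng.normal(0.0, 0.5, n))   # background component

gm = GaussianMixture(n_components=2, random_state=0).fit(log_titer.reshape(-1, 1))
infected_comp = np.argmax(gm.means_.ravel())    # higher-mean component
print(f"estimated cumulative incidence: {gm.weights_[infected_comp]:.3f}")
```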


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Bayes Theorem , Incidence , SARS-CoV-2
7.
Res Sq ; 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37693428

ABSTRACT

Observational data provide invaluable real-world information in medicine, but certain methodological considerations are required to derive causal estimates. In this systematic review, we evaluated the methodology and reporting quality of individual-level patient data meta-analyses (IPD-MAs) published in 2009, 2014, and 2019 that sought to estimate a causal relationship in medicine. We screened over 16,000 titles and abstracts, reviewed 45 full-text articles out of the 167 deemed potentially eligible, and included 29 in the analysis. Unfortunately, we found that causal methodologies were rarely implemented and that reporting was generally poor across studies. Specifically, only three of the 29 articles used quasi-experimental methods, and no study used G-methods to adjust for time-varying confounding. To address these issues, we propose stronger collaborations between physicians and methodologists to ensure that causal methodologies are properly implemented in IPD-MAs. In addition, we put forward a suggested checklist of reporting guidelines for IPD-MAs that utilize causal methods. This checklist could improve reporting, thereby potentially enhancing the quality and trustworthiness of IPD-MAs, which can be considered one of the most valuable sources of evidence for health policy.

8.
Med Decis Making ; 43(5): 564-575, 2023 07.
Article in English | MEDLINE | ID: mdl-37345680

ABSTRACT

BACKGROUND: A previously developed risk prediction model needs to be validated before being used in a new population. The finite size of the validation sample entails that there is uncertainty around model performance. We apply value-of-information (VoI) methodology to quantify the consequence of uncertainty in terms of net benefit (NB). METHODS: We define the expected value of perfect information (EVPI) for model validation as the expected loss in NB due to not confidently knowing which of the alternative decisions confers the highest NB. We propose bootstrap-based and asymptotic methods for EVPI computations and conduct simulation studies to compare their performance. In a case study, we use the non-US subsets of a clinical trial as the development sample for predicting mortality after myocardial infarction and calculate the validation EVPI for the US subsample. RESULTS: The computation methods generated similar EVPI values in simulation studies. EVPI generally declined with larger samples. In the case study, at the prespecified threshold of 0.02, the best decision with current information would be to use the model, with an incremental NB of 0.0020 over treating all. At this threshold, the EVPI was 0.0005 (relative EVPI = 25%). When scaled to the annual number of heart attacks in the US, the expected NB loss due to uncertainty was equal to 400 true positives or 19,600 false positives, indicating the value of further model validation. CONCLUSION: VoI methods can be applied to the NB calculated during external validation of clinical prediction models. While uncertainty does not directly affect the clinical implications of NB findings, validation EVPI provides an objective perspective on the need for further validation and can be reported alongside NB in external validation studies. HIGHLIGHTS: External validation is a critical step when transporting a risk prediction model to a new setting, but the finite size of the validation sample creates uncertainty about the performance of the model. In decision theory, such uncertainty is associated with loss of net benefit because it can prevent one from identifying whether the use of the model is beneficial over alternative strategies. We define the expected value of perfect information for external validation as the expected loss in net benefit from not confidently knowing whether the use of the model is net beneficial. The adoption of a model for a new population should be based on its expected net benefit; independently, value-of-information methods can be used to decide whether further validation studies are warranted.
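A hedged sketch of the bootstrap EVPI computation described in the abstract follows, using simulated predictions and outcomes; the threshold, sample sizes, and data-generating choices are illustrative assumptions, not the article's case-study data.

```python
# Bootstrap EVPI on a (simulated) validation sample: for each resample, compute
# the net benefit of "use model", "treat all", and "treat none"; EVPI is the
# mean of the per-resample best NB minus the best of the mean NBs.
import numpy as np

def net_benefit(y, p, pt):
    """NB at threshold pt: TP/n - FP/n * pt/(1 - pt)."""
    treat = p >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(0)
n, pt, B = 2000, 0.02, 2000
p_hat = rng.beta(1, 30, n)                          # hypothetical predictions
y = rng.binomial(1, np.clip(1.1 * p_hat, 0, 1))     # slightly miscalibrated truth

nb = np.empty((B, 3))                               # model, treat-all, treat-none
for b in range(B):
    idx = rng.integers(0, n, n)
    yb, pb = y[idx], p_hat[idx]
    nb[b] = [net_benefit(yb, pb, pt), net_benefit(yb, np.ones(n), pt), 0.0]

evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"validation EVPI at pt = {pt}: {evpi:.5f}")
```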


Subject(s)
Uncertainty , Humans , Cost-Benefit Analysis
9.
Med Decis Making ; 43(5): 621-626, 2023 07.
Article in English | MEDLINE | ID: mdl-37269136

ABSTRACT

HIGHLIGHTS: The unit normal loss integral (UNLI) is widely used in decision analysis and risk modeling, including in the computation of various value-of-information metrics, but its closed-form solution is only applicable to comparisons of 2 strategies. We derive a closed-form solution for the 2-dimensional UNLI, extending the applicability of the UNLI to 3-strategy comparisons. Such closed-form computation takes only a fraction of a second and is free from simulation errors that affect the hitherto available methods. In addition to the relevance in 3-strategy model-based and data-driven decision analyses, a particular application is in risk prediction modeling, where the net benefit of a classifier should always be compared with 2 default strategies of treating none and treating all.
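For context, the long-known one-dimensional closed form that the article generalizes can be written as follows; the two-dimensional result is the article's contribution and is not reproduced here.

```latex
% One-dimensional unit normal loss integral (standard result).
\[
  \mathrm{UNLI}(k) = \mathbb{E}\bigl[\max(X - k,\, 0)\bigr]
                   = \phi(k) - k\bigl(1 - \Phi(k)\bigr),
  \qquad X \sim \mathcal{N}(0, 1),
\]
\[
  \mathbb{E}\bigl[\max(X - k,\, 0)\bigr]
    = (\mu - k)\,\Phi\!\left(\frac{\mu - k}{\sigma}\right)
      + \sigma\,\phi\!\left(\frac{\mu - k}{\sigma}\right),
  \qquad X \sim \mathcal{N}(\mu, \sigma^{2}).
\]
```

Here phi and Phi denote the standard normal density and distribution function; in 2-strategy problems, the per-decision EVPI is this loss integral evaluated at the standardized mean incremental net benefit and scaled by its standard deviation.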


Subject(s)
Decision Making , Humans , Cost-Benefit Analysis
10.
Diagn Progn Res ; 7(1): 10, 2023 May 16.
Article in English | MEDLINE | ID: mdl-37189162

ABSTRACT

Prediction algorithms that quantify the expected benefit of a given treatment conditional on patient characteristics can critically inform medical decisions. Quantifying the performance of treatment benefit prediction algorithms is an active area of research. A recently proposed metric, the concordance statistic for benefit (cfb), evaluates the discriminative ability of a treatment benefit predictor by directly extending the concept of the concordance statistic from a risk model with a binary outcome to a model for treatment benefit. In this work, we scrutinize cfb on multiple fronts. Through numerical examples and theoretical developments, we show that cfb is not a proper scoring rule. We also show that it is sensitive to the unestimable correlation between counterfactual outcomes and to the definition of matched pairs. We argue that measures of statistical dispersion applied to predicted benefits do not suffer from these issues and can be an alternative metric for the discriminatory performance of treatment benefit predictors.
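To make the matching-dependence concrete, here is a hedged sketch of one common construction of cfb, pairing treated and control patients by sorted predicted benefit, alongside the simple dispersion measure the abstract suggests as an alternative; exact definitions of cfb vary across papers, and this is not the article's code.

```python
# One common construction of cfb: match treated and control patients into
# pairs by predicted benefit; "observed" pair benefit is the difference in
# outcomes; cfb is the concordance between predicted and observed pair
# benefits. The matching rule itself is a modeling choice.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 400
benefit_pred = rng.normal(0.05, 0.03, n)           # hypothetical predicted benefits
treated = np.arange(n) % 2 == 0
y = rng.binomial(1, np.clip(0.2 - treated * benefit_pred, 0, 1))

# Match the i-th treated to the i-th control after sorting each arm.
t_idx = np.where(treated)[0][np.argsort(benefit_pred[treated])]
c_idx = np.where(~treated)[0][np.argsort(benefit_pred[~treated])]
pair_pred = (benefit_pred[t_idx] + benefit_pred[c_idx]) / 2
pair_obs = y[c_idx] - y[t_idx]                     # in {-1, 0, 1}

conc = disc = 0
for i, j in combinations(range(len(pair_pred)), 2):
    if pair_obs[i] == pair_obs[j]:
        continue                                   # ties carry no information
    agree = (pair_pred[i] - pair_pred[j]) * (pair_obs[i] - pair_obs[j]) > 0
    conc += agree
    disc += not agree

print(f"cfb (this matching rule): {conc / (conc + disc):.3f}")
print(f"dispersion alternative, SD of predicted benefits: {benefit_pred.std():.4f}")
```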

11.
BMJ Open ; 13(5): e068729, 2023 05 31.
Article in English | MEDLINE | ID: mdl-37258082

ABSTRACT

INTRODUCTION: Urine drug tests (UDTs) are commonly used for monitoring opioid agonist treatment (OAT) responses, supporting the clinical decision for take-home doses and monitoring potential diversion. However, there is limited evidence supporting the utility of mandatory UDTs, particularly regarding the impact of UDT frequency on OAT retention. Real-world evidence can inform patient-centred approaches to OAT and improve current strategies to address the ongoing opioid public health emergency. Our objective is to determine the safety and comparative effectiveness of alternative UDT monitoring strategies as observed in clinical practice among OAT clients in British Columbia, Canada from 2010 to 2020. METHODS AND ANALYSIS: We propose a population-level retrospective cohort study of all individuals 18 years of age or older who initiated OAT from 1 January 2010 to 17 March 2020. The study will draw on eight linked health administrative databases from British Columbia. Our primary outcomes include OAT discontinuation and all-cause mortality. To determine the effectiveness of the intervention, we will emulate a 'per-protocol' target trial using a clone-censoring approach to compare fixed and dynamic UDT monitoring strategies. A range of sensitivity analyses will be executed to determine the robustness of our results. ETHICS AND DISSEMINATION: The protocol, cohort creation and analysis plan have been classified and approved as a quality improvement initiative by Providence Health Care Research Ethics Board and the Simon Fraser University Office of Research Ethics. Results will be disseminated to local advocacy groups and decision-makers, national and international clinical guideline developers, presented at international conferences and published in peer-reviewed journals electronically and in print.


Subject(s)
Analgesics, Opioid , Opioid-Related Disorders , Humans , Adolescent , Adult , Analgesics, Opioid/therapeutic use , British Columbia , Retrospective Studies , Drug Evaluation, Preclinical , Mass Screening , Opioid-Related Disorders/drug therapy , Observational Studies as Topic
12.
Am J Epidemiol ; 192(8): 1406-1414, 2023 08 04.
Article in English | MEDLINE | ID: mdl-37092245

ABSTRACT

Regression calibration is a popular approach for correcting biases in estimated regression parameters when exposure variables are measured with error. This approach involves building a calibration equation to estimate the value of the unknown true exposure given the error-prone measurement and other covariates. The estimated, or calibrated, exposure is then substituted for the unknown true exposure in the health outcome regression model. When used properly, regression calibration can greatly reduce the bias induced by exposure measurement error. Here, we first provide an overview of the statistical framework for regression calibration, specifically discussing how a special type of error, called Berkson error, arises in the estimated exposure. We then present practical issues to consider when applying regression calibration, including: 1) how to develop the calibration equation and which covariates to include; 2) valid ways to calculate standard errors of estimated regression coefficients; and 3) problems arising if one of the covariates in the calibration model is a mediator of the relationship between the exposure and outcome. Throughout, we provide illustrative examples using data from the Hispanic Community Health Study/Study of Latinos (United States, 2008-2011) and simulations. We conclude with recommendations for how to perform regression calibration.
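A minimal sketch of the two-stage procedure on simulated data follows, assuming a validation subsample in which the true exposure is observed alongside the error-prone measurement; all variable names and the linear structure are illustrative assumptions, not the article's Hispanic Community Health Study analysis.

```python
# Two-stage regression calibration: (1) fit a calibration equation E[X | W, Z]
# in a validation subsample where the true exposure X is observed; (2) replace
# the error-prone W with the calibrated exposure in the outcome model.
import numpy as np

rng = np.random.default_rng(7)
n, n_val = 5000, 500
z = rng.normal(size=n)                     # covariate
x = 0.5 * z + rng.normal(size=n)           # true exposure (mostly unobserved)
w = x + rng.normal(scale=0.8, size=n)      # error-prone measurement
y = 1.0 + 0.3 * x + 0.2 * z + rng.normal(size=n)

# Stage 1: calibration equation, fit only in the validation subsample.
val = np.arange(n_val)
A = np.column_stack([np.ones(n_val), w[val], z[val]])
gamma = np.linalg.lstsq(A, x[val], rcond=None)[0]
x_cal = gamma[0] + gamma[1] * w + gamma[2] * z   # calibrated exposure

# Stage 2: substitute the calibrated exposure in the outcome model.
B = np.column_stack([np.ones(n), x_cal, z])
beta = np.linalg.lstsq(B, y, rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(n), w, z]), y, rcond=None)[0][1]
print(f"naive slope: {naive:.3f}; calibrated slope: {beta[1]:.3f} (true 0.3)")
# Standard errors should account for Stage-1 uncertainty, e.g., by
# bootstrapping both stages together, as the article discusses.
```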


Subject(s)
Public Health , Humans , Calibration , Regression Analysis , Bias
14.
Res Synth Methods ; 14(2): 193-210, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36200133

ABSTRACT

A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may lead to biased estimators of model parameters, even when the misclassification is entirely random. We aimed to develop statistical methods that facilitate unbiased estimation of adjusted and unadjusted exposure-outcome associations and between-study heterogeneity in IPD-MA, where the extent and nature of exposure misclassification may vary across studies. We present Bayesian methods that allow misclassification of binary exposure variables to depend on study- and participant-level characteristics. In an example of the differential diagnosis of dengue using two variables, where the gold standard measurement for the exposure variable was unavailable for some studies which only measured a surrogate prone to misclassification, our methods yielded more accurate estimates than analyses naive with regard to misclassification or based on gold standard measurements alone. In a simulation study, the evaluated misclassification model yielded valid estimates of the exposure-outcome association, and was more accurate than analyses restricted to gold standard measurements. Our proposed framework can appropriately account for the presence of binary exposure misclassification in IPD-MA. It requires that some studies supply IPD for the surrogate and gold standard exposure, and allows misclassification to follow a random effects distribution across studies conditional on observed covariates (and outcome). The proposed methods are most beneficial when few large studies that measured the gold standard are available, and when misclassification is frequent.


Subject(s)
Bayes Theorem , Humans , Computer Simulation
15.
Biometrics ; 79(3): 1986-1995, 2023 09.
Article in English | MEDLINE | ID: mdl-36250351

ABSTRACT

Performing causal inference in observational studies requires the assumption that confounding variables are correctly adjusted for. In settings with few discrete-valued confounders, standard models can be employed. However, as the number of confounders increases, these models become less feasible because fewer observations are available for each unique combination of confounding variables. In this paper, we propose a new model for estimating treatment effects in observational studies that incorporates both parametric and nonparametric outcome models. By conceptually splitting the data, we can combine these models while maintaining a conjugate framework, allowing us to avoid the use of Markov chain Monte Carlo (MCMC) methods. Approximations using the central limit theorem and random sampling allow our method to scale to high-dimensional confounders. Through simulation studies, we show that our method can be competitive with benchmark models while maintaining efficient computation, and we illustrate the method on a large epidemiological health survey.


Subject(s)
Observational Studies as Topic , Causality , Computer Simulation , Markov Chains , Monte Carlo Method
16.
Environ Health ; 21(1): 114, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36419083

ABSTRACT

BACKGROUND: Serum concentrations of total cholesterol and related lipid measures have been associated with serum concentrations of per- and polyfluoroalkyl substances (PFAS) in humans, even among those with only background-level exposure to PFAS. Fiber is known to decrease serum cholesterol, and a recent report based on the National Health and Nutrition Examination Survey (NHANES) showed that PFAS and fiber intake are inversely associated. We hypothesized that confounding by dietary fiber may account for some of the association between cholesterol and PFAS. METHODS: We implemented a Bayesian correction for measurement error in estimated intake of dietary fiber to evaluate whether fiber confounds the cholesterol-PFAS association. The NHANES measure of diet, two 24-h recalls, allowed calculation of an estimate of the "true" long-term fiber intake for each subject. We fit models to the NHANES data on serum cholesterol and serum concentrations of perfluorooctanoic acid (PFOA) and two other PFAS for 7,242 participants. RESULTS: The Bayesian model, after adjustment for soluble fiber intake, suggested a 6.4% decrease in the size of the coefficient for PFOA compared with the fiber-unadjusted model. CONCLUSIONS: The results indicated that the association of serum cholesterol with PFAS was not substantially confounded by fiber intake.


Subject(s)
Fluorocarbons , Humans , Nutrition Surveys , Bayes Theorem , Cholesterol , Dietary Fiber
17.
Article in English | MEDLINE | ID: mdl-35270597

ABSTRACT

BACKGROUND: Understanding and managing the impacts of population growth and densification are important steps for sustainable development. This study sought to evaluate the health trade-offs associated with increasing densification and to identify the optimal balance of neighbourhood densification for health. METHODS: We linked population density with a 27-year mortality dataset in Metro Vancouver that includes census-tract levels of life expectancy (LE), cause-specific mortalities, and area-level deprivation. We applied two methods: (1) difference-in-differences (DID) models to study the impacts of densification changes from the early 1990s on changes in mortality over a 27-year period; and (2) smoothed cubic splines to identify thresholds of densification at which mortality rates accelerated. RESULTS: At densities above ~9400 persons per km2, LE began to decrease more rapidly. By cause, densification was linked to decreased mortality for major causes of mortality in the region, such as cardiovascular diseases, neoplasms, and diabetes. Greater inequality with increasing density was observed for causes such as human immunodeficiency virus and acquired immunodeficiency syndrome (HIV/AIDS), sexually transmitted infections, and self-harm and interpersonal violence. CONCLUSIONS: Areas with higher population densities generally have lower rates of mortality from the major causes, but these environments are also associated with higher relative inequality from largely preventable causes of death.


Subject(s)
Acquired Immunodeficiency Syndrome , Life Expectancy , Canada/epidemiology , Cause of Death , Humans , Mortality , Residence Characteristics
18.
Med Decis Making ; 42(5): 661-671, 2022 07.
Article in English | MEDLINE | ID: mdl-35209762

ABSTRACT

BACKGROUND: Because of the finite size of the development sample, predicted probabilities from a risk prediction model are inevitably uncertain. We apply value-of-information methodology to evaluate the decision-theoretic implications of prediction uncertainty. METHODS: Adopting a Bayesian perspective, we extend the definition of the expected value of perfect information (EVPI) from decision analysis to net benefit calculations in risk prediction. In the context of model development, EVPI is the expected gain in net benefit by using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions for EVPI calculation using Monte Carlo simulations. We used subsets of data of various sizes from a clinical trial for predicting mortality after myocardial infarction to show how EVPI changes with sample size. RESULTS: With a sample size of 1000 and at the prespecified threshold of 2% on predicted risks, the gains in net benefit using the proposed and the correct models were 0.0006 and 0.0011, respectively, resulting in an EVPI of 0.0005 and a relative EVPI of 87%. EVPI was zero only at unrealistically high thresholds (>85%). As expected, EVPI declined with larger samples. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction. CONCLUSION: The development EVPI can be used to decide whether a model can advance to validation, whether it should be abandoned, or whether a larger development sample is needed. Value-of-information methods can be applied to explore decision-theoretic consequences of uncertainty in risk prediction and can complement inferential methods in predictive analytics. R code for implementing this method is provided.
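The following hedged sketch illustrates the development EVPI idea, with bootstrap refits standing in for draws from the posterior of the "correct" predictions; the simulated data and plain logistic model are illustrative assumptions, and this is not the R code provided with the article.

```python
# Development EVPI sketch: each bootstrap refit yields a candidate "correct"
# set of predictions; EVPI is the expected NB attainable with each draw's own
# predictions minus the best expected NB attainable with a fixed strategy
# (the proposed model, treat all, or treat none).
import numpy as np
from sklearn.linear_model import LogisticRegression

def enb(p_true, treat, pt):
    """Expected NB when `p_true` are taken as the correct probabilities."""
    return np.mean(p_true * treat) - pt / (1 - pt) * np.mean((1 - p_true) * treat)

rng = np.random.default_rng(5)
n, pt = 1000, 0.02
Xd = rng.normal(size=(n, 3))
p = 1 / (1 + np.exp(-(-3.5 + Xd @ np.array([0.8, 0.5, -0.4]))))
yd = rng.binomial(1, p)
p_hat = LogisticRegression().fit(Xd, yd).predict_proba(Xd)[:, 1]  # proposed model

nb_correct, nb_model, nb_all = [], [], []
for _ in range(200):
    idx = rng.integers(0, n, n)
    draw = LogisticRegression().fit(Xd[idx], yd[idx]).predict_proba(Xd)[:, 1]
    nb_correct.append(enb(draw, draw >= pt, pt))    # best achievable under draw
    nb_model.append(enb(draw, p_hat >= pt, pt))     # proposed model under draw
    nb_all.append(enb(draw, np.ones(n, bool), pt))  # treat all under draw

evpi = np.mean(nb_correct) - max(np.mean(nb_model), np.mean(nb_all), 0.0)
print(f"development EVPI at pt = {pt}: {evpi:.5f}")
```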


Subject(s)
Uncertainty , Bayes Theorem , Cost-Benefit Analysis , Humans , Monte Carlo Method , Sample Size
19.
J Clin Epidemiol ; 145: 29-38, 2022 05.
Article in English | MEDLINE | ID: mdl-35045316

ABSTRACT

OBJECTIVES: Among infectious disease (ID) studies that seek to make causal inferences by pooling individual-level longitudinal data from multiple cohorts, we sought to assess what methods are being used, how those methods are being reported, and whether these factors have changed over time. STUDY DESIGN AND SETTING: Systematic review of longitudinal observational infectious disease studies pooling individual-level patient data from two or more studies, published in English in 2009, 2014, or 2019. This systematic review protocol is registered with PROSPERO (CRD42020204104). RESULTS: Our search yielded 1,462 unique articles. Of these, 16 were included in the final review. Our analysis showed a lack of causal inference methods and of clear reporting on methods and the required assumptions. CONCLUSION: There are many approaches to causal inference that may help facilitate accurate inference in the presence of unmeasured and time-varying confounding. In observational ID studies leveraging pooled, longitudinal individual participant data (IPD), the absence of these causal inference methods and gaps in the reporting of key methodological considerations suggest there is ample opportunity to enhance the rigor and reporting of research in this field. Interdisciplinary collaborations between substantive and methodological experts would strengthen future work.


Subject(s)
Communicable Diseases , Causality , Communicable Diseases/epidemiology , Humans , Longitudinal Studies
20.
Int J Popul Data Sci ; 7(1): 1708, 2022.
Article in English | MEDLINE | ID: mdl-37650030

ABSTRACT

Introduction: Overdose events related to illicit opioids and other substances are a public health crisis in Canada. The BC Provincial Overdose Cohort is a collection of linked datasets identifying drug-related toxicity events, including death, ambulance, emergency room, hospital, and physician records. The datasets were brought together to understand factors associated with drug-related overdose and can also provide information on pathways of care among people who experience an overdose. Objectives: To describe pathways of recorded healthcare use for overdose events in British Columbia, Canada and discrepancies between data sources. Methods: Using the BC Provincial Overdose Cohort spanning 2015 to 2017, we examined pathways of recorded healthcare use for overdose through the framework of an injury reporting pyramid. We also explored differences in event capture between linked datasets. Results: In the cohort, a total of 34,113 fatal and non-fatal overdose events were identified. A total of 3,056 people died of overdose. Nearly 80% of these deaths occurred among those with no contact with the healthcare system. The majority of events with healthcare records included contact with emergency health services (EHS) (72%), while 39% were seen in the emergency department (ED) and only 7% were hospitalized. Pathways of care from EHS to the ED and hospitalization were generally observed. However, not all ED visits had an associated EHS record, and some hospitalizations following an ED visit were for other health issues. Conclusions: These findings emphasize the importance of access to timely healthcare for people experiencing overdose. They can be applied to understanding pathways of care for people who experience overdose events and to estimating the total burden of healthcare-attended overdose events. Highlights: In British Columbia, Canada: Multiple sources of linked administrative health data were leveraged to understand recorded healthcare use among people with fatal and non-fatal overdose events. The majority of fatal overdose events occurred with no contact with the healthcare system and appear only in mortality data. Many non-fatal overdose events were captured in data from emergency health services, emergency departments, and hospital records. Accessing timely healthcare services is critical for people experiencing overdose.


Subject(s)
Drug Overdose , Semantic Web , Humans , British Columbia/epidemiology , Drug Overdose/epidemiology , Ambulances , Analgesics, Opioid