Results 1 - 20 of 48
1.
Pharm Stat ; 20(5): 945-951, 2021 09.
Article in English | MEDLINE | ID: mdl-33724684

ABSTRACT

This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated from observational data compared with those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics, which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of the structural relationships of patient features with outcomes within each treatment arm and of differences in the distributions of these features across the arms. In large randomized trials, the distributions of features across arms are assumed to be very similar. Importantly, randomization balances not only observed features but also unobserved ones. Applying high-dimensional balancing methods such as propensity score matching to observational data eliminates the distributional terms of the decomposition model, but unobserved features may still be unbalanced. Finally, a correction for non-random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial. However, there are significant challenges in identifying the instrumental variables necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data.
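The weighted-average structure described above can be illustrated with a minimal Oaxaca-Blinder-style decomposition sketch on simulated data (the single patient feature, coefficients, and sample sizes are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: one patient feature x, outcome y, two arms
n = 2000
x_t = rng.normal(1.0, 1.0, n)                  # treated arm: shifted feature distribution
x_c = rng.normal(0.0, 1.0, n)
y_t = 2.0 + 1.5 * x_t + rng.normal(0, 1, n)    # arm-specific structural model
y_c = 1.0 + 1.0 * x_c + rng.normal(0, 1, n)

def ols(x, y):
    """Fit y = b0 + b1*x by least squares; return [b0, b1]."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_t, b_c = ols(x_t, y_t), ols(x_c, y_c)
xbar_t = np.array([1.0, x_t.mean()])           # mean feature vector, treated arm
xbar_c = np.array([1.0, x_c.mean()])

total = y_t.mean() - y_c.mean()
endowment = (xbar_t - xbar_c) @ b_c            # differences in feature distributions
structural = xbar_t @ (b_t - b_c)              # differences in structural relationships

# The naive arm difference splits exactly into the two decomposition terms
print(round(float(total), 3), round(float(endowment + structural), 3))
```

In a randomized trial the endowment term is near zero by design; in observational data, matching or weighting must do that work, and only for observed features.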


Subjects
Delivery of Health Care, Research Design, Causality, Humans, Propensity Score
2.
Med Care ; 58(10): 919-926, 2020 10.
Article in English | MEDLINE | ID: mdl-32842044

ABSTRACT

BACKGROUND: Relative costs of care among treatment options for opioid use disorder (OUD) are unknown. METHODS: We identified a cohort of 40,885 individuals with a new diagnosis of OUD in a large national de-identified claims database covering commercially insured and Medicare Advantage enrollees. We assigned individuals to 1 of 6 mutually exclusive initial treatment pathways: (1) Inpatient Detox/Rehabilitation Treatment Center; (2) Behavioral Health Intensive, Intensive Outpatient or Partial Hospitalization Services; (3) Methadone or Buprenorphine; (4) Naltrexone; (5) Behavioral Health Outpatient Services; or (6) No Treatment. We assessed total costs of care in the initial 90-day treatment period for each strategy using a difference-in-differences approach controlling for baseline costs. RESULTS: Within 90 days of diagnosis, 94.8% of individuals received treatment, with the initial treatments being: 15.8% for Inpatient Detox/Rehabilitation Treatment Center; 4.8% for Behavioral Health Intensive, Intensive Outpatient or Partial Hospitalization Services; 12.5% for buprenorphine/methadone; 2.4% for naltrexone; and 59.3% for Behavioral Health Outpatient Services. Average unadjusted costs increased from $3250 per member per month (SD $7846) at baseline to $5047 per member per month (SD $11,856) in the 90-day follow-up period. Compared with no treatment, initial 90-day costs were lower for buprenorphine/methadone [adjusted difference-in-differences cost ratio (ADIDCR) 0.65; 95% confidence interval (CI), 0.52-0.80], naltrexone (ADIDCR 0.53; 95% CI, 0.42-0.67), and behavioral health outpatient services (ADIDCR 0.54; 95% CI, 0.44-0.66). Costs were higher for inpatient detox (ADIDCR 2.30; 95% CI, 1.88-2.83). CONCLUSION: Improving health system capacity, insurance coverage, and incentives for outpatient management of OUD may reduce health care costs.
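The adjusted difference-in-differences cost ratio (ADIDCR) reported above is, in essence, a DID on log costs exponentiated back to a ratio. A hedged sketch with made-up mean costs (not the study's estimates):

```python
import math

# Illustrative mean per-member costs; made-up numbers, not the study's data
costs = {
    ("baseline", "treatment"): 3100.0,   # e.g. buprenorphine/methadone group
    ("followup", "treatment"): 4100.0,
    ("baseline", "comparison"): 3250.0,  # e.g. no-treatment group
    ("followup", "comparison"): 6600.0,
}

# Difference-in-differences on the log scale; exponentiating gives a cost ratio
did_log = (math.log(costs[("followup", "treatment")]) - math.log(costs[("baseline", "treatment")])) \
        - (math.log(costs[("followup", "comparison")]) - math.log(costs[("baseline", "comparison")]))
cost_ratio = math.exp(did_log)
print(f"{cost_ratio:.2f}")
```

A ratio below 1 means follow-up costs grew less in the treatment group than in the comparison group, relative to each group's own baseline.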


Subjects
Opiate Substitution Treatment/economics, Opioid-Related Disorders/drug therapy, Opioid-Related Disorders/economics, Opioid-Related Disorders/rehabilitation, Adolescent, Adult, Aged, Ambulatory Care/economics, Behavior Therapy/economics, Buprenorphine/therapeutic use, Cohort Studies, Female, Health Care Costs, Hospitalization/economics, Humans, Male, Medicare, Methadone/therapeutic use, Middle Aged, Naltrexone/therapeutic use, Narcotic Antagonists/therapeutic use, Retrospective Studies, United States
3.
Value Health ; 22(5): 587-592, 2019 05.
Article in English | MEDLINE | ID: mdl-31104739

ABSTRACT

The current focus on real-world evidence (RWE) is occurring at a time when at least two major trends are converging. The first is the progress made in observational research design and methods over the past decade. The second is the development of numerous large observational healthcare databases around the world, which is creating repositories of improved data assets to support observational research. OBJECTIVE: This paper examines the implications of the improvements in observational methods and research design, as well as the growing availability of real-world data, for the quality of RWE. These developments have been very positive. On the other hand, unstructured data, such as medical notes, and the sparsity of data created by merging multiple data assets are not easily handled by traditional health services research statistical methods. In response, machine learning methods are gaining increased traction as potential tools for analyzing massive, complex datasets. CONCLUSIONS: Machine learning methods have traditionally been used for classification and prediction rather than causal inference. The prediction capabilities of machine learning are valuable by themselves. However, using machine learning for causal inference is still evolving. Machine learning can be used for hypothesis generation, followed by the application of traditional causal methods. But relatively recent developments, such as targeted maximum likelihood methods, are directly integrating machine learning with causal inference.


Subjects
Algorithms, Databases, Factual/statistics & numerical data, Epidemiology, Machine Learning, Causality, Data Mining, Health Services Research, Humans
5.
Value Health ; 18(2): 137-40, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25773546

ABSTRACT

Traditional analytic methods are often ill-suited to the evolving world of health care big data characterized by massive volume, complexity, and velocity. In particular, methods are needed that can estimate models efficiently using very large datasets containing healthcare utilization data, clinical data, data from personal devices, and many other sources. Although very large, such datasets can also be quite sparse (e.g., device data may only be available for a small subset of individuals), which creates problems for traditional regression models. Many machine learning methods address such limitations effectively but are still subject to the usual sources of bias that commonly arise in observational studies. Researchers using machine learning methods such as lasso or ridge regression should assess these models using conventional specification tests.
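As a concrete instance of the class of methods mentioned (penalized regression such as lasso or ridge on large, sparse data), here is a sketch of ridge regression on a synthetic sparse design where most "device" features are unobserved and zero-filled; all data and the penalty value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sparse design: device data observed for only ~20% of member-feature pairs
n, p = 500, 40
X = rng.normal(size=(n, p))
X[rng.random((n, p)) < 0.8] = 0.0              # zero-fill the missing 80%
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]               # only a few features matter
y = X @ beta_true + rng.normal(size=n)

# Ridge estimate: (X'X + lam*I)^-1 X'y -- the penalty stabilizes the
# ill-conditioned X'X that sparse, correlated claims data tend to produce
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(np.round(beta_ridge[:4], 1))             # first three should be near 2, -1, 0.5
```

The point of the abstract stands either way: the penalty handles sparsity, but it does nothing about confounding, so the usual observational-study checks still apply.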


Subjects
Algorithms, Artificial Intelligence, Health Services Research/methods, Outcome Assessment (Health Care)/methods, Artificial Intelligence/trends, Health Services Research/trends, Humans, Outcome Assessment (Health Care)/trends
6.
Health Aff Sch ; 2(3): qxae017, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38756919

ABSTRACT

Health and health care access in the United States are plagued by high inequality. While machine learning (ML) is increasingly used in clinical settings to inform health care delivery decisions and predict health care utilization, using ML as a research tool to understand health care disparities in the United States and how these are connected to health outcomes, access to health care, and health system organization is less common. We utilized over 650 variables from 24 different databases aggregated by the Agency for Healthcare Research and Quality in their Social Determinants of Health (SDOH) database. We used k-means, a non-hierarchical ML clustering method, to cluster county-level data. Principal factor analysis created county-level index values for each SDOH domain and 2 health care domains: health care infrastructure and health care access. Logistic regression classification was used to identify the primary drivers of cluster classification. The most efficient cluster classification consists of 3 distinct clusters in the United States; the cluster having the highest life expectancy comprised only 10% of counties. The most efficient ML clusters do not identify the clusters with the widest health care disparities. ML clustering, using county-level data, shows that health care infrastructure and access are the primary drivers of cluster composition.
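The clustering step can be sketched with a small Lloyd's-algorithm k-means on synthetic county-level index values (the data, dimensionality, and seeding below are illustrative assumptions, not the AHRQ SDOH inputs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three synthetic "county" groups in a 2-D index space (stand-ins for the
# SDOH / infrastructure / access factor scores; all values are made up)
true_centers = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 5.0]])
counties = np.vstack([c + rng.normal(0, 0.5, (100, 2)) for c in true_centers])

def kmeans(X, k, n_iter=25):
    centroids = X[[0, 100, 200]].copy()     # one seed point per group, for determinism
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)          # assign each county to nearest centroid
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(counties, k=3)
print(np.bincount(labels))                  # cluster sizes
```

In practice the county factor scores replace the synthetic points, and the number of clusters is chosen by an efficiency criterion, as in the abstract.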

7.
Nat Cardiovasc Res ; 3(4): 431-440, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38846711

ABSTRACT

Cardiovascular disease (CVD) is the leading cause of death among people with type 2 diabetes [1-5], most of whom are at moderate CVD risk [6], yet there is limited evidence on the preferred choice of glucose-lowering medication for CVD risk reduction in this population. Here, we report the results of a retrospective cohort study where data for US adults with type 2 diabetes and moderate risk for CVD are used to compare the risks of experiencing a major adverse cardiovascular event with initiation of glucagon-like peptide-1 receptor agonists (GLP-1RA; n = 44,188), sodium-glucose cotransporter 2 inhibitors (SGLT2i; n = 47,094), dipeptidyl peptidase-4 inhibitors (DPP4i; n = 84,315) and sulfonylureas (n = 210,679). Compared to DPP4i, GLP-1RA (hazard ratio (HR) 0.87; 95% confidence interval (CI) 0.82-0.93) and SGLT2i (HR 0.85; 95% CI 0.81-0.90) were associated with a lower risk of a major adverse cardiovascular event, whereas sulfonylureas were associated with a higher risk (HR 1.19; 95% CI 1.16-1.22). Thus, GLP-1RA and SGLT2i may be the preferred glucose-lowering agents for cardiovascular risk reduction in patients at moderate baseline risk for CVD. ClinicalTrials.gov registration: NCT05214573.

8.
JACC Adv ; 3(4): 100852, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38939660

ABSTRACT

Background: Major adverse cardiovascular events (MACE) are a leading cause of morbidity and mortality among adults with type 2 diabetes. Currently available MACE prediction models have important limitations, including reliance on data that may not be routinely available, narrow focus on primary prevention, limited patient populations, and long time horizons for risk prediction. Objectives: The purpose of this study was to derive and internally validate a claims-based prediction model for 1-year risk of MACE in type 2 diabetes. Methods: Using medical and pharmacy claims for adults with type 2 diabetes enrolled in commercial, Medicare Advantage, and Medicare fee-for-service plans between 2014 and 2021, we derived and internally validated the annualized claims-based MACE estimator (ACME) model to predict the risk of MACE (nonfatal acute myocardial infarction, nonfatal stroke, and all-cause mortality). The Cox proportional hazards model was composed of 30 covariates, including patient age, sex, comorbidities, and medications. Results: The study cohort comprised 6,623,526 adults with type 2 diabetes, mean age 68.1 ± 10.6 years, 49.8% women, and 73.0% Non-Hispanic White. ACME had a concordance index of 0.74 (validation index range: 0.739-0.741). The predicted 1-year risk of the study cohort ranged from 0.4% to 99.9%, with a median risk of 3.4% (IQR: 2.3%-6.5%). Conclusions: ACME was derived in a large usual care population, relies on routinely available data, and estimates short-term MACE risk. It can support population risk stratification at the health system and payer levels, participant identification for decentralized clinical trials of cardiovascular disease, and risk-stratified observational studies using real-world data.
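The concordance index reported for ACME has a simple pairwise interpretation: the fraction of usable patient pairs in which the patient who had the event earlier was assigned the higher predicted risk. The toy computation below (made-up risks and outcomes, not ACME's data) makes that explicit:

```python
import numpy as np

# Toy data: predicted 1-year MACE risk and observed (time, event) outcomes
risk  = np.array([0.80, 0.30, 0.60, 0.10, 0.50])
time  = np.array([0.2,  0.5,  1.0,  1.0,  0.8 ])   # years to event or censoring
event = np.array([1,    1,    0,    0,    1  ])    # 1 = MACE, 0 = censored

def c_index(risk, time, event):
    conc = ties = usable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # A pair is usable when the earlier time is an observed event
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]: conc += 1
                elif risk[i] == risk[j]: ties += 1
    return (conc + 0.5 * ties) / usable

print(round(c_index(risk, time, event), 2))  # → 0.67
```

A value of 0.74, as in the abstract, therefore means roughly three out of four usable pairs are ordered correctly by the model.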

9.
BMJ Open ; 13(12): e070221, 2023 12 21.
Article in English | MEDLINE | ID: mdl-38135335

ABSTRACT

OBJECTIVES: This study examined whether the US President's Emergency Plan for AIDS Relief (PEPFAR) funding had effects beyond HIV, specifically on several measures of maternal and child health in low-income and middle-income countries (LMICs). The results of previous research on the question of PEPFAR health spillovers have been inconsistent. This study, using a large, multicountry panel data set of 157 LMICs including 90 recipient countries, adds to the literature. DESIGN: Seven important population health indicators were examined, including child and maternal mortality, several child vaccination rates, and anaemia among women of childbearing age. Panel data and difference-in-differences (DID) estimators were used to estimate the impact of the PEPFAR programme from its inception in 2004 to 2018 using a comparison group of 67 LMICs. Several different models of baseline (2004) covariates were used to help balance the comparison and treatment groups. Staggered DID was used to estimate impacts since not all countries started receiving aid at PEPFAR's inception. SETTING: All 157 LMICs from 1990 to 2018. PARTICIPANTS: 90 LMICs receiving PEPFAR aid and cohorts of those countries, including those required to submit annual country operational plans (COP), other recipient countries (non-COP), and three groupings of countries based on the cumulative amount of per capita aid received (high, medium, low). INTERVENTIONS: PEPFAR aid to combat the HIV epidemic. PRIMARY OUTCOME MEASURES: Maternal and child mortality rates; vaccination rates protecting children against diphtheria, whooping cough and tetanus, measles, HepB3, and tetanus; and prevalence of anaemia in women of childbearing age. RESULTS: Across PEPFAR recipient countries, large, favourable PEPFAR health effects were found for rates of childhood immunisation, child mortality and maternal mortality. These beneficial health effects were large and significant in all segments of PEPFAR recipient countries studied. We also found significant and favourable programme effects on the prevalence of anaemia in women of childbearing age in PEPFAR recipient countries receiving the most intensive financial support from the PEPFAR programme. Other recipient countries did not demonstrate significant effects on anaemia. CONCLUSIONS: This study demonstrated that important health indicators beyond HIV have been consistently and favourably influenced by PEPFAR presence. Child and maternal mortality have been substantially reduced, and childhood immunisation rates increased. We also found no evidence of 'crowding out' or negative spillovers in these resource-poor countries. These findings add to the body of evidence that PEPFAR has had favourable health effects beyond HIV. The implications are that foreign aid for health in one area may have favourable health effects in other areas in recipient countries. More research is needed on the mechanisms that create these spillover health effects of PEPFAR.
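Staggered DID, mentioned in the design, differs from the classic two-period setup in that each adoption cohort gets its own comparison. A minimal sketch on a made-up panel (years, trends, and the effect size are illustrative assumptions, in the spirit of cohort-specific, Callaway-Sant'Anna-style estimators):

```python
import numpy as np

years = np.arange(2000, 2006)

# Outcome model: common drift of 1.0/yr everywhere; treatment adds 2.0 from adoption on
def outcome(adopt_year):
    y = 10.0 + 1.0 * (years - 2000)
    if adopt_year is not None:
        y = y + 2.0 * (years >= adopt_year)
    return y

treated_2002 = outcome(2002)   # cohort adopting in 2002
treated_2004 = outcome(2004)   # cohort adopting in 2004
never        = outcome(None)   # never-treated comparison group

def att(treated, adopt_year):
    pre = years == adopt_year - 1              # base period just before adoption
    post = years >= adopt_year
    return ((treated[post] - treated[pre]).mean()
            - (never[post] - never[pre]).mean())

# Estimate each cohort's effect against the never-treated, then average,
# instead of pooling all cohorts into one regression
atts = [att(treated_2002, 2002), att(treated_2004, 2004)]
print([round(float(a), 1) for a in atts])  # → [2.0, 2.0], the true effect
```

Averaging cohort-specific effects avoids the bias that a single pooled regression can suffer when already-treated units serve as controls for later adopters.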


Subjects
Anemia, HIV Infections, Tetanus, Child, Humans, Female, HIV Infections/epidemiology, HIV Infections/prevention & control, Child Health, International Cooperation
10.
Health Sci Rep ; 6(6): e1338, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37334041

ABSTRACT

Background and Aims: Policymakers need data about the burden of respiratory syncytial virus (RSV) lower respiratory tract infections (LRTI) among infants. This study estimates quality of life (QoL) for otherwise healthy term US infants with RSV-LRTI and their caregivers, previously limited to premature and hospitalized infants, and corrects for selective testing. Methods: The study enrolled infants <1 year with a clinically diagnosed LRTI encounter between January and May 2021. Using an established 0-100 scale, QoL at enrollment for the 36 infants and their caregivers, and quality-adjusted life-year losses per 1000 LRTI episodes (quality-adjusted life years [QALYs]/1000), were validated and analyzed. Regression analyses examined predictors of RSV testing and RSV positivity, creating modeled positives. Results: Mean QoL at enrollment in outpatient (n = 11) LRTI-tested infants (66.4) was lower than that in not-tested LRTI infants (79.6, p = 0.096). For outpatient LRTI infants (n = 23), median QALYs/1000 losses were 9.8, and 0.25 for their caregivers. RSV-positive outpatient LRTI infants (n = 6) had significantly milder QALYs/1000 losses (7.0) than other LRTI-tested infants (n = 5) (21.8, p = 0.030). Visits earlier in the year were more likely to be RSV-positive than later visits (p = 0.023). Modeled RSV positivity (51.9%) was lower than the observed rate (55.0%). Infants' and caregivers' QALYs/1000 losses were positively correlated (rho = 0.34, p = 0.046), indicating that infants perceived as sicker imposed greater burdens on caregivers. Conclusions: The overall median QALYs/1000 losses for LRTI (9.0) and RSV-LRTI (5.6) in US infants are substantial, with additional losses for their caregivers (0.25 and 0.20, respectively). These losses extend equally to outpatient episodes. This study is the first to report QALY losses for infants with LRTI born at term or presenting in nonhospitalized settings, and for their caregivers.
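The QALYs/1000 metric above reduces to simple arithmetic: a utility decrement on the 0-100 scale, times the episode duration in years, scaled to 1000 episodes. The 10-day episode length below is an illustrative assumption, not the study's value:

```python
# Converting a 0-100 QoL score during an illness episode into
# quality-adjusted life-year (QALY) losses per 1000 episodes
def qaly_loss_per_1000(qol_score, episode_days):
    utility_loss = (100.0 - qol_score) / 100.0      # fraction of full health lost
    years = episode_days / 365.25                   # episode duration in years
    return utility_loss * years * 1000.0

# QoL of 66.4 (the tested-outpatient mean above) over an assumed 10-day episode
print(round(qaly_loss_per_1000(66.4, 10), 1))  # → 9.2
```

The small absolute numbers are expected: even a severe but short episode costs only a sliver of a life-year, which is why the metric is scaled per 1000 episodes.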

11.
Cureus ; 14(10): e29884, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36348913

ABSTRACT

PURPOSE: The study reports the construction of a cohort used to study the effectiveness of antidepressants. METHODS: The cohort includes experiences of 3,678,082 patients with depression in the United States on antidepressants between January 1, 2001, and December 31, 2018. A total of 10,221,145 antidepressant treatment episodes were analyzed. Patients who had no utilization of health services for at least two years, or who had died, were excluded from the analysis. Follow-up was passive, automatic, and collated from fragmented clinical services of diverse providers. RESULTS: The average follow-up was 2.93 years, resulting in 15,096,055 person-years of data. The mean age of the cohort was 46.54 years (standard deviation of 17.48) at first prescription of antidepressant, which was also the enrollment event (16.92% were over 65 years), and most were female (69.36%). In 10,221,145 episodes, within the first 100 days of start of the episode, 4,729,372 (46.3%) continued their treatment, 1,306,338 (12.8%) switched to another medication, 3,586,156 (35.1%) discontinued their medication, and 599,279 (5.9%) augmented their treatment. CONCLUSIONS: We present a procedure for constructing a cohort using claims data. A surrogate measure for self-reported symptom remission based on the patterns of use of antidepressants has been proposed to address the absence of outcomes in claims. Future studies can use the procedures described here to organize studies of the comparative effectiveness of antidepressants.
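The four episode outcomes (continue, switch, discontinue, augment) suggest a simple classification over claims fills. The sketch below is a hedged illustration: the rule details and drug names are assumptions, not the study's algorithm; only the 100-day window comes from the abstract.

```python
from datetime import date, timedelta

def classify_episode(fills, start, window_days=100):
    """fills: date-sorted list of (fill_date, drug); the first fill defines the episode."""
    cutoff = start + timedelta(days=window_days)
    first_drug = fills[0][1]
    later = {drug for d, drug in fills if start < d <= cutoff}  # drugs filled after day 0
    if not later:
        return "discontinued"           # no further fills in the window
    if later == {first_drug}:
        return "continued"              # refills of the same drug only
    if first_drug in later:
        return "augmented"              # another drug added on top of the first
    return "switched"                   # the first drug was replaced

start = date(2018, 1, 1)
print(classify_episode([(start, "sertraline"), (date(2018, 2, 1), "sertraline")], start))  # → continued
print(classify_episode([(start, "sertraline"), (date(2018, 2, 1), "bupropion")], start))   # → switched
```

Real claims algorithms additionally handle days-supply gaps, overlapping fills, and re-initiation, which this sketch deliberately omits.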

12.
Genet Med ; 13(4): 349-55, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21358336

ABSTRACT

PURPOSE: Women with early-onset (age ≤40 years) breast cancer are at high risk of carrying deleterious mutations in the BRCA1/2 genes; genetic assessment is thus recommended. Knowledge of BRCA1/2 mutation status is useful in guiding treatment decisions. To date, there has been no national study of BRCA1/2 testing among newly diagnosed women. METHODS: We used administrative data (2004-2007) from a national sample of 14.4 million commercially insured patients to identify newly diagnosed, early-onset breast cancer cases among women aged 20-40 years (n = 1474). Cox models assessed BRCA1/2 testing, adjusting for covariates and differential lengths of follow-up. RESULTS: Overall, 30% of women aged 40 years or younger received BRCA1/2 testing. In adjusted analyses, women of Jewish ethnicity were significantly more likely to be tested (hazard ratio = 2.83, 95% confidence interval: 1.52-5.28), whereas black women (hazard ratio = 0.34, 95% confidence interval: 0.18-0.64) and Hispanic women (hazard ratio = 0.52, 95% confidence interval: 0.33-0.81) were significantly less likely to be tested than non-Jewish white women. Those enrolled in a health maintenance organization (hazard ratio = 0.73, 95% confidence interval: 0.54-0.99) were significantly less likely to receive BRCA1/2 testing than those in point-of-service insurance plans. Testing rates increased sharply for women diagnosed in 2007 compared with 2004. CONCLUSIONS: In this national sample of patients with newly diagnosed breast cancer at high risk for BRCA1/2 mutations, genetic assessment was low, with marked racial differences in testing.


Subjects
BRCA1 Protein/genetics, BRCA2 Protein/genetics, Black People/genetics, Breast Neoplasms/ethnology, Genetic Testing/statistics & numerical data, Hispanic or Latino/genetics, Adult, Breast Neoplasms/diagnosis, Breast Neoplasms/genetics, Breast Neoplasms/therapy, Female, Humans, Proportional Hazards Models, Risk Factors, Women
13.
Value Health ; 14(8): 1078-84, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22152177

ABSTRACT

OBJECTIVES: To examine the performance of instrumental variables (IV) and ordinary least squares (OLS) regression under a range of conditions likely to be encountered in empirical research. METHODS: A series of simulation analyses are carried out to compare estimation error between OLS and IV when the independent variable of interest is endogenous. The simulations account for a range of situations that may be encountered by researchers in actual practice: varying degrees of endogeneity, instrument strength, instrument contamination, and sample size. The intent of this article is to provide researchers with more intuition with respect to how important these factors are from an empirical standpoint. RESULTS: Notably, the simulations indicate a greater potential for inferential error when using IV than OLS in all but the most ideal circumstances. CONCLUSIONS: Researchers should be cautious when using IV methods. These methods are valuable in testing for the presence of endogeneity but only under the most ideal circumstances are they likely to produce estimates with less estimation error than OLS.
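The simulation design described (vary endogeneity and instrument strength, compare estimation error) can be sketched as a small Monte Carlo; the data-generating values below are illustrative, not the paper's grid:

```python
import numpy as np

rng = np.random.default_rng(3)

# x is endogenous (shares the confounder u with y); z is the instrument
def simulate(n=200, beta=1.0, endogeneity=0.5, instrument_strength=0.5, reps=500):
    err_ols, err_iv = [], []
    for _ in range(reps):
        z = rng.normal(size=n)
        u = rng.normal(size=n)                        # unobserved confounder
        x = instrument_strength * z + endogeneity * u + rng.normal(size=n)
        y = beta * x + u + rng.normal(size=n)
        b_ols = (x @ y) / (x @ x)                     # OLS through the origin
        b_iv = (z @ y) / (z @ x)                      # simple IV (Wald) estimator
        err_ols.append(abs(b_ols - beta))
        err_iv.append(abs(b_iv - beta))
    return float(np.mean(err_ols)), float(np.mean(err_iv))

strong = simulate(instrument_strength=1.0)
weak = simulate(instrument_strength=0.1)
print("strong instrument (OLS err, IV err):", [round(e, 2) for e in strong])
print("weak instrument   (OLS err, IV err):", [round(e, 2) for e in weak])
```

With a strong instrument, IV beats OLS; with a weak one, IV's sampling error swamps OLS's bias, which is the paper's cautionary point.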


Subjects
Computer Simulation, Models, Statistical, Outcome Assessment (Health Care)/methods, Bias, Humans, Least-Squares Analysis, Outcome Assessment (Health Care)/standards, Regression Analysis, Sample Size
14.
PLoS One ; 15(9): e0236400, 2020.
Article in English | MEDLINE | ID: mdl-32970677

ABSTRACT

This study investigates the use of deep learning methods to improve the accuracy of a predictive model for dementia and compares the performance to a traditional machine learning model. With sufficient accuracy, the model can be deployed as a first-round screening tool for clinical follow-up, including neurological examination, neuropsychological testing, imaging, and recruitment to clinical trials. Seven cohorts with two years of data, three to eight years prior to index date, and an incident cohort were created. Four models for each cohort (boosted trees, a feed-forward network, a recurrent neural network, and a recurrent neural network with pre-trained weights) were trained and their performance compared using validation and test data. The incident model had an AUC of 94.4% and an F1 score of 54.1%. Eight years removed from the index date, the AUC and F1 scores were 80.7% and 25.6%, respectively. The results for the remaining cohorts fell between these ranges. Deep learning models can yield significant improvements in performance but come at a cost in run times and hardware requirements. The results of the model at the index date indicate that this modeling can be effective at stratifying patients at risk of dementia. At this time, the inability to sustain this quality at longer lead times is more an issue of data availability and quality than of algorithm choice.
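For reference, the two metrics reported throughout (AUC and F1) can be computed from scratch; the predicted risks and labels below are made up, purely to show what the numbers mean:

```python
import numpy as np

# Toy predicted dementia risks and true labels
y_true = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.35, 0.7, 0.6, 0.05])

# AUC: probability a random positive outranks a random negative
pos, neg = y_score[y_true == 1], y_score[y_true == 0]
auc = (pos[:, None] > neg[None, :]).mean()

# F1 at a 0.5 threshold: harmonic mean of precision and recall
y_pred = (y_score >= 0.5).astype(int)
tp = int(((y_pred == 1) & (y_true == 1)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())
precision, recall = tp / (tp + fp), tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(round(float(auc), 2), round(f1, 2))  # → 0.96 0.75
```

The gap between the two metrics in the abstract (high AUC, much lower F1) is typical of rare outcomes: ranking can be good while the thresholded precision/recall trade-off remains hard.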


Subjects
Dementia/diagnosis, Aged, Aged, 80 and over, Cohort Studies, Deep Learning, Dementia/epidemiology, Electronic Health Records, Female, Humans, Male, Neural Networks, Computer, Risk Factors
15.
JMIR Med Inform ; 8(6): e17819, 2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32490841

ABSTRACT

BACKGROUND: Clinical trials need efficient tools to assist in recruiting patients at risk of Alzheimer disease and related dementias (ADRD). Early detection can also assist patients with financial planning for long-term care. Clinical notes are an important, underutilized source of information in machine learning models because of the cost of collection and complexity of analysis. OBJECTIVE: This study aimed to investigate the use of deidentified clinical notes from multiple hospital systems collected over 10 years to augment retrospective machine learning models of the risk of developing ADRD. METHODS: We used 2 years of data to predict the future outcome of ADRD onset. Clinical notes are provided in a deidentified format with specific terms and sentiments. Terms in clinical notes are embedded into a 100-dimensional vector space to identify clusters of related terms and abbreviations that differ across hospital systems and individual clinicians. RESULTS: When using clinical notes, the area under the curve (AUC) improved from 0.85 to 0.94, and positive predictive value (PPV) increased from 45.07% (25,245/56,018) to 68.32% (14,153/20,717) in the model at disease onset. Models with clinical notes improved in both AUC and PPV in years 3-6 when notes' volume was largest; results are mixed in years 7 and 8 with the smallest cohorts. CONCLUSIONS: Although clinical notes helped in the short term, the presence of ADRD symptomatic terms years earlier than onset adds evidence to other studies that clinicians undercode diagnoses of ADRD. De-identified clinical notes increase the accuracy of risk models. Clinical notes collected across multiple hospital systems via natural language processing can be merged using postprocessing techniques to aid model accuracy.

16.
JAMA Netw Open ; 3(2): e1920622, 2020 02 05.
Article in English | MEDLINE | ID: mdl-32022884

ABSTRACT

Importance: Although clinical trials demonstrate the superior effectiveness of medication for opioid use disorder (MOUD) compared with nonpharmacologic treatment, national data on the comparative effectiveness of real-world treatment pathways are lacking. Objective: To examine associations between opioid use disorder (OUD) treatment pathways and overdose and opioid-related acute care use as proxies for OUD recurrence. Design, Setting, and Participants: This retrospective comparative effectiveness research study assessed deidentified claims from the OptumLabs Data Warehouse from individuals aged 16 years or older with OUD and commercial or Medicare Advantage coverage. Opioid use disorder was identified based on 1 or more inpatient or 2 or more outpatient claims for OUD diagnosis codes within 3 months of each other; 1 or more claims for OUD plus diagnosis codes for opioid-related overdose, injection-related infection, or inpatient detoxification or residential services; or MOUD claims between January 1, 2015, and September 30, 2017. Data analysis was performed from April 1, 2018, to June 30, 2019. Exposures: One of 6 mutually exclusive treatment pathways, including (1) no treatment, (2) inpatient detoxification or residential services, (3) intensive behavioral health, (4) buprenorphine or methadone, (5) naltrexone, and (6) nonintensive behavioral health. Main Outcomes and Measures: Opioid-related overdose or serious acute care use during 3 and 12 months after initial treatment. Results: A total of 40 885 individuals with OUD (mean [SD] age, 47.73 [17.25] years; 22 172 [54.2%] male; 30 332 [74.2%] white) were identified. For OUD treatment, 24 258 (59.3%) received nonintensive behavioral health, 6455 (15.8%) received inpatient detoxification or residential services, 5123 (12.5%) received MOUD treatment with buprenorphine or methadone, 1970 (4.8%) received intensive behavioral health, and 963 (2.4%) received MOUD treatment with naltrexone. 
During 3-month follow-up, 707 participants (1.7%) experienced an overdose, and 773 (1.9%) had serious opioid-related acute care use. Only treatment with buprenorphine or methadone was associated with a reduced risk of overdose during 3-month (adjusted hazard ratio [AHR], 0.24; 95% CI, 0.14-0.41) and 12-month (AHR, 0.41; 95% CI, 0.31-0.55) follow-up. Treatment with buprenorphine or methadone was also associated with reduction in serious opioid-related acute care use during 3-month (AHR, 0.68; 95% CI, 0.47-0.99) and 12-month (AHR, 0.74; 95% CI, 0.58-0.95) follow-up. Conclusions and Relevance: Treatment with buprenorphine or methadone was associated with reductions in overdose and serious opioid-related acute care use compared with other treatments. Strategies to address the underuse of MOUD are needed.


Subjects
Behavior Therapy/statistics & numerical data, Critical Pathways/statistics & numerical data, Opiate Substitution Treatment/statistics & numerical data, Opioid-Related Disorders/therapy, Substance Abuse Treatment Centers/statistics & numerical data, Adolescent, Adult, Analgesics, Opioid/therapeutic use, Buprenorphine/therapeutic use, Comparative Effectiveness Research, Female, Humans, Male, Methadone/therapeutic use, Middle Aged, Opiate Substitution Treatment/methods, Proportional Hazards Models, Retrospective Studies, Treatment Outcome, United States, Young Adult
17.
Alzheimers Dement (N Y) ; 5: 918-925, 2019.
Article in English | MEDLINE | ID: mdl-31879701

ABSTRACT

INTRODUCTION: The study objective was to build a machine learning model to predict incident mild cognitive impairment, Alzheimer's disease, and related dementias from structured data using administrative and electronic health record sources. METHODS: A cohort of patients (n = 121,907) and controls (n = 5,307,045) was created for modeling using data within 2 years of a patient's incident diagnosis date. Additional cohorts 3-8 years removed from the index date were used for prediction. Training cohorts were matched on age, gender, index year, and utilization, and fit with a gradient boosting machine, lightGBM. RESULTS: The incident 2-year model had a sensitivity of 47% and an area under the curve of 87% on a held-out test set. In the 3-year model, sensitivity (area under the curve) fell to 24% (71%), dropping to 15% (72%) by year 8. DISCUSSION: The ability of the model to discriminate incident cases of dementia implies that it can be a worthwhile tool to screen patients for trial recruitment and patient management.

18.
Pharmacogenomics ; 7(6): 853-62, 2006 Sep.
Article in English | MEDLINE | ID: mdl-16981846

ABSTRACT

INTRODUCTION: Pharmacogenomics and personalized medicine promise to improve healthcare by increasing drug efficacy and minimizing side effects. There may also be substantial savings realized by eliminating costs associated with failed treatment. This paper describes a framework using health claims data for analyzing the potential value of pharmacogenomic testing in clinical practice. METHODS: We evaluated a model of alternate clinical strategies using asthma patients' data from a retrospective health claims database to determine a potential cost offset. We estimated the likely cost impact of using a hypothetical pharmacogenomic test to determine a preferred initial therapy. We compared the annualized per-patient cost distributions under two clinical strategies: testing all patients for a nonresponse genotype prior to treating, and testing none. RESULTS: In the Test All strategy, more patients fall into lower cost ranges of the distribution. In our base case (15% phenotype prevalence, a 200 US dollar test, 74% overall first-line treatment efficacy, and 60% second-line therapy efficacy), the cost savings per patient for a typical run of the testing strategy simulation ranged from 200 US dollars to 767 US dollars (5th and 95th percentile). Genetic variant prevalence, test cost, and the cost of choosing the wrong treatment are key parameters in the economic viability of pharmacogenomics in clinical practice. CONCLUSIONS: A general tool for predicting the impact of pharmacogenomic-based diagnostic tests on healthcare costs in asthma patients suggests that upfront testing costs are likely offset by avoided nonresponse costs. We suggest that similar analyses for decision making could be undertaken using claims data in which a population can be stratified by response to a drug.
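The test-all versus test-none comparison reduces to expected-cost arithmetic. In this sketch, the $3000 cost of a failed treatment course is an illustrative assumption (the abstract does not report it), and nonresponders are assumed to have zero first-line response; the other parameters are the stated base case:

```python
prevalence = 0.15          # nonresponse genotype prevalence (base case)
test_cost = 200.0          # cost of the pharmacogenomic test (base case)
failure_cost = 3000.0      # assumed cost of one failed treatment course (illustration)
eff_first_overall = 0.74   # first-line efficacy across the whole population (base case)
eff_second = 0.60          # second-line therapy efficacy (base case)

# If nonresponders never respond to first-line therapy, responders' efficacy is:
eff_first_responders = eff_first_overall / (1 - prevalence)

# Strategy 1 - test none: everyone tries first-line therapy; 26% fail
cost_none = (1 - eff_first_overall) * failure_cost

# Strategy 2 - test all: nonresponders skip straight to second-line therapy
cost_all = (test_cost
            + prevalence * (1 - eff_second) * failure_cost
            + (1 - prevalence) * (1 - eff_first_responders) * failure_cost)

print(f"test none: ${cost_none:.0f}  test all: ${cost_all:.0f}  saving: ${cost_none - cost_all:.0f}")
```

Under these assumed numbers testing comes out ahead; as the abstract notes, the conclusion flips depending on variant prevalence, test cost, and the cost of the wrong treatment.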


Subjects
Pharmacogenetics/economics, Anti-Asthmatic Agents/economics, Anti-Asthmatic Agents/therapeutic use, Asthma/drug therapy, Asthma/economics, Asthma/genetics, Cost Savings, Databases, Factual, Humans, Models, Economic, Retrospective Studies
19.
Leuk Lymphoma ; 47(8): 1535-44, 2006 Aug.
Article in English | MEDLINE | ID: mdl-16966264

ABSTRACT

OBJECTIVES: To determine the direct costs of medical care associated with aggressive and indolent non-Hodgkin's lymphoma (NHL) in the United States; to show how costs for aggressive NHL change over time by examining costs related to the initial, secondary, and palliative treatment phases; and to evaluate the economic consequences of treatment failure in aggressive NHL. PATIENTS AND METHODS: A retrospective cohort analysis of 1999-2000 direct costs in newly diagnosed NHL patients and controls (subjects without any cancer) was conducted using the MarketScan medical and drug claims database of large employers across the United States. Treatment failure analysis was conducted for aggressive NHL patients, with failure defined by the need for secondary treatment or palliative care after initial therapy. The cost of treatment failure was calculated as the difference in regression-adjusted costs between patients with initial therapy only and patients experiencing initial treatment failure. RESULTS: Patients with aggressive (n = 356) and indolent (n = 698) NHL had significantly greater health service utilization and associated costs (all P < 0.05) than controls (n = 1068 for aggressive, n = 2094 for indolent). Mean monthly costs were $5871 for aggressive NHL vs. $355 for controls (P < 0.0001) and $3833 for indolent NHL vs. $289 for controls (P < 0.0001). The primary cost drivers were hospitalization (aggressive NHL = 44% of total costs, indolent NHL = 50%) and outpatient office visits (aggressive NHL = 39%, indolent NHL = 34%). For aggressive NHL, mean monthly initial treatment phase costs ($10,970) and palliative care costs ($9836) were higher than costs incurred during the secondary phase ($3302). The mean cost of treatment failure in aggressive NHL was $14,174 per month, and $85,934 over the study period. CONCLUSION: The treatment of NHL was associated with substantial health care costs. Patients with aggressive lymphomas tended to accrue higher costs than those with indolent lymphomas. These costs varied over time, with the highest costs occurring during the initial treatment and palliative care phases. Treatment failure was the most expensive treatment pattern. New strategies to prevent or delay treatment failure in aggressive NHL could help reduce the economic burden of NHL.
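The regression-adjustment idea used above — estimating the cost of treatment failure as the coefficient on a failure indicator after controlling for patient characteristics — can be sketched on synthetic data. The covariate (age) and the data-generating model are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly-cost data; the true failure effect is set to $11,000
# so we can see the regression recover it.
n = 400
age = rng.normal(60, 10, n)
failed = rng.integers(0, 2, n)  # 1 = initial treatment failure
cost = 3000 + 40 * age + 11000 * failed + rng.normal(0, 500, n)

# Regression adjustment: cost ~ intercept + age + failure indicator.
# The failure coefficient is the adjusted cost of treatment failure.
X = np.column_stack([np.ones(n), age, failed])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(f"adjusted monthly cost of treatment failure: ${beta[2]:,.0f}")
```
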


Subjects
Health Care Costs, Lymphoma, Non-Hodgkin/economics, Female, Health Care Rationing/economics, Hospitalization/economics, Humans, Lymphoma, Non-Hodgkin/therapy, Male, Middle Aged, Office Visits/economics, Palliative Care/economics, Retrospective Studies, Therapeutics/economics, Treatment Failure, United States
20.
Curr Med Res Opin ; 22(4): 799-808, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16684441

ABSTRACT

BACKGROUND: The bootstrap has become very popular in health economics. Its success lies in the ease of estimating the sampling distribution, standard error, and confidence intervals with few or no assumptions about the distribution of the underlying population. OBJECTIVE: The purpose of this paper is three-fold: (1) to provide an overview of four common bootstrap techniques for readers who have little or no statistical background; (2) to suggest a guideline for selecting the bootstrap technique most applicable to your data; and (3) to connect the guidelines with a real-world example, illustrating how different bootstraps behave within one model and across different models. RESULTS: The assumptions of homoscedasticity and normality are key to selecting the best bootstrapping technique, and they should be tested before applying any bootstrapping technique. If homoscedasticity and normality hold, then parametric bootstrapping is consistent and efficient. Paired and wild bootstrapping are consistent under heteroscedasticity and non-normality. CONCLUSION: Selecting the correct type of bootstrapping is crucial for arriving at efficient estimators. Our example illustrates that an inconsistently chosen bootstrapping technique could yield misleading results: an insignificant effect of controller treatment on total health expenditures among asthma patients would have been found significant and negative by an improperly chosen bootstrapping technique, regardless of the type of model chosen.
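The paired bootstrap that the abstract recommends under heteroscedasticity can be illustrated as follows: resample (x, y) pairs with replacement and read a percentile confidence interval off the bootstrap distribution of the statistic (here, a regression slope). The data below are synthetic and heteroscedastic by construction; they are not the paper's asthma-expenditure data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: the error variance grows with x, so the
# parametric bootstrap's homoscedasticity assumption would be violated,
# while the paired bootstrap remains consistent.
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2 * x, n)

def slope(x, y):
    """OLS slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Paired bootstrap: resample (x, y) pairs with replacement and
# recompute the statistic on each resample.
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope 95% CI: [{lo:.3f}, {hi:.3f}]")
```
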


Subjects
Empirical Research, Guidelines as Topic, Health Services Research/methods, Models, Econometric, Confidence Intervals, Databases, Factual, Delivery of Health Care/economics, Disease/economics, Health Services Research/statistics & numerical data, Humans, Probability, Selection Bias, Statistical Distributions