Results 1 - 20 of 100
1.
Commun Med (Lond) ; 4(1): 169, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39181950

ABSTRACT

BACKGROUND: Public reluctance to receive COVID-19 vaccination is associated with safety concerns. By contrast, the seasonal influenza vaccine has been administered for decades with a solid safety record and a high level of public acceptance. We compare the safety profile of the BNT162b2 COVID-19 booster vaccine to that of the seasonal influenza vaccine. METHODS: We study a prospective cohort of 5079 participants in Israel and a retrospective cohort of 250,000 randomly selected members of Maccabi Healthcare Services (MHS). We examine reactions to the BNT162b2 mRNA COVID-19 booster and to influenza vaccination. All prospective cohort participants wore a smartwatch and completed a daily digital questionnaire. We compare pre-vaccination and post-vaccination smartwatch heart-rate data, and a stress measure based on heart-rate variability. We also examine adverse events from electronic health records. RESULTS: In the prospective cohort, 1905 participants receive the COVID-19 booster vaccine and 899 receive influenza vaccination. Focusing on those who receive both vaccines yields a total of 689 participants in the prospective cohort and 31,297 members in the retrospective cohort. Individuals reporting a more severe reaction after influenza vaccination tend to likewise report a more severe reaction after COVID-19 vaccination. In paired analysis, the increase in both heart rate and stress measure for each participant is higher for COVID-19 than for influenza in the first 2 days after vaccination. No elevated risk of hospitalization due to adverse events is found following either vaccine. Except for Bell's palsy after influenza vaccination, no elevated risk of adverse events is found. CONCLUSIONS: The more pronounced side effects after COVID-19 vaccination may explain the greater concern associated with it. Nevertheless, our comprehensive analysis supports the safety profile of both vaccines.


We compared the safety profiles of the COVID-19 and influenza vaccines. We analyzed data from Israel involving 5079 participants who wore smartwatches and completed daily questionnaires, as well as electronic health records from 250,000 members of Maccabi Healthcare Services. We found that side effects after the COVID-19 vaccine were more noticeable, based on self-reported symptoms and heart measures (heart-rate and stress) detected by smartwatches. The increase in heart measures was higher after COVID-19 vaccination than after influenza vaccination in the first 2 days post-vaccination. However, electronic health records showed no increased risk of adverse events with the COVID-19 and influenza vaccines. Our analysis supports the safety of both vaccines but may explain the greater concern about the COVID-19 vaccine.
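The paired within-person comparison described above can be sketched in a few lines. This is a minimal illustration with simulated heart-rate changes (the effect sizes and noise levels are assumptions, not the study's estimates; only the cohort size of 689 comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 689  # participants who received both vaccines, per the abstract

# Simulated per-participant change in resting heart rate (bpm),
# post- minus pre-vaccination, over the first 2 days after each shot.
# Means and standard deviations here are illustrative assumptions.
delta_covid = rng.normal(1.5, 2.0, size=n)
delta_flu = rng.normal(0.3, 2.0, size=n)

# Paired analysis: within-person difference between the two vaccines,
# which removes stable between-person differences in baseline heart rate.
paired_diff = delta_covid - delta_flu
mean_diff = paired_diff.mean()
t_stat = mean_diff / (paired_diff.std(ddof=1) / np.sqrt(n))
```

Pairing each person with themselves is what lets the study attribute the larger post-COVID-19 shift to the vaccine rather than to differences between the vaccinated groups.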

2.
J Math Biol ; 89(2): 21, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926228

ABSTRACT

For some communicable endemic diseases (e.g., influenza, COVID-19), vaccination is an effective means of preventing the spread of infection and reducing mortality, but must be augmented over time with vaccine booster doses. We consider the problem of optimally allocating a limited supply of vaccines over time between different subgroups of a population and between initial versus booster vaccine doses, allowing for multiple booster doses. We first consider an SIS model with interacting population groups and four different objectives: those of minimizing cumulative infections, deaths, life years lost, or quality-adjusted life years lost due to death. We solve the problem sequentially: for each time period, we approximate the system dynamics using Taylor series expansions, and reduce the problem to a piecewise linear convex optimization problem for which we derive intuitive closed-form solutions. We then extend the analysis to the case of an SEIS model. In both cases vaccines are allocated to groups based on their priority order until the vaccine supply is exhausted. Numerical simulations show that our analytical solutions achieve results that are close to optimal with objective function values significantly better than would be obtained using simple allocation rules such as allocation proportional to population group size. In addition to being accurate and interpretable, the solutions are easy to implement in practice. Interpretable models are particularly important in public health decision making.


Subject(s)
COVID-19; Computer Simulation; Endemic Diseases; Immunization, Secondary; Mathematical Concepts; Vaccination; Humans; Immunization, Secondary/statistics & numerical data; Endemic Diseases/prevention & control; Endemic Diseases/statistics & numerical data; COVID-19/prevention & control; COVID-19/epidemiology; Vaccination/statistics & numerical data; COVID-19 Vaccines/administration & dosage; COVID-19 Vaccines/supply & distribution; Models, Biological; Influenza, Human/prevention & control; SARS-CoV-2/immunology; Quality-Adjusted Life Years; Influenza Vaccines/administration & dosage; Communicable Diseases/epidemiology
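The abstract's closed-form solutions allocate vaccines "to groups based on their priority order until the vaccine supply is exhausted." A minimal sketch of that greedy priority rule (the group names, sizes, and ordering below are hypothetical, not from the paper):

```python
def allocate_vaccines(supply, groups):
    """Allocate a limited vaccine supply to groups in priority order.

    groups: list of (name, unvaccinated_count) pairs, highest priority first.
    Returns a dict mapping group name to doses allocated.
    """
    allocation = {}
    for name, need in groups:
        if supply <= 0:
            break
        given = min(need, supply)  # fill this group as far as supply allows
        allocation[name] = given
        supply -= given
    return allocation

# Hypothetical priority ordering and group sizes:
alloc = allocate_vaccines(1000, [("65+", 600), ("50-64", 700), ("18-49", 2000)])
```

In the paper the priority order itself comes from the derived analytical conditions (and can change each period with the epidemic state); the loop above only illustrates the fill-in-order structure of the solution.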
3.
Lancet Reg Health Eur ; 42: 100934, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38800112

ABSTRACT

Background: Limited knowledge exists regarding behavioral and biomarker shifts during the period from respiratory infection exposure to testing decisions (the diagnostic decision period), a key phase affecting transmission dynamics and public health strategy development. This study aims to examine the changes in behavior and biomarkers during the diagnostic decision period for COVID-19, influenza, and group A streptococcus (GAS). Methods: We analyzed data from a two-year prospective cohort study involving 4795 participants in Israel, incorporating smartwatch data, self-reported symptoms, and medical records. Our analysis focused on three critical phases: the digital incubation period (from exposure to physiological anomalies detected by smartwatches), the symptomatic incubation period (from exposure to onset of symptoms), and the diagnostic decision period for influenza, COVID-19, and GAS. Findings: The delay between initial symptom reporting and testing was 39 [95% confidence interval (CI): 34-45] hours for influenza, 53 [95% CI: 49-58] hours for COVID-19, and 38 [95% CI: 32-46] hours for GAS, with 73 [95% CI: 67-78] hours from anomalies in heart measures to symptom onset for influenza, 23 [95% CI: 18-27] hours for COVID-19, and 62 [95% CI: 54-68] hours for GAS. Analyzing the entire course of infection of each individual, the greatest changes in heart rates were detected 67.6 [95% CI: 62.8-72.5] hours prior to testing for influenza, 64.1 [95% CI: 61.4-66.7] hours prior for COVID-19, and 58.2 [95% CI: 52.1-64.2] hours prior for GAS. In contrast, the greatest reduction in physical activities and social contacts occurred after testing. Interpretation: These findings highlight the delayed response of patients in seeking medical attention and reducing social contacts and demonstrate the transformative potential of smartwatches for identifying infection and enabling timely public health interventions. 
Funding: This work was supported by the European Research Council, project #949850, the Israel Science Foundation (ISF), grant No. 3409/19, within the Israel Precision Medicine Partnership program, and a Koret Foundation gift for Smart Cities and Digital Living.

4.
Sci Rep ; 14(1): 6012, 2024 03 12.
Article in English | MEDLINE | ID: mdl-38472345

ABSTRACT

Vaccines stand out as one of the most effective tools in our arsenal for reducing morbidity and mortality. Nonetheless, public hesitancy towards vaccination often stems from concerns about potential side effects, which can vary from person to person. As of now, there are no automated systems available to proactively warn against potential side effects or gauge their severity following vaccination. We have developed machine learning (ML) models designed to predict and detect the severity of post-vaccination side effects. Our study involved 2111 participants who had received at least one dose of either a COVID-19 or influenza vaccine. Each participant was equipped with a Garmin Vivosmart 4 smartwatch and was required to complete a daily self-reported questionnaire regarding local and systemic reactions through a dedicated mobile application. Our XGBoost models yielded an area under the receiver operating characteristic curve (AUROC) of 0.69 and 0.74 in predicting and detecting moderate to severe side effects, respectively. These predictions were primarily based on variables such as vaccine type (influenza vs. COVID-19), the individual's history of side effects from previous vaccines, and specific data collected from the smartwatches prior to vaccine administration, including resting heart rate, heart rate, and heart rate variability. In conclusion, our findings suggest that wearable devices can provide an objective and continuous method for predicting and monitoring moderate to severe vaccine side effects. This technology has the potential to improve clinical trials by automating the classification of vaccine side effect severity.


Subject(s)
COVID-19; Influenza Vaccines; Influenza, Human; Humans; Smartphone; Vaccination
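AUROC, the metric reported for the XGBoost models above, has a direct rank-based interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch with made-up labels and risk scores (not the study's data or model):

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank (Mann-Whitney U) formulation: the fraction of
    positive/negative pairs in which the positive case is scored higher,
    counting ties as half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical severity labels (1 = moderate/severe reaction) and model scores:
y = [0, 0, 1, 1, 0, 1]
risk = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
score = auroc(y, risk)
```

On this toy input, 8 of the 9 positive/negative pairs are ranked correctly, so the AUROC is 8/9. An AUROC of 0.74, as reported for detection in the abstract, means 74% of such pairs would be ranked correctly.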
5.
Drug Alcohol Depend ; 256: 111112, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38335797

ABSTRACT

AIM: To assess the effectiveness and cost-effectiveness of buprenorphine and methadone treatment in the U.S. if exemptions expanding coverage for substance use disorder services via telehealth and allowing opioid treatment programs to supply a greater number of take-home doses of medications for opioid use disorder (OUD) continue (Notice of Proposed Rule Making, NPRM). DESIGN SETTING AND PARTICIPANTS: Model-based analysis of buprenorphine and methadone treatment for a cohort of 100,000 individuals with OUD, varying treatment retention and overdose risk among individuals receiving and not receiving methadone treatment compared to the status quo (no NPRM). INTERVENTION: Buprenorphine and methadone treatment under NPRM. MEASUREMENTS: Fatal and nonfatal overdoses and deaths over five years, discounted lifetime per person QALYs and costs. FINDINGS: For buprenorphine treatment under the status quo, 1.21 QALYs are gained at a cost of $19,200/QALY gained compared to no treatment; with 20% higher treatment retention, 1.28 QALYs are gained at a cost of $17,900/QALY gained compared to no treatment, and the strategy dominates the status quo. For methadone treatment under the status quo, 1.11 QALYs are gained at a cost of $17,900/QALY gained compared to no treatment. In all scenarios, methadone provision cost less than $20,000/QALY gained compared to no treatment, and less than $50,000/QALY gained compared to status quo methadone treatment. CONCLUSIONS: Buprenorphine and methadone OUD treatment under NPRM are likely to be effective and cost-effective. Increases in overdose risk with take-home methadone would reduce health benefits. Clinical and technological strategies could mitigate this risk.


Subject(s)
Buprenorphine; Drug Overdose; Opioid-Related Disorders; Humans; Cost-Benefit Analysis; Drug Overdose/drug therapy; Buprenorphine/therapeutic use; Methadone/therapeutic use; Opioid-Related Disorders/drug therapy
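The $/QALY figures above are incremental cost-effectiveness ratios: incremental cost divided by incremental QALYs, each relative to a comparator. The sketch below uses the abstract's 1.21 QALYs gained and $19,200/QALY for status-quo buprenorphine treatment; the absolute lifetime costs are hypothetical values chosen only to reproduce that ratio:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new strategy relative to the baseline strategy."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Hypothetical discounted lifetime costs per person; QALY gain of 1.21
# over no treatment is taken from the abstract.
ratio = icer(cost_new=30_000, qaly_new=11.21,
             cost_base=6_768, qaly_base=10.00)
```

A strategy "dominates" another, as the higher-retention scenario dominates the status quo in the abstract, when it yields more QALYs at equal or lower cost, so no ratio needs to be computed.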
6.
MDM Policy Pract ; 9(1): 23814683231222469, 2024.
Article in English | MEDLINE | ID: mdl-38293655

ABSTRACT

Introduction. The risk of infectious disease transmission, including COVID-19, is disproportionately high in correctional facilities due to close living conditions, relatively low levels of vaccination, and reduced access to testing and treatment. While much progress has been made on describing and mitigating COVID-19 and other infectious disease risk in jails and prisons, there are open questions about which data can best predict future outbreaks. Methods. We used facility data and demographic and health data collected from 24 prison facilities in the Pennsylvania Department of Corrections from March 2020 to May 2021 to determine which sources of data best predict a coming COVID-19 outbreak in a prison facility. We used machine learning methods to cluster the prisons into groups based on similar facility-level characteristics, including size, rurality, and demographics of incarcerated people. We developed logistic regression classification models to predict for each cluster, before and after vaccine availability, whether there would be no cases, an outbreak defined as 2 or more cases, or a large outbreak, defined as 10 or more cases in the next 1, 2, and 3 d. We compared these predictions to data on outbreaks that occurred. Results. Facilities were divided into 8 clusters of sizes varying from 1 to 7 facilities per cluster. We trained 60 logistic regressions; 20 had test sets with between 35% and 65% of days with outbreaks detected. Of these, 8 logistic regressions correctly predicted the occurrence of an outbreak more than 55% of the time. The most common predictive feature was incident cases among the incarcerated population from 2 to 32 d prior. Other predictive features included the number of tests administered from 1 to 33 d prior, total population, test positivity rate, and county deaths, hospitalizations, and incident cases. 
Cumulative cases, vaccination rates, and race, ethnicity, or age statistics for incarcerated populations were generally not predictive. Conclusions. County-level measures of COVID-19, facility population, and test positivity rate appear to be promising predictors of COVID-19 outbreaks in correctional facilities, suggesting that correctional facilities should monitor community transmission in addition to facility transmission to inform future outbreak response decisions. These efforts should not be limited to COVID-19 but should include any large-scale infectious disease outbreak that may involve institution-community transmission. Highlights: The risk of infectious disease transmission, including COVID-19, is disproportionately high in correctional facilities. We used machine learning methods with data collected from 24 prison facilities in the Pennsylvania Department of Corrections to determine which sources of data best predict a coming COVID-19 outbreak in a prison facility. Key predictors included county-level measures of COVID-19, facility population, and the test positivity rate in a facility. Fortifying correctional facilities with the ability to monitor local community rates of infection (e.g., through improved interagency collaboration and data sharing), along with continued testing of incarcerated people and staff, can help correctional facilities better predict, and respond to, future infectious disease outbreaks.

7.
J Pediatr Surg ; 59(2): 337-341, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37953157

ABSTRACT

BACKGROUND: Identification of physical abuse at the point of care without a systematic approach remains inherently subjective and prone to judgement error. This study examines the implementation of an electronic health record (EHR)-based universal child injury screen (CIS) to improve detection rates of child abuse. METHODS: CIS was implemented in the EHR admission documentation for all patients age 5 or younger at a single medical center, with the following questions. 1) "Is this patient an injured/trauma patient?" 2) "If this is a trauma/injured patient, where did the injury occur?" A "Yes" response to Question 1 would alert a team of child abuse pediatricians and social workers to determine if a patient required formal child abuse clinical evaluation. Patients who received positive CIS responses, formal child abuse work-up, and/or reports to Child Protective Services (CPS) were reviewed for analysis. CPS rates from historical controls (2017-2018) were compared to post-implementation rates (2019-2021). RESULTS: Between 2019 and 2021, 14,150 patients were screened with CIS. 286 (2.0%) patients screened received positive CIS responses. 166 (58.0%) of these patients with positive CIS responses would not have otherwise been identified for child abuse evaluation by their treating teams. 18 (10.8%) of the patients identified by the CIS and not by the treating team were later reported to CPS. Facility CPS reporting rates for physical abuse were 1.2 per 1000 admitted children age 5 or younger (pre-intervention) versus 4.2 per 1000 (post-intervention). CONCLUSIONS: Introduction of CIS led to increased detection of suspected child abuse among children age 5 or younger. LEVEL OF EVIDENCE: Level II. TYPE OF STUDY: Study of Diagnostic Test.


Subject(s)
Child Abuse; Electronic Health Records; Child; Humans; Child, Preschool; Child Abuse/diagnosis; Physical Abuse; Child Protective Services; Hospitals
8.
Health Care Manag Sci ; 26(4): 599-603, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37804456

ABSTRACT

The US is experiencing a severe opioid epidemic with more than 80,000 opioid overdose deaths occurring in 2022. Beyond the tragic loss of life, opioid use disorder (OUD) has emerged as a major contributor to morbidity, lost productivity, mounting criminal justice system costs, and significant social disruption. This Current Opinion article highlights opportunities for analytics in supporting policy making for effective response to this crisis. We describe modeling opportunities in the following areas: understanding the opioid epidemic (e.g., the prevalence and incidence of OUD in different geographic regions, demographics of individuals with OUD, rates of overdose and overdose death, patterns of drug use and associated disease outbreaks, and access to and use of treatment for OUD); assessing policies for preventing and treating OUD, including mitigation of social conditions that increase the risk of OUD; and evaluating potential regulatory and criminal justice system reforms.


Subject(s)
Drug Overdose; Opioid-Related Disorders; Humans; Opioid Epidemic; Analgesics, Opioid/adverse effects; Opioid-Related Disorders/epidemiology; Opioid-Related Disorders/drug therapy; Drug Overdose/epidemiology; Drug Overdose/prevention & control; Drug Overdose/drug therapy; Decision Making
9.
Drug Alcohol Depend ; 243: 109762, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36621198

ABSTRACT

AIM: To assess the effectiveness and cost-effectiveness of office-based buprenorphine treatment (OBBT) in the U.S. DESIGN SETTING AND PARTICIPANTS: We performed a model-based analysis of buprenorphine treatment provided in a primary care setting for the U.S. population with OUD. INTERVENTION: Buprenorphine treatment provided in a primary care setting. MEASUREMENTS: Fatal and nonfatal overdoses and deaths over five years, discounted lifetime quality-adjusted life years (QALYs), costs. FINDINGS: For a cohort of 100,000 untreated individuals who enter OBBT, approximately 9350 overdoses would be averted over five years; of these, approximately 900 would have been fatal. OBBT compared to no treatment would yield 1.07 incremental lifetime QALYs per person at an incremental cost of $17,000 per QALY gained when using a healthcare perspective. If OBBT is half as effective and twice as expensive as assumed in the base case, the incremental cost when using a healthcare perspective is $25,500 per QALY gained. Using a limited societal perspective that additionally includes patient costs and criminal justice costs, OBBT is cost-saving compared to no treatment even under pessimistic assumptions about efficacy and cost. CONCLUSIONS: Expansion of OBBT would be highly cost-effective compared to no treatment when considered from a healthcare perspective, and cost-saving when reduced criminal justice costs are included. Given the continuing opioid crisis in the U.S., expansion of this care option should be a high priority.


Subject(s)
Buprenorphine; Opioid-Related Disorders; Humans; Buprenorphine/therapeutic use; Cost-Benefit Analysis; Opioid-Related Disorders/drug therapy; Quality-Adjusted Life Years
10.
NPJ Digit Med ; 5(1): 140, 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36085312

ABSTRACT

More than 12 billion COVID-19 vaccination shots have been administered as of August 2022, but information from active surveillance about vaccine safety is limited. Surveillance is generally based on self-reporting, making the monitoring process subjective. We study participants in Israel who received their second or third Pfizer BioNTech COVID-19 vaccination. All participants wore a Garmin Vivosmart 4 smartwatch and completed a daily questionnaire via smartphone. We compare post-vaccination smartwatch heart rate data and a Garmin-computed stress measure based on heart rate variability with data from the patient questionnaires. Using a mixed effects panel regression to remove participant-level fixed and random effects, we identify considerable changes in smartwatch measures in the 72 h post-vaccination even among participants who reported no side effects in the questionnaire. Wearable devices were more sensitive than questionnaires in determining when participants returned to baseline levels. We conclude that wearable devices can detect physiological responses following vaccination that may not be captured by patient self-reporting. More broadly, the ubiquity of smartwatches provides an opportunity to gather improved data on patient health, including active surveillance of vaccine safety.
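The "participant-level fixed and random effects" removed by the study's panel regression can be illustrated with within-person de-meaning, the textbook fixed-effects estimator. This is a toy three-person panel with made-up heart-rate readings (the study's model is richer, with random effects and many repeated measurements):

```python
import numpy as np

# Toy panel: one baseline and one post-vaccination heart-rate reading (bpm)
# for each of 3 participants. Values are illustrative.
participants = np.array([0, 0, 1, 1, 2, 2])
post = np.array([0, 1, 0, 1, 0, 1])  # 0 = pre-vaccination, 1 = post
hr = np.array([60.0, 63.0, 72.0, 74.0, 55.0, 59.0])

# De-mean within each participant: this wipes out stable person-level
# differences (e.g., athletes' low resting heart rates) before estimating
# the average post-vaccination shift.
hr_dm = hr - np.array([hr[participants == p].mean() for p in participants])
post_dm = post - np.array([post[participants == p].mean() for p in participants])

# Slope of the de-meaned regression = average within-person change.
beta = (post_dm @ hr_dm) / (post_dm @ post_dm)
```

Here each participant's post-minus-pre change is 3, 2, and 4 bpm, so the within-person estimate is 3 bpm regardless of the large baseline differences between people.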

11.
Oper Res ; 70(3): 1428-1447, 2022.
Article in English | MEDLINE | ID: mdl-36034163

ABSTRACT

The goal of a traditional Markov decision process (MDP) is to maximize expected cumulative reward over a defined horizon (possibly infinite). In many applications, however, a decision maker may be interested in optimizing a specific quantile of the cumulative reward instead of its expectation. In this paper we consider the problem of optimizing the quantiles of the cumulative rewards of an MDP, which we refer to as a quantile Markov decision process (QMDP). We provide analytical results characterizing the optimal QMDP value function and present a dynamic programming-based algorithm to solve for the optimal policy. The algorithm also extends to the MDP problem with a conditional value-at-risk (CVaR) objective. We illustrate the practical relevance of our model by evaluating it on an HIV treatment initiation problem, where patients aim to balance the potential benefits and risks of the treatment.
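Why a quantile objective can reverse the expected-reward ranking, as the QMDP framing above allows, is easy to show with a one-step toy example (entirely made up, unrelated to the paper's HIV case study): a "risky" action with higher mean reward but a heavy left tail loses to a "safe" action at the 10th percentile.

```python
import random

random.seed(0)

# Two toy one-shot policies.
def reward_safe():
    return 10.0  # certain reward

def reward_risky():
    # Higher expected reward (0.8 * 15 = 12), but 20% chance of nothing.
    return 0.0 if random.random() < 0.2 else 15.0

n = 10_000
safe = sorted(reward_safe() for _ in range(n))
risky = sorted(reward_risky() for _ in range(n))

def quantile(sorted_samples, tau):
    """Empirical tau-quantile of a pre-sorted sample."""
    return sorted_samples[int(tau * len(sorted_samples))]

mean_risky = sum(risky) / n          # close to 12: risky wins on expectation
q10_safe = quantile(safe, 0.1)       # 10
q10_risky = quantile(risky, 0.1)     # the heavy left tail shows up here
```

A risk-averse patient deciding on treatment initiation, as in the paper's application, may care exactly about this lower tail rather than the mean.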

12.
Math Biosci ; 351: 108879, 2022 09.
Article in English | MEDLINE | ID: mdl-35843382

ABSTRACT

The problem of optimally allocating a limited supply of vaccine to control a communicable disease has broad applications in public health and has received renewed attention during the COVID-19 pandemic. This allocation problem is highly complex and nonlinear. Decision makers need a practical, accurate, and interpretable method to guide vaccine allocation. In this paper we develop simple analytical conditions that can guide the allocation of vaccines over time. We consider four objectives: minimize new infections, minimize deaths, minimize life years lost, or minimize quality-adjusted life years lost due to death. We consider an SIR model with interacting population groups. We approximate the model using Taylor series expansions, and develop simple analytical conditions characterizing the optimal solution to the resulting problem for a single time period. We develop a solution approach in which we allocate vaccines using the analytical conditions in each time period based on the state of the epidemic at the start of the time period. We illustrate our method with an example of COVID-19 vaccination, calibrated to epidemic data from New York State. Using numerical simulations, we show that our method achieves near-optimal results over a wide range of vaccination scenarios. Our method provides a practical, intuitive, and accurate tool for decision makers as they allocate limited vaccines over time, and highlights the need for more interpretable models over complicated black box models to aid in decision making.


Subject(s)
COVID-19; Communicable Diseases; COVID-19/prevention & control; COVID-19 Vaccines; Communicable Diseases/epidemiology; Humans; Pandemics/prevention & control; Vaccination/methods
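The allocation problem above sits on top of SIR epidemic dynamics. A minimal discrete-time SIR sketch with vaccination moving susceptibles directly to the removed compartment; the parameters and population are illustrative, not the paper's New York State calibration, and a single group is used rather than the paper's interacting groups:

```python
def sir_step(S, I, R, beta, gamma, v):
    """One discrete-time SIR step; v vaccine doses move susceptibles to R."""
    N = S + I + R
    new_inf = beta * S * I / N   # new infections this period
    new_rec = gamma * I          # recoveries this period
    v = min(v, S - new_inf)      # cannot vaccinate more susceptibles than remain
    return S - new_inf - v, I + new_inf - new_rec, R + new_rec + v, new_inf

def cumulative_infections(v_per_day, days=120):
    S, I, R = 9900.0, 100.0, 0.0  # illustrative population of 10,000
    total = 0.0
    for _ in range(days):
        S, I, R, new_inf = sir_step(S, I, R, beta=0.3, gamma=0.1, v=v_per_day)
        total += new_inf
    return total

no_vacc = cumulative_infections(0)
with_vacc = cumulative_infections(100)  # 100 doses per day
```

The paper's contribution is deciding, via Taylor-series-based analytical conditions, *which groups* receive the doses each period; this sketch only shows the epidemic bookkeeping those conditions act on.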
13.
Emerg Infect Dis ; 28(7): 1375-1383, 2022 07.
Article in English | MEDLINE | ID: mdl-35654410

ABSTRACT

Despite extensive technological advances in recent years, objective and continuous assessment of physiologic measures after vaccination is rarely performed. We conducted a prospective observational study to evaluate short-term self-reported and physiologic reactions to the booster BNT162b2 mRNA (Pfizer-BioNTech, https://www.pfizer.com) vaccine dose. A total of 1,609 participants were equipped with smartwatches and completed daily questionnaires through a dedicated mobile application. The extent of systemic reactions reported after the booster dose was similar to that of the second dose and considerably greater than that of the first dose. Analyses of objective heart rate and heart rate variability measures recorded by smartwatches further supported this finding. Subjective and objective reactions after the booster dose were more apparent in younger participants and in participants who did not have underlying medical conditions. Our findings further support the safety of the booster dose from subjective and objective perspectives and underscore the need for integrating wearables in clinical trials.


Subject(s)
COVID-19; BNT162 Vaccine; COVID-19/prevention & control; Humans; RNA, Messenger; Self Report; Vaccination
14.
Med Decis Making ; 42(7): 872-884, 2022 10.
Article in English | MEDLINE | ID: mdl-35735216

ABSTRACT

PURPOSE: Metamodels are simplified approximations of more complex models that can be used as surrogates for the original models. Challenges in using metamodels for policy analysis arise when there are multiple correlated outputs of interest. We develop a framework for metamodeling with policy simulations to accommodate multivariate outcomes. METHODS: We combine 2 algorithm adaptation methods, multitarget stacking and regression chain with maximum correlation, with different base learners including linear regression (LR), elastic net (EN) with second-order terms, Gaussian process regression (GPR), random forests (RFs), and neural networks. We optimize integrated models using variable selection and hyperparameter tuning. We compare the accuracy, efficiency, and interpretability of different approaches. As an example application, we develop metamodels to emulate a microsimulation model of testing and treatment strategies for hepatitis C in correctional settings. RESULTS: Output variables from the simulation model were correlated (average ρ = 0.58). Without multioutput algorithm adaptation methods, in-sample fit (measured by R2) ranged from 0.881 for LR to 0.987 for GPR. The multioutput algorithm adaptation method increased R2 by an average of 0.002 across base learners. Variable selection and hyperparameter tuning increased R2 by 0.009. Simpler models such as LR, EN, and RF required minimal training and prediction time. LR and EN had advantages in model interpretability, and we considered methods for improving the interpretability of other models. CONCLUSIONS: In our example application, the choice of base learner had the largest impact on R2; multioutput algorithm adaptation, variable selection, and hyperparameter tuning had a modest impact. Although advantages and disadvantages of specific learning algorithms may vary across different modeling applications, our framework for metamodeling in policy analyses with multivariate outcomes has broad applicability to decision analysis in health and medicine.


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Linear Models; Normal Distribution; Policies
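A regression chain, one of the two algorithm adaptation methods named above, fits the outputs in sequence and feeds earlier predictions into later models so that correlation between outputs can be exploited. A minimal sketch with an ordinary-least-squares base learner on synthetic correlated outputs (the data, coefficients, and two-output setup are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

# Synthetic "simulation outputs": y2 is strongly correlated with y1.
X = rng.normal(size=(200, 3))
y1 = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)
y2 = 0.8 * y1 + X[:, 0] + rng.normal(0, 0.1, 200)

# Chain: fit y1 from X, then fit y2 from X augmented with the y1 prediction.
c1 = fit_ols(X, y1)
y1_hat = predict(c1, X)
X_aug = np.column_stack([X, y1_hat])
c2 = fit_ols(X_aug, y2)
y2_hat = predict(c2, X_aug)

r2 = 1 - np.sum((y2 - y2_hat) ** 2) / np.sum((y2 - y2.mean()) ** 2)
```

The paper's variant orders the chain by maximum correlation and swaps in stronger base learners; the chaining mechanics are as above.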
15.
Stat Med ; 41(17): 3336-3348, 2022 07 30.
Article in English | MEDLINE | ID: mdl-35527474

ABSTRACT

Outbreaks of an endemic infectious disease can occur when the disease is introduced into a highly susceptible subpopulation or when the disease enters a network of connected individuals. For example, significant HIV outbreaks among people who inject drugs have occurred in at least half a dozen US states in recent years. This motivates the current study: how can limited testing resources be allocated across geographic regions to rapidly detect outbreaks of an endemic infectious disease? We develop an adaptive sampling algorithm that uses profile likelihood to estimate the distribution of the number of positive tests that would occur for each location in a future time period if that location were sampled. Sampling is performed in the location with the highest estimated probability of triggering an outbreak alarm in the next time period. The alarm function is determined by a semiparametric likelihood ratio test. We compare the profile likelihood sampling (PLS) method numerically to uniform random sampling (URS) and Thompson sampling (TS). TS was worse than URS when the outbreak occurred in a location with lower initial prevalence than other locations. PLS had lower time to outbreak detection than TS in some but not all scenarios, but was always better than URS even when the outbreak occurred in a location with a lower initial prevalence than other locations. PLS provides an effective and reliable method for rapidly detecting endemic disease outbreaks that is robust to uncertainty about where the outbreak will emerge.


Subject(s)
Disease Outbreaks; Humans; Likelihood Functions; Prevalence
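Thompson sampling, the baseline the PLS method is compared against above, is simple to sketch for location-based testing: maintain a Beta posterior on each location's test positivity, sample from each posterior, and test in the location with the largest draw. The prevalences and round count below are invented for illustration, not the study's simulation settings:

```python
import random

random.seed(0)

# Hypothetical per-location test-positivity rates; location 2 has the outbreak.
true_prev = [0.01, 0.02, 0.10]
alpha = [1, 1, 1]   # Beta(1, 1) priors on positivity at each location
beta_ = [1, 1, 1]

counts = [0, 0, 0]  # how often each location gets the test
for _ in range(2000):
    # Sample a plausible prevalence from each posterior; test where it is largest.
    draws = [random.betavariate(alpha[i], beta_[i]) for i in range(3)]
    i = draws.index(max(draws))
    counts[i] += 1
    positive = random.random() < true_prev[i]
    alpha[i] += positive        # Beta posterior update
    beta_[i] += 1 - positive
```

Sampling concentrates on the highest-positivity location, which also hints at the failure mode the abstract reports: when the outbreak starts in a location whose *initial* prevalence is lower than others', TS is slow to look there.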
16.
Med Decis Making ; 42(8): 1052-1063, 2022 11.
Article in English | MEDLINE | ID: mdl-35591754

ABSTRACT

BACKGROUND: For certain communicable disease outbreaks, mass prophylaxis of uninfected individuals can curtail new infections. When an outbreak emerges, decision makers could benefit from methods to quickly determine whether mass prophylaxis is cost-effective. We consider 2 approaches: a simple decision model and machine learning meta-models. The motivating example is plague in Madagascar. METHODS: We use a susceptible-exposed-infectious-removed (SEIR) epidemic model to derive a decision rule based on the fraction of the population infected, effective reproduction ratio, infection fatality rate, quality-adjusted life-year loss associated with death, prophylaxis effectiveness and cost, time horizon, and willingness-to-pay threshold. We also develop machine learning meta-models of a detailed model of plague in Madagascar using logistic regression, random forest, and neural network models. In numerical experiments, we compare results using the decision rule and the meta-models to results obtained using the simulation model. We vary the initial fraction of the population infected, the effective reproduction ratio, the intervention start date and duration, and the cost of prophylaxis. LIMITATIONS: We assume homogeneous mixing and no negative side effects due to antibiotic prophylaxis. RESULTS: The simple decision rule matched the SEIR model outcome in 85.4% of scenarios. Using data for a 2017 plague outbreak in Madagascar, the decision rule correctly indicated that mass prophylaxis was not cost-effective. The meta-models were significantly more accurate, with an accuracy of 92.8% for logistic regression, 95.8% for the neural network model, and 96.9% for the random forest model. CONCLUSIONS: A simple decision rule using minimal information about an outbreak can accurately evaluate the cost-effectiveness of mass prophylaxis for outbreak mitigation. 
Meta-models of a complex disease simulation can achieve higher accuracy but with greater computational and data requirements and less interpretability. HIGHLIGHTS: We use a susceptible-exposed-infectious-removed model and net monetary benefit to derive a simple decision rule to evaluate the cost-effectiveness of mass prophylaxis. We use the example of plague in Madagascar to compare the performance of the analytically derived decision rule to that of machine learning meta-models trained on a stochastic dynamic transmission model. We assess the accuracy of each approach for different combinations of disease dynamics and intervention scenarios. The machine learning meta-models are more accurate predictors of mass prophylaxis cost-effectiveness. However, the simple decision rule is also accurate and may be a preferred substitute in low-resource settings.
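The quantities the decision rule combines (fraction of the population infected, effective reproduction ratio, infection fatality rate, QALY loss per death, prophylaxis effectiveness and cost, and a willingness-to-pay threshold) can be illustrated numerically. The sketch below is not the paper's analytic rule: it approximates the same comparison with a toy deterministic SEIR model plus a net-monetary-benefit sign test, and every parameter value and the proportional-transmission-reduction assumption are illustrative only.

```python
def seir_attack_rate(r_eff, frac_infected0, sigma=0.25, gamma=1/7, days=500, dt=0.1):
    """Fraction of the population ever infected in a deterministic SEIR model."""
    beta = r_eff * gamma
    s, e, i = 1.0 - frac_infected0, 0.0, frac_infected0
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i * dt
        new_infectious = sigma * e * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - gamma * i * dt
    return 1.0 - s

def mass_prophylaxis_nmb(r_eff, frac_infected0, ifr, qaly_loss_per_death,
                         prophylaxis_effectiveness, cost_per_person, wtp):
    """Per-capita net monetary benefit of mass prophylaxis vs. doing nothing."""
    base = seir_attack_rate(r_eff, frac_infected0)
    # Crude assumption: prophylaxis scales transmission down proportionally.
    treated = seir_attack_rate(r_eff * (1 - prophylaxis_effectiveness),
                               frac_infected0)
    qalys_gained = (base - treated) * ifr * qaly_loss_per_death
    return wtp * qalys_gained - cost_per_person

nmb = mass_prophylaxis_nmb(r_eff=1.5, frac_infected0=1e-4, ifr=0.1,
                           qaly_loss_per_death=20.0,
                           prophylaxis_effectiveness=0.5,
                           cost_per_person=10.0, wtp=1000.0)
print("mass prophylaxis cost-effective:", nmb > 0)
```

An analytic rule of the kind the paper derives replaces the simulation loop with a closed-form approximation, which is what makes it usable with minimal data during an emerging outbreak.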


Subject(s)
Epidemics, Plague, Humans, Cost-Benefit Analysis, Plague/epidemiology, Quality-Adjusted Life Years
18.
Med Decis Making ; 42(1): 8-16, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34027738

ABSTRACT

BACKGROUND: Personalizing medical treatment decisions based on patient-specific risks and/or preferences can improve health outcomes. Decision makers frequently select treatments based on partial personalization (e.g., personalization based on risks but not preferences or vice versa) due to a lack of data about patient-specific risks and preferences. However, partially personalizing treatment decisions based on a subset of patient risks and/or preferences can result in worse population-level health outcomes than no personalization and can increase the variance of population-level health outcomes. METHODS: We develop a new method for partially personalizing treatment decisions that avoids these problems. Using a case study of antipsychotic treatment for schizophrenia, as well as 4 additional illustrative examples, we demonstrate the adverse effects and our method for avoiding them. RESULTS: For the schizophrenia treatment case study, using a previously proposed modeling approach for personalizing treatment decisions and using only a subset of patient preferences regarding treatment efficacy and side effects, mean population-level health outcomes decreased by 0.04 quality-adjusted life-years (QALYs; 95% credible interval [crI]: 0.02-0.06) per patient compared with no personalization. Using our new method and considering the same subset of patient preferences, mean population-level health outcomes increased by 0.01 QALYs (95% crI: 0.00-0.03) per patient as compared with no personalization, and the variance decreased. LIMITATIONS: We assumed a linear and additive utility function. CONCLUSIONS: Personalized treatments should be selected in a way that does not decrease expected population-level health outcomes or increase their variance; otherwise, personalization can yield worse risk-adjusted, population-level health outcomes than treatment selection with no personalization.
Our method can be used to ensure this, thereby helping patients realize the benefits of treatment personalization without the potential harms.
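The failure mode described above can be reproduced in a few lines. The sketch below is not the authors' algorithm: it uses an invented two-treatment utility model in which a naive rule optimizes on a measured efficacy weight while silently dropping an unmeasured side-effect weight, and a simple guard (imputing the population mean for the missing weight) avoids falling below the no-personalization baseline. All treatments, weights, and distributions are assumptions.

```python
import random

random.seed(0)

# Assumed (efficacy, side-effect burden) per treatment.
TREATMENTS = {"A": (0.8, 0.6), "B": (0.6, 0.2)}

def utility(t, w_eff, w_side):
    """Linear additive utility, as in the paper's stated limitation."""
    eff, side = TREATMENTS[t]
    return w_eff * eff - w_side * side

# Simulated population: measured efficacy weight, unmeasured side-effect weight.
patients = [(random.uniform(0.5, 1.0), random.uniform(0.0, 1.5))
            for _ in range(10_000)]
mean_w_side = 0.75  # population mean of the unmeasured preference weight

def naive(w_eff):
    # Partial personalization: the unmeasured weight is silently dropped.
    return max(TREATMENTS, key=lambda t: w_eff * TREATMENTS[t][0])

def guarded(w_eff):
    # One simple guard: impute the population mean for the missing weight.
    return max(TREATMENTS, key=lambda t: utility(t, w_eff, mean_w_side))

# No personalization: the single treatment best for the population on average.
default = max(TREATMENTS,
              key=lambda t: sum(utility(t, we, ws) for we, ws in patients))

def mean_outcome(rule):
    return sum(utility(rule(we), we, ws) for we, ws in patients) / len(patients)

print("no personalization:", round(mean_outcome(lambda we: default), 3))
print("naive partial     :", round(mean_outcome(naive), 3))
print("guarded partial   :", round(mean_outcome(guarded), 3))
```

In this toy setup the naive rule always picks the high-efficacy, high-burden treatment and underperforms no personalization, while the guarded rule never does worse than the default; the paper's method provides this guarantee in general.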


Subject(s)
Antipsychotic Agents, Schizophrenia, Antipsychotic Agents/therapeutic use, Humans, Patient Preference, Quality-Adjusted Life Years, Schizophrenia/drug therapy
19.
Med Decis Making ; 42(4): 450-460, 2022 05.
Article in English | MEDLINE | ID: mdl-34416832

ABSTRACT

BACKGROUND: Personalizing medical treatments based on patient-specific risks and preferences can improve patient health. However, models to support personalized treatment decisions are often complex and difficult to interpret, limiting their clinical application. METHODS: We present a new method, using machine learning to create meta-models, for simplifying complex models for personalizing medical treatment decisions. We consider simple interpretable models, interpretable ensemble models, and noninterpretable ensemble models. We use variable selection with a penalty for patient-specific risks and/or preferences that are difficult, risky, or costly to obtain. We interpret the meta-models to the extent permitted by their model architectures. We illustrate our method by applying it to simplify a previously developed model for personalized selection of antipsychotic drugs for patients with schizophrenia. RESULTS: The best simplified interpretable, interpretable ensemble, and noninterpretable ensemble models contained at most half the number of patient-specific risks and preferences compared with the original model. The simplified models achieved 60.5% (95% credible interval [crI]: 55.2-65.4), 60.8% (95% crI: 55.5-65.7), and 83.8% (95% crI: 80.8-86.6), respectively, of the net health benefit of the original model (quality-adjusted life-years gained). Important variables in all models were similar and made intuitive sense. Computation time for the meta-models was orders of magnitude less than for the original model. LIMITATIONS: The simplified models share the limitations of the original model (e.g., potential biases). CONCLUSIONS: Our meta-modeling method is disease- and model-agnostic and can be used to simplify complex models for personalization, allowing for variable selection in addition to improved model interpretability and computational performance.
Simplified models may be more likely to be adopted in clinical settings and can help improve equity in patient outcomes.
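The core idea of a penalized meta-model can be sketched in a small, self-contained example. This is not the authors' pipeline: a synthetic decision function stands in for the complex simulation model, and an L1-penalized logistic regression (fit here with hand-rolled proximal gradient descent) serves as the interpretable meta-model, with the penalty driving uninformative or costly-to-measure patient variables to exactly zero. All data and coefficients are invented.

```python
import math
import random

random.seed(1)
N_FEATURES, N = 5, 1000

def complex_model(x):
    # Stand-in for the detailed simulation: recommendation depends only on
    # two of the five candidate patient variables.
    return 1 if 1.5 * x[0] - 1.0 * x[1] > 0 else 0

data = [[random.gauss(0, 1) for _ in range(N_FEATURES)] for _ in range(N)]
labels = [complex_model(x) for x in data]

def fit_l1_logistic(xs, ys, lam=0.02, lr=0.1, epochs=200):
    """Proximal gradient descent for L1-penalized logistic regression."""
    w = [0.0] * N_FEATURES
    for _ in range(epochs):
        grad = [0.0] * N_FEATURES
        for x, y in zip(xs, ys):
            z = sum(wj * xj for wj, xj in zip(w, x))
            p = 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))
            for j in range(N_FEATURES):
                grad[j] += (p - y) * x[j] / len(xs)
        for j in range(N_FEATURES):
            w[j] -= lr * grad[j]
            # Soft-thresholding: the L1 penalty zeroes out weak predictors.
            w[j] = math.copysign(max(abs(w[j]) - lr * lam, 0.0), w[j])
    return w

w = fit_l1_logistic(data, labels)
meta_pred = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0 for x in data]
agreement = sum(p == y for p, y in zip(meta_pred, labels)) / N
retained = [j for j in range(N_FEATURES) if abs(w[j]) > 1e-6]
print(f"agreement with complex model: {agreement:.1%}; variables kept: {retained}")
```

Raising the penalty `lam` mimics assigning a higher measurement cost to a variable: the meta-model trades a little agreement with the original model for a shorter list of patient inputs.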


Subject(s)
Antipsychotic Agents, Schizophrenia, Antipsychotic Agents/therapeutic use, Humans, Machine Learning, Outcome Assessment, Health Care, Quality-Adjusted Life Years, Schizophrenia/drug therapy
20.
Med Decis Making ; 42(4): 436-449, 2022 05.
Article in English | MEDLINE | ID: mdl-34378462

ABSTRACT

BACKGROUND: Many cost-effectiveness analyses (CEAs) only consider outcomes for a single disease when comparing interventions that prevent or treat 1 disease (e.g., vaccination) to interventions that prevent or treat multiple diseases (e.g., vector control to prevent mosquito-borne diseases). An intervention targeted to a single disease may be preferred to a broader intervention in a single-disease model, but this conclusion might change if outcomes from the additional diseases were included. However, multidisease models are often complex and difficult to construct. METHODS: We present conditions for when multiple diseases should be considered in such a CEA. We propose methods for estimating health outcomes and costs associated with control of additional diseases using parallel single-disease models. Parallel modeling can incorporate competing mortality and coinfection from multiple diseases while maintaining model simplicity. We illustrate our approach with a CEA that compares a dengue vaccine, a chikungunya vaccine, and mosquito control via insecticide and mosquito nets, which can prevent dengue, chikungunya, Zika, and yellow fever. RESULTS: The parallel models and the multidisease model generated similar estimates of disease incidence and deaths with much less complexity. When using this method in our case study, considering only chikungunya and dengue, the preferred strategy was insecticide. A broader strategy (insecticide plus long-lasting insecticide-treated nets) was not preferred when Zika and yellow fever were included, suggesting the conclusion is robust even without the explicit inclusion of all affected diseases. LIMITATIONS: Parallel modeling assumes independent probabilities of infection for each disease. CONCLUSIONS: When multidisease effects are important, our parallel modeling method can be used to model multiple diseases accurately while avoiding additional complexity.
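The combination step for parallel single-disease models can be illustrated with a toy calculation. This is not the paper's model: per-disease risks and case-fatality ratios below are invented placeholders, and outcomes are combined under the independence assumption the paper states as a limitation, with the probability of any infection computed as 1 minus the product of per-disease escape probabilities.

```python
POP = 100_000  # illustrative population size

# Assumed per-disease annual infection risk without and with a shared
# intervention (e.g., vector control), plus an assumed case-fatality ratio.
DISEASES = {
    #               (risk_baseline, risk_intervention, case_fatality)
    "dengue":       (0.050, 0.020, 0.001),
    "chikungunya":  (0.030, 0.012, 0.001),
    "zika":         (0.020, 0.008, 0.0005),
    "yellow_fever": (0.005, 0.002, 0.05),
}

def combined_outcomes(intervention):
    """Combine parallel single-disease models into population outcomes."""
    idx = 1 if intervention else 0
    p_escape_all = 1.0
    deaths = 0.0
    for risks in DISEASES.values():
        risk, cfr = risks[idx], risks[2]
        p_escape_all *= (1 - risk)   # independence assumption across diseases
        deaths += POP * risk * cfr   # per-disease deaths, summed across models
    infected_any = POP * (1 - p_escape_all)
    return infected_any, deaths

base_inf, base_deaths = combined_outcomes(False)
int_inf, int_deaths = combined_outcomes(True)
print(f"infections averted: {base_inf - int_inf:,.0f}")
print(f"deaths averted:     {base_deaths - int_deaths:,.1f}")
```

Note how the `1 - product` combination naturally handles overlap (people infected by more than one disease), which a naive sum of per-disease infections would double-count.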


Subject(s)
Aedes, Chikungunya Fever, Communicable Diseases, Dengue, Insecticides, Yellow Fever, Zika Virus Infection, Zika Virus, Animals, Chikungunya Fever/epidemiology, Dengue/epidemiology, Dengue/prevention & control, Humans, Mosquito Vectors, Zika Virus Infection/epidemiology