Results 1-20 of 127

1.
Kidney Int ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38945395

ABSTRACT

Baseline kidney function following kidney transplantation is often used in research and clinical decision-making, yet it is not well defined. Here, a method to determine baseline function was proposed and validated on three single-center retrospective cohorts: a main cohort of 922 patients from Belgium and two validation cohorts of 987 patients from the Netherlands and 519 patients from Germany. For each transplant, a segmented regression model was fitted on the estimated glomerular filtration rate (eGFR) evolution during the first year post-transplantation. This yielded estimates of the change-point timing, the rate of eGFR change before and after the change point, and the eGFR value at the change point, now considered the "baseline function". Associations of eGFR evolution with recipient/donor characteristics and the graft failure rate were assessed with linear regression and Cox regression, respectively. The change point occurred on average at an eGFR value of 43.7±14.6 mL/min/1.73 m², at a median time of 6.5 days post-transplantation. Despite significant associations with several baseline donor-recipient characteristics (particularly donor type: living vs. deceased), the predictive value of these characteristics for the eGFR value and timing of the change point was limited. This followed from a large heterogeneity within eGFR trajectories, which in turn indicated that favorable levels of kidney function could be reached despite a suboptimal initial evolution. Segmented regression consistently provided a good fit to early eGFR evolution, and its estimate of the change point can be a useful reference value in future analyses. Thus, our study shows that baseline kidney function after transplantation is heterogeneous and partly related to pretransplant donor characteristics.
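
The per-transplant model described here — two eGFR slopes joined at an estimated change point — can be sketched as a grid search over candidate change points. A minimal illustration in Python on synthetic data (the slopes, noise level, and search grid are invented; this is not the authors' code):

```python
# Sketch: piecewise-linear fit of one eGFR trajectory with an unknown change
# point, estimated by grid search over candidate change-point days.
# Synthetic data and invented slopes/noise; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(365)                       # days post-transplantation
cp_true = 6.5
egfr = np.where(t <= cp_true, 20 + 3.5 * t, 43.0 + 0.01 * (t - cp_true))
egfr = egfr + rng.normal(0, 2, t.size)

def fit_segmented(t, y, cp):
    # design: intercept, pre-slope, slope change after cp (continuous at cp)
    X = np.column_stack([np.ones(t.size), t, np.maximum(t - cp, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

candidates = np.linspace(1, 60, 200)     # search the first 60 days
sse = [fit_segmented(t, egfr, cp)[1] for cp in candidates]
cp_hat = candidates[int(np.argmin(sse))]
beta, _ = fit_segmented(t, egfr, cp_hat)
baseline = beta[0] + beta[1] * cp_hat    # eGFR at the change point ("baseline function")
print(f"change point ~ day {cp_hat:.1f}, baseline eGFR ~ {baseline:.1f}")
```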

2.
Am J Obstet Gynecol ; 230(1): 71.e1-71.e14, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37726057

ABSTRACT

BACKGROUND: There is a growing literature base regarding menstrual changes following COVID-19 vaccination among premenopausal people. However, relatively little is known about uterine bleeding in postmenopausal people following COVID-19 vaccination. OBJECTIVE: This study aimed to examine trends in incident postmenopausal bleeding diagnoses over time before and after COVID-19 vaccine introduction, and to describe cases of new-onset postmenopausal bleeding after COVID-19 vaccination. STUDY DESIGN: For postmenopausal bleeding incidence calculations, monthly population-level cohorts consisted of female Kaiser Permanente Northwest members aged ≥45 years. Those diagnosed with incident postmenopausal bleeding in the electronic medical record were included in monthly numerators. Members with preexisting postmenopausal bleeding or abnormal uterine bleeding, or who were at increased risk of bleeding due to other health conditions, were excluded from monthly calculations. We used segmented regression analysis to estimate changes in the incidence of postmenopausal bleeding diagnoses from 2018 through 2021 in Kaiser Permanente Northwest members meeting the inclusion criteria, stratified by COVID-19 vaccination status in 2021. In addition, we identified all members with ≥1 COVID-19 vaccination between December 14, 2020 and August 14, 2021, who had an incident postmenopausal bleeding diagnosis within 60 days of vaccination. COVID-19 vaccination, diagnostic procedures, and presumed bleeding etiology were assessed through chart review and described. A temporal scan statistic was run on all cases without clear bleeding etiology. RESULTS: In a population of 75,530 to 82,693 individuals per month, there was no statistically significant difference in the rate of incident postmenopausal bleeding diagnoses before and after COVID-19 vaccine introduction (P=.59). A total of 104 individuals had incident postmenopausal bleeding diagnosed within 60 days following COVID-19 vaccination; 76% of cases (79/104) were confirmed as postvaccination postmenopausal bleeding after chart review. Median time from vaccination to bleeding onset was 21 days (range: 2-54 days). Among the 56 postmenopausal bleeding cases with a provider-attributed etiology, the common causes of bleeding were uterine or cervical lesions (50% [28/56]), hormone replacement therapy (13% [7/56]), and proliferative endometrium (13% [7/56]). Among the 23 cases without a clear etiology, there was no statistically significant clustering of postmenopausal bleeding onset following vaccination. CONCLUSION: Within this integrated health system, introduction of COVID-19 vaccines was not associated with an increase in incident postmenopausal bleeding diagnoses. Diagnosis of postmenopausal bleeding in the 60 days following receipt of a COVID-19 vaccination was rare.


Subject(s)
COVID-19 Vaccines, COVID-19, Humans, Female, COVID-19 Vaccines/adverse effects, Postmenopause, COVID-19/epidemiology, COVID-19/prevention & control, COVID-19/complications, Uterine Hemorrhage/epidemiology, Uterine Hemorrhage/etiology, Vaccination/adverse effects
3.
Am J Obstet Gynecol ; 230(5): 540.e1-540.e13, 2024 05.
Article in English | MEDLINE | ID: mdl-38219855

ABSTRACT

BACKGROUND: There is evidence suggesting that COVID-19 vaccination may be associated with small, transitory effects on uterine bleeding, possibly including menstrual timing, flow, and duration, in some individuals. However, changes in health care seeking, diagnosis, and workup for abnormal uterine bleeding in the COVID-19 vaccine era are less clear. OBJECTIVE: This study aimed to assess the impact of COVID-19 vaccination on incident abnormal uterine bleeding diagnosis and diagnostic evaluation in a large integrated health system. STUDY DESIGN: Using segmented regression, we assessed whether the availability of COVID-19 vaccines was associated with changes in monthly, population-based rates of incident abnormal uterine bleeding diagnoses relative to the prepandemic period in health system members aged 16 to 44 years who were not menopausal. We also compared clinical and demographic characteristics of patients diagnosed with incident abnormal uterine bleeding between December 2020 and October 13, 2021 by vaccination status (never vaccinated, vaccinated in the 60 days before diagnosis, vaccinated >60 days before diagnosis). Furthermore, we conducted detailed chart review of patients diagnosed with abnormal uterine bleeding within 1 to 60 days of COVID-19 vaccination in the same time period. RESULTS: In monthly populations ranging from 79,000 to 85,000 female health system members, the incidence of abnormal uterine bleeding diagnosis per 100,000 person-days ranged from 8.97 to 19.19. There was no significant change in the level or trend of the incidence of abnormal uterine bleeding diagnoses between the prepandemic (January 2019-January 2020) and post-COVID-19 vaccine (December 2020-December 2021) periods. A comparison of clinical characteristics of 2717 abnormal uterine bleeding cases by vaccination status suggested that abnormal bleeding among recently vaccinated patients was similar to or less severe than that among patients who had never been vaccinated or those vaccinated >60 days before. There were also significant differences in the age and race of patients with incident abnormal uterine bleeding diagnoses by vaccination status (Ps<.02). Never-vaccinated patients were the youngest, and those vaccinated >60 days before were the oldest. The proportion of patients who were Black/African American was highest among never-vaccinated patients, and the proportion of Asian patients was higher among vaccinated patients. Chart review of 114 confirmed postvaccination abnormal uterine bleeding cases diagnosed from December 2020 through October 13, 2021 found that the most common symptoms reported were changes in the timing, duration, and volume of bleeding. Approximately one-third of cases received no diagnostic workup; 57% had no etiology for the bleeding documented in the electronic health record. In 12% of cases, the patient mentioned or asked about a possible link between their bleeding and their recent COVID-19 vaccine. CONCLUSION: The availability of COVID-19 vaccination was not associated with a change in the incidence of medically attended abnormal uterine bleeding in our population of over 79,000 female patients of reproductive age. In addition, among 2717 patients with abnormal uterine bleeding diagnoses in the period following COVID-19 vaccine availability, receipt of the vaccine was not associated with greater bleeding severity.


Subject(s)
COVID-19 Vaccines, COVID-19, Uterine Hemorrhage, Humans, Female, COVID-19 Vaccines/adverse effects, Adult, Uterine Hemorrhage/etiology, Young Adult, COVID-19/prevention & control, COVID-19/complications, Adolescent, Incidence, SARS-CoV-2, Vaccination/adverse effects, Vaccination/statistics & numerical data
4.
BMC Med Res Methodol ; 24(1): 31, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341540

ABSTRACT

BACKGROUND: The Interrupted Time Series (ITS) is a robust design for evaluating public health and policy interventions or exposures when randomisation may be infeasible. Several statistical methods are available for the analysis and meta-analysis of ITS studies. We sought to empirically compare available methods when applied to real-world ITS data. METHODS: We sourced ITS data from published meta-analyses to create an online data repository. Each dataset was re-analysed using two ITS estimation methods. The level- and slope-change effect estimates (and standard errors) were calculated and combined using fixed-effect and four random-effects meta-analysis methods. We examined differences in meta-analytic level- and slope-change estimates, their 95% confidence intervals, p-values, and estimates of heterogeneity across the statistical methods. RESULTS: Of 40 eligible meta-analyses, data from 17 meta-analyses including 282 ITS studies were obtained (predominantly investigating the effects of public health interruptions (88%)) and analysed. We found that on average, the meta-analytic effect estimates, their standard errors and between-study variances were not sensitive to meta-analysis method choice, irrespective of the ITS analysis method. However, across ITS analysis methods, for any given meta-analysis, there could be small to moderate differences in meta-analytic effect estimates, and important differences in the meta-analytic standard errors. Furthermore, the confidence interval widths and p-values for the meta-analytic effect estimates varied depending on the choice of confidence interval method and ITS analysis method. CONCLUSIONS: Our empirical study showed that meta-analysis effect estimates, their standard errors, confidence interval widths and p-values can be affected by statistical method choice. These differences may importantly impact interpretations and conclusions of a meta-analysis and suggest that the statistical methods are not interchangeable in practice.
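
As a reference for the pooling step, a fixed-effect (inverse-variance) combination of level-change estimates looks like the following; the numbers are illustrative, not drawn from the repository:

```python
# Sketch: fixed-effect (inverse-variance) pooling of level-change estimates
# from several ITS studies. Illustrative numbers, not the repository data.
import numpy as np

est = np.array([-4.2, -1.8, -3.1, -2.5])   # level-change estimate per study
se  = np.array([1.1,  0.9,  1.4,  0.7])    # corresponding standard errors

w = 1.0 / se**2                             # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled level change = {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```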


Subject(s)
Public Health, Humans, Interrupted Time Series Analysis
5.
BMC Med Res Methodol ; 24(1): 62, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38461257

ABSTRACT

INTRODUCTION: The interrupted time series (ITS) design is a commonly used method for evaluating large-scale interventions in clinical practice or public health. However, improper use of this method can lead to biased results. OBJECTIVE: To investigate the design and statistical analysis characteristics of drug utilization studies using the ITS design, and to give recommendations for improvement. METHODS: A literature search was conducted in PubMed from January 2021 to December 2021. We included original articles that used an ITS design to investigate drug utilization, without restriction on study population or outcome types. A structured, pilot-tested questionnaire was developed to extract information regarding study characteristics and details of the design and statistical analysis. RESULTS: We included 153 eligible studies. Among these, 28.1% (43/153) clearly explained the rationale for using the ITS design and 13.7% (21/153) clarified the rationale for the specified ITS model structure. One hundred and forty-nine studies used aggregated data for the ITS analysis, and 20.8% (31/149) clarified the rationale for the number of time points. Consideration of autocorrelation, non-stationarity, and seasonality was often lacking among these studies, and only 14 studies mentioned all three methodological issues. Missing data were mentioned in 31 studies. Only 39.2% (60/153) reported the regression models, and 15 studies gave an incorrect interpretation of the level change due to the time parameterization. Time-varying participant characteristics were considered in 24 studies. Of 97 studies containing hierarchical data, 23 clarified the heterogeneity among clusters and used statistical methods to address it. CONCLUSION: The quality of design and statistical analysis in ITS studies of drug utilization remains unsatisfactory. Three emerging methodological issues warrant particular attention: incorrect interpretation of the level change due to time parameterization, time-varying participant characteristics, and hierarchical data analysis. We offer specific recommendations on the design, analysis, and reporting of ITS studies.
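
One of the issues flagged here — misinterpreting the level change because of how time is parameterized — is easy to demonstrate: with the interaction coded as post × t rather than post × (t − t0), the coefficient on the post indicator is the level change extrapolated back to t = 0, not at the interruption. A sketch with synthetic monthly data and assumed effect sizes:

```python
# Sketch of one flagged issue: if the interaction is coded as post*t rather
# than post*(t - t0), the coefficient on `post` is the level change
# extrapolated back to t = 0, not at the interruption. Synthetic data;
# the effect sizes are assumed.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, t0 = 48, 24                                  # 48 months, policy at month 24
t = np.arange(n, dtype=float)
post = (t >= t0).astype(float)
y = 100 + 0.5 * t - 8 * post - 0.3 * post * (t - t0) + rng.normal(0, 2, n)

X_centered = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))
X_raw = sm.add_constant(np.column_stack([t, post, post * t]))
b_c = sm.OLS(y, X_centered).fit().params
b_r = sm.OLS(y, X_raw).fit().params
print("level change at interruption:", round(b_c[2], 2))   # ~ -8
print("same data, raw-t coding:     ", round(b_r[2], 2))   # ~ -8 + 0.3*t0 = -0.8
```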


Subject(s)
Public Health, Research Design, Humans, Interrupted Time Series Analysis, Cross-Sectional Studies, Drug Utilization
6.
J Anim Breed Genet ; 141(4): 453-464, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38299872

ABSTRACT

Inbreeding depression is a genetic phenomenon associated with the loss of fitness and mean phenotypic performance due to mating between relatives. Historically, inbreeding coefficients have been estimated from pedigree information. However, the onset of genomic selection programs provides large datasets of individuals genotyped using SNP arrays, enabling more precise assessment of an individual's inbreeding at the genomic level. One of the traits most sensitive to issues stemming from increased inbreeding is reproduction. This is particularly important in horses, in which fertility is only moderate compared to other livestock species. To explore this further, we evaluated the effect of inbreeding on five reproductive traits (age at first foaling (AFF), average interval between foalings (AIF), total number of foalings (NF), productive life (PL) and reproductive efficiency (RE)) in Pura Raza Español mares using genomic data. Residual predicted phenotypes were obtained by purging these traits through the REML (wgResidual) and ssGREML (gResidual) approaches using reproductive data from 29,847 PRE mares and the BLUPF90+ program. Next, we used pedigree-based (Fped) and ROH-based genomic (FROH) inbreeding coefficients derived from 1018 animals genotyped with 61,271 SNPs to estimate inbreeding depression (linear regression). Our results indicated significant levels of inbreeding depression for all reproductive traits, with the exception of the AIF trait when Fped was used. However, all traits were negatively affected by the increase in genomic inbreeding, and FROH was found to capture more inbreeding depression than Fped. Likewise, models using genomic data to estimate predicted residual phenotypes (ssGREML) explained more variance than the models not using genomics (REML). Finally, a segmented regression analysis was conducted to evaluate the effect of inbreeding depression, revealing that the effects of genealogical and genomic homozygosity do not manifest uniformly in reproductive traits; rather, the level of inbreeding depression ranged from low to high as homozygosity increased. This analysis also showed that reproductive traits are very sensitive to inbreeding depression, even at relatively low levels of homozygosity.


Subject(s)
Homozygote, Inbreeding Depression, Reproduction, Animals, Horses/genetics, Horses/physiology, Female, Reproduction/genetics, Phenotype, Inbreeding, Pedigree, Polymorphism, Single Nucleotide, Genotype
7.
Biometrics ; 79(3): 2272-2285, 2023 09.
Article in English | MEDLINE | ID: mdl-36056911

ABSTRACT

High-throughput biological experiments are essential tools for identifying biologically interesting candidates in large-scale omics studies. The results of a high-throughput biological experiment rely heavily on the operational factors chosen in its experimental and data-analytic procedures. Understanding how these operational factors influence the reproducibility of the experimental outcome is critical for selecting the optimal parameter settings and designing reliable high-throughput workflows. However, the influence of an operational factor may differ between strong and weak candidates in a high-throughput experiment, complicating the selection of parameter settings. To address this issue, we propose a novel segmented regression model, called segmented correspondence curve regression, to assess the influence of operational factors on the reproducibility of high-throughput experiments. Our model dissects the heterogeneous effects of operational factors on strong and weak candidates, providing a principled way to select operational parameters. Based on this framework, we also develop a sup-likelihood ratio test for the existence of heterogeneity. Simulation studies show that our estimation and testing procedures yield well-calibrated type I errors and are substantially more powerful in detecting and locating the differences in reproducibility across workflows than the existing method. Using this model, we investigated an important design question for ChIP-seq experiments: How many reads should one sequence to obtain reliable results in a cost-effective way? Our results reveal new insights into the impact of sequencing depth on the binding-site identification reproducibility, helping biologists determine the most cost-effective sequencing depth to achieve sufficient reproducibility for their study goals.


Subject(s)
High-Throughput Nucleotide Sequencing, Reproducibility of Results, Computer Simulation, High-Throughput Nucleotide Sequencing/methods
8.
BMC Med Res Methodol ; 23(1): 277, 2023 11 24.
Article in English | MEDLINE | ID: mdl-38001462

ABSTRACT

The interrupted time series (ITS) design is widely used to examine the effects of large-scale public health interventions and has the highest level of evidence validity. However, there is a notable gap regarding methods that account for lag effects of interventions. To address this, we introduced activation functions (ReLU and Sigmoid) into the classic segmented regression (CSR) of the ITS design during the lag period, leading to the proposal of an optimized segmented regression (OSR), namely OSR-ReLU and OSR-Sig. To compare the performance of the models, we simulated data under multiple scenarios, including positive or negative impacts of interventions, linear or nonlinear lag patterns, different lag lengths, and different degrees of fluctuation in the outcome time series. Based on the simulated data, we examined the bias, mean relative error (MRE), mean square error (MSE), mean width of the 95% confidence interval (CI), and coverage rate of the 95% CI for the long-term impact estimates of interventions among the different models. OSR-ReLU and OSR-Sig yielded approximately unbiased estimates of the long-term impacts across all scenarios, whereas CSR did not. In terms of accuracy, OSR-ReLU and OSR-Sig outperformed CSR, exhibiting lower values of MRE and MSE. With increasing lag length, the optimized models provided robust estimates of long-term impacts. Regarding precision, OSR-ReLU and OSR-Sig surpassed CSR, demonstrating narrower mean widths of the 95% CI and higher coverage rates. Our optimized models are powerful tools, as they can model the lag effects of interventions and provide more accurate and precise estimates of the long-term impact of interventions. The introduction of an activation function provides new ideas for improving the CSR model.
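
One plausible reading of the OSR idea, sketched under assumptions (the ramp form, lag length, and effect sizes below are invented; this is not the authors' implementation): pass elapsed post-intervention time through an activation function so the level change phases in over the lag period instead of jumping.

```python
# Sketch (not the authors' implementation): model a lagged intervention by
# passing elapsed post-intervention time through an activation function so
# the level change phases in over the lag period. The ramp form, lag length,
# and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

def relu_ramp(t, t0, lag):
    # rises linearly from 0 at t0 to 1 at t0 + lag, then saturates
    return np.clip((t - t0) / lag, 0.0, 1.0)

rng = np.random.default_rng(2)
n, t0, lag = 60, 30, 6
t = np.arange(n, dtype=float)
y = 50 + 0.2 * t - 10 * relu_ramp(t, t0, lag) + rng.normal(0, 1.5, n)

ramp = relu_ramp(t, t0, lag)
X = sm.add_constant(np.column_stack([t, ramp]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # coefficient on `ramp` estimates the long-term level impact (~ -10)
```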


Subject(s)
Aortic Aneurysm, Abdominal, Humans, Time Factors, Interrupted Time Series Analysis, Treatment Outcome
9.
Pharmacoepidemiol Drug Saf ; 32(3): 298-311, 2023 03.
Article in English | MEDLINE | ID: mdl-36331361

ABSTRACT

PURPOSE: To develop and test a flexible, scalable tool using interrupted time series (ITS) analysis to assess the impact of Food and Drug Administration (FDA) regulatory actions on drug use. METHODS: We applied the tool in the Sentinel Distributed Database to assess the impact of FDA's 2010 drug safety communications (DSCs) concerning the safety of long-acting beta2-agonists (LABA) in adult asthma patients. We evaluated changes in LABA use by measuring the initiation of LABA alone and the concomitant use of LABA and asthma controller medications (ACM) after the DSCs. The tool generated ITS graphs and used segmented regression to estimate the baseline slope, level change, slope change, and absolute and relative changes at up to two user-specified time points after the intervention. We tested the tool and compared our results against prior analyses that used similar measures. RESULTS: Initiation of LABA alone declined among asthma patients aged 18-45 years before the FDA DSCs (-0.10% per quarter; 95% CI, -0.11% to -0.09%), and the downward trend continued afterward. Concomitant use of LABA and ACM was stable before the FDA DSCs. After the FDA DSCs, there was a small trend decrease of 0.006% per quarter (95% CI, -0.008% to -0.003%). We found similar results among those aged 46-64 years and among patients with poorly controlled asthma. Our results were consistent with previous studies, confirming the performance of the new tool. CONCLUSIONS: We developed and tested a reusable ITS tool for real-world databases formatted to the Sentinel Common Data Model that can assess the impact of regulatory actions on drug use.
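
The absolute and relative changes at a user-specified time point can be computed from the four segmented-regression coefficients by comparing the fitted post-intervention value with the counterfactual baseline extrapolation. A sketch with made-up coefficients (not Sentinel output):

```python
# Sketch: absolute and relative change at a user-specified time after the
# intervention, derived from fitted segmented-regression coefficients by
# comparing the post-intervention line with the extrapolated baseline.
# Coefficients are made up for illustration; time is in quarters.
import numpy as np

beta = np.array([5.0, -0.10, -0.05, -0.006])   # intercept, slope, level chg, slope chg
t0, k = 20, 8                                  # intervention quarter; evaluate 8 later

t = t0 + k
counterfactual = beta[0] + beta[1] * t         # baseline extrapolated to t
fitted = counterfactual + beta[2] + beta[3] * k
absolute = fitted - counterfactual
relative = absolute / counterfactual
print(f"absolute change at +{k} quarters: {absolute:.3f}")
print(f"relative change: {100 * relative:.1f}%")
```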


Subject(s)
Adrenergic beta-2 Receptor Agonists, Asthma, Adult, United States, Humans, United States Food and Drug Administration, Administration, Inhalation, Asthma/drug therapy, Communication, Drug Therapy, Combination, Adrenal Cortex Hormones
10.
Pharmacoepidemiol Drug Saf ; 32(5): 509-516, 2023 05.
Article in English | MEDLINE | ID: mdl-36813735

ABSTRACT

PURPOSE: Drug utilization researchers are often interested in evaluating prescribing and medication use patterns and trends over a specified period of time. Joinpoint regression is a useful methodology for identifying deviations in secular trends without a preconceived notion of where these break points might occur. This article provides a tutorial on the use of joinpoint regression, within Joinpoint software, for the analysis of drug utilization data. METHODS: The statistical considerations for whether a joinpoint regression analytical technique is a suitable approach are discussed. We then offer an introductory tutorial on conducting joinpoint regression (within Joinpoint software) through a step-by-step application: a case study developed using opioid prescribing data from the United States. Data were obtained from public files available through the Centers for Disease Control and Prevention from 2006 to 2018. The tutorial provides the parameters and sample data needed to replicate the case study, and it concludes with general considerations for the reporting of results using joinpoint regression in drug utilization research. RESULTS: The case study evaluated the trend of opioid prescribing in the United States from 2006 to 2018, in which time points of significant variation (one in 2012 and another in 2016) were detected and interpreted. CONCLUSIONS: Joinpoint regression is a helpful methodology for conducting descriptive analyses in drug utilization research. This tool also assists with corroborating assumptions and identifying parameters for fitting other models, such as interrupted time series. The technique and accompanying software are user-friendly; however, researchers interested in using joinpoint regression should exercise caution and follow best practices for the correct measurement of drug utilization.
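
The core joinpoint idea — choosing the break location, and whether a break is warranted at all, from the data — can be sketched as a grid search compared by BIC. The actual Joinpoint software uses more elaborate selection procedures (e.g., permutation tests), so treat this only as an illustration on invented prescribing-rate data:

```python
# Sketch of the joinpoint idea: grid search for a single break in a
# log-linear trend, compared against no break via BIC. Simplified stand-in
# for Joinpoint software's selection procedures; synthetic data.
import numpy as np

rng = np.random.default_rng(3)
year = np.arange(2006, 2019)
rate = np.where(year <= 2012,
                60 * 1.03 ** (year - 2006),               # rising ~3%/yr to 2012
                60 * 1.03 ** 6 * 0.96 ** (year - 2012))   # then falling ~4%/yr
y = np.log(rate) + rng.normal(0, 0.01, year.size)

def bic(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = y.size, X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

X0 = np.column_stack([np.ones(year.size), year])          # no-joinpoint model
best = min(((bic(y, np.column_stack([X0, np.maximum(year - jp, 0)])), jp)
            for jp in year[2:-2]), key=lambda p: p[0])
print("no joinpoint BIC:", round(bic(y, X0), 1),
      "| best joinpoint:", best[1], "BIC:", round(best[0], 1))
```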


Subject(s)
Analgesics, Opioid, Opioid-Related Disorders, Humans, United States, Analgesics, Opioid/therapeutic use, Practice Patterns, Physicians', Opioid-Related Disorders/epidemiology, Opioid-Related Disorders/prevention & control, Opioid-Related Disorders/drug therapy, Prescriptions, Drug Utilization, Drug Prescriptions
11.
BMC Musculoskelet Disord ; 24(1): 456, 2023 Jun 03.
Article in English | MEDLINE | ID: mdl-37270498

ABSTRACT

AIMS: To evaluate the extent to which publication of high-quality randomised controlled trials (RCTs) in 2018 was associated with a change in the volume or trend of subacromial decompression (SAD) surgery in patients with subacromial pain syndrome (SAPS) treated in hospitals across various countries. METHODS: Routinely collected administrative data from the Global Health Data@work collaborative were used to identify SAPS patients who underwent SAD surgery in six hospitals from five countries (Australia, Belgium, Netherlands, United Kingdom, United States) between 01/2016 and 02/2020. Following a controlled interrupted time series design, segmented Poisson regression was used to compare trends in monthly SAD surgeries before (01/2016-01/2018) and after (02/2018-02/2020) publication of the RCTs. The control group consisted of musculoskeletal patients undergoing other procedures. RESULTS: A total of 3,046 SAD surgeries were performed among SAPS patients treated in five hospitals; one hospital did not perform any SAD surgeries. Overall, publication of the trial results was associated with a significant reduction in the trend of SAD surgery use of 2% per month (incidence rate ratio (IRR) 0.984 [0.971-0.998]; P = 0.021), but with large variation between hospitals. No changes in the control group were observed. However, publication of the trial results was also associated with a 2% monthly increased trend (IRR 1.019 [1.004-1.034]; P = 0.014) towards other procedures performed in SAPS patients. CONCLUSION: Publication of the RCT results was associated with a significantly decreased trend in SAD surgery for SAPS patients, although there was large variation between participating hospitals and a possible shift in coding practices cannot be ruled out. This highlights the complexities of implementing recommendations to change routine clinical practice, even when they are based on high-quality evidence.
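
A segmented Poisson regression of the kind described, where the post-publication trend change is read off as an incidence rate ratio, might look like this on synthetic monthly counts (not the Global Health Data@work data):

```python
# Sketch: segmented Poisson regression for monthly procedure counts, with the
# post-publication trend change read off as an incidence rate ratio (IRR).
# Synthetic counts, not the Global Health Data@work data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, t0 = 50, 25                                   # months; RCTs published at t0
t = np.arange(n)
post = (t >= t0).astype(float)
mu = np.exp(4.0 + 0.002 * t - 0.02 * post * (t - t0))   # ~2%/month decline after t0
counts = rng.poisson(mu)

X = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
irr = np.exp(fit.params[-1])    # monthly trend change after publication
print(f"post-publication trend IRR: {irr:.3f}")   # ~0.98, i.e., ~2% monthly decline
```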


Subject(s)
Decompression, Shoulder Pain, Humans, United States/epidemiology, Interrupted Time Series Analysis, Shoulder Pain/diagnosis, Shoulder Pain/epidemiology, Shoulder Pain/surgery, Europe/epidemiology, Australia/epidemiology
12.
Saudi Pharm J ; 31(12): 101851, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38028223

ABSTRACT

Background: The Saudi Food and Drug Authority (SFDA) classified pregabalin as a controlled substance in 2018; however, whether this policy change has affected pregabalin use is unclear. This study examined the trends in pregabalin prescriptions before and after the SFDA restriction. In addition, the co-prescription of controlled analgesics and the use of pregabalin for approved indications were also evaluated. Method: A cross-sectional study was conducted on outpatient pregabalin prescriptions from three healthcare centers in Saudi Arabia. Interrupted time series analysis was used to assess changes over time in pregabalin prescriptions and the number of patients receiving pregabalin. June 2016 to June 2017 was identified as the pre-restriction period, and July 2018 to July 2019 as the post-restriction period. Results: In this study, 77,760 pregabalin prescriptions were identified. There were 9,076 patients on pregabalin in the pre-restriction period with 16,875 prescriptions, compared with 7,123 patients and 19,484 prescriptions post-restriction. The total number of pregabalin users decreased by 21.5% post-restriction, and prescriptions increased by 15.5%. There was no significant change in the monthly trends in pregabalin prescriptions before and after the restriction. However, the numbers of tramadol and acetaminophen/codeine prescriptions in patients who were using pregabalin increased in the post-restriction period by 21% and 16.1%, respectively. Conclusion: Pregabalin use was reduced after the SFDA-enforced prescription restriction was implemented. This was accompanied by increased narcotics use in the post-implementation phase.

13.
World J Urol ; 40(7): 1777-1783, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35384485

ABSTRACT

PURPOSE: Bariatric surgery has shown reductions in overactive bladder (OAB) symptoms; however, its impact on OAB treatment is unknown. The goal of our study was to evaluate the impact of bariatric surgery on OAB medication utilization. METHODS: We used the IBM® MarketScan® commercial databases from 2005 to 2018. We included patients aged ≥ 18 years with 360 days of continuous enrollment before and after bariatric surgery (Roux-en-Y Gastric Bypass or Sleeve Gastrectomy) and at least one fill of an OAB medication in the 360 days prior to bariatric surgery. We evaluated all included patients and stratified by surgery type and patient sex. Segmented regression analyses were used to assess the proportion of patients on OAB medications before and after bariatric surgery. We replicated our findings using hip or knee replacement surgery as a negative control. RESULTS: Among the included patients (n = 3069), 92.2% were female and 58.6% underwent Roux-en-Y Gastric Bypass. Immediately following bariatric surgery, the proportion of patients treated with an OAB medication fell from 34.8% to 14.1% (p < 0.001), a 59.5% relative reduction. Patients who underwent Roux-en-Y Gastric Bypass vs. Sleeve Gastrectomy (63.8% vs. 55.1% relative reduction; p = 0.009) and females vs. males (62.3% vs. 52.9% relative reduction; p < 0.001) had more pronounced reductions in OAB medication use. There was a slight decrease in OAB medication use in the negative control analysis. CONCLUSIONS: A reduction in OAB medication use following bariatric surgery may be associated with a reduction in OAB symptoms, suggesting an additional benefit of bariatric surgery.


Subject(s)
Bariatric Surgery, Gastric Bypass, Obesity, Morbid, Urinary Bladder, Overactive, Aged, Female, Gastric Bypass/methods, Humans, Male, Obesity, Morbid/complications, Obesity, Morbid/surgery, Regression Analysis, Retrospective Studies, Treatment Outcome, Urinary Bladder, Overactive/complications, Urinary Bladder, Overactive/drug therapy
14.
Stat Med ; 41(16): 3102-3130, 2022 07 20.
Article in English | MEDLINE | ID: mdl-35522060

ABSTRACT

Since the release of Version 1.0 in 1998, Joinpoint software, developed for cancer trend analysis by a team at the US National Cancer Institute, has received considerable attention in the trend analysis community and has become one of the most widely used software packages for trend analysis. The paper published in Statistics in Medicine in 2000 (a previous study) describes the permutation test procedure to select the number of joinpoints, and Joinpoint Version 1.0 implemented the permutation procedure as the default model selection method and employed parametric methods for the asymptotic inference of the model parameters. Since then, various updates and extensions have been made to Joinpoint software. In this paper, we review basic features of Joinpoint, summarize important updates of Joinpoint software since its first release in 1998, and provide more information on two major enhancements. More specifically, these enhancements overcome prior limitations in both the accuracy and computational efficiency of previously used methods. The enhancements include: (i) data-driven model selection methods, which are generally more accurate under a broad range of data settings and more computationally efficient than the permutation test, and (ii) the use of the empirical quantile method for the construction of confidence intervals for the slope parameters and the location of the joinpoints, which generally provides more accurate coverage than the prior parametric methods. We show the impact of these changes in cancer trend analyses published by the US National Cancer Institute.


Subject(s)
Neoplasms, Data Collection, Humans, Regression Analysis, Research Design, Software
15.
BMC Med Res Methodol ; 22(1): 235, 2022 08 31.
Article in English | MEDLINE | ID: mdl-36045338

ABSTRACT

BACKGROUND: A classic methodology used in evaluating the impact of health policy interventions is interrupted time-series (ITS) analysis, applying a quasi-experimental design that uses both pre- and post-policy data without randomization. In this paper, we took a simulation-based approach to estimating intervention effects under different assumptions. METHODS: Each of the simulated mortality rates contained a linear time trend, seasonality, autoregressive, and moving-average terms. The simulations of the policy effects involved three scenarios: 1) immediate level change only, 2) immediate level and slope change, and 3) lagged level and slope change. The estimated effects and the biases of these effects were examined via three matched generalized additive mixed models, each of which used two different approaches: 1) effects based on estimated coefficients (estimated approach), and 2) effects based on predictions from models (predicted approach). The robustness of these two approaches was further investigated assuming misspecification of the models. RESULTS: When one simulated dataset was analyzed with the matched model, the two analytical approaches produced similar estimates. However, when the models were misspecified, the number of deaths prevented, estimated using the predicted vs. estimated approaches, differed substantially, with the predicted approach yielding estimates closer to the real effect. The discrepancy was larger when the policy was applied early in the time series. CONCLUSION: Even when the sample size appears to be large enough, one should still be cautious when conducting ITS analyses, since the power also depends on when in the series the intervention occurs. In addition, lagged intervention effects need to be fully considered at the study design stage (i.e., when developing the models).
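
A simulation in this spirit — trend, seasonality, ARMA noise, and an immediate level-and-slope effect (scenario 2) — can be generated in a few lines; all parameter values below are assumed for illustration:

```python
# Sketch: simulate a monthly mortality-like series with a linear trend,
# seasonality, ARMA(1,1) noise, and an immediate level-and-slope policy
# effect (scenario 2). All parameter values are assumed for illustration.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(5)
n, t0 = 120, 60                                # 10 years monthly, policy at month 60
t = np.arange(n)
season = 5 * np.sin(2 * np.pi * t / 12)
noise = ArmaProcess(ar=[1, -0.4], ma=[1, 0.3]).generate_sample(
    n, distrvs=rng.standard_normal)
post = (t >= t0).astype(float)
y = 100 + 0.1 * t + season - 6 * post - 0.2 * post * (t - t0) + 2 * noise

# A naive segmented OLS fit recovers the level change (~ -6) on average,
# though its standard errors ignore the autocorrelation and seasonality.
X = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))
print(sm.OLS(y, X).fit().params[2])
```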


Subject(s)
Health Policy, Research Design, Computer Simulation, Humans, Interrupted Time Series Analysis, Sample Size
16.
Paediatr Perinat Epidemiol ; 36(3): 329-336, 2022 05.
Article in English | MEDLINE | ID: mdl-34981845

ABSTRACT

BACKGROUND: Public health measures (PHM) designed to contain the spread of the COVID-19 pandemic have influenced the epidemiological characteristics of other viral infections. Their impact on acute RSV bronchiolitis in infants ≤24 months old has not been systematically studied in our setting. OBJECTIVES: To describe the monthly pattern of visits to the Paediatric Emergency Department (PED) by patients 0 to 14 years of age, the rate of patients diagnosed with acute RSV bronchiolitis per thousand inhabitants aged 0 to 24 months, and the rate of those requiring hospital admission during the winter of 2020-2021, in the context of local and national COVID-19 restrictions, and to compare them with the four previous seasons. METHODS: Interrupted time series analysis of patients attended in the PED and diagnosed with or admitted for acute RSV bronchiolitis in a tertiary University Hospital from January 2016 to February 2020 (pre-intervention period) and from March 2020 to June 2021 (post-intervention period). INTERVENTION: Preventive PHM implemented by the Spanish government, weighted by the Containment and Health Index of the Oxford COVID-19 Government Response Tracker. RESULTS: The intervention was followed by an immediate reduction in the rate of visits to the PED of -19.5 (95% confidence interval [CI] -24.0, -14.9) per thousand, and in the rates of diagnoses of and admissions for acute RSV bronchiolitis of -44.3 (95% CI -73.8, -14.8) and -1.4 (95% CI -2.7, -0.1) per thousand, respectively, with a delayed rebound. CONCLUSIONS: After the implementation of PHM to prevent the spread of SARS-CoV-2 infection, an immediate and important decline in visits to the PED was observed, with an upward change thereafter. There was also an initial reduction in diagnoses of and admissions for acute RSV bronchiolitis. An upward trend was observed six to nine months after the usual time of the winter RSV epidemic, coinciding with the relaxation of the preventive PHM.


Subject(s)
Bronchiolitis, COVID-19, Respiratory Syncytial Virus Infections, Respiratory Syncytial Virus, Human, Bronchiolitis/epidemiology, COVID-19/epidemiology, COVID-19/prevention & control, Child, Child, Preschool, Humans, Infant, Infant, Newborn, Interrupted Time Series Analysis, Pandemics/prevention & control, Public Health, Respiratory Syncytial Virus Infections/epidemiology, Respiratory Syncytial Virus Infections/prevention & control, SARS-CoV-2, Seasons
17.
Pharmacoepidemiol Drug Saf ; 31(5): 577-582, 2022 05.
Article in English | MEDLINE | ID: mdl-35049110

ABSTRACT

PURPOSE: The Saudi Food and Drug Authority (SFDA) added pregabalin to the list of controlled substances in December 2017 to minimize the risk of its possible abuse and misuse. This study aimed to assess the impact of this decision on the overall use of pregabalin in Saudi Arabia and in comparison with other drugs prescribed to treat neuropathic pain (i.e., gabapentin, tramadol, duloxetine, and amitriptyline). METHODS: This was an interrupted time-series analysis of the Saudi quarterly sales data for the study drugs from October 2015 to September 2020. These data were obtained from IQVIA and were converted into use estimates (defined daily doses per 1000 inhabitant-days [DDD/TID]). Segmented regression models were fitted to assess the direct (level) and prolonged (trend) changes in use after the decision. All analyses were completed using RStudio Version 1.4.1103. RESULTS: Before the SFDA's decision, there was an increasing quarter-to-quarter use of pregabalin (DDD/TID: 0.16; 95% confidence interval [CI] 0.04 to 0.28). Pregabalin overall use dropped sharply by -1.85 DDD/TID (95% CI -2.71 to -0.99) directly after the decision, with a prolonged quarter-to-quarter declining effect (DDD/TID: -0.22, 95% CI -0.37 to -0.05). The decision was associated with a direct increase in the use of gabapentin of 0.62 DDD/TID (95% CI 0.52-0.72), without any impact on the use of the other drugs. CONCLUSIONS: The results of our study showed that the SFDA decision was associated with a decrease in the overall use of pregabalin, which may help minimize the risk of its abuse and misuse.


Subject(s)
Neuralgia, Analgesics/therapeutic use, Gabapentin/therapeutic use, Humans, Interrupted Time Series Analysis, Neuralgia/drug therapy, Pregabalin/therapeutic use, Saudi Arabia
18.
Int J Cancer ; 148(7): 1598-1607, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33099777

ABSTRACT

Breast cancer incidence is increasing among Asian Indian and Pakistani women living in the United States. We examined the characteristics of breast cancer in Asian Indian and Pakistani American (AIPA) and non-Hispanic white (NHW) women using data from the Surveillance, Epidemiology, and End Results (SEER) program. Breast cancer incidence rates were estimated via segmented Poisson regression using data between 1990 and 2014 from SEER 9 registries, including New Jersey and California. Disease characteristics, treatment, and survival information between 2000 and 2016 for 4,900 AIPA and 482,250 NHW cases diagnosed after age 18 were obtained from SEER 18 registries and compared using descriptive analyses and multivariable competing-risk proportional hazards regression. Breast cancer incidence was lower in AIPA than in NHW women, increased with age, and the rate of increase declined after the age of 46 years. AIPA women were diagnosed at a significantly younger age (mean (SD) = 54.5 (13.3) years) than NHW women (mean (SD) = 62 (14) years, P < .0001) and were more likely than NHW cases (P < .0001) to have regional or distant stage, higher grade, estrogen receptor-negative, progesterone receptor-negative, triple-negative or human epidermal growth factor receptor 2-enriched tumors, subcutaneous or total mastectomy, and a lower cumulative incidence of death due to breast cancer (hazard ratio = 0.79, 95% CI: 0.72-0.86, P < .0001). AIPA cases had shorter median follow-up (52 months) than NHW cases (77 months). Breast cancer in AIPA women has unique characteristics that need to be further studied, along with a comprehensive evaluation of their follow-up patterns.


Subject(s)
Breast Neoplasms/epidemiology, Breast Neoplasms/mortality, Adult, Aged, Asian, Breast Neoplasms/pathology, California, Female, Humans, Incidence, India, Mastectomy, Middle Aged, Neoplasm Staging, New Jersey, Pakistan, Progesterone, Proportional Hazards Models, Receptor, ErbB-2/metabolism, Receptors, Estrogen/metabolism, Registries, Regression Analysis, Retrospective Studies, United States, White People
19.
Planta ; 253(2): 43, 2021 Jan 22.
Article in English | MEDLINE | ID: mdl-33479798

ABSTRACT

MAIN CONCLUSION: Root antioxidant defense, restricted root-to-shoot Cu translocation, altered nutrient partition, and leaf gas exchange adjustments occurred as tolerance mechanisms of soybean plants to increasing soil Cu levels. The intensive application of copper (Cu) fungicides has been related to the accumulation of this metal in agricultural soils. This study aimed to evaluate the effects of increasing soil Cu levels on soybean (Glycine max) plants. Soybean was cultivated under greenhouse conditions in soils containing different Cu concentrations (11.2, 52.3, 79.4, 133.5, 164.0, 205.1, or 243.8 mg kg⁻¹), and biochemical and morphophysiological plant responses were analyzed through linear and nonlinear regression models. Although Cu concentrations around 50 mg kg⁻¹ promoted some positive effects on the initial development of soybean plants (e.g., increased root length and dry weight), these Cu concentrations also induced root oxidative stress and activated defense mechanisms (such as the induction of antioxidant response, N and S accumulation in the roots). At higher concentrations, Cu led to growth inhibition (mainly of the root), nutritional imbalance, and damage to the photosynthetic apparatus of soybean plants, resulting in decreased CO₂ assimilation and stomatal conductance. In contrast, low translocation of Cu to the leaves, conservative water use, and increased carboxylation efficiency contributed to the partial mitigation of Cu-induced stress. These responses allowed soybean plants treated with Cu levels in the soil as high as 90 mg kg⁻¹ to maintain growth parameters higher than or similar to those of plants in the non-contaminated soil. These data provide a warning for the potentially deleterious consequences of the increasing use of Cu-based fungicides. However, it is necessary to verify how the responses to Cu contamination are affected by different types of soil and soybean cultivars.


Subject(s)
Copper, Glycine max, Models, Statistical, Soil Pollutants, Copper/toxicity, Environmental Pollutants/toxicity, Plant Leaves/drug effects, Plant Roots/drug effects, Regression Analysis, Soil/chemistry, Glycine max/drug effects
20.
BMC Med Res Methodol ; 21(1): 134, 2021 06 26.
Article in English | MEDLINE | ID: mdl-34174809

ABSTRACT

BACKGROUND: The interrupted time series (ITS) design is a quasi-experimental design commonly used in public health to evaluate the impact of interventions or exposures. Multiple statistical methods are available to analyse data from ITS studies, but no empirical investigation has examined how the different methods compare when applied to real-world datasets. METHODS: A random sample of 200 ITS studies identified in a previous methods review was included. Time series data from each of these studies were sought. Each dataset was re-analysed using six statistical methods. Point and confidence interval estimates for level and slope changes, standard errors, p-values, and estimates of autocorrelation were compared between methods. RESULTS: From the 200 ITS studies, comprising 230 time series, 190 datasets were obtained. We found that the choice of statistical method can importantly affect the level and slope change point estimates, their standard errors, the width of confidence intervals, and p-values. Statistical significance (categorised at the 5% level) often differed across the pairwise comparisons of methods, ranging from 4% to 25% disagreement. Estimates of autocorrelation differed depending on the method used and the length of the series. CONCLUSIONS: The choice of statistical method in ITS studies can lead to substantially different conclusions about the impact of the interruption. Pre-specification of the statistical method is encouraged, and naive conclusions based on statistical significance should be avoided.
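
The flavor of such comparisons can be reproduced on a single synthetic dataset: the level-change point estimate is identical for OLS with and without Newey-West (HAC) standard errors, yet the standard errors (and hence p-values) differ, and a GLS AR(1) fit can shift both. A sketch, not the review's actual analysis set:

```python
# Sketch: three method families often compared in ITS work -- plain OLS,
# OLS with Newey-West (HAC) standard errors, and GLS with AR(1) errors --
# applied to one synthetic dataset with autocorrelated noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, t0 = 60, 30
t = np.arange(n, dtype=float)
post = (t >= t0).astype(float)
e = np.zeros(n)
for i in range(1, n):                    # AR(1) errors with rho = 0.5
    e[i] = 0.5 * e[i - 1] + rng.normal(0, 1)
y = 10 + 0.2 * t - 3 * post + 0.1 * post * (t - t0) + e

X = sm.add_constant(np.column_stack([t, post, post * (t - t0)]))
ols = sm.OLS(y, X).fit()
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
for name, res in [("OLS", ols), ("HAC", hac), ("GLSAR", gls)]:
    print(f"{name}: level change = {res.params[2]:.2f}, SE = {res.bse[2]:.2f}")
```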


Subject(s)
Public Health, Research Design, Humans, Interrupted Time Series Analysis