Results 1-20 of 79
1.
BMC Med Inform Decis Mak ; 24(1): 51, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38355486

ABSTRACT

BACKGROUND: Diagnostic codes are commonly used as inputs for clinical prediction models, to create labels for prediction tasks, and to identify cohorts for multicenter network studies. However, the coverage rates of diagnostic codes and their variability across institutions are underexplored. The primary objective was to describe lab- and diagnosis-based labels for seven selected outcomes at three institutions. Secondary objectives were to describe the agreement, sensitivity, and specificity of diagnosis-based labels against lab-based labels. METHODS: This study included three cohorts: SickKids from The Hospital for Sick Children, and StanfordPeds and StanfordAdults from Stanford Medicine. We included seven clinical outcomes with lab-based definitions: acute kidney injury, hyperkalemia, hypoglycemia, hyponatremia, anemia, neutropenia, and thrombocytopenia. For each outcome, we created four lab-based labels (abnormal, mild, moderate, and severe) based on test results, and one diagnosis-based label. The proportion of admissions with a positive label is presented for each outcome, stratified by cohort. Using lab-based labels as the gold standard, agreement (Cohen's kappa), sensitivity, and specificity were calculated for each lab-based severity level. RESULTS: The numbers of admissions included were: SickKids (n = 59,298), StanfordPeds (n = 24,639), and StanfordAdults (n = 159,985). The proportion of admissions with a positive diagnosis-based label was significantly higher for StanfordPeds than for SickKids across all outcomes, with odds ratios (99.9% confidence intervals) for the abnormal diagnosis-based label ranging from 2.2 (1.7-2.7) for neutropenia to 18.4 (10.1-33.4) for hyperkalemia. Lab-based labels were more similar across institutions. When using lab-based labels as the gold standard, Cohen's kappa and sensitivity were lower at SickKids than at StanfordPeds for all severity levels. CONCLUSIONS: Across multiple outcomes, diagnosis codes were consistently different between the two pediatric institutions. This difference was not explained by differences in test results. These results may have implications for machine learning model development and deployment.


Subjects
Hyperkalemia, Neutropenia, Humans, Delivery of Health Care, Machine Learning, Sensitivity and Specificity
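
The agreement analysis described above reduces to standard contingency-table statistics once both labels are binary. A minimal sketch with simulated labels (the 80% concordance rate and sample size are arbitrary stand-ins, not the paper's data):

```python
# Sketch: agreement of a diagnosis-based label against a lab-based gold standard.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(0)
lab = rng.integers(0, 2, size=1000)                   # lab-based label (gold standard)
dx = np.where(rng.random(1000) < 0.8, lab, 1 - lab)   # diagnosis code, 80% concordant

kappa = cohen_kappa_score(lab, dx)
tn, fp, fn, tp = confusion_matrix(lab, dx).ravel()
sensitivity = tp / (tp + fn)   # P(code present | lab-defined outcome)
specificity = tn / (tn + fp)   # P(code absent | no lab-defined outcome)
print(f"kappa={kappa:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```
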
2.
Int J Obes (Lond) ; 45(11): 2347-2357, 2021 11.
Article in English | MEDLINE | ID: mdl-34267326

ABSTRACT

BACKGROUND: A detailed characterization of patients with COVID-19 living with obesity has not yet been undertaken. We aimed to describe and compare the demographics, medical conditions, and outcomes of COVID-19 patients living with obesity (PLWO) to those of patients living without obesity. METHODS: We conducted a cohort study based on outpatient/inpatient care and claims data from January to June 2020 from Spain, the UK, and the US. We used six databases standardized to the OMOP common data model. We defined two non-mutually exclusive cohorts of patients diagnosed and/or hospitalized with COVID-19; patients were followed from index date to 30 days or death. We report the frequency of demographics, prior medical conditions, and 30-day outcomes (hospitalization, events, and death) by obesity status. RESULTS: We included 627 044 (Spain: 122 058, UK: 2336, and US: 502 650) diagnosed and 160 013 (Spain: 18 197, US: 141 816) hospitalized patients with COVID-19. The prevalence of obesity was higher among hospitalized patients (39.9%; 95% CI: 39.8-40.0) than among those diagnosed with COVID-19 (33.1%; 95% CI: 33.0-33.2). In both cohorts, PLWO were more often female. Hospitalized PLWO were younger than patients without obesity. Overall, COVID-19 PLWO were more likely to have prior medical conditions, present with cardiovascular and respiratory events during hospitalization, or require intensive services compared to COVID-19 patients without obesity. CONCLUSION: We show that PLWO differ from patients without obesity in a wide range of medical conditions and present with more severe forms of COVID-19, with higher hospitalization rates and intensive services requirements. These findings can help guide preventive strategies against COVID-19 infection and its complications and generate hypotheses for causal inference studies.


Subjects
COVID-19/epidemiology, Obesity/epidemiology, Adolescent, Adult, Aged, COVID-19/mortality, Cohort Studies, Comorbidity, Female, Hospitalization, Humans, Male, Middle Aged, Prevalence, Risk Factors, Spain/epidemiology, United Kingdom/epidemiology, United States/epidemiology, Young Adult
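
The tight confidence intervals above follow from the very large denominators. A quick sketch of a prevalence estimate with a Wilson 95% CI; the numerator below is a hypothetical count chosen to reproduce the reported 39.9%, not the study's data:

```python
# Sketch: prevalence of obesity among hospitalized patients with a Wilson 95% CI.
from statsmodels.stats.proportion import proportion_confint

n_hospitalized = 160_013   # cohort size (from the abstract)
n_obese = 63_845           # hypothetical count of patients living with obesity

prevalence = n_obese / n_hospitalized
lo, hi = proportion_confint(n_obese, n_hospitalized, alpha=0.05, method="wilson")
print(f"prevalence = {prevalence:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```
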
3.
Rheumatology (Oxford) ; 60(SI): SI37-SI50, 2021 10 09.
Article in English | MEDLINE | ID: mdl-33725121

ABSTRACT

OBJECTIVE: Patients with autoimmune diseases were advised to shield to avoid coronavirus disease 2019 (COVID-19), but information on their prognosis is lacking. We characterized 30-day outcomes and mortality after hospitalization with COVID-19 among patients with prevalent autoimmune diseases, and compared outcomes after hospital admissions among similar patients with seasonal influenza. METHODS: A multinational network cohort study was conducted using electronic health record data from Columbia University Irving Medical Center (USA), Optum (USA), the Department of Veterans Affairs (USA), and the Information System for Research in Primary Care-Hospitalization Linked Data (Spain), and claims data from IQVIA Open Claims (USA) and Health Insurance Review and Assessment (South Korea). All patients with prevalent autoimmune diseases diagnosed and/or hospitalized with COVID-19 between January and June 2020, and similar patients hospitalized with influenza in 2017-18, were included. Outcomes were death and complications within 30 days of hospitalization. RESULTS: We studied 133 589 patients diagnosed and 48 418 hospitalized with COVID-19 with prevalent autoimmune diseases. Most patients were female and aged ≥50 years, with previous comorbidities. The prevalence of hypertension (45.5-93.2%), chronic kidney disease (14.0-52.7%), and heart disease (29.0-83.8%) was higher in hospitalized vs diagnosed patients with COVID-19. Compared with 70 660 patients hospitalized with influenza, those admitted with COVID-19 had more respiratory complications, including pneumonia and acute respiratory distress syndrome, and higher 30-day mortality (influenza 2.2-4.3% vs COVID-19 6.32-24.6%). CONCLUSION: Compared with influenza, COVID-19 is a more severe disease, leading to more complications and higher mortality.


Subjects
Autoimmune Diseases/mortality, Autoimmune Diseases/virology, COVID-19/mortality, Hospitalization/statistics & numerical data, Influenza, Human/mortality, Adult, Aged, Aged, 80 and over, COVID-19/immunology, Cohort Studies, Female, Humans, Influenza, Human/immunology, Male, Middle Aged, Prevalence, Prognosis, Republic of Korea/epidemiology, SARS-CoV-2, Spain/epidemiology, United States/epidemiology, Young Adult
4.
Sensors (Basel) ; 21(6)2021 Mar 10.
Article in English | MEDLINE | ID: mdl-33801798

ABSTRACT

Neuronal damage secondary to traumatic brain injury (TBI) is a rapidly evolving condition, which requires therapeutic decisions based on the timely identification of clinical deterioration. Changes in S100B biomarker levels are associated with TBI severity and patient outcome. Quantifying S100B is often difficult, since standard immunoassays are time-consuming, costly, and require extensive expertise. A zero-length cross-linking approach on a cysteamine self-assembled monolayer (SAM) was performed to immobilize anti-S100B monoclonal antibodies onto both planar (AuEs) and interdigitated (AuIDEs) gold electrodes via carbonyl bonds. Surface characterization was performed by atomic force microscopy (AFM) and specular-reflectance FTIR for each functionalization step. Biosensor response was studied using the change in charge-transfer resistance (Rct) from electrochemical impedance spectroscopy (EIS) in potassium ferrocyanide, with [S100B] ranging from 10 to 1000 pg/mL. A single-frequency analysis for capacitances was also performed in AuIDEs. Full factorial designs were applied to assess biosensor sensitivity, specificity, and limit of detection (LOD). Higher Rct values were found with increased S100B concentration on both platforms. LODs were 18 pg/mL (AuEs) and 6 pg/mL (AuIDEs). AuIDEs provide a simpler manufacturing protocol, with reduced fabrication time and possibly lower costs, simpler electrochemical response analysis, and support single-frequency monitoring of capacitance changes related to S100B levels.
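
Limits of detection like those reported are commonly estimated from a calibration curve using the 3.3*sigma/slope convention. That convention is an assumption here (the paper derives its LODs via full factorial designs), and the Rct readings below are made up:

```python
# Sketch: limit of detection from a linear Rct-vs-concentration calibration.
import numpy as np

conc = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])   # [S100B], pg/mL
rct = np.array([11.0, 31.0, 55.0, 252.0, 505.0])      # delta Rct, ohms (made up)

slope, intercept = np.polyfit(conc, rct, 1)           # calibration line
residuals = rct - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                         # SD of residuals (2 fitted params)
lod = 3.3 * sigma / slope                             # ICH-style 3.3*sigma/slope estimate
print(f"slope = {slope:.3f} ohm per pg/mL, LOD ~ {lod:.1f} pg/mL")
```
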

5.
Clin Transplant ; 34(1): e13767, 2020 01.
Article in English | MEDLINE | ID: mdl-31815310

ABSTRACT

Tacrolimus is the cornerstone of immunosuppressive therapy after kidney transplantation. Its narrow therapeutic window mandates strict monitoring of serum levels and dose adjustments to ensure the optimal risk-benefit balance. This observational retrospective study analyzed the effectiveness and safety of conversion from twice-daily immediate-release tacrolimus (IR-Tac) or once-daily prolonged-release tacrolimus (PR-Tac) to the newer once-daily MeltDose® extended-release tacrolimus formulation (LCP-Tac) in 365 stable kidney transplant recipients. We compared kidney function three months before and three months after the conversion. Three months after conversion, the total daily dose was reduced by ~35% (P < .0001), and improved bioavailability and stable serum LCP-Tac concentrations were observed. There was no increase in the number of patients requiring tacrolimus dose adjustments after conversion. Renal function was unaltered, and no cases of biopsy-proven acute rejection (BPAR) were reported. Reports of tremors, as collected in the clinical histories for each patient, decreased from pre-conversion (20.8%) to post-conversion (11.8%, P < .0001). LCP-Tac generated a cost reduction of 63% compared with PR-Tac. In conclusion, the conversion strategy to LCP-Tac from other tacrolimus formulations in stable kidney transplant patients was safe and effective in a real-world setting, confirming data from randomized controlled trials (RCTs). The specific pharmacokinetic properties of LCP-Tac could be potentially advantageous in patients with tacrolimus-related adverse events.


Subjects
Kidney Transplantation, Tacrolimus, Delayed-Action Preparations, Drug Administration Schedule, Humans, Immunosuppressive Agents/therapeutic use, Prospective Studies, Retrospective Studies
6.
J Biomed Inform ; 75S: S94-S104, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28571784

ABSTRACT

In response to the challenges set forth by the CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing, we describe a framework to automatically classify initial psychiatric evaluation records into one of four positive valence system severities: absent, mild, moderate, or severe. We used a dataset provided by the event organizers to develop a framework comprising natural language processing (NLP) modules and three predictive models (two decision tree models and one Bayesian network model) used in the competition. We also developed two additional predictive models for comparison purposes. To evaluate our framework, we employed a blind test dataset provided by the 2016 CEGS N-GRID. The predictive scores, measured by the macro-averaged inverse normalized mean absolute error score, from the two decision tree models and the Naïve Bayes model were 82.56%, 82.18%, and 80.56%, respectively. The proposed framework can potentially be applied to other predictive tasks for processing initial psychiatric evaluation records, such as predicting 30-day psychiatric readmissions.


Subjects
Models, Psychological, Bayes Theorem, Humans, Natural Language Processing, Severity of Illness Index
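
The scoring metric named above rewards small ordinal errors. Below is a sketch under the assumption that severities are coded 0-3 and that each true class's mean absolute error is normalized by the largest error possible for that class; this reading of the metric is an assumption for illustration, not the shared task's official definition:

```python
# Sketch: macro-averaged inverse normalized MAE for ordinal severity predictions.
import numpy as np

def macro_inverse_normalized_mae(y_true, y_pred, n_classes=4):
    """Average over true classes of 1 - MAE / worst-case error for that class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for c in range(n_classes):
        mask = y_true == c
        if not mask.any():
            continue                              # skip classes absent from the data
        worst = max(c, n_classes - 1 - c)         # largest possible |error| for class c
        scores.append(1.0 - np.abs(y_pred[mask] - c).mean() / worst)
    return float(np.mean(scores))

# 0=absent, 1=mild, 2=moderate, 3=severe
print(macro_inverse_normalized_mae([0, 1, 2, 3, 3], [0, 2, 2, 3, 2]))  # ~0.83
```
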
7.
J Am Med Inform Assoc ; 31(4): 949-957, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38244997

ABSTRACT

OBJECTIVE: To measure pediatrician adherence to evidence-based guidelines in the treatment of young children with attention-deficit/hyperactivity disorder (ADHD) in a diverse healthcare system using natural language processing (NLP) techniques. MATERIALS AND METHODS: We extracted structured and free-text data from electronic health records (EHRs) of all office visits (2015-2019) of children aged 4-6 years in a community-based primary healthcare network in California who had ≥1 visit with an ICD-10 diagnosis of ADHD. Two pediatricians annotated clinical notes of the first ADHD visit for 423 patients. Inter-annotator agreement (IAA) was assessed for the recommendation of first-line behavioral treatment (F-measure = 0.89). Four pre-trained language models, including BioClinical Bidirectional Encoder Representations from Transformers (BioClinicalBERT), were used to identify behavioral treatment recommendations using a 70/30 train/test split. For temporal validation, we deployed BioClinicalBERT on 1,020 unannotated notes from other ADHD visits and well-care visits; all positively classified notes (n = 53) and 5% of negatively classified notes (n = 50) were manually reviewed. RESULTS: Of 423 patients, 313 (74%) were male; 298 (70%) were privately insured; 138 (33%) were White; 61 (14%) were Hispanic. The BioClinicalBERT model trained on the first ADHD visits achieved F1 = 0.76, precision = 0.81, recall = 0.72, and AUC = 0.81 [0.72-0.89]. Temporal validation achieved F1 = 0.77, precision = 0.68, and recall = 0.88. Fairness analysis revealed lower model performance in publicly insured patients (F1 = 0.53). CONCLUSION: Deploying pre-trained language models on a variable set of clinical notes accurately captured pediatrician adherence to guidelines in the treatment of children with ADHD. Validating this approach in other patient populations is needed to achieve equitable measurement of quality of care at scale and improve clinical care for mental health conditions.


Subjects
Attention Deficit Disorder with Hyperactivity, Child, Humans, Male, Child, Preschool, Female, Attention Deficit Disorder with Hyperactivity/diagnosis, Attention Deficit Disorder with Hyperactivity/drug therapy, Hispanic or Latino, Guideline Adherence, Pediatricians, Natural Language Processing
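
Fine-tuning a clinical BERT variant for this kind of binary note classification follows the standard Hugging Face sequence-classification pattern. A minimal sketch using the public emilyalsentzer/Bio_ClinicalBERT checkpoint; the notes, labels, and hyperparameters are placeholders, not the paper's setup:

```python
# Sketch: fine-tuning Bio_ClinicalBERT to flag behavioral-treatment recommendations.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

notes = ["Discussed parent behavior management training at length.",
         "Started stimulant medication today; follow up in 1 month."]
labels = [1, 0]                              # 1 = behavioral treatment recommended
enc = tokenizer(notes, truncation=True, padding=True, max_length=512,
                return_tensors="pt")

class NoteDataset(torch.utils.data.Dataset):  # wraps the tokenized tensors
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in enc.items()},
                "labels": torch.tensor(labels[i])}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clinbert-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=NoteDataset(),
)
trainer.train()
```
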
8.
NPJ Digit Med ; 7(1): 171, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937550

ABSTRACT

Foundation models are transforming artificial intelligence (AI) in healthcare by providing modular components adaptable to various downstream tasks, making AI development more scalable and cost-effective. Foundation models for structured electronic health records (EHR), trained on coded medical records from millions of patients, have demonstrated benefits including increased performance with fewer training labels and improved robustness to distribution shifts. However, questions remain about the feasibility of sharing these models across hospitals and their performance on local tasks. This multi-center study examined the adaptability of a publicly accessible structured EHR foundation model (FMSM), trained on 2.57 M patient records from Stanford Medicine. Experiments used EHR data from The Hospital for Sick Children (SickKids) and the Medical Information Mart for Intensive Care (MIMIC-IV). We assessed adaptability via continued pretraining on local data, and task adaptability against baselines of locally training models from scratch, including a local foundation model. Evaluations on 8 clinical prediction tasks showed that adapting the off-the-shelf FMSM matched the performance of gradient boosting machines (GBM) locally trained on all data, while providing a 13% improvement in settings with few task-specific training labels. With continued pretraining on local data, FMSM required fewer than 1% of training examples to match the fully trained GBM's performance, and was 60 to 90% more sample-efficient than training local foundation models from scratch. Our findings demonstrate that adapting EHR foundation models across hospitals provides improved prediction performance at lower cost, underscoring the utility of base foundation models as modular components that streamline the development of healthcare AI.

9.
Sci Rep ; 13(1): 3767, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36882576

ABSTRACT

Temporal distribution shift negatively impacts the performance of clinical prediction models over time. Pretraining foundation models using self-supervised learning on electronic health records (EHR) may be effective in acquiring informative global patterns that can improve the robustness of task-specific models. The objective was to evaluate the utility of EHR foundation models in improving the in-distribution (ID) and out-of-distribution (OOD) performance of clinical prediction models. Transformer- and gated recurrent unit-based foundation models were pretrained on the EHR of up to 1.8 M patients (382 M coded events) collected within pre-determined year groups (e.g., 2009-2012) and were subsequently used to construct patient representations for patients admitted to inpatient units. These representations were used to train logistic regression models to predict hospital mortality, long length of stay, 30-day readmission, and ICU admission. We compared our EHR foundation models with baseline logistic regression models learned on count-based representations (count-LR) in ID and OOD year groups. Performance was measured using the area-under-the-receiver-operating-characteristic curve (AUROC), area-under-the-precision-recall curve, and absolute calibration error. Both transformer- and recurrent-unit-based foundation models generally showed better ID and OOD discrimination relative to count-LR and often exhibited less decay on tasks where there was observable degradation of discrimination performance (average AUROC decay of 3% for the transformer-based foundation model vs. 7% for count-LR after 5-9 years). In addition, the performance and robustness of transformer-based foundation models continued to improve as pretraining set size increased. These results suggest that pretraining EHR foundation models at scale is a useful approach for developing clinical prediction models that perform well in the presence of temporal distribution shift.


Subjects
Electric Power Supplies, Electronic Health Records, Humans, Hospital Mortality, Hospitalization
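
The evaluation pattern above (frozen pretrained representations feeding a logistic regression head, with discrimination compared between ID and OOD year groups) can be sketched as follows. The embed() helper and the simulated drift are stand-ins for a real pretrained encoder and real temporal shift:

```python
# Sketch: foundation-model representations -> logistic regression head,
# with AUROC compared between ID and OOD year groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def embed(n, shift=0.0):
    """Stand-in for pretrained patient representations."""
    X = rng.normal(shift, 1.0, size=(n, 64))
    y = (X[:, :4].sum(axis=1) + rng.normal(0, 1.0, n) > 0).astype(int)
    return X, y

X_id, y_id = embed(5000)                 # e.g., admissions from the pretraining era
X_ood, y_ood = embed(2000, shift=0.3)    # e.g., a later year group with drift

head = LogisticRegression(max_iter=1000).fit(X_id, y_id)
auroc_id = roc_auc_score(y_id, head.predict_proba(X_id)[:, 1])
auroc_ood = roc_auc_score(y_ood, head.predict_proba(X_ood)[:, 1])
print(f"ID AUROC={auroc_id:.3f}, OOD AUROC={auroc_ood:.3f}, "
      f"decay={auroc_id - auroc_ood:.3f}")
```
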
10.
Methods Inf Med ; 62(1-02): 60-70, 2023 05.
Article in English | MEDLINE | ID: mdl-36812932

ABSTRACT

BACKGROUND: Temporal dataset shift can cause degradation in model performance as discrepancies between training and deployment data grow over time. The primary objective was to determine whether parsimonious models produced by specific feature selection methods are more robust to temporal dataset shift, as measured by out-of-distribution (OOD) performance, while maintaining in-distribution (ID) performance. METHODS: Our dataset consisted of intensive care unit patients from MIMIC-IV categorized by year groups (2008-2010, 2011-2013, 2014-2016, and 2017-2019). We trained baseline models using L2-regularized logistic regression on 2008-2010 to predict in-hospital mortality, long length of stay (LOS), sepsis, and invasive ventilation in all year groups. We evaluated three feature selection methods: L1-regularized logistic regression (L1), Remove and Retrain (ROAR), and causal feature selection. We assessed whether a feature selection method could maintain ID performance (2008-2010) and improve OOD performance (2017-2019). We also assessed whether parsimonious models retrained on OOD data performed as well as oracle models trained on all features in the OOD year group. RESULTS: The baseline model showed significantly worse OOD performance on the long LOS and sepsis tasks compared with its ID performance. L1 and ROAR retained 3.7 to 12.6% of all features, whereas causal feature selection generally retained fewer features. Models produced by L1 and ROAR exhibited similar ID and OOD performance to the baseline models. Retraining these models on 2017-2019 data, using features selected from training on 2008-2010 data, generally reached parity with oracle models trained directly on 2017-2019 data using all available features. Causal feature selection led to heterogeneous results, with the superset of causal features maintaining ID performance while improving OOD calibration only on the long LOS task. CONCLUSIONS: While model retraining can mitigate the impact of temporal dataset shift on parsimonious models produced by L1 and ROAR, new methods are required to proactively improve temporal robustness.


Subjects
Clinical Medicine, Sepsis, Female, Pregnancy, Humans, Hospital Mortality, Length of Stay, Machine Learning
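
A sketch of the L1 arm of the comparison: fit an L1-regularized logistic regression to select features on the earliest year group, then refit a parsimonious L2-regularized model on the retained subset. The synthetic data and the regularization strength C are arbitrary illustrative choices, not the study's settings:

```python
# Sketch: L1-based feature selection on the earliest year group.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 200))   # features, e.g., 2008-2010 admissions
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1.0, 3000) > 0).astype(int)

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
selector = SelectFromModel(l1, prefit=True)
X_small = selector.transform(X)
print(f"retained {X_small.shape[1]} of {X.shape[1]} features "
      f"({100 * X_small.shape[1] / X.shape[1]:.1f}%)")

# Refit the parsimonious model (default L2 penalty) on the selected subset.
parsimonious = LogisticRegression(max_iter=1000).fit(X_small, y)
```
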
11.
J Am Med Inform Assoc ; 30(12): 2004-2011, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37639620

ABSTRACT

OBJECTIVE: Development of electronic health record (EHR)-based machine learning models for pediatric inpatients is challenged by limited training data. Self-supervised learning using adult data may be a promising approach to creating robust pediatric prediction models. The primary objective was to determine whether a self-supervised model trained in adult inpatients was noninferior to logistic regression models trained in pediatric inpatients for pediatric inpatient clinical prediction tasks. MATERIALS AND METHODS: This retrospective cohort study used EHR data and included patients with at least one admission to an inpatient unit. One admission per patient was randomly selected. Adult inpatients were 18 years or older, while pediatric inpatients were more than 28 days old and less than 18 years old. Admissions were temporally split into training (January 1, 2008 to December 31, 2019), validation (January 1, 2020 to December 31, 2020), and test (January 1, 2021 to August 1, 2022) sets. The primary comparison was a self-supervised model trained in adult inpatients versus count-based logistic regression models trained in pediatric inpatients. The primary outcome was mean area-under-the-receiver-operating-characteristic-curve (AUROC) across 11 distinct clinical outcomes. Models were evaluated in pediatric inpatients. RESULTS: When evaluated in pediatric inpatients, the mean AUROC of the self-supervised model trained in adult inpatients (0.902) was noninferior to that of count-based logistic regression models trained in pediatric inpatients (0.868) (mean difference = 0.034, 95% CI = 0.014-0.057; P < .001 for noninferiority and P = .006 for superiority). CONCLUSIONS: Self-supervised learning in adult inpatients was noninferior to logistic regression models trained in pediatric inpatients. This finding suggests transferability of self-supervised models trained in adult patients to pediatric patients, without requiring costly model retraining.


Subjects
Inpatients, Machine Learning, Humans, Adult, Child, Retrospective Studies, Supervised Machine Learning, Electronic Health Records
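
One common way to run this kind of noninferiority check is to bootstrap the AUROC difference on the shared test set and compare the lower CI bound to a prespecified margin. A sketch with simulated scores; the margin and data are hypothetical, and the paper's exact statistical procedure may differ:

```python
# Sketch: bootstrap CI for an AUROC difference, checked against a margin.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                                    # pediatric test labels
p_ssl = np.clip(0.6 * y + rng.normal(0.20, 0.25, n), 0, 1)   # adult self-supervised model
p_lr = np.clip(0.5 * y + rng.normal(0.25, 0.28, n), 0, 1)    # pediatric count-based LR

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                              # resample patients
    if np.unique(y[idx]).size < 2:
        continue
    diffs.append(roc_auc_score(y[idx], p_ssl[idx]) - roc_auc_score(y[idx], p_lr[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
margin = -0.05                                               # hypothetical margin
print(f"AUROC difference 95% CI: ({lo:.3f}, {hi:.3f}); noninferior: {lo > margin}")
```
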
12.
JAMIA Open ; 6(3): ooad054, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37545984

ABSTRACT

Objective: To describe the infrastructure, tools, and services developed at Stanford Medicine to maintain its data science ecosystem and research patient data repository for clinical and translational research. Materials and Methods: The data science ecosystem, dubbed the Stanford Data Science Resources (SDSR), includes infrastructure and tools to create, search, retrieve, and analyze patient data, as well as services for data deidentification, linkage, and processing to extract high-value information from healthcare IT systems. Data are made available via self-service and concierge access, on HIPAA-compliant secure computing infrastructure supported by in-depth user training. Results: The Stanford Medicine Research Data Repository (STARR) functions as the SDSR data integration point and includes electronic medical records, clinical images, text, bedside monitoring data, and HL7 messages. SDSR tools include electronic phenotyping and cohort-building tools and a search engine for patient timelines. The SDSR supports patient data collection, reproducible research, and teaching with healthcare data, and facilitates industry collaborations and large-scale observational studies. Discussion: Research patient data repositories and their underlying data science infrastructure are essential to realizing a learning health system and advancing the mission of academic medical centers. Challenges to maintaining the SDSR include ensuring sufficient financial support while providing researchers and clinicians with maximal access to data and digital infrastructure, balancing tool development with user training, and supporting the diverse needs of users. Conclusion: Our experience maintaining the SDSR offers a case study for academic medical centers developing data science and research informatics infrastructure.

13.
JAMA Netw Open ; 6(9): e2333495, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37725377

ABSTRACT

Importance: Ranitidine, the most widely used histamine-2 receptor antagonist (H2RA), was withdrawn in 2020 because of N-nitrosodimethylamine impurity. Given the worldwide exposure to this drug, the potential risk of cancer development associated with the intake of known carcinogens is an important epidemiological concern. Objective: To examine the comparative risk of cancer associated with the use of ranitidine vs other H2RAs. Design, Setting, and Participants: This new-user active comparator international network cohort study was conducted using 3 health claims and 9 electronic health record databases from the US, the United Kingdom, Germany, Spain, France, South Korea, and Taiwan. Large-scale propensity score (PS) matching was used to minimize confounding by observed covariates, and empirical calibration with negative control outcomes was performed to account for unobserved confounding. All databases were mapped to a common data model. Database-specific estimates were combined using random-effects meta-analysis. Participants included individuals aged at least 20 years with no history of cancer who used H2RAs for more than 30 days from January 1986 to December 2020, with a 1-year washout period. Data were analyzed from April to September 2021. Exposure: The main exposure was use of ranitidine vs other H2RAs (famotidine, lafutidine, nizatidine, and roxatidine). Main Outcomes and Measures: The primary outcome was incidence of any cancer, except nonmelanoma skin cancer. Secondary outcomes included all cancer except thyroid cancer, 16 cancer subtypes, and all-cause mortality. Results: Among 1 183 999 individuals in 11 databases, 909 168 individuals (mean age, 56.1 years; 507 316 [55.8%] women) were identified as new users of ranitidine, and 274 831 individuals (mean age, 58.0 years; 145 935 [53.1%] women) were identified as new users of other H2RAs. Crude incidence rates of cancer were 14.30 events per 1000 person-years (PYs) in ranitidine users and 15.03 events per 1000 PYs among other H2RA users. After PS matching, cancer risk was similar in ranitidine users compared with other H2RA users (incidence, 15.92 events per 1000 PYs vs 15.65 events per 1000 PYs; calibrated meta-analytic hazard ratio, 1.04; 95% CI, 0.97-1.12). No significant associations were found between ranitidine use and any secondary outcomes after calibration. Conclusions and Relevance: In this cohort study, ranitidine use was not associated with an increased risk of cancer compared with the use of other H2RAs. Further research is needed on the long-term association of ranitidine with cancer development.


Subjects
Skin Neoplasms, Thyroid Neoplasms, Female, Humans, Middle Aged, Male, Ranitidine/adverse effects, Cohort Studies, Histamine H2 Antagonists/adverse effects
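
A toy sketch of 1:1 nearest-neighbour propensity-score matching with a caliper. Real OHDSI analyses fit large-scale PS models over thousands of covariates; the two covariates, caliper value, and matching-with-replacement shortcut here are illustrative assumptions:

```python
# Sketch: 1:1 propensity-score matching of ranitidine users to other-H2RA users.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([rng.normal(56, 12, n), rng.integers(0, 2, n)])  # age, sex
treated = rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 56) / 20))     # ranitidine

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]     # propensity score
controls = ps[~treated].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(controls)
dist, _ = nn.kneighbors(ps[treated].reshape(-1, 1))                  # with replacement

caliper = 0.01   # accept only matches whose PS differs by less than the caliper
print(f"matched {(dist.ravel() < caliper).sum()} of {treated.sum()} treated patients")
```
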
14.
EClinicalMedicine ; 58: 101932, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37034358

ABSTRACT

Background: Adverse events of special interest (AESIs) were pre-specified to be monitored for the COVID-19 vaccines. Some AESIs are associated not only with the vaccines, but with COVID-19 itself. Our aim was to characterise the incidence rates of AESIs following SARS-CoV-2 infection in patients and compare these to historical rates in the general population. Methods: A multi-national cohort study with data from primary care, electronic health records, and insurance claims mapped to a common data model. Data were collected between Jan 1, 2017 and the end of each database's coverage (which ranged from Jul 2020 to May 2022). The 16 pre-specified AESIs were: acute myocardial infarction, anaphylaxis, appendicitis, Bell's palsy, deep vein thrombosis, disseminated intravascular coagulation, encephalomyelitis, Guillain-Barré syndrome, haemorrhagic stroke, non-haemorrhagic stroke, immune thrombocytopenia, myocarditis/pericarditis, narcolepsy, pulmonary embolism, transverse myelitis, and thrombosis with thrombocytopenia. Age-sex standardised incidence rate ratios (SIRs) were estimated to compare post-COVID-19 rates to pre-pandemic rates in each of the databases. Findings: Substantial heterogeneity by age was seen in AESI rates, with some clearly increasing with age and others following the opposite trend. Differences were also observed across databases for the same health outcome and age-sex strata. All studied AESIs appeared consistently more common in the post-COVID-19 than in the historical cohorts, with meta-analytic SIRs ranging from 1.32 (1.05 to 1.66) for narcolepsy to 11.70 (10.10 to 13.70) for pulmonary embolism. Interpretation: Our findings suggest all AESIs are more common after COVID-19 than in the general population. Thromboembolic events were particularly common, over 10-fold more so. More research is needed to contextualise post-COVID-19 complications in the longer term. Funding: None.
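
An indirectly standardised incidence rate ratio of the kind reported above compares observed post-COVID-19 events with the events expected if pre-pandemic stratum-specific rates applied to the post-COVID-19 person-time. A minimal sketch with made-up counts and person-years:

```python
# Sketch: indirectly standardised incidence rate ratio (observed / expected).
import pandas as pd

strata = pd.DataFrame({
    "stratum":     ["F <65", "F 65+", "M <65", "M 65+"],
    "rate_pre":    [0.8, 3.1, 1.0, 4.2],            # events per 1000 PY, pre-pandemic
    "py_post":     [12_000, 5_000, 11_000, 4_500],  # post-COVID-19 person-years
    "events_post": [14, 30, 17, 38],                # observed post-COVID-19 events
})

expected = (strata["rate_pre"] / 1000 * strata["py_post"]).sum()
observed = strata["events_post"].sum()
print(f"SIR = {observed / expected:.2f} (observed {observed}, expected {expected:.1f})")
```
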

15.
BMJ Med ; 2(1): e000651, 2023.
Article in English | MEDLINE | ID: mdl-37829182

ABSTRACT

Objective: To assess the uptake of second line antihyperglycaemic drugs among patients with type 2 diabetes mellitus who are receiving metformin. Design: Federated pharmacoepidemiological evaluation in LEGEND-T2DM. Setting: 10 US and seven non-US electronic health record and administrative claims databases in the Observational Health Data Sciences and Informatics network, across eight countries from 2011 to the end of 2021. Participants: 4.8 million patients (≥18 years) across US and non-US databases with type 2 diabetes mellitus who had received metformin monotherapy and had initiated second line treatments. Exposure: The exposure was calendar time, evaluated as year-by-year trends over the study years available in each database. Main outcome measures: The outcome was the incidence of second line antihyperglycaemic drug use (ie, glucagon-like peptide-1 receptor agonists, sodium-glucose cotransporter-2 inhibitors, dipeptidyl peptidase-4 inhibitors, and sulfonylureas) among individuals who were already receiving treatment with metformin. The relative drug class level uptake across cardiovascular risk groups was also evaluated. Results: 4.6 million patients were identified in US databases, 61 382 from Spain, 32 442 from Germany, 25 173 from the UK, 13 270 from France, 5580 from Scotland, 4614 from Hong Kong, and 2322 from Australia. During 2011-21, the combined proportional initiation of the cardioprotective antihyperglycaemic drugs (glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors) increased across all data sources, with the combined initiation of these drugs as second line drugs in 2021 ranging from 35.2% to 68.2% in the US databases, and reaching 15.4% in France, 34.7% in Spain, 50.1% in Germany, and 54.8% in Scotland. From 2016 to 2021, in some US and non-US databases, uptake of glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors increased more among populations with no cardiovascular disease than among patients with established cardiovascular disease. No data source provided evidence of a greater increase in the uptake of these two drug classes in populations with cardiovascular disease compared with no cardiovascular disease. Conclusions: Despite the increase in overall uptake of cardioprotective antihyperglycaemic drugs as second line treatments for type 2 diabetes mellitus, their uptake was lower in patients with cardiovascular disease than in people with no cardiovascular disease over the past decade. A strategy is needed to ensure that medication use is concordant with guideline recommendations, to improve outcomes of patients with type 2 diabetes mellitus.

16.
Nephrol Dial Transplant ; 27(1): 417-22, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21622985

ABSTRACT

BACKGROUND: The beneficial effect of angiotensin-converting enzyme inhibitors (ACEI) or angiotensin II receptor blockers (ARB) in kidney transplant recipients on modern immunosuppression is not yet well established. Our objective was to investigate the impact of ACEI/ARB use on patient and graft survival in a cohort of kidney transplant recipients. METHODS: A total of 990 patients who received a single deceased donor kidney at our institution between 1996 and 2005 were included in this longitudinal cohort study. All-cause mortality and death-censored graft loss were the primary outcomes. We used a traditional time-dependent Cox model (unweighted) and inverse-probability-of-treatment-weighted marginal structural models (weighted Cox models), controlling for time-dependent confounding by indication. RESULTS: A total of 414 patients (42%) received ACEI/ARB during the study period (median duration 14 months, interquartile range 6-40 months). ACEI/ARB use was associated with a reduced risk of mortality in both the crude [hazard ratio (HR) 0.627, 95% confidence interval (CI) 0.412-0.953] and adjusted Cox analyses (HR 0.626, 95% CI 0.407-0.963). Similar results were observed after adjusting for confounding by indication (HR 0.629, 95% CI 0.407-0.973). By contrast, ACEI/ARB use was not associated with significant improvement in graft survival after kidney transplantation. CONCLUSION: ACEI/ARB prescription may be beneficial for reducing mortality in kidney transplant recipients, but its use was not associated with longer graft survival.


Subjects
Angiotensin II Type 1 Receptor Blockers/therapeutic use, Angiotensin-Converting Enzyme Inhibitors/therapeutic use, Graft Rejection/prevention & control, Graft Survival/drug effects, Kidney Failure, Chronic/mortality, Kidney Transplantation/mortality, Renin-Angiotensin System/drug effects, Adult, Female, Follow-Up Studies, Humans, Kidney Failure, Chronic/diagnosis, Kidney Failure, Chronic/therapy, Longitudinal Studies, Male, Middle Aged, Prognosis, Prospective Studies, Survival Rate
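
A simplified sketch of the weighted analysis: fit a treatment model, build stabilized inverse-probability-of-treatment weights, and fit a weighted Cox model with the lifelines library. The paper's models are time-dependent; this point-treatment version, on synthetic data, only illustrates the weighting idea:

```python
# Sketch: stabilized IPTW weights feeding a weighted Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 990
df = pd.DataFrame({"age": rng.normal(50, 12, n), "diabetes": rng.integers(0, 2, n)})
p_rx = 1 / (1 + np.exp(-(-1 + 0.02 * (df["age"] - 50) + 0.5 * df["diabetes"])))
df["acei_arb"] = (rng.random(n) < p_rx).astype(int)            # treatment by indication
df["months"] = rng.exponential(100 * (1 + 0.5 * df["acei_arb"]))
df["death"] = (rng.random(n) < 0.4).astype(int)

ps = LogisticRegression().fit(df[["age", "diabetes"]], df["acei_arb"]) \
        .predict_proba(df[["age", "diabetes"]])[:, 1]          # propensity of treatment
p_marg = df["acei_arb"].mean()
df["iptw"] = np.where(df["acei_arb"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps))

cph = CoxPHFitter()
cph.fit(df[["months", "death", "acei_arb", "iptw"]], duration_col="months",
        event_col="death", weights_col="iptw", robust=True)
print(cph.hazard_ratios_)                                      # HR for acei_arb
```
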
17.
AMIA Annu Symp Proc ; 2022: 221-230, 2022.
Article in English | MEDLINE | ID: mdl-37128416

ABSTRACT

Patients diagnosed with systemic lupus erythematosus (SLE) suffer from a decreased quality of life, an increased risk of medical complications, and an increased risk of death. In particular, approximately 50% of SLE patients progress to develop lupus nephritis, which often leads to life-threatening end stage renal disease (ESRD) and requires dialysis or kidney transplantation. The challenge is that lupus nephritis is diagnosed via a kidney biopsy, which is typically performed only after a noticeable decline in kidney function, leaving little room for proactive or preventative measures. The ability to predict which patients are most likely to develop lupus nephritis has the potential to shift lupus nephritis disease management from reactive to proactive. We present a clinically useful prediction model that identifies which patients with newly diagnosed SLE will go on to develop lupus nephritis in the next five years.


Subjects
Lupus Erythematosus, Systemic, Lupus Nephritis, Preventive Medicine, Humans, Kidney Failure, Chronic/etiology, Kidney Failure, Chronic/prevention & control, Lupus Erythematosus, Systemic/complications, Lupus Erythematosus, Systemic/diagnosis, Lupus Nephritis/complications, Lupus Nephritis/diagnosis, Lupus Nephritis/prevention & control, Quality of Life, Renal Dialysis, Prognosis, Biopsy, Preventive Medicine/methods, Datasets as Topic, Electronic Health Records, California, Male, Female, Adult, Middle Aged, Cohort Studies, ROC Curve, Reproducibility of Results
18.
Sci Rep ; 12(1): 2726, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35177653

ABSTRACT

Temporal dataset shift associated with changes in healthcare over time is a barrier to deploying machine learning-based clinical decision support systems. Algorithms that learn robust models by estimating invariant properties across time periods for domain generalization (DG) and unsupervised domain adaptation (UDA) might be suitable for proactively mitigating dataset shift. The objective was to characterize the impact of temporal dataset shift on clinical prediction models and to benchmark DG and UDA algorithms on improving model robustness. In this cohort study, intensive care unit patients from the MIMIC-IV database were categorized by year groups (2008-2010, 2011-2013, 2014-2016, and 2017-2019). Tasks were predicting mortality, long length of stay, sepsis, and invasive ventilation. Feedforward neural networks were used as prediction models. The baseline experiment trained models using empirical risk minimization (ERM) on 2008-2010 (ERM[08-10]) and evaluated them on subsequent year groups. The DG experiment trained models using algorithms that estimated invariant properties on 2008-2016 and evaluated them on 2017-2019. The UDA experiment leveraged unlabelled samples from 2017-2019 for unsupervised distribution matching. DG and UDA models were compared to ERM[08-16] models trained on 2008-2016. Main performance measures were the area-under-the-receiver-operating-characteristic curve (AUROC), area-under-the-precision-recall curve, and absolute calibration error. Threshold-based metrics, including false positives and false negatives, were used to assess the clinical impact of temporal dataset shift and its mitigation strategies. In the baseline experiments, dataset shift was most evident for sepsis prediction (maximum AUROC drop, 0.090; 95% confidence interval (CI), 0.080-0.101). Considering a scenario of 100 consecutively admitted patients, ERM[08-10] applied to 2017-2019 was associated with one additional false negative among 11 patients with sepsis, when compared to the model applied to 2008-2010. When compared with ERM[08-16], the DG and UDA experiments failed to produce more robust models (range of AUROC difference, −0.003 to 0.050). In conclusion, DG and UDA failed to produce more robust models compared to ERM in the setting of temporal dataset shift. Alternate approaches are required to preserve model performance over time in clinical medicine.


Subjects
Databases, Factual, Intensive Care Units, Length of Stay, Models, Biological, Neural Networks, Computer, Sepsis, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Sepsis/mortality, Sepsis/therapy
19.
JMIR Med Inform ; 10(11): e40039, 2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36394938

ABSTRACT

BACKGROUND: Given the costs of machine learning implementation, a systematic approach to prioritizing which models to implement into clinical practice may be valuable. OBJECTIVE: The primary objective was to determine the healthcare attributes respondents at 2 pediatric institutions rate as important when prioritizing machine learning model implementation. The secondary objective was to describe their perspectives on implementation using a qualitative approach. METHODS: In this mixed methods study, we distributed a survey to health system leaders, physicians, and data scientists at 2 pediatric institutions. We asked respondents to rank the following 5 attributes in terms of implementation usefulness: the clinical problem is common; the clinical problem causes substantial morbidity and mortality; risk stratification leads to different actions that could reasonably improve patient outcomes; implementation reduces physician workload; and implementation saves money. Important attributes were those ranked as first or second most important. Individual qualitative interviews were conducted with a subsample of respondents. RESULTS: Among 613 eligible respondents, 275 (44.9%) responded. Qualitative interviews were conducted with 17 respondents. The most common important attributes were risk stratification leading to different actions (205/275, 74.5%) and the clinical problem causing substantial morbidity or mortality (177/275, 64.4%). The attributes considered least important were reducing physician workload and saving money. Qualitative interviews consistently prioritized implementations that improved patient outcomes. CONCLUSIONS: Respondents prioritized machine learning model implementation where risk stratification would lead to different actions and where clinical problems caused substantial morbidity and mortality. Implementations that improved patient outcomes were prioritized. These results can help provide a framework for machine learning model implementation.

20.
Appl Clin Inform ; 13(1): 315-321, 2022 01.
Article in English | MEDLINE | ID: mdl-35235994

ABSTRACT

BACKGROUND: One key aspect of a learning health system (LHS) is utilizing data generated during care delivery to inform clinical care. However, institutional guidelines that utilize observational data are rare and require months to create, making current processes impractical for more urgent scenarios such as those posed by the COVID-19 pandemic. There exists a need to rapidly analyze institutional data to drive guideline creation where evidence from randomized controlled trials is unavailable. OBJECTIVES: This article provides a background on the current state of observational data generation in institutional guideline creation and details our institution's experience in creating a novel workflow to (1) demonstrate the value of such a workflow, (2) demonstrate a real-world example, and (3) discuss difficulties encountered and future directions. METHODS: Utilizing a multidisciplinary team of database specialists, clinicians, and informaticists, we created a workflow for identifying and translating a clinical need into a queryable format in our clinical data warehouse, creating data summaries, and feeding this information back into clinical guideline creation. RESULTS: Clinical questions posed by the hospital medicine division were answered in a rapid time frame and informed the creation of institutional guidelines for the care of patients with COVID-19. Setting up the workflow, answering the questions, and producing data summaries required around 300 hours of effort and US $300,000. CONCLUSION: A key component of an LHS is the ability to learn from data generated during care delivery. Such examples are rare in the literature; we demonstrate one and propose how an ideal multidisciplinary team can be formed and deployed.


Subjects
COVID-19, Learning Health System, COVID-19/epidemiology, Humans, Observational Studies as Topic, Pandemics, Practice Guidelines as Topic, Workflow