1.
Life (Basel) ; 14(6)2024 May 21.
Article in English | MEDLINE | ID: mdl-38929638

ABSTRACT

Artificial intelligence models represented in machine learning algorithms are promising tools for risk assessment used to guide clinical and other health care decisions. Machine learning algorithms, however, may house biases that propagate stereotypes, inequities, and discrimination that contribute to socioeconomic health care disparities. These biases include those related to sociodemographic characteristics such as race, ethnicity, gender, age, insurance, and socioeconomic status that arise from the use of erroneous electronic health record data. Additionally, training data and algorithmic biases in large language models pose potential drawbacks. These biases affect the lives and livelihoods of a significant percentage of the population in the United States and globally, and the social and economic consequences of the associated backlash cannot be overstated. Here, we outline some of the sociodemographic, training data, and algorithmic biases that undermine sound health care risk assessment and medical decision-making and that should be addressed in the health care system. We present a perspective and overview of these biases by gender, race, ethnicity, age, and historically marginalized communities, covering algorithmic bias, biased evaluations, implicit bias, selection/sampling bias, socioeconomic status bias, biased data distributions, cultural bias, insurance status bias, confirmation bias, information bias, and anchoring bias. We then make recommendations to improve large language model training data, including de-biasing techniques such as counterfactual role-reversed sentences during knowledge distillation, fine-tuning, prefix attachment at training time, the use of toxicity classifiers, retrieval-augmented generation, and algorithmic modification to mitigate these biases moving forward.
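
One of the mitigation techniques named above, counterfactual role-reversed sentences, can be sketched in a few lines. This is an illustrative toy, not code from the paper: the function names and the swap table are assumptions, and the simplistic pronoun handling (objective vs. possessive "her" is ambiguous) is deliberately naive compared with real augmentation pipelines.

```python
# Hypothetical sketch of counterfactual role-reversal augmentation.
# All names here are illustrative, not from any specific toolkit.
import re

# Bidirectional swap table for gendered terms (illustrative subset).
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",  # both map to "her"; case is ambiguous
    "her": "his",                # reverse mapping loses the objective case
    "man": "woman", "woman": "man",
}

def counterfactual_sentence(sentence: str) -> str:
    """Return a role-reversed copy of `sentence` by swapping gendered terms."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAPS[word.lower()]
        # Preserve the capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

def augment(corpus: list[str]) -> list[str]:
    """Pair each sentence with its counterfactual to balance training data."""
    return [s for sent in corpus for s in (sent, counterfactual_sentence(sent))]
```

In a real pipeline the augmented pairs would be mixed into the distillation or fine-tuning corpus so the model sees both role assignments with equal frequency.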

2.
Commun Biol ; 7(1): 529, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704509

ABSTRACT

Intra-organism biodiversity is thought to arise from epigenetic modification of constituent genes and post-translational modifications of translated proteins. Here, we show that post-transcriptional modifications, like RNA editing, may also contribute. RNA editing enzymes APOBEC3A and APOBEC3G catalyze the deamination of cytosine to uracil. RNAsee (RNA site editing evaluation) is a computational tool developed to predict the cytosines edited by these enzymes. We find that 4.5% of non-synonymous DNA single nucleotide polymorphisms that result in cytosine to uracil changes in RNA are probable sites for APOBEC3A/G RNA editing; the variant proteins created by such polymorphisms may also result from transient RNA editing. These polymorphisms are associated with over 20% of Medical Subject Headings across ten categories of disease, including nutritional and metabolic, neoplastic, cardiovascular, and nervous system diseases. Because RNA editing is transient and not organism-wide, future work is necessary to confirm the extent and effects of such editing in humans.


Subject(s)
APOBEC Deaminases, Cytidine Deaminase, RNA Editing, Humans, Cytidine Deaminase/metabolism, Cytidine Deaminase/genetics, Polymorphism, Single Nucleotide, Cytosine/metabolism, APOBEC-3G Deaminase/metabolism, APOBEC-3G Deaminase/genetics, Uracil/metabolism, Proteins/genetics, Proteins/metabolism, Cytosine Deaminase/genetics, Cytosine Deaminase/metabolism
3.
JMIR Public Health Surveill ; 10: e49841, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38687984

ABSTRACT

BACKGROUND: There have been over 772 million confirmed cases of COVID-19 worldwide. A significant portion of these infections will lead to long COVID (post-COVID-19 condition) and its attendant morbidities and costs. Numerous life-altering complications have already been associated with the development of long COVID, including chronic fatigue, brain fog, and dangerous heart rhythms. OBJECTIVE: We aim to derive an actionable long COVID case definition consisting of significantly increased signs, symptoms, and diagnoses to support pandemic-related clinical, public health, research, and policy initiatives. METHODS: This research employs a case-crossover population-based study using International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) data generated at Veterans Affairs medical centers nationwide between January 1, 2020, and August 18, 2022. In total, 367,148 individuals with ICD-10-CM data both before and after a positive COVID-19 test were selected for analysis. We compared ICD-10-CM codes assigned 1 to 7 months following each patient's positive test with those assigned up to 6 months prior. Further, 350,315 patients had novel codes assigned during this window of time. We defined signs, symptoms, and diagnoses as being associated with long COVID if they had a novel case frequency of ≥1:1000, and they significantly increased in our entire cohort after a positive test. We present odds ratios with CIs for long COVID signs, symptoms, and diagnoses, organized by ICD-10-CM functional groups and medical specialty. We used our definition to assess long COVID risk based on a patient's demographics, Elixhauser score, vaccination status, and COVID-19 disease severity. RESULTS: We developed a long COVID definition consisting of 323 ICD-10-CM diagnosis codes grouped into 143 ICD-10-CM functional groups that were significantly increased in our 367,148 patient post-COVID-19 population. 
We defined 17 medical-specialty long COVID subtypes, such as cardiology long COVID. Patients who were COVID-19-positive developed signs, symptoms, or diagnoses included in our long COVID definition at a proportion of at least 59.7% (268,320/449,450, based on a denominator of all patients who were COVID-19-positive). The long COVID cohort was, on average, 8 years older than the non-long COVID cohort and had more comorbidities (2-year Elixhauser score 7.97 in patients with long COVID vs 4.21 in patients without). Patients who had a more severe bout of COVID-19, as judged by their minimum oxygen saturation level, were also more likely to develop long COVID. CONCLUSIONS: An actionable, data-driven definition of long COVID can help clinicians screen for and diagnose long COVID, allowing identified patients to be admitted into appropriate monitoring and treatment programs. This long COVID definition can also support public health, research, and policy initiatives. Patients with COVID-19 who are older, have low oxygen saturation levels during their bout of COVID-19, or have multiple comorbidities should be preferentially watched for the development of long COVID.
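
The two-part inclusion criterion described in the METHODS (a novel-code case frequency of at least 1:1000 plus a significant increase after a positive test) can be illustrated with a small sketch. The Wald-interval odds ratio and all function and variable names here are assumptions for illustration, not the paper's actual statistical code.

```python
# Illustrative sketch of the case-definition criterion described above:
# a diagnosis code qualifies if it appears as a novel post-COVID code in
# at least 1 per 1,000 patients AND its post-test odds significantly
# exceed its pre-test odds. Inputs are simple counts over one cohort.
from math import exp, log, sqrt

def odds_ratio_ci(pre_cases, post_cases, n_patients, z=1.96):
    """Odds ratio (post vs pre) with a Wald 95% confidence interval."""
    a, b = post_cases, n_patients - post_cases
    c, d = pre_cases, n_patients - pre_cases
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

def qualifies(novel_cases, pre_cases, post_cases, n_patients):
    """Apply both parts of the definition: frequency and significance."""
    if novel_cases / n_patients < 1 / 1000:
        return False
    _, lo, _ = odds_ratio_ci(pre_cases, post_cases, n_patients)
    return lo > 1.0  # significantly increased after a positive test
```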


Subject(s)
COVID-19, Cross-Over Studies, Post-Acute COVID-19 Syndrome, Humans, COVID-19/epidemiology, COVID-19/complications, Risk Factors, Male, Female, Middle Aged, United States/epidemiology, Aged, International Classification of Diseases, Adult
4.
Subst Use ; 18: 11782218231223673, 2024.
Article in English | MEDLINE | ID: mdl-38433747

ABSTRACT

Reportedly, various urine manipulations can be performed by patients with opioid use disorder (OUD) who are on buprenorphine/naloxone medications to disguise their noncompliance with treatment. One type of manipulation, known as "spiking" adulteration, involves directly dipping a buprenorphine/naloxone film into urine. Identifying this type of urine manipulation has been the aim of many previous studies, which have revealed urine adulterations through inappropriately high levels of "buprenorphine" and "naloxone" and a very small amount of "norbuprenorphine." So, does the small amount of "norbuprenorphine" in adulterated urine samples result from the dipped buprenorphine/naloxone film, or is it a residual metabolite of buprenorphine in the patient's system? This pilot study utilized 12 urine samples from 12 participants, as well as water samples as a control. The samples were subdivided by the dipping area and time, as well as by the temperature and concentration of the urine samples, and a sublingual generic buprenorphine/naloxone film was dipped directly into each sample. The levels of "buprenorphine," "norbuprenorphine," "naloxone," "buprenorphine-glucuronide," and "norbuprenorphine-glucuronide" were then examined by liquid chromatography with tandem mass spectrometry (LC-MS/MS). The results showed that high levels of "buprenorphine" and "naloxone" and a small amount of "norbuprenorphine" were detected in both urine and water samples when the buprenorphine/naloxone film was dipped directly into them. However, no "buprenorphine-glucuronide" or "norbuprenorphine-glucuronide" was detected in any of the samples. In addition, the area and timing of dipping altered "buprenorphine" and "naloxone" levels, but concentration and temperature did not.
This study's findings could help providers interpret their patients' urine drug test results more accurately, allowing them to monitor treatment compliance and identify manipulation.
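
The reported signature of "spiking" (high parent buprenorphine and naloxone with no glucuronide metabolites, which form only in vivo) lends itself to a simple screening rule. The numeric cutoffs and field names below are illustrative placeholders, not validated thresholds from the study.

```python
# A minimal decision sketch of the adulteration signature reported above.
# Thresholds are assumed for illustration only; real cutoffs must come
# from a validated assay and clinical guidance.
def flags_spiking(result: dict) -> bool:
    """Return True if an LC-MS/MS result matches the spiking pattern."""
    high_parent = (result["buprenorphine_ng_ml"] > 2000       # assumed cutoff
                   and result["naloxone_ng_ml"] > 500)        # assumed cutoff
    # Glucuronide conjugates are produced only by in vivo metabolism,
    # so their complete absence alongside high parent drug is suspicious.
    no_phase2 = (result["buprenorphine_glucuronide_ng_ml"] == 0
                 and result["norbuprenorphine_glucuronide_ng_ml"] == 0)
    return high_parent and no_phase2
```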

5.
JMIR Med Inform ; 12: e42271, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38354033

ABSTRACT

BACKGROUND: Infants born at extremely preterm gestational ages are typically admitted to the neonatal intensive care unit (NICU) after initial resuscitation. The subsequent hospital course can be highly variable, and despite counseling aided by available risk calculators, there are significant challenges with shared decision-making regarding life support and transition to end-of-life care. Improving predictive models can help providers and families navigate these unique challenges. OBJECTIVE: Machine learning methods have previously demonstrated added predictive value for determining intensive care unit outcomes, and their use allows consideration of a greater number of factors that potentially influence newborn outcomes, such as maternal characteristics. Machine learning-based models were analyzed for their ability to predict the survival of extremely preterm neonates at initial admission. METHODS: Maternal and newborn information was extracted from the health records of infants born between 23 and 29 weeks of gestation in the Medical Information Mart for Intensive Care III (MIMIC-III) critical care database. Applicable machine learning models predicting survival during the initial NICU admission were developed and compared. The same type of model was also examined using only features that would be available prepartum for the purpose of survival prediction prior to an anticipated preterm birth. Features most correlated with the predicted outcome were determined when possible for each model. RESULTS: Of included patients, 37 of 459 (8.1%) expired. The resulting random forest model showed higher predictive performance than the frequently used Score for Neonatal Acute Physiology With Perinatal Extension II (SNAPPE-II) NICU model when considering extremely preterm infants of very low birth weight. 
Several other machine learning models were found to have good performance but did not show a statistically significant difference from previously available models in this study. Feature importance varied by model, and those of greater importance included gestational age; birth weight; initial oxygenation level; elements of the APGAR (appearance, pulse, grimace, activity, and respiration) score; and amount of blood pressure support. Important prepartum features also included maternal age, steroid administration, and the presence of pregnancy complications. CONCLUSIONS: Machine learning methods have the potential to provide robust prediction of survival in the context of extremely preterm births and allow for consideration of additional factors such as maternal clinical and socioeconomic information. Evaluation of larger, more diverse data sets may provide additional clarity on comparative performance.
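
As a rough sketch of the modeling approach described above (not the authors' pipeline, and with synthetic data standing in for MIMIC-III, which requires credentialed access), a random forest survival classifier might look like this; the feature names and the simulated outcome are assumptions:

```python
# Minimal sketch: random forest prediction of NICU survival from
# admission features, on synthetic data with a cohort size matching
# the abstract (459 patients, ~8% mortality).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 459
# Hypothetical features: gestational age (wk), birth weight (g), min O2 sat (%)
X = np.column_stack([
    rng.uniform(23, 29, n),
    rng.uniform(400, 1500, n),
    rng.uniform(60, 100, n),
])
# Synthetic outcome loosely tied to gestational age and weight (~8% mortality)
risk = -(X[:, 0] - 23) / 6 - (X[:, 1] - 400) / 1100 + rng.normal(0, 0.3, n)
y = (risk > np.quantile(risk, 0.92)).astype(int)  # 1 = died

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
# Feature importances mirror the abstract's discussion of top predictors.
importances = dict(zip(["gest_age", "birth_wt", "min_o2"],
                       model.feature_importances_))
```

On real data, the prepartum variant described in the abstract would simply restrict the feature matrix to variables known before delivery (e.g., maternal age, steroid administration, pregnancy complications).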

6.
bioRxiv ; 2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37577456

ABSTRACT

Intra-organism biodiversity is thought to arise from epigenetic modification of our constituent genes and from post-translational modifications after mRNA is translated into proteins. We have found that post-transcriptional modification, known as RNA editing, is also responsible for a significant amount of our biodiversity, substantively expanding this story. The APOBEC (apolipoprotein B mRNA editing catalytic polypeptide-like) family RNA editing enzymes APOBEC3A and APOBEC3G catalyze the deamination of cytosines to uracils (C>U) in specific stem-loop structures. We used RNAsee (RNA site editing evaluation), a tool developed to predict the locations of APOBEC3A/G RNA editing sites, to determine whether known single nucleotide polymorphisms (SNPs) in DNA could be replicated in RNA via RNA editing. About 4.5% of non-synonymous SNPs which result in C>U changes in RNA, and about 5.4% of such SNPs labelled as pathogenic, were identified as probable sites for APOBEC3A/G editing. This suggests that the variant proteins created by these DNA mutations may also be created by transient RNA editing, with the potential to affect human health. The SNPs identified as potential APOBEC3A/G-mediated RNA editing sites were disproportionately associated with cardiovascular, digestive system, and musculoskeletal diseases. Future work should focus on common sites of RNA editing, any variant proteins created at these sites, and the effects of these variants on protein diversity and human health. Classically, our biodiversity is thought to come from our constitutive genetics, epigenetic phenomena, transcriptional differences, and post-translational modification of proteins. Here, we have shown evidence that RNA editing, often stimulated by environmental factors, could account for a significant degree of the protein biodiversity underlying human disease.
In an era where worries about our changing environment are ever increasing, from the warming of our climate to the emergence of new diseases to the infiltration of microplastics and pollutants into our bodies, understanding how environmentally sensitive mechanisms like RNA editing affect our own cells is essential.

7.
J Biomed Inform ; 144: 104443, 2023 08.
Article in English | MEDLINE | ID: mdl-37455008

ABSTRACT

OBJECTIVE: Despite the high prevalence of alcohol use disorder (AUD) in the United States, limited research has focused on the associations among AUD, pain, and opioid/benzodiazepine use. In addition, little is known about whether individuals with a history of AUD are at increased risk for pain diagnoses, pain prescriptions, and subsequent misuse. The objective was to develop a tailored dataset by linking data from 2 New York State (NYS) administrative databases to investigate a series of hypotheses related to AUD and painful medical disorders. METHODS: Data from the NYS Office of Addiction Services and Supports (OASAS) Client Data System (CDS) and Medicaid claims data from the NYS Department of Health Medicaid Data Warehouse (MDW) were merged using a stepwise deterministic method. Multiple patient-level identifier combinations were applied to create linkage rules. We included patients aged 18 and older from the OASAS CDS who initially entered treatment with a primary substance use of alcohol and no use of opioids between January 1, 2003, and September 23, 2019. This cohort was then linked to corresponding Medicaid claims. RESULTS: A total of 177,685 individuals with a primary AUD problem and no opioid use history were included in the dataset. Of these, 37,346 (21.0%) patients had an OUD diagnosis, and 3,365 (1.9%) patients experienced an opioid overdose. There were 121,865 (68.6%) patients found to have a pain condition. CONCLUSION: The integrated database allows researchers to examine the associations among AUD, pain, and opioid/benzodiazepine use, and to propose hypotheses to improve outcomes for at-risk patients. The findings of this study can contribute to the development of a prognostic prediction model and the analysis of longitudinal outcomes to improve the care of patients with AUD.


Subject(s)
Alcoholism, Opioid-Related Disorders, Humans, United States/epidemiology, Analgesics, Opioid/therapeutic use, Alcoholism/diagnosis, Alcoholism/epidemiology, Alcoholism/drug therapy, New York/epidemiology, Information Sources, Opioid-Related Disorders/therapy, Opioid-Related Disorders/drug therapy, Pain/drug therapy, Pain/epidemiology, Pain/chemically induced, Benzodiazepines
8.
Stud Health Technol Inform ; 304: 21-25, 2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37347563

ABSTRACT

Perceptions of errors associated with healthcare information technology (HIT) often depend on the context and position of the viewer. HIT vendors posit very different causes of errors than clinicians, implementation teams, or IT staff. Even within the same hospital, members of different departments and services often implicate one another. Organizations may attribute errors to external care partners that refer patients, such as nursing homes or outside clinics. Also, the various clinical roles within an organization (e.g., physicians, nurses, pharmacists) can conceptualize errors and their root causes differently. Overarching all these perceptual factors, the definitions, mechanisms, and incidence of HIT-related errors remain deeply contested; there is no universal standard for defining or counting these errors. This paper attempts to enumerate and clarify the issues related to differential perceptions of medical errors associated with HIT, and then suggests solutions.


Subject(s)
Electronic Health Records, Medical Errors, Humans, Hospitals
9.
J Clin Transl Sci ; 7(1): e55, 2023.
Article in English | MEDLINE | ID: mdl-37008615

ABSTRACT

Introduction: It is important for SARS-CoV-2 vaccine providers, vaccine recipients, and those not yet vaccinated to be well informed about vaccine side effects. We sought to estimate the risk of post-vaccination venous thromboembolism (VTE) to meet this need. Methods: We conducted a retrospective cohort study to quantify the excess VTE risk associated with SARS-CoV-2 vaccination in US veterans aged 45 and older using data from the Department of Veterans Affairs (VA) National Surveillance Tool. The vaccinated cohort received at least one dose of a SARS-CoV-2 vaccine at least 60 days prior to 3/06/22 (N = 855,686). The control group comprised those not vaccinated (N = 321,676). All patients had at least one negative COVID-19 test before vaccination. The main outcome was VTE documented by ICD-10-CM codes. Results: Vaccinated persons had a VTE rate of 1.3755 (CI: 1.3752-1.3758) per thousand, which was 0.1 percent over the baseline rate of 1.3741 (CI: 1.3738-1.3744) per thousand in the unvaccinated patients, or 1.4 excess cases per 1,000,000. All vaccine types showed a minimally increased rate of VTE (rate of VTE per 1,000: 1.3761 (CI: 1.3754-1.3768) for Janssen; 1.3757 (CI: 1.3754-1.3761) for Pfizer; and 1.3757 (CI: 1.3748-1.3877) for Moderna). The tiny differences in rates comparing either the Janssen or Pfizer vaccine to Moderna were statistically significant (p < 0.001). Adjusting for age, sex, BMI, 2-year Elixhauser score, and race, the vaccinated group had a minimally higher relative risk of VTE compared with controls (1.0009927; CI: 1.0007673-1.0012181; p < 0.001). Conclusion: The results provide reassurance that there is only a trivial increased risk of VTE with the current US SARS-CoV-2 vaccines in veterans older than age 45. This risk is significantly less than the VTE risk among hospitalized COVID-19 patients. The risk-benefit ratio favors vaccination, given the VTE rate, mortality, and morbidity associated with COVID-19 infection.
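
The headline arithmetic in the Results can be reproduced directly from the published per-1,000 rates; this is a back-of-envelope check, not the VA analysis itself:

```python
# Verify the excess-risk figures reported above from the per-1,000 rates.
vaccinated_rate = 1.3755    # VTE per 1,000 vaccinated persons
unvaccinated_rate = 1.3741  # VTE per 1,000 unvaccinated persons

# Difference per 1,000, scaled up to per 1,000,000.
excess_per_million = (vaccinated_rate - unvaccinated_rate) * 1000
# Relative increase over the unvaccinated baseline.
relative_increase = (vaccinated_rate - unvaccinated_rate) / unvaccinated_rate

print(f"{excess_per_million:.1f} excess cases per million")  # ~1.4
print(f"{relative_increase:.2%} relative increase")          # ~0.10%
```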

10.
Subst Abuse ; 17: 11782218231153748, 2023.
Article in English | MEDLINE | ID: mdl-36937705

ABSTRACT

Background: Using a 1-year chart review, Furo et al. studied the association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratio and found significant differences in the ratio among the 8-, 12-, and 16-mg/day groups with an analysis of variance (ANOVA) test. The present study expands the data to a 2-year chart review and is intended to delineate the association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratio with higher statistical power. Methods: This study performed a 2-year chart review of data for patients living in a halfway house setting, where their drug administration was closely monitored. The patients were on buprenorphine prescribed at an outpatient clinic for opioid use disorder (OUD), and their buprenorphine prescription and dispensing information were confirmed by the New York Prescription Drug Monitoring Program (PDMP). Urine test results in the electronic health record (EHR) were reviewed, focusing on the "buprenorphine," "norbuprenorphine," and "creatinine" levels. The Kruskal-Wallis H and Mann-Whitney U tests were performed to examine the association between buprenorphine dose and the "norbuprenorphine" to "creatinine" ratio. Results: This study included 371 urine samples from 61 consecutive patients and analyzed the data in a manner similar to that described by Furo et al.
This study had similar findings, with the following exceptions: (1) a mean buprenorphine dose of 11.0 ± 3.8 mg/day with a range of 2 to 20 mg/day; (2) exclusion of 6 urine samples with a "creatinine" level <20 mg/dL; (3) minimum "norbuprenorphine" to "creatinine" ratios in the 8-, 12-, and 16-mg/day groups of 0.44 × 10⁻⁴ (n = 68), 0.1 × 10⁻⁴ (n = 133), and 1.37 × 10⁻⁴ (n = 82), respectively; however, after removing the 2 lowest outliers, the minimum "norbuprenorphine" to "creatinine" ratio in the 12-mg/day group was 1.6 × 10⁻⁴, similar to the findings of the previous study; and (4) a significant association between buprenorphine dose and the urine "norbuprenorphine" to "creatinine" ratio from the Kruskal-Wallis test (P < .01). In addition, the median "norbuprenorphine" to "creatinine" ratio had a strong association with buprenorphine dose, which could be formulated as y = 2.266 ln(x) + 0.8211, where x is the dose in mg/day and y is the median ratio in units of 10⁻⁴; accordingly, the predicted median ratios in the 8-, 12-, and 16-mg/day groups were 5.53 × 10⁻⁴, 6.45 × 10⁻⁴, and 7.10 × 10⁻⁴, respectively. Therefore, any of the following features should alert providers to further investigate patient treatment compliance: (1) inappropriate substance(s) in the urine sample; (2) "creatinine" level <20 mg/dL; (3) "buprenorphine" to "norbuprenorphine" ratio >50:1; (4) buprenorphine dose >24 mg/day; or (5) "norbuprenorphine" to "creatinine" ratio <0.5 × 10⁻⁴ in patients on 8 mg/day or <1.5 × 10⁻⁴ in patients on 12 mg/day or more. Conclusion: The results of the present study confirmed those of the previous study regarding the association between buprenorphine dose and the "norbuprenorphine" to "creatinine" ratio, using an expanded data set. Additionally, this study delineated a clearer relationship, focusing on the median "norbuprenorphine" to "creatinine" ratios in the different buprenorphine dose groups.
These results could help providers interpret urine test results more accurately and apply them to outpatient opioid treatment programs for optimal treatment outcomes.
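
The fitted dose-to-ratio relationship and the five red flags above can be sketched as follows. The formula is taken from the abstract (y = 2.266 ln(x) + 0.8211, with x in mg/day and y in units of 10⁻⁴); the dictionary keys and function names are hypothetical:

```python
# Sketch of the dose/median-ratio curve and the compliance flags from
# the abstract. Thresholds are as reported; field names are assumptions.
from math import log

def predicted_median_ratio(dose_mg_day: float) -> float:
    """Median norbuprenorphine/creatinine ratio (x 1e-4) for a dose."""
    return 2.266 * log(dose_mg_day) + 0.8211

def compliance_flags(sample: dict) -> list[str]:
    """Return the red flags from the abstract that a sample triggers."""
    flags = []
    if sample["creatinine_mg_dl"] < 20:
        flags.append("dilute: creatinine < 20 mg/dL")
    if sample["bup_to_norbup_ratio"] > 50:
        flags.append("possible spiking: bup/norbup > 50:1")
    if sample["dose_mg_day"] > 24:
        flags.append("dose > 24 mg/day")
    ratio = sample["norbup_to_creat_ratio_e4"]
    dose = sample["dose_mg_day"]
    if (dose == 8 and ratio < 0.5) or (dose >= 12 and ratio < 1.5):
        flags.append("ratio below dose-adjusted minimum")
    return flags
```

The model reproduces the published medians: at 8, 12, and 16 mg/day it predicts approximately 5.53, 6.45, and 7.10 (× 10⁻⁴), matching the abstract.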

11.
J Gen Intern Med ; 38(1): 138-146, 2023 01.
Article in English | MEDLINE | ID: mdl-35650469

ABSTRACT

BACKGROUND: Alcohol use disorder (AUD) is a highly prevalent public health problem that contributes to opioid- and benzodiazepine-related morbidity and mortality. Even though co-utilization of these substances is particularly harmful, data are sparse on opioid or benzodiazepine prescribing patterns among individuals with AUD. OBJECTIVE: To estimate temporal trends and disparities in opioid, benzodiazepine, and opioid/benzodiazepine co-prescribing among individuals with AUD in New York State (NYS). DESIGN/PARTICIPANTS: Serial cross-sectional study analyzing merged data from the NYS Office of Addiction Services and Supports (OASAS) and the NYS Department of Health Medicaid Data Warehouse. Subjects with a first admission to an OASAS treatment program from 2005-2018 and a primary AUD were included. A total of 148,328 subjects were identified. MEASURES: Annual prescribing rates of opioids, benzodiazepines, or both between the pre- (2005-2012) and post- (2013-2018) Internet System for Tracking Over-Prescribing (I-STOP) periods. I-STOP is a prescription monitoring program implemented in NYS in August 2013. Analyses were stratified based on sociodemographic factors (age, sex, race/ethnicity, and location). RESULTS: Opioid prescribing rates decreased between the pre- and post-I-STOP periods from 25.1% (95% CI, 24.9-25.3%) to 21.3% (95% CI, 21.2-21.4; P <.001), while benzodiazepine (pre: 9.96% [95% CI, 9.83-10.1%], post: 9.92% [95% CI, 9.83-10.0%]; P =.631) and opioid/benzodiazepine prescribing rates remained unchanged (pre: 3.01% vs. post: 3.05%; P =.403). After I-STOP implementation, there was a significant decreasing trend in opioid (change, -1.85% per year, P <.0001), benzodiazepine (-0.208% per year, P =.0184), and opioid/benzodiazepine prescribing (-0.267% per year, P <.0001). Opioid, benzodiazepine, and co-prescription rates were higher in females, White non-Hispanics, and rural regions. 
CONCLUSIONS: Among those with AUD, opioid prescribing decreased following NYS I-STOP program implementation. While both benzodiazepine and opioid/benzodiazepine co-prescribing rates remained high, a decreasing trend was evident after program implementation. Continuing high rates of opioid and benzodiazepine prescribing necessitate the development of innovative approaches to improve the quality of care.


Subject(s)
Alcoholism, Analgesics, Opioid, Female, United States, Adult, Humans, Analgesics, Opioid/therapeutic use, New York/epidemiology, Alcoholism/drug therapy, Benzodiazepines/therapeutic use, Cross-Sectional Studies, Practice Patterns, Physicians', Drug Prescriptions
13.
J Clin Transl Sci ; 6(1): e74, 2022.
Article in English | MEDLINE | ID: mdl-35836784

ABSTRACT

Introduction: COVID-19 is a major health threat around the world, causing hundreds of millions of infections and millions of deaths. There is a pressing global need for effective therapies. We hypothesized that leukotriene inhibitors (LTIs), which have been shown to lower IL-6 and IL-8 levels, may have a protective effect in patients with COVID-19. Methods: In this retrospective controlled cohort study, we compared death rates in COVID-19 patients who were taking an LTI with those who were not. We used the Department of Veterans Affairs (VA) Corporate Data Warehouse (CDW) to create a cohort of COVID-19-positive patients and tracked their use of LTIs between November 1, 2019 and November 11, 2021. Results: Of the 1,677,595 patients tested for COVID-19, 189,195 tested positive. Of these, 40,701 were admitted; 38,184 had an oxygen requirement, and 1,214 were taking an LTI. The use of dexamethasone plus an LTI in hospital showed a survival advantage of 13.5% (CI: 0.23%-26.7%; p < 0.01) in patients presenting with a minimum O2 saturation of 50% or less. For patients with a minimum O2 saturation of <60% or <50% who were on LTIs as outpatients, continuing the LTI as inpatients led to survival advantages of 14.4% and 22.2%, respectively. Conclusions: When combined, dexamethasone and LTIs provided a mortality benefit in COVID-19 patients presenting with O2 saturations <50%. The LTI cohort had lower markers of inflammation and cytokine storm.

14.
Stud Health Technol Inform ; 294: 465-469, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612123

ABSTRACT

Order sets that adhere to disease-specific guidelines have been shown to increase clinician efficiency and patient safety, but curating these order sets, particularly for consistency across multiple sites, is difficult and time-consuming. We created software called CDS-Compare to ease the burden on expert reviewers of rapidly and effectively curating large databases of order sets. We applied our clustering-based software to a database of NLP-processed order sets extracted from the VA's Electronic Health Record, then had subject-matter experts review the web application version of our software for clustering validity.
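
The clustering idea behind CDS-Compare (grouping similar order sets so reviewers curate clusters rather than every individual set) can be sketched with a greedy Jaccard-similarity pass. The abstract does not specify the tool's actual algorithm or data model, so everything below is illustrative:

```python
# Stdlib-only sketch: group order sets by item overlap so that a
# reviewer examines one representative per cluster. The greedy
# single-pass scheme and the 0.5 threshold are assumptions.
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of order items."""
    return len(a & b) / len(a | b)

def cluster_order_sets(order_sets: dict[str, set], threshold: float = 0.5):
    """Greedily assign each order set to the first cluster whose
    representative overlaps it by at least `threshold`."""
    clusters: list[dict] = []
    for name, items in order_sets.items():
        for c in clusters:
            if jaccard(items, c["rep"]) >= threshold:
                c["members"].append(name)
                break
        else:  # no sufficiently similar cluster: start a new one
            clusters.append({"rep": items, "members": [name]})
    return [c["members"] for c in clusters]
```

A production tool would likely use a more robust method (e.g., hierarchical clustering over TF-IDF vectors of order-set text), but the reviewer-facing output is the same: a short list of clusters instead of the full database.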


Subject(s)
Machine Learning, Software, Databases, Factual, Electronic Health Records, Humans
15.
J Thorac Cardiovasc Surg ; 164(5): 1318-1326.e3, 2022 11.
Article in English | MEDLINE | ID: mdl-35469597

ABSTRACT

BACKGROUND: Non-small cell lung cancer (NSCLC) continues to be a major cause of cancer deaths. Previous investigation has suggested that metformin use can contribute to improved outcomes in NSCLC patients. However, this association is not uniform across all analyzed cohorts, implying that patient characteristics might lead to disparate results. Identification of patient characteristics that affect the association of metformin use with clinical benefit might clarify the drug's effect on lung cancer outcomes and lead to more rational design of clinical trials of metformin's utility as an intervention. In this study, we examined the association of metformin use with long-term mortality benefit in patients with NSCLC and the possible modulation of this benefit by body mass index (BMI) and smoking status, controlling for other clinical covariates. METHODS: This was a retrospective cohort study in which we analyzed data from the Veterans Affairs (VA) Tumor Registry in the United States. Data from all patients with stage I NSCLC from 2000 to 2016 were extracted from a national database, the Corporate Data Warehouse, which captures data from all patients, primarily male, who underwent treatment through the VA health system in the United States. Metformin use was measured according to metformin prescriptions dispensed to patients in the VA health system. The association of metformin use with overall survival (OS) after diagnosis of stage I NSCLC was examined. Patients were further stratified according to BMI and smoking status (previous vs current) to examine the association of metformin use with OS across these strata. RESULTS: Metformin use was associated with improved survival in patients with stage I NSCLC (average hazard ratio, 0.82; P < .001). An interaction between the effect of metformin use and BMI on OS was observed (χ² = 3268.42; P < .001), with a greater benefit of metformin use observed as BMI increased. Similarly, an interaction between smoking status and metformin use on OS was observed (χ² = 2997.05; P < .001), with a greater benefit of metformin use in previous smokers than in current smokers. CONCLUSIONS: In this large retrospective study, we showed that metformin users in a robust stage I NSCLC patient population treated in the VA health system experienced a survival benefit. Metformin use was associated with an 18% improvement in OS. This association was stronger in patients with a higher BMI and in previous smokers. These observations deserve further mechanistic study and can inform the rational design of clinical trials of metformin in patients with lung cancer.


Subject(s)
Carcinoma, Non-Small-Cell Lung, Lung Neoplasms, Metformin, Carcinoma, Non-Small-Cell Lung/pathology, Humans, Lung Neoplasms/pathology, Male, Metformin/therapeutic use, Neoplasm Staging, Proportional Hazards Models, Retrospective Studies, United States
16.
Subst Abuse ; 15: 11782218211061749, 2021.
Article in English | MEDLINE | ID: mdl-34898987

ABSTRACT

BACKGROUND: Treatment progress is routinely monitored by urine testing in patients with opioid use disorder (OUD) undergoing buprenorphine medication-assisted treatment (MAT). However, interpretation of urine test results can be challenging. This retrospective study aims to examine quantitative buprenorphine, norbuprenorphine, and creatinine levels in urine testing in relation to sublingual buprenorphine dosage, to facilitate accurate interpretation of urine testing results. METHODS: We reviewed the medical charts of 41 consecutive patients who were residing in halfway houses, where their medication intake was closely monitored, and who had enrolled in an office-based MAT program at an urban clinic between July 2018 and June 2019. The patients' urine testing results were reviewed, and demographic variables were recorded. We focused on the patients treated with 8, 12, or 16 mg/day of buprenorphine, examining their urine buprenorphine, norbuprenorphine, and creatinine levels. Analysis of variance tested the statistical association between dosage and the norbuprenorphine-to-creatinine ratio. RESULTS: A total of 240 urine samples from 41 patients were included in this study. The 41 patients received a mean buprenorphine dose of 10.5 ± 3.7 mg/day (range, 4-20 mg/day). Of the 240 urine samples, we focused on the 184 that came from the 33 patients treated with 8, 12, or 16 mg/day of buprenorphine, the 3 most common dosages. All 184 urine samples had a creatinine level of >20 mg/dL and a buprenorphine-to-norbuprenorphine ratio of <50:1. The average norbuprenorphine-to-creatinine ratio in the 8 mg/day dosage group was 3.85 ± 2.24 × 10⁻⁴ (n = 66; range, 0.44-11.12). The respective ratios in the 12- and 16-mg/day dosage groups were 5.64 ± 3.40 × 10⁻⁴ (n = 83; range, 1.55-22.72) and 6.23 ± 4.92 × 10⁻⁴ (n = 35; range, 1.37-27.12).
The 3 dosage groups differed significantly in mean ratios (P < .01), except when the 12- and 16-mg/day dosage groups were compared (P = .58). The results thus suggest that prescribers should pay attention to the following features: (1) unexpected substance(s) in urine testing, (2) creatinine level under 20 mg/dL, (3) buprenorphine-to-norbuprenorphine ratio over 50:1, (4) buprenorphine dosage over 24 mg/day, and (5) norbuprenorphine-to-creatinine ratio consistently under 0.5 × 10⁻⁴ in patients treated with 8 mg/day or under 1.5 × 10⁻⁴ in patients treated with 12 mg/day or more. CONCLUSION: This study suggests parameters for interpreting quantitative urine test results in relation to buprenorphine dose in office-based opioid treatment programs.
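The five features above can be collected into a simple screening function. This is a hedged sketch using the abstract's suggested thresholds; parameter names and units are illustrative, it is not a validated clinical tool, and the behavior between 8 and 12 mg/day is an assumption, since the abstract gives thresholds only at those two dose levels.

```python
def flag_urine_result(creatinine_mg_dl, bup_to_norbup_ratio,
                      norbup_to_creat_ratio_e4, dose_mg_day,
                      unexpected_substances=()):
    """Return review flags per the abstract's suggested parameters.

    norbup_to_creat_ratio_e4 is the norbuprenorphine-to-creatinine ratio
    expressed in the abstract's 1e-4 units.
    """
    flags = []
    if unexpected_substances:
        flags.append("unexpected substance(s) in urine testing")
    if creatinine_mg_dl < 20:
        flags.append("creatinine under 20 mg/dL")
    if bup_to_norbup_ratio > 50:
        flags.append("buprenorphine-to-norbuprenorphine ratio over 50:1")
    if dose_mg_day > 24:
        flags.append("buprenorphine dosage over 24 mg/day")
    # Abstract gives thresholds at 8 mg/day (0.5) and >= 12 mg/day (1.5);
    # treating doses under 12 mg/day like the 8 mg/day group is an assumption.
    threshold = 1.5 if dose_mg_day >= 12 else 0.5
    if norbup_to_creat_ratio_e4 < threshold:
        flags.append("norbuprenorphine-to-creatinine ratio below expected range")
    return flags

# A sample consistent with monitored intake at 8 mg/day raises no flags:
print(flag_urine_result(80, 4.0, 3.9, 8))  # []
```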

17.
J Med Internet Res ; 23(11): e28946, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34751659

ABSTRACT

BACKGROUND: Nonvalvular atrial fibrillation (NVAF) affects almost 6 million Americans and is a major contributor to stroke but is significantly undiagnosed and undertreated despite explicit guidelines for oral anticoagulation. OBJECTIVE: The aim of this study is to investigate whether semisupervised natural language processing (NLP) of free-text information in electronic health records (EHRs), combined with structured EHR data, improves NVAF discovery and treatment and perhaps offers a method to prevent thousands of deaths and save billions of dollars. METHODS: We abstracted 96,681 participants from the University at Buffalo faculty practice's EHR. NLP was used to index the notes and compare the ability to identify NVAF, congestive heart failure, hypertension, age ≥75 years, diabetes mellitus, stroke or transient ischemic attack, vascular disease, age 65 to 74 years, sex category (CHA2DS2-VASc), and Hypertension, Abnormal liver/renal function, Stroke history, Bleeding history or predisposition, Labile INR, Elderly, Drug/alcohol usage (HAS-BLED) scores using structured data alone (International Classification of Diseases codes) versus combined structured and unstructured data from clinical notes. In addition, we analyzed data from 63,296,120 participants in the Optum and Truven databases to determine the NVAF frequency, rates of CHA2DS2-VASc ≥2 with no contraindications to oral anticoagulants, rates of stroke and death in the untreated population, and first-year costs after stroke. RESULTS: The structured-plus-unstructured method would have identified 3,976,056 additional true NVAF cases (P<.001) and improved sensitivity for CHA2DS2-VASc and HAS-BLED scores compared with structured data alone (P=.002 and P<.001, respectively), a 32.1% improvement. For the United States, this method would prevent an estimated 176,537 strokes, save 10,575 lives, and save >US $13.5 billion.
CONCLUSIONS: Artificial intelligence-informed biosurveillance combining NLP of free-text information with structured EHR data improves data completeness, prevents thousands of strokes, and saves lives and funds. This method is applicable to many disorders with profound public health consequences.
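The CHA2DS2-VASc score used above is a simple additive risk score with published weights, which makes it easy to compute once its components are identified; the challenge the study addresses is extracting those components from EHRs. A minimal sketch of the scoring itself (not the authors' NLP pipeline):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """CHA2DS2-VASc stroke-risk score using the standard published weights."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # A2 / A
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0      # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# A 72-year-old woman with hypertension scores 1 (age 65-74) + 1 (sex) + 1 (HTN):
print(cha2ds2_vasc(72, True, False, True, False, False, False))  # 3
```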


Subject(s)
Atrial Fibrillation , Stroke , Aged , Anticoagulants , Artificial Intelligence , Atrial Fibrillation/drug therapy , Atrial Fibrillation/prevention & control , Case-Control Studies , Electronic Health Records , Humans , Natural Language Processing , Risk Assessment , Risk Factors , Stroke/prevention & control
18.
Stud Health Technol Inform ; 286: 3-8, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34755681

ABSTRACT

The COVID-19 pandemic has disrupted many global industries and shifted the digital health landscape by stimulating and accelerating the delivery of digital care. It has emphasized the need for a system-level informatics implementation that supports the healthcare management of populations at a macro level while also providing the necessary support for front-line care delivery at a micro level. From data dashboards to telemedicine, this crisis has highlighted the need for a health informatics transformation that can bridge time and space to provide timely care. However, health transformation cannot rely solely on Health Information Technology (HIT) for progress; rather, success must be an outcome of system design that focuses on the contextual complexity of the health system where HIT is used. This conference highlights the important role context plays for health informatics in global pandemics and aims to answer critical questions in four main areas: 1) health information management in the COVID-19 context, 2) implementation of new practices and technologies in healthcare, 3) sociotechnical analysis of task performance and workload in healthcare, and 4) innovations in design and evaluation methods of health technologies. We deem this a call to action to understand the importance of context while solving the last-mile problem in delivering the informatics solutions needed to support our public health response.


Subject(s)
COVID-19 , Medical Informatics , Telemedicine , Humans , Pandemics , SARS-CoV-2
19.
Stud Health Technol Inform ; 287: 89-93, 2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34795088

ABSTRACT

OBJECTIVE: One important concept in informatics is data that meet the principles of Findability, Accessibility, Interoperability and Reusability (FAIR). Standards such as terminologies (findability) assist with important tasks like interoperability, natural language processing (NLP) (accessibility), and decision support (reusability). One terminology, Solor, integrates SNOMED CT, LOINC, and RxNorm. We describe Solor, the HL7 Analysis Normal Form (ANF), and their use with the high definition natural language processing (HD-NLP) program. METHODS: We used HD-NLP to process 694 clinical narratives previously modeled by human experts into Solor and ANF. We compared the HD-NLP output to the expert gold standard for 20% of the sample. Each clinical statement was judged "correct" if the HD-NLP output matched the ANF structure and Solor concepts, or "incorrect" if any ANF structure or Solor concepts were missing or incorrect. Judgments were summed to give totals for "correct" and "incorrect". RESULTS: Of the reviewed statements, 113 (80.7%) were correct, 26 (18.6%) were incorrect, and 1 produced an error. Inter-rater reliability was 97.5%, with a Cohen's kappa of 0.948. CONCLUSION: The HD-NLP software provides usable, complex, standards-based representations of important clinical statements designed to drive clinical decision support (CDS).
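The reported Cohen's kappa (0.948) adjusts raw percent agreement (97.5%) for the agreement expected by chance. A minimal sketch of the standard two-rater computation (not the authors' code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    # Chance agreement: product of each rater's marginal label probabilities.
    p_expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Perfect agreement yields kappa = 1.0:
print(cohens_kappa(["correct", "incorrect"] * 2, ["correct", "incorrect"] * 2))
```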


Subject(s)
Natural Language Processing , RxNorm , Humans , Reproducibility of Results , Systematized Nomenclature of Medicine , Controlled Vocabulary
20.
J Biomed Inform ; 122: 103889, 2021 10.
Article in English | MEDLINE | ID: mdl-34411708

ABSTRACT

Identification of patient subtypes from retrospective Electronic Health Record (EHR) data is fraught with inherent modeling issues, such as missing data and variable-length time intervals, and the results obtained are highly dependent on data pre-processing strategies. As we move towards personalized medicine, identifying accurate patient subtypes will be a key factor in creating patient-specific treatment plans. Partitioning longitudinal trajectories from irregularly spaced and variable-length time intervals is a well-studied but still open problem. In this work, we present and compare k-means approaches for subtyping opioid use trajectories from EHR data. We then interpret the resulting subtypes using decision trees, examining how each subtype is influenced by opioid medication features and by patient diagnoses, procedures, and demographics. Finally, we discuss how the subtypes can be incorporated into static machine learning models as features for predicting opioid overdose and adverse events. The proposed methods are general and can be extended to other EHR prescription dosage trajectories.
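One simple way to handle the irregularly spaced, variable-length trajectories described above is to resample each trajectory to a fixed length and then cluster the resampled vectors. The sketch below is a generic illustration on hypothetical dose trajectories, not the authors' pipeline, and uses plain k-means with a deterministic initialization for reproducibility.

```python
def resample(traj, m=10):
    """Linearly resample a variable-length trajectory to m points."""
    n = len(traj)
    if n == 1:
        return [float(traj[0])] * m
    out = []
    for i in range(m):
        t = i * (n - 1) / (m - 1)  # fractional index into the original series
        lo = int(t)
        hi = min(lo + 1, n - 1)
        out.append(traj[lo] + (t - lo) * (traj[hi] - traj[lo]))
    return out

def kmeans(points, k, iters=25):
    """Plain k-means on fixed-length vectors; returns cluster labels."""
    # Deterministic init: pick centers spread evenly through the input list.
    idx = [round(i * (len(points) - 1) / (k - 1)) for i in range(k)]
    centers = [list(points[j]) for j in idx]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [
            min(range(k),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centers[c])))
            for pt in points
        ]
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Hypothetical dose trajectories (mg/day) of varying length:
trajectories = [[10, 20, 30, 40], [10, 15, 25, 35, 45],   # escalating
                [40, 30, 20, 10], [45, 30, 15]]            # tapering
labels = kmeans([resample(t) for t in trajectories], k=2)
print(labels)  # escalating pair shares one label, tapering pair the other
```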


Subject(s)
Opioid Analgesics , Opioid-Related Disorders , Opioid Analgesics/therapeutic use , Cluster Analysis , Electronic Health Records , Humans , Opioid-Related Disorders/drug therapy , Retrospective Studies