Results 1 - 14 of 14
1.
BMC Cardiovasc Disord ; 24(1): 48, 2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38218755

ABSTRACT

BACKGROUND: Type 2 Diabetes Mellitus (T2DM) has become a major health concern with an increasing prevalence and is now one of the leading attributable causes of death globally. T2DM and cardiovascular disease are strongly associated, and T2DM is an important independent risk factor for ischemic heart disease. T-wave abnormalities (TWA) on the electrocardiogram (ECG) can indicate several pathologies, including ischemia. In this study, we aimed to investigate the association between T2DM and T-wave changes using the Minnesota coding system. METHODS: A cross-sectional study was conducted on the MASHAD cohort population; all participants of the cohort were enrolled. 12-lead ECGs and the Minnesota coding system (codes 5-1 to 5-4) were used for T-wave observation and interpretation. Regression models were used for the final evaluation, with significance set at p < 0.05. RESULTS: A total of 9035 participants aged 35-65 years were included, of whom 1273 were diabetic. The prevalence of codes 5-2 and 5-3 and of major and minor TWA was significantly higher in diabetic participants (p < 0.05). However, after adjustment for age, gender, and hypertension, the presence of TWA was not significantly associated with T2DM (p > 0.05). Hypertension, age, and body mass index were significantly associated with T2DM (p < 0.05). CONCLUSIONS: Although some T-wave abnormalities were more frequent in diabetic participants, they were not statistically associated with the presence of T2DM in our study.
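As an illustration of the kind of adjusted analysis reported above, here is a minimal sketch of a logistic regression of T2DM on T-wave abnormality adjusted for age, gender, and hypertension; the file name and column names are assumptions, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort file: one row per participant; column names are assumed.
df = pd.read_csv("mashad_cohort.csv")

# Logistic regression of T2DM status on T-wave abnormality, adjusted for covariates.
model = smf.logit("t2dm ~ twa + age + C(gender) + C(hypertension)", data=df).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios for each term
```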


Subject(s)
Diabetes Mellitus, Type 2 , Hypertension , Humans , Adult , Middle Aged , Aged , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/complications , Cross-Sectional Studies , Minnesota/epidemiology , Electrocardiography , Risk Factors , Hypertension/complications
2.
Pharmacol Res Perspect ; 11(2): e01068, 2023 04.
Article in English | MEDLINE | ID: mdl-36855813

ABSTRACT

We aimed to determine the effects of isoproterenol on arrhythmia recurrence in patients with atrioventricular nodal re-entrant tachycardia (AVNRT) treated with catheter ablation. This randomized controlled clinical trial was conducted on AVNRT patients who were candidates for radiofrequency ablation (RFA). The patients were randomly assigned to receive isoproterenol (0.5-4 µg/min) or not (control group) for arrhythmia re-induction after ablation. The results of the electrophysiological (EP) study, the ablation parameters, and the arrhythmia recurrence rate were recorded. We evaluated 206 patients (53 males and 153 females) with a mean (SD) age of 49.87 (15.5) years in two groups: isoproterenol (n = 103) and control (n = 103). No statistically significant difference was observed between the two groups in age, gender, EP study findings, or ablation parameters. The success rate of ablation was 100% in both groups. During ~16.5 months of follow-up, one patient (1%) in the isoproterenol group and four patients (3.8%) in the control group experienced AVNRT recurrence (HR = 0.245; 95% confidence interval [CI], 0.043-1.418; p = .173). Kaplan-Meier analysis showed no significant difference in the incidence of arrhythmia recurrence during follow-up between the two groups (p = .129). Additionally, arrhythmia recurrence did not differ significantly according to age, gender, junctional rhythm, type of AVNRT, or DAVN persistence after ablation. Although isoproterenol administration for arrhythmia re-induction after ablation did not significantly change treatment outcomes or arrhythmia recurrence following RFA in AVNRT patients, further studies with a larger sample size and longer follow-up are warranted.
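The recurrence comparison described above can be outlined with standard survival-analysis tooling; a minimal sketch assuming a hypothetical follow-up table (the lifelines library is used here as an illustration, not as the authors' stated software).

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data; file and column names are assumptions.
# Expected columns: months (follow-up time), recurrence (0/1), isoproterenol (0/1).
df = pd.read_csv("avnrt_followup.csv")

# Kaplan-Meier curve per arm.
for arm, grp in df.groupby("isoproterenol"):
    KaplanMeierFitter().fit(grp["months"], grp["recurrence"],
                            label=f"isoproterenol={arm}").plot_survival_function()

# Log-rank comparison of the two arms.
ctrl, iso = df[df.isoproterenol == 0], df[df.isoproterenol == 1]
result = logrank_test(iso["months"], ctrl["months"], iso["recurrence"], ctrl["recurrence"])
print(result.p_value)

# Hazard ratio from a Cox proportional hazards model.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="recurrence")
cph.print_summary()
```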


Subject(s)
Catheter Ablation , Tachycardia, Atrioventricular Nodal Reentry , Female , Male , Humans , Middle Aged , Isoproterenol , Tachycardia, Atrioventricular Nodal Reentry/drug therapy , Tachycardia, Atrioventricular Nodal Reentry/surgery , Arrhythmias, Cardiac , Catheter Ablation/adverse effects , Kaplan-Meier Estimate
3.
AMIA Annu Symp Proc ; 2021: 863-871, 2021.
Article in English | MEDLINE | ID: mdl-35308903

ABSTRACT

Background. A key to more efficient scheduling systems is to ensure that appointments are designed to meet patients' needs and that appointment scheduling is simple and less prone to error. Electronic Health Records (EHRs) contain valuable information about patient characteristics and health-care needs. The aim of this study was to use information from structured and unstructured EHR data to redesign appointment scheduling in community health clinics (CHCs). Methods. We used Global Vectors for Word Representation (GloVe), a word embedding approach, on the free-text field "scheduler note" to cluster patients into groups based on similarities in their reasons for appointment. We then redesigned the appointment scheduling template with new appointment types and durations based on the clusters. We compared the current appointment scheduling system with our proposed system by predicting and evaluating clinic performance measures such as patient time spent in clinic and the number of additional patients that could be accommodated. Results. We collected 17,722 encounters from an urban community health clinic in 2014, including 102 unique appointment types recorded in the EHR. Following data processing, word embedding, and clustering, the appointment types were grouped into 10 clusters. The proposed scheduling template could open space to see an additional 716 patients per year overall and decrease patient in-clinic time by 3.6 minutes on average (p-value < 0.0001). Conclusions. We found that word embedding, an NLP approach, can be used to extract information from scheduler notes to improve scheduling systems, and that an unsupervised machine learning approach can be applied to simplify appointment scheduling in CHCs. Patient-centered appointment scheduling can be achieved by simplifying and redesigning appointment types and durations, which could improve performance measures such as availability of appointment time and patient satisfaction.
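A minimal sketch of the embedding-and-clustering step, using pretrained GloVe vectors via gensim and k-means; the example notes, the GloVe variant, and the cluster count below are assumptions (the study reports 10 clusters over real scheduler notes).

```python
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

# Hypothetical scheduler notes; the study used the free-text "scheduler note" field from the EHR.
notes = ["med refill follow up", "new patient physical", "flu like symptoms", "lab review diabetes"]

glove = api.load("glove-wiki-gigaword-50")  # pretrained GloVe vectors (variant assumed)

def embed(text):
    """Average the GloVe vectors of in-vocabulary tokens; zero vector if none match."""
    vecs = [glove[w] for w in text.lower().split() if w in glove.key_to_index]
    return np.mean(vecs, axis=0) if vecs else np.zeros(glove.vector_size)

X = np.vstack([embed(n) for n in notes])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # the paper used 10 clusters
print(list(zip(notes, labels)))
```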


Subject(s)
Ambulatory Care Facilities , Appointments and Schedules , Ambulatory Care , Cluster Analysis , Humans , Patient-Centered Care
4.
BMC Med Inform Decis Mak ; 19(Suppl 3): 73, 2019 04 04.
Article in English | MEDLINE | ID: mdl-30943952

ABSTRACT

BACKGROUND: Osteoporosis has become an important public health issue. Most of the population, particularly elderly people, are at some degree of risk of osteoporosis-related fractures. Accurate identification and surveillance of patient populations with fractures have a significant impact on reducing the cost of care by preventing future fractures and their corresponding complications. METHODS: In this study, we developed a rule-based natural language processing (NLP) algorithm for identification of twenty skeletal site-specific fractures from radiology reports. The rule-based NLP algorithm was based on regular expressions developed using MedTagger, an NLP tool of the Apache Unstructured Information Management Architecture (UIMA) pipeline, to facilitate information extraction from clinical narratives. Radiology notes were retrieved from the Mayo Clinic electronic health records data warehouse. We developed rules for identifying each fracture type according to physicians' knowledge and experience, and refined these rules through verification with physicians. This study was approved by the institutional review board (IRB) for human subject research. RESULTS: We validated the NLP algorithm on the radiology reports of a community-based cohort at Mayo Clinic against a gold standard constructed by medical experts. The micro-averaged sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score of the proposed NLP algorithm were 0.930, 1.0, 1.0, 0.941, and 0.961, respectively. The F1-score was 1.0 for 8 fractures and above 0.9 for 17 of the 20 fractures (85%). CONCLUSIONS: The results verified the effectiveness of the proposed rule-based NLP algorithm in automatically identifying osteoporosis-related, skeletal site-specific fractures from radiology reports. The NLP algorithm could be utilized to accurately identify patients with fractures and those at high risk of future fractures due to osteoporosis. Appropriate care interventions for those patients, not only the most at-risk patients but also those with emerging risk, would significantly reduce future fractures.
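A toy illustration of the kind of regular-expression rule such a pipeline applies; the patterns and example sentence below are assumptions, not the published rule set, which covers twenty skeletal sites and was built with MedTagger/UIMA.

```python
import re

# Simplified site-specific fracture patterns; the published algorithm covers 20 skeletal sites.
PATTERNS = {
    "hip": re.compile(r"\b(?:hip|femoral neck|intertrochanteric)\s+fracture", re.I),
    "vertebral": re.compile(r"\b(?:vertebral|compression)\s+fracture", re.I),
    "wrist": re.compile(r"\b(?:wrist|distal radius|colles'?)\s+fracture", re.I),
}

def find_fractures(report):
    """Return the fracture sites whose patterns match the radiology report text."""
    return [site for site, pat in PATTERNS.items() if pat.search(report)]

print(find_fractures("Impression: acute left femoral neck fracture, no vertebral fracture seen."))
# Note: a real pipeline also needs negation handling ("no ... fracture"),
# e.g. NegEx-style rules as in later entries in this listing.
```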


Subject(s)
Fractures, Bone/classification , Natural Language Processing , Radiology , Aged , Algorithms , Cohort Studies , Electronic Health Records , Female , Humans , Information Storage and Retrieval
5.
J Biomed Inform ; 77: 34-49, 2018 01.
Article in English | MEDLINE | ID: mdl-29162496

ABSTRACT

BACKGROUND: With the rapid adoption of electronic health records (EHRs), it is desirable to harvest information and knowledge from EHRs to support automated systems at the point of care and to enable secondary use of EHRs for clinical and translational research. One critical component for facilitating the secondary use of EHR data is the information extraction (IE) task, which automatically extracts and encodes clinical information from text. OBJECTIVES: In this literature review, we present a review of recently published research on clinical information extraction applications. METHODS: A literature search was conducted for articles published from January 2009 to September 2016 in Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid EMBASE, Scopus, Web of Science, and the ACM Digital Library. RESULTS: A total of 1917 publications were identified for title and abstract screening. Of these, 263 articles were selected and discussed in this review in terms of publication venues and data sources, clinical IE tools, methods, and applications in disease- and drug-related studies and clinical workflow optimization. CONCLUSIONS: Clinical IE has been used for a wide range of applications; however, there is a considerable gap between clinical studies using EHR data and studies using clinical IE. This review enabled us to gain a more concrete understanding of that gap and to propose potential solutions to bridge it.


Subject(s)
Electronic Health Records , Information Storage and Retrieval/methods , Medical Informatics/trends , Humans , Meaningful Use , Natural Language Processing , Research Design
6.
J Biomed Inform ; 63: 379-389, 2016 10.
Article in English | MEDLINE | ID: mdl-27593166

ABSTRACT

In the era of digitalization, information retrieval (IR), which retrieves and ranks documents from large collections according to users' search queries, has been widely applied in the biomedical domain. Building patient cohorts from electronic health records (EHRs) and searching the literature for topics of interest are typical IR use cases. Meanwhile, natural language processing (NLP) techniques such as tokenization and Part-Of-Speech (POS) tagging have been developed for processing clinical documents and biomedical literature. We hypothesize that NLP can be incorporated into IR to strengthen conventional IR models. In this study, we propose two NLP-empowered IR models, POS-BoW and POS-MRF, which incorporate automatic POS-based term weighting schemes into bag-of-words (BoW) and Markov Random Field (MRF) IR models, respectively. In the proposed models, the POS-based term weights are iteratively calculated using a cyclic coordinate method in which a golden-section line search is applied along each coordinate to optimize an objective function defined by mean average precision (MAP). In the empirical experiments, we used data sets from the Medical Records track of the Text REtrieval Conference (TREC) 2011 and 2012 and the Genomics track of TREC 2004. The evaluation on the TREC 2011 and 2012 Medical Records tracks shows that, for the POS-BoW models, the mean improvement rates for the IR evaluation metrics MAP, bpref, and P@10 are 10.88%, 4.54%, and 3.82% compared to the BoW models; for the POS-MRF models, these rates are 13.59%, 8.20%, and 8.78% compared to the MRF models. Additionally, we experimentally verify that the proposed weighting approach is superior to simple heuristic and frequency-based weighting approaches, and we validate our POS category selection. Using the optimal weights calculated in this experiment, we tested the proposed models on the TREC 2004 Genomics track and obtained average improvement rates of 8.63% and 10.04% for POS-BoW and POS-MRF, respectively. These significant improvements verify the effectiveness of leveraging POS tagging for biomedical IR tasks.
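A minimal sketch of POS-based term weighting applied to a bag-of-words score; the weights below are illustrative fixed values, whereas the paper learns them by maximizing MAP with a cyclic coordinate method and golden-section line search (omitted here).

```python
from collections import Counter

# Hypothetical POS-category weights; the paper optimizes these against MAP.
POS_WEIGHTS = {"NOUN": 1.0, "ADJ": 0.6, "VERB": 0.4}
DEFAULT_WEIGHT = 0.2

def pos_weighted_bow_score(tagged_query, document):
    """Score a document against a (term, POS)-tagged query; POS weights replace uniform BoW weights."""
    doc_counts = Counter(document.lower().split())
    return sum(POS_WEIGHTS.get(pos, DEFAULT_WEIGHT) * doc_counts[term.lower()]
               for term, pos in tagged_query)

# Query pre-tagged with any POS tagger (e.g. spaCy or NLTK); tags here are illustrative.
query = [("hearing", "NOUN"), ("loss", "NOUN"), ("impaired", "ADJ")]
doc = "patient reports hearing loss in the left ear ; hearing aid discussed"
print(pos_weighted_bow_score(query, doc))
```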


Subject(s)
Electronic Health Records , Information Storage and Retrieval , Natural Language Processing , Algorithms , Humans , Linguistics
7.
Article in English | MEDLINE | ID: mdl-27570664

ABSTRACT

In the era of precision medicine, accurately identifying familial conditions is crucial for providing targeted treatment. However, it is challenging to identify familial conditions without detailed family history information. In this work, we studied the documentation of family history of premature cardiovascular disease and hypercholesterolemia. The information on patients' family history of stroke in patient-provided information (PPI) forms was compared with the information gathered by clinicians in clinical notes. Agreement between the PPI forms and clinical notes was substantially higher when family history information was absent from the PPI forms than when it was present.

8.
Biomed Inform Insights ; 8(Suppl 1): 13-22, 2016.
Article in English | MEDLINE | ID: mdl-27385912

ABSTRACT

The concept of optimizing health care by understanding and generating knowledge from previous evidence, i.e., the Learning Health-care System (LHS), has gained momentum and now has national prominence. Meanwhile, the rapid adoption of electronic health records (EHRs) enables the data collection required to form the basis for facilitating an LHS. A prerequisite for using EHR data within the LHS is an infrastructure that enables access to EHR data longitudinally for health-care analytics and in real time for knowledge delivery. Additionally, significant clinical information is embedded in free text, making natural language processing (NLP) an essential component of implementing an LHS. Herein, we share our institutional implementation of a big data-empowered clinical NLP infrastructure, which not only enables health-care analytics but also provides real-time NLP processing capability. The infrastructure has been utilized for multiple institutional projects, including MayoExpertAdvisor, an individualized care recommendation solution for clinical care. We compared the big data infrastructure with two other computing environments; it significantly outperformed them in computing speed, demonstrating its value in making the LHS a possibility in the near future.

9.
Stud Health Technol Inform ; 216: 1033-4, 2015.
Article in English | MEDLINE | ID: mdl-26262333

ABSTRACT

In clinical NLP, one major barrier to adopting crowdsourcing for NLP annotation is the confidentiality of protected health information (PHI) in clinical narratives. In this paper, we investigated a frequency-based approach to extracting sentences without PHI. Our approach is based on the assumption that sentences that appear frequently tend to contain no PHI. Both manual and automatic evaluation of 500 sentences sampled from the 7.9 million sentences with a frequency higher than one showed that none of them contained PHI. These promising results suggest that such sentences could be released to obtain sentence-level NLP annotations via crowdsourcing.
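A minimal sketch of the frequency-based filtering idea; the example sentences are invented, and the study worked with roughly 7.9 million unique sentences rather than a handful.

```python
from collections import Counter

# Hypothetical corpus of clinical sentences (the real corpus is millions of sentences).
sentences = [
    "Patient denies chest pain.",
    "Patient denies chest pain.",
    "Follow up in 3 months.",
    "Follow up in 3 months.",
    "John Smith was seen today for abdominal pain.",  # name-bearing sentence, appears only once
]

counts = Counter(sentences)

# Keep only sentences that occur more than once, on the assumption that
# frequently repeated boilerplate sentences are unlikely to contain PHI.
releasable = [s for s, c in counts.items() if c > 1]
print(releasable)
```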


Subject(s)
Crowdsourcing/methods , Data Interpretation, Statistical , Electronic Health Records/classification , Machine Learning , Natural Language Processing , Semantics , Language , Minnesota , Pattern Recognition, Automated/methods , Terminology as Topic , Vocabulary, Controlled
10.
Stud Health Technol Inform ; 216: 604-8, 2015.
Article in English | MEDLINE | ID: mdl-26262122

ABSTRACT

In this study we developed a rule-based natural language processing (NLP) system to identify patients with a family history of pancreatic cancer. The algorithm was developed in an Unstructured Information Management Architecture (UIMA) framework and consisted of section segmentation, relation discovery, and negation detection. The system was evaluated on data from two institutions. The family history identification precision was consistent across institutions, shifting from 88.9% on the Indiana University (IU) dataset to 87.8% on the Mayo Clinic dataset. Customizing the algorithm on the Mayo Clinic data increased its precision to 88.1%. The family member relation discovery achieved precision, recall, and F-measure of 75.3%, 91.6%, and 82.6%, respectively. Negation detection achieved a precision of 99.1%. The results show that rule-based NLP approaches for specific information extraction tasks are portable across institutions; however, customizing the algorithm on a new dataset improves its performance.
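A simplified sketch in the spirit of the section segmentation, relation discovery, and negation steps described above; the patterns and example note are illustrative, not the published rules.

```python
import re

# Illustrative rules: locate the family history section, find a relative, and check for negation cues.
FH_SECTION = re.compile(r"family history:(?P<body>.*?)(?:\n[A-Z][A-Za-z ]+:|$)", re.I | re.S)
RELATIVE = re.compile(r"\b(mother|father|brother|sister|aunt|uncle|grandmother|grandfather)\b", re.I)
NEGATION = re.compile(r"\b(no|denies|negative for)\b", re.I)

def family_pancreatic_cancer(note):
    """Return (relative, negated) pairs for pancreatic cancer mentions in the family history section."""
    section = FH_SECTION.search(note)
    if not section:
        return []
    hits = []
    for sentence in re.split(r"[.;\n]", section.group("body")):
        if "pancreatic cancer" in sentence.lower():
            rel = RELATIVE.search(sentence)
            hits.append((rel.group(0).lower() if rel else "unspecified", bool(NEGATION.search(sentence))))
    return hits

note = "Family History: Mother with pancreatic cancer. Father denies pancreatic cancer.\nSocial History: nonsmoker."
print(family_pancreatic_cancer(note))  # [('mother', False), ('father', True)]
```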


Subject(s)
Electronic Health Records/classification , Information Storage and Retrieval/methods , Medical History Taking/methods , Natural Language Processing , Pancreatic Neoplasms/diagnosis , Pancreatic Neoplasms/genetics , Algorithms , Genetic Predisposition to Disease/epidemiology , Genetic Predisposition to Disease/genetics , Humans , Medical History Taking/statistics & numerical data , Medical Record Linkage , Pancreatic Neoplasms/epidemiology
11.
J Biomed Inform ; 54: 213-9, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25791500

ABSTRACT

In Electronic Health Records (EHRs), much valuable information regarding patients' conditions is embedded in free-text format. Natural language processing (NLP) techniques have been developed to extract clinical information from free text. One challenge faced in clinical NLP is that the meaning of clinical entities is heavily affected by modifiers such as negation. A negation detection algorithm, NegEx, applies a simplistic approach that has been shown to be powerful in clinical NLP. However, because it does not consider the contextual relationship between words within a sentence, NegEx fails to correctly capture the negation status of concepts in complex sentences. Incorrect negation assignment could cause inaccurate diagnosis of patients' conditions or contaminated study cohorts. We developed a negation algorithm called DEEPEN to decrease NegEx's false positives by taking into account the dependency relationships between negation words and concepts within a sentence using the Stanford dependency parser. The system was developed and tested using EHR data from Indiana University (IU) and was further evaluated on a Mayo Clinic dataset to assess its generalizability. The evaluation results demonstrate that DEEPEN, which incorporates dependency parsing into NegEx, can reduce the number of incorrect negation assignments for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs.
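A minimal sketch of the underlying idea: checking whether a negation cue actually governs the concept in the dependency tree before accepting a negation call. It uses spaCy rather than the Stanford parser used by DEEPEN, and the rules are a simplification for illustration, not the DEEPEN algorithm.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

NEG_MARKERS = {"no", "not", "without"}
NEG_VERB_LEMMAS = {"deny"}

def dependency_negated(sentence, concept):
    """Call a single-word concept negated only if a negation cue is attached to it
    (or to its governing verb) in the dependency tree."""
    doc = nlp(sentence)
    for token in doc:
        if token.text.lower() != concept.lower():
            continue
        node = token
        # Walk up the noun-phrase chain; a "no"/"without" attached along the way counts.
        while node.head is not node and node.pos_ != "VERB":
            if any(child.text.lower() in NEG_MARKERS for child in node.children):
                return True
            node = node.head
        # node is now the governing verb (or the root).
        if node.lemma_ in NEG_VERB_LEMMAS or any(
            child.dep_ == "neg" or child.text.lower() in NEG_MARKERS for child in node.children
        ):
            return True
    return False

# A fixed-window approach can over-extend negation scope across clauses;
# the dependency check keeps "shortness" un-negated while "pain" stays negated.
print(dependency_negated("The patient denies chest pain but reports shortness of breath.", "pain"))
print(dependency_negated("The patient denies chest pain but reports shortness of breath.", "shortness"))
```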


Subject(s)
Algorithms , Electronic Health Records , Natural Language Processing , Humans
12.
HPB (Oxford) ; 17(5): 447-53, 2015 May.
Article in English | MEDLINE | ID: mdl-25537257

ABSTRACT

INTRODUCTION: As many as 3% of computed tomography (CT) scans detect pancreatic cysts. Because pancreatic cysts are incidental, ubiquitous, and poorly understood, follow-up is often not performed. Pancreatic cysts may have significant malignant potential, and their identification represents a 'window of opportunity' for the early detection of pancreatic cancer. The purpose of this study was to implement an automated Natural Language Processing (NLP)-based pancreatic cyst identification system. METHOD: A multidisciplinary team was assembled. NLP-based identification algorithms were developed based on key words commonly used by physicians to describe pancreatic cysts and programmed for automated search of electronic medical records. A pilot study was conducted prospectively in a single institution. RESULTS: From March to September 2013, 566,233 reports belonging to 50,669 patients were analysed. The mean number of patients reported with a pancreatic cyst was 88/month (range 78-98). The mean sensitivity and specificity were 99.9% and 98.8%, respectively. CONCLUSION: NLP is an effective tool for automatically identifying patients with pancreatic cysts from electronic medical records (EMR). This highly accurate system can help capture patients 'at risk' of pancreatic cancer in a registry.
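A toy sketch of keyword-based report flagging; the phrase list below is illustrative, whereas the deployed system used key words curated by a multidisciplinary physician team.

```python
import re

# Illustrative key phrases for pancreatic cyst mentions in radiology reports.
CYST_PATTERN = re.compile(
    r"\b(pancreatic\s+cysts?|cystic\s+lesion\s+(?:of|in)\s+the\s+pancreas|IPMN|"
    r"intraductal\s+papillary\s+mucinous\s+neoplasm)\b",
    re.I,
)

def flag_report(report_text):
    """Return True if the radiology report mentions a pancreatic cyst phrase."""
    return bool(CYST_PATTERN.search(report_text))

reports = [
    "CT abdomen: 1.2 cm cystic lesion in the pancreas, likely side-branch IPMN.",
    "CT abdomen: no acute abnormality.",
]
print([flag_report(r) for r in reports])  # [True, False]
```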


Subject(s)
Algorithms , Automation , Early Detection of Cancer/methods , Natural Language Processing , Pancreatic Cyst/diagnosis , Pancreatic Neoplasms/diagnosis , Follow-Up Studies , Humans , Pilot Projects , Reproducibility of Results , Retrospective Studies
13.
Stud Health Technol Inform ; 192: 822-6, 2013.
Article in English | MEDLINE | ID: mdl-23920672

ABSTRACT

Pancreatic cancer is one of the deadliest cancers, mostly diagnosed at late stages. Patients with pancreatic cysts are at higher risk of developing cancer, and their surveillance can help diagnose the disease at earlier stages. In this retrospective study we collected a corpus of 1064 records from 44 patients at Indiana University Hospital from 1990 to 2012. A Natural Language Processing (NLP) system was developed and used to identify patients with pancreatic cysts. The NegEx algorithm was used initially to identify the negation status of concepts, resulting in a precision of 98.9% and a recall of 89%. The Stanford Dependency Parser (SDP) was then used to improve NegEx performance, resulting in a precision of 98.9% and a recall of 95.7%. Features related to pancreatic cysts were also extracted from patient medical records using regular expressions and the NegEx algorithm, with 98.5% precision and 97.43% recall. SDP improved the NegEx algorithm by increasing the recall to 98.12%.
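A minimal sketch of regular-expression feature extraction of the kind described above; the patterns, feature set, and example sentence are assumptions, not the study's implementation.

```python
import re

# Illustrative feature patterns for cyst descriptions (size and location within the pancreas).
SIZE = re.compile(r"(\d+(?:\.\d+)?)\s*(cm|mm)\b", re.I)
LOCATION = re.compile(r"\b(head|neck|body|tail|uncinate process)\b(?:\s+of\s+the\s+pancreas)?", re.I)

def extract_cyst_features(sentence):
    """Pull simple size/location features from a sentence describing a pancreatic cyst."""
    size = SIZE.search(sentence)
    loc = LOCATION.search(sentence)
    return {
        "size": f"{size.group(1)} {size.group(2).lower()}" if size else None,
        "location": loc.group(1).lower() if loc else None,
    }

print(extract_cyst_features("There is a 14 mm cyst in the tail of the pancreas."))
# {'size': '14 mm', 'location': 'tail'}
```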


Subject(s)
Electronic Health Records , Health Records, Personal , Natural Language Processing , Pancreatic Cyst/classification , Pancreatic Cyst/diagnosis , Vocabulary, Controlled , Algorithms , Artificial Intelligence , Data Mining/methods , Decision Support Systems, Clinical , Humans , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
14.
Stud Health Technol Inform ; 192: 1203, 2013.
Article in English | MEDLINE | ID: mdl-23920977

ABSTRACT

Large datasets may contain redundant data. Variable selection methods, which select the most relevant variables in a data set, fail to consider the interactions between variables. Data transformation methods instead map the original data into a new space and capture the most significant information within the data set. The data set used in this study comprised 45 clinical variables collected from 697 patients diagnosed as either having myocardial infarction (MI) or not. Principal component analysis (PCA) and independent component analysis (ICA) were applied prior to classification of patients into MI and non-MI groups using support vector machines (SVMs).
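A minimal sketch of the PCA/ICA-then-SVM comparison on synthetic data standing in for the 697-patient, 45-variable clinical data set; the component counts and kernel are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the clinical data set (697 patients, 45 variables).
X, y = make_classification(n_samples=697, n_features=45, n_informative=10, random_state=0)

for name, transform in [("PCA", PCA(n_components=10)),
                        ("ICA", FastICA(n_components=10, random_state=0))]:
    pipeline = make_pipeline(StandardScaler(), transform, SVC(kernel="rbf"))
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(f"{name} + SVM accuracy: {scores.mean():.3f}")
```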


Subject(s)
Decision Support Systems, Clinical , Diagnosis, Computer-Assisted/methods , Electronic Health Records/statistics & numerical data , Information Storage and Retrieval/methods , Myocardial Infarction/classification , Myocardial Infarction/diagnosis , Principal Component Analysis , Electronic Health Records/classification , Humans , Reproducibility of Results , Sensitivity and Specificity