Results 1 - 20 of 212
1.
JAMA Intern Med ; 2020 Feb 17.
Article in English | MEDLINE | ID: mdl-32065600

ABSTRACT

Importance: Chlorthalidone is currently recommended as the preferred thiazide diuretic to treat hypertension, but no trials have directly compared risks and benefits. Objective: To compare the effectiveness and safety of chlorthalidone and hydrochlorothiazide as first-line therapies for hypertension in real-world practice. Design, Setting, and Participants: This is a Large-Scale Evidence Generation and Evaluation in a Network of Databases (LEGEND) observational comparative cohort study with large-scale propensity score stratification and negative-control and synthetic positive-control calibration on databases spanning January 2001 through December 2018. Outpatient and inpatient care episodes of first-time users of antihypertensive monotherapy in the United States based on 2 administrative claims databases and 1 collection of electronic health records were analyzed. Analysis began June 2018. Exposures: Chlorthalidone and hydrochlorothiazide. Main Outcomes and Measures: The primary outcomes were acute myocardial infarction, hospitalization for heart failure, ischemic or hemorrhagic stroke, and a composite cardiovascular disease outcome including the first 3 outcomes and sudden cardiac death. Fifty-one safety outcomes were measured. Results: Of 730 225 individuals (mean [SD] age, 51.5 [13.3] years; 450 100 women [61.6%]), 36 918 were dispensed or prescribed chlorthalidone and had 149 composite outcome events, and 693 337 were dispensed or prescribed hydrochlorothiazide and had 3089 composite outcome events. No significant difference was found in the associated risk of myocardial infarction, hospitalized heart failure, or stroke, with a calibrated hazard ratio for the composite cardiovascular outcome of 1.00 for chlorthalidone compared with hydrochlorothiazide (95% CI, 0.85-1.17). 
Chlorthalidone was associated with a significantly higher risk of hypokalemia (hazard ratio [HR], 2.72; 95% CI, 2.38-3.12), hyponatremia (HR, 1.31; 95% CI, 1.16-1.47), acute renal failure (HR, 1.37; 95% CI, 1.15-1.63), chronic kidney disease (HR, 1.24; 95% CI, 1.09-1.42), and type 2 diabetes mellitus (HR, 1.21; 95% CI, 1.12-1.30). Chlorthalidone was associated with a significantly lower risk of diagnosed abnormal weight gain (HR, 0.73; 95% CI, 0.61-0.86). Conclusions and Relevance: This study found that chlorthalidone use was not associated with significant cardiovascular benefits compared with hydrochlorothiazide, while it was associated with a greater risk of renal and electrolyte abnormalities. These findings do not support current recommendations to prefer chlorthalidone over hydrochlorothiazide for hypertension treatment in first-time users. We used advanced methods, sensitivity analyses, and diagnostics, but given the possibility of residual confounding and the limited length of observation periods, further study is warranted.
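The large-scale propensity score stratification used in this LEGEND study can be illustrated with a small sketch. Everything here is a toy illustration: the function names, the two-stratum example, and the crude Mantel-Haenszel-style pooling are assumptions for exposition, not the actual LEGEND implementation, which fits regularized propensity models over thousands of covariates and calibrates estimates with negative controls.

```python
# Toy sketch of propensity-score stratification: rank subjects by their
# (precomputed) propensity score, cut into equal-sized strata, and pool a
# crude rate ratio across strata. Hypothetical helper names and data.

def stratify(scores, n_strata=5):
    """Assign each subject to a propensity-score stratum by rank quantile."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    strata = [0] * len(scores)
    size = len(scores) / n_strata
    for rank, idx in enumerate(ranked):
        strata[idx] = min(int(rank / size), n_strata - 1)
    return strata

def stratified_rate_ratio(treated, outcome, strata, n_strata=5):
    """Crude Mantel-Haenszel-style rate ratio pooled across strata."""
    num = den = 0.0
    for s in range(n_strata):
        idx = [i for i, st in enumerate(strata) if st == s]
        t1 = sum(1 for i in idx if treated[i])   # treated subjects in stratum
        t0 = len(idx) - t1                        # comparator subjects
        if t1 == 0 or t0 == 0:
            continue  # stratum contributes nothing without both groups
        e1 = sum(1 for i in idx if treated[i] and outcome[i])
        e0 = sum(1 for i in idx if not treated[i] and outcome[i])
        num += e1 * t0 / len(idx)
        den += e0 * t1 / len(idx)
    return num / den if den else float("nan")
```

Within each stratum the compared groups have similar propensity scores, so confounding by the covariates that drive treatment choice is reduced before the stratum-specific contrasts are pooled.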

2.
Korean Circ J ; 50(1): 52-68, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31642211

ABSTRACT

BACKGROUND AND OBJECTIVES: The 2018 ESC/ESH hypertension guideline recommends a 2-drug combination as initial anti-hypertensive therapy. However, real-world evidence for the effectiveness of the recommended regimens remains limited. We aimed to compare the effectiveness of first-line anti-hypertensive treatments combining 2 of the following classes: angiotensin-converting enzyme (ACE) inhibitors/angiotensin-receptor blockers (A), calcium channel blockers (C), and thiazide-type diuretics (D). METHODS: Treatment-naïve hypertensive adults without cardiovascular disease (CVD) who initiated dual anti-hypertensive medications were identified in 5 databases from the US and Korea. Patients were matched for each comparison set by large-scale propensity score matching. The primary endpoint was all-cause mortality; secondary outcomes were myocardial infarction, heart failure, stroke, and a composite of major adverse cardiac and cerebrovascular events. RESULTS: A total of 987,983 patients met the eligibility criteria. After matching, 222,686, 32,344, and 38,513 patients were allocated to the A+C vs. A+D, C+D vs. A+C, and C+D vs. A+D comparisons, respectively. There was no significant difference in mortality over a total of 1,806,077 person-years: A+C vs. A+D (hazard ratio [HR], 1.08; 95% confidence interval [CI], 0.97-1.20; p=0.127), C+D vs. A+C (HR, 0.93; 95% CI, 0.87-1.01; p=0.067), and C+D vs. A+D (HR, 1.18; 95% CI, 0.95-1.47; p=0.104). A+C was associated with a slightly higher risk of heart failure (HR, 1.09; 95% CI, 1.01-1.18; p=0.040) and stroke (HR, 1.08; 95% CI, 1.01-1.17; p=0.040) than A+D. CONCLUSIONS: There was no significant difference in mortality among the A+C, A+D, and C+D combination treatments in patients without previous CVD. This finding was consistent across multi-national heterogeneous cohorts in real-world practice.

3.
J Biomed Inform ; 102: 103363, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31866433

ABSTRACT

Algorithms for identifying patients of interest from observational data must address missing and inaccurate data and should ideally achieve comparable performance on both administrative claims and electronic health record data. However, administrative claims data do not contain the information necessary to develop accurate algorithms for disorders that require laboratory results, and this omission can result in insensitive diagnostic code-based algorithms. In this paper, we tested our assertion that the performance of a diagnosis code-based algorithm for chronic kidney disorder (CKD) can be improved by adding other codes indirectly related to CKD (e.g., codes for dialysis, kidney transplant, and suspicious kidney disorders). Following best practices from Observational Health Data Sciences and Informatics (OHDSI), we adapted an electronic health record-based gold standard algorithm for CKD and then created algorithms that can be executed on administrative claims data and account for related data quality issues. We externally validated our algorithms on four electronic health record datasets in the OHDSI network. Compared to the algorithm that uses CKD diagnostic codes only, the positive predictive value of the algorithms that use additional codes was slightly higher (47.4% vs. 47.9-48.5%). The algorithms adapted from the gold standard algorithm can be used to infer chronic kidney disorder from administrative claims data. We succeeded in improving the generalizability and consistency of the CKD phenotypes by using data and vocabulary standardized across the OHDSI network, although performance variability across datasets remains. We showed that identifying and addressing coding and data heterogeneity can improve the performance of such algorithms.
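The code-expansion idea in this abstract can be sketched as a simple rule: a claims-based CKD phenotype that fires on direct diagnosis codes or on indirectly related codes (dialysis, transplant). The code lists below are hypothetical placeholders for illustration, not the validated OHDSI concept sets from the paper.

```python
# Hypothetical sketch of a claims-based CKD phenotype with indirect-code
# expansion. The code sets are illustrative, not the study's concept sets.

CKD_DX = {"N18.3", "N18.4", "N18.5"}      # direct CKD-stage diagnosis codes
CKD_RELATED = {"Z99.2", "Z94.0"}           # dialysis dependence, kidney transplant status

def matches_ckd(codes, use_related=True):
    """Return True if a patient's claim codes satisfy the phenotype."""
    code_set = set(codes)
    if code_set & CKD_DX:
        return True
    # Expansion step: also accept codes that imply CKD indirectly.
    return use_related and bool(code_set & CKD_RELATED)
```

Turning `use_related` off reproduces the diagnosis-code-only baseline the paper compares against; turning it on captures patients (e.g., dialysis-dependent) whom the direct codes miss.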

4.
Appl Clin Inform ; 10(5): 849-858, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31694054

ABSTRACT

BACKGROUND: Neurologists perform a significant amount of consultative work. Aggregative electronic health record (EHR) dashboards may help to reduce consultation turnaround time (TAT), which may reflect time spent interfacing with the EHR. OBJECTIVES: This study aimed to measure the difference in TAT before and after the implementation of a neurological dashboard. METHODS: We retrospectively studied a neurological dashboard in a read-only, web-based, clinical data review platform at an academic medical center, separate from our institutional EHR. Using our EHR, we identified all distinct initial neurological consultations at our institution completed in the 5 months before, 5 months after, and 12 months after the dashboard go-live in December 2017. Using log data, we determined total dashboard users, unique page hits, patient-chart accesses, and user departments at 5 months after go-live. We calculated TAT as the difference in time between the placement of the consultation order and completion of the consultation note in the EHR. RESULTS: By April 30, 2018, we identified 269 unique users, 684 dashboard page hits (median hits/user 1.0, interquartile range [IQR] = 1.0), and 510 unique patient-chart accesses. In the 5 months before the go-live, 1,434 neurology consultations were completed with a median TAT of 2.0 hours (IQR = 2.5), significantly longer than in the 5 months after the go-live, when 1,672 neurology consultations were completed with a median TAT of 1.8 hours (IQR = 2.2; p = 0.001). Over the following 7 months, 2,160 consultations were completed and the median TAT remained unchanged at 1.8 hours (IQR = 2.5). CONCLUSION: At a large academic institution, we found a significant decrease in inpatient consult TAT 5 and 12 months after the implementation of a neurological dashboard. Further study is necessary to investigate the cognitive and operational effects of aggregative dashboards in neurology and to optimize their use.
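The TAT metric in this study is straightforward to compute: hours from consult order placement to note completion, summarized by the median. A minimal sketch, with made-up timestamps rather than the study's data:

```python
# Toy computation of consultation turnaround time (TAT) in hours,
# summarized by the median, as defined in the abstract above.
from datetime import datetime
from statistics import median

def tat_hours(order_time, note_time):
    """Hours between consult order placement and note completion."""
    return (note_time - order_time).total_seconds() / 3600

# Hypothetical (order, note-completion) timestamp pairs.
consults = [
    (datetime(2018, 1, 5, 8, 0),  datetime(2018, 1, 5, 9, 30)),   # 1.5 h
    (datetime(2018, 1, 5, 10, 0), datetime(2018, 1, 5, 14, 0)),   # 4.0 h
    (datetime(2018, 1, 6, 7, 0),  datetime(2018, 1, 6, 9, 0)),    # 2.0 h
]
median_tat = median(tat_hours(o, n) for o, n in consults)
```

The median (with IQR) is the right summary here because TAT distributions are typically right-skewed by a few very long consults.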

5.
Lancet ; 394(10211): 1816-1826, 2019 11 16.
Article in English | MEDLINE | ID: mdl-31668726

ABSTRACT

BACKGROUND: Uncertainty remains about the optimal monotherapy for hypertension, with current guidelines recommending any primary agent among the first-line drug classes thiazide or thiazide-like diuretics, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, dihydropyridine calcium channel blockers, and non-dihydropyridine calcium channel blockers, in the absence of comorbid indications. Randomised trials have not further refined this choice. METHODS: We developed a comprehensive framework for real-world evidence that enables comparative effectiveness and safety evaluation across many drugs and outcomes from observational data encompassing millions of patients, while minimising inherent bias. Using this framework, we did a systematic, large-scale study under a new-user cohort design to estimate the relative risks of three primary (acute myocardial infarction, hospitalisation for heart failure, and stroke) and six secondary effectiveness and 46 safety outcomes comparing all first-line classes across a global network of six administrative claims and three electronic health record databases. The framework addressed residual confounding, publication bias, and p-hacking using large-scale propensity adjustment, a large set of control outcomes, and full disclosure of hypotheses tested. FINDINGS: Using 4·9 million patients, we generated 22 000 calibrated, propensity-score-adjusted hazard ratios (HRs) comparing all classes and outcomes across databases. Most estimates revealed no effectiveness differences between classes; however, thiazide or thiazide-like diuretics showed better primary effectiveness than angiotensin-converting enzyme inhibitors: acute myocardial infarction (HR 0·84, 95% CI 0·75-0·95), hospitalisation for heart failure (0·83, 0·74-0·95), and stroke (0·83, 0·74-0·95) risk while on initial treatment. Safety profiles also favoured thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors. 
The non-dihydropyridine calcium channel blockers were significantly inferior to the other four classes. INTERPRETATION: This comprehensive framework introduces a new way of doing observational health-care science at scale. The approach supports equivalence between drug classes for initiating monotherapy for hypertension, in keeping with current guidelines, with the exception of the superiority of thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors and the inferiority of non-dihydropyridine calcium channel blockers. FUNDING: US National Science Foundation, US National Institutes of Health, Janssen Research & Development, IQVIA, South Korean Ministry of Health & Welfare, Australian National Health and Medical Research Council.


Subjects
Antihypertensive Agents/therapeutic use, Hypertension/drug therapy, Adolescent, Adult, Aged, Angiotensin Receptor Antagonists/adverse effects, Angiotensin Receptor Antagonists/therapeutic use, Angiotensin-Converting Enzyme Inhibitors/adverse effects, Angiotensin-Converting Enzyme Inhibitors/therapeutic use, Antihypertensive Agents/adverse effects, Calcium Channel Blockers/adverse effects, Calcium Channel Blockers/therapeutic use, Child, Cohort Studies, Comparative Effectiveness Research/methods, Databases, Factual, Diuretics/adverse effects, Diuretics/therapeutic use, Evidence-Based Medicine/methods, Female, Heart Failure/etiology, Heart Failure/prevention & control, Humans, Hypertension/complications, Male, Middle Aged, Myocardial Infarction/etiology, Myocardial Infarction/prevention & control, Stroke/etiology, Stroke/prevention & control, Young Adult
6.
J Biomed Inform ; 99: 103293, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31542521

ABSTRACT

BACKGROUND: Implementation of phenotype algorithms requires phenotype engineers to interpret human-readable algorithms and translate the description (text and flowcharts) into computable phenotypes - a process that can be labor intensive and error prone. To address the critical need for reducing the implementation efforts, it is important to develop portable algorithms. METHODS: We conducted a retrospective analysis of phenotype algorithms developed in the Electronic Medical Records and Genomics (eMERGE) network and identified common customization tasks required for implementation. A novel scoring system was developed to quantify portability from three aspects: Knowledge conversion, clause Interpretation, and Programming (KIP). Tasks were grouped into twenty representative categories. Experienced phenotype engineers were asked to estimate the average time spent on each category and evaluate time saving enabled by a common data model (CDM), specifically the Observational Medical Outcomes Partnership (OMOP) model, for each category. RESULTS: A total of 485 distinct clauses (phenotype criteria) were identified from 55 phenotype algorithms, corresponding to 1153 customization tasks. In addition to 25 non-phenotype-specific tasks, 46 tasks are related to interpretation, 613 tasks are related to knowledge conversion, and 469 tasks are related to programming. A score between 0 and 2 (0 for easy, 1 for moderate, and 2 for difficult portability) is assigned for each aspect, yielding a total KIP score range of 0 to 6. The average clause-wise KIP score to reflect portability is 1.37 ± 1.38. Specifically, the average knowledge (K) score is 0.64 ± 0.66, interpretation (I) score is 0.33 ± 0.55, and programming (P) score is 0.40 ± 0.64. 5% of the categories can be completed within one hour (median). 70% of the categories take from days to months to complete. The OMOP model can assist with vocabulary mapping tasks.
CONCLUSION: This study presents firsthand knowledge of the substantial implementation efforts in phenotyping and introduces a novel metric (KIP) to measure portability of phenotype algorithms for quantifying such efforts across the eMERGE Network. Phenotype developers are encouraged to analyze and optimize the portability in regards to knowledge, interpretation and programming. CDMs can be used to improve the portability for some 'knowledge-oriented' tasks.
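The KIP metric described above is additive: each clause receives three sub-scores (Knowledge conversion, Interpretation, Programming), each 0 (easy), 1 (moderate), or 2 (difficult), and the clause score is their sum (0-6). A minimal sketch with hypothetical function names and toy clause scores:

```python
# Toy implementation of the KIP portability score from the abstract:
# three sub-scores in {0, 1, 2}, summed per clause, then averaged.

def kip_score(k, i, p):
    """Clause-level KIP score: sum of the three 0-2 sub-scores."""
    for v in (k, i, p):
        if v not in (0, 1, 2):
            raise ValueError("sub-scores must be 0, 1, or 2")
    return k + i + p

def mean_kip(clauses):
    """Average KIP score over (k, i, p) tuples, one per clause."""
    return sum(kip_score(*c) for c in clauses) / len(clauses)
```

Averaging clause scores this way yields the algorithm-level portability summary (e.g., the 1.37 reported above), with higher values indicating harder-to-port algorithms.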

7.
J Biomed Inform ; 97: 103258, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31369862

ABSTRACT

BACKGROUND: The primary approach for defining disease in observational healthcare databases is to construct phenotype algorithms (PAs), rule-based heuristics predicated on the presence, absence, and temporal logic of clinical observations. However, a complete evaluation of PAs, i.e., determining sensitivity, specificity, and positive predictive value (PPV), is rarely performed. In this study, we propose a tool (PheValuator) to efficiently estimate a complete PA evaluation. METHODS: We used 4 administrative claims datasets: OptumInsight's de-identified Clinformatics™ Datamart (Eden Prairie, MN); IBM MarketScan Multi-State Medicaid; IBM MarketScan Medicare Supplemental Beneficiaries; and IBM MarketScan Commercial Claims and Encounters from 2000 to 2017. Using PheValuator involves (1) creating a diagnostic predictive model for the phenotype, (2) applying the model to a large set of randomly selected subjects, and (3) comparing each subject's predicted probability for the phenotype to inclusion/exclusion in PAs. We used the predictions as a 'probabilistic gold standard' measure to classify positive/negative cases. We examined 4 phenotypes: myocardial infarction, cerebral infarction, chronic kidney disease, and atrial fibrillation. We examined several PAs for each phenotype including 1-time (1X) occurrence of the diagnosis code in the subject's record and 1-time occurrence of the diagnosis in an inpatient setting with the diagnosis code as the primary reason for admission (1X-IP-1stPos). RESULTS: Across phenotypes, the 1X PA showed the highest sensitivity/lowest PPV among all PAs. 1X-IP-1stPos yielded the highest PPV/lowest sensitivity. Specificity was very high across algorithms. We found similar results between algorithms across datasets. CONCLUSION: PheValuator appears to show promise as a tool to estimate PA performance characteristics.
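The 'probabilistic gold standard' step can be sketched as expected-count arithmetic: model-predicted probabilities stand in for true labels, and expected true/false positives and negatives accumulate accordingly. The helper name and toy probabilities below are hypothetical; PheValuator itself is an OHDSI R package, and this only illustrates the arithmetic implied by the abstract.

```python
# Sketch of evaluating a phenotype algorithm (PA) against a probabilistic
# gold standard: each subject contributes fractionally to TP/FP/FN/TN in
# proportion to the model's predicted probability of the phenotype.

def expected_performance(probs, pa_flags):
    """probs: predicted phenotype probabilities; pa_flags: PA inclusion."""
    tp = fp = fn = tn = 0.0
    for p, flagged in zip(probs, pa_flags):
        if flagged:
            tp += p        # expected true positives
            fp += 1 - p    # expected false positives
        else:
            fn += p        # expected false negatives
            tn += 1 - p    # expected true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "specificity": tn / (tn + fp),
    }
```

This is why the approach needs no chart review: the expected counts give all three performance characteristics at once, whereas manual adjudication typically yields only PPV.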

8.
Stat Med ; 38(22): 4199-4208, 2019 Sep 30.
Article in English | MEDLINE | ID: mdl-31436848

ABSTRACT

The case-control design is widely used in retrospective database studies, often leading to spectacular findings. However, the results of these studies often cannot be replicated, and the advantage of this design over others is questionable. To demonstrate the shortcomings of applications of this design, we replicate two published case-control studies. The first investigates isotretinoin and ulcerative colitis using a simple case-control design. The second focuses on dipeptidyl peptidase-4 inhibitors and acute pancreatitis, using a nested case-control design. We include large sets of negative control exposures (where the true odds ratio is believed to be 1) in both studies. Both replication studies produce effect size estimates consistent with the original studies, but also generate estimates for the negative control exposures showing substantial residual bias. In contrast, applying a self-controlled design to answer the same questions using the same data reveals far less bias. Although the case-control design in general is not at fault, its application in retrospective database studies, where all exposure and covariate data for the entire cohort are available, is unnecessary, as alternatives such as cohort and self-controlled designs exist. Moreover, by focusing on cases and controls, it opens the door to inappropriate comparisons between exposure groups, leading to confounding for which the design has few options to adjust. We argue that this design should no longer be used in these types of data. At the very least, negative control exposures should be used to demonstrate that the concerns raised here do not apply.
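The negative-control idea above (and the "calibrated" estimates in the hypertension studies earlier in this list) can be sketched as follows. This toy version corrects only the mean bias on the log scale; the real OHDSI approach (the EmpiricalCalibration R package) fits a full systematic-error distribution and calibrates confidence intervals and p-values, so treat this as an assumption-laden simplification.

```python
# Toy empirical calibration: negative control exposures have assumed true
# HR = 1, so their mean log-estimate measures systematic error; subtract
# that bias from the study estimate. Hypothetical function name and data.
import math

def calibrate_hr(study_hr, negative_control_hrs):
    """Shift the study HR by the mean log-bias seen in negative controls."""
    bias = sum(math.log(hr) for hr in negative_control_hrs) / len(negative_control_hrs)
    return math.exp(math.log(study_hr) - bias)
```

If the negative controls all come out near 1.0, the study estimate passes through almost unchanged; if they are systematically elevated, the calibrated estimate shrinks accordingly, which is exactly the residual bias the replications above detected.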

9.
Stud Health Technol Inform ; 264: 1017-1020, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438078

ABSTRACT

Recently, the two most influential clinical guidelines for diagnosing and treating hypertension were published in the US and Europe: the 2017 American College of Cardiology/American Heart Association (ACC/AHA) guideline and the 2018 European Society of Cardiology/European Society of Hypertension (ESC/ESH) guideline. Although the two have much in common, the differences in their details have confused many clinicians around the world. Because guidelines are evidence-based literature, analyzing the articles they cite can explain these similarities and differences. Bibliometric analysis is a method of quantifying the contents of literature. Using bibliometric analysis, including co-citation network analysis, we analyzed the articles cited in each guideline. As a result, we found that bibliometrics can quantify the influence of countries, authors, and studies on the guidelines, which may account for the similarities and differences between them.


Subjects
Hypertension, American Heart Association, Bibliometrics, Cardiology, Europe, Humans, United States
10.
Math Biosci ; 316: 108242, 2019 10.
Article in English | MEDLINE | ID: mdl-31454628

ABSTRACT

One way to interject knowledge into clinically impactful forecasting is to use data assimilation, a nonlinear regression that projects data onto a mechanistic physiologic model, instead of a set of functions, such as neural networks. Such regressions have an advantage of being useful with particularly sparse, non-stationary clinical data. However, physiological models are often nonlinear and can have many parameters, leading to potential problems with parameter identifiability, or the ability to find a unique set of parameters that minimize forecasting error. The identifiability problems can be minimized or eliminated by reducing the number of parameters estimated, but reducing the number of estimated parameters also reduces the flexibility of the model and hence increases forecasting error. We propose a method, the parameter Houlihan, that combines traditional machine learning techniques with data assimilation, to select the right set of model parameters to minimize forecasting error while reducing identifiability problems. The method worked well: the data assimilation-based glucose forecasts and estimates for our cohort using the Houlihan-selected parameter sets generally also minimize forecasting errors compared to other parameter selection methods such as by-hand parameter selection. Nevertheless, the forecast with the lowest forecast error does not always accurately represent physiology, but further advancements of the algorithm provide a path for improving physiologic fidelity as well. Our hope is that this methodology represents a first step toward combining machine learning with data assimilation and provides a lower-threshold entry point for using data assimilation with clinical data by helping select the right parameters to estimate.

11.
J Am Med Inform Assoc ; 26(8-9): 730-736, 2019 Aug 01.
Article in English | MEDLINE | ID: mdl-31365089

ABSTRACT

OBJECTIVE: We sought to assess the quality of race and ethnicity information in observational health databases, including electronic health records (EHRs), and to propose patient self-recording as an improvement strategy. MATERIALS AND METHODS: We assessed completeness of race and ethnicity information in large observational health databases in the United States (Healthcare Cost and Utilization Project and Optum Labs), and at a single healthcare system in New York City serving a racially and ethnically diverse population. We compared race and ethnicity data collected via administrative processes with data recorded directly by respondents via paper surveys (National Health and Nutrition Examination Survey and Hospital Consumer Assessment of Healthcare Providers and Systems). Respondent-recorded data were considered the gold standard for the collection of race and ethnicity information. RESULTS: Among the 160 million patients from the Healthcare Cost and Utilization Project and Optum Labs datasets, race or ethnicity was unknown for 25%. Among the 2.4 million patients in the single New York City healthcare system's EHR, race or ethnicity was unknown for 57%. However, when patients directly recorded their race and ethnicity, 86% provided clinically meaningful information, and 66% of patients reported information that was discrepant with the EHR. DISCUSSION: Race and ethnicity data are critical to support precision medicine initiatives and to determine healthcare disparities; however, the quality of this information in observational databases is concerning. Patient self-recording through the use of patient-facing tools can substantially increase the quality of the information while engaging patients in their health. CONCLUSIONS: Patient self-recording may improve the completeness of race and ethnicity information.

12.
AMIA Jt Summits Transl Sci Proc ; 2019: 145-152, 2019.
Article in English | MEDLINE | ID: mdl-31258966

ABSTRACT

Electronic health records (EHRs) are valuable for defining phenotype selection algorithms used to identify cohorts of patients for sequencing or genome-wide association studies (GWAS). To date, the Electronic Medical Records and Genomics (eMERGE) network institutions have developed and applied such algorithms to identify cohorts with associated DNA samples used to discover new genetic associations. For complex diseases, there are benefits to stratifying cohorts using comorbidities in order to identify their genetic determinants. The objectives of this study were to: (a) characterize comorbidities in a range of phenotype-selected cohorts using the Johns Hopkins Adjusted Clinical Groups® (ACG®) System, (b) assess the frequency of important comorbidities in three commonly studied GWAS phenotypes, and (c) compare the comorbidity characterization of cases and controls. Our analysis demonstrates a framework for characterizing comorbidities using the ACG system and identified differences in mean chronic condition count between GWAS cases and controls. Thus, we believe there is great potential to use the ACG system to characterize comorbidities among genetic cohorts selected based on EHR phenotypes.

13.
J Biomed Inform ; 96: 103253, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31325501

ABSTRACT

BACKGROUND: Implementing clinical phenotypes across a network is labor intensive and potentially error prone. Use of a common data model may facilitate the process. METHODS: Electronic Medical Records and Genomics (eMERGE) sites implemented the Observational Health Data Sciences and Informatics (OHDSI) Observational Medical Outcomes Partnership (OMOP) Common Data Model across their electronic health record (EHR)-linked DNA biobanks. Two previously implemented eMERGE phenotypes were converted to OMOP and implemented across the network. RESULTS: It was feasible to implement the common data model across sites, with laboratory data producing the greatest challenge due to local encoding. Sites were then able to execute the OMOP phenotype in less than one day, as opposed to weeks of effort to manually implement an eMERGE phenotype in their bespoke research EHR databases. Of the sites that could compare the current OMOP phenotype implementation with the original eMERGE phenotype implementation, specific agreement ranged from 100% to 43%, with disagreements due to the original phenotype, the OMOP phenotype, changes in data, and issues in the databases. Using the OMOP query as a standard comparison revealed differences in the original implementations despite starting from the same definitions, code lists, flowcharts, and pseudocode. CONCLUSION: Using a common data model can dramatically speed phenotype implementation at the cost of having to populate that data model, though this will produce a net benefit as the number of phenotype implementations increases. Inconsistencies among the implementations of the original queries point to a potential benefit of using a common data model so that actual phenotype code and logic can be shared, mitigating human error in reinterpretation of a narrative phenotype definition.

15.
Sci Rep ; 9(1): 6077, 2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30988330

ABSTRACT

Benign prostatic hyperplasia (BPH) results in a significant public health burden due to the morbidity caused by the disease and many of the available remedies. As much as 70% of men over 70 will develop BPH. Few studies have been conducted to discover the genetic determinants of BPH risk. Understanding the biological basis for this condition may provide necessary insight for development of novel pharmaceutical therapies or risk prediction. We have evaluated SNP-based heritability of BPH in two cohorts and conducted a genome-wide association study (GWAS) of BPH risk using 2,656 cases and 7,763 controls identified from the Electronic Medical Records and Genomics (eMERGE) network. SNP-based heritability estimates suggest that roughly 60% of the phenotypic variation in BPH is accounted for by genetic factors. We used logistic regression to model BPH risk as a function of principal components of ancestry, age, and imputed genotype data, with meta-analysis performed using METAL. The top result was on chromosome 22 in SYN3 at rs2710383 (p-value = 4.6 × 10-7; Odds Ratio = 0.69, 95% confidence interval = 0.55-0.83). Other suggestive signals were near genes GLGC, UNCA13, SORCS1 and between BTBD3 and SPTLC3. We also evaluated genetically-predicted gene expression in prostate tissue. The most significant result was with increasing predicted expression of ETV4 (chr17; p-value = 0.0015). Overexpression of this gene has been associated with poor prognosis in prostate cancer. In conclusion, although there were no genome-wide significant variants identified for BPH susceptibility, we present evidence supporting the heritability of this phenotype, have identified suggestive signals, and evaluated the association between BPH and genetically-predicted gene expression in prostate.

16.
Appl Clin Inform ; 10(1): 40-50, 2019 01.
Article in English | MEDLINE | ID: mdl-30650448

ABSTRACT

BACKGROUND: Disadvantaged populations, including minorities and the elderly, use patient portals less often than relatively more advantaged populations. Limited access to and experience with technology contribute to these disparities. Free access to devices, the Internet, and technical assistance may eliminate disparities in portal use. OBJECTIVE: To examine predictors of frequent versus infrequent portal use among hospitalized patients who received free access to an iPad, the Internet, and technical assistance. MATERIALS AND METHODS: This subgroup analysis includes 146 intervention-arm participants from a pragmatic randomized controlled trial of an inpatient portal. The participants received free access to an iPad and inpatient portal while hospitalized on medical and surgical cardiac units, together with hands-on help using them. We used logistic regression to identify characteristics predictive of frequent use. RESULTS: More technology experience (adjusted odds ratio [OR] = 5.39, p = 0.049), less severe illness (adjusted OR = 2.07, p = 0.077), and private insurance (adjusted OR = 2.25, p = 0.043) predicted frequent use, with a predictive performance (area under the curve) of 65.6%. No significant differences in age, gender, race, ethnicity, level of education, employment status, or patient activation existed between the frequent and infrequent users in bivariate analyses. Significantly more frequent users noticed medical errors during their hospital stay. DISCUSSION AND CONCLUSION: Portal use was not associated with several sociodemographic characteristics previously found to limit use in the inpatient setting. However, limited technology experience and high illness severity were still barriers to frequent use. Future work should explore additional strategies, such as enrolling health care proxies and improving usability, to reduce potential disparities in portal use.

17.
J Biomed Inform ; 90: 103092, 2019 02.
Article in English | MEDLINE | ID: mdl-30654029
18.
J Am Med Inform Assoc ; 26(2): 115-123, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30534990

ABSTRACT

Objective: To determine the effects of an inpatient portal intervention on patient activation, patient satisfaction, patient engagement with health information, and 30-day hospital readmissions. Methods and Materials: From March 2014 to May 2017, we enrolled 426 English- or Spanish-speaking patients from 2 cardiac medical-surgical units at an urban academic medical center. Patients were randomized to 1 of 3 groups: 1) usual care, 2) tablet with general Internet access (tablet-only), and 3) tablet with an inpatient portal. The primary study outcome was patient activation (Patient Activation Measure-13). Secondary outcomes included all-cause readmission within 30 days, patient satisfaction, and patient engagement with health information. Results: There was no evidence of a difference in patient activation among patients assigned to the inpatient portal intervention compared to usual care or the tablet-only group. Patients in the inpatient portal group had lower 30-day hospital readmissions (5.5% vs. 12.9% tablet-only and 13.5% usual care; P = 0.044). There was evidence of a difference in patient engagement with health information between the inpatient portal and tablet-only groups, including looking up health information online (89.6% vs. 51.8%; P < 0.001). Healthcare providers reported that patients found the portal useful and that the portal did not negatively impact healthcare delivery. Conclusions: Access to an inpatient portal did not significantly improve patient activation, but it was associated with looking up health information online and with a lower 30-day hospital readmission rate. These results illustrate the benefit of providing hospitalized patients with real-time access to their electronic health record data while in the hospital. Trial Registration: ClinicalTrials.gov Identifier: NCT01970852.


Subjects
Inpatients , Patient Participation , Patient Portals , Patient Readmission , Patient Satisfaction , Adult , Aged , Electronic Health Records , Female , Hospitalization , Humans , Male , Middle Aged
19.
J Biomed Inform ; 88: 62-69, 2018 12.
Article in English | MEDLINE | ID: mdl-30414475

ABSTRACT

BACKGROUND: Previous research has developed methods to construct acronym sense inventories from a single institutional corpus. Although beneficial, a sense inventory constructed from a single institutional corpus is not generalizable, because acronyms from different geographic regions and medical specialties vary greatly. OBJECTIVE: To develop an automated method that harmonizes sense inventories from different regions and specialties toward a comprehensive inventory. METHODS: The method integrates multiple source sense inventories into one centralized inventory and cross-maps redundant entries to establish synonymy. To evaluate our method, we integrated 8 well-known source inventories into one comprehensive inventory (or metathesaurus). For both the metathesaurus and its sources, we evaluated the coverage of acronyms and their senses on a corpus of 1 million clinical notes. The corpus came from a different institution, region, and specialty than the source inventories. RESULTS: In the evaluation using clinical notes, the metathesaurus demonstrated an acronym (short form) micro-coverage of 94.3%, representing a substantial increase over the two next largest source inventories, the UMLS LRABR (74.8%) and ADAM (68.0%). The metathesaurus demonstrated a sense (long form) micro-coverage of 99.6%, again a substantial increase compared to the UMLS LRABR (82.5%) and ADAM (55.4%). CONCLUSIONS: Given the high coverage, harmonizing acronym sense inventories is a promising methodology to improve their comprehensiveness. Our method is automated, leverages the extensive resources already devoted to developing institution-specific inventories in the United States, and may help generalize sense inventories to institutions that lack the resources to develop them. Future work should address quality issues in source inventories and explore additional approaches to establishing synonymy.
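The harmonization and micro-coverage evaluation described in this abstract can be sketched as follows. This is a minimal illustration with toy, hypothetical inventory entries, not the paper's actual method or data: source inventories are modeled as dictionaries mapping a short form (acronym) to a set of senses, and micro-coverage is the fraction of acronym occurrences in a corpus whose short form has an entry in the inventory.

```python
from collections import Counter

def merge_inventories(*inventories):
    """Harmonize several sense inventories into one metathesaurus by
    taking the union of the senses recorded for each short form."""
    merged = {}
    for inv in inventories:
        for short_form, senses in inv.items():
            merged.setdefault(short_form, set()).update(senses)
    return merged

def micro_coverage(inventory, corpus_acronyms):
    """Micro-coverage: fraction of acronym *occurrences* (not unique
    acronyms) whose short form appears in the inventory."""
    counts = Counter(corpus_acronyms)
    covered = sum(n for sf, n in counts.items() if sf in inventory)
    total = sum(counts.values())
    return covered / total if total else 0.0

# Toy example: two tiny source inventories with overlapping entries.
source_a = {"RA": {"rheumatoid arthritis"}, "MS": {"multiple sclerosis"}}
source_b = {"RA": {"right atrium"}, "CP": {"chest pain"}}
metathesaurus = merge_inventories(source_a, source_b)

occurrences = ["RA", "RA", "CP", "MS", "XYZ"]  # acronyms found in notes
print(round(micro_coverage(metathesaurus, occurrences), 2))  # 4 of 5 -> 0.8
```

Note how the merged inventory covers occurrences that neither source covers alone, which is the mechanism behind the coverage gains reported above.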


Subjects
Medical Informatics/methods , Pattern Recognition, Automated , Unified Medical Language System , Algorithms , Databases, Factual , Hospitals , Language , Reproducibility of Results , Semantics , Software
20.
J Am Med Inform Assoc ; 25(12): 1618-1625, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30395248

ABSTRACT

Objective: To study the effect on patient cohorts of mapping condition (diagnosis) codes from source billing vocabularies to a clinical vocabulary. Materials and Methods: Nine International Classification of Diseases, Ninth Revision, Clinical Modification (ICD9-CM) concept sets were extracted from eMERGE network phenotypes, translated to Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) concept sets, and applied to patient data that were mapped from source ICD9-CM and ICD10-CM codes to SNOMED CT codes using Observational Health Data Sciences and Informatics (OHDSI) Observational Medical Outcomes Partnership (OMOP) vocabulary mappings. The original ICD9-CM concept set and a concept set extended to ICD10-CM were used to create patient cohorts that served as gold standards. Results: Four phenotype concept sets could be translated to SNOMED CT without ambiguity and performed perfectly with respect to the gold standards. The other 5 lost performance when 2 or more ICD9-CM or ICD10-CM codes mapped to the same SNOMED CT code. The patient cohorts had a total error (false positive and false negative) of up to 0.15% compared to querying ICD9-CM source data and up to 0.26% compared to querying ICD9-CM and ICD10-CM data. Knowledge engineering was required to produce that performance; simple automated methods to generate concept sets had errors up to 10% (one outlier at 250%). Discussion: The translation of data from source vocabularies to SNOMED CT resulted in very small error rates that were an order of magnitude smaller than other error sources.
Conclusion: It appears possible to map diagnoses from disparate vocabularies to a single clinical vocabulary and carry out research using a single set of definitions, thus improving efficiency and transportability of research.
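The failure mode the abstract identifies, where 2 or more source codes map to the same SNOMED CT code, can be illustrated schematically. This sketch uses toy codes and a hand-made mapping rather than the OMOP vocabulary tables themselves: when two ICD codes collapse onto one SNOMED CT concept, a concept set translated to SNOMED CT can capture patients whom the original ICD-defined cohort would not, producing false positives relative to the gold standard.

```python
# Hypothetical many-to-one mapping: two ICD9-CM codes collapse into
# one SNOMED CT concept (real OMOP "Maps to" relationships are far larger).
icd_to_snomed = {
    "250.00": "44054006",   # type 2 diabetes mellitus
    "250.01": "46635009",   # type 1 diabetes mellitus
    "250.02": "44054006",   # also maps to the type 2 concept
}

# Source-vocabulary phenotype definition and its SNOMED CT translation.
icd_concept_set = {"250.00"}
snomed_concept_set = {icd_to_snomed[c] for c in icd_concept_set}

patients = {
    "p1": ["250.00"],   # true member of the ICD-defined cohort
    "p2": ["250.01"],   # different disease, distinct SNOMED concept
    "p3": ["250.02"],   # collapses onto the same SNOMED concept as 250.00
}

# Gold standard: query the source ICD codes directly.
gold = {pid for pid, codes in patients.items()
        if any(c in icd_concept_set for c in codes)}
# Translated cohort: map each patient's codes, then query SNOMED CT.
mapped = {pid for pid, codes in patients.items()
          if any(icd_to_snomed[c] in snomed_concept_set for c in codes)}

false_pos = mapped - gold
false_neg = gold - mapped
print(sorted(false_pos), sorted(false_neg))  # ['p3'] []
```

Patient p3 enters the SNOMED CT cohort only because its ICD code shares a mapping target with 250.00; resolving such collisions is the kind of knowledge engineering the abstract says was needed to reach sub-0.3% error.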


Subjects
International Classification of Diseases , Systematized Nomenclature of Medicine , Humans , Observational Studies as Topic , Vocabulary, Controlled