Results 1 - 20 of 24
1.
NPJ Digit Med ; 6(1): 135, 2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37516790

ABSTRACT

The success of foundation models such as ChatGPT and AlphaFold has spurred significant interest in building similar models for electronic medical records (EMRs) to improve patient care and hospital operations. However, recent hype has obscured critical gaps in our understanding of these models' capabilities. In this narrative review, we examine 84 foundation models trained on non-imaging EMR data (i.e., clinical text and/or structured data) and create a taxonomy delineating their architectures, training data, and potential use cases. We find that most models are trained on small, narrowly-scoped clinical datasets (e.g., MIMIC-III) or broad, public biomedical corpora (e.g., PubMed) and are evaluated on tasks that do not provide meaningful insights on their usefulness to health systems. Considering these findings, we propose an improved evaluation framework for measuring the benefits of clinical foundation models that is more closely grounded to metrics that matter in healthcare.

2.
J Thromb Thrombolysis ; 55(4): 737-741, 2023 May.
Article in English | MEDLINE | ID: mdl-36745322

ABSTRACT

BACKGROUND: Hyponatremia is associated with a negative prognosis in several conditions, such as congestive heart failure and acute myocardial infarction (MI), but its impact on outcomes in patients with pulmonary embolism (PE) is not well studied. We aimed to study the association of hyponatremia with outcomes in patients hospitalized with PE. METHODS: A retrospective cohort study was designed using data obtained from the combined 2016-2019 National Inpatient Sample (NIS) database. Adult patients admitted with PE were identified and stratified based on the presence of hyponatremia. Primary outcomes assessed were mortality, length of stay (LOS), and total hospitalization charges (THC). Secondary outcomes included diagnoses of acute kidney injury (AKI), acute respiratory failure (ARF), sepsis, acute cerebrovascular accident (CVA), arrhythmias, and acute MI. Multivariable linear and logistic regressions were used to adjust for confounders. RESULTS: There were 750,655 adult hospitalizations for PE, of which 41,595 (5.5%) had a secondary diagnosis of hyponatremia. Hyponatremia was associated with increased odds of mortality, 6.31% vs 2.91% (AOR: 1.77, p < 0.001, 95% CI: 1.61-1.92); increased LOS, 6.79 vs 4.20 days (adjusted difference of 2.20 days, p < 0.001, 95% CI: 2.04-2.37); and increased THC, 75,458.95 vs 46,708.27 USD (adjusted difference of 24,341.37 USD, p < 0.001, 95% CI: 21,484.58-27,198.16). The presence of hyponatremia was similarly associated with increased odds of several of the secondary outcomes measured. CONCLUSION: Hyponatremia is associated with increased odds of death and attendant increases in LOS and THC. The odds of several secondary adverse clinical outcomes were also increased.
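The crude (unadjusted) odds ratio implied by the reported mortality proportions can be reproduced in a few lines. A minimal sketch in Python, using only the 6.31% vs 2.91% figures from the abstract; note the study's AOR of 1.77 additionally adjusts for confounders, which this sketch does not.

```python
# Crude odds ratio from the reported mortality proportions
# (6.31% with hyponatremia vs 2.91% without).
p_exposed = 0.0631    # mortality among PE admissions with hyponatremia
p_unexposed = 0.0291  # mortality among PE admissions without hyponatremia

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

crude_or = odds(p_exposed) / odds(p_unexposed)
print(f"Crude OR: {crude_or:.2f}")  # ~2.25, vs the adjusted OR of 1.77
```

The gap between the crude estimate (~2.25) and the adjusted estimate (1.77) illustrates why the study's multivariable adjustment matters.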


Subject(s)
Hyponatremia , Pulmonary Embolism , Adult , Humans , Hyponatremia/complications , Hyponatremia/diagnosis , Hyponatremia/therapy , Retrospective Studies , Hospitalization , Pulmonary Embolism/complications , Pulmonary Embolism/diagnosis , Pulmonary Embolism/therapy , Length of Stay
3.
BMJ Health Care Inform ; 29(1)2022 Oct.
Article in English | MEDLINE | ID: mdl-36220304

ABSTRACT

OBJECTIVES: Few machine learning (ML) models are successfully deployed in clinical practice. One of the common pitfalls across the field is inappropriate problem formulation: designing ML to fit the data rather than to address a real-world clinical pain point. METHODS: We introduce a practical toolkit for user-centred design consisting of four questions covering: (1) solvable pain points, (2) the unique value of ML (eg, automation and augmentation), (3) the actionability pathway and (4) the model's reward function. This toolkit was implemented in a series of six participatory design workshops with care managers in an academic medical centre. RESULTS: Pain points amenable to ML solutions included outpatient risk stratification and risk factor identification. The endpoint definitions, triggering frequency and evaluation metrics of the proposed risk scoring model were directly influenced by care manager workflows and real-world constraints. CONCLUSIONS: Integrating user-centred design early in the ML life cycle is key for configuring models in a clinically actionable way. This toolkit can guide problem selection and influence choices about the technical setup of the ML problem.


Subject(s)
Machine Learning , User-Centered Design , Delivery of Health Care , Humans , Pain , Workflow
4.
JAMA Netw Open ; 5(8): e2227779, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35984654

ABSTRACT

Importance: Various model reporting guidelines have been proposed to ensure clinical prediction models are reliable and fair. However, no consensus exists about which model details are essential to report, and commonalities and differences among reporting guidelines have not been characterized. Furthermore, how well documentation of deployed models adheres to these guidelines has not been studied. Objectives: To assess the information requested by model reporting guidelines and whether the documentation for commonly used machine learning models developed by a single vendor provides that information. Evidence Review: MEDLINE was queried using the terms "machine learning model card" and "reporting machine learning" from November 4 to December 6, 2020. References were reviewed to find additional publications, and publications without specific reporting recommendations were excluded. Similar elements requested for reporting were merged into representative items. Four independent reviewers and 1 adjudicator assessed how often documentation for the most commonly used models developed by a single vendor reported the items. Findings: From 15 model reporting guidelines, 220 unique items were identified that represented the collective reporting requirements. Although 12 items were commonly requested (by 10 or more guidelines), 77 items were requested by just 1 guideline. Documentation for 12 commonly used models from a single vendor reported a median of 39% (IQR, 37%-43%; range, 31%-47%) of the items in the collective reporting requirements. Many of the commonly requested items had 100% reporting rates, including items concerning outcome definition, area under the receiver operating characteristic curve, internal validation, and intended clinical use. Several items related to reliability, such as external validation, uncertainty measures, and the strategy for handling missing data, were reported half the time or less. Other frequently unreported items related to fairness (summary statistics and subgroup analyses, including for race and ethnicity or sex). Conclusions and Relevance: These findings suggest that consistent reporting recommendations for clinical predictive models are needed so model developers can share the information necessary for model deployment. The many published guidelines would, collectively, require reporting more than 200 items. Model documentation from 1 vendor reported the most commonly requested items from model reporting guidelines; however, areas for improvement were identified in items related to model reliability and fairness. This analysis led to feedback to the vendor, which motivated updates to the documentation for future users.
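The merging of requested items across guidelines and the counting of how many guidelines request each can be sketched with a Counter. The guideline names, items, and threshold below are hypothetical placeholders, not the review's actual 15 guidelines or 220 items.

```python
from collections import Counter

# Hypothetical guideline -> requested-items mapping (placeholder data,
# not the review's actual 15 guidelines / 220 items).
guidelines = {
    "guideline_a": {"outcome definition", "AUROC", "external validation"},
    "guideline_b": {"outcome definition", "AUROC", "missing-data strategy"},
    "guideline_c": {"outcome definition", "intended clinical use"},
}

# Count how many guidelines request each unique item.
counts = Counter(item for items in guidelines.values() for item in items)
unique_items = set(counts)

# "Commonly requested" = requested by at least 2 of these toy guidelines
# (the review used a threshold of 10 of 15 guidelines).
common = {item for item, n in counts.items() if n >= 2}

# Reporting rate for one model's documentation against the collective items.
documented = {"outcome definition", "AUROC"}
rate = len(documented & unique_items) / len(unique_items)
print(f"{len(unique_items)} unique items; {len(common)} common; rate {rate:.0%}")
```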


Subject(s)
Models, Statistical , Research Report , Data Collection , Humans , Prognosis , Reproducibility of Results
5.
J Med Internet Res ; 24(6): e36882, 2022 06 17.
Article in English | MEDLINE | ID: mdl-35635840

ABSTRACT

BACKGROUND: The COVID-19 pandemic prompted widespread implementation of telehealth, including in the inpatient setting, with the goals of reducing potential pathogen exposure events and personal protective equipment (PPE) utilization. Nursing workflow adaptations in these novel environments are of particular interest given the association between nursing time at the bedside and patient safety. Understanding the frequency and duration of nurse-patient encounters following the introduction of a novel telehealth platform in the context of COVID-19 may therefore provide insight into downstream impacts on patient safety, pathogen exposure, and PPE utilization. OBJECTIVE: The aim of this study was to evaluate changes in nursing workflow relative to prepandemic levels using a real-time locating system (RTLS) following the deployment of inpatient telehealth on a COVID-19 unit. METHODS: In March 2020, telehealth was installed in patient rooms in a COVID-19 unit and on movable carts in 3 comparison units. The existing RTLS captured nurse movement during 1 pre- and 5 postpandemic stages (January-December 2020). Changes in direct nurse-patient encounters, time spent in patient rooms per encounter, and total time spent with patients per shift relative to baseline were calculated. Generalized linear models assessed difference-in-differences in outcomes between COVID-19 and comparison units. Telehealth adoption was captured and reported at the unit level. RESULTS: Changes in frequency of encounters and time spent per encounter from baseline differed between the COVID-19 and comparison units at all stages of the pandemic (all P<.001). Frequency of encounters decreased (difference-in-differences range -6.6 to -14.1 encounters) and duration of encounters increased (difference-in-differences range 1.8 to 6.2 minutes) from baseline to a greater extent in the COVID-19 units relative to the comparison units.
At most stages of the pandemic, the change in total time nurses spent in patient rooms per patient per shift from baseline did not differ between the COVID-19 and comparison units (all P>.17). The primary COVID-19 unit quickly adopted telehealth technology during the observation period, initiating 15,088 encounters that averaged 6.6 minutes (SD 13.6) each. CONCLUSIONS: RTLS movement data suggest that total nursing time at the bedside remained unchanged following the deployment of inpatient telehealth in a COVID-19 unit. Compared to other units with shared mobile telehealth units, the frequency of nurse-patient in-person encounters decreased and the duration lengthened on a COVID-19 unit with in-room telehealth availability, indicating "batched" redistribution of work to maintain total time at bedside relative to prepandemic periods. The simultaneous adoption of telehealth suggests that virtual care was a complement to, rather than a replacement for, in-person care. However, study limitations preclude our ability to draw a causal link between nursing workflow change and telehealth adoption. Thus, further evaluation is needed to determine potential downstream implications on disease transmission, PPE utilization, and patient safety.
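The difference-in-differences comparison reduces to simple arithmetic over group means: the change in the treated unit net of the change in the comparison units. A sketch with hypothetical encounter counts (the study's generalized linear models additionally provide adjusted inference, which this arithmetic does not).

```python
# Difference-in-differences on mean nurse-patient encounters per shift.
# Numbers are hypothetical, for illustration only.
covid_pre, covid_post = 20.0, 10.0  # COVID-19 unit, before / after
comp_pre, comp_post = 18.0, 15.0    # comparison units, before / after

def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated group net of the change in controls."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

effect = did(covid_pre, covid_post, comp_pre, comp_post)
print(f"DiD estimate: {effect:+.1f} encounters")  # -10 - (-3) = -7.0
```

Netting out the comparison units' change is what lets the estimate absorb pandemic-wide shifts that affected all units.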


Subject(s)
COVID-19 , Nursing Care , Telemedicine , COVID-19/epidemiology , COVID-19/nursing , Hospital Units/organization & administration , Humans , Nursing Care/organization & administration , Pandemics , Telemedicine/organization & administration , Workflow
6.
BMJ Health Care Inform ; 29(1)2022 Apr.
Article in English | MEDLINE | ID: mdl-35396247

ABSTRACT

OBJECTIVES: The American College of Cardiology and the American Heart Association guidelines on primary prevention of atherosclerotic cardiovascular disease (ASCVD) recommend using 10-year ASCVD risk estimation models to initiate statin treatment. For guideline-concordant decision-making, risk estimates need to be calibrated. However, existing models are often miscalibrated for race, ethnicity and sex based subgroups. This study evaluates two algorithmic fairness approaches to adjust the risk estimators (group recalibration and equalised odds) for their compatibility with the assumptions underpinning the guidelines' decision rules. METHODS: Using an updated pooled cohorts data set, we derive unconstrained, group-recalibrated and equalised odds-constrained versions of the 10-year ASCVD risk estimators, and compare their calibration at guideline-concordant decision thresholds. RESULTS: We find that, compared with the unconstrained model, group-recalibration improves calibration at one of the relevant thresholds for each group, but exacerbates differences in false positive and false negative rates between groups. An equalised odds constraint, meant to equalise error rates across groups, does so by miscalibrating the model overall and at relevant decision thresholds. DISCUSSION: Hence, because of induced miscalibration, decisions guided by risk estimators learned with an equalised odds fairness constraint are not concordant with existing guidelines. Conversely, recalibrating the model separately for each group can increase guideline compatibility, while increasing intergroup differences in error rates. As such, comparisons of error rates across groups can be misleading when guidelines recommend treating at fixed decision thresholds. CONCLUSION: The illustrated tradeoffs between satisfying a fairness criterion and retaining guideline compatibility underscore the need to evaluate models in the context of downstream interventions.
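Recalibration-in-the-large per group, shifting each group's predictions so the mean predicted risk matches the group's observed event rate, is one of the simplest forms of the group recalibration the study evaluates. A sketch with hypothetical risks and outcomes; the paper recalibrates the full pooled cohort equations, which involves more than an additive shift.

```python
# Per-group recalibration-in-the-large: shift each group's predicted
# risks so the mean prediction equals the group's observed event rate.
# Data are hypothetical.
groups = {
    "group_a": {"pred": [0.10, 0.20, 0.30], "events": [0, 0, 1]},
    "group_b": {"pred": [0.05, 0.15, 0.40], "events": [0, 1, 1]},
}

recalibrated = {}
for name, g in groups.items():
    mean_pred = sum(g["pred"]) / len(g["pred"])
    obs_rate = sum(g["events"]) / len(g["events"])
    shift = obs_rate - mean_pred
    # Clamp so shifted values remain valid probabilities.
    recalibrated[name] = [min(1.0, max(0.0, p + shift)) for p in g["pred"]]

for name, preds in recalibrated.items():
    print(name, [round(p, 3) for p in preds])
```

After the shift, each group's mean prediction equals its observed rate, which is exactly the calibration property a fixed treatment threshold relies on; it says nothing, however, about error-rate parity between the groups, which is the tradeoff the abstract highlights.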


Subject(s)
Atherosclerosis , Cardiology , Cardiovascular Diseases , Hydroxymethylglutaryl-CoA Reductase Inhibitors , American Heart Association , Atherosclerosis/drug therapy , Atherosclerosis/prevention & control , Cardiovascular Diseases/prevention & control , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , United States
7.
Appl Clin Inform ; 13(1): 315-321, 2022 01.
Article in English | MEDLINE | ID: mdl-35235994

ABSTRACT

BACKGROUND: One key aspect of a learning health system (LHS) is utilizing data generated during care delivery to inform clinical care. However, institutional guidelines that utilize observational data are rare and require months to create, making current processes impractical for more urgent scenarios such as those posed by the COVID-19 pandemic. There is a need to rapidly analyze institutional data to drive guideline creation where evidence from randomized controlled trials is unavailable. OBJECTIVES: This article provides background on the current state of observational data use in institutional guideline creation and details our institution's experience in creating a novel workflow to (1) demonstrate the value of such a workflow, (2) demonstrate a real-world example, and (3) discuss difficulties encountered and future directions. METHODS: Utilizing a multidisciplinary team of database specialists, clinicians, and informaticists, we created a workflow for identifying a clinical need, translating it into a queryable format in our clinical data warehouse, creating data summaries, and feeding this information back into clinical guideline creation. RESULTS: Clinical questions posed by the hospital medicine division were answered in a rapid time frame and informed the creation of institutional guidelines for the care of patients with COVID-19. Setting up the workflow, answering the questions, and producing data summaries required around 300 hours of effort and US $300,000. CONCLUSION: A key component of an LHS is the ability to learn from data generated during care delivery. Such examples are rare in the literature; we demonstrate one and offer thoughts on ideal multidisciplinary team formation and deployment.


Subject(s)
COVID-19 , Learning Health System , COVID-19/epidemiology , Humans , Observational Studies as Topic , Pandemics , Practice Guidelines as Topic , Workflow
8.
J Am Med Inform Assoc ; 28(10): 2258-2264, 2021 09 18.
Article in English | MEDLINE | ID: mdl-34350942

ABSTRACT

Using a risk stratification model to guide clinical practice often requires the choice of a cutoff-called the decision threshold-on the model's output to trigger a subsequent action such as an electronic alert. Choosing this cutoff is not always straightforward. We propose a flexible approach that leverages the collective information in treatment decisions made in real life to learn reference decision thresholds from physician practice. Using the example of prescribing a statin for primary prevention of cardiovascular disease based on 10-year risk calculated by the 2013 pooled cohort equations, we demonstrate the feasibility of using real-world data to learn the implicit decision threshold that reflects existing physician behavior. Learning a decision threshold in this manner allows for evaluation of a proposed operating point against the threshold reflective of the community standard of care. Furthermore, this approach can be used to monitor and audit model-guided clinical decision making following model deployment.
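One simple way to recover an implicit threshold from observed prescribing is to scan candidate cutoffs and keep the one that best agrees with physicians' actual treat/no-treat decisions. A sketch with hypothetical (risk, treated) pairs; the paper's actual estimation approach may differ.

```python
# Learn an implicit decision threshold from observed treatment decisions:
# pick the cutoff on predicted risk that maximizes agreement with what
# physicians actually did. Data are hypothetical.
observations = [  # (10-year ASCVD risk, statin prescribed?)
    (0.02, False), (0.04, False), (0.06, False),
    (0.08, True), (0.12, True), (0.20, True),
]

def agreement(threshold):
    """Fraction of decisions matched by 'treat if risk >= threshold'."""
    return sum((risk >= threshold) == treated
               for risk, treated in observations) / len(observations)

candidates = sorted({risk for risk, _ in observations})
best = max(candidates, key=agreement)
print(f"Implied threshold: {best:.2f} (agreement {agreement(best):.0%})")
```

A proposed operating point for an alert can then be compared against this empirically learned cutoff, which is the audit use the abstract describes.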


Subject(s)
Cardiovascular Diseases , Clinical Decision-Making , Humans , Risk Assessment
9.
J Am Med Inform Assoc ; 28(11): 2445-2450, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34423364

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) and machine learning (ML) enabled healthcare is now feasible for many health systems, yet little is known about effective strategies of system architecture and governance mechanisms for implementation. Our objective was to identify the different computational and organizational setups that early-adopter health systems have utilized to integrate AI/ML clinical decision support (AI-CDS) and scrutinize their trade-offs. MATERIALS AND METHODS: We conducted structured interviews with health systems with AI deployment experience about their organizational and computational setups for deploying AI-CDS at point of care. RESULTS: We contacted 34 health systems and interviewed 20 healthcare sites (58% response rate). Twelve (60%) sites used the native electronic health record vendor configuration for model development and deployment, making it the most common shared infrastructure. Nine (45%) sites used alternative computational configurations which varied significantly. Organizational configurations for managing AI-CDS were distinguished by how they identified model needs, built and implemented models, and were separable into 3 major types: Decentralized translation (n = 10, 50%), IT Department led (n = 2, 10%), and AI in Healthcare (AIHC) Team (n = 8, 40%). DISCUSSION: No singular computational configuration enables all current use cases for AI-CDS. Health systems need to consider their desired applications for AI-CDS and whether investment in extending the off-the-shelf infrastructure is needed. Each organizational setup confers trade-offs for health systems planning strategies to implement AI-CDS. CONCLUSION: Health systems will be able to use this framework to understand strengths and weaknesses of alternative organizational and computational setups when designing their strategy for artificial intelligence.


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Delivery of Health Care , Health Facilities , Machine Learning
10.
J Med Internet Res ; 23(7): e29240, 2021 07 26.
Article in English | MEDLINE | ID: mdl-34236993

ABSTRACT

BACKGROUND: Telemedicine has been deployed by health care systems in response to the COVID-19 pandemic to enable health care workers to provide remote care for both outpatients and inpatients. Although it is reasonable to suspect telemedicine visits limit unnecessary personal contact and thus decrease the risk of infection transmission, the impact of the use of such technology on clinician workflows in the emergency department is unknown. OBJECTIVE: This study aimed to use a real-time locating system (RTLS) to evaluate the impact of a new telemedicine platform, which permitted clinicians located outside patient rooms to interact with patients who were under isolation precautions in the emergency department, on in-person interaction between health care workers and patients. METHODS: A pre-post analysis was conducted using a badge-based RTLS platform to collect movement data including entrances and duration of stay within patient rooms of the emergency department for nursing and physician staff. Movement data was captured between March 2, 2020, the date of the first patient screened for COVID-19 in the emergency department, and April 20, 2020. A new telemedicine platform was deployed on March 29, 2020. The number of entrances and duration of in-person interactions per patient encounter, adjusted for patient length of stay, were obtained for pre- and postimplementation phases and compared with t tests to determine statistical significance. RESULTS: There were 15,741 RTLS events linked to 2662 encounters for patients screened for COVID-19. There was no significant change in the number of in-person interactions between the pre- and postimplementation phases for both nurses (5.7 vs 7.0 entrances per patient, P=.07) and physicians (1.3 vs 1.5 entrances per patient, P=.12). 
Total duration of in-person interactions did not change (56.4 vs 55.2 minutes per patient, P=.74) despite significant increases in telemedicine videoconference frequency (0.6 vs 1.3 videoconferences per patient, P<.001 for change in daily average) and duration (4.3 vs 12.3 minutes per patient, P<.001 for change in daily average). CONCLUSIONS: Telemedicine was rapidly adopted with the intent of minimizing pathogen exposure to health care workers during the COVID-19 pandemic, yet RTLS movement data did not reveal significant changes for in-person interactions between staff and patients under investigation for COVID-19 infection. Additional research is needed to better understand how telemedicine technology may be better incorporated into emergency departments to improve workflows for frontline health care clinicians.


Subject(s)
COVID-19/diagnosis , COVID-19/prevention & control , Emergency Service, Hospital/organization & administration , Health Personnel/organization & administration , Telemedicine , Workflow , COVID-19/epidemiology , Cross Infection/prevention & control , Humans , Pandemics , SARS-CoV-2 , Time Factors
12.
NPJ Digit Med ; 3: 95, 2020.
Article in English | MEDLINE | ID: mdl-32695885

ABSTRACT

There is substantial interest in using presenting symptoms to prioritize testing for COVID-19 and establish symptom-based surveillance. However, little is currently known about the specificity of COVID-19 symptoms. To assess the feasibility of symptom-based screening for COVID-19, we used data from tests for common respiratory viruses and SARS-CoV-2 in our health system to measure the ability to correctly classify virus test results based on presenting symptoms. Based on these results, symptom-based screening may not be an effective strategy to identify individuals who should be tested for SARS-CoV-2 infection or to obtain a leading indicator of new COVID-19 cases.
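Assessing symptom-based screening comes down to classification metrics from a 2x2 table of symptom status against test result. A sketch with hypothetical counts (the study evaluated classification over multiple respiratory viruses, not a single table).

```python
# Sensitivity, specificity, and PPV of a symptom-based screen from a
# 2x2 table of symptom status vs SARS-CoV-2 test result.
# Counts are hypothetical.
tp, fp = 80, 400  # symptomatic & test-positive, symptomatic & test-negative
fn, tn = 20, 500  # asymptomatic & test-positive, asymptomatic & test-negative

sensitivity = tp / (tp + fn)  # 0.80
specificity = tn / (tn + fp)  # ~0.56
ppv = tp / (tp + fp)          # ~0.17

print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, PPV {ppv:.2f}")
```

A screen with this profile flags most true cases but also a large share of non-cases, which is the low-specificity problem the abstract points to.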

13.
J Am Med Inform Assoc ; 27(7): 1102-1109, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32495830

ABSTRACT

OBJECTIVE: To reduce pathogen exposure, conserve personal protective equipment, and facilitate health care personnel work participation in the setting of the COVID-19 pandemic, three affiliated institutions rapidly and independently deployed inpatient telemedicine programs during March 2020. We describe key features and early learnings of these programs in the hospital setting. METHODS: Relevant clinical and operational leadership from an academic medical center, a pediatric teaching hospital, and a safety-net county health system met to share learnings shortly after deploying inpatient telemedicine. A summative analysis of their learnings was recirculated for approval. RESULTS: All three institutions faced pressure to urgently stand up new telemedicine systems while still maintaining secure information exchange. Differences across patient demographics and technological capabilities led to variation in solution design, though key technical considerations were similar. Rapid deployment in each system relied on readily available consumer-grade technology, given its existing familiarity to patients and clinicians and the minimal infrastructure investment required. Preliminary data from the academic medical center over one month suggested positive adoption, with 631 inpatient video calls lasting an average (standard deviation) of 16.5 (19.6) minutes, based on inclusion criteria. DISCUSSION: The threat of an imminent surge of COVID-19 patients drove three institutions to rapidly develop inpatient telemedicine solutions. Concurrently, federal and state regulators temporarily relaxed restrictions that would have previously limited these efforts. Strategic direction from executive leadership, leveraging off-the-shelf hardware, vendor engagement, and clinical workflow integration facilitated rapid deployment. CONCLUSION: The rapid deployment of inpatient telemedicine is feasible across diverse settings as a response to the COVID-19 pandemic.


Subject(s)
Betacoronavirus , Coronavirus Infections/therapy , Inpatients , Pneumonia, Viral/therapy , Telemedicine , Academic Medical Centers , COVID-19 , California , Computers, Handheld , Coronavirus Infections/epidemiology , Coronavirus Infections/transmission , Hospitals, County , Hospitals, Pediatric , Humans , Infectious Disease Transmission, Patient-to-Professional/prevention & control , Pandemics , Pneumonia, Viral/epidemiology , Pneumonia, Viral/transmission , SARS-CoV-2 , Safety-net Providers , Teaching Rounds
14.
Drug Deliv Transl Res ; 10(5): 1341-1352, 2020 10.
Article in English | MEDLINE | ID: mdl-31994025

ABSTRACT

Mucopolysaccharidosis IVA (Morquio A disease) is a genetic disorder caused by deficiency of N-acetylgalactosamine-6-sulfate-sulfatase (GALNS), leading to accumulation of keratan sulfate and chondroitin-6-sulfate in lysosomes. Many patients become wheelchair-dependent as teens, and their life span is 20-30 years. Currently, enzyme replacement therapy (ERT) is the treatment of choice. Although it alleviates some symptoms, replacing GALNS enzyme poses several challenges including very fast clearance from circulation and instability at 37 °C. These constraints affect frequency and cost of enzyme infusion and ability to reach all tissues. In this study, we developed injectable and biodegradable polyethylene glycol (PEG) hydrogels, loaded with recombinant human GALNS (rhGALNS) to improve enzyme stability and bioavailability, and to sustain release. We established the enzyme's release profile via bulk release experiments and determined diffusivity using fluorescence correlation spectroscopy. We observed that PEG hydrogels preserved enzyme activity during sustained release for 7 days. In the hydrogel, rhGALNS diffused almost four times slower than in buffer. We further confirmed that the enzyme was active when released from the hydrogels, by measuring its uptake in patient fibroblasts. The developed hydrogel delivery device could overcome current limits of rhGALNS replacement and improve quality of life for Morquio A patients. Encapsulated GALNS enzyme in a polyethylene glycol hydrogel improves GALNS stability by preserving its activity, and provides sustained release for a period of at least 7 days.


Subject(s)
Chondroitinsulfatases , Mucopolysaccharidosis IV , Chondroitinsulfatases/therapeutic use , Delayed-Action Preparations/therapeutic use , Humans , Hydrogels , Mucopolysaccharidosis IV/drug therapy , Polyethylene Glycols , Quality of Life , Recombinant Proteins/therapeutic use
15.
J Am Geriatr Soc ; 65(5): 1061-1066, 2017 May.
Article in English | MEDLINE | ID: mdl-28182265

ABSTRACT

OBJECTIVE: To examine the association between long-term metformin therapy and serum vitamin B12 monitoring. DESIGN: Retrospective cohort study. SETTING: A single Veterans Affairs Medical Center (VAMC), 2002-2012. PARTICIPANTS: Veterans 50 years or older with either type 2 diabetes and long-term metformin therapy (n = 3,687) or without diabetes and no prescription for metformin (n = 13,258). MEASUREMENTS: We determined diabetes status from outpatient visits and defined long-term metformin therapy as a prescription of ≥500 mg/d for at least six consecutive months. We estimated the proportion of participants who received a serum B12 test and used multivariable logistic regression, stratified by age, to evaluate the association between metformin use and serum B12 testing. RESULTS: Only 37% of older adults with diabetes receiving metformin were tested for vitamin B12 status after long-term metformin prescription. The mean B12 concentration was significantly lower in the metformin-exposed group (439.2 pg/mL) than in those without diabetes (522.4 pg/mL) (P = .0015). About 7% of persons with diabetes receiving metformin were vitamin B12 deficient (<170 pg/mL), compared to 3% of persons without diabetes or metformin use (P = .0001). Depending on their age, metformin users were two to three times more likely not to receive vitamin B12 testing than those without metformin exposure, after adjusting for sex, race and ethnicity, body mass index, and number of years treated at the VAMC. CONCLUSION: Long-term metformin therapy is significantly associated with lower serum vitamin B12 concentration, yet those at risk are often not monitored for B12 deficiency. Because metformin is first-line therapy for type 2 diabetes, clinical decision support should be considered to promote serum B12 monitoring among long-term metformin users for timely identification of the potential need for B12 replacement.


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Metformin/therapeutic use , Veterans/statistics & numerical data , Vitamin B 12 Deficiency/chemically induced , Diabetes Mellitus, Type 2/blood , Diabetes Mellitus, Type 2/complications , Female , Hospitals, Veterans , Humans , Male , Middle Aged , Retrospective Studies , Vitamin B 12 Deficiency/blood
16.
Arch Med Sci ; 12(4): 728-35, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27478452

ABSTRACT

INTRODUCTION: The aim of the study was to examine changes in carotid intima-media thickness (CIMT) and carotid plaque morphology in patients receiving multifactorial cardiovascular disease (CVD) risk factor management in a community-based prevention clinic. Quantitative changes in CIMT and qualitative changes in carotid plaque morphology may be measured non-invasively by ultrasound. MATERIAL AND METHODS: This is a retrospective study on a cohort of 324 patients who received multifactorial cardiovascular risk reduction treatment at a community prevention clinic. All patients received lipid-lowering medications (statin, niacin, and/or ezetimibe) and lifestyle modification. All patients underwent at least one follow-up CIMT measurement after starting their regimen. Annual biomarker, CIMT, and plaque measurements were analyzed for associations with CVD risk reduction treatment. RESULTS: Median time to last CIMT was 3.0 years. Compared to baseline, follow-up analysis of all treatment groups at 2 years showed a 52.7% decrease in max CIMT, a 3.0% decrease in mean CIMT, and an 87.0% decrease in the difference between max and mean CIMT (p < 0.001). Plaque composition changes occurred, including a decrease in lipid-rich plaques of 78.4% within the first 2 years (p < 0.001). After the first 2 years, CIMT and lipid-rich plaques continued to decline at reduced rates. CONCLUSION: In a cohort of patients receiving comprehensive CVD risk reduction therapy, delipidation of subclinical carotid plaque and reductions in CIMT predominantly occurred within 2 years, and correlated with changes in traditional biomarkers. These observations, generated from existing clinical data, provide unique insight into the longitudinal on-treatment changes in carotid plaque.

17.
Am J Cardiol ; 117(9): 1474-81, 2016 May 01.
Article in English | MEDLINE | ID: mdl-27001449

ABSTRACT

Heart failure with preserved ejection fraction (HFpEF) is a prevalent condition with no established prevention or treatment strategies. Furthermore, the pathophysiology and predisposing risk factors for HFpEF are incompletely understood. Therefore, we sought to characterize the incidence and determinants of HFpEF in the Multi-Ethnic Study of Atherosclerosis (MESA). Our study included 6,781 MESA participants (White, Black, Chinese, and Hispanic men and women age 45 to 84 years, free of baseline cardiovascular disease). The primary end point was time to diagnosis of HFpEF (left ventricular ejection fraction ≥45%). Multivariable adjusted hazard ratios (HRs) with 95% confidence intervals were calculated to identify predictors of HFpEF. Over median follow-up of 11.2 years (10.6 to 11.7), 111 subjects developed HFpEF (cumulative incidence 1.7%). Incidence rates were similar across all races/ethnicities. Age (HR 2.3 [1.7 to 3.0]), hypertension (HR 1.8 [1.1 to 2.9]), diabetes (HR 2.3 [1.5 to 3.7]), body mass index (HR 1.4 [1.1 to 1.7]), left ventricular hypertrophy by electrocardiography (HR 4.3 [1.7 to 11.0]), interim myocardial infarction (HR 4.8 [2.7 to 8.6]), elevated N-terminal of the prohormone brain natriuretic peptide (HR 2.4 [1.5 to 4.0]), detectable troponin T (HR 4.5 [1.9 to 10.9]), and left ventricular mass index by magnetic resonance imaging (MRI; 1.3 [1.0 to 1.6]) were significant predictors of incident HFpEF. Worsening renal function, inflammatory markers, and coronary artery calcium were significant univariate but not multivariate predictors of HFpEF. Gender was neither a univariate nor multivariate predictor of HFpEF. In conclusion, we demonstrate several risk factors and biomarkers associated with incident HFpEF that were consistent across different racial/ethnic groups and may represent potential therapeutic targets for the prevention and treatment of HFpEF.


Subject(s)
Atherosclerosis/ethnology , Ethnicity/statistics & numerical data , Heart Failure/ethnology , Stroke Volume/physiology , White People/statistics & numerical data , Aged , Aged, 80 and over , Atherosclerosis/blood , Biomarkers/blood , Cohort Studies , Female , Heart Failure/blood , Heart Failure/physiopathology , Humans , Incidence , Male , Middle Aged , United States
18.
Atherosclerosis ; 246: 367-73, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26841074

ABSTRACT

AIMS: Patients with a low lifetime risk of coronary heart disease (CHD) are not completely free of events over 10 years. We evaluated predictors for CHD among "low lifetime risk" participants in the population-based Multi-Ethnic Study of Atherosclerosis (MESA). METHODS: MESA enrolled 6,814 men and women aged 45-84 years who were free of baseline cardiovascular disease. Using established criteria of non-diabetic, non-smokers with total cholesterol ≤ 200 mg/dL, systolic BP ≤ 139 mmHg, and diastolic BP ≤ 89 mmHg at baseline, we identified 1,391 participants with a low lifetime risk for cardiovascular disease. Baseline covariates were age, gender, ethnicity, HDL-C, C-reactive protein, family history of CHD, carotid intima-media thickness, and coronary artery calcium (CAC). We calculated event rates and the number needed to scan (NNS) to identify one participant with CAC>0 and CAC>100. RESULTS: Over 10.4 years median follow-up, there were 33 events (2.4%) in participants with low lifetime risk. There were 479 participants (34%) with CAC>0, including 183 (13%) with CAC>100. CAC was present in 25 (76%) participants who experienced an event. In multivariable analyses, only CAC>100 remained predictive of CHD (HR 4.6; 95% CI: 1.6-13.6; p = 0.005). The event rates for CAC = 0, CAC>0, and CAC>100 were 0.9/1,000, 5.7/1,000, and 11.0/1,000 person-years, respectively. The NNS to identify one participant with CAC>0 and CAC>100 were 3 and 7.6, respectively. CONCLUSIONS: While 10-year event rates were low in those with low lifetime risk, CAC was the strongest predictor of incident CHD. Identification of individuals with CAC = 0 and CAC>100 carries significant potential therapeutic implications.
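The "number needed to scan" figures in this abstract follow directly from the reported counts: NNS is the reciprocal of the prevalence of the CAC finding among those scanned. A minimal sketch, using only the counts stated in the abstract (variable names are our own):

```python
# Back-of-the-envelope check of the abstract's NNS figures.
# NNS = participants scanned / participants with the finding.
low_risk_n = 1391    # low-lifetime-risk participants scanned
cac_gt0_n = 479      # of those, participants with CAC > 0
cac_gt100_n = 183    # of those, participants with CAC > 100

def nns(positives: int, scanned: int) -> float:
    """Average number of scans needed to find one positive result."""
    return scanned / positives

print(round(nns(cac_gt0_n, low_risk_n), 1))    # ≈ 2.9, reported as 3
print(round(nns(cac_gt100_n, low_risk_n), 1))  # 7.6, matching the abstract
```

The two printed values (about 2.9 and 7.6) reproduce the reported NNS of 3 and 7.6.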


Subject(s)
Computed Tomography Angiography , Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Vascular Calcification/diagnostic imaging , Aged , Aged, 80 and over , Asymptomatic Diseases , Carotid Intima-Media Thickness , Coronary Artery Disease/ethnology , Coronary Artery Disease/mortality , Female , Humans , Incidence , Male , Middle Aged , Multivariate Analysis , Predictive Value of Tests , Prognosis , Proportional Hazards Models , Risk Assessment , Risk Factors , Severity of Illness Index , Time Factors , United States/epidemiology , Vascular Calcification/ethnology , Vascular Calcification/mortality
19.
AJR Am J Roentgenol ; 201(5): W678-82, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24147496

ABSTRACT

OBJECTIVE: The Schatzki ring was named for Richard Schatzki, a renowned radiologist who described the entity with his colleague, John E. Gary. The purpose of this article is to shed more light on a man who made such a significant contribution and to chronicle developments concerning this important radiologic finding. CONCLUSION: The Schatzki ring was described long ago, but its cause is poorly understood even today.


Subject(s)
Deglutition Disorders/history , Esophageal Diseases/history , History, 20th Century , Humans