Results 1 - 20 of 41
1.
Nat Commun ; 14(1): 7870, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38110409

ABSTRACT

Flood exposure has been linked to shifts in population size and composition. Traditionally, these changes have been observed either at a local level, providing insight into local dynamics but not general trends, or at a coarse resolution that does not capture localized shifts. Using historic flood data from 2000 to 2023 across the Contiguous United States (CONUS), we identify the relationships between flood exposure and population change. We demonstrate that observed declines in population are statistically associated with higher levels of historic flood exposure, which may subsequently be coupled with future population projections. Several locations have already begun to see population responses to observed flood exposure and are forecasted to have decreased future growth rates as a result. Finally, we find that exposure to high-frequency flooding (5- and 20-year return periods) results in 2-7% lower growth rates than baseline projections. This effect is exacerbated in areas with relatively high exposure to frequent flooding, where growth is expected to decline over the next 30 years.

2.
Appl Spat Anal Policy ; 16(1): 141-161, 2023.
Article in English | MEDLINE | ID: mdl-35967757

ABSTRACT

Existing measures of health care access were inadequate for guiding policy decisions in West Virginia, as they identified the entire state as having limited access. To address this, we compiled a comprehensive database of primary health care providers and facilities in the state, developed a modified E2SFCA tool to measure spatial access in the context of West Virginia's rural and mountainous character, and integrated this with an index of socio-economic barriers to access. The integrated index revealed that rural areas, especially in the southern part of the state, have especially limited access to primary health care.

1. Introduction

An emerging public health issue, exacerbated by the COVID-19 pandemic, is that of healthcare deserts: places where basic, affordable health care is not accessible to residents. This problem has worsened in rural areas as rural hospitals close. In these areas, including West Virginia, scattered populations suffer from limited access to primary healthcare services. Uneven geographic and socio-economic barriers to accessing primary health care are major contributing factors to these health disparities. West Virginia's unique rural and mountainous settlement patterns, aging population, and economic crisis over the past two decades have resulted in unequal access to primary healthcare services for its residents. The rural nature of the state makes it difficult to maintain medical facilities accessible to much of the population, especially as rural hospitals have been closing, such as the one in Williamson, WV (Jarvie, 2020). The mountainous terrain slows travel across winding roads, lengthening travel times to the nearest hospital, while an aging population has increased health care needs. Lastly, an economic crisis and a higher poverty rate make West Virginians less able to pay for health care. As a result, West Virginians are confronting a health crisis.
According to a recent report by the West Virginia Health Statistics Center (2019), West Virginians rank first in the country for heart attacks and have the second-highest obesity rate and prevalence of mental health problems, along with the fourth-highest rate of diabetes and the fifth-highest rate of cancer. An issue faced by West Virginia's policymakers is the limitation of existing tools for identifying and assessing healthcare deserts, as these are poorly suited to the state's unique challenges. Academic research has not analyzed comprehensive primary healthcare accessibility in WV, although previous studies have focused on Appalachia (e.g., Behringer & Friedell, 2006; Smith & Holloman, 2011; Elnicki et al., 1995; Donohoe et al., 2015, 2016a, 2016b), and others on access to more specialized services (Valvi et al., 2019; Donohoe, 2016a). Existing approaches to identifying healthcare-deprived areas, such as Health Professional Shortage Areas (HPSA), are not suitable for guiding West Virginia policies, because every one of the 55 counties in the state contains several HPSAs, which makes prioritizing resources difficult. The lack of an easily accessible, comprehensive, and up-to-date physician and healthcare facility database creates additional difficulties; physician license datasets were found to often include inconsistent, misleading, and out-of-date information. A final limitation of the HPSA designation is that it is based on zip code areas and census tracts, which are not ideal: zip code areas lack spatial context and much covariate data, while rural census tracts are too large to capture the spatial variation of access. In this context, the WV HealthLink project was begun as a joint effort with the WV Rural Health Initiative (RHI) to fill gaps in research and support decision-making for primary healthcare access in West Virginia.
The goals of the project are: (1) to help West Virginia's three medical schools provide specialized professional training in rural healthcare; (2) to address health disparities by investing in clinical projects in underserved areas; and (3) to retain health professionals in WV. In 2018, to support these goals, HealthLink was invited by the RHI's leadership to analyze disparities in primary health care access in West Virginia and develop tools for rural healthcare decision-making. Meeting these goals also requires a comprehensive and up-to-date physician and facility database, new analysis tools, and new visualization tools for decision support. The goals of this paper are to assess the spatial and social accessibility of primary health care in West Virginia, and to understand the spatial and social determinants that shape this access. To achieve these goals, this paper completes the following objectives: (1) define primary healthcare and access; (2) build an extensive and up-to-date primary healthcare database; (3) develop an assessment framework for WV; and (4) visualize the results for policy makers and practitioners. The structure of this paper is as follows. First, we describe three methodological problems encountered as we define primary health care access. Second, we present the methods used to resolve these problems, and conclude by presenting our modified enhanced two-step floating catchment area (E2SFCA hereafter) approach and its results for WV. Our foci in this modification were improving the accuracy of distance measurement, accounting for the distance-decay effect, and more precisely representing the locations of supply and demand.
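The E2SFCA approach named above follows a generic two-step logic: first compute a distance-decay-weighted provider-to-population ratio at each facility, then sum the decay-weighted ratios reachable from each demand location. A minimal sketch of that generic logic, assuming a Gaussian decay function, a single catchment size, and toy travel times; the paper's specific modifications to distance measurement and supply/demand representation are not reproduced here:

```python
import math

def gaussian_weight(d, d0):
    """Distance-decay weight: 1 at the facility, near 0 at catchment size d0."""
    return math.exp(-0.5 * (d / d0) ** 2) if d <= d0 else 0.0

def e2sfca(supply, demand, travel_time, d0):
    """Generic two-step floating catchment area accessibility scores.

    supply:      {facility: provider count}
    demand:      {location: population}
    travel_time: {(location, facility): minutes}
    """
    # Step 1: provider-to-population ratio at each facility,
    # with population discounted by distance decay.
    ratio = {}
    for j, providers in supply.items():
        weighted_pop = sum(
            gaussian_weight(travel_time[(i, j)], d0) * pop
            for i, pop in demand.items()
        )
        ratio[j] = providers / weighted_pop if weighted_pop else 0.0
    # Step 2: accessibility at each demand location is the decay-weighted
    # sum of the reachable facility ratios.
    return {
        i: sum(gaussian_weight(travel_time[(i, j)], d0) * ratio[j] for j in supply)
        for i in demand
    }
```

In a toy configuration with one clinic and two towns, the town with the shorter travel time receives the higher accessibility score, and the population-weighted scores sum back to the total provider count when everyone lies within the catchment.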

3.
J Neuroophthalmol ; 43(1): 86-90, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36166810

ABSTRACT

BACKGROUND: Although nonarteritic anterior ischemic optic neuropathy (NAION) is a well-known cause of vision loss, it typically presents unilaterally. Simultaneous, bilateral nonarteritic anterior ischemic optic neuropathy (sNAION) is rare and poorly studied in comparison. This study seeks to characterize the clinical features and risk factors of patients with sNAION compared with unilateral NAION (uNAION). METHODS: In this retrospective case-control study, we reviewed 76 eyes (38 patients) with sNAION and 38 eyes (38 patients) with uNAION (controls) from 4 academic institutions examined between 2009 and 2020. Demographic information, medical history, medication use, symptom course, paraclinical evaluation, and visual outcomes were collected for all patients. RESULTS: No significant differences were observed in demographics, comorbidities and their treatments, and medication usage between sNAION and uNAION patients. sNAION patients were more likely to undergo an investigative work-up with erythrocyte sedimentation rate measurement (P = 0.0061), temporal artery biopsy (P = 0.013), lumbar puncture (P = 0.013), and MRI (P < 0.0001). There were no significant differences between the 2 groups in visual acuity, mean visual field deviation, peripapillary retinal nerve fiber layer thickness, or ganglion cell-inner plexiform layer thickness at presentation, nor at final visit for those with ≥3 months of follow-up. The sNAION eyes with ≥3 months of follow-up had a smaller cup-to-disc ratio (CDR) at final visit (P = 0.033). Ten patients presented with incipient NAION, of whom 9 suffered vision loss by final visit. CONCLUSION: Aside from CDR differences, the risk factor profile and visual outcomes of sNAION patients seem similar to those of uNAION patients, suggesting similar pathophysiology.


Subject(s)
Optic Disk , Optic Neuropathy, Ischemic , Humans , Case-Control Studies , Demography , Optic Disk/pathology , Optic Neuropathy, Ischemic/diagnosis , Optic Neuropathy, Ischemic/epidemiology , Retinal Ganglion Cells/pathology , Retrospective Studies , Risk Factors , Tomography, Optical Coherence
4.
PLoS One ; 17(6): e0269729, 2022.
Article in English | MEDLINE | ID: mdl-35737689

ABSTRACT

Deforestation continues at rapid rates despite global conservation efforts. Evidence suggests that governance may play a critical role in influencing deforestation, and while a number of studies have demonstrated a clear relationship between national-level governance and deforestation, much remains to be known about the relative importance of subnational governance to deforestation outcomes. With a focus on the Brazilian Amazon, this study aims to understand the relationship between governance and deforestation at the municipal level. Drawing on the World Bank Worldwide Governance Indicators (WGI) as a guiding conceptual framework, and incorporating the additional dimension of environmental governance, we identified a wide array of publicly available data sources related to governance indicators that we used to select relevant governance variables. We compiled a dataset of 22 municipal-level governance variables covering the 2005-2018 period for 457 municipalities in the Brazilian Amazon. Using an econometric approach, we tested the relationship between governance variables and deforestation rates in a fixed-effects panel regression analysis. We found that municipalities with increasing numbers of agricultural companies tended to have higher rates of deforestation, municipalities with an environmental fund tended to have lower rates of deforestation, and municipalities that had previously elected a female mayor tended to have lower rates of deforestation. These results add to the wider conversation on the role of local-level governance, revealing that certain governance variables may contribute to halting deforestation in the Brazilian Amazon.
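The fixed-effects panel regression described above can be illustrated with the within (entity-demeaning) transformation, which removes time-invariant municipal characteristics before estimating a slope. A single-regressor sketch on invented toy data; the study's actual model included 22 governance variables, and this sketch omits year effects and appropriate standard errors:

```python
def within_estimator(panel):
    """One-regressor fixed-effects (within) estimator.

    panel: {entity: [(x, y), ...]} -- repeated observations per entity.
    Demeaning each entity's series removes time-invariant entity effects;
    OLS on the pooled demeaned data gives the within slope.
    """
    dx, dy = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            dx.append(x - mx)
            dy.append(y - my)
    sxx = sum(x * x for x in dx)
    sxy = sum(x * y for x, y in zip(dx, dy))
    return sxy / sxx
```

With two hypothetical municipalities whose deforestation responds identically to a covariate but from very different baselines, the within estimator recovers the common slope even though pooled OLS on the raw data would be biased by the entity-level differences.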


Subject(s)
Conservation of Natural Resources , Environmental Policy , Agriculture , Brazil , Cities , Conservation of Natural Resources/methods , Female , Humans
5.
Am J Ophthalmol ; 241: 108-119, 2022 09.
Article in English | MEDLINE | ID: mdl-35504303

ABSTRACT

PURPOSE: To investigate the relationship of self-perceived driving difficulty, driving avoidance, and negative emotion about driving with glaucoma severity and on-road driving performance. DESIGN: Cohort study. METHODS: Glaucoma patients (n = 111), aged 55 to 90 years, with mild, moderate, or advanced glaucoma in the better eye based on the Glaucoma Staging System, and age-matched controls (n = 47) were recruited from a large tertiary academic center. Self-reported questionnaires were administered by a trained occupational therapist, followed by a standardized on-road driving evaluation (pass vs "at-risk" score) with a masked, certified driving rehabilitation specialist. RESULTS: Compared to controls, glaucoma participants reported greater driving difficulty as early as mild glaucoma (P = .0391) and negative emotion about driving starting with moderate glaucoma (P = .0042). Glaucoma participants reporting at least 1 driving difficulty and negative emotion had a 3.3-fold (adjusted odds ratio [OR] = 3.3; 95% CI = 1.24-8.52; P = .0163) and 4.2-fold (adjusted OR = 4.2; 95% CI = 1.5-12.2; P = .0078) greater odds, respectively, of an at-risk score on the on-road test. Self-reported driving difficulty in "difficult" conditions (P = .0019), in rain (P = .0096), on interstates (P = .0378), and in high traffic (P = .0076), driving avoidance on sunny (P = .0065) and cloudy (P = .0043) days, and driving fewer days per week (P = .0329) were also associated with at-risk driving. CONCLUSIONS: Screening tools that assess self-perceived driving difficulty and driving avoidance in specific conditions, negative emotion about driving, and driving exposure may help identify unsafe drivers with glaucoma. Some of these drivers, particularly those with moderate glaucoma, may benefit from a driving evaluation and early referral to resources that could enable them to continue driving safely and confidently.


Subject(s)
Automobile Driving , Glaucoma , Aged , Cohort Studies , Emotions , Glaucoma/psychology , Humans , Self Report , Surveys and Questionnaires
6.
J Neuroophthalmol ; 42(1): 121-125, 2022 03 01.
Article in English | MEDLINE | ID: mdl-32991390

ABSTRACT

BACKGROUND: The objective of this study is to determine the incidence of misdiagnosis of 3rd cranial nerve palsy (3rd nerve palsy) among providers referring to a tertiary care neuro-ophthalmology clinic and to characterize the diagnostic errors that led to incorrect diagnoses, in order to aid clinicians in making this diagnosis. METHODS: This was a retrospective, clinic-based, multicenter cross-sectional study of office encounters at 2 institutions from January 1, 2014, to January 1, 2017. All encounters with scheduling comments containing variations of "3rd nerve palsy" were reviewed. Patients with a documented referral diagnosis of new 3rd nerve palsy were included in the study. Examination findings, including the extraocular movement, external lid, and pupil examinations, were collected. The final diagnosis was determined by a neuro-ophthalmologist. The Diagnosis Error Evaluation and Research (DEER) taxonomy tool was used to categorize the causes of misdiagnosis. Seventy-eight patients were referred for a new diagnosis of 3rd nerve palsy. The main outcome measure was the type of diagnostic error that led to incorrect diagnoses, using the DEER criteria as determined by 2 independent reviewers. Secondary outcomes were the rate of misdiagnosis, the misdiagnosis rate by referring specialty, and examination findings associated with incorrect diagnoses. RESULTS: Of 78 patients referred with a suspected diagnosis of 3rd nerve palsy, 21.8% were determined to have an alternate diagnosis. The most common error in misdiagnosed cases was failure to correctly interpret the physical examination. Ophthalmologists were the most common referring providers for 3rd nerve palsy, and optometrists had the highest rate of overdiagnosis of 3rd nerve palsy. CONCLUSIONS: Misdiagnosis of 3rd nerve palsy was common.
Performance and interpretation of the physical examination were the most common factors leading to misdiagnosis of 3rd nerve palsy.


Subject(s)
Oculomotor Nerve Diseases , Cross-Sectional Studies , Diagnostic Errors , Humans , Oculomotor Nerve Diseases/diagnosis , Paralysis , Retrospective Studies
7.
Am J Ophthalmol ; 236: 120-129, 2022 04.
Article in English | MEDLINE | ID: mdl-34626574

ABSTRACT

PURPOSE: To determine whether the addition of adjunctive tests, including immunohistochemistry (IHC), cytokine analysis, flow cytometry, and IgH gene rearrangement testing, achieves improved diagnostic parameters compared with cytologic smears alone in the detection of vitreoretinal lymphoma (VRL). To determine which of these tests or combination of tests provide the greatest diagnostic utility. DESIGN: Retrospective review to assess diagnostic value. METHODS: This single university-affiliated tertiary care center study included data from 237 vitreous biopsies performed between 1999 and 2017 in patients with suspected VRL. From 1999 to 2008-2009, cytologic smears were the sole test performed (84 cases). The protocol initiated in 2008-2009 added the 4 additional diagnostic tests (153 cases). The sensitivity, specificity, positive predictive value, negative predictive value, diagnostic accuracy, and diagnostic yield were calculated. Parameters were calculated for tests individually, for all 5 combined, and all possible 2-, 3-, and 4-test combinations. For cytologic smears, diagnostic parameters were calculated both before and after the addition of adjunctive tests to our protocol and for the entire cohort. RESULTS: Of the 237 vitreous biopsies, 50 samples (21%) were from patients with confirmed central nervous system lymphoma and/or actively treated central nervous system, systemic, or intraocular lymphoma. Diagnostic yields (95% CI) were 90% (85%-93%) for smears, 82% (72%-89%) for IHC, 91% (85%-96%) for cytokine analysis, 76% (67%-84%) for IgH gene rearrangement, and 50% (40%-60%) for flow cytometry. For smears, the sensitivity pre-protocol was 73% (39%-94%), compared with 87% (69%-96%) post-protocol. IgH gene rearrangement was the only test exhibiting low sensitivity (40%). The combination of smears, IHC, and cytokine analysis exhibited the highest diagnostic parameters, with sensitivity 92%, specificity 98%, and diagnostic yield 100%. 
CONCLUSIONS: The combination of cytologic smears, IHC, and cytokine analysis seems to be a reasonable and sufficient protocol for the diagnosis of suspected VRL. IgH gene rearrangement and flow cytometry may be the most expendable tests from our protocol.
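The diagnostic parameters reported above (sensitivity, specificity, positive and negative predictive value, accuracy) all derive from a 2x2 confusion table of test results against true disease status. A small sketch with hypothetical counts, not the study's data:

```python
def diagnostic_parameters(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics from confusion-table counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives for a single test against the reference diagnosis.
    """
    return {
        "sensitivity": tp / (tp + fn),          # P(test+ | disease)
        "specificity": tn / (tn + fp),          # P(test- | no disease)
        "ppv": tp / (tp + fp),                  # P(disease | test+)
        "npv": tn / (tn + fn),                  # P(no disease | test-)
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the tested cohort, which is why combinations of tests can improve yield without changing any single test's intrinsic performance.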


Subject(s)
Eye Neoplasms , Intraocular Lymphoma , Retinal Neoplasms , Cytokines , Eye Neoplasms/pathology , Humans , Intraocular Lymphoma/diagnosis , Intraocular Lymphoma/genetics , Intraocular Lymphoma/pathology , Retinal Neoplasms/diagnosis , Retinal Neoplasms/genetics , Retinal Neoplasms/pathology , Vitreous Body/pathology
8.
Health Serv Outcomes Res Methodol ; 22(1): 145-161, 2022.
Article in English | MEDLINE | ID: mdl-34305442

ABSTRACT

Many places within rural America lack ready access to health care facilities. Barriers to access can be both spatial and non-spatial. Measurements of spatial access, such as the Enhanced 2-Step Floating Catchment Area (E2SFCA) and other floating catchment area measures, produce similar patterns of access. However, the extent to which different measurements of socioeconomic barriers to access correspond with each other has not been examined. Using West Virginia as a case study, we compute indices based upon the literature and measure the correlations among them. We find that all indices positively correlate with each other, although the strength of the correlation varies. Also, while there is broad agreement in the general spatial trends, such as fewer barriers in urban areas and more barriers in the impoverished southwestern portion of the state, there are regions within the state where the indices disagree more. These indices are to be used to support decision-making on the placement of rural residency students from medical schools within West Virginia, both to provide students with educational experiences and to address health care inequalities within the state. The results indicate that for decisions and policies addressing statewide trends, the choice of metric is not critical. However, when decisions involve specific locations for receiving rural residents or opening clinics, the results become more sensitive to the selection of the index. Therefore, for fine-grained policy decision-making, it is important that the chosen index best represents the processes under consideration.
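The index comparison described above amounts to computing pairwise correlations among index vectors evaluated over the same set of areas. A minimal sketch using Pearson correlation on invented index values (the index names and numbers below are illustrative, not the paper's):

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def correlation_matrix(indices):
    """Pairwise correlations among named index vectors: {name: [values]}.

    All vectors must be aligned over the same areal units (e.g., tracts).
    """
    names = sorted(indices)
    return {(p, q): pearson(indices[p], indices[q]) for p in names for q in names}
```

In practice one would also inspect where the indices disagree spatially, not just the overall correlation, since the paper's point is that statewide agreement can mask local divergence.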

9.
J Neuroophthalmol ; 41(4): 537-541, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34334757

ABSTRACT

BACKGROUND: Isolated third nerve palsy may indicate an expanding posterior communicating artery aneurysm, thus necessitating urgent arterial imaging. This study aims to assess the rate and duration of delays in arterial imaging for new isolated third nerve palsies, identify potential causes of delay, and evaluate instances of delay-related patient harm. METHODS: In this cross-sectional study, we retrospectively reviewed 110 patient charts (aged 18 years and older) seen between November 2012 and June 2020 at the neuro-ophthalmology clinic and by the inpatient ophthalmology consultation service at a tertiary institution. All patients were referred for suspicion of or had a final diagnosis of third nerve palsy. Demographics, referral encounter details, physical examination findings, final diagnoses, timing of arterial imaging, etiologies of third nerve palsy, and details of patient harm were collected. RESULTS: Of the 110 included patients, 62 (56.4%) were women, 88 (80%) were white, and the mean age was 61.8 ± 14.6 years. Forty (36.4%) patients received arterial imaging urgently. Patients suspected of third nerve palsy were not more likely to be sent for urgent evaluation (P = 0.29) or arterial imaging (P = 0.082) than patients in whom the referring doctor did not suspect palsy. Seventy-eight of 95 (82%) patients with a final diagnosis of third nerve palsy were correctly identified by referring providers. Of the 20 patients without any arterial imaging before neuro-ophthalmology consultation, there was a median delay of 24 days from symptom onset to imaging, and a median delay of 12.5 days between first medical contact for their symptoms and imaging. One patient was harmed as a result of delayed imaging. CONCLUSIONS: Third nerve palsies were typically identified correctly, but referring providers failed to recognize the urgency of arterial imaging to rule out an aneurysmal etiology. Raising awareness of the urgency of arterial imaging may improve patient safety.


Subject(s)
Intracranial Aneurysm , Oculomotor Nerve Diseases , Adolescent , Aged , Cross-Sectional Studies , Diagnostic Imaging , Female , Humans , Intracranial Aneurysm/diagnosis , Middle Aged , Oculomotor Nerve Diseases/diagnosis , Retrospective Studies
10.
11.
JAMA Ophthalmol ; 2021 04 15.
Article in English | MEDLINE | ID: mdl-33856434

ABSTRACT

Importance: Ocular hypertension is an important risk factor for the development of primary open-angle glaucoma (POAG). Data from long-term follow-up can be used to inform the management of patients with ocular hypertension. Objective: To determine the cumulative incidence and severity of POAG after 20 years of follow-up among participants in the Ocular Hypertension Treatment Study. Design, Setting, and Participants: Participants in the Ocular Hypertension Treatment Study were followed up from February 1994 to December 2008 in 22 clinics. Data were collected after 20 years of follow-up (from January 2016 to April 2019) or within 2 years of death. Analyses were performed from July 2019 to December 2020. Interventions: From February 28, 1994, to June 2, 2002 (phase 1), participants were randomized to receive either topical ocular hypotensive medication (medication group) or close observation (observation group). From June 3, 2002, to December 30, 2008 (phase 2), both randomization groups received medication. Beginning in 2009, treatment was no longer determined by study protocol. From January 7, 2016, to April 15, 2019 (phase 3), participants received ophthalmic examinations and visual function assessments. Main Outcomes and Measures: Twenty-year cumulative incidence and severity of POAG in 1 or both eyes after adjustment for exposure time. Results: A total of 1636 individuals (mean [SD] age, 55.4 [9.6] years; 931 women [56.9%]; 1138 White participants [69.6%]; 407 Black/African American participants [24.9%]) were randomized in phase 1 of the clinical trial. Of those, 483 participants (29.5%) developed POAG in 1 or both eyes (unadjusted incidence). After adjusting for exposure time, the 20-year cumulative incidence of POAG in 1 or both eyes was 45.6% (95% CI, 42.3%-48.8%) among all participants, 49.3% (95% CI, 44.5%-53.8%) among participants in the observation group, and 41.9% (95% CI, 37.2%-46.3%) among participants in the medication group. 
The 20-year cumulative incidence of POAG was 55.2% (95% CI, 47.9%-61.5%) among Black/African American participants and 42.7% (95% CI, 38.9%-46.3%) among participants of other races. The 20-year cumulative incidence for visual field loss was 25.2% (95% CI, 22.5%-27.8%). Using a 5-factor baseline model, the cumulative incidence of POAG among participants in the low-, medium-, and high-risk tertiles was 31.7% (95% CI, 26.4%-36.6%), 47.6% (95% CI, 41.6%-53.0%), and 59.8% (95% CI, 53.1%-65.5%), respectively. Conclusions and Relevance: In this study, only one-fourth of participants in the Ocular Hypertension Treatment Study developed visual field loss in either eye over long-term follow-up. This information, together with a prediction model, may help clinicians and patients make informed personalized decisions about the management of ocular hypertension. Trial Registration: ClinicalTrials.gov Identifier: NCT00000125.
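The exposure-time-adjusted cumulative incidence reported above is one minus the Kaplan-Meier survival estimate. A minimal sketch of the estimator on toy follow-up data (the times and censoring flags below are invented for illustration, not OHTS data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.

    times:  follow-up time for each participant
    events: 1 if the event (e.g., POAG onset) occurred at that time,
            0 if the participant was censored then
    Returns [(time, survival probability)] at each distinct event time;
    cumulative incidence at time t is 1 minus the survival estimate.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        group = [e for tt, e in data if tt == t]
        d = sum(group)                     # events at time t
        if d:
            surv *= 1.0 - d / n_at_risk    # conditional survival past t
            curve.append((t, surv))
        n_at_risk -= len(group)            # events and censorings leave the risk set
        i += len(group)
    return curve
```

Censored participants contribute person-time while under observation and then drop out of the risk set, which is what distinguishes this adjusted estimate from the unadjusted 29.5% incidence quoted above.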

12.
Ophthalmology ; 128(9): 1356-1362, 2021 09.
Article in English | MEDLINE | ID: mdl-33713783

ABSTRACT

PURPOSE: To prospectively examine diagnostic error of neuro-ophthalmic conditions and resultant harm at multiple sites. DESIGN: Prospective, cross-sectional study. PARTICIPANTS: A total of 496 consecutive adult new patients seen at 3 university-based neuro-ophthalmology clinics in the United States in 2019 to 2020. METHODS: We collected data regarding demographics, prior care, referral diagnosis, final diagnosis, diagnostic testing, treatment, patient disposition, and impact of the neuro-ophthalmologic encounter. For misdiagnosed patients, we identified the cause of error using the Diagnosis Error Evaluation and Research (DEER) taxonomy tool and whether the patient experienced harm due to the misdiagnosis. MAIN OUTCOME MEASURES: The primary outcome was whether patients who were misdiagnosed before neuro-ophthalmology referral experienced harm as a result of the misdiagnosis. Secondary outcomes included the appropriateness of referrals, the misdiagnosis rate, interventions undergone before referral, and the primary type of diagnostic error. RESULTS: The referral diagnosis was incorrect in 49% of cases. A total of 26% of misdiagnosed patients experienced harm, which could have been prevented by earlier referral to neuro-ophthalmology in 97% of cases. Patients underwent inappropriate laboratory testing, diagnostic imaging, or treatment before referral in 23% of cases, with higher rates among patients misdiagnosed before referral (34% vs. 13% with a correct referral diagnosis, P < 0.0001). Seventy-six percent of inappropriate referrals were misdiagnosed, compared with 45% of appropriate referrals (P < 0.0001). The most common reasons for referral were optic neuritis or optic neuropathy (21%), papilledema (18%), diplopia or cranial nerve palsies (16%), and unspecified vision loss (11%).
The most common sources of diagnostic error were the physical examination (36%), generation of a complete differential diagnosis (24%), history taking (24%), and use or interpretation of diagnostic testing (13%). In 489 of 496 patients (99%), neuro-ophthalmology consultation (NOC) affected patient care. In 2% of cases, neuro-ophthalmology directly saved the patient's life or vision; in an additional 10%, harmful treatment was avoided or appropriate urgent referral was provided; and in an additional 48%, neuro-ophthalmology provided a diagnosis and direction to the patient's care. CONCLUSIONS: Misdiagnosis of neuro-ophthalmic conditions, mismanagement before referral, and preventable harm are common. Early appropriate referral to neuro-ophthalmology may prevent patient harm.


Subject(s)
Diagnostic Errors/statistics & numerical data , Eye Diseases/diagnosis , Medical Errors/statistics & numerical data , Optic Nerve Diseases/diagnosis , Patient Harm/statistics & numerical data , Adolescent , Adult , Aged , Aged, 80 and over , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Prospective Studies , Referral and Consultation
13.
Am J Ophthalmol ; 227: 275-283, 2021 07.
Article in English | MEDLINE | ID: mdl-33626364

ABSTRACT

PURPOSE: To determine the relationship between glaucoma severity and rate of falls, fear of falling, and avoidance of activities at-risk for falls. DESIGN: Cross-sectional study. METHODS: Patients with glaucoma (n = 138) 55 to 90 years of age with mild (n = 61), moderate (n = 54), or advanced (n = 23) glaucoma in the better eye based on the Glaucoma Staging System and age-matched control subjects (n = 50) were recruited from the Eye Clinics at Washington University, St. Louis, MO. Participants completed questionnaires regarding falls, the fear of falling, and the avoidance of activities at-risk for falls. RESULTS: Of the glaucoma participants, 36% reported ≥1 fall in the previous 12 months compared with 20% of control subjects (adjusted odds ratio [OR] 2.7 [95% confidence interval {CI} 1.18-6.17]; P = .018). Compared with control subjects, the mild glaucoma group trended toward a higher fall risk (adjusted OR 2.43 [95% CI 0.97-6.08]; P = .059) and the advanced group had the highest fall risk (adjusted OR 7.97 [95% CI 2.44-26.07]; P = .001). A greater risk of a high fear of falling and high avoidance of at-risk activities occurred at the moderate stage of glaucoma compared with control subjects (adjusted OR 4.66 [95% CI 1.24-17.49]; P = .023 and adjusted OR 4.49 [95% CI 1.34-15.05]; P = .015, respectively). CONCLUSIONS: Patient education, interventions, and appropriate referrals to minimize falls should be considered in older adults with early glaucoma and continue with advancing disease. Minimizing a patient's fall risk may decrease their fear of falling and avoidance of at-risk activities. Reducing falls, the fear of falling, and the avoidance of at-risk activities may lower morbidity and mortality and improve emotional and social well-being of patients with glaucoma.


Subject(s)
Accidental Falls/prevention & control , Accidental Falls/statistics & numerical data , Fear/psychology , Glaucoma/psychology , Aged , Aged, 80 and over , Cross-Sectional Studies , Female , Glaucoma/physiopathology , Health Status Indicators , Humans , Male , Middle Aged , Patient Education as Topic , Postural Balance/physiology , Quality of Life , Risk Factors , Surveys and Questionnaires , Visual Acuity/physiology , Visual Field Tests , Visual Fields/physiology
14.
J Surg Educ ; 78(4): 1077-1088, 2021.
Article in English | MEDLINE | ID: mdl-33640326

ABSTRACT

OBJECTIVE: To test whether crowdsourced lay raters can accurately assess cataract surgical skills. DESIGN: Two-armed study: independent cross-sectional and longitudinal cohorts. SETTING: Washington University Department of Ophthalmology. PARTICIPANTS AND METHODS: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos to be graded by 5 experts and 300+ crowdworkers masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and the correlation between expert and crowd mean scores. RESULTS: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the modified OSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Longer surgery duration correlated negatively with training level (r = -0.855, p < 0.0001) and expert score (r = -0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional findings for crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset (r2 = 0.92) and demonstrated excellent predictive performance when applied to the independent longitudinal dataset (r2 = 0.80). A group of student raters who had edited the cataract videos also graded them, producing scores that more closely approximated the experts' than the crowd's.
CONCLUSIONS: Crowdsourced rankings correlated with expert scores, but were not equivalent; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores with surgery duration generated a more accurate predictive model for surgical skill. More studies are needed before crowdsourcing can be reliably used for assessing surgical proficiency.
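The study's adjustment idea, predicting expert score from crowd score plus video length, is an ordinary least-squares fit. A minimal sketch follows; all data values and the helper names are hypothetical, not the coefficients derived in the study.

```python
# Sketch: predict expert score from crowd score and video length via OLS.
# All data and function names below are illustrative, not from the study.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_expert_model(crowd, length, expert):
    """OLS for expert ~ b0 + b1*crowd + b2*length, via the normal equations."""
    rows = [[1.0, c, l] for c, l in zip(crowd, length)]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(rows, expert)) for i in range(3)]
    return solve3(XtX, Xty)

# Hypothetical grading data: mean crowd score, video length (min), mean expert score.
crowd = [3.8, 4.1, 4.5, 3.5, 4.8, 4.2]
length = [22.0, 18.0, 14.0, 25.0, 11.0, 16.0]
expert = [2.9, 3.4, 4.0, 2.5, 4.5, 3.6]
b0, b1, b2 = fit_expert_model(crowd, length, expert)
```

The negative correlation between duration and expert score reported above is why adding video length as a second predictor improves on crowd score alone.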


Subject(s)
Cataract , Crowdsourcing , Internship and Residency , Clinical Competence , Cross-Sectional Studies , Humans , Longitudinal Studies , Reproducibility of Results , Washington
15.
Crit Care Med ; 48(12): e1164-e1170, 2020 12.
Article in English | MEDLINE | ID: mdl-33003081

ABSTRACT

OBJECTIVES: Deliver a novel interdisciplinary care process for ICU survivors and their primary family caregivers, and assess mortality, readmission rates, and economic impact compared with usual care. DESIGN: Population health quality improvement comparative study with retrospective data analysis. SETTING: A single tertiary care rural hospital with medical/surgical, neuroscience, trauma, and cardiac ICUs. PATIENTS: ICU survivors. INTERVENTIONS: Reorganization of existing post-discharge health care delivery resources to form an ICU survivor clinic care process, compared with the usual post-discharge care process. MEASUREMENTS AND MAIN RESULTS: Demographic data, Acute Physiology and Chronic Health Evaluation IV scores, and Charlson Comorbidity Index scores were extracted from the electronic health record. Additional data were extracted from the care manager database. Economic data were extracted from the Geisinger Health Plan database and analyzed by a health economist. During the 13-month period analyzed, patients in ICU survivor care had reduced mortality compared with usual care, as determined by the Kaplan-Meier method (ICU survivor care 0.89 vs usual care 0.71; log-rank p = 0.0108) and risk-adjusted stabilized inverse probability of treatment weighting (hazard ratio, 0.157; 95% CI, 0.058-0.427). Readmission for ICU survivor care versus usual care was 10.4% vs 26.3% at 30 days (stabilized inverse probability of treatment weighting hazard ratio, 0.539; 95% CI, 0.224-1.297) and 16.7% vs 34.7% at 60 days (stabilized inverse probability of treatment weighting hazard ratio, 0.525; 95% CI, 0.240-1.145). Financial analysis indicated estimated annual cost savings to Geisinger Health Plan of $247,052 to $424,846 during the period analyzed. CONCLUSIONS: Our ICU survivor care process resulted in decreased mortality and a net annual cost savings to the insurer compared with the usual care process. 
There was no statistically significant difference in readmission rates.
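The Kaplan-Meier comparison reported above can be sketched with the standard product-limit estimator. The follow-up data below are invented for illustration, and this minimal sketch omits the risk-adjusted inverse probability of treatment weighting the study also applied.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times:  follow-up time for each patient
    events: 1 if death was observed at that time, 0 if censored
    Returns (event_times, survival_probabilities) at each death time.
    """
    surv = 1.0
    event_times, survival = [], []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            n_at_risk = sum(1 for ti in times if ti >= t)
            surv *= 1 - deaths / n_at_risk
            event_times.append(t)
            survival.append(surv)
    return event_times, survival

# Hypothetical follow-up (months) for a small ICU-survivor cohort.
times = [1, 2, 3, 4, 5, 6, 6, 7, 9, 12]
events = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]
t, s = kaplan_meier(times, events)
```

Censored patients (events = 0) leave the risk set without contributing a death, which is what distinguishes this estimator from a naive survival fraction.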


Subject(s)
Aftercare , Intensive Care Units , Quality Improvement , Aftercare/economics , Aftercare/methods , Aftercare/organization & administration , Aftercare/standards , Hospital Costs/statistics & numerical data , Humans , Intensive Care Units/economics , Intensive Care Units/organization & administration , Intensive Care Units/standards , Kaplan-Meier Estimate , Patient Discharge , Patient Readmission/statistics & numerical data , Retrospective Studies , Survival Analysis , Survivors
16.
Sci Rep ; 10(1): 11528, 2020 07 13.
Article in English | MEDLINE | ID: mdl-32661318

ABSTRACT

The human-mediated spread of exotic and invasive species often leads to unintentional and harmful consequences. Invasive wild pigs (Sus scrofa) are one such species that have been repeatedly translocated throughout the United States and cause extensive damage to natural ecosystems, threatened and endangered species, agricultural resources, and private lands. In 2005, a newly established population of wild pigs was confirmed in Fulton County, Illinois, U.S. In 2011, a state-wide wild pig damage management program involving federal, state, and local government authorities directed a concerted effort to remove wild pigs from the county until the last wild pig (of 376 total) was successfully removed in 2016. We examined surveillance data from camera traps at bait sites and records of wild pig removals during this elimination program to identify environmental and anthropogenic factors that optimized removal of this population. Our results revealed that wild pigs used bait sites most during evening and nocturnal periods and on days with lower daily maximum temperatures. Increased removals of wild pigs coincided with periods of cold weather. We also found that fidelity to, and time spent at, bait sites by wild pigs were not influenced by increasing removals. Finally, the cost to remove wild pigs averaged $50 per animal (6.8 effort hours per wild pig) for the first 99% of the animals. The cost of removing the last 1% increased 84-fold, averaging 122.8 effort hours per wild pig removed. Our results demonstrated that increased effort in removing wild pigs using bait sites should be focused during periods of environmental stress to maximize removal efficiency. These results inform elimination programs attempting to remove newly established populations of wild pigs, and ultimately prevent population and geographic expansion.
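The sharply rising marginal cost the abstract reports can be turned into a back-of-envelope budget split. This helper is hypothetical; it simply applies the abstract's per-pig figures ($50 per pig for the first 99%, an 84-fold increase for the last 1%) to the 376-pig population.

```python
def removal_cost_summary(total_pigs, cost_per_pig_first_99, fold_increase_last_1):
    """Back-of-envelope split of removal costs between the first 99% of
    animals and the final 1% (illustrative helper, not from the study)."""
    first = round(total_pigs * 0.99)
    last = total_pigs - first
    cost_per_pig_last = cost_per_pig_first_99 * fold_increase_last_1
    total = first * cost_per_pig_first_99 + last * cost_per_pig_last
    return first, last, cost_per_pig_last, total

first, last, cost_last, total = removal_cost_summary(376, 50.0, 84)
```

Under these assumptions, the final handful of animals accounts for nearly half the total removal budget, which is the practical argument for concentrating effort during cold-weather windows when removal efficiency is highest.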


Subject(s)
Agriculture/economics , Animals, Wild/physiology , Introduced Species/economics , Sus scrofa/physiology , Animals , Ecosystem , Humans , Illinois , Swine
17.
Mo Med ; 117(3): 258-264, 2020.
Article in English | MEDLINE | ID: mdl-32636560

ABSTRACT

In this retrospective analysis of patients with diabetes in an academic primary care clinic in St. Louis, attendance at ophthalmic screening appointments was recorded over a two-year observation window. Factors associated with adherence were analyzed by multivariable regression. Among 974 total patients included, only 330 (33.9%) were adherent within the two-year period. Multivariable analyses identified older age, female gender, primary language other than English, and attendance at ancillary diabetes clinic visits as factors associated with improved diabetic retinopathy screening adherence. Factors not associated with adherence included race and insurance status.
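The adjusted associations here come from multivariable logistic regression; as a minimal unadjusted sketch, a crude odds ratio for a single factor can be computed from a 2x2 table. The counts below are invented for illustration.

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio with a 95% CI (Woolf/log method) for a 2x2 table:
         a = exposed & adherent      b = exposed & non-adherent
         c = unexposed & adherent    d = unexposed & non-adherent
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Invented counts: adherence among patients who attended ancillary
# diabetes-clinic visits vs. those who did not.
or_, ci = odds_ratio(120, 180, 60, 240)
```

Unlike this crude estimate, the study's multivariable model adjusts each factor for the others, which is why the reported associations are not confounded by, for example, age or language.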


Subject(s)
Diabetic Retinopathy/therapy , Mass Screening/standards , Treatment Adherence and Compliance/psychology , Adult , Aged , Diabetes Mellitus/psychology , Diabetes Mellitus/therapy , Diabetic Retinopathy/psychology , Female , Guidelines as Topic , Humans , Logistic Models , Male , Mass Screening/methods , Mass Screening/statistics & numerical data , Middle Aged , Patient Compliance , Poverty/psychology , Poverty/statistics & numerical data , Retrospective Studies , Treatment Adherence and Compliance/statistics & numerical data , Urban Population/statistics & numerical data
18.
Curr Eye Res ; 45(2): 173-176, 2020 02.
Article in English | MEDLINE | ID: mdl-31460803

ABSTRACT

Purpose: In animal models, insulin resistance without severe hyperglycemia is associated with retinopathy; however, corroborating data in humans are lacking. This study aims to investigate the prevalence of retinopathy in a population without diabetes and to evaluate the association between insulin resistance and retinopathy within this group. Methods: The study population included 1914 adults aged ≥40 years without diabetes who were assigned to the morning fasted group in the National Health and Nutrition Examination Survey 2005-2008, conducted by the Centers for Disease Control and Prevention. Retinopathy was determined from fundus photos independently graded by a reading center, and insulin resistance was determined using the homeostatic model assessment of insulin resistance (HOMA-IR). Results: The survey-design-adjusted prevalence of retinopathy in those without diabetes was 9.4% (174/1914). In multivariable analyses, retinopathy was associated with insulin resistance (HOMA-IR OR: 1.09; 95% CI: 1.03, 1.16; p = .0030), male gender (OR: 1.39; 95% CI: 1.04, 1.85; p = .0267), and age (OR: 1.03; 95% CI: 1.01, 1.05; p = .0203). Conclusions: Insulin resistance in the absence of overt hyperglycemia could be an early driver of retinopathy.
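The homeostatic model assessment used here is a standard formula: HOMA-IR = fasting glucose (mmol/L) × fasting insulin (µU/mL) / 22.5, or equivalently glucose (mg/dL) × insulin / 405. A minimal sketch:

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR from fasting glucose (mg/dL) and fasting insulin (uU/mL).
    405 = 22.5 * 18, converting the mmol/L form to mg/dL units."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Example: fasting glucose 90 mg/dL, fasting insulin 9 uU/mL -> HOMA-IR 2.0.
value = homa_ir(90, 9)
```

The abstract's OR of 1.09 per unit of HOMA-IR means each one-unit increase on this scale is associated with roughly 9% higher adjusted odds of retinopathy.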


Subject(s)
Diabetic Retinopathy/epidemiology , Insulin Resistance , Adult , Arterial Pressure , Blood Glucose/metabolism , Blood Pressure , Cross-Sectional Studies , Diabetes Mellitus, Type 2/epidemiology , Diabetic Retinopathy/diagnosis , Female , Glycated Hemoglobin/metabolism , Humans , Insulin/blood , Male , Middle Aged , Nutrition Surveys , Prevalence , Risk Factors , Sex Factors , United States/epidemiology
19.
Pediatr Pulmonol ; 54(11): 1694-1703, 2019 11.
Article in English | MEDLINE | ID: mdl-31424170

ABSTRACT

BACKGROUND: Our objective was to determine the characteristics associated with reversibility of airflow obstruction and response to maximal bronchodilation in children with severe asthma through the Severe Asthma Research Program (SARP). METHODS: We performed a cross-sectional analysis evaluating children ages 6 to 17 years with nonsevere asthma (NSA) and severe asthma (SA). Participants underwent spirometry before and after 180 µg of albuterol to determine reversibility (≥12% increase in FEV1). Participants were then given escalating doses up to 720 µg of albuterol to determine their maximum reversibility. RESULTS: We evaluated 230 children (n = 129 SA, n = 101 NSA) from five centers across the United States in the SARP I and II cohorts. SA (odds ratio [OR], 2.08; 95% confidence interval [CI], 1.05-4.13), second-hand smoke exposure (OR, 2.81; 95% CI, 1.23-6.43), and fractional exhaled nitric oxide (FeNO; OR, 1.97; 95% CI, 1.35-2.87) were associated with increased odds of airway reversibility after maximal bronchodilation, while higher prebronchodilator (BD) FEV1% predicted (OR, 0.91; 95% CI, 0.88-0.94) was associated with decreased odds. In an analysis using the SARP III cohort (n = 186), blood neutrophils, immunoglobulin E (IgE), and FEV1% predicted were significantly associated with BD reversibility. In addition, children with a BD response had greater healthcare utilization. BD reversibility was associated with reduced lung function at enrollment and at 1-year follow-up, though with less decline in lung function over 1 year compared with those without reversibility. CONCLUSIONS: Lung function (FEV1% predicted) is a predictor of BD response in children with asthma. Additionally, smoke exposure, higher FeNO or IgE levels, and low peripheral blood neutrophils are associated with a greater likelihood of BD reversibility. BD response can identify a phenotype of pediatric asthma associated with low lung function and poor asthma control.
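The reversibility criterion the study applies (≥12% increase in FEV1 after bronchodilator) can be sketched directly; note that some guidelines add an absolute-volume criterion in adults, which this sketch, following the abstract, omits.

```python
def bd_reversible(fev1_pre_L, fev1_post_L, threshold=0.12):
    """True if post-bronchodilator FEV1 rose by at least `threshold`
    (fractional change relative to the pre-bronchodilator value)."""
    return (fev1_post_L - fev1_pre_L) / fev1_pre_L >= threshold

# Example: pre-BD FEV1 2.00 L, post-BD 2.30 L -> ~15% improvement, reversible.
result = bd_reversible(2.00, 2.30)
```

Expressing the change as a fraction of the pre-bronchodilator value is what makes the criterion comparable across children of different sizes and baseline lung volumes.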


Subject(s)
Albuterol/administration & dosage , Asthma/drug therapy , Bronchodilator Agents/administration & dosage , Forced Expiratory Volume/drug effects , Adolescent , Albuterol/pharmacology , Asthma/physiopathology , Breath Tests , Bronchodilator Agents/pharmacology , Child , Cohort Studies , Cross-Sectional Studies , Dose-Response Relationship, Drug , Female , Humans , Immunoglobulin E , Lung/physiopathology , Male , Nitric Oxide/analysis , Odds Ratio , Patient Acuity , Phenotype , Spirometry
20.
Dalton Trans ; 48(17): 5491-5495, 2019 Apr 23.
Article in English | MEDLINE | ID: mdl-30892339

ABSTRACT

Heterometallic rare earth-transition metal compounds of dithiooxalate (dto)2-, [NiII{(dto)LnIIITp2}2] (Ln = Y (1), Gd (2); Tp = hydrotris(pyrazol-1-yl)borate), were synthesised. The Lewis acidic rare earth ions bind through the oxygen donors of the dithiooxalate, and chemical reduction of 1 and 2 with cobaltocene yielded [CoCp2]+[NiII{(dto)LnIIITp2}2]˙- (Ln = Y (3), Gd (4)). The reduction is ligand-based, and 3 and 4 are the first examples of both molecular and electronic structural characterisation of the dithiooxalato radical (dto)3˙-.
