Results 1 - 20 of 31
1.
Ophthalmology ; 130(2): 213-222, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36154868

ABSTRACT

PURPOSE: To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. DESIGN: We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach without labeled data from the latter manufacturer. PARTICIPANTS: A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). METHODS: We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) that had access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 labeled Heidelberg OCTs and 556 raw and unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders independently manually segmented 60 OCTs from an external Topcon 1000 test dataset. To test the limits of GANSeg, graders also manually segmented 3 OCTs from the Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net was trained on the same labeled Heidelberg images as a baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. MAIN OUTCOME MEASURES: Dice scores comparing segmentation results from GANSeg and the U-Net model with the manually segmented images. RESULTS: Although GANSeg and the U-Net achieved Dice scores comparable to those of human experts on the labeled Heidelberg test dataset, only GANSeg achieved comparable Dice scores on the Topcon 1000 test dataset, with the best performance for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and the worst performance for intraretinal fluid (58%; 95% CI, 18%-89%), which was statistically similar to human graders (79%; 95% CI, 43%-94%). GANSeg significantly outperformed the U-Net model. Moreover, GANSeg generalized to both the Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. CONCLUSIONS: GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, thereby greatly expanding the applicability of deep learning algorithms.
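As an illustrative sketch only (not code from the GANSeg repository; the array shapes and the label convention are assumptions), the per-class Dice score used as the main outcome measure can be computed as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, manual: np.ndarray, label: int) -> float:
    """Dice overlap for one class between an automated and a manual segmentation mask."""
    pred_mask = pred == label
    manual_mask = manual == label
    denom = pred_mask.sum() + manual_mask.sum()
    return 2.0 * np.logical_and(pred_mask, manual_mask).sum() / denom if denom else float("nan")

# Toy B-scan masks; assume labels 1-7 are retinal layers and 8 is intraretinal fluid.
rng = np.random.default_rng(0)
pred = rng.integers(0, 9, size=(496, 512))
manual = rng.integers(0, 9, size=(496, 512))
print({label: round(dice_score(pred, manual, label), 3) for label in range(1, 9)})
```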


Subject(s)
Deep Learning, Humans, Optical Coherence Tomography/methods, Retina/diagnostic imaging, Algorithms
2.
J Neuroophthalmol ; 43(2): 168-179, 2023 06 01.
Article in English | MEDLINE | ID: mdl-36705970

ABSTRACT

BACKGROUND: The retina is a key focus in the search for biomarkers of Alzheimer's disease (AD) because of its accessibility and shared development with the brain. The pathological hallmarks of AD, amyloid beta (Aβ) and hyperphosphorylated tau (pTau), have been identified in the retina, although histopathologic findings have been mixed. Several imaging-based approaches have been developed to detect retinal AD pathology in vivo. Here, we review the research related to imaging AD-related pathology in the retina and its implications for future biomarker research. EVIDENCE ACQUISITION: Electronic searches of the published literature were conducted using PubMed and Google Scholar. RESULTS: Curcumin fluorescence and hyperspectral imaging are both promising methods for detecting retinal Aβ, although both require validation in larger cohorts. Challenges remain in distinguishing curcumin-labeled Aβ from background fluorescence and in standardizing dosing and quantification methods. Hyperspectral imaging is limited by confounding signals from other retinal features and variability in reflectance spectra between individuals. To date, evidence of tau aggregation in the retina is limited to histopathologic studies. New avenues of research are on the horizon, including near-infrared fluorescence imaging, novel Aβ labeling techniques, and small-molecule retinal tau tracers. Artificial intelligence (AI) approaches, including machine learning models and deep learning-based image analysis, are active areas of investigation. CONCLUSIONS: Although the histopathological evidence seems promising, methods for imaging retinal Aβ require further validation, and in vivo imaging of retinal tau remains elusive. AI approaches may hold the greatest promise for the discovery of a characteristic retinal imaging profile of AD. Elucidating the role of Aβ and pTau in the retina will provide key insights into the complex processes involved in aging and in neurodegenerative disease.


Subject(s)
Alzheimer Disease, Curcumin, Neurodegenerative Diseases, Humans, Amyloid beta-Peptides, Neurodegenerative Diseases/pathology, Artificial Intelligence, Alzheimer Disease/diagnostic imaging, Retina/diagnostic imaging, Retina/pathology, Biomarkers
3.
Ophthalmology ; 129(2): 129-138, 2022 02.
Article in English | MEDLINE | ID: mdl-34265315

ABSTRACT

PURPOSE: To compare the rate of postoperative endophthalmitis after immediately sequential bilateral cataract surgery (ISBCS) versus delayed sequential bilateral cataract surgery (DSBCS) using the American Academy of Ophthalmology Intelligent Research in Sight (IRIS®) Registry database. DESIGN: Retrospective cohort study. PARTICIPANTS: Patients in the IRIS Registry who underwent cataract surgery from 2013 through 2018. METHODS: Patients who underwent cataract surgery were divided into 2 groups: (1) ISBCS and (2) DSBCS (second-eye surgery ≥1 day after the first-eye surgery) or unilateral surgery. Postoperative endophthalmitis was defined as endophthalmitis occurring within 4 weeks of surgery by International Classification of Diseases (ICD) code and ICD code with additional clinical criteria. MAIN OUTCOME MEASURES: Rate of postoperative endophthalmitis. RESULTS: Of 5 573 639 IRIS Registry patients who underwent cataract extraction, 165 609 underwent ISBCS, and 5 408 030 underwent DSBCS or unilateral surgery (3 695 440 DSBCS, 1 712 590 unilateral surgery only). A total of 3102 participants (0.056%) met study criteria of postoperative endophthalmitis with supporting clinical findings. The rates of endophthalmitis in either surgery eye between the 2 surgery groups were similar (0.059% in the ISBCS group vs. 0.056% in the DSBCS or unilateral group; P = 0.53). Although the incidence of endophthalmitis was slightly higher in the ISBCS group compared with the DSBCS or unilateral group, the odds ratio did not reach statistical significance (1.08; 95% confidence interval, 0.87-1.31; P = 0.47) after adjusting for age, sex, race, insurance status, and comorbid eye disease. Seven cases of bilateral endophthalmitis with supporting clinical data in the DSBCS group and no cases in the ISBCS group were identified. CONCLUSIONS: Risk of postoperative endophthalmitis was not statistically significantly different between patients who underwent ISBCS and DSBCS or unilateral cataract surgery.


Subject(s)
Cataract Extraction/adverse effects, Endophthalmitis/epidemiology, Intraocular Lens Implantation/adverse effects, Postoperative Complications/epidemiology, Registries, Visual Acuity, Adolescent, Adult, Aged, Aged 80 and over, Child, Preschool Child, Factual Databases, Endophthalmitis/etiology, Female, Follow-Up Studies, Humans, Incidence, Infant, Newborn Infant, Male, Middle Aged, Retrospective Studies, United States/epidemiology, Young Adult
4.
Invest Ophthalmol Vis Sci ; 65(6): 21, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38864811

ABSTRACT

Data is the cornerstone of using AI models, because their performance directly depends on the diversity, quantity, and quality of the data used for training. Using AI presents unique potential, particularly in medical applications that involve rich data such as ophthalmology, encompassing a variety of imaging methods, medical records, and eye-tracking data. However, sharing medical data comes with challenges because of regulatory issues and privacy concerns. This review explores traditional and nontraditional data sharing methods in medicine, focusing on previous works in ophthalmology. Traditional methods involve direct data transfer, whereas newer approaches prioritize security and privacy by sharing derived datasets, creating secure research environments, or using model-to-data strategies. We examine each method's mechanisms, variations, recent applications in ophthalmology, and their respective advantages and disadvantages. By empowering medical researchers with insights into data sharing methods and considerations, this review aims to assist informed decision-making while upholding ethical standards and patient privacy in medical AI development.


Subject(s)
Artificial Intelligence, Information Dissemination, Ophthalmology, Humans
5.
Clin Ophthalmol ; 18: 1257-1266, 2024.
Article in English | MEDLINE | ID: mdl-38741584

ABSTRACT

Purpose: Understanding sociodemographic factors associated with poor visual outcomes in children with juvenile idiopathic arthritis-associated uveitis may help inform practice patterns. Patients and Methods: Retrospective cohort study on patients <18 years old who were diagnosed with both juvenile idiopathic arthritis and uveitis based on International Classification of Diseases tenth edition codes in the Intelligent Research in Sight Registry through December 2020. Surgical history was extracted using current procedural terminology codes. The primary outcome was incidence of blindness (20/200 or worse) in at least one eye in association with sociodemographic factors. Secondary outcomes included cataract and glaucoma surgery following uveitis diagnosis. Hazard ratios were calculated using multivariable-adjusted Cox proportional hazards models. Results: Median age of juvenile idiopathic arthritis-associated uveitis diagnosis was 11 (Interquartile Range: 8 to 15). In the Cox models adjusting for sociodemographic and insurance factors, the hazard ratios of best corrected visual acuity 20/200 or worse were higher in males compared to females (HR 2.15; 95% CI: 1.45-3.18), in Black or African American patients compared to White patients (2.54; 1.44-4.48), and in Medicaid-insured patients compared to commercially-insured patients (2.23; 1.48-3.37). Conclusion: Sociodemographic factors and insurance coverage were associated with varying levels of risk for poor visual outcomes in children with juvenile idiopathic arthritis-associated uveitis.
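The hazard ratios above come from multivariable-adjusted Cox proportional hazards models. A minimal sketch of that kind of fit with the `lifelines` package is shown below; the synthetic data and column names are illustrative assumptions, not IRIS Registry variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "years_to_event": rng.exponential(5.0, n),   # follow-up time from uveitis diagnosis
    "blind_in_one_eye": rng.integers(0, 2, n),   # event: BCVA 20/200 or worse in >=1 eye
    "male": rng.integers(0, 2, n),
    "black": rng.integers(0, 2, n),
    "medicaid": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="blind_in_one_eye")
# exp(coef) is the adjusted hazard ratio for each covariate, with its 95% CI.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```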

6.
Ocul Immunol Inflamm ; : 1-9, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38842198

ABSTRACT

The aim of this perspective is to promote the theory of salutogenesis as a novel approach to addressing ophthalmologic inflammatory conditions, illustrating several concepts on which it is based and how they can be applied to medical practice. This theory helps contextualize why patients with similar demographics and exposures do not present uniformly. Stressors in daily life can contribute to a state of ill health, and various factors help alleviate their negative impact. These alleviating factors are significantly impaired in people with poor vision, one of the most common presentations of ophthalmologic conditions. Salutogenic principles can guide the treatment of eye conditions to be more respectful of patient autonomy amid shifting expectations of the doctor-patient relationship. When patients can take ownership of their health and feel that their cultural beliefs have been considered, compliance improves and outcomes are better. Population-level policy interventions could also apply salutogenic principles to identify previously overlooked domains that can be addressed. We identified several papers on salutogenesis in an ophthalmological context, acknowledge that relatively few studies exist on this topic at present, and offer directions for further exploration in subsequent studies.

7.
JMIR Public Health Surveill ; 9: e44552, 2023 03 07.
Article in English | MEDLINE | ID: mdl-36881468

ABSTRACT

BACKGROUND: Self-reported questions on blindness and vision problems are collected in many national surveys. Recently released surveillance estimates of the prevalence of vision loss used self-reported data to predict variation in the prevalence of objectively measured acuity loss among population groups for whom examination data are not available. However, the validity of self-reported measures for predicting prevalence and disparities in visual acuity has not been established. OBJECTIVE: This study aimed to estimate the diagnostic accuracy of self-reported vision loss measures compared to best-corrected visual acuity (BCVA), inform the design and selection of questions for future data collection, and identify the concordance between self-reported vision and measured acuity at the population level to support ongoing surveillance efforts. METHODS: We calculated accuracy and correlation between self-reported visual function and BCVA at the individual and population levels among patients from the University of Washington ophthalmology or optometry clinics with a prior eye examination, randomly oversampled for visual acuity loss or diagnosed eye diseases. Self-reported visual function was collected via telephone survey. BCVA was determined based on retrospective chart review. Diagnostic accuracy of questions at the person level was measured based on the area under the receiver operating characteristic curve (AUC), whereas population-level accuracy was determined based on correlation. RESULTS: The survey question, "Are you blind or do you have serious difficulty seeing, even when wearing glasses?" had the highest accuracy for identifying patients with blindness (BCVA ≤20/200; AUC=0.797). The highest accuracy for detecting any vision loss (BCVA <20/40) was achieved by responses of "fair," "poor," or "very poor" to the question, "At the present time, would you say your eyesight, with glasses or contact lenses if you wear them, is excellent, good, fair, poor, or very poor?" (AUC=0.716). At the population level, the relative relationship between prevalence based on survey questions and BCVA remained stable for most demographic groups, with the only exceptions being groups with small sample sizes, and these differences were generally not significant. CONCLUSIONS: Although survey questions are not considered sufficiently accurate to be used as a diagnostic test at the individual level, we did find relatively high levels of accuracy for some questions. At the population level, we found that prevalence estimates from the 2 most accurate survey questions were highly correlated with the prevalence of measured visual acuity loss among nearly all demographic groups. The results of this study suggest that self-reported vision questions fielded in national surveys are likely to yield an accurate and stable signal of vision loss across different population groups, although the actual measure of prevalence from these questions is not directly analogous to that of BCVA.
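Person-level diagnostic accuracy here is summarized by the area under the ROC curve for a survey response scored against a BCVA-derived label. A minimal sketch with scikit-learn (the response coding and data are assumptions, not the study's dataset):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = measured BCVA worse than 20/40 (any vision loss); 0 = otherwise (assumed coding).
measured_loss = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])

# Self-rated eyesight: 0 = excellent ... 4 = very poor (assumed ordinal coding).
self_rating = np.array([0, 1, 3, 2, 1, 4, 0, 3, 2, 4])

print("AUC:", round(roc_auc_score(measured_loss, self_rating), 3))
```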


Subject(s)
Blindness, Telephone, Humans, Retrospective Studies, Blindness/epidemiology, Blindness/etiology, Self Report, Visual Acuity
8.
Am J Ophthalmol ; 249: 90-98, 2023 05.
Article in English | MEDLINE | ID: mdl-36513155

ABSTRACT

PURPOSE: To investigate whether associations between diabetic retinopathy (DR) and dementia and Alzheimer's disease (AD) remain significant after controlling for several measures of diabetes severity. DESIGN: Retrospective cohort study. METHODS: Adult Changes in Thought (ACT) is a prospective cohort study of adults aged ≥65 years, randomly selected and recruited from the membership rolls of Kaiser Permanente Washington, who are dementia free at enrollment and followed biennially until incident dementia. The ACT participants were included in this study if they had type 2 diabetes mellitus at enrollment or developed it during follow-up, and data were collected through September 2018 (3516 person-years of follow-up). Diabetes was defined by ≥2 diabetes medication fills in 1 year. Diagnosis of DR was based on International Classification of Diseases Ninth and Tenth Revision codes. Estimates of microalbuminuria, long-term glycemia, and renal function from longitudinal laboratory records were used as indicators of diabetes severity. Alzheimer's disease and dementia were diagnosed using research criteria at expert consensus meetings. RESULTS: A total of 536 participants (median baseline age 75 [interquartile range 71-80], 54% women) met inclusion criteria. Significant associations of DR of >5 years' duration with dementia (hazard ratio 1.81 [95% CI 1.23, 2.65]) and AD (1.80 [1.15, 2.82]) were not altered by adjustment for estimates of microalbuminuria, long-term glycemia, and renal function (dementia: 1.69 [1.14, 2.50]; AD: 1.73 [1.10, 2.74]). CONCLUSIONS: Among people with type 2 diabetes, DR itself appears to be an important biomarker of dementia risk in addition to glycemia and renal complications.


Subject(s)
Alzheimer Disease, Type 2 Diabetes Mellitus, Diabetic Retinopathy, Adult, Humans, Female, Male, Alzheimer Disease/diagnosis, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Diabetic Retinopathy/complications, Type 2 Diabetes Mellitus/complications, Type 2 Diabetes Mellitus/diagnosis, Type 2 Diabetes Mellitus/epidemiology, Prospective Studies, Retrospective Studies, Risk Factors
9.
JAMA Ophthalmol ; 141(6): 534-541, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37140901

ABSTRACT

Importance: Diagnostic information from administrative claims and electronic health record (EHR) data may serve as an important resource for surveillance of vision and eye health, but the accuracy and validity of these sources are unknown. Objective: To estimate the accuracy of diagnosis codes in administrative claims and EHRs compared to retrospective medical record review. Design, Setting, and Participants: This cross-sectional study compared the presence and prevalence of eye disorders based on diagnostic codes in EHR and claims records vs clinical medical record review at University of Washington-affiliated ophthalmology or optometry clinics from May 2018 to April 2020. Patients 16 years and older with an eye examination in the previous 2 years were included, oversampled for diagnosed major eye diseases and visual acuity loss. Exposures: Patients were assigned to vision and eye health condition categories based on diagnosis codes present in their billing claims history and EHR using the diagnostic case definitions of the US Centers for Disease Control and Prevention Vision and Eye Health Surveillance System (VEHSS) as well as clinical assessment based on retrospective medical record review. Main Outcome and Measures: Accuracy was measured as area under the receiver operating characteristic curve (AUC) of claims and EHR-based diagnostic coding vs retrospective review of clinical assessments and treatment plans. Results: Among 669 participants (mean [range] age, 66.1 [16-99] years; 357 [53.4%] female), identification of diseases in billing claims and EHR data using VEHSS case definitions was accurate for diabetic retinopathy (claims AUC, 0.94; 95% CI, 0.91-0.98; EHR AUC, 0.97; 95% CI, 0.95-0.99), glaucoma (claims AUC, 0.90; 95% CI, 0.88-0.93; EHR AUC, 0.93; 95% CI, 0.90-0.95), age-related macular degeneration (claims AUC, 0.87; 95% CI, 0.83-0.92; EHR AUC, 0.96; 95% CI, 0.94-0.98), and cataracts (claims AUC, 0.82; 95% CI, 0.79-0.86; EHR AUC, 0.91; 95% CI, 0.89-0.93). However, several condition categories showed low validity with AUCs below 0.7, including diagnosed disorders of refraction and accommodation (claims AUC, 0.54; 95% CI, 0.49-0.60; EHR AUC, 0.61; 95% CI, 0.56-0.67), diagnosed blindness and low vision (claims AUC, 0.56; 95% CI, 0.53-0.58; EHR AUC, 0.57; 95% CI, 0.54-0.59), and orbital and external diseases (claims AUC, 0.63; 95% CI, 0.57-0.69; EHR AUC, 0.65; 95% CI, 0.59-0.70). Conclusion and Relevance: In this cross-sectional study of current and recent ophthalmology patients with high rates of eye disorders and vision loss, identification of major vision-threatening eye disorders based on diagnosis codes in claims and EHR records was accurate. However, vision loss, refractive error, and other broadly defined or lower-risk disorder categories were less accurately identified by diagnosis codes in claims and EHR data.


Subject(s)
Big Data, Glaucoma, Humans, Female, Aged, Male, Retrospective Studies, Cross-Sectional Studies, Routinely Collected Health Data, Blindness
10.
Ocul Immunol Inflamm ; 30(2): 357-363, 2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35442873

ABSTRACT

The objective grading of anterior chamber inflammation (ACI) has remained a challenge in the field of uveitis. While the grading criteria produced by the Standardization of Uveitis Nomenclature (SUN) International Workshop have been widely adopted, limitations exist including interobserver variability and grading confined to discrete categories rather than a continuous measurement. Since the earliest iterations of optical coherence tomography (OCT), ACI has been assessed using anterior segment OCT and shown to correlate with slit-lamp findings. However, widespread use of this approach has not been adopted. Barriers to standardization include variability in OCT devices across clinical settings, lack of standardization of image acquisition protocols, varying quantification methods, and the difficulty of distinguishing inflammatory cells from other cell types. Modern OCT devices and techniques in artificial intelligence show promise in expanding the clinical applicability of anterior segment OCT for the grading of ACI.


Subject(s)
Anterior Uveitis, Uveitis, Anterior Chamber/diagnostic imaging, Artificial Intelligence, Humans, Inflammation/diagnostic imaging, Optical Coherence Tomography/methods, Anterior Uveitis/diagnostic imaging
11.
Sci Rep ; 12(1): 16913, 2022 10 08.
Article in English | MEDLINE | ID: mdl-36209335

ABSTRACT

COVID-19 mortality risk stratification tools could improve care, inform accurate and rapid triage decisions, and guide family discussions regarding goals of care. A minority of COVID-19 prognostic tools have been tested in external cohorts. Our objective was to compare machine learning algorithms and develop a tool for predicting subsequent clinical outcomes in COVID-19. We conducted a retrospective cohort study that included hospitalized patients with COVID-19 from March 2020 to March 2021. A total of 712 consecutive patients from the University of Washington (UW) and 345 patients from Tongji Hospital in China were included. We applied three different machine learning algorithms to clinical and laboratory data collected within the initial 24 h of hospital admission to determine the risk of in-hospital mortality, transfer to the intensive care unit (ICU), shock requiring vasopressors, and receipt of renal replacement therapy (RRT). Mortality risk models were derived, internally validated in the UW cohort, and externally validated in the Tongji Hospital cohort. The risk models for ICU transfer, shock, and RRT were derived and internally validated in the UW dataset but could not be externally validated because of a lack of data on these outcomes. In the UW dataset, 122 patients (17%) died during hospitalization, and the mean time to in-hospital mortality was 15.7 ± 21.5 days (mean ± SD). Elastic net logistic regression resulted in a C-statistic for in-hospital mortality of 0.72 (95% CI, 0.64 to 0.81) in the internal validation set and 0.85 (95% CI, 0.81 to 0.89) in the external validation set. Age, platelet count, and white blood cell count were the most important predictors of mortality. In the subgroup of patients >50 years of age, the mortality prediction model continued to perform well, with a C-statistic of 0.82 (95% CI, 0.76 to 0.87). Prediction models also performed well for shock and RRT in the UW dataset but with lower accuracy for ICU transfer. We trained, internally validated, and externally validated a prediction model using data collected within 24 h of hospital admission to predict in-hospital mortality on average two weeks prior to death. We also developed models to predict RRT and shock with high accuracy. These models could be used to improve triage decisions and resource allocation and to support clinical trial enrichment.
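The mortality model described here is an elastic net-penalized logistic regression whose discrimination is reported as a C-statistic (the AUC for a binary outcome). A minimal sketch with scikit-learn is below; the features and synthetic data are assumptions, not the study cohorts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 712
X = rng.normal(size=(n, 3))                                  # e.g., age, platelet count, WBC count
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1.0).astype(int)   # in-hospital mortality (toy outcome)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000),
)
model.fit(X, y)
print("Apparent C-statistic:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
```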


Subject(s)
COVID-19, Hospitalization, Humans, Machine Learning, Prognosis, Retrospective Studies
12.
Sci Rep ; 12(1): 1716, 2022 02 02.
Article in English | MEDLINE | ID: mdl-35110593

ABSTRACT

The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm to aggregate the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish COVID-19 pneumonia from chest CT volumes of healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19-infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that combining features derived from the chest CT volumes with clinical metadata improves the AUC to 0.80 from the 0.52 obtained using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.


Subject(s)
COVID-19/diagnosis, COVID-19/virology, Deep Learning, SARS-CoV-2, Thorax/diagnostic imaging, Thorax/pathology, X-Ray Computed Tomography, Algorithms, COVID-19/mortality, Genetic Databases, Humans, Computer-Assisted Image Interpretation/methods, Computer-Assisted Image Processing/methods, Prognosis, X-Ray Computed Tomography/methods, X-Ray Computed Tomography/standards
13.
JAMA Intern Med ; 182(2): 134-141, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-34870676

ABSTRACT

IMPORTANCE: Visual function is important for older adults. Interventions to preserve vision, such as cataract extraction, may modify dementia risk. OBJECTIVE: To determine whether cataract extraction is associated with reduced risk of dementia among older adults. DESIGN, SETTING, AND PARTICIPANTS: This prospective, longitudinal cohort study analyzed data from the Adult Changes in Thought study, an ongoing, population-based cohort of randomly selected, cognitively normal members of Kaiser Permanente Washington. Study participants were 65 years of age or older and dementia free at enrollment and were followed up biennially until incident dementia (all-cause, Alzheimer disease, or Alzheimer disease and related dementia). Only participants who had a diagnosis of cataract or glaucoma before enrollment or during follow-up were included in the analyses (ie, a total of 3038 participants). Data used in the analyses were collected from 1994 through September 30, 2018, and all data were analyzed from April 6, 2019, to September 15, 2021. EXPOSURES: The primary exposure of interest was cataract extraction. Data on diagnosis of cataract or glaucoma and exposure to surgery were extracted from electronic medical records. Extensive lists of dementia-related risk factors and health-related variables were obtained from study visit data and electronic medical records. MAIN OUTCOMES AND MEASURES: The primary outcome was dementia as defined by Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) criteria. Multivariate Cox proportional hazards regression analyses were conducted with the primary outcome. To address potential healthy patient bias, weighted marginal structural models incorporating the probability of surgery were used and the association of dementia with glaucoma surgery, which does not restore vision, was evaluated. RESULTS: In total, 3038 participants were included (mean [SD] age at first cataract diagnosis, 74.4 (6.2) years; 1800 women (59%) and 1238 men (41%); and 2752 (91%) self-reported White race). Based on 23 554 person-years of follow-up, cataract extraction was associated with significantly reduced risk (hazard ratio, 0.71; 95% CI, 0.62-0.83; P < .001) of dementia compared with participants without surgery after controlling for years of education, self-reported White race, and smoking history and stratifying by apolipoprotein E genotype, sex, and age group at cataract diagnosis. Similar results were obtained in marginal structural models after adjusting for an extensive list of potential confounders. Glaucoma surgery did not have a significant association with dementia risk (hazard ratio, 1.08; 95% CI, 0.75-1.56; P = .68). Similar results were found with the development of Alzheimer disease dementia. CONCLUSIONS AND RELEVANCE: This cohort study found that cataract extraction was significantly associated with lower risk of dementia development. If validated in future studies, cataract surgery may have clinical relevance in older adults at risk of developing dementia.


Subject(s)
Alzheimer Disease, Cataract Extraction, Cataract, Glaucoma, Aged, Cataract/diagnosis, Cataract/epidemiology, Cataract/etiology, Cataract Extraction/adverse effects, Cohort Studies, Female, Glaucoma/diagnosis, Glaucoma/epidemiology, Glaucoma/etiology, Humans, Longitudinal Studies, Male, Prospective Studies, Risk Factors
14.
PLoS One ; 17(10): e0274098, 2022.
Article in English | MEDLINE | ID: mdl-36201483

ABSTRACT

In response to the COVID-19 global pandemic, recent research has proposed creating deep learning-based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, existing datasets of CXRs from COVID-19+ patients are relatively small, and researchers often pool CXR data from multiple sources, for example, from different X-ray machines used in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics, in a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process. This technique forces the models to identify pulmonary features from the images and penalizes them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case, we found that it improved the AUC by 0.13 on held-out data. We further find that this outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets.
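One common way to implement such a penalty is an auxiliary "source dataset" classifier trained through a gradient reversal layer, so that the shared encoder is rewarded for task-relevant pulmonary features and penalized for dataset-identifying ones. The PyTorch sketch below illustrates that general idea only; it is not the authors' implementation, and the architecture, number of source datasets, and loss weighting are assumptions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 128), nn.ReLU())
covid_head = nn.Linear(128, 2)      # COVID-19 vs. non-COVID task head
dataset_head = nn.Linear(128, 3)    # which source dataset the CXR came from (3 assumed)
criterion = nn.CrossEntropyLoss()

def training_loss(images, covid_labels, dataset_labels, lam=1.0):
    feats = encoder(images)
    task_loss = criterion(covid_head(feats), covid_labels)
    # Reversed gradients push the encoder to discard dataset-discriminative features.
    adv_loss = criterion(dataset_head(GradReverse.apply(feats, lam)), dataset_labels)
    return task_loss + adv_loss

imgs = torch.randn(4, 1, 224, 224)  # toy batch of grayscale CXRs
loss = training_loss(imgs, torch.tensor([0, 1, 0, 1]), torch.tensor([0, 1, 2, 0]))
loss.backward()
```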


Subject(s)
COVID-19, Deep Learning, COVID-19/diagnostic imaging, Humans, Lung/diagnostic imaging, Thoracic Radiography/methods, X-Rays
15.
Article in English | MEDLINE | ID: mdl-33748826

ABSTRACT

The eye and brain share common mechanisms of aging and disease, thus the retina is an essential source of accessible information about neurodegenerative processes occurring in the brain. Advances in retinal imaging have led to the discovery of many potential biomarkers of Alzheimer's disease, although further research is needed to validate these associations. Understanding the mechanisms of retinal disease in the context of aging will extend our knowledge of AD and may enable advancements in diagnosis, monitoring, and treatment.

16.
Transl Vis Sci Technol ; 10(6): 32, 2021 05 03.
Article in English | MEDLINE | ID: mdl-34038502

ABSTRACT

Purpose: Optical coherence tomography (OCT) is widely used in the management of retinal pathologies, including age-related macular degeneration (AMD), diabetic macular edema (DME), and primary open-angle glaucoma (POAG). We used machine learning techniques to understand diagnostic performance gains from expanding macular OCT B-scans compared with foveal-only OCT B-scans for these conditions. Methods: Electronic medical records were extracted to obtain 61 B-scans per eye from patients with AMD, diabetic retinopathy, or POAG. We constructed deep neural networks and random forest ensembles and generated area under the receiver operating characteristic (AUROC) and area under the precision recall (AUPR) curves. Results: After extracting 630,000 OCT images, we achieved improved AUROC and AUPR curves when comparing the central image (one B-scan) to all images (61 B-scans). The AUROC and AUPR points of diminishing return for diagnostic accuracy for macular OCT coverage were found to be within 2.75 to 4.00 mm (14-19 B-scans), 4.25 to 4.50 mm (20-21 B-scans), and 4.50 to 6.25 mm (21-28 B-scans) for AMD, DME, and POAG, respectively. All models with >0.25 mm of coverage had statistically significantly improved AUROC/AUPR curves for all diseases (P < 0.05). Conclusions: Systematically expanded macular coverage models demonstrated significant differences in total macular coverage required for improved diagnostic accuracy, with the largest macular area being relevant in POAG followed by DME and then AMD. These findings support our hypothesis that the extent of macular coverage by OCT imaging in the clinical setting, for any of the three major disorders, has a measurable impact on the functionality of artificial intelligence decision support. Translational Relevance: We used machine learning techniques to improve OCT imaging standards for common retinal disease diagnoses.
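AUROC and AUPR both score a model's outputs against ground-truth diagnoses; a minimal sketch of computing the two areas with scikit-learn is below (the labels and scores are synthetic, not the OCT models' outputs):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)                # 1 = disease present (e.g., AMD), 0 = control
scores = 0.4 * labels + 0.8 * rng.random(200)   # toy model scores

print("AUROC:", round(roc_auc_score(labels, scores), 3))
print("AUPR: ", round(average_precision_score(labels, scores), 3))
```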


Subject(s)
Diabetic Retinopathy, Open-Angle Glaucoma, Macular Edema, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Humans, Machine Learning, Macular Edema/diagnosis
17.
Res Sq ; 2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34816256

ABSTRACT

Background: COVID-19 mortality risk stratification tools could improve care, inform accurate and rapid triage decisions, and guide family discussions regarding goals of care. A minority of COVID-19 prognostic tools have been tested in external cohorts. Our objective was to compare machine learning algorithms and develop a tool for predicting subsequent clinical outcomes in COVID-19. Methods: We conducted a retrospective cohort study that included hospitalized patients with COVID-19 from March 2020 to March 2021. 712 consecutive patients from University of Washington (UW) and 345 patients from Tongji Hospital in China were included. We applied three different machine learning algorithms to clinical and laboratory data collected within the initial 24 hours of hospital admission to determine the risk of in-hospital mortality, transfer to the intensive care unit (ICU), shock requiring vasopressors, and receipt of renal replacement therapy (RRT). Mortality risk models were derived, internally validated in UW and externally validated in Tongji Hospital. The risk models for ICU transfer, shock and RRT were derived and internally validated in the UW dataset. Results: Among the UW dataset, 122 patients died (17%) during hospitalization and the mean days to hospital mortality was 15.7 +/- 21.5 (mean +/- SD). Elastic net logistic regression resulted in a C-statistic for in-hospital mortality of 0.72 (95% CI, 0.64 to 0.81) in the internal validation and 0.85 (95% CI, 0.81 to 0.89) in the external validation set. Age, platelet count, and white blood cell count were the most important predictors of mortality. In the sub-group of patients > 50 years of age, the mortality prediction model continued to perform with a C-statistic of 0.82 (95% CI: 0.76, 0.87). Mortality prediction models also performed well for shock and RRT in the UW dataset but functioned with lower accuracy for ICU transfer. Conclusions: We trained, internally and externally validated a prediction model using data collected within 24 hours of hospital admission to predict in-hospital mortality on average two weeks prior to death. We also developed models to predict RRT and shock with high accuracy. These models could be used to improve triage decisions, resource allocation, and support clinical trial enrichment.

18.
JAMA Ophthalmol ; 139(8): 876-885, 2021 08 01.
Article in English | MEDLINE | ID: mdl-34196667

ABSTRACT

Importance: Approximately 2 million cataract operations are performed annually in the US, and patterns of cataract surgery delivery are changing to meet the increasing demand. Therefore, a comparative analysis of visual acuity outcomes after immediate sequential bilateral cataract surgery (ISBCS) vs delayed sequential bilateral cataract surgery (DSBCS) is important for informing future best practices. Objective: To compare refractive outcomes of patients who underwent ISBCS, short-interval (1-14 days between operations) DSBCS (DSBCS-14), and long-interval (15-90 days) DSBCS (DSBCS-90) procedures. Design, Setting, and Participants: This retrospective cohort study used population-based data from the American Academy of Ophthalmology Intelligent Research in Sight (IRIS) Registry. A total of 1 824 196 IRIS Registry participants with bilateral visual acuity measurements who underwent bilateral cataract surgery were assessed. Exposures: Participants were divided into 3 groups (DSBCS-90, DSBCS-14, and ISBCS groups) based on the timing of the second eye surgery. Univariable and multivariable linear regression models were used to analyze the refractive outcomes of the first and second surgery eye. Main Outcomes and Measures: Mean postoperative uncorrected visual acuity (UCVA) and best-corrected visual acuity (BCVA) after cataract surgery. Results: This study analyzed data from 1 824 196 patients undergoing bilateral cataract surgery (mean [SD] age for those <87 years, 70.03 [7.77]; 684 916 [37.5%] male). Compared with the DSBCS-90 group, after age, self-reported race, insurance status, history of age-related macular degeneration, diabetic retinopathy, and glaucoma were controlled for, the UCVA of the first surgical eye was higher by 0.41 (95% CI, 0.36-0.45; P < .001) letters, and the BCVA was higher by 0.89 (95% CI, 0.86-0.92; P < .001) letters in the DSBCS-14 group, whereas in the ISBCS group, the UCVA was lower by 2.79 (95% CI, -2.95 to -2.63; P < .001) letters and the BCVA by 1.64 (95% CI, -1.74 to -1.53; P < .001) letters. Similarly, compared with the DSBCS-90 group for the second eye, in the DSBCS-14 group, the UCVA was higher by 0.79 (95% CI, 0.74-0.83; P < .001) letters and the BCVA by 0.48 (95% CI, 0.45-0.51; P < .001) letters, whereas in the ISBCS group, the UCVA was lower by -1.67 (95% CI, -1.83 to -1.51; P < .001) letters and the BCVA by -1.88 (95% CI, -1.98 to -1.78; P < .001) letters. Conclusions and Relevance: The results of this cohort study of patients in the IRIS Registry suggest that compared with DSBCS-14 or DSBCS-90, ISBCS is associated with worse visual outcomes, which may or may not be clinically relevant, depending on patients' additional risk factors. Nonrandom surgery group assignment, confounding factors, and large sample size could account for the small but statistically significant differences noted. Further studies are warranted to determine whether these factors should be considered clinically relevant when counseling patients before cataract surgery.


Subject(s)
Cataract, Ophthalmology, Phacoemulsification, Aged 80 and over, Cataract/etiology, Cohort Studies, Female, Humans, Intraocular Lens Implantation/adverse effects, Male, Phacoemulsification/methods, Retrospective Studies, United States
19.
J Acad Ophthalmol (2017) ; 13(2): e175-e182, 2021 Jul.
Article in English | MEDLINE | ID: mdl-37325553

ABSTRACT

Purpose: To investigate emerging trends and increasing costs in the National Residency Matching Program (NRMP) and San Francisco Residency and Fellowship Match Services (SF Match) associated with the current applicant/program Gale-Shapley-type matching algorithms. Design: A longitudinal observational study of behavioral trends in national residency matching systems, with modeling of match results under alternative parameters. Methods: We analyzed publicly available data from the SF Match and NRMP websites from 1985 to 2020 for trends in the total number of applicants and available positions, as well as the average number of applications and interviews per applicant for multiple specialties. To understand these trends and the algorithms' effect on residency programs and applicants, we analyzed anonymized rank list and match data for ophthalmology from the SF Match between 2011 and 2019. Match results using current match parameters, as well as under conditions in which applicant and/or program rank lists were truncated, were analyzed. Results: Both the number of applications and the length of programs' rank lists have increased steadily across residency programs, particularly in competitive specialties. Capping student rank lists at 7 programs, or less than 80% of the average 8.9 programs currently ranked, results in a 0.71% decrease in the total number of positions filled. Similarly, capping program rank lists at 7 applicants per spot, or less than 60% of the average 11.5 applicants ranked per spot, results in a 5% decrease in the total number of positions filled. Conclusion: While the number of ophthalmology positions in the US has increased only modestly, the number of applications under consideration has increased substantially over the past two decades. The current study suggests that both programs and applicants rank more choices than are required for a nearly complete and stable match, creating excess cost and work for both applicants and programs. "Stable marriage"-type algorithms induce applicants and programs to rank as many counterparties as possible to maximize individual chances of optimizing the match.
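The mechanism referred to here is the applicant-proposing deferred acceptance ("stable marriage") algorithm. The sketch below simulates it over truncated rank lists to show how capping can leave positions unfilled; the toy applicants, programs, and capacities are illustrative assumptions, not SF Match records:

```python
def deferred_acceptance(applicant_prefs, program_prefs, capacity):
    """Applicant-proposing deferred acceptance over (possibly truncated) rank lists."""
    rank = {p: {a: i for i, a in enumerate(lst)} for p, lst in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}
    matches = {p: [] for p in program_prefs}
    free = [a for a in applicant_prefs if applicant_prefs[a]]
    while free:
        a = free.pop()
        while next_choice[a] < len(applicant_prefs[a]):
            p = applicant_prefs[a][next_choice[a]]
            next_choice[a] += 1
            if a not in rank[p]:
                continue                      # program did not rank this applicant
            matches[p].append(a)
            matches[p].sort(key=lambda cand: rank[p][cand])
            if len(matches[p]) <= capacity[p]:
                break                         # tentatively accepted
            bumped = matches[p].pop()         # over capacity: drop the lowest-ranked
            if bumped == a:
                continue                      # rejected; try the next choice
            free.append(bumped)               # displaced applicant re-enters the queue
            break
    return matches

# Toy example: A3 truncates its list to a single program and ends up unmatched,
# leaving P2 (which ranked only A3) unfilled.
applicants = {"A1": ["P1", "P2"], "A2": ["P1"], "A3": ["P1"]}
programs = {"P1": ["A2", "A1", "A3"], "P2": ["A3"]}
print(deferred_acceptance(applicants, programs, {"P1": 1, "P2": 1}))
```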

20.
Diabetes Care ; 44(5): 1168-1175, 2021 05.
Article in English | MEDLINE | ID: mdl-33402366

ABSTRACT

OBJECTIVE: With rising global prevalence of diabetic retinopathy (DR), automated DR screening is needed for primary care settings. Two automated artificial intelligence (AI)-based DR screening algorithms have U.S. Food and Drug Administration (FDA) approval. Several others are under consideration while in clinical use in other countries, but their real-world performance has not been evaluated systematically. We compared the performance of seven automated AI-based DR screening algorithms (including one FDA-approved algorithm) against human graders when analyzing real-world retinal imaging data. RESEARCH DESIGN AND METHODS: This was a multicenter, noninterventional device validation study evaluating a total of 311,604 retinal images from 23,724 veterans who presented for teleretinal DR screening at the Veterans Affairs (VA) Puget Sound Health Care System (HCS) or Atlanta VA HCS from 2006 to 2018. Five companies provided seven algorithms, including one with FDA approval, that independently analyzed all scans, regardless of image quality. The sensitivity/specificity of each algorithm when classifying images as referable DR or not were compared with original VA teleretinal grades and a regraded arbitrated data set. Value per encounter was estimated. RESULTS: Although high negative predictive values (82.72-93.69%) were observed, sensitivities varied widely (50.98-85.90%). Most algorithms performed no better than humans against the arbitrated data set, but two achieved higher sensitivities, and one yielded comparable sensitivity (80.47%, P = 0.441) and specificity (81.28%, P = 0.195). Notably, one had lower sensitivity (74.42%) for proliferative DR (P = 9.77 × 10⁻⁴) than the VA teleretinal graders. Value per encounter varied from $15.14 to $18.06 for ophthalmologists and from $7.74 to $9.24 for optometrists. CONCLUSIONS: The DR screening algorithms showed significant performance differences. These results argue for rigorous testing of all such algorithms on real-world data before clinical implementation.
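The headline metrics here (sensitivity, specificity, negative predictive value) come from a 2×2 comparison of each algorithm's referable-DR calls against the reference grades. A minimal sketch with scikit-learn (the label coding and arrays are illustrative, not the study data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# 1 = referable diabetic retinopathy on the arbitrated reference grade (assumed coding).
reference = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
algorithm = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])

tn, fp, fn, tp = confusion_matrix(reference, algorithm, labels=[0, 1]).ravel()
print(f"sensitivity={tp / (tp + fn):.2%}  specificity={tn / (tn + fp):.2%}  NPV={tn / (tn + fn):.2%}")
```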


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Algorithms, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Humans, Mass Screening, Sensitivity and Specificity