Results 1 - 7 of 7
1.
Ophthalmic Plast Reconstr Surg; 39(4): e126-e128, 2023.
Article in English | MEDLINE | ID: mdl-37010050

ABSTRACT

Acellular porcine urinary bladder matrix promotes wound healing and is also used to stimulate hair growth. A 64-year-old female presented with acute-onset pain and decreased visual acuity in the right eye (OD) after subcutaneous injection of acellular porcine urinary bladder matrix at the hairline. Fundus examination revealed multiple emboli at retinal arcade branch points, and fluorescein angiography demonstrated corresponding areas of peripheral nonperfusion. Two weeks later, external examination revealed new swelling of the right medial canthus without erythema or fluctuance, which was felt to possibly represent recruitment of vessels after occlusion in the facial vasculature. At 1-month follow-up, visual acuity of the OD had improved, with resolution of the right medial canthal swelling. Fundus examination was normal with no visible emboli. Herein, the authors present a case of retinal occlusion and medial canthal swelling following injection of acellular porcine urinary bladder matrix for hair restoration, which to the authors' knowledge has not been previously reported.


Subject(s)
Lacrimal Apparatus, Retinal Artery Occlusion, Female, Swine, Animals, Retinal Artery Occlusion/diagnosis, Retinal Artery Occlusion/etiology, Urinary Bladder, Fluorescein Angiography, Hair
2.
Ophthalmology; 126(11): 1533-1540, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31358385

ABSTRACT

PURPOSE: To assess the utility of deep learning in the detection of geographic atrophy (GA) from color fundus photographs and to explore potential utility in detecting central GA (CGA). DESIGN: A deep learning model was developed to detect the presence of GA in color fundus photographs, and 2 additional models were developed to detect CGA in different scenarios. PARTICIPANTS: A total of 59 812 color fundus photographs from longitudinal follow-up of 4582 participants in the Age-Related Eye Disease Study (AREDS) dataset. Gold standard labels were from human expert reading center graders using a standardized protocol. METHODS: A deep learning model was trained to use color fundus photographs to predict GA presence from a population of eyes with no AMD to advanced AMD. A second model was trained to predict CGA presence from the same population. A third model was trained to predict CGA presence from the subset of eyes with GA. For training and testing, 5-fold cross-validation was used. For comparison with human clinician performance, model performance was compared with that of 88 retinal specialists. MAIN OUTCOME MEASURES: Area under the curve (AUC), accuracy, sensitivity, specificity, and precision. RESULTS: The deep learning models (GA detection, CGA detection from all eyes, and centrality detection from GA eyes) had AUCs of 0.933-0.976, 0.939-0.976, and 0.827-0.888, respectively. The GA detection model had accuracy, sensitivity, specificity, and precision of 0.965 (95% confidence interval [CI], 0.959-0.971), 0.692 (0.560-0.825), 0.978 (0.970-0.985), and 0.584 (0.491-0.676), respectively, compared with 0.975 (0.971-0.980), 0.588 (0.468-0.707), 0.982 (0.978-0.985), and 0.368 (0.230-0.505) for the retinal specialists. The CGA detection model had values of 0.966 (0.957-0.975), 0.763 (0.641-0.885), 0.971 (0.960-0.982), and 0.394 (0.341-0.448). 
The centrality detection model had values of 0.762 (0.725-0.799), 0.782 (0.618-0.945), 0.729 (0.543-0.916), and 0.799 (0.710-0.888). CONCLUSIONS: A deep learning model demonstrated high accuracy for the automated detection of GA. The AUC was noninferior to that of human retinal specialists. Deep learning approaches may also be applied to the identification of CGA. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/DeepSeeNet.
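The accuracy, sensitivity, specificity, and precision figures reported above all derive from a binary confusion matrix over eyes graded GA-present vs. GA-absent. A minimal sketch of that computation follows; the function name and toy data are illustrative, not from the paper's released code.

```python
# Illustrative computation of the reported evaluation metrics from binary
# labels and predictions (1 = GA present, 0 = GA absent).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall on GA eyes
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

# Toy example with 8 eyes
labels      = [1, 1, 0, 0, 0, 1, 0, 0]
predictions = [1, 0, 0, 0, 1, 1, 0, 0]
m = binary_metrics(labels, predictions)
```

The abstract's pattern of high specificity with lower sensitivity and precision reflects the low prevalence of GA in the full AREDS population, which this decomposition makes explicit.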


Subject(s)
Deep Learning, Diagnostic Techniques, Ophthalmological, Geographic Atrophy/diagnosis, Image Processing, Computer-Assisted/methods, Aged, Aged, 80 and over, Area Under Curve, Female, Humans, Male, Middle Aged, Photography/methods, Physical Examination, Reproducibility of Results, Sensitivity and Specificity
3.
Ophthalmology; 126(4): 565-575, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30471319

ABSTRACT

PURPOSE: In assessing the severity of age-related macular degeneration (AMD), the Age-Related Eye Disease Study (AREDS) Simplified Severity Scale predicts the risk of progression to late AMD. However, its manual use requires the time-consuming participation of expert practitioners. Although several automated deep learning systems have been developed for classifying color fundus photographs (CFP) of individual eyes by AREDS severity score, none to date has used a patient-based scoring system that uses images from both eyes to assign a severity score. DESIGN: DeepSeeNet, a deep learning model, was developed to classify patients automatically by the AREDS Simplified Severity Scale (score 0-5) using bilateral CFP. PARTICIPANTS: DeepSeeNet was trained on 58 402 and tested on 900 images from the longitudinal follow-up of 4549 participants from AREDS. Gold standard labels were obtained using reading center grades. METHODS: DeepSeeNet simulates the human grading process by first detecting individual AMD risk factors (drusen size, pigmentary abnormalities) for each eye and then calculating a patient-based AMD severity score using the AREDS Simplified Severity Scale. MAIN OUTCOME MEASURES: Overall accuracy, specificity, sensitivity, Cohen's kappa, and area under the curve (AUC). The performance of DeepSeeNet was compared with that of retinal specialists. RESULTS: DeepSeeNet performed better on patient-based classification (accuracy = 0.671; kappa = 0.558) than retinal specialists (accuracy = 0.599; kappa = 0.467) with high AUC in the detection of large drusen (0.94), pigmentary abnormalities (0.93), and late AMD (0.97). DeepSeeNet also outperformed retinal specialists in the detection of large drusen (accuracy 0.742 vs. 0.696; kappa 0.601 vs. 0.517) and pigmentary abnormalities (accuracy 0.890 vs. 0.813; kappa 0.723 vs. 0.535) but showed lower performance in the detection of late AMD (accuracy 0.967 vs. 0.973; kappa 0.663 vs. 0.754). 
CONCLUSIONS: By simulating the human grading process, DeepSeeNet demonstrated high accuracy with increased transparency in the automated assignment of individual patients to AMD risk categories based on the AREDS Simplified Severity Scale. These results highlight the potential of deep learning to assist and enhance clinical decision-making for patients with AMD, such as early AMD detection and risk prediction for developing late AMD. DeepSeeNet is publicly available at https://github.com/ncbi-nlp/DeepSeeNet.
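The two-stage, patient-based grading the abstract describes (detect per-eye risk factors, then combine them into a score of 0-5) can be sketched as follows. The scoring rules here follow the published AREDS Simplified Severity Scale as commonly stated (one risk factor per eye for large drusen and one for pigmentary abnormalities, a bilateral-intermediate-drusen adjustment, and the top score for late AMD in either eye); this is an assumption about the combination step, not DeepSeeNet's actual code.

```python
# Hypothetical sketch of patient-based AREDS Simplified Severity Scale scoring
# from per-eye risk factors. Rules below are from the published scale
# description, not from the DeepSeeNet repository.

def simplified_severity_score(left, right):
    """Each eye is a dict with boolean keys: 'large_drusen',
    'intermediate_drusen', 'pigment_abnormality', 'late_amd'."""
    if left["late_amd"] or right["late_amd"]:
        return 5                       # late AMD maps to the top category
    score = 0
    for eye in (left, right):
        score += int(eye["large_drusen"])         # +1 per eye
        score += int(eye["pigment_abnormality"])  # +1 per eye
    # Bilateral intermediate drusen count as one factor when no eye has
    # large drusen.
    if (not left["large_drusen"] and not right["large_drusen"]
            and left["intermediate_drusen"] and right["intermediate_drusen"]):
        score += 1
    return score

# Small helper to build an eye record with defaults
eye = lambda **kw: {"large_drusen": False, "intermediate_drusen": False,
                    "pigment_abnormality": False, "late_amd": False, **kw}
s = simplified_severity_score(eye(large_drusen=True),
                              eye(pigment_abnormality=True))
```

Structuring the classifier around these interpretable per-eye factors, rather than predicting the final score directly, is what gives the model the "increased transparency" the conclusions mention.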


Subject(s)
Deep Learning, Diagnosis, Computer-Assisted/methods, Diagnostic Techniques, Ophthalmological, Geographic Atrophy/classification, Geographic Atrophy/diagnosis, Models, Theoretical, Photography/methods, Aged, Aged, 80 and over, Area Under Curve, Disease Progression, Female, Humans, Male, Middle Aged, Prospective Studies, Reproducibility of Results, Retinal Drusen/classification, Retinal Drusen/diagnosis, Risk Factors, Sensitivity and Specificity, Severity of Illness Index
4.
J Acad Ophthalmol (2017); 13(1): e40-e45, 2021 Jan.
Article in English | MEDLINE | ID: mdl-37389170

ABSTRACT

Background: To determine objective resident characteristics that correlate with Ophthalmic Knowledge Assessment Program (OKAP) performance, and to correlate OKAP performance with Accreditation Council for Graduate Medical Education (ACGME) milestone assessments, written qualifying examination (WQE) scores, and oral board pass rates. Methods: Review of administrative records at an ACGME-accredited ophthalmology residency training program at an urban, tertiary academic medical center. Results: The study included data from a total of 50 resident physicians who completed training from 2012 to 2018. Mean (standard deviation) OKAP percentile performance was 60.90 (27.51), 60.46 (28.12), and 60.55 (27.43) for the Year 1, 2, and 3 examinations, respectively. There were no statistically significant differences based on sex, marital status, having children, MD/PhD degree, other additional degree, number of publications, number of first-author publications, or grades on medical school medicine and surgery rotations. OKAP percentile scores were significantly associated with United States Medical Licensing Examination (USMLE) Step 1 scores (linear regression coefficient 0.88 [0.54-1.18], p = 0.008). Finally, continuous OKAP scores were significantly correlated with WQE (rs = 0.292, p = 0.049) and oral board (rs = 0.49, p = 0.001) scores. Conclusion: Higher OKAP performance is correlated with passage of both the WQE and oral board examinations on the first attempt. USMLE Step 1 score is the preresidency academic factor with the strongest association with success on the OKAP examination. Programs can use this information to identify residents who may benefit from additional OKAP, WQE, and oral board preparation assistance.
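The correlations reported above (e.g. rs = 0.49 between OKAP and oral board scores) are Spearman rank correlations, which are simply the Pearson correlation computed on the ranks of each variable. A minimal self-contained sketch, with average ranks for ties; names and toy data are illustrative.

```python
# Spearman rank correlation from scratch: rank both variables (ties get
# their average rank), then compute Pearson correlation of the ranks.

def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                     # extend over a run of tied values
        avg = (i + j) / 2 + 1          # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rs = spearman([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
```

Spearman's rs is the natural choice here because examination scores need not be linearly related; it measures how well one ranking predicts the other.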

5.
Methods Mol Biol; 1939: 231-252, 2019.
Article in English | MEDLINE | ID: mdl-30848465

ABSTRACT

Recent advances in technology have led to exponential growth of the scientific literature in the biomedical sciences. This rapid increase in information has outpaced manual curation efforts, necessitating the use of text mining approaches in the life sciences. One such application of text mining is fostering in silico drug discovery, including drug target screening, pharmacogenomics, and adverse drug event detection. This chapter serves as an introduction to the applications of various text mining approaches in drug discovery. It is divided into two parts: the first half provides an overview of text mining in the biosciences, and the second half reviews strategies and methods for four distinct applications of text mining in drug discovery.


Subject(s)
Data Mining/methods, Drug Discovery/methods, Deep Learning, Humans, Pharmacogenetics/methods, Precision Medicine/methods
6.
AMIA Jt Summits Transl Sci Proc; 2019: 505-514, 2019.
Article in English | MEDLINE | ID: mdl-31259005

ABSTRACT

Age-related Macular Degeneration (AMD) is a leading cause of blindness. Although the Age-Related Eye Disease Study (AREDS) group previously developed a 9-step AMD severity scale for manual classification of AMD severity from color fundus images, manual grading of images is time-consuming and expensive. Building on our previous work, DeepSeeNet, we developed a novel deep learning model for automated classification of images into the 9-step scale. Instead of predicting the 9-step score directly, our approach simulates the reading center grading process: it first detects four AMD characteristics (drusen area, geographic atrophy, increased pigment, and depigmentation), then combines these to derive the overall 9-step score. Importantly, we applied multi-task learning techniques, which allowed us to train classification of the four characteristics in parallel, share representations, and prevent overfitting. Evaluation on two image datasets showed that the accuracy of the model exceeded that of the current state-of-the-art model by more than 10%. Availability: https://github.com/ncbi-nlp/DeepSeeNet.
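The multi-task arrangement the abstract describes, one shared representation feeding parallel classification heads whose losses are summed, can be sketched as below. Shapes, task names, and class counts are illustrative assumptions; the real model uses a deep CNN backbone rather than this toy linear layer.

```python
# Toy sketch of multi-task learning: a shared representation feeds four
# classification heads (one per AMD characteristic), and training minimizes
# the sum of the per-task cross-entropy losses.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 16, 8
tasks = {"drusen_area": 4, "geographic_atrophy": 2,
         "increased_pigment": 2, "depigmentation": 2}  # classes per task

W_shared = rng.normal(size=(n_features, n_hidden))     # shared "backbone"
heads = {t: rng.normal(size=(n_hidden, c)) for t, c in tasks.items()}

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    h = np.tanh(x @ W_shared)                          # shared representation
    return {t: softmax(h @ W) for t, W in heads.items()}

x = rng.normal(size=(3, n_features))                   # batch of 3 "images"
probs = forward(x)

# Multi-task loss: sum of per-task cross-entropies against toy labels.
labels = {t: rng.integers(0, c, size=3) for t, c in tasks.items()}
loss = sum(-np.log(probs[t][np.arange(3), labels[t]]).mean() for t in tasks)
```

Because all four heads backpropagate through `W_shared`, each task acts as a regularizer for the others, which is the overfitting-prevention benefit the abstract mentions.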

7.
Article in English | MEDLINE | ID: mdl-26737522

ABSTRACT

The insight provided by fMRI, particularly BOLD fMRI, has been critical to the understanding of human brain function. Unfortunately, the application of fMRI techniques in clinical research has been held back by several factors. In order for the clinical field to successfully apply fMRI, two main challenges posed by aging and diseased brains need to be overcome: (1) difficulties in signal measurement and interpretation, and (2) partial volume effects (PVE). Recent work has addressed the first challenge by developing fMRI methods that, in contrast to BOLD, provide a direct measurement of a physiological correlate of function. One such method is Arterial Spin Labeling (ASL) fMRI, which provides images of cerebral blood flow (CBF) in physiologically meaningful units. Although the problems caused by PVE can be mitigated to some degree through the acquisition of high spatial resolution fMRI data, both hardware and experimental design considerations limit this solution. Our team has developed a PVE correction (PVEc) algorithm that produces CBF images that are theoretically independent of tissue content and the associated PVE. The main drawback of the current PVEc method is that it introduces an inherent smoothing of the functional data. This smoothing effect can reduce the sensitivity of the method, complicating the detection of local changes in CBF, such as those due to stroke or activation. Here, we present results from an improved PVEc algorithm (ssPVEc), which uses high-resolution structural-space information to correct for the tissue-driven heterogeneity in the ASL signal. We tested the ssPVEc method on ASL images obtained from patients with asymptomatic carotid occlusive disease during rest and motor activation. Our results showed that the sensitivity of the ssPVEc method (defined as the average T-value in the activated region) was at least 1.5 times greater than that of the original functional-space version (fsPVEc) for all patients.
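PVEc algorithms of the family this abstract builds on commonly model each voxel's ASL signal as a tissue-fraction-weighted sum of gray- and white-matter perfusion and solve for the two compartment CBF values by least squares over a local neighborhood (that neighborhood is the source of the smoothing the abstract discusses). A toy sketch under that assumption follows; it is not the ssPVEc implementation, and all names and numbers are illustrative.

```python
# Toy partial volume correction: signal_i = gm_i * CBF_gm + wm_i * CBF_wm,
# solved by least squares across the voxels of a neighborhood.
import numpy as np

def pvec_least_squares(signal, gm_frac, wm_frac):
    """signal: ASL signal per voxel in a neighborhood;
    gm_frac / wm_frac: partial volume fractions per voxel.
    Returns (CBF_gm, CBF_wm) estimates for the neighborhood."""
    A = np.column_stack([gm_frac, wm_frac])      # design matrix of fractions
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return coef[0], coef[1]

# Synthetic neighborhood with known answer: CBF_gm = 60, CBF_wm = 20
gm = np.array([0.9, 0.6, 0.3, 0.8, 0.5])
wm = 1.0 - gm
sig = 60 * gm + 20 * wm
cbf_gm, cbf_wm = pvec_least_squares(sig, gm, wm)
```

Because the fit pools voxels with different tissue fractions, the recovered CBF_gm is independent of how much gray matter each individual voxel contains, which is exactly the tissue-independence property the abstract claims for PVEc.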


Subject(s)
Brain Diseases/diagnosis, Brain Diseases/physiopathology, Brain/physiology, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Spin Labels, Algorithms, Brain/blood supply, Cerebrovascular Circulation, Humans, Rest