1.
Telemed J E Health ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934135

ABSTRACT

Background: Blurry images in teledermatology and consultation increase the diagnostic difficulty for both deep learning models and physicians. We aimed to determine the extent to which diagnostic accuracy is restored after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset covering 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos from a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five deblurring models, targeting motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as the sensitivity and precision of correct model prediction of the skin disease category. Sharpness was rated by board-certified dermatologists on a 4-point scale, with 4 indicating the highest image clarity. Results: The sensitivity of the diagnostic models dropped by 0.15 and 0.22 on slightly and strongly blurred images, respectively, and deblurring restored 0.14 and 0.17 of that sensitivity in the two groups. The sharpness ratings given by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed that the focus of the diagnostic models was compromised by the blurriness but was restored after deblurring. Conclusions: Deep learning deblurring models can restore the diagnostic accuracy of diagnostic models on blurry images and increase the image sharpness perceived by dermatologists. Such models could be incorporated into teledermatology workflows to aid the diagnosis of blurry images.
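A minimal Python sketch of the kind of before/after comparison described above is shown below: the same diagnostic classifier is run on blurry and deblurred copies of the images, and the change in macro-averaged sensitivity is reported. The function names and the use of scikit-learn metrics are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' code) of the before/after comparison:
# run the same diagnostic classifier on blurry and deblurred copies of the
# images and measure how much sensitivity the deblurring model restores.
import torch
from sklearn.metrics import recall_score, precision_score


def evaluate(classifier: torch.nn.Module, images: torch.Tensor, labels):
    """Macro-averaged sensitivity (recall) and precision on one image set."""
    classifier.eval()
    with torch.no_grad():
        preds = classifier(images).argmax(dim=1).cpu().numpy()
    return (recall_score(labels, preds, average="macro", zero_division=0),
            precision_score(labels, preds, average="macro", zero_division=0))


def sensitivity_restored(classifier, deblurrer, blurry_images, labels):
    """How much of the lost sensitivity a deblurring model gives back."""
    sens_blurry, _ = evaluate(classifier, blurry_images, labels)
    with torch.no_grad():
        deblurred = deblurrer(blurry_images)   # e.g., a motion-blur model
    sens_deblurred, _ = evaluate(classifier, deblurred, labels)
    return sens_deblurred - sens_blurry
```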

2.
Transl Vis Sci Technol ; 13(5): 23, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809531

ABSTRACT

Purpose: To develop convolutional neural network (CNN)-based models for predicting axial length (AL) from color fundus photography (CFP) and to explore the associated clinical and structural characteristics. Methods: This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. AL measurements obtained with a scanning laser interferometer served as the gold standard. Prediction accuracy was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were interpreted using integrated gradients. Results: Using age, sex, and CFP as input, the model's mean absolute error (MAE, mean ± standard deviation) for AL prediction was 0.771 ± 0.128 mm, outperforming the models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. Removing relatively poor-quality CFPs slightly reduced the MAE to 0.759 ± 0.120 mm, without statistical significance (P = 0.24). Including age alongside CFP improved prediction accuracy by 5.59% (P = 0.043), whereas adding sex provided no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused regions on the heatmaps. Conclusions: Deep learning-based prediction of AL from CFP was fairly accurate and was enhanced by including age. The optic disc and temporal peripapillary area may contain crucial structural information for AL prediction from CFP. Translational Relevance: This study may aid AL assessment and the understanding of the morphologic characteristics of the fundus related to AL.
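A minimal sketch of how a fundus-image CNN can be combined with age and sex covariates for AL regression is shown below; the ResNet-18 backbone, layer sizes, and L1 training objective are assumptions for illustration, not the published architecture.

```python
# A minimal sketch (not the published architecture) of a CNN that predicts
# axial length from a fundus photograph plus age and sex covariates.
import torch
import torch.nn as nn
from torchvision import models


class ALRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone works
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep image features only
        self.backbone = backbone
        self.head = nn.Sequential(                 # image features + age + sex
            nn.Linear(feat_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, image, age, sex):
        feats = self.backbone(image)               # (N, feat_dim)
        x = torch.cat([feats, age.unsqueeze(1), sex.unsqueeze(1)], dim=1)
        return self.head(x).squeeze(1)             # predicted AL in mm


# Training would minimize the absolute error, matching the MAE metric above:
# loss = nn.L1Loss()(model(image, age, sex), true_al)
```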


Subject(s)
Axial Length, Eye ; Neural Networks, Computer ; Photography ; Humans ; Male ; Female ; Middle Aged ; Adult ; Photography/methods ; Aged ; Axial Length, Eye/diagnostic imaging ; Fundus Oculi ; Young Adult ; Aged, 80 and over
3.
Transl Vis Sci Technol ; 12(3): 23, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36947046

ABSTRACT

Purpose: The purpose of this study was to build a deep-learning model that automatically analyzes cataract surgical videos to locate surgical landmarks and derive skill-related motion metrics. Methods: The locations of the pupil, the limbus, and 8 classes of surgical instruments were identified by a two-step algorithm: (1) mask segmentation and (2) landmark identification from the masks. For mask segmentation, we trained the YOLACT model on 1156 frames sampled from 268 videos and on the public Cataract Dataset for Image Segmentation (CaDIS). Landmark identification was performed by fitting ellipses or lines to the contours of the masks and deriving locations of interest, including surgical tooltips and the pupil center. Landmark identification was evaluated by the distance between the predicted and true positions in 5853 frames from 10 phacoemulsification video clips. We derived the total path length, maximal speed, and covered area from the tip positions and examined their correlation with human-rated surgical performance. Results: The mean average precision and intersection-over-union for mask detection were 0.78 and 0.82, respectively. The average distances between the predicted and true positions of the pupil center, phaco tip, and second instrument tip were 5.8, 9.1, and 17.1 pixels, respectively. The total path length and covered area of these landmarks were negatively correlated with surgical performance. Conclusions: We developed a deep-learning method to localize key anatomical structures of the eye and cataract surgical tools, which can be used to automatically derive metrics correlated with surgical skill. Translational Relevance: Our system could form the basis of an automated feedback system that helps cataract surgeons evaluate their performance.
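The following sketch illustrates the second step under stated assumptions (it is not the authors' exact pipeline): an ellipse is fitted to the contour of a binary pupil mask to obtain the pupil center, and per-frame tooltip positions are accumulated into a total path length.

```python
# A minimal sketch of landmark identification and one motion metric:
# fit an ellipse to a binary pupil mask to get its center, then accumulate a
# tooltip path length across frames as a skill-related metric.
import cv2
import numpy as np


def pupil_center(mask: np.ndarray):
    """Fit an ellipse to the largest contour of a binary mask; return its center."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                       # fitEllipse needs >= 5 points
        return None
    (cx, cy), _, _ = cv2.fitEllipse(largest)
    return cx, cy


def total_path_length(tip_positions: list) -> float:
    """Sum of frame-to-frame displacements of a tooltip, in pixels."""
    pts = np.asarray(tip_positions, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())
```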


Subject(s)
Cataract Extraction ; Cataract ; Deep Learning ; Humans ; Pupil ; Algorithms
6.
Transl Vis Sci Technol ; 10(13): 23, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34784415

ABSTRACT

Purpose: To build and evaluate deep learning models for recognizing cataract surgical steps from whole-length surgical videos with minimal preprocessing, including identification of routine and complex steps. Methods: We collected 298 cataract surgical videos from 12 resident surgeons across 6 sites and excluded 30 incomplete, duplicated, or combination-surgery videos. Videos were downsampled to 1 frame per second. Trained annotators labeled 13 steps of surgery: create wound, injection into the eye, capsulorrhexis, hydrodissection, phacoemulsification, irrigation/aspiration, place lens, remove viscoelastic, close wound, advanced technique/other, stain with trypan blue, manipulating iris, and subconjunctival injection. We trained two deep learning models: one based on the VGG16 architecture (VGG model) and a second using VGG16 followed by a long short-term memory network (convolutional neural network [CNN]-recurrent neural network [RNN] model). Class activation maps were visualized using Grad-CAM. Results: Overall top-1 prediction accuracy was 76% for the VGG model (93% top-3 accuracy) and 84% for the CNN-RNN model (97% top-3 accuracy). The microaveraged area under the receiver operating characteristic curve was 0.97 for the VGG model and 0.99 for the CNN-RNN model. The microaveraged average precision score was 0.83 for the VGG model and 0.92 for the CNN-RNN model. Class activation maps showed that the models focused appropriately on the instrumentation used in each step to identify which step was being performed. Conclusions: Deep learning models can classify cataract surgical activities on a frame-by-frame basis with remarkably high accuracy, especially for routine surgical steps. Translational Relevance: An automated system for recognizing cataract surgical steps could provide residents with automated feedback metrics, such as the length of time spent on each step.
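A minimal sketch of the CNN-RNN idea is shown below: per-frame VGG16 features are pooled and passed to an LSTM that classifies each frame into one of the 13 steps. The pooling layer, hidden size, and classifier head are assumptions for illustration, not the published model.

```python
# A minimal sketch (assumed layer sizes, not the published model) of the
# CNN-RNN approach: per-frame VGG16 features fed to an LSTM that classifies
# each frame into one of the 13 surgical steps.
import torch
import torch.nn as nn
from torchvision import models


class CnnRnnStepClassifier(nn.Module):
    def __init__(self, num_steps: int = 13, hidden: int = 256):
        super().__init__()
        vgg = models.vgg16(weights=None)
        self.features = vgg.features                 # convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)           # -> (N, 512, 1, 1)
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_steps)

    def forward(self, clip: torch.Tensor):            # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        x = self.features(clip.flatten(0, 1))          # (B*T, 512, h, w)
        x = self.pool(x).flatten(1).view(b, t, 512)    # per-frame features
        x, _ = self.lstm(x)                            # temporal context
        return self.classifier(x)                      # (B, T, num_steps) logits
```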


Subject(s)
Cataract Extraction ; Cataract ; Deep Learning ; Humans ; Neural Networks, Computer ; ROC Curve
7.
Int J Dermatol ; 60(8): 964-972, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33848012

ABSTRACT

BACKGROUND: Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) are potentially fatal adverse drug reactions. The characteristics of these diseases are changing with the use of novel drugs, posing new challenges to doctors. We aimed to review recent SJS/TEN cases to assist general practitioners with timely diagnosis and correct management. METHODS: We conducted a retrospective chart review of SJS/TEN patients at a referral center in Taiwan from 2009 to 2019. We included the charts of 24 patients and analyzed demographic data, medication histories, clinical courses, human leukocyte antigen (HLA) alleles, and long-term complications. RESULTS: The average age was 63.4 years, and the average toxic epidermal necrolysis-specific severity-of-illness score was 1.9. The most common culprit drug was carbamazepine (33.3%), followed by antibiotics (12.5%) and nonsteroidal anti-inflammatory drugs (8.3%). Two cases were caused by immune checkpoint inhibitors, one of which had a long latency of 210 days. Three of the four patients carrying HLA-B*15:02 had carbamazepine-induced SJS/TEN. All patients were treated with systemic corticosteroids in the acute stage of the disease. The length of in-hospital stay did not correlate with the average daily dose of corticosteroids. The overall mortality rate was 4.2%, and the disease-specific mortality rate was 0%. CONCLUSIONS: The most common culprit drug was carbamazepine, which had a strong association with HLA-B*15:02. There was no statistically significant correlation between in-hospital stay and the average daily dose of corticosteroids. Immune checkpoint inhibitor-related SJS/TEN may have an extended latent period.
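The abstract does not name the statistical test behind the correlation result; the sketch below assumes a Spearman rank correlation between in-hospital stay and average daily corticosteroid dose, purely for illustration.

```python
# A minimal sketch of the kind of correlation test behind the statement that
# length of stay did not correlate with corticosteroid dose. The test used is
# not stated in the abstract; Spearman rank correlation is an assumption here.
from scipy.stats import spearmanr


def stay_vs_steroid_correlation(length_of_stay_days, avg_daily_steroid_mg):
    """Return (rho, p-value) for in-hospital stay vs. average daily dose."""
    rho, p_value = spearmanr(length_of_stay_days, avg_daily_steroid_mg)
    return rho, p_value
```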


Subject(s)
Stevens-Johnson Syndrome ; Anticonvulsants ; Humans ; Middle Aged ; Referral and Consultation ; Retrospective Studies ; Stevens-Johnson Syndrome/diagnosis ; Stevens-Johnson Syndrome/epidemiology ; Stevens-Johnson Syndrome/etiology ; Taiwan/epidemiology