1.
Ophthalmol Sci ; 3(3): 100300, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37113471

ABSTRACT

Purpose: Significant visual impairment due to glaucoma is largely caused by the disease being detected too late. Objective: To build a labeled data set for training artificial intelligence (AI) algorithms for glaucoma screening by fundus photography, to assess the accuracy of the graders, and to characterize the features of all eyes with referable glaucoma (RG).

Design: Cross-sectional study.

Subjects: Color fundus photographs (CFPs) of 113 893 eyes of 60 357 individuals were obtained from EyePACS, California, United States, from a population screening program for diabetic retinopathy.

Methods: Carefully selected graders (ophthalmologists and optometrists) graded the images. To qualify, they had to pass the European Optic Disc Assessment Trial optic disc assessment with ≥ 85% accuracy and 92% specificity. Of 90 candidates, 30 passed. Each image of the EyePACS set was then scored by varying random pairs of graders as "RG," "no referable glaucoma (NRG)," or "ungradable (UG)." In case of disagreement, a glaucoma specialist made the final grading. Referable glaucoma was scored if visual field damage was expected. In case of RG, graders were instructed to mark up to 10 relevant glaucomatous features.

Main Outcome Measures: Qualitative features in eyes with RG.

Results: The performance of each grader was monitored; if the sensitivity and specificity dropped below 80% and 95%, respectively (the final grade served as reference), they exited the study and their gradings were redone by other graders. In all, 20 graders qualified; their mean sensitivity and specificity (standard deviation [SD]) were 85.6% (5.7) and 96.1% (2.8), respectively. The 2 graders agreed in 92.45% of the images (Gwet's AC2, expressing the inter-rater reliability, was 0.917). Across all gradings, the sensitivity and specificity (95% confidence interval) were 86.0% (85.2-86.7) and 96.4% (96.3-96.5), respectively. Of all gradable eyes (n = 111 183; 97.62%), the prevalence of RG was 4.38%. The most common features of RG were the appearance of the neuroretinal rim (NRR) inferiorly and superiorly.

Conclusions: A large data set of CFPs of sufficient quality to develop AI screening solutions for glaucoma was assembled. The most common features of RG were the appearance of the NRR inferiorly and superiorly. Disc hemorrhages were a rare feature of RG.

Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
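
A minimal sketch, not the authors' code, of how the per-grading sensitivity and specificity against the final reference grade could be computed. The label strings, the function name, and the normal-approximation 95% confidence interval are assumptions for illustration; the abstract does not state which interval method was used.

import math

def sens_spec_with_ci(grader_labels, reference_labels, z=1.96):
    # Count the 2x2 confusion table against the reference (final) grade.
    tp = fp = tn = fn = 0
    for g, r in zip(grader_labels, reference_labels):
        if r == "RG":
            if g == "RG":
                tp += 1
            else:
                fn += 1
        else:
            if g == "NRG":
                tn += 1
            else:
                fp += 1

    def rate_with_ci(successes, total):
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)  # normal-approximation CI
        return p, (p - half, p + half)

    return rate_with_ci(tp, tp + fn), rate_with_ci(tn, tn + fp)

# Toy example: 86 of 100 reference-RG eyes flagged, 964 of 1000 reference-NRG eyes cleared,
# roughly matching the 86.0% sensitivity and 96.4% specificity reported in the abstract.
sens, spec = sens_spec_with_ci(
    ["RG"] * 86 + ["NRG"] * 14 + ["NRG"] * 964 + ["RG"] * 36,
    ["RG"] * 100 + ["NRG"] * 1000,
)
print(sens, spec)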

2.
IEEE Trans Biomed Eng ; 68(2): 374-383, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32396068

ABSTRACT

Prostate cancer (PCa) is one of the most common types of cancer in men. Biopsies guided by bi-parametric magnetic resonance imaging (MRI) can aid PCa diagnosis. Previous works have mostly focused on either detection or classification of PCa from MRI. In this work, however, we present a neural network that simultaneously detects and grades cancer tissue in an end-to-end fashion. This is more clinically relevant than the classification goal of the ProstateX-2 challenge. We used the dataset of this challenge for training and testing. We use a 2D U-Net with MRI slices as input and lesion segmentation maps that encode the Gleason Grade Group (GGG), a measure of cancer aggressiveness, as output. We propose a method for encoding the GGG in the model target that takes advantage of the fact that the classes are ordinal. Furthermore, we evaluate methods for incorporating prostate zone segmentations as prior information, as well as ensembling techniques. The model scored a voxel-wise weighted kappa of 0.446 ± 0.082 and a Dice similarity coefficient for segmenting clinically significant cancer of 0.370 ± 0.046, obtained using 5-fold cross-validation. The lesion-wise weighted kappa on the ProstateX-2 challenge test set was 0.13 ± 0.27. We show that our proposed model target outperforms standard multiclass classification and multi-label ordinal regression. Additionally, we present a comparison of methods for further improving the model's performance.
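
A minimal sketch of one common way to exploit the ordinal nature of the classes in a segmentation target: expanding the GGG label map into cumulative binary channels that a U-Net can predict with per-channel sigmoids. This is illustrative only and not necessarily the paper's exact encoding; the function names and the 0.5 decoding threshold are assumptions.

import numpy as np

NUM_GRADES = 5  # GGG 1..5; 0 means background / no lesion

def encode_ggg_ordinal(label_map: np.ndarray) -> np.ndarray:
    """label_map: (H, W) integers in [0, 5] -> target: (NUM_GRADES, H, W) floats.
    A voxel of grade g receives ones in the first g channels."""
    return np.stack(
        [(label_map >= g).astype(np.float32) for g in range(1, NUM_GRADES + 1)]
    )

def decode_ggg_ordinal(pred: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """pred: (NUM_GRADES, H, W) sigmoid outputs -> (H, W) predicted GGG,
    taken as the number of channels above the threshold."""
    return (pred >= threshold).sum(axis=0)

# Toy example: a 2x2 slice with background, GGG 2, and GGG 4 voxels round-trips correctly.
labels = np.array([[0, 2], [4, 0]])
target = encode_ggg_ordinal(labels)             # shape (5, 2, 2)
assert (decode_ggg_ordinal(target) == labels).all()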


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Magnetic Resonance Imaging , Male , Neoplasm Grading , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging