Results 1-2 of 2

1.
J Magn Reson Imaging; 50(4): 1144-1151, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30924997

ABSTRACT

BACKGROUND: The usefulness of 3D deep learning-based classification of breast cancer and malignancy localization from MRI has been reported. This work can potentially be very useful in the clinical domain and aid radiologists in breast cancer diagnosis.

PURPOSE: To evaluate the efficacy of a 3D deep convolutional neural network (CNN) for diagnosing breast cancer and localizing lesions on dynamic contrast-enhanced (DCE) MRI data in a weakly supervised manner.

STUDY TYPE: Retrospective study.

SUBJECTS: A total of 1537 female study cases (mean age 47.5 ± 11.8 years) were collected from March 2013 to December 2016. All cases had labels from pathology results as well as BI-RADS categories assessed by radiologists.

FIELD STRENGTH/SEQUENCE: 1.5 T dynamic contrast-enhanced MRI.

ASSESSMENT: Deep 3D densely connected networks were trained under image-level supervision to automatically classify the images and localize the lesions. The dataset was randomly divided into training (1073), validation (157), and testing (307) subsets.

STATISTICAL TESTS: Accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve, and the McNemar test for breast cancer classification; Dice similarity for breast cancer localization.

RESULTS: The final algorithm showed 83.7% (257 out of 307) accuracy (95% confidence interval [CI]: 79.1%, 87.4%), 90.8% (187 out of 206) sensitivity (95% CI: 80.6%, 94.1%), and 69.3% (70 out of 101) specificity (95% CI: 59.7%, 77.5%) for breast cancer diagnosis, with an area under the ROC curve of 0.859. The weakly supervised cancer detection showed an overall Dice distance of 0.501 ± 0.274.

DATA CONCLUSION: 3D CNNs demonstrated high accuracy for diagnosing breast cancer. The weakly supervised learning method showed promise for localizing lesions in volumetric radiology images with only image-level labels.

LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 1. J Magn Reson Imaging 2019;50:1144-1151.
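The abstract names "3D densely connected networks" trained with image-level labels only and gives no implementation detail. The PyTorch sketch below is a minimal illustration of that general weakly supervised pattern, not the authors' code: a 3D CNN ending in global average pooling, whose classifier weights can be projected back onto the last feature maps as a class activation map (CAM), yielding a 3D lesion heatmap from image-level supervision alone. All layer sizes and the input shape are assumptions.

```python
# Minimal sketch of weakly supervised 3D classification with class
# activation maps (CAM). Illustrative only: the paper uses a 3D DenseNet;
# this toy model just shows the GAP + linear-head pattern that lets
# image-level labels yield voxel-level heatmaps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DCAMNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(            # hypothetical backbone
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)      # trained with image labels only

    def forward(self, x):
        f = self.features(x)                       # (B, 64, D', H', W')
        logits = self.head(f.mean(dim=(2, 3, 4)))  # global average pooling
        return logits, f

    def cam(self, feats, class_idx):
        # Project classifier weights onto feature maps -> 3D heatmap.
        w = self.head.weight[class_idx]            # (64,)
        return torch.einsum("c,bcdhw->bdhw", w, feats)

model = Tiny3DCAMNet()
vol = torch.randn(1, 1, 64, 128, 128)              # fake DCE-MRI volume
logits, feats = model(vol)
heat = model.cam(feats, class_idx=1)               # "malignant" heatmap
heat = F.interpolate(heat.unsqueeze(1), size=vol.shape[2:],
                     mode="trilinear", align_corners=False)
```

Thresholding such a heatmap and comparing it with a ground-truth mask is one plausible way a Dice score like the reported 0.501 could be obtained; the exact localization procedure is defined in the paper itself.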


Subject(s)
Breast Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Breast/diagnostic imaging , Contrast Media , Deep Learning , Female , Humans , Image Enhancement/methods , Middle Aged , Neural Networks, Computer , Retrospective Studies , Sensitivity and Specificity
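The point estimates in the results above follow directly from the stated counts, and a Wilson score interval (an assumption; the abstract does not name its CI method) reproduces the published accuracy and specificity bounds to within rounding. A short self-contained check:

```python
# Re-derive the reported diagnostic metrics from the stated counts.
# The CI method is an assumption (Wilson score); it matches the published
# accuracy and specificity bounds to ~0.1 percentage point. The printed
# sensitivity lower bound (80.6%) differs from what this gives (~86%),
# so the paper may have used another method for that interval.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

for name, k, n in [("accuracy", 257, 307),
                   ("sensitivity", 187, 206),
                   ("specificity", 70, 101)]:
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k / n:.1%} (95% CI {lo:.1%}, {hi:.1%})")
# accuracy: 83.7% (95% CI 79.2%, 87.4%)
# specificity: 69.3% (95% CI 59.7%, 77.5%)
```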
2.
Lancet Digit Health; 1(4): e172-e182, 2019 Aug.
Article in English | MEDLINE | ID: mdl-33323187

ABSTRACT

BACKGROUND: Spectral-domain optical coherence tomography (SDOCT) can be used to detect glaucomatous optic neuropathy, but human expertise in SDOCT interpretation is limited. We aimed to develop and validate a three-dimensional (3D) deep-learning system using SDOCT volumes to detect glaucomatous optic neuropathy.

METHODS: We retrospectively collected a dataset of 4877 SDOCT volumes of the optic disc cube for training (60%), testing (20%), and primary validation (20%) from electronic medical and research records at the Chinese University of Hong Kong Eye Centre (Hong Kong, China) and the Hong Kong Eye Hospital (Hong Kong, China). A residual network was used to build the 3D deep-learning system. Three independent datasets (two from Hong Kong and one from Stanford, CA, USA), including 546, 267, and 1231 SDOCT volumes, respectively, were used for external validation of the deep-learning system. Volumes were labelled as having or not having glaucomatous optic neuropathy according to the criteria of retinal nerve fibre layer thinning on reliable SDOCT images with position-correlated visual field defect. Heatmaps were generated for qualitative assessment.

FINDINGS: 6921 SDOCT volumes from 1 384 200 two-dimensional cross-sectional scans were studied. The 3D deep-learning system had an area under the receiver operating characteristic curve (AUROC) of 0·969 (95% CI 0·960-0·976), sensitivity of 89% (95% CI 83-93), specificity of 96% (92-99), and accuracy of 91% (89-93) in the primary validation, outperforming a two-dimensional deep-learning system trained on en face fundus images (AUROC 0·921 [0·905-0·937]; p<0·0001). The 3D deep-learning system performed similarly in the external validation datasets, with AUROCs of 0·893-0·897, sensitivities of 78-90%, specificities of 79-86%, and accuracies of 80-86%. Heatmaps showed that the features learned by the 3D deep-learning system for detection of glaucomatous optic neuropathy were similar to those used by clinicians.

INTERPRETATION: The proposed 3D deep-learning system performed well in detecting glaucomatous optic neuropathy in both primary and external validations. Further prospective studies are needed to estimate the incremental cost-effectiveness of incorporating an artificial intelligence-based model for glaucoma screening.

FUNDING: Hong Kong Research Grants Council.
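The methods name a residual network over SDOCT volumes without layer-level detail. As a generic illustration, not the authors' architecture, and with all channel counts and the input cube size assumed, a 3D residual block in PyTorch looks like the following, with the identity shortcut that lets deep volumetric classifiers train stably:

```python
# Generic 3D residual block -- an illustration of the "residual network"
# the abstract names, not the authors' architecture.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(ch)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut

# A toy classifier over an SDOCT optic-disc cube (dimensions assumed).
net = nn.Sequential(
    nn.Conv3d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
    ResBlock3D(32), ResBlock3D(32),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 2),               # GON vs. no GON
)
logits = net(torch.randn(2, 1, 64, 128, 128))
print(logits.shape)                 # torch.Size([2, 2])
```

Heatmaps like those described in the findings are typically produced post hoc, for example by CAM- or Grad-CAM-style weighting of the last feature maps, as sketched for the first record above.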


Subject(s)
Deep Learning , Glaucoma/diagnosis , Optic Nerve Diseases/diagnosis , Teaching , Tomography, Optical Coherence , Hong Kong , Humans