1.
Comput Med Imaging Graph; 102: 102126, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36242993

ABSTRACT

Intracranial aneurysms are commonly found in human brains, especially in the elderly, and their rupture accounts for a high rate of subarachnoid hemorrhages. However, pinpointing small aneurysms in computed tomography angiography (CTA) images is time-consuming and requires special expertise. Deep learning-based detection has greatly improved efficiency, but false positives remain difficult to rule out. To study the feasibility of deep learning algorithms for aneurysm analysis in clinical applications, this paper proposes a pipeline for aneurysm detection, segmentation, and rupture classification, and validates its performance on CTA images of 1508 subjects. A cascade detection model first uses a fine-tuned feature pyramid network (FPN) to propose candidates and then applies a dual-channel ResNet aneurysm classifier to further reduce false positives. Detected aneurysms are segmented by applying a traditional 3D V-Net to their image patches, and radiomics features are extracted after detection and segmentation. Machine-learning-based and deep-learning-based rupture classifiers are then used to distinguish ruptured from unruptured aneurysms. Experimental results show that the dual-channel ResNet classifier, which utilizes both image and vesselness information, boosts detection sensitivity compared with a single image-channel input. Overall, the proposed pipeline achieves a sensitivity of 90% at 1 false positive per image and 95% at 2 false positives per image. For rupture classification, an area under the curve (AUC) of 0.906 is achieved on the testing dataset. These results suggest the pipeline is feasible for potential clinical use to assist radiologists in aneurysm detection and in classifying ruptured and unruptured aneurysms.


Subject(s)
Aneurysm, Ruptured , Intracranial Aneurysm , Humans , Aged , Intracranial Aneurysm/diagnostic imaging , Cerebral Angiography/methods , Angiography, Digital Subtraction/methods , Sensitivity and Specificity , Aneurysm, Ruptured/diagnostic imaging
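The two-stage cascade described in the abstract above (a candidate detector followed by a dual-channel false-positive classifier) can be sketched in a few lines. Everything here is illustrative: the score fields, the averaging fusion rule, and the thresholds are assumptions for demonstration, not the paper's actual FPN or ResNet models.

```python
def detect_candidates(image_patches):
    """Stage 1 stand-in: any patch with a non-trivial detector response
    becomes a candidate (the paper uses a fine-tuned FPN here)."""
    return [p for p in image_patches if p["fpn_score"] > 0.1]

def dual_channel_score(candidate):
    """Stage 2 stand-in: fuse image and vesselness evidence. The real
    method feeds both channels to a ResNet; here we simply average the
    two per-channel scores as a placeholder fusion rule."""
    return 0.5 * (candidate["image_score"] + candidate["vesselness_score"])

def cascade_detect(image_patches, threshold=0.5):
    """Keep only candidates the second-stage classifier accepts."""
    return [c for c in detect_candidates(image_patches)
            if dual_channel_score(c) >= threshold]

patches = [
    {"fpn_score": 0.9,  "image_score": 0.8, "vesselness_score": 0.9},  # aneurysm-like
    {"fpn_score": 0.7,  "image_score": 0.6, "vesselness_score": 0.1},  # vessel false positive
    {"fpn_score": 0.05, "image_score": 0.9, "vesselness_score": 0.9},  # never a candidate
]
kept = cascade_detect(patches)
```

The point of the cascade is visible even in this toy: the second patch passes the first stage but is rejected once vesselness evidence is fused in, which is exactly how the second stage trims false positives.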
3.
Comput Med Imaging Graph; 89: 101887, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33711732

ABSTRACT

Registration of hepatic dynamic contrast-enhanced magnetic resonance images (DCE-MRIs) is an important task for evaluating transarterial chemoembolization (TACE) or radiofrequency ablation by quantifying enhancing viable residual tumor against necrosis. However, intensity changes due to contrast agents, combined with spatial deformations, pose technical challenges for accurate DCE-MRI registration, and traditional deformable registration methods based on mutual information are often computationally intensive in order to tolerate such variability in intensity enhancement and shape deformation. To address this problem, we propose a cascade network framework composed of a de-enhancement network (DE-Net) and a registration network (Reg-Net) that first removes contrast-enhancement effects and then registers the liver images across phases. In experiments, we used DCE-MRI series of 97 patients from Renji Hospital of Shanghai Jiaotong University and registered the arterial-phase and portal-venous-phase images onto the pre-contrast phases. The performance of the cascade framework was compared with that of the traditional registration method SyN in the ANTs toolkit and with Reg-Net alone (without DE-Net). The results showed that the proposed method achieved registration performance comparable to SyN while significantly improving efficiency.


Subject(s)
Carcinoma, Hepatocellular , Chemoembolization, Therapeutic , Liver Neoplasms , Algorithms , China , Contrast Media , Humans , Liver Neoplasms/diagnostic imaging , Magnetic Resonance Imaging
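The classical baseline contrasted with the learned cascade above is mutual-information (MI) registration: MI is a similarity metric that tolerates the intensity remapping caused by contrast agents, at the cost of an expensive optimization. A stdlib-only sketch of discrete MI, with toy "images" chosen to show why it works (the intensity values and bins are illustrative assumptions):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """MI between two equal-length sequences of discrete intensities,
    estimated from the joint and marginal histograms (in nats)."""
    n = len(img_a)
    pa = Counter(img_a)            # marginal counts for image A
    pb = Counter(img_b)            # marginal counts for image B
    pab = Counter(zip(img_a, img_b))  # joint counts
    mi = 0.0
    for (a, b), c in pab.items():
        p_ab = c / n
        # p_ab / (p_a * p_b) simplifies to c * n / (pa[a] * pb[b])
        mi += p_ab * math.log(c * n / (pa[a] * pb[b]))
    return mi

# MI is invariant to a one-to-one intensity remapping (e.g. contrast
# enhancement): the "enhanced" image has the same structure as "pre"
# with relabeled intensities, so their MI stays maximal, while a
# spatially misaligned image scores lower.
pre = [0, 0, 1, 1, 2, 2, 3, 3]
enhanced = [10, 10, 30, 30, 20, 20, 40, 40]  # same structure, remapped values
shuffled = [0, 1, 2, 3, 0, 1, 2, 3]          # misaligned
```

This invariance is exactly what lets MI-based methods like SyN align pre- and post-contrast phases, and the nested histogram evaluation at every optimizer step is where the computational cost the abstract mentions comes from.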
4.
NPJ Digit Med; 3: 123, 2020.
Article in English | MEDLINE | ID: mdl-33043147

ABSTRACT

By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of 'iGlaucoma', a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were used to train (8424 VFs), validate (598 VFs) and test (3 independent test sets of 200, 406, and 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced expert glaucomatologists, the diagnostic performance (area under the curve [AUC], sensitivity and specificity) of the DLS and of six ophthalmologists in detecting glaucoma was evaluated. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834-0.877, with a sensitivity of 0.831-0.922 and a specificity of 0.676-0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with corresponding AUC, sensitivity and specificity of 0.966 (0.953-0.979), 0.954 (0.930-0.977), and 0.873 (0.838-0.908), respectively. 'iGlaucoma' is a clinically effective diagnostic tool to detect glaucoma from Humphrey VFs, although the target population will need to be carefully identified with input from glaucoma expertise.
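The metrics reported throughout these studies (AUC, sensitivity, specificity) can all be computed from raw scores and binary labels. The sketch below uses the standard rank-based (Mann-Whitney) formulation of AUC and the confusion-matrix definitions, on made-up toy data rather than any study's results:

```python
def sensitivity_specificity(labels, preds):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for 0/1 inputs."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC via the rank formulation: the probability that a randomly
    chosen positive scores higher than a randomly chosen negative
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5
sens, spec = sensitivity_specificity(labels, preds)
```

Note that AUC is threshold-free while sensitivity and specificity depend on the chosen operating point, which is why papers report them together.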

5.
Biomed Opt Express; 10(5): 2639-2656, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31149385

ABSTRACT

We propose a joint segmentation and classification deep model for early glaucoma diagnosis using retinal imaging with optical coherence tomography (OCT). Our motivation is rooted in the observation that ophthalmologists make the clinical decision by analyzing the retinal nerve fiber layer (RNFL) in OCT images. To simulate this process, we propose a novel deep model that jointly performs retinal layer segmentation and glaucoma classification. Our model consists of three parts. First, the segmentation network simultaneously predicts six retinal layers and the five boundaries between them. Then, a post-processing algorithm fuses the two results while enforcing topological correctness. Finally, the classification network takes the RNFL thickness vector as input and outputs the probability of glaucoma. Within the classification network, we propose a carefully designed module that implements the clinical strategy for diagnosing glaucoma. We validate our method both on a collected dataset of 1004 circular OCT B-scans from 234 subjects and on a public dataset of 110 B-scans from 10 patients with diabetic macular edema. Experimental results demonstrate that our method achieves superior segmentation performance compared with other state-of-the-art methods, both on our collected dataset and on the public dataset with severe retinal pathology. For glaucoma classification, our model achieves a diagnostic accuracy of 81.4% with an AUC of 0.864, clearly outperforming baseline methods.
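The RNFL thickness vector that the classification network consumes is derived from the segmented layer boundaries. A minimal sketch of that derivation, where the boundary arrays and the axial pixel spacing are illustrative assumptions (spacings vary by OCT device):

```python
def rnfl_thickness(upper_boundary, lower_boundary, pixel_spacing_um=3.9):
    """Per-A-scan RNFL thickness in micrometers, computed as the axial
    distance between the segmented upper and lower RNFL boundaries
    (given as row indices) scaled by the assumed pixel spacing."""
    if len(upper_boundary) != len(lower_boundary):
        raise ValueError("boundaries must cover the same A-scans")
    return [(lo - up) * pixel_spacing_um
            for up, lo in zip(upper_boundary, lower_boundary)]

upper = [10, 11, 12, 12]  # inner RNFL boundary row per A-scan (toy values)
lower = [22, 25, 24, 20]  # outer RNFL boundary row per A-scan (toy values)
thickness = rnfl_thickness(upper, lower)
```

Because the classifier sees only this thickness vector, the topology-enforcing post-processing step matters: a boundary crossing (lower above upper) would produce a physically impossible negative thickness.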

7.
BMC Med Imaging; 18(1): 35, 2018 Oct 04.
Article in English | MEDLINE | ID: mdl-30286740

ABSTRACT

BACKGROUND: To develop a deep neural network able to differentiate glaucomatous from non-glaucomatous visual fields based on visual field (VF) test results, we collected VF tests from 3 different ophthalmic centers in mainland China. METHODS: Visual fields obtained by both Humphrey 30-2 and 24-2 tests were collected. Reliability criteria were established as fixation losses of less than 2/13 and false positive and false negative rates of less than 15%. RESULTS: We split a total of 4012 PD images from 1352 patients into two sets: 3712 for training and 300 for validation. There was no significant difference in the left-to-right eye ratio (P = 0.6211), while age (P = 0.0022), VFI (P = 0.0001), MD (P = 0.0039) and PSD (P = 0.0001) exhibited obvious statistical differences. On the validation set of 300 VFs, the CNN achieved an accuracy of 0.876, with a specificity of 0.826 and a sensitivity of 0.932. For ophthalmologists, the average accuracies were 0.607, 0.585 and 0.626 for resident ophthalmologists, attending ophthalmologists and glaucoma experts, respectively. AGIS and GSS2 achieved accuracies of 0.459 and 0.523, respectively. Three traditional machine learning algorithms, namely support vector machine (SVM), random forest (RF), and k-nearest neighbor (k-NN), were also implemented and evaluated, achieving accuracies of 0.670, 0.644, and 0.591, respectively. CONCLUSIONS: Our CNN-based algorithm achieved higher accuracy than human ophthalmologists and traditional rule-based criteria (AGIS and GSS2) in differentiating glaucomatous from non-glaucomatous VFs.


Subject(s)
Glaucoma/diagnosis , Visual Field Tests/methods , Adult , Aged , Female , Humans , Machine Learning , Middle Aged , Reproducibility of Results
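The reliability criteria stated in METHODS above (fixation losses below 2/13, false positive and false negative rates below 15%) amount to a simple per-test filter before any VF enters training. A sketch of that filter; the field names are assumptions for illustration, not the study's data schema:

```python
def is_reliable(vf):
    """Apply the abstract's reliability criteria to one VF test record."""
    return (vf["fixation_losses"] < 2 / 13
            and vf["false_pos_rate"] < 0.15
            and vf["false_neg_rate"] < 0.15)

vf_tests = [
    {"fixation_losses": 1 / 13, "false_pos_rate": 0.05, "false_neg_rate": 0.10},  # reliable
    {"fixation_losses": 3 / 13, "false_pos_rate": 0.05, "false_neg_rate": 0.10},  # too many fixation losses
    {"fixation_losses": 0.0,    "false_pos_rate": 0.20, "false_neg_rate": 0.10},  # FP rate too high
]
reliable = [t for t in vf_tests if is_reliable(t)]
```

Filtering on all three indices at once is standard practice for Humphrey perimetry, since an unreliable field would contaminate both the training labels and the evaluation.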