Results 1 - 20 of 26
1.
Sci Rep ; 12(1): 12638, 2022 07 25.
Article in English | MEDLINE | ID: mdl-35879344

ABSTRACT

Normative aging trends of the brain can serve as an important reference in the assessment of neurological structural disorders. Such models are typically developed from longitudinal brain image data, i.e., follow-up data of the same subject over different time points. In practice, obtaining such longitudinal data is difficult. We propose a method to develop an aging model for a given population, in the absence of longitudinal data, by using images from different subjects at different time points, the so-called cross-sectional data. We define an aging model as a diffeomorphic deformation on a structural template derived from the data and propose a method that develops a topology-preserving aging model close to natural aging. The proposed model is successfully validated on two public cross-sectional datasets which provide templates constructed from different sets of subjects at different age points.


Subject(s)
Aging , Brain , Adult , Algorithms , Brain/diagnostic imaging , Cross-Sectional Studies , Humans , Magnetic Resonance Imaging/methods , Research Design
2.
IEEE J Biomed Health Inform ; 26(4): 1496-1505, 2022 04.
Article in English | MEDLINE | ID: mdl-35157603

ABSTRACT

Deep learning based methods have shown great promise in achieving accurate automatic detection of Coronavirus Disease 2019 (COVID-19) from Chest X-Ray (CXR) images. However, incorporating explainability in these solutions remains relatively unexplored. We present a hierarchical classification approach for separating normal, non-COVID pneumonia (NCP), and COVID cases using CXR images. We demonstrate that the proposed method achieves clinically consistent explanations. We achieve this using a novel multi-scale attention architecture called Multi-scale Attention Residual Learning (MARL) and a new loss function based on conicity for training the proposed architecture. The proposed classification strategy has two stages. The first stage uses a model derived from DenseNet to separate pneumonia cases from normal cases, while the second stage uses the MARL architecture to discriminate between COVID and NCP cases. With five-fold cross validation, the proposed method achieves 93%, 96.28%, and 84.51% accuracy, respectively, over three large public datasets for normal vs. NCP vs. COVID classification, which is competitive with state-of-the-art methods. We also provide explanations in the form of GradCAM attributions, which are well aligned with expert annotations. The attributions also clearly indicate that MARL deems the peripheral regions of the lungs more important in COVID cases, while central regions are more important in NCP cases. This observation matches the criteria described by radiologists in the clinical literature, thereby attesting to the utility of the derived explanations.
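The conicity term used in this loss can be illustrated with a small sketch. The abstract does not give the exact form used to train MARL, so this assumes the common definition: the mean cosine similarity between each feature vector and the mean vector of the set (high conicity means the features are bunched in a narrow cone).

```python
import numpy as np

def conicity(features, eps=1e-8):
    """Conicity of a set of row vectors: the mean cosine similarity
    between each vector and the mean vector of the set.
    Values near 1 indicate vectors concentrated in a narrow cone."""
    mean_vec = features.mean(axis=0)
    num = features @ mean_vec
    den = np.linalg.norm(features, axis=1) * np.linalg.norm(mean_vec) + eps
    return float(np.mean(num / den))
```

A training loss could then penalize (or reward) this scalar for a batch of feature vectors, depending on whether spread-out or aligned features are desired.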


Subject(s)
COVID-19 , Deep Learning , Pneumonia , Algorithms , Attention , COVID-19/diagnostic imaging , Humans , Neural Networks, Computer , SARS-CoV-2 , X-Rays
3.
J Med Imaging (Bellingham) ; 8(4): 044502, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34423071

ABSTRACT

Purpose: Explainable AI aims to build systems that not only give high performance but are also able to provide insights that drive the decision making. However, deriving this explanation often depends on fully annotated (class label and local annotation) data, which are not readily available in the medical domain. Approach: This paper addresses the above-mentioned aspects and presents an innovative approach to classifying a lung nodule in a CT volume as malignant or benign, and generating a morphologically meaningful explanation for the decision in the form of attributes such as nodule margin, sphericity, and spiculation. A deep learning architecture that is trained using a multi-phase training regime is proposed. The nodule class label (benign/malignant) is learned with full supervision and is guided by semantic attributes that are learned in a weakly supervised manner. Results: Results of an extensive evaluation of the proposed system on the LIDC-IDRI dataset show good performance compared with state-of-the-art, fully supervised methods. The proposed model is able to label nodules (after full supervision) with an accuracy of 89.1% and an area under the curve of 0.91, and to provide eight attribute scores as an explanation, which is learned from a much smaller training set. The proposed system's potential to be integrated with a sub-optimal nodule detection system was also tested, and our system handled 95% of false positive or random regions in the input well by labeling them as benign, which underscores its robustness. Conclusions: The proposed approach offers a way to address computer-aided diagnosis system design under the constraint of sparse availability of fully annotated images.

4.
IEEE Trans Med Imaging ; 38(8): 1858-1874, 2019 08.
Article in English | MEDLINE | ID: mdl-30835214

ABSTRACT

Retinal swelling due to the accumulation of fluid is associated with the most vision-threatening retinal diseases. Optical coherence tomography (OCT) is the current standard of care in assessing the presence and quantity of retinal fluid and in image-guided treatment management. Deep learning methods have made their impact across medical imaging, and many retinal OCT analysis methods have been proposed. However, it is currently not clear how successful they are in interpreting retinal fluid on OCT, owing to the lack of standardized benchmarks. To address this, we organized the RETOUCH challenge in conjunction with MICCAI 2017, with eight teams participating. The challenge consisted of two tasks: fluid detection and fluid segmentation. It was the first to feature all three retinal fluid types, with annotated images provided by two clinical centers, acquired with the three most common OCT device vendors from patients with two different retinal diseases. The analysis revealed that in the detection task, the performance of automated fluid detection was within the inter-grader variability. However, in the segmentation task, fusing the automated methods produced segmentations that were superior to all individual methods, indicating the need for further improvements in segmentation performance.
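The abstract does not specify the fusion rule used to combine the automated methods; a per-pixel majority vote is the simplest such combination and can be sketched as follows (an illustrative assumption, not the challenge's actual fusion scheme):

```python
import numpy as np

def fuse_majority(masks):
    """Fuse binary segmentation masks from several methods by a
    per-pixel majority vote; ties are counted as foreground."""
    stack = np.stack(masks).astype(np.int32)
    votes = stack.sum(axis=0)               # number of methods voting foreground
    return (votes * 2 >= stack.shape[0]).astype(np.uint8)
```

More sophisticated label-fusion schemes (e.g., performance-weighted voting or STAPLE) follow the same pattern but weight each method's vote.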


Subject(s)
Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Databases, Factual , Humans , Retinal Diseases/diagnostic imaging
5.
Neurol India ; 67(1): 229-234, 2019.
Article in English | MEDLINE | ID: mdl-30860125

ABSTRACT

CONTEXT: A brain magnetic resonance imaging (MRI) atlas plays an important role in many neuroimage analysis tasks, as it provides the standard coordinate system needed for spatial normalization of a brain MRI. Ideally, this atlas should be as near as possible to the average brain of the population being studied. AIMS: The aim of this study is to construct and validate a brain MRI atlas of the young Indian population and the corresponding structure probability maps. SETTINGS AND DESIGN: This was a population-specific atlas generation and validation process. MATERIALS AND METHODS: 100 young healthy adults (M/F = 50/50), aged 21-30 years, were recruited for the study. Three different 1.5-T scanners were used for image acquisition. The atlas and structure maps were created using nonrigid groupwise registration and label-transfer techniques. COMPARISON AND VALIDATION: The generated atlas was compared against other atlases to study population-specific trends. RESULTS: The atlas-based comparison indicated a significant difference between the global size of Indian and Caucasian brains. This difference was noteworthy for all three global measures, namely, length, width, and height. A comparison with the Chinese and Korean brain templates indicated that all three are comparable in length but significantly different (smaller) in terms of height and width. CONCLUSIONS: The findings confirm that there are significant differences in brain morphology between the Indian, Chinese, and Caucasian populations.


Subject(s)
Brain/anatomy & histology , Cervical Atlas/anatomy & histology , Image Processing, Computer-Assisted , Neuroimaging , Adult , Algorithms , Asian People , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , White People , Young Adult
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 5581-5584, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947120

ABSTRACT

Optical coherence tomography (OCT) images provide valuable information for understanding the changes occurring in the retina due to glaucoma, specifically those related to the retinal nerve fiber layer and the optic nerve head. In this paper, we propose a deep learning approach using a Capsule network for glaucoma classification, which operates directly on 3D OCT volumes. The network is trained only on labelled volumes and does not attempt any region/structure segmentation. The proposed network was assessed on 50 volumes and found to achieve 0.97 for the area under the ROC curve (AUC). This is considerably higher than existing approaches, which are mostly based on classical machine learning or rely on segmentation of the required structures from OCT. Our network also outperforms 3D convolutional neural networks despite having fewer network parameters and requiring fewer training epochs.


Subject(s)
Glaucoma , Tomography, Optical Coherence , Algorithms , Glaucoma/diagnostic imaging , Humans , Optic Disk , ROC Curve , Retina
7.
IEEE J Biomed Health Inform ; 23(3): 1151-1162, 2019 05.
Article in English | MEDLINE | ID: mdl-29994410

ABSTRACT

Level set based deformable models (LDM) are commonly used for medical image segmentation. However, they rely on a handcrafted curve evolution velocity that needs to be adapted for each segmentation task. Convolutional Neural Networks (CNN) address this issue by learning robust features in a supervised, end-to-end manner. However, CNNs employ millions of network parameters, which require a large amount of data during training to prevent over-fitting and increase the memory requirement and computation time during testing. Moreover, since CNNs pose segmentation as region-based pixel labeling, they cannot explicitly model the high-level dependencies between the points on the object boundary to preserve its overall shape, smoothness, or the regional homogeneity within and outside the boundary. We present a Recurrent Neural Network based solution called RACE-net to address the above issues. RACE-net models a generalized LDM evolving under a constant and mean curvature velocity. At each time-step, the curve evolution velocities are approximated using a feed-forward architecture inspired by the multiscale image pyramid. RACE-net allows the curve evolution velocities to be learned in an end-to-end manner while minimizing the number of network parameters, computation time, and memory requirements. RACE-net was validated on three different segmentation tasks: optic disc and cup in color fundus images, cell nuclei in histopathological images, and the left atrium in cardiac MRI volumes. Assessment on public datasets yielded high Dice values between 0.87 and 0.97, which illustrates its utility as a generic, off-the-shelf architecture for biomedical segmentation.
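As a toy illustration of the handcrafted curve evolution that LDMs build on (and that RACE-net replaces with learned velocities), the sketch below evolves a level set function under a constant speed, d(phi)/dt = -F |grad(phi)|, discretized with central differences. This is the classical baseline, not the authors' architecture.

```python
import numpy as np

def evolve_constant_speed(phi, speed=1.0, dt=0.5, steps=10):
    """Evolve a level set function under a constant (balloon) speed:
    d(phi)/dt = -F * |grad(phi)|.  With phi < 0 inside the contour,
    F > 0 expands the segmented region."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)               # central differences
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

# Signed distance to a circle of radius 10 on a 64x64 grid.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
phi0 = np.sqrt(xx**2 + yy**2) - 10.0
phi1 = evolve_constant_speed(phi0, speed=1.0, dt=0.5, steps=10)
```

For a signed distance function |grad(phi)| is approximately 1, so ten steps of dt = 0.5 move the zero level set outward by about 5 pixels, growing the enclosed region.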


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Cell Nucleus/physiology , Heart Atria/diagnostic imaging , Humans , Retina/diagnostic imaging
8.
IEEE J Biomed Health Inform ; 23(1): 273-282, 2019 01.
Article in English | MEDLINE | ID: mdl-29994501

ABSTRACT

Automated and accurate segmentation of cystoid structures in optical coherence tomography (OCT) is of interest in the early detection of retinal diseases. It is, however, a challenging task. We propose a novel method for localizing cysts in 3-D OCT volumes. The proposed work is biologically inspired and based on selective enhancement of the cysts, by inducing motion to a given OCT slice. A convolutional neural network is designed to learn a mapping function that combines the results of multiple such motions to produce a probability map for cyst locations in a given slice. The final segmentation of cysts is obtained via simple clustering of the detected cyst locations. The proposed method is evaluated on two public datasets and one private dataset. The public datasets include the one released for the OPTIMA cyst segmentation challenge (OCSC) at MICCAI 2015 and the DME dataset. After training on the OCSC training set, the method achieves a mean Dice coefficient (DC) of 0.71 on the OCSC test set. The robustness of the algorithm was examined by cross validation on the DME and AEI (private) datasets, and the mean DC values obtained were 0.69 and 0.79, respectively. Overall, the proposed system has the highest performance on all the benchmarks. These results underscore the strengths of the proposed method in handling variations in both data acquisition protocols and scanners.
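The Dice coefficient (DC) used to report these results is a standard overlap measure between a predicted and a reference binary mask; a minimal sketch (standard definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient DC = 2|A n B| / (|A| + |B|) between two
    binary masks; 1.0 for a perfect match, 0.0 for no overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter) / (pred.sum() + target.sum() + eps)
```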


Subject(s)
Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence/methods , Algorithms , Cysts/diagnostic imaging , Databases, Factual , Humans , Neural Networks, Computer , Phantoms, Imaging , Retina/diagnostic imaging
9.
Saudi J Ophthalmol ; 32(4): 295-302, 2018.
Article in English | MEDLINE | ID: mdl-30581300

ABSTRACT

PURPOSE: To compare the diagnostic ability of optical coherence tomography angiography (OCT-A) derived capillary density (CD) measurements in the radial peripapillary capillary (RPC) region and inside the optic nerve head (ONH) for differentiating between normal and primary open angle glaucoma (POAG) eyes. METHODS: AngioVue disc OCT-A images were obtained and assessed in 83 eyes of POAG patients and 74 age-matched healthy eyes. RPC CD was quantitatively measured in the peripapillary area within a 3.45 mm diameter circle around the ONH, and inside the ONH, in 8 equally divided sectors, using the Bar-Selective Combination of Shifted Filter Responses method after suppressing the large vessels. The area under the receiver operating characteristic (AUROC) curve was used to assess the diagnostic accuracy of CD in the two scanning regions for differentiating between normal and POAG eyes. RESULTS: The mean peripapillary RPC density (0.12 ± 0.03) and mean ONH CD (0.09 ± 0.03) were significantly lower in POAG eyes when compared to the normal eyes (RPC CD: 0.17 ± 0.05, p < 0.0001 and ONH CD: 0.11 ± 0.02, p = 0.01, respectively). The POAG patients showed a 29% reduction in RPC CD and a 19% reduction in ONH CD when compared to the normal eyes. The AUROC for discriminating between healthy and glaucomatous eyes was 0.784 for mean RPC CD and 0.743 for mean ONH CD. CONCLUSIONS: The diagnostic ability of OCT-A derived peripapillary CD and ONH CD was moderate for differentiating between normal and glaucomatous eyes. Even the best-performing measures (the average and inferotemporal sector for RPC CD, and the average and superonasal sector for ONH CD) showed only moderate diagnostic ability.
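The AUROC values reported here can be computed directly from the raw CD scores via the Mann-Whitney formulation: the AUC equals the probability that a randomly chosen glaucomatous eye scores differently (here, lower density would be flipped to a higher "abnormality" score) than a randomly chosen normal eye, with ties counting half. A minimal sketch, not the study's code:

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise comparisons
    (Mann-Whitney U): fraction of (positive, negative) pairs
    in which the positive case scores higher; ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For large samples, a rank-based implementation is preferable to this O(n*m) loop, but the result is identical.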

10.
PLoS One ; 13(11): e0207086, 2018.
Article in English | MEDLINE | ID: mdl-30444873

ABSTRACT

Diabetic retinopathy (DR) is a disease which is widely diagnosed using colour fundus images. Efficiency and accuracy are critical in diagnosing DR, as lack of timely intervention can lead to irreversible visual impairment. In this paper, we examine strategies for scrutinizing images which affect the diagnostic performance of medical practitioners, via an eye-tracking study. A total of 56 subjects with 0 to 18 years of experience participated in the study. Every subject was asked to detect DR from 40 images. The findings indicate that practitioners mainly use two types of strategies, characterized by either higher dwell duration or longer track length. The main findings of the study are that the higher dwell-based strategy led to higher average accuracy (> 85%) in diagnosis, irrespective of the expertise of the practitioner, whereas the average accuracy obtained with the long track length-based strategy was dependent on the expertise of the practitioner. In the second part of the paper, we use the experimental findings to recommend a scanning strategy for fast and accurate diagnosis of DR that can potentially be used by image readers. This is derived by combining the eye-tracking gaze maps of medical experts in a novel manner based on a set of rules. This strategy requires scrutiny of images in a manner which is consistent with spatial preferences found in human perception in general, and in the domain of fundus images in particular. A Levenshtein distance-based assessment of gaze patterns also establishes the effectiveness of the derived scanning pattern, which is thus recommended for image readers.
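The Levenshtein distance used to compare gaze patterns (encoded as sequences of visited regions) is the classic edit-distance dynamic program; a minimal sketch:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence a into sequence b (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

Applied to gaze data, each symbol would be a discretized image region, so a small distance means two readers scanned the image in a similar order.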


Subject(s)
Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/pathology , Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Clinical Competence , Clinical Decision-Making , Diabetic Retinopathy/diagnostic imaging , Eye Movement Measurements , Humans , Photography , Physicians
11.
Comput Methods Programs Biomed ; 165: 235-250, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30337078

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of the intra-retinal tissue layers in Optical Coherence Tomography (OCT) images plays an important role in the diagnosis and treatment of ocular diseases such as Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). The existing energy minimization based methods employ multiple, manually handcrafted cost terms and often fail in the presence of pathologies. In this work, we eliminate the need to handcraft the energy by learning it from training images in an end-to-end manner. Our method can be easily adapted to pathologies by re-training it on an appropriate dataset. METHODS: We propose a Conditional Random Field (CRF) framework for the joint multi-layer segmentation of OCT B-scans. The appearance of each retinal layer and boundary is modeled by two convolutional filter banks, and the shape priors are modeled using Gaussian distributions. The total CRF energy is linearly parameterized to allow joint, end-to-end training by employing the Structured Support Vector Machine formulation. RESULTS: The proposed method outperformed three benchmark algorithms on four public datasets. The NORMAL-1 and NORMAL-2 datasets contain healthy OCT B-scans, while the AMD-1 and DME-1 datasets contain B-scans of AMD and DME cases, respectively. The proposed method achieved an average unsigned boundary localization error (U-BLE) of 1.52 pixels on NORMAL-1, 1.11 pixels on NORMAL-2, and 2.04 pixels on the combined NORMAL-1 and DME-1 dataset across the eight layer boundaries, outperforming the three benchmark methods in each case. The Dice coefficient was 0.87 on NORMAL-1, 0.89 on NORMAL-2, and 0.84 on the combined NORMAL-1 and DME-1 dataset across the seven retinal layers. On the combined NORMAL-1 and AMD-1 dataset, we achieved an average U-BLE of 1.86 pixels on the ILM, inner and outer RPE boundaries, and a Dice of 0.98 for the ILM-RPEin region and 0.81 for the RPE layer.
CONCLUSION: We have proposed a supervised CRF based method to jointly segment multiple tissue layers in OCT images. It can aid ophthalmologists in the quantitative analysis of structural changes in the retinal tissue layers, in clinical practice and in large-scale clinical studies.


Subject(s)
Diagnostic Techniques, Ophthalmological/statistics & numerical data , Retina/diagnostic imaging , Tomography, Optical Coherence/statistics & numerical data , Algorithms , Databases, Factual , Diabetic Retinopathy/diagnostic imaging , Humans , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/statistics & numerical data , Macular Degeneration/diagnostic imaging , Supervised Machine Learning/statistics & numerical data
12.
Int Ophthalmol ; 38(3): 967-974, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28447287

ABSTRACT

PURPOSE: To analyse the extent of the radial peripapillary capillary (RPC) network with optical coherence tomography angiography (OCT-A) in normal human eyes, and to correlate RPC density with retinal nerve fibre layer thickness (RNFLT) at various distances from the optic nerve head (ONH) edge. METHODS: Fifty eyes of 50 healthy subjects underwent imaging with the RTVue XR-100 Avanti OCT. OCT-A scans of the Angio disc (6 × 6 mm) and Angio retina (8 × 8 mm) were combined to create a wide-field montage image of the RPC network. RPC density and RNFLT were calculated at different circle diameters around the ONH, and their correlation was measured. RESULTS: In the arcuate region, RPC was detected as far as 8.5 mm from the ONH edge, but not around the perifoveal area within 0.025 ± 0.01 mm². The mean RPC density (0.1556 ± 0.015) and RNFLT (245.96 ± 5.79) were highest at 1.5 mm from the ONH margin, and both declined in a distance-dependent manner, with the lowest density at 8.5 mm (all P < 0.0001). The highest RPC density was noted in the arcuate fibre region at all distances. Overall mean RPC density correlated significantly (P < 0.0001) with overall mean RNFLT. CONCLUSIONS: Wide-field montage OCT-A angiograms can visualize the extent of the RPC network, which is useful in obtaining information about various retinal disorders. The results obtained support the hypothesis that the RPC network could be responsible for RNFL nourishment.


Subject(s)
Capillaries/cytology , Optic Disk/blood supply , Retinal Ganglion Cells/cytology , Retinal Vessels/diagnostic imaging , Tomography, Optical Coherence/methods , Adult , Female , Fluorescein Angiography/methods , Fundus Oculi , Healthy Volunteers , Humans , Male , Middle Aged , Nerve Fibers , Reproducibility of Results , Young Adult
13.
Comput Methods Programs Biomed ; 147: 51-61, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28734530

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink based cues, due to the lack of explicit depth information in color fundus images. METHODS: We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. RESULTS: The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. CONCLUSIONS: We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable.


Subject(s)
Fundus Oculi , Glaucoma/diagnosis , Optic Disk , Algorithms , Humans , Tomography, Optical Coherence
14.
J Med Imaging (Bellingham) ; 4(2): 024503, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28560245

ABSTRACT

Computer-assisted diagnostic (CAD) tools are of interest as they enable efficient decision-making in clinics and the screening of diseases. The traditional approach to CAD algorithm design focuses on the automated detection of abnormalities independent of the end-user, who can be an image reader or an expert. We propose a reader-centric system design wherein a reader's attention is drawn to abnormal regions in a least-obtrusive yet effective manner, using saliency-based emphasis of abnormalities and without altering the appearance of the background tissues. We present an assistive lesion-emphasis system (ALES) based on the above idea, for fundus image-based diabetic retinopathy diagnosis. Lesion-saliency is learnt using a convolutional neural network (CNN), inspired by the saliency model of Itti and Koch. The CNN is used to fine-tune standard low-level filters and learn high-level filters for deriving a lesion-saliency map, which is then used to perform lesion-emphasis via a spatially variant version of gamma correction. The proposed system has been evaluated on public datasets and benchmarked against other saliency models. It was found to outperform other saliency models by 6% to 30% and boost the contrast-to-noise ratio of lesions by more than 30%. Results of a perceptual study also underscore the effectiveness and, hence, the potential of ALES as an assistive tool for readers.
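The spatially variant gamma correction at the heart of ALES can be sketched as below. The abstract does not give the exact mapping from the saliency map to the per-pixel exponent, so interpolating between a background and a lesion gamma is an assumption made for illustration.

```python
import numpy as np

def emphasize(image, saliency, gamma_bg=1.0, gamma_fg=0.6):
    """Spatially variant gamma correction: a per-pixel exponent is
    interpolated between gamma_bg and gamma_fg by the saliency map,
    brightening salient (lesion) regions while leaving pixels with
    zero saliency untouched.  image and saliency are floats in [0, 1]."""
    gamma = gamma_bg + (gamma_fg - gamma_bg) * saliency
    return np.power(image, gamma)
```

Because gamma < 1 brightens mid-tones, lesion pixels are emphasized without altering the appearance of the background tissue, matching the least-obtrusive design goal stated above.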

15.
J Med Imaging (Bellingham) ; 4(2): 024003, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28439524

ABSTRACT

Automated segmentation of cortical and noncortical human brain structures has been hitherto approached using nonrigid registration followed by label fusion. We propose an alternative approach for this using a convolutional neural network (CNN) which classifies a voxel into one of many structures. Four different kinds of two-dimensional and three-dimensional intensity patches are extracted for each voxel, providing local and global (context) information to the CNN. The proposed approach is evaluated on five different publicly available datasets which differ in the number of labels per volume. The obtained mean Dice coefficient varied according to the number of labels, for example, it is [Formula: see text] and [Formula: see text] for datasets with the least (32) and the most (134) number of labels, respectively. These figures are marginally better or on par with those obtained with the current state-of-the-art methods on nearly all datasets, at a reduced computational time. The consistently good performance of the proposed method across datasets and no requirement for registration make it attractive for many applications where reduced computational time is necessary.

16.
J Glaucoma ; 26(5): 438-443, 2017 May.
Article in English | MEDLINE | ID: mdl-28234680

ABSTRACT

PURPOSE: The purpose of the study was to compare radial peripapillary capillary (RPC) density between normal subjects and patients with early primary open-angle glaucoma (POAG) using optical coherence tomography angiography (OCT-A). MATERIALS AND METHODS: A total of 24 patients with early POAG and 52 age-matched normal subjects underwent scanning with OCT-A imaging (RTVue XR-100, Avanti OCT). The en face angioflow images obtained were analyzed qualitatively for the RPC network, and RPC capillary density (CD) was measured in 8 sectors within a 3.45-mm-diameter circle around the optic disc, using the Bar-Selective Combination of Shifted Filter Responses (B-COSFIRE) method. CD and retinal nerve fiber layer (RNFL) thickness were compared between corresponding sectors with the Mann-Whitney U test. Correlations between CD and Humphrey visual field parameters and optic disc structural parameters were calculated by linear regression analysis. RESULTS: In the normal eyes, the RPC bed was clearly visible on OCT-A as a dense microvascular network around the optic disc, whereas in POAG patients it was focally attenuated. RPC CD was lower in the inferotemporal (P=0.002) and superotemporal (P=0.008) sectors, with corresponding focal RNFL defects, in POAG patients when compared with normal controls. The average CD correlated with visual field mean deviation (P=0.01) and pattern standard deviation (P=0.02) in glaucomatous eyes. CONCLUSIONS: OCT-A demonstrated reproducible, focal loss of RPCs in patients with early POAG when compared with normal controls. The results of our study suggest that RPC density measurements may have value in the diagnosis and monitoring of glaucoma.


Subject(s)
Glaucoma, Open-Angle/diagnosis , Optic Disk/blood supply , Retinal Diseases/diagnosis , Retinal Vessels/pathology , Aged , Capillaries/pathology , Computed Tomography Angiography , Early Diagnosis , Female , Humans , Intraocular Pressure/physiology , Male , Middle Aged , Nerve Fibers/pathology , Retinal Ganglion Cells/pathology , Tomography, Optical Coherence/methods , Visual Field Tests , Visual Fields/physiology
17.
J Glaucoma ; 26(3): 241-246, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27906811

ABSTRACT

AIM: To image the radial peripapillary capillary (RPC) network with optical coherence tomography angiography (OCTA) and measure its capillary density (CD) in the normal human retina. MATERIALS AND METHODS: Fifty-two normal participants underwent OCTA imaging with the RTVue XR 100 Avanti OCT. The en face angioflow image of the RPC network was extracted from OCTA, and 8 peripapillary sectors with a sector angle of 45 degrees were selected for quantitative analysis: superior nasal, superior temporal, temporal upper, temporal lower, nasal upper, nasal lower, inferior nasal, and inferior temporal. CD was measured within a 3.4-mm-diameter circle around the optic nerve head (ONH) using the Bar-Selective Combination of Shifted Filter Responses method. RESULTS: Using OCTA, the RPC network was visualized in excellent detail as a distinctive pattern of parallel, long, uniform-diameter vessels around the ONH, oriented parallel to the retinal nerve fiber layer. The mean overall RPC density within the 3.4-mm-diameter circle around the ONH was 0.21±0.053 (95% confidence interval: 0.204-0.216). The CD at the superior temporal (0.243±0.045) and inferior temporal (0.242±0.047) sectors was higher (P<0.05) when compared with the other sectors. Age, sex (P=0.7), and disc size (P=0.3) did not have a significant effect on the CD measurement. CONCLUSIONS: We imaged the RPC network and describe a reproducible method to measure RPC density, which will help in understanding the role of this vascular bed in the functioning of the retinal nerve fiber layer. Our study demonstrated symmetry in corresponding superior and inferior sector pairs with respect to the horizontal meridian, and symmetry between paired sectors at the nasal and temporal poles with respect to the vertical meridian.


Subject(s)
Capillaries/cytology , Optic Disk/blood supply , Retina/anatomy & histology , Retinal Vessels/cytology , Adult , Female , Fluorescein Angiography , Humans , Male , Microcirculation/physiology , Middle Aged , Nerve Fibers , Regional Blood Flow/physiology , Retinal Ganglion Cells/cytology , Tomography, Optical Coherence/methods
18.
J Glaucoma ; 25(7): 590-7, 2016 07.
Article in English | MEDLINE | ID: mdl-26580479

ABSTRACT

OBJECTIVE: To describe and evaluate the performance of an automated CAD system for detection of glaucoma from color fundus photographs. DESIGN AND SETTING: Color fundus photographs of 2252 eyes from 1126 subjects were collected from 2 centers: Aravind Eye Hospital, Madurai and Coimbatore, India. The images of 1926 eyes (963 subjects) were used to train an automated image analysis-based system, which was developed to provide a decision on a given fundus image. A total of 163 subjects were clinically examined by 2 ophthalmologists independently and their diagnostic decisions were recorded. The consensus decision was defined to be the clinical reference (gold standard). Fundus images of eyes with disagreement in diagnosis were excluded from the study. The fundus images of the remaining 314 eyes (157 subjects) were presented to 4 graders and their diagnostic decisions on the same were collected. The performance of the system was evaluated on the 314 images, using the reference standard. The sensitivity and specificity of the system and 4 independent graders were determined against the clinical reference standard. RESULTS: The system achieved an area under receiver operating characteristic curve of 0.792 with a sensitivity of 0.716 and specificity of 0.717 at a selected threshold for the detection of glaucoma. The agreement with the clinical reference standard as determined by Cohen κ is 0.45 for the proposed system. This is comparable to that of the image-based decisions of 4 ophthalmologists. CONCLUSIONS AND RELEVANCE: An automated system was presented for glaucoma detection from color fundus photographs. The overall evaluation results indicated that the presented system was comparable in performance to glaucoma classification by a manual grader solely based on fundus image examination.
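The sensitivity, specificity, and Cohen's κ reported here can be computed from binary decisions against the clinical reference standard; a minimal sketch using the standard definitions (not the study's code):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and Cohen's kappa for binary labels
    (1 = glaucoma, 0 = normal) against a reference standard."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_observed = (tp + tn) / n                       # raw agreement
    p_expected = ((tp + fp) * (tp + fn)              # chance agreement
                  + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return sensitivity, specificity, kappa
```

Unlike raw accuracy, κ discounts agreement expected by chance, which is why it is the comparison metric between the system and the four graders.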


Subject(s)
Diagnosis, Computer-Assisted , Diagnostic Techniques, Ophthalmological , Glaucoma, Open-Angle/diagnosis , Optic Disk/pathology , Optic Nerve Diseases/diagnosis , Photography/instrumentation , False Positive Reactions , Female , Glaucoma, Open-Angle/classification , Humans , India , Intraocular Pressure/physiology , Male , Ocular Hypertension/classification , Ocular Hypertension/diagnosis , Optic Nerve Diseases/classification , Predictive Value of Tests , ROC Curve , Sensitivity and Specificity
19.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 747-54, 2014.
Article in English | MEDLINE | ID: mdl-25333186

ABSTRACT

We present a novel framework for depth-based optic cup boundary extraction from a single 2D color fundus photograph per eye. Multiple depth estimates from shading, color, and texture gradients in the image are correlated with Optical Coherence Tomography (OCT)-based depth using a coupled sparse dictionary trained on image-depth pairs. Finally, a Markov Random Field is formulated on the depth map to model the relative depth and discontinuity at the cup boundary. Leave-one-out validation of depth estimation on the INSPIRE dataset gave an average correlation coefficient of 0.80. Our cup segmentation outperforms several state-of-the-art methods on the DRISHTI-GS dataset, with an average F-score of 0.81 and a boundary error of 21.21 pixels on the test set against manual expert markings. Evaluation on an additional set of 28 images against OCT scanner-provided ground truth showed an average RMS error of 0.11 on the cup-to-disk diameter ratio and 0.19 on the cup-to-disk area ratio.
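The F-score reported for segmentation is the standard region-overlap measure (equivalent to the Dice coefficient) between a predicted mask and an expert-marked mask. A minimal sketch of this assumed formulation (not the paper's evaluation code):

```python
import numpy as np

def f_score(pred, gt):
    """F-score (Dice coefficient) between two binary segmentation masks:
    2*|pred AND gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    tp = np.logical_and(pred, gt).sum()
    return 2 * tp / denom
```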


Subject(s)
Glaucoma/pathology , Image Interpretation, Computer-Assisted/methods , Optic Disk/pathology , Pattern Recognition, Automated/methods , Retinoscopy/methods , Subtraction Technique , Tomography, Optical Coherence/methods , Algorithms , Fundus Oculi , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Biomed Eng ; 59(6): 1523-31, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22333978

ABSTRACT

Accurate segmentation of the cup region from retinal images is needed to derive relevant measurements for glaucoma assessment. A novel approach based on depth discontinuity (in the retinal surface) is proposed in this paper to estimate the cup boundary. The proposed approach shifts focus from the cup region, used by existing approaches, to the cup boundary. The given images, acquired sequentially, are related via a relative motion model, and the depth discontinuity at the cup boundary is determined from cues such as motion boundary and partial occlusion. The information encoded by these cues is used to approximate the cup boundary with a set of best-fitting circles. The final boundary is found by considering points on these circles in different sectors using a confidence measure. Four different kinds of datasets, ranging from synthetic to real image pairs and covering different multiview scenarios, were used to evaluate the proposed method. The proposed method was found to yield an error reduction of 16% for cup-to-disk vertical diameter ratio (CDR) and 13% for cup-to-disk area ratio (CAR) estimation over an existing monocular image-based cup segmentation method. The error reduction increased to 33% in CDR and 18% in CAR with the addition of a third view (image), which indicates the potential of the proposed approach.
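The abstract describes approximating the cup boundary with best-fitting circles. One common way to fit a circle to a set of boundary points is the algebraic (Kåsa) least-squares fit, sketched below as a generic assumption rather than the authors' exact method:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kåsa) least-squares circle fit to 2D boundary points.
    Solves the linear system x^2 + y^2 = 2*a*x + 2*b*y + c, where
    (a, b) is the centre and c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r
```

Rewriting the circle equation in this linear form avoids nonlinear optimization; each candidate circle can then be scored per sector with a confidence measure, as the paper outlines.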


Subject(s)
Algorithms , Colorimetry/methods , Glaucoma/pathology , Image Interpretation, Computer-Assisted/methods , Optic Disk/pathology , Pattern Recognition, Automated/methods , Retinoscopy/methods , Color , Humans , Reproducibility of Results , Sensitivity and Specificity