1 - 3 of 3
1.
IEEE Trans Med Imaging ; 40(12): 3663-3673, 2021 Dec.
Article En | MEDLINE | ID: mdl-34224348

The explosive rise in the use of computed tomography (CT) imaging in medical practice has heightened public concern over the associated patient radiation dose. On the other hand, reducing the radiation dose increases noise and artifacts, which degrades the scan's interpretability. Recently, deep learning-based techniques have emerged as promising methods for low-dose CT (LDCT) denoising. However, some common bottlenecks still hinder deep learning-based techniques from delivering their best performance. In this study, we attempt to mitigate these problems with three novel contributions. First, we propose a novel convolutional module as the first attempt to exploit the neighborhood similarity of CT images for denoising tasks; the proposed module boosts denoising performance by a significant margin. Next, we address the non-stationarity of CT noise and introduce a new noise-aware mean squared error loss for LDCT denoising. This loss also alleviates the laborious effort required to train CT denoising networks on image patches. Lastly, we propose a novel discriminator function for CT denoising tasks. The conventional vanilla discriminator tends to overlook fine structural details and focus on global agreement; our proposed discriminator leverages self-attention and pixel-wise GANs to restore the diagnostic quality of LDCT images. Validated on the publicly available dataset of the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge, our method performs markedly better than existing state-of-the-art methods. The corresponding source code is available at: https://github.com/reach2sbera/ldct_nonlocal.
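The abstract does not give the exact form of the noise-aware MSE loss; a minimal sketch, assuming it down-weights pixels whose estimated local noise is high (the `noise_sigma` map and the inverse-variance weighting are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def noise_aware_mse(pred, target, noise_sigma, eps=1e-8):
    """Per-pixel weighted MSE: pixels with higher estimated noise
    contribute less to the loss. The inverse-variance weighting is an
    illustrative choice only."""
    weights = 1.0 / (noise_sigma.astype(float) ** 2 + eps)
    weights /= weights.mean()  # keep the loss on the same scale as plain MSE
    return float(np.mean(weights * (pred - target) ** 2))
```

With a uniform noise map the weights collapse to 1 and the loss reduces to ordinary MSE; a non-uniform map shifts the penalty toward low-noise regions.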


Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Attention; Computers; Humans; Signal-To-Noise Ratio; Tomography, X-Ray Computed
2.
IEEE Trans Neural Netw Learn Syst ; 29(2): 470-485, 2018 Feb.
Article En | MEDLINE | ID: mdl-27959822

Multiview-assisted learning has gained significant attention in recent years in the supervised learning genre. The availability of high-performance computing devices enables learning algorithms to search simultaneously over multiple views or feature spaces to obtain optimal classification performance. This paper is a pioneering attempt at formulating a mathematical foundation for a multiview-aided collaborative boosting architecture for multiclass classification. Most existing algorithms apply multiview learning heuristically, without exploring the fundamental mathematical changes imposed on traditional boosting; moreover, most are restricted to a two-class or two-view setting. Our proposed mathematical framework enables collaborative boosting across any finite-dimensional view spaces for multiclass learning. The boosting framework is based on a forward stagewise additive model that minimizes a novel exponential loss function. We show that the exponential loss function essentially captures the difficulty of a training sample rather than the traditional "1/0" loss. The new algorithm restricts a weak view from overlearning and thereby prevents overfitting. The model is inspired by our earlier attempt at collaborative boosting, which lacked mathematical justification. The proposed algorithm is shown to converge much closer to the global minimum in the exponential loss space and thus supersedes our previous algorithm. This paper also presents analytical and numerical analyses of convergence and margin bounds for multiview boosting algorithms, and we show that the proposed ensemble learning exhibits a lower error bound and a higher margin than our previous model. The proposed model is also compared with traditional boosting and recent multiview boosting algorithms. In the majority of instances, the new algorithm converges faster on training-set error while simultaneously offering better generalization performance. A kappa-error diagram analysis reveals the robustness of the proposed boosting framework to labeling noise.
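For context, a forward stagewise additive model minimizing exponential loss is the classic AdaBoost recipe; below is a single-view sketch with 1-D threshold stumps. The paper's multiview collaborative loss generalizes this setup, and all names here are illustrative:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=5):
    """Forward stagewise additive modeling with exponential loss:
    each round fits the best 1-D threshold stump on the current sample
    weights, then reweights samples by exp(-alpha * y * h(x))."""
    n = len(y)
    w = np.full(n, 1.0 / n)           # sample weights
    model = []                        # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for thr in np.unique(X):
            for pol in (1, -1):
                pred = pol * np.where(X >= thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # stage weight
        w *= np.exp(-alpha * y * pred)          # exponential-loss reweighting
        w /= w.sum()
        model.append((thr, pol, alpha))
    return model

def predict(model, X):
    agg = sum(alpha * pol * np.where(X >= thr, 1, -1)
              for thr, pol, alpha in model)
    return np.where(agg >= 0, 1, -1)
```

Each stage adds one weak learner to the additive model without revisiting earlier stages, which is exactly the "forward stagewise" structure the abstract refers to.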

3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 1340-1343, 2016 Aug.
Article En | MEDLINE | ID: mdl-28268573

Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders, and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which vary in size and appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained stacked denoising autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in the learned dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of the ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
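The abstract explores several fusion strategies without naming them; one common choice, sketched here under that assumption, is to average the members' softmax probabilities and take the arg-max class:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_by_probability_averaging(member_logits):
    """Average per-member class probabilities over the ensemble.
    This is one plausible fusion rule; the paper's exact schemes are
    not specified in the abstract."""
    probs = np.stack([softmax(l) for l in member_logits])
    return probs.mean(axis=0)
```

Averaging in probability space (rather than logit space) keeps the fused output a valid distribution regardless of how differently the member networks are calibrated.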


Image Processing, Computer-Assisted/methods; Retinal Diseases/diagnostic imaging; Retinal Vessels/diagnostic imaging; Algorithms; Angiography/methods; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted; Fundus Oculi; Humans; Retinal Vessels/pathology