Results 1 - 2 of 2
1.
Interdiscip Sci; 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951382

ABSTRACT

Image classification, a fundamental task in computer vision, faces challenges concerning limited data handling, interpretability, improved feature representation, efficiency across diverse image types, and processing noisy data. Conventional architectural approaches have made insufficient progress in addressing these challenges, necessitating architectures capable of fine-grained classification, enhanced accuracy, and superior generalization. Among these, the vision transformer emerges as a noteworthy computer vision architecture. However, its reliance on substantial training data is a drawback, owing to its complexity and high data requirements. To surmount these challenges, this paper proposes an innovative approach, MetaV, which integrates meta-learning into a vision transformer for medical image classification. N-way K-shot learning is employed to train the model, drawing inspiration from human learning mechanisms that make use of past knowledge. Additionally, deformable convolution and patch merging are incorporated into the vision transformer to mitigate complexity and overfitting while enhancing feature representation. Augmentation methods such as perturbation and GridMask are introduced to address the scarcity and noise of medical images, particularly for rare diseases. The proposed model is evaluated on diverse datasets including BreakHis, ISIC 2019, SIPaKMed, and STARE. The achieved accuracies of 89.89%, 87.33%, 94.55%, and 80.22% on BreakHis, ISIC 2019, SIPaKMed, and STARE, respectively, validate the superior performance of the proposed model over conventional models, setting a new benchmark for meta-vision image classification models.
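
The N-way K-shot episodic regime mentioned in this abstract trains the classifier on repeated small "episodes", each containing a support set of K labelled examples from N classes and a query set used to evaluate adaptation. The sketch below is a minimal, framework-agnostic illustration of how such episodes might be sampled; the function name, embedding dimensions, and default values of N, K, and the query count are assumptions for illustration, not details taken from the MetaV paper.

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=5, q_queries=15, rng=None):
    """Draw one N-way K-shot episode: for each of n_way randomly chosen
    classes, take k_shot support examples and up to q_queries query examples.

    `features` is a (num_samples, dim) array and `labels` a matching 1-D
    array of class ids. All default values are illustrative, not the
    settings used in the paper.
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)

    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, cls in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        sup, qry = idx[:k_shot], idx[k_shot:k_shot + q_queries]
        support_x.append(features[sup])
        support_y.extend([episode_label] * len(sup))
        query_x.append(features[qry])
        query_y.extend([episode_label] * len(qry))

    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

# Toy example: 100 random 64-D embeddings, 10 examples for each of 10 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))
labs = np.repeat(np.arange(10), 10)
sx, sy, qx, qy = sample_episode(feats, labs, n_way=3, k_shot=2, q_queries=4, rng=rng)
print(sx.shape, qx.shape)  # (6, 64) (12, 64)
```

In a meta-learning loop, each sampled episode would drive one adaptation step on the support set before evaluation on the query set; the sampler itself is independent of the backbone architecture.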

2.
Appl Soft Comput; 122: 108780, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35369122

ABSTRACT

Ever since the outbreak of COVID-19, the entire world has been grappling with panic over its rapid spread. Consequently, it is of utmost importance to detect its presence. Timely diagnostic testing leads to quick identification, treatment, and isolation of infected people. A number of deep learning classifiers have been shown to provide encouraging results with higher accuracy than the conventional RT-PCR test. Chest radiography, particularly using X-ray images, is a prime imaging modality for detecting suspected COVID-19 patients. However, the performance of these approaches still needs to be improved. In this paper, we propose a capsule network called COVID-WideNet for diagnosing COVID-19 cases using chest X-ray (CXR) images. Experimental results demonstrate that a discriminatively trained, multi-layer capsule network achieves state-of-the-art performance on the COVIDx dataset. In particular, COVID-WideNet performs better than other CNN-based approaches for the diagnosis of COVID-19-infected patients. Further, the proposed COVID-WideNet has 20 times fewer trainable parameters than other CNN-based models. This results in fast and efficient diagnosis of COVID-19, achieving an Area Under the Curve (AUC) of 0.95 and 91% accuracy, sensitivity, and specificity. This may also assist radiologists in detecting COVID-19 and its variants, such as Delta.
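
COVID-WideNet is described as a multi-layer capsule network. The generic building blocks of such networks are the "squash" non-linearity, which maps each capsule vector to a length in [0, 1) interpretable as a presence probability, and routing-by-agreement between capsule layers. The PyTorch sketch below shows those two generic components under the standard capsule-network formulation; the tensor shapes, capsule counts, and routing iterations are illustrative assumptions and not the authors' COVID-WideNet implementation.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule 'squash' non-linearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Shrinks short vectors toward zero and long vectors toward unit length."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def route(u_hat, num_iters=3):
    """Routing-by-agreement over prediction vectors u_hat of shape
    (batch, in_caps, out_caps, out_dim); returns (batch, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:-1], device=u_hat.device)   # routing logits (B, in, out)
    for _ in range(num_iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)                 # coupling coefficients
        v = squash((c * u_hat).sum(dim=1))                    # candidate output capsules
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # reward agreeing predictions
    return v

# Illustrative shapes only: batch of 4 CXR feature maps, 32 primary capsules (8-D)
# predicting 2 output capsules (16-D), e.g. COVID-positive vs. negative.
u_hat = torch.randn(4, 32, 2, 16)
v = route(u_hat)
print(v.shape, v.norm(dim=-1))  # capsule lengths act as class-presence scores
```

The lengths of the output capsules can then be thresholded (or fed to a margin loss during training) to produce the positive/negative decision for which the abstract reports AUC, accuracy, sensitivity, and specificity.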
