Results 1 - 3 of 3
1.
Comput Intell Neurosci; 2022: 4629178, 2022.
Article in English | MEDLINE | ID: mdl-36156959

ABSTRACT

Esophageal cancer (EC) is a commonly occurring malignant tumor that significantly affects human health. Early recognition and classification of EC or premalignant lesions enables highly effective targeted intervention, and accurate detection and classification of the distinct stages of EC support precise therapy planning and improve the 5-year survival rate. Automated recognition of EC can help physicians improve diagnostic performance and accuracy. However, classification of EC is challenging because of similar endoscopic features, such as mucosal erosion, hyperemia, and roughness. Recent developments in deep learning (DL) and computer-aided diagnosis (CAD) have been useful for designing accurate EC classification models. To this end, this study develops an atom search optimization with deep transfer learning-driven EC classification (ASODTL-ECC) model. The presented ASODTL-ECC model examines medical images for the presence of EC in a timely and accurate manner. The model employs Gaussian filtering (GF) as a preprocessing stage to enhance image quality. A deep convolutional neural network (DCNN)-based residual network (ResNet) is then applied for feature extraction. Finally, atom search optimization (ASO) with an extreme learning machine (ELM) is used to identify the presence of EC, which constitutes the novelty of the work. The performance of the ASODTL-ECC model was assessed and compared with existing models on several medical images, and the experimental results show improved performance over recent approaches.
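
The abstract describes a three-stage pipeline (Gaussian filtering, ResNet feature extraction, ASO-tuned ELM classification) without implementation details. The sketch below is a minimal, illustrative reconstruction of that pipeline, assuming OpenCV for Gaussian filtering, the torchvision ResNet-50 backbone as the feature extractor, and a basic extreme learning machine; the atom search optimization step is omitted, and all names here are assumptions, not the authors' code.

# Illustrative sketch: Gaussian filtering -> ResNet-50 features -> ELM classifier.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# 1. Gaussian filtering (GF) preprocessing to suppress noise and enhance image quality.
def preprocess(image_bgr, ksize=5, sigma=1.0):
    return cv2.GaussianBlur(image_bgr, (ksize, ksize), sigma)

# 2. ResNet-50 backbone used as a fixed deep feature extractor (2048-d vector per image).
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()
to_input = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                      T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

def extract_features(image_bgr):
    rgb = cv2.cvtColor(preprocess(image_bgr), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        return resnet(to_input(rgb).unsqueeze(0)).numpy().ravel()

# 3. Extreme learning machine (ELM): random hidden layer, closed-form output weights.
class ELM:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random hidden-layer activations
        Y = np.eye(int(y.max()) + 1)[y]             # one-hot class targets
        self.beta = np.linalg.pinv(H) @ Y           # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

In such a setup, ASO would search over ELM settings such as the hidden-layer size (and, in some formulations, the random weights themselves) to maximize validation accuracy; that search loop is not reproduced here.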


Subjects
Esophageal Neoplasms; Machine Learning; Diagnosis, Computer-Assisted; Esophageal Neoplasms/diagnosis; Humans; Neural Networks, Computer
2.
Comput Intell Neurosci; 2022: 7643967, 2022.
Article in English | MEDLINE | ID: mdl-35814555

ABSTRACT

Oral cancer is one of the lethal malignant tumors worldwide, and it has become a challenging health issue in developing and low-to-middle-income countries. The prognosis of oral cancer remains poor because over 50% of patients are diagnosed at advanced stages. Existing detection and screening models for oral cancer rely mainly on experts' knowledge, which motivates an automated tool for oral cancer detection. Recent developments in computational intelligence (CI) and computer vision-based approaches help to achieve enhanced performance in medical-image-related tasks. This article develops an intelligent deep learning enabled oral squamous cell carcinoma detection and classification (IDL-OSCDC) technique using biomedical images. The presented IDL-OSCDC model involves the recognition and classification of oral cancer in biomedical images. The model employs Gabor filtering (GF) as a preprocessing step to eliminate noise content. The NasNet model is then exploited to generate high-level deep features from the input images. Finally, an enhanced grasshopper optimization algorithm (EGOA)-based deep belief network (DBN) model is employed for oral cancer detection and classification; the hyperparameters of the DBN are tuned with the EGOA algorithm, which in turn boosts the classification outcomes. Experiments on a benchmark biomedical imaging dataset highlighted the promising performance of the IDL-OSCDC model over other methods, with maximum accuracy, precision, recall, and F-score of 95%, 96.15%, 93.75%, and 94.67%, respectively.
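
As above, the abstract outlines the pipeline (Gabor filtering, NasNet features, EGOA-tuned DBN) without code. The following is a minimal sketch of the front end only, assuming OpenCV Gabor kernels and the Keras NASNetMobile backbone; the deep belief network and its EGOA hyperparameter search are only indicated in a comment, and all names are illustrative rather than the authors' implementation.

# Illustrative sketch: Gabor filtering -> NASNetMobile deep features (DBN classifier omitted).
import cv2
import numpy as np
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input

# 1. Gabor filtering (GF): emphasise texture at several orientations, suppress noise.
def gabor_filter(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    responses = [cv2.filter2D(gray, cv2.CV_32F,
                              cv2.getGaborKernel((21, 21), 4.0, t, 10.0, 0.5))
                 for t in thetas]
    strongest = np.max(responses, axis=0)            # strongest response per pixel
    return cv2.normalize(strongest, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# 2. NASNetMobile as a fixed high-level feature generator (1056-d vector per image).
backbone = NASNetMobile(include_top=False, pooling="avg", input_shape=(224, 224, 3))

def extract_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(gabor_filter(gray), cv2.COLOR_GRAY2RGB)
    x = preprocess_input(cv2.resize(rgb, (224, 224)).astype(np.float32))
    return backbone.predict(x[None, ...], verbose=0).ravel()

# 3. These features would feed a deep belief network whose hyperparameters
#    (layer sizes, learning rate, etc.) are tuned by the enhanced grasshopper optimizer.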


Subjects
Carcinoma, Squamous Cell; Deep Learning; Head and Neck Neoplasms; Mouth Neoplasms; Carcinoma, Squamous Cell/diagnostic imaging; Humans; Mouth Neoplasms/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck
3.
Comput Math Methods Med; 2022: 8452002, 2022.
Article in English | MEDLINE | ID: mdl-35664638

ABSTRACT

This research presents the performance of 3-dimensional-input convolutional neural networks for steady-state visual evoked potential (SSVEP) classification in a wireless EEG-based brain-computer interface system. The overall performance of a brain-computer interface system depends on its information transfer rate, which is affected by parameters such as the signal classification accuracy, the structure of the signal stimulator, and the user's task completion time. In this study, we compared three signal classification methods: 1-dimensional-, 2-dimensional-, and 3-dimensional-input convolutional neural networks. In online experiments with the 3-dimensional-input convolutional neural network, we reached an average classification accuracy of 93.75% and an average information transfer rate of 58.35 bit/min; both results are significantly higher than those of the other methods used in the experiments. Moreover, the user task completion time was reduced when using the 3-dimensional-input convolutional neural network. The proposed method is a novel, state-of-the-art model for steady-state visual evoked potential classification.
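
The abstract reports an information transfer rate without restating how it is computed. In SSVEP BCI work this rate is conventionally obtained from the Wolpaw formula, ITR = (60/T) * [log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))] bit/min, where N is the number of targets, P the classification accuracy, and T the selection time in seconds. The sketch below implements that standard formula; the target count and selection time in the example call are illustrative placeholders, not values taken from the paper.

# Standard Wolpaw information transfer rate (ITR) for a BCI with N selectable targets.
import math

def itr_bits_per_min(n_targets, p, t_sel):
    # n_targets: selectable classes, p: accuracy in [0, 1], t_sel: seconds per selection.
    if p <= 1.0 / n_targets:
        return 0.0                                   # at or below chance level
    bits = math.log2(n_targets) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * 60.0 / t_sel

# Illustrative call; the paper's actual target count and selection time are not given here.
print(itr_bits_per_min(n_targets=4, p=0.9375, t_sel=1.5))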


Subjects
Brain-Computer Interfaces; Algorithms; Electroencephalography/methods; Evoked Potentials, Visual; Humans; Neural Networks, Computer