Results 1 - 7 of 7
1.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1996-2002, 2020 07.
Article in English | MEDLINE | ID: mdl-33018395

ABSTRACT

This work proposes an automated algorithm for classifying retinal fundus images as cytomegalovirus retinitis (CMVR), normal, or other diseases. Adaptive wavelet packet transform (AWPT) was used to extract features. The retinal fundus images were transformed using a 4-level Haar wavelet packet (WP) transform. The first two best trees were obtained using Shannon and log-energy entropy, while the third best tree was obtained using the Daubechies-4 mother wavelet with Shannon entropy. The coefficients of each node were extracted: the feature value of each leaf node of the best tree was the average of the WP coefficients in that node, while those of the other, non-leaf nodes were set to zero. The feature vector was classified using an artificial neural network (ANN). The effectiveness of the algorithm was evaluated using ten-fold cross-validation over a dataset of 1,011 images (310 CMVR, 240 normal, and 461 other diseases). In testing on a dataset of 101 images (31 CMVR, 24 normal, and 46 other diseases), the AWPT-based ANN had sensitivities of 90.32%, 83.33%, and 91.30% and specificities of 95.71%, 94.81%, and 92.73%. In conclusion, the proposed algorithm has promising potential in CMVR screening, for which the AWPT-based ANN is applicable even with scarce data and limited resources.
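The feature-extraction idea above can be sketched in a few lines. This is a simplified 1-D illustration only (the paper works on 2-D fundus images and selects a best basis over three trees): a full Haar wavelet packet decomposition, a Shannon-entropy cost for scoring nodes, and a feature vector built from per-leaf coefficient averages. All function names here are illustrative, not from the paper.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step: approximation and detail."""
    x = x[: len(x) // 2 * 2]                  # ensure even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet(x, level):
    """Full Haar wavelet packet tree; returns the 2**level leaf nodes."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        nxt = []
        for n in nodes:
            a, d = haar_step(n)
            nxt.extend([a, d])
        nodes = nxt
    return nodes

def shannon_entropy(c):
    """Shannon entropy cost used to score wavelet-packet nodes."""
    p = c**2 / max(np.sum(c**2), 1e-12)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

signal = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1
leaves = wavelet_packet(signal, level=4)          # 16 leaf nodes of length 4
features = np.array([leaf.mean() for leaf in leaves])  # per-leaf averages
```

Because the Haar steps are orthonormal, the decomposition preserves signal energy, which is what makes the entropy cost comparable across nodes.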


Subjects
Cytomegalovirus Retinitis , Algorithms , Cytomegalovirus Retinitis/diagnosis , Fundus Oculi , Humans , Neural Networks, Computer , Wavelet Analysis
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 6159-6162, 2020 07.
Article in English | MEDLINE | ID: mdl-33019377

ABSTRACT

A computerized version of "Noo-Khor-Arn" (`May I read?'), a paper-based screening test for Thai children at risk of learning disability (LD), was developed, and the core ideas behind its design are described in detail. Six test categories with 23 subtests were administered to 110 Thai children aged 7-12 years (mean = 7.94, SD = 1.45), divided into 50 LD and 60 typically developing (TD) children, to determine the test categories and subtests most relevant for classifying between the groups. Two-factor balanced analysis of variance (ANOVA) revealed that the computerized version showed a significant difference between the TD and LD groups on the tasks related to linguistics, decoding, and naming: Phonological Awareness (PA), Morphological Awareness (MA), Decoding (DEC), and Rapid Naming (RN). The remaining test categories showed no significant difference between TD and LD. The results can be used not only for classification but also for streamlining the test categories and subtests to shorten the test tool. Clinical relevance - The subtests related to the linguistic and decoding aspects showed promising results in screening children at risk for learning disabilities.
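A balanced two-factor ANOVA of the kind used above can be computed directly from cell means. The sketch below is a minimal NumPy implementation on synthetic scores (the group sizes, factor levels, and effect sizes here are invented for illustration, not the study's data): factor A is group (TD vs LD), factor B is subtest, with equal replicates per cell.

```python
import numpy as np

def two_way_anova_F(data):
    """Balanced two-way ANOVA.

    data: shape (a, b, n) -- a levels of factor A (e.g. TD vs LD),
    b levels of factor B (e.g. subtest), n replicates per cell.
    Returns F statistics for factor A, factor B, and the interaction.
    """
    a, b, n = data.shape
    grand = data.mean()
    mean_A = data.mean(axis=(1, 2))
    mean_B = data.mean(axis=(0, 2))
    mean_AB = data.mean(axis=2)
    ss_A = b * n * np.sum((mean_A - grand) ** 2)
    ss_B = a * n * np.sum((mean_B - grand) ** 2)
    ss_AB = n * np.sum((mean_AB - mean_A[:, None]
                        - mean_B[None, :] + grand) ** 2)
    ss_err = np.sum((data - mean_AB[:, :, None]) ** 2)
    ms_err = ss_err / (a * b * (n - 1))
    return (ss_A / (a - 1) / ms_err,
            ss_B / (b - 1) / ms_err,
            ss_AB / ((a - 1) * (b - 1)) / ms_err)

rng = np.random.default_rng(0)
td = rng.normal(10, 1, size=(1, 2, 30))   # TD scores on two subtests
ld = rng.normal(7, 1, size=(1, 2, 30))    # LD scores: clearly lower
F_group, F_subtest, F_inter = two_way_anova_F(np.concatenate([td, ld]))
```

A large F for the group factor (relative to the subtest and interaction terms) is what signals a subtest family that discriminates TD from LD.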


Subjects
Learning Disabilities , Phonetics , Analysis of Variance , Child , Humans , Learning Disabilities/diagnosis , Reading , Thailand
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1229-1233, 2020 07.
Article in English | MEDLINE | ID: mdl-33018209

ABSTRACT

AIChest4All is the name of the model used to label and screen diseases in our area of focus, Thailand, including heart disease, lung cancer, and tuberculosis. It is intended to aid radiologists in Thailand, especially in rural areas where there are severe staff shortages. Our methodology uses deep learning to classify chest X-ray images from several datasets: the NIH set, which is separated into 14 observations; the Montgomery and Shenzhen sets, which contain chest X-ray images of patients with tuberculosis; and supplementary data from Udonthani Cancer Hospital and the National Chest Institute of Thailand. The images are classified into six categories: no finding, suspected active tuberculosis, suspected lung malignancy, abnormal heart and great vessels, intrathoracic abnormal findings, and extrathoracic abnormal findings. A total of 201,527 images were used. Testing showed accuracy values for the heart disease, lung cancer, and tuberculosis categories of 94.11%, 93.28%, and 92.32%, respectively, with sensitivity values of 90.07%, 81.02%, and 82.33% and specificity values of 94.65%, 94.04%, and 93.54%. In conclusion, the results have sufficient accuracy, sensitivity, and specificity for practical use. Currently, AIChest4All is being used to help several of Thailand's government-funded hospitals, free of charge. Clinical relevance - AIChest4All is intended to aid radiologists in Thailand, especially in rural areas where there are severe staff shortages. It is being used, free of charge, to help several of Thailand's government-funded hospitals screen for heart disease, lung cancer, and tuberculosis with 94.11%, 93.28%, and 92.32% accuracy, respectively.
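The per-category accuracy, sensitivity, and specificity figures reported above are one-vs-rest confusion-matrix quantities. A minimal sketch of how they are computed (the toy labels below are invented for illustration; this is not the AIChest4All evaluation code):

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for one disease category,
    treated one-vs-rest (1 = category present, 0 = absent)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),     # recall on diseased cases
        "specificity": tn / (tn + fp),     # recall on healthy cases
    }

# toy example: 10 cases, 4 with the disease
m = screening_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                      [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
```

For a screening tool, sensitivity is usually the critical number, since a false negative means a missed disease case.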


Subjects
Lung Neoplasms , Tuberculosis , Humans , Lung Neoplasms/diagnostic imaging , Mass Screening , Sensitivity and Specificity , Thailand
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4225-4228, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946801

ABSTRACT

This study focuses on automatic stroke screening of the arm factor in the FAST (Face, Arm, Speech, and Time) stroke-screening method. The study provides a methodology for collecting data on specific arm movements using signals from the gyroscope and accelerometer in mobile devices. Fifty-two subjects were enrolled in this study (20 stroke patients and 32 healthy subjects). Following the instructions given in the application, the subjects were asked to perform two arm movements, Curl Up and Raise Up. The two exercises were divided into three parts: a curl part, a raise part, and a stable part. Stroke patients were expected to have difficulty performing both exercises efficiently with the same arm. We proposed 20 handcrafted features drawn from these three parts. Our study achieved an average accuracy of 61.7%-74.2% and an average area under the ROC curve (AUC) of 66.2%-81.5% from the combination of both exercises. Compared to the FAST method as used by examiners in a previous study (Kapes et al., 2014), which reported an accuracy of 69%-77% for every age group, our study showed promising results for early stroke identification, given that it is based on the arm factor alone.
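Handcrafted features of this kind are typically simple statistics computed per movement segment. The sketch below is illustrative only (the paper's 20 features are not listed in the abstract, so these four per-segment statistics and the simulated gyroscope traces are assumptions):

```python
import numpy as np

def segment_features(signal):
    """Four example statistics from one movement segment
    (e.g. the curl, raise, or stable part of an exercise)."""
    s = np.asarray(signal, float)
    return np.array([
        s.mean(),                 # average angular velocity
        s.std(),                  # movement variability
        np.sqrt(np.mean(s**2)),   # RMS energy
        s.max() - s.min(),        # range-of-motion proxy
    ])

# simulated gyroscope traces for the three parts of one exercise
t = np.linspace(0, 1, 100)
curl = np.sin(np.pi * t)           # smooth, full rotation
raise_ = 0.5 * np.sin(np.pi * t)   # weaker rotation (as in a paretic arm)
stable = 0.02 * np.ones_like(t)    # holding still
feature_vector = np.concatenate([segment_features(p)
                                 for p in (curl, raise_, stable)])
```

Concatenating the per-segment statistics yields a fixed-length vector that any standard classifier can consume.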


Subjects
Accelerometry/instrumentation , Mobile Applications , Movement , Stroke/diagnosis , Arm , Case-Control Studies , Humans , Stroke Rehabilitation
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 7044-7048, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947460

ABSTRACT

This study aims to apply the Mask Regional Convolutional Neural Network (Mask R-CNN) to cervical cancer screening using Pap smear histological slides. Based on our literature review, this is the first attempt to use Mask R-CNN to detect and analyze the nucleus of the cervical cell, screening for normal and abnormal nuclear features. The dataset consisted of liquid-based histological slides obtained from Thammasat University (TU) Hospital. The slides contained both cervical cells and various artifacts such as white blood cells, mimicking the slides obtained in actual clinical settings. The proposed algorithm achieved a mean average precision (mAP) of 57.8%, accuracy of 91.7%, sensitivity of 91.7%, and specificity of 91.7% per image. To evaluate the efficiency of our algorithm against a single-cell classification algorithm (Zhang et al., IEEE JBHI, vol. 21, no. 6, p. 1633, 2017), we modified our method to also classify single cells on the TU test dataset using Mask R-CNN segmentation. The results had an accuracy of 89.8%, sensitivity of 72.5%, and specificity of 94.3%.
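Instance-segmentation metrics such as mAP rest on matching predicted masks to ground-truth masks by intersection-over-union (IoU). A minimal sketch of that matching step, assuming boolean nucleus masks and a greedy one-to-one assignment (the function names and the 0.5 threshold are illustrative, not from the paper):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

def match_detections(pred_masks, gt_masks, thresh=0.5):
    """Greedily match predicted nuclei to ground truth; a prediction is a
    true positive if its best IoU with an unmatched truth >= thresh."""
    unmatched = list(range(len(gt_masks)))
    tp = 0
    for p in pred_masks:
        ious = [mask_iou(p, gt_masks[g]) for g in unmatched]
        if ious and max(ious) >= thresh:
            tp += 1
            unmatched.pop(int(np.argmax(ious)))
    return tp, len(pred_masks) - tp, len(unmatched)   # TP, FP, FN

# toy 8x8 "slide": one ground-truth nucleus, one good and one bad prediction
gt = np.zeros((8, 8), bool);   gt[2:5, 2:5] = True
good = np.zeros((8, 8), bool); good[2:5, 2:6] = True   # IoU 0.75 with gt
bad = np.zeros((8, 8), bool);  bad[6:8, 6:8] = True    # misses entirely
tp, fp, fn = match_detections([good, bad], [gt])
```

Counting TP/FP/FN at a sweep of confidence thresholds is what yields the precision-recall curve behind the mAP figure.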


Subjects
Uterine Cervical Neoplasms , Deep Learning , Early Detection of Cancer , Female , Humans , Papanicolaou Test , Vaginal Smears
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 904-907, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946040

ABSTRACT

Glaucoma is the second leading cause of blindness worldwide. This paper proposes an automated glaucoma screening method using retinal fundus images, based on an ensemble technique that fuses the results of different classification networks: the result of each classification network is fed as an input to a simple artificial neural network (ANN) to obtain the final result. Three public datasets, i.e., ORIGA-650, RIM-ONE R3, and DRISHTI-GS, were used for training and for evaluating the performance of the proposed network. The experimental results showed that the proposed network outperformed other state-of-the-art glaucoma screening algorithms with an AUC of 0.94. Our proposed algorithm shows promising potential as a medical support system for glaucoma screening, especially in low-resource countries.
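The fusion stage described above can be reduced to its simplest form: a single-neuron "ANN" that learns how to weight the scores of the base networks. The sketch below uses synthetic scores from three hypothetical base classifiers (the data, learning rate, and network size are all assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic scores from three base networks for 200 eyes;
# glaucomatous eyes (y = 1) tend to score higher on all three
y = rng.integers(0, 2, 200)
scores = y[:, None] * 0.6 + rng.normal(0.2, 0.15, (200, 3))

# single-neuron fusion network trained by plain gradient descent
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(scores @ w + b)))   # fused probability
    grad = p - y                              # logistic-loss gradient
    w -= 0.1 * scores.T @ grad / len(y)
    b -= 0.1 * grad.mean()

fused = 1 / (1 + np.exp(-(scores @ w + b)))
accuracy = np.mean((fused > 0.5) == y)
```

The appeal of learned fusion over simple averaging is that the combiner can down-weight a base network that is systematically less reliable.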


Subjects
Deep Learning , Glaucoma , Algorithms , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Humans
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 1469-1472, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060156

ABSTRACT

This work proposed an automated screening system for age-related macular degeneration (AMD) that also distinguishes between the wet and dry types of AMD, using fundus images to assist ophthalmologists in eye-disease screening and management. The algorithm employs contrast-limited adaptive histogram equalization (CLAHE) for image enhancement. Subsequently, the discrete wavelet transform (DWT) and locality sensitive discriminant analysis (LSDA) were used to extract features for a neural network model to classify the results. The results showed that the proposed algorithm was able to distinguish between normal eyes, dry AMD, and wet AMD with 98.63% sensitivity, 99.15% specificity, and 98.94% accuracy, suggesting promising potential as a medical support system for faster eye-disease screening at lower cost.
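The DWT feature stage can be illustrated with one level of a 2-D Haar transform, which splits an image into an approximation band plus three detail bands that capture texture. This is a sketch using the block-average normalization, not the paper's implementation; the CLAHE and LSDA stages are omitted (OpenCV's `cv2.createCLAHE` provides the former).

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns LL, LH, HL, HH subbands.
    LL is a half-resolution block average; LH, HL, HH hold horizontal,
    vertical, and diagonal detail usable as texture features."""
    img = np.asarray(img, float)
    a = (img[0::2, :] + img[1::2, :]) / 2     # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2     # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)  # toy ramp "image"
ll, lh, hl, hh = haar_dwt2(img)
```

With this normalization each LL coefficient is exactly the mean of a 2x2 pixel block, and a perfectly linear ramp produces zero diagonal detail, which makes the transform easy to sanity-check.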


Subjects
Macular Degeneration , Algorithms , Fundus Oculi , Humans , Wavelet Analysis