Results 1 - 3 of 3
1.
Sensors (Basel) ; 23(19)2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37837049

ABSTRACT

Flat foot is a postural deformity in which the plantar part of the foot is either completely or partially in contact with the ground. In recent clinical practice, X-ray radiographs have been adopted to detect flat feet because they are more affordable for many clinics than specialized devices. This research aims to develop an automated model that detects flat foot cases and their severity levels from lateral foot X-ray images by measuring three different foot angles: the Arch Angle, Meary's Angle, and the Calcaneal Inclination Angle. Since these angles are formed by connecting a set of points on the image, Template Matching is used to propose a set of candidate points for each angle, and a classifier then selects the point with the highest predicted likelihood of being correct. Inspired by the literature, this research constructed and compared two models: a Convolutional Neural Network-based model and a Random Forest-based model. These models were trained on 8000 images and tested on 240 unseen cases. The highest overall accuracy rate, 93.13%, was achieved by the Random Forest model, with mean values across all foot types (normal foot, mild flat foot, and moderate flat foot) of 93.38% precision, 92.56% recall, 96.46% specificity, 95.42% accuracy, and a 92.90% F-score. The main conclusions drawn from this research are: (1) using transfer learning (VGG-16) as a feature extractor only, together with image augmentation, greatly increased the overall accuracy rate; and (2) measuring three different foot angles yields more accurate estimates than measuring a single foot angle.


Subjects
Calcaneus , Flatfoot , Humans , Flatfoot/diagnostic imaging , Foot/diagnostic imaging , Radiography
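Each of the three foot angles is defined by landmark points detected on the radiograph, so once the points are chosen the measurement reduces to the angle at a vertex between two rays. A minimal sketch (the landmark coordinates below are hypothetical, not from the study):

```python
import math

def angle_at_vertex(vertex, p_a, p_b):
    """Angle in degrees at `vertex`, formed by the rays toward p_a and p_b."""
    ax, ay = p_a[0] - vertex[0], p_a[1] - vertex[1]
    bx, by = p_b[0] - vertex[0], p_b[1] - vertex[1]
    cos_theta = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Hypothetical pixel coordinates of three landmarks on a lateral X-ray:
example_angle = angle_at_vertex((120, 80), (60, 60), (200, 110))
```

In the paper's pipeline, the vertex and endpoint coordinates would come from the Template Matching candidates ranked by the classifier; the severity level is then decided by comparing each measured angle against its clinical normal range.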
2.
Sensors (Basel) ; 23(12)2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37420812

ABSTRACT

Early diagnosis of mild cognitive impairment (MCI) with magnetic resonance imaging (MRI) has been shown to positively affect patients' lives. To save the time and costs associated with clinical investigation, deep learning approaches have been widely used to predict MCI. This study proposes optimized deep learning models for differentiating between MCI and normal control samples. Previous studies have relied extensively on the hippocampus region of the brain to diagnose MCI. The entorhinal cortex is a promising area for diagnosing MCI because severe atrophy is observed there before the hippocampus shrinks. Owing to the small size of the entorhinal cortex relative to the hippocampus, limited research has examined this brain region for predicting MCI. This study constructs a dataset containing only the entorhinal cortex area to implement the classification system. To extract features of the entorhinal cortex, three neural network architectures were optimized independently: VGG16, Inception-V3, and ResNet50. The best results were achieved using a convolutional neural network classifier with Inception-V3 for feature extraction, with accuracy, sensitivity, specificity, and area under the curve of 70%, 90%, 54%, and 69%, respectively. Furthermore, the model has an acceptable balance between precision and recall, achieving an F1 score of 73%. The results of this study validate the effectiveness of our approach in predicting MCI and may contribute to diagnosing MCI through MRI.


Subjects
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Humans , Alzheimer Disease/pathology , Cognitive Dysfunction/diagnostic imaging , Magnetic Resonance Imaging/methods , Entorhinal Cortex/diagnostic imaging , Entorhinal Cortex/pathology
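The accuracy, sensitivity, and specificity figures reported above all derive from the binary confusion matrix (MCI vs. normal control). A minimal sketch of the computation; the counts in the example are illustrative, not the study's actual confusion matrix:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall: fraction of MCI cases caught
    specificity = tn / (tn + fp)   # fraction of controls correctly rejected
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts: 100 MCI and 100 control test samples.
metrics = binary_metrics(tp=90, fp=46, tn=54, fn=10)
```

Note how a high sensitivity (0.90 here) can coexist with a low specificity (0.54), the same trade-off visible in the reported results: the model rarely misses MCI but produces many false alarms on controls.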
3.
Sensors (Basel) ; 22(17)2022 Aug 31.
Article in English | MEDLINE | ID: mdl-36081048

ABSTRACT

Epilepsy is a nervous system disorder. Electroencephalography (EEG) is a widely used clinical approach for recording electrical activity in the brain. Although a number of datasets are available, most are imbalanced because epileptic EEG signals are far less common than non-epileptic ones. This research studies the possibility of integrating local EEG signals from an epilepsy center at King Abdulaziz University Hospital into the CHB-MIT dataset by applying a new compatibility framework for data integration. The framework comprises multiple functions, including dominant channel selection followed by a novel algorithm for reading XLtek EEG data. The resulting integrated datasets, which contain the selected channels, are tested and evaluated using a deep-learning model combining a 1D-CNN, Bi-LSTM, and attention. The model achieved up to 96.87% accuracy, 96.98% precision, and 96.85% sensitivity, outperforming recent systems that use a larger number of EEG channels.


Subjects
Electroencephalography , Epilepsy , Algorithms , Brain , Electroencephalography/methods , Epilepsy/diagnosis , Humans , Seizures/diagnosis , Signal Processing, Computer-Assisted
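The dominant channel selection step above reduces a multi-channel recording to the channels that carry the most signal before integration. The abstract does not state the paper's selection criterion, so the sketch below uses per-channel variance purely as an illustrative stand-in:

```python
import numpy as np

def select_dominant_channels(eeg, k):
    """Keep the k channels with highest variance from an EEG array.

    eeg: array of shape (channels, samples).
    Variance ranking is an assumed, illustrative criterion; the study's
    actual dominant-channel selection method is not given in the abstract.
    Returns (sorted channel indices, reduced array of shape (k, samples)).
    """
    variances = eeg.var(axis=1)
    top = np.sort(np.argsort(variances)[::-1][:k])
    return top, eeg[top]

# Toy 3-channel recording: channel 0 is flat, channel 2 swings the most.
signals = np.array([[0, 0, 0, 0],
                    [1, -1, 1, -1],
                    [5, -5, 5, -5]], dtype=float)
kept_idx, reduced = select_dominant_channels(signals, k=2)
```

Selecting a common channel subset is one way two differently recorded corpora (here, local XLtek recordings and CHB-MIT) can be made shape-compatible before being fed to a single model.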