Results 1 - 3 of 3
1.
J Imaging Inform Med; 37(1): 45-59, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38343240

ABSTRACT

An automated computer-aided approach might aid radiologists in diagnosing breast cancer at an early stage. This study proposes a novel decision support system that classifies breast tumors as benign or malignant based on clinically important features extracted from ultrasound images. Nine handcrafted features, aligned with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of the ultrasound images. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, yielding test accuracies in the range of 96-99%. In addition, four feature selection techniques are explored, each eliminating two features according to its feature ranking scores, and a Random Forest classifier is trained on each of the four resulting feature sets. Results indicate that eliminating even two features reduces the model's performance for every feature selection technique, which validates the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature in order to find a threshold range that distinguishes benign from malignant tumors. Based on these per-feature threshold ranges, the decision support system marks an image as correctly predicted if at least eight of the nine features lie within the threshold range. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, so the decision support system outperforms all of the previously trained ML models. Per-class test accuracies are 99.31% for the benign class (only three of 437 benign instances misclassified) and 99.52% for the malignant class (only one of 210 malignant instances misclassified). The system is robust, time-efficient, and reliable because it follows radiologists' criteria, and it may aid specialists in making a diagnosis.
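
The following is a minimal sketch, not the authors' code, of the threshold-voting rule described in the abstract: each of the nine clinical features has a threshold range derived from its PDF, and a tumor is labelled malignant when at least eight of the nine features fall inside their malignant range. The feature names, values, and ranges below are hypothetical placeholders.

```python
from typing import Dict, Tuple

def classify_tumor(
    features: Dict[str, float],
    malignant_ranges: Dict[str, Tuple[float, float]],
    min_votes: int = 8,
) -> str:
    """Count features inside their malignant threshold range and vote."""
    votes = sum(
        low <= features[name] <= high
        for name, (low, high) in malignant_ranges.items()
        if name in features
    )
    return "malignant" if votes >= min_votes else "benign"

# Hypothetical usage with made-up feature names, values, and ranges:
ranges = {f"feature_{i}": (0.4, 1.0) for i in range(9)}
sample = {f"feature_{i}": 0.7 for i in range(9)}
print(classify_tumor(sample, ranges))  # -> "malignant"
```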

2.
Data Brief; 51: 109799, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38075615

ABSTRACT

Sign Language Recognition (SLR) is crucial for enabling communication between the deaf-mute and hearing communities. Nevertheless, developing a comprehensive sign language dataset is challenging due to the complexity of and variation in hand gestures. This challenge is particularly evident for Bangla Sign Language (BdSL), where the limited availability of depth datasets impedes accurate recognition. To address this issue, we propose BdSL47, an open-access depth dataset for 47 one-handed static signs of BdSL (10 digits, from ০ to ৯, and 37 letters, from অ to ঁ). The dataset was created using the MediaPipe framework to extract depth information. To classify the signs, we developed an Artificial Neural Network (ANN) model with a 63-node input layer, a 47-node output layer, and four hidden layers with dropout in the last two, trained with the Adam optimizer and ReLU activations. With the selected hyperparameters, the proposed ANN model effectively learns the spatial relationships and patterns in the depth-based gestural input features and achieves an F1 score of 97.84%, indicating the effectiveness of the approach compared to the provided baselines. As a comprehensive dataset, BdSL47 can help improve the accuracy of SLR for BdSL using more advanced deep-learning models.
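
Below is a minimal sketch, not the authors' code, of an ANN matching the architecture stated in the abstract: a 63-node input (21 MediaPipe hand landmarks times 3 coordinates), four ReLU hidden layers with dropout on the last two, a 47-way softmax output, and the Adam optimizer. The hidden-layer widths and dropout rate are not given in the abstract and are assumptions here.

```python
import tensorflow as tf

def build_bdsl47_ann(hidden=(256, 128, 64, 64), dropout_rate=0.3):
    # Assumed widths and dropout rate; only the 63-in / 47-out shape,
    # ReLU, Adam, and "dropout on the last two hidden layers" come from the text.
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(63,))])
    for i, units in enumerate(hidden):
        model.add(tf.keras.layers.Dense(units, activation="relu"))
        if i >= len(hidden) - 2:  # dropout only on the last two hidden layers
            model.add(tf.keras.layers.Dropout(dropout_rate))
    model.add(tf.keras.layers.Dense(47, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bdsl47_ann()
model.summary()
```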

3.
Biomedicines; 11(6), 2023 May 28.
Article in English | MEDLINE | ID: mdl-37371661

ABSTRACT

Diabetic retinopathy (DR) is the leading cause of blindness in people with diabetes worldwide, and early diagnosis is essential for effective treatment. Unfortunately, current DR screening requires the skill of ophthalmologists and is time-consuming. In this study, we present an automated system for DR severity classification employing a fine-tuned Compact Convolutional Transformer (CCT) model to overcome these issues. We assembled five datasets to generate a more extensive dataset containing 53,185 raw images, and applied various image pre-processing techniques and 12 types of augmentation procedures to improve image quality and enlarge the dataset. We propose a new model, DR-CCTNet, a modification of the original CCT model that addresses training-time concerns and scales to large amounts of data. The proposed model delivers excellent accuracy even with low-resolution images and retains strong performance when trained with fewer images, indicating that it is robust. We compare our model's performance with transfer learning models such as VGG19, VGG16, MobileNetV2, and ResNet50. The test accuracies of VGG19, ResNet50, VGG16, and MobileNetV2 were 72.88%, 76.67%, 73.22%, and 71.98%, respectively. The proposed DR-CCTNet model outperformed all of these with a test accuracy of 90.17%. This approach provides a novel and efficient method for DR detection that may lower the burden on ophthalmologists and expedite treatment for patients.
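
For context, here is a minimal sketch, not the authors' code, of the kind of transfer-learning baseline the abstract compares against: an ImageNet-pretrained ResNet50 with a new classification head fine-tuned for DR severity grading. The number of severity classes (5), the input resolution, and the frozen-backbone first stage are assumptions, not details taken from the paper.

```python
import tensorflow as tf

NUM_CLASSES = 5              # assumed DR severity grades
INPUT_SHAPE = (224, 224, 3)  # assumed input resolution

# ImageNet-pretrained backbone without its original classification head.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE
)
base.trainable = False       # freeze the backbone for the first fine-tuning stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```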
