1.
PeerJ Comput Sci ; 10: e1995, 2024.
Article in English | MEDLINE | ID: mdl-38686004

ABSTRACT

The detection of natural images, such as glaciers and mountains, has practical applications in transportation automation and outdoor activities. Convolutional neural networks (CNNs) are widely employed for image recognition and classification tasks. While previous studies have focused on fruits, landslides, and medical images, further research is needed on the detection of natural images, particularly glaciers and mountains. To address the limitations of traditional CNNs, such as vanishing gradients and the need for many layers, this work introduces a novel model called DenseHillNet, a CNN with densely connected layers that accurately classifies images as glaciers or mountains. The model contributes to the development of automation technologies in transportation and outdoor activities. The dataset used in this study comprises 3,096 images in each of the "glacier" and "mountain" categories. A rigorous methodology was employed for dataset preparation and model training to ensure the validity of the results. A comparison with previous work showed that the proposed DenseHillNet model, trained on both glacier and mountain images, achieved higher accuracy (86%) than a CNN model that used only glacier images (72%). The article is aimed at researchers and graduate students.
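As a rough illustration of the densely connected design the abstract describes (not the published DenseHillNet configuration, whose layer sizes are not given here), the following Keras sketch builds a small dense block in which each layer receives the concatenated outputs of all preceding layers, followed by a binary glacier/mountain head; the input size, growth rate, and layer count are assumptions.

```python
# Minimal sketch of a densely connected CNN for binary glacier/mountain
# classification. Layer counts, growth rate, and input size are assumptions,
# not the published DenseHillNet configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

def dense_block(x, num_layers=4, growth_rate=16):
    # Each layer sees the concatenation of all previous feature maps,
    # which shortens gradient paths and mitigates vanishing gradients.
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])
    return x

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 7, strides=2, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
x = dense_block(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # glacier vs. mountain

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```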

2.
Sci Rep ; 14(1): 6530, 2024 03 19.
Article in English | MEDLINE | ID: mdl-38503765

ABSTRACT

Nanoparticulate systems hold promise for a new generation of drug delivery systems. Nanotechnology bridges the physical and biological sciences by applying nanostructures in distinct fields of science, particularly nano-based drug delivery. The low delivery efficiency of nanoparticles is a critical obstacle in the field of tumor diagnosis. Several studies have focused on nano-based drug delivery for tumor diagnosis, but delivery efficiency has not improved sufficiently. This work proposes a method called point biserial correlation symbiotic organism search nanoengineering-based drug delivery (PBC-SOSN). The aim of the PBC-SOSN method is to achieve higher drug delivery efficiency and shorter drug delivery time for tumor diagnosis. Its contribution is an optimized nanoengineering-based drug delivery process with a higher drug delivery detection rate and a lower drug delivery error rate. First, raw data acquired from the nano-tumor dataset and the nano-drugs for glioblastoma dataset are preprocessed using nano variational model decomposition. The preprocessed samples are then subjected to variance analysis and a point biserial correlation-based feature selection model. Finally, the preprocessed samples and the selected features are passed to symbiotic organism search nanoengineering (SOSN) to meet the stated objective. Point biserial correlation-based feature selection and symbiotic organism search nanoengineering were tested for their modeling performance on the nano-tumor dataset and the nano-drugs for glioblastoma dataset, with the latter proving the better algorithm. The method can adjust its drug delivery detection rate and drug delivery error rate based on the features selected through nano variational model decomposition, enabling efficient drug delivery.
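The feature-selection step named in the abstract can be illustrated with the point-biserial correlation from SciPy; the sketch below ranks continuous features against a binary diagnostic label. The synthetic data, feature count, and top-k cutoff are placeholders, and the symbiotic organism search optimization stage is not shown.

```python
# Illustrative point-biserial feature ranking for a binary diagnostic label.
# The data, feature count, and top-k cutoff are placeholders, not the paper's
# nano-tumor dataset; the SOSN optimization step is omitted.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 samples, 12 candidate features
y = rng.integers(0, 2, size=200)      # binary label (e.g., tumor / no tumor)

# Correlate each continuous feature with the dichotomous label.
scores = np.array([abs(pointbiserialr(X[:, j], y)[0]) for j in range(X.shape[1])])

top_k = 5
selected = np.argsort(scores)[::-1][:top_k]   # indices of the k strongest features
print("selected feature indices:", selected)
```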


Subject(s)
Glioblastoma , Nanoparticles , Nanostructures , Humans , Drug Delivery Systems , Nanotechnology/methods , Pharmaceutical Preparations , Nanoparticles/chemistry
3.
Heliyon ; 10(2): e24403, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38304780

ABSTRACT

The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers continue to introduce new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer cells (HT-29). The colorectal cancer cell line was procured from a commercial supplier. The cells were then cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting in the images is performed by integrating the YOLOv2, ResNet-50, and ResNet-18 architectures. ResNet-18 achieved an accuracy of 98.70% and ResNet-50 achieved 96.66%. The study meets its primary objective by detecting and quantifying congested and overlapping colorectal cancer cells within the images. This work constitutes a significant development in overlapping cancer cell detection and counting, paving the way for further advancements and opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize object detection performance. Further investigation into real-time deployment strategies will enhance the practical applicability of these models.
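To make the counting step concrete, here is a small sketch of how detections from a YOLO-style detector (bounding boxes plus confidence scores) could be filtered and counted so that overlapping cells are not double-counted. The detector itself is assumed to exist upstream, and the thresholds are placeholders rather than the paper's settings.

```python
# Illustrative counting of detected cells from generic detector output
# (boxes as [x1, y1, x2, y2] plus confidence scores). The upstream detector
# (e.g., a YOLOv2-style network) and the threshold values are assumptions.
import numpy as np

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_cells(boxes, scores, score_thr=0.5, iou_thr=0.45):
    # Keep confident detections, then suppress near-duplicate boxes so that
    # overlapping cells are not double-counted.
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if scores[i] < score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return len(kept)

boxes = np.array([[10, 10, 40, 40], [12, 11, 42, 41], [80, 80, 110, 115]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print("cell count:", count_cells(boxes, scores))  # -> 2
```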

4.
BMC Bioinformatics ; 24(1): 458, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38053030

ABSTRACT

Intense sun exposure is a major risk factor for melanoma, an abnormal proliferation of skin cells, yet this type of skin cancer can also develop in less-exposed, shaded areas of the body. Melanoma is the sixth most common type of skin cancer. In recent years, computer-based methods for imaging and analyzing biological systems have made considerable strides. This work investigates advanced machine learning methods, specifically ensemble models built on Auto Correlogram, Binary Pyramid Pattern Filter, and Color Layout Filter features, to enhance the detection accuracy of melanoma skin cancer. The results suggest that the Attribute Selection Classifier combined with the Color Layout Filter provides the best overall performance: 90.96% accuracy, 0.91 precision, 0.91 recall, 0.95 ROC area, 0.87 PRC area, 0.87 Kappa, 0.91 F-measure, and 0.82 Matthews correlation coefficient, with the smallest margins of error. The research found that the Attribute Selection Classifier performed well when used in conjunction with the Color Layout Filter to improve image classification.
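The paper's pipeline (filter-based image features followed by an attribute-selected classifier) can be sketched with an analogous scikit-learn setup; this is not the original Weka-style configuration. A coarse grid of average colors stands in for a Color Layout-style descriptor, and the feature-selection and classifier choices are assumptions.

```python
# Analogous pipeline sketch: hand-crafted color features, attribute (feature)
# selection, then classification. A coarse color-grid descriptor stands in for
# the Color Layout features; classifier and parameters are assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

def color_layout_like_features(image, grid=4):
    # Average RGB color per cell of a grid x grid layout (image: H x W x 3).
    h, w, _ = image.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            feats.extend(cell.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

# Toy data standing in for dermoscopic images with benign/malignant labels.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(60, 64, 64, 3))
labels = rng.integers(0, 2, size=60)
X = np.stack([color_layout_like_features(im) for im in images])

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),     # attribute selection step
    ("model", RandomForestClassifier(random_state=0)),
])
clf.fit(X, labels)
```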


Subject(s)
Melanoma , Skin Neoplasms , Humans , Algorithms , Skin Neoplasms/diagnostic imaging , Melanoma/diagnostic imaging , Machine Learning , Melanoma, Cutaneous Malignant
5.
Sci Rep ; 13(1): 17904, 2023 10 20.
Article in English | MEDLINE | ID: mdl-37863944

ABSTRACT

Ultrasound imaging is commonly used to monitor fetal development. It has the advantage of being real-time, low-cost, non-invasive, and easy to use. However, fetal organ detection is a challenging task for obstetricians; it depends on several factors, such as the position of the fetus, the habitus of the mother, and the imaging technique. In addition, image interpretation must be performed by a trained healthcare professional who can take all relevant clinical factors into account. Artificial intelligence is playing an increasingly important role in medical imaging and can help address many of the challenges associated with fetal organ classification. In this paper, we propose a deep-learning model for automating fetal organ classification from ultrasound images. We trained and tested the model on fetal ultrasound images drawn from two datasets collected in different regions and recorded with different machines, to ensure effective detection of fetal organs. Training was performed on a labeled dataset annotated for fetal organs such as the brain, abdomen, femur, and thorax, as well as the maternal cervix. The model was trained to detect these organs from fetal ultrasound images using a deep convolutional neural network architecture. Following training, the model, DenseNet169, was assessed on a separate test dataset. The results were promising, with an accuracy of 99.84%, an F1 score of 99.84%, and an AUC of 98.95%. Our study showed that the proposed model outperformed traditional methods that rely on the manual interpretation of ultrasound images by experienced clinicians, as well as other deep learning-based methods that used different network architectures and training strategies. This study may contribute to the development of more accessible and effective maternal health services and improve the health status of mothers and their newborns worldwide.
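A minimal transfer-learning sketch with DenseNet169, the backbone named in the abstract, is shown below for multi-class fetal ultrasound classification. The five class names, the 224x224 input size, and the training settings are assumptions based on the abstract rather than the authors' exact setup.

```python
# Minimal transfer-learning sketch with DenseNet169 for multi-class fetal
# ultrasound classification. Class names, input size, and training settings
# are assumptions based on the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # brain, abdomen, femur, thorax, maternal cervix (assumed)

base = tf.keras.applications.DenseNet169(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features for the initial phase

inputs = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.densenet.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```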


Subject(s)
Artificial Intelligence , Maternal Health Services , Pregnancy , Female , Humans , Infant, Newborn , Ultrasonography , Ultrasonography, Prenatal/methods , Machine Learning
6.
BMC Bioinformatics ; 24(1): 382, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37817066

ABSTRACT

An abnormal growth or mass of cells in the brain is called a tumor. Tumors can be benign or become cancerous, depending on the structure of their cells, and can increase pressure within the cranium, potentially causing damage to the brain or even death. As a result, diagnostic procedures such as computed tomography, magnetic resonance imaging, and positron emission tomography, as well as blood and urine tests, are used to identify brain tumors. However, these methods can be labor-intensive and sometimes yield inaccurate results. Deep learning models are employed instead because they are less time-consuming, require less expensive equipment, produce more accurate results, and are easy to set up. In this study, we propose a method based on transfer learning, utilizing the pre-trained VGG-19 model. The approach is enhanced by applying a customized convolutional neural network framework and combining it with pre-processing methods, including normalization and data augmentation. For training and testing, the proposed model used 80% and 20% of the images in the dataset, respectively. The proposed method achieved an accuracy of 99.43%, a sensitivity of 98.73%, and a specificity of 97.21%. The dataset, sourced from Kaggle, consists of 407 images, 257 depicting brain tumors and 150 without tumors. Based on these outcomes, such models could be used to develop clinically useful solutions for identifying brain tumors in CT images.
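The transfer-learning setup the abstract describes (a pre-trained VGG-19 backbone with a customized head, normalization, and augmentation, and an 80/20 split) might look roughly like the Keras sketch below; the head layout, augmentation settings, and dataset handling are assumptions, not the authors' exact configuration.

```python
# Sketch of VGG-19 transfer learning with a customized classification head and
# basic augmentation. Head layout, augmentation, and split handling are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse pretrained convolutional features

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.vgg19.preprocess_input(x)
x = base(x, training=False)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)   # customized head (assumed size)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # tumor vs. no tumor

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# An 80/20 split could be produced with
# tf.keras.utils.image_dataset_from_directory(..., validation_split=0.2, ...).
```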


Subject(s)
Brain Neoplasms , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Tomography, X-Ray Computed , Magnetic Resonance Imaging , Brain
7.
Diagnostics (Basel) ; 13(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37443636

ABSTRACT

This study aims to develop an efficient and accurate breast cancer classification model using meta-learning approaches and multiple convolutional neural networks. The Breast Ultrasound Images (BUSI) dataset contains various types of breast lesions, and the goal is to classify these lesions as benign or malignant, which is crucial for the early detection and treatment of breast cancer. Traditional machine learning and deep learning approaches often fail to classify these images accurately because of their complex and diverse nature. To address this problem, the proposed model uses several advanced techniques: a meta-learning ensemble technique, transfer learning, and data augmentation. Meta-learning optimizes the model's learning process, allowing it to adapt quickly to new and unseen datasets. Transfer learning leverages pre-trained models such as Inception, ResNet50, and DenseNet121 to enhance the model's feature-extraction ability. Data augmentation artificially generates new training images, increasing the size and diversity of the dataset. The meta-ensemble combines the outputs of multiple CNNs, improving the model's classification accuracy. The proposed work first pre-processes the BUSI dataset, then trains and evaluates multiple CNNs using different architectures and pre-trained models; a meta-learning algorithm is applied to optimize the learning process, and ensemble learning combines the outputs of the individual CNNs. The evaluation results indicate that the model is highly effective, with high accuracy. Finally, the proposed model's performance is compared with state-of-the-art approaches in terms of accuracy, precision, recall, and F1 score.
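One way to read the meta-ensemble described above is as stacking: each pre-trained backbone produces a benign/malignant probability, and a simple meta-learner is fit on the stacked base predictions. The sketch below follows that reading with the backbones named in the abstract; the logistic-regression meta-learner, toy data, and fine-tuning details are assumptions.

```python
# Sketch of a stacking-style meta-ensemble over pretrained CNN backbones.
# Backbone choice follows the abstract; the meta-learner and data are assumed.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.linear_model import LogisticRegression

def build_base(backbone_fn, preprocess):
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))
    base.trainable = False
    inp = layers.Input(shape=(224, 224, 3))
    x = preprocess(inp)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inp, out)

apps = tf.keras.applications
bases = [
    build_base(apps.InceptionV3, apps.inception_v3.preprocess_input),
    build_base(apps.ResNet50, apps.resnet50.preprocess_input),
    build_base(apps.DenseNet121, apps.densenet.preprocess_input),
]
# (Each base model would be fine-tuned on the training images first.)

def stacked_predictions(images):
    # Column j holds base model j's benign/malignant probability per image.
    return np.hstack([m.predict(images, verbose=0) for m in bases])

# Toy stand-in data; real use would pass BUSI training/validation images.
X_val = (np.random.rand(8, 224, 224, 3) * 255).astype("float32")
y_val = np.array([0, 1, 0, 1, 0, 1, 0, 1])

meta = LogisticRegression().fit(stacked_predictions(X_val), y_val)
```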

8.
Diagnostics (Basel) ; 13(13)2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37443658

ABSTRACT

Cancer, including the highly dangerous melanoma, is marked by uncontrolled cell growth and the possibility of spreading to other parts of the body. The conventional approach to machine learning relies on centralized training data, which poses data-privacy challenges for healthcare systems driven by artificial intelligence. Collecting data from diverse sensors increases computing costs, while privacy restrictions make it difficult to employ traditional machine learning methods. Researchers therefore face the formidable task of developing a skin cancer prediction technique that accounts for privacy concerns while simultaneously improving accuracy. In this work, we propose a decentralized, privacy-aware learning mechanism to accurately predict melanoma skin cancer. We analyzed federated learning on a skin cancer database, and the proposed method achieved 92% accuracy, higher than the baseline algorithms.
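The decentralized idea behind the approach can be sketched with federated averaging: clients train locally on their own data and share only model weights, which the server averages. The client model, the number of clients, and the local update below are placeholders, not the paper's protocol.

```python
# Minimal federated-averaging (FedAvg-style) sketch: clients train locally and
# only model weights are shared and averaged, never the raw images. The client
# model, number of clients, and local training step are placeholders.
import numpy as np

def local_update(weights, client_data, lr=0.1):
    # Placeholder local step: one gradient step of logistic regression on the
    # (features, labels) held by a single client.
    X, y = client_data
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    # Weighted average of client models, proportional to local dataset size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, size=50)) for _ in range(3)]
global_w = np.zeros(8)

for round_ in range(5):  # communication rounds
    updates = [local_update(global_w.copy(), data) for data in clients]
    global_w = federated_average(updates, [len(d[1]) for d in clients])
```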

9.
Diagnostics (Basel) ; 12(12)2022 12 13.
Article in English | MEDLINE | ID: mdl-36553152

ABSTRACT

Skin cancer is one of the most severe forms of cancer, and it can spread to other parts of the body if not detected early. Diagnosing and treating skin cancer patients at an early stage is therefore crucial. Manual skin cancer diagnosis is time-consuming and expensive, and the high similarity between the various skin cancers makes incorrect diagnoses likely. Improved categorization of multiclass skin cancers requires the development of automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Before model creation, the training dataset is augmented using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent the class-imbalance issues that can lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. The traditional augmentation methods are then used to enlarge the existing training set and improve the performance of the pre-trained deep models on the skin cancer classification task, and this improved performance is compared with models developed on the unbalanced dataset. In addition, we formed an ensemble of fine-tuned transfer learning models, trained on both the balanced and unbalanced datasets, and used them to make predictions. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101; the ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of model performance concluded that this method can lead to enhanced skin cancer categorization compared with previous efforts.
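As an illustration of the CGAN-based augmentation mentioned in the abstract, the sketch below defines a class-conditional generator that maps a noise vector plus a lesion-class label to a synthetic image. The image size, latent dimension, number of classes, and layer widths are assumptions, and the discriminator and training loop are omitted.

```python
# Sketch of a class-conditional GAN generator of the kind used to synthesize
# dermoscopic images for augmentation. Sizes and layer widths are assumptions;
# the discriminator and training loop are not shown.
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT_DIM = 100
NUM_CLASSES = 7  # number of skin-lesion classes (assumed)

def build_conditional_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")

    # Embed the class label and mix it with the noise vector so the generator
    # can produce images of a requested lesion class.
    lab = layers.Embedding(NUM_CLASSES, 50)(label)
    lab = layers.Flatten()(lab)
    x = layers.Concatenate()([noise, lab])

    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)  # 16x16
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)   # 32x32
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)   # 64x64
    img = layers.Conv2D(3, 3, padding="same", activation="tanh")(x)  # RGB in [-1, 1]
    return models.Model([noise, label], img)

generator = build_conditional_generator()
fake = generator([tf.random.normal((4, LATENT_DIM)),
                  tf.constant([[0], [1], [2], [3]])])  # 4 images, 4 requested classes
```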

10.
Healthcare (Basel) ; 10(10)2022 Oct 18.
Article in English | MEDLINE | ID: mdl-36292519

ABSTRACT

The novel coronavirus 2019 (COVID-19) spread rapidly around the world and its outbreak became a pandemic. Due to the increase in afflicted cases, the quantity of COVID-19 test kits available in hospitals decreased, making an autonomous detection system an essential tool for reducing infection risk and the spread of the virus. In the literature, various models based on machine learning (ML) and deep learning (DL) have been introduced to detect different types of pneumonia from chest X-ray images. The cornerstone of this paper is the use of pre-trained deep learning CNN architectures to construct an automated system for COVID-19 detection and diagnosis. We used a deep feature concatenation (DFC) mechanism to combine features extracted from input images by two modern pre-trained CNN models, AlexNet and Xception. Hence, we propose COVID-AleXception: a neural network that concatenates the AlexNet and Xception models to improve the overall prediction capability for this disease. To evaluate the proposed model, a large-scale X-ray dataset was built through careful selection of images from several sources. COVID-AleXception achieves a classification accuracy of 98.68%, demonstrating the superiority of the proposed model over AlexNet and Xception, which achieved classification accuracies of 94.86% and 95.63%, respectively. These performance results demonstrate the model's potential to help radiologists diagnose COVID-19 more quickly.
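The deep feature concatenation idea can be sketched as two frozen backbones whose pooled features are concatenated before a small classifier head. Keras ships Xception but not AlexNet, so VGG16 stands in for the AlexNet branch below; the input size and head layout are assumptions rather than the paper's configuration.

```python
# Sketch of deep feature concatenation (DFC): pooled features from two
# pretrained CNNs are concatenated and fed to a small classifier head.
# VGG16 stands in for the AlexNet branch, which Keras does not provide.
import tensorflow as tf
from tensorflow.keras import layers, models

def frozen_branch(backbone_fn, preprocess, inputs):
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(299, 299, 3))
    base.trainable = False
    x = preprocess(inputs)
    x = base(x, training=False)
    return layers.GlobalAveragePooling2D()(x)

apps = tf.keras.applications
inputs = layers.Input(shape=(299, 299, 3))
feat_a = frozen_branch(apps.Xception, apps.xception.preprocess_input, inputs)
feat_b = frozen_branch(apps.VGG16, apps.vgg16.preprocess_input, inputs)   # AlexNet stand-in

x = layers.Concatenate()([feat_a, feat_b])       # deep feature concatenation
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # COVID-19 vs. normal

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```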

11.
Saudi Med J ; 41(1): 94-97, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31915801

ABSTRACT

OBJECTIVES: To compare physical activity, postural stability, and muscle strength in Saudi adolescents with normal and poor sleep quality. METHODS: This cross-sectional study investigated 62 Saudi adolescents between December 2017 and April 2018 at Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia. Participants were classified into 2 equal groups: one with normal sleep (NS) and the other with poor sleep (PS). A TecnoBody balance system was used to measure postural stability, an ActiGraph to assess physical activity, and a hand dynamometer and pinch gauge to assess hand grip and key pinch strength, respectively. RESULTS: At low platform stability, the PS group showed poorer postural stability indices than the NS group with eyes either open or closed (p<0.05). ActiGraph data revealed that physical activity parameters, including total step count, total activity count, activity rate, and vigorous activity time, were significantly lower in the PS group (p<0.05). The PS group also had significantly more total sedentary time than the NS group. Muscle strength parameters did not differ significantly between groups (p>0.05). CONCLUSION: Poor sleep significantly impaired postural stability and physical activity in Saudi adolescents but had no effect on their isometric muscle strength.


Subject(s)
Exercise , Muscle Strength/physiology , Postural Balance , Sleep Wake Disorders/physiopathology , Sleep/physiology , Adolescent , Humans , Saudi Arabia