Results 1 - 8 of 8
1.
Front Oncol; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce mortality. In the United States, approximately 97,610 new cases of melanoma are expected to be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make it extremely difficult to improve recognition accuracy with computerized techniques. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed first to increase the dataset size; two pretrained deep learning models, Xception and ShuffleNet, are then fine-tuned and trained using deep transfer learning, with deep features extracted from the global average pooling layer of each. Because analysis of this step showed that important information was missing from either stream alone, the two feature sets were fused. Since fusion increased the computational time, an improved Butterfly Optimization Algorithm was developed so that only the best features are selected and classified using machine learning classifiers. In addition, a GradCAM-based visualization is performed to analyze the important regions in each image. Two publicly available datasets, ISIC2018 and HAM10000, were used, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparing the proposed framework with state-of-the-art methods reveals improved accuracy and reduced computational time.
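
As a rough illustration of the two-stream feature extraction and fusion described in this abstract, the sketch below pulls public timm/torchvision checkpoints as stand-ins for the authors' fine-tuned Xception and ShuffleNet models; the input sizes, the plain concatenation used for fusion, and the dummy batch are assumptions, not details from the paper.

```python
# Hedged sketch: deep features from two pretrained streams, then fused.
import torch
import torch.nn.functional as F
import timm
from torchvision.models import shufflenet_v2_x1_0

xception = timm.create_model("xception", pretrained=True, num_classes=0)  # num_classes=0 returns pooled features
shufflenet = shufflenet_v2_x1_0(weights="DEFAULT")
shufflenet.fc = torch.nn.Identity()               # expose global-average-pooled features

xception.eval(); shufflenet.eval()
x = torch.randn(4, 3, 299, 299)                   # dummy batch standing in for lesion images
with torch.no_grad():
    f1 = xception(x)                              # shape (4, 2048)
    f2 = shufflenet(F.interpolate(x, size=224))   # shape (4, 1024)
fused = torch.cat([f1, f2], dim=1)                # concatenation as the fusion step
```

A selection step such as the paper's improved Butterfly Optimization Algorithm would then pick a subset of the 3072 fused dimensions before classification.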

2.
Cancers (Basel); 15(9), 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37173974

ABSTRACT

Leukocytes, also referred to as white blood cells (WBCs), are a crucial component of the human immune system. Abnormal proliferation of leukocytes in the bone marrow leads to leukemia, a fatal blood cancer, and classification of the various WBC subtypes is an important step in its diagnosis. Automated classification of WBCs using deep convolutional neural networks can achieve high accuracy but suffers from high computational costs due to very large feature sets. Dimensionality reduction through intelligent feature selection is therefore essential to improve model performance at reduced computational complexity. This work proposes an improved pipeline for WBC subtype classification that relies on transfer learning with deep neural networks for feature extraction, followed by a wrapper feature selection approach based on a customized quantum-inspired evolutionary algorithm (QIEA). This algorithm, inspired by the principles of quantum physics, outperforms classical evolutionary algorithms in exploring the search space. The reduced feature vector obtained from the QIEA is then classified with multiple baseline classifiers. To validate the proposed methodology, a public dataset of 5,000 images covering five WBC subtypes was used. The proposed system achieves a classification accuracy of about 99% with a 90% reduction in the size of the feature vector. The proposed feature selection method also converges better than the classical genetic algorithm and performs comparably to several existing works.
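
A minimal sketch of the wrapper idea behind the QIEA is shown below, under assumed details (rotation angle, population size, a KNN fitness on a toy sklearn dataset); the paper's customized operators are not reproduced. Each "qubit" encodes the probability that a feature is kept, a measurement produces binary masks, and a rotation gate nudges the population toward the best mask found so far.

```python
# Quantum-inspired evolutionary feature selection (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
n_feat, pop, iters, theta = X.shape[1], 10, 20, 0.05 * np.pi
angles = np.full((pop, n_feat), np.pi / 4)        # P(keep feature) = sin(angle)**2
best_mask, best_fit = np.ones(n_feat, bool), -np.inf
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

for _ in range(iters):
    masks = rng.random((pop, n_feat)) < np.sin(angles) ** 2   # "observe" each qubit
    fits = np.array([fitness(m) for m in masks])
    if fits.max() > best_fit:
        best_fit, best_mask = fits.max(), masks[fits.argmax()].copy()
    angles += np.where(best_mask, theta, -theta)              # rotation gate toward the best mask
    angles = np.clip(angles, 0.05, np.pi / 2 - 0.05)          # keep some exploration

print(f"kept {best_mask.sum()}/{n_feat} features, CV accuracy {best_fit:.3f}")
```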

3.
Diagnostics (Basel); 13(7), 2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37046456

ABSTRACT

Breast cancer is one of the most frequent cancers in women: approximately 287,850 new cases were diagnosed in 2022, and 43,250 women died from the disease. Early diagnosis can help reduce the mortality rate. However, manual diagnosis from mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature, but they still face challenges such as similarities between cancerous and non-cancerous regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then used for dataset augmentation, a step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained EfficientNet-b0 model is fine-tuned by adding a few new layers. The fine-tuned model is trained separately on the original and enhanced images using deep transfer learning with static hyperparameter initialization. Deep features are next extracted from the average pooling layer and fused using a new serial-based approach. The fused features are then optimized with a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which the Regula Falsi method serves as the termination function. The selected features are finally classified using several machine learning classifiers. Experiments were conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively. A comparison with state-of-the-art (SOTA) techniques shows that the proposed framework improves accuracy, and a confidence-interval-based analysis shows that its results are consistent.
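
The fine-tuning step can be pictured roughly as below, using torchvision's EfficientNet-b0; the new layer sizes, dropout rate, and two-class head are assumptions, and the paper's enhancement, fusion, and Equilibrium-Jaya selection stages are not reproduced.

```python
# Hedged sketch: fine-tune EfficientNet-b0 with a few new layers and expose
# average-pooling features for a later fusion step.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

model = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                       # keep pretrained features frozen

model.classifier = nn.Sequential(                 # the "few new layers"
    nn.Dropout(0.3),
    nn.Linear(1280, 256),
    nn.ReLU(),
    nn.Linear(256, 2),                            # benign vs. malignant (assumed)
)

x = torch.randn(4, 3, 224, 224)                   # dummy mammogram batch
feats = torch.flatten(model.avgpool(model.features(x)), 1)  # (4, 1280) deep features
# In the paper, features from the models trained on original and enhanced
# images would be concatenated (serial fusion) before feature selection.
```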

4.
Sensors (Basel); 23(8), 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37112323

ABSTRACT

With the most recent developments in wearable technology, the possibility of continuously monitoring stress using various physiological signals has attracted much attention. Early diagnosis of stress can enhance healthcare by reducing the detrimental effects of chronic stress. Machine learning (ML) models for healthcare systems are trained to track health status using adequate user data. However, because of privacy concerns, insufficient data are accessible, making it challenging to use artificial intelligence (AI) models in the medical industry. This research aims to preserve the privacy of patient data while classifying wearable-based electrodermal activity. We propose a federated learning (FL) approach using a deep neural network (DNN) model. For experimentation, we use the Wearable Stress and Affect Detection (WESAD) dataset, which includes five data states: transient, baseline, stress, amusement, and meditation. We transform this raw dataset into a form suitable for the proposed methodology using the Synthetic Minority Oversampling Technique (SMOTE) and min-max normalization. In the FL-based technique, the DNN is trained after receiving model updates from two clients, each of which trains on its own data individually. To decrease overfitting, every client repeats its analysis three times. Accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUROC) are evaluated for each client. The experimental results show the effectiveness of the federated learning technique with a DNN, reaching 86.82% accuracy while also preserving the privacy of patients' data. Using the FL-based DNN model on the WESAD dataset improves detection accuracy compared to previous studies while also protecting patient data.


Subjects
Artificial Intelligence, Wrist, Humans, Galvanic Skin Response, Wrist Joint, Fitness Trackers
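
The two-client FedAvg loop described above might look like the sketch below; the network width, learning rate, round count, and random tensors (standing in for the SMOTE-balanced, min-max-normalized WESAD features) are all assumptions.

```python
# Hedged sketch of two-client federated averaging with a small DNN.
import copy
import torch
import torch.nn as nn

def make_dnn():
    return nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 5))  # 5 WESAD states

def local_train(model, X, y, epochs=3):           # each client repeats its pass 3 times
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return model.state_dict()

global_model = make_dnn()
clients = [(torch.randn(128, 8), torch.randint(0, 5, (128,))) for _ in range(2)]

for _ in range(10):                               # communication rounds
    states = [local_train(copy.deepcopy(global_model), X, y) for X, y in clients]
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)             # FedAvg: only weights leave the clients
```

Only model weights are exchanged in this scheme, which is what lets the raw physiological data stay on each client.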
5.
Sensors (Basel); 23(5), 2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) is affected by partial obstruction of the human body caused by the limited field of view in video surveillance. Traditional methods require an accurate bounding box to recognize human gait in video sequences, which is a challenging and time-consuming approach. Driven by important applications such as biometrics and video surveillance, HGR performance has improved over the last half-decade; based on the literature, the challenging covariate factors that degrade it include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique fuses local and global filter information, and a high-boost operation is then applied to highlight the human region in each video frame. Data augmentation is performed in the second step to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning, with features extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach, and in the fifth step they are further refined with an improved equilibrium-state-optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. Experiments were conducted on the 8 angles of the CASIA-B dataset, obtaining accuracies of 97.3%, 98.6%, 97.7%, 96.5%, 92.9%, 93.7%, 94.7%, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.


Subjects
Deep Learning, Humans, Algorithms, Gait, Machine Learning, Biometrics/methods
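
The high-boost preprocessing mentioned in the first step can be approximated as below; the kernel size and boost factor are assumptions, the frame path is hypothetical, and the paper's local/global filter fusion is not reproduced.

```python
# Hedged sketch: high-boost filtering to emphasize the subject in a frame.
import cv2

frame = cv2.imread("casia_b_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
blurred = cv2.GaussianBlur(frame, (9, 9), 0)                   # low-pass component
k = 1.5                                                        # boost factor (assumed)
# high-boost: original + k * (original - low-pass), i.e. (1+k)*orig - k*blurred
boosted = cv2.addWeighted(frame, 1 + k, blurred, -k, 0)
cv2.imwrite("boosted.png", boosted)
```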
6.
Diagnostics (Basel); 13(5), 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36900145

ABSTRACT

Diabetic retinopathy (DR) and diabetic macular edema (DME) are eye illnesses caused by diabetes that affect the blood vessels in the eyes, with the area occupied by lesions of varied extent determining the disease burden. They are among the most common causes of visual impairment in the working population. Various factors have been found to play an important role in the development of this condition, with anxiety and long-term diabetes among the most important. If not detected early, the illness can result in permanent loss of eyesight; the damage can be reduced or avoided if it is recognized ahead of time. Unfortunately, the diagnostic process is slow and arduous, which makes it harder to assess the prevalence of the condition: skilled doctors manually review digital color images to look for the damage produced by vascular anomalies, the most common complication of diabetic retinopathy. Although this procedure is reasonably accurate, it is quite costly, and the resulting delays highlight the need for automated diagnosis, which would have a considerable positive impact on the health sector. The use of AI to diagnose the disease has yielded promising and dependable findings in recent years, which is the impetus for this publication. This article uses an ensemble convolutional neural network (ECNN) to diagnose DR and DME automatically, achieving an accuracy of 99%. This result was obtained through preprocessing, blood vessel segmentation, feature extraction, and classification; for contrast enhancement, the Harris hawks optimization (HHO) technique is presented. Finally, experiments were conducted on two datasets, IDRiD and Messidor, evaluating accuracy, precision, recall, F-score, computational time, and error rate.
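
The ensemble idea can be sketched as simple soft voting over member CNNs; the architectures, the five-class head, and the averaging scheme are assumptions (the HHO-based enhancement and segmentation stages are not reproduced).

```python
# Hedged sketch: soft-voting ensemble of two CNN members.
import torch
from torchvision.models import resnet18, mobilenet_v2

members = [resnet18(num_classes=5), mobilenet_v2(num_classes=5)]  # 5 grades (assumed)
for m in members:
    m.eval()

def ensemble_predict(x):
    # average the members' softmax outputs, then take the argmax (soft voting)
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=1) for m in members])
    return probs.mean(0).argmax(dim=1)

x = torch.randn(2, 3, 224, 224)   # dummy fundus-image batch
print(ensemble_predict(x))
```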

7.
Diagnostics (Basel); 13(2), 2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36673057

ABSTRACT

The competence of machine learning approaches to carry out clinical expertise tasks has recently gained a lot of attention, particularly in the field of medical-imaging examination. Chest radiography is among the most frequently used clinical-imaging modalities in the healthcare profession, and it calls for prompt reporting of potential anomalies and illness diagnostics in images. Automated frameworks for recognizing chest abnormalities in X-rays are being introduced in health departments. However, the reliable detection and classification of particular illnesses in chest X-ray samples remains a complicated issue because of the complex structure of radiographs, e.g., their large exposure dynamic range. Moreover, the incidence of various image artifacts and extensive inter- and intra-category resemblances further increases the difficulty of chest disease recognition. The aim of this study was to resolve these problems. We propose a deep learning (DL) approach to detecting chest abnormalities in the X-ray modality using the EfficientDet (CXray-EffDet) model. More specifically, we employ the EfficientNet-B0-based EfficientDet-D0 model to compute a reliable set of sample features and accomplish the detection and classification task by categorizing eight categories of chest abnormalities in X-ray images. The effective feature computation of the CXray-EffDet model strengthens chest abnormality recognition through its high recall rate, and it presents a lightweight and computationally robust approach. A large-scale test of the model on a standard database from the National Institutes of Health (NIH) was conducted to demonstrate the chest disease localization and categorization performance of the CXray-EffDet model. We attained an AUC score of 0.9080, along with an IoU of 0.834, which clearly demonstrates the competence of the introduced model.
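
The reported IoU and AUC can be computed as in the generic metric sketch below; this is not the authors' evaluation code, and the boxes and scores are made up.

```python
# Hedged sketch: box IoU for localization and ROC-AUC for classification.
from sklearn.metrics import roc_auc_score

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

print(box_iou((10, 10, 50, 50), (20, 20, 60, 60)))        # predicted vs. ground-truth box
print(roc_auc_score([0, 1, 1, 0], [0.2, 0.9, 0.7, 0.4]))  # per-class abnormality scores
```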

8.
Diagnostics (Basel); 13(1), 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36611387

ABSTRACT

The rapid growth of Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, medical assistance or healthcare advice is easier to obtain and understand online than in person. For mild symptoms, people frequently feel reluctant to visit a hospital or a doctor; instead, they post their questions on numerous healthcare forums. However, the answers they receive may not always be accurate, there is no assurance that users will receive a reply to their posts at all, and some posts are fabricated, which can misdirect the patient. To address these issues, online automatic prediction (OAP) is proposed. OAP uses machine learning to predict the common attributes of a disease using a Never-Ending Image Learner with an intelligent analysis of disease factors. The Never-Ending Image Learner predicts disease factors by selecting from finite image data with minimum structural risk and efficiently predicts from real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. With a machine-learning-based Never-Ending Image Learner, common disease attributes can thus be predicted online automatically. The method provides deeper storage of images, whose data are stored according to isotropic positioning. The proposed method was compared with existing approaches, such as multiple-instance learning for automated image indexing and hyper-spectral image classification. With machine learning over multiple images and the application of isotropic positioning, operating efficiency is improved and results are predicted with better accuracy. In this paper, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this survey the proposed method is shown to achieve higher accuracy, proving its efficiency compared to the existing methods.
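
The abstract's multiple-instance learning component is only loosely specified; the max-pooling MIL sketch below (embed instances, pool to a bag score) is an assumption about what such a predictor could look like, not the paper's method.

```python
# Hedged sketch: max-pooling multiple-instance learning over image features.
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    def __init__(self, in_dim=512):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.score = nn.Linear(128, 1)

    def forward(self, bag):                        # bag: (n_instances, in_dim)
        inst = self.score(self.embed(bag))         # per-instance scores
        return inst.max(dim=0).values.sigmoid()    # bag label from the top instance

bag = torch.randn(12, 512)   # e.g., features of 12 images attached to one post
print(MaxPoolMIL()(bag))
```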
