Results 1 - 13 of 13
1.
Sensors (Basel); 23(5), 2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) suffers when the human body is partially occluded, as often happens with the limited field of view in video surveillance. Traditional methods require an accurate bounding box to recognize human gait in video sequences, which is a challenging and time-consuming approach. Driven by important applications such as biometrics and video surveillance, HGR performance has improved over the last half-decade. According to the literature, the challenging covariate factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed, and a high-boost operation is then applied to highlight the human region in each video frame. In the second step, data augmentation is performed to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning; features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach, and in the fifth step they are further refined with an improved equilibrium state optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. Experiments on 8 angles of the CASIA-B dataset yielded accuracies of 97.3%, 98.6%, 97.7%, 96.5%, 92.9%, 93.7%, 94.7%, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.
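For illustration, the high-boost operation described in the first step can be sketched as follows. This is a minimal reconstruction with OpenCV, not the authors' implementation; the Gaussian kernel size and boost factor k are assumptions.

```python
import cv2
import numpy as np

def high_boost(frame: np.ndarray, k: float = 1.5, ksize: int = 5) -> np.ndarray:
    """Sharpen a frame by adding k times the high-frequency residual
    (original minus Gaussian-blurred) back onto the original."""
    blurred = cv2.GaussianBlur(frame, (ksize, ksize), 0)   # low-pass estimate
    detail = cv2.subtract(frame, blurred)                  # high-frequency detail
    return cv2.addWeighted(frame, 1.0, detail, k, 0)       # boosted frame
```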


Subject(s)
Deep Learning, Humans, Algorithms, Gait, Machine Learning, Biometry/methods
2.
Sensors (Basel); 23(8), 2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37112323

ABSTRACT

With the most recent developments in wearable technology, continuous stress monitoring using various physiological signals has attracted much attention. Early diagnosis of stress can enhance healthcare by reducing the detrimental effects of chronic stress. Machine learning (ML) models for healthcare systems are trained to track health status using adequate user data; however, privacy concerns limit the data that is accessible, making it challenging to apply artificial intelligence (AI) models in the medical industry. This research aims to preserve the privacy of patient data while classifying wearable-based electrodermal activity. We propose a federated learning (FL) approach built on a deep neural network (DNN) model. For experimentation, we use the Wearable Stress and Affect Detection (WESAD) dataset, which includes five data states: transient, baseline, stress, amusement, and meditation. We transform this raw dataset into a suitable form for the proposed methodology using the Synthetic Minority Oversampling Technique (SMOTE) and min-max normalization. In the FL-based technique, the DNN is trained on the dataset individually, with the global model updated after receiving model updates from two clients. To decrease the over-fitting effect, every client repeats the analysis three times. Accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUROC) are evaluated for each client. The experimental results show the effectiveness of the federated learning technique on a DNN, reaching 86.82% accuracy while preserving the privacy of patient data. Compared with previous studies, the FL-based DNN model improves detection accuracy on the WESAD dataset while also protecting patient data.
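A minimal sketch of the named preprocessing, using scikit-learn and imbalanced-learn; the feature matrix X and label vector y for the five WESAD states are placeholders, and the exact pipeline order is an assumption.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import SMOTE

def preprocess(X: np.ndarray, y: np.ndarray):
    """Scale features to [0, 1], then oversample minority states with SMOTE."""
    X_scaled = MinMaxScaler().fit_transform(X)
    X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_scaled, y)
    return X_bal, y_bal
```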


Subject(s)
Artificial Intelligence, Wrist, Humans, Galvanic Skin Response, Wrist Joint, Fitness Trackers
3.
Sensors (Basel); 22(2), 2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062405

ABSTRACT

Glaucoma is an eye disease triggered by excessive intraocular pressure that causes complete blindness at its advanced stage, whereas timely screening-based treatment can save the patient from complete vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify glaucomatous-affected regions. However, because of the complexity of glaucoma screening procedures and the shortage of human resources, we often face delays that can increase the vision loss ratio around the globe. To cope with the challenges of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variation in the size, shade, orientation, and shape of the lesions; furthermore, the extensive similarity between lesion and eye color further complicates the classification process. To overcome these challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features from the suspected samples are computed with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features from EfficientNet-B0 and performs top-down and bottom-up keypoint fusion several times. In the last step, the localized area containing the glaucoma lesion is predicted along with its associated class. We confirmed the robustness of our work by evaluating it on a challenging dataset, the online retinal fundus image database for glaucoma analysis (ORIGA). Furthermore, we performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both numeric and visual evaluations confirm that EfficientDet-D0 outperforms the latest frameworks and is more proficient at glaucoma classification.
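As an illustration of the first step, a hedged Keras sketch of computing backbone features with EfficientNet-B0; the input resolution and pretrained weights are assumptions, and the BiFPN fusion and prediction heads of EfficientDet-D0 are omitted.

```python
import tensorflow as tf

# EfficientNet-B0 as a feature extractor; in the full framework its outputs
# would feed EfficientDet-D0's BiFPN for repeated top-down/bottom-up fusion.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3))

fundus_batch = tf.random.uniform((1, 512, 512, 3))  # stand-in for a fundus image
features = backbone(fundus_batch)                   # deep feature maps
print(features.shape)
```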


Subject(s)
Deep Learning, Glaucoma, Optic Disk, Diagnostic Techniques, Ophthalmological, Fundus Oculi, Glaucoma/diagnosis, Humans
4.
Sensors (Basel); 22(3), 2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161553

ABSTRACT

Detecting and classifying skin cancer is a difficult task owing to the variation in skin textures and injuries. Manually detecting skin lesions from dermoscopy images is a difficult and time-consuming process. Recent advancements in the domains of the Internet of Things (IoT) and artificial intelligence for medical applications have demonstrated improvements in both accuracy and computational time. In this paper, a new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed. The proposed method includes five primary steps: image acquisition and contrast enhancement; deep learning feature extraction using transfer learning; best feature selection using a hybrid whale optimization and entropy-mutual information (EMI) approach; fusion of the selected features using a modified canonical correlation based approach; and, finally, extreme learning machine based classification. The feature selection step improves the system's computational efficiency and accuracy. The experiments were carried out on two publicly available datasets, HAM10000 and ISIC2018, on which the achieved accuracies are 93.40% and 94.36%, respectively. Compared to state-of-the-art (SOTA) techniques, the proposed method improves accuracy and is computationally efficient.
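The entropy-mutual information part of the selection step can be sketched as a mutual-information ranking with scikit-learn; the hybrid whale-optimization wrapper is omitted, and the cutoff k is an assumption.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def emi_rank(X: np.ndarray, y: np.ndarray, k: int = 500) -> np.ndarray:
    """Return indices of the k features sharing the most information with y."""
    mi = mutual_info_classif(X, y, random_state=0)   # MI of each feature with labels
    return np.argsort(mi)[::-1][:k]                  # top-k most informative features
```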


Subject(s)
Skin Diseases, Skin Neoplasms, Algorithms, Artificial Intelligence, Entropy, Humans, Skin Neoplasms/diagnostic imaging
5.
Diagnostics (Basel); 13(2), 2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36673057

ABSTRACT

The competence of machine learning approaches to carry out clinical expertise tasks has recently gained much attention, particularly in medical-imaging examination. Chest radiography is among the most frequently used clinical-imaging modalities in the healthcare profession and calls for prompt reporting of potential anomalies and illness diagnostics in images. Automated frameworks for the recognition of chest abnormalities from X-rays are being introduced in health departments. However, the reliable detection and classification of particular illnesses in chest X-ray samples remains a complicated issue because of the complex structure of radiographs, e.g., the large exposure dynamic range. Moreover, the incidence of various image artifacts and extensive inter- and intra-category resemblances further increases the difficulty of chest disease recognition. This study aims to resolve these problems. We propose a deep learning (DL) approach to the detection of chest abnormalities in the X-ray modality using the EfficientDet (CXray-EffDet) model. More specifically, we employ the EfficientNet-B0-based EfficientDet-D0 model to compute a reliable set of sample features and accomplish the detection and classification task by categorizing eight categories of chest abnormalities in X-ray images. The effective feature computation power of the CXray-EffDet model enhances chest abnormality recognition due to its high recall rate, and it presents a lightweight and computationally robust approach. A large-scale test of the model on a standard database from the National Institutes of Health (NIH) was conducted to demonstrate the chest disease localization and categorization performance of the CXray-EffDet model. We attained an AUC score of 0.9080, along with an intersection over union (IoU) of 0.834, which clearly demonstrates the competence of the introduced model.
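The reported IoU of 0.834 is the standard intersection-over-union overlap between predicted and ground-truth boxes; a minimal reference implementation:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)          # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)           # intersection over union
```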

6.
Cancers (Basel); 15(9), 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37173974

ABSTRACT

Leukocytes, also referred to as white blood cells (WBCs), are a crucial component of the human immune system. Abnormal proliferation of leukocytes in the bone marrow leads to leukemia, a fatal blood cancer, and classification of the various subtypes of WBCs is an important step in its diagnosis. Automated classification of WBCs using deep convolutional neural networks can achieve a significant level of accuracy but suffers from high computational costs due to very large feature sets. Dimensionality reduction through intelligent feature selection is therefore essential to improve model performance at reduced computational complexity. This work proposes an improved pipeline for subtype classification of WBCs that relies on transfer learning for feature extraction using deep neural networks, followed by a wrapper feature selection approach based on a customized quantum-inspired evolutionary algorithm (QIEA). This algorithm, inspired by the principles of quantum physics, outperforms classical evolutionary algorithms in exploring the search space. The reduced feature vector obtained from the QIEA is then classified with multiple baseline classifiers. To validate the proposed methodology, a public dataset of 5000 images covering five subtypes of WBCs was used. The proposed system achieves a classification accuracy of about 99% with a 90% reduction in the size of the feature vector. The proposed feature selection method also shows better convergence than the classical genetic algorithm and performance comparable to several existing works.
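A hedged sketch of the quantum-inspired idea: each individual is a vector of qubit angles whose sin² gives the probability of keeping a feature; observed binary masks are scored, and a rotation gate nudges the angles toward the best mask found. The population size, rotation step, and fitness function are assumptions, not the paper's settings.

```python
import numpy as np

def qiea_select(fitness, n_features, pop=20, gens=50, delta=0.05 * np.pi):
    """Quantum-inspired EA: evolve qubit angles, observe binary feature masks."""
    rng = np.random.default_rng(0)
    theta = np.full((pop, n_features), np.pi / 4)      # equal superposition
    best_mask, best_fit = np.ones(n_features, bool), -np.inf
    for _ in range(gens):
        masks = rng.random((pop, n_features)) < np.sin(theta) ** 2  # observe
        for m in masks:
            f = fitness(m)
            if f > best_fit:
                best_fit, best_mask = f, m.copy()
        # rotation gate: move each qubit angle toward the best individual's bit
        theta += delta * np.where(best_mask, 1.0, -1.0)
        theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)
    return best_mask
```

Here fitness(mask) would be, for example, a baseline classifier's validation accuracy on the masked feature vector.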

7.
Front Psychol; 14: 1277741, 2023.
Article in English | MEDLINE | ID: mdl-38274692

ABSTRACT

We re-examined whether different time scales, such as week, day of week, and hour of day, are used independently during memory retrieval, as has previously been argued (i.e., independence of scales). To overcome the limitations of previous studies, we used experience-sampling technology to obtain test stimuli with higher ecological validity, and we used pointwise mutual information to directly calculate the degree of dependency between time scales in a formal way. Participants were provided with a smartphone equipped with an app that automatically collected time, images, GPS, audio, and accelerometry, and were asked to wear it around their neck for two weeks. After a one-week retention interval, participants were presented with an image captured during their data-collection phase and were tested on their memory of when the event happened (i.e., week, day of week, and hour). We find that, in contrast to previous arguments, memories of different time scales were not retrieved independently. Moreover, through rendering recurrence plots of the images that the participants collected, we provide evidence that the dependency may have originated from the repetitive events that the participants encountered in their daily lives.
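The dependency measure can be sketched directly: pointwise mutual information compares the joint probability of two scale values against the product of their marginals, and it is zero everywhere exactly when the scales are independent.

```python
import numpy as np

def pmi(joint_counts: np.ndarray) -> np.ndarray:
    """PMI for each cell of a joint count table (e.g., week x day-of-week)."""
    p_xy = joint_counts / joint_counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal over rows
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal over columns
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.log2(p_xy / (p_x * p_y))   # 0 everywhere iff independent
```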

8.
Front Oncol; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce mortality; in the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improving recognition accuracy with computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed in the initial step to increase the dataset size, and then two pretrained deep learning models, Xception and ShuffleNet, are fine-tuned and trained using deep transfer learning. Both models utilize the global average pooling layer for deep feature extraction. Analysis of this step showed that some important information was missing, so the two feature sets were fused. Because the fusion step increased computational time, we developed an improved Butterfly Optimization Algorithm that selects only the best features, which are then classified using machine learning classifiers. In addition, a Grad-CAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets, ISIC2018 and HAM10000, were used and obtained improved accuracies of 99.3% and 91.5%, respectively. Comparison with state-of-the-art methods shows that the proposed framework achieves improved accuracy with less computational time.
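A minimal PyTorch sketch of the Grad-CAM visualization mentioned above, assuming a generic CNN and a chosen convolutional layer; it is illustrative, not the authors' code.

```python
import torch

def grad_cam(model, conv_layer, image):
    """Heatmap = ReLU of activations weighted by pooled top-class gradients."""
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(image.unsqueeze(0)).max().backward()        # gradient of the top class score
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True) # pool grads over spatial dims
    cam = torch.relu((weights * acts[0]).sum(dim=1)).squeeze(0)
    return cam / (cam.max() + 1e-8)                   # normalized heatmap
```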

9.
Diagnostics (Basel); 13(7), 2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37046456

ABSTRACT

Breast cancer is one of the most frequent cancers in women; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died of the disease. An early diagnosis can help reduce the mortality rate. However, the manual diagnosis of this cancer using mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature, but they still face several challenges, such as similarities between cancer and non-cancer regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global, and the enhanced images are then employed for dataset augmentation. This step aims to increase the diversity of the dataset and improve the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-B0 is employed and fine-tuned by adding a few new layers. The fine-tuned model is trained separately on the original and enhanced images using deep transfer learning with static hyperparameter initialization. In the next step, deep features are extracted from the average pooling layer and fused using a new serial-based approach. The fused features are then optimized with a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features are finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively. A comparison with state-of-the-art (SOTA) techniques shows that the proposed framework improves accuracy, and a confidence-interval-based analysis shows that its results are consistent.
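Regula Falsi itself is the classical false-position root finder; a reference sketch of the rule the termination function is named after (its wiring into the Equilibrium-Jaya optimizer is not shown):

```python
def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f on [a, b] via the secant x-intercept (false position)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must bracket a root"
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc                  # root lies in [a, c]
        else:
            a, fa = c, fc                  # root lies in [c, b]
    return c
```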

10.
Diagnostics (Basel); 13(5), 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36900145

ABSTRACT

Diabetic retinopathy (DR) and diabetic macular edema (DME) are eye illnesses caused by diabetes that affect the blood vessels in the eyes, with the disease burden determined by the extent of the lesions. They are among the most common causes of visual impairment in the working population. Various factors have been found to play an important role in the development of this condition, with anxiety and long-term diabetes at the top of the list. If not detected early, the illness can result in permanent eyesight loss; the damage can be reduced or avoided if it is recognized ahead of time. Unfortunately, the slow and arduous nature of the diagnostic process makes it harder to assess the prevalence of this condition. Skilled doctors manually review digital color images to look for damage produced by vascular anomalies, the most common complication of diabetic retinopathy. Even though this procedure is reasonably accurate, it is quite costly. These delays highlight the need to automate diagnosis, which would have a considerable positive impact on the health sector. The use of AI in diagnosing the disease has yielded promising and dependable findings in recent years, which is the impetus for this publication. This article uses an ensemble convolutional neural network (ECNN) to diagnose DR and DME automatically, achieving 99% accuracy. This result was obtained through preprocessing, blood vessel segmentation, feature extraction, and classification, and the Harris hawks optimization (HHO) technique is presented for contrast enhancement. Finally, experiments were conducted on two datasets, IDRiD and Messidor, measuring accuracy, precision, recall, F-score, computational time, and error rate.
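One common way to realize an ensemble CNN is soft voting over the members' softmax outputs; a hedged sketch in which the member models and their predict interface are placeholders, not the paper's architecture:

```python
import numpy as np

def ensemble_predict(models, batch: np.ndarray) -> np.ndarray:
    """Average the softmax outputs of several CNNs, then take the argmax class."""
    probs = np.mean([m.predict(batch) for m in models], axis=0)  # soft voting
    return probs.argmax(axis=1)                                  # final class labels
```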

11.
Diagnostics (Basel); 13(1), 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36611387

ABSTRACT

The rapid growth of Internet technology and machine-learning devices has opened up new avenues for online healthcare systems. Sometimes, getting medical assistance or healthcare advice online is easier than getting it in person; for mild symptoms, people frequently feel reluctant to visit a hospital or doctor and instead post their questions on numerous healthcare forums. However, the predictions offered there may not always be accurate, there is no assurance that users will receive a reply to their posts, and some posts are fabricated, which can misdirect the patient. To address these issues, online automatic prediction (OAP) is proposed. OAP employs machine learning to predict the common attributes of a disease using a Never-Ending Image Learner with intelligent analysis of disease factors. The Never-Ending Image Learner predicts disease factors by selecting from a finite set of images with minimum structural risk and efficiently predicting real-time images via machine-learning-enabled M-theory. The proposed multi-access edge computing platform works with machine-learning-assisted automatic prediction from multiple images using multiple-instance learning. Using a machine-learning-based Never-Ending Image Learner, common disease attributes can be predicted online automatically. The method stores images at greater depth, with their data stored according to isotropic positioning. The proposed method was compared with existing approaches, such as multiple-instance learning for automated image indexing and hyper-spectral image classification. With machine learning over multiple images and the application of isotropic positioning, operating efficiency is improved and results are predicted with better accuracy. In this paper, machine-learning performance metrics for online automatic prediction tools are compiled and compared, and through this survey the proposed method is shown to achieve higher accuracy, proving its efficiency compared with existing methods.

12.
Front Public Health; 10: 1046296, 2022.
Article in English | MEDLINE | ID: mdl-36408000

ABSTRACT

The rapid global spread of the COVID-19 virus has caused millions of illnesses and deaths, with disastrous consequences for people's lives, public health, and the global economy. Clinical studies have revealed a link between the severity of COVID-19 cases and the amount of virus present in infected people's lungs. Imaging techniques such as computed tomography (CT) and chest X-rays (CXR) can detect COVID-19. Manual inspection of these images is a difficult process, so computerized techniques are widely used. Deep convolutional neural networks (DCNNs) are a type of machine learning frequently used in computer vision applications, particularly in medical imaging, to detect and classify infected regions, and they can assist medical personnel in identifying patients with COVID-19. In this article, a Bayesian-optimized DCNN and explainable-AI-based framework is proposed for the classification of COVID-19 from chest X-ray images. The proposed method starts with a multi-filter contrast enhancement technique that increases the visibility of the infected part. Two pre-trained deep models, namely EfficientNet-B0 and MobileNet-V2, are fine-tuned according to the target classes and then trained employing Bayesian optimization (BO); through BO, hyperparameters are selected rather than statically initialized. Features are extracted from the trained models and fused using a slicing-based serial fusion approach, and the fused features are classified using machine learning classifiers for the final classification. Moreover, visualization is performed using Grad-CAM, which highlights the infected part in the image. Three publicly available COVID-19 datasets were used for the experimental process, yielding improved accuracies of 98.8%, 97.9%, and 99.4%, respectively.
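The Bayesian hyperparameter selection can be sketched with Optuna; the search ranges and the train_and_validate stub are illustrative assumptions, not the paper's settings.

```python
import optuna

def train_and_validate(lr: float, batch_size: int) -> float:
    """Stand-in for fine-tuning EfficientNet-B0/MobileNet-V2 and returning
    validation accuracy; a toy surrogate is used here so the sketch runs."""
    return 1.0 - abs(lr - 1e-3) - 0.001 * (batch_size / 64)

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_validate(lr, batch_size)

study = optuna.create_study(direction="maximize")  # maximize validation accuracy
study.optimize(objective, n_trials=30)
print(study.best_params)                           # BO-selected hyperparameters
```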


Asunto(s)
COVID-19 , Aprendizaje Profundo , Humanos , Rayos X , COVID-19/diagnóstico por imagen , Teorema de Bayes , Redes Neurales de la Computación
13.
J Biotechnol; 105(1-2): 51-60, 2003 Oct 09.
Article in English | MEDLINE | ID: mdl-14511909

ABSTRACT

Angiogenesis, the formation of new blood vessels from pre-existing capillaries, occurs in a variety of pathophysiological conditions and is regulated by a balance of angiogenic activators and inhibitors. To identify novel angiogenic factors, we developed a gene screening method combining prediction analysis of transcription factor (TF) binding sites with chromosomal localization analysis. First, we analyzed the promoter sequences of known angiogenesis-related factors using the MATINSPECTOR program in the TRANSFAC database. Interestingly, we found that the binding site of the LMO2 complex is highly conserved in the promoter regions of these factors. Second, we analyzed chromosome loci based on the hypothesis that angiogenesis-related factors might be co-localized in specific chromosomal bands. Using AngioDB and LocusLink database mining, we found that angiogenesis-related factors are localized in 14 specific chromosomal bands, including 5q31 and 19q13. From these two approaches, we identified 32 novel candidates that have the LMO2 complex binding site in their promoter and are located in one of the 14 chromosomal bands. Among them, human recombinant troponin T and spectrin markedly inhibited neovascularization in vivo and in vitro. Collectively, we suggest that combining prediction analysis of TF binding sites with chromosomal localization analysis may be a useful strategy for screening angiogenesis-related genes.
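In essence, the screen intersects two gene sets; a toy sketch with placeholder gene names and only 2 of the 14 bands shown (all identifiers here are illustrative, not from the paper):

```python
# Genes whose promoters carry an LMO2-complex binding site, mapped to bands
# (hypothetical placeholders, not the paper's candidates).
lmo2_hits = {"GENE_A": "5q31", "GENE_B": "19q13", "GENE_C": "2p21"}
# Chromosomal bands enriched for angiogenesis-related factors (AngioDB mining).
angiogenic_bands = {"5q31", "19q13"}

candidates = [gene for gene, band in lmo2_hits.items() if band in angiogenic_bands]
print(candidates)   # genes passing both filters, e.g., ['GENE_A', 'GENE_B']
```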


Subject(s)
Angiogenesis Inducing Agents/antagonists & inhibitors, Binding Sites, Cells, Cultured, Chromosome Mapping/methods, Computational Biology, Computer Simulation, Databases, Nucleic Acid, Databases, Protein, Humans, Promoter Regions, Genetic, Spectrin/pharmacology, Transcription Factors/physiology, Troponin T/pharmacology