Results 1 - 4 of 4
1.
Front Neurorobot ; 18: 1398703, 2024.
Article in English | MEDLINE | ID: mdl-38831877

ABSTRACT

Introduction: In recent years, interest has grown in classifying scene images depicting diverse robotic environments. This surge can be attributed to significant improvements in visual sensor technology, which have enhanced image analysis capabilities. Methods: Advances in vision technology have a major impact on multiple object detection and scene understanding. These tasks are integral to a variety of technologies, including scene integration in augmented reality, robot navigation, autonomous driving systems, and tourist information applications. Despite significant strides in visual interpretation, numerous challenges persist, including semantic understanding, occlusion, orientation, scarce labeled data, uneven illumination (shadows and lighting), variation in direction and object size, and changing backgrounds. To overcome these challenges, we propose an innovative scene recognition framework that proved highly effective and yielded remarkable results. First, we preprocess the scene data using kernel convolution. Second, we perform semantic segmentation using UNet. Then, we extract features from the segmented data using the discrete wavelet transform (DWT), Sobel and Laplacian edge operators, and textural analysis (local binary patterns). To recognize objects, we use a deep belief network and then determine object-to-object relations. Finally, AlexNet assigns the relevant labels to the scene based on the objects recognized in the image. Results: The performance of the proposed system was validated using three standard datasets: PASCAL VOC-12, Cityscapes, and Caltech 101. The accuracy attained on the PASCAL VOC-12 dataset exceeds 96%, with a rate of 95.90% on the Cityscapes dataset. Discussion: Furthermore, the model demonstrates a commendable accuracy of 92.2% on the Caltech 101 dataset, a noteworthy advance beyond the capabilities of current models.
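
The feature-extraction stage described in this abstract maps onto standard library calls. The following is a minimal sketch, assuming a single grayscale segmented region as a NumPy array; the function name, wavelet choice (Haar), kernel sizes, and LBP parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the feature-extraction stage, assuming a single
# grayscale segmented region (uint8 NumPy array). Function name, wavelet
# choice, and kernel sizes are illustrative, not the authors' code.
import cv2
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def extract_region_features(region: np.ndarray) -> np.ndarray:
    """Concatenate DWT, edge, and LBP texture descriptors for one region."""
    # Discrete wavelet transform: keep the level-1 approximation band.
    cA, (cH, cV, cD) = pywt.dwt2(region.astype(np.float32), "haar")
    dwt_feat = cA.flatten()

    # First- and second-order edge responses (Sobel and Laplacian).
    sobel_x = cv2.Sobel(region, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(region, cv2.CV_64F, 0, 1, ksize=3)
    laplacian = cv2.Laplacian(region, cv2.CV_64F)
    edge_feat = np.array([np.abs(sobel_x).mean(),
                          np.abs(sobel_y).mean(),
                          laplacian.var()])

    # Texture: histogram of uniform local binary patterns (8 neighbors).
    lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([dwt_feat, edge_feat, lbp_hist])
```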

2.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793886

ABSTRACT

Human locomotion identification through smartphone sensors is a rapidly expanding area of research. The domain holds significant potential across various sectors, including healthcare, sports, security systems, home automation, and real-time location tracking. Despite the considerable volume of existing research, most of it has concentrated on locomotion activities; comparatively little emphasis has been placed on recognizing human localization patterns. In the current study, we introduce a system that recognizes both human physical and location-based activity patterns, utilizing the capabilities of smartphone sensors. Our goal is to develop a system that can accurately identify different physical and localization activities, such as walking, running, jumping, and indoor and outdoor activities. To achieve this, we preprocess the raw sensor data using a Butterworth filter for the inertial sensors and a median filter for the Global Positioning System (GPS) data, and then apply Hamming windowing to segment the filtered data. We then extract features from the inertial and GPS signals and select relevant features using the variance-threshold feature selection method. The Extrasensory dataset contains an imbalanced number of samples for certain activities; to address this, a permutation-based data augmentation technique is employed. The augmented features are transformed using the Yeo-Johnson power transformation before being sent to a multi-layer perceptron for classification. We evaluate the system using k-fold cross-validation. The datasets used in this study, Extrasensory and Sussex-Huawei Locomotion (SHL), contain both physical and localization activities. Our experiments demonstrate that the system achieves high accuracy: 96% and 94% on Extrasensory and SHL, respectively, for physical activities, and 94% and 91% for location-based activities, outperforming previous state-of-the-art methods in recognizing both types of activities.
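
The signal-conditioning chain described here (Butterworth for IMU, median filter for GPS, Hamming windows, variance-threshold selection, Yeo-Johnson transform, MLP) can be sketched with standard scientific-Python components. This is a minimal illustration under assumed parameters (50 Hz sampling, 5 Hz cutoff, 128-sample windows, a 1-D GPS channel such as speed); none of these values are taken from the paper.

```python
# Minimal sketch of the preprocessing chain, assuming 50 Hz inertial data
# and a 1-D GPS-derived channel (e.g., speed). Cutoff frequency, window
# length, and classifier sizes are assumptions, not the paper's values.
import numpy as np
from scipy.signal import butter, filtfilt, medfilt
from sklearn.feature_selection import VarianceThreshold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import PowerTransformer

def filter_signals(imu, gps_speed, fs=50.0):
    # Butterworth low-pass filter (order 4, 5 Hz cutoff) for IMU channels.
    b, a = butter(N=4, Wn=5.0 / (fs / 2), btype="low")
    imu_filtered = filtfilt(b, a, imu, axis=0)
    # Median filter to suppress GPS outliers.
    gps_filtered = medfilt(gps_speed, kernel_size=5)
    return imu_filtered, gps_filtered

def hamming_segments(signal, win=128):
    """Split a 1-D signal into non-overlapping Hamming-weighted windows."""
    window = np.hamming(win)
    n = len(signal) // win
    return np.stack([signal[i * win:(i + 1) * win] * window
                     for i in range(n)])

# Variance-threshold selection, Yeo-Johnson normalization, and an MLP
# classifier, chained in the order the abstract describes.
selector = VarianceThreshold(threshold=0.01)
transformer = PowerTransformer(method="yeo-johnson")
classifier = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
```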


Subject(s)
Algorithms, Biosensing Techniques, Geographic Information Systems, Wearable Electronic Devices, Humans, Biosensing Techniques/methods, Locomotion/physiology, Smartphone, Walking/physiology, Internet of Things
3.
Front Physiol ; 15: 1344887, 2024.
Article in English | MEDLINE | ID: mdl-38449788

ABSTRACT

Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly inertial measurement units (IMUs) and ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, ambient, GPS, and audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects performing a range of daily activities, captured by various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of samples from smartphone and smartwatch sensors labeled with a wide array of human activities. Our study incorporates novel feature-extraction techniques for signal, GPS, and audio sensor data. Specifically, GPS, audio, and IMU sensors are utilized for localization, while IMU and ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, have been explored: CNNs are applied to indoor/outdoor activities, while LSTMs handle locomotion activity recognition. The proposed system has been evaluated using k-fold cross-validation, achieving accuracy rates of 97% and 89% for locomotion activity on the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity on the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities and show its potential for real-world applications. Moreover, the paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.
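
Of the two classifier branches named above, the LSTM branch for locomotion recognition lends itself to a short sketch. The following is a minimal PyTorch illustration, assuming 128-timestep windows with six IMU channels; the class name and all layer sizes are assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the LSTM branch for locomotion recognition, assuming
# 128-timestep windows with six IMU channels; the class name and all layer
# sizes are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=64,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, timesteps, channels); classify from the final
        # hidden state of the top LSTM layer.
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])

model = LocomotionLSTM()
logits = model(torch.randn(8, 128, 6))  # a batch of 8 sensor windows
```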

4.
Biomimetics (Basel) ; 8(7)2023 Nov 19.
Article in English | MEDLINE | ID: mdl-37999195

ABSTRACT

Cognitive assessment plays a vital role in clinical care and in research on cognitive aging and cognitive health. Researchers have lately worked toward solutions for measuring individual cognitive health; however, these solutions remain difficult to apply in the real world, so using deep neural networks to evaluate cognitive health has become a hot research topic. Deep learning and human activity recognition are two domains that have received attention in the past few years: the former for its excellent performance and recent achievements in fields such as speech and image recognition, and the latter for its relevance in applications like health monitoring and ambient assisted living. This research develops a novel Symbiotic Organism Search with Deep Convolutional Neural Network-based Human Activity Recognition (SOSDCNN-HAR) model for cognitive health assessment. The goal of the SOSDCNN-HAR model is to recognize human activities in an end-to-end way. For noise elimination, the model applies Wiener filtering (WF). It then uses a RetinaNet-based feature extractor for automated feature extraction, and the SOS procedure serves as a hyperparameter optimizer to enhance recognition efficiency. Finally, a gated recurrent unit (GRU) network is employed as the classifier to assign class labels. The performance of the SOSDCNN-HAR model was validated on a set of benchmark datasets. Extensive experiments demonstrate the superiority of the SOSDCNN-HAR model over current approaches, with precision of 86.51% and 89.50% on the Penn Action and NW-UCLA datasets, respectively.
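
Two of the named stages, Wiener-filter denoising and the GRU classification head, can be sketched briefly. The following is a minimal illustration; the RetinaNet backbone and SOS optimizer are omitted, and the feature dimension, hidden size, and class count are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of two named stages: Wiener-filter denoising and the GRU
# classification head. The RetinaNet backbone and SOS hyperparameter
# optimizer are omitted; feature dimension, hidden size, and class count
# are illustrative assumptions.
import torch
import torch.nn as nn
from scipy.signal import wiener

def denoise_frame(frame):
    # Wiener filtering for noise elimination, per the preprocessing step.
    return wiener(frame, mysize=5)

class GRUClassifier(nn.Module):
    def __init__(self, feat_dim: int = 256, n_classes: int = 10):
        super().__init__()
        self.gru = nn.GRU(input_size=feat_dim, hidden_size=128,
                          batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, feat_dim) from the feature extractor.
        _, h_n = self.gru(feats)
        return self.head(h_n[-1])
```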
