ABSTRACT
The paper is a continuation of the authors' work on infrared navigation aids for blind people and mobile robots. It concerns the detection of obstacles in the path of a person or mobile robot, in particular the detection of corners. The temperature distribution on a building's internal wall near a corner is investigated. Owing to the geometry, more heat is transferred by conduction near a corner, so that inside the building the wall temperature decreases towards the corner. The problem is investigated theoretically and numerically, and the results are confirmed by experimental measurements. The purpose of this research is to help blind people by equipping them with a small infrared camera that warns them when they are approaching a corner inside a building; the same aim applies to mobile robots.
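The corner effect summarized above can be reproduced with a simple numerical model. The sketch below is not the authors' model: it assumes steady two-dimensional conduction in the plan-view cross-section of a wall corner, a fixed exterior surface temperature, and a convective (Robin) condition on the indoor surface; the geometry, material properties, and coefficients are illustrative assumptions. It prints the indoor surface temperature, which dips towards the corner.

# Minimal finite-difference sketch (not the authors' model): steady 2-D heat
# conduction in the plan-view cross-section of a wall corner. The exterior
# surface is held cold, the indoor surface exchanges heat with room air through
# a convective (Robin) condition, and the indoor surface temperature dips
# towards the corner. Geometry, material values and coefficients are assumed.
import numpy as np

k = 0.8                      # wall thermal conductivity, W/(m K)  (assumed)
h = 8.0                      # indoor convective coefficient, W/(m^2 K)  (assumed)
T_room, T_ext = 20.0, 0.0    # indoor air / exterior surface temperatures, deg C
t, L, dx = 0.30, 1.50, 0.05  # wall thickness, modelled wall length, grid step, m

n = int(round(L / dx)) + 1   # grid nodes along each wall
m = int(round(t / dx)) + 1   # grid nodes across the wall thickness
T = np.full((n, n), 10.0)    # initial guess; only the L-shaped wall region is used

# Nodes strictly inside the wall material, where the Laplace equation holds
interior = np.zeros((n, n), dtype=bool)
interior[1:m - 1, 1:n - 1] = True   # wall running along x
interior[1:n - 1, 1:m - 1] = True   # wall running along y

a = h * dx / k               # dimensionless ratio used by the Robin condition
for _ in range(5000):        # Jacobi iterations
    Tn = T.copy()
    Tn[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])
    T[interior] = Tn[interior]
    # indoor surface: energy balance  h (T_room - Ts) = k (Ts - T_inside) / dx
    T[m - 1, m:n - 1] = (a * T_room + T[m - 2, m:n - 1]) / (1 + a)
    T[m:n - 1, m - 1] = (a * T_room + T[m:n - 1, m - 2]) / (1 + a)
    T[m - 1, m - 1] = (a * T_room + 0.5 * (T[m - 2, m - 1] + T[m - 1, m - 2])) / (1 + a)
    # far ends of the modelled walls: adiabatic (zero normal gradient)
    T[:m, n - 1] = T[:m, n - 2]
    T[n - 1, :m] = T[n - 2, :m]
    # exterior surfaces: fixed temperature
    T[0, :] = T_ext
    T[:, 0] = T_ext

# Indoor surface temperature along one wall, starting at the inner corner
surface = T[m - 1, m - 1:]
for i in range(0, surface.size, 4):
    print(f"{i * dx:4.2f} m from corner: {surface[i]:5.2f} deg C")

With these assumed values the indoor surface settles near 15 deg C far from the corner and several degrees cooler at the corner itself, which is the kind of contrast a small infrared camera can detect.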
ABSTRACT
The use of mobile robots (MRs) has expanded dramatically in the last several years across a wide range of industries, including manufacturing, surveillance, healthcare, and warehouse automation. To ensure the efficient and safe operation of these MRs, it is crucial to design effective control strategies that can adapt to changing environments. In this paper, we propose a new technique for controlling MRs using reinforcement learning (RL). Our approach involves generating a mathematical model and then training a neural network (NN) to learn a policy for robot control using RL. The policy is learned through trial and error, with the MR exploring the environment and receiving rewards based on its actions. The rewards are designed to encourage the robot to move towards its goal while avoiding obstacles. In this work, a deep Q-learning (QL) agent is used to enable the robot to autonomously learn to avoid collisions with obstacles and to enhance its navigation abilities in an unknown environment. When the MR operates independently in an unfamiliar area, an RL model is used to identify the target location, and a deep Q-network (DQN) is used to navigate to it. We evaluate our approach in simulation using the epsilon-greedy exploration algorithm. The results show that our approach outperforms traditional MR control strategies in terms of both efficiency and safety.
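The exploration and reward design described above can be illustrated with a compact sketch. The code below is not the paper's implementation: it runs tabular Q-learning with epsilon-greedy exploration on a toy grid world where the robot is rewarded for reaching a goal cell and penalised for hitting obstacles; the paper replaces the Q-table with a deep Q-network, and the grid layout, rewards, and hyperparameters here are illustrative assumptions.

# Minimal sketch (not the paper's implementation): tabular Q-learning with
# epsilon-greedy exploration on a toy grid world. A Q-table stands in for the
# deep Q-network so the reward shaping and exploration loop stay easy to follow.
import numpy as np

rng = np.random.default_rng(0)
H, W = 5, 5                              # grid size (assumed)
obstacles = {(1, 2), (2, 2), (3, 2)}     # cells the robot must avoid (assumed)
goal, start = (4, 4), (0, 0)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((H * W, len(actions)))      # state-action values
alpha, gamma, eps = 0.1, 0.95, 0.2       # learning rate, discount, exploration rate

def step(state, a):
    """Apply one action; return (next state, reward, done)."""
    r, c = divmod(state, W)
    dr, dc = actions[a]
    nr, nc = min(max(r + dr, 0), H - 1), min(max(c + dc, 0), W - 1)
    if (nr, nc) in obstacles:
        return state, -10.0, False       # collision penalty, robot stays put
    if (nr, nc) == goal:
        return nr * W + nc, 10.0, True   # goal reward
    return nr * W + nc, -1.0, False      # small step cost encourages short paths

for episode in range(500):
    s, done = start[0] * W + start[1], False
    for _ in range(200):                 # cap episode length
        # epsilon-greedy: explore with probability eps, otherwise act greedily
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, reward, done = step(s, a)
        # Q-learning update; a DQN replaces this table with a neural network
        target = reward + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print("Greedy action per cell (0=up, 1=down, 2=left, 3=right):")
print(np.argmax(Q, axis=1).reshape(H, W))

The negative step cost is what pushes the learned policy towards short paths, while the large collision penalty implements the obstacle-avoidance part of the reward design mentioned in the abstract.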
ABSTRACT
The convolutional neural network (CNN) is an important and widely used type of artificial neural network (ANN) for computer vision, applied mostly in pattern recognition systems. The most important applications of CNNs are medical image analysis, image classification, object recognition in videos, recommender systems, financial time series analysis, natural language processing, and human-computer interfaces. With the advancement of computing power, the availability of huge quantities of labeled data, and improved algorithms, CNNs are now used in almost every area of study. One of the main uses of wearable technology and CNNs in medical surveillance is human activity recognition (HAR), which requires constant tracking of everyday activities. This paper provides a comprehensive study of the application of CNNs to the classification of HAR tasks. We describe their evolution, from their antecedents to the current state-of-the-art deep learning (DL) systems. We provide a comprehensive account of the working principles of CNNs for HAR tasks, and a CNN-based model is presented to classify human activities. The proposed technique interprets sequences of sensor inputs using a multi-layered CNN that captures the temporal and spatial information related to human activities. The publicly available WISDM dataset for HAR is used in this study. The proposed approach builds a two-dimensional CNN model for the classification of different human activities, implemented in a recent version of Python. The accuracy achieved for HAR with the proposed model in this experiment is 97.20%, which is better than previously reported state-of-the-art techniques. The findings of the study imply that using DL methods for activity recognition can greatly increase accuracy and broaden the range of applications where HAR can be used successfully. We also describe future research trends in the field of HAR in this article.
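As a concrete illustration of the classification pipeline, the sketch below (not the authors' exact architecture) builds a two-dimensional CNN in Keras that maps fixed-length windows of tri-axial WISDM accelerometer data to the six activity classes; the window length, filter sizes, layer counts, and dropout rate are illustrative assumptions.

# Minimal Keras sketch (not the authors' exact architecture): a 2-D CNN that
# classifies fixed-length windows of tri-axial accelerometer data into the six
# WISDM activities. Window length (90 samples, about 4.5 s at 20 Hz), filter
# counts and dropout rate are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, AXES, CLASSES = 90, 3, 6    # samples per window, accelerometer axes, activities

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, AXES, 1)),          # each window as a 90x3 "image"
    layers.Conv2D(32, (5, 1), activation="relu"),   # temporal filters per axis
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Conv2D(64, (5, 3), activation="relu"),   # mixes the three axes
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use windows segmented from the raw WISDM stream, e.g.:
# model.fit(x_train, y_train, validation_split=0.2, epochs=30, batch_size=64)

Treating each window as a 90 x 3 "image" lets the first convolution learn temporal filters per axis and the second convolution combine the three axes, which is one common way to capture the temporal and spatial information mentioned above.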