ABSTRACT
Today, hyperspectral imaging plays an integral role in remote sensing and precision agriculture. Identifying matching key points between hyperspectral images is an important step in tasks such as image registration, localization, object recognition, and object tracking. Low-pixel-resolution hyperspectral imaging is a recent introduction to the field, bringing benefits such as lower cost and a smaller form factor compared with traditional systems. However, the limited pixel resolution challenges even state-of-the-art feature detection and matching methods, making it difficult to generate robust feature matches for images with repeated textures, low texture, low sharpness, and low contrast. Moreover, the narrower optics of these cameras add further challenges during the feature-matching stage, particularly for images captured during low-altitude flight missions. To enhance the robustness of feature detection and matching in low-pixel-resolution images, in this study we propose a novel approach utilizing 3D Convolution-based Siamese networks. Unlike state-of-the-art methods, this approach takes advantage of all the spectral information available in hyperspectral imaging to filter out incorrect matches and produce a robust set of matches. The proposed method first generates feature matches through a combination of Phase Stretch Transform-based edge detection and SIFT features. Subsequently, a 3D Convolution-based Siamese network filters out inaccurate matches, producing a highly accurate set of feature matches. Evaluation of the proposed method demonstrates its superiority over state-of-the-art approaches in cases where they fail to produce feature matches, and it competes effectively with the other evaluated methods when generating feature matches in low-pixel-resolution hyperspectral images.
This research contributes to the advancement of low-pixel-resolution hyperspectral imaging techniques, and we believe it can specifically aid mosaic generation from low-pixel-resolution hyperspectral images.
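The match-filtering idea described above — keeping only feature matches whose pixels have consistent spectra in both hyperspectral images — can be sketched with a cosine-similarity stand-in for the learned 3D Convolution-based Siamese score (a minimal illustration; the function name, the threshold, and the use of cosine similarity are assumptions, not the paper's implementation):

```python
import numpy as np

def filter_matches_by_spectral_similarity(spectra_a, spectra_b, threshold=0.95):
    """Keep match indices whose spectral signatures agree across images.

    spectra_a, spectra_b: (n_matches, n_bands) arrays holding the spectrum
    sampled at each matched keypoint in the two images. A trained Siamese
    network would replace the cosine similarity used here.
    """
    a = spectra_a / np.linalg.norm(spectra_a, axis=1, keepdims=True)
    b = spectra_b / np.linalg.norm(spectra_b, axis=1, keepdims=True)
    similarity = np.sum(a * b, axis=1)           # cosine similarity per pair
    return np.where(similarity >= threshold)[0]  # indices of matches kept
```

A pair with nearly identical spectra passes the threshold, while a pair whose spectra diverge is discarded as a likely false match.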
ABSTRACT
There are many visually impaired people globally, and it is important to support their ability to walk independently. Acoustic signals and escort zones have been installed at pedestrian crossings so that visually impaired people can cross safely; however, pedestrian accidents, including those involving the visually impaired, continue to occur. Therefore, to enable the visually impaired to walk safely at pedestrian crossings, we present an automatic pedestrian crossing sensing method that uses images from cameras worn by the user. Because the white rectangular stripes that mark pedestrian crossings are aligned, we focused on the edges of these stripes and propose a novel pedestrian crossing sensing method based on the dispersion of line slopes in Hough space. Our method effectively handles challenging scenarios that traditional methods struggle with: it detects crosswalks even in low-light nighttime conditions with varying illumination, and even when parts of the crossing are partially obscured by objects or obstructions. By minimizing computational cost, our method achieves high real-time performance, ensuring efficient and timely crosswalk detection in real-world environments. Specifically, the proposed method achieves an accuracy of 98.47%, and the algorithm runs at near-real-time speed (approximately 10.5 fps) on a Jetson Nano single-board computer, demonstrating its suitability for a wearable device.
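The core cue described above — crosswalk stripe edges are near-parallel, so the slopes of lines found by a Hough transform cluster tightly — can be sketched as follows (a toy illustration operating on precomputed edge-line angles; the dispersion threshold is an assumption, not the study's value):

```python
import numpy as np

def is_crosswalk(edge_angles_deg, dispersion_thresh=5.0):
    """Decide crosswalk presence from the dispersion of edge-line slopes.

    edge_angles_deg: angles (theta, in degrees) of straight lines detected
    by a Hough transform on the image edges. The parallel edges of crosswalk
    stripes give a small standard deviation; general clutter gives a large one.
    """
    angles = np.asarray(edge_angles_deg, dtype=float)
    return float(np.std(angles)) < dispersion_thresh
```

Operating on a handful of angles per frame rather than dense pixel data is what keeps the computational cost low enough for near-real-time execution on embedded hardware.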
Subjects
Pedestrians, Visually Impaired Persons, Humans, Traffic Accidents, Safety, Algorithms, Walking
ABSTRACT
It is crucial for an autonomous vehicle to predict cyclist behavior before decision-making. When a cyclist is on real traffic roads, his or her body orientation indicates the current direction of movement, and his or her head orientation indicates the intention to check the road situation before the next movement. Therefore, estimating the orientation of a cyclist's body and head is an important factor in cyclist behavior prediction for autonomous driving. This research proposes to estimate cyclist orientation, including both body and head orientation, using a deep neural network with data from a Light Detection and Ranging (LiDAR) sensor. Two different methods are proposed for cyclist orientation estimation. The first method uses 2D images to represent the reflectivity, ambient, and range information collected by the LiDAR sensor, while the second method uses the 3D point cloud data directly. Both methods adopt ResNet50, a 50-layer convolutional neural network, for orientation classification, and their performances are compared to determine the most effective use of LiDAR sensor data for cyclist orientation estimation. We developed a cyclist dataset that includes multiple cyclists with different body and head orientations. The experimental results showed that the model using 3D point cloud data outperforms the model using 2D images for cyclist orientation estimation. Moreover, within the 3D point cloud-based method, using reflectivity information yields more accurate estimates than using ambient information.
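The 2D-image representation of LiDAR data mentioned above can be illustrated by a spherical projection that bins each 3D point into a range image (a simplified sketch; the resolution, field-of-view values, and function name are assumptions, and collision handling between points is ignored):

```python
import numpy as np

def points_to_range_image(points, h=32, w=64, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project (x, y, z) LiDAR points into an h x w range image.

    The same binning could fill reflectivity or ambient channels instead of
    range, matching the three channels described in the abstract.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x * x + y * y + z * z)            # distance to sensor
    yaw = np.arctan2(y, x)                        # horizontal angle
    pitch = np.arcsin(z / r)                      # vertical angle
    fov = np.radians(fov_up_deg - fov_down_deg)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((np.radians(fov_up_deg) - pitch) / fov * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w), dtype=float)
    img[v, u] = r                                 # last point wins per pixel
    return img
```

A 2D image built this way can be fed to a standard image classifier such as ResNet50, whereas the point cloud route consumes the raw coordinates without this lossy discretization.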
ABSTRACT
Numerous studies have demonstrated the calming and stress-reducing effects that visiting aquatic environments has on humans. As a result, many institutions have used fish to provide entertainment and treat patients. The most common issue in this approach is controlling the movement of the fish to facilitate human interaction. This study proposes an interactive robotic fish that alters fish swarm behaviors by performing an effective, unobtrusive, yet purposeful, defined set of actions to enhance human interaction. The approach combines a minimalistic yet futuristic physical design incorporating cameras and infrared (IR) sensors with algorithms for fish detection and swarm pattern recognition. The fish-detection algorithm, implemented using background subtraction and moving-average algorithms, achieved an accuracy of 78%, while swarm pattern detection, implemented with a Convolutional Neural Network (CNN), achieved a 77.32% accuracy rate. We evaluated through repeated trials how effectively the smooth movements of the robotic fish controlled the behavior and swimming patterns of the fish. Feedback from a randomly selected, unbiased group of subjects indicated that the robotic fish improved human interaction with the fish through the proposed set of maneuvers and behaviors.
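The fish-detection stage described above — background subtraction against a moving-average background model — can be sketched like this (a minimal illustration; the class name, update rate, and threshold are assumptions, not the values used in the study):

```python
import numpy as np

class FishDetector:
    """Foreground (fish) mask via background subtraction with a
    moving-average background model."""

    def __init__(self, alpha=0.05, thresh=30.0):
        self.alpha = alpha       # moving-average update rate
        self.thresh = thresh     # per-pixel foreground threshold
        self.background = None

    def update(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()   # bootstrap from the first frame
        mask = np.abs(frame - self.background) > self.thresh
        # blend the new frame into the running background estimate
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask
```

The slow blending lets the model absorb gradual lighting changes in the tank while fast-moving fish still stand out against the background.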
Subjects
Robotic Surgical Procedures, Robotics, Animals, Humans, Robotics/methods, Artificial Intelligence, Algorithms, Neural Networks (Computer), Fishes
ABSTRACT
Outdoor recreation has become popular in recent years, against the backdrop of the coronavirus pandemic that started in 2020. Mountaineering, in particular, has become a popular pastime as an easy way to experience nature. However, the number of mountaineering accidents is increasing, owing to beginners' inadequate knowledge and equipment. In particular, a lack of map-reading skills and experience often leads to the selection of wrong trails. Smartphones obtain precise location information by combining GPS with correction data received over radio from base stations, so positioning accuracy using GPS alone in mountainous areas without radio coverage is questionable. The GPS position correction methods in the literature for such situations generally involve complex processing of the GPS radio waves, and some require complex hardware that is difficult to implement in portable form. In this study, we develop and demonstrate a method for obtaining accurate location information from GPS alone, without radio-based error correction, even in mountainous areas. Multipath is the cause of most GPS errors in the mountains; however, depending on the location, correct GPS fixes can still be received there. In the proposed method, the correct GPS data are used to detect the incorrect GPS fixes. We present an experimental method for estimating the interrelationship between the GPS longitude and latitude data, and we demonstrate the effectiveness of our method by showing that the mountain location data obtained in our experiments are more accurate than the raw GPS data alone.
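The idea of using the correct fixes to flag multipath-corrupted ones can be illustrated with a median-based outlier test over a sliding window of the track (a simplified stand-in for the paper's longitude-latitude interrelationship model; the function name, window size, and deviation threshold are assumptions):

```python
import numpy as np

def reject_gps_outliers(lats, lons, window=5, max_dev=1e-4):
    """Flag GPS fixes that jump away from the local median track.

    Multipath reflections produce sudden position spikes, while the
    majority of nearby fixes trace the true path; a fix far from the
    windowed median in degrees is marked as incorrect.
    """
    lats, lons = np.asarray(lats, dtype=float), np.asarray(lons, dtype=float)
    keep = np.ones(len(lats), dtype=bool)
    for i in range(len(lats)):
        lo, hi = max(0, i - window), min(len(lats), i + window + 1)
        dlat = lats[i] - np.median(lats[lo:hi])
        dlon = lons[i] - np.median(lons[lo:hi])
        if np.hypot(dlat, dlon) > max_dev:
            keep[i] = False   # likely a multipath spike
    return keep
```

Because the median ignores a minority of spiked values, the surrounding correct fixes dominate each window, which is the same principle as using the correct GPS data to detect the incorrect ones.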
Subjects
Coronavirus Infections, Mountaineering, Data Collection, Humans, Radio Waves, Smartphone
ABSTRACT
Several robot-related studies have been conducted in recent years; however, studies on the autonomous travel of small mobile robots in confined spaces are lacking. In this study, we develop autonomous travel for small robots that must traverse and cover an entire smooth surface, such as those employed for cleaning tables or solar panels. We consider a surface containing obstacles and propose a spiral-motion method for covering it. To achieve the spiral motion, we focus on autonomous obstacle avoidance, return to the original path, and fall prevention while the robot traverses the surface. An important feature of this study is achieving regular travel with a robot that has no encoder: the traveled distance is estimated from the travel time. We achieved the spiral motion by analyzing data from multiple small sensors installed on the robot and introducing a new attitude-control method, and we ensured that the robot returned to its original spiral path autonomously after avoiding obstacles, without falling over the edge of the surface.
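The spiral coverage pattern can be sketched as an inward rectangular spiral of waypoints; with no encoder, each straight segment would be timed at a known cruise speed, since distance = speed x time (a hypothetical illustration, not the study's controller; obstacle avoidance and fall prevention are omitted):

```python
def spiral_waypoints(width, height, step):
    """Waypoints of an inward rectangular spiral covering a width x height
    surface, starting from the corner (0, 0). An encoder-less robot can
    time each segment as (segment length / cruise speed)."""
    x0, y0, x1, y1 = 0.0, 0.0, float(width), float(height)
    pts = [(x0, y0)]
    while x1 - x0 > 0 and y1 - y0 > 0:
        pts.append((x1, y0))             # along the bottom edge of the ring
        pts.append((x1, y1))             # up the right edge
        pts.append((x0 + step, y1))      # back along the top, inset one step
        x0, y0, x1, y1 = x0 + step, y0 + step, x1 - step, y1 - step
        if x1 - x0 > 0 and y1 - y0 > 0:
            pts.append((x0, y0))         # down to start the inner ring
    return pts
```

Shrinking the bounds by one step per ring keeps every waypoint inside the surface, which mirrors the edge-based fall prevention described above.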