ABSTRACT
Knowledge of the interior and exterior camera orientation parameters is required to establish the relationship between 2D image content and 3D object data. Camera calibration determines the interior orientation parameters, which remain valid only as long as the camera is geometrically stable. However, information about the temporal stability of low-cost cameras, such as those in smartphones, under the physical impact of temperature changes is still missing. This study investigates, on the one hand, the influence of heat-dissipating smartphone components on the geometric integrity of the embedded cameras and, on the other hand, the impact of ambient temperature changes on the geometry of stand-alone low-cost cameras, using a Raspberry Pi camera module exposed to controlled changes in thermal radiation. If these effects are neglected, transferring image measurements into object space leads to erroneous results, because temperature and the camera's geometric stability are highly correlated. A Monte Carlo simulation of temperature-related variations of the interior orientation parameters is used to assess the extent of the potential errors in the 3D data, which range from a few millimetres up to five centimetres in the X and Y directions for a target positioned 10 m from the camera, with the Z-axis aligned with the camera's depth direction.
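The Monte Carlo propagation described above can be sketched with a simple pinhole model: perturb the interior orientation parameters with temperature-related noise and back-project an image measurement to the 10 m target distance. All numeric values below (focal length, principal point, perturbation magnitudes) are illustrative assumptions, not the study's calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of Monte Carlo samples

# Nominal interior orientation (illustrative values, not from the study)
f = 3600.0      # focal length [px]
cx = 0.0        # principal point offset [px]
Z = 10.0        # target distance [m], as in the abstract
x_img = 500.0   # an image measurement [px]

# Assumed temperature-related 1-sigma variations (hypothetical magnitudes)
sigma_f, sigma_cx = 5.0, 2.0  # [px]

f_mc = rng.normal(f, sigma_f, N)
cx_mc = rng.normal(cx, sigma_cx, N)

# Pinhole back-projection of the image point to object space at depth Z
X_mc = Z * (x_img - cx_mc) / f_mc

# Deviation from the nominal object-space position [m]
err = X_mc - Z * (x_img - cx) / f
print(f"std of X at 10 m: {err.std() * 1000:.1f} mm")
```

With these assumed magnitudes the spread stays in the millimetre range; larger parameter drifts scale the object-space error accordingly, toward the centimetre level reported in the abstract.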
ABSTRACT
As key components of the urban drainage system, storm drains and manholes are essential to the hydrological modelling of urban basins. Accurate mapping of these objects can help improve storm-drain systems for the prevention and mitigation of urban floods. Novel Deep Learning (DL) methods have been proposed to aid the mapping of these urban features. The main aim of this paper is to evaluate the state-of-the-art object detection method RetinaNet for identifying storm drains and manholes in street-level RGB images of urban areas. The experimental assessment was performed on 297 mobile mapping images captured in 2019 in the streets of six regions of Campo Grande, a city in Mato Grosso do Sul state, Brazil. Two configurations of training, validation, and test images were considered. ResNet-50 and ResNet-101 were adopted as the two distinct feature extractor networks (i.e., backbones) for the RetinaNet method, and the results were compared with the Faster R-CNN method. The results showed higher detection accuracy when using RetinaNet with ResNet-50. In conclusion, the assessed DL method is adequate for detecting storm drains and manholes in mobile mapping RGB images, outperforming the Faster R-CNN method. The labeled dataset used in this study is available for future research.
ABSTRACT
Detection and classification of tree species from remote sensing data have mainly been performed using multispectral and hyperspectral images and Light Detection And Ranging (LiDAR) data. Despite their comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. Moreover, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with high-spatial-resolution RGB imagery from an Unmanned Aerial Vehicle (UAV) for the detection of a legally protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset comprising 392 RGB images, captured from August 2018 to February 2019 over a forested urban area in midwest Brazil, was built to assess the selected methods. The target object is Dipteryx alata Vogel (Fabaceae), an important tree species threatened with extinction. The experimental analysis delivered an average precision of around 92% with processing times below 30 milliseconds.
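The abstract reports average precision (AP) as the evaluation metric. As an illustration of how AP is computed for a single class (not the authors' exact evaluation code), the sketch below uses all-point interpolation over the precision-recall curve; the toy detections at the end are invented for demonstration.

```python
import numpy as np

def average_precision(scores, is_match, n_gt):
    """All-point interpolated AP for one object class.

    scores   : confidence of each detection
    is_match : 1 if the detection matches a ground-truth object
               (e.g. IoU above a threshold), else 0
    n_gt     : number of ground-truth objects
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_match, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision monotonically decreasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([precision[0]], precision))
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

# Toy example: 4 detections, 3 ground-truth tree crowns
ap = average_precision([0.9, 0.8, 0.6, 0.3], [1, 1, 0, 1], n_gt=3)
print(f"AP = {ap:.4f}")  # -> AP = 0.9167
```

The same routine applied per species (here only one class) and averaged would give the mean average precision often reported alongside per-class AP.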