ABSTRACT
Developments in artificial intelligence have enabled great strides in automatic semantic segmentation, in both the 2D (image) and 3D domains. Within the context of 3D recording technology, it has also seen application in several areas, most notably in creating semantically rich point clouds, a task that is usually performed manually. In this paper, we propose the introduction of deep learning-based semantic image segmentation into the photogrammetric 3D reconstruction and classification workflow. The main objective is to introduce semantic classification at the beginning of the classical photogrammetric workflow in order to automatically create classified dense point clouds by the end of that workflow. To this end, automatic image masking based on pre-determined classes was performed using a previously trained neural network. The image masks were then employed during dense image matching in order to constrain the process to the respective classes, thus automatically creating semantically classified point clouds as the final output. Results show that the developed method is promising, with automation of the whole process feasible from input (images) to output (labelled point clouds). Quantitative assessment gave good results for specific classes, e.g., building facades and windows, with IoU scores of 0.79 and 0.77, respectively.
Subjects
Artificial Intelligence, Semantics, Neural Networks (Computer), Photogrammetry, Workflow
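As an illustration of the quantitative assessment reported above, the sketch below computes the per-class intersection-over-union (IoU) between a predicted and a reference label mask, the metric behind the 0.79 (facade) and 0.77 (window) scores. The class labels and toy arrays are hypothetical, not taken from the paper's data.

import numpy as np

def class_iou(pred, truth, class_id):
    """Intersection-over-union for one class between two label masks."""
    p = (pred == class_id)
    t = (truth == class_id)
    union = np.logical_or(p, t).sum()
    if union == 0:
        return float('nan')  # class absent from both masks
    return np.logical_and(p, t).sum() / union

# toy example: 0 = background, 1 = facade, 2 = window
pred  = np.array([[1, 1, 2], [1, 2, 2], [0, 0, 2]])
truth = np.array([[1, 1, 2], [1, 1, 2], [0, 0, 2]])
print(class_iou(pred, truth, 1))  # facade IoU
print(class_iou(pred, truth, 2))  # window IoU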
ABSTRACT

This paper presents LocSpeck, a collaborative and distributed indoor positioning system for dynamic nodes connected through an ad-hoc network, based on inter-node relative range measurements and Wi-Fi fingerprinting. The proposed system operates using peer-to-peer range measurements and needs neither fixed ultra-wideband (UWB) anchors nor a predefined network topology. The nodes can be asymmetric in terms of the available onboard sensors, the computational resources, and the power capacity. This asymmetry adversely affects the positioning performance of the weaker nodes. Collaboration between different nodes is achieved through a distributed estimator, without the need for a single centralized computing element. The ranging component of the system is based on the DW1000 UWB transceiver chip from Decawave, which is attached to a set of smartphones equipped with asymmetric sensors. The distributed positioning filter fuses, locally on each node, the relative range measurements, the readings from the internal sensors, and the Wi-Fi received signal strength indicator (RSSI) readings to obtain an estimate of the position of each node. The described system does not depend on fixed UWB anchors and supports online addition and removal of nodes as well as dynamic node role assignment, either as an anchor or as a rover. The performance of the system is evaluated in real-world test scenarios using a set of four smartphones navigating an indoor environment on foot, and is compared to that of a commercial UWB-based system. The results presented in this paper show that mobile nodes that are weak in terms of available positioning sensors can benefit from collaboration with other nearby nodes.
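The abstract does not give the filter equations; the following is a minimal sketch, assuming a standard extended Kalman filter update of a node's 2-D position with a single peer-to-peer UWB range, the kind of local fusion step each node's distributed estimator would perform. The noise level sigma_r and all numbers are illustrative.

import numpy as np

def range_update(x, P, peer_pos, z, sigma_r=0.1):
    """EKF update of a node's 2-D position x (covariance P) with one
    peer-to-peer UWB range measurement z to a peer at peer_pos."""
    d = x - peer_pos
    r_pred = np.linalg.norm(d)              # predicted range
    H = (d / r_pred).reshape(1, 2)          # Jacobian of the range model
    S = H @ P @ H.T + sigma_r**2            # innovation covariance (scalar)
    K = P @ H.T / S                         # Kalman gain
    x_new = x + (K * (z - r_pred)).ravel()  # state update
    P_new = (np.eye(2) - K @ H) @ P         # covariance update
    return x_new, P_new

x = np.array([0.0, 0.0]); P = np.eye(2)
x, P = range_update(x, P, peer_pos=np.array([3.0, 4.0]), z=4.8)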
ABSTRACT
Cooperative positioning (CP) utilises information sharing among multiple nodes to enable positioning in Global Navigation Satellite System (GNSS)-denied environments. This paper reports the performance of a CP system for pedestrians using Ultra-Wide Band (UWB) technology in GNSS-denied environments. This data set was collected as part of a benchmarking measurement campaign carried out at the Ohio State University in October 2017. Pedestrians were equipped with a variety of sensors, including two different UWB systems, on a specially designed helmet serving as a mobile multi-sensor platform for CP. Different users were walking in stop-and-go mode along trajectories with predefined checkpoints and under various challenging environments. In the developed CP network, both Peer-to-Infrastructure (P2I) and Peer-to-Peer (P2P) measurements are used for positioning of the pedestrians. It is shown that the proposed system can achieve decimetre-level accuracies (on average, around 20 cm) in the complete absence of GNSS signals, provided that the measurements from infrastructure nodes are available and the network geometry is good. In the absence of these good conditions, the results show that the average accuracy degrades to metre level. Further, it is experimentally demonstrated that inclusion of P2P cooperative range observations further enhances the positioning accuracy and, in extreme cases when only one infrastructure measurement is available, P2P CP may reduce positioning errors by up to 95%. The complete test setup, the methodology for development, and the data collection are discussed in this paper. In the next version of this system, additional observations such as Wi-Fi, camera, and other signals of opportunity will be included.
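As a sketch of how P2I and P2P ranges can be combined into a single position fix, the code below runs a Gauss-Newton least-squares multilateration over ranges to nodes at known positions (infrastructure anchors or already-positioned peers). This is a generic formulation for illustration, not the authors' exact estimator, and all coordinates are made up.

import numpy as np

def multilaterate(anchors, ranges, x0, iters=10):
    """Gauss-Newton least-squares position fix from range measurements
    to nodes at known positions (P2I anchors or positioned P2P peers)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
        J = (x - anchors) / d[:, None]            # rows: unit line-of-sight vectors
        r = ranges - d                            # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

# three P2I anchors plus one P2P peer, all positions in metres (illustrative)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [6.0, 7.0]])
ranges = np.array([5.0, 8.06, 6.71, 4.24])        # noisy ranges to (3, 4)
print(multilaterate(anchors, ranges, x0=[5.0, 5.0]))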
ABSTRACT
Thanks to the recent diffusion of low-cost, high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step for successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the scene geometry. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT while still ensuring an increase in correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
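The two-step procedure itself is not detailed in the abstract; the sketch below shows one plausible way an INS rotation prior can help, purely as an assumption for illustration: putative matches are pre-filtered with the INS-predicted rotation, and the essential matrix is then estimated robustly on the survivors with OpenCV's RANSAC. The gating rule and angular tolerance are hypothetical.

import cv2
import numpy as np

def estimate_E_with_ins_prior(pts1, pts2, K, R_ins, angle_tol_deg=5.0):
    """Sketch: gate putative matches with an INS-predicted rotation,
    then estimate the essential matrix E with RANSAC."""
    # 1) Rotate normalized rays from image 1 by the INS rotation and keep
    #    matches whose directions agree within a tolerance (parallax allowed).
    Kinv = np.linalg.inv(K)
    h1 = cv2.convertPointsToHomogeneous(pts1).reshape(-1, 3)
    h2 = cv2.convertPointsToHomogeneous(pts2).reshape(-1, 3)
    r1 = (R_ins @ (Kinv @ h1.T)).T
    r2 = (Kinv @ h2.T).T
    cosang = np.sum(r1 * r2, axis=1) / (
        np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1))
    keep = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < angle_tol_deg
    # 2) Robust estimation of E on the surviving matches only.
    E, inliers = cv2.findEssentialMat(pts1[keep], pts2[keep], K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    return E, keep, inliers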
ABSTRACT
Motivated by the increasing importance of adaptive optics (AO) systems for improving the real resolution of large ground telescopes, and by the need to test AO system performance in realistic working conditions, in this paper we address the problem of simulating the turbulence effect on ground telescope observations at high resolution. The procedure presented here generalizes the multiscale stochastic approach introduced in our earlier paper [Appl. Opt. 50, 4124 (2011)]: with respect to the previous solution, a significant reduction in computational time is obtained by exploiting a local spatial principal component analysis (PCA) representation of the turbulence. Furthermore, the turbulence at low resolution is modeled as a moving average (MA) process, whereas previously [Appl. Opt. 50, 4124 (2011)] the wind velocity was restricted to be directed along one of the two spatial axes; the use of this MA model allows the turbulence to evolve in any direction. In our simulations, the proposed procedure reproduces the theoretical statistical characteristics of the turbulent phase with good accuracy.
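As a sketch of the local PCA idea, the code below learns a principal-component basis from patches of phase screens and measures the low-rank reconstruction error. The patch size, component count, block-extraction scheme, and the random placeholder screens are illustrative assumptions, not the paper's actual representation.

import numpy as np

def fit_patch_pca(screens, patch=8, n_comp=16):
    """Learn a local PCA basis from non-overlapping patches of phase
    screens and report the relative low-rank reconstruction error."""
    blocks = []
    for s in screens:
        for i in range(0, s.shape[0] - patch + 1, patch):
            for j in range(0, s.shape[1] - patch + 1, patch):
                blocks.append(s[i:i+patch, j:j+patch].ravel())
    X = np.array(blocks)
    X -= X.mean(axis=0)                     # center the patch vectors
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_comp]                     # leading principal components
    Xr = (X @ basis.T) @ basis              # low-rank reconstruction
    err = np.linalg.norm(X - Xr) / np.linalg.norm(X)
    return basis, err

# placeholder screens; simulated turbulent phases would be used instead
screens = [np.random.default_rng(k).standard_normal((64, 64)) for k in range(4)]
basis, err = fit_patch_pca(screens)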
ABSTRACT
Simulating the turbulence effect on ground telescope observations is of fundamental importance for the design and test of suitable control algorithms for adaptive optics systems. In this paper we propose a multiscale approach for efficiently synthesizing turbulent phases at very high resolution. First, the turbulence is simulated at low resolution, taking advantage of a previously developed method for generating phase screens [J. Opt. Soc. Am. A 25, 515 (2008)]. Then, high-resolution phase screens are obtained as the output of a multiscale linear stochastic system. The multiscale approach significantly improves the computational efficiency of turbulence simulation with respect to recently developed methods [Opt. Express 14, 988 (2006); J. Opt. Soc. Am. A 25, 515 (2008); J. Opt. Soc. Am. A 25, 463 (2008)]. Furthermore, the proposed procedure ensures good accuracy in reproducing the statistical characteristics of the turbulent phase.
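A minimal sketch of the low-resolution stage, assuming a standard FFT-based Kolmogorov phase screen generator (the paper instead uses the method of [J. Opt. Soc. Am. A 25, 515 (2008)], and scaling conventions vary between references). The cubic-interpolation upsampling at the end is only a stand-in for the multiscale linear stochastic refinement, which is the paper's actual contribution.

import numpy as np
from scipy.ndimage import zoom

def fft_phase_screen(n, dx, r0, seed=0):
    """Low-resolution stage: FFT-based Kolmogorov phase screen on an
    n x n grid with spacing dx and Fried parameter r0 (a sketch)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, dx)
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fr[0, 0] = 1.0 / (n * dx)                        # avoid f = 0 singularity
    psd = 0.023 * r0**(-5.0 / 3) * fr**(-11.0 / 3)   # Kolmogorov phase PSD
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    cn *= np.sqrt(psd) / (n * dx)                    # df = 1 / (n * dx)
    return np.real(np.fft.ifft2(cn)) * n * n

low = fft_phase_screen(64, dx=0.08, r0=0.15)
high = zoom(low, 4, order=3)   # 64 -> 256 grid; placeholder for the
                               # stochastic multiscale refinement step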
ABSTRACT
Turbulence simulation methods are of fundamental importance for evaluating the performance of control strategies for Adaptive Optics (AO) systems. In order to obtain a reliable evaluation of the performance, a statistically accurate turbulence simulation method has to be used. This work generalizes a previously proposed method for turbulence simulation based on the use of a multiscale stochastic model. The main contributions of this work are as follows. First, a multiresolution local PCA representation is considered. In typical operating conditions, this PCA representation reduces the computational load of turbulence simulation by approximately a factor of 4 with respect to the previously proposed method. Second, thanks to a different low-resolution method based on a moving average model, the wind velocity can be in any direction (not necessarily along one of the spatial axes). Finally, this paper extends the simulation procedure to generate, if needed, turbulence samples by using a more general model than the frozen flow hypothesis.
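The moving-average model is not specified in the abstract; the sketch below shows only the generic vector MA(q) recursion, phi_t = sum_k b_k w_{t-k} with white-noise inputs w, that such a low-resolution model would instantiate. The coefficients here are illustrative; fitting b_k to the turbulence statistics and the arbitrary wind direction is the part the paper contributes.

import numpy as np

def ma_evolve(b, steps, n, seed=0):
    """Vector moving-average evolution: phi_t = sum_k b[k] * w_{t-k},
    with i.i.d. Gaussian white-noise inputs w of dimension n."""
    rng = np.random.default_rng(seed)
    w = [rng.standard_normal(n) for _ in range(len(b))]
    frames = []
    for _ in range(steps):
        w.insert(0, rng.standard_normal(n))   # newest noise input
        w.pop()                               # drop the oldest one
        frames.append(sum(bk * wk for bk, wk in zip(b, w)))
    return np.array(frames)

# illustrative MA(3) coefficients acting on a 16-dim phase vector
frames = ma_evolve(b=[1.0, 0.5, 0.25], steps=100, n=16)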
ABSTRACT
The phase screen method is a well-established approach for taking into account the effects of atmospheric turbulence in astronomical seeing. This is of key importance in designing adaptive optics for new-generation telescopes, in particular in view of applications such as exoplanet detection or long-exposure spectroscopy. We present an innovative approach to simulating turbulent phase that is based on stochastic realization theory. The method shows appealing properties in terms of both accuracy in reconstructing the structure function and compactness of the representation.
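Since the claimed accuracy concerns the structure function, the sketch below shows how the empirical phase structure function D(r) = <|phi(x + r) - phi(x)|^2> can be estimated from a screen and compared against the Kolmogorov form 6.88 (r / r0)^(5/3). The placeholder screen and the r0 value are illustrative only.

import numpy as np

def structure_function_1d(phi, dx):
    """Empirical phase structure function D(r) = <|phi(x+r) - phi(x)|^2>,
    estimated along rows of a phase screen sampled at spacing dx."""
    n = phi.shape[1]
    seps = np.arange(1, n // 2)
    D = np.array([np.mean((phi[:, s:] - phi[:, :-s])**2) for s in seps])
    return seps * dx, D

# placeholder screen; a simulated turbulent phase would be used instead
phi = np.cumsum(np.random.default_rng(0).standard_normal((128, 128)), axis=1)
r, D = structure_function_1d(phi, dx=0.02)
D_kolmogorov = 6.88 * (r / 0.15)**(5.0 / 3)   # theory for r0 = 0.15 m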