An On-board Visual-based Attitude Estimation System For Unmanned Aerial Vehicle Mapping

Formal Metadata

Title
An On-board Visual-based Attitude Estimation System For Unmanned Aerial Vehicle Mapping
Series title
Number of parts
183
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Germany:
You may use, modify, and reproduce the work or content in unaltered or altered form, distribute it, and make it publicly available for any legal, non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in altered form, only under the terms of this license.
Identifiers
Publisher
Year of publication
Language
Producer
Production year: 2015
Production place: Seoul, South Korea

Content Metadata

Subject area
Genre
Abstract
A visual-based attitude estimation system aims to use an on-board camera to estimate the pose of the platform from salient image features rather than additional hardware such as gyroscopes. One notable achievement of this approach is camera self-calibration [1-4], which is now widely used in modern digital cameras. Attitude/pose information is a crucial requirement for transforming 2-dimensional (2D) image coordinates into 3-dimensional (3D) real-world coordinates [3]. In photogrammetry and machine vision, the camera's pose is essential for modeling tasks such as photo modeling [5-8] and 3D mapping [9]. Commercial software packages are now available for such tasks; however, they are suited only to off-board image processing, which faces no computing or processing constraints. Unmanned Aerial Vehicles (UAVs) and other airborne platforms impose several constraints on attitude estimation. Currently, Inertial Measurement Units (IMUs) are widely used in unmanned aircraft. Although IMUs are very effective, this conventional attitude estimation approach adds significantly to the aircraft's payload [10]. Hence, a visual-based attitude estimation system is more appropriate for UAV mapping. Different approaches to visual-based attitude estimation have been proposed in [10-14]. This study aims to integrate optical flow with a keypoint detector applied to overlapped images for on-board attitude estimation and camera self-calibration. The goal is to minimize the computational burden imposed by the optical flow and to fit on-board visual-based attitude estimation and camera calibration within the platform's constraints. A series of performance tests has been conducted on selected keypoint detectors, and the results are evaluated to identify the best detector for the proposed visual-based attitude estimation system.
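The abstract describes estimating attitude from the visual motion between overlapped images. As a minimal illustration of the underlying geometry (not the authors' actual pipeline), the sketch below estimates a single in-plane rotation angle (roll) from matched 2D keypoints between two frames, using a least-squares (Kabsch/Procrustes) fit; the function name and the synthetic point data are illustrative assumptions.

```python
import numpy as np

def estimate_roll(pts_a, pts_b):
    """Estimate the in-plane rotation (roll, in degrees) between two
    overlapped frames from matched 2D keypoints via a least-squares
    (Kabsch) fit. pts_a, pts_b: (N, 2) arrays of corresponding points."""
    a = pts_a - pts_a.mean(axis=0)      # centre both point sets
    b = pts_b - pts_b.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)   # SVD of the cross-covariance
    r = vt.T @ u.T                      # optimal 2x2 rotation
    if np.linalg.det(r) < 0:            # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

# Synthetic check: rotate a random point cloud by 5 degrees
rng = np.random.default_rng(0)
pts = rng.uniform(0, 640, size=(50, 2))
theta = np.radians(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = pts @ rot.T
print(round(estimate_roll(pts, rotated), 2))  # → 5.0
```

A full system would of course recover all three attitude angles, typically by decomposing a homography or essential matrix, but the least-squares fit above captures the core idea of reading egomotion out of matched image features.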
The proposed on-board visual-based attitude estimation system is designed to use visual information from overlapped images to measure the platform's egomotion and to estimate the attitude from the visual motion. Optical flow computation can be expensive, depending on the approach [15]. Our goal is to reduce the computational burden at the start of processing by restricting the aerial images to the regions of greatest importance. This requires integrating optical flow with salient feature detection and matching. Our proposed system strictly follows the UAV's on-board processing requirements [16]; thus, the suitability of salient feature detectors for the system needs to be investigated. The performance of various keypoint detectors has been evaluated in terms of detection, completion time, and matching capability. A set of 249 aerial images acquired from a fixed-wing UAV was tested. The test results show that the best keypoint detector to integrate into our proposed system is the Speeded-Up Robust Features (SURF) detector, provided that the Sum of Absolute Differences (SAD) matching metric is used to identify the matching points. Although SURF is not the fastest, the time it takes to complete the detection and matching process is relatively small, and it provides a sufficient number of salient feature points in each detection without sacrificing computation time.
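The SAD matching metric named in the results can be sketched in a few lines: for each keypoint's descriptor (or image patch), pick the candidate in the other frame with the smallest sum of absolute pixel differences. The sketch below uses small synthetic patches rather than real SURF descriptors (SURF itself lives in OpenCV's contrib module); the function names and test data are illustrative assumptions.

```python
import numpy as np

def sad(patch_a, patch_b):
    """Sum of Absolute Differences between two equally sized patches."""
    return np.abs(patch_a.astype(np.int32) - patch_b.astype(np.int32)).sum()

def match_keypoints_sad(patches_a, patches_b):
    """Match each patch in patches_a to its nearest neighbour in
    patches_b under the SAD metric. Returns a list of (i, j) pairs."""
    matches = []
    for i, pa in enumerate(patches_a):
        costs = [sad(pa, pb) for pb in patches_b]
        matches.append((i, int(np.argmin(costs))))  # lowest SAD wins
    return matches

# Tiny synthetic example: three 4x4 patches, shuffled in the second set
rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, size=(4, 4)) for _ in range(3)]
shuffled = [patches[2], patches[0], patches[1]]
print(match_keypoints_sad(patches, shuffled))  # → [(0, 1), (1, 2), (2, 0)]
```

In practice a ratio test or cross-check would be added to reject ambiguous matches, and the brute-force inner loop would be vectorized, but the exhaustive nearest-neighbour search above is the essence of SAD matching.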