
An On-board Visual-based Attitude Estimation System For Unmanned Aerial Vehicle Mapping

Speech transcript
Well, thank you. Looking at the topic of my presentation you might wonder how it relates to the previous speaker's, but actually everything is connected to everything. This paper is about developing an on-board visual-based attitude estimation system for an unmanned aerial vehicle (UAV) used for mapping. When you do UAV mapping, you collect images, you know the position of your drone and the orientation of the camera — in other words the camera pose — and from the overlapping images you can stitch the imagery together and obtain a 3D point cloud. That is what normal UAV mapping does. What we want to do here is the opposite: from the overlapping images we want to estimate the attitude and the position of the UAV. So we are interested in the UAV's pose, whereas the previous presenter was interested in people's attitude and orientation.
Our research objectives are to develop an image-based attitude estimation system, and to use that system as a backup for the IMU — a supplement to the IMU — to provide an accurate platform and camera pose during image acquisition, and also to remove the need for ground control points (GCPs) when correcting the aerial images. The reason we developed this system is that commercial software packages already exist: if we use Pix4D or another commercial package, we can do the computing and data processing to trace back the attitude and pose of the platform. But that is done by post-processing: once you have the images, you apply the software, compute the transformation, and obtain the attitudes and positions. We want an on-board attitude estimation system, so time is the most important criterion, and therefore we have to be careful about the transformation equations and the detectors we use.

We use optical flow to measure the ego-motion of the on-board camera, and from that we estimate the platform's attitude. We want to integrate optical flow with a keypoint detector. Keypoints are points that can be found in two different, overlapping images, so we have to cross-correlate the points from one image with the points in the other, and we use image feature detectors such as MSER to identify those keypoints. There can be very many keypoints — any point that is visible in both images can be a keypoint — but some keypoints stand out very clearly in the image, and those are the ones we want to use as virtual ground control points. To do so we have to find the best, fastest detector, because time is the critical issue here. Virtual ground control points are keypoints on the image; they are extracted from salient features, and those features should be present in the overlapping images. Any common point in two images can be a virtual ground control point, but as I said we only need to identify a handful of points between images, so we use a limited number of virtual GCPs, and these virtual GCPs are crucial for estimating the pose of the camera. A small sketch of this detection-and-matching step is given below.
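To make the detection-and-matching step concrete, here is a minimal sketch (not the authors' code) of how two overlapping frames could be reduced to a set of candidate virtual ground control points with OpenCV. ORB is used here only because it ships with stock OpenCV; the file names, the keypoint budget and the brute-force matcher are illustrative assumptions, not details from the talk.

```python
import cv2

def virtual_gcps(path_a, path_b, max_points=500):
    """Detect keypoints in two overlapping aerial frames and keep the
    mutually best matches as candidate virtual ground control points."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    detector = cv2.ORB_create(nfeatures=max_points)
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)

    # Cross-checked brute-force matching keeps only pairs that are each
    # other's best match -- a cheap first filter before the clustering and
    # cross-correlation step described later in the talk.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b

# Example (hypothetical file names):
# pts_a, pts_b = virtual_gcps("frame_001.jpg", "frame_002.jpg")
```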
Let me now turn to the basic theory of the transformation between the image plane and the ground points. First of all, you have a camera at a certain position with a given focal length; the image plane sits between the camera centre and the ground, and the pixel coordinates x and y of an image point are determined by the relationship between the camera focal length and the ground point, which has 3D coordinates X, Y, Z. From this relationship we can compute the velocity on the image plane. That velocity is a combination of translation and rotation, and the rather complex relationship can be reduced to a compact vector-and-matrix form (a standard form of these equations is sketched below). Once you have two overlapping images, this velocity can be represented by an optical flow field, and the slide shows a sample flow field from two overlapping images.

As I said, time is a critical issue, so we want to identify which keypoint detector is suitable for our project. There are many detectors reported in the literature, and we tested them against three criteria. The first is the detection rate: how many keypoints can be obtained from two overlapping images. The second, and most critical, is the time to complete: how much time is needed to finish the detection of keypoints. The third is the matching rate: the detectors give you a series of candidate keypoints, some of which are not correctly matched, so we inspect the keypoints visually and then derive matching-rate statistics. We tested about 249 overlapping images acquired over a test site in Australia. We have two different kinds of UAVs — one is a fixed-wing UAV and the other is a copter — but we used the fixed-wing, which carries GPS and an IMU, so it is able to do RTK positioning for the UAV platform. That is why we chose the fixed-wing UAV; a fixed-wing UAV is also more suitable for high-altitude flight, which helps to minimize distortion when the images are taken.
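The equations themselves are not spelled out in the talk; the relation being described is presumably the standard pinhole-projection and ego-motion optical-flow model. A sketch, with focal length $f$, ground point $(X, Y, Z)$ in the camera frame, translational velocity $(t_x, t_y, t_z)$ and angular velocity $(\omega_x, \omega_y, \omega_z)$:

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}$$

$$\dot{x} = \frac{x\,t_z - f\,t_x}{Z} + \frac{xy}{f}\,\omega_x - \left(f + \frac{x^2}{f}\right)\omega_y + y\,\omega_z$$

$$\dot{y} = \frac{y\,t_z - f\,t_y}{Z} + \left(f + \frac{y^2}{f}\right)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z$$

The translational terms depend on the unknown depth $Z$ while the rotational terms do not, which is why a sparse set of flow vectors at matched keypoints is enough to constrain the platform's attitude.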
These are sample images from one of the flights, ordered from top left to bottom right; the slide shows the first 60 images of the sequence, and you can see the overlap between consecutive images. For example, in the first image the roundabout is at the centre, in the next image it is at the bottom, and in the third image it is gone, while other features such as trees and roads appear in the other images.
We tested seven keypoint detectors: the Harris detector, the minimum eigenvalue detector (MinEigen), the scale-invariant feature transform (SIFT), maximally stable extremal regions (MSER), speeded-up robust features (SURF), BRISK, and FAST. We evaluated these seven detectors against the three criteria — detection rate, time to complete, and matching rate — and I want to report the statistics. We used two matching metrics. The first is the sum of absolute differences (SAD) between two sets of keypoints — the keypoints obtained by the detector and a set of reference keypoints — where the difference is measured as the absolute value between the two sets. The other is the sum of squared differences (SSD), where the differences are squared. A small sketch of both metrics follows.
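As a minimal illustration of the two metrics (the use of grey-level patches around each keypoint, and their size, are assumptions, not details given in the talk):

```python
import numpy as np

def sad(patch_a, patch_b):
    """Sum of Absolute Differences between two equally sized patches."""
    return int(np.abs(patch_a.astype(np.int32) - patch_b.astype(np.int32)).sum())

def ssd(patch_a, patch_b):
    """Sum of Squared Differences between two equally sized patches."""
    diff = patch_a.astype(np.int32) - patch_b.astype(np.int32)
    return int((diff * diff).sum())

def best_match(patch, candidates, metric=sad):
    """Index of the candidate patch with the lowest matching cost."""
    costs = [metric(patch, c) for c in candidates]
    return int(np.argmin(costs))
```

A lower cost means a better match; as the results below show, the two metrics do not always rank the detectors in the same order.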
Out of the seven detectors, the minimum eigenvalue detector provided the largest number of keypoints: averaged over the 249 images, MinEigen detected more than 5,000 keypoints per image. The second was SIFT, which gave more than 2,000 keypoints per image on average, so it sits in the middle. FAST and BRISK performed worst here — we obtained only a very limited number of keypoints from each image — so FAST did not do well in terms of the number of detected keypoints.

We also measured the matching keypoints: out of the keypoints a detector finds, how many are actually matched. This matching includes correct matches but also false matches. In terms of matching keypoints, SIFT was excellent, providing more than 250 matching keypoints per frame; the second was MinEigen and the third was SURF, so those are the top three, and the rest gave us far fewer matching keypoints. That is one of the criteria. When we look at the matching rate, SIFT is at about an 11% matching rate and SURF is similar, so SIFT and SURF are good detectors in terms of matching rate. Again, matching here only means that a keypoint can be seen in both sets — the keypoints from one image and the keypoints from the other; it does not necessarily mean that a point in one set corresponds to the identical point in the other set, so the matches still contain some false matching keypoints.

I think the time to compute is the most important criterion for our system, and unfortunately SIFT does not perform well there: it takes a lot of time, simply because SIFT produces so many keypoints, so it needs more than 1.4 seconds to complete the detection. FAST is the fastest, as the name suggests, although FAST does not do well in terms of detection rate. The top three performers in terms of time to complete were SURF, BRISK and FAST; a sketch of how such per-frame statistics could be gathered follows.
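The benchmarking code is not shown in the talk; the following is only a rough sketch of how per-frame keypoint counts and detection timings could be collected with OpenCV. SURF is omitted because it needs the non-free opencv-contrib build (cv2.xfeatures2d.SURF_create), and all detector parameters are left at their defaults, so this will not reproduce the exact figures reported here.

```python
import time
import cv2

# Detector factories available in stock OpenCV; the study also tested
# Harris, MinEigen and SURF, which need other entry points or builds.
DETECTORS = {
    "SIFT":  cv2.SIFT_create,
    "MSER":  cv2.MSER_create,
    "BRISK": cv2.BRISK_create,
    "FAST":  cv2.FastFeatureDetector_create,
}

def benchmark(image_paths):
    """Mean keypoint count and mean detection time per detector."""
    results = {}
    for name, factory in DETECTORS.items():
        detector = factory()
        counts, seconds = [], []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            t0 = time.perf_counter()
            keypoints = detector.detect(img, None)
            seconds.append(time.perf_counter() - t0)
            counts.append(len(keypoints))
        results[name] = {
            "mean_keypoints": sum(counts) / len(counts),
            "mean_seconds": sum(seconds) / len(seconds),
        }
    return results
```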
For the visual inspection, we inspected each matched keypoint manually, checking from one image to the other, and compared the detectors' matching keypoints against that inspection. We could not do this for SIFT, because SIFT produces too many keypoints, so unfortunately we cannot report the correct matching rate for SIFT; but since SIFT is already out as a candidate — it is simply too slow — this does not affect our project. Most of the detectors performed well: the correct matching rates of the top performers are above 80 percent, so in that sense it almost does not matter which one we use. What I do want to point out on this slide is that there is a discrepancy between SSD and SAD. If we use SAD, the absolute difference, as the matching metric, then SURF comes out on top — number one among the six or seven — but when we look at SSD it does not. So should we use SAD or SSD? That is a question. If we instead rank by the average correct matching rate, considering both metrics at the same time, then MinEigen is first, Harris is second, and SURF is about third.
Based on these statistics, what we learned from this test is that SSD and SAD do not produce consistent results, especially in terms of the correct matching rate; but SURF shows the highest correct matching rate with respect to SAD, SURF was in the top three in terms of time to complete, and it has a reasonable detection rate.
So which one is the optimal keypoint detector for our project? SIFT provides the highest number of matching keypoints, but it provides far too many keypoints, which is damaging for our system because it increases the processing time, and our interest is on-board camera pose estimation; therefore we excluded SIFT from the selection. SURF was chosen as the optimal one: its processing time is about 0.2 seconds, it yields a reasonable number of matching keypoints, and, depending on which metric is used, SURF is the top one in terms of correct matching rate. So the choice is SURF.

One thing I found from this experiment is that the results show a very distinct clustering pattern among the matching keypoints — I will show you a cross-plot of the matching points in the next slide — and we can use this clustering as a tool to classify correct and false matching keypoints automatically. To classify correct matches and false matches we applied cross-correlation and we also used outlier information. The algorithm for the automatic identification of correct matches is: use the outlier information, do the clustering, apply cross-correlation, and then eliminate the false matches (a toy version of this step is sketched below).

This slide shows the statistics of the automatic identification of SURF points: the x-axis represents the image number, from the first image to the 249th, and the values represent how many correct keypoints were identified. Some image pairs provide only a very small number of correct matching keypoints, while others provide up to about 150 correct matching points. Those statistics are for the SURF algorithm; we applied the same procedure to the MinEigen keypoints, and in those statistics some images do not have enough common keypoints, so those images cannot be used for our attitude estimation system. The statistical analysis of the mean and standard deviation of correct points shows that SURF is preferable to the minimum eigenvalue detector, which is another reason why we chose SURF as our algorithm.
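Only the outline of the automatic classification step is given in the talk (outlier information, clustering, cross-correlation, elimination of false matches). As a toy illustration of the underlying idea — that correct flow vectors between two overlapping frames cluster tightly while false matches scatter — one could filter matches as follows; the z-score test and the threshold are illustrative choices, not the authors' algorithm:

```python
import numpy as np

def filter_matches(pts_a, pts_b, z_thresh=2.0):
    """Keep matches whose displacement vector lies inside the main cluster."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    flow = pts_b - pts_a                      # displacement of each matched keypoint

    mean = flow.mean(axis=0)
    std = flow.std(axis=0) + 1e-9
    z = np.abs((flow - mean) / std)           # per-axis z-scores

    inliers = (z < z_thresh).all(axis=1)      # scattered vectors are treated as false matches
    return pts_a[inliers], pts_b[inliers]
```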
This slide shows the clusters of keypoints when two overlapping images are put together into one plot, so you can see two different clusters here. The points in one image are drawn as circles and the points in the other as crosses: the circles that cluster together in one image are classified as keypoints, and they are matched to the crosses in the other image, which form their own cluster. We can therefore use this cluster information for automatic feature identification.
And this is the final result: we have a pair of overlapping images — you will remember this is the first image, the scene near the roundabout — this is the set of keypoints from the first image, and they are matched with the keypoints from the second image.
To conclude: the performance of the various keypoint detectors varied with respect to detection rate, time to complete and matching rate. The set of 249 aerial images was used to evaluate the performance of the keypoint detectors, and our assessment is that SURF is the optimal keypoint detector for our system, the on-board attitude estimation system. That does not necessarily mean SURF is the best detector in general — only that it is optimal for our system. Thank you very much. Are there any questions?
Question: Thank you for the presentation. Regarding the detector: have you tried running SIFT on a GPU? On a GPU, SIFT runs quite fast, and perhaps a GPU would be powerful enough to do it on the aerial vehicle.

Answer: Thank you very much, that is a very helpful suggestion. We did think about GPUs. As you can imagine, a UAV has many limitations, and the payload is first and foremost: the payload is a significant limitation, so a GPU or any additional processor could be a burden. The second limitation of a UAV is the battery, and any high-performance unit takes more power.

Question: Did you also apply these algorithms with your own software?

Answer: We developed our in-house software, so we used our own implementation of the detectors. As I showed in the slides, the process detects keypoints within 0.2 seconds, and we believe that is fast enough for on-board operations. Our system is still ongoing — it is not completed yet — so we have not actually tested it in a real flight environment. Thank you very much; if we have more time we can discuss these issues further. Thank you.

Metadata

Formal metadata

Title An On-board Visual-based Attitude Estimation System For Unmanned Aerial Vehicle Mapping
Series title FOSS4G Seoul 2015
Author Lim, Samsung
License CC Attribution - NonCommercial - ShareAlike 3.0 Germany:
You may use, modify and reproduce the work or its content in unchanged or modified form for any legal, non-commercial purpose, and distribute and make it publicly available, provided you credit the author/rights holder in the manner specified and pass on the work or content, including in modified form, only under the terms of this license.
DOI 10.5446/32020
Publisher FOSS4G
Publication year 2015
Language English
Producer FOSS4G KOREA
Production year 2015
Production location Seoul, South Korea

Content metadata

Subject Computer Science
Abstract A visual-based attitude estimation system aims to utilize an on-board camera to estimate the pose of the platform by using salient image features rather than additional hardware such as gyroscope. One of the notable achievements in this approach is on-camera self-calibration [1-4] which has been widely used in the modern digital cameras. Attitude/pose information is one of the crucial requirements for the transformation of 2-dimensional (2D) image coordinates to 3-dimensional (3D) real-world coordinates [3]. In photogrammetry and machine vision, the use of camera’s pose is essential for modeling tasks such as photo modeling [5-8] and 3D mapping [9]. Commercial software packages are now available for such tasks, however, they are only good for off-board image processing which does not have any computing or processing constraints. Unmanned Aerial Vehicles (UAVs) and any other airborne platforms impose several constraints to attitude estimation. Currently, Inertial Measurement Units (IMUs) are widely used in unmanned aircrafts. Although IMUs are very effective, this conventional attitude estimation approach adds up the aircraft’s payload significantly [10]. Hence, a visual-based attitude estimation system is more appropriate for UAV mapping. Different types of approaches to visual-based attitude estimation have been proposed in [10-14]. This study aims to integrate optical flow and a keypoints detector of overlapped images for on-board attitude estimation and camera-self calibration. This is to minimize the computation burden that can be caused by the optical flow, and to fit in on-board visual-based attitude estimation and camera calibration. A series of performance tests have been conducted on selected keypoints detectors, and the results are evaluated to identify the best detector for the proposed visual-based attitude estimation system. The proposed on-board visual-based attitude estimation system is designed to use visual information from overlapped images to measure the platform’s egomotion, and estimate the attitude from the visual motion. Optical flow computation could be expensive depending on the approach [15]. Our goal is to reduce the computation burden at the start of the processing by minimizing the aerial images to the regions of upmost important. This requires an integration of optical flow with salient feature detection and matching. Our proposed system strictly follows the UAV’s on-board processing requirements [16]. Thus, the suitability of salient feature detectors for the system needs to be investigated. Performances of various keypoints detectors have been evaluated in terms of detection, time to complete and matching capabilities. A set of 249 aerial images acquired from a fixed wing UAV have been tested. The test results show that the best keypoints detector to be integrated in our proposed system is the Speeded Up Robust Feature (SURF) detector, given that Sum of Absolute Differences (SAD) matching metric is used to identify the matching points. It was found that the time taken for SURF to complete the detection and matching process is, although not the fastest, relatively small. SURF is also able to provide sufficient numbers of salient feature points in each detection without sacrificing the computation time.
