
Object-Based Building Boundary Extraction From Lidar Data

Transcript
This presentation is about lidar data processing.
As you can see on the slide, lidar data consists of a massive number of 3D points, what we call a point cloud. At the bottom of the screen you see these 3D points coloured by elevation, and in part of the slide you see a profile along a line from A to B. The first step of the processing is to classify the points into points representing ground and points representing non-ground objects. Once we have found the ground points, we create a triangulated irregular network (TIN) in order to derive a digital terrain model (DTM), which is the basic data model for all kinds of civil engineering projects. The next step is to classify the non-ground points into vegetation points, that is points representing trees, shrubs and other types of vegetation, and points representing solid objects such as buildings and roads, which we then extract. That is still not the end, because a point cloud representing a building is just a set of points, whereas what we want is a 3D model of each building: a nice, smooth, closed polygon derived from the point cloud. So I will talk about the process of building extraction from lidar data, and it starts with filtering.
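As a rough illustration of the ground-points-to-DTM step just described, here is a minimal Python sketch (not the authors' implementation) that builds a TIN from already-classified ground points with SciPy and samples it onto a DTM grid; the input array and the grid resolution are hypothetical.

```python
# Minimal sketch: classified ground points -> TIN -> DTM grid.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dtm_from_ground_points(ground_xyz, grid_res=1.0):
    """Triangulate ground points (TIN) and sample a DTM raster from it."""
    xy, z = ground_xyz[:, :2], ground_xyz[:, 2]
    tin = Delaunay(xy)                      # triangulated irregular network
    interp = LinearNDInterpolator(tin, z)   # linear interpolation on the TIN
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_res)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    return interp(gx, gy)                   # DTM grid (NaN outside the convex hull)

# Usage with synthetic points standing in for classified ground returns:
pts = np.random.rand(1000, 3) * [100, 100, 5]
dtm = dtm_from_ground_points(pts, grid_res=2.0)
```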
To classify ground points, most filtering algorithms use rasterization, which means converting the lidar data, a mass of points, into grid-based images. Rasterization is popular because there are too many points to process directly; it takes a lot of time and is not efficient. The main drawback of rasterization is the additional computing overhead for preprocessing, but once the data is rasterized you can simply process the raster. Raster data is much easier to handle: you can apply map algebra or any image-processing operation to raster images, and subtracting one image from another, for example, is very simple, whereas if you have two different point datasets even subtracting one from the other is not easy, because coordinate-based calculations are very costly. That is the main reason rasterization is widely used for lidar processing. However, if we rasterize, several points falling into one pixel are merged into one: you have to average those points to represent a single pixel, which means some information is thrown away. Losing information through the chosen resolution increases the uncertainty. That is why we do not rasterize; we want to keep the original data. We also want to use an adaptive window size when we filter the lidar data. If you filter the raw lidar data, you have to use a window, say an arbitrary 7-by-7 window, and filter the data by striding this window across the scene. Our main goal is to extract buildings, and as you can imagine buildings come in all kinds of sizes: commercial buildings are often large, while residential buildings such as houses or flats are much smaller. If you stick to a fixed window size, the accuracy is sometimes not good enough, so we want an adaptive window size. We also use morphological filtering, which means we use the terrain as the basis for classifying the non-ground points. Once morphological filtering has been applied to the lidar data, DTM generation is possible, and building detection follows.

The study area is the UNSW campus, which is roughly one square kilometre. I want to say that this data set is quite challenging for many reasons. The UNSW campus is situated in the heart of Sydney, a metropolitan city, so we have small residential buildings but also high-rise buildings; some roads are very steep, and there are trees, small bushes and large green areas across the campus. The lidar data gives us XYZ coordinates and intensity, and as supplementary data we use an aerial orthoimage which provides RGB values. The main problem with these two datasets is that they were acquired at different times; there is a two-year gap between them, so there is some discrepancy between the two datasets.
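To make the rasterization trade-off concrete, here is a small, hypothetical Python example of gridding lidar returns by averaging the elevations that fall into each cell; it only shows how several points collapse into one pixel value, which is the information loss mentioned above.

```python
# Toy illustration of rasterization: many returns per cell become one pixel.
import numpy as np

def rasterize_mean_z(points_xyz, cell=1.0):
    """Bin points by (x, y) cell and average z per cell; returns a 2-D array."""
    ij = np.floor(points_xyz[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    shape = ij.max(axis=0) + 1
    zsum, count = np.zeros(shape), np.zeros(shape)
    np.add.at(zsum, (ij[:, 0], ij[:, 1]), points_xyz[:, 2])
    np.add.at(count, (ij[:, 0], ij[:, 1]), 1)
    with np.errstate(invalid="ignore"):
        return zsum / count          # NaN where a cell received no return

pts = np.random.rand(5000, 3) * [50, 50, 10]
grid = rasterize_mean_z(pts, cell=2.0)   # each pixel now hides per-point detail
```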
This is an overview of the campus, an aerial orthophoto. You can see there are rectangular buildings, round buildings and very complex buildings. This part is the lower campus and this is the upper campus, and along the main road from the lower campus to the upper campus the elevation change is quite large.
This is a screen capture of the lidar intensity map. Lidar operates in the near infrared — the lidar we used has a wavelength of about 1040 nanometres, which belongs to the near infrared — so we can use the lidar intensity to distinguish buildings, roads and green spaces: the intensity characteristics of each type of object are different in the intensity image. This is very useful information for our work.
This is a profile along the main road: you can see the height changes along the road and at the buildings, and this is the upper campus. Our goal is to extract the buildings from this point cloud.
For the ground-point classification we use filtering with a small set of parameters together with dilation and erosion. The reason for using dilation and erosion is to find the local maxima and local minima in the lidar point cloud, so that we can separate ground points from non-ground points. We then developed an adaptive window indicator. This indicator estimates the local relief and the approximate size of a building, and it acts as an automatic adaptation model for the window size: when we run this step for the first time we obtain an approximate size for each building and can change the window size accordingly. The window size is therefore not fixed; it changes continuously as we move across the scene. This is the workflow of the adaptive filtering. I do not want to go into detail because it is a bit complex, but what I must say is that we use both elevation information and intensity information, and we apply a set of criteria to classify ground and non-ground points. This is not done just once: we iterate and feed the results back, and whenever the data fails one of the criteria we go back to the previous step, change the window size, and repeat. The right-hand side of this flow chart concerns the adaptive indicator; the details can be found in our paper if you are interested.
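The talk does not spell out the exact filter, so the following is only a hedged, point-based sketch of the dilation/erosion idea: a morphological opening within a fixed window, labelling points close to the opened surface as ground. The adaptive window indicator and the intensity criteria described above are deliberately left out, and the window and tolerance values are illustrative.

```python
# Simplified point-based morphological opening for ground/non-ground separation.
import numpy as np
from scipy.spatial import cKDTree

def morphological_ground_mask(points_xyz, window=7.0, height_tol=0.5):
    """Flag points lying close to the locally opened (eroded-then-dilated) surface."""
    tree = cKDTree(points_xyz[:, :2])
    z = points_xyz[:, 2]
    neighbors = tree.query_ball_point(points_xyz[:, :2], r=window)
    # Erosion: local minimum elevation within the window around each point.
    eroded = np.array([z[idx].min() for idx in neighbors])
    # Dilation of the eroded surface: local maximum of the eroded values.
    opened = np.array([eroded[idx].max() for idx in neighbors])
    # Points within a small tolerance of the opened surface are labelled ground.
    return (z - opened) <= height_tol

pts = np.random.rand(2000, 3) * [100, 100, 15]     # hypothetical point cloud
is_ground = morphological_ground_mask(pts, window=7.0, height_tol=0.5)
```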
This is the initial outcome of the filtering. The filtered data has just two classes. The first class is the ground points, the points representing ground; if you remember the image I showed you before, the green spaces are left here and the red areas are the building footprints. The top figure shows the ground points: there is a considerable elevation change from the lower campus to the upper campus, but the surface is very smooth. The bottom figure shows the non-ground points, including trees, buildings and other above-ground objects, so you can read the approximate height of each building and tree from it. This is the initial result. Many civil and urban planning engineers would use the top product; it is very useful for them because it can be turned into a DTM, and the DTM can be used for many urban planning purposes. For our project, however, the goal is to create 3D models of the buildings, so we discard the top product and use only the bottom one. But it is not yet ready for building extraction because of the trees. In order to remove trees
from the non-ground points, we use the normalized difference vegetation index (NDVI), which is based on the difference between the red band from the aerial imagery and the near-infrared band, i.e. the lidar intensity. First we fuse the lidar data with the aerial imagery; then we take the two bands, red and near infrared, compute their difference, and normalize it by dividing by their sum. Of course, some of you will be aware that NDVI is not perfect, especially if the lidar intensity is not stable. Lidar intensity is a very good source for classification, but its main drawback is that the amplitude may change depending on what time of day the data was acquired. So we do not use the lidar intensity as it is; we normalize the intensity first and then use the normalized intensity to calculate the NDVI. Once the vegetation has been removed from the non-ground points, what is left are the building points. The points form building clusters, but they are still just sets of points, so the next step is to trace out the boundaries, the edges of the buildings. There are many algorithms to detect building boundaries, and we tried three different ones. The first is the alpha shape. The second is grid-based: for this one we rasterize the building points onto a grid and then use a grid-based algorithm, but it is mainly for comparison. The third is the modified convex hull. We tested these three algorithms to create the building outlines in vector format, and then fine tuning is applied to remove small spurs.
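The NDVI computation itself is straightforward; below is a small sketch assuming the red band from the orthoimage and the (normalised) lidar near-infrared intensity are already co-registered as arrays of the same shape. The threshold and array names are illustrative, not values from the talk.

```python
# NDVI from a red band and a normalised near-infrared (lidar intensity) band.
import numpy as np

def normalise(band):
    """Stretch a band to [0, 1] so lidar intensity and image red are comparable."""
    b = band.astype(float)
    return (b - b.min()) / (b.max() - b.min() + 1e-6)

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + eps)

red = np.random.rand(100, 100)            # red band from aerial imagery (stand-in)
nir = np.random.rand(100, 100)            # gridded lidar NIR intensity (stand-in)
veg_mask = ndvi(normalise(nir), normalise(red)) > 0.2   # illustrative threshold
```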
These are the extracted buildings of the campus. You can see the large green spaces are gone and all the trees have been removed; what is left is just the buildings. If you look closely, you see the building outlines are not smooth yet, so further fine tuning has to be applied to this product. But I want to look at this part: it is a residential area with very small houses compared to the campus buildings, and that is the main reason we use an adaptive window size.
The initial classification of this residential area is messy: you can see that the building points and the ground points are mixed up, so this is the unfiltered classification.
Once we apply the filter, this is what is left. It is a negative image of the buildings: the white areas represent buildings and the grey areas represent ground and trees.
Once the fine tuning is done, we have the final outcome. We then assessed our results on four different areas of the campus: the lower campus, the upper campus, a construction area and a residential area. The commission error is on average about 6 per cent, so we obtained roughly 94 per cent accuracy in terms of commission error. In terms of the RMSE, our results show about 36 centimetres overall, which I think is acceptable.
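For reference, the two figures quoted here can be computed as follows. This is a generic sketch with made-up counts and random coordinates, not the authors' evaluation code: commission error from a confusion count, and a boundary RMSE as the nearest-distance error between extracted and reference outline vertices.

```python
# Commission error and boundary RMSE, computed on illustrative inputs.
import numpy as np
from scipy.spatial import cKDTree

def commission_error(true_positives, false_positives):
    """Share of extracted building area/points that are not actually buildings."""
    return false_positives / (true_positives + false_positives)

def boundary_rmse(extracted_xy, reference_xy):
    """RMSE of distances from extracted boundary vertices to the reference outline."""
    d, _ = cKDTree(reference_xy).query(extracted_xy)
    return float(np.sqrt(np.mean(d ** 2)))

print(commission_error(9400, 600))                        # ~0.06, i.e. ~94 % correct
print(boundary_rmse(np.random.rand(50, 2), np.random.rand(200, 2)))
```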
So what is the alpha-shape algorithm? It is easy to understand and easy to implement in code. If you have a set of points representing a building, you start from any point and draw a circle; if there is a point inside the circle that is not part of the edge, you remove it, and so on. At the end, what you have is the edge, the boundary, of the building. That is the alpha-shape algorithm.

The grid-based method means you simply rasterize the vector data, and once you have a raster image you can apply any existing algorithm: commercial software such as ArcGIS has a tool that converts images to polygons, and similar vectorization tools exist in open-source software, so you can just use an existing tool to create the polygons. I think this is the easiest way to create building boundaries, because you do not have to develop your own algorithm; you simply use existing tools.

The third algorithm we tried is the modified convex hull. Once you have a set of points you create the convex hull; then you start from a vertex, keep tracing, and find the next point that is part of the edge, and at the end you have the final boundary. I want to stress that this modified convex hull algorithm is not the same as a concave hull algorithm. Concave hull algorithms are available in most commercial software, but their drawback is that you have to find a good parameter: depending on which parameter you use, you can get a badly shaped polygon or end up too close to the convex hull, so finding a suitable parameter for the concave hull is the main challenge. The modified convex hull is different from the concave hull because we trace around the boundary itself.
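A common way to implement the alpha-shape idea described above is via a Delaunay triangulation: drop every triangle whose circumradius exceeds the chosen circle radius, and keep the edges used by exactly one remaining triangle. The sketch below follows that route; it is a generic illustration, not the speaker's code, and the radius value is arbitrary.

```python
# Alpha-shape boundary edges via Delaunay triangulation and a circumradius test.
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

def alpha_shape_edges(points_xy, radius):
    tri = Delaunay(points_xy)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a, b, c = points_xy[ia], points_xy[ib], points_xy[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0:
            continue
        # Keep the triangle only if its circumradius R = abc / (4 * area) is small.
        if (la * lb * lc) / (4.0 * area) <= radius:
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                edge_count[tuple(sorted(e))] += 1
    # Boundary edges belong to exactly one kept triangle.
    return [e for e, n in edge_count.items() if n == 1]

pts = np.random.rand(300, 2) * 50              # stand-in for one building's points
outline = alpha_shape_edges(pts, radius=5.0)   # list of (i, j) vertex-index pairs
```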
Here are two sample UNSW buildings: one is just rectangular, the other is a complex building. The alpha shape works very well; it traces the outline nicely and follows the complex building closely. The modified convex hull has a problem here, because in some parts it is very difficult to identify the polygon; this is its main drawback. The grid-based result looks fine, but as I said, rasterization loses some information, so in terms of spatial accuracy the grid-based approach does not perform as well as the other two algorithms. In our research we also tested the horizontal RMSE for different buildings across the campus. On average the alpha shape and the modified convex hull give similar results, and the grid-based method was the worst of the three, but it is not far behind; they are all very close.
We also tested our algorithm on other datasets. As I said, the UNSW campus is a very challenging area because it contains many different types of buildings and vegetation, but building extraction is mostly needed in urban areas, so we applied our algorithm to urban settings and chose two different sites in typical residential areas. The algorithm works very well on both sites and gives us good results. I think the main challenge for this algorithm is how to separate two buildings in close proximity: if two buildings are too close to each other, say the gap between them is about one or two metres, then we do not have enough point density. The real fix would be to increase the point density of the lidar data, and then it would work. As a temporary solution to avoid this problem, we apply the window horizontally and then vertically — we try different directions and take the intersection of the two — and that gives us better results.
The statistics for the two test sites are as follows. In lidar classification there are three common statistics — completeness, correctness, and quality — and we used these three metrics to assess the test results. For site 1 the completeness is about 96 per cent and for site 2 about 94 per cent, so both are over 90 per cent; the correctness is about 98 per cent for one site and in the low nineties for the other. In terms of quality, however, which combines the commission and omission errors, both sites come out at around 80 per cent, so we are not fully satisfied with the results, which means we need to do more to improve our algorithm.
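The three metrics named here are standard; written out explicitly, with purely illustrative counts:

```python
# Completeness, correctness and quality from TP/FP/FN counts obtained by
# comparing extracted footprints with reference footprints.
def completeness(tp, fn):
    """How much of the reference building area was actually detected."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """How much of the detected area really is building."""
    return tp / (tp + fp)

def quality(tp, fp, fn):
    """Combined measure penalising both omission and commission errors."""
    return tp / (tp + fp + fn)

tp, fp, fn = 960, 40, 60                       # illustrative counts only
print(completeness(tp, fn), correctness(tp, fp), quality(tp, fp, fn))
```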
My concluding remarks: the proposed algorithm is suitable for cities and urban areas with varying building sizes; it works for commercial properties as well as residential ones. Our algorithm requires some parameters, but those parameters are determined automatically — it is adaptive filtering. The test results show that the proposed algorithm is able to classify ground points with a vertical accuracy of about 46 centimetres, which is quite good, while the horizontal accuracy is about 75 centimetres. It is a well-known fact that lidar data has better accuracy in vertical measurements than in horizontal measurements, so a horizontal accuracy lower than the vertical is what you expect from lidar data; these are fairly standard figures. The commission error is less than 6 per cent. As I mentioned before, when two buildings are close to each other it is a bit difficult to determine the actual size of each one; we tackled this problem by applying the window in two directions, but I think the ultimate solution is to increase the point density. I think that's it. Thank you very much. Do you have any questions?

[Audience] I was just wondering: when you are trying to determine the height of the buildings, you use elevation and intensity. When you show the result, would you also know the shape of the building, the local height?

[Speaker] Yes, you could do that. The boundaries extracted here are two-dimensional, but the building heights are stored in the lidar data, so we can simply extrude the building outlines to 3D. You would get a flat roof, though, not the real roof shape — that is the quick comment I can make on that. I think the most challenging case is buildings that are not rectangular: if you look at the campus, most buildings
are rectangular or square, but this one is a round building, a dome-like type of building. For this kind of round building, what we get at the end of the building boundary detection is just a rough outer polygon, that's it. Any other questions from the audience? Then thank you very much, and let's move on to the next speaker.

Metadata

Formal Metadata

Title Object-Based Building Boundary Extraction From Lidar Data
Series Title FOSS4G Seoul 2015
Author Lim, Samsung
License CC Attribution - NonCommercial - ShareAlike 3.0 Germany:
You may use, modify, and reproduce the work or its content for any legal and non-commercial purpose, and distribute and make it publicly accessible in unmodified or modified form, provided that you credit the author/rights holder in the manner specified by them and that you pass on the work or content, including in modified form, only under the terms of this license.
DOI 10.5446/32006
Publisher FOSS4G, Open Source Geospatial Foundation (OSGeo)
Publication Year 2015
Language English
Producer FOSS4G KOREA
Production Year 2015
Production Location Seoul, South Korea

Content Metadata

Subject Computer Science
Abstract Urban areas are of increasing importance in most countries since they have been changing rapidly over time. Buildings are the main objects in these areas, and building boundaries are one of the key factors for urban mapping and city modelling. Accurate building extraction using lidar data has been a prevalent topic to which many research efforts have been devoted. However, the complexity of building shapes and the irregularity of lidar point distribution make the task difficult to achieve. Although there are plenty of algorithms trying to solve these difficulties, it is not feasible for a single method to fit all cases; each can perform well only under a certain situation and requirement. In this paper, several building boundary extraction algorithms, including an alpha-shape algorithm, a grid-based algorithm, and a concave hull algorithm, are assessed. The strengths and limitations of each algorithm are identified and addressed. The point cloud used in this research is derived from the airborne lidar data acquired over the main campus of the University of New South Wales (UNSW), Australia, in 2005. Typically, the boundary extraction algorithms are applied to the clusters of building points once the lidar data has been segmented and classified. Many approaches have been attempted to improve the extraction algorithms. The simplest way to extract a rough boundary is the convex hull method, which has been implemented by several researchers including Qihong et al. [1]. However, this algorithm only fits buildings with regular convex shapes. In order to overcome the limitation of this method, many researchers have modified and improved the algorithm and obtained more reliable boundaries [2, 3]. Another prevalent and recent method uses an alpha-shape algorithm based on two-dimensional Delaunay triangulation [4, 5]. This method works for both concave and convex shapes, and even for some complicated shapes. Another approximation-based algorithm was introduced by Zhou and Neumann [6] using watertight grids. Although it is observed that the aforementioned algorithms work well in different scenarios, a quantitative comparison of each algorithm's performance on an identical dataset is rarely reported. Aiming at evaluating and improving these algorithms, we implemented a mathematical framework to compare the algorithms on an object-by-object basis. This study compares the boundary points selected by different algorithms and the impact of the selection on the accuracy. In this paper, three algorithms for building boundary extraction are assessed on an object-by-object basis. The alpha-shape algorithm generates reliable boundaries for most of the sample buildings, while the grid-based algorithm shows a little inconsistency in some cases. The concave hull algorithm performs moderately, with a few limitations. The alpha-shape algorithm is suggested for general building boundary extraction for its consistency and reliability.
