TIB AV-Portal

The PDAL Pointcloud Engine


Formal Metadata

The PDAL Pointcloud Engine
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

An introduction to the PDAL point cloud library: basic data processing, reading and writing files, and scaling up to batch processing. Also covers the use of PDAL Docker images for quick installation, various PDAL plugins and optional drivers, and connections to other projects that use PDAL.
So, welcome to this session. We're going to go into the clouds, not the ones up there, where there are none, but point clouds,
starting with Mike Smith, who is going to present. Just out of curiosity, how many people here are using PDAL at the moment? OK. And how many haven't used it at all? OK.
About an even number, so I'll cover some advanced material and some beginner material too. Just for those of you who are curious, this point cloud was imaged in the thermal range, using a FLIR camera at Palmer Station in Antarctica, one of the places we've been able to scan. So, about PDAL, however you want to pronounce it:
It's a BSD-licensed library, so the most permissive available, but we do support proprietary plugins; there are a number of proprietary plugins already, some of which are public and some of which are not. It's a C++ library, and we have a GitHub repo, with pull requests gladly welcomed. We've had three official releases: the 1.1 release last year, a 1.2 release earlier this year, and we're just going into a 1.3 beta release right now, so those of you who want to get out there and start testing things and finding bugs are welcome.

The way PDAL is set up is very much like GDAL, with a single pdal command and many subcommands that run off it. PDAL is primarily a translation engine: you can see we have a large variety of readers and a large variety of writers. There's a reason the name PDAL is very similar to GDAL; it's very much inspired by GDAL. Howard Butler is a current committer on the GDAL project, so a lot of the structure is very similar; it's intended to really be the GDAL of point clouds. In the upcoming release we've added a bunch of new formats, scientific formats like ILVIS2 and IceBridge that are primarily of interest for scientific analysis, a new text reader that will help you get those XYZ text files in, and a couple of new writers, one of which I'll discuss.

But the heart of the thing is the filters; these are really the processing power of PDAL. A number of them are your basic operations, filtering, splitting, chipping, sorting, things like that, and some do more advanced noise processing, like the statistical outlier filter, designed to really allow you to clean up your data. Then there are the main pdal applications; these are the top-level commands. Some of these are basically wrappers around certain filters, for example the pdal ground command, which is just a way to get access to those filters in a simpler manner.
Some of them are very similar to GDAL's: there's pdal translate, just like there is gdal_translate, your basic command for changing file formats and doing minor changes to your files. There are utility commands, pdal tindex I should say, very similar to gdaltindex, building up a tile index of all your files and allowing you to operate on them as a single file once you've built it. And then there's pdal pipeline, which is the main power of PDAL's processing engine; I'll cover that.

Here's what's really new in 1.3: JSON pipelines. XML is going away, but don't worry, it will be preserved for around two to three more releases, so XML is still supported, which is good because we built a lot of our applications around the XML pipelines, but we're in the process of porting over to JSON now, and the JSON pipelines make things much simpler. As we go through here I'll show some examples of the pdal pipeline in XML and then the complement in JSON. We have an enhanced derivative writer that allows you to write out things like slope, aspect, contours, and hillshades. We have a bunch of new analysis filters that I'll briefly mention. There's the new tindex reader, which now gives you the ability to do merges, clips, and the common filters on a large number of files, all with one command; we really like this. We have a better text reader now, and improved argument validation, so much better feedback at the command line when you've gotten arguments wrong, and thanks to Connor's work, transparent S3 URL handling, so if you store a lot of your data in S3 you can just reference the S3 URL on the command line. By the way, if people have questions, let me know during the talk; I'd rather do that while we have the context, so just ask, I'll repeat the question for the audio, and we don't have to wait until the end.

pdal ground: something that people like to do a lot with point clouds is classifying a point cloud into ground and non-ground points, and you can either
write the classification right into the point records, or you can actually remove the points directly and make a smaller point cloud. Some lidar formats don't support classification, so there you actually have to remove the points; formats like LAS do support classification, so you can just mark those points as ground. It uses an algorithm from the Point Cloud Library that we compile into PDAL, the progressive morphological filter, and there is an approximate version, so when you're trying to play around with the parameters on the output of ground, you may want to run the approximate version, because it's much faster to operate, and then switch from the approximate to the true one once you get things narrowed down. So here's an example of a point cloud from Sitka, Alaska. It's, as you can see, quite heavily vegetated, and this is an area they wanted to produce a digital elevation model of, so we ran the ground filter on it, and now we have a bare-earth point cloud. That's the kind of thing you can do with the ground filter.
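As a sketch, the ground workflow just described can be expressed as a pipeline. The stage names below follow current PDAL documentation (filters.pmf is the progressive morphological filter; at the time of this talk the same functionality was reached through the pdal ground command), so treat them as an assumption and check the documentation for your release:

```json
{
  "pipeline": [
    "input.las",
    {
      "type": "filters.pmf"
    },
    {
      "type": "filters.range",
      "limits": "Classification[2:2]"
    },
    "ground_only.las"
  ]
}
```

The first filter marks ground returns as Classification 2, and the range filter then keeps only those points, producing the bare-earth cloud; run it with something like `pdal pipeline ground.json`.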
pdal info is very similar to gdalinfo: you can get basic summary information about your point clouds. There are several other options besides the summary, things like just looking at the metadata, or looking at the stats, as well
as the ability to generate boundaries. This is an overview of the area, and here's the
boundary file generated for the point cloud. This is also what's run when you're generating a pdal tindex; it stores the boundary in a GDAL-supported vector format, a shapefile or whatever. By default the parameters are fairly coarse, so you get a very coarse outline for your data, but you can change the options to the boundary filter and get finer and finer boundary calculations, at the cost of doing more intensive calculations. One thing that's new is that the boundary in pdal info now returns the density as well, because it has calculated the area and it knows the number of points. It will, however, vary based on the boundary that was calculated, so with a very coarse boundary you might get a lower density than with a very fine boundary. pdal info can also give you information on specific points, so you can drill down into your file, find whether there are bad points and which specific ones they are, and then potentially remove those points.
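The density figure mentioned here is just the point count divided by the computed boundary area, which is why a coarser boundary yields a lower number. A minimal sketch of that arithmetic; the JSON shape below is a simplified stand-in for pdal info output, not its real schema:

```python
import json

# Simplified stand-in for a `pdal info --boundary` result.
# The real output schema differs; only the count/area arithmetic matters here.
info = json.loads("""
{
  "num_points": 3000000,
  "boundary": {"area": 1500000.0}
}
""")

# Density in points per square meter: total points over boundary area.
density = info["num_points"] / info["boundary"]["area"]
print(density)  # → 2.0
```

A finer boundary hugs the data more tightly, shrinking the area and raising the reported density, exactly as described above.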
There's a split option that allows you to split very large point clouds into a series of tiles based on capacity; in this case each of these point cloud tiles here is approximately 3 million points. Or you can specify in terms of length, and then you have approximately equal-sized tiles, depending on how much area you actually have, but they're going to vary in point count. pdal tindex is very similar to gdaltindex: it lets you run through a whole series of files and create a single GDAL-compatible vector file that stores the path to each of your files, as well as a boundary for each file in the geometry column. This then allows you to use the tindex for a variety of different operations, filtering, merging, clipping, without having to specify all the files as input; it just uses the references in one single file. So here's an example where I took the split files, put a polygon geometry around them, ran the tindex merge command, and got the merged, clipped point cloud from all those individual tiles.
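A sketch of that merge-and-clip step through the tile index, as JSON: the readers.tindex stage is in PDAL, but the option names and WKT-polygon mechanism shown here are assumptions on my part, so check the readers.tindex documentation for your release (the clip polygon is left as a placeholder):

```json
{
  "pipeline": [
    {
      "type": "readers.tindex",
      "filename": "index.sqlite",
      "wkt": "POLYGON((...))"
    },
    "merged_clip.las"
  ]
}
```

The same operation is available directly from the command line via the pdal tindex merge subcommand described above.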
pdal translate is the main point cloud translation method: basically input, output, and then whatever advanced features you want. The advanced options are very similar to the layer creation options and dataset creation options that you see in GDAL, so it assumes certain defaults, which you can override whenever you want. pdal pipeline is basically the full power of PDAL: it allows you to put all your commands into one specific script and run it across one to many files. It lets you run through the data just one time while doing, if you need to, multiple filter operations, multiple read operations, multiple write operations. In the end, most operations in PDAL go through a pipeline; it's just transparent, you don't see it. So here's an
example in the old XML format. These are built from the inside out, so they start with a reader, then go through a filter, in this case a range filter keeping Z between 0 and 999,999, and then in this case write out to LAS. Notice there are no file names in this particular pipeline: here I'm specifying the file names on the command line, so I can keep this one pipeline, run through it, and pass multiple file names. But XML is deprecated, and now we're in JSON, and you can see that the JSON format for specifying the pipeline is a lot simpler: a little less typing, and definitely fewer angle brackets. And just like the colorization filter in XML, now in JSON you can pass any GDAL-compatible raster to a point cloud and have it colorize the points; it does have to be in the same projection as the point cloud. The pipeline basically sets the RGB values for those dimensions, or you can just do it all at the command line, so there are multiple ways of doing the same thing. So here's an example for that area of Sitka, Alaska again; the imagery is pulled from the USGS, and once you apply the colorization filter to the point cloud, you have a colorized point cloud. All these images, by the way, were rendered in CloudCompare, which is another open source point cloud viewing engine; it uses libLAS to actually render things, it doesn't use PDAL yet, but that's forthcoming. Here's an example of using the reprojection filter, which is a common operation for point clouds; again, this is the deprecated XML format and then the JSON. You can specify your parameters at the command line, or you can even do batch processing with xargs and run a whole bunch of operations. Another common thing to do with point clouds is generating digital elevation models. We have the points2grid output writer, which was provided to us by the OpenTopography group, and in this case we actually have the input and output file names embedded in the
pipeline itself, shown here as XML and then as JSON. Running this pipeline will read the file, filter based on classification, classification 2, which is ground, so it's going to produce a bare-earth digital elevation model, and then write it out at one-meter resolution, using inverse distance weighting and writing the output to a TIFF. You can do the same kind of thing with command-line parameters rather than specifying a pipeline, and you can even turn a series of command-line parameters into a pipeline with the --pipeline option.

Some of the highlights of the PDAL 1.3 release: we've added a new filter that allows you to calculate normalized height, written onto a new dimension. One thing about this: the data does have to have a ground classification, as otherwise we can't really tell where the base ground elevation is to calculate the normalized heights. We've added multiple thinning options: we used to just have random sampling, and now we've added Poisson and voxel-grid thinning options. The attribute filter has been enhanced so that you can use your own features: say you have a vector file of building footprints, places you want to mask out; you can assign classifications from that vector file and apply them onto the point data. And we have a new density kernel command for looking at density. Here's an example of running the density command on that point cloud; these are just individual hexagons that it calculated, fairly crude, but you can see that most of the points were captured on the hillside, while down along the shore there really weren't too many points captured. PDAL now has a Python API available, and getting your point cloud data into a NumPy array is as easy as four lines of Python now: just open the pipeline, execute it, and then read it into the array, so it's very simple now to get your data into NumPy for analysis.
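Pulling the DEM example and the Python point together: the sketch below assembles the bare-earth pipeline as plain JSON using only the standard library, since the pdal Python extension may not be installed everywhere. The writers.gdal stage name and its options are assumptions in the style of current PDAL documentation (the talk used the points2grid writer):

```python
import json

# Bare-earth DEM pipeline: keep ground-classified points (Classification 2),
# then rasterize at one-meter resolution. Stage/option names are assumptions;
# check the writer documentation for your PDAL release.
pipeline = {
    "pipeline": [
        "input.las",
        {"type": "filters.range", "limits": "Classification[2:2]"},
        {"type": "writers.gdal", "filename": "dem.tif", "resolution": 1.0},
    ]
}

pipeline_json = json.dumps(pipeline, indent=2)
stages = json.loads(pipeline_json)["pipeline"]
print(stages[1]["limits"])  # → Classification[2:2]
```

With the pdal extension installed, the four-liner described in the talk is roughly: build a pdal.Pipeline from a JSON string like this, call execute(), and read the resulting NumPy arrays from its arrays attribute; consult the PDAL Python documentation for the exact API surface.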
We've also refreshed the PDAL documentation: it's in a new format with download options, and the content has been completely reorganized, so some things are a lot simpler to find. Recently Howard gave a workshop, and we have that on the website now, so you can download it and work through it on your own; it's a hundred-plus pages, using QGIS and Docker, and it goes through all the basic capabilities of PDAL, so you don't even have to pay for a workshop, you can just download it and run through it. We've also added a bunch of new tutorials to the website. PDAL source code releases will always be available, but the recommended way we have for people to get PDAL now is Docker; it's the fastest way to get PDAL: just docker pull the PDAL image, specify the release or leave it at latest, and you're going to get an up-to-date, runnable PDAL. We also have a dependencies Docker image that's really nice if you want to build your own PDAL, if you want to build in some custom arguments, custom options, different capabilities; all the dependencies are packaged together for you in a single Docker image, and you can just use that as the basis for your build. An RPM is available, and Debian unstable packages are available for PDAL. For Windows, you can check whether there's a build that works for you, and if you'd like to build PDAL for Windows, contact us as well. And that's it. OK, thank you, Mike, for this great
overview of PDAL.
Does the Docker image contain all of the dependencies, even strongly optional things like the points2grid library?

All the basic ones, yes. There are some things it doesn't contain, like the Oracle stuff, but PostgreSQL and pgpointcloud are there; anything that's publicly available is available in the PDAL Docker image. Things that are not redistributable, like the Oracle Instant Client, are not included.

One more question, concerning calculating the density of the hexagons: I had tried to do that, and it shows a visual representation, but can you also get the actual point density, say per meter squared?

The pdal info command will give you the point density per meter squared overall, for the whole file, not for individual cells. But you could make an image of the point density, because each one of those hexagons does have an area and does have a count, so you can calculate it yourself.

Thank you. First of all, it's amazing for me. Two quick questions. For the conversion to a DEM, when I have a bit of a sparse point cloud, is there some form of interpolation or something like that?

There is. Currently being worked on are some newer, improved digital elevation model calculations that are better at handling sparse data than the points2grid algorithm. Those will probably be in the 1.3 release; they may not be fully documented in 1.3 and will have alpha-level quality, so keep an eye on that, and make comments in the GitHub tracker where you find issues. The points2grid writer will do it, but it does have some issues sometimes.

The second is probably just a shot in the dark, but when you talked about coloring point clouds, is there an inverse operation, like getting a raster with the image pixels from a colored point cloud?

You could probably write out the XYZ plus RGB arrays and convert from that; basically you would have to convert it back to a grid format with points2grid and then colorize that from the XYZ and RGB. So there's nothing quick to do it, but you could do it manually.

Is there something like a 3D representation, a generalization of the boundary calculation?

There isn't currently. If you would like to submit a pull request, we can certainly take that under advisement and work on it; it's not something we currently have in the pipeline right now, but it's certainly something that could be added. OK, any other questions? Thank you, Mike.
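The "calculate it yourself" answer about per-cell density is plain arithmetic: each hexagon produced by the density command carries a point count and an area, so density is count over area. A sketch with made-up cell values:

```python
# Per-hexagon point density, as suggested in the Q&A above.
# The (count, area) values are illustrative, not real pdal density output.
cells = [
    {"count": 1200, "area": 400.0},  # hexagon area in square meters
    {"count": 300, "area": 400.0},
]

densities = [c["count"] / c["area"] for c in cells]  # points per m^2
print(densities)  # → [3.0, 0.75]
```

Rasterizing those per-hexagon values would give the density image the questioner asked about.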