Open Source Photogrammetry with OpenDroneMap

Video in TIB AV-Portal: Open Source Photogrammetry with OpenDroneMap

Formal Metadata

Title: Open Source Photogrammetry with OpenDroneMap
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
OpenDroneMap (ODM) aims to be a full photogrammetric solution for small unmanned aircraft (drones), balloons, and kites. ODM is a tool for processing highly overlapping unreferenced imagery, turning unstructured data (simple photos plus GPS) into structured data, including colorized point clouds, digital surface models, textured digital surface models, and orthophotography. This session will act as an introduction to OpenDroneMap, give an overview of the current status of the project, detail the anticipated next steps, and explain how you can participate as a user and/or developer. For ODM, 2016 will see smoothed texturing, denser and more accurate point clouds, and other key components of the project's maturation. Find out how you can participate.
Keywords: Cleveland Metroparks
Ladies and gentlemen, may I ask for your attention: it's 11 o'clock and we are about to start. I'll be chairing this session. Please welcome our first speaker, Dakota, a developer for the Cleveland Metroparks, who will tell us about OpenDroneMap.

Thank you for the introduction. So, what is OpenDroneMap? A few years ago Stephen decided that we needed an open source solution to photogrammetry.
At the time there was only a hodgepodge of different tools that you could sort of chain together yourself; nothing was fully automated or fully integrated. So he started putting the toolchain together with a control script, and it does matching between images, builds a digital surface model, and so on.

Cleveland Metroparks uses this open source toolkit for processing aerial drone imagery to more comprehensively survey and evaluate the ecological health of the 22,000 acres of public forests, wetlands, and beaches that it stewards. Using digital photographs captured from small aerial drones flying overlapping patterns over a target site, OpenDroneMap finds matching features in each of the overlapping images in order to reconstruct the world as a 3D point cloud. The resulting mesh is then textured using the original images. Finally, an orthophoto is generated from the GPS data and the textured mesh. As an open source project, it depends on contributors, users, and developers like you: clone the repo, try it out, and contribute back. For us, OpenDroneMap has been a powerful tool for evaluating streambank erosion, tracking the spread of invasive species, and measuring native vegetation. Find out how you can participate in the project as a tester, user, or developer.
So you can see there are a lot of users and plenty of ways to use it. Like I said, it started out as a sort of bundle of tools, originally built around Bundler. We have since replaced many of those components, and we replaced the original control script with a Python framework; so, in the tradition of the Ship of Theseus, if a ship has had all of its boards replaced, is it the same ship?

What comes out of it? Point clouds, textured meshes, and orthophotos. For example, this point cloud is a small building on a property in Ohio: many, many points generated by dense matching over it. And from raw aerial images we can produce this completed mosaic of an old farm field in our Hinckley Reservation in the Metroparks.

I'll talk quickly about what you need to get started, then we'll do a light technical dive, and then talk about the broader ecosystem of OpenDroneMap: what platforms you would fly to capture these images, and what those images need to look like to go into our software.

Besides just flying a kite with a camera on it, there is a pretty large kite and balloon aerial photography movement, from Public Lab and others. Here you have a balloon rig: basically a helium balloon with an attachment that keeps the camera stable, and it works really well for creating stable images, though one drawback is that it takes quite a bit of effort to reel it back in. Here is a 3D-printed rig with an Arduino: it rotates and pans, triggering the camera every second, so you get these spherical photo sets we can use as input. We also have rotorcraft, which can cover a small area that is really hard to land in.
There is also a small fixed-wing, if you've got a big landing area; this one you'd probably recognize, and these completely automated systems are really nice and easy to use. And of course there are the larger, higher-end aircraft: ours is the E384, a larger fixed-wing.

What these drones, these unmanned aerial systems, are capturing are regular photos. They have Canon cameras in them, or other cameras, so what OpenDroneMap takes as input is just regular photos: RGB, or you can put filters on these cameras for near-infrared or otherwise. We also need a high level of overlap and sidelap between the photos: at any given point on the ground you want at least three photos overlapping, so aim for about 70% overlap and sidelap. If you've got flight automation and can build a grid, you can usually select how much overlap you want. You also want somewhere greater than 50 images; that's not a hard requirement, but that way you're able to get nice, good results.

Now we'll go into a little bit of what each section of the toolchain does, how we get from beginning to end, from photos to orthophoto. To start, there's structure from motion. When we replaced Bundler we worked with Mapillary, who do crowdsourced street-level imagery, and it's their OpenSfM software that they helped integrate into OpenDroneMap. You might think street-level imagery and drones are two completely different things, but the core concepts are the same: take the photos, find tie points between each photo, and build up a sparse point cloud.
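The overlap and sidelap figures just mentioned translate directly into exposure spacing and flight-line spacing. A minimal sketch of that arithmetic; the function names and the 100 m by 75 m ground footprint are my own illustration, not part of ODM:

```python
def shot_spacing(footprint_along_m: float, front_overlap: float) -> float:
    """Distance to travel between exposures for a given forward overlap."""
    return footprint_along_m * (1.0 - front_overlap)

def line_spacing(footprint_across_m: float, sidelap: float) -> float:
    """Distance between adjacent flight lines for a given sidelap."""
    return footprint_across_m * (1.0 - sidelap)

# Hypothetical nadir camera with a 100 m x 75 m ground footprint at 70%/70%:
print(round(shot_spacing(75.0, 0.70), 1))   # 22.5 (m between exposures)
print(round(line_spacing(100.0, 0.70), 1))  # 30.0 (m between flight lines)
```

Pushing overlap from 50% to 70% roughly halves the spacing in each direction, which is why high-overlap missions need so many more images.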
From that sparse reconstruction we are able to rebuild a model of the world in 3D from all these photos. The little white squares are the photos, and each dot is a tie point between different photos. From there we create a dense point cloud. We currently use the CMVS/PMVS solution; we're working on updating that right now and are looking into semi-global matching. Densification is probably the most costly step as far as resources go, RAM and CPU, mostly RAM. We get something like this from CMVS; the advantage of using something like semi-global matching, and we'll probably be working on a custom solution for it, is that you get much denser point clouds, orders of magnitude more points.
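Whichever matcher does the densification, PMVS-style patch expansion or semi-global matching, depth is ultimately recovered from parallax between views. A toy rectified-stereo version of that relation, as a sketch only; the names and numbers are illustrative and not ODM's API:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Rectified-stereo depth: Z = f * B / d.

    Dense matchers must estimate a disparity like this for every pixel,
    which is why densification dominates RAM and CPU cost."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 2 m baseline, 40 px disparity -> 50 m depth:
print(depth_from_disparity(1000.0, 2.0, 40.0))  # 50.0
```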
You can hardly tell those are individual points. So then we have to create the mesh, and from the mesh we create the textured mesh. The untextured mesh is just triangles, and it looks a little soft around the edges, but once you texture it, it still looks pretty great. A lot of photogrammetry software struggles with meshing; we are looking into edge detection and surface detection so that we can improve this. Lastly, well, second to last: we just implemented a new texturing algorithm using mvs-texturing, which does three things pretty well: color correction, seam-line correction, and consistency checking. If I zoom in you can see very distinct seam lines and color differences; it fixes the seams along with the color correction. And for consistency checking: you can see something there that's not supposed to be there, and the texturing check catches that. It fixes those kinds of artifacts, along with seam leveling and so on.
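mvs-texturing's global color adjustment solves a proper least-squares seam-leveling problem; as a much smaller illustration of the same idea, one can match the per-channel means of two images over their shared overlap region. Everything below is a hypothetical sketch, not the mvs-texturing API:

```python
def gain_compensation(ref_overlap, src_overlap):
    """Per-channel gains that bring src's overlap-region means to ref's.

    Each argument is a list of (r, g, b) pixels sampled from the overlap."""
    gains = []
    for c in range(3):
        ref_mean = sum(p[c] for p in ref_overlap) / len(ref_overlap)
        src_mean = sum(p[c] for p in src_overlap) / len(src_overlap)
        gains.append(ref_mean / src_mean)
    return gains

def apply_gains(pixel, gains):
    """Apply per-channel gains to one (r, g, b) pixel, clamped to 255."""
    return tuple(min(255, round(v * g)) for v, g in zip(pixel, gains))

# src is darker in red and brighter in blue than ref over the shared region:
gains = gain_compensation([(100, 100, 100)], [(50, 100, 200)])
print(gains)                               # [2.0, 1.0, 0.5]
print(apply_gains((50, 100, 200), gains))  # (100, 100, 100)
```

A simple gain match like this removes the "blobs of dark and light" between images shot under changing cloud cover, though it does nothing about geometric seams, which is why seam-line optimization is a separate step.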
OK, so here in our example with OpenDroneMap, this is a prime example of our old texturing. You can see one of the center images is definitely off; it's not georeferenced properly, probably from GPS interference. All of the cameras we use here have internal GPS, and it's not that great: usually good, but sometimes it can be off, so there's now a consistency check for that as well. You can also see the seam line here, and the need for color correction: it was a partially cloudy day, so you get sort of blobs of dark and light, and the new texturing fixes that and creates a nice, consistent result.

Quickly, about accuracy. We flew the drone indoors to get around the FAA regulations, which I think on the 29th are going to change, and that's going to be very good for commercial operations in the US. We flew inside an indoor football field, had a professional surveyor take 50 points, and I did a quick accuracy analysis of how well what our drones and software build compares to other photogrammetry software. It came out really well: in these areas it's creating maps with 2, 3, and 4 centimeter accuracy, and resolutions of a few centimeters per pixel. So these maps are coming out with very good accuracy.

Quickly talking about the broader ecosystem: we want OpenDroneMap to be an interoperable solution. We want to be able to seamlessly move the maps created with OpenDroneMap into OpenAerialMap, and we are working with the portable OpenStreetMap effort to be able to run OpenDroneMap in low-tech areas and have almost-real-time mapping for the people using these open source tools.
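The centimeters-per-pixel resolutions mentioned above follow the standard ground sample distance relation for a nadir photo. A quick sketch with made-up camera parameters, not the cameras from the talk:

```python
def ground_sample_distance_cm(sensor_width_mm: float, focal_length_mm: float,
                              image_width_px: int, altitude_m: float) -> float:
    """GSD (cm/px) = (sensor width * altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical camera: 10 mm sensor, 5 mm lens, 4000 px wide, flown at 100 m:
print(ground_sample_distance_cm(10.0, 5.0, 4000, 100.0))  # 5.0 cm/px
```

Halving the altitude halves the GSD, so the 2-4 cm/px maps described in the talk imply fairly low flights or long lenses.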
Getting into the future of OpenDroneMap: we talked about dense point cloud improvements, but we also have a pull request open right now for video input. So it's not just JPEGs; you can sort of roll video on your drone, and the goal of this is eventually near-real-time processing, building up the sparse point clouds step by step as images come in, for real-time applications, crisis mapping, and so on. And lastly, we are working on a graphical user interface. Right now OpenDroneMap is a command-line tool, and an earlier talk here mentioned needing geospatial applications that are not just for experts but for everyday folks. Being a command-line tool presents a barrier, so we're working on a web-based interface for uploading photos, having them processed, and then being able to move the results to OpenAerialMap or wherever you want.

And finally, just to give an idea of how drones can be used for mapping as far as resources go: these are two of our drones. The Iris+ is the quadcopter, and the E384 is the big fixed-wing.
The E384 can fly for about 45 minutes to an hour per flight, and to map the entirety of San Francisco would take about 66 hours, or 33 flights, at 60% overlap: definitely intensive, but doable. Cleveland has much more urban sprawl, so it would take more flights, but it's still doable. It does get pretty intense as far as the number of flights and the number of battery charges you need.
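The San Francisco estimate is straightforward coverage arithmetic. A back-of-envelope reconstruction, where the roughly 121 km² city area and roughly 3.7 km² covered per flight are my assumed inputs chosen to reproduce the talk's 33-flight figure, not numbers from the talk itself:

```python
import math

def flights_needed(area_km2: float, coverage_km2_per_flight: float) -> int:
    """Whole flights required to cover an area at a given per-flight coverage."""
    return math.ceil(area_km2 / coverage_km2_per_flight)

def total_hours(flights: int, hours_per_flight: float) -> float:
    """Total field time including turnaround between flights."""
    return flights * hours_per_flight

# ~121 km2 of city at ~3.7 km2 per flight, ~2 h per flight cycle:
print(flights_needed(121.0, 3.7))  # 33
print(total_hours(33, 2.0))        # 66.0
```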
Here are the links: the GitHub repository, where development happens, and our chat, the second-to-last link, where a lot of the discussion happens. Please join us and ask questions; you can find me on Twitter, and Stephen is on Twitter as well. And of course, now I'm happy to take questions.
Question: Is it possible to add ground control points? Answer: Yes, it's already integrated. You basically provide a text file in a specific format with each ground control point, and you can use that in lieu of relying on the GPS embedded in the images.

Question: Is it also possible to export the point clouds? Answer: Yes, in multiple formats: .ply, and .xyz I believe, and LAS as well.

Question: Thank you. I have two questions: are you using the GPU, and how big a project can you handle with OpenDroneMap? Answer: Using the GPU is definitely a goal for us in the future, specifically with the semi-global matching improvements. Densification is easily the most time-intensive step, something like 80% of the time cost of OpenDroneMap, and that's where we'll be using GPU processing. As for size: I have a server with 24 cores, and I can run 700 images in about 2 hours, so somewhere around there, and we're always trying to improve it.

Question: On creating production-quality maps: it seems like one of the big pitfalls with current photogrammetry is around water and the kinds of artifacts you get there. In light of crisis use cases in coastal areas, do you have any plans for features like user-generated polygons to do flattening, anything like that? Answer: No, not right now, though of course we welcome improvements. It's really not an issue with the photogrammetry software but with the images themselves; we are victims of refraction off the water, but we are looking for ways to improve that, for sure.

Question: Do you use the structure from motion results to classify the points, for example ground versus non-ground? Answer: We don't have that right now, but it would be a great addition, especially if we want to do more edge detection; just deciding what is building and what is tree would be really helpful in that regard.

Question: Do you have an example of software which can already do that? Answer: Not that I know of off-hand.

Question: I want to ask about scalability: when you have something like San Francisco, with tens of thousands of photos, can the project currently handle it, or what are the ways to go about it? Answer: I would say if you're doing something as large as a city, you would probably have to manually chunk the city up and then use some other tool to merge the rasters together afterwards.

Question: So you would need a processing service of some sort, because you have to think about processing all those images. Answer: Yes, and I think that's the real reason for the web-based interface: it feeds into distributed processing. Maybe we would not offer a hosted service ourselves, but we want this web service to be available for people to host their own.

Thank you very much.
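The ground control point file mentioned in the first question is, roughly as documented for ODM, a plain text file: a projection header line followed by one line per GCP observation (geo X, geo Y, elevation, pixel x, pixel y, image name). A small writer sketch; the coordinates, projection string, and file name below are hypothetical examples:

```python
def write_gcp_file(path, projection, observations):
    """Write an ODM-style gcp_list.txt: a projection header, then one line
    per observation as: geo_x geo_y geo_z pixel_x pixel_y image_name."""
    with open(path, "w") as f:
        f.write(projection + "\n")
        for geo_x, geo_y, geo_z, px, py, image in observations:
            f.write(f"{geo_x} {geo_y} {geo_z} {px} {py} {image}\n")

# One surveyed point observed in one image (made-up values):
write_gcp_file(
    "gcp_list.txt",
    "WGS84 UTM 17N",
    [(431253.6, 4566987.2, 274.1, 1029, 2043, "IMG_0001.JPG")],
)
```

Each physical GCP typically appears in several images, so in practice the same surveyed coordinates repeat across multiple lines with different pixel positions and image names.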