Geometrically guided and confidence-based denoising
Formal Metadata
Title: Geometrically guided and confidence-based denoising
Author: David Youssefi (CNES)
Number of Parts: 156
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/68472 (DOI)
Transcript: English (auto-generated)
00:02
Hello, everyone. My name is David Youssefi. I work at CNES, the French Space Agency. I'm a ground segment development engineer, and this presentation addresses the issue of noise in point clouds reconstructed from satellite imagery.
00:30
I will begin with an introduction about the context, about CO3D and the issue of noise in point
00:42
clouds. After that, I will talk about CARS and Pandora, two open source software packages integrated in the ground segment. I will then present the methodology of the method described in the paper, followed by the results, and finish with a conclusion about CO3D and the tools
01:07
we developed. CNES, in partnership with Airbus, has developed a new constellation of satellites. The mission is called CO3D. The aim of the mission is the reconstruction
01:30
of the Earth in 3D. We will produce it with four satellites. Each satellite has four bands,
01:41
blue, green, red, and near infrared, at 50-centimeter resolution. With these satellites, we will produce a global digital surface model at one meter ground sample distance; for information, at 15 meter and 30 meter resolution the DSM will be delivered as open data.
02:06
Speaking of open source, we also have open source software included in the ground segment. We have two tools. The first one is CARS, a satellite multi-view stereo framework, and
02:22
Pandora, which is integrated in CARS to perform the matching part. These two tools are Apache v2 licensed, and we think that worldwide production of this 3D information will make
02:45
a real contribution to the creation of digital twins. Why do we try to denoise point clouds? In satellite imagery, we have a small number of images, and we have lower resolution than drone and aerial photography.
03:08
We have a ripple effect on surfaces which are flat elements. The aim of the denoising
03:21
is to reduce the ripple effect without destroying the reconstructed elements. We try to smooth out flat elements, such as rooftops or roads, and we try to retain the sharp edges without destroying the buildings or the separation between buildings.
03:51
Now, I don't know if everybody knows how photogrammetry works, so I will quickly explain the principle. In satellite imagery, we have images from agile satellites, for
04:10
example. It's one satellite, and during the mission it captures a first image, and
04:21
then a second image; so it's the same satellite, but the two images are taken at two different times. In the CO3D mission, we will have a satellite constellation, so two images are taken at the same time to reconstruct moving elements.
04:49
The principle of photogrammetry is like our eyes seeing perspective. We have two or three images of the same scene, observed from different points of view.
05:04
We therefore need at least two images to reconstruct the surface in 3D. In satellite imagery, the difference in points of view is quantified by the B/H ratio. B is the base distance
05:22
between the two satellites, and H their altitude. Depending on this B/H, you can better reconstruct the streets, or be more precise on the heights of the buildings.
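As a worked example with illustrative numbers (not official CO3D values): for a baseline of B = 140 km between the two viewpoints and an altitude of H = 700 km,

$$\frac{B}{H} = \frac{140\ \mathrm{km}}{700\ \mathrm{km}} = 0.2$$

A smaller B/H sees into narrow streets more easily (fewer occlusions), while a larger B/H gives more precise building heights.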
05:48
We don't need just the images to recreate the Earth in 3D. We also need a geometric model to know the position of the satellite when it takes a picture. The geometric model is represented as rational polynomial coefficients (RPC).
06:06
It's a list of coefficients which provides a compact representation of this geometric model. Thanks to this RPC, we can know, for each pixel (line and column), the position
06:25
on the ground, draw, for example, the line of sight, and retrieve the reconstructed point.
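As a hedged sketch of the RPC idea (not the actual CARS implementation): an RPC maps normalized ground coordinates to normalized image coordinates as a ratio of two polynomials. Real RPCs use a 20-term cubic basis plus normalization scales and offsets; the 10-term basis and the dictionary keys below are illustrative assumptions.

```python
import numpy as np

def monomials(lon, lat, h):
    # Truncated basis: real RPCs use 20 cubic monomials in (lon, lat, h).
    return np.array([1.0, lon, lat, h, lon * lat, lon * h, lat * h,
                     lon ** 2, lat ** 2, h ** 2])

def ground_to_image(rpc, lon, lat, h):
    """Map a normalized ground point to a normalized (row, col) pixel.
    rpc: dict of coefficient arrays (hypothetical key names)."""
    m = monomials(lon, lat, h)
    row = (m @ rpc["line_num"]) / (m @ rpc["line_den"])
    col = (m @ rpc["samp_num"]) / (m @ rpc["samp_den"])
    return row, col
```

To draw the line of sight of a pixel, one can invert this mapping at two different heights and join the two resulting ground points.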
06:47
I will now explain the 3D reconstruction step by step. The first step is to resample the images into epipolar geometry; it's like rotating or resizing the images to allow us to search for the displacement of the pixels along the lines. After that, we
07:09
match, for each pixel in the left image, the corresponding pixel in the right image, and we obtain a disparity map which contains all the shifts between the two images.
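A minimal sketch of what this matching computes (Pandora implements richer cost functions such as Census or ZNCC, plus optimization and filtering steps; this sum-of-absolute-differences version only illustrates the cost-curve idea):

```python
import numpy as np

def cost_curve(left, right, row, col, dmin, dmax, w=5):
    """SAD cost between a left patch and right patches shifted along
    the same epipolar line, one cost per candidate disparity.
    Assumes float images and a window that stays inside both images."""
    half = w // 2
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    costs = []
    for d in range(dmin, dmax + 1):
        cand = right[row - half:row + half + 1,
                     col + d - half:col + d + half + 1]
        costs.append(np.abs(ref - cand).sum())
    return np.array(costs)

# The retained disparity is the shift with the lowest cost:
# disparity = dmin + np.argmin(cost_curve(left, right, r, c, dmin, dmax))
```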
07:26
These shifts are converted into positions. Thanks to these positions in the two images, and thanks to the geometric models, we can draw lines in the directions of the observed pixels.
07:45
When we draw these two lines, we only have to intersect them to obtain the 3D position of the pixel. All the positions we succeed in intersecting allow us to obtain a point cloud:
08:07
the longitude, latitude, and height of each observed pixel.
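A minimal sketch of this intersection. In practice the two lines of sight rarely intersect exactly, so one takes the closest point between them; the midpoint solution below is an illustrative choice, not necessarily the CARS implementation.

```python
import numpy as np

def intersect_lines_of_sight(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D lines,
    each given by an origin o and a direction d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Normal equations for min_{t1,t2} |(o1 + t1*d1) - (o2 + t2*d2)|^2
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(a, [b @ d1, b @ d2])
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2
```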
08:33
This point cloud is a non-regular list of positions, so to be easily usable, for example in QGIS 3D, we need to convert it into a digital surface model. A digital
08:42
surface model is a regular grid with fixed spacing in X and Y. For each cell of the grid, we compute the average of the points that fall into it, and we obtain the altitude of each cell. We obtain a height map, but we can also
09:13
compute a mean of the colors to obtain another map, which allows us to understand each element that is reconstructed in 3D.
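A minimal sketch of this rasterization step (grid origin, size, and resolution are assumed inputs; the colors can be averaged per cell in exactly the same way):

```python
import numpy as np

def rasterize_dsm(points, res, x0, y0, nx, ny):
    """Average point heights per grid cell to build a DSM.
    points: (N, 3) array of (x, y, z); res: cell size in meters."""
    ix = ((points[:, 0] - x0) / res).astype(int)
    iy = ((points[:, 1] - y0) / res).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    flat = iy[ok] * nx + ix[ok]
    sums = np.bincount(flat, weights=points[ok, 2], minlength=nx * ny)
    counts = np.bincount(flat, minlength=nx * ny)
    dsm = np.full(nx * ny, np.nan)          # NaN where no point fell
    filled = counts > 0
    dsm[filled] = sums[filled] / counts[filled]
    return dsm.reshape(ny, nx)
```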
09:29
Now, I will talk about the methodology to denoise the point cloud and the proposed method. We use bilateral filtering. This
09:48
is the principle of bilateral filtering: we have a point to denoise, and we have neighboring points in the scene. I drew a simulated profile in yellow. To denoise the point cloud,
10:13
we look at all the points of the neighborhood. Thanks to the distances, we compute a weighted mean,
10:27
and we obtain a plane by regression, so we know in which direction the point has to move to be denoised. Thanks to the normal distance, we can move the point to its new
10:45
position. You can observe that the regression plane is not aligned with the roof, because we use all the points, weighted by distance only. Points that do not belong to the roof are used during
11:06
the denoising, which biases the plane for this roof. We can iterate this bilateral filtering, and we tried to add some constraints to the bilateral filtering to improve it.
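The core step can be sketched as follows. This is a minimal, distance-only version of the bilateral idea described above; the proposed method additionally weights by color and confidence and adds the line-of-sight constraint. The Gaussian weight, sigma_d, and the SVD-based plane fit are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def bilateral_step(p, neighbors, sigma_d=1.0):
    """Weight neighbors by 3D distance, fit a plane by weighted
    regression, then move p along the plane normal."""
    d = np.linalg.norm(neighbors - p, axis=1)
    w = np.exp(-(d / sigma_d) ** 2)                  # distance weights
    centroid = (w[:, None] * neighbors).sum(0) / w.sum()
    centered = (neighbors - centroid) * np.sqrt(w)[:, None]
    # Plane normal = right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Move p by its signed normal distance to the fitted plane
    return p - np.dot(p - centroid, normal) * normal
```

Iterating this step over all points gives the plain bilateral filtering that the constraints below try to improve.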
11:25
The first constraint is about the so-called ambiguity concept. The concept of ambiguity is implemented in the Pandora tool. Pandora is the tool which performs the matching between
11:46
the two images. For each pixel in the left image, we compare all the pixels in the right image, and we obtain a cost curve profile. For pixels which have only one
12:04
clearly corresponding pixel, the profile is non-ambiguous, but in homogeneous areas, for example, you can have ambiguous profiles. This cost curve is transformed into an ambiguity profile
12:22
thanks to this equation, and the integral of the resulting curve gives us the ambiguity of each point; or, if we compute the confidence formula,
12:44
we can obtain the confidence of each point and know for which points we are sure to have a good height, and which pixels are more ambiguous.
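A hedged sketch of the ambiguity idea (Pandora's exact equation and normalization may differ; `etas` is an assumed list of cost thresholds):

```python
import numpy as np

def ambiguity(cost_curve, etas):
    """For each threshold eta, count how many disparities have a cost
    within eta of the minimum, then sum over all thresholds. A sharp,
    unique minimum gives a small value; a flat curve (homogeneous
    area) gives a large one."""
    c_min = cost_curve.min()
    return sum(int((cost_curve <= c_min + eta).sum()) for eta in etas)

def confidence(cost_curve, etas):
    """Normalized so that a completely flat cost curve gives 0."""
    amb_max = len(cost_curve) * len(etas)
    return 1.0 - ambiguity(cost_curve, etas) / amb_max
```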
13:00
We also use the color of the points, together with the confidence, to compute the weights of the denoising, and so we obtain a new, better plane estimation,
13:21
and the normal distance lets the points move in the right direction. We add another constraint: a projection onto the line of sight. A point obtained through photogrammetry may, after denoising, no longer lie on its line of sight, but that is impossible
13:49
because the point was obtained by intersecting lines of sight. By projecting it back, we obtain a more realistic solution, and we can iterate this method.
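This constraint is simple to sketch: after the bilateral step moves a point, project it orthogonally back onto its 3D line of sight. The function below is a minimal illustration; `origin` and `direction` are assumed to come from the geometric model.

```python
import numpy as np

def project_onto_line_of_sight(p, origin, direction):
    """Orthogonal projection of p onto the line through `origin`
    with direction `direction` (the point's line of sight)."""
    d = direction / np.linalg.norm(direction)
    t = (p - origin) @ d
    return origin + t * d
```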
14:10
For the results, we set up an experiment with tri-stereo Pléiades images over Nice and Montpellier,
14:21
and we have various landscapes, mostly urban oriented, and we use LiDAR HD as ground truth. We show the digital surface model and the LiDAR HD on this slide. We performed an ablation study: first, the initial point cloud, a bit noisy.
14:48
If we simply filter the point cloud, we get very smooth roofs and streets, but we lose all the details of the roofs; if we use colors during the bilateral filtering,
15:08
the textures are recovered. You can also see, with the projection, better roof reconstruction and building separation. We also tried to add the ambiguity, and thanks to the ambiguity
15:29
building separation and details in the street are retrieved. We can see in the image that we lost the separation of the roofs, and thanks to the ambiguity we retrieve
15:45
it; and finally, the result of the complete method. We also tried to quantitatively evaluate our
16:02
method. It was very difficult to quantitatively prove the results we observe qualitatively, but we see that with the method we obtain better results: we have more points close to the
16:25
reference, and with iterations the method is more robust. Conclusion and perspective: we propose a new method built upon bilateral filtering.
16:44
We integrate the so-called ambiguity provided by Pandora, and we respect the acquisition conditions to obtain a new, more realistic point cloud. We need to continue to adjust
17:07
the contributions of the different constraints: the distance, the membership to the same object (we use color for that, but we could use a classification, for example), and the
17:21
ambiguity. The classic quantitative metrics do not show the benefit of imposing a strong constraint on point displacement, but we think that the qualitative analysis
17:42
seems to show good results, and we must improve the metrics to prove it quantitatively. A last slide about the CO3D mission: this mission will be launched in 2025, producing a
18:04
DEM with one-meter accuracy. We will provide the 15 meter and 30 meter DEMs freely
18:22
in open access. You can find all the tools I presented, like CARS to reconstruct the surface and Pandora for matching, and we also have two other software tools: Bulldozer, which is a DSM-to-DTM converter, and demcompare, which allows you to compare your DEM to a
18:50
reference, LiDAR for example. Thank you for your attention, and I will take any questions.
19:09
Yes, so any questions for David? We have around five minutes.
19:23
Hi, thanks for the talk. For the points that you want to denoise, how do you define their neighborhoods? I define the number of neighbors used.
19:50
It's another parameter of the method. We use a KD-tree to find the neighboring points, but with the weighted distance, for example with color,
20:08
etc., even if we have a big neighborhood, the weights will decrease. It's a computational
20:24
consideration to choose the size of the neighborhood.
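As a hedged illustration of such a neighborhood query (assuming SciPy is available; the point cloud and parameter values are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 3)         # toy point cloud
tree = cKDTree(points)

k = 50                                    # neighborhood size: a method parameter
dists, idx = tree.query(points[0], k=k)   # k nearest neighbors of one point

# Even with a large k, distance-based weights shrink the influence
# of far-away neighbors, as mentioned in the answer above.
sigma_d = 0.05
weights = np.exp(-(dists / sigma_d) ** 2)
```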
20:46
Have you explored the possibility to adapt more classical algorithms, like anisotropic diffusion, for this kind of data? Something from the
21:08
anisotropic diffusion or noise-removal algorithms, have you explored this kind of thing? We explored diffusion of the points, because for the colors, for example, we have to make
21:27
some adaptation to be sure, and before that, in the point cloud fusion in CARS, we concatenate the point clouds before the denoising. I don't know if I
21:50
answered the question. These are Python libraries and they can be installed with a simple pip install,
22:20
so it's very simple to install them. We did a workshop about it with Dimitri
22:28
in a Colab notebook, and it's very simple to install.