SLAM C 06
Formal Metadata

Number of Parts: 76
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/48994 (DOI)
Series: SLAM and path planning, part 63 of 76
Transcript: English (auto-generated)
00:00
Now let's again have a look at the multiplication of two distributions. So we had one distribution centered at 400 with a half-width of 100 and another one centered at 410 with a half-width of 200, meaning something less accurate, and we multiplied those two and the outcome was as follows. And so we already noted that even though the second distribution is less accurate,
00:23
meaning it has a lower peak and it is wider, the result has a higher peak than our prior distribution. So the posterior has a higher peak than the prior. Now let's have a look at what happens if we move our measurement distribution. So let's move that to 500. Then the result looks like this.
00:41
So since the green distribution moves to the right, the resulting red distribution somehow leans towards the green distribution, and clearly this shape isn't a triangle anymore. Now let's move a bit more. Let's move to 550. Now it's very obvious that the resulting posterior isn't a triangle distribution anymore. But we also see, even though those two distributions tell us
01:04
that our robot is at different positions, one telling us it is at 400, the other telling us it's at 550, we end up with a final distribution which has a higher peak than either of those two distributions. So what we see here is that by adding this information from our laser scanner,
01:23
we have an information gain, even if the measured value differs from our previously assumed position. So now let's put together the two steps that we've learned so far. So first, we did a motion.
01:41
And you remember, we had this tree for explaining this, with three different outcomes. And then for every movement, we again had three different outcomes. So we ended up in this tree with five different outcomes. And so the probability for a certain outcome was p(x) = Σ_y p(y) · p(x | y): the probability of y times the probability that I go from y to x,
02:04
summed up over all y. Because in order to end up in this x, I can go either this way or that way or that way. And this is summed up here. So these are the probabilities p(y), and these are the probabilities p(x | y). And this was a convolution.
02:22
And in the second step, we made a measurement, and we found out that the probability for position x given a measurement c is a normalization constant times the probability of measuring c when we are at x, times the probability for x: p(x | c) = η · p(c | x) · p(x). And so we solved that using a multiplication. And we called that the prior and this one the posterior.
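As a minimal sketch of these two operations, assuming a discrete distribution is just a plain Python list of probabilities over positions 0, 1, 2, ... (the lecture's actual classes are richer than this), the two steps could look like:

```python
# Minimal sketch, not the lecture's actual code: a distribution is a
# plain list of probabilities over positions 0, 1, 2, ...

def convolve(move, belief):
    # Prediction step: p(x) = sum over y of p(y) * p(x | y),
    # where p(x | y) depends only on the offset x - y.
    result = [0.0] * (len(belief) + len(move) - 1)
    for y, p_y in enumerate(belief):
        for offset, p_move in enumerate(move):
            result[y + offset] += p_y * p_move
    return result

def multiply(belief, measurement):
    # Correction step: posterior = eta * p(c | x) * p(x), where the
    # normalization constant eta is 1 over the sum of the products.
    product = [b * m for b, m in zip(belief, measurement)]
    eta = 1.0 / sum(product)
    return [eta * p for p in product]
```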
02:43
And now, as our robot moves along, those two steps are always iterated. So we move, we measure. After measuring, we move. And indeed, normally they're interleaved. So in reality, we measure while moving. Now as we move on like this, we produce a dynamic Bayes network. So we come from a certain state, and then we are in a certain state, which I denote as x_{t-1}.
03:05
And then in this state, we perform a measurement, say c_{t-1}. Then we move on to the next state, x_t. And doing so requires us to give some input to the robot. For example, the speed settings for the motors. And this is control.
03:20
So control is normally expressed as u. So this is u_t. That's the control that we set in order to get to our state x_t. And in x_t, again, we measure. And so on. So if we are in a certain state at time t, for example, we will denote the probability of being in this state as our belief.
03:44
So the robot has a belief of being somewhere. So the probability of x_t is also termed the belief. And this is our belief after we move and after we have seen the measurement. And there's a second belief, namely, if we have moved but haven't yet seen our measurement.
04:03
And this is termed our belief with an overline. So now we'll have to use time indices here in order not to get confused about which x is which in all those equations. So let's write that down again with indices. In our movement step, we compute our belief overline of x_t using the total probability,
04:23
namely the probability that we end up in x_t, given that we were previously in x_{t-1}, and also given (this is different from the previous slide) the control u_t, times the probability that we were in the state x_{t-1}, which we now denote as the belief of being in x_{t-1}.
04:43
And this is summed up over all x_{t-1}. And in the next step, we compute our belief after incorporating the measurement, which is a normalization factor times the probability of the measurement c_t, given that we are at position x_t, times this belief overline that we just computed.
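Written out with these indices, the two steps just described are:

```latex
\overline{\mathrm{bel}}(x_t) \;=\; \sum_{x_{t-1}} p(x_t \mid u_t, x_{t-1})\,\mathrm{bel}(x_{t-1}),
\qquad
\mathrm{bel}(x_t) \;=\; \eta\, p(c_t \mid x_t)\,\overline{\mathrm{bel}}(x_t)
```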
05:01
And these are the two steps. So all we have to do is to iterate those two steps. So now we need to compute that for all x. So if we write it that way, we also have to add a for loop over all x_t. And this entire thing now is called a Bayes filter. So this is our procedure Bayes filter. And it has as parameters our old belief, our control, and our measurement.
05:24
And it returns our new belief. And this is very important: this is the base algorithm for any filter we will cover here. So for the histogram filter, which we actually already did, for the Kalman filter, and also for the particle filter. And now when we want to program this, we don't have to do very much.
05:40
We already did this. So this line here ends up being just a convolution. So programming this is just: the belief with the overline is, if we term this "move", the convolution of our move distribution and our old belief distribution, while this step is just a multiplication of the belief we just computed and our measurement.
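Put together, and reusing the convolve and multiply sketches from above, the whole procedure could look like this (a sketch, not the lecture's actual code):

```python
def bayes_filter(belief, control, measurement):
    # Prediction: the "belief overline" is the convolution of the
    # control (move) distribution with the old belief.
    belief_bar = convolve(control, belief)
    # Correction: multiply with the measurement distribution and
    # normalize (zip truncates to the common length of the two lists).
    return multiply(belief_bar, measurement)
```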
06:04
Now in the SLAM 6 histogram filter program, you will find an implementation of this Bayes filter in the form of a discrete Bayes filter, also termed a histogram filter. Now if you copy your convolve and multiply functions up here, then it's ready to be used. Let's look at what happens in main.
06:21
So the program is very short. Here it sets a start position at 10 and puts a unit pulse there, as we did before. The controls are: move 10 times to the right by 20, so this will be 30, and so on. And the measurements are generated assuming perfect measurements, meaning this somewhat elaborate code here just produces the list 10, 30, 50, and so on.
06:46
It just computes the cumulative sums of this list here, plus the added start point of 10. Now this down here is the main loop, and as you see, it's very short. We have a movement step and a measurement step, and both are identical except for the operation that they use.
07:03
The movement uses a convolution, whereas the measurement uses a multiplication. Now for the movement, we set up our control distribution, which means for a movement of 20 I set up a triangle with center at 20, and I use a half-width of 10. And after the movement, I plot the resulting belief, the belief with the overline, in blue.
07:25
And then after that, I model my measurement. The measurement tells me that I am now at, say, position 50. It's also a triangle distribution, and I'm using the same half-width for the triangle. Then I multiply the two distributions and plot the resulting belief
07:44
of the next step in red. So now running it, this is how it looks. We start with our unit peak, and the movement then propagates this into this triangle, which is then improved by the update. So in the update we get a higher peak, so the robot is more certain about its position.
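A self-contained reconstruction of such a main loop, under the assumptions above (plain-list distributions, the convolve/multiply sketches from before, and a hypothetical triangle helper that is not the lecture's actual API), might look like this:

```python
def triangle(center, half_width, size):
    # A normalized triangular pulse over positions 0 .. size-1.
    d = [max(0.0, 1.0 - abs(x - center) / half_width) for x in range(size)]
    s = sum(d)
    return [v / s for v in d]

size = 300
start = 10
belief = [0.0] * size
belief[start] = 1.0                       # unit pulse at the start position

controls = [20] * 10                      # move right by 20, ten times
measurements = [start + sum(controls[:i + 1]) for i in range(len(controls))]

for u, c in zip(controls, measurements):
    move_dist = triangle(u, 10, size)     # control distribution, half-width 10
    belief = convolve(move_dist, belief)[:size]   # movement step (prediction)
    meas_dist = triangle(c, 10, size)     # measurement distribution
    belief = multiply(belief, meas_dist)  # measurement step (correction)
```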
08:03
Now let's have a closer look. Starting from this triangle, this bell-shaped curve evolves, and we always see the same pattern. By our movement, we widen our distribution from this red one to this blue one. And then, by the measurement, we narrow our distribution again and get a higher peak.
08:21
Now, we used a half-width of 10 for both the control distribution and the measurement distribution. What happens if we make the control distribution much, much wider? Then we get the following result. After our first movement, we get a very wide distribution, and this is corrected by the measurement. Then, again, what we gained here by the measurement, we lose by the next movement, and so on.
08:45
And so, essentially, these triangle distributions are the distributions of our measurements, and the movement, since it is so inaccurate, plays no role. So what happens if we do it the opposite way? Then we get a result similar to the first one. However, the peaks of our distributions are lower.
09:03
So if we reduce our measurement accuracy, then the estimated accuracy of our robot is reduced too. Now remember earlier, when we didn't have the measurement steps, our accuracy decreased continuously. Now let's do a longer simulation. So let's multiply the width of our arena by 10 and also the number of control inputs by 10.
09:25
Let's run that. Now see here, this is a situation without measurements. We start with a unit pulse and then our distribution quickly gets wider and wider and the peak gets smaller and smaller. Now if we switch on our measurement, we see this is not the case anymore.
09:40
So we start with the peak, then it goes down, but then essentially the information loss that we have by our movement is in balance with the information gain that we get from our measurement. And so if the overall situation does not change, that is our robot always moves by the same amount and there's always a measurement to one object with a specified accuracy,
10:03
then this value will be stable. So after a short startup phase, our filter will become stable under those circumstances. Now remember, in our Bayes filter everything was formulated in a discrete way. So in order to compute our belief overline,
10:21
we computed the sum over all x_{t-1}, and then we computed our new belief by multiplying the probability of the measurement given the position with our belief overline. Now, seeing this discrete formulation, this belief is represented by some kind of an array.
10:41
And in our Python programming, we found a way of doing so by using a class that stores those arrays in an efficient manner, namely by storing the start index and only those values which are non-zero. So here's a loop, and here's a loop too, so there are two nested loops in our convolution, whereas in the computation of the posterior there's no inner loop,
11:02
but only this outer loop with the multiplication; this was our multiplication function. And so you see, the convolution is quadratic in the number of elements that have to be touched, whereas the multiplication is linear. But unfortunately, as our distributions get wider and wider, they might eventually cover the entire world our robot operates in.
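The sparse array class mentioned above could be sketched roughly like this (illustrative only; the lecture's actual class and method names are not shown in the transcript):

```python
class Dist:
    """Sketch of a sparse discrete distribution: only the offset of the
    first non-zero value and the list of non-zero values are stored."""

    def __init__(self, offset, values):
        self.offset = offset          # index of the first non-zero entry
        self.values = values          # the non-zero probabilities

    def value(self, index):
        # p(index) is zero outside the stored range.
        if self.offset <= index < self.offset + len(self.values):
            return self.values[index - self.offset]
        return 0.0
```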
11:21
So what shall we do? We might think, well, then we use large cells, so we need only a few of them. But there are certain problems with that regarding accuracy, because if our robot is here, we can't distinguish this position from any other position in the cell. So if we give it a movement command which is so small that the robot doesn't leave its cell,
11:41
it appears as if the movement wouldn't have any effect. In order to circumvent those accuracy problems, we would have to make small cells. But then, as you can imagine, we need lots of them, and sooner or later, we get a storage problem. And so it is a question of efficiency. If we are able to represent this belief in a different manner,
12:03
so that we can replace all this by an integral and return a belief in the end that also conforms with our original representation. And in order to explore this, I have provided you with the SLAM 6 E histogram filter, a cleaned-up version.
12:22
And so it is essentially the same program as our last one, but I rearranged the code so that it is simpler. So first of all, there are some helpers. All the plot routines are in this histogram plot helper, so they won't disturb us in the main code. Now, scrolling down to the main code, we have the following. Our arena has a width of 200, and with our dist, the distribution,
12:41
we can set a different distribution to be used throughout the entire program. And this time we use a triangle, as in our examples before. Then we start at a certain position given by this dist. So this means this is dist.triangle, centered at 10, with a half-width of 1. So it's a unit peak. And here are our controls and measurements.
13:00
So the controls are two distributions. The first one means we move forward by 40, and our distribution has a half-width of 10. And then we move forward again by 70, again with the same half-width of 10. And the measurements tell us that after we have been at 10 and moved forward by 40, we should actually be at 50, but our measurement tells us we are at 60, with a half-width of 10.
13:22
And then after we move again, we are at 120, but our measurement tells us we are at 140, and this time the accuracy is lower. And our main loop is as simple as that. We just call the histogram filter step with the old position, our control, and our measurement, and we obtain the new position. In this case, the histogram filter also returns the prediction.
13:44
This is not really necessary for the filtering, this is just for plotting it down here. And this is the call to the plot routine. So what does the histogram filter step do? Let's look here. It is as simple as that. So the only thing it does is one prediction, which is the convolution of our previous belief and the control,
14:01
and then a correction, which is just the multiplication of our prediction, which we just computed, and the measurement. That's all there is to do.
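In code, and again reusing the earlier plain-list sketches, that step could be as small as this (a sketch; the actual routine's name and signature are assumptions):

```python
def histogram_filter_step(belief, control, measurement):
    prediction = convolve(control, belief)          # prediction ("belief overline")
    posterior = multiply(prediction, measurement)   # correction
    return prediction, posterior                    # prediction returned only for plotting
```

Now let's run this. So this is what we get out for the two steps. Now let's zoom in. This is the first filter step. So the blue curve is our discrete distribution that we obtain after our first movement.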
14:20
And as expected, it is centered at 50. Then we also get our measurement, which is a triangular distribution centered at 60. And so our resulting posterior is this red distribution, which has its peak exactly in the middle between our two other distributions, because those distributions have the same width. So the robot's belief is as accurate as the robot's measurement,
14:42
and so the result is exactly in the middle. So let's move on. One more step, and it looks like this. In blue, again, is our belief overline, so after the movement. And now we get our measurement, and our measurement is still a triangular distribution. However, now we have chosen a width which is twice as large. And so our resulting posterior is here, close to our prior.
15:04
Now why is this the case? Because our prior is more accurate than our measurement, and so our posterior moves towards our prior. Now let's also have a look at the shape of those curves. Now we see we started with triangular distributions, we ended up in something else,
15:20
then we moved. And after the movement, our curve pretty much looked like a bell-shaped curve. And this is no surprise, because, if you remember, this curve is computed by a convolution. And the convolution of two triangular distributions actually leads to a binomial distribution. And so we end up here looking at binomial coefficients,
15:40
which means that this curve approximates a bell-shaped curve. So it looks pretty much like a Gaussian curve. Now let's try all this using Gaussian distributions. So all we need to do is change this to Gaussian, and then run it again. Now here's our new result. Again, let's zoom in. So this is after the first iteration.
16:01
And this very narrow distribution we start with is transformed after our first movement into this Gaussian shape. Then we have a second Gaussian shape, which is our measurement. And the outcome is something which looks pretty much like a Gaussian shape too. Now as we move on, our next distribution also looks like a Gaussian shape. And after we multiply it with our measurement, the result also looks like a Gaussian shape.
16:23
So we have now seen, if we start with a triangular shape, our outcome is not a triangular shape anymore. But it seems that if we start with a Gaussian shape, then both the convolution to obtain the blue curve and the multiplication to obtain the red curve seem to give us again a Gaussian shaped distribution.
16:41
Now what do you think? Is this true? So this here is a Gaussian. And multiplying it with another Gaussian again gives us a Gaussian. And then in the next step, when we move, we have this convolution, which then is an integral. This will lead to a Gaussian as well. What do you think? Is this true?
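For reference, the standard Gaussian identities behind this question (background facts about Gaussians, not derived in this video) are: the product of two Gaussian densities is again proportional to a Gaussian, and the convolution of two Gaussians is a Gaussian whose mean and variance are the sums:

```latex
\mathcal{N}(\mu_1,\sigma_1^2)\cdot\mathcal{N}(\mu_2,\sigma_2^2)\;\propto\;
\mathcal{N}\!\left(\frac{\sigma_2^2\mu_1+\sigma_1^2\mu_2}{\sigma_1^2+\sigma_2^2},\;
\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}\right),
\qquad
\mathcal{N}(\mu_1,\sigma_1^2) * \mathcal{N}(\mu_2,\sigma_2^2)\;=\;
\mathcal{N}\!\left(\mu_1+\mu_2,\;\sigma_1^2+\sigma_2^2\right)
```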