SLAM F 04
Formal metadata
Title: SLAM F 04
Series: SLAM and path planning
Number of parts: 76
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You may use, copy, distribute, and make the work or its content publicly available in unchanged form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers: 10.5446/49027 (DOI)
SLAM and path planning, part 30 of 76
Transcript: English (automatically generated)
00:00
Now let's have a look at extended Kalman filter SLAM. And fortunately, we have implemented the extended Kalman filter already. So first let's have a look at the prediction step. As we know, in the prediction we computed our new mu from the possibly non-linear function g, which took the old mu and our control. And then we computed our predicted covariance matrix as G t,
00:22
where G was the Jacobian of g, times our old covariance matrix, times the transpose of G, plus the noise from our control, which was a matrix V; V in turn was the partial derivative of g with respect to the control, times the covariance of the control, times the transpose of V.
00:40
And this covariance was actually set up from the variance in the left and right control. So we assume the left and right noise to be independent. And the covariance matrix actually only contains the two variances. And now this function g, which we have set up, was dependent on x, y, and the heading of the robot, and the left and right control.
01:01
So this is the state mu, whereas this is the control, u. And this was the old x, y, and theta, plus those terms. Fortunately, we programmed all of that already. And from those terms, we obtained the derivative with respect to the state, which was a three times three matrix, with ones on the main diagonal, and two terms here.
01:21
And we had two cases, if the left control is different from the right control, or if left and right control are equal. And so those two cases led to different values here, but still the structure of the matrix was the same. And in the same manner, we computed v, partial derivative with respect to control, and this turned out to be a three times two matrix of non-zero values.
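As a reminder, the prediction step just recapped can be sketched in NumPy like this (a sketch only; the motion model and Jacobian callables here are placeholders, not the lecture's actual code):

```python
import numpy as np

def ekf_predict(mu, Sigma, control, g, G_of, V_of, var_lr):
    """One EKF prediction step (sketch): mu' = g(mu, u) and
    Sigma' = G Sigma G^T + V Sigma_ctrl V^T, where G = dg/dstate,
    V = dg/dcontrol, and Sigma_ctrl is diagonal because the left and
    right control noise are assumed independent."""
    G = G_of(mu, control)         # 3x3 Jacobian with respect to the state
    V = V_of(mu, control)         # 3x2 Jacobian with respect to the control
    Sigma_ctrl = np.diag(var_lr)  # variances of the left and right control
    mu_new = g(mu, control)
    Sigma_new = G @ Sigma @ G.T + V @ Sigma_ctrl @ V.T
    return mu_new, Sigma_new
```

Any concrete g, G_of, and V_of with these shapes can be plugged in; the structure of the update is what matters here.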
01:44
Now let's have a look at what happens to g, if we augment our state with the positions of our landmarks. Well then our new state will be x, y, theta, and the first three lines will be exactly the same as previously, but in addition, we have x1 here, and y1, and x2, and y2, and so on.
02:03
Because as our robot moves, the state of the robot is changed, but the state of our landmarks will not be changed. They will not move just because the robot moves, unless of course the robot bumps into one of them. So in general, our control moves the robot, but it does not induce any change in the position of the landmarks.
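In other words, with the state laid out as [x, y, theta, x1, y1, x2, y2, ...], the augmented motion model might be sketched like this (g_robot is a placeholder standing in for the old three-component function g):

```python
import numpy as np

def g_augmented(state, control, g_robot):
    """Apply the robot motion model to the first three state entries only.
    The landmark coordinates (the remaining 2N entries) pass through
    unchanged, since the control does not move the landmarks."""
    new_state = state.copy()
    new_state[0:3] = g_robot(state[0:3], control)
    return new_state
```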
02:21
So what does this mean for our matrix G? Well, G is the partial derivative with respect to the state, and the state now is this, whereas this is the control. In our case, G will be a seven times seven matrix. And so for the first entry, we will have to compute the derivative with respect to x of the first component of g, and this is, as we see, one.
02:42
Now the derivative with respect to y is zero, and the derivative with respect to the heading is non-zero in general, and it's exactly the value that we had on the last slide. And so for the first three derivatives, we get exactly the same values, which we had earlier, for the case of the extended Kalman filter, without estimation of the landmarks.
03:02
So this three times three matrix is our old matrix. And we computed that already, and we even programmed that already. Now what happens here? Well, the partial derivative of the first component with respect to x1 is zero, because x1 is not present in this formula. So there will be zeros here, and in fact also here, and here. So this is all zeros.
03:21
Now what happens along the main diagonal? Well, the partial derivative of this component with respect to x1 is one. So we will have ones along the main diagonal, and zeros elsewhere. And this block will of course also be zero. So in order to augment our state with all the variables for the positions of the landmarks,
03:41
we will have to add those equations. So those equations just tell us to keep the old values constant. So as we see, we do not have to modify g at all. The function g just modifies the first three values of our state. And now for our Jacobian matrix G, the derivative with respect to the state, this means we have to place our old three times three matrix G into the upper left corner,
04:04
and we have to fill the lower right submatrix with an identity matrix, and everything else is zero. And so this is a pretty simple modification of our algorithm. Now we still have to check what happens with R. For R, we have the following: R equals V t times the covariance of our control times the transpose of V t.
04:24
And as we know, the matrix V is the partial derivative of g with respect to the control. So now, remember our g was x, y, theta, and in addition, x and y of landmark 1, landmark 2, and so on, plus those terms we had earlier, where the controls l and r only influence the first three terms.
04:45
So consequently, this Jacobian is non-zero for the first three terms, and zero below, because l and r are not influencing the lower four components here. And so this is a 7 times 2 matrix. Now sigma control is the covariance of the left and right control,
05:02
and V t transposed is just the transpose of this matrix. And so if we multiply these out, we see this is a 7 times 7 matrix, where the first three rows and the first three columns are non-zero in general, and the remaining elements are all zero. And this part of the matrix is actually our old matrix V t sigma control V t transposed.
05:24
So old meaning the matrix we obtained for our extended Kalman filter without the estimation of landmarks. So this is our matrix R t. And now this is a pretty comfortable result. So our old g is replaced by our new g, but there's nothing to do.
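A sketch of both embeddings (illustrative names, assuming NumPy):

```python
import numpy as np

def augment_G(G3, n_landmarks):
    """Full Jacobian: old 3x3 G in the upper-left corner, ones on the rest
    of the main diagonal (the landmarks are constant), zeros elsewhere."""
    G = np.eye(3 + 2 * n_landmarks)
    G[0:3, 0:3] = G3
    return G

def augment_R(R3, n_landmarks):
    """Full noise matrix: old 3x3 V Sigma_ctrl V^T in the upper-left corner,
    zeros elsewhere (the control adds no noise to landmark coordinates)."""
    R = np.zeros((3 + 2 * n_landmarks, 3 + 2 * n_landmarks))
    R[0:3, 0:3] = R3
    return R
```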
05:41
The trick is to just apply the old g to the first three components. And our Jacobian matrix G, which was 3 times 3, with a main diagonal and non-zero values here and here, is replaced by a (3 plus 2n) times (3 plus 2n) matrix, where n is the number of landmarks. Well, this original matrix is up here, and as you see, this has ones on the main diagonal,
06:05
and we have ones on the main diagonal down here. So this is three elements, and this is 2n elements, and the same holds here, of course. And our R, which was v sigma control vt, which was a 3 times 3 matrix, has to be replaced by a 3 plus 2n times 3 plus 2n matrix,
06:24
where again, the first 3 times 3 matrix is the old v sigma control vt, and everything else is zeros, and so again, this is 3, and this is 2 times the number of landmarks. Now let's implement this. So I prepared the SLAM9A SLAM prediction file for you,
06:43
and this is basically the extended Kalman filter, which we programmed earlier. So in the beginning, there's the constructor, and it is exactly the same as our previous implementation, with the exception that we have an additional variable, which is the number of landmarks, which we set to zero down here. Now here's the function g we implemented earlier,
07:02
here is the partial derivative of g with respect to the state, which we also implemented earlier, and here is the partial derivative of g with respect to control, and this is something we also implemented earlier. And now as we know, from those methods, we can build our prediction step, which is here. So in the prediction step, we set up the matrix g,
07:22
and since the call to this function now constructs the 3 times 3 matrix g, I call this g3. And then from the left and right control, we construct the control covariance matrix, which consists of a diagonal matrix containing the two variances. We construct the matrix V, which is the partial derivatives of g with respect to control,
07:42
and from that, we obtain our matrix R. And exactly as above here, I termed this R3, because again, this is just the 3 times 3 matrix R, which is exactly the same matrix that we computed earlier in our Kalman filter, but it is not the matrix that we need when our system state is augmented
08:00
by the additional unknowns required for all the landmark positions. Now if you look down here, I modified these two lines to contain g3 and R3, and I called the function g on the state to predict the new state, but both of those lines will have to be modified to handle the full state, which consists not only of the three variables for the position and orientation of our robot,
08:23
but also contains all the unknown positions of our landmarks. And here are some hints. The number of landmarks is the member variable self.numberOfLandmarks, the NumPy function eye will return an n times n identity matrix, zeros will return a matrix where all entries are zero,
08:42
and if you need submatrices or subvectors, you can obtain them using indices like that. So if we look at the rest of the code, here's a helper function, getErrorEllipse, which will convert a covariance matrix into an error ellipse, and we programmed and used that earlier.
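Putting the earlier hints together, the modified prediction step could look roughly like this (a sketch under the assumptions above, not the course's solution; g3 stands in for the old three-component motion function, and the parameter names are illustrative):

```python
import numpy as np

def predict_full(state, covariance, g3, G3, R3, number_of_landmarks):
    """Prediction step for the augmented state (sketch).
    state: (3 + 2N)-vector; covariance: (3 + 2N) x (3 + 2N) matrix;
    G3 and R3 are the old 3x3 matrices from the plain EKF."""
    dim = 3 + 2 * number_of_landmarks
    G = np.eye(dim)                  # identity keeps the landmarks constant
    G[0:3, 0:3] = G3                 # old Jacobian in the upper-left block
    R = np.zeros((dim, dim))
    R[0:3, 0:3] = R3                 # control noise affects the robot only
    new_state = state.copy()
    new_state[0:3] = g3(state[0:3])  # g modifies only x, y, and theta
    new_covariance = G @ covariance @ G.T + R
    return new_state, new_covariance
```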
09:01
And down here is the main function. As usual, we initialize our constants. Now the initial state should actually be 0, 0 with a heading of 0, but since I can choose it arbitrarily, I've chosen other values here, just to make the resulting trajectory fit into our viewer later on.
09:21
So again, those values are not meaningful. I've chosen them just for visualization purposes. Then here, I set the initial covariance to 0. This is what we discussed earlier. So I'm very sure about my initial position and orientation, simply because I define my map to have its origin and orientation according to my initial robot position.
09:44
Then I set up the extended Kalman filter SLAM. I read the motor tick data, and then here, this is the Kalman SLAM filter loop. So all it does is read the control and call the prediction step of the filter. And as usual, the rest of it outputs the robot's position and orientation,
10:02
and it also writes the error ellipse. This is the part you should program now. After you have programmed it, run it, and it will produce the extended Kalman Filter Slam prediction.txt file. So load it, and you will see the following. So this is our trajectory,
10:21
which we obtain by just executing the prediction step. And this is not different from our previous results, because so far, even though you programmed it now to handle landmarks as well, we do not have any landmarks, because we have not integrated any laser scanner measurements yet. There are two differences from our earlier solution. First of all, we now start down here,
10:42
and this is an arbitrary start position that has been chosen so that the visualization is useful. And I didn't use our usual start position, just to remind you of the fact that in SLAM, we do not have an initial map. So we do not have any useful information regarding the start point of our robot.
11:00
And the second difference is that our covariances in the start position are zero, and so as we move along, they start to grow. And we have chosen zero, which you can see also here, because this gives us the anchor for our map, or in geodetic terminology, this defines the datum. So now please program the prediction step.