SLAM G 07
Formal Metadata
Number of parts | 76
License | CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You may use, copy, distribute, and make the work publicly available in unmodified form for any legal, non-commercial purpose, provided the author/rights holder is credited in the manner specified.
Identifiers | 10.5446/49038 (DOI)
SLAM and path planning, part 19 / 76
Transcript: English (automatically generated)
00:00
Now, if you run your code and you are worried about the smoothness of the trajectory, keep in mind that we used only 25 particles. If you don't like that, modify this number, which is set in the main function. Here is a result which I obtained using 200 particles. As you can see, the trajectory is now much smoother and looks much more like
00:22
the desired result. In the beginning all the particles are the same, but shortly after, the 200 particles spread out nicely, and we can see the typical effect in the left turn, where some particles take a stronger left turn. Even though we have more particles, we still see that this landmark is dropped, since it is not observed while it is occluded
00:45
by this landmark. Shortly after, it is inserted again, and of course we also see spurious landmarks in other places; they usually disappear shortly after they are introduced. So this is our final FastSLAM result of this unit. Let me make two additional remarks
01:04
with regard to the efficiency of FastSLAM. The first is about the proposal distribution. The problem of the presented FastSLAM approach is that our proposal is based only on the given control, and this may lead to inefficiencies: if our control
01:23
is very noisy, this may produce a wide spread of particles, out of which only a few will survive in the end, because all the others do not fit the measurements of our robot very well. The approach presented here does not care about this; in the book by Thrun, Burgard, and Fox it is called FastSLAM 1.0, and there you will find a different
01:46
solution, called FastSLAM 2.0, which modifies the proposal distribution so that the measurements are taken into account. The second remark is with respect to map management. You have seen that we take m particles, and say we have up to n map features, or landmarks;
02:06
if you now check our implementation, you will notice that our time complexity is in the order of m times n. For example, in resampling you will find a copy function which copies an entire particle, including its map, in case the particle is picked. Let's have a closer
02:24
look at this. We have our particles, where each holds its own position and also the estimated position and covariance of every landmark. In our resampling step, whenever we pick one of the particles, we copy it, so we may end up with two or more identical copies
02:43
of the same particle. Overall, since in resampling we have to pick a total of m new particles, and each pick requires copying all n landmark estimates of the map, our implementation is indeed in the order of m times n. Now let's see what happens to our two identical copies
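The cost of this naive resampling step can be sketched as follows (a minimal Python sketch with hypothetical names, not the course code): each picked particle is deep-copied together with its entire map of n landmark filters, which is where the O(m · n) factor comes from.

```python
# Sketch: why naive resampling costs O(m * n).
# Each picked particle is deep-copied, including its map of n
# landmark estimates (one mean and covariance per landmark).
import copy
import random

class Particle:
    def __init__(self, pose, landmarks):
        self.pose = pose            # (x, y, heading) of the robot
        self.landmarks = landmarks  # list of (mu, Sigma), one per landmark

def resample(particles, weights):
    """Pick m new particles by weight; copying each map is O(n),
    so the whole step is O(m * n)."""
    picked = random.choices(particles, weights=weights, k=len(particles))
    return [copy.deepcopy(p) for p in picked]
```

Because every copy is independent, a measurement update on one copy cannot corrupt another; the price is that all n landmark entries are duplicated even though, as discussed next, at most one of them will change.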
03:02
here. First of all, during the prediction step they will move differently, which does not affect the map; then, when a measurement occurs, they will be updated differently. Let's assume that this measurement relates to an existing landmark, so it will affect the mu_k, Sigma_k of this particle, and it will also affect one filter, say mu_l, Sigma_l, of the other particle. Now if
03:25
these particles are close, then probably k will equal l. So, as you see, we will modify the list of landmarks in only one place, while all the other entries stay the same. Now let's think about putting all the landmarks into a balanced tree, so mu_1 and Sigma_1 would be stored in a
03:44
leaf of the tree, as well as mu_2, Sigma_2, and so on; we would use a binary tree, and this would be mu_n, Sigma_n. Say this is our particle k; we would put it into that tree, so we would have a pointer pointing to that tree, and whenever we need some element with a
04:02
certain index, we would now have a logarithmic time complexity for accessing it. So far that's not an improvement, but now say during resampling we pick this particle twice: instead of copying it, we just make a second pointer pointing to the root of the very same tree. Now what happens if we get a measurement? As we know, one of the elements will be updated,
04:24
whereas all the rest stay the same. Say our measurement influences this filter of our particle l, so this needs to be modified, whereas all the remaining Kalman filters stay the same. Instead of copying everything just to modify one element, we do the following: we set up a new root for our particle l, and since the element to be
04:44
modified is here, we don't have to modify the right part of the tree, so we keep a pointer to the unmodified right part; but we have to modify the left part, so we insert a new node here as well. Here the left subtree also stays the same, so we keep a pointer to the existing subtree, whereas the right subtree is modified, so we insert a new node; and then again
05:05
the left element is unmodified, whereas the right one is our new element. So, as you see, instead of copying all the elements, which would cost us in the order of n, we only have to create one new node per level of the tree; it is only
05:22
the path that we have to modify, and instead of taking O(n), we are now on the order of the logarithm of n. So our conclusion regarding the efficiency is that we can move from O(m times n) to a complexity of O(m log n), which is much better as the number of map features grows.
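The path-copying idea described above can be sketched in Python (hypothetical names and a simplified leaf layout, not the course implementation): a "copy" of a particle's map is just a second pointer to the same root, and updating one landmark allocates new nodes only along the root-to-leaf path while every untouched subtree is shared.

```python
# Sketch: path copying in a persistent (immutable) binary tree over
# landmark indices.  Updating landmark i builds O(log n) new nodes;
# all other subtrees are shared between the old and the new root.

class Node:
    def __init__(self, left, right, leaf=None):
        self.left, self.right, self.leaf = left, right, leaf

def build(landmarks, lo, hi):
    """Balanced tree over landmark indices lo..hi-1."""
    if hi - lo == 1:
        return Node(None, None, leaf=landmarks[lo])
    mid = (lo + hi) // 2
    return Node(build(landmarks, lo, mid), build(landmarks, mid, hi))

def get(node, lo, hi, i):
    """Fetch landmark i in O(log n)."""
    if hi - lo == 1:
        return node.leaf
    mid = (lo + hi) // 2
    return get(node.left, lo, mid, i) if i < mid else get(node.right, mid, hi, i)

def update(node, lo, hi, i, value):
    """Return a NEW root with landmark i replaced; only the nodes on
    the path to leaf i are rebuilt, the rest is shared."""
    if hi - lo == 1:
        return Node(None, None, leaf=value)
    mid = (lo + hi) // 2
    if i < mid:
        return Node(update(node.left, lo, mid, i, value), node.right)
    return Node(node.left, update(node.right, mid, hi, i, value))

n = 8
root_k = build([("mu%d" % i, "Sigma%d" % i) for i in range(n)], 0, n)
root_l = root_k                                         # resampling "copy": O(1)
root_l = update(root_l, 0, n, 5, ("mu5'", "Sigma5'"))   # measurement update: O(log n)
```

After the update, particle k still sees the old estimate for landmark 5, particle l sees the new one, and the entire left half of the tree is physically the same object in both.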
05:43
Remember, m is the number of particles and n is the number of map features, or landmarks. There is another issue here: if this is our particle and we get a new measurement, we have to find the likelihood that this measurement corresponds to any of our features in the map, and in this case we need to make sure that we
06:04
don't have to compute the likelihood for all of our map features, but only for a small subset, so that picking the correct landmark in our list of landmarks does not take time linear in the number of landmarks. This is indeed possible, so that overall we get an asymptotic time
06:22
complexity of O(m log n), which is a very good result. This brings us to the conclusions. In this unit you understood and programmed a particle filter version of SLAM, and I really congratulate you if you managed to program it, because even though particle filter SLAM is
06:40
conceptually not very hard to understand, it takes quite an amount of code to realize all the required functionality. In particle filter SLAM, each particle is one path plus one map. Keep in mind that even though we usually keep only the last position and orientation of our robot, each particle stands for all the positions and orientations along the particle's path. One
07:05
of the most important features was that our map features, or landmarks, are independent given the path, meaning of course that the random variables describing the locations of our map features are independent when the path is given. This has led to our solution of using one
07:21
independent Kalman filter per feature, which means that, in contrast to the extended Kalman filter SLAM approach, we have no correlations; and as you remember, those correlations were responsible for our covariance matrix of size (3 + 2n) times (3 + 2n), which is quadratic in the number of landmarks.
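A quick back-of-the-envelope comparison (my own illustration, not from the lecture) makes the difference concrete: EKF SLAM keeps one dense joint covariance of size (3 + 2n) × (3 + 2n), while each FastSLAM particle keeps n independent 2 × 2 landmark covariances.

```python
# Count the covariance entries stored by each approach for n landmarks.

def ekf_cov_entries(n):
    """EKF SLAM: one joint (3 + 2n) x (3 + 2n) covariance matrix."""
    return (3 + 2 * n) ** 2

def fastslam_cov_entries(n):
    """FastSLAM: n independent 2 x 2 covariances per particle."""
    return n * 4

n = 100
print(ekf_cov_entries(n))       # 203 * 203 = 41209 entries
print(fastslam_cov_entries(n))  # 400 entries per particle
```

For 100 landmarks that is about 41,000 entries in the EKF versus 400 per FastSLAM particle, which is the quadratic-versus-linear gap the transcript refers to.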
07:41
While this is good, there is on the other hand also a drawback: since those dependencies are only implicitly captured by the FastSLAM algorithm, through the diversity of our particle set, the FastSLAM algorithm may not perform very well, especially with respect to loop closing, because due to particle deprivation, after a certain amount of time the particles in our
08:06
filter will have a common history, and so the dependencies that were captured through the diversity of the particles will be lost. If you are interested in this, please have a look at the book Probabilistic Robotics by Thrun, Burgard, and Fox. Another big plus of FastSLAM
08:22
is that each particle uses its own data associations, in contrast to other methods, for example extended Kalman filter SLAM: there we had to decide in each step on the one and only data association, whereas with our particle filter each particle has its own individual
08:42
data associations along the path. This is a big plus, because it means that our FastSLAM algorithm not only samples across different continuous parameters, but also samples across different discrete landmark associations. And finally, remember that FastSLAM amazingly
09:02
solves both problems, offline and online SLAM, because each particle represents the entire path plus the map, so the set of particles represents the full posterior. On the other hand, we are usually not interested in the entire path, so we just keep the last position and orientation in each
09:20
particle, and then we use the FastSLAM algorithm as a filter; this is the solution to the online SLAM problem. This concludes our unit about the FastSLAM algorithm. I hope you had as much fun as I did in implementing and running your very own FastSLAM algorithm.