SLAM E 04
Formal Metadata
Number of Parts: 76
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany: You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/49020 (DOI)
SLAM and path planning (37 / 76)
Transcript: English (auto-generated)
00:00
Now let's see what happens. Remember how our Kalman filtering in the last unit worked. We had our landmarks, and in every time step we extracted from our laser scan measurements the locations of landmarks, which were somewhat noisy. But we used the assumed position and orientation of our robot to map those detected cylinders into the world coordinate system, and
00:23
then we looked for landmarks close to these projected positions, closer than a given radius, and then assigned those. Each such assignment led to an observation equation in the correction step of our Kalman filter. And so if I don't know where the robot is,
00:41
I just say it is centered in the middle of the arena, looking, say, in the x direction. Then even though I express my uncertainty about this position by large variances, the landmark assignment is based not on the second moments but on the first moments, my estimated position and orientation. And so from our scanner I will get those detected
01:03
poles, and then the procedure will do some assignment of landmarks in the vicinity; it will probably even assign this cylinder here, which is a completely wrong match. Based on this wrong match, the Kalman filter will compute the correction, and this will most probably lead to a completely wrong trajectory. So even though in general it would be okay to
01:24
model my uncertainty in this way, the problem in our case is that the observations which I need in the correction step are not absolute in nature; rather, I obtain them based on my current estimate of the position and orientation of my robot. Now here's an idea to overcome this
01:41
problem. So if I don't know where I am, what if I assume some random position and orientation? Then I could do the landmark assignment for each of those hypothetical poses of my robot. So I would try not only this one here but also this one, and then eventually this position here would
02:02
lead to the best match between detected landmarks and landmarks in the map. By starting with many, many such poses instead of just one, there would be a chance that one of those poses is close to my real pose, so that the landmark association would give the best results, and ultimately I would be able to identify the correct pose among all those hypothetical
02:24
poses. And so this is one of the basic ideas behind the particle filter. In a particle filter we have particles: we represent our belief by a set of random samples. This is an approximate, non-parametric representation, and it is able to represent
02:43
distributions with multiple modes. Each of those particles is a hypothetical state, and our belief is represented by the set of particles, where m is a large number, for example m = 1000. So if this is our true belief which we want to represent, our particles may
03:04
look like that: maybe one here, here maybe a few more; here's the peak, so there should be many particles here. The density of those particles approximates our true belief. Now, if you have a simple distribution and want to obtain the particles that represent this distribution,
03:22
for example a normal distribution, then you would just sample according to the distribution and return the set of samples. On the other hand, if we have a set of particles, we can compute the first and second moments: our estimated mu will be 1
03:40
divided by m times the sum of all samples, so this is the mean value, and the estimate for the variance would be 1 divided by m minus 1 times the sum of (x minus the estimated mean) squared. Assuming that our particles are, for example, sampled from a normal distribution, we can estimate
04:01
the mu and sigma of that normal distribution, where of course the more particles we have, the better our estimate will be. However, if the particles do not come from a normal distribution, then we still get these first and second moments, but the distribution represented by our particles will be different from the normal distribution that is defined by our estimated mean and
04:23
variance. So now let's have a look at the particle filter prediction step, and I want to compare it with the discrete Bayes filter which we had earlier. In the discrete Bayes filter, the update step was given as follows: for all xt, we computed our predicted belief using the sum over all xt-1 of the probability of ending up in xt when we were at xt-1, given the
04:47
control ut, times the belief of xt-1. That's all there was to do in the update step, and it was a convolution: if our old belief looked like that, then for every discrete value we multiplied that value with this probability, which was also given at discrete raster positions
05:04
only. So, for example, we placed this here, then we had this value, placed this here, and in the end we added all this up and obtained something like that: about five discrete values here, convolved with three values, gave us seven discrete values here. So the result of the convolution is a widening of our
05:25
distribution, or, non-scientifically, a smearing by convolution. Now, in the particle filter, our distribution is represented by the set of particles, and the update step looks pretty similar. For every particle we do the following: we sample a particle for our predicted belief
05:44
according to this probability distribution, which is the probability that I end up in xt if my previous state was exactly the m-th particle of my particle set and the given control was ut. So again, say this is my old belief, but it is now represented not by this curve but rather by
06:03
a set of particles. Now, in this loop, I take every single particle, say for example this particle here, maybe particle i, and I take this probability, which is the same as here. For this particle the probability of the new state will look like that, and so I move this particle over here, but not exactly to the center: I now sample from this distribution, say I pick this
06:26
point here. And I do so for every point: say this is the next particle I want to look at, then this is the probability, I sample from this probability, say I pick this point, and so on for every single particle. Now you see: what was achieved here by a convolution with
06:43
those probabilities is achieved here by sampling from this distribution. So again, non-scientifically, we could say we do a smearing by sampling, but the smearing is controlled by exactly the same term in the particle filter and in the discrete Bayes filter. To give an example in 2D: if my robot is here, particles would look like this, and my
07:06
control would move the robot like that, then I would have to append this vector here, but I would also have to apply noise. Say my distribution would be like that; then I would sample from that distribution. And I would apply the same vector here, get this distribution, and sample from it.
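As a minimal sketch of this "smearing by sampling" in one dimension (the Gaussian transition model and all numbers here are assumptions for illustration, not the lecture's robot model), together with the moment estimators described earlier:

```python
import random

def predict_particles(particles, control, sigma):
    # For every particle, sample a successor from p(x_t | u_t, x_{t-1}):
    # shift by the control, then add zero-mean Gaussian motion noise.
    return [random.gauss(x + control, sigma) for x in particles]

def estimate_moments(particles):
    # First moment: mu = 1/m * sum of all samples.
    m = len(particles)
    mu = sum(particles) / m
    # Second moment: var = 1/(m-1) * sum of (x - mu)^2.
    var = sum((x - mu) ** 2 for x in particles) / (m - 1)
    return mu, var

random.seed(0)
particles = [random.gauss(0.0, 0.5) for _ in range(10_000)]
moved = predict_particles(particles, 2.0, 0.5)
mu, var = estimate_moments(moved)
# The particle cloud shifts by the control and widens: the variance
# grows from about 0.25 to about 0.25 + 0.25 = 0.5.
```

Estimating the first and second moments of the moved cloud confirms the widening: the prediction step never narrows the distribution, it only smears it.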
07:21
And so on, for every particle; this would then be my new particle set. So let's think about this sampling step: how do we get this probability? What we have so far is: we know that if our robot is somewhere and we execute some control ut, consisting of left and right motor ticks, we end up in a new
07:45
position. In our last unit we already implemented this formula: our new position x' is a function g of our old position, or state, and our control. Now here, this is our particle xt-1 (the m-th particle), this is our control, lt and rt, and this is our new particle
08:04
x̄t for particle m. Now, this is an exact function; however, the movement according to the control is inexact, and so we implement the formula above in the following way: given lt and rt, we assume that l and r are normally distributed, and so we sample lt' according to a normal
08:23
distribution centered at lt with the variance sigma l, and we sample the right control in the same manner. After we sample these, we compute the new particle by the exact formula, using the sampled control. As you see, the only difference to the exact formula is that the
08:41
left and right control is not taken as is but is sampled according to a distribution centered at the left and right control. So how do I determine the variance? Fortunately, we don't have to think much about that, because in the previous unit, you remember, we set up those two equations for the left and right variance: namely, a factor alpha 1 times the left control,
09:04
squared, plus alpha 2 times left minus right, squared. The reasoning was that the variance depends on the driven distance and also on the difference of the left and right track, and the same holds for the right variance. So this is all there is to do: compute the left and right
09:25
variances, use those to sample the left and right control, and then compute the new particle from the old particle by applying the exact movement formula with the sampled control. So here's the code for the particle filter, and many things will look very familiar because
09:42
they're very similar to the Kalman filter code which we had in the last unit. This is the particle filter class. The constructor doesn't take a state and covariance anymore; instead it takes a number of initial particles. Otherwise it is the same as the constructor in the Kalman filter class: it takes the robot constants, width and displacement, and the control motion factor and
10:05
control turn factor, and it stores all that in member variables. Down here is the function g for the state transition, and this is just copied from the Kalman filter class, with the exception that down here I return a tuple instead of a NumPy array, so that we don't have to
10:23
import NumPy this time. Now here comes the prediction function you'll have to implement. It takes the control, which is left and right, and here you have to program the steps we just discussed. I've put some additional hints as comments; in particular, take care if you call the function random.gauss: it takes the standard deviation as its second argument,
10:46
not the variance. Here's another function I programmed which prints out the particles using a small header, PA, so this also goes into the log file, and you will see shortly that the log file viewer is now able to plot all the particles that we output here. Now let's go to the main
11:02
function. As usual, here is the initialization of some robot constants and of the control motion factor and control turn factor, and these are exactly the same values as those that we used in the Kalman filter. Now I need to generate some initial particles. In this case I use 300 particles, and here's my measured state and my standard deviations for x, y, and the heading, and
11:26
then I just do a loop over all 300 particles: I append one particle which is sampled in x, y, and heading, where the distributions are centered on the elements of the first tuple and the standard deviations are picked from the second tuple. After that I have 300 particles, and I hand
11:45
them over to the particle filter class together with all those constants. Down here is the main loop: it reads all control data and then loops, and in the loop we have our usual conversion of the motor ticks to millimeters, and then we just call predict. This call replaces
12:01
the old particles in the particle filter by a new set of particles, which are then printed out; then we take the next control and again replace the old particles by new particles, and so on. Now you'll have to implement this predict function up here. After you have implemented it, run it, and it will write a log file called particle filter predicted. Load this file, and you
12:25
see the following: here is the initialization of our 300 particles in the upper right corner. Now, as we advance in time, the distribution gets wider and wider, until after a while it seems completely random. However, let us load the reference trajectory.
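For reference, the prediction step described in this unit might be sketched as follows. The motion function g and all names and constants here are assumptions reconstructed from the description above, not the actual course code:

```python
import random
from math import sin, cos, pi, sqrt

def g(state, control, width):
    # Exact differential-drive motion model, as in the previous unit.
    x, y, theta = state
    l, r = control
    if r != l:
        alpha = (r - l) / width            # change of heading
        rad = l / alpha                    # radius of the left wheel's arc
        x += (rad + width / 2.0) * (sin(theta + alpha) - sin(theta))
        y += (rad + width / 2.0) * (-cos(theta + alpha) + cos(theta))
        theta = (theta + alpha) % (2.0 * pi)
    else:                                  # straight-line motion
        x += l * cos(theta)
        y += l * sin(theta)
    return (x, y, theta)

def predict(particles, control, width, alpha_1, alpha_2):
    l, r = control
    # Variances grow with the driven distance and the left/right difference.
    var_l = (alpha_1 * l) ** 2 + (alpha_2 * (l - r)) ** 2
    var_r = (alpha_1 * r) ** 2 + (alpha_2 * (l - r)) ** 2
    new_particles = []
    for p in particles:
        # Sample a noisy control; note that random.gauss takes the
        # standard deviation, not the variance, as its second argument.
        l_prime = random.gauss(l, sqrt(var_l))
        r_prime = random.gauss(r, sqrt(var_r))
        # Apply the exact motion formula with the sampled control.
        new_particles.append(g(p, (l_prime, r_prime), width))
    return new_particles

random.seed(0)
particles = [(500.0, 500.0, 0.0)] * 300
particles = predict(particles, (60.0, 60.0), 150.0, 0.35, 0.6)
```

Note that only the control is perturbed; each perturbed control is then pushed through the exact motion model, which is what spreads the particle cloud along plausible trajectories.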
12:47
Now you can nicely see how the particles are kind of centered around the reference trajectory for a while, until the distribution gets so wide that no structure is visible anymore. This is not surprising, since for now we have just implemented the prediction, and so we still have to implement
13:05
the correction step.
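To round off the walkthrough, the generation of the initial particles described in the main function might be sketched like this; the measured state and the standard deviations are made-up values for illustration, not the course's actual numbers:

```python
import random

random.seed(0)
number_of_particles = 300
# Measured start state (x, y, heading) and per-component standard
# deviations -- made-up values for illustration.
measured_state = (1850.0, 1900.0, 0.6)
standard_deviations = (100.0, 100.0, 0.35)

initial_particles = []
for _ in range(number_of_particles):
    # Sample each component around the corresponding element of the
    # measured state, using the matching standard deviation.
    initial_particles.append(tuple(
        random.gauss(mu, sigma)
        for mu, sigma in zip(measured_state, standard_deviations)))
```

This particle set is then handed to the particle filter together with the robot constants, and the main loop replaces it with a new set on every call to predict.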