
Formal Metadata

Title
SLAM G 02
Title of Series
Number of Parts
76
Author
License
CC Attribution - NonCommercial - NoDerivatives 3.0 Germany:
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Transcript: English (auto-generated)
Now there's something interesting about data association, or landmark correspondence. Now the basic situation is, a robot moves along, it does some measurements, and based on this, initializes some landmark positions with some uncertainty, then later on it measures the same landmarks and associates the new measurements with the previously measured
landmarks. And now this association is a discrete decision, so for example, if there was a landmark here, it would not be so clear if this measurement belongs to this landmark or if it belongs to that landmark. And all those discrete decisions are actually also part of our posterior, so we'll have to write: the probability for our states and the map, given all the measurements, all the controls, and all the correspondences, is equal to the probability of the state, given all that, times the product of the probabilities for all the map features. So you see here is the correspondence variable, and here, and here it is as well.
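Written out (as a sketch; here c stands for all correspondence variables and m_k for the individual map features), this factored posterior is:

```latex
p(x_{1:t}, m \mid z_{1:t}, u_{1:t}, c_{1:t})
  = p(x_{1:t} \mid z_{1:t}, u_{1:t}, c_{1:t})
    \prod_{k} p(m_k \mid x_{1:t}, z_{1:t}, c_{1:t})
```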
Now the interesting thing here is that this path of robot states is represented by one particle, and so the decisions on the data association are also made on a per-particle basis, so each particle maintains its own data associations.
Now this is very different from the previous case, where our extended Kalman filter SLAM represented the posterior of the online SLAM problem by a multivariate Gaussian distribution, however using only a single sequence of data associations. So the difference between the extended Kalman filter SLAM and our particle filter SLAM with respect to the data associations is: our extended Kalman filter SLAM represents only one particular sequence of data associations, whereas our particle filter SLAM, or FastSLAM, maintains the posterior over multiple data associations.
So each particle has its own sequence of data associations, which makes this type of filtering much more robust. Now let me finally give you a remark before we start implementing all this. Sometimes I say that FastSLAM solves the full SLAM problem, and this is because I have a number of particles, each representing one part of the distribution, and they contain the full path as well as all the landmarks, so this here is the full path. On the other hand, I sometimes say that we use this as a filter, which means I talk about the online SLAM, and in fact this also solves the online SLAM problem: although each particle contains the current pose of the robot as well as all previous poses, I don't have to store them. So if I am not interested in all previous poses, I may just keep the last pose of the robot in my particle, while keeping all the rest exactly the same.
So again, FastSLAM solves the full SLAM problem as well as the online SLAM problem, and we will use it as a filter. Now let's program all this. So I prepared slam 10a prediction, and this will provide us with an overview of the program that we shall develop.
Now there are two classes here. The first is the class Particle, and this class contains a method, namely g, which was previously located in our filter classes, but which is exactly the same. So the method g computes the state transition given the old state and the control input. And this method is wrapped in the move function, which now is a member function of the particle, and which modifies the pose of the particle given the left/right control input and the width of the robot. So this is our particle; so far there are no routines for measurement and correction, it is all just the movement or prediction step.
Now the second class is our FastSLAM class, which is also pretty short up to now. It consists of a constructor, which stores all the particles and copies some constants into class variables. And the second function is the prediction function, and we also used that function earlier when we did the particle filter. So it takes left and right from the control, it computes the standard deviation for left and right, and then, for every particle of the filter, it sets l and r to random values based on a Gaussian distribution which is centered at the control and has the appropriate standard deviation. And then it just calls the movement function of the particle.
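As a sketch, the prediction step just described could look as follows. The Particle class here is minimal, and the noise constants (control_motion_factor, control_turn_factor) as well as the differential-drive formulas follow the earlier units; names and default values are illustrative, not the lecture's exact code.

```python
import math
import random

class Particle:
    def __init__(self, pose):
        self.pose = pose  # (x, y, theta)

    def move(self, left, right, width):
        # State transition g for a differential-drive robot.
        x, y, theta = self.pose
        if abs(right - left) < 1e-9:
            # Straight motion.
            self.pose = (x + left * math.cos(theta),
                         y + left * math.sin(theta),
                         theta)
        else:
            # Motion along a circular arc.
            alpha = (right - left) / width
            R = left / alpha
            cx = x - (R + width / 2.0) * math.sin(theta)
            cy = y + (R + width / 2.0) * math.cos(theta)
            theta = (theta + alpha) % (2.0 * math.pi)
            self.pose = (cx + (R + width / 2.0) * math.sin(theta),
                         cy - (R + width / 2.0) * math.cos(theta),
                         theta)

def predict(particles, control, width,
            control_motion_factor=0.35, control_turn_factor=0.6):
    """Move every particle with a noisy version of the control."""
    left, right = control
    sigma_l = math.sqrt((control_motion_factor * left) ** 2
                        + (control_turn_factor * (left - right)) ** 2)
    sigma_r = math.sqrt((control_motion_factor * right) ** 2
                        + (control_turn_factor * (left - right)) ** 2)
    for p in particles:
        # Sample a control for this particle and move it.
        l = random.gauss(left, sigma_l)
        r = random.gauss(right, sigma_r)
        p.move(l, r, width)
```

This scatters the particle set according to the control noise, which is exactly the divergence seen later when running the prediction-only program.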
Finally, in the main function, we set all those constants, we generate an initial set of 25 particles, which are given an arbitrary start state, and which are duplicated here using the copy.copy function. Then we set up our filter using the particles we just generated and those constants. We read the control data from our motor ticks, and then, for every control, we do a prediction step inside the loop, so that is the interesting part of the loop. And then we output the particles, a mean state computed from all particles, and the error ellipse. Now this function, get_mean, and also this function, are imported from the SLAM G library, where I moved some of the helper functions for better readability of the main code.
So that is all there is to do. Now let's run this. After you run this, it will produce the fast SLAM prediction text file. So load this, and you will see the following. In the beginning, all particles are the same. Then they start to diverge, and so we don't get a reasonable result here.
So this result is not very impressive, but it was to be expected, because we don't have a correction step yet. And in fact, this result is the same as what we obtained earlier in our unit about the particle filter. So now let's have a look at the correction step. So our correction step will be a member function of the fastSLAM class.
So in the class fastSLAM, we will have the function correct, and this will take our measured cylinders. Now exactly as in our previous particle filter, the correction step will have two sub-steps. The first is computing all weights, and we'll call this function updateAndComputeWeights,
and it will take the measurements. And then the second sub-step will do the re-sampling. So this first function here will have a loop over all particles and return one weight for each particle in this list of weights. So the list of weights will have exactly the same number of entries as there are particles in our particle filter.
And then in the second step, there will be a re-sampling. And we won't worry about this, because this will be exactly the same as the re-sampling step which we programmed earlier in our particle filter. Now let's have a look at this function in more detail. Now this function computes all the weights, but it also updates all the particles. So this is why it's called updateAndComputeWeights and not just computeWeights.
So it does the following. It has a loop over all particles, say, for particle p in particles. And in this loop, it does another loop over all measurements, for measurement m in cylinders. And here we just call the update function of the particle: p, the particle of the outer loop, dot update_particle, using the measurement m.
So this is essentially just a loop over all particles which presents every measurement of the current step to every particle. Now we'll also have to compute the weights, and so this update function will return a weight. And since we have to compute the weight of the particle, and not only of a single measurement, we will have one overall weight, which we initialize to 1.0, and we multiply it by the weight for each single measurement. Then, after the loop over all measurements, we append the result to a list, which we initialize to the empty list before we start, and we just return this list of weights.
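The loop structure just described can be sketched like this, assuming a hypothetical update_particle method on each particle that returns the likelihood of a single measurement (the actual method is developed in the following steps):

```python
def update_and_compute_weights(particles, cylinders):
    """Present every measurement to every particle; one weight per particle."""
    weights = []
    for p in particles:
        weight = 1.0
        for m in cylinders:
            # The particle updates its own landmark estimates and
            # returns the likelihood of this single measurement.
            weight *= p.update_particle(m)
        weights.append(weight)
    return weights
```

The returned list has exactly as many entries as there are particles, which is what the resampling step expects.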
So now we started with the fastSLAM correct function, and we have seen this means we'll have to implement the updateAndComputeWeights function. So this is a member of the fastSLAM class. Now this function just does a loop over all particles and all measurements and calls this updateParticle function, which now is a member of the particle class.
And this will be the function we will develop in the next steps. So let's have a look at it. So in class particle, we'll have the updateParticle member function which takes a measurement. Which is one range and one bearing value corresponding to the measurement of a single cylinder. Now remember, we are now in the class particle.
So each particle has the robot state. So this is x, y and theta. And it also has a list of estimated landmark positions. So there's one entry for every landmark which has been observed by the robot so far. And for which the robot has decided that it is indeed a new landmark and not a landmark that has been observed earlier already.
And we do not only have the estimated position for every landmark but also the covariances. Because as you remember, we run one extended Kalman filter for each landmark in the particles list of landmarks. And remember, this list is not global. This list of landmark positions and covariances is individual for each particle.
So our error ellipses may be like that. So now we get this measurement. That is, the robot tells us that it has detected a landmark at a certain range and bearing angle. So the first thing we will do is, we will compute the likelihood of correspondence for any existing landmark.
So this means we will compute the likelihood that this measurement is due to observing landmark M1 and the likelihood if this measurement is due to observing landmark M2. And as you see here, obviously it is not very likely that the measurement belongs to any of those two landmarks.
So depending on this result, we either do the following, B, initialize a new landmark, which means we take this position, set up a new Kalman filter, initialize it with that position, compute an appropriate covariance matrix and initialize that too. Or, if the likelihood that a measurement belongs to a certain landmark is above a threshold,
we will update the landmark. So say if the measurement is like that, we will decide that it belongs to landmark 1. So we will update this, meaning we will update the extended Kalman filter of the corresponding landmark. So we will update the position, it will move a little bit in that direction, and the covariance matrix, which will get smaller.
So these are the three important steps. First of all, compute the likelihood that the given measurement is due to the observation of any of the existing landmarks. Then, second, if the maximum of those likelihoods is below a threshold, initialize a new landmark in the current particle,
which means setting up one new filter. And third, if the maximum likelihood is above a threshold, pick the landmark which belongs to the maximum value and update its Kalman filter. Now let's first have a look at this step. So we will talk about step A, the computation of the likelihood.
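The three-step decision just summarized can be sketched as follows; all method names called on the particle here (likelihoods_of_correspondence, initialize_new_landmark, update_landmark) are illustrative stand-ins, not the lecture's final API:

```python
def update_particle(particle, measurement, minimum_likelihood):
    # Step A: one likelihood per landmark already in this particle.
    likelihoods = particle.likelihoods_of_correspondence(measurement)
    if not likelihoods or max(likelihoods) < minimum_likelihood:
        # Step B: no existing landmark explains the measurement well
        # enough, so set up a new Kalman filter for a new landmark.
        # As the weight we return a default value (one possible choice
        # is the threshold itself).
        particle.initialize_new_landmark(measurement)
        return minimum_likelihood
    # Step C: update the Kalman filter of the most likely landmark.
    best = likelihoods.index(max(likelihoods))
    particle.update_landmark(best, measurement)
    return max(likelihoods)
```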
So our robot is somewhere and it determines, using its laser scanner, that there seems to be a landmark at a certain range and bearing angle. Now as we know, the robot itself is represented by a particle with a certain position and heading theta, and also a list of landmarks that the robot has encountered so far, with positions and covariance matrices.
So if this is a landmark which the robot has observed earlier, then the robot will have this landmark position, xk, yk, somewhere in its list of landmarks, and it will also have the covariance matrix corresponding to that landmark. And so now we want to compute the likelihood that this measurement of range and bearing angle actually belongs to this landmark.
And so we'll first compute the expected measurement, or predicted measurement, which as we see here would be this. So if the robot is here, this is given by the particle. And our landmark is here. Then we would expect a bearing angle like this and a range like that.
So we will say our expected measurement, c hat, is a function, h, of our current state and our landmark. And fortunately, we programmed all of that earlier, so this is our measurement function. Now we will also need the covariance matrix for this measurement. And for that, we'll need the Jacobian of h. So we'll need capital H, which is the derivative of h with respect to the landmark.
And we need to take this at the current pose of the robot and the landmark position. So this will be a 2x2 matrix. And we've computed that earlier as part of the H matrix which we used in our extended Kalman filter SLAM.
So we will just use that. Now here comes the interesting part. Our covariance of the landmark measurement is this Jacobian times the covariance of the landmark, here, times H transposed, plus the covariance of a measurement, which we'll denote as Qt. Now this t stands for the time dependence of this covariance, but actually we will use a constant covariance matrix, independent of time.
So then we encountered that matrix already. So this is our variance in range and variance in the bearing angle. And so what is this? It's easy to see that this here is a variance propagation. So this is the variance of the landmark, k, which is in the plane, which is in xy.
Whereas this is the measurement variance due to the landmark variance. And so this is the variance due to the actual measurement. So what are we doing here? We are interested in obtaining the measurement c hat and its covariance. So we are interested in the uncertainty, which in our case is expressed as a covariance matrix.
Now the uncertainty of this measurement is due to the uncertainty of the landmark, which is this, which then translates into uncertainty in a distance and bearing measurement, plus the uncertainty of the actual measurement of the sensor that we use.
So we add those two up and obtain the desired covariance matrix. Now we'll compute our delta c, which is our measured c minus our expected c, c hat, which we computed here. And finally we'll compute the likelihood, which is obtained from the probability density function of the Gaussian distribution.
So it's 1 divided by 2 pi times the square root of the determinant of the matrix QL times e raised to the power of minus one half times delta c transposed times the inverse of QL times delta c. And so this gives us the final likelihood.
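Collected as formulas (a sketch matching the quantities just described, with Σ_k the landmark's covariance and Q_t the sensor covariance):

```latex
Q_L = H\,\Sigma_k\,H^{\mathsf T} + Q_t, \qquad
\Delta c = c - \hat c = c - h(x, m_k), \qquad
w = \frac{1}{2\pi\sqrt{\det Q_L}}
    \exp\!\Bigl(-\tfrac12\,\Delta c^{\mathsf T} Q_L^{-1}\,\Delta c\Bigr)
```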
So if this is our range and our bearing angle, then c hat is our expected measurement. And using c hat as the center and QL as the covariance, we define this Gaussian distribution. So this here is the one sigma error ellipse given by the covariance QL. And now we want to know the probability of our actual measurement of the range R and the bearing angle alpha.
And so we'll grab this value here, and this is the likelihood that we will return here. So this is our measurement c, and this is delta c. So now all we have to do is to implement those formulas. And as I mentioned, we already programmed H, the measurement function.
We also programmed capital H, the Jacobian of the measurement function. And so essentially we'll have to program this part here and then put everything together. So I implemented the SLAM10B correspondence likelihood, which serves as a framework for what you'll have to implement.
So here's our particle class again, where each particle now also has a list of positions and covariances for the landmarks. So these two variables will hold all those extended Kalman filters for our landmarks. Then here's our measurement function H, which is just copied from our earlier implementation. And here is the Jacobian with respect to the landmark.
And this is a subpart of what we implemented earlier. So our extended Kalman filter SLAM implemented the Jacobian with respect to the state and the landmark, whereas we now will only use the last part, namely the derivative with respect to the landmark.
Now here's the first function you'll have to implement. It computes the expected measurement for a landmark, and it is given the number of the landmark and an additional constant. It should return the expected measurement. And this is really, really easy to do, because you may use the function h, which we just defined.
Now the second function you'll have to implement is this. And this is a combined function. It returns the Jacobian H and the covariance matrix QL. So it returns a tuple of those two values. So using the formulas we just developed, compute H, compute QL, and return the tuple of those two values.
And finally you'll have to implement the WL likelihood of correspondence function, where you compute the likelihood that a certain measurement corresponds to an existing landmark, which is given by its number. And as usual here is an extensive list of hints how to do this.
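Putting the pieces together, the likelihood computation could be sketched as below. For brevity the scanner displacement is taken as zero here (the lecture's version includes it), and the function names are illustrative, not the exact signatures of the exercise framework.

```python
import math
import numpy as np

def h(pose, landmark):
    """Expected (range, bearing) measurement of a landmark."""
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    r = math.sqrt(dx * dx + dy * dy)
    # Normalize the bearing to (-pi, pi].
    alpha = (math.atan2(dy, dx) - theta + math.pi) % (2.0 * math.pi) - math.pi
    return np.array([r, alpha])

def dh_dlandmark(pose, landmark):
    """Jacobian of h with respect to the landmark (a 2x2 matrix)."""
    x, y, _ = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    q = dx * dx + dy * dy
    sqrtq = math.sqrt(q)
    return np.array([[dx / sqrtq, dy / sqrtq],
                     [-dy / q,    dx / q]])

def likelihood_of_correspondence(pose, landmark, landmark_cov,
                                 measurement, Qt):
    """Gaussian likelihood that a measurement belongs to this landmark."""
    H = dh_dlandmark(pose, landmark)
    # Variance propagation from the landmark, plus the sensor noise Qt.
    QL = H @ landmark_cov @ H.T + Qt
    delta_c = np.array(measurement) - h(pose, landmark)
    # Normalize the bearing difference.
    delta_c[1] = (delta_c[1] + math.pi) % (2.0 * math.pi) - math.pi
    return math.exp(-0.5 * delta_c @ np.linalg.solve(QL, delta_c)) \
        / (2.0 * math.pi * math.sqrt(np.linalg.det(QL)))
```

With a landmark at (500, -500), a landmark standard deviation of 100 in both axes, and sensor standard deviations of 200 in range and 15 degrees in bearing (assumed constants), this sketch reproduces an expected range of about 707 at a bearing of minus 45 degrees, consistent with the numbers discussed below.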
And here this is the final function, but you won't have to implement this. It just does a loop over all landmarks of this particle. So given a measurement, it returns a list of likelihoods, one likelihood for each landmark, and each value representing the likelihood that the measurement corresponds to the corresponding landmark.
Now the main part of the program is not a particle filter anymore; it consists of routines to set up some landmarks and test the output of your code. So here we define one single particle, which is placed at minus scanner displacement for x and zero for y, with a heading of zero.
So this means our robot will be here, and this is the scanner displacement. So our scanner center will be in the origin of the coordinate system. And then here we add some landmarks, which is done here. So the first landmark is at (500, -500) with a standard deviation of 100 in both axes.
So its error ellipse is a circle. The second landmark is at (1000, 0) and it has the same error ellipse. And the third landmark is at (2000, 0) and it has a different error ellipse, which looks somehow like this, where this is 45 degrees. And so the main code computes the expected measurements for each of those landmarks.
And in the second part, it sets up some measurements. The first measurement is close to the first landmark or landmark number zero, whereas the second measurement is at a distance of 1500 with a bearing angle of zero. So it is exactly between those two landmarks.
And if your implementation is correct, you should see the following. So for landmark number zero, we expect a range of 707, which is 500 times the square root of two, and a bearing angle of minus 45 degrees, which seems to be correct. And the variance of our measurement is 50,000 in distance and 8.85 times 10 to the power of minus two in the bearing angle.
So you got a certain distance error and a certain bearing error. Now, if we look at the first landmark, we see that the distance error is exactly the same. However, the error in the bearing angle now is smaller because the point is further away. So the point's uncertainty translates into a smaller bearing angle error.
And in the third case, the interesting part is here, which means that we'll have a correlation between the range and the angle. Now, this is quite clear because, looking at this error ellipse, if our bearing angle gets larger, our distance gets smaller. And in the second part, we give a measurement which is close to landmark zero.
So this was this measurement here. And consequently, we get a likelihood of 0.002 that this measurement belongs to landmark zero, and two other likelihoods for the other landmarks, which are much smaller. So this is times 10 raised to the power of minus five, and this is times 10 raised to the power of minus 10.
So it is clear that landmark zero has the largest likelihood, in fact, by two orders of magnitude. Now it's somehow more interesting for this measurement, which is geometrically exactly between this landmark and that landmark. So is it more likely to belong to this landmark or to that landmark?
And as we see here, first of all, the likelihood for the first landmark is much smaller. But then, considering the likelihoods for those other two landmarks, they are substantially different. So it's 0.0002 for this landmark and 0.0004 for this landmark.
So it's twice as likely that this measurement belongs to this landmark, actually. Why is this the case? It is geometrically exactly between both landmarks. Well, of course, because we have set the variance of this landmark larger than the variance for that landmark. And so it is less probable for this measurement to belong to this landmark with the smaller variance.
So now please program the computation of the correspondence likelihoods. And after you implemented this, you may check your result against this outcome, which should appear if your implementation is correct.
