Formal Metadata

Title: SLAM F 01
Number of Parts: 76
License: CC Attribution - NonCommercial - NoDerivatives 3.0 Germany
You are free to use, copy, distribute and transmit the work or content in unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Transcript: English (auto-generated)
Now welcome to Unit F. This will be about simultaneous localization and mapping, which is the topic that gave the SLAM lecture its name. Let's have a look at what we did so far. Here is our robot at a given position with a certain orientation, and we also know there is an error associated with position and orientation, which we expressed as an error ellipse and a disk segment, both depicting plus/minus one sigma.
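As a small illustration of how such an ellipse relates to the filter's numbers, the half-axes and orientation of the plus/minus one sigma ellipse can be read off an eigendecomposition of the 2x2 position covariance; the following Python sketch (names are illustrative, not from the lecture code) shows the computation.

    import numpy as np

    def error_ellipse(position_cov):
        """Half-axes and orientation of the +/- 1 sigma ellipse of a 2x2 covariance."""
        eigenvalues, eigenvectors = np.linalg.eigh(position_cov)  # symmetric matrix
        half_axes = np.sqrt(eigenvalues)            # 1-sigma half-axis lengths (ascending)
        major = eigenvectors[:, 1]                  # eigenvector of the larger eigenvalue
        angle = np.arctan2(major[1], major[0])      # orientation of the major axis
        return half_axes[1], half_axes[0], angle    # major half-axis, minor half-axis, angle

    # Example: 0.2 m standard deviation in x, 0.1 m in y, slightly correlated.
    P = np.array([[0.040, 0.005],
                  [0.005, 0.010]])
    print(error_ellipse(P))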
Now our robot moved, and the inaccuracies in the movement induced an increase in error for both the position and the orientation. In the filter this was called the prediction, and we noticed that if we go on like that, we will have an ever-increasing error ellipse. Fortunately, we had some landmarks in the scene with known locations, and by measuring the distances to those locations we were able to correct the position of the vehicle. After that correction we also obtained a smaller uncertainty for our position and heading; this was called the correction step. For this correction step it was essential that we had those landmarks, which lead to those measurements, which in turn give us a better estimate for the position and decrease our uncertainty. For the implementation we did so far it was also essential that we knew the coordinates of those landmarks in advance; if these were available, we solved the problem using, for example, the extended Kalman filter approach or the particle filter approach.
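A deliberately simplified sketch of this predict/correct cycle, reduced to one dimension with a single landmark at a known position (so a plain Kalman filter rather than the extended filter of the lecture, and with illustrative names), shows how prediction inflates the variance and a landmark measurement shrinks it again:

    def predict(x, p, motion, motion_var):
        # Prediction: moving shifts the estimate and inflates the variance.
        return x + motion, p + motion_var

    def correct(x, p, measured_distance, meas_var, landmark_pos):
        # Correction: a measured distance to a landmark at a known position is
        # equivalent to a (noisy) direct observation of the robot position.
        x_observed = landmark_pos - measured_distance
        k = p / (p + meas_var)            # Kalman gain
        x = x + k * (x_observed - x)      # corrected estimate
        p = (1.0 - k) * p                 # variance shrinks again
        return x, p

    x, p = 0.0, 0.0                                    # start: position known exactly
    x, p = predict(x, p, motion=1.0, motion_var=0.01)  # variance grows to 0.01
    x, p = correct(x, p, measured_distance=3.9, meas_var=0.04, landmark_pos=5.0)
    print(x, p)                                        # variance is now 0.008 < 0.01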
Now what happens if we don't know the landmarks in advance? This is actually the usual case, because it is very hard to get hold of cadastral maps or floor plans, and even if you manage to do so, you will often notice that they are not very useful for localization: the buildings have been built in a different way, or they have been modified later on, or there are so many additional items like chairs and tables that are not part of the floor plan but make up a huge portion of what the robot sees. So now let's think about the following. I put my robot somewhere, and since I don't have a map of landmarks, I will have to produce that map on my own. Since I start with an empty map, I can define the robot's position to be the origin and its orientation to be along the x-axis, and I can pretend the uncertainty in position and heading to be zero.
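A minimal sketch of this starting condition (illustrative Python, not the lecture's code): the robot pose defines the origin of the map frame, its covariance is set to zero, and the landmark part of the state is still empty.

    import numpy as np

    # SLAM start-up: the robot pose defines the map origin, and no landmarks exist yet.
    state = np.array([0.0, 0.0, 0.0])   # x, y, theta of the robot
    covariance = np.zeros((3, 3))       # "pretend" the initial pose is exact
    # Landmarks will later be appended to the state, two entries (x, y) per landmark.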
Now the idea is as follows. Sitting there, the robot sees some landmarks, and since in our case the laser scanner measures both the bearing angle and the distance, I can use those measurements to define the landmark positions in the world, meaning in the world frame that has its origin at the start position of the robot and is oriented along the robot's heading at the start position.
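The conversion described here, from a measured distance and bearing angle to a landmark position in the map frame, is the robot position plus the measurement rotated by the robot's heading; a small sketch, assuming the bearing is measured counter-clockwise from the heading:

    from math import cos, sin

    def landmark_from_measurement(robot_x, robot_y, robot_theta, distance, bearing):
        """Project a (distance, bearing) measurement into map coordinates.

        The bearing is assumed to be measured counter-clockwise from the
        robot's heading, as with a laser scanner."""
        lm_x = robot_x + distance * cos(robot_theta + bearing)
        lm_y = robot_y + distance * sin(robot_theta + bearing)
        return lm_x, lm_y

    # Example: robot at the origin looking along x, landmark 2 m away, 90 degrees to the left.
    print(landmark_from_measurement(0.0, 0.0, 0.0, 2.0, 1.5708))  # approx (0.0, 2.0)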
Now, after having defined those landmarks, I will do the very same as in the previous case: the robot moves along and increases its uncertainty, but then it also measures the bearing angle and distance to the landmarks we just defined, and using those measurements I can correct the robot's position and orientation. Thus I will also get a smaller error ellipse and a smaller uncertainty in heading. This looks simple: instead of taking the landmarks from a map obtained by some external means, I do the following. Whenever I see a landmark for the first time, I determine its position relative to the map I am currently building up, and I enter it as a new landmark, which can then be used for the localization of my robot at all subsequent positions where the landmark is also in the field of view of the robot's lidar.
Now, it is not exactly as easy as that, because when we see the landmark for the first time, our measurement of the bearing angle and the distance induces an error in the landmark position as well. And later, say at that position, when I see this landmark for the first time, I not only have the angle and distance error, but I also have the error in the current position of the robot, and these add up to an even larger error in the position of the landmark. If I move on, I will observe this landmark once again, and having multiple observations, the error in the landmark's position should also decrease. So what becomes clear is that I cannot simply put a landmark into my map when I measure it for the first time and then assume that the landmark position is correct. I can only put it into my map with a certain error, and then, as I move on and measure that landmark over and over again, update this error, just in the same way as I am updating the position and orientation of my robot.
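One common way to make this precise (a sketch of standard first-order error propagation, not necessarily the exact formulation used later in the lecture) is to push both the robot's pose covariance and the measurement noise through the Jacobians of the landmark initialization, which gives the initial covariance of the new landmark:

    import numpy as np
    from math import cos, sin

    def initial_landmark_covariance(pose, pose_cov, distance, bearing, meas_cov):
        """First-order estimate of the covariance of a newly initialized landmark.

        pose:      (x, y, theta) of the robot
        pose_cov:  3x3 covariance of the robot pose
        meas_cov:  2x2 covariance of the (distance, bearing) measurement
        """
        x, y, theta = pose
        a = theta + bearing
        # Jacobian of the landmark position w.r.t. the robot pose (x, y, theta)
        G_pose = np.array([[1.0, 0.0, -distance * sin(a)],
                           [0.0, 1.0,  distance * cos(a)]])
        # Jacobian of the landmark position w.r.t. the measurement (distance, bearing)
        G_meas = np.array([[cos(a), -distance * sin(a)],
                           [sin(a),  distance * cos(a)]])
        return G_pose @ pose_cov @ G_pose.T + G_meas @ meas_cov @ G_meas.T

    # Example: robot near the origin with some pose uncertainty sees a landmark 2 m ahead.
    pose_cov = np.diag([0.01, 0.01, 0.002])
    meas_cov = np.diag([0.0025, 0.0001])   # distance and bearing noise
    print(initial_landmark_covariance((0.0, 0.0, 0.0), pose_cov, 2.0, 0.0, meas_cov))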
Previously, in our extended Kalman filter implementation, our system state included the x position, the y position, and theta, the heading angle of our robot. Now, in the SLAM version of the extended Kalman filter, we still have the robot's x, y, and theta, but we also have the x and y of landmark one, landmark two, landmark three, and landmark four, so these also become part of our state vector. What you see immediately is that our state vector no longer has a constant size: for each landmark which we observe and which we did not observe before, our state vector will grow by two elements.
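A minimal sketch of this bookkeeping (illustrative only; the cross-covariances between the new landmark and the rest of the state are set to zero here, which is a simplification): the state starts with the three pose entries and grows by two entries, and the covariance matrix by two rows and columns, whenever a previously unseen landmark is added.

    import numpy as np

    def add_landmark(state, covariance, lm_x, lm_y, lm_cov):
        """Append a new landmark (x, y) to the SLAM state.

        lm_cov is the 2x2 initial covariance of the landmark position;
        cross-covariances to the rest of the state are left at zero here,
        which is a simplification."""
        state = np.append(state, [lm_x, lm_y])
        n = covariance.shape[0]
        new_cov = np.zeros((n + 2, n + 2))
        new_cov[:n, :n] = covariance
        new_cov[n:, n:] = lm_cov
        return state, new_cov

    state = np.array([0.0, 0.0, 0.0])             # x, y, theta of the robot
    covariance = np.zeros((3, 3))
    state, covariance = add_landmark(state, covariance, 2.0, 1.0, np.eye(2) * 0.25)
    print(state.shape, covariance.shape)          # (5,) (5, 5) -- grew by two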
Now let me ask you the following question. Say our robot starts at (0, 0) with heading zero, but I feel uncomfortable with the idea of assigning a zero error to the position and orientation, so I just give it a large error in position and heading. Now the robot will observe some landmarks, and since the position error is large, the error ellipses for those landmarks will be large too. But as I move, I will observe those landmarks again, and so all those error ellipses will get a bit smaller. Now the question is: let's say I move around in this area for a really long time, observing those landmarks over and over again. My hope is that those error ellipses will get smaller and smaller until they are really, really small, so that the landmarks are not uncertain anymore and the situation is equivalent to what we had in our earlier extended Kalman filter approach, where the landmarks were assumed to be error free. So what do you think: will the error of the landmark positions go down to zero as the number of measurements by my robot goes to infinity?
