A Whole New World [DEMO #4]
Formal Metadata
Title: A Whole New World [DEMO #4]
Number of Parts: 19
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/18068 (DOI)
Transcript: English (auto-generated)
00:17
Hello, I'm Chetan, this is Yue, and our hack is an application of the recent research
00:23
we've been doing at Numenta regarding sensorimotor inference and temporal pooling. So we'll show you a little robotics-based hack on that. This is actually the first experiment where we use real-world data and pass it to the algorithms.
00:41
So the setup is very simple: we have a small robot there, it has an IR sensor that's measuring the distance to some objects, with a range from about 5 centimeters up to 30 or 40 centimeters. It also has a motor on the back. If you only look at the sensory part, it looks random, because we program it to move randomly to sample a big part of the world. But if you consider
01:06
both the sensory input and the motor command, it contains some information about the layout of the world, and if you also change the spatial configuration of the world, you might get a sense of: this is a new environment, versus this is an old one.
01:22
Yeah, so a little bit of the theory before we show you the demo, so you kind of understand what's going on. What we're doing is taking the data from the sensor and the motor, using a scalar encoder to get SDRs, and feeding both of them, concatenated, into layer 4.
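The encoding step described above could be sketched roughly like this. This is a toy encoder, not the actual NuPIC `ScalarEncoder` API; the ranges and bit widths are illustrative assumptions.

```python
# Minimal scalar-encoder sketch (hypothetical, not the NuPIC API).
# A value in [min_val, max_val] becomes a sparse binary array (SDR)
# with w contiguous active bits; sensor and motor SDRs are concatenated.

def scalar_encode(value, min_val, max_val, n=64, w=9):
    """Return a list of n bits with w contiguous 1s positioned by value."""
    value = max(min_val, min(max_val, value))        # clip to range
    buckets = n - w                                  # possible start positions
    start = int(round((value - min_val) / (max_val - min_val) * buckets))
    return [1 if start <= i < start + w else 0 for i in range(n)]

# Sensor: IR distance in centimeters (the 5-40 cm range from the talk).
sensor_sdr = scalar_encode(22.0, 5, 40)
# Motor: command index (e.g. 0 = left, 1 = right, 2 = forward; assumed).
motor_sdr = scalar_encode(1, 0, 2, n=16, w=5)

# Concatenated input that layer 4 would receive.
layer4_input = sensor_sdr + motor_sdr
```

Concatenating the two encodings is what lets layer 4 see the sensory reading and the motor command as a single input pattern.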
01:40
And so if you think about it from the perspective of this robot, what it's seeing is: it thinks, okay, I'm going to move left now, and then it moves left and it senses how far away the object is using its IR sensor. Then it can say, I'm going to turn right now; what do I expect to see, how far away is the object supposed to be? And it can make a prediction. So in this case,
02:01
layer 4 gets the information about the current sensor reading, how far away the object it's looking at is, and the motor command it's about to execute. Layer 4 basically learns those sensorimotor transitions, and learns to predict what it's going to see next, what the sensor value it reads next is going to look like. So if a transition was predicted by layer 4 successfully,
02:25
if layer 4 was able to learn that transition, then layer 2/3 can pool over those predicted transitions, because now the world is more predictable, and it can build a stable representation for that world. So layer 2/3 pools over it, does temporal pooling, and if a transition was unpredicted, then you'll
02:41
see bursting in layer 4, and it will pass those changes through. So what we hope to see is a stable representation once the world becomes predictable. Layer 2/3 is also supposed to learn higher-order transitions, but we didn't test that part. Okay, so let's take a look at the demo.
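The two-layer behavior described above could be sketched as follows. This is a toy stand-in, not the actual NuPIC layer 4 / layer 2/3 implementation: layer 4 is modeled as a lookup of (sensor, motor) → next-sensor transitions, and layer 2/3 keeps its representation stable while those transitions stay predicted.

```python
# Hedged sketch of the pooling rule described in the talk; all names
# are hypothetical stand-ins for the real temporal-memory algorithms.

class TransitionMemory:
    """Stand-in for layer 4: learns sensorimotor transitions."""
    def __init__(self):
        self.transitions = {}  # (sensor, motor) -> set of next sensor values

    def learn(self, sensor, motor, next_sensor):
        self.transitions.setdefault((sensor, motor), set()).add(next_sensor)

    def predicted(self, sensor, motor, next_sensor):
        """True if this transition was seen before (i.e. no bursting)."""
        return next_sensor in self.transitions.get((sensor, motor), set())

def temporal_pool(pooled, active_cells, was_predicted):
    """Stand-in for layer 2/3: stay stable while layer 4 predicts."""
    if was_predicted and pooled is not None:
        return pooled                  # predictable world: stable SDR
    return frozenset(active_cells)     # bursting: representation changes

tm = TransitionMemory()
tm.learn(sensor=20, motor="left", next_sensor=12)

pooled = temporal_pool(None, {1, 5, 9}, was_predicted=False)   # new world
ok = tm.predicted(20, "left", 12)                              # seen before
pooled2 = temporal_pool(pooled, {2, 6, 7}, was_predicted=ok)   # stays stable
```

The key point the sketch captures: once every transition in a world has been learned, the pooled representation stops changing, which is the stability shown in the demo.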
03:01
Disclaimer: this is a live robotics demo, so it's very likely it won't work. It did work yesterday in the room, and we took videos of it. Go ahead and stand up so you can see it, all right. So there are three objects, this is the robot, the sensor is in the front, so it can sense
03:24
the distance to the object, and it will move around to sample the three objects. So its movements are random? It's random. Its movements are random, except it tries to explore transitions it hasn't seen before. So it's biased towards exploring new things.
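The exploration bias just mentioned could be sketched like this; the state and motor names are hypothetical, and the real robot's policy may differ in detail.

```python
# Sketch of movement selection biased toward unseen transitions:
# random choice, but restricted to the least-tried motor commands.
import random

def choose_motor(state, motors, visit_counts):
    """Pick uniformly among the motor commands tried least from this state."""
    counts = {m: visit_counts.get((state, m), 0) for m in motors}
    fewest = min(counts.values())
    return random.choice([m for m, c in counts.items() if c == fewest])

visits = {("near-object", "left"): 3, ("near-object", "right"): 1}
move = choose_motor("near-object", ["left", "right", "forward"], visits)
# "forward" is untried (count 0), so it is the only candidate here.
```

This keeps the movement stream random while still covering transitions the model has not yet learned.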
03:43
Okay, it's initializing now, so what I'll do is... So we want to look up there, or we want to look over here? I'll look right here.
04:01
Okay, so we just told it to do 30 random movements. 30 random movements. By the way, nothing has been trained yet; the model is empty, it hasn't learned anything at this point. So here what you're seeing is the representation in layer four in the middle, the representation in layer three at the top,
04:21
and the number of unpredicted cells in layer four at the bottom, in that graph there. So initially everything is unpredicted. It starts to make some predictions, as you can see in the bottom graph there. And you'll see in the top layer, it made a sound, and it classified this world successfully,
04:41
and you're going to see a stable representation in layer three. This is only showing a subset of the columns. Yeah, this is just a subset of the columns. So you'll see some stability there. So now, for this world, there's a stable representation that's built, because layer four is able to predict. Now we switch to a different world. What is this new world?
05:01
It's a whole new world! I just changed the spatial configuration. So it looks, it should look, yeah. Yeah, we mark the lines so that we can go back to the first world. Right, which we'll do in a second.
05:22
But it looks different; we'll see what it sees and what it represents. So it's going to go through the same process. You'll see that everything is unpredicted again, because it hasn't seen this before. In layer three, there's no stability; it keeps changing between representations.
05:40
And soon, you see that it starts making predictions in layer four. And it recognizes this as a new world. And you see this representation here is stable, right? But it's different from the representation for the previous world. And you see in this classification here, it used to say zero
06:01
for the previous world, but now it says one. And it also plays a different sound. Maybe I'll go back to the first world to see whether it can recall. Are you sure that's exactly the first world? We tried to mark the positions. Yeah, there will be some small variations from noise in the sensors, the motors,
06:23
and in our movements here. But hopefully it can look past that. It was this one. I don't know. It's a somewhat new world.
06:41
We can always run it again. OK, so let's see what it sees.
07:00
It's not perfect. So there's noise; some of the columns are still bursting. Is that because it's slightly different? So is it going to think of it as a new world? No, you'll see up there, right there. This was the representation it had for world zero. This is the representation it now sees.
07:21
So it somehow generalized a little bit, accepted this noise. It didn't think of it as a new world. Yeah. So we'll do a little anomaly. It's the same world again, but this time I will move one object away. Go for it.
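The classify-or-register behavior seen in the demo (world zero recalled, a new configuration getting the next label) could be sketched like this; the overlap threshold and function names are illustrative assumptions, not the demo's actual classifier.

```python
# Sketch of world classification by SDR overlap: match the current
# stable layer 2/3 representation against stored ones, or register
# a new world if nothing overlaps enough.

def classify_world(current, known_worlds, threshold=0.5):
    """Return the index of the best-matching stored SDR, or a new index."""
    best, best_overlap = None, 0.0
    for idx, rep in enumerate(known_worlds):
        overlap = len(current & rep) / max(len(current), 1)
        if overlap > best_overlap:
            best, best_overlap = idx, overlap
    if best is not None and best_overlap >= threshold:
        return best
    known_worlds.append(current)        # unseen world: assign next label
    return len(known_worlds) - 1

worlds = []
a = frozenset(range(0, 20))             # stable SDR for the first world
b = frozenset(range(40, 60))            # stable SDR for a new configuration
first = classify_world(a, worlds)       # registered as world zero
second = classify_world(b, worlds)      # different representation: world one
recalled = classify_world(a, worlds)    # world zero recalled
```

An overlap threshold below 1.0 is what gives the tolerance to sensor and motor noise that lets the demo recall world zero despite small variations.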
07:48
What is this new reality? So there's some predictions you can see, but also some unpredictability. And you'll see there's no stable representation in layer three,
08:04
but I should have maybe done more time steps. You think it would learn it as a new world then? Yeah, but I don't think I actually gave it enough time. I didn't. Close. Oh, I just needed a couple more steps.
08:21
All right, let's just... Two minutes. That's Matt speaking. Essentially you see that with the anomalous world, it wasn't able to consistently predict, so it didn't settle into a stable representation.
08:41
Is it still learning, though? It's still learning, yeah. It's classifying it as world zero. It's classifying it as world zero, for some reason. So that's not as expected. World zero is the first one? World zero is the first one. So that's interesting, yeah. It classified it as the first one. We tried something we hadn't tested yet.
09:03
We can talk after the class. There's the stability and distinction in layer three, and that's where we're going.