Hacking how we see
Formal Metadata

Title: Hacking how we see
Number of Parts: 165
License: CC Attribution 4.0 International: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/39334 (DOI)
Transcript: English (auto-generated)
00:19
So, I will start immediately with introducing Ben Senior to you, he's an independent researcher
00:26
and inventor, and he's got a topic I've been really looking forward to all Congress: hacking how we see, and especially hacking how the brain works. Because, I mean, it's not about the eyes, we could just, yeah, use other eyes maybe, but our brain is so versatile
00:45
that I'm very much, yeah, looking forward to hearing what I can do and what other people can do about that. So, that's your applause: Ben Senior.
01:07
I'd just like to start with some thank yous. Thank you to everybody that's put this event on, because it's been as inspiring as every year, and I'm very grateful that I can be here to share some ideas with you, and I'm very grateful that you're
01:21
here. So, I'm going to talk about hacking how we see, and I'd like to dedicate this first of all to my son, Arthur, if it wasn't for him, this project wouldn't have started, and I'd also like to dedicate it to the many volunteers, many of whom I can't show here, who've given their energy and their time and their inspiration, and that's
01:44
why this project's continued. So, this is all about lazy eye. You probably know a few people with lazy eyes: one eye falls inwards or outwards, it might always be falling to one side, or it might only happen sometimes when you're tired or ill,
02:03
like it does for me. It might actually switch sides, sometimes it's the opposite eye that falls to one side, sometimes it's always the same eye. So, I'd just like to know who here in the audience has a lazy eye or might have a lazy eye? Wow. Okay,
02:21
brilliant. So, before we can talk about what we're hacking, we need to first of all understand what it is we're going to hack. So, what does it mean to see? So, it sounds pretty obvious we see with our eyes, but we don't really. We see through our eyes in our mind, and that's something I want to bring home. The crazy thing is that your
02:46
brain actually starts in the back of your eyeball, okay? It's the only place where your brain is outside the protective enclosure of your skull. So, your eye is more than just this movable window onto the world, it's actually a space suit, it's protecting you
03:03
from viruses and bacteria and contamination. And the crazy thing is that there are 100 times more brain cells in the back of the eye than there are axons, the channels for taking that information back into the rest of the brain. So, what's happening in there? It isn't just picking up light levels like a CCD chip and passing them back
03:26
to some processor. Those channels are very specific. Some of the channels fire when your receptors that are specialized for different wavelengths of light under different light conditions fire. Some of them fire when groups of those receptors detect areas
03:43
of contrast, okay? So, some receptors detect light and some detect dark, and they realize, oh, here, there's a contrast, there's an edge, and then they fire. Others detect the movement of light across receptors in particular directions, and then the channels
04:01
fire. So, what you end up with is this stream of fragments, structured, meaningful fragments tearing down the optic nerve at the equivalent of 10 megabits per second towards the visual cortex, where it's processed at the back of the brain. So, when these fragments arrive in the brain, they're completely context-free. And somehow, your brain
04:24
has to put these things in the context of what you've been seeing, in the context of what you know and can recognize. It needs to combine it with your other body's senses to make sense of the environment and what you're doing. I mean, are you moving
04:41
or am I moving, you know? And then, all this has to come together so that you become consciously aware of the things that you want to place your attention onto, so the loop can be completed and your eyeballs can track the thing that you're interested in. So, this is a simple schematic of some of the basic parts that make up the visual cortex.
05:04
All of these parts are effectively, well, they are literally neural networks. Neural network classifiers, right? Duh. And these are operating subconsciously. They're pre-processing those fragments and at some point later, past the subconscious stage,
05:21
you can become consciously aware of things. So, these are some of the specific jobs that these units are doing. And what they're doing is they're interacting with each other. They're negotiating. The output of these classifiers are flying backwards and forwards and roundabout and they're sifting a search space. They're trying to find the probable.
05:46
What is it that you're probably looking at, given this crazy mess of signals that are coming in? So, these are all just words. Can you experience this? Yes, you can. So, we've seen in this congress and other congresses how neural network classifiers can be attacked.
06:04
You can subtly manipulate the inputs so that the classifier makes mistakes. It makes the wrong classification. And optical illusions are exactly this. They are adversarial attacks, except they're operating directly onto your brain.
06:23
They're altering your perception of what's probably there. And you can do this at many, many levels of abstraction. There are so many kinds of optical illusions, but I'd just like to show you a few, just to make the point. So, take a look at the center of one of those circles. And the other ones probably start rotating. Except they're not rotating, it's completely fixed.
06:47
It's fooling the classifier in your brain for detecting rotation. This is a really famous picture from the 1890s. The artist has deliberately made it ambiguous. You can classify it as a rabbit or you can classify it as a duck, which is pretty interesting.
07:04
But what's much more interesting is that you cannot see it as both at the same time. Try as hard as you like. You might get quite fast at switching, but at some point, your brain makes a binary decision. This is a rabbit or it is a duck. And finally, I'd like you to pay close attention to the top of the mountains beyond the lake.
07:29
There's no lake there. That's a white wall at the bottom of a garden. So, what you perceive is even influenced by your conscious thought.
07:43
Well, very much, in fact: misdirection and suggestion. You know, the stuff that magicians work with on a daily basis, and leaders of the free world. So, let's get back to talking about what it means to see with two eyes. So, we each have two eyes, and each eye sees a flat image of the world.
08:01
And our brain brings these two images together. And in primates and quite a lot of the carnivores, it's evolved that the eyes are at the front of the skull looking forwards. And this gives the most overlap possible between each eye. So, both eyes are seeing a lot of the same stuff.
08:23
And this gives us the ability to accurately estimate distances. And that's pretty important for primates, right? And why does this work? Well, on one level, physically, it works because the eyes are separated horizontally. Each eye has a slightly different view onto the world.
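As a rough back-of-the-envelope (standard stereo geometry, not a formula from the talk): for an interocular baseline $b$ of roughly 6.5 cm and an object at distance $Z$, the binocular parallax angle $d$ is approximately

$$ d \approx \frac{b}{Z} \qquad\Longrightarrow\qquad Z \approx \frac{b}{d} $$

So an object at 65 cm subtends about 0.1 radians of disparity, nearly 6 degrees, while a distant mountain subtends almost none: the nearer the object, the larger the parallax the brain has to detect and cancel.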
08:42
So, these are two pictures taken 10 centimeters apart, and at the bottom, I've overlaid them. And you can see that things in the foreground have this overlap, this parallax. Now, if you, in a virtual reality headset or a stereoscope, were to look with the left eye at the left image and the right eye at the right image, you wouldn't see that overlap.
09:01
What you would see are branches popping out towards you. Your brain, your visual cortex, is removing the parallax and presenting it to you as a feeling for depth. And this is something, and this is important, this is very relevant to lazy eye. This is something that you learn in the first weeks and months of your life. When you see a baby in the crib playing around with a teddy bear or something,
09:25
it starts off with both eyes all over the shop. And it learns to look at one thing at a time with both eyes. And it knows that it's seeing one thing with both eyes at once, the brain does, because when it's looking at the thing it's touching, that parallax has disappeared.
09:44
And at the same time, the brain is putting together a feeling for distance. It can feel the sensory inputs from the eye muscles, so it can feel the angle that the eyes are at, and it can feel the thing that you're touching or the amount of time it takes to travel across the room to reach something
10:02
when the child's a bit older. And it can put those two things together and give you a feeling for distance. So, when you look at something and your eyes come together, ah, yeah, that feels about that far away. I know how far away that is. That's how it learns to develop a feeling for distance. And this becomes, in a way, parallelized.
10:22
If you look past the object, so if you each hold up a finger, and you look at me, for example, you'll notice, you'll perceive that you've got two fingers doubled up. And as you're walking around, or you're running through the jungle being chased by a tiger, in parallel, your brain is seeing these different amounts of parallax
10:43
for all the things around you, and giving you this sense, this feeling for depth and the space that you're in. So, where's this happening? It's happening really, really, really early in the visual processing system. The inputs from the eyes are routed to this area at the back,
11:03
the V1 cortex, which is kind of not exclusively where they're routed to, but this is where most of the processing begins. And that's where you find these things called ocular dominance columns. So, what's an ocular dominance column? Well, these stripes represent the inputs,
11:21
a million from each eye, a million plus, the dark stripes representing the inputs from one eye, and the light stripes representing the inputs from the other. And if we take a look inside those stripes, a one by one by two millimeter chunk, overlapping two stripes, you'll see something like this. This is called a hyper column.
11:41
And this is a crazy thing you find in the brain. It's a multidimensional classifier. In this case, in the visual system, it's classifying orientation and color and the stuff at the front. It also classifies in different parts of the brain, sound, it classifies touch. And what's happening is, the inputs from the left and the right eye, millions of times, are going into one column,
12:03
for example, the left eye, and another column from the right eye. And the further you go away from that point, down the front face of that hyper column, the more work is being done comparing the inputs, these matched inputs from both eyes, seeing what's similar and what's different. And that is tangibly where the outputs from this classifier
12:23
are saying these things are the same or these things are different. That's where parallax is detected. So where does lazy eye come from? A lot of things can go wrong during gestation, a lot of things.
12:41
Genes can fail to express properly, there can be toxins in the environment, there can be mutations, the mother might be malnourished, might receive an injury, et cetera, et cetera, et cetera. And the eye is a very sensitive organ, a lot of things can go wrong. Corneas can get damaged, lenses can be misshaped, there can be too much fluid in the eye, too little fluid in the eye, the eyeball can be misshaped, the optic nerve can be damaged,
13:02
the skull might be the wrong shape, so the eye is too high, too low, to the side. And at the same time, babies in the womb are developing three to five hundred thousand neurons per minute. So a lot of stuff can go wrong.
13:22
But many of these causes have the same outcome. And the outcome is that the input from one eye is radically different to the input from the other eye. And what that would normally mean is that you get this situation where the brain can't bring these two images together anymore, there's not enough similarity, it can't figure out where the overlap points are.
13:41
So you see double, like when you're really drunk, right? Really drunk. And from an evolutionary perspective, confused monkeys are lunch. So evolution has selected for a plan B, and that's called amblyopia. If the brain detects that it can't fuse both images together,
14:03
it suppresses one of the eyes, it inhibits the eye, so that you only see with one. So you see clearly, which means that you've lost your depth perception, and generally you've lost part of your field of view. And where does that happen? Well, there are a lot of competing theories.
14:21
It's not really known. Some of the most recent hypotheses are that there are concrete conflict-detecting cells that look at matched regions of the retina, and if the inputs are too different, then it goes, whoa, we're not looking at the same thing, are we? Right, time to switch off. And they probably inhibit the lateral geniculate nucleus,
14:43
which is this part I've highlighted, which is even before V1. It's right where the optic nerve inputs come together. So the eye is functioning, these electrical signals are arriving at the LGN, and then they're being inhibited, so they only carry on in a very weakened state.
15:03
So the next step is, or the next consequence, is often strabismus. So lazy eye has these two components, the suppression and then the lazy eye part, the strabismus, which is when the eye falls to one side. So it's also not really clear why that happens. It might be because the eyeball's always in this tug-of-war
15:22
with muscles on both sides, so that when it's positioned, it doesn't just flop about, it's held in tension. And maybe one group of muscles is just stronger than the other, so it either gets pulled outwards or inwards. Or maybe it's because the brain really would like to have a signal that's as different as possible, so that it's easier to filter out.
15:47
The problem is that evolution didn't know about eye doctors. So nowadays you can have eye doctors that can shoot freaking lasers at the cornea
16:02
and reshape them and take out lenses with cataracts and do all sorts of magic. But evolution didn't put this reset switch at the back of the head, so you can't reboot the system. And the trouble is, even once you've fixed the original physical problem, you've still got one eye pointing in the wrong direction and it's turned off. So now the problem is, how do you solve the original evolutionary solution
16:25
to the first problem? So I'd just like to talk quickly about some of the consequences, the personal consequences, for people with strabismus. This lack of depth perception means that there are going to be more falls and stumbles and accidents, which is stressful.
16:42
It generally means that they're less refined in using their bodies, which is stressful. It means that for people that lose part of their view field, they're not so aware in social situations always of who's saying what. So it means that you've got to position yourself carefully and it's another stressor.
17:00
It's a stressor if it makes you feel ugly. It's a stressor if you're talking to people and you can see they can't tell if you're looking at them or someone else. Stress, stress, stress. It's not so easy. Sociologically, we're talking about 7 million people a year, so it's about 5% of all people that are born
17:21
at some kind of disadvantage, which might not always be necessary. Perhaps we can do something about this. And because it's across the board, and 74% of the world are on less than $10 a day, most people aren't going to have very many options.
17:41
Neurologically, what are the consequences? Well, you were born with all of these structures to fuse the eyes together, and you were born with the structures in your brain to coordinate eye movements, so that both eyes look in the same direction. And you were born with the ability to perceive depth. But this topographic disorganization means that stuff wasn't connected together properly.
18:02
In those first phases of life, you didn't learn to put this stuff together. It's an immature system. And in a worst case, if one side of these ocular dominance columns is constantly deprived of input, those neural connections decay over time.
18:21
So you can see here, instead of stripes, you have these thin, isolated islands, and that can lead to blindness. Sorry. So what happens now? You might come across, if you have this, a vision therapist. So this is Marco, one of our volunteers. And they are like speech therapists who can recover somebody's voice after they've had a stroke,
18:42
after they've had brain damage, or a physiotherapist who can do the same with limbs. But you're very unlikely to find one of these people. There are around 2,000 in the entire world, 1,000 of them in America, and they can easily charge north of $400 an hour. So unless you're wealthy and you've got good insurance,
19:01
you're far more likely to come across, 100 times more likely to come across, ophthalmologists. And ophthalmologists, for like the last 100 years, have been pushing patching and surgery. So patching is this idea that you cover the strong eye to make the weak eye stronger. And this is a really outdated metaphor.
19:21
It's a misleading metaphor, and I think it's a counterproductive metaphor. But that's not to say that it doesn't have value, the patching I mean, because the patching means that at least the ocular dominance column for the suppressed eye is forced into activity, and that prevents it from decaying.
19:40
On the other hand, if you overdo it and you patch too much, the brain says, well, okay, look, I just want to use one eye. What are you doing to me? You want me to use that eye? Fine. And then it suppresses the other eye, and it switches over. And even if that doesn't happen, you're only training this monocular ability to see one eye at a time.
20:02
You're not training this coordination between the ocular dominance columns where all this other stuff comes in, the parallax, the depth perception. So some kids get lucky, some kids, through patching a few hours a day, somehow they catch the curve and they are able to rehabilitate their binocular vision, but it isn't the majority.
20:21
So the remaining option, as far as ophthalmology is concerned, is surgery, to cut and shorten and stick back together your eye muscles, so that the resting position of your eye is more or less in the middle of your eye socket. So this is cosmetic, and that's not to say it's without value. It can improve your quality of life.
20:40
It can make you feel better about yourself. But it doesn't rebalance the brain. It doesn't mean that you're going to recover this binocular vision. But it might be an opportunity, because with the eyes roughly straightened, it means that you can certainly look through the lenses of even the cheapest VR headsets,
21:01
to be used with the cheapest of phones. Because these headsets do something very useful. It's less about the virtual reality and it's more about the ability to very precisely target each eye and control what we can present to it at a very low cost. So what if, after the physical problem's been solved,
21:23
we could hack past that lockdown? We know that these classifiers in the brain, these structures, these ocular dominance columns, we know that they learn and they adapt through training. We know that they're flexible and plastic. So can we build environments that can subconsciously reboot that binocular vision?
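To make the basic mechanism concrete, here is a minimal Unity sketch of the kind of per-eye targeting these cheap headsets enable. The class and layer names are illustrative assumptions, not the actual EyeSkills code:

```csharp
using UnityEngine;

// One camera per eye, so each eye can be shown completely independent content.
public class PerEyeRig : MonoBehaviour
{
    public Camera leftCam;
    public Camera rightCam;

    void Awake()
    {
        // Route each camera to a single eye of the headset.
        leftCam.stereoTargetEye  = StereoTargetEyeMask.Left;
        rightCam.stereoTargetEye = StereoTargetEyeMask.Right;

        // Objects on the assumed "LeftEyeOnly" / "RightEyeOnly" layers are
        // rendered to one eye and culled from the other.
        leftCam.cullingMask  &= ~LayerMask.GetMask("RightEyeOnly");
        rightCam.cullingMask &= ~LayerMask.GetMask("LeftEyeOnly");
    }
}
```

Everything that follows, the suppression tests, the luminance balancing, the misalignment compensation, builds on this one trick.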
21:45
So now we get to what we've been doing. These are... Very basic experiments. And that means that they are very basic hypotheses, because they're based on anecdotal observations.
22:02
This is what they are, anecdotal observations. Because these are interactions we've had during development. Every time I sit down with a new person who has lazy eye, we discover new phenomena. It means that four or five things need to be changed on the system. Nothing stays constant at the moment.
22:20
Constant learning process. So let's get falsifying. The first hypothesis, well, the main overriding hypothesis, is that this does something. We want to falsify that it doesn't do anything. So the first hypothesis is a participant cannot use both eyes simultaneously.
22:41
So we test usually all of our participants with classic physical techniques with a vision therapist beforehand to see that they can only see with one eye at a time. So with the headset on, first of all we want to confirm this suppression. So the image at the top is what the person will see in the headset. The left eye sees arrows pointing to the left.
23:01
The right eye sees arrows pointing to the right. And in a normal situation, your brain goes, OK, I'm looking at the same thing. Let's try and overlay them. And what you perceive is this crazy overlapping mess of triangles pointing in opposite directions. If you show this to a person with a regular lazy eye, I've discovered, not an alternating
23:20
strabismus, where it changes from eye to eye, but one fixed on one eye, what they see is arrows just pointing to the right or arrows just pointing to the left, because there's an instantaneous suppression. So, sub-hypothesis: a participant cannot deactivate suppression
23:43
to see with both eyes. We discovered really early by chance that we can suppress the suppression. How do we do that? What we do is we have blinkers for the left eye for example, so the right eye sees arrows at the top and the left eye sees nothing.
24:00
And the left eye sees arrows at the bottom and the right eye sees nothing. And because these regions aren't detecting conflict, suddenly all of our participants have seen arrows pointing right and left at the same time. So they're actually using both eyes at the same time, which is weird and shouldn't be possible.
24:23
But that's not so useful, I think, because we want to get to the point where people are using both eyes to look at the same things. So what we want to achieve is to have both ocular dominance columns receiving a signal of equal intensity.
24:42
So the next sub-hypothesis is that a participant cannot use both eyes with suppression active. That's what we want to falsify. So in the normal condition, in the normal state, the signals from both eyes have the same intensity. This is frequency-modulated electrical signaling.
25:02
The left eye is seeing an arrow to the left, the right eye is seeing an arrow to the right, and in the brain it becomes perceived as a six-pointed star. If one eye is suppressed, the LGN inhibits that signal, so less signal arrives at the brain and the brain perceives an arrow pointing to the right only.
25:22
So how can we match up those stimuli to the V1? What we can do is we can just drop the intensity of the signal from the non-suppressed eye and that's what we do. The person just tilts their head up or down and that changes the brightness ratio,
25:41
the luminance ratio between each eye. And again, in all of the cases of regular strabismus or regular amblyopia, there's always been a point where the person suddenly sees arrows overlapping and pointing in opposite directions, which means that we've brute-forced our way past their suppression.
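A hedged sketch of how that head-tilt luminance control could be wired up in Unity (the component and field names are assumptions, not the actual EyeSkills code):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Dim the non-suppressed eye with a black overlay whose opacity follows head pitch.
public class LuminanceBalancer : MonoBehaviour
{
    public Image strongEyeDimmer; // full-screen black Image on the strong eye's canvas
    public Transform head;        // tracked head / camera transform
    [Range(0f, 1f)] public float ratio = 1f; // 1 = equal brightness, 0 = strong eye black

    void Update()
    {
        // Signed pitch in degrees; tilting the head down dims the strong eye further.
        float pitch = Mathf.DeltaAngle(0f, head.localEulerAngles.x);
        ratio = Mathf.InverseLerp(30f, -30f, Mathf.Clamp(pitch, -30f, 30f));
        strongEyeDimmer.color = new Color(0f, 0f, 0f, 1f - ratio);
    }
}
```

The ratio at which the participant first reports seeing both sets of arrows is the value worth storing.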
26:04
And that's a ratio that we can keep and reuse in other scenes and other environments. So I think that's falsified. We've shown that we can deactivate suppression and we've shown that we can overcome it. And we've also had the insight that in these initial physical tests,
26:23
why was it always the case that the tests were showing the person could only use one eye at a time? And I think what's happening is the tests are confounded. Physical tests are taking place in a room and there's always a backdrop. There are walls and a ceiling and a floor and that's providing enough input that this conflict is being triggered
26:44
and the eye is always being suppressed. And inside a VR headset, it's a different kettle of fish. Just worth mentioning, I think. So the next sub-hypothesis, right, we've broken through the suppression. They can use both eyes simultaneously. The participant cannot fuse the input from both eyes
27:04
whilst being amblyopic and strabismic. Well, we've overcome the suppression. How do we overcome the strabismus? So one eye is misaligned. What we can do in a VR headset, we can just rotate the entire universe
27:21
for the misaligned eye. Simples. So this way, the same object, no matter where the eye is, appears at the same position in the retina and as far as the brain's concerned, both eyes are looking straight ahead. Fantastic. And this is what we do.
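In Unity terms, "rotating the universe" for one eye can be as simple as a constant rotational offset on that eye's camera. A hedged sketch (assuming head tracking drives the parent rig, so this child-local offset survives each frame; the names and the example angle are illustrative):

```csharp
using UnityEngine;

// Compensate a fixed strabismic misalignment for one eye.
public class MisalignmentCompensator : MonoBehaviour
{
    public Camera misalignedEyeCam;
    // Measured offset; e.g. (0, -12, 0) for an eye turned 12 degrees inward (assumed value).
    public Vector3 misalignmentEuler = new Vector3(0f, -12f, 0f);

    void LateUpdate()
    {
        misalignedEyeCam.transform.localRotation = Quaternion.Euler(misalignmentEuler);
    }
}
```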
27:41
The person has the headset on and we have these two circles. Actually, we have different versions of these environments for kids and for adults and different situations, but this is sort of our classic at the moment. One circle has cyan at the top. One circle has yellow at the bottom. The circle for the eye that's straight
28:01
is fixed in local space. So it does this as you move around. It stays where it is. And the other circle is fixed in global space. So all they have to do is position their head until the two circles overlap and hold. And that allows us to extremely accurately measure precisely what their misalignment angle is.
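A sketch of how that measurement could be captured, under the same assumptions as the sketches above (one circle parented to the head, one fixed in the world; names illustrative):

```csharp
using UnityEngine;

// When the participant holds the two circles overlapped, the head's rotation
// away from the world-fixed circle is the misalignment angle we want.
public class MisalignmentMeasure : MonoBehaviour
{
    public Transform head;        // tracked head transform
    public Transform worldCircle; // circle fixed in global space

    public Vector3 CaptureAngle()
    {
        // Where the head would point if both eyes were aligned on the circle.
        Quaternion toCircle = Quaternion.LookRotation(worldCircle.position - head.position);
        // Offset between the actual head pose and that reference direction.
        Quaternion offset = Quaternion.Inverse(toCircle) * head.rotation;
        Vector3 e = offset.eulerAngles;
        return new Vector3(Mathf.DeltaAngle(0f, e.x), Mathf.DeltaAngle(0f, e.y), 0f);
    }
}
```

The captured angle is exactly what the compensator sketch above would take as its misalignmentEuler.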
28:23
And we also get this crazy effect where when the two circles are apart, there's the cyan, there's the yellow, and when they come together, we know if that person is fusing. Because they see impossible colors. The brain sees cyan and black at the same position
28:42
and yellow and black at the same position and it doesn't know what to do with it. And it starts to shimmer. It's really incredible. You know, impossible colors are awesome. And we hear this little, ooh. And then we know. So I think we've falsified that at least for some people,
29:04
in fact, I think it's better than just some people. We can achieve basic fusion, which is a limited form of binocularity. So biocularity is just using two eyes, and binocularity is using two eyes together.
29:22
And this seems to be quite stable and precise. So, great, we've overcome the suppression, we can compensate for the eye misalignment. Can we push further into those ocular dominance columns? Can we start to perceive depth?
29:42
Let's falsify the hypothesis that we can't. So this is one of several environments for depth perception and this is based on a classic test where you have this diamond background and four circles and we take into account or we make use of this parallax effect.
30:00
We just offset the circle for each eye. And this is the way that all depth perception in virtual reality works. You just offset an object in each eye so you get this parallax and your brain perceives depth. So it seems like the circle with the offsets coming out of the screen at you or coming towards you. And then randomly we pick a circle and each time we pick a circle we reduce the offset
30:22
so the depth from the background appears to be getting less and less and they just have to look right or left or up or down to indicate which one of the circles is popping out of the screen. And what we found is that despite our initial testing showing that almost everybody had no depth perception
30:42
most participants can see the first one or two. And actually that showed us that standard tests produce a lot of false negatives, partly, I think, because of this conflict issue, the conflict cells being activated by the background of the room that they're in, but also because the initial offsets in standard tests
31:02
are just too ambitious, quite frankly. And we also found, for the few people that have been able to come back, because they live in Leipzig, and try this a couple of times, that they've been able to make quite rapid progress, from just seeing the first one or two, to the first four or five, to the first seven or eight. Which implies to me that maybe
31:22
we really are activating something, and something's happening in the background here. So we've broken through the suppression. We've compensated for the eye misalignment. We've begun to push up into those ocular dominance columns, to do not just fusion but also seeing differences and detecting parallax and getting a feeling for depth.
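A hedged sketch of that staircase-style depth test (illustrative only; assumes each circle pair sits at its own parent's origin, and that the per-eye layers from the earlier sketch exist):

```csharp
using UnityEngine;

// The target circle is drawn slightly right in the left eye and slightly left
// in the right eye. That crossed disparity makes it appear to float towards
// the viewer; the offset shrinks every round.
public class DepthStaircase : MonoBehaviour
{
    public Transform[] leftEyeCircles;   // copies rendered only to the left eye
    public Transform[] rightEyeCircles;  // matching copies for the right eye
    public float disparity = 0.02f;      // initial per-eye offset in metres
    public float stepFactor = 0.7f;      // shrink factor per round

    public int NextRound()
    {
        int target = Random.Range(0, leftEyeCircles.Length);
        for (int i = 0; i < leftEyeCircles.Length; i++)
        {
            float d = (i == target) ? disparity * 0.5f : 0f;
            leftEyeCircles[i].localPosition  = new Vector3(+d, 0f, 0f);
            rightEyeCircles[i].localPosition = new Vector3(-d, 0f, 0f);
        }
        disparity *= stepFactor;
        return target; // compare against which circle the participant picks
    }
}
```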
31:44
The person takes off the headset. The eyes are still looking in the wrong direction. Their brain suppresses everything and we're back to square one. What was the point? So this is where the really novel and innovative stuff comes in. And this has been the hardest part.
32:00
Can we actually get somebody's eyes to straighten up and maybe even to stay straight? So we had many, many, many false starts and many ideas that ran into the sand until this simple hypothesis came together which is that this is a subconscious process
32:22
so let's treat it as a subconscious process. And probably the desire to maintain fusion is going to override this habituation for suppression. So what does that mean? We did this. We use the front-facing camera of the phone
32:40
and it beams in what it's seeing to each eye where we take into account the misalignment and we take into account the luminance ratios so we can break through suppression and the person can generally see both, fuse both of those images so they can walk around using their eyes
33:01
as if they were both straight. Okay, that's cool. Then what we do, we very, very gradually straighten up the image in the misaligned eye very, very slowly. And nobody that we've tried this with so far has been consciously aware of this
33:21
even when it's pointed out to them, even when I ask them, can you feel this? Can you sense this? It happens completely subconsciously. They maintain fusion because they can still see the yellow in the cyan bars, the yellow in the left eye, the cyan in the right eye and there's no doubling up. But they maintain fusion
33:41
and the thing straightens up and straightens up and straightens up, and at some point, inside the headset, the eye is straight. What was that? Yeah, thanks. I think this is a big deal too.
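A hedged sketch of that gradual straightening, reusing the compensator from the earlier sketch (the rate is an assumed tuning value, not a figure from the talk):

```csharp
using UnityEngine;

// While fusion is held, creep the compensation angle back towards zero so
// slowly that the change stays below conscious awareness.
public class GradualStraightener : MonoBehaviour
{
    public MisalignmentCompensator compensator;
    public float degreesPerMinute = 0.5f; // assumed; slow enough to go unnoticed

    void Update()
    {
        float step = degreesPerMinute / 60f * Time.deltaTime;
        compensator.misalignmentEuler =
            Vector3.MoveTowards(compensator.misalignmentEuler, Vector3.zero, step);
    }
}
```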
34:05
So there are lots of other things that we're going to do here, but we can already do this while you're watching videos, so I think this could be great for kids. Your kid spends five minutes a day watching their favorite cartoon. Why not 3D videos? Integrate this with games.
34:23
I've got a list as long as my arm of things that I want to build to improve robustness so that this really trains that system. But we know this can work.
34:40
So I think in conclusion, EyeSkills isn't doing nothing. It's having an effect. And that's not a bad thing to build on. The question is, how effective is it, and for whom? Because everybody with a lazy eye, because there are so many different causes and it's a neurological thing,
35:01
they all seem to have very subtly different symptoms and very subtly different perceptions of the world. So we need to start breaking it down into different symptoms and seeing what works best for which groups of people. And the big question is, with practice, because we haven't had a kit that was ready to take home yet but we're nearly there now.
35:20
With practice, might the day come when the eyes are straight and the brain is totally comfortable with this and familiar with this and you take off the headset and it just goes, yeah, fine, this is cool. I don't need to suppress anything. I'm not in conflict anymore. I've got used to this. That's the goal.
35:42
So in 2019, our emphasis is going to be on trying to really validate some of this, with an internal study. We have a university department in Gießen that wants to get involved and do a study. But we're looking for more participants. We're looking for other researchers. We're looking for more people who want to be involved.
36:03
And this leads me to the bigger picture. Can we build something to really accelerate progress in this field? It's been a hundred years that people have been looking at this. So I've had this dream for a couple of years. If this was open source software,
36:21
which it is, wouldn't it be cool if it became the go-to platform for research? Because researchers would have more freedom to rebuild the system, change the system, add new components without worrying about commercial licenses. They'd have the flexibility
36:40
to use each other's components off the shelf. Very importantly, they have better repeatability. Instead of just describing what they did in a paper, they can share a build. This is what we used. And that means that the same build can be shared between multiple groups, perhaps cross-disciplinary groups.
37:02
The developmental psychologists, the ophthalmologists, the neuroscientists, each of whom bring different skills and different insights to the table. And I think longevity is also important. If it's open source, it doesn't necessarily die if a company goes bust.
37:22
And this takes me to the even bigger idea behind this. I think this could be and should be one of or the biggest citizen science experiments in the world. I think one of the things that's really held back progress is the small cohort sizes
37:42
and the costs involved in getting people to come into the department, and that you only see them once every few months, and maybe they were having a bad day, they were hungry or they were tired, and the session was particularly good or particularly bad. What if hundreds of thousands of people are using this at home?
38:00
And what if we can use classic gamification and classic game testing techniques to A-B test different environments and to see which work most effectively for different groups of people with different kinds of symptoms? Let's use gamification and data science for good instead of evil. And that means that the professionals
38:22
can take a very focused look at people with specific symptoms, and with all of their experience and their expensive machines that we don't have, fMRI scanners and all the rest of it, they can take the environments that have the most positive impacts, or even the most negative impacts, and try to study why. They have
38:43
a better means of accessing those behaviors. So I'll just quickly mention or talk a little bit about the framework because the app that I showed you, the system that I showed you was really to help us figure out what needs to be in the framework and the framework is still very immature
39:03
but it's heading in the right direction. So the app is very simple to modify, to put in new environments or take environments out. This is perhaps the most important part. There's a camera rig inside there which I call the lazy camera rig because you can take any Unity game
39:21
and you just rip out the virtual reality camera and you plop in this camera and it knows about eye misalignment and luminance ratios and eye straightening and it knows how to talk to the right data objects to store this information over time so that you can see rates of progress. And the camera has these microcontrollers on it
39:40
which are extensible. And they let a practitioner manipulate in real time what the person is seeing: adding cues so that you can tell if the person is seeing with both eyes, or adding conflict, or changing the luminance ratios manually, or swapping assets from high contrast assets to low contrast assets, et cetera, et cetera, et cetera.
40:02
And that works because we have this real time remote inspection API over web sockets. So any number of practitioners can see what's happening inside the VR headset of the participant and manipulate that environment. They can take control of it, they can change the parameters of it, they can do all these things remotely
40:23
without coming into your clinic. You're not even bound to people being in the same country. And finally, to keep the costs down, we've built this gesture control system, which doesn't rely on your typical VR UI inputs with reticles and menus and things,
40:42
because you first of all can't tell what the eyes are doing and secondly you don't want to interfere with what people are seeing. So it's all based on just moving the head and holding the head still and looking up and down. Again, the goal is to keep this as cheap as possible so that all somebody needs is that they got an $8 headset with reasonably large lenses
41:01
that they can put their phone into. So next year, out into the world. That's the goal. I'm hoping that this kicks off a little revolution and I'm hoping that it brings together the ophthalmologists and the vision therapists and the neuroscientists.
41:22
And I'm hoping that we manage to get this to spread across the world and I'm hoping that we can build an organization that isn't going to just run out of steam and energy and time and money. I'm hoping we can build something that's going to last. And most importantly, none of this is going to work if we can't deliver it at a price that people can actually afford.
41:49
So to do this, this is where I beg. I need more people with more skills. We've got ideas and designs for headsets
42:02
that have two outward facing cameras, for example. So you have a stereoscopic view of the world, so you can also walk around with the headset on and perceive real depth. We need inward facing cameras that do eye tracking, but we need them for a couple of dollars, not a thousand dollars. I've got some ideas, but I need people with skills.
42:23
Maker skills. Legal advice. We are going to be stepping on a few toes with this. Neuroscientists, graphical designers, game developers, growth hackers, accountants, you name it. If you have any spare time, if this is interesting to you, if you want to get involved, please take a look at eyeskills.org.
42:43
There's a little form there on volunteer and you can tell us a bit about what you can do. And finally, I'd just like to say thank you to some of the organizations that have helped us get this far. And I'd particularly like to single out the prototype fund because they gave us money.
43:01
They are brilliant. They fund over 40 open source projects a year. And they give you money. And they get out of the way and they let you get on with it. So take a look at them. And that's all I've got for you. Thank you very much.
43:31
Thank you so much, Ben. And I assume you have plenty of questions and you're already lining up in front of the microphones. So I take first microphone two, and check for the Signal Angel.
43:46
Hi. Yes. I understand that it is sweeping fast for now, and I'm wondering: is it going to do anything besides simply finding the,
44:03
let's say, configuration that you need? So right now I'm focusing on trying to make this something that people can just use at home. I'm totally open for new ideas,
44:20
but I think every day it should go through this kind of calibration process. Because that lets us see whether there have been improvements and in what way. Before, so for example, figuring out what the current suppression ratio is and the eye misalignment, simple as that. So in that slide that you saw, just try and keep this quick here, you go through one step at a time
44:43
and each step unlocks the next step until you get to seeing straight. So it should be a very user-friendly process. And I think there'll have to be another version which has all the bells and whistles and all the environments we took out of this one for use in a clinic where you can do everything possible with it. Does that answer the question?
45:01
Awesome. Then we take again microphone two. Is the lazy eye camera negatively impacting the performance of the games? Sorry, can you say that again? Is the lazy eye camera negatively impacting the performance of the games?
45:25
The performance? I'm not quite sure I understand the question. Is the performance degraded if you apply these techniques? Is the game slower than before? I see. Does the performance degrade? Is the game slower than before?
45:42
Yeah, no. It shouldn't be. No. Can these methods be useful for patients affected by disjointed sight after a stroke? I would love to know. Probably get in touch.
46:01
Try it out. I take microphone one now. Hi, over here to the right. Question would be you said that people often get surgery after they had a condition like that. What is the condition afterwards? Are the eyes fixed in place or how much does it actually solve the problem or create a new problem?
46:22
It doesn't solve the problem except if you define the problem as being a cosmetic problem. You can do eye surgery up to six times before the muscles are shot and you can't do it anymore. I think fairly often you have to do it multiple times because the eye keeps relaxing. I suspect that's actually also a feedback process
46:42
that the brain would like to have on one side to make it easier to filter out those signals. So effectively it's shortening the muscles or something? That's right. It's just shortening the muscles so the resting position is just dragged back more or less into the center.
47:00
I think it's partly to do with the fact that most ophthalmologists on the one hand they really believe that beyond a very young age there's just no hope anymore. That has been pretty conclusively disproven by the neuroscientists. The brain stays plastic and you can learn new things just like in the speech therapy case or the physiotherapy case.
47:22
Of course, it's what the institution has developed to do. It's developed to do eye surgery. That's what they do. That doesn't mean it's the right thing to do but I hope we can use it so we can add these techniques onto the surgery so that it becomes more than just cosmetic. It becomes the gateway
47:44
to our bodies. I would take microphone 2 now, and quickly get feedback from the Signal Angel, please. Are there questions? No? Okay. Microphone 2, please. I have two questions.
48:01
I have alternating strabismus, so I wondered, how do you work with that? Because actually I can switch instantly. I know from those tests that usually if you give me two pictures, I'm always left, I see that, right, I see that. So thank you.
48:22
Secondly, I always have a hard time explaining to people what it means to lack depth perception, because I have my whole vision but I keep on bouncing into things, and I'm always using this: hey, I don't have stereo vision, so that's the reason. Then I always try to be like, okay, so just close one eye
48:41
and now do you see a difference? Some people see it and some don't. Do you have, like, a test for people, to explain what it means to lack stereo vision? Well, first of all, I'm incredibly excited to sit down with you, because you're now the third person with alternating strabismus. Yesterday I sat down with Fabian
49:01
in the audience, who has alternating strabismus, and before that, Cliff. Alternating strabismus appears to be a whole different game, a whole different thing, and it seems to be where we can make the quickest progress.
49:29
I can't, don't make me make promises or get any hopes up or anything, right? I mean, you know, all we can do, the only person that can help you is your own brain. You know, it's your brain that's got, and we don't know,
49:41
we don't have any definitive proof. That's what we're working towards but I promise you this, you will have a very interesting experience. I have some tricks to tell steric or binocular people
50:01
how it is to see monocularly. I think, in your case, because you have something that's called panorama vision, where, because of the alternating nature, your brain knows that the eyes are looking in different positions, and it kind of puts them together like two video channels, like two monitors,
50:21
and extends your field of view to some extent, probably with the portion in the middle where there's some suppression and that's something that is very useful if you're a truck driver but it's not necessarily something I think that people with regular vision can experience.
50:41
It's too far out. We could perhaps create a picture. No, we could, we could. We could produce, we could take the input of the camera and we could map it to this panorama-like view with a dead spot in the middle or something. We could do something that might give people a feeling for that, yeah.
51:00
Thanks. Stay in touch. So, microphone one please. Thanks a lot both for the talk. I was wondering, you mentioned that you have the program that adjusts the position of the eye inside the headset. What happens to the alignment when you remove the headset? So what happens, this is really fresh.
51:21
So we haven't had people being able to take this at home and practice it for any length of time but what typically happens is the headset's removed and the eyes are together and they stay together for a little while until old habits reassert themselves. The question is what happens after longer exposure to it
51:42
and all of the other techniques that I want to build in to increase the ability of the person to hold their eyes straight and to feel comfortable with them being straight and not go back into this suppressed state. That is 2019. Thank you.
52:01
Great. Microphone two please. Thank you for this. During your experiments, you don't, if I understand it correctly, you are not able to work against the underlying cause of this. So whatever is wrong with the eye,
52:20
do you already have some experience if maybe someone had an operation to reduce the impact of whatever caused this and how did it impact your experiments? This can only work if those original problems, if it's possible to tackle them
52:40
and they have been tackled. And that's generally something as simple as the person wears glasses because the most common cause is that one eye is stronger or weaker than the other. Beyond that, again, I think the only way to discover these things in practice is to make it available to a great number of people and to have them explain very carefully
53:01
what the history has been so that we can begin to piece together where this stuff can work and where it probably isn't going to work. All open for data gathering basically. Thank you. I have two short questions. The first one is, do you think this app would be helpful
53:20
for somebody who already had the surgery at a very young age, would it still help? And the other one is: would you think it's more useful to start with the app before surgery, or just wait until somebody is a bit older and then use the app? Because surgery is usually at a very young age.
53:44
So I'm kind of loath to say things that are too concrete before we've actually done trials and whatnot, but I think it's definitely worth trying. Definitely. I don't see why not:
54:01
is only adjusting the eye position. That's all it's doing, so why not? The surgery is useful at the moment because it means you can see through the lenses of the headset. If you've got extreme strabism, you look past the lens so you're just not seeing enough of the screen. You can't do anything. And in that sense, the surgery is very useful.
54:24
Thank you. And adding to that question, would you need another headset that has the lenses at other positions then, like adjustable headsets? Well, I went to Seoul and I talked to LG about some ideas
54:40
for a full view field headset. And in that week, Samsung issued a patent which is actually a very good patent in the sense that they're obviously thinking about this and it does pretty much everything right.
55:01
So they have flexible OLED displays and they've been thinking about how to organize the lenses in such a way that you can, in a headset now, you look straight ahead. Oh, there's something over there. Oh, I'll move my whole head. You need to be able to move the eyes fully in every direction. That's what full view field is.
55:20
And yeah, they have this patent. They've figured out a lot of the problems that need to be solved. That's obvious. So I hope in a few years, maybe, if VR doesn't die the death, Samsung will bring out this headset and we can abuse it for our own purposes. Nice. So we have a few minutes left and two questions on the microphones
55:43
directly to you. So I take microphone one first. Did you test your hypothesis that the usual tests fail because of the background environment because you could easily add environment and background to your VR? Obviously this is still a volunteer project
56:02
so we just have super limited resources, energy, time. The disparity between what we want to do and what we can do is enormous. If you're interested in getting involved, or somebody wants to answer that question, please, please join us.
56:22
Thank you. So, and microphone two, please. So I think using the Unity SDK is a good way, because there are a bunch of developers, game developers, that are building their VR content using it. But have you considered using WebVR? Because it's basically how you can unite
56:40
all the different platforms, and the VR headsets, and there is a standardization process from the W3C, so if you can go down a layer of the stack then you're going to reach a much wider audience and with a ton of content, including all the videos and everything. We had a look and a think about that.
57:03
But I decided to be pragmatic. Unity is the main environment that most game developers use and it means that it's easy to, it's not just about the language or the compatibility, it's also about the assets that are out there,
57:21
the libraries that are out there, the skills that are out there, the size of the community and to make it as accessible and pragmatic and reliable as possible, I've gone with Unity because it also supports all the headsets. Time will tell if that was a good decision or not, but it's a good suggestion.
57:42
Thank you. So that's also your first contributor from this group. Maybe others as well are already volunteering. Thanks again for this impressive demonstration here and thank you all for coming for the first talk on the last day of Congress.
58:00
Thanks again.