5th HLF – Lecture: Can a Machine be Conscious? Towards a Computational Model of Consciousness
Formal Metadata
Title: Can a Machine be Conscious? Towards a Computational Model of Consciousness
Title of Series: 5th Heidelberg Laureate Forum (HLF)
Number of Parts: 49
Author: Blum, Manuel
License: No Open Access License: German copyright law applies. This film may be used for your own use but it may not be distributed via the internet or passed on to external parties.
Identifiers: 10.5446/40139 (DOI)
Transcript: English (auto-generated)
00:01
My name is Anna Wienhard, I'm a professor of mathematics here at the University of Heidelberg and a group leader at the Heidelberg Institute for Theoretical Studies and a member of the Scientific Committee of the Heidelberg Laureate Forum. And it's my great pleasure to welcome you all this morning and in particular to introduce our first speaker, Manuel Blum, who received the Turing Award in 1995
00:22
for his contributions to the foundation of complexity theory and its applications to cryptography and program checking. And I'm very much looking forward to your talk, which will tell us whether a machine can be conscious and what computational methods we can use to approach the concept of consciousness.
00:42
Thank you, Anna. So I want to start by thanking the Klaus Tschira Foundation, which brought us this wonderful event. And I want to thank my girlfriend, Lenore Blum, who really has encouraged me
01:03
and a lot of these slides are much better because of her. So, can a machine be conscious? I wonder. So, towards a computational model of consciousness, I'll give some of the arguments here.
01:20
Let's start with what is consciousness? Consciousness or conscious awareness is everything you are aware of when you are awake or dreaming. What you see, what you hear, what you feel, and most especially, your own private inner speech,
01:42
which it seems all of us have. We talk to ourselves, and it's very interesting. Understanding that inner speech is a good way into understanding consciousness.
02:03
So let me just say a brief history here. I have been interested in consciousness for a very long time, since I was a kid, but I was told that it's not permissible to talk about, think about consciousness. And in fact, before 1980, it was not permissible.
02:24
By 1987, many neuroscientists had started really thinking deeply about consciousness, and I have their names over here. Bernard Baars is the first, and it's his model of consciousness
02:41
that I will be mainly speaking about. The others, David Chalmers is a philosopher, but the others are all neuroscientists, and they are wonderful neuroscientists. They have done very excellent work. And the last one, Giulio Tononi, actually has measures of consciousness, and he really knows his information theory,
03:01
besides the fact that he's a surgeon and a neuroscientist. In 1990, fMRI, functional magnetic resonance imaging, was invented, and because of that, consciousness became science. We could actually see what parts of the brain were actually active, depending on what you think.
03:24
So the question is, can a machine be conscious? So I believe it should be possible to build a machine that experiences consciousness, free will, and I hope maybe from this you will see an explanation of free will.
03:41
Pain, I mean the torment of pain, not just the simulation of pain, but the actual torment, joy, the true joy of discovery. The machine is to really experience qualia, like the color red, the ache of pain, and so on, not just simulate the experience. So let me start with some examples
04:01
of conscious and unconscious experiences, so that we are more or less on the same page here. One example which I really like, number one here, is an experience many of you have had. You go to a party, you see somebody that you know. You know their name, but you don't, it doesn't come.
04:22
You don't remember the name. And then you go and you have a drink with somebody, and half an hour later, the name pops up into your mind. So the question, what's that person's name? Well, that's conscious. But you're not aware of all that your brain is doing
04:41
to at some point pop the answer into your mind. Clearly, your brain has been thinking about it, and that's been your unconscious thinking about it. Another thing that can be conscious or unconscious is your breathing. Of course, your breathing can be conscious, and it's terrible of me to mention it, because suddenly all of you are aware of the fact
05:01
that you are breathing, which is a terrible thing to do to you. But breathing can be unconscious, and it's mostly unconscious. And let's see, let me go on. I'll go to number six, blindsight.
05:20
Well, number five is good. Number five, learning to ride a bicycle. You've learned how to ride a bicycle. You don't forget, you can always ride a bicycle. You don't know how you ride a bicycle. There's been a lot of unconscious work done by the cerebellum mainly to learn how to ride the bicycle.
05:41
Blindsight: there are people who, because of damage to a particular part, usually area V1 in the visual cortex, no longer see; they are blind. Or they claim that they are blind, that they see nothing. And yet, you can put these people at the entrance to a room with obstacles,
06:01
and tell them, look, there's a door at the other side. Walk to the other side and avoid the obstacles. I can't see, I don't know where the door is. I don't see any, well, just keep your eyes open. And then they'll walk, and they'll avoid the obstacles, and they'll get to the other side. They can, these people with blind sight,
06:21
Blindsight type 2, as it's called: you put them in front of a monitor, there's a dot on the monitor, and you say, well, just put your finger on the dot, wherever it is. I can't see it. Well, just imagine. They put their fingers on the dot. It's blindsight. They are not conscious of it,
06:40
but the unconscious is working there. You, as mathematicians and computer scientists, know about problem clarification versus problem incubation. Poincaré: was he getting on the bus, or was he getting off the bus, when he had his insight?
07:03
He mentions the fact that he had this problem, he was thinking about it, and it was just on getting on or off the bus that the idea came to him. Unconscious incubation. The Necker cube, here's the Necker cube. It's just a cube.
07:20
It's just a cube, and when you look at it and try to visualize it as a cube, you will see one of these two: one of these planes is the one that's forward. And depending on who I speak to, it's one or the other. It could be this one. The lower left square is what they see,
07:44
or it's the upper right square that they see as being in the front. But it's difficult to see both at once. We can see both, we can do that, but it's difficult.
08:01
The point here is that your brain makes a decision. Your unconscious is making a decision which you will consciously see, which of those cubes you will see. This is levels of consciousness. Here's something. When you look at a photograph like this one,
08:20
you may feel conscious, as I do, of every detail. You are not conscious of every detail. What you see in the photo and what you see in front of you is what is available to you to be focused on and made conscious. This is a subtle distinction between what is or is not conscious.
08:43
There's degrees of consciousness here. There's the upside down house. This is a house in Moscow. All the furniture inside is glued to the top, and it's a wonderful tourist attraction. I don't know if you see the stairs all the way on the right that you can go up.
09:01
Do you see the bicycle? Some of you do. Okay, so it's there where it should be if the house were righted. This is what you actually see. You have the sense when you are looking at a cable car that you see the cable car
09:20
and the stop sign and everything else. This is a picture of what you actually see when you are looking at the letter E on this cable car. You cannot really see everything else, but nevertheless, you have the sense it all looks clear, because you can actually make it clear if you want. So my view of consciousness is that consciousness
09:42
is a property of all properly organized computing systems, whether they are made of flesh and blood or metal and silicon. My thesis is that the architecture of these systems makes them conscious. And my job as a computer scientist is to use the experimental evidence
10:02
provided by neuroscientists, the neural correlates of consciousness, and the model provided by neuroscientists, to create a formal Turing-machine-like model of consciousness that provides at least reasonable explanations of consciousness, free will,
10:22
the various emotions of love, joy, pain and so on. So my conscious Turing machine is a simplification. I've given talks like this and had a neuroscientist come up and say, well, you don't include all of the memories that are possible in your model.
10:42
And then I had a graduate student of that person come tell her, no, he's trying to have a very simple model of consciousness, so he doesn't have to include it all. Anyway, I'm not trying to get an exact model of the brain. My goal is to understand consciousness.
11:01
For example, I would like to be able to say which animals or machines are conscious. Are dogs conscious? You think dogs are conscious? Me too, I think so too. Octopuses, insects, robots, not yet.
11:21
Why these and why not those? I would like to have enough understanding of consciousness that I can say, right now I know you are conscious because I know you are built the way I am and I'm conscious. But it's harder to say that about an octopus. I would like to be able to have an understanding of the term so I can say
11:40
whether or not the octopus is conscious. The fundamental, extraordinary idea for explaining consciousness is due to neuroscientist Bernard Baars. I say extraordinary because for many, many years I had tried to understand how one could possibly explain consciousness.
12:01
How could one possibly explain the agony of pain? I could not come up with anything, and apparently nobody else could, until Bernard Baars in 1987 came up with his model. Here's Baars' Theater of Consciousness. Bernard Baars describes conscious awareness through a theater analogy.
12:22
Consciousness is the activity of actors in a play performing on a stage. That stage is short-term memory, STM, short-term memory. So performing on a stage, their performance under observation by a huge audience
12:41
which is the long-term memory, sitting in the dark. You're not really sitting in the dark? Well, anyway, this is the model: the unconscious processors are all conscious of what's on the stage, and that's all they are conscious of.
13:03
Your consciousness is of just what is on the stage. So Bernard Baars describes conscious awareness through a theater analogy. Consciousness is the activity of actors in a play. Oh yes, I've gone through this. Each processor is knowledgeable of and concerned primarily with its own unique specialty.
13:23
Each is in close contact with others that have or might have relevant information. So they can talk to each other, and the processors are unconscious. So there's the model of the theater analogy. Here is the actual model that Baars came up with.
13:41
Now, Baars is a neuroscientist. Nevertheless, this is a wonderful model. It looks almost like the sort of stuff that computer scientists might talk about. You know, when we talk about Turing machines, we have a finite-state machine and a tape, and we could have input and output to it. And look at it, this is a model
14:00
that has all of that stuff in it and then he circles some part where he says that's where we are conscious. I would like to make this precise. I have not managed to make it precise but I'm working very hard at trying to make it precise. And to be able to say what accounts for consciousness.
14:20
So it's the architecture. At least what Baars is saying is that you have a tiny short-term memory, STM, seven plus or minus two chunks. Have you heard of George Miller's The Magical Number Seven, Plus or Minus Two? He was pointing out that if somebody gives you a seven-digit telephone number,
14:42
you can remember it long enough to walk over and write it down. But if it's 10 digits, 10 random digits, you will have a hard time, because it's more than the seven plus or minus two. And this George Miller, who coined The Magical Number Seven, Plus or Minus Two, described what a chunk is.
15:00
He said a chunk is a digit or a letter or a word. These are chunks; it could even be the whole alphabet. But he never precisely defined chunk. And one of the nice things that comes out of this model, as you will see, is what a chunk is. I can tell you what a chunk is.
15:21
It's a pointer. It's a pointer to a neuron, a pointer to a processor in long-term memory. Chunks are pointers. So there is this enormous long-term memory, the audience of unconscious processors.
15:42
The important point here is that the contents of short term memory are constantly broadcast to long term memory, the audience of unconscious processors and the long term memory processors negotiate among themselves what information to send to short term memory.
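The broadcast-and-negotiate cycle just described (a tiny short-term memory whose contents are broadcast to a large audience of unconscious processors, which then compete to send information back) can be sketched in code. This is a toy illustration, not Blum's formal model; the processor names, strength values, and winner-take-all rule are all invented for the example.

```python
STM_CAPACITY = 7  # Miller's "magical number seven, plus or minus two"

class Processor:
    """An unconscious long-term memory processor with one specialty."""
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # query -> (answer, strength)

    def receive_broadcast(self, query):
        # Respond only if this processor's specialty is relevant to the query.
        return self.knowledge.get(query)

def broadcast(stm, processors):
    """Broadcast every chunk in short-term memory to all processors,
    then let the strongest response win its way onto the stage."""
    responses = []
    for chunk in stm:
        for p in processors:
            r = p.receive_broadcast(chunk)
            if r is not None:
                answer, strength = r
                responses.append((strength, answer, p.name))
    if not responses:
        return None
    strength, answer, source = max(responses)
    # The winning answer displaces the oldest chunk in the bounded STM.
    if len(stm) >= STM_CAPACITY:
        stm.pop(0)
    stm.append(answer)
    return answer, source

# Two illustrative processors competing on the "what's her name?" example.
faces = Processor("letter-memory", {"what's her name?": ("it starts with A", 0.4)})
events = Processor("episodic-memory", {"what's her name?": ("Anna", 0.9)})

stm = ["what's her name?"]
print(broadcast(stm, [faces, events]))  # ('Anna', 'episodic-memory')
```

The point of the sketch is the architecture: the stage never sees how the processors work, only the answer that wins the negotiation.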
16:01
So now I'm gonna show you the beginnings of trying to make a formal model of this. There's a CPU at the top. It's like the finite state machine that Turing talks about. And I'm looking into that CPU and the state diagram. There are arguments why there are about 10
16:22
to the 80th states. This could not really be represented by a finite-state machine writing out all 10 to the 80th states, because 10 to the 80th is about the number of atoms in the universe. But it could be represented by a CPU with a relatively small amount of memory, 33 bytes of memory.
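The arithmetic behind that claim is easy to check: you cannot enumerate 10^80 states, but addressing one of them takes only log2(10^80) bits, a few dozen bytes. A quick check (the exact byte count depends on rounding, which is presumably why the talk quotes roughly 33 bytes):

```python
import math

# How much memory does a CPU need to index one of 10**80 states?
# 10**80 is roughly the number of atoms in the observable universe,
# far too many states to enumerate, yet tiny to *address*.
states = 10 ** 80
bits_needed = math.ceil(math.log2(states))
bytes_needed = math.ceil(bits_needed / 8)

print(bits_needed)   # 266 bits
print(bytes_needed)  # 34 bytes, i.e. the "33 bytes" of the talk up to rounding
```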
16:41
Here's the CPU. I'm expanding it now to say that inside the CPU is this short term memory. And the yellow color indicates the fact that that's what we are conscious of. And there's the external input. We are conscious to a certain extent of it. Some parts of the house, but not all of it.
17:00
The external output, we are conscious of the fact that I try to move my arm. I can do it. I try to move Tony Blair's arm. I seem to not be able to do it. Oh well, so there's what we are conscious of. And then way beneath that, in gray, the unconscious processors of long term memory.
17:23
And these unconscious processors, well we know about where many of them are. For example, faces. There's a part of your brain that is concerned solely with faces. What are these faces? It's called the fusiform face area. And if it's destroyed or damaged,
17:42
then the person doesn't recognize faces, doesn't even recognize themselves in the mirror. There are speech. You know there's a Broca speech area, a Wernicke speech area. There are two major speech areas. There are others.
18:02
Memory, fine control. That's the, what is it? It's in the back of the brain. God, why does that happen? It'll come up half an hour later. Fear, anger, memory.
18:22
We know that there's a part of the brain that's concerned with fear. It's called the amygdala. And if the amygdala is damaged, no fear. You might think that's wonderful, but it can be a little dangerous. There's a 32-year-old woman with three kids, and she is afraid of nothing. People do very strange things to her; others have to try to take care of her.
18:46
Desire, love, embarrassment, and it goes on. You can see it just keeps on going. And these processors can talk to each other. And so here comes the last part of it. The chunks, the seven plus or minus two chunks
19:01
in short-term memory broadcast their information to the long-term processors. So at the top, there's an actor representing yourself. You're there. You're asking, what's her name? Doesn't come till later, it's Anna.
19:20
But what's her name? And that question goes to all the processors. You don't know which processor has the information. Maybe one of the processors says, I know that the name begins with the letter A, and another processor has some other information: I know that she introduced me at the HLF,
19:42
and then Anna will come up. And it's just wonderful. The broadcast is very, very fast, bam. And only within the past two years have we found a part of the brain that actually
20:01
seems to have these fibers, these axons, that travel all around the brain. Enormous axons, they are unlike any others. This part of the brain is called the claustrum. And the neuroscientist Koch, in particular,
20:23
is sure that this is where our short-term memory, our conscious memory, is. And it is broadcasting out from there. And in fact, it's very interesting that when the neurosurgeon goes in and puts a little jolt of electricity
20:41
into regions close to the claustrum, the person who's on the operating table just stops. And when you let go of the electricity, she's back, alive again. And she remembers nothing of what happened while that jolt of electricity was there. So there is some evidence that the claustrum really
21:03
is where this broadcasting at least begins. Then there's this resolution and integration: information also goes back up from the processors. There, the processors get together.
21:20
The one that said, it's the letter A. The one that said, I saw her at HLF. And then others, and gradually the information goes up to the top. This is slower, this is integration of the information. And finally, some of it goes up to the stage. So how does this account for consciousness?
21:41
The conscious self, represented by the global workspace, which is Bernie Baars' term for it, the short-term memory, the stage, is not aware of how the unconscious processors do their work. It just doesn't know. The unconscious long-term memory processors respond to some, but not all,
22:02
questions, requests, commands. I can ask to move my hand and I'll get a response, but there are other things: move Tony Blair's hand, and I can't do it. So they respond to some questions, requests, commands, and can, when necessary, force themselves onto the stage.
22:22
Force themselves onto the stage. Among the long-term memory processors, there are processors responsible for motivation that can insist on getting through. There are processors responsible for pain: the insula, the anterior cingulate cortex,
22:41
which are responsible for pain. These processors can insist on getting through, and the one that recognizes danger is the amygdala, and so on. What does this mean, insist? So part of the explanation of what causes
23:01
the agony of pain is that no matter what you want to think of, when the processor that insists on getting up on the stage brings its pain up on the stage, that causes agony. First of all, all the processors will see that you're in pain, all of them. So every part of your brain is seeing
23:21
that you're in pain, and you don't have a choice. You can't think about something else. When you're in serious pain, you cannot think about anything else. The neurologist Oliver Sacks writes about taking a walk up in the Swiss Alps.
23:45
He saw a sign, beware of the bull. He saw no bull, so he went ahead, and somewhere deep in this little field, he suddenly saw the bull. And the bull started to rise up on its haunches,
24:02
and he got scared and turned around and ran as hard as he could. And it was only when he finally got out of the field that the bull was in that he discovered that he was in enormous pain, and he had in fact torn several ligaments in his leg.
24:23
If you've ever torn a ligament, you know that is very, very painful. It's painful enough that when you tear a ligament, it can make you nauseous. It's terrible. Yet he did not notice it until after he got out of that field. You can just see fear telling him, go.
24:42
Pain telling him, stop. And fear got up on the stage. It had the stage, and then pain took the stage when the fear went away. Okay, so the emotions of consciousness arise, in part at least, through processors forcing themselves onto the stage and not allowing you to think about
25:02
anything else you want to. That wonderful theorem you would like to prove, you cannot think about it while some other processor is forcing itself onto the stage. How much time do I have? Okay, well, nine o'clock, I should be able to tell. Okay, so which long-term memory processors
25:20
are necessary for consciousness and which not? While I've been working on this model, it's interesting to ask the question of which processors are necessary, at least for my model of consciousness, and which are not. And the ones that are in green here, I find necessary in order to explain consciousness,
25:46
just to have an explanation of consciousness. One example is short-term memory. I need to have short-term memory in this model in order to explain consciousness. Other things that I need in order to explain
26:00
is inner speech. I need to be able to plan what I am able to do, and I need to have some kind of inner speech for planning and for the actor on the stage that represents me, for it to be able to say what it's doing, what it wants to do, and so on. There needs to be inner speech. A dog could have that inner speech, it won't be English,
26:22
but it's dog speech, whatever that is. Self-awareness, it's very important for this, I haven't gone into it, that there be an actor representing yourself on the stage. That is self-awareness, it's the actor representing the conscious entity itself.
26:42
And then there is motivation. You must have motivation. And it's wonderful that there are some instances where people, because of damage to some portion
27:01
of their brain, lose motivation for something, and you can see what happens. Let me skip past the arguments to go to a loss of motivation. This is pictorial, it's beautiful. On the left you see a clock, a house, a flower. This particular patient was asked to make a copy of it.
27:22
This patient is suffering from something called left hemispatial neglect. Neglect is very interesting. This particular person sees the right side of their field of view; the left side they do not see, they do not even recognize its existence. You do not want these people to drive cars.
27:42
They do not see on the left side. You can give them a dish of food, and they'll eat what's on the right, and they're still hungry, but they will not eat what's on the left of their dish. They simply don't recognize that it's there. You can see this person is not just being blind
28:01
to what's on the left. It's that he is totally unmotivated; he is unconscious, that is the way it's been described, of what's on the left side of his field of view. So let me go back to this. So the green represents those,
28:21
I have never found an example of a conscious human who does not have inner speech, self-awareness, and motivation. If I did find one, then I would turn that green into red. The red refers to people who don't have an ability,
28:42
who have lost an ability. Look especially at number eight, the emotions. The emotions of fear, embarrassment, guilt, I know that those are in red because there are people who have no fear. And they are conscious. So that person who has no fear, but is conscious,
29:02
lets me put fear down there in red. And the same thing is true for embarrassment, guilt, anger, hate, love. So, okay, having done that, let me just mention some particularly good examples. There's a person known as H.M.
29:22
who has been studied for a very long time. We now know that his name was Henry Molaison, but while he was alive, he was H.M. And he was really well studied. He underwent surgery and had a part of his hippocampus cut away. And after that, the realization was that
29:40
he could no longer make any declarative memories. He could no longer form memories. He was conscious; he had inner speech, self-awareness, and motivation, but he could not make new memories. You walk into the room, you introduce yourself. The person who studied him day after day,
30:04
each day coming in and introducing herself anew. He simply did not remember her; he could not make that memory. But it turned out that there were some things he could do, some memories he could make.
30:21
And it's interesting because those are the procedural memories. The memory of how you ride a bicycle. You know how to ride a bicycle, but do you know how to, can you say how you do it? Can you even say how you make a left turn on a bicycle? Can you? How do you make a left turn?
30:42
I see you saying, you turn your wheel left? What do you think? No. If you turn your wheel to the left, your bicycle will lean to the right. The way in which you make a left turn is you actually first turn your wheel to the right.
31:03
Your bicycle is moving forward, and the momentum carries it forward. The bicycle wheel goes to the right, and because the bicycle is going forward, it falls to the left, and then you turn to the left. And look, you all know how to ride a bicycle, but, except for the few that raised their hands,
31:22
you don't know how you do it. Okay, this particular patient, H.M., could make procedural memories. He was taught how to type. She comes in, she asks him, can you type? No, I can't type.
31:42
I'll teach you to type. She puts him in front of a typewriter, puts a sheet of paper there with some text to copy, and then he finds himself two-finger typing the whole thing, and he's amazed at himself that he can type. He could make that memory; he just couldn't make the declarative memories,
32:00
which are explained in English. Emotions: fear, embarrassment, guilt. S.M. is the person I mentioned to you, the 32-year-old woman with three children who feels no fear because her amygdala is calcified. And there are so-called alexithymic people
32:21
who feel no emotion. Caleb is wonderful; he feels no emotion, and there are ways to measure the fact that he really feels no emotions, including the fact that when you put him in the fMRI, the parts of the insula that normally are responsible for emotion do not light up. He's gotten married, interestingly, but he agrees that he has never felt any love.
32:45
Let me just say something about computers. My sense is that the supercomputers we have are big enough to be conscious. They don't have the right architecture, but they are big enough, and I say this because the Titan supercomputer
33:01
has 1.7 times 10 to the 15 transistors. I view a transistor as about equal to a synapse, a bouton between one fiber and a neuron's body. The human brain has 1.5 times 10 to the 15 synapses. So this is the same order of magnitude.
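That scale comparison is easy to verify: taking the talk's stated assumption that one transistor is roughly one synapse, the two counts differ by only about 13 percent.

```python
# Comparing the Titan supercomputer's transistor count with the human
# brain's synapse count, using the figures quoted in the talk.
titan_transistors = 1.7e15
brain_synapses = 1.5e15

ratio = titan_transistors / brain_synapses
print(round(ratio, 2))  # 1.13: well within the same order of magnitude
```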
33:22
There are 10 to the 11th neurons. The cerebellum, that's what I was looking for earlier. The cerebellum in the back of the brain has more than 50% of the neurons in your brain, and yet some people are born without it. No cerebellum. And these people learn to walk only late
33:40
and they are somewhat clumsy, but they learn to walk, and they can get a job later, they can do things, but they can never ride a bicycle. The cerebellum is needed for that. Humans born with a missing left or right hemisphere are also conscious. Okay, so let me see if I can do this with you.
34:05
The slide after this is gonna be a little statue of the homunculus inside the man's brain. Anyway: how the model explains free will. So consider a chess position. You have a chess board in front of you,
34:21
you are white, it's your move, okay? On the theater stage, the actress that represents the chess player herself recognizes that she has a choice of possible moves. That's the actress on the stage that represents you. I can either make this move or that move. And this is free will. It's the fact that you recognize
34:42
that you have a choice of one move or the other, and you have that choice until you finally make a decision. Perhaps at some point the clock says time's up, and you do the best you can at that moment and make your move. You have free will until you make your move, and then the free will is gone.
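The chess account of free will above can be put in miniature code form. This is a hypothetical toy sketch, not Blum's actual CTM formalism; the `ChessActor` class and the move names are invented for illustration. The one point it captures is that "free will", on this account, is simply an open, not-yet-committed choice, which disappears at the moment of commitment.

```python
class ChessActor:
    """Toy model of the actress on the stage who represents the chess player."""

    def __init__(self, candidate_moves):
        self.candidates = list(candidate_moves)  # moves still under consideration
        self.chosen = None                       # no commitment yet

    @property
    def has_free_will(self):
        # Free will = the choice is still open: several live options, no commitment.
        return self.chosen is None and len(self.candidates) > 1

    def decide(self, move):
        assert move in self.candidates
        self.chosen = move  # commitment: the move is made, the free will is gone


actor = ChessActor(["e2e4", "d2d4", "g1f3"])
print(actor.has_free_will)  # True: several candidate moves, no decision yet
actor.decide("e2e4")
print(actor.has_free_will)  # False: the move has been made
```

The design choice worth noting is that nothing here is random: the account locates free will in the open interval before commitment, not in any nondeterminism, which matches the speaker's later remark that the model does not require randomness.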
35:00
To me, that's a very clean explanation of free will. If that's not convincing, please come up and tell me; I would like to know why not. In fact, Lenore's not convinced. So why not? I still haven't convinced her. Okay, the decision of which move to make may be important enough to merit a careful look
35:22
at the immediate consequences of each choice. I love this, because that's often the feeling we have: that there's a little person inside our brain that represents us. And this model of consciousness says yes, you do have such a person,
35:41
this actor that represents you. So the next question asks how to engineer a machine so that it really feels, not just simulates, emotions, for example pain. So here's an interesting, relevant thing.
36:01
This is showing you a part of the brain, the insula, which is almost like a brain inside of a brain. You see, you pull apart the brain at the right place and you find cortex underneath there, which is the part that's responsible for pain. I want to distinguish between insensitivity to pain and indifference to pain.
36:21
Insensitivity means something is damaged in your nerves: if you have leprosy, there's damage to the nerves and you don't feel pain. But indifference means you feel the pain. You know where it is, you know how intense it is, you feel it, but it's okay. It's sort of like being under laughing gas in the dentist's office.
36:41
You can still respond, and it's okay because the pain is just not bothering you. Pain asymbolia is a condition in which pain is experienced without unpleasantness. Pre-existing lesions of this green insula may abolish the aversive quality of painful stimuli,
37:00
this is a quote, while preserving the location and intensity aspects. Typically, patients report that they have pain but are not bothered by it. They recognize the sensation of pain but are immune to suffering it. So that's pain. We know how to make our machines simulate pain. There are wonderful YouTube videos of machines
37:23
that look to be in agony, but we know they are just simulations. I want to know how to actually make them feel it. And the basic point, the point I've looked for again and again, and if you can give me some idea, I would appreciate it,
37:41
the main point of how pain generates the agony is that it can force itself onto the stage. It will not let you do anything else. It insists that all processors in the brain pay attention.
38:00
Okay, so at the very bottom here, the main point is that simulated pain becomes real pain when the machine loses control and free will. You can argue with me, but that's the explanation I have, at least, for all of the emotions: fear, motivation, embarrassment, joy.
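The claim that pain "forces itself onto the stage" and makes all processors pay attention can be sketched as a priority broadcast. This is a hypothetical toy, not the CTM's actual competition mechanism; the `Stage` class, the priority values, and the message strings are all invented for illustration. The only point is that the highest-priority chunk wins the stage and is broadcast to every processor, crowding everything else out.

```python
import heapq


class Stage:
    """Toy global-workspace stage: one winner, broadcast to all processors."""

    def __init__(self, processors):
        self.processors = processors  # each processor is modeled as a list (its inbox)
        self.queue = []               # max-heap via negated priority

    def submit(self, priority, content):
        heapq.heappush(self.queue, (-priority, content))

    def broadcast(self):
        # The highest-priority chunk wins the stage and goes to every processor.
        _, content = heapq.heappop(self.queue)
        for inbox in self.processors:
            inbox.append(content)
        return content


procs = [[] for _ in range(3)]          # three toy processors
stage = Stage(procs)
stage.submit(2, "plan dinner")          # ordinary, low-priority content
stage.submit(10, "PAIN: left hand")     # pain outcompetes everything else
winner = stage.broadcast()
print(winner)                           # PAIN: left hand
```

On this sketch, "losing control" is the situation where one signal's priority is so high that no other content can ever win the stage, which is the speaker's proposed boundary between simulated and real pain.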
38:26
And these are the neuroscientists who have done this wonderful work so far. I'm reading them as fast as I can, and I would like to be able to build the model, the Conscious Turing Machine model.
38:41
We'll see. Thank you. Thank you very much for your very interesting and nice talk. We actually have time for one or two short questions
39:00
if there are questions from the audience. Oh great, I appreciate that. Please, there's one back there, yes. Madhu: Hey Manuel, thanks. I'm very glad you didn't have to spend half an hour searching for my name. Can a conscious Turing machine be programmable? Well, we are programmable.
39:24
We are conscious, we are programmable. I hope that answers it. I do think that machines can be made conscious, and I do think that once they're made conscious, they will have this wonderful thing that we can't do: they can take a program to play chess and just hand it over to the next machine,
39:42
and it will be able to play chess. Wouldn't you love to have that ability? I don't know that that answers it, but. There was one other question there. Same row, no, the girl; I wanted to take a young one first. Oh yes, thank you. First, the row before, same row as before.
40:01
Do you think that all these feelings should be biologically plausible, or should we first try for accuracy of modeling? You don't, I'm so sorry that I am deaf. The fear, pain, joy, do they,
40:23
do you think? It won't help, it won't help. Lenore, no. Somebody else, say what she said. Go ahead, go ahead, try again. Are these values, do you think that they should be
40:40
biologically possible to simulate in the brain? Or do you propose that we should make them more accurate, not necessarily biologically possible? What is she saying, I'm sorry? Can these, yes, I think that,
41:02
well, at least I'm claiming that a machine built out of silicon and metal can be designed, the architecture can be such, that it will feel these emotions. I don't think it has to be flesh and blood.
41:21
Okay. And in fact the model even says it doesn't have to have randomness. There's nothing in what I said that requires randomness to explain free will. Thank you. Yeah, so one more, and then the last one here. Thank you very much for this interesting presentation.
41:43
I think one problem is with the definition of consciousness, and also of free will, because I'm missing that in your lecture; I think it's not possible to define so easily. If you take the German translation of consciousness and translate it back,
42:02
it's a very complicated term. And also free will: if you define it immediately by taking a machine and explaining it there, you will not be, this is very important, because we have two kinds. I hear you, I hear you, good, good.
42:21
So let me just say something about what computer scientists do. You may not like this either, but computer scientists often will take a term and define it. In my model, what you see in short-term memory is defined to be conscious.
42:41
Now you may not like that, but it works for computer science. You know, we have in computer science the whole notion of zero-knowledge proofs. Now these zero-knowledge proofs give away a lot of information, but we define it in a particular way, and now everybody's using the term and we can prove theorems about it.
43:02
I think, Jäger, let's leave it there. I only want to mention, we have such a huge brain program here in Heidelberg, an international brain project that also tried to solve the problem, to simulate the process. They have reduced it to neural computers. There I agree.
43:21
Okay, so last question. Hi, Fatma Deniz is my name. You mentioned Tononi, and Tononi has this beautiful model of consciousness called integrated information theory that he developed over the course of 20 years or so. So I wonder how this,
43:43
or if you had ever thought about how this could be incorporated into a CTM. How this could? If that could be incorporated, or if you had any thoughts about it. How to incorporate what he's doing into this model? Yeah, integrated information theory. I have to understand the information theory first.
44:00
I know that he has information theory really down cold, and even though I did take a course from the founder of information theory, I still don't understand it well enough to go into the detail. Hopefully I'll be able to do it; I can't right now.
44:21
Okay, so I would really defer all the other questions to the coffee break; I'm sure many people will approach you with more questions. And we would like to change now to the second speaker this morning. So thank you very much again.