The mess we’re in
Formal Metadata

Title: The mess we're in
Title of Series: NDC Oslo 2014 (talk 58 of 170)
Number of Parts: 170
Author: Armstrong, Joe
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/50608 (DOI)
Language: English
Transcript (English, auto-generated)
00:00
No hands up. Yeah. About two and a half years before I was born, the first computer ran at Manchester University. It had, I think, 18 instructions, and it made use of the Williams tube, which could store 1,024 bits on a cathode ray tube for up to an hour.
00:25
And that was the first program that ever ran on a stored-program computer. It was in 1948. Of course, I didn't really know about that when I was born. But I discovered computing when I was a bit older, about 16 or 17, at school.
00:44
And I wanted to learn programming. So we could choose between Fortran, Assembler, or COBOL. But Assembler and COBOL weren't options, because nobody knew how to program in them. So I had the choice of only Fortran.
01:02
And it had a turnaround time of three weeks once you'd written a program. Now, the situation's a bit better. Well, perhaps it's a bit better. So you guys, when you're starting out, instead of having a choice between three different languages, have a choice between 2,500 different languages.
01:21
So there's much more to choose from. On the other hand, it's not easy to make that choice, because there's so many bloody languages you don't know which language to choose. So I think, actually, we got into a slight mess. And there went my slides. Hello. What happened to the slides? Is this me or you?
01:43
Oh, no? There we go. They're back again. Isn't it marvelous? Right. Was that the second slide? That was the second slide, yes. I started programming when I was about 17. And I've always been kind of interested.
02:01
Whoops, that's the third slide. Why are they moving forward automatically? I didn't, hey, stop it. Go away, look, right, now don't do this to me. What? There's a little, which is that one? That one there. Right, so now it'll step forward. And if I get double click outside, that'll do it.
02:21
Very good. Now it's not going to move forward. Is that right? Keep our fingers crossed, right. So I'm going to, in this lecture, talk about three things, actually. I think we're in a bit of a mess, actually. So I'm going to go into why we're in a mess and tell you what the symptoms of the mess are
02:40
and the causes of the mess. That's about the first third of the lecture. And then there's about a third of the lecture. I used to be a physicist, actually. So I'm going to talk about the physical limits of computation and what that means for computer programs and so on. And then since we're in this mess
03:01
and I've contributed to making this mess, I've got some vague ideas about how we might conceivably get out of the mess. So I'll talk about that as well. So the start point for this, I think I'm going to go back to about 1985 because that's when I was a young,
03:22
sort of... how old was I in '85? I mean, I was a young 35-year-old with a gleam in my eye and thought, oh, I'll invent a new programming language which the whole world will use. And to help me, I had these wonderful things here. So this is a computer from 1985. For comparison, it's a supercomputer
03:42
from a couple of years ago. This little fellow had 256 kilobytes of RAM. You could stick cards in it so you could get an amazing three megabytes. Oh, you filled it full of cards. You get three megabytes of memory. Isn't that a lot? You get eight gigabytes on a stick now.
04:01
It had this blindingly fast eight-megahertz clock, and it had a 20-megabyte disk, or if you had a lot of money, you could buy a 40-megabyte one. So it wasn't really very good for downloading movies, because movies hadn't, digital movies hadn't been invented. They're like 800 megabytes anyway, so it wouldn't fit.
04:21
What's missing from that list? That's the spec of a 1985 machine. What's conspicuous by its absence? What? A mouse. No communications. This thing couldn't talk to the Internet at 100 megabits per second. It had no communications, right?
04:42
That's a big thing that's changed. Now if you buy a computer, it's got a 100-megabit ethernet connector or something. It's got a Wi-Fi. You've got LTE and 4G and 3G and stuff, and you can connect to the Internet at hundreds of megabits per second, okay? Couldn't do that then.
05:01
Distributed computing is something that's only emerged from about 1990; we're only 10 or 15 years into this period of distributed computing. That's radically changed what's going on. Okay, so that was the start point, and now we've got these supercomputers. So this guy, let me see.
05:21
Well, a typical laptop today, this little thing, has 8 gigabytes of memory. That's 32,000 times the memory of the 1985 machine. Maybe it's a new laptop: a quad core running at 2.5 gigahertz, so it's 1,000 times faster, and it's got a 250-gigabyte solid-state disk
05:41
or something like that, so it's maybe 1,000 times faster. So the machine that booted in 120 seconds in 1985 should boot in 120 milliseconds today. Does it? Does your machine boot in 120 milliseconds? No, it doesn't. So what the hell went wrong? What have we done as an industry?
06:00
What have we got wrong? So I'm gonna look at some of the things we did that were wrong. Right, so in the last 40 years, we've written a ton of code, millions of lines of code, but we have created an industry that will take up trillions of man-hours to fix it.
06:21
Right, so you guys are gonna feel, I should retire and die, and you guys can live on and fix all this complete mess we've made. Right, so welcome. Welcome to programming. Now, I'm gonna look at some of the things they said about programming languages when I started. And compare them to what we say today, and we'll see exactly the same thing.
06:41
I mean, 1985? We're all gonna program in PL/I and Ada. That's the future. No, no, just forget about anything else. How many people program in PL/I here? Right. Ada? Oh, oh, sorry. Well done. So you'll believe in this 1985 stuff,
07:03
and you're not gonna change, right? So I take this with a pinch of salt. We're all gonna be programming in C-plus-plus-Tran for the next million years. You know, it's a load of rubbish, right? Well, when the hardware doesn't change, the programming languages don't change.
07:20
So if the hardware is basically the same, the programming language will follow sort of some S curve where you find a maximum and a most efficient way of programming that hardware, and then when the hardware changes, you'll suddenly start changing the programming models. So in fact, we've seen two changes in architectures in the last 15 years or so.
07:42
The first thing is ubiquitous computing, I mean, the internet, and being permanently connected to the internet at high bandwidths. That started in about 1990. By 2000, it's a fact. The significant factor there is, you know, this permanent connection wherever you go. It's in your pocket. It's through LTE or something like that,
08:01
and that means you need to know about distributed programming if you want to build interesting applications. All the interesting applications are distributed. The second thing that's happened is the multicore thing that happened in 2004. That was inevitable. It was predicted around about 1998 or something like that. Chips got bigger and bigger and bigger,
08:22
and the clock speed got faster and faster and faster, but then a physical limitation kicked in. You couldn't get a signal from one side of the chip to the other within a clock cycle, because the speed of light is finite. It's not infinite, and so synchronous chip design vanished, and you put multiple cores on with their own clocks,
08:40
and that's because of the speed of light. You could also drop the voltage, so you can run at lower power because of that. So in the future, we're going to see, I mean, I'm experimenting with a 64-core low-power processor at the moment. We should see 1024-core low-power processors coming onto the market in 2015,
09:02
and they're going to influence how we do things. So we've got a change in the hardware, and that will reflect in the change in the programming language. No programming languages have been explicitly designed with distribution, multicore, and low power in mind. I haven't seen any languages that have been designed with that in mind. We're working on it, so we'll see what happens.
09:23
Right. And when those change, the languages will change. Right, so what are the problems? This lecture, what are the problems? What do the laws of physics have to say about computation, and can we get out of this mess we've got ourselves into? Right, so what are the problems?
09:41
Here's a few problems. Legacy code, complexity, all this kind of stuff. I'm going to talk about each of those in turn. So legacy code. This is great. Legacy code is when all the programmers who wrote the stuff are dead. Right, that's legacy code. And it's a pain in the ass. There's no specific...
10:00
Have you ever seen a program that had a specification? Well, I've seen a lot of programs, and I've seen a lot of specifications, and the two normally don't have anything to do with each other. They're kind of a vague description of what the program might do on a good day. They're written in archaic languages, which nobody understands. Right, that's great. You want to be a maintenance programmer?
10:21
Here's half a million lines of COBOL. Wow, cool. That's just what I wanted to do. You've got to change one line of it, but which line? That's the problem. Nobody understands how this stuff works, and it works. And then we've got business value. We've got managers who say, well, this legacy code, this million lines of stuff written in CobolTran-4, has got commercial value.
10:43
It's got business value, so we can't touch it. And to rewrite it, you know, because writing it the first time took a million man-hours or something, rewriting it is going to take 10 million man-hours. Of course, nobody knows what it does, so the rewrite might not be right. So management thinks that modifying legacy code
11:02
is cheaper than a total rewrite. They are nuts. They're complete lunatics. Sometimes it is, if that legacy code is in good order, but often it's not. So what do you do with legacy code? Don't touch it. Put it in a virtual... Well, this is what you do. You put the legacy code in a virtual machine.
11:20
Don't mess with it. So you've got all these little black boxes running legacy code, which nobody knows what it does, and it's just sitting there, festering like a nasty wart or something that will come and hit you one day. Ugh, horrible stuff. But it's created a lot of job opportunities, so it's pretty good.
11:41
Complexity. Complexity, right. So I have to sort of remind myself of a certain number. You're gonna see a lot of numbers in this talk. So the mass of the Earth... I used to be a physicist. The mass of the Earth is 6 times 10 to the 27 grams, and the Earth's got about 10 to the 50 atoms in it.
12:04
Okay? Just remember that number. 10 to the 50 is a nice number. So we do a back-of-envelope calculation. Here's my envelope, back-of-envelope. So 10 to the 50 is about 2 to the 167. If we take that 167 and divide it by 32 and take the ceiling of that,
12:21
that's 6. What does that say? That says 6 32-bit integers have, what's that, 192 bits. But the number of atoms on the planet is 2 to the 167. Okay? So what does that say? That says a C program
12:40
with 6 integers has more possible states than the number of atoms on the planet. And don't ask about JavaScript. What about JavaScript? Well, 3 variables, actually, because they're double precision. So what does that mean? What does that mean? Well, it means you need more than 6 unit tests.
13:02
You need more than 6 unit tests if you've got 6 variables, and more than 3 if they're JavaScript variables. Right? Just as an aside, my machine, little machine here, has got a 250-gigabyte solid-state disk in it. That means the number of possible states it can be in
13:23
is 2 to the power of 250 billion times 8. Okay? The total number of atoms, not in the planet, but in the universe, the total number of atoms in the universe is about 2 to the 260. Right? That means the complexity,
13:40
the number of states that my machine can be in divided by the number of atoms in the universe is 2 to the 7 billion. Right? So I'd need 2 to the 7 billion universes filled with computers, and I might just find one that's in the same state as my computer. Right.
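[Editor's note: a minimal Python sketch of the back-of-envelope above, using the talk's own rough figure of 10 to the 50 atoms on Earth.]

    import math

    atoms_on_earth = 10 ** 50
    bits = math.log2(atoms_on_earth)   # ~166.1, i.e. about 2^167 atoms
    print(math.ceil(bits / 32))        # -> 6: six 32-bit ints span 192 bits,
                                       #    more states than atoms on the planet
    print(math.ceil(bits / 64))        # -> 3: or three 64-bit doubles (JavaScript)

    disk_bits = 250 * 10 ** 9 * 8      # a 250 GB solid-state disk
    print(disk_bits)                   # 2e12 bits, so 2^(2*10^12) possible states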
14:01
So this is why, you know, I do things on my computer, and it doesn't work. So I Google. I say, blah-blah-blah didn't work. And I find a web page where a person says, oh, I had exactly the same problem as you. Do this.
14:21
And there's lots of mails after: oh, gee, thanks, that's absolutely great, because I had exactly the same problem. And I type this stuff in, and guess what? It doesn't work. Shit. So I Google again, and I find somebody else who had exactly the same problem, and I do it, and it doesn't work. Why is that? Well, because his machine, when he did these commands
14:43
or she did these commands, is not the same as my machine. Its state is different. In fact, the chance that it's the same is one in two to the 250-billion-times-eight, you know, right? And I would need to be very lucky and be in one of these two-to-the-seven-giga universes
15:02
where the guy has just happened to post and his machine was in the same state as mine, which is not gonna happen. And it happens surprisingly early. Last week, no, week before last at work, my colleague and I both got brand-new Apple Retina laptop thingamajigs, and they were factory new, and they were installed,
15:21
and we were gonna run the same software. So we thought, well, we'll do the install together so that our machines are in sync. And after four hours, I typed some commands on my machine, and an install worked, and he typed the same commands on his machine, and the install failed, right? And up to that, we'd been the same. And then what do we do? Well, he started Googling, and oh, how the hell?
15:41
Oh, I did it, and it didn't work, try doing that. So they diverged after four hours. Of course, they weren't the same to start with, but you kind of think they're the same, don't you? It's not true. It's not true. So what's the answer to all of this? Well, scary math to the rescue. The trouble is, scary math hasn't got there yet.
16:00
We don't have scary math to help us yet, okay? Because scary math is trying to prove things about the states of these programs, so the only way it can do it is prune the search space down to something that's manageable and then look at that. So you can't really prove things about programs with more than two or three variables in, so it's pretty tricky. Well, sometimes, if they're specially constructed, you can.
16:21
So we can't really look to scary math to help us yet. It's coming along. It takes a long time. It takes like 100 years for somebody to get a good idea and for it to take hold. I mean, look at the lambda calculus. Church invented the lambda calculus in 1930, the typed lambda calculus in 1936, and I think it got into Java this year, didn't it, or last year or something. I mean, it takes rather a long time,
16:41
and then nobody's had any ideas since Church, so there you go. Failures. What about failures? Well, you know, stuff fails. Deal with it. Stuff does actually fail. Computers fail. So there's only one way to handle failures: do it on another computer. Joe's first theorem:
17:01
In order to handle failures, you need two computers. Right? Because if the whole computer crashes, you're screwed. Sorry, at least two. If you want to tolerate the failure of 100 computers, you need 101, right? The probability of them all failing at the same time is one in two to the power of 101, right? So it's low, but they still might all fail.
17:22
That means that you need to understand distributed computing because you've got two computers. You need to understand parallel computing because they're running at the same time, and you need to understand concurrent programming because the programs in them are running at the same time. So if you think you can do error handling or scalability without understanding these three things, it's not going to work.
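[Editor's note: the arithmetic behind "Joe's first theorem" in Python, assuming, generously, that each machine fails independently with probability one half.]

    # probability that all 101 machines are down at the same moment
    print(0.5 ** 101)   # ~3.94e-31: low, but never zero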
17:42
Now, I'm not going to talk about that. I've written three books about that and made a programming language to sort of embody some of those ideas, and if you want to read more about that, you can read some of those. Right. Because systems break, and because they're becoming more complicated, we have to make them so that they will self-configure and repair themselves and evolve with time.
18:01
I think they'll be like us. They'll gradually die, and handling dying chips is a very interesting problem. I'm looking at that: multicore, where the cores start failing. When we produce 1,000-core multicore chips, half a dozen of the cores won't work, and then as we run them, more of the cores will start failing, and we need to live with that.
18:23
We don't want to throw them away because bits of them don't work. Languages. Language doesn't matter, they say. Well, they should have told the Romans. Do you do arithmetic like this? No. It's even worse. You see, Romans didn't have a zero.
18:42
And a lot of societies didn't have negative numbers. Do you know how old negative numbers are? They're not very old. You know, they're about... I can't remember the date. 1700 and something. We need notations and things that are familiar and easy to work with. And if you're kind of in this mindset of programming with numerals like this
19:02
and somebody else is using Arabic numerals, you won't understand each other. You say, well, it's a lot easier using Arabic. No, no, no, no, no. I'm perfectly happy with this. I mean, I'm not talking about Java and Haskell and things like that. I'm talking about Romans and things, but there are analogies to be made there.
19:21
So in 1985, everybody knew... Oh, there's a little star there. Not absolutely everybody. Everybody knew the Bourne shell, make, and C, right? If they were the sort of scientific-technical programmer type.
19:41
That meant they could talk to each other. Programmers could talk to each other. Well, now we can't talk to each other. We don't have these common languages. Half of the... You know, when you go to these big conferences, there's these dot net people. Any dot net people here?
20:02
Right. Any JVM people here? Right. Oh, golly, you're a... I've been to other conferences. I say, any JVM people? Oh, the hands go up. And you say, any dot net people here? No, nobody. And I'm a sort of neither a dot net or a JVM person, so, you know, I don't really belong here. I hope you won't eat me. It's like talking to the lion,
20:21
you know, being thrown to the lions. 99% of the audience were .NET people. Oh, my God. What's .NET? You think I'm joking. I haven't actually got a Windows machine. No, none of my friends have got a Windows machine. I don't know what it is. Well, I used one... Well, that old thing, that had Windows on it. Yeah. And so now, what do we program in for the JVM and .NET?
20:41
We program in Ruby, Doobie, Fortran, and by the way, we can't talk to each other. Right. So we can't sort of send programs to each other, because we can't understand them. So... Well, when I learned to program, I could choose... I said this earlier, I could choose between three languages. Now? The first list I looked at, on Wikipedia,
21:00
and it said there were 776 languages. Looked somewhere else. There were 2,500, so... There's a lot anyway. And then we have build tools. Well, once upon a time, I said, everybody talked Make. Make's wonderful. I love Make. And now there's Ant and Grunt and Make and Rake and Maven and Jake and Bacon, BitBake and Fabric and Paver and Shovel and Dick Grovel and God knows what, and what the hell these things are.
21:20
and Jake and Bacon, BitBake and Fabric and Pavar and Shovel and Dick Grovel and God knows what and what the hell these things are. So I was researching this talk, and I looked up on stack overflow, and then somebody asked a question. Is there a Rake equivalent in Python? Yeah, well, there's Pavar, Invoke, Shovel, WAF,
21:42
and, oh, well, that's really good. You know, that's what I like. So a couple of months ago, I was writing this... I actually do some programming for my living at Ericsson, and I was writing this program in Erlang, and it was going to go into a product, and I wanted to put it on the target. And so the guy said, well, we've got this script that does it.
22:02
Just log in on this development system and type make, you know, because it was a makefile, actually. Well, make invoked a Bake file... no, a BitBake file, a BitBake recipe. And it was taking a while. I left it, and I went... You know, 18 hours later, it had downloaded 46,000 files,
22:23
which included the entire source code of the Linux kernel and of GCC, compiled the whole bloody lot up, and then built a single image, and then it had cryptographically signed it and done all this stuff. And then I could take my three-module program and put it on the target. Well, of course, Erlang was designed to be object-code compatible, so you could just move the files over anyway.
22:41
No, that's not possible. We have to use BitGrunt or, you know, and then... I know, you'll say you should have used Grunt, and it would have been easy, but I don't know how to build Grunt files. I'm sorry, you know. This is not good, actually. Right. So without Google and Stack Overflow, programming would be impossible.
23:00
I was showing these slides to somebody two days ago, and he said, well, when the Internet stops, I can't program. Well, I didn't think that was really very good. And I don't think "oh, shit, it doesn't work, I'll Google it" is a very good programming paradigm. In fact, I think programming is going to stop. You see, if you plot...
23:21
If I reckon up the amount of time I spend fixing stuff that's broken that shouldn't be broken, compared to doing real work where I'm just doing stuff, the fixing of stuff that's broken that shouldn't be broken takes 30% to 40% of my time, and that is increasing with time. So I think in 10 years' time, nobody will be able to program at all...
23:41
You know, everything you use will be broken in 10 years' time, and you won't be able to program. You'll spend all your life in Google searching for, Well, why doesn't my fucking program work because I've done this, and it's supposed to work? Then it's not going to happen. Right. Good. So efficiency... I love efficiency.
24:02
You know, programming language designers, they don't care about efficiency. All the programming... You know, Dennis Ritchie and everybody who's ever invented programming says, Don't think about efficiency. Think about correctness. Why is that? Because programming language designers get blamed if a programmer writes a program and it crashes and kills somebody. You see, it's their fault. It's not my fault.
24:20
You shouldn't have put it in your bloody language, right? So I'm more concerned about correctness than efficiency. And so we have this sort of dichotomy between efficiency and clarity. So to make something clearer, you add a layer of abstraction, and to make something more efficient, you remove a layer of abstraction. So you're always kind of dicing around
24:42
between these two alternatives. And, of course, in the last 30 years, we have systematically chosen efficiency over clarity. Well, that's great. That's fantastic because now we have machines that are so frigging fast, you can throw anything at them and they do it instantaneously if the code is correct. Right.
25:00
So about 20 years ago, I told... I was telling my bosses at Ericsson, I was saying, Look, this is stupid. What we have to do is write our software as clearly as possible, and as few lines of code with as close a correspondence as we can make it to the specification and to the mathematics.
25:22
And then we'll be in a very good position in the future, because as processors get faster, we'll just be able to take this stuff, and one day it will be fast enough. So it's really simple. If you've got a program you want to optimize, to make it a thousand times faster, just wait 10 years. It's really simple. Or you go a million times faster
25:40
by waiting 20 years. You just wait 10 years, it goes a thousand times faster. Who's in such a bloody hurry? Why do we want it to be quick tomorrow? We're making this legacy code now that if we don't fry the planet and if we don't have a nuclear war and if we don't have a pandemic and if the nanorobots don't take over and if we're not hit by an asteroid
26:00
and if there's not a supervolcano, it's going to last for a bloody long time. And you look back at this archaeological layer of crap that was done in the beginning of computing. Why didn't they just write it efficiently? Sorry, clearly. And so what we see in the beginning of a technology is companies compete by being incompatible with each other,
26:25
by being deliberately incompatible with each other. So the early browser wars were a Microsoft browser was deliberately incompatible with a Netscape browser. After 10 years of that, people say, hey, that's bloody stupid.
26:40
Let's try and make them. Let's not compete over browsers. Let's try and make them compatible with each other. So now we're into this, well, kind of the browsers are largely, you know, they all use WebKit or something like that. So they are compatible. So the wars are somewhere else. So we've got some decent functional programming. We've got Haskell there and things like that. And then, okay, so let's not use them.
27:01
You know, Microsoft, oh, there's some really good ideas in functional programming. I know what we'll do. We won't use them. We'll invent our own language, F sharp. That's nice. That's great. And then Apple comes along. Oh, there's some really good ideas in functional programming. I know what we'll do. Let's not use them. Let's invent our own one.
27:20
Why is that? Why is Microsoft making F sharp? Why is Apple making Swift? To lock you in forever so that we can't talk to each other. Now, that will last for 10 to 15 years, and then everybody will say, oh, this is a complete mess because we've got all this legacy code which can't talk to each other. We've got the .NET world. We've got the JVM world. And you guys can't talk to each other.
27:40
But it becomes part of that legacy code which has to be maintained for thousands of years into the future. Right? It's not a good idea. Contribute to the stuff. You know, now is not the time to be inventing new functional languages because that was done a while back. That was done in the 80s. Okay, so if Apple wanted to do something, make a Cocoa bridge to Haskell or to Erlang so that we can really use Cocoa
28:01
and the nice, gooey stuff. Don't make your own thing because you don't have all the libraries. You don't have all the stuff going. Right, names. How are we doing for time? Golly, I have to speed up. Names. Names. We name things. Names are imprecise.
28:22
Terribly difficult deciding on names. Unique names. You know, I'm called Joe, but any other Joes? Okay, so Joe's unique in this namespace, but if it's a bigger namespace, there's lots of Joes. I'll talk more about that later.
28:40
It's the root of all evil. Right, now to change the subject. This is light relief, because I'm an ex-physicist. What do the laws of physics have to say about computation? Any other physicists? Oh, good, thank you. Right, so my interest in this was piqued when I was reading the Erlang manual pages,
29:01
and they said the returned reference will reoccur. Make a unique reference, it said; the returned reference will reoccur after approximately 2 to the 82 calls, and therefore it is unique enough for practical purposes. I just wondered what this sentence meant.
29:21
Well, 10 to the 82, sorry, 2 to the 82, that's 10 to the 25. So sometimes I use 2 to the, sometimes I use 10 to the. What do these numbers mean? Right, so in this physics bit, I'm going to talk about causality, simultaneity, entropy, speed of computation, storage capacity. So causality to a physicist, a cause must precede an event.
29:42
We communicate through messages: rays of light, sound. You hear what I say a couple of milliseconds after I've said it. You see my arms waving about before you hear what I say. Your brain sort of sorts that out. You think it's simultaneous.
30:01
Information travels at or less than the speed of light. You don't know how something is now. You knew how it was the last time you talked to it. So simultaneity, this is basic sort of physics. We've got two stars, A and B, in different parts of the universe.
30:24
And they explode at, I'll say, the same time. They explode at the same time. We've got three observers, and they look at that. Well, this bloke over here, he's nearer to A than B, so the light from A gets to him first. So he says A exploded before B.
30:40
And the guy to the far right, he says, well, B exploded before A. And the guy who's in between them, he says, well, they happened at the same time. So physicists gave up this idea of simultaneity a long time ago. Prior to, you know, I mean, as soon as you realize that light traveled at a finite speed, this whole notion of simultaneity vanished from physics.
31:02
You're not allowed to talk about simultaneity. So if you put it into sort of computer terms, try and replicate data. Suppose you've got a server A where you store something and a server B where you keep a replica. If you update X on A, you send X equals 10 to B, and it replicates it, and it sends an ACK back to A.
31:23
Trouble is that B doesn't know that A knows that the value is replicated because it doesn't know that the ACK signal's got there or not. They can't make any assumptions about that, right? So if B wished to know that A was replicated, i.e. that it had the same value,
31:41
because it doesn't know that the ACK signal's got back there, you'd have to send an ACK signal back from A to B, but A doesn't know that the ACK signal's got back to B. This is the two generals' problem, and it's a very simple impossibility proof: when you've got data in two different places, there is no physical way to determine that they're the same.
32:00
You're violating the laws of physics if you say they're the same. What does that mean? It means that two-phase commit doesn't work. It actually means that three-phase commit doesn't work. Well, it actually means that infinite phase commit doesn't work. So you can't have data in two places and know that it's the same. It's the law of physics, which we break a lot.
32:23
It's not a good idea breaking the laws of physics. They're there for a good reason. Entropy. Entropy always increases. Entropy is the amount of disorder in a system. If you take a load of dice and you chuck them up in the air, they're not all going to land with one up or with six.
32:42
They're going to become more and more disordered. This is the second law of thermodynamics. Entropy always increases. And in software terms, it means systems become more disordered all the time as you build them, and this is the law of physics. Speed of computation. This is quite fun.
33:01
Who are the geezers here? That's a little quiz. Who's this guy? Sorry? Yes, that's right. Yes, that's Albert Einstein. And this is Hans-Joachim Bremermann. And who's this guy? No?
33:22
Yes, Planck. Max Planck. So Max Planck is kind of a father of quantum mechanics. One of his relationships is E equals h nu. E is the energy of a black-body radiator. A black-body radiator can radiate at different frequencies, and the energy at a particular frequency is Planck's constant times nu.
33:44
Okay, so that's one of Planck's relationships. Mr. Einstein said, well, E equals mc squared. Everybody knows E equals mc squared. That's the amount of energy in a given mass. So Mr. or Dr. or Professor Bremermann said, well, if E equals h nu
34:00
and E equals mc squared, then h nu is mc squared. Just knock out the E. And nu is m times c squared over h. So c squared over h is called the Bremermann limit. And it's 1.36 times 10 to the 50 hertz per kilogram. Okay, so that says that a one-kilogram computer, a one-kilogram, not a one-kilogram computer,
34:22
a one-kilogram of anything, doesn't matter what it is, a one-kilogram of anything can oscillate at a max of 10 to the 50 cycles per second. That's an upper limit on how fast an oscillator can run. That's a clock frequency of a kilogram of stuff. They can't go faster than that due to quantum mechanics.
34:43
Right. So not only is there the Bremermann limit, there's the Margolus-Levitin theorem, the Bekenstein bound and the Landauer limit. You're familiar with these. So I'll just go through them very quickly, because they tell us important things about computing. So the Bremermann limit, that says that the max clock rate
35:02
of a one-kilogram computer is 1.36 times 10 to the 50 hertz. The Margolus-Levitin theorem, that relates computation to energy: we can do 10 to the 33 operations per second per joule. And the Bekenstein bound, that tells us how tightly we can pack information.
35:21
It's, what is it, 10 to the 43 bits per kilogram per meter: you take the mass of the thing times the radius of the sphere. That's the maximum. And then the Landauer limit, the Landauer limit, the minimum energy to change one bit of information. And that's 2.85 zeptojoules at 25 degrees.
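[Editor's note: a back-of-envelope check of two of these limits in Python, using standard constants; the talk rounds the figures, and exact prefactors, e.g. the Margolus-Levitin one, are omitted here.]

    import math

    h  = 6.626e-34    # Planck's constant, J*s
    c  = 2.998e8      # speed of light, m/s
    kB = 1.381e-23    # Boltzmann's constant, J/K

    bremermann = c ** 2 / h                  # max "clock rate" of a kilogram of anything
    landauer   = kB * 298.15 * math.log(2)   # min energy to flip one bit at 25 C
    print(f"{bremermann:.3g} Hz/kg")         # -> 1.36e+50
    print(f"{landauer:.3g} J")               # -> 2.85e-21, i.e. 2.85 zeptojoules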
35:43
I didn't know what a zeptojoule was. I had to look up what a zeptojoule was. It's 10 to the minus 21 of a joule. There's another unit. I just invented it. It's FBDC. It's a Facebook data center, and that's 28 megawatts. So a Facebook data center is 28 times 10 to the 6 joules per second,
36:04
and the Landauer limit is 10 to the minus 21 of a joule. So you see this 27 orders of magnitude difference. Well, of course, the Facebook data center is not just doing a one-bit operation. It's doing lots of bit operations, but you get some idea of that. So let's build the ultimate laptop and see what that is.
36:23
Well, as you add more and more components, it gets hotter and hotter, and the component density gets greater and greater. So the ultimate laptop is a black hole, of course. We've squashed all this stuff into it, and it runs at this Bremermann limit. What was his name? Bremermann? I've forgotten the name.
36:40
So it's running at 10 to the 51 operations per second, so you think, well, this is bloody useful. I can do a lot in that. But there's a problem. It's pretty small. It's 10 to the minus 27 of a meter big, so you might drop it and not be able to find it. It's got a storage capacity of 10 to the 16 bits, and above all, it lives for 10 to the minus 21 of a second.
37:04
So, yeah. But it's done 10 to the 31 computations during that time, and you might ask, how does it get out? You know, black holes, stuff goes into black holes. You might have heard this. You know, stuff falls into black holes, but it can't get out. Well, that's not true.
37:20
Stephen Hawking found this: something called Hawking radiation that comes out of a black hole when you drop stuff into it. So if you take an elephant and you drop it into a black hole, you get an elephant's worth of energy that comes out of the black hole. But how does it come out? Oh, well, there's a picture. Nice picture from Scientific American. What happens is, outside the black hole,
37:44
if things are falling into the black hole, they split into pairs, particles and antiparticles, and say one of them drops into the black hole. They're split pairs, so their spins and things are equal and opposite. And provided nobody measures them or does anything, you're fine.
38:01
When you change the spin or something of the thing inside the black hole and somebody measures it outside, it instantaneously has this value. It's called quantum entanglement, which Einstein called spooky action at a distance. He didn't believe in it, actually. So there's a kind of technical problem. A, how to encode our program. We drop it into a black hole,
38:21
and the answer instantaneously appears due to quantum entanglement somewhere else in the universe. But these are technical problems, which I leave to future programmers and physicists. Not quite sure how to work it. Oh, well, hang on. That was the ultimate laptop. What's the ultimate computer? Well, that's the entire universe
38:42
behaving as a black hole, right? It's a black hole computer that's, you know, it's the ultimate computer. So we could ask the question, how many operations has the universe done since it was booted? So you boot the universe, and you wait 10 to the 10 years, which is about now.
39:01
Well, it's done 10 to the 123 operations since then. Okay? This is as a quantum computer. Its size is 2 times 10 to the 26 meters, and it'll live for 10 to the 10 to the 100 years. Well, that's a googolplex. Nobody really knows. It's just gonna, like, cool down and die, I suppose.
39:21
I don't know. So that's giving you some sort of scale. The number of operations the entire universe, working as a quantum computer, has done since it was created is, what did I say, 2 to the 409. If you wanted to crack a crypto key, you know, by systematic search of all keys,
39:42
and you've got a 512-bit key, you just go through them all one at a time. If you could test one key per clock cycle, it would take 2 to the 103 universes to do that. Cryptographers actually use measures from quantum mechanics to set the upper bounds on the complexity of algorithms.
40:02
So you don't actually need to, you don't need infinite length keys. You just need to set them in relation to the size of the universe, and then you're fine. And, yeah, okay. So there's some papers there you can read. There's a very readable article in Scientific American about black hole computers from 2012.
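[Editor's note: the key-search arithmetic in Python, using the talk's own figures of 2 to the 409 lifetime operations per universe and one key tried per operation.]

    # a 512-bit keyspace versus one universe's 2^409 lifetime operations
    print(512 - 409)              # -> 103
    print(2 ** 512 // 2 ** 409)   # -> 2^103 universes' worth of work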
40:26
Fun facts and figures. Oh, this is wrong. I noticed it. One kilogram can do 10 to the 51, store 10 to the 31, and I wrote it. It's a 10 giga-gigahertz. That's obviously, it's not a giga-gigahertz machine. It's a giga-giga-giga-giga-giga-giga-giga.
40:41
Giga, giga-giga-giga-giga. Oh, I forget it. I mean, it's fast. It's faster than this thing, anyway. A conventional computer can do 10 to the 9 operations per second if you compare it to the black hole computer. And the universe is... The universe can store 10 to the 92 bits of information.
41:02
Multiply: 92 times 3-and-a-bit is about 290. So the number of bits in the universe is about 2 to the 290. So with a 290-bit checksum, you're about on par with the number of bits you can store in the universe.
41:22
So SHA-1 is 160 bits. It's not enough if you want to store a checksum of everything in the universe. It's probably good for the Earth. It may be all right. You have to think about that. Exercise: how many bits should a checksum have to uniquely identify every atom on the planet? The answer's on the back of an envelope.
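[Editor's note: the back of that envelope in Python, assuming the earlier figure of 10 to the 50 atoms on the planet.]

    import math

    # bits needed to give every atom on Earth a unique label
    print(math.ceil(math.log2(10 ** 50)))   # -> 167, so SHA-1's 160 bits falls just short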
41:43
Right, so the third bit. Golly, I have to speed up. So we've created all this mess. We've got the laws of physics. What are we going to do about it? Right, so I have a little proposal here. We've got to break the laws of physics. We can reverse entropy if we put energy into the system.
42:05
So we've got to build the condenser. And the condenser... I should have made a better slide. It's like a meat mincer. You put all the files into it, and you turn the handle, and fewer files come out. So we condense... You see, what we've been doing in the past is we've been expanding the amount of information.
42:22
Right. So this is just breaking the second law of thermodynamics. We can do that if we put energy into the system. So it's fine. Why did the number of files increase? Well, you start off with files, and you make more... You take a file, and you edit it, and you do this, and you do that. That's entropy increasing. Files mutate.
42:40
When you've got a file... Well, disks are really huge. Every time I buy a new computer, I just copy all the stuff I had on all my old computers onto it. Just a complete mess. I can't find anything on it. I mean, I've got 43,000 Erlang files on my machine. I can't find anything in them. When you have a file, you've got some data.
43:02
You say, whoa, I wonder what file name I should give it. Oh, I don't know. I wonder what directory I should put it in. Oh, I don't know. I wonder which machine I should store it on. Oh, I don't know. And that problem gets worse and worse and worse as you have more and more files. It gets far worse when the system becomes distributed. Right, so I want to declare the war on names.
43:25
You know, there's a war on terror. There's a war on poverty. Well, we programmers, we should declare the war on names. It's kind of been, you know, Git's quite good. It's declared the war on names nicely. We shouldn't have any names. I'll give you an example.
43:42
To talk about things, they need names. We can't talk about something if we can't name it. It's a basic philosophical fact. Okay, so here's a paragraph of text. Cup of tea. He sat down, cut a buttered slice of toast. He shore away the burnt flesh and flung it to the cat.
44:01
And so on. What's that from? I know you'll use Google. Any guesses? James Joyce, it's from Ulysses. Because that paragraph has no name, we can't talk about it. So let's name it. Well, it's not really a name. Okay, so there we go.
44:20
Just compute its SHA-1 checksum, right? So the name of that paragraph is 799150AD. So given that name, I can tell... That's not really a name. It's a hash. Given that hash, I can tell anybody. I'm talking about the paragraph... Well, not about the paragraph. I'm talking about 79915... We've uniquely named this.
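[Editor's note: a minimal sketch of this naming step in Python. The paragraph bytes here are abbreviated, so the digest printed is illustrative rather than the 799150AD value from the slide.]

    import hashlib

    paragraph = b"He sat down, cut a buttered slice of toast. He shore away ..."
    name = hashlib.sha1(paragraph).hexdigest()
    print(name)   # anyone who hashes identical bytes gets the identical name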
44:41
There's no ambiguity whatsoever. Right. So imagine... Let's do away with these URIs. They are evil. They are a silly idea. Well, no, they're not a silly idea, but they're okay for an approximation to the truth. So look at this thing.
45:01
It's got two parts. It's got a host name. You use DNS to look the host name up, and then it's got a resource name. Well, DNS can be spoofed. A naughty person can put false things into DNS or send you to the wrong DNS.
45:23
If the resource was changed, this reference is wrong. It points to the... You know, that reference you found in a file or something, it points to that, but somebody changes that. What do we call it? ABC. They change the content of C. You've still got a reference to the old C. You get the new C. If you put a time to live or a caching in,
45:42
you wouldn't know what time to put it in. Suppose you put this in a cache and it says time to live in ten minutes and the guy changes it after five minutes. You can't invalidate the cache or something like that. There are all sorts of problems with this. And the content can be changed by a man in the middle. Bad guys can listen to this stuff and change the content.
46:01
So we don't want this at all. So let's do it like this. Instead of having a URI, we'll make a new type of thing. We'll just say hash, and there's the name. Just go and get... Notice there is no host name. Right? There's just the hash of the content. And the nice thing about this is, you say to the system, go find this thing.
46:22
Go find this blob that has this hash. So no man in the middle can attack that thing because you get the blob back, you compute the hash, and it's what you asked for. So no man in the middle can change this. No man in the middle can attack this. No DNS thing can spoof this. If you get an answer, it's what you asked for.
46:43
It is completely safe. It doesn't need any crypto keys to be exchanged. It doesn't need to be protected from man in the middle attacks. It doesn't need to do anything like that. So then the question is, how do we find this thing? Because we haven't said which host it's on. Well, that's a well-solved problem. Okay, so DNS usually works.
47:03
You've got two start addresses in your... Okay, so you have two start addresses in your machine. Data is what you've got when you turn your machine off. Okay, so when you boot your machine, you've got two start addresses in a cache, a DNS1 and a DNS2, and you go and look at these two addresses.
47:20
In a peer-to-peer system, you've got a long list of machines that are known to participate in a distributed hash table. And what do you do? Let's suppose I know about these machines here. Here's the IP addresses of machines that are participating in a peer-to-peer distributed hash table. And what you do is you compute the SHA1 checksum
47:41
of each of these IP addresses, and you sort them all. And you say, okay, I've got this resource. I want to find something with this hash. Where is 536A52? You look in this list and say, well, it's somewhere... These two machines here that I've highlighted, they're the nearest machines in this space to this hash address.
48:01
So I go and ask them. And they've got lists like that, and they'll find the nearest ones, and they'll go and ask them, and so on. Okay, so this is the basis of Chord and Kademlia and algorithms like that. These are well-understood algorithms. The Kademlia system's got something like nine million machines. It was actually invented by the file sharers, who are using it to share movies and things like that.
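[Editor's note: a toy sketch of the lookup idea in Python, with hypothetical peer addresses and plain numeric distance; real Kademlia uses XOR distance and iterative queries.]

    import hashlib

    def sha1_int(s: str) -> int:
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    peers = ["10.0.0.1", "10.0.0.2", "172.16.4.7", "192.168.1.9"]   # hypothetical
    ring = sorted((sha1_int(ip), ip) for ip in peers)   # peers placed by hashed address

    def nearest(resource_hash: int, k: int = 2):
        # ask the k peers whose hashed address is closest to the resource's hash
        return sorted(ring, key=lambda peer: abs(peer[0] - resource_hash))[:k]

    print(nearest(sha1_int("contents of some blob")))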
48:24
But it works great. So we could put all information into this, right? Instead of just having movies and things like that, just put all information into it. Right, so here's how to make... So I'm going to make the condenser. The first thing we want to do is find all identical files and then find all similar files
48:40
and reduce the amount of information on the Internet. Well, finding all identical files on the Internet is trivial. We compute the SHA-1 checksum of each file, and we inject it into the hash table. That's all we have to do. All the machines in the world can run in parallel. We could do this in a few hours
49:02
and get rid of all the replicas, okay? That's what Dropbox does, I think. You know, if you put stuff in there, it says, oh, it's got the same checksum as that, I don't need to keep multiple copies. I just keep, what, two or three copies. Right, so that's the finding-all-identical-files bit; there's a little sketch of it below. Now we want to find all similar files.
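[Editor's note: a sketch of the identical-files step on a single machine in Python; the directory is hypothetical, and the Internet-scale version would inject the digests into the distributed hash table instead.]

    import hashlib, os
    from collections import defaultdict

    buckets = defaultdict(list)
    for root, _, files in os.walk("/some/directory"):   # hypothetical root
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                buckets[hashlib.sha1(f.read()).hexdigest()].append(path)

    # any bucket with more than one path holds byte-identical replicas
    replicas = {digest: paths for digest, paths in buckets.items() if len(paths) > 1}
    print(replicas)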
49:20
Right, find the most similar file to a given file. This is a tricky problem. I've been thinking about this for ages. Okay, so the best algorithm I know, I'll just tell you the answer. It's called least compression difference. If two things are similar, then if you concatenate them and compress them,
49:42
the size of the thing will be... Okay, so you've got the file here. You compress it. It's got a certain size. If you took the file twice, concatenated it and compressed it, the size of the compressed file would be only a little bit greater than the compressed size of the single file, because the second copy is just describing the difference, and there is no difference.
50:00
If you take a file, concatenate it to itself a hundred times and compress it, it's about the same size as the file compressed by itself. If they're very dissimilar, two files that are totally dissimilar, random numbers that are totally dissimilar, if you compress A and you compress B and you compress A, concatenated to B, the size of the totally dissimilar one
50:22
will be more or less the size of the sum of both of these. So this will find the most similar thing. It's very insensitive to changes in the compression algorithm. So I have a thing on my machine. I type... When I've got information, I type it into a little box like Twitter.
50:40
And I press the Sherlock button. Sherlock Holmes, his little icon there. And I press Sherlock. And it does least compression difference on most of my files. Takes a long time. And then it says, hmm, do you know, the most similar thing to that was something you did 15 years ago and it's in this file. And it finds it for me. I want that to be on the entire internet. I want us to put all the information we've got into the internet
51:03
so that we reduce the amount of information. We reverse entropy, and you just make it more manageable. Okay, right. This takes order n time, where n is the number of files on the planet. So it's not super quick. I'm just wondering if we can speed it up.
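[Editor's note: a minimal sketch of least compression difference using zlib; the normalised scoring below follows the "normalized compression distance" idea and is an assumption, since the talk only describes the principle.]

    import zlib

    def C(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    def distance(a: bytes, b: bytes) -> float:
        # near 0 for very similar inputs, near 1 for totally dissimilar ones
        return (C(a + b) - min(C(a), C(b))) / max(C(a), C(b))

    x = b"the quick brown fox jumps over the lazy dog " * 20
    y = b"the quick brown fox jumped over a lazy dog! " * 20
    print(distance(x, x))                        # ~0: a file against itself
    print(distance(x, y))                        # small: similar texts
    print(distance(x, bytes(range(256)) * 10))   # ~1: dissimilar data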
51:21
We have to reduce the search space. I don't really know how to do that. I'd like to talk to some researchers who'd know how to do that. But there's a little hint there on... We certainly know things that couldn't be similar because they're very different sizes. The plagiarism detection algorithms, do you know how they work? They work on a rolling checksum. There are databases of open source software
51:43
and of student essays that people do for theses. And what you do is you take the source code, or you take the student essay, and you chop it into 50-byte blocks, say 50 bytes, and you compute a checksum, an MD5 or an SHA-1. You stick that in a key-value database.
52:02
Sorry, no, it's not an MD5. It's called a rolling checksum. Did I...? Oh, I didn't mention what they're called. A rolling checksum is one that you take 50 bytes, you compute the checksum, and if you shift it by one byte, you can incrementally change the hash. The typical one, you XOR all these bits. Simple one is XOR all the 50 bytes.
52:22
When you shift it by one byte, you XOR out the old beginning byte and XOR in the new final byte, and now you've got the checksum of the next block. That's called a rolling hash. So what you do is you take your data, split it into 50-byte blocks, compute the hashes, rolling hashes, and stick them in a, you know, a Kademlia or a distributed hash table.
52:41
And then you take your 50-byte block, look it up in the hash table. If it's in there, that line would possibly be the same, and then you go and look at it and see if it actually was the same. So you can do that in linear time per document. It needs order of N lookups where N is the number of bytes in the file. So it's kind of reasonable. Could probably do that fairly quickly, I think.
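[Editor's note: a toy rolling checksum in Python in the spirit described, XOR over a 50-byte window updated in constant time per one-byte shift; real systems use stronger rolling hashes, such as the rsync-style rolling sum.]

    def rolling_xor(data: bytes, window: int = 50):
        # checksum of the first window, then one updated value per byte shifted
        h = 0
        for b in data[:window]:
            h ^= b
        yield h
        for i in range(window, len(data)):
            h ^= data[i - window]   # XOR out the byte leaving the window
            h ^= data[i]            # XOR in the byte entering the window
            yield h

    text = b"some document to fingerprint, fifty bytes at a time. " * 4
    fingerprints = set(rolling_xor(text))   # these get looked up in the hash table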
53:05
Right. Summary. So we made a mess, a horrible mess. Created millions and trillions of man-hours of work. In order to get out of this mess, we need to reverse entropy and sort of start making things simpler
53:22
instead of making things more complicated. Quantum mechanics sets the limits to the precision or the number of bits we need in checksums and things like that, and we can dimension our systems accordingly. We would love some mathematics to prove things. I think we're gonna have to wait a long time.
53:42
And, yeah, well, join the war on names and help me build the condenser. That's the next five years' job, and thank you very much. So any questions?
54:05
Yeah, where do I buy a quantum computer? You can't. And wouldn't a black hole computer be dangerous? Possibly. No questions.
54:21
Different problem. Google it.