The Hip Hop Virtual Machine
Formal Metadata

Title: The Hip Hop Virtual Machine
Number of Parts: 150
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers: 10.5446/51538 (DOI)
NDC Oslo 2013, 5 / 150
Transcript: English (auto-generated)
00:06
Hi, my name is Andrei Alexandrescu. I'm seeing a few folks who've been in my first talk today. This talk is going to be quite a bit different. I'm going to talk about some work by Facebook, which is relevant because it's open source,
00:22
so it can be used by anyone. It's a virtual machine for the PHP programming language. And, you know, before you answer in kind with, you know, what's wrong with you using PHP and all that stuff,
00:40
I'm going to thoroughly argue the point that such a virtual machine and JIT is a good thing to have, and using PHP has certain assets going for it,
01:02
particularly for Facebook, which has an interesting history with PHP. So, the hip-hop VM has been Facebook's production PHP engine starting sometime November of last year, and it has been the workhorse behind Facebook ever since.
01:25
It's a JIT compiler, meaning it can interpret code, but it can also, on the fly, compile code down to native x86 assembly and execute it, all at once.
01:41
And it has an unusual compilation strategy, which we're going to discuss, which I find interesting and applicable to a variety of other languages. So, as I mentioned, I'm going to argue, you know, why PHP. So, PHP has a number of liabilities, which are well-known and discussed in the community. Already I'm seeing a couple of smirks from people in the audience,
02:03
like, yeah, you know, PHP, weren't you the D guy? Like, all this clean language and stuff, and, you know, what's wrong with you? Well, consider less than a decade back, nine years, 2004. At that point, Facebook was just starting,
02:22
and, you know, now a historical dormitory at Harvard and all that good stuff. And at that time, there was not a lot of choice in terms of, like, let's build a great website distributed and, you know, used by a billion people and all that stuff. First of all, there's not one billion people probably online at that time.
02:45
2004, let's say, probably there were one billion people, but that was it. So, at that time, PHP was pretty much the choice, you know, the default choice of a language. If what you wanted to do is build a site real quick,
03:01
you know, seat of the pants operation, like, you know, literally putting it on your desk, you have a machine and it's shared by, you know, six users at a time or whatever, and you want to develop real fast, connect to a database, have all that rapid cycle of development.
03:20
So PHP was it. And PHP has one very interesting thing going for it, as a language that's aimed at robust development, as strange as that might sound, which is the following. In PHP, every request, so essentially the lifetime of a script in PHP
03:43
lasts from the moment the request is made until the request has finished. After that, it kind of goes away, right? Well, that state goes away. In contrast, there are many other web services engines that actually keep state between invocations
04:02
and kind of try to kind of resurrect the zombies from the last request and stuff like that. For PHP, this simple lifecycle of a script has turned out to be very successful for Facebook because any bugs, either in the engine or the scripts being run, any memory leak,
04:22
you know, any issues there were, would essentially disappear with the termination of that particular request, which means any other fresh request would start with a Brave New World clean slate and would just run however it runs. So there's no sort of progressive deterioration of things
04:41
as the site was being used. So, well, 2004, Facebook is launched using PHP, had a relatively low traffic which has grown ever since, and it's become sort of a blessing and a curse
05:01
in the sense that right now we have many millions of lines of PHP code at Facebook, and, with PHP used by very good developers, it turns out to be a very convenient tool to have around. So essentially, for anything you want to do at Facebook, any change you want to make to the site, you can very easily dogfood stuff
05:22
that we already have working and tested and thoroughly streamlined. So once you consider this, it is very difficult to imagine, for a Facebooker to imagine day-to-day work as a front-end designer with all kind of just,
05:43
yeah, I need to get a list of friends, so I need to kind of display with look ahead and all that stuff. It's like, literally it's like lines of code away in PHP with the tools that we have. Definitely you wouldn't imagine, just as sort of an aside, so how many of you are using Facebook Graph Search,
06:01
like the search at the top of the Facebook? Yeah, okay. So you know what I'm talking about. There's this thing, and essentially search for friends, if I search for Ove, and I type O-L-V, and by that time I see this guy, who's like testing his camera on me right now, I thought he's bootlegging me,
06:20
which made me feel real good for a few seconds there. So essentially you type a few letters and you see out of like, not only 150 friends or what have you, but out of literally like one billion people, because you can find people who are not your friends and are directly connected to you and kind of second-degree connections
06:42
and the kind of people around you, geographically and all that good stuff. And we wouldn't build that in PHP. So let's clarify that. We wouldn't build that in PHP. There is a service that is implemented in another language, namely C++,
07:01
which is going to have an index, literally there's a hash table with millions of elements, like a billion elements or whatnot. And this hash table is stored on distributed many servers, and the PHP service is going to give a very simple, easy-to-use interface to that real-time service.
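The layering the speaker describes, a thin scripting-level interface wrapped around a C++ real-time service that owns the index, can be sketched like this. All names here are invented for illustration; the real index is a hash table with around a billion entries, sharded across many servers.

```python
# Sketch of the layering described above: a backend service owns the big
# index, and the scripting layer is just a thin, easy-to-use call into it.

class SearchIndex:
    """Stand-in for the C++ real-time search service."""

    def __init__(self):
        self._index = {}  # name -> list of user ids (really: sharded)

    def add(self, name, user_id):
        self._index.setdefault(name.lower(), []).append(user_id)

    def lookup(self, prefix):
        # Prefix search over the indexed names.
        p = prefix.lower()
        return sorted(uid
                      for name, uids in self._index.items()
                      if name.startswith(p)
                      for uid in uids)

def search_people(index, prefix):
    # The PHP-level interface: one simple call away.
    return index.lookup(prefix)
```

Typing a few letters maps to one `search_people(index, "O")` call at the scripting level, while all the heavy lifting stays behind the service boundary.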
07:21
So it's good to make a distinction between what you can do and what you think of at different levels and in different languages. So all that being said, parentheses closed, PHP is a great tool for "just add that one widget": just put it on the page and it's there and it just works.
07:41
This has worked like tremendously well for Facebook. I should add that it may not work just as well for the same size of code at other companies, because at Facebook, there's a huge focus on good engineering and hiring the best engineers and such, whereas saying, let me hire a few average developers
08:05
and try to build a big thing with them, it's more difficult, because unless you use PHP with care, it's going to exhibit all of the issues that we know it has, plus completely bizarre return types and things like that. So at Facebook, with the appropriate discipline
08:21
and use of talent, we're going to carefully avoid such problems. So continuing the story, from 2004 to 2009, we used the Zend interpreter, which is a switch-on-bytecode C interpreter.
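The switch-on-bytecode idea can be sketched in a few lines of Python (a stand-in with invented opcode names; the real engine is a C switch over one-byte opcodes with far more cases):

```python
# Minimal switch-on-bytecode interpreter: read an opcode, dispatch on it,
# perform its action. Opcode names are invented for this sketch.

def interpret(bytecode, local_vars):
    stack = []  # operand stack of the little stack machine
    for op, *args in bytecode:
        # The "big switch": dispatch on the opcode.
        if op == "PUSH_LOCAL":
            stack.append(local_vars[args[0]])
        elif op == "PUSH_CONST":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "SET_LOCAL":
            local_vars[args[0]] = stack.pop()
        else:
            raise ValueError("unknown opcode: " + op)
    return local_vars
```

Running the bytecode for something like `$c = $a + $b`, i.e. `interpret([("PUSH_LOCAL", "b"), ("PUSH_LOCAL", "a"), ("ADD",), ("SET_LOCAL", "c")], {"a": 1, "b": 2})`, leaves `c == 3`.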
08:41
So essentially, literally it's a big switch and reads the next byte code, which is like one octet, like one byte, and which is the byte code representation of the PHP code, and depending on that guy, it's going to do different actions, can fetch operands
09:01
and do stuff and whatnot. And that's pretty much the most direct implementation of an interpreter that you can think of, but we figured Zend is too slow for us. We also figured out, the code being so big, there was at a point sort of a lock-in issue,
09:22
because we couldn't switch the language cheaply. So in 2009, we launched HipHop, the static compiler, which is kind of an interesting endeavor. Take the PHP code, compile it to C++ code,
09:44
which creates a large, next to humanly unreadable C++ program, and it's all kind of virtual dispatch and all this kind of dynamic typing and all that stuff, but it's in C++, and then take that C++ code and compile it with state-of-the-art compilers, such as GCC, and at the end,
10:04
you're going to get a two-gigabyte binary, which now would not even work on 32-bit systems. You have this two-gigabyte binary, which was Facebook, the site, so you launch that guy, and it just is Facebook.
10:21
It serves Facebook. I'm going to open another parenthesis just for a minute, because I find this story just too funny. So you have this big C++ program. So you can imagine any number of problems happening with a large, generated C++ program. C++ is a terrible intermediate language to work with,
10:41
and here's an example. That program had 30,000 global variables. It's generated, I mean, in a way, it's no sin, right? It's generated, so you can say, I didn't sin, father, and that kind of stuff. So it has 30,000 globals, and it was very slow to compile.
11:00
So, you know, we filed a bug report with GCC, like, what's going on there, folks, and why is it so slow? And they said, well, you have 30,000 globals, and we put them in a singly-linked list, and we search that list for name lookup. So it does a kind of linear search over 30,000 things whenever you have a name somewhere. And we said, well, then you better fix that crap,
11:22
because it's, you know, linear search in this day and age is kind of embarrassing. And they said, no, guys, you are embarrassing, because you have 30,000 globals. Like, no sane program in this world should have this many globals. And we're like, oh, all right. So compiling the site took a good few hours,
11:43
and not only because of that. It's a very large program, after all. So, parenthesis closed. So we've been running with that engine for a good while. And can you see any problem with this model of building the site?
12:01
Like, think of this. Think you're a PHP developer working on the site, and you want to kind of, you know, get work done. What problems do you see? Not you, because you answered yes. Feedback, cycle, and debuggability. Did I give you an invitation for tonight? Okay, I'll see you then.
12:21
By the way, for folks who have not been in my first talk, so I have some wonderful invites. I presume people who are interested in this kind of stuff would be interested in talking to me and my coworkers a bit more. So this is a private party by Facebook tonight at 6 at a very posh location in Oslo, free drinks and food. So ask me questions if you want to get some of this.
12:42
Good stuff. All right, I think I have questions. No, I didn't. Hold on. Okay, so. The whole debug cycle. So for a while, while we had the static compiler, we had this weird situation, which was, if you want to kind of develop the site,
13:00
work on the site day to day, you'd be using the interpreter. And then if when you're kind of, yeah, I'm done and stuff, you push the thing into version source control and we had this long cycle build, which would rebuild the site overnight and whatnot. And well, guess what? There's the interpreter which does things one way
13:23
and there's the compiler which does things the other way and supposedly the things are 100% identical and as I'm sure you know, it's like synchronizing clocks. You can never do it like 100%, right? There's gonna be small variations in behavior. There's always gonna be this one little thing, this one little sequencing, this one little quirk
13:42
that you're not gonna get the same. So we did have issues with, you know, this works on my machine under the interpreter, it doesn't work on the compile thing and oh my god, let's take a look, let's see what the hell is happening here. So definitely that was not tenable for a long time. But it did buy us 2x run speed.
14:04
So switching from Zend to Hip-Hop was like 2x faster to run the site. So that means, I mean, translate it the other way if you look at it from the other frame of reference, it means less power consumed for the same functionality
14:22
or more functionality for the same available power. Which is kind of awesome because at Facebook it's always good to have some more community power available for interesting stuff that could be going on, people you may know and good ads, which I know you guys hate.
14:44
And stuff like that. Now the power consumed per user of Facebook, we published that a while ago. Actually it's very low, it's in the hundreds of milliwatts per user. So it's like you have a little light bulb there and that's how much power you consume per user.
15:00
So it's very economical in that sense. Okie doke. 2012, well as I said, last November we launched the VM but it kind of had a long history. We started developing that a couple of years earlier. And it's a virtual machine, it's a just-in-time compiler
15:24
and it unifies the development and production which solved that problem that you mentioned. Because essentially you could use the same exact generator for both day-to-day work and the website proper. So that solved a huge problem. In a way it was a step back because it's much more like an interpreter than a compiler.
15:43
But in many ways it's been a step forward. So let's take a look at how much faster things have been due to using this technology. So we have Zent, which is less than twice as slow.
16:00
And this would be like, let's say as reference, let's use the throughput that we achieved through the first release of hip-hop. So this is the throughput, day one, hip-hop. And then a nice thing is that people have worked on the compiler to generate better code to make it better and stuff like that.
16:24
So it's like squeezing blood from a stone. They got yet another 2.5x, 2.3x improvements just by improving the static compiler that generates C++. So generating better C++ and things like that.
16:42
So we are way, way ahead, like what we would have done if we used run-of-the-mill technology for running PHP. It's great, do you think you can do better than that with the VM or is it gonna be worse? I mean, where do you think it's gonna sit if you have like, okay, let's build a VM
17:03
and JIT for this? What's your opinion? Honest question. It's not a trick question. Lower? Yeah, actually, that is correct. In the first release that actually worked, it was the release, internal release.
17:20
In the first kind of version of the interpreter that actually worked, the JIT was eight times slower than the production hip-hop. And true story, so I was like, this team does interesting stuff and whatnot and my manager at the time, he said, I would not advise you to join that team
17:41
because that team is not gonna work out. The project is not gonna kind of work out. One great thing about Facebook is that we get to try like weird projects all the time and those that succeed, they're just amazing because they're so out there that if they succeed, the win is big. It's not kind of a conservative bet
18:01
that's likely to succeed in a little way. So very interesting. So I did end up on that team, by the way. So I'm on that team now. So I kind of, there's poetic revenge of sorts. So that's the static compiler evolution. Very nice. Probably Zend kind of improved a bit too,
18:23
but I presume it's never near that. So from the whole project of JITing PHP, we kind of learned a very important thing. That type inference is sort of the crux of the matter.
18:41
The single most important thing that you care about when it comes about JITing, at least for this particular language. Let's say this particular language class, because probably things like Python would enter the same realm. So type inference would be the best thing to look at
19:02
and the most important and sort of the focal point of the whole discussion. Let me kind of give details on why. First of all, PHP has dynamic types. Like who knows the Goldbach conjecture? Question for the ticket. So the Goldbach conjecture is like famous conjecture devised by a guy called Goldbach in 1742.
19:25
I'm making this up. So sometime earlier, probably it was the 18th century. So he said, I tried it for a few numbers and it turns out that every even number greater than two
19:42
can be expressed as a sum of two prime numbers. That's interesting because the guy had kind of tried it on like, I don't know, 15 numbers or what, like a few numbers because he didn't have a computer. So now it's checked up to like 10 to the power of 18 and it's like they didn't find one that doesn't work. So anyhow, it's one of the greatest open problems
20:01
in computer science, sorry, in math. And this is gonna be, well, depending on Goldbach conjecture, give me a float or give me a string. So you can't know statically what's going on, right? Another very classic example is let me give a row
20:21
and give me the first element of that row, and depending on the schema of the database, you're gonna get an integer or whatnot. And not to mention division, where the divisor can be an integer, floating point, or even a string or a map or what have you. It's completely dynamic, right? So all of these are also in part the power of PHP,
20:43
because you get to manipulate the databases in a very convenient manner. By the way, question for you guys. How many of you use a dynamic language like this in your daily work? Okay, fair amount, great.
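The statically unknowable expressions the speaker lists (the conditional float-or-string, the database row) can be mimicked in Python; these are illustrative stand-ins, not PHP semantics:

```python
# The same expression yields different types depending on runtime data,
# so its type cannot be inferred statically -- only observed.

def first_column(row):
    # Like reading the first element of a database row: the type is
    # determined by the schema, which is invisible in this code.
    return row[0]

def float_or_string(flag):
    # Like the Goldbach-conjecture example: which branch runs is
    # only known at runtime.
    return 1.0 if flag else "one"

int_row = [42, "bob"]        # schema A: first column is an integer
float_row = [3.14, "alice"]  # schema B: first column is a float
```

A static analysis of `first_column` alone can say nothing about its result type; only watching it run can.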
21:00
So there's understanding of all of the stuff. It's kind of useful and good to have and apple pie and motherhood. So however, we figured something interesting. We figured out statistically, statistically, most expressions have one type,
21:22
which is very interesting because it kind of takes you into a whole different world in which you don't care about the exact type, you care about the most likely type. And there are some good examples. I mean, for most of your PHP work, you know, interpreted work, you know that when you divide two numbers, it's a number
21:41
and that kind of stuff. So for example, this guy is almost never false. I think it's false if this is null or I forgot that. You know, it's one of those weird PHP, like PHP's hell kind of articles, right? Oh my god, if I divide the map by a string, it's gonna give me false and that kind of stuff. I don't know, is this the case?
22:01
Who knows PHP real good? Okay, nobody. Well, I guess it's a compliment. Database row types are gonna stay put, so whenever you access the first element in any row of a given database, it's gonna give you the same thing. A global is always true or always false, at least within a running program, and so on.
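The observation that most expression sites have one dominant runtime type can be made concrete with a tiny profiling sketch; the site names and the 99-to-1 split below are invented for illustration, not Facebook data:

```python
from collections import Counter

# Record the runtime type observed at each expression site; per the
# statistics described above, one type almost always dominates.

profile = {}

def observe(site, value):
    # Count the type of this value at this bytecode/expression site.
    profile.setdefault(site, Counter())[type(value).__name__] += 1
    return value

def likely_type(site):
    """Most frequently observed type at a site, and its frequency."""
    counts = profile[site]
    name, n = counts.most_common(1)[0]
    return name, n / sum(counts.values())

# Simulate 100 executions of one division site: 99 floats, 1 oddball.
for i in range(99):
    observe("div@line12", i / 7)
observe("div@line12", False)  # the rare PHP-style "division gave false"
```

Here `likely_type("div@line12")` reports `("float", 0.99)`: exactly the kind of signal a JIT can bet on.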
22:22
So, yeah, this is very interesting because there's a kind of a long range correlation here. It's like, you know, for example, like a century ago, like algebra was the thing in math and kind of, you know, all of this, you know, mathematical analysis and kind of, you know, derivatives and integrals and such and algebra.
22:43
So these are kind of the big things in math and right now, what do you think is the biggest thing in math? What's hot in math? Big data. Exactly, so where's the big, where's, you know, what's the math aspect of big data?
23:01
Machine learning, what's the big aspect, what's the mathematical big thing? Statistics, you got it. I give you an invite already. So statistics is sort of the new algebra, if you wish, right? So you're not operating with kind of, you know, known things like discrete things, things you know about. It's all statistics now and machine learning is great application of it.
23:23
You know, big data is a big, because what do you want to do on big data? Do you want to look at every single thing there? It's sometimes big data, look at aggregates and statistics. So I was saying like there's this long distance relationship between, you know, statistics is a new algebra in math.
23:43
And statistical typing is the next big thing in writing a good interpreter. So, hmm, yeah, this is interesting. So indeed, statistics are good because for a language like this, you know that an expression can have any possible type,
24:03
but actually statistically it's going to be like, you know, 99 times out of 100 it's going to always be one type. And yeah, do the statistics change with
24:21
the quality of the engineers? You know, I mean, honestly, I'm going to be very serious here. I think this is a good research topic because I could presume that a certain style of programming or a certain, I don't know, style of application or a certain, you know, approach to engineering
24:41
would lead to different spreads of these statistics. So I don't know, what would be your guess, folks? Like, does a bad programmer write statistically more entropic types or not? Right, so this is kind of an interesting question to ask myself and it would be a good subject for a study
25:00
but you know, if I were like, you know, one of the developers who issues a result that tells me I'm crappy, I don't want to be in that study, you know? So I kind of want to avoid that. You know what, don't publish that. Okay, so at the same time, let me kind of issue an opinion on this.
25:24
I think people who write highly entropic types, if you wish, with statistics that are spread like, you know, in a way they exploit the language better because they take advantage of the dynamism in ways that may be creative. So that's a kind of interesting open question. All right, so our vision with, I mean, my team's vision,
25:44
I wasn't there yet, with HHVM is that, well, let's kind of keep an eye on the types and the statistics of types. And we're not gonna generate code, we start by interpreting code. And that's what many VMs actually do, many JITs actually do.
26:01
So you know, kind of interpret code and kind of track the most likely types to be there. And you discover types that cannot be inferred in real time. One interesting kind of side point is that we didn't want to build, like, the absolute best static compiler.
26:21
So we didn't want to kind of, oh yeah, let's have this PHP program and, you know, collect statistics and whatnot. We didn't want to have, like, the minute optimizations that are the hallmark of modern compilers. For example, like, you know, something that worked for 70s compiler technology would have been great for us, but, you know, the matter is we're not compiling C,
26:41
we're compiling a different language. So our approach was: take PHP, watch the types, generate specialized code for types, in a way that I'm gonna show you in a minute. And the generator of specialized code for those types does not have to be the best compiler ever, right? So far, so good.
27:00
And by means of introduction, one step that was done way ahead was PHP has sort of a semi-standard bytecode format, which is kind of simple. So this is like, you know, A plus B gets C, is going to be translated into simple stack machine bytecode.
27:20
So, you know, push argument two, push argument one, add, and then set left value to the result. All right. And a simple program. I swear I didn't design these slides, but I discussed min and max in my previous talk.
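The stack discipline just described, push the operands, combine, store the result, can be sketched in a few lines. This is an illustrative Python toy, not real HHBC, and the opcode names are invented:

```python
# Toy stack machine for "$c = $a + $b" (illustrative only; the
# opcode names are invented and this is not real HHBC).

def run(bytecode, locals_):
    stack = []                          # the implicit operand stack
    for op, *args in bytecode:
        if op == "push_local":          # push a local's value
            stack.append(locals_[args[0]])
        elif op == "add":               # pop two operands, push the sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "store_local":       # pop the result into a local
            locals_[args[0]] = stack.pop()
    return locals_

# push argument, push argument, add, set the left value:
prog = [("push_local", "a"), ("push_local", "b"),
        ("add",), ("store_local", "c")]
print(run(prog, {"a": 2, "b": 3})["c"])  # prints 5
```

Note that no instruction names a destination register; the operands are implicit on the stack, which is what keeps this form compact and easy to emit.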
27:41
And I swear there's another guy who wrote these slides, and, you know, he also discussed the same example. Call it serendipity or monoculture, if you wish. So function, my max A and B, I'm going to, I'll not be very ordered here, but, you know, this is PHP. So, you know, let's see kind of step by step what happens to this function to the point
28:02
where it gets executed by the interpreter and we get results from it. So first of all, you know, you can call like max of two integers, good. Two floating points, good. Boolean, good. Arrays, which is kind of interesting. Other arrays, which is even more interesting.
28:22
And strings, which is going to return the max, as a string. So we have all this polymorphism encoded in a very terse manner. Without types, it's all dynamic, so it's all going to be dynamically dispatched and all. All right, well, let's compile down to bytecode and I'm going to tell you exactly what happens here.
28:42
So I'm going to push an lvalue, which is the second argument I got on the stack. So this one is like, you know, the second argument on the stack and zero is the first argument on the stack. With this, I'm pushing A and B onto the sort of operand stack, right? Who knows, who's a bit familiar with this notion of, like, a stack machine?
29:04
Okay, I don't need to explain anything. I'm done. So I'm pushing these guys on the stack. Greater, it takes the top two things and eats them and puts back the result on the stack. Stack machine, you raised your hand, come on, right? You raised your hand.
29:20
Everybody raised their hand. Okay, so then if the result is zero, I'm going to go here and I'm going to push the first guy or the second guy, and otherwise I'm going to push the first guy and return. Awesome, and I'm done. Terrific. And this all is bytecode,
29:41
but it's sort of a high-level bytecode if you wish because, for example, this GT is kind of, you know, function code that knows about strings and knows about arrays and all that stuff. So it's sort of, you know, a high-level bytecode. This is our HH bytecode, HHBC. This is our internal name for it.
30:00
Great. Well, how do we get from here, let me get back a bit. How do we get from this bytecode to something that's really fast, that kind of is able to do the integer version real fast and stuff like that. So, whenever I say we, it's really the team before I joined. So this is sort of where we are now.
30:23
Yes. Why did we choose a stack machine as opposed to a register-based machine? Right, actually that's a great question
30:42
and I'm going to sort of give the short answer because this is the subject of debate and we could talk about this; reasonable people may disagree on this. And, as you mentioned, Java also chose a stack machine. So the short story is,
31:03
the stack machine code is easy to generate, easy to interpret, easy to look at; it kind of makes for an easy, basic toolchain. And with the register-based thing, you've got to have, like, a register allocator and all of these esoteric things.
31:22
And there's a paper, I recall, that a colleague mentioned to me, where an equivalence has been demonstrated: you can take code for a stack machine and make it into code for a register machine, and the other way is much harder, something like that. So there's pros and cons all the way, but essentially choosing a stack-based bytecode
31:44
makes for a very small bytecode that's understandable. So actually size of the bytecode is also a big thing. So this is very compact because it uses the stack implicitly as opposed to put this in register A, put this in register B, and so on. All right, care for an invite tonight?
32:01
Okay, I'll put it here for you. Feel free to come take it. All right. So the first insight here is that we don't want to kind of optimize across jumps. So we want to take a basic block at a time. And, in compiler-writer terminology,
32:20
a basic block is a block that has no jumps anywhere, no branching, no if, no while, no nothing. That's a basic block. It's like contiguous straight-line code. And we're going to specialize for specific types or combinations of types. And our goal, our underlying goal, being incremental type discovery. And here's how it works.
32:40
We define this notion of tracelet. And a tracelet is going to be one of these basic blocks inside which the types are given, known. So once you have a tracelet, you get to actually generate machine code for it that's fast and efficient and everything. And this is very interesting because then you have sort of a collection of tracelets
33:01
that kind of interact with each other and you can combine them depending on the types to obtain pretty fast code. So the tracelets are built just before running them. And of course cached; there's a repository of tracelets.
33:21
And they're translated to machine code, which is also in that cache, the repository. And then they're going to be chained appropriately to get work done. This is sort of good and bad because in PHP, the average length of a tracelet is just a few instructions. So it's not ideal: as a good compiler,
33:43
you want to have as much code as possible to optimize over. Inter-procedural optimization is like whole program and then you have inside a function and you have inside a block and so on. And the best optimizations are those of larger scope. They're slower, but they optimize very well. So in this case, it's kind of a disadvantage that tracelets are so short.
34:03
So, well, at the tracelet boundaries, we're always ready to have an on-stack replacement to the interpreter, and this is good because this is what you want: to kind of have the interpreter and the tracelets replace each other at a moment's notice. And let's take a look at how we build a tracelet
34:20
for this particular call. So my max, two numbers, two integers. Here's the initial code. So, as I said, tracelets, contiguous, they're gonna have no branching. So this is my first tracelet, right? And this is my sort of abstracted away tracelet
34:42
with jump, z, whatever. And it's a very short kind of sequence of code, but you'd be amazed how much we can do with it. So we have two locals we know and we type this particular specialization with int and int. So by this point, we have a tracelet that's guarded.
35:00
I know it's an int and I know it's another int, and based on those assumptions we're going to generate code. Once that's happened, well, hold on. So greater is going to be a function that takes two ints and returns a bool. I know that because I have the guard here and the conditional jump is going to be like a function that takes a bool and returns nothing.
35:23
It's a continuation. So, wow, very interesting. So at this point, this greater can be specialized, which is a big deal, it turns out. It doesn't have to be a greater that knows about all types in PHP. It's a greater that knows only about integers. And then, and here's where things get interesting.
35:42
We get to take this tracelet with the guard. So these are the guards. This is the actual bytecode for PHP. And we actually translate the guard, which is, well, the type check; this is really like, you know, look at the type tag of the first argument and look at the type of the other argument.
36:01
And if they don't match, they don't match this guard, then actually I'm going to retranslate, kind of take the alternative route. And this is, you know, this is the code generator for that thing. How much faster do you think that's gonna be?
36:23
So my max of 10 or whatever, and instead of interpreting this thing, you're going to actually do this. So you have, well, we have two jumps here that are going to a guard. And then I'm going to have the classic, I move into registers and compared registers and jump and such.
36:42
Well, let me put it this way. It's fast enough that even a mediocre code generator, right, even a mediocre code generator with the strategy is going to do better than the interpreter, right? So once you have this whole strategy
37:00
of specializing depending on the types. All right, so continuing that, so we have some more stuff, so we have some more stuff, and we have some more stuff here. So we have a very small tracelet there. And we're going to stitch these tracelets together. Okay, int int, this is my tracelet.
37:21
Int int, this is my second tracelet, which is like just a return. It's an extremely short tracelet. And they're kind of tied together, stitched in our cache repository. And all of a sudden, you're going to have a working program, a working function. And it's going to do work for integers only.
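The stitch-and-guard machinery described above can be caricatured in a few lines. This is a Python sketch with invented names, HHVM emits real machine code, not closures, but the shape is what matters: look up a translation keyed by the input types, specialize on a hit, and punt rare combinations to the interpreter.

```python
# Guarded-tracelet dispatch, caricatured in Python. All names here
# are invented for illustration; the real system generates and
# caches machine code, not Python closures.

translation_cache = {}  # (function, input types) -> specialized code

def specialize_greater(ta, tb):
    # Specialized "greater" for int,int only; other combinations
    # would get their own translations in the real thing.
    if (ta, tb) == (int, int):
        return lambda a, b: a > b
    return None

def call_my_max(a, b, interpret):
    key = ("my_max", (type(a), type(b)))
    fn = translation_cache.get(key)
    if fn is None:
        gt = specialize_greater(type(a), type(b))
        if gt is None:
            return interpret(a, b)        # long tail: just interpret
        fn = lambda x, y: x if gt(x, y) else y
        translation_cache[key] = fn       # translate once, reuse after
    return fn(a, b)

print(call_my_max(3, 7, max))      # prints 7 (specialized int,int path)
print(call_my_max("a", "b", max))  # prints b (interpreter fallback)
```

The guard is the `key` lookup: if the type tags match a cached translation, you run the fast path; if not, you either retranslate or leave it to the interpreter.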
37:43
Well, what happens if I call the thing with, let me see how, okay. Well, what we're going to do, if you call it with strings, then it's going to essentially jump, at the guard, it's going to jump to the retranslation flow,
38:00
which is going to generate, it's going to do the same for strings, essentially. It's going to say, well, I'm going to generate code for strings. In the string case, you don't want to do the inline code, you want to call a function, because it's just much more complicated. Very interesting, so once you have ints and strings, you kind of generate code for strings and reuse it,
38:21
because you have the cached tracelets and their generated code, so you don't need to generate code twice for the same tracelet. Now, here's the problem. You see these guards here? Any combination of types is going to generate yet another tracelet,
38:40
so you have this essentially exponential thing. So, consider you have three types, and then you have any combination of the three, like two to the power of three. Actually, not two, but it's the number of PHP types to the power of how many arguments you have, so it's kind of getting really messy out there.
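A back-of-the-envelope sketch of why this explodes, and why the long tail is still cheap to leave to the interpreter. The counts below are made up purely for illustration:

```python
from collections import Counter

# With T runtime types and N typed inputs to a tracelet, there are
# T**N possible specializations:
def possible_specializations(num_types, num_inputs):
    return num_types ** num_inputs

print(possible_specializations(3, 2))    # prints 9
print(possible_specializations(12, 3))   # prints 1728

# But observed signatures at a call site are heavily skewed
# (made-up counts), so compiling only the head of the distribution
# covers almost everything:
observed = Counter({(int, int): 9000, (float, float): 700,
                    (str, str): 250, (bool, int): 30, (list, dict): 5})
total = sum(observed.values())
head = sum(n for _, n in observed.most_common(2))
print(round(head / total, 3))  # prints 0.971 -- interpret the rest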
39:04
And statistics: it's a long tail, but at the end of the tail, it's very thin, it's like a rat's tail, if you wish, right? Okay, so there's a lot of types, a combinatorial explosion and everything, but beyond six, twelve items, you're going to have only this many chains
39:22
that are left to the interpreter to take care of. So statistics again, the principle here being, we're going to address the most frequent cases with compiled code, and the long tail, the complicated stuff, that's combination of types that nobody heard of, we're going to let the interpreter take care of them.
39:42
It's not going to affect our speed because of the sheer statistics. Yes, okay, so if I understood it correctly,
40:08
the question was, do you profile what types are more frequent and how that evolves with time, because maybe during startup you have some combinations of types and then later on there's others, right? We kind of do that,
40:21
we have some warmup procedures and whatnot, so actually you're kind of addressing a very direct question. But in general, for let's say a sort of a stable JIT that's loaded, there's little variation over time
40:41
with the most frequent types. But it's a legitimate concern to have. And let me make one more point. This is going to do really bad in microbenchmarks. Microbenchmarks are like you load this much PHP code, you run it and you go away, right? And we're like lost in the...
41:00
We're not doing well at all in these very small benchmarks because if you just run some code once, the whole JIT thing is not even going to enter in action, it's just going to let the interpreter do the work. And the interpreter is a good interpreter, it's just not optimized to be the best interpreter around. So we're doing pretty bad in these benchmarks,
41:22
but on big benchmarks or whole sites, as I'm going to show, these are much more interesting. So the camera didn't work after all of that, did it? All right. So, as a good sort of extra point, consider methods and functions.
41:42
Well, again, you don't know what they return. So our early approach was, well, whenever you have a return value, it's considered like a jump. It's considered like you break the trace, let it start all over again in a way. Later on, so this is an example,
42:00
like you may have a class that gets a name, returns this name, and you have a constant string here. And what type do you get here at object get name? So what we ended up doing is called type profiling. There's some work that's fairly old by now,
42:20
it's well known and well used, value profiling, which is this particular function like square root or square or sine or whatnot. It's going to be called frequently with certain values, and I'm going to generate optimal code for those particular values. And then I'm going to sort of, for the others,
42:40
I'm going to use the generic approach. So this value profiling is well known and well understood. For us, it's type profiling, which is: depending on the type of the value, we're going to generate different code and such. Now the thing is, if you go with this approach of type profiling, it's going to take a long calibration, which is exactly the startup problem you mentioned,
43:01
like how long until we decide that, yeah, this function returns this value all the time or most of the time. So the next idea on the board, which was successful, was well, let's use profile names, which sort of uses the name of the function, and we map method name to return type into a table,
43:23
a hash table. And the system kind of learns things like: method named getName returns strings. Big surprise. It's just kind of a great heuristic. I think that Keith, who invented this, is a brilliant guy, and he had this stroke of genius, which was like, it's the name of the function, dummy,
43:41
which kind of tells you its type. So, well, and this works everywhere, like methods, free functions, whatnot, and it exploits the ways that humans naturally write code, which brings me back to the social coding aspect and all that. So consider that our code base, even though it has millions of call sites,
44:02
it only has like 13,000 or so actually unique symbol names. And the accuracy is like amazingly high. Let me give you some examples here, which are kind of interesting. Well, reset is going to return null. Get timer is going to return int.
44:21
Big surprise. Get a logged in user is going to be the user ID. Is fbEmployee is going to return bool. ArrayPool is going to return an array, and so on. So pretty nice. It actually kind of just works. It's hard to imagine that HTML to TXT is going to return a double, right?
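The name-based heuristic can be sketched as a counter table: record the return types observed per function name, and once one type dominates, predict it. This is just the idea from the talk, not HHVM's actual data structure, and `min_share` is an invented knob:

```python
# Name-based return-type profiling, sketched. This is the heuristic,
# not HHVM's implementation; min_share is an invented threshold.
from collections import Counter, defaultdict

observed = defaultdict(Counter)  # name -> Counter of return type names

def record(name, value):
    observed[name][type(value).__name__] += 1

def predicted_return_type(name, min_share=0.9):
    counts = observed[name]
    if not counts:
        return None                       # nothing learned yet
    ty, n = counts.most_common(1)[0]
    return ty if n / sum(counts.values()) >= min_share else None

for _ in range(99):
    record("getName", "alice")  # functions called getName return strings...
record("getName", None)         # ...almost all of the time

print(predicted_return_type("getName"))  # prints str
```

The payoff is the table's size: with millions of call sites but only thousands of distinct symbol names, the table stays tiny and warms up fast.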
44:43
So this is one of those things that just work. And now let's make it work fast. So let's kind of put the pedal to the metal, see what happens. Today's JIT, compared to Zend,
45:04
like it would do really, really well. And consider that Zend is a very mature technology, sort of mature interpreter, and people have invested a lot of work in it and such. And this stands as one of the proofs that actually even an okay JIT is going to do a lot better than a well-tuned interpreter.
45:23
So this would be a good takeaway for today's talk, that actually it depends on the technology more than the details on how you do it, sort of the big results. But again, if I were to show you the microbenchmarks,
45:44
Zend would do better, which is kind of interesting. But you don't care about microbenchmarks, you care about running the site. There's the inevitable kind of bragging here, so actually people start to discover that HVHV is in it really fast.
46:02
And here's an example of a success story. WordPress on HHVM runs faster than pretty much anything, including our own static compiler. So it delivers many more requests per second than Zend or other interpreters.
46:23
So these are Facebook technologies. And this has very direct money implications. Consider the Facebook CPU idle. So these are days, and this is how many percent idle
46:40
in the CPU on a typical server at Facebook. Actually, this is averaged. And the red line would be the static compiler and the blue line would be the JIT. What's interesting, I mean, they look almost the same. So first of all, let me ask you this. Why is it like that?
47:01
I'll give you a card. Diurnal cycle, yes. So day, night, whatever, right? Actually it used to be much more pronounced, but since we have so many international users, it kind of... I'll give you a card, right? In good shape. Okay, so that's a diurnal cycle.
47:20
Day to day people, you know. And what's one interesting thing about this graph? Red versus blue, although they look almost the same, there's a big difference between them. The blue is a bit below at the, yeah. Careful, cut. Okay.
47:42
So the blue is below. So below, like low CPU idle, it means, what does it mean? Utilization, exactly. So the utilization is higher. So now the nice thing is that, I think I reversed the thing. So the red is kind of the good thing,
48:02
which is kind of counterintuitive. So essentially with the JIT, you get to do the same work with more idle time. Sorry, I was wrong. So more idle time. So the red line is good, and the blue line is bad. And this difference, the key point here is that
48:25
you provision your servers for maximum load. You don't provision a server for minimum load. So this I don't care about. I don't care about this maximum idle, I don't care. I care about this because here's how I dimension my servers. Here's how I decide to buy servers and all that stuff and power sources.
48:41
So this is important. And the more I can move this up for the same functionality, the better off I am, because I don't need to buy that many servers and pay so much money for power. So the difference here, we're looking at a good, like, 6% here and here and here and elsewhere. We're looking at a good amount more idle time per machine,
49:00
which means the machines are gonna consume less power and the power costs less money. So this is very nice. And again, I mentioned this in my other talk today. Essentially every 1% we save is literally like a vast amount of money per year saved in power costs.
49:21
It's very powerful. So you have some of the best, most senior engineers who work on that. It's a huge deal. And I presume it's only gonna be a huge deal going forward into the future with computing technology at large in general. I think this is sort of a big deal.
49:40
Yes. So the question is, first of all, these guys perform at the same rate, approximately, yes.
50:00
So the difference in requests per second are minor. And the actual question was, well, so do you care about having as much idle time as possible? Kind of not, yes. And the answer is, whenever you have a major event at Facebook, like the Arrow Spring or the Boston Marathon disaster, a lot of good events that are very popular,
50:23
FIFA World Cup and whatnot, there's gonna be a high load. So you've got to provision for, like, here. You gotta be ready to handle this. You don't care about this. You care about this. So the more you get to kind of save on that bottom there, the better off you are, right?
50:40
And indeed, at this level, that means essentially we consume less power for the same work. All right. So the static compiler is not in production anymore. We actually took time, and I participated in that, I'm very happy. We took time to remove the old code, the static compiler to C++,
51:00
from our C++ implementation of the JIT. So we can't directly compare, but last time we looked, HHVM was about 20% better. And don't forget that the static compiler was already pretty darn fast compared to the state-of-the-art Zend and others.
51:21
And at launch last November, it was already 8%. And I participated in that. It was very fun. At Facebook, we're like, oh, lockdown. We gotta improve 10%. And every engineer on the team was kind of working on optimizations that would gain half a percent, one percent.
51:41
And we added those, and we had a graph. We had Jean-Claude Van Damme doing a dance at the end when we made it 10%. And it was kind of a nice curve, meaning that if we continue on that trend, we're gonna be faster than the CPU itself. That's a joke. All right.
52:02
So this is great, but it also means we eliminated a lot of cruft and we can't compare directly against HPHPc. All right, to conclude, the virtual machine, which is open source, by the way, can run your PHP. It's very fast, and it's good for production.
52:23
It's good for development, because the compilation time is just, as you'd expect, really fast. And our claim here is that type inference is the first order, the topmost issue for dynamic languages. If you have good type inference in your JIT,
52:44
then you're going to have a good JIT. If your type inference is kind of not there, you're gonna have a crappy JIT. And there's a lot of work that's left, and I'm going to sort of mention a few things that are on my mind.
53:02
So I have this big repository of little tracelets and the generated code. And one question is like, you mentioned this, garbage collection. What happens is that some function, some particular tracelet is very frequently used at some point in the lifetime of your machine,
53:20
but then it kind of falls out of favor. Nobody uses that crap anymore, right? So what you're gonna do, so you generate these tracelets, and at some point it kind of saturates. It's like, oh, we're kind of full with tracelets here. We have enough. But some of those are gonna be less used than others. So the question is, do we have a cache eviction policy there? You know, what should we do?
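One plausible answer, sketched here purely as an illustration since the talk leaves the actual policy open, is plain LRU over the translation cache:

```python
# LRU eviction over a tracelet translation cache, sketched with an
# OrderedDict. One plausible policy, not HHVM's actual one.
from collections import OrderedDict

class TraceletCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()       # key -> generated code

    def get(self, key):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)      # mark as recently used
        return self.cache[key]

    def put(self, key, code):
        self.cache[key] = code
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest tracelet

c = TraceletCache(capacity=2)
c.put(("f", (int, int)), "code-f")
c.put(("g", (str,)), "code-g")
c.get(("f", (int, int)))             # touch f, so g is now coldest
c.put(("h", (float,)), "code-h")     # evicts g
print(c.get(("g", (str,))))          # prints None
```

The subtlety in practice is that evicting generated machine code means patching or invalidating every chained jump into it, which is why this is harder than an ordinary data cache.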
53:41
And there's work on that. So we're kind of thinking of how to address that. Other topics that come to mind are things like extensions, like a lot of high-performance PHP code must rely on C++ extensions because it's difficult to write really tight loops in PHP.
54:03
And you know, one class, yes. So the question was, did you think of doing what TypeScript does, which is allow optional annotations
54:23
of types by programmers? I can't talk about that now. All right, other questions? Quickly, please, yes. Oh, by the way, that does deserve a card. Are you in town tonight? All right, here it is.
54:42
Yes, HHVM, yes.
55:02
So the question was, does your JIT assume this model where the script dies at the end of the request? Actually not. So the way things really happen, yeah, each request dies and everything, but actually the engine, the JIT engine, stays loaded
55:23
for many invocations, for many requests. You don't load the JIT for every request, right? So the whole isolation between requests is at PHP level, it's not at the JIT level. You see what I'm saying? So I have the JIT, it's sitting there, it's loaded,
55:41
and we're kind of loading it and uploading it and stuff, which is kind of an issue of its own, with very interesting ways like BitTorrent and that kind of stuff to distribute these things. But essentially, the JIT stays loaded in memory and collects the statistics over many of these requests, right? It's not just one request I'm looking at. Because one request could be like half a second, right?
56:01
How long does your average Facebook page take to load? It's like 200 milliseconds. So you can't really gather statistics from one thing. So essentially the JIT sits there, looks at requests, collects statistics, collects tracelets, does its work, right? If what you're looking at is optimizing
56:21
one short script that you run once, maybe JIT in general, not only this particular incarnation, it's not for you, right? So what you want is sort of either load a long-running program or loading many short-lived programs in this JIT. And by the way, that brings me to a different research topic that we're kind of working on,
56:43
which is the following. We have many millions of, not many millions, we have many servers, I don't know how many, so we have many servers at Facebook, data centers, and whatnot, there's plenty of computers. And each is collecting its own statistics. Why?
57:00
I mean, pretty much the workload on any two given servers is gonna be about the same. So why not sort of have a distributed caching and sharing mechanism for these compiled tracelets? And then you get to use much better statistics because you can collect over days instead of hours, and you get to share work.
57:21
So you can, I have the tracelet for my min here, so I'm gonna use the one on our server in Sweden or whatever, right? By the way, we just opened a data center in, what's the name? It's in Sweden, it's like by the Arctic Circle, where it's like the night is about to start,
57:40
like right now. So, great. Card, are you in town? Don't forget to pick it up, folks, okay? Congratulations, cheers. Don't drink too much, don't drink and drive? Yes, in the back.
58:11
Quercus. Is it for Java in particular?
58:34
I wouldn't know about this Quercus system in particular, but I have a colleague, Keith Adams,
58:41
who contributed a lot of these slides, and essentially he has another talk in which he kind of explains all the technologies they looked at when they started this, and kind of neither was exactly there for our kind of workload. So I can only presume that he looked at that and it didn't quite work well for our case, yes.
59:13
So do we have, wow, I have really good questions, so not a lot of you guys, but really, really good questions. So let me kind of restate the question in my words.
59:24
So do you have the same JIT instance for a lot of different requests as opposed to some sort of JIT specialization for I'm gonna serve only the homepage from this JIT? The answer is we use the same JIT for a variety of requests, but this whole notion of JIT specialization
59:42
is extremely interesting. We thought a bit about it, like how do you cluster requests? Like requests for, I don't know, images, I'm sure they have a very different workload than requests for like text and kind of stuff. So actually this, yeah, this is a.
01:00:00
super interesting topic, so maybe we can chat about it tonight, this is very interesting. I'm actually very pleasantly surprised to hear that. So this is great. All right, we have time for a few more questions, yes?
01:00:23
What if your code base uses a lot of objects? This is a real problem. Thanks for asking. So actually it's kind of funny, this is a, because somebody asked like what happens over time, like, you know, for a given team, and it's very interesting that the PHP code base at Facebook has with time become more sophisticated.
01:00:43
Like before I was like, yeah, yeah, just re-point, you know, let's kind of grow, and it was kind of all simple and kind of a simple feature site, but right now we have objects for everything, and actually at this point, I forgot the statistics, but actually a lot of our data like $A you receive is an object actually, right? And that was not the case like seven years earlier, which makes for a very interesting
01:01:05
analysis in evolution. So we do have specialization for objects. For example, methods, so what type do methods return, that's already working, right? But for objects in particular, there's some interesting
01:01:24
issues like, you know, you want to, you have an object and it has a schema, because it always has a get name, get ID, and I don't know, send request or whatever, right? Has like given method names. And we're looking at how to optimize particular calls for these kind of known methods.
01:01:43
So there's more work to be done, but we already kind of are prepared for that, so we're in good shape. Care for one of these? All right, awesome, yes. If you look at, hmm?
01:02:04
Maximum function, yeah, yes. Let me dial back to the function real quick.
01:02:21
Okay, so let's start from here. So the question was, you have the min-max function and you have three basic blocks, yes. Right, so okay, let's see the function itself.
01:02:40
Where is it, yeah, this is the whole function. You have three basic blocks, yes. And, ah, okay, okay, okay. So the question was, well, so you have one function, you decompose it to basic blocks,
01:03:01
and then where's the logic that kind of knows that the blocks are coming from the same function, right? Right, well, I didn't give a lot of detail about this, but the magic here happens in these connections here. Right, let me kind of bring the, okay.
01:03:21
So in the connections here, and we have some interesting rules, so a given block can have multiple entries, but only one exit and things like that. So essentially, although the blocks do not nominally belong to a given function, you know what the interleaving is because, but you could have the same,
01:03:41
you could have actually the same code in one tracelet and have it linked differently, so it can appear multiple times because of the links. So you know it's the same function because of the connections, and that's also a disadvantage because you get to have multiple instances of the same block, so there's sort of a bloating of blocks, but that's not really material, it's not a lot.
01:04:03
Okay, I think we only have time for, I've seen a person who came after the previous talk, so I think we're about done. So with this, I invite you all for talking with me after this. I'll be here for a couple more minutes, and I would like to thank you for coming here. Thanks a lot.