Heroku: A Year in Review
Formal Metadata

Title: Heroku: A Year in Review
Part Number: 55
Number of Parts: 94
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/30666 (DOI)
Transcript: English (auto-generated)
00:00
So I'm here to talk about Heroku: 2015 year in review. Every year, for the

00:26
RailsConf talk, we basically just talk about new features and things that have happened since last year's RailsConf talk. I'm Terence Lee. I go by hone02 on Twitter. I work with Richard Schneeman, who gave a talk yesterday, Speed Science, something like that.

00:44
I actually got kicked out of that room because I was a fire hazard. But we work on the Ruby experience at Heroku, which means every time you deploy a Ruby app, that build is run by us. And so if there's actually a problem with that, then that is our fault and you should come and talk to us about it.
01:03
We're here today at the Heroku booth, so you should come and stop by. So today I'm going to talk about a few different things, as well as Will here on the right and Luigi as well. So first off, I'm just going to cover some general Heroku features and things that have come out since last year's conf.
01:25
We also have an amazing Postgres product and Will works on that team, so he's going to cover some new features that have happened in the Postgres land there. We're going to cover some stuff specific to Ruby, things we've worked on, some announcements and stuff like that that you should be aware of.
01:43
And then finally, Luigi's going to cover some of the work that Matz's team, which he's on, has done in the last year as well. So on to Heroku. When I'm talking about Heroku, I kind of just mean the general product, like the runtime and the build service and things like that.
02:04
So one of the really cool things that we've launched in the past year has been the Heroku Button. If you've seen these around in GitHub repos, in the README there's basically this purple deploy button, and when you click on it, you get redirected to a Heroku page to deploy your own copy of that application.
02:25
So in there you can type in a different name if you want; if you leave it blank, it will make up a random poetic Heroku name for you. And then inside of this thing, you can specify add-ons and other things to basically define what it takes to actually set up a template of this app.
02:46
So this is great for any demos that you have, like if you're preparing a presentation. I've used these before, last year when I was doing a bunch of WebSocket stuff at a conference. And so basically all of the demos that I had, people could just click the
03:03
deploy button and get their own version of the typical hello-world chat thing for WebSockets. So just very simply, to actually set this up, you just have a README. And then in the markdown, you just point to this image and link it to the Heroku deploy URL. And so when you click that button, it will know what repo you're coming from and actually set all that up.
03:26
And the actual magic behind the app setup is this app.json file that you put in your repo. Inside of that, it takes a certain set of keys, and you just specify the name, description.
03:41
And then some of the interesting stuff is the add-ons. So you can pick various Heroku add-ons, if you need that, to get it up and running. So maybe you're depending on Redis; in my WebSocket example, I'm doing sessions and pub/sub using Redis, so I specify a Redis add-on provider, in this case just the free tier.
04:02
So anyone deploying this thing will actually get a completely working app. They don't have to go through and heroku create and then git clone the app and then manually go through and add all the add-ons and stuff. In addition, you can also set up environment variables. So if you need specific environment variables to get that app up and running, you can specify all that stuff in here.
04:23
It's just a hash inside of the JSON, as well as any scripts and stuff that you need to run, so you can do some post-deploy steps and things like that. So that's the app.json. And once you include that, as well as the button in the README, your app is good to go for a Heroku Button.
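A minimal app.json along those lines might look like this; the add-on slug and plan, the env var, and the post-deploy script are placeholders for illustration rather than anything from the talk:

```json
{
  "name": "websocket-chat-demo",
  "description": "Hello-world chat app for the WebSocket demos",
  "addons": ["heroku-redis:hobby-dev"],
  "env": {
    "SECRET_TOKEN": {
      "description": "Session secret",
      "generator": "secret"
    }
  },
  "scripts": {
    "postdeploy": "bundle exec rake db:migrate"
  }
}
```

With this file committed, the README button image just links to the Heroku deploy endpoint for the repo.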
04:45
One of the other things we've worked on has been GitHub integration. I know this is something that a lot of people have been asking for for a while, and there are ways to achieve it using stuff like Codeship: if you use their CI service, you can have them deploy to Heroku, or Travis has something as well.
05:03
But directly through Heroku, if you go through the GitHub OAuth flow and connect it with Heroku, you can then, through the deploy tab, connect your apps to a GitHub repo. And so you can have it auto-deploy from a specific branch.
05:21
So say you're working on a feature for a client, and every time you commit, you want to deploy to a staging application to then show the client. Every time you push to GitHub, it will automatically do a webhook callback to actually do that deploy. You can also manually deploy specific things straight through the web interface and not have to do the git push.
05:44
So it's a great feature for doing stuff in staging. If you're doing things in production and you have stuff tied to CI, I would still probably recommend not doing it this way. I think this is great for handling various PRs and other features that you just want to easily get up and out there
06:03
and not have to worry about that stuff. So we recently launched this last week: Heroku Elements. A lot of you who have used Heroku before are familiar with our add-ons ecosystem, like the Postgres add-on for getting a Postgres database.
06:22
We have Fastly for some CDN stuff, New Relic, and various other services out there. And so in Elements, we now have add-ons as well as our buildpacks. So when you're deploying a Ruby app, you're using the Heroku Ruby buildpack.
06:42
But you can go and fork and make your own custom buildpack. And as part of this, you can look and see, without searching through GitHub, the various buildpacks that are available, and search through and use them. So maybe you want to add nginx, or you want to add something like PhantomJS to the application
07:01
because you're driving form filling or something in a worker process. And so now all those things are easily searchable in here. In addition, with the Heroku CLI, we have this plugin system, and as part of this you can search through and look for various plugins to extend and add to your Heroku command-line experience.
07:24
So all those things are wrapped up into one place that you can go and kind of piecemeal together to build your integrated app experience on Heroku. In addition to that, for those of you who are dealing with clients or apps in Europe, we announced PX dynos in Europe.
07:43
And PX dynos are the six-gigabyte instances that run by themselves, so they're not multi-tenant, so you don't have to deal with any noisy-neighbor problems, and you get much more consistent performance. So this is great for anyone dealing with things in Europe.
08:01
So if you are running a bunch of web servers and stuff that you want to scale out, it's great for being able to get that scale and have a big process to load balance across. And then we also announced the Cedar-14 stack, which is based off of the latest Ubuntu LTS release,
08:24
which is 14.04. And with this, it brings hopefully not a lot of changes that you're going to notice, but mostly updated libraries on the back end, a lot of them for security purposes.
08:41
And basically we've done a lot of hard work smoke testing things and upgrading our own apps internally before launching this, and we had a fairly decent-length beta period for testing all that stuff out. A lot of people are probably still on the Cedar stack, and with this announcement, we are basically going to be sunsetting Cedar at some point.
09:07
So on November 4th this year, Cedar will be retired, so you should look into migrating your Cedar apps to Cedar-14. And it's pretty simple: you can set the stack for your next push;
09:21
you just specify cedar-14, then you do a commit, and when you do the git push heroku master, it will build on the new Cedar-14 stack, and you'll be running on it after that push. If you do run into any issues, you can use the heroku rollback command, which is still available, to roll back your slug to the previous image.
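In commands, the migration steps described above look something like this (the app name is a placeholder, and your CLI version may spell some commands slightly differently):

```shell
# Point the app at the new stack for its next build.
heroku stack:set cedar-14 -a my-app

# An empty commit is enough to trigger a rebuild on the new stack.
git commit --allow-empty -m "Upgrade to cedar-14"
git push heroku master

# If something breaks, roll back to the previous slug.
heroku rollback
```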
09:43
So it's a pretty easy way to do the stack migration, and I recommend doing this on a staging version of your app before trying it in production. Because there are a lot of changes; it's been like four years, I guess, since the last Ubuntu release we'd been using, so libc and some other things have changed under the hood a little bit.
10:04
And if you go to the Cedar-14 blog post or search in Dev Center, we have some articles on gotchas and things you might want to be aware of. So the next thing, on security and things,
10:22
we now have two-factor auth, so you can set up your phone to basically be a second factor of authentication in addition to your password. So this is great for security. I know Slack recently had a security issue in the last few months where a bunch of people had to roll their passwords,
10:42
and now I have eight Slack accounts with a bunch of two-factor stuff that I have to log in with. But we at Heroku have supported this for the last year, so this is great, because if you're deploying your business, then you want it to be actually secure.
11:03
Recently, we shipped HTTP Git and switched all of our defaults over to it, so now you don't have to deal with SSH keys, which I've sometimes seen cause issues in support tickets for customers with multiple Heroku accounts who had to keep switching SSH keys in and out.
11:24
Now this all goes through SSL. And this is also great for Windows users, where SSH has been a huge pain. So, great work there. And so, DHH talked in his keynote about Action Cable,
11:43
with WebSockets being powered by Redis for some of the pub/sub stuff. And one of the great things was we announced WebSockets more recently, or I guess around the end of last year. This had been in labs for a while,
12:01
but WebSockets are now enabled by default whether you're using them or not; they're enabled on the router. So when Rails 5 launches and you want to use Action Cable, it will work out of the box, at least on the WebSockets end. And if you want to play around with Faye and other things, we have a chat example in a Dev Center article
12:21
for using Faye as a WebSocket driver and running that on Heroku. So it's a great time to check out WebSockets for Rails 5. And recently we relaunched the Dashboard. We've done a bunch of work there, and one of the most interesting things for me has been
12:43
all the metrics stuff that's on there. So you now get response time, throughput, and dyno load; there's a memory graph as well. I know this is something where we've gotten a lot of requests, just for introspection into your actual application. So if you're an active user, you've probably already seen a bunch of this,
13:03
but just go through the Dashboard and look at all the graphs that are out there. This is something that we're continually iterating on, so Dashboard will be landing a bunch of new features as time goes on. So I'm going to hand this off to Will to talk about Heroku Postgres.
13:36
Hi, I'm Will. I've been with the Heroku Postgres team for a long time.
13:41
We have a number of cool things that came out last year that I want to talk about, both in the Heroku Postgres product and also in Postgres itself. So the big thing is PostgreSQL 9.4, which was released earlier this year as a community project. We support it currently in a beta capacity,
14:01
so you just add a --version 9.4 flag when you provision it. But hopefully very soon, like this week or next week, it's going to become the default, so when you create a database you'll get 9.4. And the greatest thing about Postgres 9.4 is the JSONB support.
14:20
In the last two versions of Postgres, 9.2 and 9.3, you've been able to use a JSON column, but it didn't really do much except some syntax checking to make sure it's valid JSON. That was still super useful, because you could pair it with something like PLV8, which lets you run the V8 JavaScript engine inside Postgres, and you can do some crazy cool stuff with
14:41
parsing out documents, putting check constraints on them to make sure only valid documents that match your custom rules get in, and so on. But the really cool thing about JSONB is that it stores the representation under the hood in a binary format.
15:00
And so the Postgres developers were able to get some super impressive speed improvements out of this. And actually there are some really good benchmarks that PG Experts did showing that in several cases the insert and update speed is faster than other document databases
15:20
whose whole job is documents. And one of the great things is that Rails has support for it: if you want to use this in Rails, you can just say jsonb instead.
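In a Rails migration that might look like this; the table and column names are made up for illustration:

```ruby
# Hypothetical migration: on Postgres 9.4+, Rails maps :jsonb
# to a native JSONB column rather than plain json.
class AddPayloadToEvents < ActiveRecord::Migration
  def change
    add_column :events, :payload, :jsonb, null: false, default: {}
  end
end
```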
15:48
One of the really cool things that you can do with the JSONB support is Postgres's GIN index, which stands for Generalized Inverted Index. You can use those with several other data types,
16:00
but when you use it with the JSONB data type, you get an inverted index on every single document. So instead of some other document databases, where you have to say, oh, I want an index on this key or that key, you get it on everything. And so you can do really great searches using an index on anything in your documents. And what's really awesome is
16:20
the pattern that we use internally a lot: we'll have our regular rows and regular models and then (before, we used hstore; now we use JSONB) have a column in there, and that way, as you work out your application, you can add some extra data before maybe later promoting it to a proper column.
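In raw SQL, that kind of index and the lookup it accelerates look roughly like this (table and column names are illustrative):

```sql
-- A JSONB column alongside regular relational columns.
CREATE TABLE events (
  id      serial PRIMARY KEY,
  payload jsonb NOT NULL DEFAULT '{}'
);

-- One GIN index covers every key in every document.
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- The @> containment operator can use that index.
SELECT * FROM events WHERE payload @> '{"country": "jp"}';
```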
16:41
Okay, so I want to tell you that there's actually a cool story behind JSONB. Just like with Ruby, where we directly employ Ruby core members to work on Ruby, over the last several years we've sponsored the development of
17:01
actually quite a number of Postgres features, and one of them was the previous JSON work, and we wanted to help get JSONB in. There's a group of Postgres developers commonly referred to as the Russians. That's because they're Russian. And if you talk to the Postgres developers,
17:20
they'll know exactly which three people you're talking about. So they made hstore several years ago. hstore, for those of you who haven't used it before, is a key-value type in Postgres that is strings only, and it's flat. And so the Russians were like, oh, we're going to work on hstore 2, it's going to be great.
17:41
It's going to have booleans, it's going to have numbers, and you can nest it. And we said to them, just do JSON. And they're like, no, we'll do hstore 2 first. And we're like, come on. And so we sponsored a project to build JSONB on top of their infrastructure, but it wasn't looking like it was going to get in, and we did a bunch of negotiations
18:02
to get everyone on the same page, and it made it just under the wire before the end of the feature freeze, and it got in, and it's a really cool feature. And I encourage all of you to spin stuff up on Postgres 9.4 and play with it, because it makes building applications so much nicer to have
18:21
that sort of semi-structured data alongside a structured data store in the same place. So another thing here is Dataclips. We've had Dataclips on Heroku Postgres for a little while, but it recently got a very nice design refresh. For those of you who haven't used it before, it's a really powerful tool.
18:42
Most of the internal BI, business stuff that we do is powered by Dataclips itself. What it is, is you type in a query and you get to see the results. It's sort of like a secret gist about your data. You can share that around with
19:01
people in your company, and what's really great is that it's read-only access to your data. So if you have some business people in your company that you don't really want to give full read-write access to the data, this is a great way to let them run some queries and stuff. And what's really nice is it has a CSV
19:21
format, and you can take that and import it into Google Docs, and you can build spreadsheets and dashboards and stuff off of there. It's a super powerful feature that I really like a lot. Another recent change is we've made a lot of improvements to our PG Backups service, and to sort of distinguish the old system
19:41
from the new, the name changed by adding a colon: pg:backups. And so we had... not a lot of problems, but some people had issues with the old system, where a very big backup would fail uploading and such. This new one has been re-architected, and it's much more reliable.
20:01
So this, I think, rolled out pretty recently, and we're in the process of migrating people's old backups from the old system to the new one, so it should be a pretty smooth upgrade. And yeah, another big feature of the new pg:backups is that you can schedule when you want the
20:21
backup to run. Before, everyone didn't get the same time, because we would queue all the jobs at once and let them trickle out over the course of like nine to twelve hours. But now you can say, for example, if you're in a different part of the world and your off-peak hours are at a different time,
20:42
you can specify that up front, and it makes for a much nicer product. This one I think is really awesome, because I wrote it: pg:diagnose. In lots of support tickets, we end up looking at the same sorts of things when looking for problems, and what this does is
21:02
it's a command right in the CLI that generates a report that looks for a lot of common issues, and the backend for this is all open, so if you want to see exactly what it's doing, you can check it out on GitHub and review pg-diagnose. But let's take a look at what some of these things are. So, one big problem with Postgres
21:21
is if you have a query running for a long time: because of the MVCC architecture that Postgres has, a long-running query keeps its snapshot around for a long time, and that can start to cause problems. It depends on how fast other parts of your database are churning, but this example here, at 9 days, is way too long.
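Spotting these by hand is a standard pg_stat_activity query; a sketch (the five-minute threshold is arbitrary, and the exact query pg:diagnose runs may differ):

```sql
-- Find queries that have been running for a long time, longest first.
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```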
21:40
And so what this does is it checks for those long-running queries and gives you the process ID so you can go and deal with them. Another one here is hit rate. This will look through all of your tables and all the indexes and tell you what the cache hit rate is, and you really want your hit rate to be
22:02
in the 99-plus percent range, because anything lower than that... like if it's 98 percent, that means 2 percent of the reads have to go all the way to disk to get the answer back, rather than finding it in the Postgres cache or the operating system file cache, and that really, really slows things down. So if you're seeing low hit rates,
22:22
that's commonly caused either by a change in query patterns, or it could be a good sign that you need to move up to a larger plan with more RAM. The other thing that's nice in here is the indexes section, and this will actually run through a bunch of checks. It'll tell you if you have an index that is never used, and so
22:41
you know, you're spending a lot of time and a lot of energy maintaining that index on every insert, update, delete, when you never actually end up hitting it, and so that's a good candidate to get rid of. Another thing, it's not in this example here, but it'll show you indexes that get a large volume of writes but
23:01
are rarely used. These ones are a little more tricky; you need to use good judgment about whether it's okay to drop them, but those can also be a great candidate for trimming down your database and making it better. The next check here is bloat. I mentioned before Postgres is MVCC, so that means any time there's an update or delete, it doesn't actually
23:20
modify the old data; it just keeps track of the minimum transaction it was visible to and the maximum transaction it was visible to. And so when you actually delete data, it's not actually removed from disk right away, and there's another process that goes along in the background, called autovacuum, that actually goes in and deletes things
23:40
after the fact. But what can happen in certain pathological cases is that your table gets bloated, and your indexes become bloated with all the dead values, and that's something that's easy to spot once you know to look for it, but a lot of people aren't aware of it, so having it in the tool here is super helpful. Another thing is if you're getting close to your
24:00
connection count: one thing with Postgres is that every single backend takes about five to ten megabytes of RAM, and so you do want to keep your connection count down. This check looks at what plan you have, and it has a recommended connection count per plan, and it'll alert if you're getting high. Moving on down, idle in transaction:
24:21
similar to the long queries, if you have a transaction open for a long time, it's the same as executing a query for a long time; it has to keep all that data around, and this will tell you about idle transactions you should care about. Blocking queries: if you are doing something that creates a lock and other queries are waiting on it, it'll show up,
24:41
and then just the load on the system, which is pretty straightforward. But anyway, it's a great tool. I'm a little biased in saying that, but I think it really helps give you a quick diagnosis of what's going on in the database. One thing that's really awesome that came out in the last year is expensive queries, and this is over at postgres.heroku.com rather than
25:02
the main Heroku dashboard. And it actually uses an underlying feature called pg_stat_statements, which one of my colleagues made some huge improvements to and got committed into Postgres. That's Peter Geoghegan.
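The core idea behind that grouping can be sketched in a few lines of Ruby: strip the constants out of a query so that structurally identical queries collapse into one fingerprint. This is a toy illustration only; the real extension normalizes at the parsed-query level, not by regex over the SQL text.

```ruby
# Toy sketch of the query fingerprinting idea behind pg_stat_statements:
# replace literals with placeholders so identical query shapes group together.
def normalize(sql)
  sql
    .gsub(/'(?:[^']|'')*'/, "?")  # string literals first, so numbers inside them aren't touched
    .gsub(/\b\d+(\.\d+)?\b/, "?") # then bare numeric literals
end

puts normalize("SELECT * FROM users WHERE id = 42")
# => SELECT * FROM users WHERE id = ?
```

Two queries that differ only in their constants normalize to the same string, which is what lets the dashboard sum their execution times together.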
25:21
What this does is it looks at all your queries, takes out the constants and replaces them with question marks, and then groups them together, and you can see your average execution time and total execution time, like how much time you've been spending, and it gives you a graph over time. This is really helpful to look at from time to time to see where the hotspots in the database are. It's a good way to
25:40
find if you need an index, or more indexes. I think this is towards the end. This is pretty cool, but you're probably not going to use this every day: fork --fast. For a long time now, Heroku
26:00
Postgres has supported fork and follow. A follow is like a read replica that stays up to date forever. A fork is a second instance of the database that uses the same underlying technology but then stops progressing. Forks are really helpful if you want to test out a migration. The way that Postgres replication works is that there's a base backup, and then
26:21
individual write-ahead log (WAL) segments get uploaded. When you do a normal fork, it downloads the most recent base backup and keeps replaying the WAL files until it gets to the current time. But fork --fast will just stop after the base backup. Why would you want this? One reason is that it will typically be
26:42
a little bit faster, because it doesn't have to do that extra work. And if all you are doing is testing a migration of sorts, or if you want to take a backup off of something that's not your primary system, the exact time doesn't really matter. This can save you a good amount of time in some cases. Thank you very much.
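Provisioning a fork or follow is an add-on creation with an extra flag; a sketch, with the plan, app name, and config var as placeholders (check `heroku pg:info` for your own values, and note the CLI has spelled `addons:create` as `addons:add` in older versions):

```shell
# A fast fork: restore the latest base backup and stop, skipping WAL replay.
heroku addons:create heroku-postgresql:standard-0 \
  --fork HEROKU_POSTGRESQL_CHARCOAL_URL --fast -a my-app

# A follow (read replica that keeps replaying WAL) has the same shape.
heroku addons:create heroku-postgresql:standard-0 \
  --follow HEROKU_POSTGRESQL_CHARCOAL_URL -a my-app
```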
27:39
So that's when that support came to an end.
27:40
We extended it for 8 months, I believe. So if you're running on these versions of Ruby, you should move off of them. If you did not know, earlier this year, Ruby 1.9.3 support also ended, which means it's not getting any bug fix or security maintenance releases at all.
28:02
So if you're on this, you're kind of on your own for backporting and doing your own security fixes. I remember I was talking to someone who had a client on Ruby 1.9.3, and he was asking about how to backport the most recent security fix that came out, and you definitely don't want
28:21
to be in this boat. Especially if you're on Heroku, we do all this work for you. As an example, on Christmas, the day that Ruby 2.2.0 came out, we had that Ruby up and running within a few hours of the release, so this is something that we take a lot of pride in.
28:41
We basically try our best to ship the same day, and I think we've only missed one MRI release since we've had multi-Ruby support for MRI. To give you a sense of how many Rubys we've built in the last year, since the last RailsConf, I personally built 55 Rubys
29:01
across both of our stacks, including MRI and JRuby. So this is work that you don't have to do or deal with at all; you can just get up and running on release day. You should definitely take advantage of this and not try to maintain your own Ruby at all.
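Picking which of those prebuilt Rubys you get is just a Gemfile declaration (the version numbers here are only examples):

```ruby
# Gemfile
source "https://rubygems.org"

# Heroku reads this line and installs the matching prebuilt MRI.
ruby "2.2.2"

# For JRuby, the declaration takes an engine, e.g.:
#   ruby "2.2.2", engine: "jruby", engine_version: "9.0.0.0"

gem "rails", "~> 4.2"
```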
29:21
Probably the biggest change that affects customers is that we're now recommending Puma. This has been in the works for a while. We used to recommend Unicorn, and if you're on Unicorn it's not super terrible, but one of the nice things about Puma, if you don't know, is that it's a threaded web server. If you have a
29:42
thread-safe application, you can now use multiple threads, but it also has a master-worker model as well, so if you have a non-thread-safe app you can, like in Unicorn, just have a single-threaded worker but run multiple worker processes for your app. And the other really nice benefit is that Unicorn
30:02
is generally built to sit behind Nginx or Apache, and so it does not have any logic for mitigating slow connections, but Puma does. So we definitely recommend migrating to Puma, and our docs are now centered around Puma.
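A minimal Puma config for that setup, using the common WEB_CONCURRENCY and RAILS_MAX_THREADS environment-variable convention (the counts here are illustrative, not an official recommendation), might look like:

```ruby
# config/puma.rb
# Worker processes give isolation for non-thread-safe code;
# threads give in-process concurrency for thread-safe apps.
workers Integer(ENV["WEB_CONCURRENCY"] || 2)

threads_count = Integer(ENV["RAILS_MAX_THREADS"] || 5)
threads threads_count, threads_count

# Load the app before forking workers to share memory via copy-on-write.
preload_app!

port        ENV["PORT"] || 3000
environment ENV["RACK_ENV"] || "development"
```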
30:22
It's something you should definitely look into. In addition, I don't know how many people know this, but we also have JRuby support. We recently hired Joe Kuttner to work on the Java stuff, and if you don't know him, he's a
30:42
pretty active contributor to JRuby itself, and he also maintains Warbler in the JRuby ecosystem, so we have great JRuby support and he works full time on it. We have JDK 8 support now, and when JRuby 9000 came
31:01
out, we had support for it on the platform by the next day, so you can play around with that. We also have a really good relationship with Charlie and Tom, so as part of a release they notify us that it's coming and we test it, and I have filed a handful of bug reports about the build system being broken, because
31:21
I guess I'm one of the few people that actually tests it. So, on to Ruby core updates from
31:58
Koichi. Our team's goals include, for example, reducing
32:01
bugs (no bugs is nice in software), and also high speed and high performance, and low energy and low resource consumption, so low memory usage or something like that. Our team has three members. Matz, as many of you know,
32:21
is the creator of Ruby, and he decides everything. And also Nobu: he is a big patch monster, because he fixes many bugs, and he also introduces many bugs, and then fixes those bugs too.
32:42
These are the commit counts for CRuby, and you can see most of the patches are written by Nobu, so it means your Ruby is built on Nobu. And then there's me,
33:00
the designer of YARV, the Ruby virtual machine, which was introduced back in Ruby 1.9. We have also introduced generational and incremental garbage collection. We
33:20
released Ruby 2.2 last year, and it has many improvements. I want to focus on one improvement: keyword arguments. Keyword parameters have been very slow since
33:40
Ruby 2.1, but Ruby 2.2 improves the performance of keyword parameters, so please enjoy using Ruby 2.2. We are planning to release Ruby 2.3 at the end of this year, so any suggestions and ideas are welcome.
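As a quick illustration (this snippet is mine, not from the talk), keyword arguments let you name parameters at the call site, and Ruby 2.2 makes calls like these much cheaper than they were in 2.1:

```ruby
# One required keyword (host:) plus keywords with defaults.
def connect(host:, port: 5432, ssl: true)
  "#{ssl ? 'https' : 'http'}://#{host}:#{port}"
end

puts connect(host: "example.com")                        # https://example.com:5432
puts connect(host: "example.com", port: 80, ssl: false)  # http://example.com:80
```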
34:02
Please catch me later and we can discuss. Thank you. So, in the exhibition hall,
34:22
after this talk there's one more talk in this room, but after that it'll be happy hour. A bunch of folks from the open source community, from the Rails team, Sam Phippen from RSpec, and Sam Saffron, who's done a bunch of performance work
34:40
on profiling and other tools and works on the Discourse team, will be at the Ruby booth doing basically an ask-us-anything. Bring us your problems, or if you just have general questions or other things, we'll be there if you want to talk about Ruby 2.3 or if you want to complain about concurrency or something.
35:03
We'll be there until the lightning talks start. And then tomorrow, in the afternoon break, Richard Schneeman, one of our co-workers who wrote the Heroku: Up and Running book, will be doing a book signing.
35:21
I think he only brought a limited number of books, so if you want a book, show up early, I guess. And that's about all I have. Ruby is still really important to us, if you can't tell; we're still heavily invested in it, both with Matz's team and our Ruby and Rails friends, and then
35:42
obviously all of our Postgres work as well. I think Postgres has become, at this point, the de facto database for a lot of Rails applications. Thank you, come visit us at our booth, and I'm looking forward to talking to all of you.