Postgres at Any Scale

Transcript
Hi everyone, welcome, and thanks for coming to my talk, Postgres at Any Scale. To start off, some contact information in case you have questions afterwards: I'm Will Leinweber, I work at Citus Data, and that's our website there. For those of you who don't know me, you may have installed my wonderful gem, bundle. All it does is require bundler. I made it a long time ago after getting frustrated: you run bundle install, bundle update, and so on, but to go install it you have to type gem install bundler. I got tired of making that mistake myself, so I made this gem, and all it is is a gemspec. It might have the highest number of downloads per line of code of any gem.

In this talk I'll go over three different stages of life of a Postgres database: starting out, when your data is small, and how to take good advantage of Postgres features at that small scale; then medium size, and how that changes the way operations work for a medium-sized Postgres database; and finally, when you get to very large scales, some strategies for handling a lot of data.
When we talk about large sizes, there are a couple of different things you can mean. The properties depend not only on your overall data size but also on the flux of the data: how much you're putting into the system and how much you're pulling out of it. A database that holds just a little bit of data but is constantly being written to and read from is going to behave differently than a database that has a lot of data but where you only ever read, say, the most recent slice and never touch the old rows. When you're designing the system, deciding what kinds of servers you need for your database and so on, it's important to look at both the overall size and the data-processing pattern.

So before you start, you want to plan for where your database is going to end up as the application grows. Of course that's easier said than done: oftentimes things start small and keep snowballing out of control while you're scrambling to put out fires, and it's easy to ignore the database part until it's a little too late. Some of the things in this talk will help you prepare for that: things you can put in place while you're still small, when you have the time to put some of these constraints in place.
Number one, the very first thing: please take backups. Every database talk is going to say "take backups," and yes, you should do this, and it's important not only to take the backups but also to restore them from time to time to make sure they're good. Anyone can do this when you're just starting and your data is small, because the feedback loop of knowing you have a good backup in some other place is a lot easier then: a total backup-and-restore cycle is so much quicker, and you don't have as much going on to distract you, like a system that's been running in production for a while. When you're small, the tool that comes with Postgres, pg_dump, is going to work very well. pg_dump takes a logical backup, which means that if you need to restore to a different system architecture or a different version of Postgres, that works just fine. We'll talk a little later about another tool that takes what's called a physical backup; for that one you need matching versions and so on. And, very importantly, test your backups. You've heard this before and you'll hear it again: it is so important. Please test them. You do not want to find out whether the backups work for the first time when things are on fire. I can't stress this enough.

The main thing, though, that I find interesting about Postgres at a small size is that you can embrace the constraints Postgres gives you. It has a huge number of constraints you can opt into. The way I like to think about it is that you get to enforce the assumptions you have about the data you're storing. You might assume that some integer column, say a user's age or a login count, will always be a positive number; Postgres gives you ways to enforce that. That way it's not just an assumption your code relies on: it's enforced at the database level. This is really the best bang for the buck you get out of Postgres. I don't think people see it as the strength of Postgres that it actually is. There is a ton you can do with Postgres, fancy common table expressions, exotic index types, trigram search and so on, but I think the constraint system is the unsung hero of Postgres, so let's really go into it. It took me a long time to come to the realization that it is as powerful and useful as it is. I'm sure a lot of people know that if you have a unique column, it's not enough to say validates uniqueness in Rails, because the data can change underneath you; you need a unique constraint inside the database. A lot of people know that one, but all these other constraints are much less well known. Despite Rails popularizing the idea of embracing constraints, that kind of stopped at the database layer, at least for the initial part of Rails history: the database was treated as replaceable, something you could swap for another. That's all fine, but I don't subscribe to it myself.
I think that if you can pick one database, really use its features, embrace its constraints, and keep that notion from the application down to the database, you can have your database do a lot for you. I'm happy that in the last couple of years ORMs like Active Record have started using more and more database-specific features instead of just the least common denominator across all the SQL databases.

Here's why I think this is so important. Let's say you write bugs at a constant rate, whether you're modifying the database or modifying the code. And I do mean you; I've never written a bug myself, my code is perfect. But I understand this happens. The thing is, your code changes so much more frequently than your database schema. Just as an example, one of the apps I work on is a little over a year old; it has 71 migrations, but over 1,200 releases, and each one of those releases is a chance for a bug to be introduced into production. With so few changes to the database relative to that, it's easier to get the database right and have it be the last guard against bad data. If something gets into the application that starts writing inconsistent data (maybe not invalid in the database's eyes, but inconsistent with the logic of your app), it can go on writing bad data for weeks before you notice, and then you have a horrible cleanup process, because you can't figure out what's true and what's not. I've seen that happen, I've worked with people who were working through it, and it really is a nightmare. So use the database as the last line of defense against bad data. Of course validate in the application too, but if your database can reject some of this bad data, you're in a much better place. And since the database is doing the rejecting, you'll know right away when you deploy something bad to production, because you'll start seeing errors in the application as the database rejects those writes.

The simplest constraint you can give the database is marking your columns NOT NULL, and despite how simple it is, I see it missed time and time again, because by default, when you create a table, you have to opt in to NOT NULL. I really wish the standard were the other way around, where all columns were NOT NULL unless you explicitly said you wanted to allow nulls, but that's not the world we live in. As simple as this is, I bet that right now, if you look at your application and say "of course all my users have an email address," and that column isn't a primary key and isn't NOT NULL, and you go look for nulls, there are going to be some weird rows where something went wrong and a null is sitting out there. Any time you think "this should always have a value," that's an opportunity to enforce it in the database.
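As a minimal sketch of that, with illustrative table and column names rather than anything from the talk:

```sql
-- Declare the assumption up front: a user row without an
-- email address is not a valid row.
CREATE TABLE users (
    id    bigserial PRIMARY KEY,
    email text NOT NULL
);

-- Retrofitting the same guarantee onto an existing column
-- (this fails if nulls already snuck in, which is the point):
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
```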
Moving up a level: if you say a user is always going to have a unique email address, go ahead and make that a unique index. You do pay a small performance penalty, since the only way Postgres can enforce a unique column is with an index, but at this stage of the game, when you're very small, it's better to take a performance hit you won't even notice and have the thing actually enforced as unique. Later on, at a different scale, if you can prove it's actually a bottleneck, you can drop it. But for now, go through all the columns in your tables that should be unique and make them unique. Another very interesting thing Postgres lets you do is uniqueness across two different columns. Say you have a reservation system and you never want one person to hold two reservations for the same room: you can make a unique index across both columns, so one user can have many reservations, and one room can have many reservations, but together each pair is unique. That's a very powerful thing to take advantage of.

A little bit different is thinking about data types as constraints. One of the great things about Postgres is its really wide range of data types: various integer types, text types, and also types like macaddr. I used the macaddr type for one project to make sure that only valid MAC addresses were being stored. As a small bonus it's stored as a handful of bytes rather than a string, but mostly it's just nice to know that if anything goes wrong elsewhere, nobody can shove some random string into the MAC address column: invalid values are prevented. Another thing you can do, and I've seen people start to get excited about this, is write your own data types; the extension system is actually pretty straightforward, so you can put your own semantic data types into your database, which is nice.
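A sketch of those three ideas, again with hypothetical names:

```sql
-- Enforce uniqueness where the application assumes it:
CREATE UNIQUE INDEX users_email_unique ON users (email);

-- Uniqueness across two columns: a user may have many
-- reservations and a room may appear many times, but the
-- same user can't book the same room twice.
CREATE TABLE reservations (
    user_id bigint NOT NULL,
    room_id bigint NOT NULL,
    UNIQUE (user_id, room_id)
);

-- Data types as constraints: macaddr rejects anything that
-- isn't a well-formed MAC address.
CREATE TABLE devices (
    id  bigserial PRIMARY KEY,
    mac macaddr NOT NULL
);
```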
JSONB is a newer data type, a couple of years old now, and it's very nice. If you're storing JSON, it guarantees up front that the document is well formed, even if you never use the special operators, and then there are a ton of special operators for getting data in and out of it; there are whole talks just about that. With this data type you can take a semi-structured approach, which I use in almost all of my tables: a couple of real columns for the things I query on, plus a jsonb column as my semi-structured grab bag, to get the advantages of both the relational model and a NoSQL-style semi-structured one. But I don't really want this to be a JSONB talk, and it would take a lot more time to show it all.

One of the really cool sets of data types that isn't super well known, especially among Postgres novices, is the range types. A range stores a beginning and an end together in one column, and you get some really nice things for free: it makes sure the beginning always comes before the end, and you can say whether the range includes or excludes each endpoint. You can use ranges with numbers; I've only ever used them with timestamps. This example is actually straight from our billing table: when you create a formation with us, we store the time you created it, and for the end we store infinity; when you deprovision, we mark that as the end of the period. With a little extra work we can make sure there are no overlaps in these periods. That's called an exclusion constraint, and with it we can say we never want to accidentally bill someone twice for the same formation. For example, if we change the price on something, we end the old period and start a new one, and it would be a real disaster if they overlapped at all. Of course we have checks in code that make sure that doesn't happen, but having it enforced in the database took so much weight off my mind, knowing the database will reject anything that would result in billing someone incorrectly. One little trick here: we're using UUIDs for our primary key columns, and unfortunately you can't use a uuid directly in an exclusion constraint, but we found a little hack: an internal Postgres function that takes a uuid and sends it out as a byte array, and byte arrays do work in exclusion constraints. It's unfortunate that this requires a trick, but it works.

Then there are enums. I'm not super sold on these myself, but I'm using them more and more now, for small, stable sets of keys that won't be changing much. For example, I use them for AWS regions: sure, they add more regions over time, but not that often, and the enum prevents little typos. One time I typed one of those region names by hand, got it wrong, and it was hard to track down. Having an enum prevents those tiny errors.
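Here's a sketch that pulls those pieces together. The schema is illustrative, not the actual billing table from the talk; uuid_send() is the internal uuid-to-bytes function the trick relies on, and the enum values are just a subset:

```sql
-- btree_gist lets one exclusion constraint mix equality (=)
-- with range overlap (&&).
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- An enum guards a small, rarely-changing set of keys against typos.
CREATE TYPE aws_region AS ENUM ('us-east-1', 'us-west-2', 'eu-west-1');

CREATE TABLE billing_periods (
    formation_id uuid       NOT NULL,
    region       aws_region NOT NULL,
    -- Start and end in one column; an open-ended period is
    -- tstzrange(now(), NULL).
    period       tstzrange  NOT NULL,
    properties   jsonb,     -- the semi-structured grab bag
    -- Reject any two rows for the same formation whose periods
    -- overlap, so nobody is billed twice. uuid can't appear in an
    -- exclusion constraint directly, hence the uuid_send() cast.
    EXCLUDE USING gist (uuid_send(formation_id) WITH =, period WITH &&)
);
```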
If all that's not enough (and it was a lot), there's one last thing, which is really awesome: check constraints. You write a function, usually in PL/pgSQL inside Postgres, though I've also done this with PLV8, which lets you run JavaScript inside Postgres, and use that for your check constraints. Every time you do a write, an insert or an update, the function runs against the row and says whether it's good or bad. This way you can do things like allowing only positive numbers, or, if for some reason you only want Fibonacci numbers, you can do a check like that. Really, any custom validation you want, you can do. This is probably past the point of diminishing returns; I don't think I'm using any function-backed check constraints right now, and it's nice to know they're there, but this may be taking it a little too far.
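A minimal sketch; the positive-number check is the common case, and is_even() is a hypothetical stand-in for whatever custom validation you'd actually write:

```sql
-- Plain expression checks cover most needs
-- (assuming a login_count column exists):
ALTER TABLE users
    ADD CONSTRAINT login_count_positive CHECK (login_count >= 0);

-- A function-backed check for anything fancier:
CREATE FUNCTION is_even(n integer) RETURNS boolean AS $$
BEGIN
    RETURN n % 2 = 0;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

ALTER TABLE widgets
    ADD CONSTRAINT quantity_even CHECK (is_even(quantity));
```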
So that's it for the small sizes. Thinking about medium sizes: in many ways this is when you're getting above 100 gigs, or doing a lot of writes. The interesting thing about this stage, which I realized while putting the talk together, is that most people don't spend a whole lot of time here: either an application stays small, or it shoots right through medium up to large scale. But there are still things to take a look at in your
database, and improvements to make, at this stage. Hopefully you're in a good place with the constraints and the intentional schema work from before; if not, this is a good time to go back and do some of that homework. Assuming you're in good shape with the structure of your database, the biggest difference here is that you start running out of RAM for the database to keep everything cached. You can use strategies like deleting old data, and that helps, but it tends to just postpone the problem. This is also the point where taking logical backups with pg_dump doesn't quite work anymore.

This tool here, WAL-E: my colleague wrote it, and we spent five years at Heroku running Postgres, where it backed every single database cluster, and they're using it today. It's all open source, and if you're running a database yourself, please, please, please use WAL-E. What it does is take the write-ahead log that Postgres generates and send it off the machine, to S3 or some other endpoint; there are other backends, Azure and GCS and so on, but the main one most people use is S3. What's great about this is that if the machine were to disappear completely off the internet, you can restore everything from those write-ahead log files. Back in 2011, right before one of the big Amazon outages, we had just put this in place, only a couple of weeks earlier, and everything came back. If it hadn't been there, I probably wouldn't have a job; I don't know how many of the databases we could have recovered. It would have been bad. So please, please, please use WAL-E. And again, it's important to test this. One thing we do is use the WAL-E infrastructure whenever we want to create a follower, so we exercise it all the time, and we know implicitly that the backups are good and valid.

This is also when you want to think about a hot standby: a replica running continuously, so if the main machine goes down, you fail over to it. At smaller sizes it's not as important, because you can restore quickly from any backup, but once you have a lot of data, just the time it takes to send the data over the network to a new server's storage gets longer and longer, so having a hot standby up is very good. One of the nice things Postgres has, again not super well known among developers, is that synchronous replication can be controlled on a transaction-by-transaction basis. You can say "this is an important transaction," say a new user signing up, "I want it synchronous," and it won't return until it has been written to the replica; for the rest of your transactions you can leave the asynchronous mode, where the commit returns right away and you trust the data will make it over eventually. Being able to mix those two, synchronous for the important transactions and asynchronous for the less important ones, is extremely powerful.
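A sketch of that per-transaction control, assuming a standby is already listed in synchronous_standby_names:

```sql
-- Session default: wait for the local flush only, not the standby.
SET synchronous_commit TO local;

BEGIN;
-- This transaction matters (a new signup), so wait until the
-- standby confirms before COMMIT returns.
SET LOCAL synchronous_commit TO on;
INSERT INTO users (email) VALUES ('new-user@example.com');
COMMIT;
```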
You also have to invest in monitoring and alerting. This is another thing everyone says you should do, and people tend to do it only when it's a little too late. That's a theme with a lot of these things: I can stand up here and tell you "you should do this, you should do this," but why is it that everyone says you should, and yet it turns out almost nobody actually does it? I was reminded of a talk I saw a YouTube video of, about flirting with disaster; the talk is called "Resilience in Complex Adaptive Systems," and you can go look it up. I'm going to paraphrase part of it, because it's maybe the best explanation of why people don't do what they ought to do.

The idea is that a system operates inside some boundaries. One is the economic boundary: go past it and the company fails and goes out of business, so there's a natural gradient pushing you away from it; you drift up to the line and get pushed back. Another is the workload boundary: how much work the people at the company can do. Push too far past it and they burn out and quit, so again there's a natural gradient away from it. And finally there's the performance boundary, the interesting one for our systems, and the problem is that you don't know where it actually is until you go past it. So we set up an error margin and say: we'll only run our systems at this capacity, this many requests per second, and no worse. But eventually you go over the margin, you freak out a little, you bring it back, and you keep drifting over and coming back. After that happens a few times you say: why are we freaking out so much? Nothing bad happened; that's how our systems operate; let's push the margin back. The problem is you still don't know where the actual boundary is, until eventually you cross it, you have a big outage, and it's a problem. I never got to use the fire GIF before: seven hours of downtime. Everyone sees the outage and freaks out: we never want that to happen again, I don't care how much money it costs or how much work it takes, never again, it was so embarrassing, and the margin gets pushed all the way back. But over time the memory fades, you start thinking "was it really that bad?", and you're right back to drifting, repeating this again and again. I think that really explains why there are always the things you know you should be doing and somehow don't. I remember when I watched it the first time; go watch it, it's maybe 20 or 25 minutes, and the talk does a much better job of explaining it than I just did, but I thought it was a really interesting way to think about systems.
Finally, no matter what strategies you put in place, if the data keeps growing, eventually you come to a point where it just can't fit on a single machine anymore. A very popular approach here is sharding: instead of having your application talk to one giant database, you make lots of small ones and have the application talk to all of them. People have been sharding for a long time, and there are a lot of homespun approaches; a lot of people do it in the application layer.
This is where Citus, the company I joined a year ago, is meant to help. Citus is an open-source Postgres extension: you install it on a couple of Postgres nodes and it transforms them into a distributed, sharded database. It's open source, and you're welcome to go check it out and run it yourself.
So, as soon as you outgrow single-node Postgres, Citus takes care of not only the sharding on writes, so an insert goes to the correct node, but also the reads: it distributes a query out across all the machines in the cluster. There's a somewhat complicated-looking view of it, but the approach we take is actually pretty simple and straightforward. We have this idea of workers and shards, and really all they are is Postgres servers and Postgres tables. You can go peek into all of these things, and underneath it's just regular, standard Postgres being stored and used.

For example, I have these two tables: one holds CloudWatch metrics and the other is a general events table, and both are sharded. From the application, each looks like one table; the application doesn't know anything funny is going on, it just connects with regular PostgreSQL and sees one table. But if you connect to any of the workers in the back, you can see what's really going on: there are a lot of extra tables there, and each one of those tables on that server is one shard; another server has another set of shards, and so on.
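A sketch of setting one of those up; the names are illustrative, and create_distributed_table() is the Citus call that hash-distributes a table across the workers:

```sql
CREATE EXTENSION citus;

CREATE TABLE events (
    device_id  bigint      NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb
);

-- Split events into shards spread across the worker nodes,
-- with device_id as the shard key.
SELECT create_distributed_table('events', 'device_id');
```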
When writes come in, say to one of these tables with a shard key, we run a hash function on the key so we know which shard it's going to, and we write it to that one shard. Just to show how this works: most of the Postgres distribution machinery lives in so-called catalog tables that are normally somewhat hidden from you, and we have a catalog table that records the start and end of the hash range for each shard that's been created. So we run the hash function, see which range the value falls into, and now we know which shard it goes to and where the write is routed.
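For example, Citus records each shard's slice of the hash space in its pg_dist_shard catalog table, so you can inspect the routing data yourself:

```sql
-- Each shard owns a range of 32-bit hash values.
SELECT shardid, shardminvalue, shardmaxvalue
FROM pg_dist_shard
WHERE logicalrelid = 'events'::regclass;
```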
When you think about how to architect an application to take advantage of distributed Postgres, there are really two ways to think about it. One is more for real-time SQL on event data, say tracking clicks or page views, or metrics from your systems. The other is more of a multi-tenant model, and I'll start with that one.

The multi-tenant model fits when you're building an application like Salesforce, where each of your customers has their own set of data and it doesn't mix from customer to customer; this is the model for a lot of SaaS companies. What's nice about it is that it's easy to migrate to and easy to maintain, because all of the Postgres features we talked about before keep working, the backups, the WAL-E continuous write-ahead-log protection, and scaling works both ways: you can make individual nodes bigger and you can add more nodes to the system. The way it works is that you denormalize a little bit and add the customer or tenant ID to each of your tables. Then when queries come in, as long as they say WHERE customer_id = 3, we know everything they touch lives on one particular node, so we can send the whole query there, and it behaves exactly like single-node Postgres at that point; it never has to be concerned with all the other customers' data to get the results back. This is much like what you'd build if you said "OK, I need to shard this myself" in your application code, except you're not stuck maintaining all that coding, and it's the model most analogous to how people usually think about sharding.
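A sketch of that denormalization, with illustrative names:

```sql
-- Every table carries the tenant ID, and it leads the primary key.
CREATE TABLE orders (
    customer_id bigint NOT NULL,
    order_id    bigint NOT NULL,
    total_cents bigint NOT NULL,
    PRIMARY KEY (customer_id, order_id)
);

SELECT create_distributed_table('orders', 'customer_id');

-- The filter on the distribution column lets the whole query be
-- routed to the one node that holds tenant 3's shard:
SELECT * FROM orders WHERE customer_id = 3;
```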
The other model is the more parallel one, and what's great here is that not only do inserts go to one place, but when queries come in they get distributed across all the nodes, all the machines in the cluster. What's nice about this is, one, that it is Postgres: when you reach the scale where you'd otherwise have to re-architect onto some other system, if you have Postgres DNA in your company, people who know Postgres, they don't have to go learn how to operate and maintain a completely new database. And because it is Postgres, whenever a new release comes out, the new features, JSONB and all the rest, come along for free, because Citus is just an extension on top of Postgres.

When a query comes in, like I said, we do a distributed phase where all the nodes get hit, and then the data comes back. Another nice thing: it's not very common for someone's queries to be CPU-bound; where our customers and most Postgres users sit, it's usually more memory-bound, but occasionally you do have a CPU-bound query. Postgres, of course, forks one backend per connection and is largely single-threaded; there have been advances in 9.6 and the next version, where sequential scans can run in parallel, but by and large you're going to be single-threaded. Because Citus runs the query on all the nodes in the cluster, you get to use all the CPUs at once, and in some cases that's a several-hundred-fold improvement, simply because every CPU is working at the same time.
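On the single-node side, the 9.6-era parallelism mentioned above looks like this (table name illustrative):

```sql
-- Allow up to four workers to cooperate on one query (9.6+).
SET max_parallel_workers_per_gather = 4;

-- Look for "Gather" and "Parallel Seq Scan" nodes in the plan.
EXPLAIN SELECT count(*) FROM events;
```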
One example of this is Citus MX, which lets you connect to any single node instead of just one coordinator node. Using the Yahoo Cloud Serving Benchmark, we got about half a million writes per second on a three-node cluster, and with bulk loading more like seven million, though that's kind of cheating, because bulk inserts aren't usually the shape of people's workloads. The way we worked around "connect to any node" is pretty neat: a DNS entry does round robin, so a connection goes to any one of the backing nodes, and they communicate with each other. And finally, we have a gem you can use with the multi-tenant model: instead of a plain belongs_to you declare the tenant, say the user, and then inside your code you wrap work in a block, and it makes sure Active Record scopes all your queries with the proper tenant ID. You can use this even before you start using Citus: put the tenant columns on your tables and the scoping in place, and when you need to scale out, everything is ready to go.
That's all I have; thanks very much. We have time for a couple of questions, and we also have a booth where I'm happy to talk more.

Q: What about using UUIDs instead of integers? A: Right, the question is about UUIDs as primary keys instead of integers. I like UUIDs a lot. Some of my more DBA-minded Postgres colleagues warn against them, because they take up more space and joins are a little slower, but I really like using UUIDs everywhere: for one, you're not exposing secrets about your system, like how many rows you have. It can become a performance bottleneck, and that's something to be aware of, but I tend to go with UUIDs first and only switch to integers if I can show it's really going to be a problem. And if you are using integer IDs, please use bigints. A bigint actually takes up the same space in the database, because rows are memory-aligned to 64 bits anyway; you'd only save space by packing two 32-bit types together, which you probably aren't doing. So please use bigints; otherwise a really nasty migration problem will surprise you once you blow past the 32-bit range.

Q: How easy is it to migrate to Citus? A: If you can take the downtime, a dump and restore can be pretty quick, because of the way dumps work: most of it is just tuples streaming in. I'd take a dump, see how long it takes, and check whether that's an acceptable downtime. There are some changes at the application layer, but because it's still Postgres, that part usually isn't bad; it's mostly the downtime of the migration people worry about. Some queries you might have to rewrite, but usually it's not so bad, especially with the multi-tenant model, which makes it a lot easier.

Q: At medium size, is there a performance win from sharding? A: There can be, especially if you have CPU-bound queries. The other thing is that it's always easier to start sharding before everything is on fire; once things start getting big, it's more of a scramble, and, again because of the flirting-with-disaster dynamic, it's hard to get people to do it before it's painful. I would project your data growth and work backwards to when you need to start.

Q: Is there tooling for testing backups and restores? A: I haven't found a good off-the-shelf tool; mostly it's writing a couple of scripts that run once a day, download the backup from wherever it lives, make sure it restores, and maybe send an email if it didn't. It's homespun stuff, but please check that your backups are good.

Thanks very much, everyone.
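To make the bigint advice from that first answer concrete, a minimal sketch; the alignment point is that the 4 bytes an int saves are usually lost to padding anyway:

```sql
-- Prefer this from day one...
CREATE TABLE accounts (
    id bigserial PRIMARY KEY
);

-- ...over a plain serial, which caps out around 2.1 billion rows
-- and forces a painful ALTER TABLE ... TYPE bigint rewrite later.
```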

Metadata

Formal Metadata

Title Postgres at Any Scale
Series title RailsConf 2017
Part 84
Number of parts 86
Author Leinweber, Will
License CC Attribution - Share Alike 3.0 Unported:
You may use, adapt, copy, distribute, and transmit the work or content, in unchanged or adapted form, for any legal and non-commercial purpose, as long as you credit the author/rights holder in the manner they specify and distribute the work or this content, including in adapted form, only under the terms of this license.
DOI 10.5446/31271
Publisher Confreaks, LLC
Release year 2017
Language English

Content Metadata

Subject area Computer Science
Abstract Postgres has powerful datatypes and a general emphasis on correctness which makes it a great choice for apps small and medium. Growing to very large sizes has been challenging—until now! First you'll learn how to take advantage of all PG has to offer for small scales, especially when it comes to keeping data consistent. Next you'll learn techniques for keeping apps running well at medium scales. And finally you'll learn how to take advantage of the open-source Citus extension that makes PG a distributed db, when your app joins the big leagues.
