PostgreSQL Backup Strategies

Speech Transcript
Hello — can you still hear me in the back? I'll speak up a little; I'm sure it'll be fine. Welcome back from lunch, everyone. Should I count how many people fall asleep after lunch, or do you want me to point you out if you do? OK, people don't want me to point them out, so I will point you out. So, this is the talk I get — hopefully you can get something out of it. Again, welcome back from lunch. My name is Magnus and I work for a company called Redpill Linpro, out of Stockholm in Sweden — normally, or in theory, every now and then, since I'm a consultant. We work with both PostgreSQL and other products, so I travel to a lot of different places for my work, but PostgreSQL is my base. I'm not going to talk about internals, though; I'm going to focus on actually using PostgreSQL rather than building it. So if you're here to learn how PostgreSQL works internally, sorry, you're in the wrong place. This talk is about backup strategies — because we all need backups. We may think we don't need backups, but we do. A typical thing when I talk to customers, unfortunately in far too many cases, is that I ask about their backup policies and all of that, and the answer is: "we don't really need backups, we have replication."
People think replication is a backup. Replication is great, but it is not a replacement for backups — no matter how many nodes you replicate to, and if you're using the cloud, that probably makes it worse. You come across quite a lot of places where people literally say "we have five replicas in different countries, so we don't need backups." Then someone runs this SQL statement called DROP TABLE, and that replicates fast — it replicates much faster than any other data. Measure your replication delay: that's how quickly the damage spreads, and it tends to go really fast. So replication does not replace backups. Then people say:
"Well, we have clustering, right — failover, high-availability clusters using these hundred-thousand-dollar products." It's the same thing. These are designed to protect you against hardware failures; they are not designed to protect you against application bugs or PostgreSQL bugs — yes, we do have some, hopefully fewer than many other products, and they're rare, but we do have them, and there's no way these high-availability products will help you there. Another classic one is: "I bought a SAN; it comes with a 100 percent uptime guarantee, it even has built-in snapshots that run inside the same SAN." A 100 percent uptime guarantee — that's awesome, isn't it? Can that possibly work? Does the concept of a 100 percent guarantee even exist? No — it literally does not exist. There's a news story from a bit over a year ago, back home in Sweden, where one of the largest outsourcing providers — general outsourcing, running traditional machines in virtualized environments — had one of these beautiful SANs with a 100 percent uptime guarantee, multiple locations, everything. Guess what: the corruption replicated, and they lost four and a half thousand virtual machines — production systems for some of the most high-profile government organizations in Sweden — and, sadly enough, they're still relying on that SAN even though it didn't work. So you need backups. That's the first point: there is no real substitute for proper backups. We combine them with other things — we don't use just backups — but you do need the backups.
There are a lot of things people do when they start working on backups. The first thing is to have a plan for the backups, and this is something most organizations actually end up having: what am I going to back up, how often am I going to take it, how long do I keep the backups around. Some people keep them forever — that eventually becomes expensive, but it's doable. You also plan for the performance impact, because backups are not free: they will cost you performance while you're taking them. That's the backup planning we all need to do. What is surprisingly common is that people don't have a restore plan.
Backups are useless if you don't restore them, and backup plans are mostly useless if you haven't actually planned for what happens when you restore. It usually doesn't matter if it takes a full day to take the backup, as long as it doesn't prevent production from working. But if it takes a day to restore, that is not going to be popular — depending, of course, on exactly what the system is. The thing about restoring is this: we all talk about hot backups, right? For 15, 20 years the database world has been saying we can take the backup without affecting production. We're never going to have hot restore, because when we start the restore process the system is already down, so we need to come back up as quickly as possible. That is a much, much more important consideration, and something we need to include in our planning, which means we might need to combine multiple different solutions for solving different problems. Replication is excellent for restore speed — you just move your virtual IP over to the other machine — but unfortunately it doesn't really protect you. Printing your full database on pieces of paper and typing it back in is, yes, kind of a backup, but it takes forever to restore, it's ridiculously expensive even if you find cheap labor to do the typing, and it's quite error-prone on top of that. So let's take a bit of an overview of the options we have in PostgreSQL — we actually have a lot of options, and I'm not going to talk about all of them.
If we talk about the core options, we can split them into two groups in PostgreSQL. First, we can work with logical backups: backups at the SQL level of the database. This is what we typically refer to as dumps — the pg_dump tool in PostgreSQL and a couple of products around it. It's also what you'll typically find in most other enterprise databases, and it was the only thing PostgreSQL had prior to version 8.0. The other option is physical backups, where we take the backup at some sort of block or file-system level, beneath the level of SQL. We can break this one up into multiple variants — plain file-system backups, snapshot-based backups, tool-based backups — but they all work the same way and belong to the same group. So: two main groups, logical and physical backups.
Logical backups will be very familiar if you come from a non-PostgreSQL database. If you come from the open-source world outside PostgreSQL, you'll know logical backups, because that's basically the only thing those databases do. What you get is a big, long SQL script that has all your schema — CREATE TABLE, CREATE INDEX, CREATE FUNCTION, and the whole multitude of other objects you can create in a database — and it loads your data with a normal COPY command, the same as if you were loading it from a CSV file. There is no shortcut anywhere: logical backup tools are regular PostgreSQL clients, so you could write a Perl or Python script that does the same thing. But we've built one for you that doesn't forget about some of the objects, so if you're taking backups, we do suggest you use the built-in tools and don't build your own — though nothing prevents you from doing that if you want to. It gives you great flexibility because it works at the SQL level: you can do filtering, you have a lot of options. But it's not necessarily the greatest one for performance. I'm sure many of you have tried it — who has tried to pg_dump a database bigger than a hundred gigabytes? A couple of people bigger than a terabyte? I'm still sad for those of you who had to pg_dump something bigger than a terabyte. A hundred gigabytes is fine; a terabyte is when it starts to get painful; tens of terabytes is just not going to work. But it gets you pretty far, and it is very simple to use.
pg_dump is the main tool we use to do these dumps, and pg_dump dumps one database. That is important to remember: if you have multiple databases on your system, you need to run it multiple times. It is sad when you run into people who forgot this, backed up one database, and had put the production data in another one — yes, I've run into this. It uses a regular PostgreSQL connection. When pg_dump starts, it opens a transaction, and using the MVCC functionality in PostgreSQL that gives you a guaranteed consistent snapshot across your whole backup while it's running. You can do almost anything else in the database in the meantime — some locks are taken, so you can't, for example, drop a table while it's being backed up, which is probably a good thing, but you can do all your inserts, updates and deletes, and you can create new tables while the backup is running; they just won't be in the backup. What you get is a backup that is consistent as of when pg_dump started. So if it takes you 10 hours to run the backup, it will be 10 hours old when it finishes — but it will be consistent. Likewise, if it takes a week to run pg_dump — and that is one of the depressing things that happens when you get into these multi-terabyte databases — that's painful, but it will be consistent as of a week ago. pg_dump is single-threaded — single-threaded for now, because hopefully none of you have deployed 9.3 beta 1 in production, but with 9.3 you get parallel dumps that can scale out across multiple CPUs. Other than that it is single-threaded, meaning it uses a single connection to the database: it will use one core of your 64-core database server to do the backup, and the other cores will just sit there. Usually that's a good thing. On a 64-core machine, sure, I'd probably use parallel pg_dump — maybe 2 or 3 processes. But the point is, say you have a 16-core machine and you run pg_dump across all 16 cores: who's going to run your application? pg_dump in full parallel mode will kill you. If you're running backups you don't want that, and again, it doesn't matter how long the backup takes. If you're doing, say, an upgrade using dump and restore, you just want it to run as fast as possible — that's what the parallel dump is for. For pure backups it's usually not a problem that it's single-threaded, because it's usually not a problem that it takes time.
You can dump in a number of different formats with pg_dump, selected with dash capital F, which controls the format. Your backups should always, always, always be in custom format, which is -Fc; there's no reason ever to use anything else for your backups. The default is not that — the default is plain, which just generates a text file that is one big SQL script. Again, take that up to multiple hundreds of gigabytes and try to do anything interesting with it: we can get into editor wars about which editor is best, but none of them will work well with a hundred-gigabyte text file; they just won't. Custom format gives you the ability to do a lot of things — it lets you index and inspect your backups while they're sitting on your backup server, and that's a good thing. So always use custom format. With custom format you also get compression by default — standard zlib compression; I think the default level is 5 or so. Database dumps tend to compress fairly well given the output format: even if you're storing pre-compressed binary data in your database, when COPY writes it out it gets escaped into hex, which we can then recompress — maybe not the most efficient thing, but there's still a gain to be had from the compression. pg_dump also supports dumping separate objects — just this table, just a schema, just these five tables. You should never use those options when you're doing backups. It's a good tool and we use it for many other things, but when you're doing backups you need to back up the whole database; if you don't, you're not guaranteed it will restore. Say I have 10 tables in the database and I run one pg_dump for each of those 10 tables manually: each gets a separate transactional context, each will be individually consistent with its own table, but if there's a foreign key between tables and someone modified your data between the two different pg_dump runs, suddenly your backup doesn't restore. So when you're running a backup, always run pg_dump for the full database. We can then filter at restore time and say I only want to restore this one table — but when we put them in the backup, everything should be in there.
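As a concrete illustration (the file path and database name are made up for the example), a custom-format dump and a later inspection of its table of contents might look something like this:

    # custom format (-Fc) with an explicit compression level, written to a file
    pg_dump -Fc -Z 5 -f /backup/mydb.dump mydb
    # custom format lets you list the dump's table of contents without restoring it
    pg_restore -l /backup/mydb.dump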
Now, as I mentioned before, pg_dump uses regular COPY commands and standard SQL. COPY in PostgreSQL is not that fast — well, it's actually pretty fast, much faster than anything else we have, but it's not that fast. It uses a single backend, so a single TCP or Unix-domain connection, a single process, a single processor core generating I/O. PostgreSQL is smart enough that all these COPYs will not blow away your PostgreSQL cache. Obviously pg_dump has to read all your data in order to back it up — there's no way not to read it from disk at one point or another — but PostgreSQL has a feature that automatically detects when a session is running large sequential scans, reading more data than will fit in the cache anyway. It then confines that process — in this case the backend doing pg_dump's COPY — to a small part of the shared buffer cache, so it basically has its own little ring of buffers and doesn't affect the rest of the system. There is still the risk that it can ruin your file-system cache; that depends a lot on how your file-system cache works. Usually we end up reading every block just once, so a good file-system cache won't be completely ruined, because they have similar techniques to detect this type of behavior, but there will be an impact on your database caching when you run pg_dump — unless your whole database fits in the cache already, in which case we don't have to throw anything out. So there is a potential impact. And of course, not only do we read the data, we have to write the dump file somewhere. This is something I see quite a lot of people just not think about: they run pg_dump on the database server and write the dump file to a disk on the database server, because that's the disk that's attached to it. Now not only did we just add I/O to the same disks our database is on, we also turned what were nice sequential scans into: read over here, write over there, go back, read over here, write over there. You're turning your I/O profile into something really bad. You can instead run the dump on the local machine and store it to a different disk: now you're no longer generating that random I/O pattern and no longer fighting for the same I/O bandwidth. It's important to consider: if you're writing to the same disk, writing the dump file is probably going to cost you more in performance than reading the database — yes, it happens in a separate process, but it's all on the same hardware; there is an impact there that we do need to think about.
As I mentioned, pg_dump does compression by default in custom format, and this is something we can use in a somewhat sneaky way. The compression happens in pg_dump itself. If you run pg_dump on the database server, compression happens on the database server; if you run the dump somewhere else on the network, it connects to the database server and compression happens at that other end — and the data transferred between PostgreSQL and pg_dump is uncompressed. That may or may not be what you intend. Sometimes network bandwidth is relatively cheap, you may have a dedicated network for your backups, and this may not be a problem at all, but it's worth considering. The other use for this compression is throttling. Typically — and it tends to be the same for pretty much all the hardware we see today — somewhere between compression level 3 and 5 is where you get the maximum throughput on a single CPU. What that means is: if you run pg_dump on the same machine, you end up with pg_dump using 100 percent of one CPU core and the backend it's connected to using 100 percent of a different core, and your backup goes as fast as it possibly can. But is that what you need, is that what you want? If you decrease the compression, your dump becomes I/O-bound, because you had spare CPU to compress more and you just decided not to use it, so you create more I/O and your speed is limited by I/O. If you increase the compression, things will be slower — the backup takes longer — but it generates less I/O, and in particular less I/O per second. It doesn't actually matter if your backup takes an hour or an hour and a half to run, as long as it doesn't impact the application. I've quite often found that if you increase the compression, the backup runs slower but the impact on the system decreases significantly, because pretty much all of our database servers have spare CPU capacity: with high compression we pay with CPU capacity, with low compression we pay with I/O capacity, and we typically have CPU to spare and not I/O. So if you're in a situation where the backup is impacting the system too much, try actually increasing the compression — it sounds counterintuitive, but it quite often helps. Now, there's another thing you can quite often do. We said compression happens in pg_dump — what if I want to run pg_dump somewhere else? You can dump over NFS — NFS, right — but what you really want is to run pg_dump somewhere else and write the dump file there, except then the compression happens in the wrong place. How do we solve this? SSH.
The typical thing is: you SSH into the database server, run pg_dump there with maximum compression and custom format, and just pipe it back out through the SSH tunnel. This works really well. Does anyone spot the error in this command? Not an actual error that makes it not work — this works — but there is a very stupid mistake in it.
SSH does its own compression, and compression is slow; compressing already-compressed data is even slower, and it makes the result bigger. If you're using this, you probably want to pass Compression=no to SSH. Whether SSH compresses by default can vary with the platform — it might be off by default, in which case this isn't necessary — but it never hurts to explicitly turn it off. You might also say: well, it also encrypts everything. First of all, maybe that's a good thing — encrypting your sensitive data while it's being backed up across the network might be a good idea. But even so, encryption is actually much, much cheaper than compression the way CPUs work today: compression is the expensive part, encryption is cheap. You can of course also tell SSH to use what's called the null cipher, which means it doesn't encrypt — I seldom get around to actually doing that. Note the dash capital Z 9: it means we're going to lock up one CPU core on the database server at 100 percent, while the client here is only processing a simple redirect — but we're sending the data off across the network immediately, which also has the advantage that we're not impacting the I/O on the server by writing the dump there. Now, when we're doing restores from pg_dump: if you're using the plain format, it all just goes through psql or whatever. Have you ever tried opening a 200-gigabyte SQL file in an editor? It's fun — it uses a lot more than 200 gigabytes of RAM, it's not very efficient about it. So don't do that.
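A minimal sketch of the idea, with a hypothetical host name and paths: run the dump and the compression on the database server, disable SSH's own compression, and stream the result straight to the backup host.

    # -Z 9: compress hard on the server; Compression=no: don't recompress in SSH
    ssh -o Compression=no dbserver "pg_dump -Fc -Z 9 mydb" > /backup/mydb.dump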
pg_restore is what we use to restore these dumps. What it does is take the custom format and basically transparently convert it back into that 200-gigabyte SQL script — except it never materializes the whole thing. It uses a regular libpq connection — a standard PostgreSQL connection — runs standard PostgreSQL commands, and just restores the data. It supports both a full database restore, which is what we do when everything is broken and we need to restore from backup, and partial database restores: we can restore a single table, we can restore multiple tables. The guarantee is that if you're doing a full database restore, the data will restore, because we had that consistent snapshot when we took the backup — as long as it's all coming from the same dump. You have the same basic problem as before if you restore a single table: if that table has foreign keys pointing somewhere else, it may not restore, because primary and foreign keys are still enforced. But if we have an independent group of tables or something like that, we can restore those as a single unit, and it's pretty easy to use.
What pg_restore does is basically straightforward: it creates the tables, then COPYs in all your data, then creates the indexes, and then adds the constraints. And if you think about it, basically every single one of these operations is one we normally try to avoid because it's slow — and here we're doing all of them. This can be very slow for large databases. How do we define very slow? When my restore takes multiple weeks, that is very slow. In fact, for a fair number of systems, if my restore takes two hours, that is too slow. It takes a long time to restore these dumps once a database grows to any size — and I'm sure many of you have experienced this, much sooner than you'd expect. Backing up a terabyte is fine; restoring it is not.
There are a couple of things you can do to make the restore run faster. You can use the -1 flag — this is the easy one, it's been around forever. It basically runs the whole restore as a single transaction: what pg_restore does is BEGIN, then restore everything, then COMMIT. One reason we like this is that in general we get fewer operations — not really a big win, because our operations are so big anyway. The big win is that it lets us enable some optimizations in the system: when we do this, PostgreSQL can, for example, skip writing the loaded data to the transaction log — that only works if we're not doing replication — and if we're not, it can help a lot. It also comes with the advantage that if the restore crashes for some reason, we don't end up with a half-restored database.
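For example — the database and file names here are hypothetical — the single-transaction form of the restore is simply:

    # BEGIN ... restore everything ... COMMIT; a crash leaves nothing half-restored
    pg_restore -1 -d mydb /backup/mydb.dump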
Remember what pg_restore does: COPY, then CREATE INDEX, then constraints. If we're unlucky and it crashes after the COPY but before the unique index is created, we now have a dangerous state where there is no protection — unless you use -1.
[In answer to an audience question:] Good question — yes, the dump includes all of that.
It includes triggers, it includes functions and rules. Nowadays they are loaded after the data, so the triggers don't run during the restore — that could create a very strange result depending on what the triggers do — so they don't run during it.
Another thing you can do, if you really need this to run faster, is parallel restore.
This is something we've had since 8.4, so we were able to restore in parallel long before we were able to back up in parallel. It works like your typical parallel command: you give it -j and a number, and it runs that many parallel sessions. So if you have a 32-core machine and you run -j 16, you've got 16 cores for PostgreSQL and 16 cores for pg_restore, and sometimes it will actually run 16 times faster — pretty good. Now, 16 times faster than a week is still not good enough, but it's much better. What it does is still load each of your objects in a single session: if you have one one-terabyte table and 500 really small tables, it will start one connection for that one-terabyte table, the other connections will finish off all 500 small ones, and you're still waiting for the big one — it cannot parallelize an individual table. But usually you have a number of objects that are reasonably sized, and it tries to be smart: it knows in which order it has to load things to satisfy dependencies, and beyond that it tries to spread the load out so it doesn't get into that situation; it does a reasonably good job. Unfortunately this is not compatible with -1 — you can't use both of them — and in almost every case where you're looking for performance, this is the one you want. You can keep turning up the number of processes until you hit the limit of what your disk system can do; once you hit that limit you're not getting any further, but until then you will get faster. Of course, if you do this you lose the single-transaction integrity, so if it crashes halfway through you will have a half-restored database that you need to drop before you restart the attempt — it doesn't protect you against that. A typical value, if you're running the restore on the same machine, is half the number of cores you have: half the cores go to PostgreSQL and half to pg_restore. If you're running it from a different machine, use the full number of cores — and if that doesn't kill your I/O system, then you're happy, because now it's fast. There are some other interesting things you can do, like turning off fsync.
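A sketch of the parallel variant, again with made-up names; note that -j and -1 cannot be combined:

    # 8 parallel workers, e.g. half the cores of a 16-core restore server
    pg_restore -j 8 -d mydb /backup/mydb.dump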
Turning off fsync is something we should never actually recommend — we should never recommend that anyone turn off fsync. It's dangerous: things will die, things will get corrupted, and you won't know until it's too late. On the other hand, it's very fast, and if we're doing a full database restore, the fact is that if it crashes at some point during the restore, we can just delete everything and start over — in that case there's no real risk of damage. The risk is that you forget to turn it back on — and yes, I've run into this more than once, actually more than once with the same customer, which is not a good sign. That does happen: do not forget to turn it back on. And after you turn it back on, you need to flush everything: you need to restart PostgreSQL, and you also need to flush the operating system caches. If you don't, you can get corruption — corruption that you won't detect until later. So how do you make sure the OS caches are flushed? The easiest thing, really, is to reboot the server: then you know it's flushed. There are other ways, and they're completely different on every operating system, but the problem I have with all of them is that they don't give you feedback. For example, on Linux there's /proc/sys/vm/drop_caches, where you can tell the kernel to drop the caches right now, and it will just go do it — what it won't tell you is when it's done, so you don't know when it's safe to start the database. You can sit on the server and watch the disks to see when they stop flushing and hope that was it — that doesn't feel very safe to me. And usually, when we turn this off for a restore, we do it to save on the order of hours, so you can afford the time for a reboot; it doesn't take long, and then you're sure.
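As a reminder of the shape of this (not a recommendation), the dangerous setting is a single line in postgresql.conf, and the whole point is the cleanup afterwards:

    # postgresql.conf -- only ever during a throwaway full restore
    fsync = off
    # afterwards: set fsync = on again, restart PostgreSQL, and flush the OS caches
    # (the simple, verifiable way being a reboot) before trusting the data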
One more — don't forget this one. As I mentioned, pg_dump backs up a single database, and there are objects in PostgreSQL that are not inside databases: your users, your groups, your tablespace definitions. You need those as well. For this we use pg_dumpall with the parameter that dumps only the global objects. If you don't do this, your backup will not restore cleanly — you'll be manually re-creating the users, which is fine if you have two of them and not if you have hundreds. This always generates plain-text output; if you have millions of users that's a problem — and if you have millions of database users, you have a bigger problem, because they probably shouldn't be database users. So: pg_dumpall, globals only. If you just run pg_dumpall with no options, it will dump every single one of your databases in plain-text mode, which is usually not what you want; the globals-only switch says skip the databases and give me just the global objects.
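A minimal sketch, with hypothetical file paths:

    # dump only the cluster-wide objects: roles, tablespaces, ...
    pg_dumpall --globals-only > /backup/globals.sql
    # on restore, load the globals with psql before restoring the per-database dumps
    psql -f /backup/globals.sql postgres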
So, moving on to the other options we have: physical backups. We said logical backups are slow — well, physical backups are slow too; everything is slow, we just need faster machines — but physical backups are faster.
In PostgreSQL this basically comes from the fact that PostgreSQL stores databases in files — essentially one file per table, more or less, and there are lots of them — so we can just back up these files. No need to parse anything, no need to run queries, no need to decompress TOAST data — we skip all of that, so it's much, much faster. It also becomes architecture-dependent: you cannot move a physical backup between two different architectures, say 32- and 64-bit, or big-endian and little-endian — that will not work. It is PostgreSQL-version-dependent: you have to restore to the same version. I think officially you have to restore to the same major version and the same or a later minor version, but in fairness it has worked with an earlier one too — and you should always be on the latest minor version anyway, so that's not a problem. It's also dependent on your compile flags: if you installed your database using a package, say an RPM, the package maintainers make sure they use the same flags so this stays compatible — although a long time ago the RPMs did break this, around 8.2, with the integer-datetimes flag. And your paths must be identical: you restore to the same place. That's kind of restrictive, but when you think about it, for backups it's not necessarily a problem: you're usually restoring onto equivalent hardware — you don't suddenly switch from Linux to Solaris on SPARC while restoring; you probably restore onto something similar. So this is usually not a problem. What might be a problem is that it only supports full-cluster backups: you can't say "take a backup of just this one database out of my hundred" — you back them all up as one step. Again, for backups that's quite often OK, but it is a limitation; it would be nice not to have it, but we do. The easiest way to do physical backups really is still offline backups.
It is easy. How do we do it? Stop PostgreSQL, take the backup, start PostgreSQL. If you can do this, it's the easiest one. You can back up the files exactly however you want — PostgreSQL isn't running, it doesn't care: tar the files, rsync them, take file-system snapshots, e-mail them away, whatever you want. But in most scenarios this is not an acceptable solution, because of that first step up there — the stop. You stop PostgreSQL, you stop your application. How many people here have an application that really is only used between 8 and 5, where nobody ever uses it at night? If you do have an application that's only used by customers one hour a day, don't underestimate the simplicity of this option, because it just works. But you cannot do this while PostgreSQL is running: it may look like you could, but your backup will not work.
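A minimal sketch of the offline approach, assuming a hypothetical data directory and backup path:

    # the backup is only valid because PostgreSQL is stopped first
    pg_ctl -D /var/lib/pgsql/data stop -m fast
    tar czf /backup/pgdata-offline.tar.gz /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start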
Another very simple way of doing it is snapshot-based backups: using file-system or SAN-based snapshots while the database is running. As long as your snapshot system — whether it's LVM, ZFS snapshots, or a SAN — can guarantee atomic snapshots across all of your PostgreSQL data, meaning all of your tablespaces including the pg_xlog directory with the transaction log, then you can just take the snapshot and you're done: it will just work. In many cases that is the problem, particularly if these things are on different file systems — you're down to very, very expensive enterprise SANs to be able to do that atomically across volumes. And it may be one of those things where it's not exactly atomic: if they differ even at the microsecond level, it's going to work in your testing and then fail in production, and you won't know until you try to restore — actually, not even when you restore; you'll know the week after, when you realize there was corruption in the thing you restored. But it can be quite useful in some situations, and the advantage is that you don't have to do anything in PostgreSQL — it just works.
What you usually end up doing instead is online base backups, as we call them: non-constrained file-system-level backups, meaning you can do whatever you want, however you want, while the database is running — but still at the file-system level. We do this, and we recover from it, by using the transaction log: we combine the base backup with the transaction log. Just the base backup on its own doesn't restore; it needs the transaction log. We can do this with or without what's called log archiving. If we do it with log archiving, this is the basis for point-in-time recovery: the ability to restore not just the backup, but to any arbitrarily chosen point in time — which is where we want to be going. There are, of course, multiple different ways we can do this.
The easiest way to do an online base backup is to use the integrated base backup functionality. This is new as of PostgreSQL 9.1 — who is not running 9.1 or newer? OK, not too bad. It runs on top of the replication protocol, because they're very similar, so in order to use this you must enable replication even if you don't have any actual replicas. You configure PostgreSQL to support replication by setting the parameter wal_level to archive or hot_standby — at least archive — and setting max_wal_senders to at least 2. In theory you can get away with less, but always set it to at least 2: the margin costs you nothing, and you'll want more if you add actual replicas or want to run multiple backups at the same time.
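The corresponding postgresql.conf lines look roughly like this (values as described in the talk; the pg_hba.conf note is an additional assumption about the usual setup):

    # postgresql.conf -- changing wal_level / max_wal_senders requires a restart
    wal_level = archive          # or hot_standby if you also run standbys
    max_wal_senders = 2
    # pg_hba.conf must also allow the backup user to make "replication" connections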
Then you run the simple command pg_basebackup. This simple invocation right here is the simplest way to do a full backup of PostgreSQL. -U gives the user to log in as, and this user must have replication privileges. -D tells it to write into this directory over here. Dash capital P gives you a progress report — unlike pg_dump, it can actually give you a progress report that tells you it's about 25 percent done, so you can go have a coffee and keep an eye on how it's advancing. -x is the important one: if you omit -x, your base backup will not be restorable without a log archive; if you include it, pg_basebackup puts enough transaction log into your backup that it is standalone-restorable — you don't need any log archiving, you don't need anything external, it just works. You can get this in two different formats: if you use the tar format it gives you a tar file; if you run the default format, which is plain, it will actually be a reconstructed data directory — in this case in this backup directory right here. So after you run this command, you can point pg_ctl at that directory and start, and it will start PostgreSQL on your backup. That's the kind of backup you get: completely standalone, it just boots up and you've got something running. This gives you one-shot backup capability — this is the one you put in your crontab if you don't want to do anything complicated; it's the physical-backup equivalent of pg_dump.
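A sketch of that simple invocation, with a made-up user name and target directory:

    # -P: progress reporting, -x: include enough WAL to make the backup standalone
    pg_basebackup -U backup -D /backup/base -P -x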
If you want to go all the way, you need to use something called log archiving. It's a pain in the ass to set up, but once you have it, it becomes fantastically useful. The way it works is that whenever PostgreSQL finishes with a log file — this is the transaction log; we're not talking about the system log with text messages in it, it's the binary transaction log — it sends that segment to the log archive, which is the new concept here. When restoring, we read it back from the log archive: basically we restore the base backup, which was taken at a certain point in time, and then we read our transaction log back from the archive and roll forward through it. The feature we're looking for is point-in-time recovery, which means we can stop wherever we want. Say we restore the backup from last night and start reading the log archive from last night onwards: if we keep going until the end — which is, you know, a second ago — then our restored database looks like it did a second ago. We can also tell it to stop at 9 o'clock this morning, which is just before someone came back from the pub and ran that DROP TABLE — which is why you should not log into the database when you get back from the pub at four in the morning, and if you do, you should make damn sure you have working point-in-time recovery. That's the theory, and that's the way it works in PostgreSQL. The good and the bad of it is that it lets you do it any way you want — that's the good part — and it requires you to define what that means.
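For the 9.x era, that stop point goes into a recovery.conf in the restored data directory; the archive path and the timestamp below are hypothetical:

    # recovery.conf
    restore_command = 'cp /archive/%f "%p"'
    recovery_target_time = '2013-05-24 09:00:00'   # stop just before the DROP TABLE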
Even in a simple case, what we do is set a parameter called archive_mode = on. This is one of those annoying parameters that requires a PostgreSQL restart, because once you do this it starts up a little archiver subsystem — a separate process that does this work. Then you define a command, archive_command. It's just a shell command, and you can put anything you want in it: PostgreSQL will tell that command "take this file over here and archive it under this name", and it's up to you to define what that means. A typical example is that you scp it somewhere, or copy it to an NFS mount — but you can do anything you want. You can send it via e-mail — that used to be my go-to example for showing that you can do anything — and then I got a better idea: you can post it to Twitter. You have to split it up, and it works — for a very short while, before you get blocked, because you're sending 16-megabyte chunks as a lot of tweets from one account and they tend to detect that — but it works, and you can restore from it. It is ridiculously complicated; I would not recommend it. Another typical choice, if you want to be in the cloud, is to use Amazon S3 or something like that — they're very good at this kind of thing. The only thing you promise, other than archiving the file and telling PostgreSQL whether you succeeded or failed, is that you must also be able to define a restore_command, which does the opposite: PostgreSQL tells it "this file that you previously archived under this name — give it back to me and put it over here". So if you archived by scp-ing to an NFS server, you copy it back; if you e-mailed it all away, you fetch it from the mailbox and hand it back. The only thing PostgreSQL requires is that you can get the file back when it's requested under that specific name; the rest is up to you.
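A minimal archiving setup along those lines, assuming a locally mounted /archive directory (the restore_command shown earlier is its counterpart):

    # postgresql.conf -- archive_mode needs a restart to take effect
    archive_mode = on
    # %p is the path of the WAL file to archive, %f its file name; the command
    # must return non-zero if the file was not archived safely
    archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'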
There are some limitations in this that we have fortunately worked around a bit more recently. It always uses 16-megabyte segments, which means that if you have a database with very low velocity, you might not generate 16 megabytes of transaction log in a day — but the transactions you do generate are very important. For this we have a parameter, archive_timeout, which tells PostgreSQL: even if I didn't generate the full 16 megabytes within, say, 30 minutes, send the segment anyway. The problem is that it sends the full 16-megabyte file even if there's almost nothing in it, and it isn't zero-padded — it contains random junk, so it doesn't compress particularly well. And no, setting the timeout to one second doesn't help either. In PostgreSQL up to 9.1 you could work around this with replication: if you have a replica, then after a hardware crash you can recover to within the last few minutes — but it doesn't help if someone drops a table, and it requires a replication slave you maybe don't need for anything else. For this reason, as of PostgreSQL 9.2 we have a tool called pg_receivexlog, which pretends to be a replication slave — it's basically a PostgreSQL replication client without the database: it just connects, receives your transaction log in real time, and rebuilds the log archive. You'd typically run it on the log archive server somewhere. Now the WAL files are written byte by byte instead of 16 megabytes at a time, and that solves the problem properly. In order to use this you need a base backup again — and luckily we can generate that very simply with pg_basebackup.
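Two sketches of what was just described, with hypothetical host, user, and directory names:

    # postgresql.conf: force a WAL segment switch at least every 30 minutes
    archive_timeout = 1800

    # alternatively (9.2+), run this on the archive host to stream WAL in real time
    pg_receivexlog -h dbserver -U backup -D /archive/walstream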
Just remove the -x: that's the default mode for pg_basebackup — you just say take a base backup and put it over here. Once you've done that, you've got something to work with together with the log archive; but at this point it will not restore without the log archive — that's what the -x did before. You can still use -x; your backups will just be unnecessarily large, but it will work. You can also use what we call the manual method of doing these backups, which is the old way, the one used in earlier versions of PostgreSQL: you connect to the database, you issue pg_start_backup, and then you copy all your files. We don't care how you copy them — cp, rsync, snapshots, tar, whatever. The files will change while you back them up; that's not a problem, we will be able to restore from whichever state you got them in, as long as you don't miss any. And then you run pg_stop_backup when you're done. The most common mistake people make with this one is not checking the error code of every single step: it's a multi-step operation, and if any one of the steps fails, your backup is corrupt and not usable. So check the error codes — and really, pg_basebackup will take care of these things for you, make sure things are started and stopped properly, and complain if something goes wrong.
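A rough sketch of the manual method with the error checking the talk insists on (paths, user, and label are made up; a production script would also make sure pg_stop_backup() runs even when the copy fails):

    psql -U postgres -c "SELECT pg_start_backup('nightly')" || exit 1
    rsync -a /var/lib/pgsql/data/ /backup/base/             || exit 1
    psql -U postgres -c "SELECT pg_stop_backup()"           || exit 1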
If we look at the system impact of pg_basebackup, it's sort of similar to pg_dump but much lower. It will read all your data — unlike pg_dump it will also read your indexes, because it just reads the files as they are — so it generates a lot of I/O, but it's sequential I/O, which is relatively cheap. It's single-threaded, and I'd say that's probably a good thing, because otherwise it would kill your I/O really, really quickly — this can read so much faster than COPY can. And we have the same sort of trick as with the dump: you can add compression — the compression happens in pg_basebackup — and we can use it to throttle this one as well. If it's generating too much I/O, increase the compression: it goes slower, but it has even less impact on the system. In terms of locking: it doesn't lock anything while it's running — you can drop and create tables just fine while the backup runs. So throttling it might well be a good idea.
When we look at restore performance, this restore will be orders of magnitude faster than pg_restore — quite often literally orders of magnitude — so suddenly your terabyte database restores in hours instead of weeks. That's a good thing. But it depends on the distance from the base backup: the restore means restoring the full base backup — copying all those files — and then replaying every transaction log file from that backup up to the point you're restoring to. Depending on whether you have a database that is large but with low velocity, or small but with high velocity, either part can dominate: if you have a very small database that takes huge amounts of updates, restoring the base backup is really quick, but then you have to replay loads and loads of transaction log that may have just been updating the very same rows over and over; or you can have a huge database of multiple terabytes to restore, and then only a hundred megabytes of transaction log on top. It's going to depend on exactly where you are and what you've got. For that reason you probably want multiple generations of base backups; exactly how often you take them depends on the velocity and the size of your database, and of course on how long you're allowed to wait for the restore. The only real way to figure that out properly is to test it.
I'm starting to run out of time, and I haven't actually started talking about backup strategies yet, which is what this talk was supposed to be about — we've just gone through the options. So let's start with the most important thing in your backup strategy: please make backups. Yes, as a PostgreSQL consultant I can make a lot of money helping people who didn't make backups try to recover whatever is left of their data, but I don't like doing that — I hate doing it; there's more constructive work. So please make backups, and don't forget about them.
So how do you make these backups? You probably — you definitely — want an online physical backup; pretty much nobody does not want this today. Prior to pg_basebackup maybe you could avoid it because it was kind of annoying to set up, but with pg_basebackup, just do it; you want at least that. If you have a mission-critical database with a lot of stuff in it, you almost certainly also want point-in-time recovery, not just the ability to restore the backup from yesterday. If you restore yesterday's backup late in the afternoon, you've had a thousand people working eight hours on data in the database and you just threw all of that away; with point-in-time recovery maybe you throw away five minutes instead of eight hours. You almost certainly want that. And then you probably want pg_dump as well, because this is backups: belt and suspenders and whatever else you can think of, so that if everything else fails, maybe you can still use the pg_dump. If you have a multi-terabyte database that's really not going to work, but if you have a small database, just take them — maybe not as often: say a daily online physical backup and a weekly pg_dump. The dumps are also useful because pg_dump actually reads all your data through the SQL level, so it helps you detect corruption that may have happened from a hardware failure — nothing else walks through the data like that — so it has other benefits too.
Another big thing you have to ask yourself about is backup retention, and that's one of the difficult things, because you need to actually go to your business people and ask them what they need. If we had to restore the database as it was a year ago, is there even any point — is there anything interesting in it? Sometimes there isn't. On the other hand, we might be dealing with financial data in a regulated industry where we need to be able to restore to ten years ago. That can actually be hard to do: do you keep pg_dumps? Because otherwise you'd have to run the restore on a ten-year-old version of PostgreSQL, and that isn't going to work. For something you actually need to restore ten years from now, use the plain-text format: plain-text formats haven't changed in ten years — we still have ASCII, we now have UTF-8, and I think we'll still have UTF-8 ten years from now; I hope production won't still be on PostgreSQL 9.1 ten years from now. And how fine-grained does it need to be? Does it make sense to be able to restore the database of ten years ago to a one-second resolution, or is it OK to say: if it's ten years back, I can go down to monthly retention — you get to choose a month, not a second? These things unfortunately come back down to business requirements; it's not something we as technical people can decide. We can ask for the business requirements and then try to build something that does what they ask for — and quite often what they ask for is not possible, or very, very expensive, and you tell them that and they say "make it cheaper".
We also need to deal with how long we keep things. Remember that the restore requires the base backup plus the log files since that backup, and it does not tolerate any holes in your log archive — this is really, really important: if you lose a single file in your log archive, nothing after the break can be restored. This is also the reason we need to run base backups regularly: in theory we could run one base backup and just keep a huge log archive, but it would take forever to restore, and losing a single file would break everything after it. So we need some reasonable schedule for our base backups — it might be daily, it might be weekly. Another corollary: there's no point in keeping anything in the log archive that is older than the oldest base backup you keep, because replay only ever goes forward — we can never go backwards.
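One way to prune along those lines — an assumption on my part, not something from the talk — is the contrib tool pg_archivecleanup, fed with the oldest WAL segment still needed by the oldest base backup you keep (the segment name below is invented):

    # remove archived WAL older than what the oldest kept base backup needs;
    # that segment name comes from the backup's backup_label / .backup file
    pg_archivecleanup /archive 000000010000004B00000037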
Backups and replication: you probably want both. If you can only afford one, it should be backups — the problems that backups solve are more important than the ones replication solves — but you really want both. Replication is excellent for recovering from hardware failure, because you get up and running within seconds instead of minutes or hours, a much, much shorter service interruption. You need backups as well, but that doesn't mean you don't need replication.
There are other useful things you can do around this to decrease restore times, for example using a lagged replica. This is not something we support with streaming replication, but we do support it with file-based replication: you intentionally create a delay in your replication setup. Normally you keep replicas as close to the master as possible, but here you create one that is running, say, 12 hours behind. That means that as long as you detect the error somebody made within 12 hours, you can fast-forward that replica: remove the delay, roll it forward to just before the problem, and you're there without having to do a full restore. We can do this easily with file-based replication, because we just need a command that, for example, only hands WAL files over from the staging area once they are 12 hours old. It's very simple to set up, and it can be very useful if you need quick restores on very large systems. [Audience question] Yes, absolutely — you want one replica that's up to date to deal with hardware problems, and then you want the lagged one to deal with user or application problems, or for the case where the hardware problem caused corruption.
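A hypothetical sketch of such a delayed hand-over, as a wrapper script used as the standby's restore_command (file names, paths, and the 12-hour threshold are purely illustrative):

    #!/bin/sh
    # delayed_restore.sh %f %p -- only hand over WAL files that have been
    # sitting in the staging area for at least 12 hours (720 minutes)
    FILE="/wal-staging/$1"; DEST="$2"
    if [ -n "$(find "$FILE" -mmin +720 2>/dev/null)" ]; then
        cp "$FILE" "$DEST"
    else
        exit 1   # not old enough yet (or missing) -- the standby just retries
    fi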
The next really, really important thing: please test your backups. Only a regularly tested backup is a working backup; the fact that your database didn't crash last year is not evidence that your backups work. So do test them. We don't all test as often as we should — but I actually have a customer with a fairly important PostgreSQL database who does a test restore every day: every morning the DBA restores the database and runs test scripts and data verification to make sure it works. They have one database — you can't do that if you have a thousand databases, it would be way too expensive — but they have one database, and if that database crashes and doesn't restore, they don't have a company any more; it's as simple as that, so for them it makes sense. For most people that level probably doesn't make sense, but there's a big difference between that and testing once when you set it up and never again. Is there a feature to test your backups without actually restoring? I don't think there is — I don't think there's anything in the queue for testing backups other than actually restoring them — because why not just do the restore? Unlike Oracle, you don't need an extra license for a separate machine to restore PostgreSQL onto: just put up a virtual machine somewhere and restore into that. Then you've actually tested the right things.
There's another advantage too. A typical example is when you need a new staging environment:
restore it from backup — don't build a new one from scratch. Why? Because you also implicitly train your personnel: when this has to happen at 3 o'clock in the morning, you've actually done it before and you know how it's done. Maybe that restore should even be your backup test, because then you know the whole process works and you know how to do it. So when the developers say they need a new, updated copy of the database for development, a lot of you will go take a copy straight from the master — don't do that; go to your backups, because then you've just tested your backup and the test came for free, and you've trained your people at the same time. And do automate it — you should automate everything — but it should be the same process, automated or not, that you will use when you do the real restore, so that you're used to doing the right things. Well, I'm already significantly over time, but luckily there's a long break after me.
Thank you very much for showing up. If you have any questions — I think we'll let you all go, but feel free to approach me here and ask afterwards, and if you have interesting war stories from backups, I'm always interested to hear them. Yes, you'll be able to get a copy of my slides — they'll be on my blog, and they're basically already there from earlier talks; the only difference is that those say some other conference on them.

Metadata

Formal Metadata

Title: PostgreSQL Backup Strategies
Series Title: PGCon 2013
Number of Parts: 25
Author: Hagander, Magnus
Contributor: Heroku (Sponsor)
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify, and reproduce, distribute, and make publicly available the work or its content, in unchanged or modified form, for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
DOI: 10.5446/19055
Publisher: PGCon - PostgreSQL Conference for Users and Developers, Andrea Ross
Publication Year: 2013
Language: English
Production Place: Ottawa, Canada

Content Metadata

Subject Area: Computer Science
Abstract: Rumor has it all these other users and companies do this thing called "backup" on their database. And people keep saying it's a good idea. But what does it actually mean? In this talk we'll go through the different options we have for PostgreSQL backups, how to use them and some good tips and best practices for setting up a backup strategy that actually works when disaster strikes.
