All the Dirt on VACUUM

Speech Transcript
I'll be talking today about the internals of how VACUUM works.
Just as a heads up, this talk is going to be rather in depth. I actually got the idea after seeing Simon's talk back at last year's conference, and it occurred to me that while there are a number of pieces of information out there about how you should tune the different settings and whatnot, I hadn't seen anything that really talked in depth about what vacuum actually does and how it actually works. So that's what I'll be covering here today. I do have references to actual places in the code in the presenter notes, so if you want to go down that path, I think we're making the slides available on the web site; if not, I'll have them posted.
Today we will be talking about MVCC, the commit log, multixact transactions, some of the machinery that goes into why we have to vacuum, when things can be vacuumed, freezing, HOT, vacuum itself obviously, and then some stuff about what autovacuum basically does.
In databases you have two choices for how to ensure consistent data: you can use locking, which blocks, or you can use multiversion concurrency control. What MVCC essentially means is that when you do a delete or an update, the old data does not get removed: when you update, a new copy of the row is created, and the old one (or, with a delete, the row itself) is just left behind. That's ultimately the biggest reason we need vacuum: you have these rows left behind that eventually have to be removed.
The commit log is what Postgres uses to track whether a transaction has been committed, rolled back, or is still in progress. Multixacts, which many of you may have heard about because of the recent bugs (if you're on 9.3 or 9.4 you need to upgrade, because there were some data-loss bugs found revolving around multixacts), are basically needed to handle row-level locking and some conditions around updates. The reason these two things are related to vacuum is that vacuum is what's responsible for going in and freeing up space in those two storage areas.
As I mentioned, when you do an update or a delete you create these dead rows, but the big question becomes: when you have a dead row, when can you actually go in and remove it? The constraint is that when you start a transaction or a select statement in Postgres, it gets what's called a snapshot. That snapshot has visibility information that determines which rows that individual statement can actually see. So if you have a snapshot in your database that was taken an hour ago, and since that snapshot was taken you've gone and deleted 3 million rows, those rows cannot be cleaned up by vacuum, because that hour-old select statement could still need to look at that data. The other thing that comes into play here is streaming replication: depending on your configuration, it can have that same visibility impact. Streaming replication is a completely binary-level replication protocol, so when the master goes and vacuums away those old rows while an old select statement is running on a streaming replica, you get the problem where the master has removed data that the replica still needs for a running query. So if you have hot_standby_feedback enabled, your streaming replica will affect the ability of vacuum to do its job on your master. Prepared transactions matter here too. These are different from prepared statements; basically the only reason to ever turn prepared transactions on is if you're specifically doing two-phase commit. The big thing is that a prepared transaction has a snapshot associated with it, so a forgotten prepared transaction can again prevent vacuum from working. Logical decoding can also affect this: while it's not a pure binary replication technology, it's still based on WAL replay, so it has the same constraint on vacuum.
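As a minimal sketch (not from the slides), you can spot the usual suspects with the standard pg_stat_activity and pg_prepared_xacts system views; the one-hour cutoff here is arbitrary:

    -- long-running transactions that pin old snapshots
    SELECT pid, xact_start, state, query
      FROM pg_stat_activity
     WHERE xact_start < now() - interval '1 hour';

    -- prepared transactions hold a snapshot until committed or rolled back
    SELECT gid, prepared, owner, database
      FROM pg_prepared_xacts;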
So that's the visibility side, but there are other things that affect vacuum's ability to do its job. Because of how index scans and sequential scans operate, you cannot vacuum a page of data while somebody else is looking at that page; you can't just remove the data out from underneath them. So there's a very special lock that has to be acquired at the page level for vacuum to be able to remove data. Queries can hold these page pins, and they are not necessarily short-term: if you have something like a nested loop, the outer side of that loop can potentially hold a pin on a page for a very long time, and while that pin is being held, the page cannot be vacuumed.
What this all boils down to is that long-running stuff happening in your database prevents vacuum from being able to do its job. So if you have a very high transaction rate database, you need to be careful about long-running statements. And this is not limited strictly to which tables are being vacuumed versus which tables are being queried, because of how snapshots work: if you have a snapshot that's an hour old and all it's doing is scanning one table, the system has no way to know that the transaction will only ever touch that one table. So you may be trying to vacuum some other table, and vacuum still can't remove the data.
[Audience question: so it's possible for transactions holding old snapshots to keep tables from ever being vacuumed?] Yes. And unfortunately, in all versions up to 9.4 there's no way to get visibility into this. We did add some additional verbose logging, so if you do VACUUM VERBOSE in 9.5, it will provide feedback when it tries and fails to pin a page for cleanup. The one exception to this is a freeze vacuum: because a freeze vacuum must scan and clean every single page, if it can't acquire the cleanup lock it will sit and wait, and that can then cause other problems, because while it's sitting and waiting for the cleanup lock, anybody else that needs a pin on that page is going to sit and wait behind the vacuum. That's actually the way vacuum just used to work, period. I want to say this was changed in 9.1 or 9.2: we changed vacuum so that if it could not immediately acquire the cleanup lock, it wouldn't sit there and wait. Prior to that, it would just sit and wait, and any other queries that came along would get stuck behind the vacuum.

Transaction IDs and multixact IDs are implemented as basically a circular counter. It starts at 3, because the first couple of values are reserved, and keeps increasing; when it passes 2^31, all of a sudden transaction 2 is no longer a transaction in the past, it becomes a transaction in the future. The problem with that is: if you have rows that say "I was created by transaction number 2", and the counter wraps past that point, that row becomes invisible, because the database now thinks the row was created by a transaction way in the future, and it won't show it to you. The way the system works around this and handles it is that as the transaction counter increases, eventually it has to go through all tables, find all transaction IDs that are old, and replace them with a special value that indicates "this row is visible to everybody". It will then always be visible to everybody until somebody goes in and deletes it. That process is known as freezing, and it has to be done both for transaction IDs and for multixact IDs. In fact, if you don't handle freezing, eventually the database will stop allowing you to create new transactions, specifically to prevent this problem of all your data seemingly vanishing.
The key takeaway here is that if you have very high rates of update transactions, if you do a lot of FOR SHARE locking (a FOR SHARE lock will always generate a multixact ID), or if you have a lot of concurrent foreign key checks happening (that's the other thing that tends to generate multixact IDs), you can run into problems with being able to freeze your data.
[Audience question: is there a way to see what the counter is at?] There is, but it's actually not very easy to interpret for transactions. There is a function you can run: you can call txid_current(), and that will tell you your transaction ID, which is essentially the most recent one. It's not just a raw number, you have to actually interpret it, but you can interpret it and then compare it to a column in pg_database called datfrozenxid, which tells you the oldest non-frozen transaction ID in each database. There are monitoring scripts out there to do this; I know there's at least one for Nagios, and there are probably a few others. [Audience exchange about finding the oldest transaction ID.]
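For example, a rough sketch combining those pieces (the age() function interprets an XID relative to the current counter):

    -- the current (most recent) transaction ID
    SELECT txid_current();

    -- oldest non-frozen transaction ID per database, expressed as an age
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;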
When it comes to multixact IDs, I don't know of any way to get that information. You could write a C function that would expose it, and there has actually been a proposal on the hackers mailing list (I think it may get committed in 9.5) that will add a function to expose the multixact information, but right now there's essentially no way to do it other than writing a custom C function.

In 8.3 we added a new feature called heap-only tuples (HOT), so under certain specific conditions, if you update a row we can actually avoid the need to come back and vacuum the old row. The way this works is: if you perform an update that doesn't touch any indexes, meaning it doesn't change any values that are used in any indexes on that table, then we don't need to touch the indexes; we don't need to create a new index entry pointing at the new row version generated by the update. So if you're not touching the indexes, and the new row version still fits on the same page, the database will do what's called a heap-only tuple update, where it does create a second version but ties the two together, basically through the same address. Then, once the old row is no longer visible to anybody, we can do a heap-only cleanup, and that cleanup is done whenever pages are pruned. The exact conditions are a little bit more complex than that, but this can make a world of difference, especially on tables that have a lot of indexes, because it's those indexes that can really make a vacuum quite expensive.
So the takeaway here is that you want to try to avoid referencing heavily updated columns of a table in your indexes. By "referencing" I mean obviously an index on the column itself, but also the column showing up in the WHERE clause of a partial index, or being used in the expression of a functional index. Where this really comes into play is if you've got a table that you're trying to use as something like a queue table, where there's a lot of churn happening: you want to avoid doing things like putting an index on your status field, because if you avoid that, your updates to those rows can be HOT updates, and that makes it a lot easier to keep stuff cleaned up.
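A hedged sketch of that takeaway (table and column names are made up): leave the hot status column unindexed so updates stay HOT-eligible, and check pg_stat_user_tables to verify it's working.

    -- queue-style table: index the key, but NOT the heavily updated column
    CREATE TABLE work_queue (
        id      bigserial PRIMARY KEY,
        payload text,
        status  text NOT NULL DEFAULT 'pending'  -- deliberately unindexed
    );

    UPDATE work_queue SET status = 'done' WHERE id = 1;  -- eligible for HOT

    -- compare total updates to HOT updates
    SELECT relname, n_tup_upd, n_tup_hot_upd
      FROM pg_stat_user_tables
     WHERE relname = 'work_queue';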
Vacuum itself. There are four major variations on vacuum. Autovacuum is a built-in process that tries to just handle everything, and in the most recent versions it does a pretty good job of it. VACUUM FULL completely rebuilds the table from scratch. VACUUM is the regular, manually run vacuum. And VACUUM FREEZE is a special variation on vacuum that essentially sets the freeze threshold settings to zero, to ensure that the vacuum run will freeze everything it possibly can.
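For reference, the three command-level variations look like this (the table name is a placeholder; autovacuum, the fourth variation, is a background process rather than a command):

    VACUUM my_table;         -- regular manual vacuum
    VACUUM FREEZE my_table;  -- vacuum with the freeze thresholds forced to zero
    VACUUM FULL my_table;    -- rewrite the table from scratch, exclusive lock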
Autovacuum: you generally don't have to mess around with it, and in 9.4 the default settings at this point are pretty good. If you're running an old enough version, the one thing you would probably want to change is the churn threshold. Originally, autovacuum used a threshold based on the percentage of the table that had to change, and that was set to something like 40 percent, so autovacuum wouldn't even try to vacuum the table until there was at least 40 percent churn, and that's a lot of bloat. We changed that quite a while ago, and the current default threshold is much lower. [Audience discussion about running ANALYZE, largely inaudible.] Right: if you're in an environment where you do some kind of data load or some other large bulk operation, it's a good idea to at least run ANALYZE, if not VACUUM as well depending on the operation, immediately afterwards. Ultimately, autovacuum generally does a pretty good job, but you can do a lot to give it a leg up, to give it a helping hand, by making strategic use of manual vacuum. A very common case for that is after you've done a bulk operation. Another very common case is if you have something like a queue table that sees a lot of writes, especially if you're trying to keep that table small: you want to vacuum that table manually, very frequently. I typically go in and set up a cron job that vacuums that table once a minute, because if the table is small, that vacuum is very inexpensive, and by vacuuming it that aggressively you keep it small; once it starts getting bigger, that's when you start having serious problems. The other thing you can do with manual vacuum that can make a big difference: if you have a website with regular traffic patterns (really, any database with regular traffic patterns), say you don't have as much activity during the night or on weekends, it's a good idea to go ahead and schedule a manual vacuum to run during those off-peak hours, because the work that the manual vacuum does during that time, autovacuum doesn't then have to try to do in the middle of the day when everything is busy. One word of caution: something some people have tried is using autovacuum_naptime to constrain and control when autovacuum will actually run. That's a really bad idea. First of all, it doesn't actually work. Some people only want autovacuum to run at night, so they set autovacuum_naptime to 24 hours, and maybe that works for a while, but then you have some problem and restart in the middle of the day, and now your every-24-hours process is occurring in the middle of the day. And there are just other problems that you run into by autovacuum not running frequently.
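A minimal sketch of that once-a-minute cron job, using the vacuumdb wrapper (database and table names are placeholders):

    # crontab: vacuum a small, high-churn queue table every minute
    * * * * *  vacuumdb --dbname=mydb --table=work_queue --quiet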
[Audience question about tuning the autovacuum parameters, partly inaudible.] Yeah, I think the big challenge is being able to recognize these patterns: here's a hot spot over here, we need to keep a special eye on it. One thing we certainly could do: right now, when autovacuum launches in a database, we don't prioritize the tables in any way, shape, or form. It just says "here's my list of what I need to vacuum" and that's it. So that's something we could potentially improve. [Audience question: can you just add more workers?] You can, but especially for a table that needs to be vacuumed very frequently, it's really not likely to help a lot, because what happens is the workers get tied up vacuuming other stuff and just never get around to the table you care about. [Further question.] Yes, I'll actually cover that.
Vacuum full. Prior to 9.0, VACUUM FULL was just a completely horrible idea. The way it worked is that it took the last tuple in the table and tried to move it to the front, then the second-to-last, and so on. First of all, it did this while holding an exclusive lock on the table, so that nobody could do anything with the table. The even bigger problem is that in the process of doing that, it was bloating the heck out of the indexes, because each one of those tuples that got moved then had to create new index entries, and especially with the way the B-tree works, that could seriously damage your indexes: not from a data-correctness standpoint, but it could really bloat them. Is anyone running anything older than 9.0? If so, definitely upgrade. Nobody's on 7.x, I hope, so I won't have to publicly humiliate anybody. If you're prior to 9.0, do not even think about running VACUUM FULL; it's really just not a good idea. Since 9.0, what we do instead is just create a brand-new copy of the table from scratch and recreate all the indexes on it. The bad news is that this does still require an exclusive lock, so while it's happening nobody can modify the data, and I don't think you can even select from the table while it runs. The good news is you get a really nicely, efficiently packed table and all your indexes rebuilt, brand new and shiny; the bad news is that while it's running, you're out of luck if you're trying to use the table. If you do find the need to actually do that, there is a project on git called pg_repack, and basically what it does is essentially the same thing: it creates a new table, copies the data over, and builds all the indexes. The difference is that it puts triggers in place so it can remember "hey, here's the stuff I haven't copied yet": it copies the data over, goes in and pulls over the last of whatever got changed, and then, once the new table is completely in sync, it takes a very short exclusive lock to do the swap (drop the old table, rename the new one), and then you're up and running again. So if you do need to do this, that's a project that can be very useful.
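Typical pg_repack usage is roughly the following (assuming the extension is installed in the target database; names are placeholders):

    $ psql -d mydb -c 'CREATE EXTENSION pg_repack;'  # one-time setup
    $ pg_repack --dbname=mydb --table=my_table       # online rebuild of one table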
Just as a heads up: pg_repack is a fork of an older project that is now defunct, so you want to make sure you're using pg_repack. I think the old one was called pg_reorg. [Audience question.] Sorry, the question was: if you're using partitioning, can you VACUUM FULL just a single partition? The answer is yes, because with partitioning, each partition is a standalone table in itself; it's essentially just another table. You are still likely to run into locking problems unless your queries are specific enough that the planner can determine it doesn't need to read from that table. If you're saying WHERE date equals today, and you're doing date partitioning with the constraints set up and constraint_exclusion turned on, then the planner may be smart enough to recognize "hey, I don't even need to look at that table", but I'm not even sure that's true; you could still effectively be blocked. So if you need to, you can manually vacuum one partition while you query some other partition.
Manual vacuum itself. One thing that might surprise you: it can't be run inside a transaction, because it needs to control transaction semantics, so you cannot write a function that's going to run a manual vacuum. It will not work. Also, there is a vacuumdb shell command that can be handy if you're doing shell scripting, and likewise from cron. What vacuum actually does, for each table: it scans the heap, remembering all the tuples that need to be removed; then it scans the indexes, removing those tuples from the indexes; then it removes the tuples from the heap. Finally, if you asked for ANALYZE, it will do that, and then it updates the relfrozenxid and relminmxid columns in pg_class. The key takeaway here is that part where we scan the heap remembering a set of tuples, then go to the indexes to remove those tuples, and then go back to the heap.
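A quick demonstration of the transaction restriction; the error text shown is what recent versions print:

    BEGIN;
    VACUUM my_table;
    -- ERROR:  VACUUM cannot run inside a transaction block
    ROLLBACK;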
How many tuples can be remembered is controlled by maintenance_work_mem. If you have maintenance_work_mem set too low, vacuum starts scanning, then has to stop, scan all the indexes to remove those tuples, and then resume scanning; and each of those index scans is a full scan of every index. So you do not want to set maintenance_work_mem to some really low value, because your vacuum will just crawl. The amount needed is not driven by the size of the indexes.
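A minimal sketch of raising it for a big manual vacuum (the value is illustrative; the per-tuple arithmetic that justifies it comes next):

    -- room for roughly 100 million dead tuples (100e6 * 6 bytes = ~600 MB)
    SET maintenance_work_mem = '600MB';
    VACUUM VERBOSE big_table;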
What we have to remember for each dead tuple is a ctid, a tuple identifier, which is 6 bytes. So if you have a million dead rows that need to be vacuumed, that's 6 megabytes. [Audience exchange, inaudible.] The vacuum function itself (by the way, I'm now effectively walking through the code) calls vacuum_rel, which vacuums a single table. It does a bunch of mundane stuff that doesn't really matter, and then it calls cluster_rel if it's a VACUUM FULL or if you ran the CLUSTER command, or it calls the lazy vacuum otherwise. Before returning, in a regular vacuum, meaning you just issued the VACUUM command, it calls itself again to vacuum the toast table. There is a difference here with autovacuum, though: autovacuum considers when a toast table needs to be vacuumed completely separately; it's a totally separate scheduling decision it makes. That is one nice thing about autovacuum: if you're producing a lot of churn in your main table but nothing in your toast table, it's not going to sit there and keep trying to vacuum the toast table; it will just leave it alone.
lazy_vacuum_rel: the first thing it does is figure out whether we need to freeze this table. It looks at a couple of configuration settings that control this, and they basically say: if the relfrozenxid or relminmxid column in pg_class (pg_class, not pg_database) is old enough, if it falls within the freeze threshold, it decides, OK, this is going to be a freeze vacuum. What VACUUM FREEZE does, if you say VACUUM FREEZE, is basically set those limits to zero, so everything will be frozen. Then it scans the heap, using lazy_scan_heap, and scans the indexes. If it makes sense, meaning we have freed up enough pages at the end of the table, it will try to actually shrink the table on disk. It cleans up the free space map, and finally it updates pg_class: it will update the row count, and if it was a freeze vacuum it will also update those frozen-ID columns.
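The relevant settings, and the per-table counterpart of the earlier pg_database query, for reference:

    SHOW vacuum_freeze_table_age;  -- age that escalates a vacuum to a freeze vacuum
    SHOW vacuum_freeze_min_age;    -- minimum age before individual tuples are frozen

    SELECT relname, age(relfrozenxid) AS xid_age
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY xid_age DESC
     LIMIT 10;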
lazy_scan_heap is the function that actually goes through the table, page by page by page. When it pulls in a block (it's a little hard to describe this and actually make it match the code, but basically): if it's not a freeze vacuum, and there are at least 32 consecutive pages marked as all-visible in the visibility map, then it will just skip ahead to the next page that is not marked all-visible. This can be extremely helpful if you have a table that's insert-mostly, so most of the table's rows are visible and you just have stuff at the very end that's being updated. [Audience question.] Yes, the terms are used somewhat interchangeably in the code itself. The reason for the 32-block limit is to try to avoid confusing the read-ahead algorithm in the kernel. If we have almost run out of maintenance_work_mem space to remember tuples, it stops what it's doing, goes and removes all the remembered tuples from each index (that's where it does the index scans), and then it removes the heap tuples. Next, it attempts to get the cleanup lock on the page; if it can't get the cleanup lock and it's not a freeze vacuum, it just skips ahead to the next block. As I said earlier, that's something you have to be careful of: there are certain activities that will prevent vacuuming of blocks within a relation, and honestly, I don't think anybody has any idea whether that's a real problem or not, because, as I mentioned, we don't currently track statistics on it. If we can get the cleanup lock, it calls the page-pruning code, which is the same thing that's done for a HOT cleanup; it's actually the exact same code path. It remembers which tuples were dead; it basically puts their ctids into an array. There is a special case here: if a table has no indexes, vacuum makes just one pass over the table and that's it, removing dead tuples as it identifies them; but most tables have at least one index. So: it remembers the dead tuples, it updates the free space map and the visibility map, and that's basically the end of the for-each-block loop. Once it has processed the entire heap relation, it updates statistics, and then it does the final pass through the indexes. Hopefully the only pass through the indexes.
The index cleanup function: because of the way indexing works, you can theoretically create a new index access method without modifying any of the core PostgreSQL code. What that turns into is that each index type has a different method for how it handles vacuum, so I don't really want to try to go into the specific method used for every type of index.
One thing I will say, definitely for B-tree indexes (and I'm pretty sure this affects at least some of the other index types as well): if your index gets bloated, it can be extremely hard to actually shrink that index back down, because of how the B-tree works. So try not to let bloat happen; and if an index does get bloated, really your best bet is to either run the REINDEX command, which will lock the table, or do a concurrent index build to create a brand-new index, and once the new index is built, you can drop the old one.
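A sketch of that concurrent rebuild (index and column names are placeholders; DROP INDEX CONCURRENTLY requires 9.2 or later):

    CREATE INDEX CONCURRENTLY my_index_new ON my_table (my_column);
    DROP INDEX CONCURRENTLY my_index;
    ALTER INDEX my_index_new RENAME TO my_index;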
lazy_vacuum_heap is the function that actually removes the dead tuples from the table itself, so it gets called after we've called lazy_vacuum_index and lazy_cleanup_index on each index in the table. Within lazy_vacuum_heap, it goes to each block that it needs to and removes all of the tuples that were marked dead; on each page, it will also defragment the page and record the resulting free space.
There is a separate function that handles updating the two columns in pg_database whose naming convention makes them very hard to actually say in English: datfrozenxid and datminmxid. If we have new values for them (and the only way we can have new values is if the vacuum managed to scan every single page in the table; normally that only happens if it was run as a freeze vacuum, but it is smart enough to recognize the case where a plain vacuum just happened to visit every single page), then we know we have the most recent, up-to-date information on the oldest transaction ID and multixact ID in the table, and it will do the update. If that happens, we update this at the database level, and if we update the database level, that is what calls the function to truncate the commit log. It updates the shared-memory version of the information, and the actual cleanup of the multixact storage files will then happen during the next checkpoint.
So, autovacuum. There are two parts to autovacuum: the launcher and the workers. The launcher prioritizes the databases basically by how close they are to needing to be frozen, first looking at the transaction freeze age, second looking at the multixact age. If nothing is in danger, meaning nothing needs to be frozen, then it just looks at whichever database was least recently vacuumed. Multiple workers can work on the same database at once. Workers will be canceled if they interfere with another backend, and this is another difference between autovacuum and manual vacuum: a manual vacuum will never be magically canceled by the system, but if an autovacuum worker is doing something and it interferes with another operation in the database, the autovacuum can be terminated. It's DDL that can actually cause an autovacuum worker to be terminated on its own. [Audience question about heap truncation blocking queries.] That was changed, I want to say in 9.4. The way it works now is, first of all, we won't even try to truncate the heap unless we can get the lock without blocking anybody; and then, as we're doing the truncation itself, if we detect that we are blocking somebody, we stop the truncation at that point. And that's true for any vacuum. [Audience question.] Yes, that will kill the autovacuum, but there actually is some protection around that. [Audience question about which databases autovacuum processes.] It does not ignore any of them; the only databases it ignores are the ones that are marked as not accepting connections, because if you cannot connect to them, there shouldn't be anything to do. Unless you set a database to not allow connections, it will autovacuum it. [Audience comment.] Yes, that's very important.
The workers themselves get a list of all the heap tables and materialized views that need to be vacuumed, and then a list of toast tables that need to be vacuumed. It ignores temp tables: autovacuum will never vacuum a temp table, so if you're doing updates and stuff on a temp table and you think it needs to be vacuumed, you must do that by hand. Otherwise it makes no distinction. For each relation, it attempts to get the lock; if it can't get the lock, it just skips the table entirely. So this is another thing you have to be careful with; another opportunity for long-running stuff in your database to keep vacuum, or in this case autovacuum, from doing its work. [Audience question: why doesn't it vacuum temp tables?] The biggest reason is that the visibility information for temp tables is handled differently. There are some optimizations we can do with a temp table because we know nobody else is going to read from it, but that means a vacuum from another process can't even read it. [Audience question: is it possible to vacuum your own temp table?] Yes, the session that owns it can vacuum it, although that session can't be doing anything else in the meantime. So: a worker goes through each relation, and once it has run through all the relations, it calls the update-datfrozenxid function and then it exits. It does not go back through the tables looking for new work; once a worker has run through the entire list it identified originally, it just exits.
So here's a big one: autovacuum does not prioritize tables within a database. It just says "here's the list of tables that I want to vacuum", and that's it. It doesn't apply any kind of priority, and it only builds the list once.
The other big thing with autovacuum is that it can become ineffective for high-demand tables. In the default configuration you have three autovacuum workers; if you end up with three large tables that all show up on the list and all need to be vacuumed at once, all three workers can be stuck doing that, and nothing else is going to get vacuumed until one of those workers frees up. These are some of the things that factor into why, especially for tables that you're trying to keep small and that have a high churn rate, you still want to vacuum them by hand.
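One common mitigation, not spelled out in the talk, is per-table autovacuum storage parameters, so a hot table qualifies for vacuum much sooner; the values here are illustrative:

    ALTER TABLE work_queue SET (
        autovacuum_vacuum_scale_factor = 0.0,  -- ignore table size
        autovacuum_vacuum_threshold    = 1000  -- vacuum after ~1000 dead rows
    );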
Vacuum cost delay. It's pretty well documented, really. The critical thing is that as certain operations happen in vacuum (reading a page from shared buffers, reading it from the kernel, or dirtying a page), each one of those has a cost associated with it. We keep a running total of the cost, and when the total exceeds the vacuum or autovacuum cost limit, we then sleep for some number of milliseconds. The entire idea here is to prevent vacuum from just sucking down every last available piece of I/O that you've got. One thing you need to be careful with: don't slow vacuum down too much. Don't set some huge cost delay, and don't set the cost limit really small, because if you do, yes, you're going to save on your I/O bandwidth initially by throttling the heck out of vacuum, but it then means vacuum isn't getting the work done that it needs to get done, and that means your table is just going to get more and more bloated over time. You're actually making things worse, so don't do that. The other trick here is that if you have systems with nice, expensive RAID controllers and nice RAID setups, a lot of the time it's actually cheaper to write data than it is to read data. So if you have a nice, expensive RAID controller, you may want to set the vacuum_cost_page_dirty parameter lower than the vacuum_cost_page_miss parameter, because it's cheaper to write the data than to go to the kernel to get it.
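The knobs involved, with their 9.x-era defaults noted in comments (verify against your version's documentation):

    SHOW vacuum_cost_limit;             -- default 200
    SHOW vacuum_cost_delay;             -- default 0: manual VACUUM is unthrottled
    SHOW autovacuum_vacuum_cost_delay;  -- default 20ms: autovacuum is throttled
    SHOW vacuum_cost_page_miss;         -- default 10
    SHOW vacuum_cost_page_dirty;        -- default 20; consider setting it below
                                        -- page_miss on a write-cached RAID setup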
So, just to reiterate the key points: long-running transactions mess with vacuum. High transaction rates, use of FOR SHARE locking, and lots of concurrent foreign key checks increase the need to freeze, because they burn through transaction and multixact IDs. Indexes referencing heavily updated columns prevent HOT from working, which creates more need for vacuuming. Make sure maintenance_work_mem is large enough. It's very difficult to reduce the size of a bloated database. And vacuum can only do so much; it's not magic, as much as we might wish that it was.
I'm out of time, but I'm happy to stick around for questions. I think it's lunch time at this point. Thank you.

Metadata

Formal Metadata

Title All the Dirt on VACUUM
Series Title PGCon 2015
Number of Parts 29
Author Jim Nasby
Contributors Crunchy Data Solutions (Support)
License CC Attribution - Share Alike 3.0 Unported:
You may use, modify, and reproduce the work or its content for any legal and non-commercial purpose, and distribute and make it publicly available in unaltered or altered form, provided that you credit the author/rights holder in the manner they specify and pass the work or its content on, including in altered form, only under the terms of this license.
DOI 10.5446/19118
Publisher PGCon - PostgreSQL Conference for Users and Developers, Andrea Ross
Publication Year 2015
Language English
Production Location Ottawa, Canada

Content Metadata

Subject Area Computer Science
Abstract The use of Multi-Version Concurrency Control (MVCC) is perhaps one of the most powerful features PostgreSQL has to offer, but it can be a source of confusion for new and experienced users alike. In this talk we will provide an in-depth walkthrough of why Postgres needs to vacuum and what vacuum does. Topics: - MVCC details - HOT overview - Identifying tuples to be vacuumed/frozen - VACUUM and indexes - Vacuuming heap pages
