
SELECT * FROM changes; - Part 2


Hi, I'm Andres and I work at 2ndQuadrant, and I've spent the last two years working on the feature that's up on the wall here. I think it's helpful for more than just posters, so I'm going to try to explain what it is, what it does for you, and how you can use it. As you might notice from the initial slide, the feature went through quite a few different names: initially it was named logical replication, then I renamed it myself to changeset extraction, then Robert suggested logical decoding, and if I remember right the marketing material now names it data change streaming. SQL Server names its version change data capture, and another unnamed database has its streams. So we have a feature with many names.
What does this feature with the many names do? It allows us to get all modifications performed to a database, that is to say inserts, updates, and deletes, in a format that you can use for other things. An important part is that it does this in an order that's consistent. Consistent order means that if you take all those changes and put them into another database, you'll be able to replay them; it won't be some random order, so you don't get uniqueness conflicts or similar things. It was also very important, and this was a crucial point during the whole discussion on the list, where initially the output was all hard-coded, that the output that comes out of the feature is completely configurable: you can make it do anything. Another point I find very important is that it's possible to use the whole feature even if you do DDL like ALTER TABLE, because that's where the existing logical replication solutions for PostgreSQL, like Slony, like Londiste, like Bucardo, all have big problems: they break, and users don't like that very much. And it has very low overhead in comparison to the existing solutions. The existing solutions basically create triggers for every action and queue all the changes into a huge table, and that means at least double the writes, and writing that much is relatively expensive. It gets even more expensive if you have something like a timestamp or float data type, because converting the binary representation of a timestamp to text is very expensive.
To start off with how to actually use the feature: a very important concept in the whole thing is replication slots. A slot is basically one stream of data that can be streamed out. So what I'm doing here is SELECT * FROM pg_create_logical_replication_slot(...): that creates one replication stream, my stream is named 'my_stream', and the output format will be 'test_decoding' (how exactly that works we'll come back to in a second), and it gives you back what it did. We also have a nice overview of which replication slots have already been created, the pg_replication_slots view, and that just shows you all the existing slots; obviously in this example I had only created one. It also shows other kinds of slots, but there's nothing to talk about there today. So what happens if I actually do things? Let's say I have a table for talks; the talk name is the primary key, because I dislike surrogate keys, and there's a description. I insert a talk into the talks table: the talk is 'SELECT * FROM changes;', which is the reason for this talk's title, and the description explains what logical decoding is. Then, to show that it actually works with DDL, I'm going to add a new column, and I'm going to use that column to add a review for my talk, which says whatever. Now I want to get all those changes, and there's a very accessible way to do that which is useful for many things: the pg_logical_slot_get_changes() function. That's a set-returning function that just returns all the changes that have happened since the last time it was called. I call it on the replication slot I previously created, the 'my_stream' thing, selecting the location and the data columns, and it shows the changes that happened. The first data modification that happened was the insert.
We see the INSERT into the talks table: the column talk of type text is 'SELECT * FROM changes;', the description is 'explains logical decoding', and then you can see there's a COMMIT of that transaction. Next, if you recall, we altered the table, added the review column, and filled it with text. So we get the UPDATE with that text, but if you look closely at the transaction numbers next to the commits, there's a gap in between: that's where I altered the table, because at the moment we only decode the changes to the table's contents, not changes to the table structure. It didn't fit on the slide, but there is an empty transaction in between.
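The demo above can be condensed into the following sketch (names follow the talk; this assumes a 9.4-era server with wal_level = logical and a free replication slot, so treat it as illustrative rather than a verbatim reproduction of the slides):

```sql
-- Create a slot streaming through the test_decoding output plugin.
SELECT * FROM pg_create_logical_replication_slot('my_stream', 'test_decoding');

-- Overview of all existing replication slots.
SELECT slot_name, plugin, database FROM pg_replication_slots;

-- A table keyed by the talk name (no surrogate key).
CREATE TABLE talks (talk text PRIMARY KEY, description text);
INSERT INTO talks VALUES ('SELECT * FROM changes;', 'explains logical decoding');

-- DDL is fine while decoding is active, even on decoded tables.
ALTER TABLE talks ADD COLUMN review text;
UPDATE talks SET review = 'whatever';

-- Consume everything decoded since the last call.
SELECT location, data
FROM pg_logical_slot_get_changes('my_stream', NULL, NULL);
```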
So how is the data flowing? We have a database, and the data flows from there to the so-called output plugin; that was the 'test_decoding' one I used. If I have several replication slots, each one does the same thing, but each slot can have a different output plugin, and from the plugin the data goes to the user, or a consumer, or another database system. The output plugins are what determine what the data that comes out will look like.
The idea behind output plugins is this: I might want the replication to stream the data only in the most efficient format, Postgres's internal one, because the receiver is for example another Postgres with a compatible format. Other people want SQL text, because they want to just replay it into a variety of other databases. There are lots of different things you can imagine; sometimes you only need the primary key and nothing but that. Since we didn't want to add a fixed format with a myriad of options, the output is produced by user-defined code, and that user-defined code can then format the data itself. It works by providing a couple of callbacks. There's one callback that gets called when the plugin should be initialized, and then the important ones run every time a transaction is decoded: the begin callback is called first, then for every single change that happened in that transaction, insert, update, or delete, the change callback is called. Even if you delete a hundred thousand rows at once with one DELETE, the change callback will be called a hundred thousand times, once for every single row. Then, when you commit, the commit callback runs. If you look closely, there's no abort callback; that's because aborted transactions are never, ever decoded. You only get the contents of a transaction once its commit has happened in the source database. If we stop decoding, a shutdown callback is called; that's usually only used to free memory and similar things. What's very interesting is that whenever the change callback is called, you actually have access to the transaction surrounding the change, so you know that this transaction committed at this timestamp, which is something you could never do before.
You also always get the changes in the order the commits happened internally. That might differ from the commit order perceived by users, because sometimes the network is slower for one connection than for another.
There are already a couple of output plugins built. The original commit included the test_decoding plugin, the output you saw earlier, which includes the column names, the column types, and the data as text. Admittedly that's not very useful for everyone, but it's very useful for testing, because it shows most of the information that's available. Then somebody has written an output plugin that outputs JSON, and that's already practically useful. I think there's a fair amount of discussion to be had about the way database changes should be represented in JSON, and I'm not 100% happy with the output format, but I think we want a JSON format in core, in my opinion, although it has to be somewhat different. And somebody else, who unfortunately is not here, has written an output plugin that just emits everything as SQL statements: if you update a row, you get an UPDATE statement. That means if you originally had one UPDATE statement updating the whole table, you now get one UPDATE statement for every single row, because everything is replayed row by row.
The way it works is that the data is initially stored inside the write-ahead log of Postgres; the output plugin never, ever gets called before the data is inside the write-ahead log. The output plugin gets as an argument the tuple type of Postgres, which is just the internal representation of data, and it doesn't need to know where it's coming from. Every time data is streamed out, it gets converted anew; there's no caching or anything going on.
The replication slots we created initially with pg_create_logical_replication_slot are there to reserve resources. Without them, a checkpoint could remove write-ahead log that's still needed by the decoding; with a slot, the system says it can't remove that, it's still reserved. The same mechanism can now be used for normal streaming replication: you can have a slot that says you can't remove any write-ahead log that's still needed by a standby. That's the reason the slots exist: to reserve the resources we need for some form of replication.
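The same reservation mechanism is exposed for physical streaming replication; a minimal sketch (the slot name is made up):

```sql
-- On the primary: a physical slot that pins WAL for one standby.
SELECT * FROM pg_create_physical_replication_slot('standby_1');

-- The standby then references it (in 9.4: recovery.conf):
--   primary_slot_name = 'standby_1'
-- and the primary keeps all WAL that standby still needs.
```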
[Answering an audience question] Unfortunately, I don't think so. There are additional columns in the view that show things like the oldest data that's still required by the slot. The plugin column is the base name of a shared object: what happens when you create a replication slot with that plugin name is that Postgres looks for a shared object with that name, loads it once, and the plugin registers its callbacks, which are function pointers, and then it's initialized. That's explained in the documentation; nothing particularly interesting there.
You might say: so what if my table doesn't have a primary key? Or what if I also want the whole old version of the row, because I want to compute the differences between rows? There's a new feature called replica identity: you can say ALTER TABLE ... REPLICA IDENTITY. The default is to use the primary key if the table has one, and that's what gets made available to the output plugin. If there's no primary key and you do an update, you won't have access to the old version of the row; only the new version is available. What you can also do is say explicitly that you want to use another index as the identity. Often enough you have several indexes on a table, say an id column, but the username is also unique, so you can say my external solution doesn't care about the internal id, I want to use the username index instead. Then you can say you want the full row; that's obviously going to need a bit more space in the write-ahead log, because for an update you suddenly need to store the old version and the new one. It also might be that in your use case you only need the new version, because you don't care about the old data, so you can just say NOTHING, and the overhead will be the smallest. What might be important and interesting: if it's set to default or to using an index and you update a row without changing the primary key, we won't actually store the primary key again in the old-row data, because you can just get it from the new tuple; it will be there.
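The replica identity settings described above map to ALTER TABLE options; a sketch (the table and index names are illustrative):

```sql
-- Default: old-row columns come from the primary key, if one exists.
ALTER TABLE talks REPLICA IDENTITY DEFAULT;

-- Use another unique index as the identity instead of the primary key.
ALTER TABLE users REPLICA IDENTITY USING INDEX users_username_key;

-- Log the complete old row on UPDATE/DELETE (more WAL, full diffs possible).
ALTER TABLE talks REPLICA IDENTITY FULL;

-- Log no old-row information at all (lowest overhead).
ALTER TABLE talks REPLICA IDENTITY NOTHING;
```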
So what do we think this is going to be useful for? The primary reason we want this is replication. Currently, with streaming replication, you can't have any write activity on the standby, which is fine for many use cases; for high availability and failover that's OK. But you can't create temporary tables, you can't have additional indexes; all that's not possible. So we want a logical replication solution that allows all that activity on the standby. There are additional reasons why streaming replication is not suitable for every case: primarily, it doesn't allow cross-version replication. If you upgrade from 9.3 to 9.4, you're out of luck, because you can't use streaming replication for that, and if you have a multi-terabyte database, using pg_dump or pg_upgrade has its problems. That's why at the moment people use Londiste or Slony for this, but those have the performance problems I mentioned; we'd like this to make that possible, and faster. Another very common use case is cache invalidation: say you don't trust your PostgreSQL database to handle all the requests, so you put a key-value store in front of it, or say you put a caching proxy in front of your website, and you want to invalidate entries in that cache. You can create a replication stream with an output plugin, and writing one is easy, that only outputs the keys to the cache, and the cache just evicts all the incoming data that's been changed. Another very common use case we have to support is auditing: many installations require that all changes be visible in some usable format. Using log_statement doesn't really help with that very well, because it isn't in a consistent order, you can't query it because it's just text, it doesn't handle rolled-back transactions, and it's also very, very expensive to log every single query. Another thing is aggregating data: you have several databases that have the same schema, and for some odd reason there's still another, non-Postgres database around, and you want to update it with the new data from Postgres, because it's old software that's still in use. All that should be relatively much easier with this. You'll also need some things we don't have yet, though I think we're going to get there, like the ability to attach user-configurable information to every transaction, because obviously the username is going to be helpful. What you can do today is create something like a statement trigger that, once per transaction, adds the username into some special table; that's obviously not very nice, and I think we want more, but there was a limit to what I could get in. I wanted Robert not to kill me.
So, the interfaces. There's the very simple SQL interface I just showed: it's just SELECT * FROM the function, and you get all the data. There are a couple of variations. There's a variation that peeks into the data stream without consuming it, which is very useful if you're playing around: you can say, what are the next hundred changes, show me them without consuming them. There's also a version that supports outputting the data in binary, which might be useful if your output plugin streams binary data. The problem with the SQL interface is that it's relatively expensive to call the function. It's OK if you call it to stream out a gigabyte of data, but calling it for every single change is difficult, because initiating the logical decoding process takes some time and might read data from disk repeatedly, so you probably shouldn't call it hundreds of times a second. The next problem is that, obviously, a set-returning function can't stream the rows to you as they come in; at the moment that's not solvable with the SQL interface, since there's no COPY-style streaming from a function call. So those are the concepts. What we also have is the so-called walsender interface, which is already used for streaming replication, for pg_basebackup, and things like that. Now there's also a command that allows you to get the data in the logical format. It's pretty much the same thing you can do from SQL: it's not pg_create_logical_replication_slot but CREATE_REPLICATION_SLOT ... LOGICAL, in upper-case letters. The advantage is that it streams the data as the data is produced on the primary, which is very useful because we get the data very fast and there are no repeated startup costs. Very interestingly, it supports synchronous replication, which is something none of the trigger-based solutions have figured out how to do. There's a command-line utility to receive those changes called pg_recvlogical; that's another thing that was repeatedly renamed.
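The SQL-level variations mentioned above, sketched (slot name as before):

```sql
-- Peek at pending changes without consuming them.
SELECT * FROM pg_logical_slot_peek_changes('my_stream', NULL, NULL);

-- Peek at only the next 100 changes.
SELECT * FROM pg_logical_slot_peek_changes('my_stream', NULL, 100);

-- Binary variant, for output plugins that emit binary data.
SELECT * FROM pg_logical_slot_get_binary_changes('my_stream', NULL, NULL);

-- Over the walsender protocol, the rough equivalents are:
--   CREATE_REPLICATION_SLOT my_stream LOGICAL test_decoding
--   START_REPLICATION SLOT my_stream LOGICAL 0/0
```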
The problem with pg_recvlogical is that it uses the same protocol to get the data out that streaming replication uses, and that's basically a binary protocol: you need to be able to say, OK, those 8 bytes are an LSN, and those 8 bytes are a timestamp, and that requires a bit of fiddling. I really hope we can get a small Python library, a small Ruby library and such, that just gets the data in a more convenient form, but we don't have that yet. On benchmarks: with pgbench, while streaming out the changes, the overhead was like 6 percent. That was with the test_decoding output, comparing to not doing it at all. In comparison, I tried both Londiste and Slony, and they completely fell apart because they couldn't keep up with the performance: the backlog was just never decreasing anymore in any meaningful way, because the overhead is so large. With logical decoding, one decoding process could keep up with pgbench saturating a system with 32 cores; there are costs, but a single decoding process could keep up with that, and I think there's room for improvement. It also very heavily depends on what the output plugin does: if the output plugin converts everything to text, it's different from when it just streams out the data in binary format, because the conversion from the internal format to the text format has a cost. There are trade-offs to be made. We can also do things in the output plugin like saying, I don't need the changes to this table, just skip it.
So what are the problems with this, as it's committed and coming as part of 9.4? The biggest issue probably is that only committed transactions are decoded. If you have a very, very large transaction, say you just restored an entire database and it's one terabyte large, nothing will happen while that one terabyte is written; the whole thing is decoded at once when the commit arrives, and obviously then you're going to be busy for a while streaming that one terabyte, because even on a modern system that takes a while. I think there is a very good case to be made that we want an interface that optionally allows streaming data out while the data is being written. The problem is that this makes the apply side very, very complex, because you need to be able to deal with multiple open transactions at the same time, any of which might abort at some later point, and Postgres currently doesn't, on the receiving end, have any infrastructure for having a single process with multiple transactions in progress at the same time. It might be that if we get autonomous transaction support in Postgres, this becomes easier, and that might be sufficient reason to implement it; I don't think the sender side is very complex to implement. The question is how the interface looks and how to apply the changes coming from it. The next biggest problem, which I personally find to be much bigger, is that we don't decode DDL itself. As I showed before, there was no output for the ALTER TABLE in the stream. What we need is the ability to get a command that says: this table has been altered, the new schema looks like that, and the command you need to run to transform the one into the other is this SQL. Luckily, Álvaro has worked on that, and we have a patch for 9.5 that implements DDL replication for most commands. The things we don't support are CREATE DATABASE, because the decoding only works at the database level, so it wouldn't really make sense, and CREATE ROLE and CREATE TABLESPACE: basically, we don't support any of the operations that are bigger than the current database and have cluster-level scope. It's going to be interesting to see whether that gets into 9.5; I'm going to fight for it, and I think a couple of other people will too. There are a couple of other features this could very well use. For example, it would be very interesting if we had support for decoding two-phase-commit transactions separately: you'd get the contents of the transaction when the two-phase PREPARE happens, then you can ship it to the other side, the other side prepares it, and you only do the COMMIT PREPARED once the other side has successfully applied the transaction. What that allows you to build is multi-master systems that are optionally synchronous. I think it can be the basis of a very powerful set of features. I also think there can be some interesting additional options, like the replica identity we have, that allow you to specify on a per-column basis what gets logged, to tailor the overhead to whatever your table and your replication solution need. Those are the ones I see.
So why did we go through nearly two years of work on this? The reason is that we're working on a project called BDR, and that's asynchronous multi-master replication. We want that to work, it's open source, we want it to be available in PostgreSQL, and we put as much as we're allowed to into core Postgres, because we think many of the scenarios that are coming up are much harder if you need to build it all yourself. For now, the conflict resolution we're working on is last-update-wins, and that's good enough for many usage scenarios. There are obviously other scenarios where you can't use last-update-wins; that's why we support conflict handlers, where you can say: OK, if there has been a conflict between this and that update from two different nodes, this function gets called, and the function can say this is an error, I don't replicate anymore, this is all broken, or it can say use this tuple as the result of the conflict. It's all open source, and we're trying to get it integrated into Postgres. I have a very small demo of that.
As you can see, this is one shell, and there's another shell; since I didn't want to rely on the network here, it's all on my computer, with each psql connecting to a different port. Right now there's a table and a sequence, so what I'm going to do first, since everything worked when I tried it earlier, is drop the table and the sequence on this node, and now we can check the other node: yes, it replicated, even the sequence drop. We can also create a new sequence, and as you see, there is new syntax here: USING bdr. What that does is say we want a distributed sequence that works across a multi-master system. Then we create the table again; it doesn't come out very well on screen, but this time we have an id key that defaults to nextval of the sequence we just created, and it's the primary key; we also have a talk and a description. Now I'm going to say that wasn't what I intended and change the owner; supposedly that should change the owner of the table so it doesn't get dropped together. Not sure why that's not working, it did when I tried earlier. Then I insert two rows, including this very arbitrary one, and you can see it worked; as you can see, the voting algorithm for the sequence doesn't start at 0, but that's not much of a problem. Now I insert into talks on the other node as well; it's very inconvenient that the screens don't sync up, but both nodes now have the same data, and as you can see, they use different ranges of sequence values. Now let me show how it's configured.
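The distributed-sequence part of the demo looked roughly like this (early BDR prototype syntax; the names are illustrative, and the USING clause is specific to that prototype, not stock PostgreSQL):

```sql
-- Global sequence coordinated across the BDR nodes.
CREATE SEQUENCE talks_id_seq USING bdr;

CREATE TABLE talks (
    id          int PRIMARY KEY DEFAULT nextval('talks_id_seq'),
    talk        text,
    description text
);

-- Each node draws values from its own voted-on chunk,
-- so concurrent inserts on different nodes don't collide.
INSERT INTO talks (talk, description)
VALUES ('SELECT * FROM changes;', 'explains logical decoding');
```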
I think we're going to want to change the configuration at some point, before we can convince anybody to integrate this into Postgres proper; at the moment it's configured by setting some GUCs. We have a list of connections, naming the two other nodes this node replicates with, and for each of them a connection string that specifies where to connect; there's a third node we could configure that I don't have space to show on the screen. If you look at the other node, its configuration is very much the same, except it obviously doesn't list itself, because it doesn't want to replicate to itself; there are just the other two nodes in its configuration. That's basically all you need to do to replicate the data between the nodes. I think we're going to want something like a top-level CREATE NODE command that connects to the other node, but there's some infrastructure required for that, and we don't want to patch Postgres's grammar any more than one needs to. So that's the configuration we have right now.
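The GUC-based setup described above looked roughly like this in postgresql.conf (the parameter names follow the early BDR prototype and are illustrative only):

```
# Nodes this instance replicates with, plus one DSN per node.
bdr.connections = 'node_b, node_c'
bdr.node_b_dsn  = 'port=5440 dbname=demo'
bdr.node_c_dsn  = 'port=5441 dbname=demo'
```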
We're getting close to the end, so I want to say thank you very much: to those who have allowed us at 2ndQuadrant to work on this, because without that funding it would have been impossible to work two years on one feature. I think that's the reason why all the existing replication solutions have used tables with triggers: it just takes an enormous amount of time to implement anything better than that, and it's a very convenient solution. And thanks to the community for review. I started two and a half years ago by sending an email saying, yeah, this is the feature I want to work on, and I want it in master, and I think in the next two months there were like 400 messages and lots of arguing, but what we got is very, very much better than my first approach. The first approach was kind of ugly, to put it mildly, but we improved on it; it was a prototype, and it went through lots and lots of review. I particularly want to thank Robert here, because without his review it's rather unlikely that the feature would have made it into 9.4; many thanks. I also want to thank the people who worked on output plugins before the feature was even committed, like Steve Singer, who provided very early feedback on the API; I have my doubts that without that feedback, without somebody showing that a solution other than the one I wanted to build had a chance to use this, it might not have gotten in. So thanks to all the output plugin authors too. And thanks to my girlfriend, for letting me spend far, far, far too much time on work.
[Q&A] On the distributed sequences: we have a new access method API for sequences, which allows alternative implementations of sequences. A BDR sequence, when it needs a new chunk of values because at the moment it has no more cached values, votes with all the other nodes in the system: to avoid two nodes picking conflicting values, it asks for a chunk, the other nodes say they don't care and it may use it, and once the majority says OK, it starts using the chunk. At the moment the chunk size is hard-coded to a couple of thousand values at a time; we want to provide something like ALTER SEQUENCE ... SET batch_size at some point, so you can set different batch sizes. It has to be the same sequence name on all the servers, and the sequence itself is replicated, so it's just the same sequence across the nodes.

[On the voting traffic] It uses the same cluster network that's configured there. You're right, I'm afraid there will be a couple of bigger complaints: the approach relies on voting rounds, which does more round trips. I didn't want the usual interleaved sequences in a cluster, because those have rebalancing problems when membership changes, so you'd need some consensus mechanism to agree on a new interleaving anyway. If you don't need any of this, you can still use plain sequences and set up an interleaving yourself with CREATE SEQUENCE ... INCREMENT BY the number of nodes, as people do today. As a test, I now have a gapless sequence implementation; it doesn't allow any concurrency, it's for local use, because people ask for that even though it isn't fast. The API is pretty much do what you want: the options are there, and you can implement whatever works for you. The advantage of doing something like this is that you can say a local chunk is only valid for a minute, and then you have mostly ordered sequence values across nodes, so ORDER BY a sequence column gives you a very reasonable ordering globally, and that can be very helpful.

[On reconnecting after a disconnect or crash] When you use the walsender protocol, you can say, give me all changes that committed after this position, and on the receiving side you remember which transactions you have already replayed; if you disconnect, you resume from there. We have an extension proposed for Postgres that integrates that on the replay side automatically: the local commit carries the information about which remote commit was being replayed, so crash recovery can recover all of that. There's one particular person in this room who doesn't like that approach, but I think it's very neat and I like it. So that's basically the approach: you remember how far you replayed, and then you request all the changes since last time. Every now and then you use the protocol to send feedback saying, I have received up to here, and anything older can be thrown away; you should do that frequently, as a kind of keepalive, and at the moment the server even asks you to please send a feedback message now. That's all in the protocol half that's built in for streaming replication, because that also wants this information; we just use exactly the same protocol.
That's also what allows us to implement synchronous replication without any further changes; there wasn't anything extra we had to build for that. Any other questions? Let me switch back to my computer from before; there are at most two or three slides left.
There's a project page that has a manual and has the git repository, and I think at this point you should use not the BDR branch but the bdr-next branch, because it has made some progress since the last release. It will be released more officially once 9.4 is out.


Formal Metadata

Title SELECT * FROM changes; - Part 2
Subtitle Logical Decoding
Alternative Title SELECT * FROM changes; -- Changeset Extraction
Series Title PGCon 2014
Number of Parts 31
Author Freund, Andres
Contributors Crunchy Data Solutions (Support)
License CC Attribution 3.0 Unported:
You may use, modify, and reproduce, distribute, and make the work or its content publicly available in unmodified or modified form for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
DOI 10.5446/19089
Publisher PGCon - PostgreSQL Conference for Users and Developers, Andrea Ross
Publication Year 2014
Language English
Production Place Ottawa, Canada

Content Metadata

Subject Area Computer Science
Abstract 9.4 saw a great deal of development around a feature (now) called changeset extraction. This talk will explain what the feature does, which areas of use we see, what the state of the feature in 9.4 is, and which additional features around it we want to see in future releases of postgres. Usecases for the changeset extraction feature are: Replication Solutions, Auditing, Cache Invalidation, Federation, ... Changeset extraction is the ability to extract a consistent stream of changes in the order they happened - which is very useful for replication and auditing, among other things. But since that's a fairly abstract explanation, how about a short example?

-- create a new changestream
postgres=# SELECT * FROM create_decoding_replication_slot('slot', 'test_decoding');
 slotname | xlog_position
----------+---------------
 slot     | 0/477D2398
(1 row)

-- perform some DML
postgres=# INSERT INTO replication_example(data) VALUES('somedata');
INSERT 0 1

-- and now, display all the changes
postgres=# SELECT * FROM decoding_slot_get_changes('slot', 'now', 'include-timestamp', 'yes');
  location  |   xid   | data
------------+---------+---------------------------------------------------------------------
 0/477D2510 | 1484040 | BEGIN 1484040
 0/477D2628 | 1484040 | table "replication_example": INSERT: id[int4]:1 data[text]:somedata
 0/477D2628 | 1484040 | COMMIT 1484040 (at 2014-01-20 01:18:49.901553+01)
(3 rows)

All this works with a low overhead and a configurable output format.
