Simple SQL Change Management with Sqitch
Formal metadata
Series: PGCon 2012 (part 16 of 20)
Identifier: DOI 10.5446/19036
License: CC BY-NC-SA 3.0 Unported (Attribution, NonCommercial, ShareAlike)
Transcript: English (automatically generated)
00:06
So I'm going to talk to you today about simple SQL change management. But first, I want to start out with a brief review of what I, at least, think is wrong with the typical migration models that we're used to seeing. One is that often they are implemented
00:22
with an incomplete mini-language. My experience with this stuff has primarily been with Rails migrations, and I found that I pretty much just used the SQL interface and ignored all the mini-language stuff, which was a very, very small subset of SQL. But that's what those sorts of systems encourage.
00:43
Another issue I had with Rails migrations in particular was there was no integration with logical replication. We got Slony all set up on our system before I realized that we couldn't actually deploy changes to Slony via migrations because they're completely incompatible,
01:00
being Ruby scripts. Another issue that I'm sure you've all had headaches with is that numbered scripts are hard to track. So you either have to have some sort of system of allocating numbers to developers or using epoch timestamps, which are meaningless and long and ugly or some sort of thing.
01:24
I work at a company called Iovation. And for our existing deployment stuff for Oracle, we have a wiki page where developers have to log in and register a number on the wiki page saying, this is my number, before they can write a deployment script.
01:43
Another issue is that there's no VCS awareness in these systems. They are completely independent of VCS, and they don't necessarily know it's there. So what about SQL migrations? These fix the issues with mini language, because now we can just use the native scripting language of the database we're targeting.
02:03
And because we're doing that, the logical replication issue is less of an issue because, again, we could have for Postgres pure SQL scripts to do the deployments. But another issue that's also true of the numbered migrations
02:20
is that managing stored procedures is a pain in the ass with deployment systems. For example, imagine that we had an existing stored procedure and we needed to make a relatively simple change: add three lines of code to it.
02:41
Here's the diff. Not too big a deal. This is from some really old code I had years ago. But you can't make a change inline in a migration script. What you have to do instead is paste the entire function from the original script that you used to create the function
03:02
into a new one, and edit the new file to make your changes. So now you have a complete copy to make a three line change. Then you copy the original one again to the down version of the migration script here so that it will recreate it as it was in case you have to revert.
03:23
So now we have three copies of what is essentially the same function. And we've lost the advantage of using the source code system to track those changes. And I'll show you some more about that later. But I have believed for a very long time that this really sucks.
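To make the problem concrete, here is a minimal sketch of what that looks like in a typical numbered-migration layout. The file names and the function are purely illustrative, not the speaker's actual code:

    -- migrations/0042_do_something.up.sql  (original)
    CREATE OR REPLACE FUNCTION do_something() RETURNS integer AS $$
        SELECT 1;
    $$ LANGUAGE sql;

    -- migrations/0057_tweak_do_something.up.sql  (the whole function
    -- pasted again, just to change one line)
    CREATE OR REPLACE FUNCTION do_something() RETURNS integer AS $$
        SELECT 2;  -- the only real change
    $$ LANGUAGE sql;

    -- migrations/0057_tweak_do_something.down.sql  (the original pasted
    -- a third time, so a revert can restore it)
    CREATE OR REPLACE FUNCTION do_something() RETURNS integer AS $$
        SELECT 1;
    $$ LANGUAGE sql;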
03:44
What about Liquibase? Well, I haven't looked at it too much except to see this. And that was enough. I'm a database practitioner. I am perfectly happy and comfortable writing
04:01
in this really great domain-specific language called SQL. Not so enamored of writing SQL as XML. depesz, who is a very well-known Postgres hacker, has his own versioning system, which I have actually been using at work.
04:20
It's pretty nice. You can get it on GitHub. And basically what it is is it's just a couple of stored procedures and a couple of bash scripts. And one of the nice things about it is it does dependency resolution. So you can say, this patch requires these other patches or conflicts with these other patches. And when you do a deployment, because the deployments are
04:43
done by first calling stored procedures, those procedures can check those dependencies and throw an exception if they are not met, which is pretty nice. Of course, it has very, very tight Postgres integration, it being solely stored procedures written
05:01
in the database and storing stuff in a magic schema. Again, there's no VCS integration, no tools other than the bash script and the quick Perl script I wrote at work to do stuff, which parses the SQL at the beginning of every file. And managing procedures is still a pain in the ass because not only do I have to copy the file,
05:23
but because the first line of the deployment scripts is a call to register a patch, I have to be very particular about that. I can't use \i to include one script in another script, because that function call will
05:41
create problems. So we still have to create three copies of everything. So I've struggled with this on and off for several years. And earlier this year, I had an epiphany about how I thought it might be useful to change the way we do this stuff. And I've been blogging about it and thinking a lot about it
06:00
and working on the implementation now of something I call Sqitch. So you might be wondering: what? So quickly, how I came up with the name is I was thinking about SQL change management, and what could I get out of that? So I started with just 'sq' and 'ch', like this.
06:22
I thought, well, it might be nice to have something in the middle. What if we stuck a VCS in there? I'll just dump the G and push it all together. That's why it's called Sqitch. There is no U in this word. It's not "squitch."
06:46
So let me talk a bit about the philosophy of where I'm coming from and kind of the goals I have for the project. I wanted to have no opinions about what database system you were using. And this means primarily that you do the deployment
07:02
scripts in the native SQL scripting interface for your platform. So for Postgres, you'll write psql scripts. For SQLite, you'll write sqlite3 scripts, et cetera. I wanted to have VCS integration because the VCS is pretty good at taking care of your history.
07:21
You can use the history tracking of the VCS to help you track the deployment order of your changes. I wanted to steal dependency resolution, at least in a minor way, from depesz's stuff so that you could declare basic requirements and conflicts. And I didn't want to have any fucking numbering.
07:45
But I also wanted, even though it's going to have integration with version control systems like Git, I also wanted to have a way for people to be able to bundle things into distributions so that you don't have to have a VCS on target systems to use it.
08:01
You don't have to have Git on the target system; you just have to have Sqitch. I wanted to be able to bundle it up so that Sqitch could read the plan and do the deployment. And so you could bundle things up in an RPM or a tarball or whatever you want.
08:22
Particularly with reference to the procedure stuff, I wanted reduced duplication, because that's, I guess, a pet peeve of mine. But I also wanted it to have built-in configuration. I modeled this on Git's interface so that you can tell it what database you're deploying to,
08:43
what the name of your database is, all sorts of other stuff that you can put in there to help ease the management of your changes. It also includes deployment planning. So this is a file that lists out what should be deployed where and when.
09:01
It also has the concept of tagging, which I stole right out of version control systems. So a bit of terminology. A step is basically a single patch or a single change script that is applied to your system.
09:21
And there are actually three scripts that correspond to a step. There's one for the deployment, one for the reversion, and one for testing, so that you can run a test after you do the deployment. The next term is a tag. A tag is a named list of steps that represents a release point, basically, in the application you're developing.
09:47
The plan is the list of tags and their corresponding steps to migrate up and migrate down in your database. Deployment, of course, is making the changes and deploying them to your database.
10:00
And revert is removing those changes from your database. So before I go any further, a few caveats. Sqitch is under heavy development right now. I released a testing release today, 0.3. This is alpha, alpha, alpha code
10:22
for people to start playing on. But functionality is rapidly evolving. This talk is based on a tutorial I wrote up, which also functions as a functional specification for the project. And of course, things are changing as we're doing development and figuring things out. So there are some good ideas here
10:42
that will be in the core of whatever becomes Sqitch 1.0. But don't expect everything you see today to be in that version, because it will change. And people will tell me I'm being stupid in various ways and will make it better. There are still some gaps to be filled in. There are some places where I've engaged in some arm waving
11:01
and said, then magic happens. Those will get filled in. And I'm still thinking about how to do the VCS integration, but a bit more on that later. But let's start with some of the original ideas here. So here's how it works. Let's say we wanted to create a project.
11:21
I'll call it Flipper. It's the anti-social network. I'm just going to create a Git repository. And then I'm going to run sqitch init. So I'm initializing this project for Sqitch. And what this is doing is it's writing
11:41
directories that will be used for the deployment, reversion, and testing scripts. And then it's writing a configuration file. The configuration file will help, obviously, with, at least I hope it's obvious, with deployment stuff. So let's take a quick look at it. The default one is just this.
12:00
So there's a core setting, and the engine I'm using is pg. And that's because I used the --engine option when I ran init. So now, with this here and this directory, as long as I'm running Sqitch from within this directory, it will always assume the pg engine when it's doing deployments and reversions and whatnot. There are a bunch of other settings you can see here for plan files
12:21
and various locations of things and the extension to use on the files, the scripts that you write. And then there are database engine-specific settings you can set here as well. So I modeled the configuration of Sqitch on Git's configuration stuff.
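A sketch of what that setup might look like, using the names from the talk (the exact option spelling, file names, and commented-out settings are approximate):

    $ mkdir flipper && cd flipper
    $ git init .
    $ sqitch init --engine pg

    # Resulting layout, as described above:
    #   sqitch.conf   -- project configuration
    #   deploy/       -- deployment scripts
    #   revert/       -- reversion scripts
    #   test/         -- test scripts

    $ cat sqitch.conf
    [core]
        engine = pg
        # plan_file = sqitch.plan
        # extension = sql
    [core "pg"]
        # client  = psql
        # db_name =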
12:41
So for those of you who may not be familiar with it, there are three levels of configuration. There's configuration locally in the directory you're currently in. That's in the .git/config file. Then there's one for your user information, which Git calls global. And that's in your home directory, and it's called .gitconfig, I think.
13:02
And then there's one that's system-wide that's called /etc/gitconfig. And so when you run Git, it finds whatever configuration is in your current directory, and it falls back on stuff in your user directory, and it falls back on stuff in your system directory. So this is useful to set things system-wide
13:21
or for all of your personal projects as a user, and then have stuff specific to individual projects in those directories. So one of the things I want to do is set up Sqitch so that it knows where my Postgres psql client is at all times. So I use the config command, and I tell it
13:42
that I want to make a change to the user configuration, and I'm just setting the pg client to the location of psql. And if we take a look at that file, which is .sqitch/sqitch.conf in your home directory, you can see the configuration there.
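A sketch of that command and the user-level file it writes (option and key names as described in the talk; the psql path is illustrative):

    $ sqitch config --user core.pg.client /usr/local/pgsql/bin/psql

    $ cat ~/.sqitch/sqitch.conf
    [core "pg"]
        client = /usr/local/pgsql/bin/psql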
14:00
There are all kinds of changes you can make to the configuration stuff; there are quite a few more configuration settings than are currently used in the system. But this is nice because now, whenever I'm doing development on this particular system, it always knows where to find psql. So I'm going to go ahead and commit the configuration file
14:22
and those empty directories. And now it's time to write our first deployment script. So the command to do that is called add-step. And what it does is it writes out three scripts for use as your step. The first one is the deploy script.
14:41
The second one is the revert script. And the third one is the test script. Let's take a look. So this is what the default deploy script looks like. This is based on a very simple templating language, and you can change the templates if you want in your personal .sqitch directory,
15:01
in your home directory. And basically it just sets up a placeholder and suggests you use transactions. So all I'm doing is I'm changing it to create a role that is going to be used by my application. And then I'm going to change the revert script here
15:21
to just simply drop that role. Pretty straightforward. This is going to be familiar to anybody who's done migrations kind of stuff, right? So let's give it a try. I'm going to create a test database, and then I'm going to run sqitch deploy, and I tell it to use this database.
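Roughly what the edited scripts and the first deployment look like; a sketch using the talk's names (the flipper role and the flipper_test database are the tutorial's examples, and the flag spellings are approximate):

    -- deploy/appuser.sql
    BEGIN;
    CREATE ROLE flipper WITH LOGIN;
    COMMIT;

    -- revert/appuser.sql
    BEGIN;
    DROP ROLE flipper;
    COMMIT;

    $ createdb flipper_test
    $ sqitch deploy --db-name flipper_test --untracked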
15:42
I'm also using the --untracked option here, and this means: find those steps that have been created but are not yet tracked by the VCS. And when it does that, it deploys to a special tag named HEAD+, meaning stuff in addition to what's known to the VCS.
16:00
But you can see here, it has in fact deployed the app user step. And if we take a look at the database, you can see that the role was successfully created. So the status command shows you the current status of the database you ask it about.
16:22
I haven't written it yet, so this thing here means it's planned, but not implemented yet. But the idea here is it'll show you what the current tag is, what the last step deployed was, and the date it was deployed. It probably should say by whom too, now that I think about it. And it will also say: you are fully up to date.
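The planned output might look roughly like this; it was not implemented at the time of the talk, so this is purely a sketch:

    $ sqitch status --db-name flipper_test
    # Tag:      HEAD+
    # Step:     appuser
    # Deployed: 2012-05-17 10:42:03
    # By:       david
    # Nothing to deploy (up-to-date)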
16:45
Right, I said that. So the metadata is stored in the database. So for Postgres, there'll be a special schema called sqitch, although you can change the name of the schema, and it tracks the metadata there. There are several tables.
17:01
We can also revert. If you tell it to revert without telling it what to revert to, it'll revert everything. I have only the one tag, so it's reverting all changes and removing that step. And sure enough, if we look, the role is now gone.
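A sketch of that invocation, under the same assumptions:

    # With no target given, revert backs out every deployed tag and step,
    # most recent first; here that just runs revert/appuser.sql.
    $ sqitch revert --db-name flipper_test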
17:22
I hope this is not shocking to anyone. It's terrible. You have to put a tag, or you have, like, a dollar-sign tag and you forgot to set it, and you'd be like, surprise. This is why somebody suggested, in an issue they filed on GitHub, why not make it so that deploy also does revert?
17:42
So you just tell it what to revert to. I said, because I want that to be something really separate. In fact, probably for revert, if you don't specify anything, it should say, are you sure? One thing that, if you type in my production ENV
18:05
and push enter, it blows your own network. Right, yeah. Yeah, there are various ways to try to protect people from shooting themselves in the foot. I'm pretty good at that, so I'll probably end up putting them in, shooting myself in the foot, I mean.
18:21
So now if we run the status command, it's gonna show that nothing is deployed, but it'll also show what hasn't been deployed and what you can do, and see, I like that, yeah. So it says, hey, the HEAD tag could be deployed with its app user step. Oh, and yeah.
18:45
Now I'm going too far. So then it tells you what you can run to actually deploy it. This is a new day. It's the same as having a new database, basically. The log command will read the history.
19:03
So recall that we've reverted the change here, and yet if I run sqitch log, it will show me two records here. One, the most recent one: this step was reverted by David; and this step was deployed by David.
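A sketch of what that history might look like (the output format is illustrative):

    $ sqitch log --db-name flipper_test
    # reverted  appuser  2012-05-17 11:02  David
    # deployed  appuser  2012-05-17 10:42  David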
19:24
So even though we've removed stuff, we keep a complete history, so you can really hopefully have a better idea of whom to blame when things go wrong. Magic. Got it. You see this?
19:45
Minds out, can't type. So I'll go ahead and commit that change. So now we have our first change checked in, and we could again say deploy,
20:00
and this time it will deploy using the Git SHA-1 as the tag name. This would be a potential default. This part isn't actually working; I haven't done the VCS integration yet, hence the bang. But the idea then is that we would have this named
20:21
by the VCS, which is keeping track of our tags for us. And if we run the update, or excuse me, the status command again, it will show us now this is the tag, and this is the user, and hopefully the correct timestamp.
20:42
Now it is getting a little tiring continually typing --db-name and the name of the database, because I'm doing development, I'm doing rapid iterative stuff, and so I just want it to always use the flipper_test database. So I'm calling the sqitch config command, and I'm telling it the core Postgres database name
21:04
for this project is flipper_test. The other nice thing about that is that if somebody does this without specifying a database in production or something, it won't break production. They will have to specifically type in the name of that database, because the configuration file names the test database. But now, of course, I don't have to type
21:22
--db-name, but I still get the status information. So let's add dependencies. I'm gonna add a step now, and I'm gonna say that this new step, which I am calling users, requires the app user step that I've already created.
21:44
And so when it writes out the deploy script, it notes that this is required. And what that means if we take a look at the deploy script is that there is now a line with this little mini language, if you will, that says this step requires this other step.
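A sketch of the add-step invocation and the scripts it yields, with the users table filled in (the column list and the exact requires-comment syntax are illustrative, not the speaker's exact code):

    $ sqitch add-step users --requires appuser

    -- deploy/users.sql
    -- requires: appuser
    BEGIN;
    CREATE TABLE users (
        nickname  TEXT        PRIMARY KEY,
        password  TEXT        NOT NULL,
        added_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    COMMIT;

    -- revert/users.sql
    BEGIN;
    DROP TABLE users;
    COMMIT;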
22:01
The values here can be a comma-delimited list, or you can have multiple lines for your requires. The template puts in multiple lines for you. So let's go ahead and create our users table. That's good. And then I'll add the revert script
22:21
to simply drop the users table. So let's deploy that. Again, I'm doing --untracked because I've not checked these into the Git repository. And so it's deploying HEAD+ again, with the users step. And if I go ahead and look at the database,
22:40
there's our users table. And the status shows that's where we left off. Again, tracked by the database. And we can revert to HEAD, HEAD being the last known commit in the version control system. And so it's going to revert the untracked stuff, which
23:03
means I can now check it in. And then I can deploy it, and it'll get the new SHA-1. The little beetle means that this is buggy, so you've been warned.
23:23
So again, I'm running the status. And this time I'm using the --show-tags option. So in addition to showing me what the most recent tag is and the most recent step, it's showing a list of all the tags that have been deployed, when and by whom. So with the show options, there's quite a bit.
23:40
There are a couple of different things you can get it to show you for additional information, so you can get more information about the status of your database relative to your plan. Now, it's time to create some stored procedures. We have the users table, and I want to have a function to insert new users into it.
24:03
This, of course, is going to require the users step and the app user step. The app user step because I'm going to grant to it. So again, when I add the step, it notes that they have been required. I'm going to add a second one called change_pass.
24:22
And this is going to be a stored procedure I'm going to write to change the user's password. Again, same dependencies. Do you have to specify dependencies? You don't. And so tags are always run in order. So if you have a dependency on something in an earlier tag,
24:43
you don't have to specify it because it's going to be installed. Within a tag, if you have a number of steps, they could be applied theoretically in any order. And so you do want to declare the dependencies within a tag. I probably would make a habit of always declaring dependencies as much as I can just for documentation purposes,
25:02
if nothing else, and a sanity check myself. The dependencies are also tracked in the database. So it knows what needs what. So here, again, it's written out the two requires. Again, this could be a comma and a limited list.
25:21
I just made it two lines. So here's my function. All I'm doing is I'm inserting into the users table a nickname and MD5 hash of the password. And I'm granting an insert user to Flipper.
25:42
So because I'm using the Flipper user, I have to require the app user. And because I'm inserting into the users table, I need to do users. Now, users requires app users, so it's implicit. But again, I will probably make a habit of trying to be explicit. And I will likely forget sometimes.
26:04
And the change pass script, it's relatively similar. Again, it has the prerequisites here. And then I'll just write my function to change the password.
26:25
So let's go ahead and deploy these. Again, I'm calling deploy with untracked. And it is deploying the two changes that are part of this tag, which is currently called head plus. And if I take a look at the database, you can see that those tags, or excuse me, those steps have been, or the functions in those steps
26:41
have been deployed. So again, I'm going to commit it. It would follow the same pattern and revert to head. I'd commit the changes. Then I would deploy them into my little development database. So with this, it's time to think about a QA release.
27:07
So I'm going to tag this and get. I gave it the tag, a dev1 tag. And I'm going to create a new flipper dev database and test deploy it to there.
27:20
So it first deploys that first sha1 tag with its step. Then the second sha1 tag with its step. And then the third sha1 tag with its two steps. And note that this is also the place where it has the tag that I have inserted into git. And as a result, if we look at the state
27:43
once it's been deployed, or the status, I should say, and ask it to show us tags, it shows a list of all the tags. And these two are basically at the same point, two names for the same point. And they're also listed here.
28:01
So we come to distribution bundling. It's time to package this thing up, because one thing I know about operations is they don't want to deploy via git in our environment. They want RPMs for everything, which makes me a little crazy, but that's all right. So the idea here is that we bundle up the change scripts and the plan into a directory.
28:25
The plan file that gets written to that directory is generated from the VCS history. And so then with this and this directory, you can package it up as an RPM or a Debian package or a gem or whatever, however you want to distribute it.
28:42
Then you won't need the VCS on the target system. You would just need Skitch on the target system to read the plan and do the deployment. So let's ship it. I'm going to bundle it, and I'm saying tags only here. Now, what tags only means is that when I write the plan file, I only want it to use tags in the VCS
29:04
for the named tags in the plan. I didn't want it to use the individual SHA-1s of every commit. And so what it does here is it first writes a configuration file, copying the configuration that I have for the local repository
29:21
so it knows to deploy for Postgres, for instance, and any other settings I might have in there. And then it creates the bundle for the v1.0.0-dev1 tag with its now four steps. And what it does with those is it writes them into a plan file in our bundle directory.
29:43
And if we take a look at that plan file, it's pretty simple. It's very similar to a Git configuration file or an INI file. All we have is a bracketed name, space-delimited tags, and then the steps that are associated with them.
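A sketch of what that plan file might contain at this point, assuming the tag and step names used so far (bracketed tag names, one step per line):

    # bundle/sqitch.plan
    [v1.0.0-dev1]
    appuser
    users
    insert_user
    change_pass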
30:00
I have just the one tag here with the four steps. The individual scripts for these steps, the deploy, the revert, and the test scripts were all copied into directories in this bundle directory as well. So they're already there to be packaged up and shipped off. What that might look like, and we can test it here just
30:22
by going into the bundle directory, is tell it to deploy to a new staging database. And you can see here that it's deploying the v1.0.0-dev1 tag with its four changes. And of course, the steps, excuse me, the status
30:42
then shows that we just have the one tag in here. So this is the current state of our database. So now this looks good. This is ready to ship off to QA to do some detailed testing. Any questions so far?
31:00
I don't know if that's good or bad. Well, so QA was doing some testing. And they realized that if two users have the same password, they have the same hash of the password in the database, which might seem a little obscure. But the truth is if then user foo somehow
31:22
got access to our database, they could easily look up and see who else has the same password, and then they would have access to their data. Never mind that they have access to the database already. It's an example, OK? Work with me.
31:40
So the way to deal with this, I think, is that instead of using an MD5 hash on the passwords, we're going to encrypt them using pgcrypto. So the first thing we need to do is write a new step to add pgcrypto. So all I'm doing is calling add-step with pgcrypto. And then I edit the deploy file to simply install
32:03
that extension. Simple, right? I'll leave the revert script as an exercise to my audience. Now, the question is how do we go about modifying the stored procedures, the functions, that insert users and change their passwords
32:21
to switch from MD5 to pgcrypto? Now, typically, what we do is we copy the insertuser.sql to a new deploy file, call it v2, make the change to that file, copy it again, the original,
32:40
to the revert file called v2, but don't make any changes to that one. And now we have three copies. I'm repeating myself, but this really does annoy me. And then you're going to do the same thing for the change pass script. Now, these are pretty minor changes, but the outcome in our VCS is that if we want to say,
33:03
so what changed between these two commits? It looks like this. It looks like I've written completely new functions and completely new steps, like they came out of whole cloth. Now, if I happen to know or notice
33:21
that they have almost the same name as an earlier one, I can diff those files specifically. But the VCS itself does not know that I've just changed this one thing because I've had to copy the whole bloody file to make the change twice.
33:42
So what I decided to do was to try to use the VCS history for this instead. So think about it this way. If we just modify the scripts, the change scripts, in place, right where they are, without copying to new files, it might work, with a single caveat, a single requirement.
34:04
And that is that the change must be idempotent. Forgive me if I've mangled that word. What that means is that no matter how often or in what way that this script is run, the outcome must be the same.
34:21
This happens to work very well for create or replace function. As long as you're not changing the signature of the function, you can make the change in place using create or replace function. And no matter how many times it gets run, the outcome will be the function that you have created to create or replace whatever
34:40
was or wasn't there before. So the nice thing is that then with this requirement satisfied, the idempotence of create or replace function, we can go ahead and make the changes in place. So the outcome is that then the change to insert users
35:02
is a simple two-line change where I have added pgcrypto as a requirement, and I've switched to using the crypt function here to set the password. And similarly, in the change_pass function, I have a three-line change where I've added pgcrypto as a requirement, and I've switched to using crypt here.
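The in-place edit might look roughly like this, leaning on the idempotence of CREATE OR REPLACE FUNCTION (crypt and gen_salt come from pgcrypto; the bf salt setting and the line layout are illustrative):

    --- deploy/insert_user.sql  (before)
    +++ deploy/insert_user.sql  (after, edited in place)
     -- requires: users
     -- requires: appuser
    +-- requires: pgcrypto
     ...
    -    INSERT INTO users (nickname, password) VALUES ($1, md5($2));
    +    INSERT INTO users (nickname, password)
    +    VALUES ($1, crypt($2, gen_salt('bf')));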
35:25
So does this work? Not yet, but the idea is that then we could say, as usual, deploy untracked, and it will deploy these changes just as they are here. And if we give it a try, using the same password for two
35:41
users, we see that we now, in fact, have different values stored as the password. So that's great. What if something went wrong? What if we had to go back in time? So let's revert to head, and now those changes have been removed. If we run it again, we see that we're back
36:01
to where we were, where we're getting the same hash value for the password. And the way it's doing this is that instead of reading the file as it is there, it will ask Git: give me the value at this point in time, as of the tag that I have in the plan
36:22
file for this change. So it can read the complete files, the change files, from the Git history without us having to duplicate that code in a new commit as brand new files.
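One way a tool can do that with Git, assuming the deploy scripts live under deploy/ and the tag names used earlier:

    # Read insert_user's deploy script exactly as it was at the earlier
    # release tag, without keeping a duplicate copy in the working tree:
    $ git show v1.0.0-dev1:deploy/insert_user.sql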
36:41
So that's all going good if you're using your VCS for deployment, because then you have the history right there. But what if you then want to bundle it? Well, in this case, what we would do, let's give it a try. I will now tag this as beta1 and bundle it with tags only. And we can see here that as before, it bundled the dev1 tag with its four changes,
37:01
and then it bundled the beta1. Now, it has appended the v2 to these for us. So it's noticed that these are duplicates, but it's reading the versions. For these, it used the version that was at this point in time. And for these, it uses the version from this point in time, read directly from the Git history.
37:20
Then it writes the files out, and you can bundle them up. So they're duplicated in your distribution, but who cares? Much less of an issue. And if we look, you can see more or less what I hope you would expect for the two tags and their steps in the plan, the distributed plan file.
37:46
So let's go ahead and deploy those to staging. You can see that they were added to staging, which had been on the previous tag. So that works. And now we are at the beta1 tag. Ship it.
38:03
So that is a very brief introduction to the ideas behind the core of Sqitch. There are a number of other commands that I have planned and that are in progress. I showed you a log before. So log would show a complete history of all the changes you've made to a particular database,
38:20
regardless of whether they were reverted or not. There will be a check command that will basically sanity-check your plans and your steps, check for duplicates, see if any prerequisites are missing from a particular database, that sort of thing. A test is something that I plan to do so that for each step,
38:40
you can have a test to regression-test it, to make sure that the deployment of a particular step actually worked. I'm not sure how that will work yet. It might be that it'll just diff output, or maybe there'll be some sort of TAP integration for Postgres or something. That's one of the places where I'm arm waving right now.
39:02
And there's a help command, not unlike the git help command. And you can use it to get help on all these other commands, and it'll show pretty extensive documentation for many of them. So the current status of the project is: I've been working on it full time for about a month.
39:20
I'm grateful to my employer, Iovation, for saying, yeah, we need this, go ahead and work on it, and letting me work on it at work. I'm also starting to rethink the VCS integration, because I think that this is nice, and it's a decent idea, and I really love being able to move backward and forward in time without duplicating files.
39:40
But almost no one has a Git history that's that clean, never mind Subversion or CVS. So I'm starting to rethink whether, instead of having kind of a deep integration with the VCS, there might be more of a complement there. So we'd rely more specifically on a plan file for which we
40:01
would have tools to write to it and complement that with the VCS history or have ways to mark particular points in time in the plan file that it can then ask for things from the VCS history. I'm still trying to work that out. But what's become clear to me is that we'll need tools to manage plan files and adding tags
40:20
and steps to the plan file independent of whatever is in your VCS history. So that's stuff I'm just starting to work through now. But I could use your help. We have a site, sqitch.org, which is the default GitHub site with one of their new designs.
40:41
It's on GitHub; fork it. I do want your opinions. I want to know where I'm completely wrongheaded about this, what I have overlooked, what sorts of patterns, perhaps, it is failing to satisfy in its current incarnation, whether that makes things change now
41:02
or gets on some sort of roadmap. Either way, I want to know how we can make this as useful as possible for as many people as possible. So obviously, any code help would be much appreciated. Documentation writers, at some point,
41:20
it would be nice to have a nice website to sell it. But I'm asking you for your help to make it great. I will do the best I can by myself, but it would just be me then. And I want this to be something that everybody can use. So I thank you for your attention.
41:48
We seem to have 18 minutes for comments and questions and insults. No, you can't see me. Oh, look at that.
42:05
No, I'm sorry. So a huge pain point with migrations is when to run them. Yeah. And so on a large production database... everything's easy when you're at a hundred megs.
42:21
A migration takes a second or two, and maybe it's really hot and you can't get that ALTER TABLE lock every day. Mostly, where things get really hard is, like, how do you even stop a hundred gigs or something?
42:41
And doing zero-downtime deploys in an environment like that is a whole other challenge. And right now, the way that people learn to do that is by failure. And there are people who have blogged about it, but there's really no support for that. And I think that if you're building a tool like this,
43:01
it's an amazing opportunity to make a decision on how to do that well. A tool like this, I've seen lots of users. In my slides, I showed adding a column: you put a default on the column, and it's like, ha ha. Right. Where you're rewriting that whole table, I hope you didn't need that for a while.
43:27
Sorry. Could your check or test or something like that have the ability to add intelligence around trying to protect people from those extremely destructive things? Yeah. To a certain degree, I think that would involve
43:43
being able to parse your deploy scripts to see what you're doing. Well, there's explain, but there's no explain for DDL yet. Right. So maybe that's actually some data project. Yeah. Yeah, that would be kind of cool, for Postgres in particular. But one of the things I am doing is trying to be neutral about what database
44:01
you're deploying to here as well. So there's code specific to each engine in there, or there will be. Right now it's just Postgres and a little bit of SQLite. So there will certainly be injection points to do stuff like that. Yes, I mean basically all it is,
44:24
is tags and scripts. And it doesn't really care what your scripts are written in. So somebody's already asked me, if at some point the deploy scripts could be written like in Ruby or Perl or whatever. And I figure that's something we could certainly add.
44:42
Maybe it'll look at file name extensions and things like that. Some people would be more comfortable writing migrations using the DBI or something. But overall, if somebody's willing to do the work to implement the engine class for a particular target, I'm fine with it.
45:03
I assume for a lot of those NoSQL systems there wouldn't be much need. All the same problems. Did I? You still have data. You still need to make changes. You still need tools to do that.
45:20
The difference is that with a NoSQL database, while you're rewriting the table, you can still use it. And with Postgres, while it's rewriting the table, your site is down. Yeah, don't rewrite your table. Well, that's all of the difference. I think there's definitely opportunities for stuff like that.
45:41
I want to get the core working as well as possible in the next month or six weeks. But I'd love to see stuff like that grow up around it. I want it to be as useful as possible for people, in part because everything else sucks so hard. Yeah, Norman?
46:05
Maybe I'm not going to use Git or the version control system in this integration. Because all of a sudden, now you're not only coming up with kind of a scripting language for Git, but now you're saying, well, maybe I'm not going to use Git, and I don't know what I'm going to do in the testing integration thing. So you've got all sorts of competition.
46:22
Yeah, there are a bunch of questions. So where I'm coming from, it's not just Git. I was figuring it'd be whatever VCS that somebody added support for. I was going to do Git and Subversion. I was thinking, since you're kind of already kind of controlling how people get things in there, couldn't you force, I mean,
46:41
talk to me a little bit about how Git gets so messed up here that you're screwed. Well, so one of the things is that when I'm doing a lot of development, I end up changing my deploy scripts a lot, especially before first release, because I'm the only guy working on it and I do whatever I want. That means I have like 30 commits to a single script.
47:01
That would make a rather painfully long plan. Well, I'm 27 and I'm missing 74. Right, yeah. I run my tests first, but yeah. I guess you're expecting the person knowing how to do like a Git squash or Git rebase to clean up the history.
47:20
Right, and in the tutorial, I have a whole section on two developers working on branches and how they need to merge stuff in and how it can be a mess unless you rebase before you merge. I mean, if one, I would like to be able to use this and I think that if one is very careful about how they maintain their Git history, you can do it this way. But I think most people are not that careful.
47:42
So like I've been talking to Josh Berkus about the stuff they have at Mozilla and they have a master repository and all these people have forks and they don't have branches and it's just all over the place. So for something like that, it might be more useful to have the plan file I need to make sure merges are clean just as one file
48:01
regardless of what the history is. Sorry, I was going to say the trouble with it is once you start getting into like trying to get my history clean and validated, try to do the testing and revalidate, it's almost like you're trying to go to like a Hudson Jenkins protocol but you stick everything on and everything has to check out before you can apply it.
48:23
Yeah, well, for Git that's cheap. It's right there in the .git directory. So, like, I wouldn't want to use it with Subversion. But yeah, that would be another issue, another reason why people may not want to use it. Have you considered just assigning every step a UUID and then
48:43
instead of relying on this commit hash, what you do is you just check the SHA of the SQL when someone tries to deploy it and say, has this step already been deployed? And what was the SHA of the script that was deployed? And that way, if your coworker already ran the script, it would be like, no, no, that's there.
49:00
Or if somebody ran a different version of it, you'll say like, well, well, this step already exists on the server. Is this a new version? Is this a you are screwed? Yeah, it's a good idea to hash it and store that as another check if nothing else. But the UUID is, I think, important
49:21
because you may add a new step and I may even add a new step, and I deployed it to our staging server, and now you deploy it to the staging server and, surprise, right? Yeah. Especially with long-running projects, you make that branch that's called ack
49:41
or fixit or something and it just sits there. You're like, I've got to fix it. Yeah, the reason why I didn't want to use something arbitrary like a SHA-1 or a UUID is because then, if I have a whole bunch of steps, my deploy directory is full of all these files and I don't know what's in them, because their names are meaningless. Well, I think the names are good.
50:01
I'm just saying it never has collisions as a validation. I mean, you have... Yeah, right. If you have a collision, it gives it a new thing or a new thing that hasn't set the names basically.
50:21
Yeah, yeah, it bears thinking about. Yeah. The idea that you created the initial data and it can't be improved on 30,000 migrations? Yeah, or maybe you just...
50:42
So kind of what I'm thinking about with the Git integration in particular is it might be a hybrid with a plan file where maybe I have a plan file that takes me up to a certain point in history and then after that I use Git. And my initial thought for that was I had this in a project years ago where we had 200 Rails migrations and at a certain point we dumped the database
51:02
and threw them all out and just created one that was the dump. So to do something like that with something like Sqitch, what we'd do is then say, here's a plan file, and you start at this point with this new single file. So I definitely want to have ways to allow people to simplify things like that at a certain point.
51:21
It's an onboarding tool, too. I'd love to break my current... Uh-huh. Yeah, right. Yeah. Yeah, there are lots of different things
51:41
that could be done potentially. Right. I'm trying to get away from that notion of this one big deployment and it's going to be like four and six part phases
52:01
where we pull something out of the farm, migrate it. So we don't worry so much about a rollback. We think about our first phase as a very unique package inside a whole layer. Right. And we expect there to be some sort of layer on while it's detached
52:21
and play with it and test it and say, okay, this looks good. We're looking at our confidence. We'll go to the phase two where we'll bring another part out of the main farm and have farm number two. If it gets bigger, we'll feel good about it. Then it starts to come back, starts to create the farm two as the main farm. It's almost good. We have that kind of finalized step
52:41
which plays with that backward compatibility layer and cements it. The different stages are a baking process and when our cookies are ready, you know, we serve them. Checkmate. Do you think you could do something like that with this?
53:04
You don't think like each of those? Yeah, but if you make each one of those things be its own tag or even its own step. I don't know. Uh-huh.
53:30
Yeah. Yeah. Right. Yeah.
53:41
Right. Yeah. I don't write Ruby. What's that? Oh, I like Ruby. I just don't have much opportunity. Well, part of that is context. I roll back my database. I deploy my application.
54:00
I don't roll my application back also. Right. So it goes on context. If you do this, make sure, warning, you roll back your application. Yeah. Right. Yeah. Who tagged our database? Our branch is database-based, right? They're very important.
54:22
Yeah, Chris. I guess the other thing is the other side of this that I've been thinking about tackling is just looking for differences to say, okay, so where are the differences?
54:43
We wind up with unclean systems. But over time, there's stuff that's gotten out of sync. Yeah. And it wasn't managed using any kind of sync manager. It was completely out of band. Yeah. But the DBA added some tables. Oh, no.
55:00
We forgot to change some things. Somebody manually modified some permissions and so forth. Yeah, and there's a table named Frank Test. So that's something that, well, this doesn't seem to touch that at all. No. This assumes that you've got an environment that's under tight control.
55:24
And it never falls out of control? Yeah. If you look at Pyrseas, so this is another kind of SQL change management system. It's on PGXN and SourceForge and a few other places.
55:40
And what it does is, basically, he's trying to support the entire language of PostgreSQL, which is massive. And he's dumping out specification files as YAML. So you could do a YAML dump of one database and another database and diff them.
56:01
And I think you can then diff those two files and it will generate patch scripts to bring them into alignment. We have databases that sit in catalogs, in tables. Why do we need to create some new representation? We've got the tables.
56:21
We've got money tables. I think he's doing it so that you can then have a file that's independent of the database. So that you can then use it for actually planning deployments. So it's much more of the database diff idea. Were you to develop in a database and then to write a deployment script, you run this thing and it creates a diff and generates an SQL script for you.
56:41
Or a YAML file that generates them. There's two of them. What I'd like to have, let's automate this into our build process.
57:07
So if we know that we've got something, we've got to create it.
57:20
Let's validate that's true. If it's not true, the answer isn't let's fix it. The answer is send it back to the developers. Tell them that these five objects have a difference. Go fix your thing. Yeah, that's why I want to put the test stuff in Sqitch. The same thing works in production.
57:42
We think the schema should look like this. We think the schema must look like this in order for our application to work. It's not so much here's how to fix it, but if you can identify where the problems are.
58:04
Mr. Federer is usually the one who says, no, you can't automatically generate an upgrade script. He makes this noise pretty easy. And I'm willing to go along with that. He's a noisy guy. It's a hard problem, especially if you're going to have to do an update that's able to do a conversion.
58:24
Yeah, right. That's one of the reasons why I want to put the test stuff into Sqitch, so that you basically can have production-level acceptance tests where potentially you can say, do this deploy.
58:41
Now run these tests. If they fail, revert and send it back to your developers. The problem is that you might find yourself in a situation where the deploy script fails even before you can do the test, because it assumes certain columns are there which aren't there.
59:01
I thought it was too complicated. I had this crazy idea once of doing a pg_dump right before, just the schema, putting it in the Git repository. And then as I do the thing, I'm going, it's got to match the same tag on the table. It gets really complicated.
59:21
Diffing databases is hard. It's really hard. The other question, David, is if you've got testing and staging and production, it could be that you're going to want to move certain things up in your bundle for staging and some are never going to get there. How are you going to keep track of: these are all the steps and tests that just get thrown away; I deployed it, I tested it, I don't want it.
59:46
Yeah, branches is what I think of. So at Iovation, we have separate branches for development and staging and for production. Oh, and can I do branching in Sqitch?
01:00:00
I don't think so. I don't know. All this stuff needs more thought. Yeah. Well, Postgres would be my first target. It would definitely be most awesome for Postgres, but I want it to be a generally useful tool.
01:00:21
I don't want to limit it to just one or two communities. I mean, as it is, it'll start out, it'll just be Perl and Postgres users who use it. But I hope that it will leak out, because other projects have other things to contribute, and they have the same pain. I mean, when I've worked on MySQL stuff before, this is
01:00:42
just, well, it's worse there, because DDL is non-transactional. Because everybody hates this, and I don't think the solution has to be all that different.
01:01:01
Anybody else? Thank you for coming. I appreciate it.