Panel: Deployment
Formal Metadata
Number of Parts | 11
License | CC Attribution - ShareAlike 3.0 Unported: You may use, modify, and reproduce the work or content, and distribute and make it publicly available in unchanged or changed form for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
Identifiers | 10.5446/37894 (DOI)
Production Year | 2018
EMPEX LA Conference 2018, Part 3 / 11
Transcript: English (automatically generated)
00:07
All right, so again, questions for all of you. Give us a brief overview of your current deployment strategies and maybe a little bit of reasoning about how you chose the aspects of it that you did.
00:23
Let's start with Desmond. Hi. So I'm a freelance consultant, and I have a lot of internal applications that I spin up in my spare time and kick out to people. And I deploy these all under a single umbrella app onto Linode instances that I manage. So bare Ubuntu, and I use Distillery, and shit,
00:45
now I'm blanking on it. What's the other one? E-deliver. E-deliver. Thank you, man. That's fair enough for a reason. So I use Distillery and E-deliver to deploy my umbrella app to an Ubuntu instance in the cloud. There's no clustering, it's just a single thing. Cool.
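Desmond's Distillery plus edeliver flow might look something like this wrapper (a hedged sketch, not his actual script; the edeliver tasks themselves are standard, and the DRY_RUN guard just prints the steps):

```shell
# Hypothetical deploy.sh for the Distillery + edeliver setup described above.
# Assumes a configured .deliver/config pointing at the Ubuntu host.
# DRY_RUN=1 (the default here) prints each command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mix edeliver build release                  # build the release on the build host
run mix edeliver deploy release to production   # copy it to the production host
run mix edeliver restart production             # restart the running node
```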
01:01
At Motel, we're a digital product agency. So much of our deployment has been derived out of having to hand off projects to non-technical teams. So our internal infrastructure has been built on top of GitLab and GitLab's runners.
01:21
Our review apps are spun up with Docker and GitLab into Kubernetes. But at the end of the day, we deploy everything to Heroku. Hi, I'm Ben from Grindr. So we have a lot of stuff in our stack. We've got Java, we've got Elixir.
01:41
So we're using Elixir for some microservices and for keeping track of user presence and online status. So we do continuous deployment. We deploy every day. And we're using Makefiles with Ansible and Docker for actually doing the whole deployment process.
02:06
And yeah, do you want me to go into detail about that, or that's next? Say something else. All right. We clearly want to say something else. Yeah, yeah. So I mean, we're not using Docker to run in production,
02:20
but we actually use Docker to build the artifact for the target OS. Because with Elixir, you have to build it for your target OS. And we're also using it to actually test the deployment. So we spin up Docker to deploy onto that Docker machine to test the deployment itself. And the Makefile, I mean, they're wonderful.
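The build-the-artifact-in-Docker idea Ben describes can be sketched as a Dockerfile along these lines (the image tag and Distillery invocation are assumptions for illustration, not Grindr's actual setup):

```dockerfile
# Build the release inside an image matching the target OS, so the compiled
# BEAM artifacts (and any NIFs) link against the right system libraries.
FROM elixir:1.6-alpine
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY . .
RUN mix deps.get && mix release --env=prod
# The tarball under _build/prod/rel/<app>/releases/ can now be copied out
# of the container and shipped to servers running the same OS.
```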
02:43
They glue everything together, 40-year-old technology and Docker's five-year-old technology. So it all comes together. So how is your Elixir deployment different from or the same as other standard DevOps procedures?
03:00
Do you find that it varies a lot, or is it often very much in line? Can you define other DevOps procedures? No. Sure. Ours has been, there is a tension
03:21
in between the Erlang way of doing things and then also modern DevOps. And modern DevOps, we worry about Snowflake servers that will eventually melt because they're unique and special, and it takes a lot of work to configure them. Like as an agency, like in 2017, with a team of five,
03:45
we handed off, I think, six or seven Phoenix applications. So really short time frames, intense work, and then a delivery of a project, of a set of assets. So in that case, those finished products,
04:03
they have to be something that a non-technical team can run and scale. And so we've been working on managing that tension, and it's an evolving process. But right now, we do deploy our production apps to Heroku.
04:22
Actually, just with GitLab CI, it just pushes with Git into Heroku. But I also can build a distillery release in a Docker VM and push that into a review cluster internally. So we've been doing our best to manage that tension
04:42
in between. And I would love to get to the place where we have the infrastructure, and we can figure out the client relation to where we can deploy the Erlang way, but we just haven't been able to do that yet. So we've really been trying to cut out the DevOps step
05:02
in a certain sense. So we've moved all of the procedure of deployment into the repository itself. So for example, we have a microservice called the profile service. It manages user profiles. Everything about deployment is in that repository.
05:22
So we don't have some repo over here that defines the deployment, that defines how it works. We don't have external processes. One of the reasons we use Docker for building the artifact is otherwise, you have to build the artifact on like a Jenkins box
05:41
or something. And so that's just another step in the process that we don't need. So by removing all of these different things, essentially we're saying, look, you guys are engineers. Part of your job is to actually define how it gets deployed onto a box
06:01
and what the dependencies are on that box. Just make it happen. And then we're cutting out the middleman. No offense to any DevOps people in the room. We find it's a lot more efficient to do it that way. So I don't know if that's modern DevOps.
06:20
I don't know what modern DevOps is. But that, I think, is the most agile way to actually get things deployed quickly and easily. I don't do any DevOps. I have a hacked together Ruby script that runs a couple of bash commands that
06:40
say build a release, push release to production, restart production. It's about as simple as you can get. I don't deal with CI. I build my production releases on my production host. My traffic is low enough that I can handle the additional load on the CPU. And that means I don't have to deal with putting my secrets someplace
07:02
or having the app not have environment variables there at compile time. So I just wiped away an entire class of problems. I have, I don't want to say a unique setup, but I'm not running a large app under heavy load and dealing with a big team of engineers.
07:20
So my situation would probably not work at their companies. It works pretty well for me to have several smaller projects in production. And I know you started mentioning this a little bit earlier, Ben. How does testing come into play at the time of, before, and possibly during deployment?
07:43
Yeah, absolutely. So one of the things about continuous deployment is you cannot do continuous deployment if you don't have good tests. You are going to shoot yourself in the foot. And I think they really play on each other. So continuous deployment keeps you honest with your tests because you're going to know if your tests are bad because you're
08:01
going to fail in production. So we do TDD. We also have full, large integration tests that are also contained in the repository itself. We run those so that we actually launch the application in Docker.
08:21
We use Docker to hold our database as well. We talk to the database, not through some mock database. And that keeps us pretty sure that everything is working correctly. And like I said before, we actually test the deployment. So we actually, from the test, from our actual test,
08:41
we call our Ansible scripts, deploy onto a Docker image, and test that. Hey, if I do the health check, it returns OK. So that gives us the confidence to really do continuous deployment and know that, because before continuous deployment step, it's going to run the tests. And if tests fail, we're not going to deploy.
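Ben's "test the deployment itself" step might look roughly like this (a hypothetical sketch; the playbook, image, and port names are illustrative, and the DRY_RUN guard just prints the steps):

```shell
# Deploy into a throwaway Docker container, then assert the health check passes.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run docker run -d --name deploy-target -p 4000:4000 deploy-test-image
run ansible-playbook -i inventory/docker-test deploy.yml
run curl -fsS http://localhost:4000/health      # fail the build unless it returns OK
run docker rm -f deploy-target
```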
09:04
At Motel with GitLab, we still host GitLab, which means that every time we open our pull requests on GitLab, they're called merge requests. We run our tests. We also, in parallel, run mix format and check for formatter errors and then
09:23
run Credo in strict mode. And actually, just in the last couple of weeks, we've moved that running process off of the same Kubernetes cluster that was running GitLab. And I've moved it over to Spot instances on EC2, which is actually really easy to set up.
09:41
But what that allows us to do is for 10% of the cost of maintaining or reserving the instances, like on demand, I can just pull out a 16 core or 20 core machine, run the tests, or in this case, run my type specs from scratch.
10:00
So I can run mix dialyzer, and it'll run and build a PLT file in four minutes and then shut down the resources. I don't have to pay for them anymore. So then, that's sort of how we test before we merge. We also, part of our keeping our infrastructure
10:22
in the repo itself. There's also a Dockerfile in there. So we'll build a Distillery release with Postgres and then using GitLab and GitLab review apps, put out a review app, as you would expect with Heroku. Allow our design team, our client, to test the review app, test the features
10:42
as they're being worked on. And then when it gets merged, it's all then deployed continuously to a staging Heroku app, which isn't as exciting. I'm still working on that part. But yeah, so GitLab and the runners allow us to, in parallel,
11:01
test a bunch of different facets of our applications and then share that infrastructure, either in the repo itself, if it's unique like a Dockerfile, or across all of our applications. So that infrastructure is transparent, regardless of the client project.
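A `.gitlab-ci.yml` for the parallel checks Scott describes might look roughly like this (job names and the image tag are illustrative, not Motel's actual pipeline):

```yaml
# Three jobs in the same stage run in parallel on the runners.
test:
  image: elixir:1.6
  script:
    - mix deps.get
    - mix test

format:
  image: elixir:1.6
  script:
    - mix format --check-formatted

credo:
  image: elixir:1.6
  script:
    - mix credo --strict
```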
11:22
So as I said, I have no CI. I run my tests before I commit. And if everything's green, I push it up. And that's that. My infrastructure is pets, not cattle. I have my one Linode box, which is four cores, eight gigs of RAM. And that goes down from time to time.
11:42
Linode has random Linode updates, I would say three or four times a year. And it's a pain. They give me a window, but then my app goes down. I haven't been that conscientious about setting up systemd scripts to reboot my BEAM, which would solve the problem of the thing going down.
12:02
But that's sort of particular to Linode. I don't think DigitalOcean or EC2 would have a similar problem. But so my infrastructure is probably more fragile, but also more stable, because it's just the one machine that I have to deal with.
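The restart-on-boot gap Desmond mentions is usually closed with a systemd unit along these lines (a sketch; the paths, user, and app name are assumptions):

```ini
# /etc/systemd/system/my_app.service (hypothetical)
[Unit]
Description=my_app Distillery release
After=network.target

[Service]
User=deploy
ExecStart=/opt/my_app/bin/my_app foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With `systemctl enable my_app`, the BEAM would come back up automatically after Linode's maintenance reboots.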
12:20
So do we have questions from the audience right now? Hold on. Question for Scott. You mentioned a review app. Yeah. That's a new concept for me. So is that different from, how is it different from staging and QA environments? It's similar to a QA environment,
12:41
but it is specific to like a branch that is getting ready to be merged in. So after like, it's another form of code review, except it's feature review. So instead of my designer having to maintain Elixir on the system, learn how to use Git, and then pull down the feature branch,
13:01
boot up the app, and review it locally, after the review app runs there's a link in the merge request to a sub-domain of our GitLab instance where the Docker container is running. And
13:21
it's meant to be as close to production as possible. So it runs the same seed scripts as our staging environment does. It allows non-technical users to test an app. Any other questions? Here we go.
13:40
I have a question for Ben. You said you have a lot of microservices, each with its own lifecycle, but what about microservices that know about each other? How do you actually handle that, how do you spin up groups of services that need to be tested together, since they're deployed independently?
14:03
So I think your question is, how do the services talk to each other? How do they know where they are, how to contact them? If they know about each other and you need to test them, maybe you actually need to spin up two of them for testing a specific feature, even though each has its own lifecycle. Imagine you have a user service, and that user service actually makes a call to the microservice for
14:24
whatever information, and if you need to test that on CI, do you actually spin up multiple services in order to test this feature, or mock the requests between services that know about each other?
14:42
So I'm not clear on the question, so I don't know if I'm gonna answer it correctly, but in production, the services, we just use DNS. We don't have anything fancy. So environment variables, so when we actually deploy, we specify,
15:01
so for example, if a service needs to know where the database is, or where one service needs to know where the profile service is, we can just set the environment variable at deployment time. Does that answer your question, or it sounded like your question was about testing also. More about how we're actually handling
15:22
all those microservices at deployment time, maybe they know about each other or not. But you said that you have this identity microservice, and this identity microservice actually talks to another microservice. Oh, no, no, no, we don't have an identity service, so the,
15:44
are you talking about the profile service? Okay, yeah, so we just use DNS. So we don't couple, so essentially we have a bunch of machines in AWS, and there's nothing special going on.
16:03
We deploy to, we deploy a service on machines A, B, and C, we deploy another service on D, E, and F, and if they need to talk to each other, they use DNS to actually contact each other. So it depends, it also depends,
16:21
are they behind the VPC or are they public, right? And we have load balancers in front of these services as well. So we're not using any sort of, we're not using Consul or any of these solutions for dynamically telling services where other services are.
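The env-var wiring Ben describes might look like this at startup (the variable names and hosts are illustrative, not Grindr's actual configuration):

```shell
# Each service learns where its peers are from variables set at deploy time;
# the DNS names resolve to whatever machines the peer service was deployed on.
PROFILE_SERVICE_URL="${PROFILE_SERVICE_URL:-http://profile.internal:4000}"
DATABASE_URL="${DATABASE_URL:-postgres://db.internal/app}"
echo "profile service at $PROFILE_SERVICE_URL"
echo "database at $DATABASE_URL"
```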
16:43
We just haven't found the need for that. This may be a quick answer. One of the things that I haven't seen us do very much as a community yet is talk a lot about how to write distributed, like actual distributed node type of apps within Elixir,
17:02
and so because of that, we haven't really talked about how you would deploy those types of things. I'm wondering if you guys have seen any apps that are like that, where you actually really have fully distributed sort of Erlang Elixir nodes, and if you've seen any advice on how you would do those kinds of deployments?
17:21
So is your question about if there's state on the servers, how do you do deployment without losing that state? Yeah, so we actually do that in one of our services. And so what we do, we just had to do a custom thing. So when we do a rolling deployment, so say we have three servers,
17:43
we actually make a call, and this is all in the Erlang Beam stuff, so the servers can communicate with each other, so we just drop down to that level and send the data before we go down,
18:02
or it's the other way around. I think the other server calls before we go down, so that one now has all the state, and then we bring up the new one and then give the data back. But it's all using the BEAM's ability to basically talk cross-server, the processes
18:25
can talk to each other. It's a simple function call. I can, if you send me your email, I can talk to my coworkers and give you details
18:41
on what exactly we're doing. Other questions? I think this might be related to the previous question somewhat,
19:01
but one of the bullet points about, well, being new to Elixir, one of the highlights that you read about early on is hot code deployment, but it seems like nobody on the panel is actually doing that or has a use case for that yet. I do that. So can you speak to maybe,
19:21
maybe what are the challenges or what your use case is? I'm curious. It's mostly for fun. A lot of the tutorials around hot deployment, it's weird because the language is pitched as, as Emma pointed out, all these nines of uptime, less than a second every 20 years,
19:41
which is like, I have never seen an app that exists for 20 years. But then when you say, okay, so how do I hot deploy, these people, they start to back away. Oh, well, you might not wanna do that. You have this code change function in your gen servers, but here be dragons, and just restart your app, it's simpler. And it is simpler.
20:01
Turn it off, turn it back on again, that removes a whole class of problems around how you restore the state. I think it's a little weird that it feels like a bit of a bait and switch. And I think then your app stops being stateful in that way, and then becomes a cache layer on top of your database. There's things you can do to keep state
20:22
while your app's running and so forth. But I do hot deploy is, again, mostly for fun. My use case doesn't need that kind of uptime. I'm not running a big e-commerce platform where downtime is money, but it's fine. With distillery, it's just,
20:42
instead of saying build release, you say build upgrade, and it takes care of it for you. If you are changing the shape of data structures, then you have to think about, okay, well, I need to implement these code change functions in my gen server so they know how to migrate the objects that they're holding onto. In those cases, if I'm running a database migration
21:03
or whatever, I turn it off, turn it back on, because it is simpler. It's not impossible to do. It just takes a little more thought. And I think that might get complicated as the team gets larger, and you have to be more mindful about what is going out and what effects this can have across the system. But for simple changes, additional features,
21:23
it's fine, it's easy, it's transparent. Yeah, so we don't do it, because unless there is a real need for it, to us, it seems like it's not worth the risk, because you can really screw up your deployment
21:42
by doing that. Of course, if you're Ericsson, there was a reason for this ability, because you need to do live updating while phone calls are being routed. But we don't have that use case at this point.
22:00
But I'm sure you can find the use cases where that does open up a whole range of possibilities. But it's another layer of complexity and more stuff to worry about. But it can be fun. It's cool, it's cool to do.
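With Distillery 1.x, the hot-upgrade flow Desmond describes might look like this (a hedged sketch; the host, app name, and version are illustrative, and the DRY_RUN guard just prints the steps):

```shell
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mix release --upgrade --env=prod   # "build upgrade" instead of a plain release
run scp _build/prod/rel/my_app/releases/0.2.0/my_app.tar.gz deploy@host:/opt/my_app/releases/0.2.0/
run ssh deploy@host /opt/my_app/bin/my_app upgrade 0.2.0   # applies the relup live
```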
22:38
Well, what we do,
22:43
in the mix config, I think there's a version field or whatever, we just set that to git describe --long, and then when we make a tag, so when we actually do our releases, with continuous delivery, we release the most recent tag.
23:03
So we're not actually releasing necessarily the head of master, we're releasing the most recent tag. So say a tag is 1.1.1, git describe --long, wherever you are, is gonna return 1.1.1,
23:21
because it's the most recent tag, and then it returns your commit hash. So that's what we set our version to. So there's always a version and it's automated. And it's tied to an actual tag version, which you can then tie to what features
23:42
are included in this version. So in my mix file, I have that version read from a version.txt file that's at the root of the repo, and the Ruby script I mentioned earlier that cobbles together the whole release, as part of that, it reads the version and bumps up.
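Both versioning schemes above can be simulated in a few lines (the file and tag values are made up for illustration):

```shell
# 1) Ben's scheme: derive the version from git metadata.
describe="v1.1.1-3-gabc1234"        # stand-in for: describe=$(git describe --long)
echo "release version: ${describe#v}"

# 2) Scott's scheme: bump a version.txt at the repo root before tagging.
echo "1.4.2" > version.txt
IFS=. read -r major minor patch < version.txt
echo "$major.$minor.$((patch + 1))" > version.txt   # a "patch" bump
echo "bumped to: $(cat version.txt)"
```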
24:00
I can set a flag when I run the script, say patch major, minor, it will handle that, and then create a commit, add a tag, and then proceed with the actual building the release and deploy. Questions? I don't know if this specifically is deployment,
24:23
but it might apply to a couple of you. One of the hitches around Erlang is certainly around the thing you were talking about, the cross-node communication, but as soon as you do that, you're using a different model than if you're in a large organization with a bunch of different languages involved.
24:42
They're almost surely communicating over HTTP, and that's just how they operate. What approach do you take to that? Is it worth employing Erlang-style communication just for the purposes of doing it,
25:01
versus sticking to being standard? I realize this is not exactly the right question. I apologize for that. But it also relates to the point, because a lot of these organizations expect a very standardized deployment: put it in the Docker container and send it off. So I think it depends on what level
25:24
of abstraction you're talking about. So when we're talking cross-service, which is really theoretically cross-domain, yeah, HTTP and JSON is a great way to communicate. Because everyone can speak that language.
25:42
But if you're talking like machine to machine and they're server to server and they're both running Erlang and they're both in the same cluster, the way they communicate with each other, you can drop down to the Erlang fundamentals because that's built in.
26:02
But I wouldn't want, if I have one Erlang cluster over here and one Erlang cluster over here and talking across them, I would keep that to using REST HTTP and stuff like that. Because otherwise you're sort of, it seems like you're polluting the domains, I think.
26:27
We've got a question for Scott. What's the current state of deploying Elixir to Heroku? I think most of us here probably have some familiarity with some of the very streamlined, like deploying Ruby to Heroku. What's that like for Elixir right now?
26:44
There's a couple ways you can do it. Sort of the standard way is very like Ruby. You just, in your Procfile, set mix phoenix.server, set the buildpack, and a version number for the version of Elixir you wanna run.
27:00
It has its downsides, I can't remember all of them, but I do know it's not as optimal as running a Distillery release, compiling to the BEAM and then running with Distillery. But it does have the same ergonomics of I'm just deploying to Heroku
27:20
and pushing when I need to, and Heroku's gonna manage my database, it's gonna set a database URL. You don't have to set your environment variables at build time. You can just set them in Heroku with the command line. So you can run config set via command line
27:42
and Heroku will reboot your dynos and it works just like it would normally. So for light levels of traffic, for apps that you're iterating on very quickly, it works really well. Did I hear that Heroku changed it so that you can connect to the observer, use the observer to connect remotely to your nodes?
28:02
I don't know. Does anybody else know? You haven't been able to in the past. I think I heard this changed. Sorry, I couldn't hear that. There's a new experimental feature of Heroku. I believe it's called Heroku Exec. You can hook into the box with Observer.
28:21
Cool. Okay, cool. And there's another thing, this is something I learned recently. Inside of a Procfile for a Phoenix app, you can add a release step. Which could probably be a shell script that would build a Distillery release.
28:41
I don't know how exactly that would work. But we use it to run our Ecto migrations. So the release step will wait, or rather Heroku won't start your other processes until the release step is done. So it will say okay, I'm gonna run your migrations or run these calculations
29:02
and then it'll wait until your database is migrated. So that's how we do it. So like our staging servers, we don't have to worry about deployments or data migrations, because they just run automatically on release. We have time for one more question.
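The Heroku setup Scott describes (buildpack, Procfile, release phase) might boil down to a Procfile like this (a sketch; running the migrations via `mix ecto.migrate` is an assumption, not confirmed in the talk):

```
# Procfile (hypothetical)
release: mix ecto.migrate   # runs before new dynos start serving
web: mix phoenix.server
```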
29:21
Make it a good one. Everyone's like no, I don't want to ask my question. Everybody's too intimidated to be the last question. I've got one more. Yeah, I don't know if this is a good question, but recently I saw an article about Google App Engine
29:44
having released some product or starting to accept Elixir. So in terms of, thank you for sharing how you guys deploy now. Do you continue to look at other options and what's out there maybe? Maybe you can speak to what other options are out there that you're seriously considering
30:00
or what would it take to prompt you to change? Do you want it? Have either of you looked at Gigalixir? A little bit. Gigalixir, for those that don't know, is like Heroku built for Elixir. But no one seems to have looked at it, so. It's probably a bad sign.
30:22
Yeah, I mean, I'm always open to better solutions for whatever. The life of a software engineer, we're all very busy. So if I have a solution that's working, I'm not necessarily on the prowl for other solutions, but when they turn up, they turn up.
30:45
And of course I would consider better solutions if it is better. I would say that regardless of whether you're building big apps or deploying to Linode or deploying to Heroku,
31:01
the thing that's most important, and the thing that I'm most interested in, when it comes to deployment in Elixir, is how I define and maintain and document my automation. And as I learn things, I look to say, how can I automate it? And then how can I describe
31:24
and document this stuff so that the rest of my team, number one, the rest of my team can do it, and then also the clients that I hand this off to can do and maintain these things. So those are like, regardless of your deployment technology and tool set, those are the things that I come back to.
31:41
There's like, how can I do it? What benefits does it provide me? But also, what are the other repercussions of those actions? I think I would consider other options when my situation changes. I chose my solution because I looked at other options and this fits me very well. And if I hired several people,
32:01
if my traffic patterns changed, that would probably be the time to reconsider what I'm doing. I think that we are all looking for a silver bullet for deployment, and I think there is no silver bullet. I think it depends on your team's needs, your infrastructure needs, your product needs, if you have crazy security needs, HIPAA compliance, that sort of thing,
32:22
that would drive out a lot of this. And I don't think we're gonna find one blog post that rules them all. So, we just have to do the homework and keep having these conversations. I think that's a good place to wrap up. I'd like to thank Desmond, Scott, and Ben for this panel.