
Observability Driven Automation


Formal Metadata

Title
Observability Driven Automation
Subtitle
Beyond GitOps with Keptn
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
This talk shows how to enhance GitOps by putting observability and Service Level Objectives in the center of the deployment process, based on CNCF projects like Argo and Keptn.
Transcript: English (auto-generated)
OK, so hello, everyone, to my talk about observability-driven automation and going beyond GitOps with Keptn. So as you might have noticed, this is an English-spoken talk.
But if you have some problems or if you don't understand something, just feel free to ask me in German. I speak perfect Austrian German. So I think we'll manage this together. My name is Thomas Schutz. I'm a principal engineer at Dynatrace, mainly responsible
for cloud-native stuff. And I'm a maintainer of the Keptn project. Furthermore, I'm tech lead of the CNCF TAG App Delivery, a lecturer on cloud-native topics, and also a CDF ambassador. Yes, in my previous life, so before I
went to the cloud-native space, I was a systems engineer. And during this time as a systems engineer, I often deployed software or made configuration changes on Thursdays in the middle of the night. Who ever deployed something on a Thursday night? OK?
Some people raised their hand. And most of the time, I thought, yes, after I deployed everything, I tested something. And for me and the developers, everything worked perfectly fine. So party, let's go for a beer, everything fine.
But at some point, the next morning arrived. So I had to do a lot of phone calls. Lots of people complained that applications ran terribly slow.
And yes, we had the pressure to fix bugs. And sometimes, this was not really funny. So why did we get into this problem? At first, we did our tests manually. So this was some years ago.
And sometimes, the monitoring was not set up properly. So let's say all of the metrics we had for our application were perfectly green. So everything was running. But in the end, our customer tried to access the application, and the application was terribly slow.
So not really the funniest situation. So at least, this leads to the conclusion: one does not simply deploy on Thursday evenings or on Fridays. So which options do we have?
We can deploy between Monday and Wednesday. But will this help us? I don't think so. So in the last few years, we got to a point where we have new options to solve these problems. So we have cloud native stuff.
So just as a start, should we wait until Wednesday for security updates? I don't think so. The second thing: if we wait with deployments, and if we say that we don't want to deploy at any time,
what about the well-praised dev-prod parity, which we know from the twelve-factor app? But we are in a cloud-native world, so everything is better. So at first, we have Kubernetes. I think this solves all of the problems we had in the last 20 years, right? And using GitOps, our deployments are stable.
So everything we have in our Git repository gets reflected on our Kubernetes cluster. So last but not least, our customers can expect that the application is running flawlessly, right? I don't think so. And this is what this talk is about. So at first, we will take a look at what GitOps is
and where the problems with GitOps are, from my perspective. Furthermore, I will tell you some things about metrics, about service level objectives, service level indicators, and so on. I will tell you something about Keptn.
So I think you see the Keptn logo. So yes, it's a bit about this. And how we can go beyond GitOps with Keptn, so how we can extend GitOps to make everything better. And last but not least, we will wrap this up, and I will tell you how you can contribute to Keptn
or how you can get into the community. So at first, what is GitOps? Let's say one year ago, I thought GitOps is pretty easy: I take my configuration,
check it into my repository, have my CI pipeline, and it gets applied automatically every time I check something in. But in fact, this is not GitOps. There is a GitOps working group, and they specified some characteristics of GitOps.
And I will make this a bit clearer. So at first, we assume that all of the configuration we have in our Git repository is stored declaratively, in a versioned and immutable way
in the Git repository. Furthermore, this configuration is pulled automatically. So we might have some GitOps controllers, like Argo CD, like Flux, or however they are called,
which are there to watch Git repositories and find out what's new there. And they compare exactly the state we have in the configuration with the state that is stored in the target environment.
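Just to make this concrete (this is my own minimal sketch, not a slide from the talk, and the names and repository URL are made up): such a declaratively stored, automatically pulled piece of configuration could be an Argo CD Application like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: potato-head                  # illustrative application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/demo/gitops-config.git   # hypothetical config repository
    targetRevision: main
    path: apps/potato-head
  destination:
    server: https://kubernetes.default.svc
    namespace: potato-head
  syncPolicy:
    automated:                       # the controller pulls and reconciles on its own
      prune: true
      selfHeal: true
```

In the demo later on, the automated sync policy is left out, so that Argo CD only syncs when Keptn tells it to.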
So in my case, this is mostly Kubernetes, but there might be other environments where this could be applied. So after we apply something with GitOps, we know a few things. So for instance, our technically desired state is met.
So if we have some Kubernetes deployments and we specified that we want a certain number of replicas, that we want to have some variables configured, and so on, we can assume that after this manifest is applied, this state is met.
Furthermore, by working with health checks on Kubernetes clusters, we know that the workloads are OK. So we know that if we have some HTTP endpoint which we query with our health check, this endpoint responds fine.
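Such a health check is typically a readiness or liveness probe on the container spec; a minimal sketch (path and port are made up) could look like this:

```yaml
# Fragment of a container spec; Kubernetes only routes traffic to the pod
# once this endpoint answers successfully.
readinessProbe:
  httpGet:
    path: /healthz        # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```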
But when we are dealing with GitOps, we don't know if our infrastructure was ready for this deployment. So when we're using GitOps controllers, the GitOps controllers compare the state of the Git repository with the target infrastructure and try to get to this state.
And in most cases, they don't ask you. So at the moment a configuration drift happens, the GitOps controller wants to get to this state, but it doesn't care whether the infrastructure was ready for the deployment.
So for instance, you could have an update on the Kubernetes cluster going on, and so on. And we also don't know how risky it was to deploy our application. So it might have been that we had around 220 downtimes in the last 24 hours.
Hopefully not. But this should also be taken into consideration before deploying something. And last but not least, we know, as I said before, that our technically desired state is met. But we don't know if our application is really running.
So for instance, it might be that the technically desired state is OK, but I don't know how my customer feels about this application. So I don't know if it takes them about 30 seconds to get the response to their request.
I often don't know if they get errors when they start something. So yes, we might be missing something in our GitOps scenario. For instance, we might need some things like pre-deployment checks, such as a risk assessment.
So in which state is my infrastructure? How are my error budgets? And so on. We want to take control over the GitOps process, which might work in some GitOps tools, like Argo CD.
And we want to do staging automatically. So we want to get from one stage to another, which is also often not possible via pure GitOps approaches. Last but not least, we also want to do post-deployment checks. And I heard a very interesting talk, I think two hours ago,
about smoke testing, performance testing, and so on, after deployment. So we want to test automatically after each deployment. And last but not least, we want to know how our application is behaving, and want
to have some kind of application health checks. And last but not least, and I know all of the open source enthusiasts are big security fans, we want to do automatic security and vulnerability checks.
And as I told you before, I'm from Dynatrace. And Dynatrace is an observability company. And Dynatrace is also the company behind Keptn. So we are working on Keptn as an open source project. We donated Keptn to the CNCF, and it stays an open project.
But yes, we as Dynatrace, or as an observability provider, have a lot of data. And data can help us make decisions. For instance, all of the metrics we are collecting with our tools,
such as Dynatrace, Prometheus, Datadog, I think Nagios, or however they are called — yes, they are measured by monitoring solutions. And from this, we can derive service level indicators, for instance, to find out
how the system is behaving at the moment and which behavior the customer is experiencing. This could be such things as the response time, such as the error rate, such as critical security problems, the average memory, the average CPU, so everything we know from our monitoring systems.
Furthermore, from these service level indicators and metrics, we can derive service level objectives and error budgets. So for instance, we could say we want our response time in 90% of the cases to be below 500 milliseconds.
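As a rough illustration of the error-budget idea (the numbers here are mine, not from the talk): with an objective that 90% of requests finish below 500 milliseconds, the budget is the remaining 10% of requests,
$$\text{budget} = (1 - 0.90)\cdot N_{\text{requests}}, \qquad \text{remaining budget} = \text{budget} - N_{\text{slow requests}},$$
so with one million requests per day the budget is 100,000 slow requests, and if 70,000 of them have already been slower than 500 milliseconds, only 30,000 are left before the objective is violated.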
We want to have no errors in our application. And before we want to deploy something, we want to make sure that we have enough error budget left in case something breaks. And based on these things, we can make decisions.
So we could say if we deployed something to our development environment and tested something and have an error rate of zero or the response time is perfectly fine and everything is shiny,
we could promote to the next stage. Furthermore, we could also say, yes, our error rate got higher than zero. So we do an automatic rollback. Or if our error budget is below our threshold,
then we simply do not deploy. So exactly with this data, we can make decisions. And the platforms we have, such as Prometheus, Dynatrace, and so on, can provide us with additional data to make this a bit clearer.
So what is Keptn? So at the center of Keptn, we are always operating on service level objectives. So there is a number, such as 98, as stated here. And these describe the business-desired state of our cloud-native apps and infrastructure for every stage,
such as the dev environment, the staging environment, and the production environment. And Keptn is triggered based on events. So it might be that you want to deploy something, so you fire a deploy event. Or you want to evaluate something —
you fire an evaluate event. And the same for tests. And Keptn can orchestrate exactly this. So at first, we assess how the system is behaving at the moment. We can deploy something, test something, in the end evaluate something, and promote to the next stage.
And there, everything starts again from the beginning. The same goes for day-one and day-two operations. So we might have a monitoring solution, which could also interact with Keptn. And there are a lot of tools which can be integrated
into Keptn, such as Jenkins, Azure DevOps, Litmus, k6, and so on. So there are many integrations out there which can be plugged into Keptn. And all of them get orchestrated through CloudEvents.
And this is also an open standard. So I told you that Keptn uses service level objectives to evaluate the desired state of applications and infrastructure. So what does this look like? At first, we want to start some kind of evaluation.
Then we have an evaluation service in our Keptn installation. This is called the lighthouse service — you don't have to remember this. And we have defined some SLOs. So for instance, we have objectives. They are based on the SLI error rate, in my case. And the criterion is that the error rate has to be lower than or equal to one, in this case.
The same for the JVM memory, which is not evaluated in this case, and so on. And based on these SLOs, we try to calculate a total score. So every SLI we have in the SLO becomes part of a score.
And we can say, if our total score is higher than 90%, then we pass and go forward. Or if we are between 75% and 90%, we raise a warning.
And below that, everything fails. And we get these SLIs from our monitoring provider, as shown here. This might be Prometheus. This might be Dynatrace, Datadog. I think also Splunk, and so on.
And in the end, the SLIs are also specified in a more or less declarative way in Keptn, which shows you how the error rate is fetched, and so on.
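To give you an idea of what I mean by that (this is a minimal sketch in the spirit of the slide, not the exact files from the demo; the objectives, numbers, and queries are illustrative), an SLO file and a matching SLI file for the Prometheus provider could look roughly like this:

```yaml
# slo.yaml - objectives plus the total-score thresholds mentioned above
spec_version: "1.0"
comparison:
  aggregate_function: "avg"
  compare_with: "single_result"
  include_result_with_score: "pass"
  number_of_comparison_results: 1
objectives:
  - sli: "error_rate"
    pass:
      - criteria:
          - "<=1"          # error rate has to be lower than or equal to one
  - sli: "response_time_p95"
    pass:
      - criteria:
          - "<=+10%"       # not more than 10% slower than the last evaluation
          - "<500"
    warning:
      - criteria:
          - "<=800"
  - sli: "jvm_memory"      # collected, but not evaluated: no criteria
total_score:
  pass: "90%"
  warning: "75%"
---
# sli.yaml - how the indicators are fetched from the monitoring provider;
# the queries depend entirely on how your application exposes metrics
spec_version: "1.0"
indicators:
  error_rate: sum(rate(http_requests_total{status=~"5..",job="$SERVICE"}[$DURATION_SECONDS])) / sum(rate(http_requests_total{job="$SERVICE"}[$DURATION_SECONDS]))
  response_time_p95: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{job="$SERVICE"}[$DURATION_SECONDS])) by (le))
```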
Good. So what does the calculation look like in reality? So we have service level indicators. And this can be something like the 95th-percentile response time, the overall failure rate, test steps, and so on.
And we have the SLOs for them. In the first step, so at our first deployment, this is the base data. We define that our response time in 95% of the requests should not be higher than 100 milliseconds, in this case.
We also define that the overall failure rate should be lower than 2%. And one more interesting thing is the test step login response time, where we say it should
be lower than 150 milliseconds. And it should be less than 10% slower than before, so than the last evaluation. And yes, this is the target value of our SLO quality gate. So we assume that if we reach 90% of everything, then it's OK. If we have 75%, this is a warning. And we can use exactly this information to also take action. So for instance, if we are above 90%, we could say, yes, everything is fine, deploy to the next stage.
Or if we are in a blue-green deployment, just switch the traffic to the new deployment. But if we are between 75% and 90%, I need to get an approval. So do a manual approval if you are in this range, or write a Slack notification, or whatever.
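In Keptn, this kind of rule can be expressed in the shipyard file with an approval task; a rough sketch (the stage and sequence names are made up, and I am going from memory on the exact spec version) would be:

```yaml
apiVersion: "spec.keptn.sh/0.2.2"
kind: "Shipyard"
metadata:
  name: "shipyard-argo-demo"
spec:
  stages:
    - name: "hardening"
      sequences:
        - name: "delivery"
          tasks:
            - name: "deployment"
            - name: "test"
            - name: "evaluation"
            - name: "approval"
              properties:
                pass: "automatic"   # clean pass: promote without asking
                warning: "manual"   # warning band: wait for a human approval
            - name: "release"
```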
So then we are building, or deploying, our first thing. And we see, yes, everything is green, everything is perfect. And yes, we have an overall score of 100%. At our second build, we see that the response time
got higher than 100 milliseconds — 120 milliseconds — so it gets into a warning state. The same for the failure rate. And we saw that the test step's service calls are higher than before.
So we see that our overall score goes below 75%, and it fails. And in the third case, we saw that our response time was more than 10% higher
in comparison to the last one. And we had one critical security vulnerability, so we also fell below 75%, and this was also not okay. And last but not least, this got corrected in the last build, so everything is fine here again.
And yes, we can go forward. Okay, so I started to talk about GitOps before. And using this information we have now, we could go beyond GitOps with Keptn. So let's say we have our GitOps in place,
we know that everything is continuously reconciled, that everything is running perfectly fine, and now we take Keptn into the equation and want to extend this with observability and application health checks,
and an SLO-driven control flow. So we can use our observability providers, Prometheus and Dynatrace, to observe our Kubernetes environment. Then we use Keptn to get data from these observability providers
and to evaluate the state of these environments. And we could also use our observability provider to notify Keptn that something is not really okay. So for Prometheus, you can raise Alertmanager notifications to notify Keptn and trigger events with this.
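A hypothetical Alertmanager snippet for this (the receiver URL depends entirely on your Keptn installation and is not the one from the demo) could look like:

```yaml
# Forward firing alerts to Keptn so it can trigger an evaluation or remediation
route:
  receiver: keptn
receivers:
  - name: keptn
    webhook_configs:
      - url: http://prometheus-service.keptn.svc.cluster.local:8080   # illustrative endpoint
```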
And Keptn can be used to control the Argo CD process, or the Flux process, and to promote to the next stage. Because if we are in a GitOps environment, we cannot simply say we roll back our environment
or we trigger another thing in our Kubernetes cluster. We always have to go back to the Git repository and change something to get this in sync. Okay, so I think I talked very much,
so let's go to kind of a demonstration. Okay, so in my case, I have a simple application
which is deployed with Argo CD. This application is the Potato Head. And what I want to demonstrate now is that I want to change something in my Git repository. This will trigger Keptn, Keptn will do some evaluations afterwards, and in the end, hopefully, our application will be working.
Okay, so these are my manifests for this application. And what we see here is that I have deployed a Helm chart with the target revision of 0.1.1, and so on. And I think this application is stable enough now
to get this to a higher version. So let's say this is application version 0.2.0 now. And what we see in Argo CD at the moment is that version 0.1.1 is deployed.
And we could also take a look at the application itself and see that exactly this version is deployed at the moment. Okay, so I changed the target revision here, and now I will simply commit this change. So normally I would create a branch,
would do the changes, would raise a pull request, someone will approve it, and then the process would start. But as we don't have so much time now, and I have no approval here, let's do it directly. So I committed the change now.
And in a few seconds, or after I refresh this, I should see that Keptn — did I change this? Yes.
So the demo code doesn't like me today. So now it goes — we see that Keptn has been notified
that something happened with this deployment. So what you see here is the Keptn Bridge, which is kind of a graphical representation of what is happening in Keptn. And what we could do, what I simulated here, was that I tried to get the error budget,
and yes, pre-deployment checks were already perfectly fine. And we said yes, we could deploy exactly this version now. So we see that we have version 0.2.0 in the queue,
and now we could say, yes, I want to get this onto my system. And this is more or less a speciality of Argo CD, because I could hook into the sync process. So for instance, in this configuration, I said there is no auto-sync.
If you get out of sync, notify Keptn, and Keptn will tell you if you can sync. So I will sync now, and what I will see in Argo now — this is the application I talked about.
Hopefully this will start to sync now — yes. So I see that my application is syncing now, and that it got synced. And everything you see here gets done in Keptn. So what you could also do during this:
you could also include a deliver-infrastructure step before deploying something, or check something before delivering this. And this deliver step here was to update Argo CD, so we notified the Argo API that we want to deploy now,
and this got deployed afterwards. What we are doing now is we are running tests. So in my case, I added a k6 test step, which is running automatically.
And this is also a thing I like to do very much, combined with blue-green deployment. So for instance, you could deploy something, could run some tests, could evaluate something, and if everything is running fine, just switch to the other version.
These tests will take about 30 seconds, I think. In the end, you see that the k6 tests ran perfectly. And now Keptn talks to the observability provider,
tries to find out — gets the metrics, finds out if everything behaved as it should — and in the end, yes, hopefully we'll get an okay. So what did we do in the meanwhile, or what happened behind the scenes?
Before I started this demonstration, I set up an Argo application and created the Keptn project and service, obviously, because otherwise this would not have worked. Furthermore, I set up the necessary notifications in Argo. So Argo notifies Keptn via notifications.
And I also set up a webhook for Keptn, so this is kind of a ping-pong: Argo notifies Keptn, Keptn notifies Argo, and so on. And last but not least, I labeled the Argo applications with Keptn information. So this was also the thing you saw before, that Keptn knows under which URL
the application is reachable now. So let's get back to the demonstration. It's strange now.
So this might take around two minutes. So let's get back to the last evaluation.
So what we see in our evaluation here in Keptn is that we have some kind of history, or heat map, of our deployments, or of the quality of our deployments. For instance, we see the total score, so we see the state of the score before.
So we saw that, except for one deployment, everything was okay. We also see that the error rate and the response times were already okay. This is not the best for the demo use case. But we also see some more data here.
So we see an SLI breakdown where we see that the response time of 50% of the requests was at 572 milliseconds, I think, in this case. And the same for the throughput and so on. And we also see how Keptn calculated the score for that.
For instance, we saw that the error rate has a higher weight than the other ones. So it has 50% of the score, and the other ones only have 25% each.
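To make the weighting concrete (my own numbers, and assuming, as I understand the lighthouse scoring, that a passed objective earns its full weight, a warning half of it, and a failed one nothing): with the error rate weighted 2 and response time and throughput weighted 1 each, an evaluation where the error rate and the throughput pass but the response time only gets a warning scores
$$\frac{2 \cdot 1 + 1 \cdot 0.5 + 1 \cdot 1}{2 + 1 + 1} = \frac{3.5}{4} = 87.5\%,$$
which would land in the warning band between 75% and 90%.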
So from the demo and everything we saw until now — just a second, now it works — what did we take away? So GitOps basically helps you deliver your software. So GitOps is nothing more and nothing less
than a delivery mechanism. Data helps you gain confidence in your deployments. So using the data you have in your observability solution, whatever it is — open source, closed source, or whatever — helps you get your deployments stable.
Keptn can help you enhance GitOps workflows with data. So as you saw in the demonstration, you can hook in with Keptn; Keptn can do some evaluations, can trigger tests, and whatever. And it can help you gain confidence.
And yes, as I said before, validate your deployments customer-centrically, using SLOs, error budgets, and so on. This is, I think, more relevant than whether your process uses 50% CPU,
because sometimes you might have a high saturation on the host, but the application might still work perfectly fine. At some point, you might not have such a high load, but the application might not run perfectly. And in the end, continuously watch and remediate everything.
So yes, how can you get in touch with Keptn? At first, Keptn communicates via Slack. So we have our own Slack workspace — just subscribe. And if you want to talk about something regarding Keptn, regarding continuous delivery, and so on,
I think you will find me there, so just ping me. Furthermore, as I said, Keptn is a CNCF project; it is an open source project. And like every healthy project, we have monthly user groups and biweekly developer meetings.
They're all listed on the community page. Everything you saw here, the configurations and so on, is stored in a Git repository. So at first, the integration of Argo CD, so the steps to get this integration working, is stored in a Git repository.
And also exactly this demonstration is stored in the Argo demo repository. Yes, I think the same as for every open source project, try it out, open issues. And yes, enhancements and improvements are always welcome.
So yes, this was it from my side. I think it was a bit too fast today. I hope you had as much fun as me. And yes, I'm open for questions now.
I think it's...
Thank you. I think this is very much about, in reality, about providing good SLIs, right? Which is really valuable. So maybe you have many sources for SLI data.
And you just showed a quite simple example. Can you also combine multiple sources for the same SLI? For example, security issues can come
from container vulnerabilities, from dependency checks, from Docker configuration checks, from you name it. And these are all security issues. So is there a mechanism included to aggregate data from different SLI providers?
So at the moment, Keptn only supports having one SLI provider at a time. But you can always combine them — you can have multiple SLIs for your vulnerabilities and could weight them however you want.
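For example (a sketch only, with made-up SLI names), the objectives section of the SLO file lets you weight several security-related indicators differently:

```yaml
objectives:
  - sli: "container_vulnerabilities_critical"
    weight: 3                # counts three times as much as the default
    pass:
      - criteria:
          - "<=0"
  - sli: "dependency_vulnerabilities_high"
    weight: 1
    pass:
      - criteria:
          - "<=2"
```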
So thank you for your talk. You mentioned that you had to do a couple of configuration steps like adding some labels
to your Argo resources. If you do have a bit of time, would you mind just providing us a little walkthrough? Like, what does this configuration look like? Yes, I can at least show you what the application looks like in more detail.
So this was sort of a typical configuration I did for my Argo setup, for my demo here.
What I added here — and this was not very much — was some labels, such as the stage. So I know that I'm in my dev stage, I know that my service is called Potato Head, and that the Keptn project is called Argo Demo.
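Roughly, this is what the labels on the Argo CD Application look like (the exact label keys here are illustrative and may differ from the ones used in the demo repository):

```yaml
metadata:
  name: potato-head
  labels:
    keptn.sh/project: "argo-demo"   # which Keptn project to ping back to
    keptn.sh/service: "potato-head"
    keptn.sh/stage: "dev"
```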
So these were the things I needed for the ping pong so that I know where to ping back. And I also had to configure something in Argo itself and I will try to get this open, just a second.
And there is some kind of Argo configuration where we have the notifications for Argo. And this does nothing more than send a webhook to Keptn.
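As a sketch of that wiring (not the exact demo configuration — the URL, the token handling, and the event payload are illustrative), the Argo CD notifications ConfigMap would contain a webhook service pointing at the Keptn API, a template that sends a Keptn CloudEvent, and a trigger that fires when the application goes out of sync:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.webhook.keptn: |
    url: https://keptn.example.com/api/v1/event        # hypothetical Keptn API endpoint
    headers:
      - name: x-token
        value: $keptn-api-token                        # taken from the notifications secret
  template.keptn-delivery-triggered: |
    webhook:
      keptn:
        method: POST
        body: |
          {
            "type": "sh.keptn.event.dev.delivery.triggered",
            "source": "argocd",
            "data": {"project": "argo-demo", "service": "potato-head", "stage": "dev"}
          }
  trigger.on-out-of-sync: |
    - when: app.status.sync.status == 'OutOfSync'
      send: [keptn-delivery-triggered]
```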
I added the hostname of Keptn there, and we added some data to the CloudEvents we use to talk to Keptn. One of the hardest parts was that Keptn builds its own context. So what you saw in the Keptn Bridge —
that it has the pre-deployment, deployment, and post-deployment steps — they have to be put together somehow, and this is done via the Keptn context, or correlation context ID.
And the hardest part, exactly for this demo where we also wanted to do some pre-deployment things, was to get this through the Argo process. And we added this somewhere in the data of the Argo sync process,
as far as I remember — just a second. I know that we added this somewhere, but I can't find it now. But this gets added to the sync data,
and after the sync has finished, this all gets passed over to Keptn again. And this is why you see everything in one sequence. But in the end, these two configurations in Argo CD, so setting up the notifications and setting up the labels on the application,
are the only things you have to do in Argo CD. All right, thank you.
I know that sometimes, in some environments, it isn't allowed to be so — I call it dynamic — to go from a lower stage to production. Just some ideas on how this can be implemented, so that Keptn can say, for instance,
it checked every box, now it's time for a human review before it can reach, say, this final compliant cluster, and so on. Yes, so this is the same as what we did here at the beginning of the process.
So you can always do manual approvals. So for instance, yes, before, you can say I want to get to the end and wait for an approval until some human says that everything's okay.
I also had some use cases where I started to deploy in the dev stage; when the dev stage was working perfectly fine, I passed over to the hardening stage, deployed but did not release,
and waited until someone approves to switch the traffic. Okay, this is really great, but going into existing environments and setups, of course adoption has to happen somehow,
and especially if we have extensive testing which is already built and exists in various tools or pipelines. Can you integrate this here, or do you have to do all the testing that is part of the post-deploy or validation process?
Do you have to do it with Argo CD, or can you wire together whatever you have? Yes, you can wire together whatever you want. So one thing Keptn is pretty good at is integrating things. So you can take whatever tooling you want. This is also one of the things we mention very often: we don't want to deploy by ourselves. There are many tools which can do it better than we can. Furthermore, I think k6, JMeter, and so on are tools which are much better at testing than we will ever be for such things.
And therefore we are focusing on integrating such things, and this can be done via Keptn services, while we also have a generic service where you can add whatever CLI you want. Thanks.
Any other questions? Okay, then if you have more questions, you find me outside, and thanks for having me.