
Unifying Infrastructure and Application Delivery Using Keptn


Formal Metadata

Title
Unifying Infrastructure and Application Delivery Using Keptn
Number of Parts
287
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Did you ever promote your application from staging to production and forget some important infrastructure changes? Do you wonder how to automate chaos tests in your delivery pipeline to validate that your services can deal with failing nodes? When we at Dynatrace started our microservice journey, we had to deal with precisely those and many more questions. Using Keptn as a control plane for application delivery, we can orchestrate all those tasks to avoid bad deployments while providing a unified deployment experience for our developers. In this talk, I will bring some light into combined infrastructure and application deployment using Keptn and show you how those seemingly separate activities can be unified.
Transcript: English (auto-generated)
Hello everyone and welcome to my talk about unifying infrastructure and application delivery using Keptn here at the FOSDEM CI/CD dev room. My name is Thomas Schuetz, I am Principal Cloud Engineer at Dynatrace and a Keptn maintainer. Furthermore, I am taking part in the CNCF TAG App Delivery, I am a co-author of the CNCF Operator White Paper
and a lecturer for Security, Microservices and Infrastructure as Code at a local University of Applied Sciences in Austria. Today we are going to talk about broken deployments, in the past but also today. To get a solid unified deployment approach, we need a solid relationship between platform operations and application development.
Furthermore, we will discuss an approach for reducing the possibility of breaking deployments today. And we will take a closer look at what Keptn is and how it can help you deal with such problems. Last but not least, I will wrap this up and give you some more information about this topic and how to dive deeper into the Keptn world.
So, when we designed new systems, ops people ordered servers, mounted them in racks and did some initial configuration, such as NTP, SSH and monitoring, but also installed the runtime environment for developers. When this was done, developers deployed the application and adapted it when they found issues.
Sometimes, infrastructure changes were needed in the environment, and after some iterations the application worked perfectly fine and everyone wanted to get it to the next stage. So ops people or application developers installed the application there, and it was not really unusual that this didn't work as intended.
So, what went wrong? The infrastructure changes we made were not introduced in the next environment and therefore the application was not working properly. These were ancient times, and nowadays we have some new things such as GitOps, Infrastructure as Code and Kubernetes.
We might think that all of this has changed and we are not facing these issues anymore. So, let's take a look at it. When applications are developed, new environments are created either sequentially or in parallel. After that, application infrastructure such as ingress controllers, storage classes and load balancers is
configured, and at some point in time development teams are able to deploy their applications. During development, they find out that they may need additional storage classes, for instance NFS, which are created, and afterwards the services are deployed.
After some time, a more stable environment is needed for that application, and they want to promote their service there. Now, after the application is deployed, developers find out that the service doesn't come up, and they wonder why, as the deployed application is exactly the same as on the development environment.
After some investigation, they find out that the storage class which is needed for the service has not been introduced in the second stage and therefore it doesn't work. In this talk, I want to give you ideas on how to deal with this.
Sometimes we tend to put the core platform infrastructure, such as Kubernetes itself and its components, and the application infrastructure under one umbrella. As the application might follow another lifecycle than the infrastructure, this can lead to different kinds of problems.
When your application needs a database and you want to get it upgraded, when would be the perfect time to do this? When will the database be deployed when running in a mixed environment? Would this be before or after the upgrade of the application? In my opinion, the perfect time to update the database would be shortly before the application deployment.
So I tried to split the infrastructure into two parts, the application infrastructure and the platform infrastructure. The application infrastructure consists of the service itself, and the platform infrastructure of all of the services which might be necessary for keeping the system stable and compliant.
The application uses the platform infrastructure, and a clearly communicated and, in a perfect world, machine-readable agreement between them should exist. When I think of the platform infrastructure itself, I mean infrastructure which is shared between services. To keep the dependencies low and the services portable, this should be as simple and small as possible.
Furthermore, this context should also contain the policy framework, for instance the Open Policy Agent for Kubernetes. Examples of services which might live here are Kubernetes itself, its ingress controllers, storage classes and the core monitoring components, but not the application monitoring configuration itself.
On the other hand, the application infrastructure should contain the infrastructure which is needed for running a specific application. This ensures that infrastructure upgrades or installations happen at the right time, but it also gives the developers a lot of control over the infrastructure itself.
And on the other hand, they know best which infrastructure they need. Nevertheless, this should be as simple and loosely coupled as possible. For instance, sometimes it might be easier to depend on an ingress controller of a specific class and keep the configuration generic than to depend on an Nginx ingress controller with a specific load balancer. I think you get the idea.
The infrastructure which belongs to this would be the service itself, databases, DNS entries, but also certificates and the monitoring configuration. This also ensures that the application is portable to other environments.
By providing interfaces and templates for the configuration, you can ensure that they are configured properly in any case. So now we know both contexts and might want to get to a point where we define a contract between them. And you might wonder if there is one way to achieve this goal. In my opinion, there are different ways to achieve the same goal.
Firstly, we could define capabilities, which might tell our application deployment tooling which infrastructure it can expect. For instance, we could find out how Kubernetes is configured and if Open Policy Agent is in place. One more important question could be whether network policies can be used, depending on the CNI plugin in use.
A second approach could be the definition of a versioning scheme for the infrastructure. As an example, we could define that an infrastructure which supports network policies and uses Kubernetes 1.23 and OPA is our version 1.
On the other hand, an infrastructure where additional policies are added could be version 2. The third, in my opinion most diligent but also hardest, approach would be gathering this information directly from the infrastructure.
For example, we could try to find out if an ingress class exists and, if it is missing, wait until the prerequisite is met or break the delivery. Whatever approach is used, we can use this information in our deployment strategy. We can store and use this information in different ways.
The first, and the one we'll see today, is using config maps, which might be the simplest approach as it is schema-less and you can store whatever information you like. A second, more structured way would be using custom resource definitions, and obviously key-value stores could also be used for this information.
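To make this concrete, such a capability record stored in a ConfigMap could look roughly like the sketch below. All names and keys are made up for illustration; they are not a Keptn or Kubernetes convention.

```yaml
# Hypothetical ConfigMap describing what the platform infrastructure provides.
# Key names and values are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-infrastructure-info
  namespace: platform-info
data:
  infrastructureVersion: "0.0.122"
  kubernetesVersion: "1.23"
  networkPoliciesSupported: "true"
  policyFramework: "opa"
```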
In my opinion, it totally makes sense to use this information in your continuous delivery workflow. For instance, you could simply check if storage classes exist before deploying. When using operators, you could check all of these infrastructure constraints in your control loop and take action when prerequisites are not met.
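One very simple way to express such a pre-deployment gate is a Job that fails when an expected storage class is missing, which a pipeline can then use to break or wait. This is a minimal sketch, assuming a service account that is allowed to read storage classes; the image and names are illustrative.

```yaml
# Sketch of a pre-deployment check: the Job fails if the 'nfs' storage class
# does not exist, so the delivery workflow can break or wait on it.
apiVersion: batch/v1
kind: Job
metadata:
  name: check-nfs-storageclass
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: deploy-precheck   # assumed to allow 'get' on storageclasses
      restartPolicy: Never
      containers:
        - name: check
          image: bitnami/kubectl:1.23
          command: ["kubectl", "get", "storageclass", "nfs"]
```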
Regardless of where it is stored, you can use this information in your CD workflow and break or wait if some prerequisite is not met. Now we've heard a lot about how we could tackle this problem, but you might wonder how such things could look in real life. I will use Keptn to demonstrate this.
Keptn is a control plane on top of Kubernetes which is used to orchestrate the lifecycle of an application in a distributed setup. For instance, you could have one Kubernetes cluster in the cloud which acts as the control plane. You might have some environments where you want your applications to get deployed to, but you might also
want one cluster in your data center which sends notifications to your JIRA instance or orchestrates your infrastructure. Keptn is a CNCF sandbox project and currently in the process of incubating. Its ecosystem is growing and lots of integrations are already available.
There are Helm, kubectl and Argo Rollouts for deployment, Litmus Chaos for chaos testing, and JMeter and Locust, but also k6, for load testing. To ensure that the quality criteria of your deployments are met, it is able to use Prometheus or Dynatrace as an SLI provider.
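For context, Keptn quality gates are driven by an slo.yaml per service, evaluated against the configured SLI provider. A minimal example following the documented format looks like this; the concrete objective and thresholds are just an illustration.

```yaml
# Minimal Keptn slo.yaml: evaluate the 95th percentile response time
# reported by the SLI provider (for instance Prometheus or Dynatrace).
spec_version: "1.0"
comparison:
  compare_with: single_result
  number_of_comparison_results: 1
  aggregate_function: avg
  include_result_with_score: pass
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=+10%"
          - "<600"
    warning:
      - criteria:
          - "<=800"
total_score:
  pass: "90%"
  warning: "75%"
```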
In the following demonstration, I used three clusters, one control plane and two execution planes, which represent a dev and a staging environment. In one of the preceding talks, Viktor Farcic told you something about Crossplane. For this demonstration, I took a closer look at Crossplane and found out that it might be a perfect fit for orchestrating the execution planes of Keptn in terms of provisioning Kubernetes
clusters, installing Helm charts on top of them, and setting the permissions needed on the execution plane. As a consequence, Crossplane is used for all infrastructure-deployment-related tasks in the demonstration, and therefore all Crossplane operations are running on the control plane.
Furthermore, we will install an application using a mixture of Helm and Argo Rollouts in a blue-green fashion and do some simple checks using k6 before switching the traffic to the new deployment. So, when I started implementing this use case, I defined the workflow for every service in the project in Keptn.
This is the shipyard file for our platform infrastructure. You see that there are two stages specified, dev and staging. Both consist of one sequence called artifact-delivery and three tasks. The relevant ones are the platform setup and setting the infrastructure version. You can think of all of these tasks and sequences as interfaces. They are defined on
a project basis. Therefore, the workflow will be the same for all of the services in the same project. But these tasks can be implemented in different ways. For example, service A could be tested using JMeter and a second service could be tested using k6.
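A shipyard along the lines described here could look roughly like the following sketch. The stage and sequence names match the talk; the task names are illustrative reconstructions, since the talk only names two of the three tasks.

```yaml
# Sketch of a Keptn shipyard with two stages and an artifact-delivery sequence.
# Task names are illustrative; the third task from the talk is not specified.
apiVersion: "spec.keptn.sh/0.2.2"
kind: Shipyard
metadata:
  name: shipyard-infrastructure
spec:
  stages:
    - name: dev
      sequences:
        - name: artifact-delivery
          tasks:
            - name: platform-setup
            - name: set-infrastructure-version
            - name: release
    - name: staging
      sequences:
        - name: artifact-delivery
          triggeredOn:
            - event: "dev.artifact-delivery.finished"
          tasks:
            - name: platform-setup
            - name: set-infrastructure-version
            - name: release
```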
The only thing that matters is that a service and the configuration exist which specify how these events are handled. Keptn acts event-based. Therefore, every action which is taken in Keptn is triggered by an event. The event here shows that an artifact-delivery event has been triggered by me.
Finally, it will deploy the version 00122, which has been specified up front, in the project infrastructure, the service platform and the stage dev. There has to be a service listening on the event to get an action done.
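Keptn events are CloudEvents, normally serialized as JSON; the essential fields of such a triggered event are sketched here in YAML for readability. The event type follows the Keptn spec, while how the version is attached (here as a label) is an assumption about this demo setup.

```yaml
# Sketch of a Keptn triggered event (actual events are JSON CloudEvents).
type: sh.keptn.event.dev.artifact-delivery.triggered
data:
  project: infrastructure
  service: platform
  stage: dev
  labels:
    version: "00122"   # illustrative; depends on how the integration carries the version
```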
I have already deployed the infrastructure up front, but let's trigger one or two service deployments and try to find out what's happening in the meantime. When opening the Keptn Bridge, we see all of the projects and recently triggered sequences in our installation. In my demonstration, two projects exist, one for the infrastructure itself and one for an application project called podtato-head.
In the first step, we will inspect our infrastructure project a bit. The infrastructure project consists of three stages, a development, staging and production environment.
Its main purpose is to deliver the execution planes and therefore the platform infrastructure for applications. We can see which Keptn services are installed when we switch to the Uniform view. This page shows all of the services which are registered in Keptn and the events they are subscribed to.
In a further step, subscriptions can be changed and also filtered by stage, service and project. To see what's really going on with our services, the sequence view gives us information about our previous and currently running deployment workflows.
When clicking on a specific sequence, we see everything that happened in the workflow. The first step in our workflow ensures that all of the configuration needed is provided in a consistent way for the git-based approach we are using here.
Secondly, the platform setup step installs the execution plane, sets up the Kubernetes primitives on the cluster and also installs the Helm charts for the Keptn services. Afterwards, this step waits until everything is running and would break if the services are not ready after a certain amount of time.
Finally, the infrastructure version gets written to a config map, which can be used by the application deployments. We see in our sequence that currently version 00122 is deployed, which is information we will need later. To show you how a deployment in Keptn could look, I use one service from podtato-head.
This consists of one Helm chart, and it should install a Redis instance on Google Cloud using Crossplane. In the first step, we'll try to deploy the application with a higher infrastructure version requirement than the one which is currently provided.
Therefore, I will switch to the podtato-head project and open the shell. In my case, the minimum infrastructure version is specified in a configuration file in my deployment configuration.
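The exact shape of that file is specific to this demo setup; conceptually it is just a key that the version-check service reads and compares against the value in the platform's config map. A hypothetical fragment could look like this.

```yaml
# Hypothetical deployment configuration fragment: the key name is specific
# to this demo's version-check service, not a general Keptn setting.
infrastructure:
  minVersion: "00121"   # deployment proceeds only if the platform reports at least this version
```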
For this demonstration, I will change this to a higher version and trigger a deployment.
Now the sequence should be triggered, and in the first step we try to find out which infrastructure version is currently running. In our case, this should fail at some point. When clicking on the failed task, we see the output of the service which handled this event.
In our case, we get the information that the infrastructure version which is currently deployed on the environment is older than the one we require. Now let's fix this and switch back to an older minimum infrastructure version.
Now our sequence gets triggered again and we should see that the version check succeeds.
After checking the infrastructure version, many interesting things happen. Firstly, after the infrastructure version check, we apply our monitoring configuration with Monaco, a tool to configure Dynatrace monitoring via code.
Afterwards, we triggered our infrastructure delivery and therefore we also deployed our Redis database, which we can find in our GCP console now.
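That Redis instance is the kind of resource Crossplane's GCP provider can manage directly. A minimal sketch using its CloudMemorystoreInstance resource is shown below; the names, region, sizing and provider config are illustrative assumptions, not the exact manifest from the demo.

```yaml
# Sketch of a Crossplane-managed Redis (Cloud Memorystore) instance on GCP.
# Resource kind comes from Crossplane's GCP provider; all names are illustrative.
apiVersion: cache.gcp.crossplane.io/v1beta1
kind: CloudMemorystoreInstance
metadata:
  name: podtato-redis
spec:
  forProvider:
    region: europe-west1
    tier: BASIC
    memorySizeGb: 1
  providerConfigRef:
    name: gcp-provider            # assumed ProviderConfig name
  writeConnectionSecretToRef:
    name: podtato-redis-conn      # connection details for the application
    namespace: crossplane-system
```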
After the infrastructure delivery, we triggered our application deployment. Then we tested the deployment and tried to find out if it behaves as it should. After we tested all of this, we released it.
What this means is that we switched the traffic to the newly deployed version afterwards. After that, the whole development stage was deployed successfully, and now we can automatically
deploy to the staging environment, which is currently in progress and will finish the same way. As we see, the infrastructure delivery is currently also in progress for this stage, which means that we get exactly the same database in the second stage.
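The blue-green behaviour described here, deploying the new version in parallel, testing it and only then switching traffic, maps to an Argo Rollouts strategy roughly like the following sketch. The service names, image and the auto-promotion setting are illustrative, not taken from the demo.

```yaml
# Sketch of an Argo Rollouts blue-green strategy: the new version runs next to
# the active one and traffic is only switched after promotion.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: podtato-head
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podtato-head
  template:
    metadata:
      labels:
        app: podtato-head
    spec:
      containers:
        - name: podtato-head
          image: ghcr.io/podtato-head/podtatoserver:v0.1.1   # illustrative image
  strategy:
    blueGreen:
      activeService: podtato-head            # receives production traffic
      previewService: podtato-head-preview   # used for the k6 checks before the switch
      autoPromotionEnabled: false            # promote only after tests pass
```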
Finally, we deployed our service as expected, and the deployment proceeds automatically to the next stage. We could extend all of this with quality gates and additional tests, however we like our deployment experience to be. This approach has some limitations which might be subject to further investigation in the future.
At first, changes in infrastructure, for instance a rollback, might be a problem. Furthermore, we defined our infrastructure constraints manually in the tooling at the moment, and therefore they might not work if they are not tested. As a result, the infrastructure version might be a good indicator of whether things are already there, but there might be better approaches for that.
Beyond all of these limitations, we've achieved some things using this approach and orchestrating the delivery workflow with Keptn. As described before, we could install a small infrastructure layer and deploy applications on that infrastructure.
For the application context, we could deploy the infrastructure as well as the application in one workflow. Obviously, we ensured that we had enough permissions to do this up front. As Crossplane could also run in another environment, we could run the infrastructure deployment machinery clearly separated from the control plane and the application environments.
Today, we didn't take a look at the quality gates, but we would be able to define quality gates for our infrastructure after deployment. As Keptn is event-based, we could also add chaos and end-to-end tests. With this talk, I wanted to give you an idea of how infrastructure and application deployment could be unified.
This is not the only approach, and I hope there will be more approaches in the future to provide stable deployments for everyone. Especially for such cases, the Cooperative Delivery Working Group has been established in the CNCF, which will hopefully provide patterns and examples for such use cases in the future. As we want Keptn to fulfill your use case, feel free to reach out to us
on the Keptn Slack and tell us how you would like your delivery process to be. I'm currently driving an initiative to bring more GitOps into Keptn. So if you have opinions on that or want to contribute to Keptn or the GitOps project, feel free to contact me. We are always open to your contributions.
This talk was based on an article I published a few months ago. The things you heard today can be recapped there. With this, I want to thank you for your attendance and hope you enjoyed it and learned something from it. If you want to get in contact with me, reach out on the Keptn Slack or on Twitter and LinkedIn at the handle thehaistheraway.
As I think this was one of the last talks at FOSDEM, I hope you had a nice time at this virtual conference, and I'm open for questions now.
Okay, then, Thomas, thanks a lot for the presentation, and I'm asking the audience if they want to raise a question; then we are here to answer.
In the meantime, I have a couple of questions for you. First of all, let me know if I'm wrong, but if you are on top of Kubernetes, then you are also cloud independent.
You can use vanilla Kubernetes, but I'm expecting that you are able to use any Kubernetes that is provided by a cloud vendor, or am I wrong? Yes, you're totally right. The demonstration you saw here was running on two different cloud vendors. The control plane we installed here was running on Elastic Kubernetes Service, so on AWS.
The execution planes we had there were both running on GKE. Yes, we are cloud-vendor independent and you can use whatever Kubernetes distribution you like. The second question is, does Kubernetes have to be present already?
I know you mentioned in the presentation that you're using Crossplane. Can you explain a little bit what the strategy is? Are you able to completely bootstrap the infrastructure, Kubernetes itself, or is there
any limitation on that, or what is the strategy that can be used by the user? The reason why this talk exists was to demonstrate that the only thing you need to deploy your application environment is one control plane environment.
You only have to install one Kubernetes cluster, bootstrap the Keptn control plane there, and after that you can use whatever tooling you like to set up new clusters. So in my case, I set up Crossplane to install the other two execution planes, and therefore I could reproduce my whole environment at each point in time.
So the only thing I needed for starting up with Keptn and the three execution planes was one control plane. So you need one, and then you can bootstrap wherever you want, if I understood correctly.
I can bootstrap my first cluster in AWS and then go into Google Cloud or Azure and so on, without setting up the clusters there by hand, because you are able to do everything from the central one.
Yes, and another thing which is pretty nice is that you can also distribute your components. So if you already have one cluster which you are using for Crossplane, or if you have a machine where some Terraform automation
is running, you can simply install a Keptn service there and connect it to the control plane, and afterwards this can also be used. We have a question in the chat, and then I still have one question for you, but there is Peter, who is
asking which logic should be in your pipeline, which logic should be in Keptn, and how simple can your pipeline be? I'm also wondering how simple it is to extend, set up, understand and use your solution.
So in my opinion, your pipeline should contain the whole logic to build your software, to publish the artifacts, and to trigger everything, either in your GitOps repository or by triggering Keptn directly to start the delivery in Keptn.
In Keptn itself, we currently cover the whole deployment lifecycle, so it starts when all of the artifacts are published, and you take these artifacts, deploy them, test them, try to find out if everything works as you expect, and afterwards you can promote this to the next stage.
There is also a GitOps approach, so we are also working on a kind of GitOps approach, and in the future it might be possible to raise pull requests to get to the next stage. That means being driven entirely by Git.
One question from my side. It happens to me to have an environment that is not only distributed across multiple cloud vendors, but also across different regions, and I want to run the deployment at different times.
I don't want to deploy all the infrastructure at the same time around the world, but I want to split it one by one. How do you support this case in Keptn? Are there still limitations, or are you working on that? At first, for Keptn, you can write whatever service you want, so if you want to create a service for time triggers, just do it.
The second thing is you can always do manual approvals, so you can say, I want to deploy the first stage fully automatically, and the second stage whenever I want, and the third, fourth and fifth stage automatically.
So this is all possible. In my opinion, it's all a matter of tooling and the things we build around it. For Keptn itself, we don't have time triggers at the moment, but this might be a thing which could be necessary in the future.
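For reference, the manual approval mentioned here is a built-in shipyard task. A sketch of a stage that waits for manual approval before deploying and releasing could look like this; the surrounding task list is illustrative.

```yaml
# Sketch of a Keptn shipyard stage with the built-in approval task:
# 'manual' pauses the sequence until someone approves it in the Bridge.
- name: staging
  sequences:
    - name: artifact-delivery
      tasks:
        - name: approval
          properties:
            pass: manual
            warning: manual
        - name: deployment
        - name: release
```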
I saw the deployment, and mainly that everything goes well. Maybe I missed it in the presentation, but if I want to roll back, are you using a roll-forward approach? Is there a rollback approach, or guidelines, or, since you can deploy whenever you want, is it up to the user to decide what to do?
I think that's also a thing which depends on the tooling the user uses. In my case, in the demonstration you have seen now,
I simply deployed the new version in parallel and tested this new version, and if this hadn't worked, then nothing would have happened. The other version was still in production, and we would only have switched the traffic to the new version if everything went well.
Is it also possible to implement canary logic, to have a small amount of traffic going to the new version?
Yes, as long as you can orchestrate it, everything is possible. You could, for instance, start a deployment with a specified configuration and change the configuration in the next step, as you might do it with Argo or with Istio.
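As an illustration of that, the same Argo Rollouts resource used for blue-green can instead use a canary strategy that shifts a small share of traffic step by step. The weights and pause durations below are arbitrary examples.

```yaml
# Sketch of an Argo Rollouts canary strategy fragment: send a small share of
# traffic to the new version first, then increase it step by step.
strategy:
  canary:
    steps:
      - setWeight: 10                 # 10% of traffic to the new version
      - pause: {duration: 10m}
      - setWeight: 50
      - pause: {duration: 10m}
      - setWeight: 100
```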
Okay. I don't know if there is any other question from the room. Let's wait a second. Let's see.
Yeah, I don't see any other questions at the moment. Then I will invite the people who want to dive deeper into your talk, or share some knowledge and so on, to join the speaker room that will be published in less than five minutes.
There you will find the link, you can join us, and then you can also talk face-to-face. In the meantime, I want to say thank you to Thomas, and also to all the other speakers, because this is the last session of the day.
It has been an incredible day, I have to say, still a little bit in the COVID period, which was difficult. We had this discussion with Thomas, but also with the other speakers. I really have to say a big thank you, because they were able to prepare all the presentations in a really small amount of time, despite some delay on the FOSDEM side and also on our side.
Thank you very much. I hope to see all of you in person next year. Have a great evening in the Delirium pub, and hopefully we will finally be able to share a beer in person as well.
Again, thank you. See you next time. Thank you for having me. Bye. Bye to everyone.