
Introducing Chef to an Enterprise and Creating Awesome Chefs


Formal Metadata

Title
Introducing Chef to an Enterprise and Creating Awesome Chefs
Series Title
Number of Parts
50
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt, and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content, including in adapted form, is only shared under the terms of this license.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
At Capital One, we want our cloud-enabled infrastructure to be an incubator for innovation and an accelerator for bringing more capabilities to our customers. We embraced the principles of Automation, Agile, DevOps, DevOpsSecurity, and Open Source with a robust automation framework to reach our goals. Chef combines innovation, speed, collaboration, and safety all into one DevOps platform. We introduced Chef to our DevOps engineers and quickly built a strong user community through sharing code and discussion forums like office hours and an internal Stack Exchange. Our Chefs didn't need to keep a personal knife because our Jenkins did all the work. We built a flexible Jenkins pipeline to deliver cookbook-enabled integration with automated application builds and provisioning. Implementing Chef Analytics provided more insight into the actions of the nodes and fed all of this data into Splunk for better visualization. A highly available Chef server and a private Supermarket provided our DevOps engineers with everything needed to manage their infrastructure and share their automation. This enabled fast and flexible IT as well as continuous delivery of applications and infrastructure. In this talk, we will share some details about our journey from sous chefs to master chefs. We hope you can leverage our experience on your own master chef journey.
Transcript: English (auto-generated)
Thank you. So we'll basically be talking about how we rolled out Chef at Capital One, a little bit of the details, and then we'll go through some best practices that we learned the hard way, so that it helps the audience here: how do we take a cloud, DevOps, and Chef journey? Thanks for the introduction. Like I said, I'm Ali Rapji. All of us work in cloud engineering at Capital One, and we're software engineers. So, jumping right into the presentation.
Most of you here, I think, know Capital One as a credit card company. Indeed we are; we are one of the largest in the U.S., with 70 million accounts. Some of you also might know that Capital One is one of the largest banks in the U.S., right, and a digital leader in banking.
We have some innovative and cool applications, like mobile: most people do have that, right? But we have an app called CreditWise, which doesn't just let you get your credit score; it also gives you a model for your credit score. And then, along with a few companies, our customers can ask Alexa, hey Alexa, give me my account balance, and Alexa will tell them their account balance and some details on whatever account somebody wants. All right, some of you in the tech community also know us as
contributors to open source. Hygieia: how many people use Hygieia over here, or have heard about Hygieia? A few? All right, so Hygieia is a dashboard; it's one place where you can go and configure a simple visualization that helps you visualize your entire workflow, basically your whole pipeline, right? I would encourage people to take a look at it; it's great. DevExchange is a developer portal that Capital One recently rolled out, and it gives external developers, as well as our partners, the ability to come in and access our APIs, including one of our authentication APIs. And then Cloud Custodian, which we just rolled out, right: it's a policy engine which does AWS management with policy files, and you can take actions on the resources running in the cloud.
For more information on these products, you can visit our GitHub site or our engineering site at CapitalOne.io. Some of you might not know, but we are a founder-led, 20-year-old technology company, which started as a disruption in the credit industry. The company was formed on the premise that not all consumers in the United States should have the same kind of credit card.
How we fulfilled this vision was truly creative. We took data, technology, and data science, and created an information-based strategy, also called IBS, which we used for designing our products based on our customers' needs and their lifestyles, versus a one-size-fits-all model. We make adjustments to our products and our presentations to see the impact in our data; we predict business results before going to full-scale market.
We were essentially doing big data before the term big data came into the picture. We are the largest digital bank in the nation, and the preferred channel, obviously, is mobile. We are changing banking for good, and adding more humanity to banking. For our customers who are looking for a human touch and a smile, we have Capital One cafes, where a community can come together, learn more about Capital One products, or have a cup of coffee.
Our founder and CEO mentioned that ultimately the winners in banking will have the capabilities of a world-class software company. During the last four to five years, we've been focused on becoming a world-class software shop.
We now mostly build software in-house versus outsourcing, to become a 100% agile shop versus utilizing traditional waterfall. We build and automate our software deployments utilizing three major pillars: automate everything, shift left, and dashboard everything. Our success in software development was already high, but adopting these principles has helped us with faster app deployment where timelines are concerned.
Then the question comes in: why cloud and Chef? Right, who over here has not gone through the pain of building a server in a data center? I was pretty sure nobody, unless the person started using the cloud from day one; most of us have gone through levels of frustration building a server in the data center. At Capital One we had a strong and robust pipeline as far as our application development went. But a couple of years ago, we realized that for our developers to have full capabilities, we wanted to start treating infrastructure as code, and this is where we got Chef to help us out as a config management tool. For faster provisioning and on-demand workloads, we started utilizing the cloud, and this initiated the next-gen infrastructure at Capital One.
We started with a few pilot applications to help us with reference examples, utilizing best principles of rolling out in the public cloud. We took these pilot applications from having nothing in the cloud at all to running critical production loads in the cloud in a very short amount of time. And Chef certainly was a catalyst for that. Now we are focused on how to further improve our productivity, move quicker, get things to market faster, and continuously improve.
We build our workloads on public cloud, leveraging open-source technologies. We build using microservices architecture and RESTful APIs. Open source has definitely played a major role in our transformation, and today we are an open-source-first shop: we build on open source and we give back to the open source community. After the success of the pilot applications, we accelerated our cloud journey by partnering with, collaborating with, and empowering our teams at Capital One. One of the tools that we utilized was a comparison of pipelines, starting with a simple app deployment pipeline which all the developers were already familiar with.
The infrastructure provisioning and configuration used to be manual at that point, right? With a simple app flow, a developer writes code and checks it into a code repo; the build job picks up the code, compiles it, builds it, runs some tests on it, and then moves it to an artifact repository. With infrastructure as code, similar to app code, the infrastructure code also gets stored in a code repo. The build job picks up the code, right? Builds it, runs tests on it, and moves it to the Chef server, or to CloudFormation, obviously. So one of the examples we shared with our application community, as well as the ops side, was that you could use CloudFormation to roll out the servers, use Chef for creating the infrastructure, and use Chef to roll out the application code, so that they can create the infrastructure and configure the application in a consistent manner in a few minutes. All right, so we wanted to make sure that we started creating sous-chefs, right? We wanted junior chefs to be floating around, because there was tons of demand,
because everybody was excited, right? We had this burst of need from developers to understand: hey, how can we do that? So we decided to start with in-house training versus using the traditional Chef training. To train all our staff, we trained employees, release managers, and support staff. We also customized the traditional Chef two-day training and made sure we used our own infrastructure as well as our own pipeline. This helped: after the two days, when a developer went back to his job, he basically acted as a sous-chef, right? They started coding and utilizing Chef. We also focused on building reusable platform recipes, and this helped developers to just take them and customize them. We gave them instructions on how to modify the environment variables and utilize the five-star recipes that we had already developed. So how did we build a strong community?
We obviously started using communities of practice in the DevOps space, in cloud, in architecture. We had office hours where horizontal teams would be available for developers to ask any questions and raise any concerns they had with the technology.
Our product owners started running voice-of-the-customer sessions, talking to other product owners to get more information on how to better serve the internal customers. We have an internal pulse site where people could come and say what features they wanted; specifically, they could say that this process, or the way this technology is used, is potentially tech debt, and we want to make sure that we remove it. At Capital One we also have our internal Stack Exchange site, where the community can share ideas: they can post questions and tag them, and other developers inside can respond quickly.
And then, obviously, open spaces. So on our journey we started with a standalone Chef server and quickly moved to a tiered architecture due to our use case. We realized that if we wanted to use Chef as a config management tool with auto-scaling, using the full capabilities of the cloud, we had to have high availability, so as not to give our customers any downtime on our applications. We've rolled out Chef HA, and now we're working very closely with Chef to roll out a zero-downtime solution. So I'm going to invite Ishu now to get into a little bit of detail, where he'll share about our pipeline, and then Surya is going to continue the conversation related to best practices.
getting in the conversation related to the best practices. Thank you, Ali. Thanks Ali for covering the basics of how we are doing public cloud at Capital One and how the DevOps transformation is happening.
Is that me? "It would safely deploy the configurations, and if anything went wrong it would stop what it was doing." Okay, never mind; I'll just keep going. So, okay: for any developer, how do you take your first step? You basically start writing your code on your local workstation, and I think that's the fastest means to do it. Like with any other code, tool, or framework that you use, you use your developer workstation. In this case, we encourage our developers to use the ChefDK provided by Chef, and to make as much use of Test Kitchen, InSpec, ChefSpec, and Foodcritic as possible, right at their local workstation.
For the very first step, we encourage them to use the Chef Supermarket. For example, if they want to build a recipe, we ask them to first check the Supermarket, whether private or public. We have a private Supermarket implemented within our organization, where developers deploy and use each other's recipes; that, you know, extends the power of dependency management. As soon as they find a cookbook they can use, they can just use Berkshelf to pull down all the dependencies onto their local workstation and spin up a Vagrant instance. Spinning up a Vagrant instance is quite easy using Test Kitchen: once a user writes a wrapper cookbook around it and spins up a Vagrant instance, they can create it, destroy it, and test it again, multiple times.
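As a rough sketch of that workflow (the cookbook name, attribute key, and Supermarket URL here are hypothetical, not the actual internal cookbooks), a team's wrapper cookbook might override a few attributes of a shared platform cookbook and reuse it unchanged, with Berkshelf resolving the dependency from the private Supermarket:

```ruby
# Berksfile: resolve dependencies from the (private) Supermarket.
#   source 'https://supermarket.internal.example.com'
#   metadata
#
# metadata.rb of the wrapper cookbook:
#   name    'my_team_tomcat'
#   depends 'tomcat'   # the shared platform cookbook

# recipes/default.rb: override a platform default, then reuse the
# shared recipe as-is.
node.override['tomcat']['port'] = 8443

include_recipe 'tomcat::default'
```

From there, `kitchen converge` and `kitchen verify` spin up and exercise the Vagrant instance as described above, as many times as needed.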
Once they feel comfortable that their code is ready to go, we encourage them to use an Amazon EC2 instance to mimic the environment they will have in the next stages, be it production or QA. Once a developer has also tested their code on EC2 instances, they can be pretty confident that it's going to run in the next stages as well. After that, they can simply baseline their code and get ready for the next stage. Okay, so developing on your local workstation is always easy.
But the next challenge comes when you think of putting your cookbook on a shared Chef server. Obviously, we have a Chef server which is shared by multiple developers, and that brings in more complexity and a few challenges. In a regulated company like a bank or a credit card company, there are multiple things you should take care of when you are putting your code on a shared server. Among those: how do you audit your code? How do you put traceability into your code? How do you make sure that security is there? And one of the most important things: how do you maintain a code quality standard check across all the cookbooks being uploaded to your Chef server? We had that already built for our application pipeline, but when we started thinking of infrastructure as code, we realized it's important there as well. So how did we solve that problem? We first started off, you know, by putting our knife keys on our CI server.
But guess what: that created a security problem, because the same CI server was being used by our Chef developers as well. So then we thought, okay, let's figure out another way. And this was the way: basically, as you can see, we created a central knife CI server where we placed all of our knife keys and other root credentials for utilizing the Chef server, and we built reusable and cohesive pipelines around it, so that developers can make API calls, or even run the jobs manually, and just pass their own credentials and their own information to the common jobs, kick them off, and do whatever they are supposed to do. But obviously we also introduced user access controls around it, so that only certain authorized users can run these jobs and perform the activity. They were allowed exactly the amount of information and access that they were supposed to have, neither less nor more. Once they were able to run these jobs, we also enforced code quality checks within the same jobs, so that all the code across the organization follows the same standard. Now, for the higher environments, let's say you want to put your code in production or even in QA, we extended the functionality with approvals.
I'll show you in the next slides how the developers just enter the email address of an approver, or any other information for the approver, and the approver gets an email or notification where they can pick to either approve it or reject it based on its validity.
So here is an example workflow. This is what the users see on their Jenkins server. We use Jenkins as our CI server, because Jenkins is obviously one of the most popular tools around the world for a CI pipeline, and most of our teams are using Jenkins. So it was pretty easy to put our jobs on Jenkins, and easy for the developers to integrate those jobs into their pipelines and make optimum use of them: they either use these jobs as downstream dependencies, or they just make API calls to them.
For example, you can see we had dev, QA, and prod specified. The developer can just pick: okay, I want to run a dev job. Behind the scenes, they don't have to worry about putting information in knife.rb about where the Chef server is; the knife.rb on the Jenkins server takes care of that.
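A minimal sketch of such a centrally held knife.rb (the client name, server URL, and paths are illustrative assumptions, not Capital One's actual values):

```ruby
# knife.rb kept only on the central knife/CI server; developers never
# handle these credentials, the shared Jenkins jobs do.
current_dir = File.dirname(__FILE__)

node_name       'ci-uploader'                       # hypothetical CI client
client_key      "#{current_dir}/ci-uploader.pem"    # key lives only on this box
chef_server_url 'https://chef.example.com/organizations/example-org'
cookbook_path   ["#{current_dir}/../cookbooks"]
ssl_verify_mode :verify_peer
```

With this in place, a shared job can run `knife cookbook upload` on behalf of a developer without ever exposing the keys to them.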
They can just point the job at their code repository and run it from there. Now, that was the screen for an uploader; this is the screen that our approvers see. The approver will get an email saying: okay, this developer (for example, in this case, there is a name here; I'm not sure people at the back can see it, so let's just assume any name) has requested a promotion. The approver can pick and choose: okay, this is the promotion that I got; do I want to put this in production or not? They can execute or reject the promotion based on its validity.
Now, one of the main reasons we built these pipelines was that we wanted traceability and auditability. So we built innovative dashboards around them, and we fed in all the logs generated by the workflows, to make sure that if an auditor comes into the picture tomorrow, he can just look at the dashboard and find out who promoted what cookbook and who approved what promotion. You can use any number of tools, or any dashboard tool you are comfortable with. You can also use Hygieia, the great open-source DevOps tool that Ali discussed in the earlier slides.
Now, to put it all into one picture and perspective, this is how the overall flow works. The user completes all the development on his local machine and finds out whether all the dependencies are satisfied, either using the Supermarket or any other dependency management tool. Once the user thinks, okay, my development is done, now I'm ready to put it on the shared Chef server, he can invoke his own job. Using his own job, he can invoke the remote job, you know, just make an API call or, like I said, run it manually, and publish his code to the Chef Supermarket and to the Chef server at the same time. I mean, it depends on you, but this is how we're doing it today. Once the cookbook lands on the Chef server, the user can run a CloudFormation template that pulls all the information from the Chef server and configures their whole infrastructure. So this is how the overall pipeline looks.
That's it from my side; I'll invite Surya to deep dive and provide you with more and better information. Thank you, guys. Hi, everyone. As Ali and Ishu explained, we went through this transformation using cloud and DevOps, and especially using Chef, and we established a workflow for our developers to use across the organization. That matters especially for a large company like ours, where we have thousands of developers working on a shared Chef server, right? So I'm going to cover some of the aspects of what we learned over our journey.
And I think these are a few things that each and every developer should keep in mind before starting, or during, his journey with Chef. A taste of perfection comes with practice, obviously, as we evolve into doing things in a repeatable fashion and learn from our experiences, right? We call it automation kung fu, because kung fu is all about practice, and automation kung fu is built along similar lines to the DevOps kung fu that Adam Jacob pointed out at the last ChefConf, right? So it's all about practice, all about learning from your experiences and mistakes and building better recipes each and every time you develop them, right?
One of the key questions when we started with Chef was: what do we do with our existing automation and our existing scripts, which have been working well for us for many years, right? We cannot just dump all of that stuff and rewrite everything in Chef, so we need to hit the right balance. Some of the questions we asked ourselves: how can we leverage our existing automation and all the investment we made in it? Do we really have to design cookbooks from scratch, or can we build a hybrid model in which we leverage the part of our automation that has been helping us for a lot of years and also leverage the best that Chef has to offer, right? Those are the key questions we asked ourselves when we started the journey, so let's see what we did. This is a sample of what happens when you put your existing scripts as-is into a Chef recipe.
It's a bunch of execute blocks running the scripts and commands that you usually run through any other provisioning system, right? The problem with this is that you don't have any control over what's going on in those scripts. I mean, you're using Chef, but are you really using the power of Chef, which is creating an immutable and idempotent infrastructure, right? You also lose the capability of avoiding configuration drift at all, because none of these resources is idempotent; you don't have control over what happens to them if you run the same recipe again and again. You have to invest a lot in those scripts to rewrite them just to make this work, which is not worth the time or cost, especially in this modern era when you have to deliver at velocity. The advantages, right: if you use existing scripts in recipes, as I said, you can kickstart your Chef implementation, and you can maybe put your application or deployment automation into production within minutes.
But what you may lose is the ability to prevent configuration drift, which is one of the key features of modern infrastructure. You also lose the ability to extend those cookbooks and create a greater community of open-source cookbook writers. And again, the most important part is idempotency: you lose idempotency, and a resource ported from your existing scripts, if it runs again and again, may disrupt your entire production system, and you may run into production issues as well.
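To illustrate (the script paths, file locations, and attribute keys here are hypothetical, not the actual slides), a script-port recipe of the kind just described, followed by the resource-DSL rewrite the next part of the talk advocates, might look like this:

```ruby
# --- Anti-pattern: existing scripts ported as execute blocks ---
# Chef cannot tell whether anything changed, so these commands re-run
# on every converge and give no protection against configuration drift.
execute 'install_tomcat' do
  command '/opt/legacy/install_tomcat.sh'
end

execute 'configure_tomcat' do
  command 'sed -i "s/8080/8443/" /etc/tomcat/server.xml'
end

# --- Resource DSL instead: idempotent and readable ---
# Chef converges each resource only when the system differs from the
# declared state, and corrects drift on the next run.
package 'tomcat'

template '/etc/tomcat/server.xml' do
  source 'server.xml.erb'                  # configuration lives in the cookbook
  variables(port: node['tomcat']['port'])
  notifies :restart, 'service[tomcat]'
end

service 'tomcat' do
  action [:enable, :start]
end
```

The second form also reads as documentation: install Tomcat, render its config, run it as a service.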
So use the resource DSL instead. This is an example of a recipe with plain old Chef resources, which gives you the ability to easily read what's in the recipe and what it is doing. Any non-technical person can go through this file and say: okay, this configuration, this recipe, is installing Tomcat and configuring it as a service. Very simple and very easy to use, right? And any developer coming into your team, whether he is experienced with Chef or doesn't know Chef at all, can still figure out what's going on, easily use this recipe, and write his own configuration on top of it, which is very hard to do if you're just porting your existing scripts into Chef recipes.
So we created a hybrid model best suited for our teams, and we partnered with several internal teams to get there. The important point is that we separated out our installation scripts from our configuration scripts, and
we decided that the installation scripts can still run as-is, with proper guard conditions, but the configuration is a must and should be moved into Chef as templates and cookbook files. The reason is that a cookbook, say, for example, a Tomcat installation, will have its own configuration, and every team in the enterprise may have its own configuration, right? We don't want every team in the enterprise to write their own Tomcat cookbook. We want to create one Tomcat cookbook with enterprise standards, give it to the teams, and ask them to extend it. And the only way they can easily extend it is by putting the configuration into Chef templates or resources, because that is easily scalable, easily overridable, and easily extensible.
One of the other things I want to touch on today is Chef search. It's not uncommon for people to use a config management tool like Chef and build service discovery mechanisms on top of it, right? Chef search is a nice feature,
but it doesn't give you real service discovery or cluster management features, right? It also doesn't give you consistent results; it doesn't take into account the health of the system. You can run a Chef search and query the Chef server to get all the servers which are part of a cluster, but you won't really get the status of whether a server is healthy enough to take traffic or not. You end up configuring that server into the cluster even in its unhealthy state, and it starts taking traffic, which, again, is bad, right? So Chef search is not ideal for service discovery.
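The kind of search-driven configuration being described might look like this (the role name, template, and attribute are illustrative); note that nothing here filters on health:

```ruby
# Query the Chef server for every node registered with the 'web' role.
# This returns all registered nodes, healthy or not, which is exactly
# the limitation described above.
web_nodes = search(:node, 'role:web AND chef_environment:production')

service 'haproxy'

template '/etc/haproxy/haproxy.cfg' do
  source 'haproxy.cfg.erb'
  variables(backends: web_nodes.map { |n| n['ipaddress'] })
  notifies :reload, 'service[haproxy]'
end
```

A node that registered with the Chef server and then crashed still shows up in `web_nodes`, which is why a health-aware discovery tool is the better fit for this job.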
That's what we learned over our journey: use Chef together with a proper service discovery tool like Consul, etcd, or ZooKeeper. Let service discovery tools do their job while Chef does config management and automation for you; a clear separation between these duties will give you a robust infrastructure.
And again, I asked a developer what he thinks is needed in order for us to write recipes perfectly each and every time. One thing he told me is this: we are evolving and delivering at velocity, and our requirements and our infrastructure are changing very fast, right? So he said, if you want to deliver consistent results with your Chef automation and Chef cookbooks, there is a need for an automated build and test pipeline, without which you end up performing manual tests for features that you coded two or three years back.
So, the importance of a pipeline for automated testing: manual testing is plausible and effective, but it is limited. As you add more releases to your automation, it's not really scalable or practical to perform manual testing. You do not want to run ten manual tests just because of one new feature that you added this cycle, right? You want all ten of those old features to be tested automatically, and you want to test just the new feature that you released recently. That's the necessity of an automated build pipeline: it ensures that your systems and configurations are safe and error-free, and most often you detect errors and correct them very early in your development cycle.
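As a hedged example of what such automated cookbook tests look like (the cookbook and recipe names are hypothetical), a ChefSpec unit test runs an in-memory converge on every pipeline build, so previously released behavior is re-verified for free:

```ruby
# spec/default_spec.rb -- ChefSpec simulates a converge in memory,
# no VM or Chef server required, so it is fast enough for every commit.
require 'chefspec'

describe 'my_team_tomcat::default' do
  let(:chef_run) { ChefSpec::SoloRunner.new.converge(described_recipe) }

  it 'installs Tomcat' do
    expect(chef_run).to install_package('tomcat')
  end

  it 'enables and starts the Tomcat service' do
    expect(chef_run).to enable_service('tomcat')
    expect(chef_run).to start_service('tomcat')
  end
end
```

Each new release only adds a spec for the new feature; all the old specs keep guarding the old features automatically.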
This is an example end-to-end pipeline that one of our teams has built, and it shows the importance of having a pipeline right from the code commit until deployment, right? The code, regardless of whether it's application or infrastructure, goes through the same set of stages, and the same set of tests is run against it in the pipeline, and that ensures that the entire infrastructure and the application are delivered consistently and faster. So, just a quick summary of what we discussed today, right?
Building stronger communities is a must for any enterprise that wants to transform into a DevOps shop or adopt DevOps practices, and a standard workflow with best practices ensures that you are complying with your organizational policies while at the same time leveraging the benefits of DevOps and the new technologies we're using these days. Dashboarding your DevOps gives you real-time feedback, and that is a must for your teams in order to get feedback very early in the development cycle. Automated testing is a must, as I said earlier, and you have to create delivery pipelines for your applications and infrastructure alike; treat them both the same, because they are both code, right? Infrastructure is code, and the application is also code, and once you treat them together and flow them through the same pipeline that you use for your applications, you ensure that you have a robust, consistent, and
immutable infrastructure. And finally: deliver at velocity and become a master chef. At Capital One, we are pushing the limits of innovation, using technologies like cloud, DevOps, and open source, and we are contributing back to open source very heavily. We build tools where we don't find them, and we use the commercial and open-source products which do the job well for us. And with that, that's all. Thanks.
I'm just curious what you guys think about the Chef Automate tool, since it provides some of the visibility you have with Hygieia.
So we have not explored the Automate tool just yet, right? So I can't answer; hopefully we'll be looking at it. I could talk a little about Chef Delivery. We started with DevOps principles, so we had a strong (I'm sorry about the echo, something seems messed up) we have a very strong pipeline already, so we decided to continue with Jenkins, and that's doing the job, right? Once Delivery picks up a little bit and has more features, we'll certainly be looking at it. At this point we're just using Jenkins.
Have you published any information about how you've actually done the automation of knife?
That's one of the things that came up when we were preparing the presentation: we'll probably be publishing that, because it's pretty useful information, and we'll definitely be working on open-sourcing it. All right. Thank you. Let's give them another round of applause.