
Continuous delivery story with FIFA


Formal Metadata

Title
Continuous delivery story with FIFA
Subtitle
Introducing best practices in legacy project
Series title
Number of parts
110
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, change and copy the work or content, distribute it and make it publicly available, in unchanged or changed form, for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in changed form, only under the terms of this license.
Identifiers
Publisher
Year of publication
Language

Content Metadata

Subject area
Genre
Abstract
Continuous Delivery is the process of having a shippable product after each check-in to the source control repository. Continuous Delivery is usually implemented as a natural improvement of a Continuous Integration process. This presentation highlights challenges and presents hints on how to start from a raw environment and incrementally build a successful deployment pipeline based on Team Foundation Server (TFS), providing substantial added value for the business. The presentation describes the process of establishing Continuous Delivery in a project for FIFA: the starting point, what we achieved in the first phases, and the plans for further improvements, in order to deliver high-quality software on schedules defined by business needs, not by process and technology constraints. Making Waves took over as a services provider for the development and maintenance of FIFA's intranet and extranet platform in 2011. The main challenge was to avoid long release cycles, improve quality and provide a reliable hotfix strategy for urgent issues raised in production. The first phase of the project was focused on taking over the source code and the development, test and production environments. This was a challenging task, mostly because of a lack of automation in the build and deployment processes. This part of the presentation covers possible approaches for incrementally creating a flexible development environment, supported by a continuous integration process, in a legacy project inherited from an external party. The goal of the second project phase was to implement a continuous delivery process in the project. We present the main arguments for investing in tools and processes which enable more frequent and automated releases, and how that brings significant business value.
We will also cover how we implemented a set of practices and principles aimed at building, testing and releasing software faster and more frequently, including (but not limited to): deployment automation, a release repository, production configuration tracking and version promotion between environments. The presentation will briefly cover the tools which were used, including Team Foundation Server (TFS), but most of the content is technology agnostic and is relevant for both developers and more business-oriented people.
Transcript: English (auto-generated)
I'm Safid Naq. I'm a software architect and active developer at Making Waves, where we develop mostly web-based solutions. Today I would like to tell you a story about how we improved the process after a vendor transition in a legacy project: what we learned, what our challenges were, and what you could do in your own projects.
I have a question for you. How many of you have worked, or are working, on a project which started before you joined the project team? I see many of you. So maybe this talk can serve as a guide, a set of hints on how to improve the process in an existing project.
But before I go to the main topic, I would like to show you... it doesn't work... I would like to show you Maslow's hierarchy. Have you heard of it? It comes from psychology. Maslow was an American psychologist who proposed a hierarchy of human needs.
In his concept, the basic needs have to be satisfied before we think about higher needs. So humans think first about breathing and food, because those are needed to survive, to exist.
Then, if those needs are met, we start to think about security, about safety. It can be safety of employment, social security, or health security. Only when these needs are also satisfied do we think about social aspects: having friends, having a family.
But still, if we somehow lack security, we don't think about those higher-level needs. At the next level up, Maslow placed self-esteem, the need for respect.
Then, at the top, he placed creativity, which means that if we want to be creative, we need to satisfy all the needs below. Why am I talking about this? Because recently Scott Hanselman, on his blog, proposed a similar pyramid of needs for software development.
At the very bottom he placed the need for revisable software. It means you need some kind of source control system: to have revisions of your sources, to see a given version of the source, to check who authored a recent change, to merge branches.
So if in your company (I hope not) you use a shared disk, or send code by mail, or pass around Flash sources or PSD files, you don't meet this need. I hope that's not the case here.
On the second level there is the need for a buildable and deployable application. It means you must be able not only to get the latest source code, but also to build it, and to deploy it as easily as you build it, because being able to run it on your local development machine is not enough.
You need to be able to run it somewhere else for it to be useful, and probably also to build it automatically on a build server somewhere. Then, at the third level, Scott proposed maintainable software, where you are not only able to build the latest version, but also able to fix bugs and, hopefully, verify the fixes. Not fixing blindly, then checking whether it works by counting how many mails you get from the customer. At this level you need at least some kind of tests.
They can be manual tests, but you need to be able to verify how the software works. And at the fourth level there is the need for refactorable software. You change the source code not only to fix bugs, but also to improve the internal quality.
And at this level you need a set of automated tests, probably automated unit tests, because then you can refactor the code without fear of introducing regression bugs. Do you have an idea what could be at the top of this pyramid?
There is pride. Your code is not only refactorable, you are proud of it. You are not afraid to show it, without excuses like "oh, it's only a proof of concept, don't look at this line, it wasn't mine". OK, but let's get back to reality, to our project. Before signing the contract, our team was ready to start the job.
Before signing the final contract, we were able to view the source code and evaluate it. The goal was to look at the current state before the transition period started.
We found, as in many projects, probably in many of the projects you work on as well, a big technical debt collected over the years. There were changes made only to fix bugs, not to improve the internal quality or to maintain the software with a long-term goal in mind.
We found some "clever" solutions, like manual cookie maintenance or implicit sharing of images between solutions. It was clever, but it was hard to figure out how it worked, and easy to miss after the transition, when we did a fresh new deployment, which was the goal of the project.
We also did some code duplication analysis. Have you done such an analysis in your project? We used Clone Detective, a plug-in for Visual Studio which analyzes how many sets of lines are duplicated between files.
We first ran the analysis on one of our best projects, a well-architected, well-developed project. Can you guess what the percentage of code duplication might be, more or less? It was 4%, and that was mostly because domain objects were sometimes similar to DTOs, data transfer objects, with similar properties.
But that is the high end, I would say. In the regular, mostly content-based projects we did at Making Waves, the code duplication was around 8%. If it's below 10%, it's fine, because it's not real, logical duplication; it's rather physical duplication of property names between objects.
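Clone Detective itself is a Visual Studio plug-in, but the idea behind such a duplication metric can be sketched in a few lines. This is a minimal, hypothetical illustration (the window size and the exact ratio definition are assumptions, not Clone Detective's actual algorithm): count how many fixed-size chunks of lines appear more than once across a set of source files.

```python
from collections import Counter

def duplicated_line_ratio(files, window=5):
    """Rough clone metric: the fraction of `window`-line chunks that
    occur more than once across the given source texts.
    Illustrative only; real clone detectors normalize tokens,
    ignore identifiers, etc."""
    chunks = Counter()
    total = 0
    for text in files:
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            chunks[tuple(lines[i:i + window])] += 1
            total += 1
    if total == 0:
        return 0.0
    duplicated = sum(n for n in chunks.values() if n > 1)
    return duplicated / total
```

Run over a codebase, a ratio around 0.04 to 0.08 would correspond to the 4 to 8 percent range mentioned above.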
But in this code it was much, much higher, I would say. By that I mean it was not an extraordinary project; it was a project like you may have in your company, one you can also improve. Maybe you will see how. We also found a lot of unused code, as you may find in your projects.
When someone is not sure whether a change is final, he adds a 2 to the method name and leaves the old one there, because he forgets, or keeps it just in case. So the goal for the first day of the project, when the contract was finally signed, was to transfer the source code.
How did the FIFA case look? We found there were multiple SVN repositories: developers' repositories and production repositories, and the code was manually migrated between them.
It was a kind of crazy approach, and there was a lot of code duplication, because duplicating the code between the developers' repository and the production repository was intentional. But some revisions were missing, because someone forgot to migrate them by accident, or made changes directly in the production repository.
We also faced some disk corruption, so some revisions could not be restored from the backup. Sometimes it was also challenging to tell which version of the code was actually deployed to production.
Assemblies were not always versioned properly, so there could be several different builds of assembly 1.0.0.0, and we had to figure out which one was actually deployed to production. We had 13 applications running in parallel, so each application could possibly use a different build of the same 1.0 assembly.
We also found changes made directly in production, which might be the case in your project too: someone needed a quick fix just to keep production running, and forgot to commit it to the source control system later. So now you can think about this project, and about your own project: where are you in this hierarchy? Hopefully you meet the revisable need. Maybe you are much, much higher.
So this source control transfer task was a bit tedious and challenging for us, getting everything from the latest, so-called latest, version. But then, on the second day, we automated the build.
Actually, we introduced continuous integration practices, which means we integrated our work every day, with builds run after each check-in. How many of you have a continuous integration environment? Many of you. It probably looks like this: when developers commit to source control, a build is triggered automatically, and everyone is notified of the build status. He or she can check it on a dashboard, or get a mail. Some kind of tests, probably unit tests, are run as well.
And at the end, the executables, the result of the build, are stored somewhere where you can access and check them. But being able to build the software is not the end, either. You should be able to test it internally, and in our case we had an internal test strategy.
It was mostly manual testing. We got a huge Excel sheet with a lot of manual tests, which required an army of testers. It was immoral to ask our developers to run them manually. But it was still useful as functionality exploration.
We could explore features of the software we got by running some of those test cases. Running them once was fine, because we got to know how the software works. It was also useful for regression testing: when we did the first deployment, we could check whether we had broken something.
But the most valuable thing about those manual tests was as input for automation. We could rewrite them in an automated way, so no developers were required, and we could check everything after each deploy, not every two weeks or every six months.
So we could create a set of unit tests. But as you know, unit tests are hard to introduce into a project that was started without them in mind. We have a lot of tools, but that's not enough. It often requires huge refactoring, yet without those unit tests you are afraid to refactor.
So it's an infinite loop. The other approach is to focus on a higher level and have acceptance tests. You could use Selenium and Behave, or any other UI testing framework. But as you know, it's a fragile approach.
So we decided to move only some of those manual test cases to automated UI tests. But the most valuable thing from the continuous delivery perspective was the smoke test tool we developed internally. It was a rather small tool, but it checked whether the basic functionality works: whether the front page loads, whether the connection string is correct, whether all the images are created properly, because that means the security rights were set properly on the server. Even such basic tests can help automate your build, because they don't require someone from IT to check a few simple things, so I think it's useful to have them in your project too. But having tests is not enough. We also had to create an internal test environment, and it has to be similar to production, as you may know.
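The smoke test tool described above was internal to the project, so here is only a minimal sketch of the idea: a runner that executes a list of named checks after a deployment and reports which ones failed. The `front_page_loads` helper and the example URL are illustrative assumptions, not the original tool.

```python
import urllib.request

def run_smoke_tests(checks):
    """Run named check callables after a deployment; return the names
    of the checks that failed (a raised exception counts as failure)."""
    failures = []
    for name, check in checks:
        try:
            ok = bool(check())
        except Exception:
            ok = False
        if not ok:
            failures.append(name)
    return failures

def front_page_loads(url):
    """Basic check: the front page answers 200 with a non-empty body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status == 200 and len(resp.read()) > 0

# Example wiring (the URL is hypothetical):
# failures = run_smoke_tests([
#     ("front page", lambda: front_page_loads("https://example.org/")),
# ])
```

Further checks for connection strings, image rights and so on would be added to the same list, so one command verifies every deployment.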
You probably care about having the same OS version and the same IIS version. But do you care about having the same language version of the operating system? Do you care about having a similar set of patches installed on the production and testing servers? Not necessarily.
Do you care about having the same load balancer in your environment? Do you care about having two servers in the test environment at all, if you have a load-balanced environment in production? These are minor things you may miss, which can cause errors later.
So it's important to make the test environment as similar to production as possible. It also needs to be isolated: you probably don't want to send a newsletter to all your customers during acceptance tests, and the SMS gateway should be isolated internally somehow as well.
But some systems cannot be isolated or recreated in your test environment. In our example it was the Google Search Appliance. It's a physical box, so we couldn't afford to buy another one only for testing. So we created a mock of this system, which we can run as an external process that responds with similar responses. You can also think of such a mock as standing in for an external delivery system like UPS, when you don't want to order a delivery each time you test something, but you do want to verify that the process works from beginning to end.
OK, so we have a testing environment, but we asked ourselves a question: how long would it take to deploy a single line of code to production, if we already have it in our testing environment? How long would it take in your organization?
Would it be six months, one month, two weeks, or maybe an hour? It's only one line of code and it's a small fix; maybe it's only changing one letter in a string, so not a very important fix. And the second question we asked ourselves: how long would it take to restore the production environment if the data center blew up?
I mean, you have only the database backup. Can you restore your production? Can you restore your IIS? Do you know which version it is? Can you restore the configuration files, the web.configs? Do you know the configuration of the load balancer? What are the firewall settings?
Or would you rather close up the business and walk away, and let someone else fix it? I think in many cases it's like that, but it's a situation we don't care about; we say it's the IT hosting company's problem. But not always.
Sometimes we have to tell our customer why we cannot restore it easily, or how long it would take. So when we tried to answer those questions, and actually couldn't, we set the goal for the project: to shorten the release cycle, to make it as short as possible.
Basically, that means following continuous delivery principles: we want to build, test and release software as frequently as possible, based on business needs.
And you may say: that's not my case, I do Scrum. How many of you do Scrum or Agile? I see a few of you do. But does that mean you show a demo to the customer at the end of each sprint? Probably yes. The customer accepts the functionality you have created? Probably yes.
But do you deploy to the real production environment, not only to your internal test environment? Are you sure it will work in the hosting environment? Sometimes not. This is the so-called last mile. You have two-week or four-week sprints, you develop the software, and the customer accepts it.
Then, when you are done with your backlog, you are finally ready to go to production, after six months, after a year. The customer says: OK, we are done, we want to accept it, and runs user acceptance tests or customer acceptance tests. And our packages, shining at the beginning, are not shining as much as they used to.
Because the customer has found some issues. "It was not the way I thought it was developed." "It was only a demo." "I agreed, but I didn't have time that Friday. Please fix it now." So we go back and fix it. Then finally we are ready and we go to the hosting company.
OK, here are our packages, here are our binaries, go deploy them. And they say: oh, wait, wait, you have developed with .NET 4.0, we support only 3.5. OK, maybe that's an edge case. But what about an open port, when the hosting company says: we cannot open that port for you, it's against our security rules.
You can only have port 80 open. So again we go back to the developers: please fix it. And this time, from being ready to being deployed, is the last mile. This last mile is sometimes a big part of the project: finally deploying something that was already "done".
We did Scrum, we did the sprints, we were fine as developers, but the process cycle was not closed and the software was not deployed. It probably looks like this in your company: the customer, possibly an internal customer, gives you a set of requirements you have to implement.
Then the development team builds it, develops it, tests it, and hands the binaries over to IT. And then the IT guy somehow, maybe with our help, deploys it and runs the services where the customer can use it in the end. But the problem is that the focus of the last 10 years was on the left part.
We have great tools: we have Visual Studio, we have Scrum as a methodology, we have a set of testing frameworks. But we do not focus on the last part, the second part. That last part is the focus of continuous delivery.
And you may think: continuous delivery, OK, another buzzword. But check this. Do you recognize it? It's the Agile Manifesto, signed in 2001, and its first statement reads: "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software." So we knew about this, but we didn't focus enough on that part of the process.
So we were sure we wanted to follow that way, to work also on the second part, not only the first; Scrum alone was not enough for us. But we had to convince the customer, so we showed them the business value of this approach. Early feedback from the users was the first advantage.
Previously, they deployed every six months. Yes, a half-year release cycle. So by the time a user created an issue in the issue tracker and the development team implemented it, it might no longer be valid, because the World Cup was already over.
They could also fail fast and early if something was not what they intended to create. And they could reduce the risk of release: if you deploy every six months, the risk of a release is huge, because the set of changes is huge.
If we automate it and deploy in small steps, then instead of adding huge value at each deployment, we can deploy more frequently and add less value each time, which can be tested much more easily. We can also track the real progress of the project, because we see when functionality is deployed to production, not only when it's marked done on our planning board. And in the end, the money starts coming in: when it's deployed to production, someone can use it, and the investment starts to return.
You may say this is something extraordinary, but huge companies use it. Flickr is probably the most common example of this approach. Have you seen the footer of their site? It says Flickr was last deployed 42 minutes ago, including five changes by four people, and that in the last week there were 85 deploys of 677 changes by 21 people. That means they deploy all the time; almost every check-in to source control is deployed. They also have a console showing the current progress of a deployment. You probably cannot read it here, but it says it is waiting for 151 hosts. So it's a huge deployment, even though it's "only" a site for hosting photos. They have a huge environment, and they are able to deploy to it.
And they are not the only ones who can do this. Firefox is shortening its release cycle; now it's version 12 or maybe 13, I'm not sure, I haven't checked today's version. They decided not just to raise version numbers, but to shorten the feedback loop with users.
If users struggle with memory consumption, they fix it. If users prefer better tab organization, they fix it. The same with Google or Stack Overflow: they deploy multiple times a day. But still, in many organizations, especially large ones, the software is in an unusable state most of the time, because it's being developed and we cannot use it at all. So how do you figure out which world you are in, and what are the bad indicators around continuous delivery? The first is waterfall: you follow the waterfall approach, not necessarily in development, but in deployment, doing development as long as possible and delaying deployment as long as possible. The second is black art, when only Bob can deploy, because only he knows that you first have to copy to FTP, then unzip, change the permissions, turn off the load balancer, set the cache settings, rename the file, turn the load balancer back on, revert the permissions, restart IIS, restart the application, and so on and so on.
If Bob is sick, we cannot deploy. Another bad indicator is when the boss says: OK, we're deploying tomorrow, everyone needs to come to the office earlier, because we need everyone, just in case. No one person controls the deployment; the knowledge is distributed, so everyone is needed. Another bad indicator is when someone says: OK, we should deploy on Saturday, we cannot be offline during the week. Do you recognize those statements from your work? Probably some of them, at least one, I guess.
So how do you avoid that approach? First, we decided to design a deployment pipeline. So what is a deployment pipeline? A deployment pipeline is a process that describes how a change gets deployed. In our example, it looked like this at the beginning: when a developer checks in to version control, an automatic build is triggered. That's the continuous integration part. We also run a set of acceptance tests, or a set of smoke tests, on our internal environment, and everything is stored in the artifacts repository, where we keep the build and the results of the acceptance tests.
And if something fails, the developer is notified and has to fix it. That's still the automated phase. But later there is a place where someone has to decide. In our case, it could be a tester or the project owner who says: I want to test this build manually. Then he tests and verifies that the features we developed are what he needs. The results are stored in the artifacts repository, so the project owner can later decide: that's the build I want to have in production.
And he decides, and he gets it as a production release. I skipped some staging environments and so on, but this is the basic view of how it should look: as many automated parts as possible, with some manual decisions remaining, so there is a person responsible for deployment. In our case it was the project owner, who says that what is running on staging is what we need on production, that these are the features we want.
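To make the shape of that pipeline concrete, here is a minimal sketch in Python. The stage names and the automated/manual split follow the description above; everything else (the function names, the decision callback) is invented for illustration and is not the actual TFS setup.

```python
# Sketch of the deployment pipeline described above: automated stages run on
# every check-in; manual gates need an explicit decision by a person.
STAGES = [
    ("commit build", True),        # triggered by check-in (continuous integration)
    ("acceptance tests", True),    # smoke/acceptance tests on the internal environment
    ("manual testing", False),     # a tester or the project owner pulls a build to verify
    ("production release", False), # the project owner picks the build to release
]

def run_pipeline(decide):
    """Run automated stages; ask `decide(stage)` before each manual gate.
    Returns the list of stages that were executed."""
    executed = []
    for name, automated in STAGES:
        if not automated and not decide(name):
            break  # the pipeline stops here until someone approves the gate
        executed.append(name)
    return executed

# Example: all automated stages pass, but nobody has approved manual testing yet.
print(run_pipeline(lambda stage: False))  # -> ['commit build', 'acceptance tests']
```

The point of the sketch is only the structure: automated stages always run, and the pipeline waits at a manual gate until someone decides to go further.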
There are a few important continuous delivery practices, like build once, deploy many. You should not have a separate build for each environment, one build for staging, one for production, one for test, and one for acceptance testing, because if you do performance testing, you want to test the same version of the assemblies, the same build you release to production, not assemblies tuned for performance testing. You should also deploy as frequently as possible: when something hurts, do it more often.
In our case, the most painful part was configuration management, so we decided to focus more on that part. Why is configuration that important? Because it is probably easier to break the software by changing one line of config than by changing one line of code. When you change the code, you have the compiler, you have unit tests, you have the version control system where you can track who changed what and verify it; but configuration often exists only on the final environment. If you change a URL to a number, the system will probably break pretty fast, especially if it's the login URL. So how did we do it? We figured out that there are a few types of configuration. Some of them are environment specific.
It could be the mail server setting: in the staging environment one server should be used, on production another one. Then there are application-specific settings; it can be the footer of an application. We had 13 different applications which were pretty similar, but a bit different depending on the customer using them. And there were some configurations which were both application and environment specific, like features that could be turned on or off. So we dealt with that by having a configuration template.
What's really important is that this configuration template is stored in the version control system. So the configuration does not live only on the final environment; it also lives in our version control system, so we can verify who made a change, who was responsible for it, and whether that change could have broken the system, or whether a bug could have been caused by the most recent configuration change. Then, based on those templates, we ran smart XML transformations to create the environment-specific configuration.
Then we applied another set of transformations to create the environment- and application-specific configuration, and both sets of transformations were stored in the version control system. So we can again track what the changes on production were, and, just by looking at the source code, we can easily check what the current configuration of production is without logging in there. At the end, everything was packaged into a deployment package. And sometimes you will have trouble; you have to prepare for it.
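The configuration layering described above can be sketched like this. The real project used XML templates and XML transformations; plain Python dicts stand in for them here, and all the keys, environments, and application names are invented.

```python
# Layered configuration: a template stored in version control, an
# environment-specific transform, then an application-specific transform.
TEMPLATE = {"mail_server": "PLACEHOLDER", "footer": "PLACEHOLDER", "feature_x": False}

ENVIRONMENT = {  # environment-specific overrides (e.g. staging vs production)
    "staging":    {"mail_server": "smtp.staging.example"},
    "production": {"mail_server": "smtp.example"},
}

APPLICATION = {  # application-specific overrides (13 similar-but-different apps)
    "app_a": {"footer": "Customer A"},
    "app_b": {"footer": "Customer B", "feature_x": True},
}

def build_config(env, app):
    """Apply the environment transform, then the application transform."""
    config = dict(TEMPLATE)
    config.update(ENVIRONMENT[env])
    config.update(APPLICATION[app])
    return config

print(build_config("production", "app_b"))
# -> {'mail_server': 'smtp.example', 'footer': 'Customer B', 'feature_x': True}
```

Because the template and both transform layers live in version control, any production setting can be reconstructed, and blamed, from source alone.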
The easiest approach when something goes wrong during deployment is to deploy the last good version. You can have rollback scripts, but if you deploy often, you probably roll back much less often. I hope that's the case; if you roll back more often than you deploy, something is wrong.
So these deploy scripts are tested, you know how they work, you are confident with them. And you should rather avoid the fix-forward fire, as I call it. That's when you go to production because something during deployment failed: oh, it's only one line of code, I have to add the security setting here, and here, and here. And you make so many changes that you cannot track them. You lose control, you fix one fix with another fix, and another, and another, and you end up somewhere deep where you cannot roll back anymore.
So it's better to redeploy, because you know how long it will take and you are confident with those scripts. The other kind of trouble is a hotfix: you have a stable version running in production, but one important bug was found, and you need to fix it without introducing the other features the project team has been developing.
So again, it's better to follow the regular pipeline and deploy everything from scratch, as you do every time, than to deploy only one assembly, because deploying everything is an already tested path and you know how long it will take. When you change one assembly, you can run into version incompatibility issues, and again you fix, and fix, and fix, and you don't know how long it will take; it may be five seconds, five hours, or five days. When you use the deployment script, it will always be 15 minutes. And in case something goes wrong, you can always roll back.
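The "deploy the last good version" idea can be sketched as follows. The version numbers and function names are hypothetical; the point is that a rollback is just a regular, tested deployment of the previous version from the artifacts repository.

```python
# Every successful deploy is recorded; rolling back means running the same
# tested deployment script again with the previous recorded version.
deployed_versions = []  # history of versions deployed to production

def deploy(version):
    # ...the same tested deployment script used for every release runs here...
    deployed_versions.append(version)
    return version

def rollback():
    """Roll back by redeploying the previous version through the normal script."""
    if len(deployed_versions) < 2:
        raise RuntimeError("no earlier version to roll back to")
    deployed_versions.pop()                  # drop the broken release from the history
    return deploy(deployed_versions.pop())   # redeploy the last good version

deploy("1.4.0")
deploy("1.5.0")      # this release turns out to be broken
print(rollback())    # -> 1.4.0
```

Because a rollback goes through the same script as a deploy, it takes the same known amount of time and needs no special Saturday heroics.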
So when we were prepared for all the scenarios where something could go wrong, we were finally ready to deploy to production using the new approach. We got up really early; it was my first, and hopefully last, time being at the office at 5 a.m.
We did a backup of everything. We cleaned the servers of almost everything; we wanted a brand new, fresh environment. We did the deployment on production using the same deployment script we had used on staging, and it was quite easy: it took less than half an hour to deploy everything. The rest of the day we spent on regression testing. The customer was also involved in that regression testing, and we had a so-called maintenance window during which we could shut down the service. It was on Saturday, which was the bad part, but the customer was used to having a maintenance window from 6 a.m. to 6 p.m., because the previous company deployed that way. We also made a huge improvement in the release cycle. Previously, as I mentioned, they deployed every six months.
Now, the first deployment went quite smoothly, so they gained confidence in our approach, and we could ask them for more. We asked to deploy after working hours on Friday, so in case something went wrong, Brazil would be affected, because we deployed in our time zone. We wanted to try, and again everything went smoothly. So we asked for more: can we deploy during business hours? And they said, okay, let's deploy during business hours. And we deployed at 12 o'clock on a Friday. Now we can deploy whenever we want, because they trust that everything will go smoothly.
We have a load-balanced environment, so we take one server out of the load balancer, deploy to it, then swap and deploy to the other server, and the customer doesn't notice the change, except for the version number in the footer showing that a new version is deployed. So it was a huge change, from six months to two weeks, when we are ready with a feature. When we are ready with a bug fix, we can deploy it instantly once the build completes. So you may ask, what's the difference? You could actually call this continuous deployment. But there is a difference between continuous deployment and continuous delivery. In continuous deployment, we deploy after each check-in, as Flickr does. In continuous delivery, we are ready to deploy all the time, but the decision is based on customer needs. When the customer wants it now, we deploy it now. If he says let's wait till Friday, or till Monday, we wait. But you don't have to stay in on Saturday because you are unsure of your process. So it was a huge, huge mindset change. We also had plans for improvements, just as we have plans for features; we know we are not done yet, we could do more. One of the improvements we could introduce is so-called blue-green deployment, introduced by Martin Fowler, where we deploy to production in a setup with multiple servers, and on each server there is a specific version of the application running.
When we create a new version of the application and want to deploy it, we create a separate application on those servers, or we use a separate server, depending on our strategy. Then, at some point in time, we decide it's ready and switch the version. And we switch it only on the router: we can switch the port, or switch the IP the traffic is routed to. It's really quick and easy, a kind of cold-standby switch, and we can easily roll back to the previous version by switching it back. Another approach we can use is a canary release.
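Both the blue-green switch just described and the canary release come down to how the router splits traffic between the old and the new servers. A minimal sketch, with invented server names and a percentage-based router:

```python
# Traffic split at the router, in percent of requests.
routes = {"old-servers": 100, "new-servers": 0}

def blue_green_switch():
    """Flip all traffic to the new version; flipping back is the rollback."""
    routes["old-servers"], routes["new-servers"] = 0, 100

def canary(percent):
    """Send only part of the traffic to the new version for testing."""
    routes["new-servers"] = percent
    routes["old-servers"] = 100 - percent

canary(10)           # 10% of users hit the new release
print(routes)        # -> {'old-servers': 90, 'new-servers': 10}
blue_green_switch()  # everything looks fine: switch everyone over
print(routes)        # -> {'old-servers': 0, 'new-servers': 100}
```

Switching back is the rollback: flip the weights again and the old version is live immediately.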
Did you hear about this term? You have? Do you know why it is called that? There is a story from 19th-century England, when coal mining was developing. There was a problem with toxic gases in the coal mines, because miners were not well equipped to figure out whether the concentration in the air was acceptable, whether they could stay and work or had to leave. So they used a canary for that: if the canary stops singing, something is wrong, the canary is dying, so let's get out. A similar approach can be used for software. When we have a set of servers, probably more than two, we can decide that some of those servers will be used as a testing environment, our beta-testing environment. So we forward part of the traffic, not all of it as before, to those new servers. And then, if everything goes fine, we can forward all the traffic. We can use this for performance testing: we can check whether the servers perform well based on part of the traffic. If we move 10% of the servers to that zone and move 10% of the customers there, the performance should be about the same.
If it's much higher, then something is wrong and we should revert to the old solution. We can also use it for A/B testing, where we compare which approach is better, the new one or the old one, while still redirecting only part of the users. And we can also use it as a brand-new-release approach. If your company creates a brand new site or application for a customer, you don't have to deploy it to everyone at once. You can deploy it only for one department, only for customers from Norway, only for beta testers who have signed up for it, and check whether it fulfills the need and whether you find any bugs. And if you do, only part of your customers are affected. Another approach which we think might also be useful for us is a kind of so-called dark launch. That's launching the software without notifying the users, without letting them know it's there. It's the approach used, for example, by Facebook. In the old days, they had a chat which didn't persist the history of messages.
But the users wanted that feature, so they asked for it. The investment, however, was huge, because at the scale at which Facebook works, they cannot just insert a new row into a database and be done with it. They wanted to test it intensively. So how did they do it? They had a server which was running fine, handling the chat messages without storing the history, and sending the responses from the other users back to each user.
They had a router in between, and they created a brand new chat system which used a database, a distributed database, and which also received the requests from the users. So for a while, two systems were handling the same messages, but the second system was only reading those messages, not responding to the users. Still, they could compare the outputs, the expected responses from both systems. Someone at Facebook could check whether the two systems behaved the same and whether the performance was still fine. And you could say, okay, smart approach, but what do we do if we use an external system, like a delivery system? We don't want to deliver twice. Then we can again create a kind of mock, and compare the requests sent to the delivery system by the old system and by the new system. They should be the same; if they are different, something is wrong. Using this approach, Facebook introduced the new chat functionality, and one day they changed only the UI. So in the UI you were suddenly able to check your history, and you probably haven't even noticed which day that was. There was no big fuss about it, but there was a huge process underneath. So how can you use those practices in your project?
What should you focus on? You should version everything, and I mean everything: not only the source code, but also the PSD files, the Flash sources, the configuration of your application, maybe the configuration of your IIS, the configuration of the router, if you are responsible for handling that. You should do as much as possible to have similar environments. If you're running on a VMware virtual machine in production, try to run on VMware locally. If you have a load-balanced environment, use a load balancer locally as well. Then you will be able to fix the issues you find on production without just committing and hoping they're fixed; you can reproduce them. You should also automate the existing procedure, automate everything. If you think, okay, I need to change this permission, why not write a short script which changes those permissions, so you can run it easily each time you deploy and not worry about it? Also, if your project has a deployment procedure with step one, step three, step 20, step 50, try to automate at least a few of those steps. Try to start from the most risky part.
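As an example of scripting such a step, here is what "change this permission" might look like as a small repeatable script. The file and the read-only mode are made-up stand-ins for whatever your procedure requires, and this uses POSIX permissions for brevity; a real Windows/IIS script would set ACLs, for example with PowerShell.

```python
# The manual step "change the permission on this file", as a script you can
# run identically on every deployment instead of remembering it by hand.
import os
import stat
import tempfile

def make_read_only(path):
    """Set the file to read-only for everyone (mode 444)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

# Demonstrate on a throwaway temp file standing in for a deployed artifact.
f = tempfile.NamedTemporaryFile(delete=False)
f.close()
make_read_only(f.name)
mode = os.stat(f.name).st_mode & 0o777
print(oct(mode))           # -> 0o444

os.chmod(f.name, 0o600)    # restore write access so the example can clean up
os.remove(f.name)
```

Once a step like this is a script, it is also testable, which is exactly how you build confidence in the risky parts.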
If you are unsure whether you changed the permissions on all the folders you needed to, or whether you switched the load balancer off and on correctly, try to automate that. Then you gain confidence in it. And don't wait till the next big deployment; try to improve it now, when you have a little time between your sprints, maybe. And you, or your PM, may still say: I don't believe it, I'd prefer you do this manually, I trust you. But if you sum up all the time required to do deployments, maybe your PM will see that it's better to invest in automated deployment than to deploy manually every time, with the whole team on site and paying for overtime on Saturday.
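The "sum up all the time" argument can be a back-of-the-envelope calculation. All the numbers below are assumptions for illustration; plug in your own team size and deployment frequency.

```python
# Rough cost comparison: manual deployments vs a one-time automation effort.
people_on_site   = 6     # whole team in the office "just in case"
hours_per_deploy = 8     # a manual Saturday deployment window
deploys_per_year = 26    # deploying every two weeks
automation_cost  = 200   # assumed person-hours to script and test the deployment

manual_hours = people_on_site * hours_per_deploy * deploys_per_year
print(manual_hours)                    # -> 1248 person-hours per year
print(manual_hours > automation_cost)  # -> True: automation pays for itself
```

Even with modest numbers, the yearly cost of manual deployments tends to dwarf a one-time automation investment, and that is before counting overtime pay.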
You may also say: oh, it's a nice idea, but not in my case, because I mostly create brand new releases; I don't maintain software as much as I create and deploy it. But you can still use the approach I've shown: deploy only to beta testers, or deploy to the production environment but don't show it to the end users, so the customer doesn't see the demo at the end of the sprint on your local testing environment but can check it on the real production environment, where only he can see it. And if you do this more often, you gain confidence that it works fine, that you are not afraid to press deploy.bat because you don't know what's inside. You know it worked yesterday, so it will work today, and in case it goes wrong, you can always roll back to the previous version. And as I said, start as soon as possible: tomorrow, or on Monday.
And that's everything from my side. I'm eager to answer your questions. Did you try to implement continuous delivery in an existing project?
Did you succeed somehow? Yes? You did? Great. How big was the project, and how long did it take to implement that process? But now you see the value. What was the hardest part: to convince the customer or to convince the project manager? Or was everyone convinced? Then you were in a great situation.
That's not a problem; you just want to deploy, yes? There are plenty of tools on the market. But the problem in our case was that the time for the transition period was pretty short, and we also had a legacy solution, so we decided not to change too much on the project side. We did a lot of handcrafted work. I mean, we used TFS and we used PowerShell, but we could not use Web Deploy or Octopus, because that would have required too much change. We preferred to make deployment stable first and later improve it using more tools, like Puppet or Chef. Does that answer your question? So my advice would be not to change too much in the project itself, but to improve everything around it as much as possible. Then you convince the customer and the PM that it's the right approach, and they will let you change more later. In the .NET world I would use Web Deploy, and maybe Octopus, and maybe Puppet.
They are well-known solutions. That's a tricky question, but there is an approach to decouple the deployment of database changes from the deployment of source code changes. Basically, it means that your application should work with both the new and the old schema of the database. That's the best approach, but it's not always possible. So we have upgrade and downgrade scripts, and we upgrade the database during deployment. It's also the easiest option and requires the fewest changes in the application. If there are no more questions, then thank you again.