
A Pythonic Approach to Continuous Delivery

Formal Metadata

Title: A Pythonic Approach to Continuous Delivery
Part Number: 132
Number of Parts: 173
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Production Place: Bilbao, Euskadi, Spain

Content Metadata

Abstract
Sebastian Neubauer - A Pythonic Approach to Continuous Delivery

Software development is all about writing code that delivers additional value to a customer. Following the agile and lean approach, this value created by code changes should be continuously delivered as fast, as early and as often as possible, without any compromise on quality. Remarkably, there is a huge gap between the development of the application code and the reliable and scalable operation of the application. As an example, most of the tutorials about web development with Flask or Django end by starting a local "dummy" server, missing out all the steps needed for production-ready operation of the web service. Furthermore, as there is no "rocket science" in between, many proposals to bridge that gap from both sides, operations and developers, start with sentences like "you just have to...", a clear indication that it will cause problems later on, and also a symptom of a cultural gap between developers and operations staff. In this talk I will go through the complete delivery pipeline from application development to industrial-grade operation, clearly biased towards the "DevOps" mindset. Instead of presenting a sophisticated enterprise solution, I will outline the necessary building blocks for continuous delivery and fill them up with simple but working poor man's solutions, so that it is equally useful for professional and non-professional developers and operations engineers. After the talk you will know how to build such a continuous delivery pipeline with open-source tools like "Ansible", "Devpi" and "Jenkins", and I will share some of my day-to-day experiences with automation in general. Although many of the concepts are language-agnostic, I will focus on the ins and outs in a Python universe and outline the Pythonic way of "get this thing running".
Transcript (English, auto-generated)
Okay, so welcome to my talk; the title is "A Pythonic Approach to Continuous Delivery". I work for Blue Yonder; we are a company that builds data science applications, and we also operate them. My job there: I'm the main developer of the so-called data services, where we offer external data like weather or public holidays for all those machine learning models, to enrich the data.
As I said, I'm the main developer, and in addition I'm also the main operations guy. So the question is how to operate and develop services at the same time with just a single person like me; more or less, you will see what we came up with and how this can be done.
Who of you has heard of continuous delivery yet? Okay, so maybe there's a strong bias if you're here. So, a short outline: first of all I will explain what continuous delivery is, with some definitions and analogies, so you know my understanding of it; maybe it's not completely in agreement with everybody else, but it's my opinion. Then: how does such a delivery pipeline look? There we will really deep-dive into some boring details. Then the biggest question: okay, I have working Python code, so how do I start? There we will assemble an exemplary production line, a delivery pipeline, out of some building blocks, and we will do this in a Pythonic way. Then: what could possibly go wrong, with some tips and tricks from my experience, maybe a short outlook on what might come in the future, some wishes, and a short summary.
Okay, so let's start: what is continuous delivery? I think the way to understand what continuous delivery is all about is to understand the main workflow of traditional software development. There we have two so-called silos. On one side is a team of developers doing all the code stuff: releases, features, continuous integration, things like this; in the end there's a product, a package with a version. On the other side we have operations, operating for example a web application, so what the company is really running, and there we have terms like packaging and deploy, lifecycle, configuration, security, monitoring, all this stuff. The more or less traditional way of doing things was to have a wall in between, sometimes called the wall of confusion, separating those two silos. Once the developers are finished with a new feature, they just take it, throw it over the wall and say: okay, it's an ops problem now; they should figure out how to deploy it, how to lifecycle it, how to monitor it, how to configure it, how to do the security stuff. Now there's this new development called DevOps, and it just means we tear down this wall and say: all those things are one thing. We cannot divide it with a wall and say "you do this and you do that", because it's just one thing: we want to deliver this web application. This is what one could call continuous delivery, because now we don't have this wall anymore. Or, with another picture: one could see continuous delivery as extending development, with code and versions and so on, into production and operations, and also extending operations into the developers' workflow. The important part is that development now really includes the entire value stream. The value stream is simply this: you have an idea, you think your customers might like this new feature, and the value stream runs from that idea until it ends up delivering value to the customer. That's the important point: there's no wall in between; development includes the whole value stream, and it is really important that we get feedback inside these development cycles. Okay, now to emphasise the "continuous" in continuous delivery: we now know
we have to deliver value to the customer, but continuous delivery means something special: we want to release early, and we want to release often, and the saying is that "continuous is far more often than you think". This brings with it a real explosion of complexity, due to the increased demands on security, safety, failover, monitoring and tests, and it is only possible via automation. Any manual workflow will completely destroy real continuous delivery, and continuous, as we heard, is far more often than you think; you cannot just do it a little bit faster than before. It's a new quality: you really have to deliver all your features automatically. Okay, there's another concept, called poka-yoke. A poka-yoke is any mechanism in a lean manufacturing process that helps an equipment operator avoid mistakes; its purpose is to eliminate product defects by preventing, correcting or drawing attention to human errors as they occur. That's an important point: we now need to leverage automation, and automation is something different from a human workflow. As a human, you can detect errors just by looking at things; you feel there might be something wrong, or it doesn't look correct. That's a completely different story once you go into automation: machines are dumb in that regard, so we really have to build in mechanisms to detect failures as early as possible. So one could see all of this like an automated car-manufacturing
production line with robots. If we compare it to what we did so far, traditional software development was just programming those robots, but in the end we don't earn any money if we don't deliver cars. That's the new view on the thing: it's not enough to program the robots, we have to build cars. Okay, so now let's look at this production line: how does it look? Once you really do it, you always feel like you're in a jump-and-run game. First of all, each change, meaning some software developer somewhere in the world commits something, is deployed to production, full stop. Each change goes into production; there's no more "let's wait until someone who might be responsible for it decides". You just have to keep in mind: each commit ends up in production as soon as possible. There's some time in between, but let's say five minutes later your commit will be in production and hit the customer, unless it is proven to be not production-ready. And that's like in a jump-and-run game: we design challenges for such a change, for such a commit. It's a Super Mario thing: if we fail to detect a wrong change, which is called a bug, it will get through and catch the princess. We don't want that, so we have to design challenges for this hero. And maybe a hint here: I don't think it's possible to automate a little bit and then keep some other workflows manual as before. I would recommend to really start with a lightweight, small pipeline, but automate the whole process. So first build a walking skeleton, where you just commit something and it ends up in production; this is the first step you should take, rather than automating only the beginning and keeping the rest as it was 10, 20, 40 years before,
so here you can see once again this jump-and-run-level kind of thing. We don't call them developers anymore, in contrast to operations; we just deliver value, so it's the delivery team: operations and developers together. They check in, and maybe the first feedback is a merge conflict; it comes back. Then we have traditional continuous integration, which checks the correctness of the code; maybe there we get feedback, let's say two minutes later: red. Okay, fix it, commit again; maybe this one goes through. Then there are stages later on: the next thing would be automated acceptance tests, and we will see in a minute what is in there, and possibly more. This is, let's say, the minimal production line we have to implement.
This picture is stolen from the book by Jez Humble. Who has read the book? Read it; it's very good, it's full of practical tips, and it's really, really helpful.
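The staged pipeline just described can be sketched as a chain of gates: a change only advances while every stage approves it. The stage names and checks here are illustrative, not from any real tool:

```python
def run_pipeline(change, stages):
    """Push a change through ordered stages; stop at the first failure.

    Each stage is a (name, check) pair, where check(change) returns True
    if the change passed that challenge.
    """
    for name, check in stages:
        if not check(change):
            return f"rejected at: {name}"  # fast feedback: fail early
    return "deployed to production"

stages = [
    ("commit stage (unit tests)", lambda c: c["unit_tests_pass"]),
    ("automated acceptance tests", lambda c: c["acceptance_tests_pass"]),
]

change = {"unit_tests_pass": True, "acceptance_tests_pass": False}
print(run_pipeline(change, stages))  # → rejected at: automated acceptance tests
```

The point of the sketch: every commit enters the pipeline, and only a failed stage keeps it out of production.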
Okay, another picture from that book, just to give you a feeling: it doesn't have to be that all the stages (we call those jump-and-run levels "stages") are streamlined one after another; they can also be executed in parallel. The important thing is that we naturally want to increase our confidence in a change before it hits production. In the beginning, in the first stage, we are not that sure: we know maybe that all the unit tests passed, but is it really production-ready? Maybe the logo is red instead of green; maybe that's not covered by a unit test, but it's important because it's our company logo. Going to the right means we have more confidence in our product, and likewise, further to the right the environments where we test these things need to be more production-like. So we have an environment called production, which means maybe an EC2 instance on Amazon, and it's a good idea to make the environments in the early stages as production-like as possible; if that's too expensive, okay, then you have to make trade-offs. And of course we get faster feedback to the left: once someone commits, the first thing is the commit stage, the traditional continuous integration, with unit tests and so on, and there we get feedback in two minutes. The feedback delay increases to the right; the latest feedback we get is from production, and that's the most dangerous and expensive one. Okay, so now let's start: we have some kind of working Python code,
our web application, call it "hello world" or whatever. So what do we do? The first step is that we need proper packaging. Let's call it a normal Python package; I think all of you know there is no such thing as a normal Python package, they all look different, they have different versions, dependencies and so on, so let's call it a standard Python package. But you could do anything else, like proper Debian packages, or some people use Docker for the complete packaging, so a binary package of your artifact; or, if you want, you can do it by hand with some tar files. It's up to you, with some constraints: it should be uniquely versioned, so that if you know the version number, you know exactly which revision in git it is, and it should somehow declare its dependencies, so what we expect our environment to provide. And a small hint: we built a small tool called PyScaffold, and there was a talk on Monday, "Less Known Packaging Features and Tricks"; he had something similar on his last slides, a cookiecutter template. This is nothing else than a template for how to build a standardized package. At least for our company it's extremely important that all the packages inside our production delivery pipelines really look the same, and "the same" means, as we will see later: you just say pip install pyscaffold, then putup my_app, and you have a package which does quite some things for you, for example automated new versions per commit, doc tests, Sphinx documentation, a git repository and so on. It's really handy, and with just these two lines we have the first building block: a proper package. Next thing: I think most of you know continuous integration; in principle, this means we execute our unit tests on every change. You can use any continuous integration tool for that, Buildbot or Travis; there are quite some tools out there. Best is, of course, a server running somewhere, so that each commit is automatically built and tested; that's a very important step. If there is a commit, we really should know at the latest 15 minutes later whether this commit was good or bad; anything else would be a fail. One thing:
at least from my experience, it's a good idea to already have different stages there. Maybe some unit tests run really, really fast and others take, let's say, one minute; then it's already best practice to have several jobs, so that you get the fastest feedback you can: after 15 seconds the first feedback, and then the longer-running tests take up to 15 minutes. Okay, to help you, if you are not creative enough with challenges for our hero: normal unit tests, we all know what they are, and my definition would be that we only test the pure code there, so we don't need any environment. An environment might be some running instances of services like an S3 bucket, a file system (maybe that's debatable), or a database, stuff like this. Then integration tests or component tests check how the parts fit together, and there we can use databases, maybe some small dummy databases; that's up to you, you can do whatever you want. Then, of course, something like static code analysis might make sense, tools like pylint or pyflakes, things like this. I think it's a good idea to measure the coverage, so that you detect it if there are 20,000 new lines of code with zero tests; then one might ask what's happening there. Doc tests are a really good thing too, so that you test that your documentation is up to date. What we get out after continuous integration is a fixed artifact, which means we now have a version and we know this version is tested by our continuous
integration. For example, PyScaffold uses PEP 440, the PEP for versioning; the Python community decided that versions have to look a certain way, and for continuous integration, where we really need a unique version for each commit, it looks like this: the first three numbers are the tagged release, so if you tag something, that is the tag. "post" says we are after this tag, because we are developing further, and "dev15" means it's the 15th commit after the last tag. The part behind that is ignored by pip, but it's a git ref, so it's quite easy to cross-reference from your version: when you find a package somewhere, you can directly see which git commit it refers to. It's quite handy. Now it's time to fill up our artifact repository, and maybe I forgot to say: you need an artifact repository. We use devpi for this. I think Holger is not here right now; he gave the keynote this morning, and he's the core developer of it, so this seems to be the right logo for the devpi server. It's an on-premise, open-source counterpart to pypi.org, and it has some quite good features, so we use it quite heavily. I can point you to a talk by Stefan Elp, also from Blue Yonder, which deep-dives into devpi; if you're interested, it's really interesting what we do there with indices, private indices, inheriting indices and so on. Quite interesting.
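The per-commit scheme described above (tag, then a post/dev counter, then the git hash as a local version segment) can be sketched as a small helper; the exact string PyScaffold generates may differ in detail:

```python
def pep440_dev_version(tag: str, commits_since_tag: int, git_hash: str) -> str:
    """Build a PEP 440 per-commit version like '1.2.3.post0.dev15+g6f2a4b1'.

    The '+g<hash>' local segment is ignored by pip for ordering, but it
    lets you map any built package straight back to its git commit.
    """
    if commits_since_tag == 0:
        return tag  # we are exactly on a tagged release
    return f"{tag}.post0.dev{commits_since_tag}+g{git_hash}"

print(pep440_dev_version("1.2.3", 15, "6f2a4b1"))  # → 1.2.3.post0.dev15+g6f2a4b1
```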
Now, that was the fun part. If you're a traditional software developer, that's the part where you know what's happening: the continuous integration part. Now comes what I feel is the painful part for you: configuration; maybe you can add packaging or release dependency management, they're a pain too. The next stage is the acceptance test. For the acceptance tests we need an automated deploy, because continuous integration now automatically triggers our acceptance tests. The acceptance tests really test the behavior of our service or web application or whatever, so you should design them as if you were the user in front of your web application, and you really need to test how it behaves: something like "if I click on an item and type in my credit card number here, then I should get an email saying that I've bought this thing". It's really the full stack: everything you want your application to do should be tested here. For this, it clearly has to be automatically deployed into some production-like environment, let's say Amazon, or in-house somewhere on a server like Debian or whatever.
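A minimal sketch of such a full-stack acceptance test: it talks to the service over HTTP only, exactly as a user would. Here a toy in-process server stands in for the really deployed application; in the real pipeline the URL would point at the production-like environment:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    """Toy stand-in for the deployed web application."""
    def do_GET(self):
        body = b"hello world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the test output quiet

def acceptance_test(base_url):
    # Exercise the service through its public interface, full stack:
    # no internal functions are called, only HTTP requests are made.
    with urlopen(base_url + "/") as resp:
        assert resp.status == 200
        assert resp.read() == b"hello world"

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0: pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    acceptance_test(f"http://127.0.0.1:{server.server_address[1]}")
    server.shutdown()
    print("acceptance test passed")
```

Tools like Behave or Selenium, mentioned below, play the role of `acceptance_test` here, driving the real user interface instead of a raw HTTP call.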
I think it's crucial that the code which does the automated deploy for this test is the same code that you use later on for the deploy into production. It is not a good idea to have a script called "acceptance test deployer" and another script, which is more or less a copy, called "deploy into production", because once you find a bug in the first deploy script, it's really hard to pin down that you also have to fix the bug in the script called "deploy into production"; that's not what we want. By sharing the code, the deployment itself is also tested here: we test whether we are able to deploy our web application into production so that it behaves exactly like we want. As I said already, the environment has to be as close to production as possible. And this is just my experience: whatever your guesstimate of how much time is needed for an automated deploy into production, multiply it by three, no matter what you thought before. I don't want to say that you shouldn't do it; it's just that you should start very small. So pick a really, really small thing, maybe something you think should be done in three hours, and then you'll need one day. Don't start with "I think we should get this all running in three weeks", because chances are high that your management will tell you after three months: come on, what's happening here? Maybe we have to stop, and that would be bad. So start small, and then you get a feeling for where all the pains are and what the blocking things are.
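The "one deploy path" argument above can be sketched like this: a single function builds the deploy invocation for every environment, and only the target inventory differs. The ansible-playbook arguments and file names are made up for illustration:

```python
def deploy_command(environment, version):
    """Build the (hypothetical) deploy invocation for an environment.

    Acceptance-test and production deploys share this single code path,
    so every acceptance run also exercises the deploy itself.
    """
    inventories = {
        "acceptance": "inventory/acceptance",
        "production": "inventory/production",
    }
    return [
        "ansible-playbook",
        "-i", inventories[environment],   # only the target differs
        "deploy.yml",                     # the playbook is identical
        "-e", f"app_version={version}",
    ]

print(" ".join(deploy_command("production", "1.2.3")))
# → ansible-playbook -i inventory/production deploy.yml -e app_version=1.2.3
```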
A strong piece of advice, although you can do whatever you want: a normal shell script is good enough, why not, it worked for years before. But configuration management tools (who knows what configuration management tools are? okay) ease everything by orders of magnitude. There are things like Puppet, Salt, Chef, you name it. We use Ansible; it's the Pythonic tool, it's written in Python, and you can quite easily extend it with your normal Python skills. Pythonic also because it's really, really simple and lightweight, and the very important thing is that it's declarative: in shell scripts you write "do that, do that, do that", and here it's more or less like SQL, you just say "in the end I want to have this", and this really eases your deploy quite a lot. This could be an example Ansible playbook, just to give you a feeling: what do I have to do to really automate my deploy?
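A reconstruction of the kind of playbook shown on the slide; the host group, paths and the devpi URL are illustrative, and, as noted below, treat options like `--pre` with care:

```yaml
# Sketch of a deploy playbook; names and paths are made up.
- hosts: webservers
  tasks:
    - name: my_app is installed into a virtualenv
      pip:
        name: my_app
        virtualenv: /srv/my_app/venv
        extra_args: "--index-url https://devpi.example.com/prod/+simple/"
    - name: my_app is running
      shell: /srv/my_app/venv/bin/my_app_started
```

Note the declarative task names ("is installed", "is running") rather than imperative commands; that style is exactly what the next paragraphs walk through.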
Okay, we want to deploy something on hosts called "webservers"; they are defined somewhere else, some IP addresses, things like this. Then we want our app to be installed; we use pip for it, and there is a module in Ansible for that, so we say: the package named my_app, which we maybe created with PyScaffold, is already tested and uploaded to the devpi. We want to install it into a virtualenv, so that we are not messing with already installed system dependencies. And we say: as an index, use our own devpi, because that's where we uploaded our package. It's maybe not a perfect example, so really think twice before you just type it in and use it for production; it might screw things up, because of the --pre flag. That would also install development versions of each dependency, for example of requests and so on. So maybe rethink it: for example, first install all pinned requirements, and only then install your latest development version. And then just start your app. I did that in a declarative way too: the shell script is called "my_app_started". If it were a plain imperative script, as you'd normally write it, it would be called "start_my_app", but that breaks down because the app is already running; so always think in declarative descriptions. Okay, so acceptance tests: as I said already,
they really prove the behavior. Be really careful here; that's your money, full stop. It's your last chance, the final boss of this game. There are tools like Behave, for example, built exactly for this kind of thing: "if I put an item in the checkout and press button X, then...". You write the tests really in those sentences, so even your management can write those acceptance tests, and then you execute them. Or Selenium, if you have a GUI, things like this; do whatever you want, but do it. And that's it: once all those acceptance tests have passed, that's the last step. You might want some additional non-functional checks. These are things like performance measurements, so that you don't get any surprises like checkout now taking three hours; security; and maybe exploratory testing, which might only be feasible with manual tests, where an experienced tester really tries to break your thing. But that, of course, means you have to wait until the tester has time to test it, so that can be a blocking step until the tester really has time to do these manual checks. Additionally, you may want a manual approval: a button where someone pushes the button, but not for handing responsibility over to some manager who then gets fired if there's a bug. It's more for things like a marketing campaign, because your new feature is really a breaking change and you want to coordinate somehow, or other things like legal issues; maybe you're not allowed to leak some information early on. If possible, you can do things like canary releases; there are many different possibilities. That's something Facebook does: you give beta versions to some users early, maybe randomized, or you select some users who get it first; that's up to you, you can do whatever you want. What we do is: we just have one button, and then, okay, now it's deployed. Hope.
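A canary split like the one mentioned can be as simple as hashing the user id, so each user lands deterministically on the same side of the split on every request. This is a generic sketch, not Facebook's actual mechanism:

```python
import hashlib

def in_canary(user_id, percent):
    """Route a stable subset of users to the candidate release.

    Hashing (instead of a random choice) keeps a given user on the
    same version across requests.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # 0..99
    return bucket < percent

# e.g. serve the new version to roughly 5% of users (illustrative):
# version = "candidate" if in_canary(user.id, 5) else "stable"
```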
Another important thing: it gets quite complicated to have all those stages, and this workflow should be completely automated; you don't want someone who says "we should not forget to execute X and Y by next Friday". So you need some coordination: passed unit tests, passed integration tests, passed acceptance tests, no security testing yet, things like this. We do it with Jenkins. There are some specialized continuous delivery tools; I know of GoCD from ThoughtWorks, and there's IBM UrbanCode. I never used them, because we just have Jenkins there already, so we glue those jobs together. As you see here, there's a job called "Port Deploy Current"; good enough, this one is blue, but "Port Deploy Latest" is red, so there seems to be an issue with the latest deploy. That's of course bad, so it should have been caught earlier; these tests were bad here. Our manual approval here is to click on the Jenkins "Build" button, and this means deploy to production. You see already that it's not really a specialized tool for this ("build" meaning "deploy to production" is maybe something else), but it works. So the question is: what could possibly go wrong? It sounds quite trivial to do, but it's not.
My advice is to really keep everything, every, every, everything simple, stupid. Don't do complex things; you will screw up. Whenever we had an issue somewhere, it was always because the first version was too complex; then we did a really dumb implementation and said: okay, that's the simplest solution. Complexity is always bad. You have to automate all the things, really all the things. I do it because I'm really lazy, and doing things twice is already not acceptable to me, but there are quite some other arguments for it. Your complete delivery pipeline, from idea to customer, is in version control, so you have, for example, predictable recovery: if Amazon goes down, you can deploy somewhere else, in another region, because you know everything is in version control; there's nothing missing. And machines are just better at such repetitive tasks, like "I have to reconfigure that script in order to..."; machines are just better, they don't make errors in repetitive tasks, and you can concentrate on the value delivery, so you can think about features which might bring value to the customers. That's the important point.
Good advice is to really maintain and refactor your delivery pipeline, and to block time for your automated deploys. You have to migrate to new versions of Ansible, you have to migrate to new versions of
requests, and so on, so really do it. Don't just think, okay, it worked once, it's automated, and now we can stop; it is never finished, we have to continuously improve the deployment process. The cloud is a really handy thing here. If you have to work with tickets,
it's quite hard to do automation: if, for a new S3 bucket, you would need to write a ticket, that's really not a good idea. Better have some cloud setup where machines can automatically get all the things through APIs, things like this. The future, yeah, okay, I'm ready.
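To illustrate the "APIs instead of tickets" point before moving on: a machine can create that S3 bucket itself. This sketch uses the third-party boto3 SDK; the bucket name, region and credentials handling are assumptions, and the API call is only reached when the name pre-check passes:

```python
import re

def valid_bucket_name(name: str) -> bool:
    """Pre-check the basic S3 naming rules: 3-63 characters, lowercase
    letters, digits, dots and hyphens, starting/ending alphanumerically."""
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name) is not None

def ensure_bucket(name: str, region: str = "eu-west-1") -> None:
    if not valid_bucket_name(name):
        raise ValueError("invalid bucket name: {}".format(name))
    import boto3  # third-party AWS SDK, needs configured credentials
    boto3.client("s3", region_name=region).create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
```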
We learned from Guido that he has the same opinion as me: dependency management, packaging, all this stuff is somehow not so perfect in Python, so we really have to find ways to
make it better. If you really automatically deploy to production after each commit, you really want to know what's installed there. Is it now a new version since this morning, because we
deployed once again, and all of a sudden we have a new version over there? That's not really perfect. Then there's the two-worlds problem: we have, for example, the Debian world, where apt is installing half of your dependencies, but then comes pip and installs the other half, maybe in a virtualenv with completely different versions of all the dependencies. That's not
perfect. And yes, such a Pythonic, let's say really lightweight and easy-to-use, continuous delivery tool is also missing: a tool which is really aware of the delivery pipeline, which knows,
okay, this is the acceptance test stage, and we deployed version number X to it yesterday, by user so-and-so. That's still missing, and many, many tools, really many tools, are optimized for a manual workflow, which means there is an operations guy sitting at the terminal typing in apt-get install and answering y. Jenkins cannot type y, so you immediately broke your deploy.
You have to have a tool which types in yes for you in a non-existent terminal, and that's often not the case. We have to work on it, we have to hack on it, we really need to improve the tools there.
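The "Jenkins cannot type y" problem is usually solved by running tools in an explicit non-interactive mode. A small sketch, with made-up package names, of how a deploy script could build such an apt-get call:

```python
import os

def apt_install_command(*packages):
    """Build an apt-get call that never waits for a human: '-y' answers
    yes, and DEBIAN_FRONTEND=noninteractive suppresses config dialogs."""
    command = ["apt-get", "install", "-y", *packages]
    env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
    return command, env

# To actually run it (Debian/Ubuntu host, as root):
#     command, env = apt_install_command("nginx")
#     subprocess.run(command, env=env, check=True)
```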
So, a really short summary: CD really rocks, it's cool, it's agile, we get faster feedback from the customer, so we don't lose money. Automated is better than manual. Collaboration is better than silos.
You can do it, just start. Example building blocks, once again: PyScaffold for packages, devpi as the artifact repository, Jenkins for continuous integration and for steering the pipelines, plain Python unit tests, keep it simple, stupid, for testing, and Ansible for automated
deploys. And you really need some courage: once you do it and commit something and it ends up in production, you really say, okay, that's something new. Okay, but that's it. Okay, we'll have time for one short question.
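As one example of what such an automated deploy can look like, here is a minimal Ansible sketch, not from the talk; the host group, package name, index URL and service name are all made-up placeholders:

```yaml
# deploy.yml -- minimal sketch of an automated deploy
- hosts: webservers
  become: yes
  tasks:
    - name: Install the application from the internal devpi index
      pip:
        name: myapp
        virtualenv: /opt/myapp
        extra_args: "--index-url https://devpi.example.com/root/prod/+simple/"

    - name: Restart the service so the new version is live
      service:
        name: myapp
        state: restarted
```

Run with `ansible-playbook -i inventory deploy.yml`; the playbook itself lives in version control, like everything else in the pipeline.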
Okay, one short question. Have you heard of the Jenkins Workflow plugin? I heard of it, but I have never used it. You should, because then Jenkins really becomes a continuous delivery system, not only a CI system. I struggled a lot with having those
single jobs that are connected. I think you're connecting the jobs manually, more or less, but the Workflow plugin is really a game changer for CI and CD. I heard that, I will try it, yeah. You should look into it. It's in the backlog. Thank you.
Okay, if you have any questions, we are at the Blue Yonder booth, so I'm over there.