
30 seconds to Code


Formal Metadata

Title
30 seconds to Code
Title of Series
Number of Parts
55
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Streamlining development setups with Docker and Open Build Service Creating development setups can be tedious, error-prone and quite horrifying to novice contributors of Open Source projects. You would set up a virtual machine, install the required software and spend quite some time configuring it. On top of this, your setup would require maintenance and updates. A more modern approach is featured in this talk: Create a reproducible environment, have automatic updates to new package versions just by using OBS to build your Docker container image from RPMs and a kiwi XML file. No more fiddling with VMs, no more manual install and configuration marathons - just download and run your ready-to-use Docker image from OBS.
Transcript: English (auto-generated)
Well, welcome to my talk, 30 seconds to code, on using Docker to facilitate application development on your desktop. My name is Ralph Lang, I work for B1 Systems, a consulting company in Germany.
If you ever use the open build service, the online version, then you probably have seen our sponsor logo at one time or another. I personally work as a Linux consultant and developer, I also do cloud infrastructure
and trainings. Well, let's just skip over the marketing part, because you can read this on the website if you're really interested.
30 seconds to code. Well, how many of you were in the workshop on the second day about using Open Build Service to build Docker containers? Please raise your hands.
Well, not so many. This means I might have a chance, because when I originally planned this talk, I did not know that Adrian, one of the lead developers, planned a very similar format as a workshop of multiple hours, and, well, I figured I had to change something in my talk to have
any chance against that. Well, yeah, I know, I was attending, and I think Wolfgang did it, and they did it very well. Okay. So, how many of you have already built your own Docker container?
So, okay. Not so many. So, what I want to try is mainly tell you the story of how we got to using Docker for this, and how it improved our way to build software.
And, it's not so much a technical introduction. So, let's see. Do you know Horde? Horde is a groupware suite, webmail application, and development framework in PHP.
It's been around for multiple years, and that's part of the story, or of the problem. It's mature software which has been around for quite some time, and it carries a lot of history with it, which means it has not been designed with all those
modern tools in mind, like Composer, like PSR standards, and so on. So, it has retrofitted some of these, but at heart, it's a very traditional application, and when you go to brownfield projects in software development, you see this a lot.
You see a lot of software which is not like all those shiny new tutorials show you, and I just want to show you: this is the official installation documentation of the base module, and it's already 32 kilobytes long.
Well, who reads 32 kilobytes of installation instructions just to start setting up their development environment? See the problem? It's a mighty piece of software. It can fit into a lot of scenarios, but it's not exactly easy to start, and this
is where we come from. In my development, oh well, my laptop doesn't like this adapter. In my development team, I often onboard new people, new guys who want to start coding,
who are not yet that much experienced, and what we encountered in the past was that there was quite some gap until we could get started doing actual coding work. There was a lot of setup work to do to get this piece of software running on your
laptop, and well, that's what we wanted to eliminate. We tried a lot of different approaches, and finally we found that using Docker images built from openSUSE is the way to go for us. And what are the points we tried to achieve?
We want to minimize initial setup. We want to show something fast, and we want to reduce the wrong paths an inexperienced
developer could take. We also want to reuse the same environment for running continuous integration platforms and continuous delivery, and run integration tests in a reproducible
manner, and we don't want to spend much time on the platform itself as it evolves, so I already told you a bit how it used to be. You set up a virtual machine, then you set up a LAMP stack, then you got all those ancillary
libraries and PHP extensions and whatever, you got the original self-written software on your laptop, and then you configured that software, you configured a database.
It's quite annoying. So, what did we do? We tried to introduce some configuration management, and this actually worked quite well. We used Puppet initially, then we changed to Salt.
This is a good way to manage production environments which are here to stay for years, and which don't change a lot outside of scheduled maintenance and tested scenarios, which is really great for customers and deployment, but not for development.
On your development, your computer, you want to do spontaneous things. You want to introduce change, you want to test change, you want to fiddle around, and configuration management is exactly not what is helping you here because everything
you want to do, every change you want to introduce, you either need to put into the configuration management or you stop the configuration management and just go ahead and from this point on, it's diverging.
Well, we found out it's not the way to go for development setups. So, another idea which we already tried: we shared Vagrant descriptions or complete VM images among the developers,
so we had defined setups which are ready to run. Did any of you use Vagrant before? Well, some, not that many. When you go this route, you need a lot of resources on your laptop, because these
Vagrant images are full-blown VMs, which means they need disk space for a whole environment, a whole operating system, free space for data, and you need multiple of them if you want
to test multiple scenarios which duplicates the effort. It's quite a step ahead but it still was not where we wanted to go and we needed some deployment mechanism for new code because a lot of our developers don't like working
with vi or nano or anything on the code, but wanted to work in an IDE with all those tools it provides, with typo correction and hints and so on, and well, any time you have
a change, you need to transport it in the VM, there are mechanisms there but still it's another step to consider, another step not to forget, so we also wanted to eliminate that. We found that Docker is exactly addressing these problems.
With Docker, you don't have a full virtual machine but you have a small isolation layer around a package where all the runtime requirements of your software are stored and run in an
isolated way without interference from the rest of the operating system and nothing more. There's no kernel, there's no virtual hardware, there's no consideration about free disk space
for data in the individual container and this helped us a lot define the kind of setups we liked. Also the runtime overhead is considerably smaller, you don't need that much RAM and
computation power to run a Docker container as you need for a full-blown VM. Another neat thing about Docker is that you can bind-mount code from your home directory
on your laptop into the container and execute it as you edit it, as if it would be running in your main desktop environment, so there's a shortcut which eliminates yet another step in development and this was a big step forward in providing our colleagues a means of just
starting over with a new aspect and with a fresh install. But what about updates? Well, with Docker, normally you don't run updates of the base system or the dependencies
in the container, instead you just throw away your container and get a new version online and this takes much less downloads than with a full VM and it's incredibly fast.
Only problem left with Docker was how would we like to maintain our container image, how would we maintain updates to the setup and, well, there's a lot of options, there's Docker
Hub, you can simply go to GitHub, whatever, but we decided to explore Open Build Service because we already do a lot of work inside the Open Build Service. We have a lot of experience on the team, how to use it, mostly we used Open Build
Service in the past to build deployment images, virtual machine images, from an XML description file in the Kiwi format, and you can do the same to build Docker images. So what basically happens
is, whenever there's a crucial update to parts of the operating system which are needed as a dependency for your container, the Open Build Service just rebuilds the image with the newest set of libraries inside your project, and you can just download
the new version of the container image, remove your container and start it up again and all your custom data is kept, but the basic operation environment is refreshed.
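The refresh cycle described above, pull the rebuilt image, discard the old container, start a new one, is only a few commands (all names are illustrative):

```shell
docker pull registry.example.org/devel/horde-dev   # fetch the rebuilt image
docker rm -f horde-dev                             # throw the old container away
docker run -d --name horde-dev -p 8080:80 \
    -v "$HOME/code/horde:/var/www/horde" \
    registry.example.org/devel/horde-dev           # fresh base, same mounted data
```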
This is of course very convenient, because developers want to develop, they don't want to be administrators of a zoo of platforms just to run code. Okay, also a very interesting feature, which we currently do not use for our image, is that
the Open Build Service has so-called source services which are basically watchers for updates on repositories like a Git repository and they can automatically fetch a new version from Git and repackage it into whatever format needed like bzip or whatever and then
apply it to your image which you want to build. This also works for building RPM packages and we are quite fond of this but we are
not using it in this scenario at the moment. Open Build Service also allows you to build containers from a Docker-native description file. If you know Docker a little bit, you know it has its own definition language for building
images, and it's a very concise format, it's not XML, it's more like a bash script or a configuration file, and if you're already used to that format, you can just go ahead and use that format inside Open Build Service with a few restrictions.
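As a rough sketch of what such a restriction means in practice: a Dockerfile step that fetches from the network at build time has to be rewritten so the resources are already present before the build. Everything here (base image, file names) is hypothetical:

```dockerfile
FROM opensuse/leap

# Would fail inside OBS: the isolated build environment has no network access.
#   RUN curl -O https://example.org/horde.tar.gz
#   RUN pip install some-package

# OBS-friendly variant: the tarball was fetched beforehand
# (for example by a source service) and lives next to the Dockerfile.
COPY horde.tar.gz /tmp/
RUN tar -xzf /tmp/horde.tar.gz -C /srv/www
```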
For example, Open Build Service is designed towards building reproducible scenarios which introduces a limit that you have all your dependencies, all your data prepared before
the actual build step, and during the actual build step, you have an isolated environment which means no interference from the outside operating system, no internet connection, no external data, everything has to be in place in a step before and maybe you need
to modify your existing Docker file if you want to move it to Open Build Service to benefit from it. For example, you can't use cURL or you can't use pip if you're a Python developer
or NPM if you come from a JavaScript background. You have to have your resources already there, and that's where source services are really helpful to get them in place. How would it work? I originally tried to present this in my browser, but my laptop is doing funny
things together with the presentation, so I won't do this now. You know this screenshot from the Open Build Service. This is basically the container we are currently using for open source development, and to
check out and run this container, you really need less than 30 seconds with an appropriate internet connection, of course. Yeah, we designed this container to both interactively run the application for manual
integration tests, but also for written unit tests in PHPUnit for reproducible regression tests. So how would we go about it? The workshop showed how to do it with the XML files. I just show what to do in the web UI.
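Running the PHPUnit regression tests mentioned above inside the very same container could look like this; the container name and test path are assumptions for illustration:

```shell
# Execute the test suite in the running development container, so the
# tests see exactly the packaged runtime environment.
docker exec horde-dev phpunit /var/www/horde/test
```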
You know this dropdown, which you reach over the shown URL, which provides you a list of build targets. Normally, you would select distributions there. You would select SLES 11 SP4, or openSUSE Leap 15, or some Red Hat-based scenario,
and here you just select a Kiwi image build, and then inside your project, you create a new package.
All those images are actually maintained like packages. So, the difference to a normal package is, in a normal package, you would have a spec file here, a building instruction for a distribution package. Here instead, you have this config.kiwi file.
I will show you an excerpt in the next slide. And this config.kiwi file basically describes what to put in the container, which packages need to be installed, which sub-volumes need to be configured inside Docker, which
ports to expose, and so on. And, well, I know it's a little small, but maybe you can just follow the link and look what it looks like later on.
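To give a rough idea, a heavily trimmed config.kiwi might look like the sketch below. The element names follow the Kiwi schema as I understand it, and the package list and names are invented for illustration:

```xml
<image schemaversion="6.8" name="horde-dev">
  <description type="system">
    <author>Example Author</author>
    <contact>author@example.org</contact>
    <specification>Horde development container (illustrative)</specification>
  </description>
  <preferences>
    <type image="docker">
      <containerconfig name="horde-dev">
        <expose>
          <port number="80"/>
        </expose>
      </containerconfig>
    </type>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
  </preferences>
  <packages type="image">
    <package name="apache2"/>
    <package name="php7"/>
    <package name="php7-mysql"/>
  </packages>
</image>
```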
So you basically add any packages you want inside. You can also package a tar ball with individual content, like some configuration files which you don't want to put into the actual software, and also not into the RPM packages,
a little glue around the system. Well, and then, when you're done with your XML file, OBS automatically builds the image and puts it into a Docker registry hosted by the Open Build Service.
That's a new feature. We discussed this yesterday. I think it's only in version 2.9, or even still unreleased for the downloadable Open Build Service, but the live Open Build Service has it, yeah.
As I said, there's still room for improvement. We could make use of the source services here. We could also provide a YAML file to orchestrate it together with a database, because with Docker, normally, you only put one application into one container.
So maybe you have your web application and maybe the web server in one container, but you won't put the database inside there. Or if you're running an email scenario, you won't put the mail server inside there. Instead, you would have a container for the IMAP server, a container for the database,
and a container for the actual software to run, and those would interact in a defined way. This is a little beyond the scope of this talk. We already have some scenarios in place, but they are not ready for publishing yet. They are still a little bit rough around the edges.
So, how to use that? I promised it would be easy. Just pull the container from the registry. It's a one-line command. This downloads the image to your local repository on your laptop, and the default name provided
by the build service has quite a long repository part. This is due to some design limitations. I suggest just shortening that to something you would rather type, because it's
shorter. This is what the second step does. It's not technically needed. It's just for convenience. And then you're ready to run your container. Just run the container, expose port 80 or whatever port you would like your application
to run on, and you can just open it in your local browser, and that's it.
You can also interact with the container. For example, run some quality checks or any other command line program inside the container, and what about this IDE integration?
What we basically do is we want to eliminate a step where you just download the software and then link it into your container. Instead, we go the other way. We already deliver a snapshot of the Git repository with our container image, and
then basically you run it for the first time. You copy the data into your home directory once. You kill the original container, and you start it again with your home directory content
mapped into the container, and from this point on, whatever you have in your home directory, whatever changes, is live inside the container and can be tested without any further steps apart from saving the files. Well, there are some other examples of how to interact with the example container which
we put online, and it works amazingly well. We didn't have to change the container definition in two months for upgrading the container
or doing anything. Open Build Service just kept updating the base image as we needed it whenever there was a change, and this allows us to go back to the image and do improvements when we
want to, and when we want to change something in how it works, and not for manual tasks. Spend less time on setup. Spend more time on writing code, and if you somehow did something wrong, just kill
your container, reload it, and if you did something wrong in your custom code, just rewind your Git repository, and you're done. Thank you very much. I think we still have two or three minutes for questions. Five even. Great.
Do we want to ask any questions while it's early in the morning? Thank you very much. Goodbye.