Docker for Perl[56] People

Video in TIB AV-Portal: Docker for Perl[56] People

Formal Metadata

Title: Docker for Perl[56] People
Subtitle: A Ridiculously Short Introduction
Title of Series:
License: CC Attribution 2.0 Belgium — You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
Production Year

Content Metadata

Subject Area
* "What problem does Docker solve for Perl 5/6?"
* "What? There is a problem?"
Okay, good day. I would like to talk about Docker — the use of Docker for Perl people. Some people know me as El Che or nxadm on IRC, on Twitter, or whatever. I'm also the devroom manager; we had kind of a scheduling problem last time, so that's why, at the last minute, I'm here. A little bit about me: I work at the competence center for information security at the University of Leuven, and we do stuff related to security, identity, authentication, authorization, and things like that.

I would like to start with a kind of controversial question: what kind of problem does Docker solve for Perl? And a typical reaction would be: what problem? Do we have a problem? And it's true that in Perl we've had CPAN for ages, we have a very nice testing culture, we have a very nice community, we are very good at writing tools, so it's very easy to be productive in your Perl environment. But if we rephrase the question a little differently, we may get another answer. If we ask "how do we deploy our programs now, in 2017?", we get a lot of different answers — you know, there's more than one way to do it. Some people use cpan straight on the remote machine; I hope no one here works like that, but everywhere I've worked I've seen people doing that on production machines. Some people use plenv or perlbrew to use their own Perl instead of the system Perl. Some people use Carton to pin their dependencies. Some people use App::FatPacker — very nice if your application is pure Perl, so you can ship just one file and don't need the whole dependency tree. Some use a minicpan or a DarkPAN. Some people just create an archive and put it on the server. More sysadmin-type people will create an OS-dependent package, like a Debian package or an RPM, and the same kind of people will probably use a configuration management tool like Puppet, Salt, Rex, or Sparrowdo. And if we are honest with ourselves, we
must acknowledge that it can be a little fragile at times. When you're in full control of your environment, these are very nice tools, but in a bigger environment with a separation of responsibilities — where you have Linux admins responsible for the OS and its updates, and application people responsible for the application — maybe the admins will do an update, and your application is not well tested, and it will break. And if you are working in the cloud, where you need a really fast switch of environment, compiling and so on will take a long time, so it's not always the best solution.

If we look over the fence into other communities that have worked around this problem, we see that it's not that easy. Take Java, for example. Java is great in this regard, because in Java you put everything in a jar, you put the jar on the machine, you feed it to the JVM, and it runs. It will load tens of thousands of classes — without exaggeration — but it will run; it's great. But even then you will meet classpath hell: when you have the same library in two versions on different paths, your application will run, but it will explode along the way. And if you go to Go — also a very nice language — you do a static compilation of your program, you add all the dependencies to your binary, you take that small file, you put it on the server, you run it. It's fantastic, it's fast, there's no VM. But even then, if you have a security problem in one of your libraries, you need to track down all those small binaries everywhere, and that's not easy if you don't have the infrastructure for it, because programs tend to outlive the programmer. I've heard from some colleagues where I work that a proof of concept I wrote, like, ten years ago is still in use — the program even says THIS IS A PROOF OF CONCEPT in capitals. They still use it; I don't think they even have the source. So that's a little difficult. So if we look at
the track record of communities like Java and Go, we realize that deploying is always only half of the question. The real question, in my eyes, is: how do we integrate with an ecosystem that is no longer language-centric? And what do I mean by that? I mean that the future is API-centric. You don't care that much about the language; you care about the API, you care about integrating stuff together. I could even say the present already is. If you're working within DevOps teams, you have a lot of people from different backgrounds: operations people, sysadmin people, with their own tooling; developers that maybe have their own tooling. And working with people from different backgrounds means different languages and different frameworks, so you're already mixing stuff there. If you work in the cloud — where it's very important to be able to switch from one cloud provider to another, and to bring your instances up quickly — it's very important to have the best tool for the job. And this is a good thing, because it's very possible nowadays that the best tool for the job is not written in Perl. It could be written in Java or Go or Ruby; it doesn't matter, because you still get the best tool for the job and you can integrate everything together.

Well, back to Docker. A typical question is: is it here to stay, or is it hype? Because if you've been around in IT for some years, you know things come, they go, they come back slightly different. So that's a very good question. And the same people — the ones who have seen VMs for like 20 years — would say: yeah, but we already have VMs, what's new? Well, the idea behind a VM, a virtual machine, is to fully emulate a discrete environment, to have a full operating system. That also means you need to fully administer an operating system: you used to have one big physical machine, and now you have one physical machine with ten VMs, which is a lot more work to
keep up to date, to keep secure, to create users on, and so on. And most importantly, a VM and a container are not at odds; they can work together. It's a perfectly valid scenario to have a VM and run containers on it — maybe because your infrastructure is built around VMs and you can deploy and provision them very quickly, or maybe because, for security, you don't want your containers to share the same kernel. There are a lot of valid reasons to do that.

Well, after this introduction I would like to answer the question: what is Docker? Because I keep talking about Docker, Docker, Docker, but I haven't explained it yet. If I'm forced to summarize it in one word — I said it already — it's a container. And the same people that said "yeah, CPAN, JVM" would say: containers? We've been doing that forever. Me, myself, I've been doing that since 2005 with Solaris zones; I probably migrated hundreds of physical machines to Solaris zones. It was fun: you could copy your container, say, to another machine. People working on AIX were probably doing it ten years before that. But it's not the same thing. What's different, again, is the API: Docker gives you an API to integrate it with other stuff. So if we redefine what a container is nowadays: of course, it's an application that is self-contained — that's kind of the definition of a container — but the most important part is that it's portable. You work at your workstation on the same container, on the same binaries, as on the production server, as the thing the customer has. It's portable: you move stuff around, and you move the same thing; you don't need to recreate everything and then test for the differences. Because of this, containers have a really huge impact on how we develop, how we distribute, and how we run software. As a developer it's priceless to be able to develop in the same environment as the production machine, because that's always the battle between sysadmins and developers: yeah, it runs on my laptop,
but it doesn't run on the production server, which is so slow, whatever. You will develop differently, because you can have the full stack — all the different services — on your laptop. You can distribute it within your company: test, quality, production; you just move the same thing to a new environment. You can push it to a client: exactly the same thing you have on your laptop. And it's also a very standard way to run software. You don't care whether they use SUSE or Fedora or Debian or Ubuntu — you just don't care. Maybe they run it on big iron on premises, maybe they use a cheap cloud provider; you just don't care, because this is a standard way to run software.
So this is a visualization of how a container looks. It took me a while to get this, because it's kind of confusing what a container really is. The most important part is the image. An image can be compared to an ISO, a DVD, a live Linux distribution, where you put all your libraries and your binaries; when you bring it up, you always get a fresh environment. Every change you make will be lost when you restart your container, so it's kind of a read-only environment — you can change it on the fly, but when you restart it you lose the changes. That's the idea: you always start from a fresh environment. Then you need some runtime information, things the container needs to be useful: maybe some network addresses and ports, maybe access to NFS or other mount points, environment variables, whatever. And most important for your application is the persistent data. That's something you don't put in the image, because the image is just cheap — you can put it on the internet, you don't care — but your configuration, your secrets, your business data live outside the container, and the container has access to them. So those are the three big parts.
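As a sketch of those three parts in practice — the image name, port, paths, and environment variable here are hypothetical, chosen only to illustrate the idea:

```shell
# The image: a frozen, read-only filesystem pulled from a registry
docker pull debian:stable-slim

# Runtime info (name, published port, environment variable) is passed
# at run time; persistent data is a host directory mounted as a volume
docker run --name myapp \
    -p 8080:8080 \
    -e MYAPP_ENV=production \
    -v /srv/myapp/data:/data \
    debian:stable-slim
```

Restarting the container discards any changes made inside it; only what lives under the mounted volume survives, which is exactly the split described above.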
Then, if we look at this again, we realize that we still need tools to manage the runtime info, the configuration, and the image creation — and those are probably tools we already talked about, in this case even cpan on the server, because you're working locally, so it's okay. A Dockerfile is just a series of commands that you run on top of a basic operating system image: you start with a very small Debian, and then you say add this, add that, run that. But you only do it once, and the result gets stored in a kind of binary format. It's very easy to keep things simple; you don't need complicated stuff at that level. Everything is containerized, everything is easy to understand, and because of that it is also easy to
implement. Where we used to have a very big Radiator — a Perl application, a RADIUS server project — we only used Puppet, and the Puppet code was very complicated, very big, with a lot of tests, because Puppet had to manage users, manage packages and services, the order they run in, and at the end it configured my application. Now, with a container, I just don't do that anymore, because all of that is in an image that's frozen. The only thing I have to care about is my own application. Puppet now just takes files, puts them in a directory, takes a template, injects some secrets, and that's it. So my code is very easy to read, very easy to understand, because I don't have to look at the full picture; I only have to look at my application.
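A minimal sketch of the kind of Dockerfile described above — a small Debian base plus a short series of commands. The module choice, paths, and command are hypothetical, not the speaker's actual setup:

```dockerfile
# Start from a very small Debian base image
FROM debian:stable-slim

# Install dependencies once; the result is stored as frozen image layers
RUN apt-get update && \
    apt-get install -y --no-install-recommends perl cpanminus make && \
    rm -rf /var/lib/apt/lists/*
RUN cpanm --notest Plack

# Add the application itself as the last, cheapest-to-rebuild layer
COPY app.psgi /srv/myapp/app.psgi

# Configuration, secrets and business data stay outside the image
VOLUME /data

CMD ["plackup", "--port", "8080", "/srv/myapp/app.psgi"]
```

The same layering also allows the split discussed later in the talk: a vetted base Perl image can serve as the FROM line, so application teams only add their own thin layer on top.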
So what does Docker bring to the table? As a kind of summary of what I said: it's efficient, because there's only one process running directly on the kernel — the kernel, as Marian talked about, has cgroups to give you some basic security, but it's the same kernel; it's not emulation. The way of working is also very efficient, because you are working on the real thing, so you save a lot of time: you don't go back and forth with the sysadmins discussing what's different. It's portable, like I said: you can distribute your images. And — I can't read that, so we'll say "embeddable" — it's also embeddable. That means that you, as a Perl guy or girl, can create a base Perl image with a very up-to-date, very secure Perl, and provide a base set of modules that you vetted, versions you tested. Then someone in your company, or someone from the internet, can take your image and just care about their application: they add a layer on top of your image and bake the image for their application. You're responsible for the Perl part, and they are only responsible for the application part. It's very easy to have a secure baseline that you can update, and they don't need all that knowledge.

I only have five minutes left, so I will warn you — I don't want to just sell you stuff, I don't want to be only positive. Do you know the first rule about Docker? I know someone here does: you won't shut up about Docker. It's okay to give a presentation, but don't do it at the dinner table, because you get very annoying — I've been there, so don't do that. More seriously: when you use Docker, you need to test, test, test. It's not as straightforward as it looks. Yes, things are easier, things are simpler, but there are a lot of corner cases. You need to ask yourself some very good questions — things you should always ask, but now you're forced to. You need to ask yourself: is my application horizontally scalable? If the
answer is no, you need to really rethink your application, or just don't bother with Docker, because Docker uses the concept of cattle: Docker doesn't care about the server, doesn't care about your container — if you have a problem, you just spin up a new one. So if your application is bound by CPU or memory, maybe it's not a good solution. As for application performance in Docker: I already said it's very efficient, but there are some trade-offs at the level of networking and disk I/O. If your application is horizontally scalable, that's not so important, but you still need to test, and you need to make sure you make the right choices — on networking there are implications for security and flexibility. Make those choices deliberately, and don't just use the defaults of your distribution, because that's a very generic setup. And this is the most important thing I will say today: Docker is not a security solution. For most people that work with Docker, it gives a very dangerous false sense of security, because you think "it's containerized, I'm safe". You're not. You need to follow best practices, you need to follow common sense, you need to test, you need to keep your application updated. Of course it's an extra layer of indirection, and that's a good thing, but it's not enough. So if you get into Docker — most books, most talks don't go into this — you need to look into it yourself. I don't have the time to go into detail, but with a very minimal effort you can get a very secure application; you just need to be proactive about it. There's also the matter of people and politics. This is not a technical issue, but most companies and institutions divide IT into operations and developers, and if you start with Docker you get a lot of people that will have something to say about your image — a lot of chefs in the kitchen — so you need to be ready for that too. You need
to have a good collaboration with other teams; you need to be able to acknowledge input and talk about it. And this is also an opportunity. I already talked about a base Perl image: it's a very good opportunity for a Perl person to create a standard, to be the one that is knowledgeable about Perl, someone that can create a baseline, someone that can make sure that security is followed, and so on. I have some slides left, but I'm just going to leave it like that. Maybe if there are some questions, I don't know...

[Question inaudible] Sorry? Yeah, certainly. The idea behind it is to have the real thing on your laptop, so you are working on the real thing that will run in production. I couldn't develop otherwise, because otherwise you always end up in the discussion of "it works on my laptop, it doesn't work on production". One minute? Not even one minute. I would say Docker is easy because there's a lot of integration already, so that makes it easy, but there are other good alternatives as well. I think I'm going to leave it at that. Thank you very much. [Applause]