Running virtual machines in containers

Video in TIB AV-Portal: Running virtual machines in containers

Formal Metadata

Title: Running virtual machines in containers
Title of Series:
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date:
Production Year:

Content Metadata

Subject Area
The idea of running virtual machines inside containers is surprisingly old and has been used for several reasons, including the wish to run VMs in container orchestration engines like Kubernetes or Borg, and the packaging of IaaS cloud software like OpenStack in containers. In this presentation, I am going to describe these use cases and the two main approaches to containerizing VMs: putting every qemu(-kvm) process in a separate container (as Borg or RancherOS do), and putting the libvirtd process in a container (as OpenStack Kolla or Stackanetes do).
So, first let me say who I am and how I work. I work at Kinvolk; I am currently a contributor to Kubernetes, but before that I was contributing to OpenStack, mostly to the OpenStack Kolla project, which packages the OpenStack components into Docker containers and runs OpenStack as a containerized environment. Kinvolk is a company based in Berlin; we do general Linux-related development, and most of the work done at Kinvolk is visible in the rkt project, but also in systemd and related projects. You can check our activity on our blog or on our GitHub, as everything we do is open source, or just write us an email if you have questions about the company.

I will start by explaining some basic concepts around my presentation. First of all, container and virtual machine: I think most of you know the difference, so I will not focus on it much, but a container doesn't use any new kernel; it just isolates several things within one Linux system, while a virtual machine uses a separate operating system and simulates the hardware. By cloud I mean any kind of service that is provided over a network to the user, where the user doesn't have to know where the service is actually provided. A container-based cloud is a cloud environment where the user requests some containers and doesn't have to know where they are physically located; they are scheduled automatically. There were a lot of talks today about Kubernetes; it is the most popular open-source container-based cloud system, but there are also Mesos and Docker Swarm. Then there are virtual-machine-based clouds: OpenStack is the most popular of the open-source projects, and among the closed clouds focused mostly on virtual machines there is AWS with its EC2 service.

And what is the problem I am trying to address? The problem is that these clouds, container-based and virtual-machine-based, are separate.
For running a cloud of virtual machines booted from qcow2 or raw images you use OpenStack; for running containers you use Kubernetes. It is very hard to maintain a single environment, a single infrastructure, that provides both of them to the user, and that is the problem I would like to address: how to create a homogeneous cloud environment that offers both VMs and containers. One of the answers, which is implemented in several ways I will show, is putting the virtual machine inside a container. It sounds crazy, but it works, and in my opinion it makes sense, as I will explain in this presentation.

But first of all, let's begin with the question: what is needed to run a virtual machine inside a container? What characterizes a container that is able to run virtual machines? First of all, it has to be privileged, so we need to give it most of the Linux capabilities. It needs access to cgroups, because, for example, libvirt uses cgroups for resource management of the virtual machines and of the qemu processes it spawns. We also need to provide access to all the devices we would like to share with the VMs, and if we want to use KVM, we need to share the KVM device as well; it is just a device in the /dev directory.
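The requirements above can be sketched as a set of `docker run` flags; this is a minimal illustration, not any project's exact invocation, and the image name in the comment is hypothetical:

```shell
# Flags a VM-capable container needs, as discussed above:
# privileged mode, cgroup access, the devices to share, and /dev/kvm.
vm_container_flags() {
  printf '%s\n' \
    '--privileged' \
    '-v /sys/fs/cgroup:/sys/fs/cgroup' \
    '--device /dev/kvm' \
    '-v /dev:/dev'
}

# A full (hypothetical) invocation would then look like:
#   docker run -d $(vm_container_flags) libvirtd-image
vm_container_flags
```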
Here comes the question of whether this idea of putting VMs inside containers improves security somehow, and the obvious answer is no. That is because the container is privileged, it has access to the devices, and it has access to cgroups, so if someone gets inside the container running the virtual machine, we should assume they have access to the node, to the host. It doesn't provide any extra security: the idea of packing a virtual machine inside a container is only about simplifying things and creating a homogeneous environment. The security of the VMs stays the same, and we should still care about bugs in any software that manages or runs our virtual machines.

So how do we use the concept of containerizing virtual machines in cloud environments? There are two most popular ways. The first is to put every qemu process in a separate container; the second is to put the libvirtd daemon inside a container and have many qemu processes inside that one container with libvirt. In the qemu-in-container case, we have a host with two or more qemu containers, each running a virtual machine. The two best-known examples of cloud systems using that approach are Borg (Google's Borg internally uses containers for virtualization, putting each virtual machine inside a container and scheduling them like any other container) and RancherOS, which has a control plane for virtual machines using exactly the same approach; they have a Docker image with qemu, and you can even just pull it and run a virtual machine with docker run. The advantage is that we don't rely on any other tool for managing the lifecycle of the VM: if we shut down the VM, the qemu process simply goes down, and for Docker or any other container runtime that is just the shutdown of a container, which Kubernetes and the other container cluster systems see in the usual way. But there are two disadvantages.
The first is that you have to somehow manage the images: if you have a Kubernetes environment in which you would like to run containers with VMs, you need to somehow provide the qcow2 or raw images, and if you are developing such a solution, you need to provide an image service for it. The second is that you have to put your own effort into providing external storage and into playing with qemu options. In the libvirtd-in-container case, we assume that every node in the cloud runs one libvirtd container (in Kubernetes this could be a DaemonSet), and the many qemu children of libvirtd have their lifecycle managed by it. The best-known example of that is the OpenStack Kolla project, which I mentioned when introducing myself: it is the project that containerizes OpenStack, and they also have an option to run OpenStack on top of Kubernetes. There is also Virtlet, a project that aims to make the VM a native citizen of Kubernetes by implementing the VM pod feature, and there is also the KubeVirt project, whose developers had a presentation yesterday in the virtualization and infrastructure-as-a-service track. The main advantage is that libvirt provides an abstraction for managing images and manages the remote storage.
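The per-node libvirtd pattern mentioned above maps naturally onto a Kubernetes DaemonSet. A sketch follows; the image name, labels, and mounts are my assumptions for illustration, not the manifests of Kolla, Virtlet, or KubeVirt:

```shell
# Write a sketch of a per-node libvirtd DaemonSet manifest.
# All names below are illustrative, not taken from a real project.
cat > libvirtd-daemonset.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: libvirtd
spec:
  selector:
    matchLabels:
      app: libvirtd
  template:
    metadata:
      labels:
        app: libvirtd
    spec:
      hostNetwork: true
      containers:
      - name: libvirtd
        image: example/libvirtd        # hypothetical image
        securityContext:
          privileged: true             # required, as discussed earlier
        volumeMounts:
        - name: dev
          mountPath: /dev
      volumes:
      - name: dev
        hostPath:
          path: /dev
EOF
# kubectl create -f libvirtd-daemonset.yaml   # needs a running cluster
```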
It is also much easier than dealing with qemu directly. On the other hand, you need to interact with libvirt, which is itself a layer of abstraction, so it is not easy to decide whether to go with qemu or with libvirt. Some projects use the first approach and some use the libvirt approach; we may see in the future which one turned out better and which layer of abstraction caused more problems.

So how exactly does this relate to clouds? As I mentioned, Virtlet is a project that runs VMs in Kubernetes, and it does so using the Container Runtime Interface (CRI). There was a presentation today explaining what it is, but I will explain it quickly: CRI is a mechanism in Kubernetes that allows you to write your own server providing a runtime service to Kubernetes. By default Kubernetes uses Docker, so if you run some pod on Kubernetes, you get a bunch of Docker containers running somewhere in the cluster; with CRI you can replace that with any kind of runtime you want. On each node in Kubernetes there is a kubelet, the daemon managing the lifecycle of the containers on that node; it only receives instructions from the scheduler about what it has to do. The best-known example of a CRI service is rktlet, which uses rkt, but for virtual machines you can also use CRI and, on receiving a pod definition, run a virtual machine by talking to libvirt instead of rkt.
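Plugging such a runtime in happens on the kubelet side, by pointing it at an alternative CRI socket. A small sketch; the flag names follow the kubelet documentation of that era, the function name is mine, and the socket path is an assumption (each runtime exposes its own):

```shell
# Build the kubelet arguments that switch a node from Docker to an
# out-of-process CRI runtime listening on the given socket.
kubelet_cri_args() {
  echo "--container-runtime=remote --container-runtime-endpoint=$1"
}

# e.g. (hypothetical socket path):
#   kubelet $(kubelet_cri_args unix:///run/virtlet.sock)
kubelet_cri_args unix:///run/virtlet.sock
```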
So these things work, but do we really need such an inception? It may sound crazy: why would we run virtual machines inside containers? I think we do, because the goal of Kubernetes and other container management systems is to be as small as possible and not to implement more complicated logic; instead, they want to give people the opportunity to build that logic themselves. A good example is the concept of operators; there were a bunch of talks in this track about operators. They use Kubernetes, but they keep Kubernetes itself from getting too big: the Kubernetes community doesn't implement the logic of upgrading stateful applications beyond the concept of StatefulSets, so people move the more complicated deployment logic into outside things like operators. I think we should see any solution for running a virtual machine inside a container as a solution of that kind: we use the simple primitives of Kubernetes to achieve something more complicated, and we add one layer of abstraction to gain something valuable, because the separation between VM clouds and container clouds is a huge problem, one that may even keep people from considering Kubernetes if they use a lot of VMs and have an infrastructure built around virtual machines.

Unfortunately I cannot show a live demo, because I have no adapter for my laptop; we tried before the talk. But we have some time, so I can show a recorded demo that is on GitHub of how it works.
Can everyone see? Okay, maybe this will be better. In this demo, first of all we run the Virtlet server and then start a local Kubernetes cluster, and after that we have a definition of a pod; I will pause the recording here. This is just a usual pod where we define a container named fedora; it uses a Fedora image served by some HTTP server, just a qcow2 file. (Sorry, the projector again; it is just the DisplayPort. Yet another try of my demo.) So that is the pod definition, and we can create the pod with kubectl create, and it will work. Let me continue. It takes some time to run the virtual machine, which is why the container stays in the creating state for a while; after about 14 seconds it became running. Now we can get into the container: docker exec into the container named libvirt, which is the one running libvirtd, and run virsh list. We can also access the console of the virtual machine with virsh console; here it is, just with slow typing.
I can also show you a small project of mine, a very tiny Docker environment for putting libvirt in a container, just so the docker run command doesn't get very long. I put the definition inside a docker-compose file: I expose libvirt's port, there are the necessary mounts I mentioned in my presentation, and there are also named volumes for libvirt, where the actual instances and disks are stored. I also have a start script which wraps libvirtd. It does some magic to detect which type of processor we have and which KVM module has to be loaded, plus some necessary chmods and chowns for the configuration files. That is because when you mount a file into a container in Docker, there is no way to declare which user it should belong to or what its permissions should be, so if we mount configuration files, we need to chmod them inside the start script. The configuration of libvirt and qemu is very small: for libvirt we just want it to listen on a socket, so we can talk to libvirt from outside the container without entering it every time, and can use virsh on the host, or even virt-manager, the graphical client, on the host. There is also a qemu config which defines the user and group. About the KubeVirt project I mentioned: the difference between Virtlet and KubeVirt is that Virtlet, as you saw in the demo, uses a pod definition for running the virtual machines, while KubeVirt uses third-party resources for that. I am not going to explain it in detail, because KubeVirt was explained yesterday in its own talk.
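The "magic" in the start script, picking the right KVM module for the CPU, can be sketched like this; it is a simplification of what such a script might do, and the function name is mine:

```shell
# Decide which KVM module to load from the CPU flags in /proc/cpuinfo:
# "vmx" means Intel VT-x, "svm" means AMD-V.
detect_kvm_module() {
  case "$1" in
    *vmx*) echo kvm_intel ;;
    *svm*) echo kvm_amd ;;
    *)     echo "" ;;      # no hardware virtualization support
  esac
}

# In a real start script this would be followed by, e.g.:
#   modprobe "$(detect_kvm_module "$(grep -m1 '^flags' /proc/cpuinfo)")"
# and by chmod/chown of the mounted libvirt configuration files,
# since Docker bind mounts keep the host's ownership and permissions.
detect_kvm_module "vmx sse2"
```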
Let's go back to the presentation. That is unfortunately all I wanted to show; I am sorry again for not being able to show you a live demo from my laptop. Do we have any questions?

Yes. So the question was why you would even want to run virtual machines in a containerized environment, and what the use case is. The use case, I think, is migration between virtual machines and containers: a company that has used virtual machines for a long time is thinking about Kubernetes, doesn't want to manage two separate clouds for the long term, and wants to use Kubernetes while still offering an option to its traditional virtual machine users, without the necessity of maintaining a separate virtual-machine-oriented cloud. I realize this is not a problem on AWS or other clouds that you don't manage yourself, but using Kubernetes as the main infrastructure, without running a separate OpenStack and without extra layers on top of Kubernetes for virtual machines, really simplifies things, I think. That is my assumption; maybe I am wrong, but that is the idea behind it.

From the audience: This is very near and dear to our hearts; a lot of this work fits into our own, rkt is our project, and I spoke earlier today about operators, which dovetails with this as well. You touched on this a little, but I want to make sure we draw it out. The reasons to package virtual machines in containers go beyond homogeneity: there is the convenience of packaging and distribution, the ability to do verification on that discrete package, and most importantly the ability to schedule it dynamically across compute resources with an orchestration system like Kubernetes. The unit of scheduling, the thing we know how to move around between computers
in these systems, is a container, not a virtual machine, right? So that is a whole other set of reasons to package virtual machines inside containers: it gives orchestration systems a handle on legacy applications that already exist in virtual machines. You put the container around them as a package, and now you can schedule them dynamically onto your cluster resources the way you can containerized applications. You certainly touched on that, but I wanted to bring it out and put a nail on top.

For now it doesn't, but both Virtlet and KubeVirt want to address it somehow. Sorry, I keep forgetting to repeat the questions: the question was whether we want to address more complicated operations on VMs, such as live migration. That isn't implemented yet, I think, in either project, but most probably we will need an external controller, similar to an operator, which will consume a third-party resource called live migration or something like that and call the libvirt API underneath.

What about scaling? So the question was what about other operations, like scaling of virtual machines. To be honest, I haven't thought about that yet; live migration would probably be the first of the more complicated virtual machine features to land. I think it is being designed now in the KubeVirt project and may be implemented in the near future.
The next question, or rather thesis, was that putting OpenStack inside Kubernetes doesn't simplify things, because you still have Nova, you still have Heat and a lot of other components, and it doesn't really simplify things for the operators. I only gave Kolla as an example; the Virtlet project doesn't use OpenStack at all. I wanted to keep this presentation objective and not promote a single solution, but yes, if you want to throw out OpenStack entirely, that is fine too.

From the audience: Part of why we, along with friends of ours in the community and contractors and other folks we have worked with, invested a lot of time and effort specifically in porting the OpenStack control plane into containers and running it as a Kubernetes application is that our findings are quite contrary to your suspicions. We unify around a single management interface, the Kubernetes API. OpenStack, as wonderful and magical as it seems as a VM management system, really is just a big stack of Python apps; so we put them in containers, we run them on Kubernetes, we schedule them with Kubernetes, and we recover from failures, which are quite frequent in those Python apps, with Kubernetes. There is a fair amount of material on the CoreOS blog about the project between CoreOS and Intel, the rkt open-source project, and all the pieces that fit into the OpenStack port for Kubernetes. The finding is actually that we reduce the administrative overhead by unifying around a single cluster management interface, instead of deploying the OpenStack applications in an OpenStack silo outside of Kubernetes. So, to answer the speculation that this adds complexity and only makes things more difficult, all I can do is encourage you to grab the stuff, try it out,
and see if you find that to be true. As for tenant application payloads, the folks you are serving: they still consume the OpenStack APIs and schedule their virtual machines through the OpenStack facilities. Actually, let me back up for a minute: it is not necessarily true that the customer VMs run inside containers. We are talking about two separate things here and have blended them a little. The talk is about running VMs inside containers; the OpenStack work, which relates to and encompasses it, runs parts of the control plane in containers, but that does not necessarily imply that your end users' VMs are packaged in containers. They are VMs consumed from OpenStack, scheduled with OpenStack, but running on top of the hypervisor as plain virtual machines. Is that a better answer?

I think that doesn't conflict with my point: if you want the concept of tenants for VMs, as in OpenStack, you can run OpenStack on Kubernetes and expose the OpenStack control plane to the end user, while treating the whole Kubernetes stack running underneath as something for your internal operators. That is why I wanted to keep things neutral: if someone needs OpenStack, they should be free to use it; if not, then not. Different people have different needs. Thank you.



