Getting cross-platform: bringing virtualization management to the PPC world
Formal Metadata

Title: Getting cross-platform: bringing virtualization management to the PPC world
Number of Parts: 199
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/32534 (DOI)
Transcript: English (auto-generated)
00:01
So, hi everybody, sorry for the little delay, I hope it's better now. So, it's my talk, so I will just start. Okay. So, who am I? My name is Omer Franco. I'm a Software Engineer and Team Lead at Red Hat,
00:21
and I'm also an oVirt Engine maintainer on the oVirt Engine project. What I'm going to talk about today is how we made oVirt capable of multi-platform management, starting with x86 and PPC64,
00:42
and basically how we got oVirt to be multi-platform. I will go through a little background of what we did to make this happen, a little bit about the people who did this work, and a little bit about oVirt for anyone who is less familiar with it.
01:04
I'll discuss the problems that we had to face, the solution that was accepted, what we did, I will show a little code and configuration, and then discuss a little what is still left to do.
01:21
So, our goal was bringing multi-platform management capabilities to oVirt, starting with x86 and PPC64, and basically having oVirt be multi-platform capable. Why? Why now? KVM on Power systems was announced recently,
01:45
and there was also the announcement of the OpenPOWER Consortium by Google, IBM, Nvidia, and some other big companies. So, this becomes relevant now. Also, it was a good opportunity to have the infrastructure to
02:02
have more platforms supported in oVirt, in case more platforms support KVM in the future. So, something very important: everything I'm going to talk about today was contributed by developers from Eldorado, Brazil.
02:22
Eldorado is a not-for-profit organization located in Brazil, focused on technology development. Basically, these are community members who contributed this support, and it was really nice working with them, and I want to talk about this just a
02:41
little, because it was a very interesting process with these guys. So, these guys wanted this integration. They sent a design to the oVirt wiki. We, the maintainers and the rest of the community, reviewed this design and suggested some enhancements, and once this design was accepted,
03:02
the development and implementation phase started. We worked over IRC to keep a short cycle, because these guys are in Brazil, so there was a big time difference. Also, over mail, we sorted out all the issues. Our maintainers worked really closely with these guys,
03:22
helping them figure out all the issues that they had and making sure all the code was reviewed in a timely manner, because we really wanted, and we actually succeeded, to make it part of the current oVirt 3.4 release. So, this is a really great success for us.
03:43
So, a little bit about oVirt. Anyone here already knows oVirt? Okay, good. So, I'll go through it quickly, just to make sure everyone will know after that. So, by definition, oVirt is
04:01
large-scale centralized management for server and desktop virtualization. What it means is that we have an open source alternative to applications like vSphere and vCenter, and it allows managing virtual data centers. The focus is on KVM for
04:22
the best integration and performance, so we are using KVM, and there is a big focus on ease of use and ease of deployment. So, oVirt is really easy to deploy and use, for administrators creating virtualized data centers and for users as well.
04:43
So, how does it look? You could use oVirt for small environments like this: a single data center, a single host running a couple of VMs. This is basically good for demos and testing, because here you
05:00
don't have some really important virtualization features like live migration. Now, you can grow with oVirt to a multi-data-center and multi-cluster environment. Basically, you can see here, I hope you can see here, that oVirt manages multiple data centers.
05:21
Each data center has multiple clusters, and each cluster has multiple hosts. So, a cluster is a kind of migration domain, and within a cluster you can live-migrate VMs from one host to another. I want to give you a really quick high-level architecture
05:40
of oVirt, just because I'm going to talk about these components later. So, I'm not going to go too deep into oVirt. We have the engine. This is a Java application that runs on JBoss. This is basically where all the logic resides and where all the decisions are taken,
06:00
whether it's scheduling decisions or other decisions. This is also the gateway: all user requests are sent to the engine, and the engine processes them and does something with them. The engine talks with the hosts that actually run the VMs.
06:21
On the host, we have an agent, which we call VDSM. The agent has a couple of important responsibilities. First, it does all the host-level configuration, whether it's storage or network, and of course, it handles everything related to the VMs on this specific host.
06:42
We are using libvirt for all VM operations, whether it's starting, stopping, or migrating. And we also have another package, called MOM, that is responsible for scheduling and SLA services. And finally, of course, on the host,
07:02
we have the running guests, or VMs, virtual machines. And the guest agent inside them is the package responsible for sending information from within the guest to the outside, whether it's the IP or the applications that are installed,
07:20
any information that we want to show in the engine web admin. It is also responsible for some commands inside the guest, for example single sign-on, where we really have to communicate with the guest from the host. Okay, so what was the idea?
07:41
So, if you remember, before, I showed you that there are multiple data centers and each data center has clusters. What we wanted to achieve is that we could have a cluster which is x86, as today, and also have clusters of other platforms, like a PPC64 cluster,
08:00
and, in the future, if available, anything that supports KVM, so ARM is the question mark. And the goal, of course, was adding this support with minimal changes, as far as possible, to the architecture inside the engine and to the UI, using the existing infrastructure
08:21
and so on. So what are the problems that we had to deal with? First, oVirt was designed and developed with a single platform in mind. We only had x86 supported for KVM, so this is what we developed for and this is the only thing we had in mind.
08:43
What happened is that there were no platform specifications. For example, for VM devices like network, display, and disk, just the same as in the physical world, not all configurations are supported on all platforms. So for example, in oVirt you can have the disk interfaces
09:04
IDE, VirtIO, and VirtIO-SCSI, and we found that PPC64 doesn't support IDE. So we had to do some filtering to block users from trying to use it on PPC, because we know it will fail.
09:25
More problems that we had: many assumptions were made without taking the platform into account. For example, PCI addressing is different between x86 and PPC64, so we had to deal with it at runtime.
09:42
We had to change the code that does the addressing, and I will show that later. Also, some features are not supported on all platforms. For example, live migration is still not supported on PPC64. So again, it's a feature we had to block according to the architecture.
10:01
So the solution that was suggested, and that we eventually used, was the strategy design pattern. I have a diagram of it. Basically, what the strategy design pattern says is that you have an interface of what is supported,
10:20
what actions are supported, and you can have different implementations. So implementations one and two will be the implementation for x86 and the implementation for PPC64. What it allows us is a couple of important things. First, selecting the behavior at runtime.
10:43
We don't need all kinds of special ifs and cases in the code to select which code to run. It gathers the specific code together. It encapsulates the x86 code in the x86 strategy
11:02
and the PPC code in the PPC strategy. And it allows people who are new to the code to find the specific code very easily, because it is all encapsulated together. And, very important, it allows us to easily add more architectures in the future.
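To make that concrete, here is a minimal sketch of the pattern, in Java since the engine is a Java application. All class and method names here are illustrative only, not the actual oVirt engine code:

    // A minimal, illustrative sketch (not the actual oVirt code) of using the
    // strategy pattern to isolate architecture-specific behavior.
    public class ArchStrategyDemo {

        // The interface lists the architecture-dependent decisions.
        interface ArchStrategy {
            boolean supportsLiveMigration();
            String defaultDisplayProtocol();
        }

        // x86 implementation: everything the engine supported before.
        static class X86Strategy implements ArchStrategy {
            public boolean supportsLiveMigration() { return true; }
            public String defaultDisplayProtocol() { return "SPICE"; }
        }

        // PPC64 implementation: live migration and SPICE are not available.
        static class Ppc64Strategy implements ArchStrategy {
            public boolean supportsLiveMigration() { return false; }
            public String defaultDisplayProtocol() { return "VNC"; }
        }

        // Selected once, at runtime, from the cluster architecture, so the
        // calling code never needs an if/else on the architecture itself.
        static ArchStrategy forArchitecture(String arch) {
            return "ppc64".equals(arch) ? new Ppc64Strategy() : new X86Strategy();
        }

        public static void main(String[] args) {
            ArchStrategy strategy = forArchitecture("ppc64");
            System.out.println(strategy.supportsLiveMigration());  // false
            System.out.println(strategy.defaultDisplayProtocol()); // VNC
        }
    }

Adding another architecture then only means adding one more implementation of the interface, which is the point about future platforms such as ARM.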
11:26
Other stuff that we had to keep in mind: we are defining the architecture at the cluster level. So we could have, in one data center, clusters of different platforms.
11:43
So you could have, as I showed before, a cluster for PPC and a cluster for x86. The CPU type is reported by the host. It allows us very easily to decide whether a host can run in the cluster or not. Also, in oVirt we have two configuration packages
12:03
that we use as infrastructure for deciding whether some feature is supported. So one thing we had to do is add architecture awareness to the configuration. One is the global configuration for features,
12:21
and another package that we have, which we call osinfo. It allows us to manage capabilities at the guest OS level. So for example, Windows has a set of devices that it can use and Linux has a different set.
12:43
So we are saving it all in one configuration, and what we had to do is add the architecture to this configuration as well; I will show an example later. Okay, so what was done so far? We can divide it into three main phases.
13:02
First was to identify and move the specific code into the strategy classes. Then changing all the configuration and creating the new configuration for the new architectures we just added. And finally, some specific coding for PPC64.
13:27
So we had to, as I said, add the architecture field to the cluster and also the supported CPUs. We have the list of CPUs that oVirt supports,
13:41
and once we implemented the strategy design pattern, we could move all the specific code to the right place and basically create some encapsulation for the x86-specific configuration. So I will show, I think it's on the next slide,
14:02
I will show you how the osinfo configuration looks, and it gives us great flexibility, because the guys who contributed the code didn't have to invent pretty much anything. They were using an existing infrastructure, so they basically just added the new architecture
14:23
and the new values for the guest OSes and configuration, and it just worked out of the box. It also gives us flexibility in the UI for filtering. I will show that as well. So this osinfo, how does it look?
14:43
So this is a snippet of one OS, and you can see that we added the PPC64 OS, and we added this CPU architecture value within it, and the value for this OS is PPC64.
15:00
So basically there is a hierarchy, so all we have to do is add x86 to the base OS of the existing OSes and just add this one, and all new OSes that we are going to add, whether it's RHEL 6 and so on, will inherit from this OS.
15:22
So basically it will be really easy to add more supported OSes once we verify they are working. Also, you can see that here we define the devices per OS. So for this specific OS, since it's PPC64,
15:44
the only display protocol that is supported is VNC, because SPICE is not supported on PPC64. And this is how we use it in the code. With the osinfo package, you just ask it:
16:01
give me the display types that are supported for this OS in this compatibility version of the cluster, and it will understand from the OS what the architecture is and return the list. I will show how it looks in the UI as well.
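As a rough illustration of that kind of lookup (the data is inlined and the names are invented; in the real engine it comes from the osinfo configuration files):

    // Illustrative sketch of an osinfo-style query; keys and names are invented.
    import java.util.List;
    import java.util.Map;

    public class OsInfoDemo {

        // Base "Other OS" defaults to x86 and allows both displays; the PPC64
        // variant overrides the list so only VNC remains.
        static final Map<String, List<String>> SUPPORTED_DISPLAYS = Map.of(
            "other",       List.of("SPICE", "VNC"),
            "other_ppc64", List.of("VNC")
        );

        static List<String> displayTypesFor(String osId, String clusterVersion) {
            // The cluster compatibility version could filter the list further;
            // it is ignored in this sketch.
            return SUPPORTED_DISPLAYS.getOrDefault(osId, List.of());
        }

        public static void main(String[] args) {
            // UI code only asks "what is supported for this OS?" and builds its
            // widgets from the answer, never checking the architecture directly.
            System.out.println(displayTypesFor("other_ppc64", "3.4")); // [VNC]
        }
    }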
16:23
So, specific code that had to be done. First is the specific addressing that I talked about before, because the PCI addressing in PPC64 is different than in x86; I will show that. Also, we added the specific PPC64 devices
16:43
for VLAN and vSCSI, the spapr devices, or however you pronounce it. A lot of front-end adjustments, which I will show you right after that, to filter out only the relevant choices for PPC.
17:00
And we had, as I said before, to block unsupported features, whether it's snapshotting or migration. In VDSM, the host agent that I mentioned before, there were some changes in the topology reporting. We added a processor name to the reported information, and the hardware information also had to be changed according to the new PPC hosts.
17:25
So how does it look in the code? You can see that this is the code from before, where we were building VM drives. There is no question about the platform here. It is just: if it's VirtIO-SCSI,
17:44
this is what you're going to do. There is no addressing code here, because on x86 we were relying on automatic addressing from libvirt, and on PPC64 we had to be aware of the addressing.
18:03
So, is it clear? I hope it's not too small. You can see, right under the case, there is an if: okay, if there is no address,
18:20
then let's create it, and basically you can see here that we are getting the strategy by the VM architecture. So this way, the code is not aware of what architecture it's running on right now, because we are getting the right strategy and then running the code.
18:40
So basically, running the assign-SCSI-address code. The code is not aware of the specific architecture because of the strategy. We also used a visitor design pattern, which holds the specific implementation per architecture.
19:05
It comes in handy: the strategy gives you the correct code for the architecture, and the visitor is responsible for actually executing the code for the specific command,
19:22
whether it's for addressing and so on. I'll show you an example. So you can see here that we have the assign-SCSI-address command, and it's an architecture-specific command, and we have an implementation of run for x86 and an implementation of run for PPC. And you can see that for x86 it's still empty,
19:42
because we are still relying on the old libvirt logic, and for PPC we have different addressing depending on whether it's a virtio-scsi or a spapr-vscsi device. So the addressing on the controller is different.
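A rough sketch of how that strategy-plus-visitor combination can be wired up; again, all names are invented for illustration and the real oVirt code differs:

    // Illustrative sketch (not the actual oVirt code) of an architecture-specific
    // command dispatched through a visitor.
    public class ScsiAddressDemo {

        // An architecture-specific command exposes one entry point per platform.
        interface ArchCommand {
            void runForX86();
            void runForPpc64();
        }

        // The visitor knows which entry point to call for the current VM.
        interface ArchVisitor {
            void visit(ArchCommand command);
        }

        static final ArchVisitor X86_VISITOR = ArchCommand::runForX86;
        static final ArchVisitor PPC64_VISITOR = ArchCommand::runForPpc64;

        // Assigning a SCSI address: empty for x86, where libvirt picks the
        // address automatically; explicit for PPC64, where the controller
        // addressing differs between virtio-scsi and spapr-vscsi.
        static class AssignScsiAddress implements ArchCommand {
            public void runForX86() {
                // Nothing to do: rely on libvirt's automatic addressing.
            }
            public void runForPpc64() {
                System.out.println("assigning explicit SCSI address for PPC64");
            }
        }

        public static void main(String[] args) {
            // The visitor is chosen once from the VM architecture, so the code
            // that builds the drive never branches on the architecture.
            ArchVisitor visitor = PPC64_VISITOR;
            visitor.visit(new AssignScsiAddress());
        }
    }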
20:00
So, what do we have ready? You can create a cluster and specify the architecture, VMs, templates, and pools, everything that oVirt has. Importing and exporting keeps the architecture information. Attaching disks according to the specific strategy,
20:22
architecture, sorry. Searching: we can search VMs by the specific architecture. And managing VMs, whether it's starting or stopping, of course. So, a little bit of how it looks. I hope you can see it okay. This is the new cluster dialog,
20:42
and you can see that below we have the new supported CPUs, so it's the IBM POWER family, and basically once the user selects the specific CPU, the architecture is set from the CPU.
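As a tiny sketch of that mapping (the CPU names here are just examples; the real supported-CPU list lives in the engine configuration):

    // Illustrative only: deriving the cluster architecture from the chosen CPU.
    import java.util.Map;

    public class CpuArchitectureDemo {

        enum Architecture { X86_64, PPC64 }

        // Each supported CPU entry carries its architecture, so picking a CPU in
        // the cluster dialog fixes the architecture automatically.
        static final Map<String, Architecture> SUPPORTED_CPUS = Map.of(
            "Intel SandyBridge Family", Architecture.X86_64,
            "IBM POWER 7",              Architecture.PPC64
        );

        public static void main(String[] args) {
            Architecture arch = SUPPORTED_CPUS.get("IBM POWER 7");
            System.out.println(arch); // PPC64
        }
    }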
21:05
So every CPU has an architecture, so it's automatic. This is the new virtual machine dialog. You can see that, above, the cluster is the PPC cluster, so the operating system list is filtered,
21:21
and I can see only the supported ones. Currently we only have Other OS, but once we have verified more guest OSes, this list will grow for sure.
21:40
This is still the new virtual machine dialog. You can see, as I mentioned before, SPICE is not supported on PPC, so again this is filtered out in the UI, and this is really nice because it's pretty much magic: the osinfo infrastructure does all this for us, so it was really easy and nice to do.
22:04
The add-new-disk-for-VM dialog. Again, you can see that the disk interfaces are filtered according to the architecture, so we know that we are adding a disk to a VM which is a PPC VM, so we will filter out IDE,
22:22
because it doesn't work. What still needs to be done? There are still some issues booting from the network on PPC VMs, and the guys from Eldorado are checking this out, but more importantly, we have blocked features on PPC
22:42
because they are still not implemented or not exactly working. Live migration is the most important one. Also, taking VM snapshots is still not supported, and hot plug for disks is not supported. We do have hot plug for NICs, for network interfaces,
23:02
but for disks, still not. Okay, so to summarize. Today, thanks to this work, the oVirt engine is multi-platform ready. We have x86 and PPC64 already in the 3.4 release, but it's very important to say
23:23
that we now have the infrastructure to add more architectures really easily, so once we have KVM supported on more architectures, it will be, or should be, easy work to add them. There is a lot more information:
23:40
there is our website and the wiki pages that the guys from Eldorado created for the engine and for VDSM, which are the projects, and all the information and implementation details are there. That's it. Yeah, yeah.
24:13
So basically we have a difference between CPUs because the virtualization features are different.
24:20
KVM is different on these versions because you have different features that you can use. I don't know the IBM features by heart, but let's say that one CPU has some features that KVM can use and the other doesn't, so I have to specifically say what the CPU type is
24:42
for this cluster, because I want migration to work. So basically you don't have to fill this out. If you leave it blank, the first host that you add to the cluster will set it for you. Yeah, we have automatic setting
25:02
if you leave it empty. Yeah, yeah. I think there is some issue there with the storage.
25:22
I don't know exactly what the problem with snapshots on PPC was, so maybe I'm not up to date. I hope it's working as you say.
25:48
Yeah. Did you integrate that all in oVirt, or did you integrate it in libvirt as well? So what's the distinction there? So, the work on KVM was announced regardless,
26:03
and I think they did some work in libvirt, but mostly it was VDSM and the engine. So I think that libvirt does support it, but there are issues in VDSM. So it works better in oVirt than in libvirt itself?
26:21
Sorry? It works better in oVirt than in libvirt. When I just use plain libvirt with the shell for that, for virtualization. So that's good news. I asked you if that's the case, I don't know. Ah, yeah, it should work. We are using libvirt, the same as is used with virsh. So VDSM goes through libvirt as well.
26:42
Okay. So it's in libvirt. Yeah. Okay, that's good. And then another question, you said ARM. Yeah. You know, I'm quite keen on that, and I can't wait till March because I'm dead set on the new ARM hardware. Do you know of any work on that?
27:02
I have no idea. I have no idea. I'm sorry. But you don't know if they work on KVM for ARM? I don't think there is work on KVM for ARM. I don't think so, but I didn't look for it. Okay. Yes.
27:24
Sorry again? Okay. So the question was actually at this point
27:41
what you're displaying, or what I understood from your display, is that you already have Linux running on the machine and then you can build other Linuxes on top of that. That would mean that the LPAR had to already be defined. Are there any plans? What would be defined for it? The LPAR, the logical partition on the machine itself. Okay.
28:00
So is there any plan to interact with the HMC, so you can do configuration of the LPAR or do dynamic LPARs on it? Actually, I don't know. I don't really know about this, I'm sorry. On IBM POWER. On IBM POWER, so. Yeah, I think it's below that level, because we are using KVM
28:21
and KVM is user space, so I don't think there is any interaction there. Okay, we are out of time. Thank you very much.