
BuildSys and QA in CentOS using a Private Cloud: OpenNebula


Formal Metadata

Title
BuildSys and QA in CentOS using a Private Cloud: OpenNebula
Title of Series
Number of Parts
90
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
In this talk we will discuss how the build system and QA of CentOS are implemented using a private cloud: OpenNebula. We will focus on the challenges of integration, the advantages, and the workflow. We will also demonstrate how to implement and automate a build system and QA tasks by creating a private cloud on the fly on a CentOS laptop using OpenNebula. Administrators will benefit from this technique, with its simplicity and ease of deployment, to overcome the complexity of this kind of system.
Transcript: English (auto-generated)
So we're going to do a half-an-hour talk on OpenNebula and CentOS, because we think combining the two makes a very powerful cloud solution. The way we're going to do it is: I'm going to speak a bit to introduce OpenNebula, and then Karanbir is going to explain how they use OpenNebula at CentOS.
And then we're going to do a live demo of how to install OpenNebula on a CentOS machine from scratch and get it up and running. Hopefully, that will work out. So OpenNebula is cloud management software: if you have a data center, if you have a set of hosts
and you want to expose a private cloud or a public cloud, OpenNebula will let you do that. On top of the private cloud, we can expose public interfaces. And we also have support for hybrid computing, which means we can send virtual machines directly
to Amazon. So if you have an application that needs to grow dynamically, you can burst it out to Amazon. OpenNebula does all the things you would expect from cloud management software. That means, of course, it manages the lifecycle of virtual machines: you can deploy them, shut them down, cancel them,
migrate them, live migrate them, all of those things. But besides that, it also controls many other virtual resources. I'll start with the network. OpenNebula users want to be isolated from each other, so they can deploy the same networks and all
those things without being able to get into other users' networks. So we provide network isolation through VLANs, we have integration with Open vSwitch, we can do all those things. We also have firewalling: in the same way Amazon Web Services works, you can define your security group,
and then it will just filter out some ports and so on. Then storage. We have many storage back-ends, so if you have a storage cabinet and you want to register your images, you can integrate it with iSCSI or with any technology you want, or you can have your images end up in plain file systems and so on.
OpenNebula will also manage image cloning and so on, so the storage support is very complete. And it wouldn't be a complete cloud solution if it didn't offer cloud APIs, both private and public.
For the private part, we have a command line interface. OpenNebula is quite unique here: we have a very powerful command line interface aimed at system administrators, because we guess those are the people who are going to be running the cloud. Anything
you can do with OpenNebula, you can do with the command line interface. It's really simple to use; I guess you'll get to see that in the demo. Sunstone is the graphical user interface; you'll see that one, too. And we also have a third, web-service
interface for the public cloud. So if you want the extra security of exposing just an infrastructure-agnostic API, you can use the public cloud interface, which exposes OCCI or EC2 and lets users work the same way they would
do when connecting to Amazon. We also have multi-tenancy: you can have multiple OpenNebula instances across the world, and then you have oZones, which create virtual data centers out of them. It's maybe kind of a complicated concept.
But it works pretty well, especially if you need to scale, if you have a very large cloud and you need to divide things up and split them across different OpenNebulas. And it will manage permissions, roles, authentication, authorization, everything.
Of course, inside a single OpenNebula instance we can also divide the hosts into different clusters. Each cluster can be labeled however you want and can have whatever properties you want, so you can have one cluster that is for, I don't know, high-end customers, HPC, or whatever you want.
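As a rough illustration of the lifecycle commands mentioned above, a minimal VM round trip from the CLI might look like the sketch below. The template keys and the image/network names are placeholders, and the exact syntax varies between OpenNebula releases.

    # minimal sketch, assuming an OpenNebula 3.x/4.x-era CLI; "centos-6" and
    # "private" are placeholder resource names, not real images or networks
    cat > test-vm.tpl <<'EOF'
    NAME   = test-vm
    CPU    = 1
    MEMORY = 512
    DISK   = [ IMAGE = "centos-6" ]
    NIC    = [ NETWORK = "private" ]
    EOF
    onevm create test-vm.tpl     # deploy a VM from the template
    onevm list                   # watch it move through its lifecycle states
    onevm migrate --live 0 1     # live-migrate VM 0 to host 1
    onevm shutdown 0             # and shut it down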
So it's pretty much feature complete. Now, one of the big things about OpenNebula is that it's really, really simple. Deploying OpenNebula is just a matter of getting a front-end node and installing OpenNebula there. You don't need to install anything on the nodes.
It doesn't have agents; it runs agentless. So on the OpenNebula front-end node you will have the daemon, which is oned, and the scheduler, and maybe, if you want, your public cloud interfaces, your web interface, et cetera. That will communicate with all the hosts
through SSH. The hosts only need to have a hypervisor and SSH installed, and it will bootstrap them; they won't need to have anything else running on them. So getting your head around OpenNebula is really a matter of looking at it for, I don't know,
a couple of afternoons, and then you understand the basics. Also, I don't know if I have a slide for this. I don't. Another one of the big things about OpenNebula is that it's very extensible. Everything, in the end, is a bash script. So if you want to create a deployment file,
you simply need to know what command you have to send to your hypervisor. So hacking in support for a new hypervisor doesn't take too long, because the way it's designed, it's supposed to be flexible, very extensible, and hackable.
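Since the drivers really are shell scripts, a hypothetical "deploy" action for a hypervisor driver could be as small as the sketch below. This is not OpenNebula's shipped KVM driver, just an illustration of the idea that a driver action boils down to wrapping the hypervisor's own CLI (virsh here) over SSH.

    #!/bin/bash
    # Hypothetical VMM driver "deploy" action (illustrative only).
    deployment_file=$1   # libvirt domain XML prepared by the front end
    host=$2              # hypervisor host chosen by the scheduler

    # copy the deployment file to the host and ask libvirt to start the domain
    scp "$deployment_file" "$host:/tmp/deployment.xml"
    if ssh "$host" "virsh --connect qemu:///system create /tmp/deployment.xml"; then
        echo "DEPLOY SUCCESS"
    else
        echo "DEPLOY FAILURE" >&2
        exit 1
    fi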
So that's one of the big wins of OpenNebula: it's really easy for admins who know their way around virtualization to understand it. We have a very active community. Many people are using OpenNebula, both in the hosting industry and the telecommunications
industry. We have many components contributed by these people. We have many contributors and many users. That's wrong. It's not about contributors, but OpenNebula users. That's a typo there. This is a picture of the people using it.
And if you want to try it out, you can simply go to this page and download three appliances: one for VirtualBox, one for VMware, and one for KVM. So they are ready-to-run instances. You start with one of these, and it will also run as a node.
That means it will be able to run virtual machines inside it. And the good thing is that we also have one for Amazon. So if you don't want to install one of these hypervisors on your laptop, you can always go to Amazon, start a micro instance from one of the images we have prepared, and inside that instance you will be able to run virtualization.
So it's a very quick way to try out OpenNebula. And now I'm going to hand over to Karanbir, who is going to speak to you about how they use OpenNebula at CentOS, and why it is useful for them.
For? Exactly. That's OpenNebula's niche. That's OpenNebula. As you know, there are other cloud management solutions.
But OpenNebula wants to be a substitute for vCloud. That means we mainly take care of your data center: you have your own data center, you have VMware; we have full support for VMware, for Amazon, for KVM, with the same features.
And we provide the tools for you to treat your data center as you would with vCloud. So yeah, absolutely, that is what we try to do. That's our position in the cloud.
Is that OK for everybody? OK. I'm going to try and see if this works. Give me a second.
Is there something up there? Not yet.
How does that look? Can everybody read that? OK. I can't see it, though, so I'll have to... Hang on, let me see if I can mirror my displays. That would be perfect.
Maybe?
There's no mouse pointer here? Sorry, we should have tested this before. Apparently, I can't because just
to make life easier for myself, I'm going to bring the chair around so that I'm not leaning over and typing like a mad cat. Now I can't see the screen anymore because the screen's back here. OK, let's try this. I think it's probably worth looking at just getting
a basic install up on my laptop, which should be fun, because my laptop is set up like any developer's laptop. Got like 700 million things which are not related, which hopefully people will not have in production. So this is basically a generic CentOS 6 machine.
Can everybody still hear me? Yeah, OK.
So this is basically just a generic CentOS machine. I've got the base repo, the updates, and a little repository for OpenNebula set up. I have libvirt installed,
and KVM should be there. Is KVM there? Yeah, KVM is there. So it's literally a case of: how does that look? Right, so we bring in a bunch of Ruby stuff, a couple of Ruby gems, which are all packaged up and tested with this particular stack.
How's that? Better? Or do you want to go more?
So there are a couple of things that happen here. It sets up a directory under /etc which has a couple of things in it. Right, I need to actually install the server
so we can actually do something with it. Right, so what's happened is it has set up a basic home directory, it has set up a couple of SSH keys, it has all the contextualization stuff in, and we now have a few more configuration files.
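A rough sketch of the install steps being run in this demo is below. The repository URL and package names are assumptions for illustration; check the CentOS wiki or the OpenNebula documentation for the real ones.

    # sketch only: the repo URL and package names are placeholders
    cat > /etc/yum.repos.d/opennebula.repo <<'EOF'
    [opennebula]
    name=OpenNebula packages for CentOS 6
    baseurl=http://example.org/opennebula/centos/6/$basearch/
    enabled=1
    gpgcheck=0
    EOF
    yum install -y opennebula opennebula-server opennebula-sunstone

    # after this you should see the configuration under /etc/one/ and the
    # oneadmin user's home, SSH keys and datastores under /var/lib/one/
    ls /etc/one /var/lib/one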
Now, what we have done with the packaging on CentOS is set it up so that everything is pre-configured to go from the point that you install it. OpenNebula as upstream ships it has a lot of sane defaults, but they try to target every distribution out there. So what we've done is take that, contextualize it to what you need on CentOS, and build it around that.
So we don't actually need to edit anything from here. All we now need to do is... shall I start it? Yep. So OpenNebula comes with two interfaces: the command line interface, which I'm quite fond of, and the web interface, which tends to make more sense to
people when they're doing things for the first time. So we'll try and look at both of those. Change over to the oneadmin user, and we can now do 'onehost list', and we have no hosts in here at the moment.
So we should now be able to go and run onehost. Actually, let's do this. The big thing with OpenNebula, one of the reasons why I like it as well, is that the nodes run agentless. All chatter between the controller node and the
compute nodes and storage nodes is over SSH. The only thing you really have to make sure of is that SSH isn't going to ask you to accept the host key. This can also be automated, so depending on how you're doing your rollout, you can have keys which are pre-set-up and pre-rolled-out. So once you've got that in there, we can go... I'm struggling a bit with that. Does that look right?
So what I've done here is... this isn't rocket science, but I'm going to explain it. We've got the 'onehost create' command.
Yeah. Sorry. Sorry. Yeah. So what we've done here is say that we want to create a new compute host that's going to host our VMs. So you tell it which machine you want to add, and then the flags tell it what virtualization we want to use,
what kind of mechanism we want to use for getting data across, and what kind of network we're going to use. Now, we're using the dummy network driver here because it's all on the same machine, so we don't want to go into things like Open vSwitch, but you can use anything you want, really.
We're using shared storage, which by default is going to be shared because /var/lib/one is always going to look like /var/lib/one when everything is on the same machine. But you could specify a different datastore, and you could specify a different transfer driver to get images from one place to the other. But this should... what did I have?
Did I have a typo? Okay.
We missed basically everything that happened over there.
So what's happened is that it's done its prep, it's done the initialization setup, it's figured out what this machine is, and you've got stats up here. You've got 400 CPU counted because you've got four cores, and you've got 7.4 gigs from the eight gigs that are available on the machine. As we keep adding hosts, we'll keep seeing those numbers go up.
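For reference, the host registration just shown boils down to something like the following sketch. The driver names follow the OpenNebula 3.x/4.x-era CLI and may differ in other releases.

    # sketch, assuming OpenNebula 3.x/4.x driver names; adjust for your release
    su - oneadmin
    onehost create localhost --im kvm --vm kvm --net dummy   # local KVM host, dummy networking
    onehost list      # CPU/memory figures appear once the host has been monitored
    onehost show 0    # full details for host ID 0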
What I was hoping to show you was the init state and the prep and then how it comes online. But that's going to come on. I'm watching the time; we've got not a lot, but seven minutes more. Right. In that case,
what I will do is... does that come up? Right. So basically that's how easy it is to get the hosts in. Images deploy with exactly the same sort of ease, and you'd create instances in just about the same way.
If I can't show you the images here because we're running out of time, then come down to the CentOS booth and I'll walk you through how the images come up, what they do, how they work, and things like that. What I want to do is walk you through this. CentOS has been around for seven or eight years, and pretty much every year, year and a half, we've had a rewrite of what we call the build system.
The current build system that we have came in in November last year, and it's based on OpenNebula. What we've actually done is take OpenNebula, stabilize it for what we consume internally, and try to make it as easy as possible for people to consume. So this is basically what we've done. We've got the packages, which you've seen already. They're pre-configured, they're pre-set-up to go
out of the box on a CentOS machine. Like we mentioned earlier, you don't need to install anything on your worker nodes. So once you do a yum install on your server, that is all the software you're ever going to install; everything from that point on is done from the command line or the web interface. We've got a quick start guide, which is literally the six steps that I just did.
I went through the quick start guide, which you can see online, and we have pre-contextualized images for CentOS 5 and 6, 32-bit and 64-bit, as we release them. Every two months we rebuild those images with all updates applied. So if at any point you need an image, you don't have to create it yourself; you can just download it from cloud.centos.org.
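Pulling one of those pre-built images into the local image datastore might look like the sketch below; the URL path is a placeholder, so browse cloud.centos.org for the actual file names.

    # sketch: the image URL below is a placeholder
    oneimage create --name centos-6-x86_64 \
        --path http://cloud.centos.org/path/to/centos-6-x86_64.qcow2 \
        --datastore default
    oneimage list    # the image shows up as READY once it has been copied in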
This is also where we're distributing images for Amazon Web Services. That's about that. The CentOS build system I started talking about, and how that fits in: we have a beanstalkd-based implementation with Beanstalk workers. How many people here know what Beanstalk is?
Right, not very many people. It's basically a queue broker, a messaging bus that you can put jobs onto, and then various workers in different places can take jobs off, do them, put them into different statuses, put them into different queues. The advantage that gives us is that we only need one instance of the central Beanstalk server, and then we can have three builders,
or we can have 100 builders, and the job allocation and job submission are automated. Again, if you want to see this being used in anger with a lot of jobs going in and out, come and find me and I'll show you the hundreds and thousands of jobs per second that can go through. I'm happy to do that. So this is basically what we do. We import a source RPM as it's released, upstream, downstream, wherever. Everything has to start with an RPM,
and everything finishes with an RPM. We import it into Git. We have a prep stage where we disassemble the source RPM and do some sanity tests. We want to make sure that the tarballs included in the source RPM are the tarballs that the project shipped and that nobody en route has fiddled with them. So, for example, if it's bash.tar.gz,
we'll actually compare that against the same version string as the project publishes, just to make sure that the project's tarball is the same as our tarball. We do things like making sure that the patches that have gone in look sane, and all of this happens automatically. Then we create an audit log to see what files have been changed,
at what point, on what date, by whom. And that's available if anybody wants it. We create a source RPM again out of the tarball and the specs and the patches that we have, and then we build it. Each of these things happens in an independent virtual machine, which has no connection to anything else. So, for example, if there's a compromise in the Git import step, the prep stage will find that the sources have changed.
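A hypothetical sketch of that prep-stage tarball check, not the actual CentOS build-system code, could look like this (the package and file names are illustrative):

    # unpack the source RPM and compare its tarball against what upstream ships
    rpm2cpio bash-4.1.2-15.el6.src.rpm | cpio -idm '*.tar.gz'
    curl -sLo upstream-bash.tar.gz http://ftp.gnu.org/gnu/bash/bash-4.1.tar.gz
    if [ "$(sha256sum < bash-4.1.tar.gz)" = "$(sha256sum < upstream-bash.tar.gz)" ]; then
        echo "tarball matches what the project shipped"
    else
        echo "tarball differs from upstream -- flag for review" >&2
    fi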
And there's no way that a script in the Git, in the VCS VM, can influence anything which is happening at any other point. This is very important for what we're doing. Right, so, the build system will effectively give us RPMs at the end.
These RPMs then get included into siloed repositories, very small repositories, one per build. The URL for that is then passed to the build-time QA server. What that will do is deploy a couple of roles. So it will deploy a VM which is predefined as a web server and will have a functional web server.
We do a normal KDE desktop, and there are a bunch of other kickstart files that we've converted into images. Then we run our entire QA test suite on those images before we put in the RPMs that we built. I don't know if anybody can see that. So what we're basically doing is sanity tests on these images: we test these images against our QA scripts
before we've put the new RPMs in, just to make sure that the tests actually pass. Because if the tests are failing on what was there before the new RPMs were built, then the tests are no longer usable. Does that make sense? Right, so then we do the pretests, make sure that the tests are functional, we import and yum-upgrade the new builds,
and then re-run all of the tests. And if everything passes, we go to release, right? So I think you can see how a cloud setup really benefits us, because we get the auto-scaling bit for free. We have four physical machines, four Dell R710s, and we have an EqualLogic 6000 iSCSI SAN backing those up.
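Conceptually, the pretest, update, retest loop described a moment ago reduces to something like the sketch below; the script name and the per-build silo repository ID are hypothetical.

    # baseline: the tests must pass on the image as it was before the new build
    ./run-qa-suite.sh || { echo "baseline already failing; tests unusable" >&2; exit 1; }

    # pull in only the freshly built packages from the per-build silo repo
    yum -y --enablerepo=build-silo-1234 upgrade

    # re-run everything; only a clean pass lets the build go to release
    ./run-qa-suite.sh && echo "candidate passed QA"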
And we have up to 32 worker instances, but what those worker instances are doing depends on the current requirement. So for example, when CentOS 6.4 comes out, for the first two days we're going to have 32 instances just doing build work. And as those builds start getting converted to RPMs, our Zabbix interface will then start migrating those into QA instances.
Then we'll see those 32 build workers go down in number and the QA instances come up, which are all of those roles: the GNOME desktop, the KDE desktop. And every RPM set gets tested independently. The other thing we also get easily is image sanity, because we've tested those images with a set of scripts that we've got, so we can guarantee that the images are the same.
So as long as we can protect the datastore, which holds the raw images, we know that nobody can influence what's in those images. What this also lets us do is that technically any one of you could say: hey, this is my role, I have a web server which has these hundred things installed on it, how do I make sure that the CentOS QA system is going to QA all of the packages
that are coming up for release against my role, right? So you could build an image, submit that image to the CentOS team, and we could have it included automatically into the QA runs. It doesn't happen now, there is no mechanism to do this, but it's something which could potentially come in.
The other thing that we can also do with OpenNebula is scale into AWS if we need to. For example, let's say 6.5 comes out and, rather than the usual 300 or 400 packages, this time there are 5,000 packages we have to build. What we could do is start scaling our workers into AWS and use those nodes without actually changing anything at all,
and OpenNebula handles all of that stuff for us. Does that make sense? Is there anything anybody wants to talk about at this stage? Yeah.
So the question is how do we... how should I define this? For example, if we were to build glibc, that would have a series of dependencies; well, the whole distribution would become a dependency.
If I had internet access, I could actually show you how that works. So what we do is: the images will have what was there in the past, right? The last set of sane images as we knew them to be. We run the test suite on that, then we update just the packages that we built, so in this case just glibc, and then we rerun the test suite. We don't actually rerun or rebuild everything
that depends on it; we rely on the tests to find that for us. So one of the tests that we do is an ldd test against every binary exported from the RPM that is built. Then we do things like an ABI tester, which is hosted on kernel.org, which will then go through whatever the libraries export.
All of those things get tested and then compared to what we had previously. So within a release, there shouldn't be an ABI change within glibc. We rely on that; if there is, then the test will find it, nine times out of ten. So our test suite is complete, and it will do things like install Apache if there wasn't Apache.
It'll install PHP if there wasn't PHP, it'll install MySQL, and there's a very small PHP script that gets called to sanity-check that the LAMP stack is working. Similarly, we have a test to make sure that zlib is always working, and a test to make sure that libvirt is able to do what it needs to do. And all of the functional tests get run every time.
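A hypothetical version of that ldd check (not the actual QA script) is sketched below: every ELF file shipped by a package should resolve all of its shared libraries.

    PKG=glibc   # package under test (illustrative)
    rpm -ql "$PKG" | while read -r f; do
        file -b "$f" 2>/dev/null | grep -q 'ELF' || continue
        if ldd "$f" 2>/dev/null | grep -q 'not found'; then
            echo "FAIL: $f has unresolved shared libraries" >&2
        fi
    done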
So, for example, let's say there is a bash update. Our tests are written in bash, so they get exercised anyway. If there's a Ruby update, we have Ruby tests as well, but because there's a Ruby update we will still run everything; we won't skip the other tests. It'll still run the Apache test, for example, it'll still run the mod_python test. So what we're really testing for is the user side,
I think, the functional side of things, rather than regression testing, if that makes sense. It depends on how long the build takes. It can be as short as two and a half minutes.
It takes two and a half minutes if it's something small like 'time', if you're building the time package. The build runs in two minutes, and then the tests, I think, take about 15 minutes to run, all of them, because they're running in parallel. And this assumes a couple of things: it assumes there's one builder thread and there are maybe 16 QA threads available.
I mean, our total capacity is capped at 32. So in maybe about 18 to 19 minutes we can have something ready for release. Making it public takes a bit longer, because it takes 38 minutes to actually run createrepo, and then it has to sync through all the rsync mirrors
and all of that kind of stuff. But for the build and for the QA, it's pretty quick. So there are some tests which run on i686, and some tests which run on x86-64.
When I say some tests, I mean some roles. For example, the desktop roles are only 32-bit, but for the web server role we run on both 32-bit and 64-bit. The test suite that we run is exactly the same, though, so the fact that we're testing on a web server shouldn't have any implications
for what tests have been run or what packages have been tested. Does that make sense? Because we're still testing for Ruby everywhere, we're still testing for Apache everywhere. So even when we're testing the KDE desktop instance, we're still installing Apache, we're still running the Ruby tests, we're still running the Python tests. I think in terms of coverage, if we just have two VMs,
a 32-bit VM and a 64-bit VM (I think we're running out of time as well), if you have a 32-bit VM and a 64-bit VM with a basic CentOS install and we run the entire test suite, that should give us all of the coverage. The only reason we're testing roles like a GNOME desktop or a KDE desktop is to see how that particular update is going to influence an existing install out there.
We have one more question. Quickly, let's try and squeeze that in. Do we have try servers? We have the entire build system, and we have something called the alternative build system, the centos.alt setup, which has an IRC interface,
because we all love IRC, and you can push Git repositories into it, which it will convert to binaries. We haven't got the hooks in place at the moment to run it through QA, but Fabian manages an IBM BladeCenter for us which has capacity, so we could implement this whole thing there as well. If you're interested, why don't you drop in on centos-devel
and help us do that? So, well, maybe we can see how we can integrate. Thanks, guys.