
OKD Virtualization: what’s new, what’s next


Formal Metadata

Title
OKD Virtualization: what’s new, what’s next
Subtitle
New features on OKD Virtualization 4.11 and 4.12 and next challenges
Title of Series
Number of Parts
542
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
OKD Virtualization is the community project bringing traditional virtualization technology into OKD. OKD is an Open Source Community Distribution of Kubernetes optimized for continuous application development and multi-tenant deployments. OKD is a sibling community distribution to Red Hat OpenShift Container Platform. Meet the OKD Virtualization community, learn new features, discover deployment patterns and get involved! OKD Virtualization is built on top of the KubeVirt project and its sibling operators. The Hyperconverged Cluster Operator project ensures an opinionated and robust deployment automating common Day-1 (installation, configuration, etc.) and Day-2 (re-configuration, failover, etc.) operations. The Hyperconverged Cluster Operator is available in the OpenShift Community Operators catalog and so also on your OKD cluster. In this session you will learn about fresh OKD Virtualization features like:
- Deploying a single node development/testing environment on CodeReady Containers (with nested virtualization) on your laptop
- Using KubeVirt Tekton tasks to automate common VM tasks (creating/copying VMs, generating ssh keys, executing commands in a VM or manipulating disks with libguestfs tools) in a CI/CD fashion
- Automatic importing and updating of pre-defined or custom boot sources
After this session newcomers will know what OKD Virtualization is while experienced users will get a sneak peek of new and upcoming features.
Transcript: English(auto-generated)
Welcome to the virtualization room. We'll talk about OKD Virtualization. Enjoy. So, hi, all.
Nice to see you here. Today we are going to talk about OKD Virtualization. We are going to have just a quick intro for those who don't really know what OKD Virtualization is, just to get a bit of context. Then we are going to see
how you can use CRC to play with OKD Virtualization at home with a really small footprint, just if you want to try it or if you want to start developing on OKD Virtualization. We are going to see a couple of new features. I chose these ones because they are cloud-native.
I think that they are a bit different from what you are used to seeing in similar kinds of products. And then we are going to see the next challenges for us. So, let's start from OKD. What is OKD? OKD is a distribution of Kubernetes.
It's a sibling distribution of OpenShift Container Platform, which is Red Hat's distribution of Kubernetes; OKD is the community upstream release of it. It's based on physical machines that can be bare metal
or virtual machines on a hyperscaler. In our case, it would be better to use bare-metal nodes, just because we are talking about a virtualization solution. Then we have the hosts there, which nowadays are based on Fedora CoreOS,
but then we are going to see that this is going to change. Then we have all the Kubernetes stack and you can use it to start your application. Now, what is KubeVirt? KubeVirt is a set of virtualization APIs for Kubernetes.
So, you can extend Kubernetes in order to be able to run virtual machines inside containers on your Kubernetes infrastructure. At the end, it's still using the KVM hypervisor. It's able to schedule and manage virtual machines as if they were native Kubernetes objects.
What is its main advantage? It's that it's cloud-native. It means that you can use all the Kubernetes stack, so the container networking interface for the network,
the container storage interface that you are already using on Kubernetes for the storage. It's based on custom resource definitions and custom resources, which are a way to extend Kubernetes with new APIs. It can schedule virtual machines as native Kubernetes objects, and you can have them talk with what you already developed as microservices.
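As a rough idea of what such a custom resource looks like, here is a minimal VirtualMachine manifest; the name and the container disk image are just illustrative:

```
# Minimal sketch of a KubeVirt VirtualMachine custom resource (illustrative values).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-legacy-vm            # hypothetical name
spec:
  running: false                # start it later from the UI or with virtctl
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:        # ephemeral disk pulled from a container registry
            image: quay.io/containerdisks/fedora:latest
```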
So, in an ideal world, you are going to write your application from scratch, completely split into microservices. In the real world, you probably have a bit of legacy code or something that is already running in a virtual machine.
Are you supposed to schedule it on an external hypervisor, or on the same infrastructure that you're using for your microservices, with the capability to have them talking natively to your virtual machines? KubeVirt is the answer to this challenge. Now, how can you test it at home?
You can easily try it with CRC, CodeReady Containers. CRC is a really quick way to start playing, debugging and hacking on OpenShift in general. CRC is a micro distribution of OpenShift
that runs in a virtual machine that can be executed on your laptop. It's absolutely not intended for production usage. It's going to be executed in a virtual machine, so it's a single node cluster. It's not going to scale out, it's not going to support upgrades.
It's just a test platform. Here are just a few instructions if you want to try it at home. Since we are talking about a virtualization product and you are running it in CRC, which is a virtual machine as well,
you need to enable nested virtualization on your laptop in order to be able to start virtual machines inside the CRC one. Then you can tune the configuration. Normally, CRC comes with a really small configuration. If I'm not wrong, it's about nine gigs of RAM,
which is not that small, but it's just enough to run OKD by itself. If you think about playing with a couple of virtual machines, it's better to extend the memory up to at least 20 gigs in order to be able to do something realistic. It's also nice that CRC already comes
with a KubeVirt hostpath provisioner, which is a way to dynamically provision PVs, persistent volumes, for your virtual machines. As you can imagine, a pod is something ephemeral, while your virtual machine needs a persistent volume. You need a way to provide persistent volumes
for your virtual machines. CRC is just a virtual machine where you can run other virtual machines inside, but you still need a mechanism to provide persistent volumes for that. It's already integrated in CRC, but you have to extend its disk in order to have a bit of space to create disks.
At the end, you just have to execute a couple of commands, crc setup and crc start. After a few minutes, you will gain access to your environment. You can connect... Of course, you can do everything also from the command line.
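For example, the tuning and the startup mentioned above might look roughly like this on your laptop; the sizes are just examples, and the preset option depends on your CRC version:

```
crc config set preset okd        # select the OKD bundle, if your CRC version supports presets
crc config set memory 20480      # ~20 GiB of RAM for the CRC virtual machine
crc config set disk-size 100     # enlarge the VM disk so there is room for VM disks/PVs
crc setup                        # prepare the host environment
crc start                        # create and boot the single-node cluster
```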
Probably many of you are going to prefer using the command line here. I attached screenshots to the presentation just because they are nicer. You can connect to the admin user interface, where you have the OperatorHub page.
In the OperatorHub page, you are going to find it already there, because it's distributed via the OperatorHub mechanism: the KubeVirt HyperConverged Cluster Operator. As I mentioned, you don't need to configure the storage; you're only supposed to install the operator
and create a CR to trigger the operator, and the storage is already pre-configured. After a while, you will be asked to create the first CR for the operator in order to configure it. Here, you can fine-tune the configuration of OKD Virtualization for your specific cluster.
In particular, we have a stanza called featureGates where you can enable optional features. Here, we are going to talk about two features. One of them is already enabled by default, which is enableCommonBootImageImport, and the other is deployTektonTaskResources.
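A sketch of what that first CR might look like with both feature gates turned on; the name and namespace below are the defaults commonly used by the hyperconverged operator and are an assumption here, so adjust them to your cluster:

```
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged     # assumption: default HCO namespace on OKD
spec:
  featureGates:
    enableCommonBootImageImport: true    # already on by default
    deployTektonTaskResources: true      # needed for the Tekton tasks shown later
```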
This one is not enabled by default, but if you want to try what we are going to see now, you have to enable it. You can enable it also on day two. After a few minutes, the operator is installed. It's going to also extend the UI with a new tab
where you can see what you can do with your virtual machines. Now, let's start talking about the new features. In the last year, we introduced a lot of features, but today I want to talk about two of them.
The first one is golden images. Why? I think that it's interesting. Nowadays, if you think of any public cloud environment on a public hyperscaler, it's really easy to use them. Why? Because you can find the images
for your preferred operating systems already available. You just have to select one of them, click, and in a matter of minutes you are going to get a virtual machine that is running. You don't need to upload your disk, you don't need to upload an ISO, start defining your virtual machine, and so on.
They are really convenient. We want to have the same experience also in KubeVirt. So, we introduced this feature. The whole idea is that we are going to have... Oh, sorry, we already have a container registry
which contains some images with the disk images for your virtual machines, and they are going to be periodically refreshed to include new features of the operating systems or security fixes. Then we have this new object called DataImportCron,
which is going to say that you want to periodically pull an image from that container registry on a schedule and import it in your cluster. There are some mechanisms in order to configure the garbage collection and
the retention policy, but at the end the idea is that you are going to find images for popular operating systems out of the box, already available in your cluster. And they are going to be refreshed over time. Now, let's see.
This is the catalog for creating virtual machines in the user interface of OKD Virtualization. We have a catalog with objects. The whole feature is here. As you can see, for popular operating systems, we already have a nice label
saying that the source is already available. It means that this new feature automatically imported a golden image of that operating system for you, and it's going to continuously keep it up to date. The benefit is that when you want to start a virtual machine,
you will be able to do it with a single click. You can customize the name, you can say in which namespace it's going to be executed, but everything is already ready. With one click, you are going to start your virtual machine. What is going to happen on the storage side? We see that we have some existing persistent volume claims
for the disks that got automatically imported. One of them is going to be cloned to become the disk of your virtual machine. Depending on the CSI implementation, this can even be completely offloaded to the CSI provider, and it can be really fast. After a few minutes, your virtual machine is there.
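Under the hood, that clone can be expressed as a CDI DataVolume that references the imported golden image through its DataSource; a hedged sketch, where the name, the size and the golden-image namespace are illustrative and may differ on your cluster:

```
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: my-fedora-root                    # hypothetical name
spec:
  sourceRef:
    kind: DataSource
    name: fedora                          # the automatically managed golden image
    namespace: kubevirt-os-images         # assumption: namespace holding the golden images
  storage:
    resources:
      requests:
        storage: 30Gi
```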
As you can see, through cloud-init or Sysprep or whatever, it can also be customized to use a custom name and so on. What can our DataImportCron look like?
We are saying that we want to have a data source named fedora with a schedule, with the usual cron syntax. We want to periodically consume images that are available on Quay, which is a Docker registry. Here you can see the status, and the image is up to date, meaning that the freshest version of Fedora
got automatically imported in your cluster. The important thing is that if you look here, you see that the tag for the Fedora image is latest. It means that when the next release of Fedora comes out, it's going to be automatically available in your cluster.
Of course, we are providing images for Fedora and CentOS, but you can use the same mechanism and the same infrastructure to import your own images into your cluster. You can create custom DataImportCrons, like the sketch below.
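A custom DataImportCron might look roughly like this; the registry URL, the schedule and the sizes are illustrative:

```
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: fedora-image-cron                 # hypothetical name
spec:
  schedule: "0 */12 * * *"                # usual cron syntax
  managedDataSource: fedora               # DataSource kept pointing at the newest import
  garbageCollect: Outdated                # retention: drop imports that are no longer current
  importsToKeep: 3
  template:
    spec:
      source:
        registry:
          url: "docker://quay.io/containerdisks/fedora:latest"
      storage:
        resources:
          requests:
            storage: 5Gi
```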
Now, I want to talk about an additional really nice feature, which is KubeVirt Tekton task pipelines. In the previous section, we saw that we are able to import images for popular operating systems, but maybe there is some other operating system
that requires creating a virtual machine starting from an ISO and installing it. So, how can we automate it? We cannot expect that the providers of all the operating systems in the world are going to use this mechanism and publish images for us. We need a way to automate the creation
of the images for other operating systems. In this case, we are going to use a Tekton pipeline to automate this. What is Tekton? Tekton, also known as OpenShift Pipelines, is a cloud-native continuous integration,
continuous delivery solution. It's also cloud-native and it's fully based on Kubernetes resources. It uses what are called Tekton building blocks to automate tasks, abstracting the underlying infrastructure.
In the Tekton world, we have tasks. A task is something that defines a set of build steps, like compiling code and running tests, or building and deploying images. In our case, we are interested in building images, but as you can imagine, you can combine it with other tasks.
Then you can define a pipeline. A pipeline is a set of orchestrated tasks. And then you can use pipeline resources, a set of inputs that are going to be injected into the execution of your pipeline, which is a pipeline run. With the KubeVirt Tekton tasks operator,
we introduced some specific tasks to create, update, and manage the specific KubeVirt resources, so virtual machines, data volumes, data sources, templates, and so on. You are able to populate these images,
even with libguestfs, to inject files and so on. You are able to execute scripts, Bash or PowerShell or whatever. We have a set of tasks; I don't want to go through all of them, but some are already available and we are extending them. And we have an operator that is going to populate
the tasks for you on your cluster. Now, we want to see an example pipeline. We have two pipelines that are going to be injected by the Tekton tasks operator. The first one is called the Windows 10 installer. It's going to populate a golden image for Windows 10,
according to some input that you are going to provide. The idea is that it's going to copy a template, it's going to modify the template, and it's going to start installing Windows from the ISO. And it's going to create a virtual machine for you.
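To give an idea of the shape of such a pipeline, here is a rough sketch wired from the KubeVirt Tekton tasks just mentioned; the pipeline name and the parameter are illustrative, and whether the tasks are referenced as Task or ClusterTask depends on how the operator deployed them on your cluster:

```
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: prepare-windows-image             # hypothetical name
spec:
  params:
    - name: isoURL                        # illustrative parameter: where to fetch the installer ISO
      type: string
  tasks:
    - name: copy-template                 # copy the base template
      taskRef:
        name: copy-template
        kind: ClusterTask
    - name: modify-template               # adjust it (disks, the installer ISO, etc.)
      runAfter: ["copy-template"]
      taskRef:
        name: modify-vm-template
        kind: ClusterTask
    - name: create-installer-vm           # boot a VM that runs the Windows installation
      runAfter: ["modify-template"]
      taskRef:
        name: create-vm-from-template
        kind: ClusterTask
```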
We can see a small demo. So here is the pipeline.
We have to provide a few inputs in particular. We have to provide the ISO of the Windows version that we want to install. And it's the first... Yes, there. Perfect.
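If you were providing those inputs from the command line instead of the web console, it could look roughly like this; the pipeline and parameter names are examples and may differ between releases:

```
tkn pipeline start windows10-installer \
  --param winImageDownloadURL=https://example.com/Win10.iso \
  --use-param-defaults \
  -n kubevirt-hyperconverged
```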
Here we see the pipeline. The pipeline is going to copy the template. It's going to modify it. It's going to create a first VM that is used in order to install Windows, and then create the Windows image from that VM.
Here is... Now we are simply going to see what is happening, but everything is fully automated. You don't really have to watch it. But if you like, you can also see it live. Here is our virtual machine. And as you can see, it's starting, it's booting, and it's going to install Windows.
We also have a second pipeline. I have a demo for that. It's called Windows Customize. Probably we are already a bit over time. The idea of the second pipeline is that you can customize this image
by running additional steps, like installing the software that you need, or modifying the image that is going to become one of the golden images that you provide in your cluster.
Let's move back.
This is the second one. I'm going to skip the demo, but if you have any questions, please reach me. The idea is that we can use this pipeline, in this case, to install SQL Server and so on.
What's next? OKD is going to change. In the beginning, I told you that OKD is based on Fedora CoreOS. We are going to have a big change there, which is called OKD Streams,
which means that the nodes of OKD are going to be based on CentOS Stream CoreOS. It's going to be a real upstream for OpenShift Container Platform, where the nodes are based on Red Hat CoreOS. CentOS Stream is the upstream of Red Hat CoreOS.
Everything on CentOS Stream is going to be built as well using Tekton pipelines, just because we really believe in that project. On the OKD Virtualization side, we are going to introduce many features. We are going to try more pipelines, more automation. We are working on ARM support.
We are working on improving the capabilities of the APIs. We are working to reduce the privileges of the pods that actually execute the virtual machines. We are working in the real-time area. Here are a few links if you want to reach us.
Thank you. [Audience question:] ...and Kata Containers. And I wonder how OKD compares, integrates,
or overlaps with these two projects. So, the question is: we already have other projects like OpenStack and Kata Containers, and you want to understand how KubeVirt compares to them.
The first idea is that in KubeVirt, we are managing virtual machines as first-class citizens of Kubernetes. You can define a virtual machine with a custom resource, because Kubernetes provides a mechanism called custom resource definitions
to provide new custom APIs. You can use them to define a virtual machine as a really native object that is going to be scheduled by the Kubernetes orchestrator alongside the pods and other resources. The main benefit is that you can use the same storage
that you are using for your pods. You can have your pods talking at a network level with virtual machines natively, without the need to configure VLANs and so on as you would if the virtual machines were running on a lower layer of the stack, like when you have OpenStack under Kubernetes.
Virtual machines are going to be first-class citizens of this infrastructure. So, does it integrate with OpenStack? Not really integrated. If you... If we go here...
OK. Here the idea is that we have something here, which in our case probably is going to be bare-metal nodes, but it can also be another hyperscaler; in that case you need nested virtualization, which is not always the best idea. Then you have Linux host nodes,
now with Fedora CoreOS, but in the future with CentOS Stream CoreOS, and here you have the Kubernetes stack. Kubernetes is going to schedule pods as containers on those nodes, and virtual machines there. So you don't really care what you have at the lowest level.
OK, so the question is,
you showed how to use a pipeline in order to prepare an image for Windows. Isn't it simpler to directly use a Dockerfile to create a container? So, in theory it is, but you have to start from an already running virtual machine and take the disk, because Microsoft is providing an ISO
with a tool that you have to execute. But you have to execute it. I mean, you have to execute the binary of the installer, so you have to...
At the end, you are manually running something that is going to install Windows, and at the end you need to take a snapshot, which is going to be your image. You want to automate it, you want to continuously execute it in order to fetch updates.
How did we solve this? We automated it with a pipeline, because we have a set of tasks, and the pipeline is the smartest way to execute and monitor them. Sure, last one.
Which formats are used for the operating system disks? Oh, do we support other formats? No?
So, just raw. Thank you. Time is up, but if you want, please reach me outside.