
Ceph on Kubic: Deploying Ceph with Rook on Kubic k8s cluster


Formal Metadata

Title: Ceph on Kubic: Deploying Ceph with Rook on Kubic k8s cluster
Number of Parts: 40
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
In this talk we will see how easy it is to deploy and configure a Ceph-ready k8s cluster on top of Kubic, and to set up Ceph on top of it with Rook (rook.io). We will also see a couple of examples for OpenStack and Vagrant to run such clusters for your CI and development environments.
Transcript: English (auto-generated)
So once again, my name is Denis, I'm working in the SUSE storage team, and today I will be talking about Ceph on Kubic: how to set it up on Kubic, how to run it, and how you can test and develop it as well.
So, as I said, I'm working in storage. We are developing a storage product based on Ceph, and that includes not only Ceph but a lot of different configuration management software around it, gateways, and all the possible tools. I will briefly go through the Ceph technology: what is Ceph,
what is Rook, and then I'll show you how you can set it up in your environment and start running it.
So Ceph is a distributed storage system. It is highly distributed and highly available: it distributes data across different nodes and provides access to it as block devices, as a file system, and through S3 and Swift interfaces as well. What is good about Ceph is that it replicates data across the cluster, manages it, and heals itself if some nodes go down or some disks break. It will figure out where the data is actually located and keep it available at all times.
At the heart of the Ceph system is RADOS, the Reliable Autonomic Distributed Object Store, on which all these interfaces are based. This is the general storage system everything else builds on. You can use it directly through the librados library, and on top of it you have services like the RADOS Gateway, providing the S3 and Swift interfaces; you have block devices that you can map, mount, and use; and you have CephFS, the native file system for Ceph, which works and performs really well. There are also other gateways that you can use, like an NFS gateway or an iSCSI gateway, to access the same Ceph system.
Also at the heart of Ceph is the CRUSH algorithm, which allows it to distribute the data: to know where data lives, how to replicate it, and so on.
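To make that list of interfaces concrete, here is roughly what each access path looks like from the command line (a sketch only; the pool and image names are made up):

```sh
# Block device: create and map an RBD image (pool/image names invented)
rbd create mypool/myimage --size 1024
rbd map mypool/myimage

# File system: list the CephFS file systems
ceph fs ls

# Object storage: S3/Swift requests go to the RADOS Gateway endpoint,
# e.g. with s3cmd or the swift CLI pointed at the RGW host
```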
Basically, what you need to know about Ceph for this topic is that it is a complex system spanning different nodes, and it is really not trivial to configure, maintain, manage, upgrade, and update. Those are the challenges of Ceph from the configuration management perspective.
What is Rook? Rook is a cloud-native storage orchestrator, and it also comes with all these fancy words: management, scaling, healing. But it's on another level: if Ceph is the solution for data, Rook is the solution for orchestrating these services around the cluster. It automates the deployment, bootstrapping, configuration, upgrades, updates, everything.
Rook supports not only Ceph as a storage system but other storage systems as well, databases for example. There are a couple of them, but I think Ceph is the only one that has reached version 1.0; the other systems are still in beta, some in alpha.
So what Rook actually does is set up Ceph across your nodes and your disks. It knows where it should set up the storage nodes; it knows where to start the monitors, managers, gateways, and so on. It does it all for you in the Kubernetes cluster, for example.
Here you can see the Rook architecture. It not only sets up Ceph, it also provides access to Ceph through volume claims or directly, so you can use this Ceph storage in your containers. So that is, briefly, the architecture of Rook. And as I already said, the Rook agent, for example, provides the access to the Ceph storage so you can reuse it in your containers.
Actually, I'm not an expert in Rook per se. We have people here, and I can point to Stefan Haas, sitting here in the room; if you have any questions about Rook and how it works, you can ask him.
So we have Ceph, and we have an orchestrator that works in Kubernetes. What we are missing is how to actually use it in openSUSE. In the storage team, as well as in other teams at SUSE, we always have the openSUSE-first approach: we first commit our packages and our solutions to openSUSE, test and build them there, and after that we use them in the enterprise product. We have had the packages built for a long time in our filesystems:ceph development repo; they get submitted to Tumbleweed, and the backported stable packages of Ceph get submitted to Leap as well.
You can see here that we have the openSUSE Ceph wiki page, which you can check; there are some hints on how to configure and use Ceph, and you can also check the Build Service
projects to see what kind of packages are there and how that all looks; I will show you in a moment.
So this is our wiki page, which you can read to figure out the different methods to install Ceph. It starts right from the beginning: deployment with ceph-deploy, how you can deploy it with Salt, how you can deploy it in containers, and how you can deploy it with Rook as well. It's quite useful for getting started.
Here is our Build Service project, and it contains all the packages needed for you to start with Ceph: build it and run it. By the way, this is all a development project; the released packages are already in the Leap and Tumbleweed distros. You could also pay attention to the subprojects; there are quite a few of them, starting from the Hammer release, then Jewel and Luminous. Currently the stable one is Nautilus, and the development one is Octopus.
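If you want to pull the development packages straight from OBS, it looks roughly like this (the repo URL follows the standard OBS download layout; pick the subproject and distro that match your setup):

```sh
zypper addrepo \
  https://download.opensuse.org/repositories/filesystems:/ceph/openSUSE_Tumbleweed/ \
  filesystems-ceph
zypper refresh
zypper install ceph
```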
So the release process for Ceph looks as follows: we have filesystems:ceph:octopus, which is the development project; we submit those packages to filesystems:ceph, and from there we submit them to Factory. From filesystems:ceph:nautilus we submit the packages to Leap.
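In OBS terms, each hop in that chain is a submit request, roughly like this (project names as described in the talk; `osc sr` is shorthand for `osc submitrequest`):

```sh
# development subproject -> main devel project
osc sr filesystems:ceph:octopus ceph filesystems:ceph
# devel project -> openSUSE Factory (Tumbleweed)
osc sr filesystems:ceph ceph openSUSE:Factory
```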
For containers, this is actually a new thing for us, and it has been changing quite rapidly lately; as Fabian said, it has become more stable, so we are starting to build the containers as well. We have the rook-ceph image, based on the rook package — this is the Rook operator that I described — as well as the Ceph image, which contains all the needed Ceph services listed here: the Ceph monitors, Ceph itself and the Ceph libraries, as well as the Ceph OSDs. We are planning to follow the same process that Fabian defined on the wiki page: we plan to submit our containers from filesystems:ceph to openSUSE Factory and Leap, so I hope you will see those containers in the distro sometime in the near future.
So we have everything now: we have the system, we have Ceph and Rook, and we have built packages as well as containers. What we need in our development setup is to actually set it up, use it, develop on it, and test it.
How can we do that in openSUSE? In openSUSE Ceph we have a long-time project named vagrant-ceph. It was developed a long time ago to set up a virtual environment on your local computer that enables you to develop configuration management systems, for example. We have DeepSea, based on Salt, and it was really useful to have a multi-node setup on your local machine to install Ceph across different nodes; that is the ability this project provides. It also has a lot of libraries to prepare the images, attach the disks to the OSD nodes, and define the different roles: the admin role, the monitor role, and so on. It can also pre-upload some files if needed. This is all described on the wiki page, so you can read the documentation there as well.
As for how it works: you need to have a box, and luckily openSUSE Factory has this box for MicroOS; they are building the Vagrant box there. So what you need is to add the box with one command and run vagrant up, and that will bring up the cluster; in this case it's a tiny three-node cluster, and you will have Kubernetes deployed on it. This is the starting point: a Kubernetes cluster on some nodes, so you can start to deploy Rook on it.
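The two commands amount to something like this (the box name is illustrative, not necessarily the exact one Factory publishes):

```sh
vagrant box add opensuse/MicroOS.x86_64   # one command to add the box
vagrant up                                # brings up the three-node tiny cluster
```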
How is that possible? Well, vagrant-ceph, as I said, is a number of libraries, and it also has this config.yaml file where all the configuration is defined, or you can define your own configuration. It also defines what to do after the virtual machines are brought up: you can add some repositories, add some packages, upload some files, and run some commands if you need to. For example, here is the example for Tumbleweed,
but I will talk about how to start Kubernetes on that cluster.
Setting up Kubernetes is quite tricky if you go the proper way, with TLS certificates and so on. I reviewed a couple of the available Vagrant setups for Kubernetes, and it was really complicated. So what I wanted was to reuse vagrant-ceph, because it already provides the infrastructure for you, and set up Kubernetes on it. I did it, and it's really hacky; there are actually quite a lot of hacks here.
The first line here is the kubeadm init, and it uses a predefined token. It runs on the admin node, as you can see, and it runs all the commands needed to get a fully functional Kubernetes master node. It also pre-uploads the Rook sources (not the Rook package) to the admin node. Another hack here is how the other nodes join the cluster: I run the kubeadm join in a daemonized way, because it takes some time. It actually hits the timeout, but that's enough for the nodes to join the cluster; it just runs on all the nodes except the admin one. I do not recommend using this in any production environment, or anywhere outside of your local system, because it's quite a hack. But it does work, and you end up with the Kubernetes cluster I showed previously. Here it's three nodes, but you can define more nodes if you want.
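A rough sketch of the hack described above (the token, address, and log path are made up; vagrant-ceph's actual provisioning scripts are the reference):

```sh
TOKEN=abcdef.0123456789abcdef

# On the admin node: bootstrap the control plane with a predefined token
# so the workers don't have to read it back from the master.
kubeadm init --token "$TOKEN" --token-ttl 0

# On every other node: join in the background ("daemonized") so the
# provisioner doesn't block; the join may hit its timeout, but the node
# still ends up in the cluster.
nohup kubeadm join 192.168.121.10:6443 --token "$TOKEN" \
  --discovery-token-unsafe-skip-ca-verification \
  >/var/log/kubeadm-join.log 2>&1 &
```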
So we now have the Kubernetes cluster running in our development setup. What we want to do next is to start Rook, which will deploy Ceph on that cluster, and this is really easy to do. Rook ships these example YAML files where the operator and the cluster are defined. You actually execute two kubectl create commands: the first one creates the operator along with some of the security and access rights, and the other one actually creates the cluster; this cluster YAML file creates the Ceph cluster. The toolbox is a special container that helps you troubleshoot and access Ceph. And then you will have the Ceph cluster on these three nodes, set up by the operator.
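With Rook's example manifests checked out, the whole deployment is essentially this (file names follow Rook's upstream Ceph examples; they may differ between versions):

```sh
kubectl create -f operator.yaml   # Rook operator plus CRDs and RBAC
kubectl create -f cluster.yaml    # the CephCluster the operator acts on
kubectl create -f toolbox.yaml    # optional troubleshooting toolbox pod
```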
I will try to show you that live, I think. Here I have already run vagrant up on my setup. That brings up the virtual machines, as you can see, attaches some disks to them, and starts running the provisioner, which provisions those machines — that is the Kubernetes setup on the admin node. After that, you are logged in to your machine, and you can see that the three-node Kubernetes cluster is ready for you. So what we are going to do next is execute the procedures I talked about.
So first of all we will create the operator. And here it is, it got created; let's check it out. We can see that some CRDs were created, and the important one is cephcluster — that is what the operator actually acts on. And right now we can see that there are some pods started in the rook-ceph namespace. At the moment it's just the Rook operator; later the toolbox will also be created here.
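What that check looks like on the command line (namespace and names as in Rook's examples):

```sh
kubectl get crd | grep ceph        # e.g. cephclusters.ceph.rook.io
kubectl -n rook-ceph get pods      # at first only the rook-ceph-operator pod
```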
And we can go ahead and create the cluster now. Okay. Now we can watch how the operator creates the cluster. It will take a moment, so I'll switch to the presentation. You can already see how it starts to create the Ceph agents; this is the service container that detects the version of Ceph, and then you will get the Ceph cluster itself.
Okay, so as I said, this can be configured to anything; by default it is the tiny cluster. It also uses these — not boxes, the containers — from the filesystems:ceph development project; they are built there, and you can find them and use them as well. We have our own fork that is adapted to Kubic as well as CaaSP; it is located on GitHub under SUSE. Most of our development is done upstream, and we keep this fork only for maintaining compatibility with openSUSE and SUSE; we have the SUSE master branch specifically for that.
Yeah, so it's still creating; let's give it some time. That gives you a Ceph cluster operated by Rook, and you can develop on it, test it, do anything you want in your local setup. That gives you the ability to do your development locally. Any questions so far about vagrant-ceph or Rook? Anything? Looks like not.
So, once you've got your development cluster ready... It's still creating; I think it's just downloading the images right now, which probably takes some time.
Meanwhile, I want to show you how you can adjust vagrant-ceph with parameters. First of all, you have a traditional Vagrantfile, and here you can find all the needed parameters, like which box to use: by default it is taken from the environment, or it falls back to the openSUSE one. The configuration of the cluster is also here; it is 'tiny' right now. The other configurations you can find in the config.yaml file: at the end of that file there are different types of configuration with different nodes — how much memory they require, how many disks they need, SSDs or just spinning disks. You can adjust that there, and you can provide the box in the parameters. It also provides you a script to set up the Vagrant box locally if you want.
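As an illustration, a cluster definition in the spirit of what the talk describes might look like this (the keys and the BOX variable are invented; vagrant-ceph's own config.yaml and Vagrantfile are the authoritative reference):

```sh
cat > config.yaml <<'EOF'
tiny:
  nodes: 3        # admin plus two workers
  memory: 2048    # MiB per VM
  disks: 2        # extra disks attached to each OSD node
  ssd: false      # spinning disks unless you ask for SSDs
EOF
BOX=opensuse/MicroOS.x86_64 vagrant up   # box name passed via the environment
```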
Okay. After you have your development setup and developers are enabled to do their work, what you actually need is to check their work in CI. Here I will show you how you can build your CI cluster in OpenStack to run against those containers and verify they are working fine. You might do that with Vagrant as well if you want, but we have an OpenStack cloud in our environment, so we can use that.
For OpenStack you can use different tools to spin up such clusters, like Terraform scripts, or Vagrant as well. What I found useful is Heat templates, because you do not have any other tool in the middle: you just define the Heat template, the OpenStack cloud understands it, and it is simply the job of the OpenStack cloud to carry it out. You will not have any problems with a tool in the middle, because when you have a tool in the middle, in my humble opinion, then when analyzing the logs you have to figure out whose fault an issue is, and it's really hard to dig through many logs from different tools to find where the problem lies. So I like Heat templates, and here I will show you how to spin up a cluster in OpenStack with a Heat template and run some CI on it.
The Kubernetes approach on Kubic is actually the same, and I was really impressed how easy it is to set up Kubernetes on Kubic.
Yeah, it's a couple of steps, a couple of commands, and you have your cluster. So, for Heat you need some files to define your cluster. In Vagrant that was the YAML definition plus special scripts that check what kind of node you need and spin it up with libvirt; a Heat template is a special format that does the same job, and you can find one in this repository as an example.
So, first of all there is the definition of parameters and so on. You can also define the different networks here: what kind of networks it needs, the routing, whether it needs a floating IP, the security groups, and so on. And here you actually spin up the servers, one or a couple of them if you want; that's done with the help of a server resource group. This is the definition of the number of workers that will be spun up in OpenStack, what flavor they will use, what image, what network, etc. So this template defines your whole cluster, and that's why I like it: you define it in one small file, upload it to OpenStack, and it creates everything for you.
I use a couple of environment parameters, like which flavor each node gets, because, for example, in the case of Kubic — or MicroOS now — you can use smaller images for the workers and OSDs, which do not run the kubeadm control plane, and a larger image on the master node, for example.
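A skeletal Heat template along those lines (the resource types are standard HOT; the image, flavor, and network names are placeholders):

```sh
cat > ceph-ci.yaml <<'EOF'
heat_template_version: 2016-10-14
parameters:
  workers: {type: number, default: 3}
resources:
  worker_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: workers}
      resource_def:
        type: OS::Nova::Server
        properties:
          image: openSUSE-MicroOS       # smaller image for workers/OSDs
          flavor: m1.small
          networks: [{network: private}]
EOF
```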
So you take this template and upload it to OpenStack with one command; as an output you get the IP of the master machine, and then you run the standard commands, which we also saw defined in the Vagrant example: kubeadm init on the master node, and then just kubeadm join on all the other nodes.
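Put together, the CI bring-up is roughly this (the stack name, output key, and token handling are illustrative):

```sh
openstack stack create -t ceph-ci.yaml --parameter workers=3 ceph-ci

# the template's output gives you the master's address
MASTER_IP=$(openstack stack output show ceph-ci master_ip \
  -f value -c output_value)

ssh root@"$MASTER_IP" kubeadm init --token "$TOKEN" --token-ttl 0
# ...and kubeadm join on each worker, exactly as in the Vagrant setup
```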
And then you deploy Rook in the same way. You can use your own script if you want, or other tools like Salt or whatever; you run exactly the same commands for deploying Rook, and in the end you will have the same "health OK" cluster. Here I use a different command, getting the CephCluster resource, and that can actually be used to automate your CI: checking whether the operator was created, whether the operator created the cluster, and what the health of the cluster is right now.
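One way to script that check (the jsonpath follows the CephCluster status layout; the field names can differ between Rook versions):

```sh
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.status.ceph.health}' | grep -q HEALTH_OK
```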
Yeah, and if you need to gather useful information, like logs in case of failure, you can use these commands to get all the logs from the operator or from the other containers as well. In the end, you just delete the cluster with one command, and that's pretty much it.
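Hedged examples of the log gathering and the one-command teardown (label selectors and the stack name are illustrative):

```sh
kubectl -n rook-ceph logs deploy/rook-ceph-operator         # operator logs
kubectl -n rook-ceph logs -l app=rook-ceph-osd --tail=-1    # OSD pod logs
openstack stack delete -y ceph-ci                           # delete the CI cluster
```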
So, we discussed the Vagrant setup for development and how you can set things up in CI; we have that setup working really well. There are other tools that you can use for Rook, for example the Kubic Terraform KVM setup. You can, as I said, spin up the cluster in OpenStack, as well as use libvirt locally. It's up to you and your taste; if you want to start from scratch, nobody prevents you, if you do not like vagrant-ceph, for example. Upstream also has examples for minikube and their own CoreOS Kubernetes Vagrant setup, so there are plenty of options if you want to use them; go ahead.
Unfortunately, it doesn't all work that smoothly; there are some bugs in the images currently. openSUSE knows about them and is trying to fix them. There is a bug where, if you spin up multiple nodes, some of the nodes do not start; right now that prevents me from using the MicroOS images, but the openSUSE team — Fabian — provided me a workaround, and I am using those images so far. Once that is fixed, we will use MicroOS for sure. Another bug is in OpenStack: I found that if you define a number of network interfaces, the routes get configured incorrectly, so right now there is only one network interface on every node. It doesn't matter for Rook and development, but in the future, when Rook and Ceph are able to use different interfaces, that needs to be fixed, and we need to test it as well.
And in the end, let's see what our cluster says.
So right now you see the different pods running. You can see that the OSDs are running now and the monitors are running now, and we can go ahead and execute, for example, some of the commands; I will copy-paste them from the documentation. So you see it's working fine. Right now it has one warning, I think: six OSDs are up and all the monitors are in quorum.
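The copy-pasted check from the documentation amounts to running ceph status inside the toolbox (the deployment name follows Rook's toolbox example; recent kubectl accepts the deploy/ form):

```sh
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```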
A manager module failed to load some dependency, I think; these containers are still in development mode, so that's not a problem. But anyway, here you have your cluster, operated by Rook and running locally.
How to contribute? You can contribute to Ceph, you can contribute to Rook, and you can also contribute documentation to openSUSE Ceph, as well as to our OBS project. And that's all. I don't have time for questions, right? So if you have any, catch me around, or catch up with Stefan, and ask any questions you want. Thank you.