Gluster Container Storage
Formal Metadata

Title: Gluster Container Storage
Number of Parts: 561
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/44291 (DOI)
Transcript: English (auto-generated)
00:05
Alright, we'll stay on the topic of containers, this time with Gluster, give a hand for Kaushal. Thanks, thank you everyone. Okay, so today I'll be talking about Gluster container storage.
00:22
This is storage for containers, running in containers, similar to what Rook is doing with Ceph. So this is what Gluster does for containers, right? So, I'm Kaushal, I'm a senior engineer at Red Hat and I work on the Gluster project. I'm a maintainer and I work mainly on the Gluster management frameworks.
00:40
And right now I am contributing to the container storage around Gluster. So, in today's talk I'll talk about what GCS is. We'll go into a little more detail about what GCS looks like and how it performs, as in how it does what it does.
01:01
And I'll let you know how you can try out GCS. Okay, so just a quick question first up. Has anyone already heard of Gluster in containers with something called Heketi? Right? So yeah, GCS is built on the experience that we gained with the Heketi project
01:22
and running Gluster in containers with Heketi. So I'll touch a little on Heketi and try to compare what GCS does differently from the Heketi-based stack in this talk. But yeah, so let's get started. So, Gluster Container Storage.
01:41
I'll be calling it GCS for short. Please don't get confused, because in Kubernetes there is already a GCS, which stands for Google Cloud Storage. This is not Google Cloud Storage, it's Gluster Container Storage. We'd welcome suggestions for a better name, but for now it's GCS and I'll be calling it GCS. Okay, so what is GCS?
02:03
GCS is a new effort from the Gluster community to provide persistent storage for containers, right? So we try to satisfy the two usual demands of container persistent storage, that is shared storage and exclusive storage: the RWX, read-write-many, and RWO, read-write-once, volumes.
02:24
We try to use the standard container interfaces as much as possible. So one of the projects in GCS implements the Container Storage Interface, or CSI, and using that we provide volumes when requested.
02:41
For now we are Kubernetes only, but maybe later on, if we get help, we'll move on to other container orchestrators. And GCS is being designed for a hyperconverged deployment right now. As in, your storage runs along with your apps, on the same servers that your app's pods are running on, right?
03:02
So yeah, while the storage pods run along with your app pods on the same nodes, these are not your normal app pods. These need privileges, because we do deal with underlying devices, underlying file systems and all. So we need to run in a privileged mode, right?
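As a rough illustration (not taken from the actual GCS manifests; the image name and mounts are placeholders), a privileged storage pod of this kind looks roughly like:

```yaml
# Minimal sketch of a privileged storage container spec; image and paths are
# placeholders, not the real GCS deployment artifacts.
apiVersion: v1
kind: Pod
metadata:
  name: gluster-storage-example
spec:
  containers:
    - name: glusterfs
      image: example/glusterfs:latest   # placeholder image
      securityContext:
        privileged: true                # needed to manage devices and mounts
      volumeMounts:
        - name: host-dev
          mountPath: /dev               # access to the node's block devices
  volumes:
    - name: host-dev
      hostPath:
        path: /dev
```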
03:20
Okay. So GCS is not a single project; it brings together a lot of Gluster-related projects under a single umbrella. I'll go into detail about what each of the projects does for GCS specifically. So we bring in GlusterFS, which I hope everyone already knows about, right?
03:44
So the file system. There's GlusterD2, the new management tool that we have been building. There's the CSI driver, and there's Anthill. Anthill is a Kubernetes operator that will deploy and manage Gluster on Kubernetes.
04:00
And there are other projects like gluster-prometheus, which helps monitor Gluster inside a Kubernetes deployment. So basically you can plug Gluster into your Kubernetes dashboards and drill down into Gluster volumes. Gluster Mixins provides the dashboard configurations, the alerts and rules, et cetera, right?
04:23
So now, why would you want to use GCS over what we have currently with Heketi, right? The main theme of GCS is to simplify the whole experience as much as possible. So we simplify how the containers get deployed,
04:41
how Gluster gets deployed on Kubernetes. We simplify the general overall management workflow intended for the admins or users, right? We are also trying to simplify the GlusterFS stack itself, the file system stack,
05:02
and ensure that it works well for the container ecosystem. So what we do is make opinionated defaults, right? Right now the normal GlusterFS installation doesn't make any opinionated defaults; it just leaves everything open to the user.
05:21
Everything is enabled, everything is available. GCS changes that: we choose the features that need to be enabled, we choose the options that need to be defaulted, and that's how GCS goes. And all of this is going to be automated, so hopefully the most an admin needs to do is to write a manifest for Anthill,
05:46
deploy Anthill, write a cluster manifest on Kubernetes, and done. Anthill takes care of deploying everything that is required: all the projects that I mentioned earlier, Anthill deploys and manages all of them. In the future it will also do day-two operations like replacing a node,
06:03
upgrades, and other stuff. And all of this together allows us to improve our scale in the number of volumes at least. So GlusterFS originally wasn't designed for the container sort of environment. So GlusterFS was designed as a NAS replacement,
06:21
wherein you have one single large volume or a few single large volumes. But in containers you generally deal with lots of very small volumes. So GlusterFS isn't particularly designed for that, but we are doing optimizations, we are trying to figure out ways so that we can scale better.
06:44
Our goal, for a three-node cluster, which I believe is the minimum that OpenShift requires, is that we can provide 2,000 read-write-many volumes and up to 5,000 read-write-once volumes.
07:00
These are like small volumes, not really huge ones, so something that can run in a three-node cluster. So this is what GCS is going to provide. That is going to be different from the original stack. I'm not going to talk too much about the current stack, as I mentioned. We have a Heketi-based stack, and it looks somewhat like this.
07:23
So we have a component within Kubernetes itself, so we are pretty much tied into the Kubernetes release cycle, it's not so independent. There's a Gluster provisioner within Kubernetes. We have a separate Heketi pod. Heketi manages the Gluster pods.
07:42
And one big problem with the Heketi-based deployment is that we have a Heketi DB on one side and the Gluster configuration on the other. So there are two separate views of the Gluster pool: Heketi has its own view, Gluster has its own view. And the problems that we see there are mainly about these two going out of sync.
08:01
And recovery from that is hard. A lot of work has been done on this, and right now a lot of it has been solved, but there is still a problem with it. And in general, this stack relies on the normal Gluster deployment,
08:21
the normal Gluster installation, as in without the opinionated defaults that the new GCS stack is going to provide. There are details available below in the slide; the link is available in the schedule if you want to look at it. So looking at it, it looks relatively simple,
08:42
but deploying and managing this stack is challenging. It's not easy. So let's get into GCS itself. GCS looks like this right now. There are a few components. We have Kubernetes; we don't need to worry about that.
09:01
So we have Anthill. Anthill is a Kubernetes operator, as I mentioned, that is going to deploy and manage the rest of these things. In addition to Anthill, we have the CSI driver. That is going to be the point of interaction with Kubernetes for us. So Kubernetes talks to the Gluster CSI driver, which will then talk to the Gluster pods.
09:21
In the Gluster pods, we have the new management tool running, GlusterD2. We have GlusterFS. We have the Prometheus exporter inside. And we also have an etcd cluster; that is our single source of truth right now. So the new management tool, GlusterD2, stores the cluster view in etcd.
09:42
So GCS runs its own etcd cluster and doesn't make use of the Kubernetes etcd cluster. So let's go into Anthill first. Anthill is a Kubernetes operator.
10:03
It defines custom GCS resource definitions, CRDs. Kubernetes provides a base for other projects to define their own resource types, their own object types. Anthill defines the GCS resource types in Kubernetes
10:24
and it automates deployment, upgrades, and day-to-day management of GCS. At its current state, Anthill is very early; other than the CRDs, there isn't actually anything present yet.
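Just to illustrate the idea of such a custom resource (the actual schema is still being defined by Anthill, so every field below is hypothetical), a GCS cluster manifest might eventually look something like:

```yaml
# Hypothetical sketch only; Anthill's real CRD group, kind, and fields may differ.
apiVersion: operator.gluster.org/v1alpha1   # placeholder group/version
kind: GlusterCluster                        # placeholder kind
metadata:
  name: gcs
spec:
  nodes: 3                 # desired number of storage pods
  drives:
    - /dev/vdb             # raw devices handed over to GlusterD2
```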
10:40
But we are working on it, and things will come out soon. Anthill is developed under the Gluster project on GitHub, at github.com/gluster/anthill. Next up, we have the CSI driver.
11:02
As I mentioned, the CSI driver translates Kubernetes CSI requests into GlusterD2 REST requests. That's the main goal of the driver, right? So in addition to doing the translation of the requests
11:20
to create and delete volumes, it also takes care of doing the mounts. Unlike with Heketi, right? There, the Gluster provisioner within Kubernetes would take care of reaching out to Gluster or to Heketi, and that provisioner would take care of doing the mounts.
11:42
But in this case, we do everything via the CSI driver, so it gives us more freedom to do things the way we want. The CSI driver consists of three different types of pods: there's a node plugin, there's a provisioner, and there's an attacher, right? I'll go through what each of them does.
12:03
The attacher, for us, at least in the GCS world, is a no-op; it's useless for us, but we just need to provide it because the Kubernetes CSI spec expects it. There is work going on to allow CSI drivers without attachers.
12:24
So not all CSI drivers actually require attachers to work. Then there's the provisioner pod; this is the main pod for us right now. This is the one pod that listens for Kubernetes persistent volume requests
12:42
or persistent volume claims, and forwards them to Gluster; that's basically it. It listens for the create and delete volume requests and forwards them to Gluster, translating them into GD2 requests.
13:02
So in this pod we have four different things running. There is a Kubernetes provisioner sidecar; a few of these are provided by Kubernetes themselves, they are not developed by us. The Kubernetes provisioner's job is to listen for Kubernetes persistent volume object requests,
13:23
and then the provisioner talks via the CSI spec to the Gluster CSI driver, which then relays that forward to the Gluster layer. So there is a provisioner; we have a snapshotter sidecar, which
13:41
handles Kubernetes requests for volume snapshots, and there is a cluster driver registrar which registers GlusterFS CSI as a cluster-level driver, right? So it makes the Gluster storage available as a storage class, right? That's what the provisioner pod does.
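To make that concrete, here is a minimal sketch of what requesting a volume through such a storage class could look like; the storage class name and provisioner name are illustrative, not necessarily the ones GCS ships:

```yaml
# Illustrative only: the real StorageClass and CSI driver names come from the
# GCS deployment, not from this sketch.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi              # placeholder name
provisioner: org.gluster.glusterfs # placeholder CSI driver name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]   # RWX volume backed by a Gluster volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-csi
```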
14:01
The provisioner is deployed, if I'm right, as a StatefulSet so that it's always available and running. And then we have the node plugin. The node plugin basically runs on every Kubernetes node that you have, right?
14:25
Its main function is to mount the volumes when requested, or attach the volumes to the app pods when requested. That's all it does. So there is a driver registrar again, which makes the Gluster driver available on that node for mounting.
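When an app pod references a PVC like the one sketched above, Kubernetes asks the node plugin on that node to perform the mount. A minimal, purely illustrative consumer pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-using-gluster
spec:
  containers:
    - name: app
      image: busybox              # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data        # mounted by the GlusterFS CSI node plugin
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data    # PVC from the earlier sketch
```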
14:41
So the node plugin pod has two containers. The driver registrar makes sure that the driver is registered, so that Kubernetes knows to reach out to the driver to perform mounts, right? The node plugin pods also contain the GlusterFS client bits, so you don't need to have the GlusterFS client bits
15:00
installed on the Kubernetes host. With the previous deployment with Heketi, you needed the client bits installed on your Kubernetes host, so that was one more issue with that. So now let's go into the Gluster pod itself, right?
15:21
So the Gluster pod has GlusterD2, GlusterFS itself, and the GlusterFS exporter. First I'll talk about GlusterD2. GlusterD2 is the newer management framework for GlusterFS. It's intended to provide automated management operations for GlusterFS, as in automatically creating volumes based on requests from the user, right?
15:45
So the user can request a volume of a particular size with particular capabilities, and GD2 will automatically decide upon the layout of the volume as the user requested and create the volume, right?
16:01
In addition to that, it also tries to provide intelligent or automated volume management operations, say, scaling a volume or shrinking a volume, replacing nodes, replacing bricks, stuff like that, right? So basically what it is doing is converging the functionality
16:22
of different tools that earlier sat outside the cluster, right? So we had Heketi running outside the Gluster pool. There was a project called gluster-block, which is used to provide the RWO volumes, or block volumes, right? These were running outside the Gluster pool, right?
16:41
They were not directly managed by the Gluster layer. All of this is converged into one tool right now, GlusterD2. GlusterD2 does the work of Heketi, right, automated volume management, and it also provides block volumes, in a different sort of way compared to gluster-block; we'll see how it does that in a little while.
17:05
Okay. So GD2 provides a REST API like Heketi, right, against which you can program. This allowed us to write our CSI driver: we have an API that can be reached over the network to do operations, and the CSI driver works against that.
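As a hypothetical sketch of such a call (the endpoint and field names below are illustrative, not the authoritative GD2 API), a volume-create request from the CSI driver would carry roughly this information:

```yaml
# Illustrative request body sent to something like POST /v1/volumes on GlusterD2;
# the field names are placeholders.
name: pvc-demo-volume
size: 1073741824      # requested capacity in bytes
replica: 3            # redundancy; GD2 picks the bricks and layout itself
```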
17:24
GD2 also has a very good orchestration engine built in, right? It's more resilient to failures than the orchestration engine in the current Gluster management layer. So we can do undos, we can do rollbacks if some operation failed.
17:44
We can do an operation and say a node comes back up later, it can rerun the operation if required, stuff like that. So it brings in features to do more resilient orchestration of operations. So basically the management framework, what it does at the end is that it needs to go to each node and prepare the brick,
18:05
start processes, and do stuff like that. So GD2 brings in a much better framework to do that. In addition, as I mentioned earlier, there is just a single source of truth for everything. There is no split in cluster information between Heketi and Gluster.
18:22
All the information is with GlusterD2. So GlusterD2 knows all the volumes we have, how the volumes are laid out, which devices we have access to, how many devices we have used, everything, right? So there is less possibility of failures because of Heketi and Gluster going out of sync, right?
18:51
In addition, GD2 also brings in support for Prometheus and OpenCensus, so that you can trace or monitor GD2 with the standard tools that are used elsewhere in the container ecosystem, right?
19:07
So Prometheus is widely used for monitoring within Kubernetes. OpenCensus implements the OpenTracing API. So again, we provide OpenTracing API endpoints so you can trace operations across the cluster if required, right?
19:23
Yep. So as I mentioned, GD2 provides normal volumes as well as block volumes. The normal volumes are used to satisfy the Kubernetes RWX requests. The way GlusterD2 helps there is that, for normal volumes, there is automatic provisioning of bricks.
19:47
So we go ahead and create the LVM volumes, we create the thin pools, we create the file system on top of that, we do the brick mounts. GD2 takes care of doing all of that. And in addition, one special thing that we brought in was support for volume templates, right?
20:04
So the normal Gluster installation would contain a volume template with everything, right? All the features that we provide. But for the GCS use case, for the container use case, you wouldn't need all of them. If you had all of the features enabled in the Gluster graph, it would be heavy.
20:20
It would be resource heavy, right? So you don't need that. We can provide a custom, simple template for RWX volumes for containers. In addition, we also provide RWO volumes, as in we also provide block devices. Block devices are meant for single-user, single-client access.
20:43
This is still a work in progress. Bits and pieces have been merged in; it's not complete yet. What this does differently from the gluster-block project is that it does block exports in a very simple way, using loopback devices, right?
21:04
So what it basically does is: you have a GlusterFS mount on your Kubernetes node, and on that mount we create an image file, and then do a loopback mount so that it's available as a block device, right?
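A rough sketch of that idea, expressed as Ansible-style tasks (the paths, size, and names are placeholders, not GCS code):

```yaml
- name: create a backing file on an existing GlusterFS mount
  command: truncate -s 1G /mnt/gluster-rwx/block-vol-1.img
- name: expose the file as a block device via loopback
  command: losetup --find --show /mnt/gluster-rwx/block-vol-1.img
  register: loop_device   # e.g. /dev/loop0, usable as an RWO block device
```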
21:23
The gluster-block project did its exports via iSCSI, and there was a lot of processing involved in getting that set up. So that's more complex, and scaling was problematic there. This one allows us to scale much more easily.
21:41
So let's see the current status. As of this moment, and again all of this is on a three-node cluster, right, we can scale to about a thousand RWX volumes and hundreds of requests,
22:01
as in hundreds of volume creation or deletion requests happening in parallel, without it becoming a problem. At this scale, we have begun hitting the limits of LVM scaling itself, right? LVM, again, like Gluster previously, wasn't designed for this sort of thing. Gluster uses LVM thin pools to provide snapshot capabilities.
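To connect this back to the automatic brick provisioning mentioned earlier, here is a rough sketch of the kind of LVM work GD2 automates per brick; the VG/LV names and sizes are placeholders, not GD2's actual naming scheme:

```yaml
- name: create a thin pool on the volume group backing the device
  command: lvcreate --size 10G --thinpool tp_brick1 vg_gluster
- name: carve a thin LV for the brick (snapshot-capable thanks to the pool)
  command: lvcreate --virtualsize 1G --thin --name brick1 vg_gluster/tp_brick1
- name: put a filesystem on the LV, which then gets mounted as the brick
  command: mkfs.xfs /dev/vg_gluster/brick1
```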
22:24
So we use LVM, we need LVM. But right now we are running into LVM scale issues. We are trying to figure out optimisations in the way we perform LVM operations that can help scale a bit further, and we are also looking at other options, if possible, for RWO volumes.
22:48
Yep. In addition to the LVM scaling, we also sort of hit etcd scaling when all of these requests come together. Again, we are tuning the etcd config itself, how we deploy etcd, and we are tuning how we use etcd.
23:03
As in how we store data in etcd, how we access etcd, right? So we are doing that. And GlusterD2 is also developed as a separate project under the Gluster community, at github.com/gluster/glusterd2.
23:22
So now let's look at what we have been doing in GlusterFS itself to help with GCS, right? So the main focus has been optimising resource usage. So this has been in progress from the last few releases. So Gluster 5, one of the main topics was optimising resource usage.
23:41
In Gluster 6, the upcoming release, there is again a lot of focus on optimising resource usage. Basically we are trying to reduce the memory that we use, so we are fixing a lot of the memory leaks that we have, and we are also trying to optimise how we use threads and how we deploy them.
24:01
So we are optimising resource usage. And again, I don't know if you have heard of brick multiplexing before. Multiplexing is a feature in Gluster where we run multiple bricks in a single process, right? Before GCS, it wasn't such a priority; it was okay, we wouldn't have thousands of volumes running.
24:22
With GCS, with containers, we can't have thousands of processes running inside the pod, right? So we need to consolidate that, and we have multiplexing already for that. Multiplexing works for bricks, but there isn't a similar mechanism for self-healing. The volumes that we provide are redundant, as in Gluster replica volumes.
24:44
So they need to have self-heal daemons running. Self-heal daemons right now are individual per volume, so there would be as many self-heal daemons as volumes that we have. So we need to multiplex the self-heal daemons as well. That's a work in progress, okay?
25:02
And there are other features like fencing. What is fencing? Fencing is a feature where we can ensure that only one client can access a file at a given time. This is required for block volumes, so that we don't have stale clients; say a pod was rescheduled on some other node.
25:23
We don't want it to begin accessing the image file before some cleanup has happened, or some release of locks has happened, things like that. Again, GlusterFS is developed at github.com/gluster/glusterfs, right?
25:43
Okay. Next, monitoring. Monitoring is developed under two projects, again under the Gluster community: there's gluster-prometheus and there's Gluster Mixins, right? The gluster-prometheus project provides us with the GlusterFS exporter daemon.
26:00
This runs along with GD2 and exports the available Gluster metrics, right? It's not just the cluster-level metrics that GD2 could provide; it's per-volume metrics, per-brick metrics, process metrics, as in how much CPU is being used, how much disk space is being used, and all of that.
26:22
So the gluster-prometheus project provides this. And there's the Gluster Mixins project, which provides rules and alerts, right, for Prometheus and for Grafana, so that you can build dashboards out of it. Right now the Mixins project provides a dashboard configuration for a GlusterFS volume.
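For context, scraping such an exporter from Prometheus usually looks roughly like the following; the job name, label selector, and discovery role are illustrative and not the configuration that GCS or the mixins ship:

```yaml
scrape_configs:
  - job_name: gluster-exporter        # placeholder job name
    kubernetes_sd_configs:
      - role: pod                     # discover pods in the cluster
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: glusterfs              # keep only the Gluster pods (label assumed)
        action: keep
```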
26:40
There are others being built. So I'm going to skip this. I was just going to talk about how the flow of operations was going to happen for volume creation. But in simple terms, it's just straightforward, right? So it's Kubernetes to CSI driver, CSI driver to Gluster, again the same flow.
27:00
Create a volume, mount a volume. It's one flow, a single flow. It's the same for both sorts of volumes because everything is in one place. There's no split; there's no Heketi, so the request doesn't need to go to Heketi and then to Gluster, right? Okay. So, if you want to try out GCS, just keep in mind that it's still under very heavy development.
27:24
We are planning on doing a 1.0 release sometime soon; again, it's not fully featured, but at least you can make use of it and just see how it works, right? So, at the moment we have an Ansible-based deployment tool written.
27:44
It's not really a deployment tool; it's just a playbook that deploys GCS, right? As Anthill isn't complete yet, we are depending on Ansible right now and this playbook, deploy-gcs.yaml. This performs the full deployment right now, but as Anthill grows and gains more features,
28:03
we'll be cutting down what the playbook does. In the end, the hope is that the playbook would only deploy Anthill and we are done with it, right? For more information on how to use the playbook to deploy GCS on your Kubernetes cluster, you can read the README.
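The playbook is driven by an Ansible inventory, which the README describes. As a purely illustrative example (the group names and host names here are placeholders; the README defines the real layout the playbook expects), a YAML inventory might look like:

```yaml
all:
  children:
    kube-master:        # placeholder group name
      hosts:
        node1:
    gcs-node:           # placeholder group name for the storage nodes
      hosts:
        node1:
        node2:
        node3:
```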
28:20
So the README gives you information on how to prepare the Ansible inventory files and how to run the playbook so that you can deploy GCS, all right? But if you want to just do a quick test on your local laptop, we have a Vagrantfile available which does this.
28:40
It's vagrant-libvirt only for now, because we do some additional things, like adding additional disks, that we haven't yet figured out how to do for Vagrant with VirtualBox. Contributions are welcome to help with that. This Vagrant-based tool will deploy Kubernetes and GCS for you on a three-node cluster, right?
29:04
So the Kubernetes deployment is handled using Kubespray. We have a playbook that eventually calls Kubespray, but this playbook has some configurations that we need to enable for GCS to run, right? We need to enable certain feature gates to allow the CSI driver to work.
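For illustration, with Kubespray this kind of setting is usually passed through its kube_feature_gates variable; the exact gate names depend on the Kubernetes release in use, so treat these as examples rather than the definitive list:

```yaml
# group_vars sketch; gates like these existed around Kubernetes 1.12/1.13,
# but the GCS playbooks define the authoritative set.
kube_feature_gates:
  - "CSIDriverRegistry=true"
  - "CSINodeInfo=true"
  - "VolumeSnapshotDataSource=true"
```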
29:24
CSI is still a work in progress in the Kubernetes community, so not all the features are enabled out of the box, and we need to specially enable some features for the CSI drivers. Again, more information on how to use this is provided in the README, right? This deployment tooling is available under the GCS project,
29:44
at github.com/gluster/gcs. Under this, we have a deploy directory where all of this is contained, and that's basically it. Again, as I mentioned, GCS is being developed under github.com/gluster/gcs,
30:04
so under this project we mainly have the overall issues tracking the full GCS picture, and there is a waffle board which tracks the GCS-related issues across the multiple different projects that I just described.
30:22
So that's it. Please try out GCS and give us your feedback, and I'm open for questions now. Okay.
30:42
So, do we support extending volumes? GD2 itself supports it. The CSI driver doesn't yet. I don't know if the Kubernetes CSI spec supports it yet, so if that support comes in, we can build it. The Gluster management layer already has support for automatic expansion,
31:01
so we just need to do an extra call when that is available. And in case of a three-node deployment, is it possible that, so I suppose all three nodes should have equal storage, equal hardware, yeah? It would be preferable? Preferable, yeah. It would be preferable, but, okay, sorry.
31:21
The question was, in a three-node cluster: for example, I have one server with big storage, and I provide only one replica for that, and for some other volumes I ask to have, for example, three replicas.
31:45
Is it possible? Okay, so I'll be repeating the question. The first part was, is it possible to have non-homogeneous hosts? We'd prefer if you have homogeneous hosts, but GD2 should be able to work with non-homogeneous hosts.
32:00
The second part was whether you are able to choose the redundancy count, the replica count. Right now, we aren't targeting that, because we want to simplify the number of options that users have, so we can build confidence in what we are doing first. That might come in later on, right? So right now, the CSI driver is hard-coded to create three-replica volumes.
32:24
It's hard-coded. Those three-replica volumes can be created on a heterogeneous cluster, but the thing is, since it always expects three replicas to be present, if we can't provision a brick on each of the three nodes, the request will fail.
32:41
So that's going to happen. Any further questions? Yes? Not really stable.
33:00
Okay, the question is, how stable is GCS right now? GCS isn't really stable. We haven't reached something that we would ask users to deploy in production yet. So the upgrade stories and all should be pretty simple, but we haven't thought about that yet. We have been just working on getting what we want to deliver in the first release fixed.
33:23
So the upgrades for GCS would be simple enough, because it could be just deploying the new version of the Gluster pods. All our configuration right now stays in etcd. etcd itself has its own upgrade mechanisms already defined for Kubernetes, so that's taken care of.
33:41
The GCS pods themselves always rely on the information in etcd. So once we reach that stage, yes, it should be simple enough to do upgrades. Yep, we need to do so.
34:05
Okay, the question is, how do we intend to do the upgrades, if pods are mounting volumes and whatnot. As I understand, there are some sort of gating capabilities in the Kubernetes operator world. I don't really understand the operator part of it well.
34:22
So the operator would take care of doing that. What the operator would do is bring down one particular pod, then get a new version of the pod up and running, and do whatever mounts are required inside that.
34:40
Until that pod is up and running, it would block the upgrades of the rest of the pods, stuff like that. It's not exactly clear at the moment, but I believe there is some way to get operators to allow this to be handled in a smooth way.
35:00
Any further questions? No? Thank you, everyone. Please provide feedback on fosdem.org.