Rook: Cloud Native Storage for Kubernetes
Formal Metadata
Title: Rook: Cloud Native Storage for Kubernetes
Number of Parts: 490
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/46899 (DOI)
Language: English
Transcript: English (auto-generated)
00:06
All right, then welcome to our first Rook talk of the day. Please welcome with me, Alex, on Rook intro and what's new in Rook. Thank you.
00:21
Well, first of all, the technology works as we want it to. So, I'm Alex Henner. I'm a DevOps engineer at Cloudical, one of the Rook maintainers and, well, a certified Kubernetes administrator. So before I jump into the topics,
00:40
a quick look at the rough outline of what I'm going to talk about. First part: Kubernetes and storage, just a quick point on how it's looking in Kubernetes, what the situation is. Then: what is Rook? So you, well, get a bit of an overview of what Rook even is.
01:02
Then the architecture, and then we're going to dive into the new features and also the roadmap for the future. And just as a, well, last point to throw in, we are looking to graduate the project soon, so I'm going to talk about that in a bit as well.
01:21
So Kubernetes and storage. In Kubernetes, if you want storage, you're, well, basically bound to use external storage. Meaning, well, taking on-premise with bare-metal as an example, you need to have some sort of storage already existing.
01:41
Not to name any names, but, well, there's bigger appliances for storage you might have. Well, if you're just on-premise, that might be already enough for you. But in general, with the storage there, it's not too portable. Meaning that if you go from on-premise
02:02
and you're on the multi-cloud, hybrid-cloud kind of trip, well, your appliance in the data center might not be sufficient for, well, your AWS or whatever cloud scenario.
02:22
So, well, as we have it, if your appliance is in the basement and not in the AWS cloud or GCP, wherever, well, you need to have access to the services. That's not so much of a challenge if you're just on-premise, but, well,
02:41
but it's, in general, just... it's not too portable unless you are in the same environment all the time. If you're more or less in a cloud environment, the problem that arises is, well, you're kind of locked in, because if you're with AWS,
03:01
well, you can use their EBS and their, well, their file system and, well, S3 and so on. At least S3 is a bit more open, if you will, so that's not too much of a problem, but normal, well, classic storage like block and file system storage is kind of a burden to have
03:21
when switching between environments. Even though we have, in Kubernetes, storage classes and such, which are, well, a bit of an abstraction there, it's still a bit of a challenge, because if you have a bug on, let's say, your local appliance storage
03:43
and you move it to AWS and you don't have the bug there, well, there are certain scenarios where it simply matters a bit more to have the same storage from your on-premise setup, or whatever it is, all the way to production. And another point: well, if you're a big company,
04:02
you probably don't have this as too much of a problem, but you need to have someone who manages the storage. So if you just put some appliance in and, well, it runs, then you might not need to manage it too much, but still, it at least feels a bit weird to just put in some appliance and, yeah, it runs,
04:21
and it runs till it doesn't run anymore, yeah, well. So the question there is: who's managing the storage? That's where we come to: what is Rook? Rook is, well, a storage operator.
04:41
The operator part (I'm going to go into that concept a bit further later on) is simply what the name implies: I'm, for example, an operator who, well, operates some crane or something. It's kind of the same if you think about Rook there,
05:01
it's, well, operating things like Ceph, EdgeFS and other storage systems, storage software, backends, providers, however you want to name it. And the point with Rook there is: you have Kubernetes, fine, put Rook on it, and you have Ceph, for example, or EdgeFS. So it's generally not just about Ceph there,
05:23
it's the general point of taking those, well, complex storage systems and making them easy to run in your Kubernetes cluster, so, well, in containers in the end. And the part where Rook comes in with the operations is:
05:43
it's trying to keep it as abstract as possible, but also with a good amount of customizability for the users, where Rook takes care of deployment, management, installation, configuration, all those hundred steps, and, well, in the end, you create one or two more objects,
06:05
to talk about the Kubernetes part there, and bam, a Ceph cluster in your Kubernetes cluster, ready to be used by the applications. Well, if you want to bring that to production there's a bit more to think about, like what kind of disks and so on, but, well, if you think about it more on a lower level,
06:23
it still remains that Rook will take care of those points as well as possible for the storage software. But, well, let's step back a bit there. Rook is open source, Apache 2.0 licensed. We are a project of the CNCF.
06:42
Currently, and that's also coming later on, as I said, we are looking to graduate the project, as other projects like Kubernetes and Prometheus have already done, graduating in the CNCF, so we're trying to do that as well.
07:00
As we had with the operator part, to make it easily possible in Kubernetes we can introduce so-called custom resource definitions. Those are basically, well, a possibility for users, especially more the, well, administrators of the cluster, to extend the Kubernetes API.
07:22
So, well, for Rook, looking at Ceph there, we would, for example, have a CephCluster object. For other purposes, like MySQL, for example, you might want to have a MySQL database object or something, which, especially looking at,
07:41
well, the user experience for the developers: if they need a database, well, go ahead, create your MySQL database object, and some operator in the end will take care of creating all the things needed for the database. It's the same concept for Rook. The admin, and hopefully not just any user on the cluster,
08:02
creates a CephCluster object, and based on the specifications in that object, Rook will take care of, well, for example for Ceph, creating OSDs from the disks in the servers; it will discover them and, well, create the commands, structures and everything needed
08:23
to then, for example, tell ceph-volume to use this and that disk based on those specifications. Yeah. I already mentioned that it's not just about Ceph. It's also about other storage which, well, is, yeah, complicated to run.
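To make that concrete, a minimal CephCluster object from roughly that era looks something like the following sketch; exact field names and the Ceph image tag depend on your Rook release, so treat it as illustrative rather than definitive:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: ceph/ceph:v14.2.8        # a Nautilus-era image; pick the tag for your setup
      dataDirHostPath: /var/lib/rook    # where mon and config data lives on the hosts
      mon:
        count: 3                        # three monitors for quorum
      storage:
        useAllNodes: true
        useAllDevices: false
        deviceFilter: "^sd[b-c]"        # only turn sdb/sdc into OSDs, leave the OS disk alone

The operator watches objects of this kind and runs the discovery and OSD preparation steps described above.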
08:43
That includes Ceph, CockroachDB, EdgeFS, Cassandra, NFS, YugabyteDB. Shout out to the YugabyteDB people, they just joined with version 1.1. Great people. And yeah, the idea is to grow that even further.
09:03
So it's not just, like, Ceph or something. The idea, in a broader sense, is: well, I'm on-premise, for example, I create my Ceph cluster using the Rook Ceph operator, and, oh, well, I need a YugabyteDB. And I create it and just, for example, tell it, using the normal Kubernetes flow,
09:22
to claim (persistent volume claims is the keyword there) the storage, for example, from the Ceph cluster, because why not? So, yeah. Going to the architecture: there are, in itself, three layers. You have the operator layer. It's, well, managing the Ceph cluster.
09:41
It's configuring it. But the main point to mention: it's not on the data path. So there's no "well, my operator is down right now, so, oh no, everything's halting, I/O is stopped"; that's not gonna happen. The storage provisioning, thankfully, goes through CSI, the Container Storage Interface,
10:03
which is, well, amazing. There was something previously (it's not too important, it's FlexVolume), but the point is, with CSI, it's more of a common solution
10:21
for general storage. It has Container Storage Interface as a name, but it's a general interface to more or less be able to say, hey, give me 50 gigs of storage, and then on a node to say, hey, please mount the storage, for example. And that's great, as previously we had written our own FlexVolume driver, and, well,
10:41
it's a pain to maintain such a driver. But that's not just for the Ceph part. It's also for EdgeFS, for example, as they're also providing block storage and file system storage. Rook takes care of setting up the CSI driver as well as it can. There are definitely some points where,
11:01
well, the admin needs to step in to create storage classes, to maybe, well, have certain changes or certain security levels specified in a storage class or something. But the main point is that Rook will normally get you, for example, a Ceph or EdgeFS cluster into a state where you just need to create a storage class.
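As a hedged example of that last step (names like "replicapool" and "rook-ceph-block" are just illustrative, and the full examples in the Rook docs add a few more CSI secret and image parameters), the admin-created pieces look roughly like this:

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      replicated:
        size: 3                           # keep three copies of the data
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block
    provisioner: rook-ceph.rbd.csi.ceph.com   # Ceph CSI RBD driver, prefixed with the operator namespace
    parameters:
      clusterID: rook-ceph
      pool: replicapool
      csi.storage.k8s.io/fstype: ext4
      # the Rook docs also set image format/features and the provisioner/node secret parameters
    reclaimPolicy: Delete

An application (or another Rook backend, like the YugabyteDB case mentioned earlier) then claims storage the normal Kubernetes way:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-data                   # hypothetical claim name
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 10Gi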
11:21
So, yeah. As previously said: the operator is not on the data path. So no need to worry about the operator crashing or anything, or being incompatible. It's just Ceph, or EdgeFS, or Cassandra, and so on underneath. There's, well, yeah.
11:41
Yeah. I mentioned previously, like, the Rook Ceph operator, and there's also the Rook EdgeFS operator, and so on. The point there is that these are separate operators. It is not like, yeah, I want to use Ceph, but because I installed Rook, I get the Rook Ceph operator, the EdgeFS
12:01
operator, and so on. That's not how it is; that's the reason why it's split like that. If you want Ceph, well, go for the Ceph operator. If you want, let's say, Cassandra as an example, you use the Cassandra operator. So you have the freedom there to say what you need and what you want. So, yeah, well, as said, we have the Rook operators.
12:23
They do the management, configuration, upgrades and such of the software. As previously said with the operator concept, it's basically the same idea: the operator is simply, well, automating certain actions
12:41
which normally a human would need to do, or, well, maybe even another automation tool like Ansible or something. So, yeah. Coming back to the custom object: if you create a Ceph cluster and say, hey, I want to use, for example, well, my sdb drive or something, or maybe,
13:02
you add a new disk in that definition and say, well, use my sdc disk as well. Your Rook operator will take action based on that; it will see, okay, the state changed, and act on that. It will trigger all the things needed, all the preparation jobs, for example, for the OSDs,
13:21
and you will hopefully have it in a few, well, let's say a minute. Sometimes LVM and so on takes a bit of time, at least for Ceph; there's ceph-volume in there. But yeah, so in the end, the operator is watching over everything, seeing "OSDs changed, I need to do something", and even doing health checking to a certain point,
13:42
where it's like, oh, we can't upgrade now because the status of the Ceph cluster right now says we have, well, OSDs down or something like that, too many of them or so. So there are parts where the operator tries to also throw in a good amount of knowledge from the people that run Ceph clusters, or, well, more or less also the developers
14:03
of Ceph, where, for example, there's, what is it called, a safe-to-stop, I think, and a safe-to-destroy command and such. And there are parts where Rook also tries, and more and more is also moving there, to use the Ceph-native parts to,
14:22
well, have the cluster operated as Ceph-natively as possible. So, from the Rook side, the operator part again simply uses the Kubernetes API. There's not too much magic involved. It's just the normal power of Kubernetes.
14:45
Nothing too special there to say. As said, it's managing upgrades and such, and it's trying to do them for, like, Ceph and so on as, well, carefully as stateful software wants, if you will.
15:00
Coming back to the custom resource definitions. So we have those custom resource definitions, and in those, the admin specifies what they want. They can specify one device after another. They can even, if they're, well, in an AWS environment, in a cloud environment in general, specify that storage should be taken
15:22
from, like, the cloud provider and such through, again, standard Kubernetes methods, persistent volume claims and such, well, just the normal way of doing it. And the object is basically, in the end, as Kubernetes wants it, the state as it's desired.
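For that cloud-provider case, a sketch of how it looks in the CephCluster spec since around Rook 1.1; the storage class name "gp2" is just an AWS-style example and field names may shift between releases:

    spec:
      storage:
        storageClassDeviceSets:
          - name: set1
            count: 3                      # three PVC-backed OSDs
            portable: true                # an OSD can follow its volume to another node
            volumeClaimTemplates:
              - metadata:
                  name: data
                spec:
                  storageClassName: gp2   # e.g. an EBS-backed storage class
                  volumeMode: Block
                  accessModes: ["ReadWriteOnce"]
                  resources:
                    requests:
                      storage: 100Gi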
15:40
So the Rook operator will try as long as possible to use a disk even if it doesn't exist; well, there are mechanisms in place to hopefully prevent that, but just from a Kubernetes standpoint, if the object says, hey, use this disk, it would technically, well, try forever,
16:03
because the user's desired state is to use this disk. So, moving on to the new features: YugabyteDB joined in 1.1. We're already at Rook release 1.2, but well, yay!
16:21
A new storage backend. Same goes here: we are a perfect example right now for the CRD part, the custom resources. We can define our own objects or even, well, our own APIs. And the people that create those CRDs can basically put in whatever they want and even have, which is pretty powerful about Kubernetes,
16:43
validation and such in place. EdgeFS, EdgeFS has gained, thanks to Jirwan Isa Muz, I think I'm butchering the name, Mustafa, who implemented it, multi-homed networking.
17:01
Mustafa has implemented multi-home network. That's a big challenge a bit still in Kubernetes. I can tell you, at least just from general concept there for the, well, for the, for projects such as Multus, if anyone has maybe heard about Multus yet,
17:21
the idea is simply to have Kubernetes-native custom resources and such to easily be able to have multiple interfaces from the node, but even virtual, well, overlay networks and such, per application, per pod running in your Kubernetes cluster.
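On the Rook side this surfaces as a network section in the cluster object; roughly like the following sketch, where the selector keys and the referenced NetworkAttachmentDefinition names are assumptions and differ between backends and Rook releases:

    spec:
      network:
        provider: multus
        selectors:
          # names of Multus NetworkAttachmentDefinitions, created separately
          public: public-net
          cluster: cluster-net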
17:42
He, well, he didn't just design it, he even implemented it. That's great, that's cool to see. Just a quick look here: again, we just specify a new part of the configuration. In this case, I think there's even not just support
18:00
for Multus but also, I can't get the name right now, one of the other projects that allows custom networking stuff in Kubernetes, and you easily configure those networks to be used for this EdgeFS cluster in this case. For Ceph, I think a lot of people,
18:21
including me, well, have been kind of desperate for this feature: partitions, finally. At least to bring up what my case is: maybe, who knows, Hetzner, the hosting provider, anyone, maybe? Well, you have cheap servers there,
18:43
but you have two disks most of the time, and those disks are mostly like four terabytes, eight terabytes or something big. And if I have my OS on one of them, I basically lose the whole disk, because previously I couldn't use a partition, where, well, I would use 100 gigs maybe for the OS
19:00
and the rest for Ceph, because why not? And finally, I think, yeah, with the upcoming, I'm not sure if it's already out, the 14.2.8 release, that is available as a feature to use. Yeah, it's great to see, it's awesome.
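With that, a node entry in the CephCluster storage spec can point at a partition instead of a whole disk; a sketch, with hypothetical node and partition names:

    spec:
      storage:
        useAllNodes: false
        nodes:
          - name: "node-1"                # hypothetical node name
            devices:
              - name: "sda2"              # the big data partition; sda1 stays with the OS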
19:21
So big thanks to everyone that's contributing, helping, being on the Slack; just, well, also thanks for being here, yeah. So, for the roadmap: well, we try to further stabilize the custom resource definitions,
19:40
try not to, well, change them too much, as we are at a stable level for Ceph and EdgeFS, for example, right now. We, well, we currently have our own code that does, like, the watching on the custom objects and such.
20:03
Well, there's a bunch of other projects out there as well which offer a more, well, more commonly used approach to doing that, like the Operator SDK from Red Hat and CoreOS,
20:21
I think, those two projects slash companies. So the idea is to, well, move away from our own stuff and get something which, well, everybody uses, yeah. As we had it with YugabyteDB joining for 1.1, we hope to get more storage providers on board with Rook.
20:43
For Ceph: I talked about the multi-homed network stuff for EdgeFS; well, it's soon also coming to Ceph in, I think, let me see, where do we have it? It's planned for around 1.3,
21:02
thanks to Sébastien Han again. Yeah, let me go back one slide here. And for the Ceph pods, the manager part: who knows about the Ceph Manager? Who knows about the Ceph Manager dashboard?
21:22
Well, the idea there is that there's a cool dashboard that everybody is working on to, in general, have better integration with orchestration tools, like, well, the DeepSea stuff and so on, like the existing ones; well, Rook also exists, so also from Rook's side,
21:41
to be even better integrated with that. That's one of the ideas there: you can specify everything you want in the CephCluster object, but if you want, you can also just log into the dashboard, see which disks are available on each node, and then just click, click, and say, ah, well, make them OSDs or something.
22:02
And, well, we want to get there. We're not quite there yet, but it's getting better and better. For EdgeFS, they simply want to, well, support new features that they have implemented in the project themselves. And one of the bigger parts here definitely
22:21
is the CNCF project graduation. But besides that, there's more to come. So, as we saw: project graduation. I'm not gonna bore you too much with dates or anything. Point being, we joined the CNCF as a sandbox project in like 2018 or something.
22:45
And, well, we further improved things and, woo. Point being, we're trying to graduate in 2020, so hopefully around, well, March. And, well, looking from the past to now,
23:04
well, a lot of numbers have increased, so that should probably mean something good, right? It's growing, to summarize that slide; Rook is growing further and further. We have also done something which, for a good amount of companies,
23:20
is also very important: there was a security audit of Rook. Well, an independent security audit was performed by the Trail of Bits guys; they even did the Kubernetes security audit. Point being, there were some vulnerabilities. We fixed them, with one or two small,
23:42
like, well, not too critical things, which are still being fixed. Point is, we got a security audit, it looks good, and we didn't have too many critical things. In general, just like the current infrastructure stuff, we need to improve a bit on the CI part and such, and, well.
24:01
So should you run Rook already? Who's running Rook in development or so, testing around, playing with it? Okay. Who is running it in production? Oh, well, at least a few hands. I'm happy about that. If you want, I would really encourage you
24:22
to get in touch with me, maybe now, or you can also just send Jared Watts a message on Slack or on Twitter and, well, talk to him so that we get a bit more... what is it called? Not customer testimonials,
24:42
but testimonials from people using the project and, well, simply using it. Yeah. And, well, for people that might not be able to say, hey, yeah, we're using it, it can also be confidential. So, well.
25:02
Well, I'm working for Cloudical, just as a point. If you want to work on Rook, like to program in Go or something, feel free to reach out to us or to me. But now, getting back to Rook: if you want to get involved, feel free to jump on our Slack. Rook.io is also a great place
25:21
in regards to the documentation and such. Twitter, we even have a mailing list. We have community meetings, so feel free to join. And, yeah, you got the photo? Thank you.
25:46
If you have any questions, feel free to ask them now. I'll try to repeat them, as we have, well, some audio issues. Yeah.
26:09
So the question is that there is an issue about extending persistent volumes slash persistent volume claims with Ceph CSI, right? Yes. That is solved.
26:21
Well, it's solved in master, to be that guy, but it's backported, as far as I know, to 1.2-something, 1.2.2 or so. Yeah, it is in 1.2 as far as I know, but not released yet, or it is already in the latest patch release. Point being, Ceph CSI, which brought this feature,
26:42
which is, for them, the 2.0.0 release, has brought this feature finally. Probably worth checking out github.com/ceph/ceph-csi. I hope they've updated their feature table below.
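For reference, resizing with CSI then works the standard Kubernetes way: the storage class has to allow expansion, and you bump the request on the existing claim. A rough sketch, reusing the illustrative names from earlier:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block
    provisioner: rook-ceph.rbd.csi.ceph.com
    allowVolumeExpansion: true        # required before PVCs of this class can be resized
    parameters:
      clusterID: rook-ceph
      pool: replicapool
    ---
    # then edit the existing PersistentVolumeClaim and raise
    # spec.resources.requests.storage, e.g. from 10Gi to 20Gi;
    # the CSI driver grows the RBD image and the filesystem on it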
27:05
So the question is: is it possible to use multiple, well, Rook storage operators, for example Rook Ceph and Rook EdgeFS, in the same cluster? Well, technically speaking, yeah, sure. That's not a problem. They're independent of each other.
27:21
Well, if you use the same disks for both, there is a chance that, you know, one might take a disk before the other or something, but, well, if you have two of them fighting over the same disk, for example, then it's not a clash of the operators, because they don't know about each other,
27:41
but just a clash over the disks on the servers. Yeah, if you separate, like, what nodes they use, then that shouldn't be too much of a problem. It's the same if you use Ceph and the Cassandra one; you don't need to use the Ceph one for that. In the end, it's just normal persistent volume claim storage. And if you separate it and so on, it's, yeah.
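A minimal way to keep two Rook clusters from grabbing the same devices is to scope each one explicitly in its storage section, for example (node and device names are illustrative, and the equivalent fields in other backends' CRDs may differ):

    # cluster A: only these nodes and devices, so cluster B can claim the rest
    spec:
      storage:
        useAllNodes: false
        useAllDevices: false
        nodes:
          - name: "node-1"
            devices:
              - name: "sdb"
          - name: "node-2"
            devices:
              - name: "sdb"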
28:00
Yeah. How is Rook currently, how is it evolving?
28:20
So the question is, well, let me try to summarize it: how far will the Rook operators go in regards to secondary operations, right? Well, one of the main parts there at least is already covered by the operators, for example for Ceph. That's, well, the most common one I know there,
28:42
is the health checking of the monitors. Because, well, you don't want to lose your quorum. Well, bad things will happen then. In regards to backups of volumes and such, that's not a thing Rook is looking too much into.
29:02
Simply due to the fact that there's already a big ecosystem around it. Like, let me see, where do we have it? There's the Stash project, the Stash operator; well, it's even an operator which has CRDs as well. There is Heptio, or, well, it was previously Heptio,
29:20
I think it's VMware or something now, with their Ark project. I hope they didn't rename it as well. Yeah, there's a new name. And there are a few more projects, some even more on the Ceph layer, if you want, for Ceph, for example, like Backy2, I think. Those are secondary operations,
29:40
but it's something we leave out, just due to the diversity there: do you want it in Kubernetes, fancy with custom resource definitions, or more outside, just backing up everything, for example. Well, and at least one phrase that kind of stuck with me, from some people that do a bit more in regards to Ceph,
30:02
is: if you want to back up a Ceph cluster, put another Ceph cluster beside it, and it's like, well, yeah. Yeah. How do you do the integration?
30:25
So you mean if you want to add a new storage backend, basically. So the question is, what do you need to do basically
30:40
to add a new storage provider? Well, we're trying to improve that even further now. The main idea is that we have a lot of, well, framework-like structures, to call it that. It's not yet, well, I would say, a full-blown framework for that,
31:02
but there are basically certain structures already available where you design your custom resource definition. And, well, to put it like that, you can copy and paste certain parts of the code which, for example, just do the easy part of watching and then notifying your own functions to,
31:21
well, on adds, like, a new object has been created, react to it and such. There's a good amount of functions already available. It can still be improved, like I said; it's more that we're moving more and more towards being a framework in that regard. Okay, any other questions? Ah, so the question is if one Rook Ceph installation
31:47
can support multiple clusters. To a certain extent, yes. Let's assume the following: you have one cluster with Rook Ceph on it, and you have, let's say, three other clusters,
32:01
three other Kubernetes clusters. You would use Rook Ceph just as normal in the first cluster to get Ceph running, and in the other three clusters, you could also use Rook with the, I forgot the name, external cluster integration, where you would take the access key and such
32:23
from that Ceph cluster, give it to each cluster, and then the Rook Ceph operator there can also set up certain things, like Ceph CSI and such. Though one important thing to mention there is that the Rook Ceph cluster, you need to run it with host networking set to true as of right now,
32:40
because the other nodes and such, they need to be able to access the OSDs and the mons and things, like, well, everything in the end. So that's one of the restrictions we have right now. Yeah. If you don't have any more questions, well, I've got a few stickers still with me,
33:03
so feel free to grab them. So, yeah. Thank you. Thanks.