
Unified Cloud Storage with Synnefo + Ganeti + Archipelago + Ceph


Formal Metadata

Title
Unified Cloud Storage with Synnefo + Ganeti + Archipelago + Ceph
Title of Series
Number of Parts
199
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
This talk presents Synnefo's evolution since FOSDEM '13, focusing on its integration with Ganeti, Archipelago, and Ceph to deliver a unified, scalable storage substrate for IaaS clouds. The talk will begin with an overview of the Synnefo architecture (Python, Django, Ganeti, and Xen or KVM). Then, it will introduce Archipelago, a software-defined, distributed storage layer that decouples Volume and File operations/logic from the underlying storage technology, used to store the actual data. Archipelago provides a unified way to provision, handle and present Files, Images, Volumes and Snapshots independently of the storage backend. We will focus on using Ceph/RADOS as a backend for Archipelago.

With Archipelago and its integration with Synnefo and Ganeti one can:

* maintain commonly-used data sets as snapshots and attach them as read-only disks or writeable clones in VMs running the application,
* begin with a base OS image, bundle all application and supporting library code inside it, and upload it to the Archipelago backend in a syncing, Dropbox-like manner,
* start a parallel HPC application in hundreds of VMs, thinly provisioned from this Image,
* modify the I/O processing pipeline to enable aggressive client-side caching for improved number of IOPS,
* create point-in-time snapshots of running VM disks,
* share snapshots with other users, with fine-grained Access Control Lists,
* and sync them back to their PC for further processing.

The talk will include a live demonstration of a workflow including this functionality in the context of a large-scale public cloud. The intended audience spans from enterprise users comparing cloud platforms, to developers who wish to discover a different design approach to IaaS clouds and storage virtualization.

Please see the abstract above for a rough sketch of the proposed presentation. The presentation will mostly consist of live demonstration and a small deck of slides, meant to describe the main features of Archipelago and its integration with Synnefo, as well as provoke discussion in the Q&A session. The main workflow in the demonstration will be [copied and pasted from the Abstract]:

* begin with a base OS image, bundle all application and supporting library code inside it, and upload it to the Archipelago backend in a syncing, Dropbox-like manner,
* start a parallel HPC application in hundreds of VMs, thinly provisioned from this Image,
* modify the I/O processing pipeline to enable aggressive client-side caching for improved number of IOPS,
* create point-in-time snapshots of running VM disks,
* share snapshots with other users, with fine-grained Access Control Lists,
* and sync them back to their PC for further processing.
Transcript: English (auto-generated)
Today we have 40-minute slots: each talk is 30 minutes, with 10 minutes of questions for you guys. And let's hand over to our first speaker. Okay, so since we have 40 minutes for each slot and there's no break between them,
we'll give you a heads up 10 minutes before the end of the talk. Okay, the last five minutes are for Q&A. At that time, the next speaker can get ready for the next slot. Thank you very much. Can you hear me? No? Why am I moving all the way up here?
Can you hear me now? Can you hear me better? Okay. So, good morning. It's good to be here. I'm here to talk to you about how to provide unified storage for clouds.
Now, Synnefo is our cloud platform. We built it based on Ganeti; Ganeti is the best tool for virtualization management out there. Archipelago is our storage layer.
Archipelago is software-defined storage that can use multiple storage backends, and Ceph provides a storage backend for Archipelago. This is what I'm going to talk about. Now, I feel like most of today's presentation should be a live demonstration. But I can't do it right now because the network is way too slow. Please bear with me.
So if you could just, you know, if you're not listening to me anyway, turn off the wireless, and we can try using the network. If it doesn't work, then we're going to have to describe and imagine what's going to happen. But it's certainly better if we can actually see it live, right? Okay, so where did it all start? Synnefo was built initially to power GRNET's public cloud service.
We call it ~okeanos. It's Greek for "ocean", because it's a good name for a big, vast cloud service. Synnefo started in late 2010. We've been in production since 2011. We have been running that installation ever since.
It's a single deployment. We've migrated it from version to version. We still have VMs that have been running from that time up to this day. I mean, we didn't have downtime. Well, okay, maybe a little, but essentially we didn't. We're now at more than 5,000 users.
We have more than 7,000 VMs running. We have spawned more than 250,000 VMs and destroyed more than 70,000 virtual networks, which are pretty good numbers. We've scaled it; it works. Okay, what kind of cloud did we want to build? We wanted to build an IaaS-like service
in the sense that it provides compute, storage, and networking capabilities. We wanted the VMs to be persistent, meaning that VMs should be able to survive and tolerate hardware failures. How do you do that? I'll be talking about that right next.
Everything we do with Synnefo is open source, under a free license. You can download it if you want. You can deploy it yourself. You can go see our demo site, which I'll mostly be using for this presentation. We wanted to create a production-quality service, and we wanted a user interface as simple as possible, because our audience was students and researchers
who would get their first real exposure to cloud infrastructure. So how can we build that? What kind of tools can we use? Why is it difficult to do? Everything would be based on commodity hardware, with no storage arrays.
How do you do virtual machine migrations if you don't have any shared storage? Because we wanted the VMs to be persistent and resistant to hardware failures. Contrary to most cloud approaches, we treat the VMs as pets. People will say that VMs are cattle: you've got a thousand cows,
you let them go out in the field to graze, and then bring them back for milking. If a cow dies, nobody has an emotional connection to it. But this puts an enormous burden on the application writer, who has to manage multiple VMs and create new ones when a VM fails.
Most people don't actually do this right now. People set up their VM, their web server or their name server, and they configure it the way they like. They want it to remain alive if something goes wrong. They love it. It's their pet. They take care of it. And Ganeti is very good at managing this kind of VM. So what we're trying to do is combine Ganeti's pet-like treatment of VMs
with cloud-style flexibility and well-known cloud APIs. We're exposing the OpenStack APIs. We're not using OpenStack code, but we're exposing the OpenStack APIs. You can speak to Synnefo as if it were an OpenStack installation and use existing clients like jclouds.
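To make the point about well-known APIs concrete, here is a minimal sketch of creating a server through an OpenStack-compatible Compute endpoint such as the one Synnefo exposes. The endpoint URL, token, and IDs below are placeholders, not values from the talk.

```python
# Minimal sketch: talking to an OpenStack-compatible Compute API.
# The endpoint, token, and IDs are placeholders, not real values.
import requests

COMPUTE_URL = "https://example.synnefo.org/compute/v2.0"  # hypothetical endpoint
TOKEN = "user-auth-token-from-the-identity-service"       # placeholder

def create_server(name, image_id, flavor_id):
    """POST /servers, as defined by the OpenStack Compute API."""
    body = {"server": {"name": name, "imageRef": image_id, "flavorRef": flavor_id}}
    resp = requests.post(
        f"{COMPUTE_URL}/servers",
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    return resp.json()["server"]

# server = create_server("my-vm", "debian-base-image-id", "small-flavor-id")
```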
That gives us this kind of leverage. And why is this approach good? Because it's much more manageable; we can upgrade things independently. So how are we doing what we're doing? We're using Synnefo to provide the OpenStack APIs
to the outside world. We're using Ganeti to actually manage the virtual machines, with multiple Ganeti clusters at the back end. So Synnefo receives OpenStack commands and translates them to Ganeti commands. We're using DRBD inside Ganeti to provide reliable machines
without any storage array. And we're also using Archipelago to provide thin clones and snapshots for virtual machines. We're using Archipelago to provide a unified way of handling storage, with the data stored as objects underneath.
We're using Ceph at the back end of Archipelago, and we expose Archipelago to Ganeti. This is the overall picture. Don't try to grasp every single detail. The idea is that Synnefo is all the way up on the top.
It's a thin layer, written in Python. We're using multiple Ganeti clusters for handling the VMs. And we're using Ceph for handling the data itself. What can we do with this?
And what can I demonstrate to you today? What do we mean by unified storage? We have a file storage service. Users can upload files to their file storage service. They can do it in a Dropbox-like way, in the sense that they synchronize with partial transfers. Say I have a 10-gigabyte file and I change like four megabytes of it.
Then I can re-upload it to my storage service and it only uploads those four megabytes that I've changed. And I can share files. Now, we handle images, the templates for building VMs, the exact same way as files.
Images are files. We don't use a different service. We don't use a different storage platform. It's the exact same back end, the exact same files, plus some metadata. And that's a good thing, because images are huge. Have you ever tried uploading an operating system image, a five-gigabyte Windows image say, over a 3G connection?
Would it work? How long would it take? It would be more or less a disaster. But here, if somebody else, or you, has previously uploaded a similar image, then you only need to upload what's actually different: the changed blocks.
And then you can share files with your colleagues. You can share your pictures from your vacations or something. You can do the exact same thing with the exact same APIs for your images as well. We use the same code base. Because, you know, images are files. OK, let's move a bit forward. So you've got your files
in your storage service, and you register them as images, with a bit of metadata: this is an image, it has this operating system, and so on and so forth. And then you can create VMs from these images by cloning them into volumes. And this also happens in a unified way.
We do it with thin clones. The data doesn't actually move. And then after some time, the VM keeps working and keeps producing data, and you need to snapshot it back. So you create a snapshot from a live volume into a snapshot file, which again is a file.
So you can actually download the latest data back in a synchronized fashion. Say I snapshot my VM once every 10 seconds or once every minute. Then you can sync this snapshot back to your machine, downloading only the changed blocks.
And all these things live on the same Archipelago storage backend. We treat them uniformly. Right? This is what I'm going to be demonstrating. If you feel you need to ask something, please do. OK, that way we can make the presentation more interactive.
So what I'm going to be doing is, first I'll switch screens.
So this is a demo installation of the latest and greatest Synnefo version. It's a development snapshot. I hope it works. If you can, please disable your Wi-Fi, because the network is way too slow.
I don't even want to describe things. I want to show you.
Yeah, it's too slow. And certain things just work. OK.
Let's try. If it doesn't work, then we abort. And we just describe things. No, it doesn't work.
I did try it before, and it wasn't this slow.
Because if it doesn't work at this point, we should put it aside.
And we'll continue.
Try it and see for yourselves.
It's at demo.synnefo.org. The demo is open for everybody; you can log in with your Google account, with your LinkedIn account, or you can log in using a username/password combination.
Public IPs are not working, because we don't actually have that many IPs to provide to people. But if you can connect, you'll be able to try it. Sorry for that. If you want, later on, when things have subsided and the network works again, you can give it a try yourselves.
Why are we using Ganeti? Ganeti is a tool for managing virtual machines. You give it commands: please create this machine for me. And it executes them on multiple nodes. These nodes are called a cluster. Ganeti has multiple drivers for handling storage. We have actually contributed some of these drivers.
We are one of the biggest external Ganeti contributors. All these methods, all these external storage drivers for handling storage, we can use them in Synnefo. And we expose them all the way to the user. The user can pick which kind of storage
back end their VMs run on, depending on their needs. The user can pick a DRBD-based VM, where the data is replicated so migrations don't need to copy it, or an RBD-based VM. And we drive Ganeti through its APIs, with just a few commands.
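As a rough illustration of driving Ganeti "with just a few commands", here is a hedged sketch of an instance-creation call over the Ganeti remote API (RAPI) with a chosen disk template. The field names follow the Ganeti 2.x RAPI from memory and should be treated as assumptions; the host and credentials are placeholders.

```python
# Rough sketch of creating an instance over the Ganeti RAPI with a chosen
# disk template (e.g. "drbd" or "rbd"). Field names are assumptions based on
# the Ganeti 2.x RAPI; check them against your Ganeti documentation.
import requests

RAPI_URL = "https://ganeti-master.example.org:5080"  # hypothetical master
AUTH = ("rapi-user", "rapi-password")                # placeholder credentials

def create_instance(name, disk_template="rbd", disk_mb=10240, os_type="image+default"):
    body = {
        "__version__": 1,
        "mode": "create",
        "name": name,
        "disk_template": disk_template,   # "drbd" replicates, "rbd" uses Ceph
        "disks": [{"size": disk_mb}],
        "nics": [{}],
        "os": os_type,
    }
    resp = requests.post(f"{RAPI_URL}/2/instances", json=body,
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()  # the RAPI returns a job id to poll

# job_id = create_instance("vm-1", disk_template="drbd")
```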
This is the overall picture. The user comes in from a command-line client or from the web UI. This is the standard OpenStack API, the OpenStack Compute API for example. And we convert those commands into Ganeti requests.
We have an integrated identity service, which handles user accounts and logins. It was a nice thing. It can support multiple authentication methods: Google-based authentication,
LinkedIn-based authentication, and so on, plus local passwords. We have integrated with the Shibboleth infrastructure. We also federate with the European academic identity infrastructure. If you have an academic account with a European university, you can actually go and use our production service with your own credentials.
We do quotas on and across services. The compute service is a thin layer in Python and Django that only implements the OpenStack APIs. It's not actually doing the hard work.
It passes the hard work on to Ganeti. Networking is pluggable. We have many choices. We can do multiple methods of isolating virtual networks, for example using a single VLAN per network, which does not really scale if you're at the 5,000-user scale.
We can do MAC address-based filtering on a single VLAN to provide isolation of multiple virtual networks. We can do VXLAN encapsulation. All these come out of the box, or you can plug in your own, if you want to. So that's the overall picture.
The identity service is the glue. The compute service and the storage service both speak to it. How do we interact with Ganeti?
An external command comes in, and we ask Ganeti to actually make it happen. That's one direction. Something happens in a Ganeti cluster, we learn about it, and we update our own view of the state. Why is that good? Because the administrator always has a side path to the underlying Ganeti clusters.
So even if Synnefo goes down, or even if you're upgrading, or even if it hits a stability problem, because, you know, this does happen, something breaks: your VMs are still there. They're still managed by Ganeti. You can still go and migrate them if something happens. And then your control path comes back up, and your users can access the VMs to control them again.
But the VMs were never down. And we're actually doing this right now in production. We've done numerous rolling upgrades and kernel upgrades, and nobody ever noticed a thing. We migrated VMs from one node to another, using live migration commands, and nobody noticed a thing.
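The two directions described above can be pictured as a small reconciliation loop: commands flow down to Ganeti, and the cloud database is periodically brought back in line with what the clusters report. This is a hypothetical sketch; all object and method names are illustrative, not Synnefo's actual code.

```python
# Hypothetical sketch of the "two directions" described above: commands go
# from the cloud layer to Ganeti, and the cloud database is reconciled from
# what the Ganeti clusters report. All names here are illustrative.

def reconcile(cloud_db, ganeti_clusters):
    """Bring the cloud-layer view of VMs in sync with the Ganeti clusters."""
    for cluster in ganeti_clusters:
        for instance in cluster.list_instances():         # ask Ganeti what exists
            record = cloud_db.get_vm(instance.name)
            if record is None:
                cloud_db.add_orphan(instance)              # exists only in Ganeti
            elif record.state != instance.state:
                cloud_db.update_state(record, instance.state)

    # VMs the cloud layer knows about but no cluster reports anymore
    for record in cloud_db.all_vms():
        if not any(c.has_instance(record.name) for c in ganeti_clusters):
            cloud_db.mark_missing(record)
```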
Now, let's go to the unified storage part. Our storage service exposes the OpenStack Swift API to the outside world, with extensions for efficient synchronization. What are these extensions?
Every file gets chunked up into lots of individual four-megabyte blocks. When I want to upload a file, I tell the server: I want to upload this file, made up of these specific blocks, identified individually. These are content-addressable blocks. I ask the server: I want to upload these specific blocks.
If it already has some of them, the server replies with only the missing blocks. So I only need to upload those. When I want to download a file, I ask the server: what's the list of hashes for the blocks that make up this file that I want to download? I compare it to the blocks that I already have,
and I only need to download the missing blocks. Does this make sense? Why is this important? Because if your images are huge files, you can do partial uploads and downloads much more efficiently. All of this is backed by Archipelago.
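The partial-transfer protocol just described boils down to exchanging lists of content hashes for fixed-size blocks. Below is a minimal sketch, assuming 4 MB blocks and a hypothetical `server` object with two methods; it is not the real Pithos/Swift client.

```python
# Sketch of the block-level sync described above: split the file into 4 MB
# blocks, hash them, ask the server which hashes it is missing, and upload
# only those. The `server` object and its two methods are hypothetical.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks, as in the talk

def block_hashes(path):
    """Return the ordered list of (hash, block) pairs for a local file."""
    out = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            out.append((hashlib.sha256(block).hexdigest(), block))
    return out

def upload(path, remote_name, server):
    blocks = block_hashes(path)
    hashmap = [h for h, _ in blocks]
    missing = set(server.register_object(remote_name, hashmap))  # server replies with missing hashes
    for h, block in blocks:
        if h in missing:
            server.put_block(h, block)                           # upload only what is missing
```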
Archipelago is the common storage layer for files, images, snapshots, and volumes. And Archipelago is backed by Ceph. So what is Archipelago? On one hand, we have resources. Resources are files, images,
volumes, snapshots; all of these things are handled as maps to individual blocks. Blocks are stored on a storage backend. We decouple the snapshotting and the cloning logic from the underlying physical storage.
These are files, these are images, these are volumes,
these are snapshots. I have my image; I clone it into a volume for a running VM. From my running VM it's very easy to take a snapshot. How does this happen? I've got a snapshot like this; I clone it into a volume like this.
Underneath, these are all handled as maps to individual blocks. And the blocks are stored as individual objects. And Archipelago is in the middle, handling this kind of mapping.
Say I've got my golden master image. I want to spawn a hundred virtual machines from it. How do I do it? I do not copy individual blocks to create a hundred individual disks. I just copy the mapping and refer to the original blocks.
These go to the VMs. Each VM sees its own disk, and disks are just linear block ranges. How do I go from commands on the disks to my actual storage?
On the left-hand side, the VM says: I want to write to block five, I want to read from block six. On the right-hand side, I speak to my storage backend, which is Ceph at the back, and I say: write this object or read this object. How do we bridge these two worlds?
How do we translate from VMs and individual block numbers to actual storage, to named objects? In the middle is the Archipelago core. It knows the map behind every resource.
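A simplified way to picture "the map behind every resource": the map turns a logical block index of a volume into a named object on the backend, so a VM request like "read block 5" becomes a named-object read. The names below are illustrative assumptions, not Archipelago's real naming scheme.

```python
# Simplified sketch of the translation step described here: the map behind a
# volume turns a logical block index into a named object on the backend, so
# "read block 5 of volume v" becomes "read object <name>". Purely illustrative.

def resolve(volume_map, block_index):
    """volume_map: {block_index: object_name}, kept by the Archipelago layer."""
    return volume_map[block_index]

def read_block(volume_map, block_index, backend):
    object_name = resolve(volume_map, block_index)   # map lookup
    return backend.read(object_name)                 # named-object read (e.g. RADOS)
```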
A request on the volume will be resolved against the map, translated, and given to the backend. We have northbound drivers. For example, we can expose this volume as a block device; this is a kernel driver. Or we could expose it as a file, for example.
Or we could expose it through a userspace driver for QEMU. We receive this command, and then we use southbound drivers to bring it down to the storage. The initial implementation had a single southbound driver, a file driver, backed by an NFS share.
and bring it from an MS file. The upper layers, we didn't know anything about. So we were able to build these zones and functions over NFS. When we wanted to leave NFS, we might take the open-ended box, drag this into the stream, from NFS files to random subjects, and nobody understood it.
Archipelago is not tied to one driver. We were running the RADOS driver for Ceph and the file driver for NFS side by side. This is the southbound interface. At the northbound interface, we've got a kernel block driver
for exposing block devices to the kernel, a userspace driver for exposing volumes to QEMU, and an HTTP gateway, which actually makes the volumes appear as files over HTTP. And this is how the actual mapping works.
This is a read-only map for a snapshot. We clone the snapshot into a read-write map; we just have to copy the map. Whenever a write happens, we copy the affected blocks, so that we do copy-on-write, and we write on the copied blocks. Then we can snapshot the volume back
into a read-only map again. And this happens uniformly: for files, for images, for snapshots, for volumes. This is the overall picture of a running Archipelago. VMs speak to the kernel block driver,
which translates to Archipelago commands against the underlying storage, or users can speak to the HTTP gateway to access the snapshots and the volumes over the web.
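The cloning and copy-on-write behaviour described above can be sketched as a toy volume model: a clone copies only the map, and a write copies the affected block to a fresh object before repointing the map. Again, all names are illustrative assumptions, not Archipelago's implementation.

```python
# Toy model of the clone and copy-on-write behaviour described in the talk:
# a volume is a map from block index to object name; cloning copies only the
# map, and a write goes to a fresh object so shared blocks stay untouched.
import copy
import uuid

class CowVolume:
    def __init__(self, name, mapping, backend):
        self.name = name
        self.mapping = mapping            # {block_index: object_name}
        self.backend = backend            # any object store with read()/write()

    def clone(self):
        # Thin clone: copy only the map; the data blocks are shared.
        return CowVolume(f"clone-{uuid.uuid4().hex[:8]}",
                         copy.deepcopy(self.mapping), self.backend)

    def write_block(self, index, data):
        # Copy-on-write: write to a fresh object and repoint the map,
        # leaving the original (possibly shared) block untouched.
        new_obj = f"{self.name}-blk{index}-{uuid.uuid4().hex[:8]}"
        self.backend.write(new_obj, data)
        self.mapping[index] = new_obj
```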
I won't be going deeper into this, because I'd like to leave some time for questions. Why is it a good thing? Why does it work? We can do, as I told you, Ganeti and Synnefo upgrades without the users noticing.
We can do Synnefo bug fixes without the VMs going down. We do not have a single database holding all VM information. Some information stays inside the running Ganeti clusters, the live VM information, migration status, and so on. And the cloud-level stuff lives in our own database.
The cloud nodes, the Ganeti nodes, their addresses are all in our database. It's a more distributed way of seeing things. We have been able to do all kinds of operations live, like migrations. We're able to migrate across CPU families, from Intel to AMD machines,
and nobody noticed a thing. We're able to do the storage migration from NFS to RADOS, and nobody noticed that their blocks were moving. We have scaled from a few physical hosts to the order of hundreds of hosts, not many hundreds of hosts,
and we're able to add Ganeti clusters as we grow, and expose the whole choice regarding the storage to the user. This is our website; you can go download the software, you can go experiment with our demo installation. I really wanted to give it a try and,
I don't know, show you the demo site at least. But it doesn't seem I'm able to do it anytime soon. Thanks for your attention, would you like to ask things, talk about things? Yes.
Hi, I'm Luis Echele, I'm a Ceph developer, and our feature freeze is in the next two weeks. So, is there something you would like that would make your life easier? Yeah, we'd like scrubbing to have less impact on the production systems. Ah. Because we are running Ceph in production right now,
and it's working beautifully, but when the OSDs start scrubbing, and especially deep scrubbing, the volumes are, you know, almost inaccessible. So we have users complaining that their I/O operations don't complete, and we have to stop scrubbing for some time to make things come back to normal.
It's in the order of tens of nodes, about 20 physical nodes. It's in the order of a few petabytes, one or two petabytes. We have had issues with latency rather than throughput;
if you're doing network-based I/O, you have latency, and this shows in the VMs, so we cache as much as we can near the clients. We have a caching version under development, which caches on the clients, and only does the actual I/O when the VM says: I want to sync this, I want to flush this,
I want to flush this, I want to sync this. Yeah, but if you do 4K writes with one or two outstanding requests, you won't get much throughput; the latency shows.
How large are our Ganeti clusters? We're trying to keep them pretty small, because they're much more efficient and responsive this way. We're trying to keep them in the order of 15 nodes. We don't have to, but they're pretty loaded right now, and since we're exposing a cloud service,
we do have a very high request rate, so we're keeping them at about 15 or at most 20 nodes.
If we're cloning, do we still use DRBD? If the user selects DRBD... I could actually demonstrate this, so please go and try it when the network starts working.
So, not guaranteed, but we could do it. So here's the thing, I'll repeat the question for the people in the back. You're asking: if you want to do big clones of a golden image, can we do it with DRBD? If you select DRBD as your storage, you'll experience a 5 or 10 minute delay until the image is copied from our blocks to the DRBD disks.
And that's exactly what we're doing with DRBD. But if you use RBD, it's thinly provisioned storage. We know the maps, so we can just clone this into a hundred volumes.
One more question? Do you still need to keep the smaller ones in the end? No, it doesn't. Yeah, but this will take five, this will take a hundred, this will take five hundred, right? Then the networking will be the bottleneck,
because you're saturating that node, and you don't really want to do that, except maybe for one or two big VMs or something. If the user wants his VM to survive for a year, we make sure these people can create their VMs. Otherwise, we limit the VMs we give out.
We have cloud-level quotas, in the sense that every resource is accounted for. We have a nice dashboard which,
please, oh, of course, let's see. Okay, let's see. This is a user's dashboard. This is storage space, and I've given myself humongous amounts of quota. These are system disks,
number of CPUs, amount of RAM, number of virtual machines, number of virtual networks, and number of floating IPs; these are the quotas that we give you. So, we limit the number of VMs, for example, that the user can create. We don't actually do IOPS limitations right now, but we do plan to do this:
to follow the flows and then enforce quotas on the virtual resources, let's say the total number of IOPS per user, by combining these; we can't do that yet.
Yes, they are accessible.
This is an interesting question. The question is, and yes, that's what I'm going to answer: if snapshots and images are stored on our storage system, what happens if something happens to the data center?
What happens with availability and, you know, reliability and robustness? Do they get destroyed? Well, with our storage system currently, a single installation lives inside a single data center. But then, we can have two distinct installations in two different regions. We could have one here
and one in Athens, for example. The very fact that we use content-addressable blocks allows you to very efficiently synchronize your important files to a remote region. You can say: every night, I'll be fetching maps from region A and comparing them with region B.
And as a user, I don't need to transfer much, right? I only need to transfer the blocks that have changed. We could then automate this.
We can do this because Archipelago exposes the maps over the API, so the users can do it themselves, or they can synchronize the files themselves. But then, since we're using backends without them knowing what we're doing on top,
if, say, for example, RADOS at some point acquires the ability to do asynchronous replication, without it being part of the I/O path, without it being synchronous, then we can take advantage of it. Right? Because we're agnostic to the actual storage underneath. But even now, we can automate this at the API level.
Do you provide the ability to get feedback, statistics? How do you do that?
Statistics over the API.
We provide... The question is: if you have traffic spikes, can you automate the process of bringing up new VMs? We provide statistics over the APIs. We can export CPU or network statistics from the VMs. We provide the ability to programmatically create VMs or snapshots. The infrastructure currently
does not provide a built-in way of looking at the statistics and deciding when to scale up. But you can do it your own way. If you feel that, you know, a certain set of VMs may come under a heavy load, you can write a program: monitor all of those and automatically spawn new ones. But we don't have infrastructure
that does that for you. The numbers are not advisory; we look at the live links and we provide CPU and network bandwidth statistics for every VM. Please, go to the demo site and see how it works, if that's what you want.
We can talk more in a few minutes. Okay.