
Hardware accelerated applications on Unikernels for Serverless Computing


Formal Metadata

Title
Hardware accelerated applications on Unikernels for Serverless Computing
Title of Series
Number of Parts
287
Author
Contributors
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Serverless computing facilitates the use of resources without the burden of administering and maintaining infrastructure. The simplification of IaaS appears ideal (in theory), but providers and users are presented with several challenges: providers aim to reduce infrastructure maintenance overheads, while users require isolation, flexibility, and programming freedom. Serverless deployments are mostly backed by sandboxed containers: to enable programming freedom, providers allow users to deploy functions as containers, but to ensure strict isolation, these containers are sandboxed in VMs. As a result, this bloated stack brings complicated maintenance costs: (a) several layers of abstraction between the user function to be executed and the actual execution environment; (b) an increased attack surface; (c) increased request-to-exec time; (d) a reduced feature set available to functions (e.g. hardware acceleration). Unikernels promise fast boot times, small memory footprints, and stronger security, but lack manageability, and serverless frameworks only support containers. Moreover, unikernels provide a different environment for applications, with limited or no support for widely used libraries and OS features. This issue is even more apparent in the case of ML/AI workloads: ML/AI libraries are often dynamically linked and have numerous dependencies, which directly contradicts the statically linked nature of unikernels. Finally, hardware acceleration is almost non-existent in unikernel frameworks, mainly due to the absence of suitable virtualization solutions for such devices. In this talk, we present the design of a flexible serverless framework for the cloud and the edge, backed by unikernels that can access hardware accelerators.
We go through the components that comprise the framework and elaborate on the challenges in building such a software stack: we first present an overview of the necessary components of a serverless framework; then we focus on the function execution framework based on two popular unikernel frameworks; finally, we present a hardware acceleration abstraction to expose semantic acceleration functionality to workloads running on top of this framework. A short demo of the working components will be presented, discussing the challenges and trade-offs of this approach.
Transcript: English (auto-generated)
Hi, my name is Tasos. Along with my colleague, Babis, we're going to present hardware-accelerated applications on unikernels for serverless computing. First, we're going to talk about serverless computing and how unikernels can be the basis for lightweight function execution. Then, we're going to show you the current state and the missing pieces of a serverless framework we're building based on unikernels, along with a short echo demo on OpenFaaS with Solo5. Then, we're going to add machine learning workloads and hardware acceleration to the mix, and we're going to show a short image classification demo on OpenFaaS with vAccel. Serverless computing is essentially infrastructure orchestration managed by the service provider. It offers effortless scaling, and it allows users to focus on business logic and deploy their code without provisioning the infrastructure. The user code is deployed as a function with its dependencies, and execution is event-driven. The billing model charges for actual resource usage instead of idle resources, and the mode of execution is stateless, oriented towards microservices and triggered actions. Most serverless frameworks nowadays are deployed on cloud infrastructure. However, this mode of execution is useful for edge workloads as well; for instance, running machine learning inference for fast decision-making is a valid use case. Currently, serverless frameworks are backed by containers.
They consist of two basic components: the control plane, which hosts the API gateway, the scheduler, the queue worker, etc., and the actual functions, which contain the main init function (control plane logic that sets up the environment, the handler, and the endpoints) and the handler function, which is spawned on invocation and contains the actual user code. This logic is bundled in container images and spawned, either sandboxed or plain, to listen for events via the endpoint or the gateway. This figure shows an example of a FaaS deployment in Kubernetes. We have the control plane running as generic containers, and we have the user functions, which are created by the control plane via the Kubernetes API. The gateway running in the control plane forwards requests to the control plane logic running in the functions, which triggers the execution of the user code. Now, as we mentioned, serverless frameworks are currently backed by containers.
This raises an important security issue with regard to multi-tenancy. The current solution service providers adopt is to sandbox these containers in VMs. However, VMs have two significant issues. The first one is the non-negligible overhead in memory and management footprint. This overhead mainly refers to the boot time (in the serverless context, this is known as the cold boot time), to the memory footprint, which is especially apparent on edge devices, and in general to the VM lifecycle. This whole VM stack is really complicated: the service provider has to maintain the VMM, the kernel running inside it, the way to handle the container image inside the VM, inside the sandbox, etc. How about we try something more elegant as the basis for serverless execution? We could try unikernels. Unikernels offer fast boot times, low memory and management footprint, and increased security. However, unikernels lack interoperability in terms of function and code compatibility, and they lack container runtime support.
Serverless frameworks, on the other hand, are designed for containers: they are based on container runtimes and container operators. Unikernels are not containers; their management and I/O interfaces resemble those of VMs, the application is bundled in a single binary, and orchestration support is limited. To support unikernels for serverless, we need to take two things into account: the container image and runtime flows, and the invocation triggers. We have to bundle the unikernel binary and its dependencies in a single container image, we have to tweak the container runtime to spawn a unikernel along with its monitor or sandbox, and we need to make sure that the unikernel implements the interface with the serverless gateway. To integrate unikernels into modern orchestrators, we just need to build a compatible runtime able to spawn a unikernel. Using this runtime with an existing serverless framework should be straightforward: instead of spawning a container on function invocation, the system will spawn a unikernel. No changes are needed in the serverless workflow. In this logical diagram, we show the addition of a new unikernel runtime. We have the control plane, we have the user functions deployed as generic containers, and we have an extra part: upon function creation, the control plane redirects the creation to a new runtime class. This runtime class, which is OCI-compatible so that it can interact with Kubernetes, is able to spawn a unikernel with the relevant control plane logic for the OpenFaaS scheduler, operator, etc. Upon function invocation, the unikernel handles the user code execution.
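To make the runtime-class idea concrete, here is a sketch of how an OCI-compatible unikernel runtime could be wired into Kubernetes. This is illustrative only: the handler and image names below are invented, not the ones used in the talk's demo.

```yaml
# Hypothetical example: register an OCI-compatible unikernel runtime
# as a Kubernetes RuntimeClass, then opt a function pod into it.
# The handler name "unikernel-runtime" is invented for illustration.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: unikernel
handler: unikernel-runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: echo-function
spec:
  runtimeClassName: unikernel        # pods opting in are spawned as unikernels
  containers:
  - name: function
    image: example/echo-solo5:latest # image bundling the unikernel binary
```

With this in place, the control plane creates function pods as usual; the kubelet simply hands them to the unikernel-capable runtime instead of the default container runtime.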
As a first step, we take a hybrid approach where we keep the container for the interface and the endpoint setup, and we spawn the unikernel for the actual code execution.
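A minimal sketch of what such a hybrid function image could look like, based on the build steps described in the demo below; paths, base images, and build commands are illustrative, not the exact ones from the repo.

```dockerfile
# Build stage: fetch and build Solo5 (illustrative; the real Dockerfile
# also applies a small patch to print the node architecture)
FROM ubuntu:20.04 AS build
RUN apt-get update && apt-get install -y build-essential git
RUN git clone https://github.com/Solo5/solo5 /solo5 && \
    make -C /solo5            # builds the statically linked tender (monitor)

# Runtime stage: a lighter image with watchdog + tender + unikernel
FROM debian:bullseye-slim
COPY --from=build /solo5/tenders/hvt/solo5-hvt /usr/local/bin/
COPY fwatchdog /usr/local/bin/fwatchdog       # OpenFaaS watchdog
COPY hello.hvt /function/hello.hvt            # the unikernel binary
ENV fprocess="solo5-hvt /function/hello.hvt"  # command exec'd per invocation
EXPOSE 8080
CMD ["fwatchdog"]
```

The key point is the two-stage split: the heavy toolchain stays in the build stage, while the final image carries only the watchdog, the statically linked monitor, and the unikernel.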
We implement this on OpenFaaS on a generic Kubernetes cluster. We keep faas-netes and the gateway containers as is; in the function pods, we use generic containers with fwatchdog to exec the user function, and Solo5 as the unikernel example. So we have prepared a short demo. This is a Kubernetes cluster with OpenFaaS deployed as the control plane. We have runc as the backend of the runtime class that spawns a generic container. In the container, fwatchdog is running, and upon function invocation it triggers the execution of a simple Solo5 unikernel. Let's have a look at the demo. This is our GitHub repo, and this is the Dockerfile that builds the actual function. This is generic OpenFaaS stuff. We clone the Solo5 framework, we add a small patch to print out the architecture of the node that the container is running on, and we build Solo5. We statically link the monitor to make sure there are no compatibility issues. We then start from a lighter container image: we copy the watchdog, we copy the monitor, we copy the unikernel, we expose the ports for the control plane software, and we add the actual execution command, which could also be provided as a command-line parameter. The YAML file that deploys the actual function is shown here: we have the gateway, the name of the function, the container image, an OpenFaaS profile, and some parameters for the execution. Now let's see how this is deployed.
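The deployment YAML walked through here could look roughly like the following OpenFaaS stack file. The gateway address, image name, and profile value are placeholders, not the exact values from the demo.

```yaml
# Illustrative OpenFaaS stack file for the echo function
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080       # placeholder gateway address
functions:
  echo-solo5:
    image: example/echo-solo5:latest   # container image bundling the unikernel
    annotations:
      com.openfaas.profile: unikernel  # hypothetical profile name
    environment:
      fprocess: "solo5-hvt /function/hello.hvt"  # command run per invocation
```

A file like this is what faas-cli consumes in the deployment step shown next.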
Here is our Kubernetes cluster. These are the names of the nodes and their architectures, and we can see that their status is Ready. We can see the control plane of OpenFaaS running: the gateway, which is the main pod, and a couple of other control plane containers. If we check for the function name, we see that nothing is running at the moment. To deploy the function, we use this YAML file with the faas-cli command-line utility, and if we query the cluster for the actual function, we can see that it is being spawned. Now, to trigger function execution, we just issue a curl command to this address. We can see the output of Solo5: we haven't provided any input, and it is running on arm64. We can also provide some input; we can see that the command line is now "hello solo5". We can see that it's running on a different machine. We can retry the execution: now running on arm, now running on x86, etc.
Another option to interact with OpenFaaS is to use the simple UI that they provide. So we have the echo-solo5 function, with some more information here. We can invoke it without an input and see exactly the same thing as the terminal output. We can also invoke the function with an input; we get a success here because that's what the unikernel expects.
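The invocation path seen in this demo follows the watchdog's simple contract: the wrapped process reads the request body on stdin and writes the response to stdout. A toy stand-in in plain shell (not the real fwatchdog or unikernel, just the contract):

```shell
# Toy model of the fwatchdog "fprocess" contract: the wrapped command
# reads the HTTP request body from stdin and writes the response to stdout.
# Here a shell function plays the unikernel's role.
fprocess() {
  read -r body || body=""
  echo "hello ${body:-solo5}"
}

echo "world" | fprocess    # responds with "hello world"
printf "" | fprocess       # no input: responds with "hello solo5"
```

This is why swapping a container process for a unikernel monitor is feasible: anything that honors this stdin/stdout contract can sit behind the watchdog.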
That's it. Going back to the presentation. So we've seen how we can easily use unikernels to spawn functions on OpenFaaS deployed on a Kubernetes cluster, both on x86 and arm64 machines. Now, the second issue that VMs exhibit when used to sandbox containers for serverless frameworks is hardware access and device sharing.
My colleague Babis will dig into this issue. Babis? Thank you, Tasos, and hello from my side too. The second issue that we saw earlier was hardware access and device sharing in a multi-tenant environment. The reason we're interested in such a problem is that there are workloads, like machine learning and artificial intelligence applications, that require access to hardware acceleration devices. Think, for example, of an application at the edge which instantly needs to make a decision based on input data, or of applications like image processing, information extraction, and more. Since our solution is based on unikernels, these applications need to run on top of them. But how easy is it to run ML and AI applications in unikernels? For the time being, the truth is that unikernels do not provide a suitable environment for such workloads, and there are two main reasons behind this. First, unikernels do not support any machine learning framework. ML frameworks have a lot of external library dependencies, and they are usually linked against them during execution. However, as we know, unikernels are statically built, and porting that many libraries would not be an easy task at all. Second, unikernels usually support generic virtualized devices, like virtio, and they do not offer any support for hardware acceleration devices. Of course, we could think of hardware pass-through for unikernels, but this means that for every accelerator, for every device we're interested in, we would need to port the entire device driver, which is not going to be an easy task at all. Moreover, hardware pass-through is not a viable solution for a multi-tenant environment. Even in the case of para-virtualization, there isn't a generic solution which can be applied to different kinds of accelerators, so we would still have to port different para-virtualized drivers to unikernels. In an effort to address these issues, we propose vAccel. vAccel exposes acceleratable functions to the user while at the same time supporting a wide range of hardware acceleration frameworks and devices. vAccel consists of three components. First, there is the user-facing API, which can be bindings for machine learning frameworks, BLAS operations, crypto operations, or something else. The second most important component is the plugins, which map to the hardware devices available in the system and the acceleration frameworks; in the case of a virtualized environment, a plugin can even be the transport layer between the host and the guest, like virtio. Between these two components, we have a thin software layer called the vAccel runtime, whose sole role is to dispatch the acceleratable functions to the appropriate and available plugin. With that design, vAccel is able to provide a hardware-agnostic API at function granularity, while at the same time it abstracts the hardware-specific logic away from the unikernel. Applications that use vAccel can easily be executed on different devices and hardware in general, and can also be executed on different platforms. This is very important for debugging, especially for unikernels, since the same application can be executed natively on the host, inside a container, in a virtual machine, or, finally, in the unikernel. Moreover, vAccel provides integration with higher-level frameworks like TensorFlow, PyTorch, and more, which means that an application which already uses them can directly run on top of vAccel. Finally, vAccel provides the necessary security for a serverless environment, since all applications are isolated and the user code does not directly access the accelerator. Let's take a closer look at how vAccel operates inside the unikernel. Specifically for unikernels, the plugin component of vAccel consists only of the transport layer. For the time being, we have support for virtio, either as a virtio PCI front-end driver or over vsock, but we can easily add more transport-layer plugins.
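The dispatch role of the runtime described above can be illustrated with a toy sketch. The operation and plugin names below are invented for illustration; the real runtime selects among its loaded plugins at function granularity.

```shell
# Toy sketch of function-granularity dispatch: the runtime routes each
# acceleratable operation to whichever plugin implements it.
# Operation and plugin names are invented for illustration.
dispatch() {
  case "$1" in
    blas_sgemm)     echo "fpga-blas-plugin" ;;        # e.g. an FPGA backend
    image_classify) echo "jetson-inference-plugin" ;; # e.g. an NVIDIA GPU backend
    *)              echo "no plugin for '$1'" >&2; return 1 ;;
  esac
}

dispatch image_classify
```

The point of the design is that the application only names the operation; which device (or transport, inside a guest) actually serves it is decided by the runtime.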
So let's take as an example a BLAS application which runs inside the unikernel. The application issues a BLAS operation using the vAccel API. The vAccel runtime inside the unikernel dispatches this request to the virtio plugin, and the virtio plugin then communicates with the vAccel runtime running on the host. On the host, we have one more vAccel runtime instance, which is directly linked with the virtual machine monitor and receives the requests from the guests, either over vsock or via the backend of the virtio PCI device. The runtime running on the host, based on the type of the request and the available plugins and hardware, offloads the request to the corresponding plugin. For example, a BLAS operation will be offloaded to an FPGA which is configured for BLAS operations. As soon as the result is ready, the vAccel runtime on the host forwards it back to the unikernel, and from there it gets back to the application. For the time being, vAccel supports two unikernel frameworks, Unikraft and Rumprun, but as we saw, vAccel has no dependencies apart from the transport layer, like virtio, so we can easily port it to other frameworks. On the user side, vAccel provides a C/C++ API and bindings for Rust and high-level languages like Python. Moreover, vAccel has an initial integration with TensorFlow, and it can support BLAS operations. Finally, we will present a short demo of deploying an image classification app which runs inside Unikraft, and we will deploy this application on OpenFaaS in a serverless environment.
The setup is similar to the previous demo, with the difference that this time the user functions we deploy are hardware-accelerated using vAccel. First, we will take a closer look at the container we deploy, which contains the execution environment for the unikernel. We can see how we built the container image. This container image also builds the unikernel that we're going to use: we just need Unikraft, we build the application into Unikraft, and then we have the execution environment for the unikernel. The execution environment is based on jetson-inference, since we're running on top of an NVIDIA GPU; as we saw earlier, we need a vAccel runtime instance running on the host, as well as the virtual machine monitor that provides the support for the vAccel virtio driver. Lastly, it's important to note that the entry point for this container image is fwatchdog from OpenFaaS. When we invoke this function, fwatchdog executes the command we see here, which is actually a bash script that runs the qemu command and passes two arguments to the application inside Unikraft; the application needs them, and we will talk more about these arguments later. So let's see how this script works. Here we have the vAccel plugin that we're going to use, which is the jetson-inference plugin, and we also have the qemu command with the virtio vAccel device, plus a directory which is shared between the host and the unikernel so that the application in the unikernel is able to read the image. So let's see how this works in action. We will create the Unikraft image from the beginning, even though we already have it here. We clone Unikraft and the application. We already have a config which someone can directly use, and it contains all the dependencies for vAccel, but let's take a closer look at what those dependencies are. As we can see, selecting vAccel pulls in the virtio vAccel driver for Unikraft, and we have also specified some file-system-related libraries that are needed in order to read the image. As you can see, we don't have any libraries external to Unikraft, only the core internal libraries. Let's build our unikernel; it's going to take some time. The application itself is just a simple application which takes two arguments, the image that we want to classify and the number of iterations for the classification; those are the arguments that we saw earlier. Here we have the Unikraft image. We will copy it and replace the image that we had in the root directory with it, so that the qemu script uses the updated image. Let's use this image as an example. On top you can see the GPU state; right now no process is using the GPU. So let's see what happens with the GPU when we invoke the function. Okay, now we see that the qemu process is using the GPU, and then we have the result: it's a golden retriever, with 86 percent confidence from the classification. What happens when we execute the above command is that fwatchdog receives the image, the image is saved in a file, this file is then shared between the host and the unikernel, it's read by the application, and the application performs the classification. We can take one more example, a face maybe. Oh, let's try this one; I don't know how that's going to work. Okay, and it's a lion face. Is it a lion face? Yeah, okay. And now we will return to Tasos, who will sum up the presentation.
yeah it's okay and now we will return to tasos we will sum up this presentation thank you a lot thank you babi to sum up we've seen how serverless execution based on unikernels
can reduce cold boot times and the attack surface we use vxl to expose hardware acceleration semantics to unikernels we have function-based hardware acceleration and multi-framework support our next step is to develop a pure unikernel runtime for upper layer orchestrators and integrate it to the end-to-end serverless framework this work is partly funded by two
eu-funded projects serrano and 5g complete thanks very much for attending you can have a look at our github repos at the vxl website and at our company's website thanks very much
A question from Julian: with the GPU backend running outside the VM, which provides the main security boundary, does it somehow weaken the isolation? Yeah, so regarding this: the main point of decoupling the hardware execution from the user code is that you can audit the code that is running on the accelerator itself. Presumably the code that would be running on the accelerator is not user-provided; it could be user-provided, but it is audited by the vendor, something that is supported and exposed by the vendor, by the provider that offers this service. Apart from that, there are two modes of executing a vAccel application. The first one uses a virtio device, so the backend is running on the hypervisor, on some part of the hypervisor. The second mode uses vsock, going through another agent, a separate user-space program that is essentially a vAccel application running on the host, so the isolation is provided by this component. Presumably we could add more kinds of isolation and sandboxing on this component, but we haven't tried that yet. All right, so let's see if there are any other questions. Not really from the audience so far.
One thing that popped into my mind: as I said before, I don't know much about these kinds of processes and the operation of serverless appliances. You have a lot of descriptive declarations of what the system actually should do when you start up one function or one container; these are all Dockerfiles or YAML scripts. Is there some way to make this reproducible, or can you, in a verifiable way, reproduce all the functions that you are composing and executing in those applications? Hello, can you hear me? I can hear you, yes. Okay, so can you repeat the question, please? As I said, since it's not my area of expertise, I was just wondering, since you have so many small building blocks, for example for an image classification operation. So, it doesn't matter how this is done: if you define the API of the function, say "I want to classify an image", and the input parameters are the image file or the image stream and some other parameters that refer to, maybe, the model of classification, the mode of classification, stuff like that; if you define this, and there's a hardware-accelerated operation that performs it, then you could do many things on many kinds of hardware in this way. Is this what you asked for?
Yeah, to some degree. So far there are not many other questions left, so if there's something you want to add to your talk... I see there's a performance-related question. So the question is whether we have measured the drop in performance of unikernel-based functions with regard to a non-unikernel FaaS framework. The thing is, we haven't been able to measure the performance just yet; we're still in the process of developing the whole framework, getting it together, and finishing the end-to-end example, so it's a bit early for performance numbers. We do expect that the spawn times will be a lot better. All right, if there are no other questions, we can close this Q&A session. So again, thanks for joining, thanks very much. All right, take care.