
Gneiss: A Nice Component Framework in SPARK


Formal Metadata

Title
Gneiss: A Nice Component Framework in SPARK
Number of Parts
490
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Gneiss is an abstraction layer for component-based environments that aims to provide a foundation for formally provable components. It enables the creation of platform-independent, asynchronous components in SPARK and provides function contracts that allow proving correct interaction with the underlying platform.
Transcript: English (auto-generated)
Thank you. Hello everyone, I'm going to present Gneiss, our nice component framework in SPARK. So why did we go and build a component framework? When we build component-based systems, we can't just go and rebuild all our software, because it's simply too complex. And another thing is that some software, like web browsers, you can't just verify. So if you need correct software, our architectural goal is to take untrusted software, like a network stack and a web browser, which has already been built by other people, and connect them together. But when we connect two untrusted things, some bad things might happen, so we put a trusted proxy in between them that, for example, enforces policies and validates the protocol they speak.
And this proxy needs to be correct, otherwise we wouldn't achieve anything. So what do we need to have correctness? Correctness for us is correctness by proof. We don't just want to test it and say, okay, it looks like it's doing the right thing; we want to make as sure as possible that it doesn't crash or raise an exception. So we have to prove it, at least for the absence of runtime errors, and in some cases we also want to prove functional correctness.
So what do we need to prove this? We need the tools: a language that allows us to formalize specifications, and a mapping between the formalization and the actual implementation. Another thing is that proving is quite hard, so you might need two or three times the time to prove a piece of software that you needed to build it. Also, provability is not always given: you might build some software, then try to prove it, and recognize that your architecture just has some flaws that make it impossible to show certain properties.
So what is our toolset? We use SPARK, which is based on Ada, so it's a compilable language; performance-wise it is in the league of C or C++, and you can compile it with GCC or LLVM. A very important thing is that it brings a verification toolset.
And as the formalization is included in the language, we don't need to show the mapping between specification and implementation. What you can see here, for example, is a function that gives us the absolute value. In two's complement, the negative side of the integer range is larger than the positive side, so there is a single integer, the smallest one, that cannot be converted to a positive value: it would just overflow and return a negative value again. So what do we have to say here? We give a precondition that the argument must be larger than this smallest value, and on the other hand the function guarantees us that the result will always be greater than or equal to zero.
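A minimal SPARK sketch of such a contract, with illustrative names rather than the exact slide code, might look like this:

    function Absolute (I : Integer) return Integer with
       Pre  => I > Integer'First,  --  exclude the one value whose negation overflows
       Post => Absolute'Result >= 0;

    function Absolute (I : Integer) return Integer is
       (if I < 0 then -I else I);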
Also, in SPARK, functions are always free of side effects, so we don't have to say anything else: we put the integer in and something comes out, and we don't have to worry about our application state. Procedures are another thing. This procedure will just increment the integer you put in. It says, as a precondition, that the integer must be smaller than the greatest integer, to prevent an overflow, and the output will be one larger than what we put into the procedure. And very importantly, we have to annotate our global state, which in this case is null, because it doesn't touch any global state.
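Again a hedged sketch with illustrative names, not the exact slide code:

    procedure Increment (I : in out Integer) with
       Global => null,              --  touches no global state
       Pre    => I < Integer'Last,  --  prevent overflow
       Post   => I = I'Old + 1;

    procedure Increment (I : in out Integer) is
    begin
       I := I + 1;
    end Increment;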
But if it did, we would have to annotate that too, so we can also show side effects on the platform. What is important for Gneiss is reusability: we want a platform abstraction so that we can just move our formally verified applications to the next platform without having to re-verify everything. In this case it's really important to have interfaces that can be mapped to different semantics, because most platforms employ different semantics for the same interfaces. And we have to rely on dependencies that can be satisfied by all platforms, so applications written for Gneiss cannot depend on libc, for example, because not all platforms implement a libc, or even the specific flavor of libc you have on Linux.
Okay, this is probably something that has been done before, so this is nothing new; the new thing comes here. We also want to have provability, and provability in the first place means we have to provide a formal specification of how our platform behaves. This is the actual hard part, because many platforms require the application to behave in some specific way, and they also provide some guarantees about how the platform will behave. But the problem is that on most platforms this isn't annotated. You just have to find out, and then you move to the next platform and you notice, okay, the guarantee we thought we had just isn't there, so we can't use it. For this I'm going to show an example, our block client interface. Just as a disclaimer: the C++ and SPARK code you see might compile, but it is very simplified, because the actual problem is too complex to put on a slide.
As a short recap, what is a block interface? It is the interface you use to talk to your hard disk or your SD card. You have blocks of a specific size, and you have some kind of request, which you send with a descriptor. This descriptor contains the starting block, how many blocks you want to write, and probably a pointer to the memory where you want your data to go. How it works is: you create this packet descriptor, then you allocate memory for the request. If it's a write request, you have to put your data into the request first; if it's not a write request, you do nothing. Then you send your request to the device; the device tries to either write or read the data and sends the response back to you, and if you sent a read request, you just read the data out of the response.
So how do we formalize this? Here is a very simplified part of the Genode API for the block interface, because Genode was the first platform we targeted. What do we have? We have a packet descriptor; we say it's a write packet, we start at some start block, and we have some count of blocks. Then we have a try block where we allocate the packet, and if that fails we get an allocation error. And before we can submit the packet to the platform, which means sending it to the other side, we have to check if the platform is ready for a submit. So what can we read from this code? To formalize it, we see that our packet descriptor can simply be created, no problem there; the allocation might fail, so we need some error handling in this case; and we have to check whether the platform is ready to submit before we actually submit the packet.
Now we have to formalize this, so we build a SPARK API. We define our packet descriptor, which has a start, a length, and an operation, and we check whether it is allocated. Since we want to prove that there are no exceptions, we have to catch the exception in the platform code and encode this information into a Boolean; that way we can handle it in our own code. We also have a function Ready that checks if the platform is ready, and our Submit procedure has those two preconditions: the packet must be allocated, and the platform must be ready.
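A simplified sketch of this first, still naive, SPARK API; the names are illustrative:

    type Operation is (Read, Write);

    type Packet is record
       Start     : Natural;
       Length    : Natural;
       Op        : Operation;
       Allocated : Boolean;  --  platform code catches the allocation
                             --  exception and encodes the outcome here
    end record;

    function Ready return Boolean;  --  is the platform ready for a submit?

    procedure Submit (P : Packet) with
       Pre => P.Allocated and then Ready;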
What happens if we just use this? Well, this code, I didn't try it, but I think it would prove, because we have our packet and we simply say that Allocated is True. We never called allocate, but nobody prevented us from just claiming that. So we check if it is allocated and ready: okay, Allocated is True, and if Ready holds we just submit it, twice. The first submit would fail at runtime, because we haven't actually allocated anything. And the problem with the second is that maybe the platform can only handle one packet at a time, but we submit twice; we checked that it is ready, and since Submit doesn't declare any change to the platform state, the prover assumes Ready still holds.
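Roughly, the problematic client code looks like this; it satisfies the contracts but is still wrong:

    declare
       P : Packet := (Start => 0, Length => 1, Op => Write, Allocated => True);
       --  nothing was ever allocated, but nothing stops us from claiming it
    begin
       if P.Allocated and then Ready then
          Submit (P);
          Submit (P);  --  also proves: Submit declares no effect on the
                       --  platform, so the prover assumes Ready still holds
       end if;
    end;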
So we have to fix two things: we have to make this field inaccessible, so that it can only be changed by the actual platform implementation, and we have to annotate some state onto Submit.
So what do we do? We make the packet limited private. What does this mean? Well, private is pretty clear: we just can't access the fields of the packet. Limited is a more special thing: it says we can't copy the object, so it is always passed by reference; there are no copies. You might be able to assign a new value to a variable of this type, but then you lose the old one. This is important because the state of a packet might be attached to some platform state, for example a memory location where the actual data lives. If you could copy it and then change the state of the copy, you would have two copies that have different states but point at the same memory location. Then you have an invalid state, and you can't even tell from the API, so you're just not allowed to copy it. To create packets we now need a Create function, which of course doesn't perform an allocation. We also encode the packet state in an enum, so it's either Empty or Allocated, we have a get function for the state, and the Create function always gives us an empty packet.
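A sketch of the hardened type; the private part shown here is just one possible completion:

    package Block_Client is

       type Packet_State is (Empty, Allocated);
       type Operation is (Read, Write);

       type Packet is limited private;  --  no field access, no copies

       function State (P : Packet) return Packet_State;

       function Create return Packet with
          Post => State (Create'Result) = Empty;

       procedure Allocate (P : in out Packet; Success : out Boolean) with
          Post => State (P) = (if Success then Allocated else Empty);

    private

       type Packet is limited record
          Start  : Natural      := 0;
          Length : Natural      := 0;
          Op     : Operation    := Read;
          S      : Packet_State := Empty;
       end record;

       function State (P : Packet) return Packet_State is (P.S);

    end Block_Client;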
Now we need to model the platform state. There is something called abstract state in SPARK; otherwise, we would have to implement the platform in SPARK if we wanted real state. Instead, we just tell SPARK: we know there is state, and it might be modified, but we can't tell you exactly what it is. So we declare an abstract platform state, and we say that Ready gets input from this platform state. Now SPARK knows that if the platform state is modified, Ready might also be modified. And here comes the important part: our Submit procedure has an In_Out on that state, so it depends on the platform state and it also modifies it. If we now call Submit, the previous result of Ready becomes invalid, because the platform state has been modified, so Ready might have changed as well.
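A sketch of those annotations, assuming a package named Platform and assuming the declarations from above are visible:

    package Platform with
       Abstract_State => Platform_State
    is
       function Ready return Boolean with
          Global => (Input => Platform_State);

       --  Submit depends on and modifies the platform state, so a
       --  previously computed Ready result is invalidated by calling it
       procedure Submit (P : in out Packet) with
          Global => (In_Out => Platform_State),
          Pre    => State (P) = Allocated and then Ready;
    end Platform;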
Another thing is that we now, of course, check in the precondition that the packet is in the state Allocated, and not just the Boolean P.Allocated, as shown above. Oh, and I forgot something: of course, when we submit a packet, we can't submit it twice, so the state of the packet after submission is always Empty. You can then reuse the packet, but you can't submit it twice, which could otherwise make the device write or read twice from the same memory location.
Okay, now we have our first platform, we can prove it, and it's relatively sound. But then we want to go, for example, to some Linux-based C platform, and we have a C API. We have our struct, much like before: it has a result field and a packet pointer, a pointer that points to our memory location. We call malloc, and we can check whether malloc failed or not, and then we just write the packet out to the system, and write might return an error.
Our original API was to first check whether we are able to submit, and then submit; but here we just submit and get an error back. Okay, maybe we can make sure that no errors occur, so let's look at the errors. The first one, 'not implemented', is easy to check. Invalid arguments, or a file offset that is out of bounds: we can check both of these things before actually writing, and we can also check the file descriptor. But then there's this one: if we don't know what happened, but we still failed, we get an out-of-resources error. On Linux we simply can't check in advance that this isn't going to happen; there might always be an out-of-resources error, no matter what we do beforehand. So our API just won't fit these semantics, and we have to change our API again.
What we do is give Submit the ability to fail. The precondition is now only that the packet is allocated, because we can't check anything beforehand. But the postcondition is more interesting: the state might be either Empty or Allocated after calling Submit. If it is Empty, the submit was successful; if it is still Allocated, we still have the same packet, and we can just submit it again, because it hasn't been submitted.
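A sketch of the weakened contract, with the same illustrative names as before:

    procedure Submit (P : in out Packet) with
       Global => (In_Out => Platform_State),
       Pre    => State (P) = Allocated,            --  no Ready check is possible here
       Post   => State (P) in Empty | Allocated;
    --  Empty: the platform accepted the packet
    --  Allocated: out of resources; the same packet may be submitted again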
And the global state stays the same. But how do we handle the first platform now, since it has different semantics? Well, this is quite easy: we can just use a wrapper around the native submit we had before. We check if the platform is ready; if so, we submit, and if it isn't ready we just don't touch the packet, so it stays exactly as it was before.
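The Genode-side wrapper could then look roughly like this; Native_Ready and Native_Submit are hypothetical names standing in for the original platform calls:

    procedure Submit (P : in out Packet) is
    begin
       if Native_Ready then
          Native_Submit (P);  --  the old submit; leaves P Empty on success
       end if;
       --  if the platform wasn't ready, P simply stays Allocated, which
       --  the weaker postcondition explicitly allows
    end Submit;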
So, as far as we've got now: our framework is asynchronous, so we don't have any threads, because threads are often not deterministic, and things that are not deterministic are really hard to prove, if they can be proven at all. We support capability-based systems. And since we don't have function pointers in SPARK, we implement all our callbacks via generics: you just instantiate a generic with a procedure, and in the platform code, which isn't always SPARK, we can then make a function pointer out of it, roughly as in the sketch below.
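A hedged sketch of the generics-instead-of-function-pointers pattern:

    generic
       with procedure Event;  --  the callback, fixed at instantiation
    procedure Register_Handler;

    --  application side:
    procedure My_Event;
    procedure My_Register is new Register_Handler (Event => My_Event);
    --  the platform code, which isn't always SPARK, can take an access
    --  to the instantiated procedure and pass it to the native event loop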
Then there is limited dynamic resource allocation. In SPARK you don't want to have, and I think you actually can't have, dynamic resource allocation, because you just can't prove it, so Gneiss works completely without dynamic resource allocation. But some platform APIs, for example on Linux, just need it; there it is done in the backend, but it is invisible to the actual application.
The same goes for memory pressure: all interfaces are designed so that the resource consumer can decide when to consume a resource, so it can always make sure it has the resources to actually process a packet, for example. There is no pressure on the receiving side; the sending side has to determine whether it can actually send something. Also, there is no aliasing. This is what I explained with the limited packets: if you have two packets, copy one, and they both point at the same memory location, that is aliasing.
As for the platforms we support, we have implemented these concepts for Genode, Linux, and Muen, so these are three quite different systems. Currently we have a block client and server implementation, so we can communicate with block devices; we have a timer; and we are working on arbitrary messaging and shared memory. Designing a new interface takes quite some time, and you have to test it on different platforms because of the issues I showed, so it's sometimes a bit slow progress. Yeah, that's already it, thank you. Are there questions?
Eiffel? Yes. Okay, the question was whether I know Eiffel, which also supports contract-based programming, and whether it is used for formal verification. To be honest, I've never heard about that.
Okay, the question was what applications we envision being built with it. Well, on the one hand, protocol verifiers: my colleague Tobias Reiher talked about RecordFlux yesterday, and the idea is that we include RecordFlux in Gneiss applications. We can then build proxies that have a formally verified binary parser and can therefore check binary streams or protocols for their correctness, like a kind of firewall for unsafe applications.
The question was about the precondition 'greater than the first integer': if the value is of type Integer, isn't it given that it is at least equal to or greater than the first integer? Yes, but the precondition says it must be strictly greater than, not at least, the first integer. That value is in the range of Integer, but we want to exclude it, because it is a value we can't process: for the first integer, which is minus 2 to the power of 31, there is no positive counterpart in Integer, because the largest integer is 2 to the power of 31 minus 1. Yes, we're talking about 32-bit, or in general two's-complement, integers.
Okay, the question was, I showed this scenario, so how far are we with the TLS proxy? We don't have an actual TLS proxy yet, but we have a working, formally proven TLS implementation in RecordFlux, so you can parse TLS 1.3, and we have a working proxy scenario, for example for the block device. So we can build proxies and we can build TLS; I think we can build this too. Any other questions?