Formal methods: Decentralized Runtime Enforcement of Message Sequences in Message-Based Systems
Formal Metadata
Title: Decentralized Runtime Enforcement of Message Sequences in Message-Based Systems
Number of Parts: 30
License: CC Attribution 4.0 International. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/52865 (DOI)
Language: English
Transcript: English (auto-generated)
00:01
Hi everybody. Our paper is about the decentralized runtime enforcement of message sequences in message-based systems. We consider a message-based system that consists of distributed components which collaborate via asynchronous message passing.
00:20
These systems may consist of off-the-shelf components developed by different vendors, and hence we have no access to their code. In such systems, certain sequences of messages may lead to bad behaviors. Because of the off-the-shelf components, it is not possible to prevent the formation of such message sequences at design time, so we aim to prevent their formation at runtime.
00:46
However, the order of messages cannot be determined exactly due to the absence of a global clock. As an example, assume a building that consists of different locations named A to E, where
01:02
the location E is restricted and a visitor must enter the restricted location through a legal path. Each location is equipped with a smart security camera and a smart door, and the visitor must use the smart door to enter a location. When the smart door of a location has been opened by the visitor, the door sends a message to a central system.
01:29
The only legal path to the restricted location is through the consecutive locations A, C, and then E, which can be detected by this message sequence.
01:41
Similarly, if a visitor enters the restricted location by passing through the consecutive locations A, B, and then E, it can be inferred that the visitor has accessed the restricted location illegally. The paths between the different locations of this building are such that if the consecutive
02:03
locations B and D are visited, then the visitor returns to the location A. Hence, the message sequence generated by passing through the locations A, B, D, C, and E does not violate this security rule,
02:21
because the visitor returns to the previous location A by passing through the consecutive locations B and D. In general, we focus on unwanted message sequences in which the occurrence of some messages contributes to the formation of a sequence,
02:41
while some other messages may cancel the effect of the previous ones. Before I explain how we prevent the formation of unwanted sequences at runtime, let me note some preliminaries. We assume that in message-based systems, two messages sent directly from one process
03:02
to another will be delivered and processed in the same order in which they are sent. Each process has a unique identifier and a message queue; a process sends a message like M to a target process using the target's identifier. Each process takes messages from its queue one by one in FIFO order and invokes a handler according to the name of the message.
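As a rough illustration of this assumed process model (a minimal Python sketch with illustrative names, not an implementation from the paper):

```python
import queue
from dataclasses import dataclass

@dataclass
class Message:
    name: str        # e.g. "M1"
    sender: str      # identifier of the sending process
    payload: object = None

class Process:
    def __init__(self, pid, handlers):
        self.pid = pid                  # unique process identifier
        self.inbox = queue.Queue()      # FIFO queue: messages are processed in arrival order
        self.handlers = handlers        # maps a message name to its handler function

    def send(self, msg_name, target, payload=None):
        # deliver directly into the target's queue; the FIFO queue preserves the
        # order of messages sent directly from this process to the target
        target.inbox.put(Message(msg_name, self.pid, payload))

    def step(self):
        # take the next message in FIFO order and invoke the handler matching its name
        msg = self.inbox.get()
        self.handlers[msg.name](self, msg)
```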
03:29
As there is no global clock in the system, we use the happened-before relation to determine the order of messages, which is implemented by vector clocks.
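A minimal vector-clock sketch, assuming clocks are represented as Python dictionaries keyed by process identifiers (illustrative only):

```python
def vc_new(pids):
    return {p: 0 for p in pids}

def vc_tick(vc, pid):
    vc[pid] += 1                       # local event or send: advance this process's entry

def vc_merge(local, received):
    for p in local:                    # on receive: take the component-wise maximum
        local[p] = max(local[p], received.get(p, 0))

def happened_before(a, b):
    # a happened before b iff every entry of a is <= the corresponding entry of b, and a != b
    return all(a[p] <= b[p] for p in a) and a != b

def concurrent(a, b):
    return not happened_before(a, b) and not happened_before(b, a)

# e.g. a = {"P1": 1, "P2": 0} and b = {"P1": 0, "P2": 1} are concurrent
```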
03:43
To formalize the unwanted message sequences, we use the sequence automaton. This automaton is an extension of the non-deterministic finite automaton where transitions are partitioned into two sets of forward and backward transitions. Forward transitions contribute to the simple paths from initial states to final states.
04:07
The backward transitions cancel the effect of the occurrence of messages labeled on the forward transitions. Between the source and the destination states of a backward transition, there must be a path made of at least one forward transition.
04:28
For instance, this automaton describes which message sequences must not be formed at runtime. The labeling of transitions denotes sending a message. For example, the triple P1, M7, P3 denotes the action send(P1, M7, P3).
04:57
Here, if first the message M7 is sent and then M3 is sent, while the message M5
05:06
is not sent between M7 and M3, then the message-sending sequence M7, M3 is formed. However, in the message-sending sequence M7, M5, M3, sending the message M5 has cancelled the effect of M7,
05:31
and so sending M3 will not form a sequence, as the reached state Q0 is not a final state.
05:41
But the sequence is formed by sending the messages M7, M5, M7, and M3. To prevent the sequence formation at runtime, we need two auxiliary functions.
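Before turning to those functions, here is a small sketch of how this example automaton could be represented and replayed; the state names and transition layout are reconstructed from the talk, and senders, receivers, and nondeterminism are ignored for brevity:

```python
FORWARD = {            # (state, message) -> next state, forward transitions
    ("Q0", "M7"): "Q1",
    ("Q1", "M3"): "Q2",
}
BACKWARD = {           # backward transitions cancel the effect of earlier forward steps
    ("Q1", "M5"): "Q0",
}
INITIAL, FINAL = "Q0", {"Q2"}

def sequence_formed(messages):
    """Replay a message-sending sequence and report whether a final state is reached."""
    state = INITIAL
    for m in messages:
        if (state, m) in FORWARD:
            state = FORWARD[(state, m)]
        elif (state, m) in BACKWARD:
            state = BACKWARD[(state, m)]   # the effect of the earlier sends is cancelled
        if state in FINAL:
            return True
    return False

assert sequence_formed(["M7", "M3"]) is True          # the unwanted sequence forms
assert sequence_formed(["M7", "M5", "M3"]) is False   # M5 cancels the effect of M7
assert sequence_formed(["M7", "M5", "M7", "M3"]) is True
```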
06:00
The first function defines the preceding transitions of a transition; let me illustrate it by an example. Consider this automaton. When the message M6 is sent, a transition like Q5 to Q6 may lead to the formation of a sequence from the initial state up to Q6.
06:22
To form such a sequence, it is necessary that at least one message labeled on one of the preceding transitions of Q5 to Q6, which here is the transition Q0 to Q5, has occurred, and that no message over the backward transitions has cancelled the effect of that preceding transition.
06:43
The pre-transitions of the transition Q5 to Q6 are the set of preceding transitions whose labeled messages can be sent before M6 in a sequence and whose destination is the same as the source state of Q5 to Q6.
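A hedged sketch of this pre-transition helper, with forward transitions stored as (source, message, destination) triples; the triple layout and the label on the transition Q0 to Q5 are assumptions for illustration:

```python
def pre_transitions(t, forward):
    # preceding forward transitions: those whose destination equals t's source state
    src, _msg, _dst = t
    return {u for u in forward if u[2] == src}

forward = {
    ("Q0", "M1", "Q5"),   # label "M1" assumed for illustration
    ("Q5", "M6", "Q6"),
}
print(pre_transitions(("Q5", "M6", "Q6"), forward))   # -> {("Q0", "M1", "Q5")}
```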
07:04
The second function is the vio-transition function. This function defines the set of forward transitions whose effects are cancelled by a backward transition. Here, (q, m, q') is a backward transition and (qn, m', q0) is a forward transition.
07:24
For example, in this automaton, a backward transition Q4 to Q0 can cancel the effect of all forward transitions on a path from Q0 to Q4. Up to now, I have explained how the unwanted message sequences can be formalized by a sequence automaton.
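A hedged sketch of this cancellation: it computes the forward transitions that lie on a forward path from the backward transition's destination back to its source. The triple layout and the message labels are assumptions, and "vio" is used here as shorthand for these cancelling backward transitions:

```python
def reachable_states(start, forward):
    # states reachable from `start` via forward transitions (including `start` itself)
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, _msg, dst) in forward:
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def cancelled_by(backward_t, forward):
    # forward transitions lying on some forward path from the backward
    # transition's destination state back to its source state
    b_src, _msg, b_dst = backward_t
    from_dst = reachable_states(b_dst, forward)
    return {t for t in forward
            if t[0] in from_dst and b_src in reachable_states(t[2], forward)}

forward = {("Q0", "Ma", "Q1"), ("Q1", "Mb", "Q4"), ("Q1", "Mc", "Q9")}  # labels assumed
print(cancelled_by(("Q4", "Md", "Q0"), forward))
# Q0 -> Q1 and Q1 -> Q4 are cancelled; Q1 -> Q9 is not on a path to Q4
```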
07:47
Now I want to explain how this automaton can be decentralized among monitors such that each monitor has only partial access to the automaton. For this, we break down the automaton into a set of transition tables.
08:03
Each table belongs to a monitor and contains the information of the transitions corresponding to its process. For each transition, the set of its pre-transitions is also stored in the table. Since the effect of a pre-transition may be violated by the occurrence of its vio-transitions,
08:22
it is necessary to store the set of vio-transitions for each pre-transition in the table, too. For example, here this automaton is broken down into three transition tables, which belong to the monitors M1, M2, and M3. The first table contains the transitions whose message sender is P2, denoted by the green box.
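A hedged sketch of how such per-monitor tables could be assembled from the automaton, assuming transitions carry (sender, message, receiver) labels and that pre(t) and vio(p) are the two helper functions just discussed (names are illustrative, not the paper's):

```python
from collections import defaultdict

def build_tables(transitions, pre, vio):
    """transitions: iterable of (state, (sender, message, receiver), state) tuples.
    pre(t) returns the pre-transitions of t; vio(p) returns the backward
    transitions that cancel the effect of the pre-transition p."""
    tables = defaultdict(list)
    for t in transitions:
        sender = t[1][0]                        # the monitor of this sender owns the row
        tables[sender].append({
            "transition": t,
            "pre": {p: vio(p) for p in pre(t)}  # each pre-transition with its vio-transitions
        })
    return tables                               # one transition table per monitored process
```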
08:51
In the choreography-based setting, monitors collaborate with each other using monitoring messages. A monitor sends the monitoring message ask to inquire whether a message has been sent and receives the response via the monitoring message reply.
09:07
For example, here the monitors communicate with each other to avoid the formation of the sequence send M1, send M2 at runtime. As the message M2 is the last message in this sequence, the process P2 must be blocked before sending the message M2,
09:27
until its monitor gets information about the partial sequence formation from the others and makes sure that sending the message M2 does not complete the sequence formation.
09:41
For this, the monitor M2 sends the monitoring message ask to M1 to check if the message M1 has been sent. The monitor M1 responds to the monitor M2 by sending the monitoring message reply. The monitor M2 may receive that the message M1 has not been sent.
10:03
However, due to network delay, this response may arrive late, and meanwhile the inquired message M1 may be sent before the monitor M2 receives the response. In that case, the process P1 has sent the message M1 before the message M2, and the sequence has been formed.
10:24
Hence, to avoid the sequence formation, the inquired message M1 must not be sent by the process P1 until the message M2 has been sent by the process P2 and the monitor M2 has notified the monitor M1.
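A hedged sketch of the three kinds of monitoring messages and of the waiting rule from this example; the class and field names are assumptions, not the paper's:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class Kind(Enum):
    ASK = auto()      # inquire whether a message has been sent
    REPLY = auto()    # answer to an ASK, possibly carrying the send's vector clock
    NOTIFY = auto()   # release messages that were held back because of an ASK

@dataclass
class MonitoringMessage:
    kind: Kind
    sender_monitor: str            # e.g. "M2"
    inquired_message: str          # e.g. "M1"
    sent: Optional[bool] = None    # filled in a REPLY
    vector_clock: dict = field(default_factory=dict)

# Waiting rule from the example: once M1's monitor has answered an ASK about M1
# with "not sent", P1 must keep M1 on its waiting list and not send it until a
# NOTIFY arrives saying that M2 has been handled.
```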
10:44
In an enforcement setting, monitors must communicate the result of the partial sequence formation using either a pulling or a pushing strategy. We use a pulling strategy for collaboration among monitors. Let me explain by an example why, with this strategy, monitors determine the order of messages more accurately.
11:03
Consider the property that the message M2 must never be sent after the message M1. Assume that the process P1 sends the message M1 after the message M2 has been sent, but the vector clocks of these messages are concurrent, as depicted in the figure.
11:22
With the pushing strategy, the monitor M1 must inform the monitor M2 of the moment at which the message M1 has been sent. When the process P2 sends the message M2, the monitor M2 cannot conclude anything about the violation of the property, as it has not yet received the moment at which the message M1 was sent.
11:46
After the monitor M1 pushes the moment of sending the message M1, the monitor M2 cannot conclude the order between the two messages accurately and may wrongly decide that the property does not hold, as the vector clocks of the messages are concurrent.
12:05
However, with the pulling strategy, the monitor M2 inquires about the sending status of the message M1 from the monitor M1 after its process sends the message M2. If the process P1 has not sent the message M1 yet, then its monitor responds with a false result.
12:27
Upon reception of this response, the monitor M2 can accurately conclude that the property is not violated. Now I want to explain our prevention algorithm. In our algorithm, the process Px maintains
12:44
a variable which denotes the list of messages labeled on transitions reaching final states. The process is blocked before sending such last messages until its monitor makes sure that sending the message does not complete a sequence formation.
13:01
Also, the process and its monitor have three shared variables, which are, respectively: a list of triples consisting of a message that the process is going to send, the vector clock of the process upon sending the message, and the type of monitoring message that the monitor must send to other monitors, that is, ask or notify;
13:23
a pair consisting of the message on which the process Px is blocked and the status of the message to be sent, which can be either OK or error; and a list of messages that must not be sent by the process Px until its monitor receives a notify message.
13:44
The monitor Mx also maintains a transition table and a variable which is a list of triples consisting of a transition that has been taken before, the vector clock of the process upon sending the message over that transition, and the result of the partial sequence formation up to that transition.
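A hedged sketch of these shared and monitor-local variables as plain data structures; the names are illustrative, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    """Variables shared between a process Px and its monitor Mx."""
    to_send: list = field(default_factory=list)  # triples (message, vector clock, "ask" | "notify")
    blocked: tuple = (None, None)                # (message Px is blocked on, "OK" | "error" | None)
    waiting: list = field(default_factory=list)  # messages that must wait for a notify

@dataclass
class MonitorState:
    """State local to the monitor Mx."""
    table: list = field(default_factory=list)    # Mx's transition table (as sketched above)
    history: list = field(default_factory=list)  # triples (transition, vector clock, result)
```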
14:07
Depending on the position of the message M in the unwanted sequences, that is, whether M is the last message of a sequence or not, our algorithm behaves differently. If M is not the last message in any sequence, like the message sent from the process Px to P1 in this figure,
14:27
the process Px sends the message and appends the triple consisting of the message M, its vector clock, and the type of monitoring message, which here is ask, to the end of the shared list. The monitor Mx takes a message from the shared list.
14:44
If the type of the message is ask, it finds those rows of the transition table whose transition is labeled with the message M. For each row, the monitor Mx inquires about the taken status of the pre-transitions and vio-transitions in
15:03
the row by sending appropriate monitoring messages to the monitors corresponding to the senders of the messages over these transitions. Then, Mx adds a temporary record to its history. Adding the temporary record is helpful when another monitor asks the monitor Mx about the taken status of the transition.
15:28
In such cases, the monitor Mx must postpone its response to the inquiry until the result of the transition is determined.
15:40
Based on the blocking status of the inquired message, the inquired monitor replies to the monitor Mx. If there exists at least one taken pre-transition, like the transition Q0 to Q1, that was not taken before
16:01
its vio-transition, like Q1 to Q0, and whose taken time is before the sending moment of the message M, then it is concluded that a bad prefix is going to be formed.
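A hedged sketch of this check over the replies Mx gathers, assuming each reply records whether the pre-transition was taken, the vector clock at which it was taken, and whether a vio-transition cancelled it afterwards; the field names are assumptions, and happened_before is the vector-clock relation sketched earlier:

```python
def bad_prefix_forming(pre_replies, m_clock, happened_before):
    # pre_replies: one dict per pre-transition with keys
    #   "taken"     -> bool, the pre-transition has been taken
    #   "taken_at"  -> vector clock at which it was taken
    #   "cancelled" -> bool, one of its vio-transitions was taken afterwards
    # m_clock: vector clock of the process upon sending the message M
    return any(
        r["taken"] and not r["cancelled"] and happened_before(r["taken_at"], m_clock)
        for r in pre_replies
    )
```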
16:28
If M is the last message of a sequence, the first seven steps of the algorithm are the same as in the previous case, with these differences: the message M is not assigned a vector clock, as the process Px is blocked before sending the message M; also, the process Px sets the shared blocked message to the message M and is then blocked.
16:45
After the seven steps, if a sequence up to the message M has been formed, based on the value of the result, then the monitor Mx updates the shared blocked pair to error to inform its process that sending the message M leads to a complete sequence formation.
17:05
Otherwise, it updates the shared pair to OK to inform its process that the message M can be sent safely. The process Px either sends M or sends an error message, according to the status of the message in the shared pair,
17:23
and appends the triple consisting of M, its vector clock upon sending M, and the notify monitoring-message type to the end of the shared list. The monitor Mx takes the triple with the message type notify from the shared list and then sends the corresponding monitoring messages.
17:43
Finally, the monitor My, which receives the notify message from the monitor Mx, removes the message labeled on the inquired transition from its shared waiting list. To evaluate our algorithm, we investigated the effect of different parameters on its efficiency,
18:07
including the number of processes, the maximum number of message handlers of processes, the maximum message communication chain between processes, and the length of the message sequences.
18:20
The maximum message communication chain denotes the maximum number of processes in the chain of message handlers that send messages to each other. We developed a test case generator which produces message-based applications with different values of such parameters and a set of message sequences according to the generated application.
18:42
We also developed a simulator which simulates the execution of each application under our prevention algorithm and then measures the communication overhead of our algorithm. We generated four applications with three, six, nine, and twelve processes, where each process has at most five message handlers,
19:06
and the maximum message communication chain in each application is four, five, six, and seven respectively. Our results show that as the length of the message communication chain increases, the number of monitoring messages and the average blocking time grow linearly for complex applications.
19:28
We also evaluate the average number of monitoring messages, the average of memory consumption of the monitors,
19:41
and the average time to enforce a property for the application with nine processes. The results show that the average number of monitoring messages and the average memory consumption of the monitors grow linearly as the length of the sequences increases. Also, as the length of the sequence increases, the monitors are involved in
20:02
more collaboration and hence need more time to gather all responses from the other monitors. To conclude, as I said, we addressed the choreography-based runtime prevention of message sequence formation in message-based systems, where distributed processes communicate via asynchronous message passing.
20:25
We have assumed that there is no global clock and that the network may postpone the delivery of messages. Our proposed algorithm is fully decentralized in the sense that each process is equipped with a monitor which has only partial access to the property specification.
20:47
Our experimental results show that with the increase of the complexity of application or the length of message sequences, the number of monitoring messages, memory consumption, and the time to prevent the sequence formation grow linearly.