
Finding a Universal Execution Strategy for Model Transformation Networks


Formal Metadata

Title: Finding a Universal Execution Strategy for Model Transformation Networks
Number of Parts: 10
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Transcript: English (auto-generated)
Hello, my name is Heiko Klare. I'm going to present joint work with Joshua Gleitze and Erik Burger, which is about preserving consistency between multiple artifacts, or as we say, models, involved in a software development process by means of model transformations. In particular, we have focused on finding out how often and in which order to execute multiple such transformations to preserve the consistency between multiple models. Let us first take a look at an ordinary software development scenario. There are usually many people involved in such a process, taking different roles and using different tools to describe the system from different viewpoints. For example, we may have software architects describing the system architecture in languages like the UML, and the architects may also be interested in other quality properties of the system, describing or analyzing them, for example, using the Palladio component model to predict the software performance. Of course, we also have developers using a programming language like Java to describe the system implementation, and they may use additional artifacts, for example, to describe web service APIs of the system under development. We may also have UX designers, who use some tools for designing the UI of the system and then implement it in Java code. And of course, there can be multiple more roles, but even in the simplest scenario where we only have one role, we usually do not only have the code, but additional artifacts describing the software system. And of course, these models developed in the different tools are not independent of each other, but there are relations between them across which we have to preserve their
consistency. So for example, let us consider that the architects introduce a change to the software architecture. Then the modification of the UML model may affect other models, for example, requiring the adaptation of the Palladio component model and requiring the adaptation of the code implementing the system. Then, in consequence, the adaptation of the code
may require the APIs to be adapted, and we can also have feedback cycles where we then need to adapt the code because of these changes to the APIs. And again, we can have the requirement to adapt the architecture description because of the modifications to the Java code.
So there can be complex dependencies and complex processes to preserve the consistency of these dependencies between the different models involved in such a process. And of course, it is difficult to preserve the consistency of all these dependencies by hand. This is pretty expensive and it's also error-prone to do this process by hand, which is
why automation is beneficial. One means to automate that process is model transformations. You can basically think of a transformation as a definition of how to update one model after another one was changed. So this is restricted to the binary case where we
have binary or bi-directional transformations defining how one model has to be updated after the other was changed in both directions. Of course, you can also think of a multi-directional transformation, simply putting all involved models into one relation. We decided to stick to binary or bi-directional transformations rather than a multi-directional one because
of especially two reasons. First of all, it improves comprehensibility, because it is pretty hard to comprehend, and thus also to develop, a multi-directional transformation between all the involved artifacts. And one or multiple people would even need to have the knowledge to define such an overall relation. In addition, it fosters reusability
because we can reuse each of these bi-directional transformations in different contexts while a multi-directional transformation relating all models is tied to exactly these types of models that we have put into that relation. In addition, bi-directional transformations are well researched. So which properties they have, how we can improve these properties,
how they can be defined has been well researched in recent years. But what is lacking is knowledge about how the combination of these transformations into a network works. And next to problems like ensuring compatibility of the transformations, so how they work with each other, one challenge that we investigated in this work is the challenge of determining an execution order
for these transformations, so to decide in which order and how often to execute them. Of course, there is some existing work on finding such an order, but all this work lacks universality: the approaches are either tied to executing each transformation only once, or they execute it multiple times but then require manual decisions on how to combine the changes provided by different executions. And so what we want to achieve is to find a universal execution order for these transformations. If we do not properly find such an order, possible failures that can occur are, first of all, non-termination. So you execute the
transformations again and again, but you never terminate because you do not find a consistent result. You may also not find a solution, so you terminate at some point, but you do not find a consistent result. And in the worst case, you even terminate with an inconsistency, but you do not detect that you're inconsistent at all. And that's, of course, what we want
to prevent in our work. So the problem that we deal with in our work is that we want to find a transformation execution order that, first, is correct: we want to have a result that is consistent, or an indication that we were not able to find a consistent result. The order should be Hippocratic: if the models were consistent before, they should not be changed at all. And finally, we want it to terminate for every input: we want the guarantee that the algorithm that we provide terminates. The contributions that we make to this problem are, first of all, an investigation of the necessary number of executions per transformation, especially considering the two extremes of one and an unbounded number of executions per transformation, and deriving that neither of them is sufficient in practice. In addition, we provide a universal execution strategy and especially discuss the design principles underlying it so that it can be readily used. Finally, the benefits that we expect from these contributions are that we get
fundamental knowledge about the design space of execution strategies, especially regarding the number of executions that we have to make. And then we have a concrete strategy at hand, which especially provides specific support for the users of transformations in case that
the strategy fails to find a consistent result, which I will come to in more detail later. So first of all, to have a common understanding of the basic terminology that we use, I will introduce what we consider a transformation, a network, and an execution strategy. First of all, we consider synchronizing transformations, which, compared to the bidirectional transformations introduced before, always take two models, or two changes to two models, as both their input and their output. This is important in a network of transformations, because you can always have the situation that, across different paths in your network, both models related by one transformation have been changed, and so both of them have to be updated by the transformation. In addition, we always consider the result of such a transformation as consistent. So this is basically a normative notion of consistency, where we say: whatever the transformation delivers is what we consider consistent. A network of transformations then is simply a set of transformations together with a topology defining across which types of models they are related and can be executed transitively. And finally, an execution strategy, or a universal execution strategy, takes a network as its input together with a set of models, which will usually be inconsistent because their consistency shall be restored, and outputs consistent models or some indicator
that no consistent models were found just by applying the transformations in the given network. Now, the first contribution that we make is a discussion of the number of necessary executions per transformation. And we can span the space between one and
an unbounded number of executions, indicated by the infinity symbol in this graphic. And we see that both extremes have some problems. On the one hand, having only one execution per transformation, or even a bounded number of executions, is generally insufficient, because each model may be changed multiple times by different transformations, to which the other transformations then have to react. And in fact, we were able to show that we can provide a transformation network that requires an arbitrarily high number of executions. So any bound that we can define will be insufficient in some cases. On the other hand, we may allow an unbounded number of executions per transformation, but in fact, it is then undecidable whether this execution will terminate. We have proven that by reducing the halting problem of Turing machines to the problem of finding the execution order of transformation networks. And so it is basically undecidable whether executing transformations will terminate without an execution bound if we do not make severe restrictions to the transformations themselves. So a consequence is: when we want to ensure that our execution terminates, we have to accept that some solutions may not be found, which is what we call conservative. Defining some reasonable bound for the number of executions will always prevent some solutions from being found, but then guarantees termination. We then use these insights to
develop a concrete strategy for finding the execution order of transformations, and first discuss the design goals for this strategy. First, we want to have a reasonable execution limit, as mentioned before: we want to define a reasonable limit not based on a specific number of executions, but based on some criterion that gives us a reasonable limit for the number of executions. In addition, we want to support the users in finding the reasons whenever the strategy fails. It is inevitable that the strategy will fail in some cases, and so we want to support the user in finding out what the reasons for failing are.
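The conservativeness trade-off discussed above can be illustrated with a deliberately naive approach, which is our own sketch and not the strategy from the talk: simply re-run every transformation until the models stop changing, and give up after a fixed bound. All names and the dict-of-models representation below are purely illustrative assumptions.

```python
def bounded_fixpoint(syncs, models, bound=10):
    """Naive conservative strategy (illustrative sketch, not the talk's
    algorithm): repeatedly execute every transformation until the models
    stop changing. A fixed bound guarantees termination, but solutions
    that need more rounds are conservatively reported as not found."""
    for _ in range(bound):
        before = dict(models)
        for sync in syncs:
            sync(models)          # each sync updates its models in place
        if models == before:      # fixpoint reached: a consistent result
            return models
    return None                   # bound exhausted: conservative failure

# Hypothetical example: one transformation keeps B equal to A,
# another keeps C equal to B; A was changed by the user.
syncs = [lambda m: m.update(B=m["A"]), lambda m: m.update(C=m["B"])]
result = bounded_fixpoint(syncs, {"A": 1, "B": 0, "C": 0})
```

As the talk argues, no fixed `bound` suffices for every network; the sketch only shows why such a loop terminates and why its failures are conservative rather than incorrect.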
We've developed a principle on which our strategy should rely, which basically says that we first want to ensure consistency to all transformations that have already been executed before executing another transformation. This is especially useful for supporting the user in finding reasons when failing, because in the moment when the strategy fails, we know that
consistency could be preserved for all the transformations that have been executed before, but it could not be preserved for the last added transformation. That does not necessarily mean there is a fault in this last added transformation, but by adding that transformation, some problem was introduced. So that makes it easier to find out where the problem is, because we
know that for all the other transformations, it was possible to find consistent results. We used this principle to develop a concrete strategy, which realizes it in a recursive manner: we iteratively go through the transformations, executing one transformation after another, and then reapply the algorithm to the already executed transformations recursively. So considering the example given on this slide, we may have four transformations relating four models, which are initially consistent, and we introduce a change to the topmost model. Then we select the first transformation to be executed, which can for example be transformation
T1. After executing this transformation, we have the guarantee that the models A and B related by this transformation are consistent. And since we do not have any already executed transformations, there is nothing to do in this situation. Then we select the next transformation, for example T2, to restore consistency between A and C. After executing that transformation,
we have the guarantee that these models are consistent, but of course due to a change to A, the models may be inconsistent to T1 again. So we recursively apply the algorithm to the transformation T1. And this is what we do iteratively for all the transformations. Hopefully we now have models that are consistent to T1 and T2. We then execute, for example,
T3, and after its execution, reapply the algorithm recursively to T1 and T2. And then in this recursion step, we have to do another recursion, because after executing T2, we may need to execute T1 again. And this is what we do for all the transformations. Finally, we also execute T4 and have of course one further recursion step, because in each
iteration, we have one more already executed transformation requiring one more recursion step. So after executing T2, we have the same recursion as in the step before. But then after executing T3 in the recursion, we have a subnetwork of two transformations and perform
another recursion step. And hopefully, as a result, we have models that are consistent to all transformations. Of course, there is no guarantee that we have consistency to all these transformations, which is why we check for consistency to the last added transformation after recursive application of the algorithm in each iteration. And this is where the algorithm
may fail, because if we then do not have consistency anymore, the algorithm fails, realizing the conservativeness property but ensuring termination of the algorithm. Now, the properties guaranteed by the algorithm are most importantly the ones that we wanted to achieve. So first of all,
the algorithm terminates for every input. More precisely, it has an execution bound exponential in the number of transformations. Of course, this is only the worst-case performance; in the average or best case, it will perform even better. In addition, we have the guarantee that results are correct or indicate that there was some problem in finding a consistent result. And it is Hippocratic by definition of the single transformations, which do not perform any changes when the models were already consistent. In addition, the algorithm fulfills the design principle of first ensuring consistency to all already executed transformations before executing another one. And finally, we also have the guarantee of success for a specific kind of non-self-reacting transformations, which I cannot explain in detail in this presentation and for which I have to refer to the paper. But the intuition is that each transformation should only need to react to each permutation of the other transformations once. And if this is the case, then the algorithm even guarantees to succeed
and find consistent models. So to conclude my presentation, the problem that we have addressed in our work is to find a transformation execution order that is correct, Hippocratic, and terminates for every input. The two contributions that we make to that are, first, a discussion and proofs for the necessary number of executions per transformation, showing that neither one nor an unbounded number of executions is sufficient. And second, we provide a universal execution strategy and especially discuss the design principles for it to be readily applied. We've also derived some future work from what we've done. In particular, we want to provide
empirical evidence for the usefulness of the proposed strategy, because I have argued why we think that, in the case that it fails, it helps the user to find the reasons for the failure. But of course, this is only supported by argumentation right now, and we need to provide empirical evidence and studies to find out whether this is actually the case. In addition, right now we perform a naive selection of the candidate to be executed next, and we think that the strategy can be optimized by making a careful selection of the next candidate to be applied by the algorithm. And finally, we have restricted ourselves to bi-directional transformations and networks of them. But in fact, there is no restriction in the algorithm to bi-directional transformations. We
could also have multi-directional transformations. So we want to investigate whether a combination of multi-directional transformations is possible in the same way. Thank you very much.
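As a small appendix to this transcript, the recursive strategy walked through in the talk (execute a transformation, recursively re-establish consistency for all previously executed ones, then check the last added transformation and fail conservatively if it became inconsistent) can be sketched as follows. The class, function names, and the dict-of-models representation are our own illustrative assumptions, not the paper's API; real synchronizing transformations operate on models and changes rather than plain values.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transformation:
    name: str
    sync: Callable[[dict], None]           # restore consistency of its two models
    is_consistent: Callable[[dict], bool]  # check consistency of its two models

def execute(network: list[Transformation], models: dict) -> dict:
    """Sketch of the recursive strategy: after executing each newly added
    transformation, recursively re-run the already executed subnetwork,
    then verify the last added transformation is still consistent."""
    def run(transformations: list[Transformation]) -> None:
        executed: list[Transformation] = []
        for t in transformations:
            t.sync(models)     # make t's two models consistent
            run(executed)      # recursion over the already executed subnetwork
            if not t.is_consistent(models):
                # conservative failure: consistency held for the subnetwork,
                # but broke when t was added
                raise RuntimeError(f"no consistent result after adding {t.name}")
            executed.append(t)
    run(network)
    return models

# Hypothetical example: T1 keeps B equal to A, T2 keeps C equal to A;
# the user changed A from 1 to 2, leaving B and C stale.
t1 = Transformation("T1", lambda m: m.update(B=m["A"]), lambda m: m["B"] == m["A"])
t2 = Transformation("T2", lambda m: m.update(C=m["A"]), lambda m: m["C"] == m["A"])
result = execute([t1, t2], {"A": 2, "B": 1, "C": 1})
```

The nested `run(executed)` call is what yields the exponential worst-case execution bound mentioned in the talk: each newly added transformation triggers a full re-run of the subnetwork before it.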