
KReach: A Tool for Reachability in Petri Nets


Formal Metadata

Title
KReach: A Tool for Reachability in Petri Nets
Series Title
Number of Parts
22
Author
License
CC Attribution 3.0 Germany:
You may use, adapt, copy, distribute, and make the work or its contents publicly available for any legal purpose, in unchanged or changed form, provided that you credit the author/rights holder in the manner they specify.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
We present KReach, a tool for deciding reachability in general Petri nets. The tool is a full implementation of Kosaraju’s original 1982 decision procedure for reachability in VASS. We believe this to be the first implementation of its kind. We include a comprehensive suite of libraries for development with Vector Addition Systems (with States) in the Haskell programming language. KReach serves as a practical tool, and acts as an effective teaching aid for the theory behind the algorithm. Preliminary tests suggest that there are some classes of Petri nets for which we can quickly show unreachability. In particular, using KReach for coverability problems, by reduction to reachability, is competitive even against state-of-the-art coverability checkers.
Keywords
Transcript: English (automatically generated)
Hello everybody, my name is Alex Dixon, and I'm going to talk to you today about KReach, which is a tool we've built for reachability in general Petri nets. This is work that was produced for the TACAS 2020 conference and is joint work with my supervisor, Ranko Lazic. So, a quick overview of what we've been up to. We are presenting, as our primary contribution, KReach, which is a tool for deciding reachability in general Petri nets, and this is based on the work of Kosaraju, which was published at STOC in 1982. We also include a suite of libraries in the Haskell language for writing and interfacing with vector addition systems with states, if that's something that's interesting for you. And we wrap up with some results which are surprising, in that even this 1982 algorithm, on certain classes of coverability instances, is competitive against what were, at the time in 2019-2020,
state-of-the-art solvers. So let's quickly make sure that we all know what we're talking about when we say reachability problems. Our input is a Petri net, which is composed of places and transitions, as well as a flow function which relates those places to those transitions, so that the transitions can move tokens between places according to this flow function. We equip this net with an initial marking, which represents the initial arrangement of tokens, and a target marking, which is the arrangement that we're trying to attain. The output is simply whether or not there is some sequence of transitions which gets us from the initial marking to the target marking according to the flow function: if there is, then we output "reachable"; if not, then we output "not reachable". Note that I'll be using the terms Petri net, vector addition system, and vector addition system with states interchangeably; there are some well-known results that show that all three of these formalisms are equivalent. At the time of writing the paper, the bounds had recently moved from comically far apart to interestingly close: as of STOC 2019, we now know that the reachability problem on general Petri nets is non-elementary, and the upper bound is known to be Ackermannian. By comparison, the coverability problem, in which we seek to either meet a target marking or exceed it, is known to be EXPSPACE-complete.
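To make the two problems concrete, here is a minimal VAS-style sketch in Haskell. The names and representation are illustrative, not KReach's actual library API: markings are integer vectors, and each transition subtracts its pre-vector and adds its post-vector.

```haskell
-- Illustrative VAS sketch (hypothetical names, not the KReach API).
type Marking = [Int]
data Transition = Transition { pre :: Marking, post :: Marking }

-- A transition is enabled when the marking dominates its pre-vector.
enabled :: Marking -> Transition -> Bool
enabled m t = and (zipWith (>=) m (pre t))

-- Firing subtracts the pre-vector and adds the post-vector.
fire :: Marking -> Transition -> Maybe Marking
fire m t
  | enabled m t = Just (zipWith (+) (zipWith (-) m (pre t)) (post t))
  | otherwise   = Nothing

-- Reachability asks for exact equality with the target;
-- coverability only asks to meet or exceed it componentwise.
reaches, covers :: Marking -> Marking -> Bool
reaches m target = m == target
covers  m target = and (zipWith (>=) m target)
```

The only difference between the two questions is the final check: coverability relaxes equality to componentwise dominance, which is part of why it is the computationally easier problem.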
So, Kosaraju's algorithm built on the earlier work of Mayr, Sacerdote, and Tenney, and it is a complete algorithm, which is to say it gives us an affirmative or negative answer for deciding reachability, and it will always terminate given enough time. The good news is that this algorithm can be implemented and tested, which is what we've done here. To the best of our knowledge, this is the first time that Kosaraju's algorithm has ever been implemented and tested on real-world instances. At the time, it was a proof of decidability, so there was no real impetus to provide complexity bounds, but we know it's got to be at least non-elementary, and we will see just how good or bad it is in practice shortly. So let's quickly introduce the algorithm and see how it works. It's essentially a search through a state space, where the states are decompositions of the original vector addition system, or Petri net, that we put in. We compute such decompositions using a structural predicate of these systems, and that predicate is called θ (theta). Eventually, as with all searches, either we will find a decomposition that fulfils the requirements, in which case we know for a fact that the original VAS problem was reachable, or we run out of decompositions to check, in which case it was not reachable.
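The overall shape of that search can be sketched in Haskell as follows. Here `satisfiesTheta` and `refine` are stand-ins for the real θ check and decomposition step, which are of course far more involved; nothing here models the well-foundedness argument that makes the real algorithm terminate, it only shows the shape of the search.

```haskell
-- Hypothetical skeleton of the top-level search over decompositions.
search :: (a -> Bool)  -- does this decomposition satisfy the θ predicate?
       -> (a -> [a])   -- decompose/simplify a failing candidate further
       -> [a]          -- work list of candidate decompositions
       -> Bool         -- True = reachable, False = not reachable
search satisfiesTheta refine = go
  where
    go []     = False                        -- work list exhausted: unreachable
    go (d:ds)
      | satisfiesTheta d = True              -- θ holds: reachable
      | otherwise        = go (refine d ++ ds)
```

In the real algorithm, each refinement step strictly simplifies the instance under a well-founded ordering, which is what guarantees the search cannot go on forever.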
So, when I talk about decompositions, what I really mean is simplifications of the vector addition systems which we know do not affect reachability, and we compute these decompositions on a generalization of VASs called GVASs. A GVAS is, in essence, a sequence or chain of vector addition systems which are joined together with arcs, and we can annotate each of these individual vector addition systems with some metadata and some constraints on the configurations that may appear inside each of these component vector addition systems. There's a very simple process by which a VAS can be lifted, or augmented, to become one of these generalized VASs, and, as mentioned just now, it has been shown that reachability in the original VAS V is exactly equivalent to reachability in any valid decomposition of the lifted version of the same VAS. Here is an example that's been lifted from the paper, with two states q and r, two transitions t0 and t1, an input configuration m_i, which is (1, 0), in state q, and a target configuration (0, 0) in state r. We lift this VAS into a GVAS in a fairly simple way. The most important change we make is that each of the component VASs has to be strongly connected, which is to say you can reach any state from any other state. We note here that we cannot reach q from r, so we have to separate the two states out into their own self-contained strongly connected components. In a slightly more complex example, this may generate more than one vector addition system sequence, because we need to take all possible paths through these strongly connected components, and that may include or exclude different subsets on such paths. So it will actually create
a family, an exponentially large family, of input GVASs, depending on the input. Now let's talk a bit more about this θ condition, which is really the heart of Kosaraju's algorithm. There are two properties involved here, θ1 and θ2. θ1 is a global property of the entire GVAS chain, and θ2 must hold on every small component VAS along the way. Depending on which of these two properties is violated, we're able to decompose the full chain in a different way, and we simplify it according to the strategy for each of θ1 and θ2. So, θ1 is the global property I mentioned, and it relates to reachability in a very direct way. θ1 is slightly more relaxed: instead of pure reachability from the beginning of the GVAS all the way through to the end of the last element of the chain, we consider pseudo-reachability, so pseudo-runs instead of regular runs. A pseudo-run means that we're allowed to go negative in certain places in the vector, so we no longer care about being bounded below by zero as in a normal VAS. But we also need to make sure that every transition in every vector addition system with states along the way can be fired an unbounded number of times along such a pseudo-run. We are able to compute this particular property using integer linear programming: we construct a large ILP problem for the entire GVAS, the output of which is a semi-linear set that succinctly summarizes all possible pseudo-runs, and from this semi-linear set we can determine whether some transition is bounded along all such pseudo-runs. If there is such a bounded transition, then we know that θ1 is violated; if there is no such bounded transition, then θ1 holds. If θ1 is violated, we can unfold a violating transition by taking the maximum bound on the number of firings that it may perform, which we got from the semi-linear set, and just render out every single possible number of these firings. But instead of encoding them as transitions, we now encode them as arcs between GVASs,
and if we do that, then we no longer need that transition to exist. So we've vastly increased the number of component GVASs, but we have removed one transition from all of them. I have an example here: we know in this case that t0 can be fired at most one time, and therefore we can just render out the instance where it's fired zero times, which is the top one on the right, and the instance where it's fired once, which is the bottom one on the right. Notice that (-1, 1) now appears as an arc between q and q', both of which are duplicates of q, but without the t0 transition, which is now no longer needed. θ2 is a property that holds on each of the component VASs: a component VAS satisfies θ2 if it has a path from the start to the end of the VAS in which all places
increase, and the same is true if you reverse all the arcs, so effectively a path from the final to the initial state via which all places increase. To compute this, we compute coverability from the start to the end with all places increased by one, and if we can do this, then we know that θ2 holds. If θ2 is violated, then some place has an upper bound on its value everywhere inside the VAS, in which case we can encode all of its possible values into the state of the machine via a product construction. As you might imagine, this significantly increases the size of the instance, but we've reduced the number of places that we care about by one. We started by testing reachability on some synthetic samples, like the example I've been using. Surprisingly, the example runs in linear time with the CVC4 solver, but exponentially inside Z3. The reasoning for this is not clear, but we included CVC4 with the benchmarks. Notice that the time is measured in whole seconds, even for a relatively small example like this.
Again, at the time of writing, there wasn't a large quantity of reachability problems out in the wild, so what we were able to do instead is reduce from coverability to reachability. That is, we augment the coverability instance with something which winds down places to zero after the target is covered, and then we try to reach the zero vector. The upshot of this is that we can test our reachability procedure on benchmarks designed for coverability. What we found is that KReach is actually able to outperform some state-of-the-art coverability checkers for particular types of instances. We believe that these instances are ones where KReach does not need to decompose the tree in order to rule out all cases via θ1 or θ2. However, as you might imagine, the more decomposition KReach needs to perform, the worse the outcome is, and in fact almost all instances which required more than two or three levels of decomposition exhausted the available memory. If you're interested in trying the tool yourself, you can grab it from this GitHub link, which includes both the Haskell source code, which can be compiled and run yourself, and Linux 64-bit binaries, which include all of the benchmarks used and a copy of CVC4. Check the README for usage guidance; the tool itself is used either with the -r flag, which means run the reachability checker, or -c for coverability, and there are instances of both, including the synthetic examples used in the paper, included in the release. While the tool is fully functional as-is, there's still plenty of work that can be done on
extensions. One would be to build out some additional parsers for different interchange formats that are standardly used for Petri nets, including the PNML format. We may also investigate whether or not it's possible to pass the invariants from the ILP construction to dedicated coverability tools that can make use of them statically. And there may also be scope to introduce some new optimizations based on developments from the Ackermannian upper-bound proof of Leroux and Schmitz from 2019.
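As a footnote, the coverability-to-reachability reduction mentioned earlier can be sketched roughly as follows in Haskell. This is an illustrative reconstruction, not KReach's actual construction, and it glosses over details such as how the original transitions interact with the wind-down phase: we add a fresh "control" place, a lock transition that fires once the target is covered, and one drain transition per place, and then ask for exact reachability of the all-zero marking with the control flag raised.

```haskell
-- Illustrative sketch: places are vector indices 0..n-1, plus a fresh
-- "control" place at index n that marks the wind-down phase.
type Vec = [Int]
data T = T { preV :: Vec, postV :: Vec } deriving (Eq, Show)

-- Given n original places and a target marking to cover, produce the
-- extra transitions and the new exact reachability target.
coverToReach :: Int -> Vec -> ([T], Vec)
coverToReach n target = (lock : drains, replicate n 0 ++ [1])
  where
    zeros  = replicate n 0
    -- enabled exactly when the target is covered: consumes the target
    -- amounts and raises the control flag
    lock   = T (target ++ [0]) (zeros ++ [1])
    -- with the flag raised, each place can be drained one token at a time
    drains = [ T (unit i ++ [1]) (zeros ++ [1]) | i <- [0 .. n - 1] ]
    unit i = [ if j == i then 1 else 0 | j <- [0 .. n - 1] ]
```

The point of the construction is that the augmented net can reach "all places zero, flag one" exactly when the original net can cover the target, so any reachability checker can answer coverability queries.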