Complexity of controlled bad sequences over finite sets of $\mathbb{N}^d$
Formal Metadata
Title: Complexity of controlled bad sequences over finite sets of $\mathbb{N}^d$
Number of Parts: 56
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/49276 (DOI)
Transcript: English(auto-generated)
00:11
Hello, I am A. R. Balasubramanian, and today I will be talking about the complexity of controlled bad sequences over finite subsets of N to the d. Our journey to controlled bad sequences begins with well-quasi-orders.
00:23
So a set X along with a partial order on it is called a well-quasi-order if all its bad sequences are finite. What is a bad sequence? A sequence x0, x1, ... is bad if for all pairs of indices i less than j, the corresponding element xi is not less than or equal to the corresponding element xj. Very common examples of well-quasi-orders include the component-wise ordering of Q cross N to the d,
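To make the definition concrete, here is a small Python sketch (my own illustration, not from the talk) that checks whether a finite sequence over N^d is bad under the component-wise ordering:

```python
def leq(x, y):
    """Component-wise (product) ordering on N^d."""
    return all(a <= b for a, b in zip(x, y))

def is_bad(seq):
    """A sequence is bad if no earlier element is <= a later one."""
    return all(not leq(seq[i], seq[j])
               for i in range(len(seq))
               for j in range(i + 1, len(seq)))

# (1,2) <= (2,2), so this sequence is not bad:
print(is_bad([(1, 2), (2, 2)]))          # False
# no earlier element is dominated by a later one here:
print(is_bad([(2, 2), (3, 1), (0, 1)]))  # True
```

Note that later elements may be smaller than earlier ones; badness only forbids an earlier element being dominated by a later one.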
00:44
where Q is any finite set and two elements are ordered if they agree in the first coordinate, the one corresponding to Q, and respect the natural order of the natural numbers in every other coordinate.
01:01
It is well known by Dickson's lemma that this is a well-quasi-order. Another common example is the sub-word ordering on all words over an alphabet sigma, where x is less than or equal to y if and only if x is a sub-word of y. It is well known by Higman's lemma that this is a well-quasi-order. Well-quasi-orders are ubiquitous in infinite-state verification,
01:21
where they form the backbone of well-structured transition systems, as evidenced by these two papers, for whose discovery the CAV award in 2017 was given. Since well-structured transition systems are very well known, I will only explain them through an example. So let us consider
01:40
the very well-known lossy counter machines. These are exactly like counter machines, where we can increment a counter, decrement a counter, or test a counter for zero. However, we also allow the counters to non-deterministically drop their values at any given point in time. For example, if a counter had the value 5 at some point,
02:02
then we allow it to non-deterministically pick any value less than or equal to 5; that is, it can just drop to the value 2. A transition of this lossy counter machine works as follows. Let's say we start in the configuration where we are in state q0, the value of the first counter is 2, and the value of the second counter is 3.
02:21
We can now take the transition which increments c2 and go to the configuration q2 (2,4). Since we allow counter values to non-deterministically drop, we can take the zero test as follows: from q2 (2,4) we first drop the value of the second counter to 0 and then take the zero test to go to q0 (2,0).
02:42
And then from there we can take the transition which increments c1 and go to q1 (3,0). Notice that if we pick any configuration bigger than q0 (2,3), say q0 (3,4), we can simulate the same transitions and arrive at configurations which are bigger along the whole sequence. In this sense, from a bigger configuration
03:02
we can simulate all transitions which a smaller configuration can take, and hence this system is called well-structured over the well-quasi-order {q0, q1, q2} × N^2. Now let us look at the classic backward coverability algorithm for well-structured transition systems, applied to this specific lossy counter machine. We want to compute the set of all configurations
03:25
which cover q1 (1,1). What do we mean by this? We want to compute the set U*, the set of all configurations (q, v) such that from (q, v) we can reach some configuration (q1, v') with v' bigger than or equal to (1,1).
03:40
So we first define an intermediate object, which we call Uk, which is the same as U* except that we only include configurations from which we can cover q1 (1,1) in at most k steps. U0 is clear by definition: it is just the upward closure of q1 (1,1). For computing U1, notice that from q0 (0,1)
04:03
we can take the transition which increments c1 and go to U0, and it is then easy to see that we can do this from any configuration bigger than or equal to q0 (0,1). So this gives us U1. For U2, notice that from q2 (0,0) we can use the zero test to go to U1, and since we allow counters to non-deterministically drop, we can also do this from any
04:27
configuration bigger than or equal to q2 (0,0). Similarly, U3 can be obtained as the upward closure of q0 (0,0), and we then get U4 in a similar manner. One can then verify that the union of all these is in fact U* and
04:42
that we have computed U*. Now notice that during this computation we actually derived a bad sequence, namely the sequence of newly added minimal configurations q1 (1,1), q0 (0,1), and so on, over the well-quasi-order {q0, q1, q2} × N^2.
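The backward search above can be sketched generically. The transition table below is reconstructed from the narration alone (the slides are not visible here), so it is almost certainly incomplete; the code illustrates the minimal-basis iteration under lossy semantics, not the exact sets U1 to U4 of the talk:

```python
# Transitions of a lossy counter machine, partially reconstructed from
# the narration (an assumption, not the full machine from the slides):
# ('inc', c) increments counter c, ('zero', c) tests counter c for zero.
TRANS = [('q0', ('inc', 1), 'q2'),   # increment c2
         ('q2', ('zero', 1), 'q0'),  # zero test on c2
         ('q0', ('inc', 0), 'q1')]   # increment c1

def pred_basis(q_dst, v):
    """Minimal configurations that can cover (q_dst, v) in one lossy step."""
    preds = []
    for q_src, (op, c), q_tgt in TRANS:
        if q_tgt != q_dst:
            continue
        if op == 'inc':
            w = list(v)
            w[c] = max(0, w[c] - 1)
            preds.append((q_src, tuple(w)))
        elif op == 'zero' and v[c] == 0:
            # losses can drop counter c to 0 before the test
            preds.append((q_src, v))
    return preds

def covers(target, start, max_iter=100):
    """Backward coverability: saturate a basis of U*, then test start."""
    basis = {target}
    for _ in range(max_iter):
        new = {p for (q, v) in basis for p in pred_basis(q, v)}
        # discard elements already covered by the current basis
        new = {(q, v) for (q, v) in new
               if not any(q == q2 and all(a <= b for a, b in zip(v2, v))
                          for (q2, v2) in basis)}
        if not new:
            break
        basis |= new
    q0, v0 = start
    return any(q0 == q and all(a <= b for a, b in zip(v, v0))
               for (q, v) in basis)

print(covers(('q1', (1, 1)), ('q0', (2, 3))))  # True
```

Termination of the saturation loop is exactly what the well-quasi-ordering guarantees; `max_iter` is only a defensive guard for this sketch.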
05:01
This example serves to illustrate that, in general, from an execution of a coverability algorithm over a well-structured transition system, say over some well-quasi-order X, we can extract a bad sequence. More often than not, the running time of the algorithm depends roughly on the length of this bad sequence.
05:24
However, the bad news is that bad sequences can be arbitrarily long, even from a fixed first element. To see this, consider N^2 and pick the first element to be (3,3). Then take the second element to be (6,0) and just walk back along the x-axis to the origin.
05:46
We can generalize this example by jumping from (3,3) to any (n,0) and then walking back along the x-axis to the origin. So this gives us bad sequences which can be arbitrarily long. In the hope of doing better, we consider not just well-quasi-orders
06:05
but normed well-quasi-orders. In a normed well-quasi-order we are given a well-quasi-order along with a norm function. What does this norm function do? It maps every element of X to a number in N such that for every n, the pre-image of n under the norm function is finite.
06:24
With this norm function we can define a better notion of bad sequences, called controlled bad sequences. First we fix a function g, which we call the control function, and we fix a number n0, which we call the initial norm. Then we say that a sequence is (g, n0)-controlled bad if it is bad
06:44
and the following pattern occurs: the norm of the first element x0 is at most n0, the norm of the second element x1 is at most g(n0), the norm of x2 is at most g(g(n0)), and so on. To give an example, let us consider N^2, take g to be the successor function, and take the initial norm to be 2.
07:04
As a first element we can pick (2,2), since its norm is 2. As a second element we can pick (3,1), since its norm is 3. As the next element we pick (0,1). Next we jump to (5,0) and then walk back to the origin. This is an example of a (g, n0)-controlled bad sequence.
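The example sequence can be checked mechanically; this is a small sketch assuming, as in the talk, that the norm of an element of N^2 is its maximum coordinate:

```python
def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def is_controlled_bad(seq, g, n0):
    """Check badness plus the control: norm(x_i) <= g^i(n0)."""
    bound = n0
    for x in seq:
        if max(x) > bound:   # norm on N^d: the maximum coordinate
            return False
        bound = g(bound)
    return all(not leq(seq[i], seq[j])
               for i in range(len(seq)) for j in range(i + 1, len(seq)))

succ = lambda n: n + 1
seq = [(2, 2), (3, 1), (0, 1), (5, 0), (4, 0), (3, 0), (2, 0), (1, 0), (0, 0)]
print(is_controlled_bad(seq, succ, 2))  # True
```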
07:23
In general, if we fix a control function g and an initial norm n0, then by Kőnig's lemma we can prove that there is a (g, n0)-controlled bad sequence of maximal length. Hence this lets us define a function, which we call the length function, that, once we fix a normed well-quasi-order X and a control function g,
07:41
maps n0 to the maximal length of (g, n0)-controlled bad sequences. By analyzing the coverability algorithm over a normed well-quasi-order even more closely, we can extract a controlled bad sequence for some control function and initial norm, and bounds on these controlled bad sequences translate to bounds on the running time of the coverability algorithm.
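For tiny parameters the length function can be computed by brute force. This is my own illustrative sketch, exponential and only usable for toy cases; the identity control in the second call below is chosen purely to keep the search small, not because it appears in the talk:

```python
from itertools import product

def longest_controlled_bad(d, g, n0):
    """Length of the longest (g, n0)-controlled bad sequence over N^d,
    by exhaustive search (only feasible for tiny parameters)."""
    best = 0
    def extend(seq, bound):
        nonlocal best
        best = max(best, len(seq))
        for v in product(range(bound + 1), repeat=d):
            # v may be appended iff no earlier element is <= v
            if all(not all(a <= b for a, b in zip(x, v)) for x in seq):
                extend(seq + [v], g(bound))
    extend([], n0)
    return best

# Over N, bad sequences are strictly decreasing, so the answer is n0 + 1:
print(longest_controlled_bad(1, lambda n: n + 1, 3))  # 4
# Over {0,1}^2 (identity control, n0 = 1), the longest bad sequence
# is (1,1), (1,0), (0,1), (0,0):
print(longest_controlled_bad(2, lambda n: n, 1))      # 4
```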
08:04
Motivated by this, some results have already appeared in the literature. These include bounds on length functions for the sub-word ordering over words, the linear ordering over ordinals, the product ordering over N to the d, the lexicographic ordering over N to the d, the multi-set ordering over N to the d, and so on.
08:21
We refer the viewer to an excellent survey by Sylvain Schmitz for applications of these bounds. So now we state our results. To state them, we need to define two things: first, the orderings, and second, the complexity classes used to state our bounds, because these complexity classes are not very traditional.
08:45
As a first step, we define our orderings. Our results concern finite subsets of N to the d, and we consider two different orderings; for both of them, we give lower and upper bounds. So what's the first ordering? The first ordering is called the majoring ordering.
09:02
Here, let's say we are given two subsets X and Y of N to the d, which need not necessarily be finite; they can be infinite. We say that X is less than or equal to Y in the majoring ordering if for every x in X there is a y in Y such that x is less than or equal to y, where this is the usual product ordering on N to the d.
09:24
Further, if X is finite, then we define the norm of X as follows: we take the norm of every element x of X, we also take the cardinality of X, and we take the maximum over all these quantities. So for finite sets X we can define a norm.
09:42
This constitutes the majoring ordering. For the minoring ordering, once again we are given two subsets X and Y, which need not necessarily be finite, and we say that X is less than or equal to Y in the minoring ordering if for every y in Y there is an x in X such that x is less than or equal to y. Once again, the same norm applies for finite sets.
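Both orderings and the norm translate directly into code; a minimal sketch:

```python
def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def leq_majoring(X, Y):
    """X <= Y iff every x in X is dominated by some y in Y."""
    return all(any(leq(x, y) for y in Y) for x in X)

def leq_minoring(X, Y):
    """X <= Y iff every y in Y dominates some x in X."""
    return all(any(leq(x, y) for x in X) for y in Y)

def norm(X):
    """Norm of a finite subset: max over the element norms and |X|."""
    return max([len(X)] + [max(x) for x in X])

X, Y = {(1, 2)}, {(2, 3), (0, 0)}
print(leq_majoring(X, Y))  # True: (1,2) <= (2,3)
print(leq_minoring(X, Y))  # False: nothing in X is <= (0,0)
print(norm(Y))             # 3 = max(|Y|, 3, 0)
```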
10:03
As a second step, we now need to define our complexity classes, which are indexed by ordinals, and we do this in three steps. In the first step, let's say we are given a limit ordinal alpha which is less than or equal to omega to the omega. For every x in N, we will define a sequence
10:21
alpha 0, alpha 1, alpha 2, and so on, which in some sense tends to the limit ordinal alpha; the limit of the sequence will be alpha. So how do we define this? We define alpha x to be the same as alpha, except that we pick the last
10:42
element in the sum, which is omega to the dk. Since alpha is a limit ordinal, dk is bigger than zero, so we subtract one from dk and replace omega to the dk by omega to the dk minus 1, added x plus 1 times. This is how we get alpha x. In the second step, using these sequences, we define functions.
11:03
So how do we define these functions? We start with the successor function: f0 is the successor. For a successor ordinal alpha plus 1, we take f alpha and iterate it x plus 1 many times on x. Finally, for a limit ordinal lambda, we first jump to the x-th predecessor of lambda,
11:24
which is lambda x, and then apply that function to x. Some milestones of these functions: f1 of x is just 2x plus 1, f2 of x is roughly the exponential function 2 to the x, f3 of x is roughly the tower function, and f omega of x is roughly the Ackermann function.
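A sketch of these fast-growing functions for ordinals below omega to the omega, represented in my own encoding as descending lists of exponents (only small milestone values are computable in practice, since the functions explode immediately):

```python
def f(alpha, x):
    """Fast-growing function f_alpha for alpha < omega^omega.
    alpha is a descending list of exponents: omega^2 + omega is [2, 1]."""
    if not alpha:                  # f_0 is the successor
        return x + 1
    if alpha[-1] == 0:             # successor ordinal: iterate x+1 times
        beta = alpha[:-1]
        for _ in range(x + 1):
            x = f(beta, x)
        return x
    # limit ordinal: replace the last summand omega^e by x+1 copies of
    # omega^(e-1), i.e. jump to the x-th element of the fundamental sequence
    e = alpha[-1]
    return f(alpha[:-1] + [e - 1] * (x + 1), x)

print(f([0], 5))     # f_1(5) = 2*5 + 1 = 11
print(f([0, 0], 2))  # f_2(2) = 2^3 * 3 - 1 = 23
print(f([1], 1))     # f_omega(1) = f_2(1) = 7
```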
11:42
Finally, we define our complexity classes. The complexity class F alpha is the elementary recursive closure of the function f alpha. Roughly, this means that we take the function f alpha together with the usual basic functions, constants, sums, projections and so on, and we close under substitution and a very limited form of primitive recursion.
12:01
Some common milestones of these complexity classes: F1 contains just the linear functions, F2 is all elementary functions, and if we take the union of all F alpha for alpha below omega, we get the primitive recursive functions. At higher levels we get the multiply recursive functions, and so on.
12:21
With these two things done, we can now state our results. We fix the control function g to be a primitive recursive function. Our first result is for the majoring ordering, where we show that the length function of g is upper bounded by a function in the complexity class F omega to the d minus 1. Further, we also show that this upper bound is tight.
12:41
What do I mean by this? We show that there is a polynomial control function g such that its length function is lower bounded by a function in F omega to the d minus 1. These are our results for the majoring ordering. For the minoring ordering, we show that the length function is upper bounded by a function in F omega to the d minus 1 times 2 to the d.
13:01
Further, we also show that it is lower bounded by a function in F omega to the d minus 1; note that there is an obvious gap between the lower and upper bounds. Finally, returning to the motivation behind these upper bounds, we use them to show upper bounds on the running time of the coverability algorithm for some well-structured
13:22
transition systems. These include incrementing tree counter automata, and then two types of register automata working on data trees. So these are all our results. Now we go to a very brief technical overview, where I will first explain polynomial reflections, which are our main tool for
13:41
proving the lower and upper bounds. Then I will show very briefly how to get the lower bounds for the majoring ordering and the upper bounds for the minoring ordering; due to lack of time, I will not discuss the other results. Okay, so polynomial reflections. Polynomial reflections are roughly the analog of reductions in complexity theory. So what are polynomial reflections?
14:01
Let's say we are given two normed well-quasi-orders X and Y. A map R is said to be a polynomial reflection between X and Y if it satisfies two conditions. The first condition is as follows: let's say x is mapped to R(x), x' is mapped to R(x'), and R(x') is less than or equal to R(x).
14:21
Then this should imply that x' is less than or equal to x. So this is the first condition: if R(x') is less than or equal to R(x), then x' is less than or equal to x. The second condition is that the norm of R(x) is not very far from the norm of x; that is, the norm of R(x) can be upper bounded by a polynomial in
14:40
the norm of x. This is the second condition. If a map satisfies these two conditions, it is called a polynomial reflection. Now, what are the uses of polynomial reflections? They allow us to translate bounds, as given by the following fact: if R is a polynomial reflection from X to Y, then we can translate lower bounds on X to lower bounds on Y, and upper bounds on Y to upper bounds on X.
15:02
There is a slight technicality here, in that we have to compose the control function g with a small polynomial p and also change the initial norm slightly, but for our complexity classes this does not matter. The brief idea behind this fact is that if we have a controlled bad sequence x0, x1, up to xk in X, then its corresponding
15:21
mapped sequence R(x0), R(x1), ..., R(xk) is a controlled bad sequence in Y. This is easy to verify. So with this tool, we will now show how to prove the lower bounds for the majoring ordering. We do this by giving a polynomial reflection from the ordinal ordering. So, let CNF of omega to the omega to the d minus 1
15:42
denote all ordinals which are less than omega to the omega to the d minus 1. From this set, we give a map to finite subsets of N to the d with the majoring ordering. However, I haven't yet defined what the norm function for ordinals is, so I'll briefly do that. Let's say we have an ordinal, and we write it in Cantor normal form.
16:02
The norm of this ordinal alpha is obtained as follows: we first extract the coefficients c1 up to ck, next we extract the norms of the exponents, the norm of beta 1 up to the norm of beta k, and then we take the maximum of all these quantities. By way of an example, we will now see how to go from
16:23
omega to the omega to the 2 to finite subsets of N to the 3. Since we are considering only ordinals below omega to the omega to the 2, they can be written in the following form, where the exponents look like omega times ci plus di. So now we extract the first exponent, omega times c1 plus d1,
16:44
extract its coefficients c1, d1, and attach a 1 before them. So we get an element of N to the 3: (1, c1, d1). Similarly, we take the second exponent, extract its coefficients c2, d2 and attach a 2 before them, giving (2, c2, d2), and so on.
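A simplified sketch of this encoding, with coefficients unfolded as repeated exponents (my own simplification; the paper's actual construction may differ in details). The loop at the end checks the reflection condition on random samples:

```python
import random

def ord_leq(a, b):
    """Compare ordinals below omega^(omega^2), given as descending lists of
    exponents (c, d), each standing for a summand omega^(omega*c + d);
    coefficients are unfolded as repeated exponents."""
    for ea, eb in zip(a, b):
        if ea != eb:
            return ea < eb        # lexicographic comparison of exponents
    return len(a) <= len(b)       # a prefix is <= the longer ordinal

def reflect(a):
    """Position-tagged encoding as a finite subset of N^3."""
    return {(i + 1, c, d) for i, (c, d) in enumerate(a)}

def leq_majoring(X, Y):
    return all(any(all(u <= v for u, v in zip(x, y)) for y in Y) for x in X)

# Randomized sanity check of the reflection condition:
# reflect(a) <=_majoring reflect(b) should imply a <= b as ordinals.
random.seed(0)
def random_ordinal():
    exps = sorted((random.randrange(3), random.randrange(3))
                  for _ in range(random.randrange(1, 4)))
    return exps[::-1]             # exponents in descending order

for _ in range(2000):
    a, b = random_ordinal(), random_ordinal()
    if leq_majoring(reflect(a), reflect(b)):
        assert ord_leq(a, b)
print("reflection condition holds on all sampled pairs")
```

The position tag in the first coordinate is what forces an exponent of the smaller ordinal to be matched only against an equally late or later exponent of the larger one, which is the "positional information" mentioned next.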
17:00
The main idea is that this mapping encodes enough positional information to preserve the order of exponents between any two ordinals; that is roughly the main idea. It is not very hard to show that this is indeed a polynomial reflection. And since we know that the length function for omega to the omega to the d minus 1 does not belong to F alpha for any alpha less than omega to the d minus 1,
17:22
the same applies to length functions over finite subsets of N to the d with the majoring ordering. So this gives a brief overview of the lower bound for the majoring ordering. Now we go to the upper bound for the minoring ordering. We do this by giving a polynomial reflection into a product of majoring orderings.
17:42
So let us consider finite subsets of N^2 with the minoring ordering, and let us take a non-empty set X; if X is empty, then we consider it as a special case in the paper. Now, let us take its upward closure. It is known that taking upward closures preserves the minoring
18:03
ordering; that is, if X is less than or equal to Y in the minoring ordering, then the upward closure of X is less than or equal to the upward closure of Y in the minoring ordering. So now let us take the complement of the upward closure. It is known that the ordering is still preserved; however, the order itself changes from the minoring ordering to
18:21
the majoring ordering. Notice that the complement of this upward closure is a downward closure, and hence we can write it as a finite union of ideals, i.e., a union of the form: downward closure of Z1, union downward closure of Z2, up to downward closure of Zk, where each Zi is of the form (n, omega),
18:44
where n is some natural number, or (omega, n), where n is a natural number, or (n, m), where n and m are both natural numbers. So here, (0, omega) roughly denotes this strip along the y-axis, and (omega, 0) roughly denotes this strip along the x-axis.
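For finite X in N^2 the strip decomposition can be computed explicitly. This is my own sketch (with omega encoded as `None`), which brute-force checks on a grid that the computed ideals are exactly the complement of the upward closure:

```python
OMEGA = None  # stands for omega (an unbounded coordinate)

def ideal_decomposition(X):
    """Write N^2 minus the upward closure of finite non-empty X as a finite
    union of ideals over N + {OMEGA}: (n, OMEGA) is [0,n] x N,
    (OMEGA, n) is N x [0,n], and (n, m) is [0,n] x [0,m]."""
    # minimal elements of X; first coordinates ascend, second descend
    mins = sorted(x for x in X
                  if not any(y != x and y[0] <= x[0] and y[1] <= x[1]
                             for y in X))
    ideals = []
    a1, bk = mins[0][0], mins[-1][1]
    if a1 > 0:
        ideals.append((a1 - 1, OMEGA))      # strip along the y-axis
    if bk > 0:
        ideals.append((OMEGA, bk - 1))      # strip along the x-axis
    for (a, b), (a2, _) in zip(mins, mins[1:]):
        if a2 > 0 and b > 0:
            ideals.append((a2 - 1, b - 1))  # a finite box between two mins
    return ideals

def in_ideal(v, z):
    return all(c is OMEGA or a <= c for a, c in zip(v, z))

# brute-force check on a grid that the decomposition is exact
X = {(2, 3), (4, 1)}
ideals = ideal_decomposition(X)
for v in [(i, j) for i in range(8) for j in range(8)]:
    in_up = any(x[0] <= v[0] and x[1] <= v[1] for x in X)
    assert in_up != any(in_ideal(v, z) for z in ideals)
print(ideals)
```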
19:00
And all the other elements denote these smaller strips. So now what we can do is group these elements into three: we first pick all elements which have omega only in the first coordinate; next, we bunch all elements which have omega only in the second coordinate; and finally, all elements which have no omega at all.
19:22
So this gives us a map into these three groups, where the first group roughly records that there was an omega in the first coordinate, the second group that there was an omega in the second coordinate, and the third group that there was no omega in the coordinates at all.
19:40
Now notice that this gives us an element of: finite subsets of N, cross finite subsets of N, cross finite subsets of N^2. This is the general idea, and with some care one can show that it gives a polynomial reflection from finite subsets of N^2 with the minoring ordering to this normed well-quasi-order. Since we have proven upper bounds for this well-quasi-order in the paper,
20:04
we can now translate them to upper bounds for the minoring ordering. So this gives a brief overview of the upper bounds for the minoring ordering. To recap: in this paper, we have proved upper and lower bounds on controlled bad sequences for two different orderings on finite subsets of N to the d. The main ingredient behind these upper and lower bounds was the notion
20:25
of a polynomial reflection between two normed well-quasi-orders. And using these upper bounds, we proved upper bounds on the running time of the coverability algorithm for some well-structured transition systems. I would like to thank Professor Sylvain Schmitz and Professor Philippe Schnoebelen, without whose guidance and mentorship this
20:43
paper would not have been possible. Thanks.