One-Clock Priced Timed Games are PSPACE-hard
Formal metadata
Title: One-Clock Priced Timed Games are PSPACE-hard
Number of parts: 56
License: CC Attribution 3.0 Germany: You may use, adapt, copy, distribute, and publicly transmit the work or content, in changed or unchanged form, for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers: 10.5446/49290 (DOI)
Transcript: English (auto-generated)
00:10
Hi, I will be talking about one-clock priced timed games being PSPACE-hard. This is based on joint work with John Fearnley and Rahul Savani; I am Rasmus Ibsen-Jensen, and we are all three from the University of Liverpool.
00:20
First, a brief overview of my talk: I will first explain what one-clock priced timed games are, then previous results and our results, and finally how we showed our results. To define priced timed games, I find it best to start without the time part, which gives us priced games, and add the time part afterwards. Priced games have many other names, such as longest-shortest-path games, reachability-cost games, or total-cost games.
00:41
Basically, as suggested by the first name, they are a game version of Dijkstra's shortest path. We have a weighted directed graph with three kinds of nodes: maximizer nodes, pointing up in red; minimizer nodes, pointing down in blue; and a yellow star node, which is the goal. We start somewhere, and we try to generate a path. The way we do it is that at each step, the owner of the current node selects an outgoing edge, an action.
01:07
The outcome is then the sum of the costs incurred before reaching the star. So, we start with outcome zero. The maximizer gets to choose: he can either move left, but in that case he will get to the goal at cost zero, or he can move right, maybe getting something more expensive.
01:22
So he moves right. The minimizer can either move left, in which case he might get to the goal for some cost, or move right and get to the goal for five. He moves left, because it seems better. Now he can move up, in which case there is a cycle, or he can move left and get to the goal for three. So he does that, goes to the center, and pays three. Now it is the maximizer's turn again; he can
01:44
do only one thing: he has no other option, so he just goes to the goal, and we see that the outcome here was three. These games have a value, that is, an outcome that can be ensured by both players by playing well. And they have positional optimal strategies, meaning that in a given state, a player can always do the same thing.
02:02
So, these are the optimal values, and the optimal strategies look like this. Now, the main thing prior work showed is that there is a Dijkstra-like algorithm, with running time precisely like Dijkstra's, for this kind of games. Now, let's add in time, giving us simple priced timed games.
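The untimed priced games just described can also be solved by a simple fixed-point computation. Below is a minimal sketch (not the Dijkstra-like algorithm mentioned above), assuming a hypothetical encoding where `edges[v]` lists `(cost, successor)` pairs and `owner[v]` is `"max"` or `"min"`:

```python
import math

def priced_game_values(edges, owner, goal, rounds=100):
    """Iterate the Bellman operator: minimizer nodes take the cheapest
    outgoing option, maximizer nodes the most expensive one."""
    val = {v: math.inf for v in edges}
    val[goal] = 0.0
    for _ in range(rounds):
        for v, outs in edges.items():
            if v != goal and outs:
                pick = max if owner[v] == "max" else min
                val[v] = pick(c + val[u] for c, u in outs)
    return val

# Hypothetical toy game: maximizer node "m" can exit for 0 or pay 1 to
# reach minimizer node "n", which can exit for 5 or for 3.
edges = {"goal": [], "m": [(0, "goal"), (1, "n")], "n": [(5, "goal"), (3, "goal")]}
owner = {"m": "max", "n": "min"}
vals = priced_game_values(edges, owner, "goal")
```

Here the minimizer's value is 3, so the maximizer prefers paying 1 to reach him over exiting for free, giving value 4.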
02:22
So, in this kind of games, time is real, which is not a deep philosophical statement, but simply means that it is a real number, and in this case it lies in the interval zero to one. I'm going to use this example, from my previous paper with Hansen and Miltersen from 2012, to illustrate them. The numbers on the states here mean how much it costs to wait for one time unit.
02:44
So, what happens is that in each state, the current state's owner selects an action and a delay. The delay must be such that the current time stays in the interval zero to one. The outcome is then the sum of costs, including the cost of waiting, before reaching the goal. Let's see how it works.
03:01
Let's say we start in the middle at time zero. The minimizer gets to choose, since it is his state, so he might wait one-third of a time unit. It costs six per time unit to wait here, so this gives a cost of two, and then he might move up here. Now the maximizer gets to choose. He might also wait one-third, giving us the time two-thirds.
03:21
Here it costs only three per time unit to wait, so the total cost is only three now, and we move right. It is the minimizer's turn here; he doesn't want to wait, because it is quite expensive at nine. So he moves down here, where he has two options. What he's going to do is wait one-third more of a time unit, giving us time one,
03:43
paying two again, and then he's going to go to the other red state, paying three. So now we have paid eight in total. It is the maximizer's turn; he has no options. He cannot delay anymore, since the time is one, so he just goes to the goal. And we reach the goal at a cost of eight from the middle state, starting from time zero.
04:03
Now, I'm supposed to be talking about one-clock priced timed games, which also have intervals on edges, denoting when an edge can be used, and a subset of edges that reset the clock when passed through. I'm not going to do that, though, because in my previous paper I showed a polynomial-time reduction from one-clock priced timed games to simple priced timed games,
04:20
so I just need to talk about simple priced timed games, and that's what I'm going to do in this talk. Simple priced timed games, similar to priced games, have a value, but here it depends on the starting time as well as the starting state. Viewed as a function of time, the value becomes piecewise linear, continuous, and weakly monotone decreasing. We will see some examples of this.
04:40
The non-red part came from Bouyer, Cassez, Fleury, and Larsen in 2004, and it is true for all one-clock priced timed games. The red part came from my previous paper, and it is not true in general, only for simple priced timed games. So, let me show the value functions for the example. I will also show the strategies. Let's start with this one up here.
05:01
It has this value function, drawn in red, and along the x-axis you can see the strategy. Between time zero and one-third, we should follow the red edges. Between one-third and two-thirds, we should wait, indicated by no color. Between two-thirds and one, we should follow the black edges. We can always show the strategies like this in this kind of games.
05:24
For the middle state, it can look like this: from zero to one-third, you should wait; between one-third and two-thirds, follow the black edges; between two-thirds and one, you should wait; and finally, at time one, you should go along the red edges, giving you this value function if both players play optimally.
05:41
The other states have simpler strategies; I'm not going to go into them in detail. So, it looks like this for this one, like this for this one, and like this for this one. The key thing here, from a complexity point of view, is how many event points there are. Event points are the time coordinates of the endpoints of the line pieces in the value functions.
06:04
So, here the event points are zero, one-third, two-thirds, and one, giving us four event points. This will be used as a complexity measure for this kind of games. What is known about the number of event points? Well, all known general algorithms have running time poly(event points, n, m).
06:23
And this is kind of natural, because they output the value functions, and the value functions can be of this size. This also means that any other algorithm that outputs the value functions or the optimal strategies must have at least this kind of runtime. Now, for upper bounds on the number of event points: there is an exponential upper bound by Bouyer, Cassez, Fleury, and Larsen from 2004;
06:43
Rutkowski improved this to two to the n squared in 2011, and in my last paper I showed it was at most 12 to the n. There is no known lower bound on this kind of thing, though, beyond a small polynomial. In my last paper, I conjectured it to be at most polynomial.
07:02
So, it's a bit bad form for me to now show up and argue against it. But the conjecture was also mentioned by Brihaye, Geeraerts, Krishna, Manasa, Monmege, and Trivedi in 2014, so at least half the ownership is on them, and it has been eight years, so I feel it's reasonable to revisit it by now. The main result of our paper is a lower bound on the number of event points of the square root of two to the n, minus two.
07:25
Hence, the number of event points is two to the power of some constant times n, with a bit of a gap between the two bounds. We also generalized that to a PSPACE-hardness lower bound for the decision questions about these games. But next: well, we have shown a strong lower bound in general.
07:40
Therefore, we should look for good upper bounds for special cases. The automata case, that is, the one-player case, is already known: it is in NL, by Laroussinie, Markey, and Schnoebelen, 2004. Weights 0 and 1 is in P, as shown in the Brihaye et al. 2014 paper I already mentioned. Then we have the tree question.
08:00
We managed to show in this paper that it is at most n to the third, by showing the following lemma: the value function of a node of a tree with k leaves is a subset of k lines. This is not the same as there being at most k many line pieces, but it means there are at most k-squared many line pieces, because that is the number of intersections of k lines, leading to an n-to-the-third upper bound.
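The counting step of the lemma can be illustrated concretely: a piecewise-linear function whose pieces all lie on k fixed lines can only switch pieces where two of those lines intersect, and k lines have at most k(k-1)/2 intersection points. A small sketch with made-up lines:

```python
from fractions import Fraction as F
from itertools import combinations

def intersection_times(lines):
    """t-coordinates where two of the given lines (slope, intercept) meet;
    a value function built from these lines can only bend at such a t."""
    pts = set()
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        if a1 != a2:                     # parallel lines never meet
            pts.add(F(b2 - b1, a1 - a2))
    return sorted(pts)

# Three hypothetical lines: 1 - t, the constant 1/2, and 2 - 2t.
lines = [(F(-1), F(1)), (F(0), F(1, 2)), (F(-2), F(2))]
bends = intersection_times(lines)        # at most 3*2/2 = 3 bend points
```

With k of these lines per tree node and at most n nodes, the n-cubed bound follows.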
08:23
Finally, we managed to show that undirected graphs have polynomially many event points and are in P, by showing that if you go to a maximizer state, the maximizer can just go back immediately, so such a move never gains anything. Therefore, we can just erase all edges leading to maximizer states and solve a subgame of only minimizer states and the goal state,
08:43
and afterwards solve the maximizer states, which is also easy at that point. We can also talk about other special cases of graph structure. We talked about trees and undirected graphs, but there are others, such as DAGs, graphs of small in- and out-degree, planar graphs, graphs of small tree-width,
09:04
which are basically tree-like graphs, if you don't know what that is exactly; graphs with small path-width, which are basically path-like; graphs with small clique-width, which for some reason are not clique-like, but at least a clique has small clique-width; or finally, small rank-width. This is basically the list of all the graph-structure special cases I know of.
09:26
It turns out that our lower-bound construction actually lies in all of these special cases. Therefore, in any number of them, it is still going to be hard, except for maybe small path-width, where it is an open question whether that is hard or not. But everything else is PSPACE-hard and has a huge number of event points.
09:43
Finally, we also managed to show that DAGs, especially, are in PSPACE, and therefore the question there is PSPACE-complete. So this is sort of the final result about graph structure we had. We know a few game-property special cases, like weights 0 and 1, and automata, that is, the one-player case, which are already solvable in polynomial time,
10:03
but what happens if one player has only a few states and the other player many? It might be easier. It might also be easier if, instead of just two weights, we had a few weights compared to the general case. And we can also look at something called non-urgent states. So what is an urgent state? Well, a state is urgent if you cannot delay in it.
10:23
Trivially, if you cannot delay in any state, the game is equivalent to a priced game and thus lies in NP and co-NP. And the proofs by Bouyer, Cassez, Fleury, and Larsen, and by Rutkowski, of the upper bounds on the number of event points work by changing more and more states into urgent states. Therefore, one could sort of hope that if you have fewer non-urgent states,
10:42
then it would be easy. It turns out not to be so. In all these game-property special cases, we already have a large number of event points, and this is already the case with just slightly more than the trivial amount. So with just two weights, there is already an exponential number of event points.
11:05
With just one non-urgent state, there is already an exponential number. And it is basically immediately NP- and co-NP-hard: already with three weights, zero, a half, and one, it is NP- and co-NP-hard, and with just two non-urgent states, it is NP- and co-NP-hard.
11:23
And as you add more and more weights and more and more non-urgent states, you climb the levels of the polynomial-time hierarchy. The last item here, fewer states for one player, is still an open question. Now, let's look at the formal decision question we show hardness for. It is this one, and it is as follows:
11:41
given a state and two numbers such that the value at time zero is one of them, which one is it? That turns out to be PSPACE-hard, and it implies hardness for a lot of other questions, like the standard decision question, approximation questions, and strategy questions. Next, I will talk about the key idea of our lower bound.
12:01
So, we want to implement the following procedure in a simple priced timed game, and that will give us our lower bound on the number of event points. We have these two figures here, consisting of a red and a blue line, and we're going to move the blue line halfway up, like this, and then take the upper and lower envelopes, as the max and min of the functions,
12:24
and we end up with these two figures here. If you look at them, these two figures are each made up of two copies of the initial figure. So let's see it again: from here, we move the lower part halfway up, we take the upper and lower envelopes,
12:42
and we are left with two figures, each containing twice as many copies of the initial figure as before. This procedure we are going to implement in our simple priced timed game, and that gives our lower bound. So how do we do it? Well, to get a flat line, you just take a goal node. To get a falling line like this, you can take a max node, point it to goal, and give it a waiting cost rate of one.
13:06
That will give you precisely this, as we have seen earlier. And now we want to move it halfway up. So what we do is we add a cost for going to the goal node, which is different from the cost of going to the red node.
13:21
Then we want to take the lower and upper envelopes. Well, the lower and upper envelope is naturally what the minimizer and maximizer, respectively, compute if they do not wait. So we want them not to wait, and therefore we give them the worst possible waiting rate: the minimizer's worst rate is one, because that is the biggest number,
13:40
and the maximizer's worst rate is zero, because that is the smallest number. Therefore, they are going to take the lower and upper envelopes using this. How do we then continue? Well, we want to move halfway up, like this, and that means we put half the previous number here.
14:01
And then again, we do the same thing as before: we take the lower envelope by doing this kind of construction. This is basically a DAG, and it is very path-like: if you think of everything on the same level as being the same node, it is a path. It has low degree,
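The doubling procedure itself can be simulated directly on piecewise-linear functions, independently of the game encoding. The sketch below (my own illustration, not the paper's gadget) starts from a falling line and a flat line, repeatedly shifts the lower figure halfway up, takes the two envelopes, and counts breakpoints, which roughly double each round:

```python
from fractions import Fraction as F

def ev(f, t):
    """Evaluate a piecewise-linear function given as sorted (t, y) knots."""
    for (t0, y0), (t1, y1) in zip(f, f[1:]):
        if t0 <= t <= t1:
            return y0 + (y1 - y0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside domain")

def envelope(f, g, pick):
    """Upper (pick=max) or lower (pick=min) envelope of two PL functions."""
    ts = sorted({t for t, _ in f} | {t for t, _ in g})
    cross = []
    for a, b in zip(ts, ts[1:]):
        d0, d1 = ev(f, a) - ev(g, a), ev(f, b) - ev(g, b)
        if d0 * d1 < 0:                  # the pieces cross strictly inside
            cross.append(a + (b - a) * d0 / (d0 - d1))
    allts = sorted(set(ts) | set(cross))
    return [(t, pick(ev(f, t), ev(g, t))) for t in allts]

def simplify(f):
    """Drop interior knots where the slope does not actually change."""
    out = [f[0]]
    for p, q, r in zip(f, f[1:], f[2:]):
        if (q[1] - p[1]) * (r[0] - q[0]) != (r[1] - q[1]) * (q[0] - p[0]):
            out.append(q)
    return out + [f[-1]]

hi = [(F(0), F(1)), (F(1), F(0))]        # red falling line: 1 - t
lo = [(F(0), F(0)), (F(1), F(0))]        # blue flat line at 0
counts = []
for step in range(4):
    shifted = [(t, y + F(1, 2 ** (step + 1))) for t, y in lo]  # halfway up
    hi, lo = (simplify(envelope(hi, shifted, max)),
              simplify(envelope(hi, shifted, min)))
    counts.append(len(hi))               # number of breakpoints per round
```

After four rounds the breakpoint counts are 3, 5, 9, 17: each envelope step doubles the number of zigzags, which is exactly the exponential growth of event points the gadget produces.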
14:21
it has only two rates, and it has only one state, the lower-right state, in which a player waits. Therefore, it has all the properties we wanted. Now let's move on to showing the complexity results. I'm only going to show the NP and co-NP part; you can read the paper for the PSPACE version.
14:40
To show NP-hardness, one of the classic ways of doing it is to show how to solve a SAT formula. The classic way of solving a SAT formula is to use a truth table. So you have a bunch of columns corresponding to the variables of the formula, some columns corresponding to the subformulas (the parentheses) of the formula, and then the full formula on the right-hand side.
15:03
So the first variable will have the first half of the time be 1 and the last half be 0. V2 will then change twice as fast, V3 twice as fast as that, and so on. In our reduction, we will end up having a state for each of these columns, and time will then encode the assignment.
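This flipping pattern can be sketched directly: row r of the table is just the binary representation of r, read so that v1 is the most significant bit (an illustration of the table layout only, not of the game gadget):

```python
def truth_table_columns(n):
    """Column i (0-based) holds variable v_{i+1}: v1 is 1 for the first
    half of the rows, v2 flips twice as fast, and so on."""
    rows = 2 ** n
    return [[1 - ((r >> (n - 1 - i)) & 1) for r in range(rows)]
            for i in range(n)]

cols = truth_table_columns(3)
# cols[0] is v1: 1 for the first four rows, then 0; cols[2] alternates.
```

In the reduction, the row index becomes the (discretized) time coordinate, so "time handles the assignment".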
15:24
After having made this kind of truth table, we need to look at the right-hand side and see if there are any 1s in that column; if there are, the formula is satisfiable, otherwise it is not satisfiable. So how do we do this in our priced timed games? Well, the first thing we do is make the value function a bit easier to look at by adding time divided by 2 to the value.
15:48
So let's not look at just the value; let's look at the value plus time divided by 2. That makes it sort of horizontal instead of going diagonally down. The next modification we are going to make is to move the blue part up a bit,
16:03
and now, if we introduce a black line going through the middle, we see that half the time the value is above the black line and half the time it is below. The idea is that as long as it is above the black line, the corresponding variable is true, and as long as it is below the black line, the corresponding variable is false.
16:22
Now, it's not quite perfect yet, because we start in the middle of a true assignment and also end in the middle of a true assignment, so we would like to take out just a small part of this and then use that small part as the whole thing. This small part, between these two blue lines, is what we want to take out.
16:41
I'm going to show you how to do this, and then use it as the implementation of the variables. If we go back to our construction, it looks like this and gives us this value function. What we then do is multiply all the costs by 2, and add an edge from this one-cost max state to the goal with cost half the smallest number.
17:06
This means we end up with this kind of value function, and if we then move this blue line up, as we wanted to, we end up with precisely this, exactly as we wanted. So now the blue line here will correspond to variable V1, and the red line will correspond to the negation of V1.
17:30
We are not going to allow negations in our system except on variables, and we are allowed to do this because of De Morgan's laws. So now we have seen how to implement the variables for a SAT formula.
17:42
We have seen how to implement V1; similarly, we can implement not-V2, and similarly V3. The idea was that time goes downward: in the first half, V1 is true, and in the second half, V1 is false, and similarly for the other variables. So we have a state for V1 and a state for not-V2, and to implement (V1 or not V2),
18:05
we just take a max state, because "or" is, in essence, a max function: if either of them is true at a given point in time, then the disjunction should be true, otherwise false. So we just take a max state, and again, we don't want him to wait, so we give him a waiting rate of zero.
18:25
Similarly, to get the full formula, we introduce the V3 state and take the "and" with it, implemented as a min state with a waiting rate of one, because that will just take the minimum of the two functions.
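On the discrete truth table, this gadget logic boils down to pointwise max for "or" and pointwise min for "and". A sketch, using the hypothetical example formula (v1 or not v2) and v3 (my own example, not one from the talk):

```python
def var(n, i):
    """Column of variable i (1-indexed) over the 2**n rows: v1 is 1 for
    the first half, v2 flips twice as fast, and so on."""
    return [1 - ((r >> (n - i)) & 1) for r in range(2 ** n)]

def neg(c):     return [1 - x for x in c]
def or_(a, b):  return [max(x, y) for x, y in zip(a, b)]   # max state, rate 0
def and_(a, b): return [min(x, y) for x, y in zip(a, b)]   # min state, rate 1

n = 3
phi = and_(or_(var(n, 1), neg(var(n, 2))), var(n, 3))
satisfiable = max(phi) == 1    # some row above the black line
tautology   = min(phi) == 1    # the co-NP variant uses a min state instead
```

The final max state with waiting rate one half plays the role of `max(phi)`: it can wait to a point above the black line exactly when some row evaluates to 1; replacing it with a min state gives the tautology test.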
18:44
Now, we want to find whether there are any ones in the final column over here. How do we do that? Again, it's fairly easy. Remember that this black line was only horizontal after our modification, which means it was actually falling at a rate of one half.
19:00
So if we introduce a max state with a waiting rate of one half, what that allows him to do is wait until any point at which we are above this black line, and thereby get something above the black line, if possible. Otherwise, if the function always stays below the black line, he can of course not wait to any point and get anything above the black line.
19:22
Therefore, if he, at time zero, can wait to some point and get above the black line, there must be a satisfying assignment to the formula; otherwise, there is no satisfying assignment. So this is how we implement the SAT formula and how we solve that problem. Now, the co-NP version is quite easy:
19:43
we just do tautology. Tautology is the complementary problem to SAT; it amounts to testing whether there are any zeros in the outcome column, and that we do by simply turning the max state into a min state. Then the value will be below the black line if and only if there is a point where the function is below the black line, and therefore a zero.
20:09
If there is a zero, it is of course not a tautology, and therefore we have solved the tautology problem. And therefore, this problem is both NP- and co-NP-hard.
20:21
The last thing I wanted to do before ending my talk is to mention some open questions. The first open question is a slight complexity gap: we have a PSPACE lower bound and an EXPTIME upper bound. That is a slight complexity gap, and someone could work on closing it.
20:41
There is the complexity of small path-width. I mentioned that our gadget for the event points is basically a path, but we have added a big DAG on top of it, and a big tree on top of that for the formulas, and this is basically where we lose the path-width.
21:00
So we don't really know what the complexity of small path-width is, that is, whether games that are very path-like are easy in general. We don't really know; we just know there are many event points. Another question could be: what if the numbers in the input are given in unary? Our construction has exponentially growing costs on the edges.
21:22
Is the problem easier if you have only small numbers? And let me mention the remaining open special case: what if one player has only a few states? We don't really know anything about that one; it could be easy, it could be hard. A final question, which I put in parentheses because I haven't defined what two-clock priced timed games are:
21:44
what is the complexity of those? The variant with three or more clocks is known to be undecidable, but two clocks is still open. One could work on that one as well. So thank you, that was my talk.