
Logic Beyond Formulas: A Proof System on Graphs


Formal Metadata

Title
Logic Beyond Formulas: A Proof System on Graphs
Subtitle
Q/A Session A - Paper A6.D
Title of Series
Number of Parts
56
Author
Contributors
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Transcript: English (auto-generated)
In logic, we are used to the idea of building up formulas using connectives such as AND and OR. In this talk, we explore how we may be able to deal with connectives,
which cannot necessarily be expressed in terms of AND and OR operators. And these more general connectives are better expressed in terms of graphs. So we move beyond formulas and look at building proof systems or logics on general graphical structures.
This is joint work with Matteo Acclavio and Lutz Straßburger. I am Ross Horne speaking. We begin with an overview of the talk. We firstly want to motivate why we would want these additional structures.
Why would we want to do logic on graphs? In short, it is because we can end up with structures such as the following. You see this N shape here, which cannot be constructed in terms of AND and OR operators. And we can find contexts in which they are useful.
We then think about, well, if there has been no logic designed for such structures previously in the sense that we mean, then how do we go about designing this logic? So we want to build the logic from logical principles. For instance, we would want to have that we have some graph here that the graph should imply itself.
It would be very worrying if a logic did not have this property. So we start with very simple assumptions like this and essentially build a system. And then we go on to examine some of the challenges that we face.
A really important thing is that we needed to go beyond the standard proof theoretic toolkits that we use from day to day. So, for instance, we have a structure like this. We have the atomic proposition A and we have some graph which is in a disjunction with it. We should be able to rewrite that graph to become the complement of that atomic proposition and thereby apply a rule to that atomic proposition.
This is a standard result. And notice that here in this formulation, we have some additional information here. We have some additional context.
We need to do things inside context, be much more context aware than we have previously. Of course, this context is provable and we'll see this pattern arising later in this talk. This work stems from some rather simple observations and ubiquitous structures such as queues.
Queues are used, of course, in distributed systems for reliability, for instance, ensuring that series of messages arrive in the same order in which they were sent. And if we want to model a queue graphically, maybe we want to do it like this. So we could enqueue something. So A is just something that we're putting in the queue.
And if the queue allows only one thing to be in it, a very short queue, then we would have to dequeue A before B can be enqueued. So B cannot be enqueued before A is dequeued. And we can represent such a one-buffer as a formula using a non-commutative binary connective as follows.
Similarly, if we want to model a two-queue, we can enqueue A and then enqueue B. But if we want to enqueue C, first we must dequeue A, because only two things are allowed in this still rather short queue.
And of course, we can model this as formulas. But this is getting a little bit more complex. Let us break this down. What's happening here? Well, we have the enqueue A event. Everything happens after this enqueue A event.
So we have a binary operator here. And what happens here? Well, we have everything before the box, the enqueue A event is related to everything inside the box in the same way. And similarly, everything after these two enqueue B and dequeue A events, everything is related in the same way.
So we can actually break this down into another operator and we can actually say, well, these are two things in parallel. So we can decompose a two queue using series and parallel operators, using binary operators.
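The series/parallel reading of the two-queue just described can be made concrete. A minimal sketch in Python; the pair-of-lists graph representation and the event names are our own illustration, not from the talk:

```python
def atom(x):
    """A single event with no dependencies."""
    return ([x], [])

def parallel(g, h):
    """Parallel composition (the OR-like connective): disjoint union,
    adding no dependency edges."""
    return (g[0] + h[0], g[1] + h[1])

def series(g, h):
    """Series composition (the non-commutative connective): every event
    of g must happen before every event of h."""
    return (g[0] + h[0],
            g[1] + h[1] + [(u, v) for u in g[0] for v in h[0]])

# The two-buffer trace, read as nested series/parallel blocks:
# enq(a) happens first, then (enq(b) || deq(a)), then (enq(c) || deq(b)).
two_buffer = series(atom("enq a"),
                    series(parallel(atom("enq b"), atom("deq a")),
                           parallel(atom("enq c"), atom("deq b"))))
events, deps = two_buffer
print(len(events), len(deps))  # 5 events, 8 dependency edges
```

The point of the talk is precisely that this nesting of binary operators works for the two-buffer but, as explained next, fails for the three-buffer.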
The surprise is that if we want to do the same for a simple structure such as a three-buffer, which is just a queue in which three things can simultaneously be enqueued, we cannot write down a formula in the same way, nor can we if we have, for instance, unbounded buffers.
And the reason has been known since the 1970s. In short, it is because whenever we have this N-shaped structure appearing as an induced subgraph, and these N shapes appear all over the place in these structures, then we cannot write down a formula in terms of binary operators to represent this pattern of dependencies.
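The obstruction described here, the N shape, is exactly an induced P4, and it can be detected by brute force over four-vertex subsets. A small sketch; the function name and graph representation are our own:

```python
from itertools import combinations, permutations

def has_induced_p4(vertices, edges):
    """Return True if the graph contains an induced path on 4 vertices
    (the 'N shape'), which is exactly what blocks a representation
    built from binary AND/OR-style operators."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            # induced path a-b-c-d: exactly these three edges among the four
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a]
                    and d not in adj[b]):
                return True
    return False

# The path a-b-c-d is itself an N shape:
print(has_induced_p4("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))  # True
# A series-parallel (cograph) pattern has none:
print(has_induced_p4("abcd", [("a", "b"), ("c", "d")]))  # False
```

A graph is a cograph, i.e. expressible by formulas over binary operators, precisely when this check returns False.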
We were looking at asymmetric dependencies with these non-commutative operators, but the same problem arises even for much simpler structures, the more AND- and OR-like structures.
For instance, here we have that A is separated from B and C is separated from D. We are looking at the symmetric operation of separation. And what this represents is that because A and B are separated, they cannot communicate, but B can still communicate with C and with D.
And we can represent this, of course, as a formula. So we have the A tensor B here and the C tensor D, where tensor is AND. And we have the par here, which is this ⅋ operator. Why we are using linear-logic-style notation will become clear later.
And of course, we could represent this structure. So what is happening here: B can communicate with C and with D, but B and D cannot communicate with A or C. And of course, we can represent this pattern of dependencies using binary operators. But look at this.
We cannot represent this one; again we have the N shape appearing. This is, in fact, the P4 graph, the path on four vertices. And this N shape cannot be represented using binary operators. And it may be a real-life problem. So, for instance, we have some angry person who is trying to communicate with either the technical support or the manager of the company.
He is happy to talk to either. But the calm person does not really want to talk to the manager; he just wants to talk to the technical support and does not want to get involved there. The manager does not want to get involved with the technical support either, because he is above these kinds of things.
So we have policies which are separating people, but we're still allowing some communications to happen. And this simple network could not be represented if we built things up using binary operators representing separation and concurrency, as illustrated previously.
So we can really only model separation in networks of diameter two, because of this restriction that we cannot have paths of length three appearing in the graphs induced by our formulas. How do we even go about designing such a logic, given that we have no semantics or reference points for logics on such structures?
Well, we could start, for instance, by looking at cographs, which are the graphs generated by formulas. So, for instance, we would want to forbid the following inference, where we have this bow tie appearing out of nowhere.
And this corresponds to forbidding an inference such as the following, which is expected in pretty much any logic. So if we didn't forbid this inference, then AND and OR would collapse. We might also want to permit the following inference. So what's happening here is a little bit more complex.
Essentially, we have this disjunction of two graphs, and the graph on the right is moving inside the other graph; it is moving inside a module.
A module is essentially a subgraph which is connected to everything on the outside in the same way, and it works, informally, like a context. And you can see that the module is shown above, and the graph on the right has moved inside the module. Essentially, this corresponds to a rule like this, where you can see the formula on the right has moved inside another formula.
And this is admissible in pretty much any logic you can think of. So we have some reference points that we think we don't want to violate. How do we consider the inferences which are allowed or the rules that were allowed when
we have graphs which don't correspond to formulas such as this N shape that we see there? OK, we weren't allowed to have the bow tie appearing at the top, but should we allow, for instance, an N or disallow this?
What if we move from a V shape to an N shape? Should this be permitted? What about the third structure here? We have an N and its complement. And what has happened above is that every dependency has been added, except the dependencies that would basically allow the A and the not A to appear in the same context as each other. So there are no dependencies between A and not A, and similarly between B and not B, C and not C, and D and not D. So which of these should be allowed? Which ones should be forbidden?
The first problem we face when designing logics on graphs rather than trees is that we no longer have subformulas. But we can recover this by using the concept of a module, which I mentioned previously. So a module is essentially a subgraph such that everything inside is related to everything outside in the same way.
And we're going to use context-like notation, as we would for formulas, to denote a graph which contains a module M. OK, so modules generalize subformulas.
Another concept we're going to require is the concept of a prime graph. And a prime graph is essentially a graph which contains no non-trivial modules. So it can't be decomposed any further, and these four graphs in front of you are examples of prime graphs.
We are going to use a bit of notation here. This kind of bracket will denote an operator which is essentially one of these prime graphs, say P, perhaps the N shape, where a number of graphs, say A1 to An, replace each of the nodes inside the graph.
So these are our n-ary connectives, formed from prime graphs, or from graphs in general, in fact. And if we take cographs, which are those generated by formulas, of course we can decompose them: a nonempty cograph is either a singleton graph, just an atomic proposition, or a disjunction, or a conjunction.
But in graphs in general, there's a third possibility. It could also be a prime graph formed from one of these operators. So we can recover the tree structure using these modular decompositions of graphs.
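The notions of module and prime graph just introduced can be checked naively by enumerating vertex subsets. An illustrative sketch, with our own naming and representation, not taken from the paper:

```python
from itertools import combinations

def is_module(vertices, edges, m):
    """A module M: every vertex outside M is adjacent either to all of M
    or to none of M (everything inside relates to the outside uniformly)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = set(m)
    for w in set(vertices) - m:
        inside = adj[w] & m
        if inside and inside != m:
            return False
    return True

def is_prime(vertices, edges):
    """Prime: no modules other than singletons, the empty set,
    and the whole vertex set."""
    vs = list(vertices)
    for k in range(2, len(vs)):
        for m in combinations(vs, k):
            if is_module(vs, edges, m):
                return False
    return True

p4 = ("abcd", [("a", "b"), ("b", "c"), ("c", "d")])
print(is_prime(*p4))  # True: the N shape is a prime graph
cograph = ("abcd", [("a", "b"), ("c", "d")])
print(is_prime(*cograph))  # False: {a, b} is a nontrivial module
```

This matches the trichotomy in the talk: a graph decomposes as a singleton, a disjunction or conjunction of modules, or a prime graph applied to modules.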
Bear in mind these definitions. We come back to the question of designing the logic, and for this we look at some universal logical principles, or not quite universal, because not every logic necessarily has an involutive negation.
We design our logic such that it does have an involutive negation, which gives rise to De Morgan dualities, of course. And we also require consistency, which is that if G is provable, then the complement of it is not provable. And essentially, if we make these two assumptions already, we can say that negation must be defined as graph complement.
There's no other choice that will satisfy these conditions. And graph complement gives rise immediately to the De Morgan dualities that I mentioned. So this is here we're using this generalised connectives for any graph.
And this means the complementary graph, or the complementary connective, much as AND and OR are related. Indeed, those are both prime graphs. And already we can exclude some theorems just from these two principles. We can say, for instance, that the following cannot be proven, since the complement of this graph is, of course, a not-A related to an A related to an A related to a not-A, which is exactly isomorphic to the graph itself. And therefore, consistency could not hold if it were provable. We also add some other logical principles, which again are fairly common, but not absolutely universal.
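The consistency argument just given turns on graph complement. A small sketch, in our own representation, checking that the complement of the P4 is again a path on four vertices; this self-complementarity is why the labelled N above is isomorphic to its own negation:

```python
def complement(vertices, edges):
    """Negation of a graph: complement the edge relation (in the logic,
    each vertex label is also dualised)."""
    es = {frozenset(e) for e in edges}
    return [(u, v) for i, u in enumerate(vertices)
            for v in vertices[i + 1:] if frozenset((u, v)) not in es]

p4_edges = [("a", "b"), ("b", "c"), ("c", "d")]
# The complement of the path a-b-c-d is the path c-a-d-b:
print(complement(list("abcd"), p4_edges))  # [('a', 'c'), ('a', 'd'), ('b', 'd')]
```

So if the end vertices carry the dual labels of the middle vertices, the graph and its complement are isomorphic as labelled graphs, and provability of that graph would violate consistency.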
For instance, in intuitionistic logic, you do not have implication defined as not something or something else. But in several logics such as classical logic and linear logic, it's quite a normal thing.
So we are going to require that implication is defined in terms of disjunction and negation, as represented here. OK, we are going to require that implication is reflexive; this is a very reasonable assumption indeed. And we are going to assume transitivity: if F implies G and G implies H, then F implies H.
This is a fairly common principle; it would be pretty worrying if your logic did not have it. The next one is maybe not completely universal; I think people could devise logics where this does not hold. We want to require that if G implies H, then G implies H in any context.
Remember, contexts are, of course, built in terms of modules. So this we want to retain in our logic. I'm going to call this distributivity, although it could be given several other names, such as monotonicity in lattice theory.
Monotonicity is, of course, already reserved as a word in logic. So with some effort, if I just give you the principles I've laid out so far, we can already prove everything, all theorems that we would like to have in our logic. We want to do better than this. We are structural proof theorists and we want to design a system that is analytic.
So analyticity is what the Gentzen systems originally had. Basically, it means that every conclusion corresponds to finitely many premises during proof search: your proof search tree is finite. Of course, it was originally formulated in terms of subformulas, but already we are in a controversial world when it comes to subformulas, so this is a sufficient definition of analyticity. And we might as well also have that this is a proof system in the sense of Cook and Reckhow, i.e. every rule can be checked in polynomial time.
Now, using the rules of distributivity, transitivity and reflexivity, it is possible to generate rules such as the following rule here. And this rule is not as strange as it looks at first. What we are saying is that if G is provable, and the context into which we plug G is also provable, then the context with G plugged back into it is indeed provable. So if the parts are provable, then the whole is provable. And the context on the outside there we can always add, due to distributivity, by the way.
This we call the super switch up rule, because it is actually a generalization of the switch rule from linear logic. From reflexivity and distributivity, we can also generate the following rule.
This is just that not G or G holds, as you get in classical logic and linear logic. So this is not a big surprise here. Again, the context is coming from the distributivity rule. We're allowed to add that. And of course, using reflexivity, transitivity and distributivity again, we can also generate the duos of these rules,
where we swap the premise and the conclusion and dualize everything. So we get the super switch down rule and the interaction up rule, which is actually a generalization of the cut rule in what are called deep inference formulations.
So we will see why we need deep inference in a minute. But this set of rules is not quite enough by itself. It would be a nice candidate that seems to generalize multiplicative linear logic nicely to graphs, but there is something missing. So, for example, what happens if we want to prove that a prime graph implies itself?
So we have N implies N. How are we going to do this? Well, what we do is, of course, we're going to have to dualize one of the N shapes. This is the dualization of the N. And we have to still be able to prove this implication. Otherwise, P implies P doesn't hold, which would be weird.
And so what we have is a special rule which brings together the components of a prime graph in this complement. And then after that, of course, the A and not A are together and they cancel each other out. So we require, in general, a special rule for prime graphs. This was a key insight that we needed.
So you have a prime connective and its dual connective, which is also a prime graph. And then we have all the components: the first component of the first prime graph is brought together with the first component of its dual graph, and everything else, all of these components, stand in tensor relation to each other.
And of course, using distributivity, transitivity and reflexivity, we can also generate the dual rule as we did before; so we have this P up rule, or prime up rule. And now, with this rule, the interaction and cut rules can be made atomic: we can bring them down so that they only apply to atomic propositions.
And that is by using these super switch and prime rules. And from this, we get our system. Our system, GS, consists of the following rules: the super switch down rule, the P down rule and the atomic interaction rule.
And the dual rules to these are all admissible in the system. The interesting thing is that from these rules together we can derive the cut rule. So if these dual rules are admissible, that is, if we can prove all the same theorems using just the system GS as using GS with any of these rules added, then we have a cut elimination result. And this is the main result of our paper: cut elimination does indeed hold for the system GS, this graphical system. And indeed, GS satisfies all the criteria that we laid out: reflexivity, transitivity, distributivity, consistency.
It is analytic, and it satisfies all the conditions that we demanded earlier. Most of these logical principles are established via cut elimination.
And the engine room of cut elimination for us is splitting. What the splitting technique does is simulate the rules of the sequent calculus when we do not actually have sequent rules or formulas available to us. Now, the idea with splitting is that we can take any of these connected components in the conclusion of the graph,
such as this seven here, and we can rewrite the rest of the graph such that a rule can be applied to the shape or component we have selected. What has happened here is that the not D has moved inside the N shape and has then cancelled out the D.
And we are left with two separate components, one of which can move to one side of the seven and one of which can move to the other side. And then we proceed, because these graphs are now formula-like. OK, we could also have taken the N shape, and then we can renormalize the proof.
This is what splitting says: we can take any of these components and renormalize. What happens instead is that the not D moves inside the seven, forming a Z shape. And of course, we have seen this structure before and we know how to prove it. So this is the idea with splitting. But splitting here is a little more complicated than it has been previously.
So the problem is that if we select the component A now, in this slightly different example, we should be able to rewrite the rest of the graph such that we can apply the interaction rule to not A. But we cannot do this: the B outside the N prevents it from happening.
But what we can do instead is treat the rest of the graph as a kind of context and move A inside the graph. And then, of course, we can apply a rule. So we need additional context awareness to handle these particular scenarios.
This is not required when we have formulas. This is a problem particular to graphs that allow induced subgraphs which are P4, these paths of length three.
We briefly illustrate another problem which is unique to this graphical system and has not been witnessed before in logics with formulas. If you look at this scenario here, we have this N shape. And according to what I said before about splitting, I should be able to rewrite the rest of the graph such that it becomes a Z-like shape,
even though some of the components are empty, and then I can apply this P down rule. But I cannot do this in this scenario. So we need a special case which allows prime graphs to be destroyed. And that is what is happening here: the not B here is moving inside the N shape and destroying part of the graph.
And then we continue with the remaining components that are left. So we allow for the destruction of prime graphs. And this leads us to our statement of splitting, which, remember, is the core tool allowing us to do proof theory. The first part is a normal splitting result.
I mentioned at the beginning that there is context awareness; this was illustrated by an example previously. And we have this case which says: if you have some graph next to some prime graph, then I should be able to rewrite the rest of the graph such that there is a complementary prime graph, and each of these components, K1 to Kn, is provable.
And of course, there is this extra context awareness which we had for the atoms, and we can do this for every prime graph. But there is a special case saying that we could also possibly break up the graph into two components, inside the context again, where one of the components destroys one of the modules Mi inside the prime graph.
While one of the main challenges is the proof of splitting, there are also challenges when we apply splitting for showing the admissibility of certain rules, particularly the up rule for prime graphs.
And this is one of the cases involved; the details can be read in your own time. So let us conclude. The problem that we have addressed is that we have designed a minimal logic on graphs, with an involutive negation and with implication defined as not G or H for G implies H. And it satisfies the logical principles expected of an analytic propositional proof system.
These are the rules; there are exactly three rules in our system. The insight is that deep inference was required, essentially this context awareness, together with the splitting results.
So we have actually generalized further the calculus of structures, which is the basis of deep inference, in order to establish cut elimination here. From cut elimination, we obtain the following results about our system. Firstly, provability in GS is NP-complete; we did not mention this previously. GS is a conservative extension of multiplicative linear logic with mix, so it extends an existing, well-established logic. GS is consistent, and implication is indeed transitive and distributive. These are our key results.