
Decision Problems in Information Theory


Formal Metadata

Title
Decision Problems in Information Theory
Subtitle
Q/A Session B - Paper B4.F
Series Title
Number of Parts
123
Author
Contributors
License
CC Attribution 3.0 Germany:
You may use, adapt, and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Transcript: English (automatically generated)
Hello. Today, I'm going to talk about decision problems in information theory, and this is joint work with Mahmoud Abo Khamis, Phokion Kolaitis, and Hung Ngo. My name is Dan Suciu. This talk is based on a paper that appears in ICALP 2020, and it is the talk presented at the conference. Information inequalities are inequalities between entropic terms, and they were called by Pippenger in 1986 the laws of information theory. Today, they have even more applications than they had back in 1986, and today we
require even more complex laws of information than linear information inequalities. Yet, the most basic algorithmic questions remain open. It is open today whether even deciding simple linear information inequalities is decidable. In this talk, my goal is to introduce
several decision problems in information theory and present some results on their algorithmic complexity, by placing them in the arithmetic hierarchy. A quick outline of my talk: I will give some basic definitions, then I will define decision problems
in information theory, briefly mention some applications, then describe the results, and conclude. In this talk, we will be concerned with finite probability spaces and a random variable over such a probability space. The entropy of this random variable is defined in the standard way: it is a sum, over all outcomes of that random variable, of the probability of that outcome times the log of the probability of that outcome, with a minus sign. In this talk, we are interested not in a single random variable, but in n jointly distributed random variables X1 up to Xn. For any subset of these random variables, we consider the entropy of that subset,
so therefore we have 2 to the power n entropic values, or entropic terms, and we denote them as you can see here, H(X1, X4) or H(X, Y, Z); this means the entropy of the
joint distribution of the variables X, Y, and Z. When X and Y are sets of random variables, then we denote their union by simply dropping the union symbol; we write XY for the union. We will often use these standard definitions. The conditional entropy, written H(Y | X),
is simply the difference of two entropic terms: the entropy of the union of all the variables minus the entropy of the variables on which we condition. The conditional mutual information I(Y; Z | X) is the sum of four entropic terms that are shown here. The details
are not very important for this talk. Since there are 2 to the power n possible entropic values, we view them as a vector, and conversely, given a vector h with 2 to the power n dimensions, we call it entropic if there exist n
random variables such that each component of the vector is precisely the entropy of the corresponding subset of random variables. For that reason, because we view entropies as a vector, we will often denote them with lower case; we write lower case h of a subset
of random variables. The set of all entropic vectors is denoted Γn*. This is a notation introduced by Zhang and Yeung in the 90s. It is known that this set is not
topologically closed, and it is not convex. However, its topological closure, denoted Γ̄n* with a bar, is known to be convex. It is much better behaved, and the vectors inside this set are called almost entropic vectors. They are simply limits of sequences
of entropic vectors. It is known that for n greater than or equal to 3, the set of entropic vectors is strictly included in the set of almost entropic vectors. Okay, so with these basic definitions, we can now describe the topic of our talk today,
which are information inequalities. These are simple assertions that a linear combination of entropic terms is greater than or equal to zero. C denotes a vector of coefficients.
They are real numbers, and c · h is the standard dot product. We say that the inequality is valid, or Γn*-valid, if it holds for all entropic vectors. Similarly, we can refer to validity over the set of almost entropic vectors.
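For readers following without the slides, the standard definitions just described can be written out as follows. This is the usual notation, not the slides' verbatim formulas.

```latex
% Standard definitions described above (usual notation):
\begin{align*}
  H(X) &= -\sum_{x} \Pr[X = x] \log \Pr[X = x]
       && \text{(entropy)}\\
  H(Y \mid X) &= H(X,Y) - H(X)
       && \text{(conditional entropy)}\\
  I(Y; Z \mid X) &= H(X,Y) + H(X,Z) - H(X,Y,Z) - H(X)
       && \text{(conditional mutual information)}
\end{align*}
% A vector h in R^{2^n} is entropic if there exist random variables X_1,...,X_n with
% h(S) = H(X_S) for every subset S; the set of entropic vectors is Gamma*_n, and its
% topological closure (the almost entropic vectors) is written with a bar.
% An information inequality  c . h >= 0  is valid if it holds for every entropic h.
```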
And because these inequalities are very simple linear expressions, the notion of validity over entropic vectors coincides with the notion of validity over almost entropic vectors. Good. So this justifies our first problem statement, which is the information inequality
problem, which is as follows: given an information inequality with integer coefficients, decide if it is valid. We insist on the coefficients being integers because we want to use this as a decision problem.
Okay, so to give you some examples of valid inequalities, here are the very well known Shannon basic inequalities. They are called monotonicity, which says that the entropy of more random variables is greater than or equal to the entropy of a subset of random variables,
and submodularity. These information inequalities are known to be valid. Their consequences are called Shannon inequalities, and it is decidable if a given linear inequality is a Shannon inequality. In other words, it is decidable if a given
linear inequality is a consequence of Shannon's basic inequalities. Here is an example. This is an inequality, which is a Shannon inequality, which means it is a consequence of these three, and therefore it is valid. So let's prove that this inequality is a consequence of
Shannon's basic inequalities. So we start with the terms on the left, and we apply submodularity to the first two terms. This allows us to replace them with two terms: one for the union of the variables, which means X, Y, and Z, and one for the intersection of
the variables, which is just Y. And then we apply submodularity again to these two terms, and we get X, Y, Z, and the empty set, and H of the empty set is zero. So this is a short proof that the inequality that we have seen here is a Shannon inequality.
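To make the two basic Shannon inequalities and this style of derivation concrete, here they are in standard form together with an illustrative two-step derivation. The slide's own example is not reproduced on this page, so the inequality below is only a representative one in the same spirit.

```latex
% Shannon's basic inequalities, for sets of random variables A and B:
%   monotonicity:   H(A u B) >= H(A)
%   submodularity:  H(A) + H(B) >= H(A u B) + H(A n B)
% Illustrative derivation (representative, not necessarily the slide's example):
\begin{align*}
  H(X,Y) + H(Y,Z) &\ge H(X,Y,Z) + H(Y)
      && \text{(submodularity, } A=\{X,Y\},\ B=\{Y,Z\})\\
  &\ge H(X,Y,Z) + H(\emptyset)
      && \text{(monotonicity, } H(Y) \ge H(\emptyset))\\
  &= H(X,Y,Z)
      && (H(\emptyset)=0),
\end{align*}
% so  H(X,Y) + H(Y,Z) - H(X,Y,Z) >= 0  is a Shannon inequality, hence valid.
```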
Good. So in essence, what Pippenger asked in 1986, whether the Shannon inequalities are complete for all information inequalities, is at its core a question about decidability. If Shannon's inequalities were complete, then it would mean that we can check every information
inequality by checking if it is a consequence of the Shannon inequalities. However, Zhang and Yeung, in a surprising result in 1998, gave the first example of an inequality that is valid but is not a Shannon inequality. We call it a non-Shannon
inequality. Moreover, in 2007, Matúš proved an even more surprising result: there are infinitely many non-Shannon inequalities that are pairwise inequivalent, and this only requires four random variables, n equals 4. And this ruined any hope for obtaining decidability
by listing a finite set of inequalities that are complete for checking all linear inequalities in information theory. So even more interestingly, today we are facing applications
that require more complex constraints than simple linear information inequalities, and this is my topic next. I'm going to define the more general decision problems in information theory. The most general form of decision problem that we're going to consider is that of Boolean
information constraints. A Boolean information constraint is given by a Boolean formula F. Here is a brief example to give you some intuition. This is a Boolean formula with four Boolean variables, and as many vectors of real coefficients, and the Boolean information
constraint is just the assertion that this Boolean function applied to the inequalities defined by these vectors of coefficients is true. It is this assertion written here.
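In symbols, and without reproducing the slide's concrete example, a Boolean information constraint has the following general shape; the number of atoms m is an arbitrary parameter.

```latex
% A Boolean information constraint: a Boolean formula F applied to m atomic linear
% information inequalities, required to hold for all entropic (or almost entropic) vectors h:
\forall\, \mathbf{h} \in \Gamma^*_n \ (\text{or } \overline{\Gamma^*_n}):\quad
  F\bigl(\, \mathbf{c}_1 \cdot \mathbf{h} \ge 0,\ \ \mathbf{c}_2 \cdot \mathbf{h} \ge 0,\ \dots,\ \mathbf{c}_m \cdot \mathbf{h} \ge 0 \,\bigr)\ =\ \text{true}.
```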
In this case, we distinguish between validity over entropic vectors and validity over almost entropic vectors, and there are cases where it is known that these two notions of validity differ. So the most general problem that we consider is that of Boolean information constraints, which says given such a Boolean information constraint where all the coefficients are
integers, check if it is valid for all entropic vectors, or check if it is valid for all almost entropic vectors. Here are a few types of Boolean information constraints, each of which has important applications. The simplest type is that of linear information inequalities,
or simple information inequalities, like the one we have seen before. Next, we consider max information inequalities. These are of the form: the maximum of some expressions is greater than or equal to zero, or the maximum of some expressions is greater than or
equal to some other expression. These are equivalent to the Boolean disjunction of several inequalities, and they are a special case of a Boolean information constraint. More generally, we can consider conditional inequalities, which assert that if some inequalities, the preconditions, hold, then another inequality holds as well.
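The two special cases just mentioned can be written as follows; the reading of a max inequality as a disjunction is immediate.

```latex
% Max information inequality and its reading as a Boolean disjunction:
\max_{i=1,\dots,m} \bigl(\mathbf{c}_i \cdot \mathbf{h}\bigr) \ \ge\ 0
  \quad\Longleftrightarrow\quad
  \bigvee_{i=1}^{m} \bigl(\mathbf{c}_i \cdot \mathbf{h} \ge 0\bigr).
% Conditional information inequality (preconditions imply a consequent):
\bigl(\mathbf{c}_1 \cdot \mathbf{h} \ge 0 \ \wedge\ \dots\ \wedge\ \mathbf{c}_k \cdot \mathbf{h} \ge 0\bigr)
  \ \Longrightarrow\ \mathbf{c} \cdot \mathbf{h} \ge 0.
```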
And finally, we consider general Boolean information constraints, which can have multiple preconditions and max inequalities as postconditions. All these types of Boolean information constraints have important applications,
and my goal next is to briefly describe these applications before we discuss the complexity of these types of Boolean information constraints. Okay, so one slide about applications. This only lists four applications, and I invite you to look at the paper for more.
The first I want to mention is a beautiful result by Chan from 2007, and he proved that information inequalities are computationally equivalent via one-to-one reductions to certain group theoretic inequalities.
And this connection is very direct. It's very one-to-one, and it is a beautiful correspondence between information inequalities and inequalities in group theory. What we proved very recently is that max information inequalities are computationally equivalent, via many-one reductions, to a problem that has
been studied intensively in database theory, namely the containment problem for conjunctive queries under bag semantics. And finally, I want to briefly mention that these Boolean information constraints have applications to secret sharing and to query evaluation over relational databases, and I invite you to check the paper.
So clearly Boolean information constraints have a wide range of applications, so what do we know about their complexity? This is going to be the topic of the next part of this talk, so let me describe the results that we have. All we know is that we can place some
types of Boolean information constraints in the arithmetic hierarchy. This one slide is a brief review of the arithmetic hierarchy, also known as the Kleene–Mostowski hierarchy. It consists of families of subsets of
tuples of natural numbers, defined as follows. At the base of the hierarchy are the families Sigma 0 and Pi 0, which consist of all the decidable sets. So every set in Sigma 0,
or equivalently in Pi 0, is a decidable set. Sigma n and Pi n are defined using n quantifier alternations: Sigma starts with an existential quantifier, and Pi starts with a universal quantifier, followed by a decidable set B. So this is the standard arithmetic hierarchy.
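As a reminder, the levels just described have the following standard form, where B ranges over decidable relations and the quantifier blocks alternate n times.

```latex
% The Kleene--Mostowski (arithmetic) hierarchy:
%   Sigma_0 = Pi_0 = the decidable sets.
% For n >= 1, with B decidable and n alternating quantifier blocks:
\Sigma_n:\quad x \in A \iff \exists y_1\,\forall y_2\,\exists y_3 \cdots Q y_n\; B(x, y_1, \dots, y_n),\\
\Pi_n:\quad\;\; x \in A \iff \forall y_1\,\exists y_2\,\forall y_3 \cdots \overline{Q} y_n\; B(x, y_1, \dots, y_n).
```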
It is known to be strict, and each level except the base one contains many undecidable sets, undecidable problems. Good. So with this definition in mind, let me describe the first of three results that I want to mention. The first result is
actually a folklore theorem that I view as a warm-up. It concerns the simplest information inequalities, which are linear information inequalities. To give you a sense of why they are hard, here is an example of a non-Shannon inequality taken from
the book by Yeung on information theory. Recall that each conditional mutual information, like this term here, is simply a linear expression of four entropic terms. So this
is just a linear inequality over entropic terms, asserting that this expression is greater than or equal to 0. It is valid, and it is not a consequence of Shannon inequalities. We do not know how to prove such inequalities in general, except to use ingenuity which is
specialized to each such inequality. However, it is relatively easy to refute them, by simply enumerating over all finite probability spaces with rational probabilities. And this brings us to the first theorem: checking the validity of such an information inequality is in Pi 1,
the first level of the arithmetic hierarchy. For the proof, we enumerate over all finite probability spaces with rational probabilities. If any of them invalidates the inequality, then we return false; otherwise we continue up to infinity with the next probability space.
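As a concrete illustration of this enumeration, here is a minimal Python sketch of the refutation half of the procedure. It is not code from the paper: the representation of an inequality as a map from subsets of variable indices to integer coefficients, the restriction to binary variables, and the bound on the denominators are all simplifying assumptions made for this illustration; a true semi-decision procedure would enumerate supports and denominators without bound.

```python
# Minimal sketch (not from the paper) of refuting  sum_S c[S] * H(X_S) >= 0
# by enumerating finite probability spaces with rational probabilities.
from fractions import Fraction
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy of a distribution given as a list of probabilities."""
    return -sum(float(p) * log2(float(p)) for p in dist if p > 0)

def marginal(joint, subset):
    """Marginal distribution of the variables in `subset`, from a joint
    distribution given as a dict mapping outcome tuples to probabilities."""
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in sorted(subset))
        marg[key] = marg.get(key, Fraction(0)) + p
    return list(marg.values())

def refute(coeffs, n, max_denominator=4):
    """Search for a joint distribution over n binary random variables, with all
    probabilities of the form k/d for d <= max_denominator, that violates
    sum_S coeffs[S] * H(X_S) >= 0.  Returns a counterexample or None."""
    outcomes = list(product([0, 1], repeat=n))
    for d in range(1, max_denominator + 1):
        # distribute d units of probability mass 1/d over the outcomes
        for split in product(range(d + 1), repeat=len(outcomes)):
            if sum(split) != d:
                continue
            joint = {o: Fraction(k, d) for o, k in zip(outcomes, split) if k > 0}
            value = sum(c * entropy(marginal(joint, S)) for S, c in coeffs.items())
            if value < -1e-9:  # strict violation (small tolerance for float error)
                return joint
    return None

# Example: submodularity  H(X0,X1) + H(X1,X2) - H(X0,X1,X2) - H(X1) >= 0  is valid,
# so the search should find no counterexample.
submodularity = {frozenset({0, 1}): 1, frozenset({1, 2}): 1,
                 frozenset({0, 1, 2}): -1, frozenset({1}): -1}
print(refute(submodularity, n=3))  # expected output: None
```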
Now, the key piece of the argument that enables this simple theorem is the following observation. If there exists a finite probability space on which this inequality fails, on which we have the strict inequality in the opposite direction,
then even if that finite probability space has irrational probabilities, there exists an entire neighborhood of those probability values such that, for any choice of probability values in that neighborhood, we still have a refutation of this inequality. And this means that we can also find a
probability space with rational probabilities. And this is something that we enumerate in this process over here. However, this simple argument fails in the second case, when we consider conditional information inequalities. Here is an example of a conditional
information inequality taken from a paper by Kaced and Romashchenko. It asserts that if these four conditional mutual informations are equal to zero, then this fifth conditional mutual information is equal to zero as well. Now, in our formalism we discussed inequalities
as opposed to equalities, but it is easy to convert each of these assertions into an inequality, because it is known that the mutual information is greater than or equal to zero, and therefore equality to zero is equivalent to minus the mutual information being greater than
or equal to zero. For such inequalities, the previous argument no longer holds. If we have a finite probability space, whose probabilities might be irrational, that forms a violation of this implication, then that probability space will have to make all
these mutual informations exactly equal to zero. And in general, it is no longer the case that this continues to hold in an entire neighborhood of those probability values. Nevertheless, we can prove that, with an additional condition on the preconditions of
such a conditional inequality, the problem is in Pi 3, the third level of the arithmetic hierarchy. And beyond the result itself, the proof technique is actually even more interesting, and this is what I'm going to show you next. The proof technique is a method that converts
conditional information inequalities into simple linear inequalities. Let me describe how it goes. Consider first the conditional information inequality in equation (1). It asserts that if some linear inequalities, the antecedents, hold, then so does the consequent. And consider condition (2),
which is obtained by asserting that the consequent is greater than some nonnegative linear combination of the antecedents. Here, the lambdas are any nonnegative coefficients. Now, it is easy to check that if condition (2) holds, if this linear inequality holds,
then the conditional inequality holds as well, simply because if the preconditions are greater than or equal to 0, then by condition (2), c · h is also greater than or equal to 0.
In a surprising result, however, Kaced and Romashchenko proved that the converse fails. It actually fails even on the example that I showed you on the previous slide, which is from their paper. What we proved, however, is that the converse does hold if we allow an error term on the left-hand side of this inequality.
More precisely, we considered this condition, which is an alternation of three quantifiers. It says: for any epsilon greater than 0, there exist nonnegative coefficients such that, for any almost entropic vector, the left-hand side of inequality (2), plus a small error term which is multiplied by epsilon,
is greater than this linear combination. It is easy to see that (3) continues to imply (1), and what we proved is that the converse also holds: if the conditional inequality holds, then this inequality with an error term holds as well.
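Written out, the three conditions being compared are the following. The precise error term used in the paper is not reproduced on this page, so e(h) below is only a placeholder for a fixed nonnegative entropic quantity scaled by epsilon; that placeholder is an assumption of this sketch.

```latex
% (1) The conditional information inequality:
(\mathbf{c}_1 \cdot \mathbf{h} \ge 0) \wedge \dots \wedge (\mathbf{c}_k \cdot \mathbf{h} \ge 0)
  \ \Longrightarrow\ \mathbf{c} \cdot \mathbf{h} \ge 0.
% (2) A sufficient linear relaxation, with multipliers lambda_i >= 0:
\mathbf{c} \cdot \mathbf{h} \ \ge\ \sum_{i=1}^{k} \lambda_i\, (\mathbf{c}_i \cdot \mathbf{h}).
% (3) The characterization with an error term e(h) >= 0 (placeholder for the paper's term),
%     quantified over almost entropic vectors:
\forall \varepsilon > 0\ \ \exists \lambda_1,\dots,\lambda_k \ge 0\ \ \forall \mathbf{h} \in \overline{\Gamma^*_n}:\quad
  \mathbf{c} \cdot \mathbf{h} + \varepsilon\, e(\mathbf{h}) \ \ge\ \sum_{i=1}^{k} \lambda_i\, (\mathbf{c}_i \cdot \mathbf{h}).
```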
So for this result, it is critical that here we take the set of almost entropic vectors. This needs to be a closed convex cone, which is the case for the set of almost entropic vectors,
and it does not apply to the set of entropic vectors; we do not know if the result holds there. Now, if in addition all the preconditions are tight, which means that the inequality in the opposite direction is also valid, then we can further increase the lambdas here, because these terms are always less than or equal to 0. So we can increase them to obtain natural numbers,
and this places the problem in Pi 3, because it's an alternation of for all, exists, and for all. Okay, the third problem I want to discuss is that of conditional independence statements between random variables. So this doesn't need the language of
entropy. These are simply statements of the form: two random variables, Y and Z, are conditionally independent given the values of a random variable X. Such assertions can be written down
as equalities over marginal probabilities, as I wrote here. We need to write down such an equality for every possible outcome x, y, z of these three random variables, capital X, capital Y, and capital Z. The conditional implication problem has been
studied intensively in the literature. It asks us to check whether, given some conditional independence statements between sets of random variables, another conditional independence statement holds as well. Now, a conditional independence holds if and only if the corresponding conditional mutual
information is equal to zero, so the conditional implication problem is really a special case of a conditional inequality. However, there are two differences from our previous discussion. First of all, here we are really interested in validity over entropic
vectors, not over almost entropic vectors. Moreover, here both the precondition and the consequent can be written using only multiplication and addition. Addition is needed to write marginal probabilities like p of x. Hence, there is no need for the logarithm.
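Concretely, the standard way to express such a conditional independence statement with only addition and multiplication is the following family of equalities, one for each outcome (x, y, z).

```latex
% Y and Z are conditionally independent given X  iff, for all outcomes x, y, z:
\Pr[X = x,\, Y = y,\, Z = z]\cdot \Pr[X = x]
  \ =\ \Pr[X = x,\, Y = y]\cdot \Pr[X = x,\, Z = z],
% where each marginal probability is itself a sum of entries of the joint distribution,
% so both sides are polynomials in those entries.
```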
Using this, and the fact that the theory of the reals with addition and multiplication is decidable, which is a classical result by Tarski, we can prove that the conditional implication problem is in Pi 1, the first level of the arithmetic hierarchy.
Okay, so this is all the time we have, so let me briefly mention a few words in conclusion. Here is a brief summary of our results. As you can see, there are many boxes that are still
open. And I want to mention three points in conclusion. First of all, the question of decidability is related to another question about the set of almost entropic vectors, namely whether this set is semialgebraic. So please see the paper for more discussion.
Another interesting question is whether max information inequalities are computationally equivalent to linear inequalities. Again, we have some evidence in the paper that they might be computationally equivalent. And finally, we want to mention that in order to extend our result based on Tarski's theorem to conditional inequalities,
one needs to know whether the theory of the reals with addition, multiplication, and exponentiation is decidable. And this is a major open problem in model theory today.