
Topos and Information


Formal Metadata

Title
Topos and Information
Series Title
Number of Parts
6
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Year
Language

Content Metadata

Subject Area
Genre
Abstract
As part of the IHES-Huawei partnership, this one-day workshop is organised by Huawei's Mathematical and Algorithmic Sciences Lab jointly with IHES, and aims at creating scientific exchanges around mathematical topics that are essential for the development and innovation of ICT. The topic of this year is the potential of the mathematics of Artificial Intelligence for breakthrough results in the ICT field.
Lecture/Conference
Transcript: English (automatically generated)
Thank you very much, and I thank the organizers for inviting me to present this work.
So, I call it "topos in information", or "topos of information", and the work I will present comes from two papers. I will mainly speak about the first one, written with Pierre Baudot, a biologist from neuroscience who is in Marseille now; it appeared in the journal Entropy, so it is in free access. The second paper, from last year, is with other people from neuroscience, in particular Alain Berthoz. And there is also another paper, not by me, on the arXiv, by a student who is writing his thesis with me now, Juan Pablo Vigneaux.
I will speak about some parts of his thesis here. In fact, the topoi appearing in these different works do not have exactly the same status. The first work was mostly done without knowledge of topos theory at first; at some moment I discovered that it was really an implementation of topos theory, and especially of its cohomological aspect, so I will present it from this starting point of view of categories and topoi. The first paper speaks exactly of the nature of entropy; so Entropy is not only the journal, it is also the subject.
With Pierre Baudot, we tried to understand whether we can define different kinds of forms, in the traditional sense of the word, of information. There is a kind of topology of information, and it appears that it has many facets; what I present today is only one of them, developed with other students. Perhaps this afternoon we will mention another one, more on machine learning and especially belief propagation, also understood through a cohomological setting. It is not the same cohomology; they are related, but not the same.
The second work is different, because there the question is whether there are new kinds of geometries for motion, in fact for voluntary movements, especially of humans, which we study with Alain Berthoz and other people, but in general for animals. The thesis is that we make many kinds of movements, and each one needs some kind of geometry to be organized. And this geometry is, in some sense, without points: the usual geometry has points, it lives in the physical world, but the geometry we use to prepare motion probably has no points. So there it was more conscious, of course, that a topos would be pertinent in this setting.
In his paper, Juan Pablo does several things, but in particular he makes really firm all the verifications that everything is natural from the point of view of topoi, and he also makes an original extension of the first theory I will mention. So his paper is probably much easier to read than ours. I will also present some other aspects.
So in some sense we came to topoi in a very concrete setting, through a category of sheaves over some objects. But you will see that this corresponds exactly to what Grothendieck, or Grothendieck and Verdier especially, told us: everything which has some kind of locality is relevant for topoi. Topoi were made starting from sheaves, and a sheaf is equivalent to a kind of comprehension of locality. So I start with what I call an information category, and I give a presentation which is in the thesis of Juan Pablo, and which was made to answer a question of Gromov that we discussed some months ago.
For five years or more, many people have tried to understand probability in a different way, to perhaps make more concrete the proposition of Kolmogorov that information comes first, and then probability. In particular, as I said, Taro and Gromov contributed to this point of view, and I arrived at it independently. Usually, when you take a course on probability, you start with a probability, and random variables come afterwards to measure it. But in this new point of view, more importance is really given to the variables, considered as measurements, as everything you can measure or estimate; probability then has, in some sense, a secondary status with respect to that.
For Gromov (and this was presented in this setting in the first paper), it is not a good idea to start with a set Ω representing all possible knowledge and then look at probabilities as functions on subsets of this Ω. It is better to start with what really happens: we measure something, or know something, or experiment something. And it is possible to do that in our setting. In some sense this was also implicitly done in part, because we treat not only classical information but also quantum information, and in quantum information you don't have an Ω, you have a Hilbert space. And even this Hilbert space is, in some sense, not really the fundamental thing. Both Ω and the Hilbert space are practical to organize things, but it is better to understand how they emerge from measurement. So you start with a category, which is a small category,
and it is at the opposite of a groupoid: when I have two objects, there is at most one arrow from one object to the other. There could be no arrow, but if there is one arrow, there is only one. If one, only one.
That is the first axiom. We also need a final object, which, in both settings I will present, represents certainty. In the classical setting this is the ordinary world, without probability; in the quantum setting, you will see, things are a bit different. And then there is the axiom which is the most important: the conditional product. I will comment on what it has to do with information. Conditional product means that if you have two arrows, from Z to X and from Z to Y, then there exists a product XY, which is like a categorical product; it is unique, and it makes the diagram commute. That is the main axiom of our information categories.
What it means is that each X represents some measurement, or something which makes a discrimination on the system. The system, I don't know what it is, but I am inspecting it; for example, what is on this table. So I observe something, and it gives me one piece of information, which lives in a certain space that I will describe. An arrow X → Y means that Y has less information, that Y is coarser. Seen in the other direction, Y is some kind of measurement and X is a refinement of it. For example, with a first variable I see this object; now I adapt my vision to try to read what is written on it. At first I see that there is something to read, but I do not see what is written; now I look closer, and I can fix what is written. This is what will give the natural topology: on this category we take, in fact, a very simple Grothendieck topology.
Of course, at this point we could have a more refined theory later. This is a topology in the sense of Grothendieck, and it is the simplest one: for an object, the covering sieves I take come from the subcategories of arrows into this object, which is really the notion of refinement; such a subcategory is equivalent to what is called a sieve. You could make more restrictive choices, but the largest choice is this one. For this topology, in this particular case, every presheaf on the site is a sheaf; so there is no technical difficulty there.
And what does the conditional-product axiom mean? It means that usually it is not possible to combine measurements made by two variables. This is very common, not only in quantum mechanics but also in classical observation. For example, your visual apparatus sends information to your brain through different kinds of cells, some for sensing motion, some for sensing high spatial frequency: you cannot at the same time read very carefully and extract good information about how something moves. You cannot. But when the two observations can both be refined by one observation, then there is a minimal joint, given by the joint operation. So this is the essential axiom, but it is not the only one.
To have a good theory, you look at examples which have more structure. The main example is this: take a set Ω, and look at a set of partitions of Ω. There is a very interesting algebra associated to any set, namely the partitions of this set. A partition is really comparable to an observation: you have an a priori Ω, and every observation is a way to cut this set into parts. If I take one partition, say Y, and another partition, say X, then you can see the joint partition XY, and the order, in this sense, is the usual one.
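The partition picture can be sketched in a few lines of code. This is a minimal illustration, not taken from the papers; the function names (`joint`, `refines`) and the example partitions are my own. The joint XY is the common refinement, and an arrow X → Y is the refinement relation.

```python
def joint(px, py):
    """Joint observable XY of two partitions of the same set Omega:
    their common refinement (non-empty pairwise intersections)."""
    return [bx & by for bx in px for by in py if bx & by]

def refines(px, py):
    """True if there is an arrow X -> Y, i.e. every block of the
    finer partition px sits inside a block of the coarser py."""
    return all(any(bx <= by for by in py) for bx in px)

# Omega = {0,...,5}; two observations that cut Omega differently
X = [{0, 1, 2}, {3, 4, 5}]
Y = [{0, 3}, {1, 4}, {2, 5}]
XY = joint(X, Y)  # here the common refinement is the partition into singletons

assert refines(XY, X) and refines(XY, Y)   # XY refines both factors
assert not refines(X, Y)                   # no arrow between X and Y themselves
```

In this example the one-block partition {Ω} plays the role of the final object: every partition refines it.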
It is exactly refinement by sub-covering: I say that a partition is finer than another if, taking the second partition, I partition it further. So here you have a particular category. I will suppose, to simplify, that Ω is finite, and I will also make some simplifications inside; in fact this is not essential, because there is, for example (not in this paper, but in a new paper with Juan Pablo) a study of Gaussian families, where you have continuous parameters, the sets are no longer finite, and you can extend everything to the non-finite situation.
But here the problem is, in some sense: do you always have such a representation by partitions? You may or may not, it depends, and here you must be a little more precise. So you make the axioms more precise, and you suppose that every variable corresponds to a finite set E_X. Up to now X was just a letter, an object of a category, but now I become more concrete: when I make a measurement, I get a point somewhere, in E_X. What is supposed, as one of the axioms, is that every arrow X → Y induces a surjection E_X → E_Y. The other axiom, axiom two, is that the set corresponding to the product XY, when it exists, is automatically embedded by the two projections in the product E_X × E_Y. All of this eliminates very pathological situations. It may look a bit too abstract here, and it is true: if you take only the first axioms, you have a lot of exotic counterexamples showing that they do not conform to what observation of a system really is. The axioms I add here are more natural. Yes — E_X, E_X is exactly the set of possible results of the observation X. So in the paper there is a possible confusion.
Juan Pablo works only with sets, so for him X and Y are sets. But I prefer to say that X, Y, Z are the observations, like observable quantities and random variables. For example, in the quantum case the finite set will be a collection of vector spaces: an orthogonal flag in the Hilbert space. I still write X, because this orthogonal flag comes from the diagonalization of a real observable, a self-adjoint operator, which I name X, and which is what physicists usually call an observable. So E_X will be more like the spectrum of the observable. In quantum mechanics, shouldn't it be countable? — Not necessarily: you can do finite-dimensional quantum mechanics; for example, in quantum information, which is applied, almost everything is finite. But you are right; here also, the extension I mentioned for Gaussian families can be done in the non-compact situation for quantum mechanics.
And third, which is more important: you have finite depth, and for the computations I will present afterwards this is very important. It is not possible to go on refining infinitely: if I take an object X and look at the chains of arrows coming into it, they have finite length. It could be only locally finite, but it cannot be infinitely precise. That is the finiteness hypothesis. And then this set of arrows is sufficient to describe the projective limit of the category S, in the sense that for every x in some set E_X, there is a point in the projective limit which corresponds to it.
That is, there is a maximal sequence of refinements ending at this point. And under these axioms you have a theorem: in some sense, the category S is equivalent to a subcategory S₁ of the category of partitions of this limit; as Ω, you can take this projective limit itself.
If and only if — well, not quite "if and only if": each time you have two objects, two observables, such that they cannot be joined, the natural embeddings in the limit may coincide. The limit is the subset of the product, over each object X, of the sets E_X, consisting of all compatible families, meaning all possible compatible experiments, and every E_X embeds in a quotient of this set; call the image Ẽ_X. You must have different embeddings for different objects. The fact that there is no joint matters here, because this category has the property of being idempotent: the product of X with itself always exists, while XY may not, and it could be that two observables give the same partition of the limit.
What the theorem says, in some sense, is that no observation permits you to really say that X is equal to Y in that case; the relation Ẽ_X = Ẽ_Y is a kind of equivalence relation in the limit: the two observables give the same information. So in some sense you lose nothing by imposing equality in a quotient of this category. And in the quotient, the theorem says, if you have these axioms, you are in the case of partitions. So it is a representability theorem for this kind of information category.
So this information category is the basis. But now, where does probability, or something else, come in? There are probability models: you want to model your knowledge with probabilities, so you associate with every object X of the category a set Q_X embedded in what I call Δ_X, the simplex, which is always a finite-dimensional simplex in the finite setting. This is simply the simplex on the set E_X: the families (P_x) with Σ_x P_x = 1 and every P_x zero or positive, the usual notion of probability. Here you could take almost any subset Q_X, but the best is to take a simplicial subcomplex. We work mainly in this simplicial setting, but it could be more general, more non-linear.
Note, as a remark, that the joint axiom, if I look at this in the set of partitions, means exactly that you have a simplicial structure: the existence of XY says that if you have two faces inside larger faces, then the joint of these two faces is also in the set. So you have a simplicial subcomplex, and the Q_X form a covariant functor from S to sets; that is the axiom. You need a covariant functor: if I have an arrow X → Y, I have a natural map Q_X → Q_Y, which is the push-forward of the probability, named in probability theory the marginal. Every time you have a less informative variable, you have a marginalization of the probability; you ask exactly that probability is such a functor. So this is the first important operation on probabilities: marginalization. And you see that this setting, considering variables as a kind of localization and the topology as refinement, is perfectly compatible with marginals. But there is a second operation, which is central for information: conditioning. Where does conditioning come from?
I will present in a moment a more natural way to do conditioning, but this way is easier in the mathematical theory, so I start with it. Conditioning will act on functions: I take F_X, the measurable functions, real-valued for example, on this subset Q_X of probabilities. These functions of course form a contravariant functor, but you have more structure here: F is naturally a module in the topos, over a ring object in the sense of topoi. I will call this ring E_X, and it comes exactly from the conditional-product structure: if I take X and look at all the variables which refine X, I get a subcategory in which all products are defined, and so I have a ring. So this site is not only a site, it is a ringed site: it has a natural sheaf of rings. In the case of partitions this computation says that locally you can multiply, that is, refine, the partitions. And that F is a module is a lemma.
It is, when you look at the marginal setting, an expression of Fubini's theorem, which concerns two variables X, Y — which, by abuse of language, live somewhere over a common refinement. But I forgot to define the action first, and this is important. I have a function f on probabilities, sending any probability P in Q_X to a real number, and I define the conditioning in the following way. For a variable Y over X, I define

(Y.f)(P) = Σ_{y ∈ E_Y} P(Y = y) · f(P | Y = y),

a sum over all possible values y of the observable Y: I take the probability that Y = y, and I evaluate the function f at the probability conditioned by that value.
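As a concrete sketch of the two operations, marginalization (the push-forward) and the conditioning action just defined, here is a small implementation. It is my own illustration, with distributions as dictionaries on a finite Ω and variables as functions Ω → E_Y; the names `marginal`, `condition`, `act` are mine.

```python
def marginal(P, Y):
    """Push-forward of P along the variable Y: the law of Y,
    i.e. P(Y = y) = sum of P(w) over w with Y(w) = y."""
    out = {}
    for w, p in P.items():
        out[Y(w)] = out.get(Y(w), 0.0) + p
    return out

def condition(P, Y, y):
    """P conditioned by the event {Y = y}."""
    mass = sum(p for w, p in P.items() if Y(w) == y)
    return {w: p / mass for w, p in P.items() if Y(w) == y}

def act(Y, f, P):
    """The conditioning action (Y.f)(P) = sum_y P(Y=y) * f(P | Y=y)."""
    return sum(py * f(condition(P, Y, y))
               for y, py in marginal(P, Y).items() if py > 0)

# uniform law on Omega = {0,1,2,3}; Y observes the parity
P = {w: 0.25 for w in range(4)}
Y = lambda w: w % 2
assert abs(marginal(P, Y)[0] - 0.5) < 1e-12
# acting on the constant function 1 just averages: (Y.1)(P) = 1
assert abs(act(Y, lambda Q: 1.0, P) - 1.0) < 1e-12
```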
This formula, in fact, was used by Shannon in his study of information, and it is the main formula we will consider. It is a kind of integral of the function f — which is very non-linear, as you will see in the examples — over the fibers, taking the mean with this probability. Starting from that, you will see, this gives the Shannon theory. But Juan Pablo Vigneaux observed that if you put an exponent α on the weight, for any strictly positive α different from one, you get the whole Tsallis theory, an alternative to the usual limit theory of Shannon. Tsallis theory is interpreted as a non-extensive statistical-equilibrium theory.
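The two families of entropies just mentioned are easy to write down. This is a quick sketch in my own notation, with a distribution as a plain list of weights:

```python
import math

def shannon(P):
    """S_1(P) = -sum p log p, the Shannon entropy (natural log)."""
    return -sum(p * math.log(p) for p in P if p > 0)

def tsallis(P, alpha):
    """S_alpha(P) = (sum p^alpha - 1) / (1 - alpha), for alpha != 1."""
    return (sum(p ** alpha for p in P) - 1) / (1 - alpha)

uniform = [0.5, 0.5]
assert abs(shannon(uniform) - math.log(2)) < 1e-12
assert abs(tsallis(uniform, 2.0) - 0.5) < 1e-12
# as alpha -> 1, the Tsallis entropy recovers the Shannon entropy
assert abs(tsallis(uniform, 1.000001) - shannon(uniform)) < 1e-4
```

The last assertion illustrates in what sense the Shannon case is the α = 1 limit of the Tsallis family.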
It also has many applications, for example in biology now. So you have this other possibility for the action, which I use, and the lemma says that in both cases you have the module structure: if you apply Y and then Z to f, you get the same thing as applying the joint variable YZ directly to f. This is not very difficult to verify, but it is not a total triviality: you really use something about probability to do it. And this is the structure which takes conditioning into account.
So now I have a module in the topos, and I come to the cohomology. It was one of the observations made even before the invention of topoi that in such categories you have what is called enough injectives — I will not explain all of that; if I have time, perhaps I will come back to this point — so that for every left-exact functor you have a natural theory of derived functors, which allows you to form topological invariants of this module. So I define H^n as follows: I take the trivial module R, the module which at every point X is just the real numbers, and I take the cohomology in this topos from this trivial module to the module F_Q, which represents the probabilities. This is copied from group theory: in a group, what is named the cohomology of the group is made exactly like that. You have a canonical trivial module, you take Hom from R to your module, and you take the derived functor: this is exactly the Ext functor. Here the cohomology is Ext^n, the derived functor of Hom. This is the definition we take. And now, how is this cohomology made concretely? Is that a problem, or perhaps not? What are these kinds of things?
It is almost impossible to use the definition with injectives that Grothendieck gave, but by chance, in this case, there exists a canonical projective resolution, which is a generalization of the bar complex, and which allows us to compute this cohomology. So what are the elements here? They are computed by what is called cocycles of dimension n: an element f of the cochains is given by a family of functions of variables and of a probability. Concretely, what you have is a family of functions f_{X_1,…,X_n}(P), indexed by n-tuples of observables X_1, …, X_n, all belonging to the category, where P is a priori a probability in the corresponding Q. And these cochains must satisfy the cocycle condition; the cohomology is a quotient. The cocycles in C^n are given by the kernel of a certain operator going from C^n to C^{n+1}. This is very standard; you can go to Mac Lane, for example, to see all of that. It is very similar to what happens in group cohomology, or even in the cohomology of algebras.
It is a kind of generalization of those theories. And what is important is where the topos comes in. It comes in a formula that I had put in before I understood that it was a topos; I put it in by hand, because if you do not, you find totally absurd things. Even using the good coboundary operator, you do not find anything which corresponds to information. To find something corresponding to information you must have this locality property: if you have an arrow X → Y, and if all the X_i live over Y — so you are less precise, in the same sense — then the value of f_{X_1,…,X_n} at P computed over X equals its value computed over Y, after taking the marginal of P.
This is a locality property. You look at functions of probabilities indexed by the variables, and you want to interpret them as measuring some information; the fundamental property is that these functions of the probability are in fact localized on the joint of the variables: they do not really depend on the ambient X, they depend only on X_1, …, X_n. And that is what computing the cohomology in the topos setting gives you. If you compute the usual cohomology, without the topos, looking only at functors on sets — cohomology is also a functor — you do not have this property.
So now — can your computation of cohomology with this projective resolution be understood as some type of Čech cohomology, or not? — It is not really like Čech cohomology. Čech cohomology is more for objects; here, the module blocks the functoriality, so you always work with something defined up to isomorphism or not, so far. So I do not believe it can be defined by Čech methods, but probably there are other kinds. Now I will tell you the theorem, which is
surprisingly difficult, because when you look at the equations you get for a cocycle, you get a set of functional equations — probability depends on continuous parameters: even with only two states, the probabilities are given by points in an interval (or at least we need three states), so something continuous happens. These functional equations are very reminiscent of those of the polylogarithms, in general the multiple polylogarithms. They are not the same; they are in fact related, but we do not know, for example, if this holds for every n; for the moment we have the computation only for n = 1.
Let me give an example of cocycles — I will not write the operator δ, but it is not difficult to imagine how it is done. Examples of cocycles are given by the entropies S_α. For α = 1 this is the Shannon entropy: S_1, as a function of a variable X and a probability P, will be a cocycle in Z^1. So we know cocycles in every dimension, but in general there are also coboundaries, because the cohomology — I forgot to say — H^n is this kernel divided by the image of the same operator coming from the previous degree; that is the definition of the cohomology. And one must check that this cocycle is not a coboundary. The Shannon entropy is given by the usual formula

S_1(X; P) = − Σ_{x ∈ E_X} P(x) log P(x),

where you can take the log in base 2, but you can take any basis. And the Tsallis one, for α ≠ 1, is

S_α(X; P) = (1 / (1 − α)) · (Σ_{x ∈ E_X} P(x)^α − 1),

where 1/(1−α) is a normalization; this is the Tsallis entropy of the probability. And what they satisfy is the information equation

S_α(XY; P) = S_α(X; P) + (X . S_α(Y))(P),

where the action of X is the conditioning action (with the exponent α in the Tsallis case). This equation was given first by Shannon, and in fact it is known by the specialists of Tsallis theory that it is also true for the Tsallis entropy. It tells you how far you depart from the independence of these two variables.
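The information equation can be checked numerically. This is my own sketch of the verification, on a random joint law over a 2 × 3 product, for both the Shannon case (α = 1) and the Tsallis case with the α-twisted weights:

```python
import math
import random

def H(P):                      # Shannon entropy, natural log
    return -sum(p * math.log(p) for p in P if p > 0)

def S(P, a):                   # Tsallis entropy, a != 1
    return (sum(p ** a for p in P) - 1) / (1 - a)

random.seed(0)
P = [[random.random() for _ in range(3)] for _ in range(2)]
Z = sum(map(sum, P))
P = [[p / Z for p in row] for row in P]                 # joint law of (X, Y)

px = [sum(row) for row in P]                            # marginal of X
cond = [[p / s for p in row] for row, s in zip(P, px)]  # conditioned laws P(.|X=x)
joint = [p for row in P for p in row]                   # law of the joint XY

# Shannon: S_1(XY) = S_1(X) + sum_x P(x) S_1(Y | X=x)
assert abs(H(joint) - (H(px) + sum(w * H(c) for w, c in zip(px, cond)))) < 1e-9

# Tsallis: S_a(XY) = S_a(X) + sum_x P(x)^a S_a(Y | X=x)
a = 0.7
assert abs(S(joint, a) - (S(px, a)
           + sum(w ** a * S(c, a) for w, c in zip(px, cond)))) < 1e-9
```

Since the coboundary of a 1-cochain is exactly the defect of this equation, the assertions say that S_α is a 1-cocycle on this example.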
And here I can say that the coboundary operator, at the first level, is just the difference of the terms of that equation: for a 1-cochain f,

(δf)(X, Y; P) = (X.f)(Y; P) − f(XY; P) + f(X; P),

and you have generalizations of that in higher degrees. And the theorem is that, if I look at the complex for such a structure (S, Q), the dimension of H^1 is equal to the number of connected components, in a suitable sense. For that I need a hypothesis: each time I have two variables X and Y which have a joint, then, looking at X and Y, I have sufficiently many probabilities. In fact the space could even be infinite-dimensional; but every time the variables are too far apart, you may not have probabilities able to couple them. If you look at the joint probabilities P(X, Y), it could be that in the space Q_XY you do not have sufficiently many probabilities to generate at least a two-dimensional family of probabilities, and that is a necessary condition for having finite cohomology.
But what is important is that the generators are the S_α: in fact, the only interesting class is the class of the entropy, and that is, in some sense, the interest of this approach, compared with every other characterization of entropy. Theorem 1 of Shannon was the fact that this formula characterizes the entropy, but to prove it, or even to state it, you have to look at collections of an infinite number of variables. The advantage here is that, doing this topology, you can look at only one category, and you show that the only invariant is the entropy, without comparing different categories; there is this hypothesis.
And that includes α = 1? — Including α = 1: for all α strictly positive. In fact, in our first study we proved it only for α = 1. For α = 1 it is not too difficult to reduce to an equation for only one function, which is known as the fundamental equation of information theory, studied by Tverberg, and it was known that every measurable solution of it is the entropy. For general α it was not known, and in fact — this is in the paper of Juan Pablo — we come to a very amazing functional equation: a functional equation saying that you have equality between two of the functions f somewhere. Because all these functions f could a priori be different, the problem is to identify them, as in the Abel equation, for example, for the polylogarithms. And solving this equation involves a very surprising intervention of SL(2, Z). I have no time to show it precisely; the proof is different for α = 1 and for α ≠ 1 because of that.
So in that sense you have here a theorem which gives hope that the cohomology of all these kinds of objects could be interpreted as information quantities. The problem now is how to compute H^2, H^3 and so on, and the equations are really difficult to solve. It could be that only the one-dimensional cohomology is non-zero; we do not know how to prove that. So now —
How much time do I have? Ten minutes? So, I will only make a quick remark on the extension of that. First, in the quantum situation: there, the entropy which is known is the von Neumann entropy. In the quantum situation you can start also with these finite sets, except that now they are finite sets of subspaces. The advantage of the definition I gave, which doesn't use Ω, is that I can apply it in this situation. But what are the probabilities? The probabilities are now: if I take any X, it corresponds, I would say, to a flag of orthogonal subspaces in that space, and the probabilities here are given by Hermitian positive matrices.

Q: What do you mean by orthogonal subspaces?

I mean that, yes, you are right, it's just a flag. It is only because you have this restriction on the joint that you can take the product only when you have orthogonality; but this is for the product, not for the individual
set. And the probabilities here are given by this kind of formula: I take X, which is the name of the variable, and I define the probability by an ordinary probability p_Y on the components, and here I put a Hermitian matrix ρ_Y, a positive Hermitian operator on V_Y. That is the kind of object we have. And the von Neumann entropy is defined via the trace, S(ρ) = −Tr(ρ log ρ), of the operator ρ. This is a global quantity.
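For a density matrix ρ (a positive Hermitian operator of trace 1), the von Neumann entropy S(ρ) = −Tr(ρ log ρ) is computed from the eigenvalues of ρ, and on diagonal (classical) states it reduces to the Shannon entropy. A minimal sketch for the 2×2 case (my own illustration, not from the talk):

```python
import math

def vn_entropy_2x2(a, b, c):
    """von Neumann entropy S(rho) = -Tr(rho log rho) for the 2x2 density
    matrix rho = [[a, c], [conj(c), b]], assuming a + b = 1 (trace one),
    computed via its eigenvalues 1/2 +- d."""
    d = math.sqrt((a - b) ** 2 / 4.0 + abs(c) ** 2)
    lam = [0.5 + d, 0.5 - d]
    return -sum(x * math.log(x) for x in lam if x > 1e-15)

# Diagonal case: reduces to the Shannon entropy of the law (a, 1 - a).
a = 0.3
shannon = -(a * math.log(a) + (1 - a) * math.log(1 - a))
assert abs(vn_entropy_2x2(a, 1 - a, 0.0) - shannon) < 1e-9

# A pure state (rank-one projector) has zero entropy.
assert vn_entropy_2x2(0.5, 0.5, 0.5) < 1e-12
```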
But we can also localize it, and what you get if you localize it is the relation which is here: I take this for each x, and I have the proposition that the coboundary of the localized functional is the usual entropy, with respect to the ordinary probability that you see here. Every time, you have an ordinary probability associated to the quantum probability, and if you look at this entropy, it is still a cocycle in the cohomology, but in fact it is a coboundary. So it is reminiscent of the following: you have a universal class, which is given by the entropy, and by going to the quantum case you kill this class, and you kill it universally. So the interpretation of von Neumann entropy and Shannon entropy, or Tsallis entropy, is not the same from this point of view of cohomology; that is why I wanted to insist on it. You have also the Kullback-Leibler divergence, which is a function that can also be generalized; it is given by the expression Σᵢ pᵢ log(pᵢ/qᵢ) between two probabilities p and q. It can also be localized, and you can generalize the cohomology to a module which admits several probabilities. When you have several probabilities, you have a second graduation, and every mean is taken with the first probability in the formula for the action. So it is also a cocycle, and it is also non-zero in cohomology; the cohomology is not the same, it is the one for several probabilities.
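The Kullback-Leibler divergence D(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ) just written down can be checked directly; against the uniform distribution it measures exactly the entropy deficit log n − H(p). A minimal sketch (my own illustration):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) = sum_i p_i log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [1 / 3, 1 / 3, 1 / 3]
assert kl(p, p) == 0.0   # vanishes when the two distributions agree
assert kl(p, q) > 0.0    # non-negativity (Gibbs' inequality)

# D(p || uniform) = log(n) - H(p): the divergence is the entropy deficit.
H = -sum(x * math.log(x) for x in p)
assert abs(kl(p, q) - (math.log(3) - H)) < 1e-12
```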
You can also localize it; it is also in H¹, and it exists also as an extension for Tsallis. Perhaps I make one remark before I come to the specific point, because there is another way, which is developed in the paper, to put in the conditioning, and the more natural setting underlying all of that is trees, trees of observations. I make just a remark, because I have no time to develop this point. It is a topos, but it is a topos where you replace the ring by what is called a monad in a category. Because what happens if you take an observation? It gives you some value, the measurement. But what you do now is, depending on the value of this measurement, you put another variable, which gives you another variable, and so on. Really, when you observe something, you are doing this kind of tree of observations. And what I have done now for ordinary variables, you can do for trees of variables. And now the probabilities come, in some sense, more naturally in this setting than in the other. They don't come through a functor of probability laws; they act themselves, so you get what I call a monad, which is a certain functor, and the probabilities give a functor which acts on it. Because here, you see, if I take a probability,
I can consider directly the conditioning given by the first measurement, without averaging. And if you look at the first stage, you get a cohomology and so on, exactly the same. You look now at H⁰, and you have the functions in C⁰. In fact, what I have done here is only the tip of an iceberg: the measurable functions I get are only the first part of a much bigger cohomology. And the relation which is satisfied by the entropy is very interesting; it was known to Faddeev. It says that in this relation, in the conditioning term, you can put a family of variables. So it is more general than the known relation, and this more general relation corresponds to the fact that the entropy is also a cocycle in this setting, where you replace the algebra by the monad, okay. So I just finish with movement; these are the ideas. In that sense, you will have a structure of observation, which is made by this category, and which comes to this point
1, which is the final point. When you prepare a movement, here, over this point, you have something like the category of sets, and you have the usual geometry. And the idea is that, for example, G is the group of displacements, and the geometry is given by the subgroups H_x, which are the rotations about a point. When you do that, you get the ordinary Euclidean geometry. You can do the same for the Galileo group, for everything, so that you have the ordinary mechanics; these are the movements which are executed. And now, with this idea, here you can have all the parts of the body, for example, in the inner preparation. And this inner preparation is delocalized, or localized but in many parts. It is very simple: I take only one arrow, A here, and here I will put, for example, the movement of this object, which is usual geometry. But now I plan this movement, and when I
do that, I have my body and my arm doing it. And here you have the conjecture, and there are several candidates: here you can define what is called the posture, to get this object. It will be given in the same setting, by a kind of group, or pseudo-group, with a set of subgroups, defining the space of postures, something like this one. The space of postures will live in a larger space than the usual one, because at least I have to displace the object; so the group is larger, in this sense. So you have this, the simplest category, S, which has only one variable. And here you have this diagram of objects. If you look, for example, at time: it is observed now, in the preparation of movement, that you don't have a unique time. That is, you have relations between the different times of the positions, but they also have some independence. So the possibility of enriching the ordinary geometry by looking at this topos of geometries means that, in some sense, the time of preparation could become multi-dimensional, for example. That is one of the advantages. And just as a conclusion: you spoke about the classifier of subobjects. In the case of this category, it is the first non-trivial case, where you have three possibilities, that is, false, true, or you don't know. And the fact that you don't know corresponds exactly to what is called the redundancy: the fact that you have some elements of this group G_A, in the subgroup H, which say that some postures are equivalent only with respect to the effected movement, but are not equivalent as postures. I just wanted to say that, because this is the first intervention, for me, of this classifying object; here, otherwise, I don't know the classifying object.
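The three truth values can be made concrete: in a topos of presheaves on the category with a single non-identity arrow f: a → b, the subobject classifier at b consists of the three sieves on b (empty, {f}, and maximal), which is the "false / don't know / true" trichotomy mentioned here. A small enumeration (my own sketch, with this tiny category hard-coded):

```python
from itertools import chain, combinations

# The category with two objects a, b and one non-identity arrow f: a -> b.
arrows = {"id_a": ("a", "a"), "id_b": ("b", "b"), "f": ("a", "b")}

def compose(g, h):
    """Composite g o h when defined (codomain of h = domain of g)."""
    if arrows[h][1] != arrows[g][0]:
        return None
    if g.startswith("id_"):
        return h
    if h.startswith("id_"):
        return g
    return None  # no other composites exist in this category

def sieves_on(obj):
    """A sieve on obj: a set of arrows into obj closed under precomposition."""
    into = [k for k, (_, cod) in arrows.items() if cod == obj]
    subsets = chain.from_iterable(
        combinations(into, r) for r in range(len(into) + 1))
    result = []
    for subset in subsets:
        s = set(subset)
        closed = all(
            compose(g, h) is None or compose(g, h) in s
            for g in s for h in arrows)
        if closed:
            result.append(s)
    return result

# Omega(b) has three elements: the empty sieve (false), {f} ("don't know
# yet"), and the maximal sieve (true).  Omega(a) has only two.
assert len(sieves_on("b")) == 3
assert len(sieves_on("a")) == 2
```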
Thank you very much.

Q: When you cited Faddeev, which formula was it?

Sorry, you mean... yes, I showed you this formula. This one: suppose you have this situation, one variable X and a set of variables, and you see the variable X giving a result x_i, and here I put a variable Y_i after that; so, different variables. When I look at the entropy corresponding to the composed variable, the variable which comes as a result: because it is a partition, you partition the set in subsets, and then I partition each subset, but using different partitions. So I get a new partition, which I call μ(X; Y_1, ..., Y_m), if there are m of them. And I look at its entropy for a certain measure: it is the entropy of the variable X for the probability p, plus the sum over the x_i of p(x_i) times the entropy of the variable Y_i for the conditioning X = x_i. That is the same relation as the Shannon relation, which says that in general you don't have additivity, only when the variables are independent. And here, instead of only one conditioned variable, I have m different variables, but still the relation is true.
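The generalized Faddeev identity described here, with a different conditioned variable Yᵢ in each cell of the partition, reads H(μ(X; Y₁,…,Y_m); p) = H(X; p) + Σᵢ p(xᵢ) H(Yᵢ; p(·|X = xᵢ)). A direct numerical check (my own sketch, not from the talk):

```python
import math

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

# X takes m values with law p; given X = x_i, a variable Y_i with law q[i].
# The Y_i may be genuinely different variables, with different supports.
p = [0.5, 0.3, 0.2]
q = [[0.5, 0.5], [0.9, 0.1], [0.2, 0.3, 0.5]]

# Law of the composed (refined) partition: weights p_i * q_i[j].
composed = [p[i] * pj for i in range(len(p)) for pj in q[i]]

lhs = entropy(composed)
rhs = entropy(p) + sum(p[i] * entropy(q[i]) for i in range(len(p)))
assert abs(lhs - rhs) < 1e-12
```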
Q: You mentioned that there was some possible connection with polylogarithms, or some generalization?

It is known that the entropy is what is called the infinitesimal dilogarithm: if you start from the dilogarithm and you take, in some clever way, a derivative of it, you get the entropy. And the functional equation can be derived from the functional equation of the dilogarithm. And here, first, there is, especially in the quantum setting, a relation between this function and a function which one sees, for example, in Goncharov's work. So the conjecture is that you have something related to the infinitesimal polylogarithms for the higher-dimensional cases. But we have had discussions with Gangl, who is a specialist of that, and he made tests showing, because we looked at what follows for H² and H³, that it cannot be the polylogarithm of higher weight: say you have an integer here, and this integer cannot be different from one. But still, there is a possibility that it is related to the multiple polylogarithm Li_{1,1,1,1}. We have not explained that: the equations look alike, but they are not the same; it could be that they are deduced from this one, with all the indices equal to 1. It is not so easy, and at this moment I don't know; the students were scared by this question.
And so, I don't know. So maybe another question.

Q: Your presentation, which was very good, was able to show that toposes are another way to revisit, basically, the way information theory was built in terms of Shannon's formula; in the way that Olivia was showing, it provides you another point of view, from another discipline. But what more can it bring, compared to Shannon, based on this new point of view that you are bringing? I take the example of your paper with Anna Bernoulli, which is on this problem of tracking.

Yes, that's different; in fact, these two problems are, in some sense, dual.

Q: And for this example of tracking: what more can it bring in terms of new tracking tools, compared to the classical techniques that we have?

In fact, it is
an unsolved problem to define this posture space. People in robotics, and we are also working on that with a roboticist, Jean-Paul Laumond, use spaces of parameters, of course, and look at the relation between the control of all the parameters and the effected motion. But they are still in a very high-dimensional space. And this proposition is to understand a manner of lowering the dimension, in order to have what is called a geometry in this preparation. It is an old problem; perhaps the first to mention it and begin to work on it were the Russians, starting with Bernstein, and Gelfand was involved in this kind of problem: to try to have, in some sense, a universal or finite-dimensional understanding of what we essentially control to make a motion. There is a hierarchical structure, and what is controlled? It is still an open problem.
And the other is Feldman, a neurophysiologist, who makes the hypothesis that in some sense you control the thresholds of the motoneurons. But this has to work in a very high-dimensional space, and what is the structure of this space? Probably there is a relation between the two situations, because here it is natural to put probabilities on all these parameters, and to say that what is controlled is a process, a Markov process, or a conditional probability. And it could be done through what is called free energy; it is a localization of free energy. It is not only the entropy in these variables, but also something which uses an energy measurement, which could be a priori if you do usual machine learning. But here, in this case, it could be, and it must be, also physical quantities. So the conjecture is that this space is not only, as usual in geometry, a description of space: it takes into account the energy relations. So it certainly mixes information quantities and energetic quantities. To understand how it really works: these groups act on something, which is the internal states, and these internal states are the support of a dynamics. And this dynamics uses information quantities and certainly also physical quantities, not only information.

Q: So is there a cohomology theory whose invariants would be free
energy, and other things?

This is a very good question. In fact, another student, Olivier Peltre, has developed a cohomology theory for understanding belief propagation. And belief propagation is equivalent to the management of this free energy; in a sense, the free energy appears as a cocycle in this theory. But I don't know the relation between that theory and the one I presented here. They could be related, but I don't know how. Still, there is a kind of cohomology for this quantity.
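To make the free-energy functional concrete: the variational free energy F(q) = ⟨E⟩_q − H(q) is minimized exactly by the Gibbs distribution, with minimum −log Z, and the gap at any other q is the Kullback–Leibler divergence D(q‖q*); belief propagation can be seen as managing local (Bethe-type) versions of such a functional. A minimal sketch (my own illustration, not from the talk):

```python
import math
import random

# Energies of three states; F(q) = <E>_q - H(q) over distributions q.
E = [1.0, 2.0, 0.5]

def free_energy(q):
    """Variational free energy: mean energy minus Shannon entropy."""
    return sum(qi * Ei for qi, Ei in zip(q, E)) + \
           sum(qi * math.log(qi) for qi in q if qi > 0)

# The Gibbs distribution q*_i = exp(-E_i)/Z attains the minimum -log Z.
Z = sum(math.exp(-e) for e in E)
gibbs = [math.exp(-e) / Z for e in E]
assert abs(free_energy(gibbs) + math.log(Z)) < 1e-12

# Any other distribution has larger free energy; the gap is D(q || gibbs).
random.seed(0)
for _ in range(5):
    w = [random.random() for _ in E]
    q = [x / sum(w) for x in w]
    assert free_energy(q) >= free_energy(gibbs) - 1e-12
```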
Q: The structure of the topos in this case is not...?

Of course: since it is a natural cohomology, there is a topos, but nothing like what I presented now.