Cointeracting Bialgebras
Formal Metadata
Number of Parts: 28
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/51270
Transcript: English (auto-generated)
00:15
First of all, happy birthday to Dirk. So I'm going to start with a monoid or a group, which is pro-algebraic, so it does
00:27
not really matter what it means exactly, but it just has a good ring or algebra of polynomial functions. So this algebra just reflects the topology of the group, and you have more structure: you have
00:47
a coproduct, delta, which you can see here, which just reflects the composition of the group. So this is what is called a bialgebra if you have a monoid, and if it is a group, it is a Hopf algebra. And you can recover, it's quite well known, the group or the monoid from the bialgebra
01:05
just by taking its characters; the characters are just algebra morphisms from your algebra to the field, which is for me the complex field. And there is a product on characters, which is the convolution, which is in some sense
01:21
the dual of the coproduct. So I'm not satisfied with only one group or one monoid, I need two groups, and in fact I want to do some semidirect product or something like this. So for this I take two groups or two monoids with their bialgebras, and I suppose
01:42
that the second one, G′, acts on the first one by monoid endomorphisms, which is exactly what I need if I want to do a semidirect product. So what does this mean? I have first a monoid G, so it has a Hopf algebra or a bialgebra, which I call A.
02:00
I've got a second monoid G′, which also has a bialgebra, which is B. G′ acts on G, so B coacts on A: there is a coaction, which is an algebra map from A to A ⊗ B. This just reflects the action of G′ on G, and if I want to translate the fact
02:24
that the action of G′ is by monoid endomorphisms, this means that A is a bialgebra in the category of B-comodules. So what does this mean exactly, this fourth point? It means these axioms. The first one just means that this is a coaction.
02:43
This is the axiom of a right coaction. The second one means that the product of A is a morphism in the category of comodules over B, and it also means that the coaction is an algebra morphism, which is the same thing.
03:01
The fourth one means that the counit of A is a comodule morphism from A to the base field. And the last one, the most subtle one, means that the coproduct of A, the big delta, is a comodule morphism from A to A ⊗ A, which is also a B-comodule.
03:25
So this means that if you do first the coaction and then the coproduct on the left component, this is equal to first the coproduct, then the coaction on both sides of the coproduct, and then regrouping together the two terms which belong to B.
03:44
So this is this map m_{1,3,24}, which takes four elements and regroups the second and the fourth ones at the end. So, for example, let's just take a very simple example. You can consider the group ℂ with the addition, which is an abelian group,
04:06
and on it the group ℂ* with the multiplication naturally acts by group automorphisms. So this is exactly what you had before. So the first group G is (ℂ, +),
04:20
so its algebra, its Hopf algebra, is the algebra of polynomial functions on ℂ, which is just the polynomial ring with one indeterminate, with a coproduct which is additive: the coproduct of x is x ⊗ 1 + 1 ⊗ x, so this is a Hopf algebra. The second group G′ is ℂ* with the multiplication, so the algebra of polynomial functions on
04:43
ℂ* is the Laurent polynomial algebra ℂ[x, x⁻¹]. I have to add an inverse of x because there is no zero in ℂ*. We have another coproduct which is multiplicative, which sends x to x ⊗ x; so for the first coproduct, big delta, x is a primitive element,
05:05
for the second one it's a group-like element. So this is also a Hopf algebra because there is an inverse, and B coacts on A because ℂ* acts on ℂ, with this coaction ρ, which also sends x to x ⊗ x. In fact, more or less, the coaction and the second coproduct
05:25
are the same. Now, I don't like this x⁻¹ very much; I would prefer to have ℂ[x], so I just forget it and I obtain, not a Hopf algebra, but a bialgebra, which is B, which is the same algebra as A with another coproduct. It's no longer a Hopf algebra
05:47
because x no longer has an inverse, it's just a bialgebra, but it coacts in the same way on A with the same coaction, and for this example the coaction and the second coproduct are the same. So this is the frame I will use now. What I have is an algebra A with a single product
06:08
and two coproducts, big delta and small delta, such that (A, m, Δ) is a bialgebra, (A, m, δ) is a bialgebra, and the second one coacts on the first one
06:22
with the coaction ρ, which is also the second coproduct. So what I have is an object with one product and two coproducts. For both coproducts it's a bialgebra, and moreover there is a compatibility between the two coproducts, which is something like this. Just replace ρ by δ and you obtain
06:47
the compatibility between big delta and small delta. So if you want to be complete, these should be called bialgebras in the category of comodules over another bialgebra, which is
07:00
quite long, so I now just call them double bialgebras or cointeracting bialgebras, something like this. And the first example we have is just the polynomial ring ℂ[x] with its two coproducts, big delta, which is additive, and small delta, which is multiplicative.
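To fix notation, here is a minimal sketch of this first example written out, with the cointeraction axiom in the form described above (assuming the map m_{1,3,24}(a ⊗ b ⊗ c ⊗ d) = a ⊗ c ⊗ bd):

```latex
% Double bialgebra structure on C[x] (sketch, notation as in the talk)
\Delta(x) = x \otimes 1 + 1 \otimes x   % additive coproduct: x is primitive
\qquad
\delta(x) = x \otimes x                 % multiplicative coproduct: x is group-like
\qquad
\rho = \delta                           % the coaction is the second coproduct
% Counits: \varepsilon(P) = P(0) for \Delta, and \varepsilon'(P) = P(1) for \delta.
% Cointeraction, with m_{1,3,24}(a \otimes b \otimes c \otimes d) = a \otimes c \otimes bd:
(\Delta \otimes \mathrm{Id}) \circ \delta
   \;=\; m_{1,3,24} \circ (\delta \otimes \delta) \circ \Delta
```

One can check this directly on x: both sides give x ⊗ 1 ⊗ x + 1 ⊗ x ⊗ x.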
07:20
We know more examples, for example the well-known Connes-Kreimer Hopf algebra of trees, which is based on rooted forests; I won't dwell on it, it's just based on rooted trees. For me the roots of the trees are at the bottom, so you have the
07:41
trees with two vertices, three vertices, four vertices. The product is the disjoint union, so it has a basis of forests. The first coproduct is the Connes-Kreimer one, given by admissible cuts. So you take your tree or your forest, you just cut some branches,
08:02
you put the branches on the right and the trunk on the left. For example, for this tree you can cut nothing, and you obtain the tree tensor one, or you can cut everything, one tensor the tree, or you can cut a leaf, in two possible ways. You obtain two terms where the cut branches
08:24
go on the right, and the part containing the root remains on the left. And the same for this one: you can cut nothing, or everything, or the trunk after the root, or the trunk just before the leaf, and you obtain these four terms. There is a primitive part, the first two terms,
08:41
which means that the counit is very simple. The counit of a forest is one if the forest has no vertex, and zero otherwise. And you can observe that it is graded, obviously, by the number of vertices. If you cut a forest, you don't lose any vertex: some go on the left, the others on the right, but you don't lose any vertex, so this is graded
09:01
by the number of vertices. So this is the most famous coproduct on it, but there is a second one, which was first described by Damien Calaque, Kurusch Ebrahimi-Fard and Dominique Manchon, in 2008, I think, and which is given by a process of extraction and contraction.
09:21
So what does this mean? For example, for this tree, you can separate your tree into disjoint subtrees. So for example, you can cut it into three subtrees, each with only one vertex; on the left you contract these subtrees, but nothing happens, and on the right you put these subtrees, so here you don't do anything. Or you can contract
09:44
an edge on the left, so this subtree, and this gives this tree, in two possible ways. Or you can contract the whole tree, only one vertex remains, and
10:01
you put the whole tree on the right. So this is another coproduct, which is also coassociative; it's not cocommutative, you can see it, and it's not a Hopf algebra because you have a group-like element, which is the tree with only one vertex, with no inverse, so you only obtain a bialgebra for this coproduct. And they proved that this is a double bialgebra,
10:25
which means that this coproduct really acts in a good way on the first, Connes-Kreimer, coproduct. And there are similar constructions on posets, finite posets; you can see trees as posets just by taking the order, the partial order,
10:43
to be "being higher in the tree", so if you have a tree, you have a poset, and such a construction also exists on finite posets, or more generally on finite topologies. So this is the first example of, well, not the first, the second example of a double bialgebra.
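As a minimal illustration of the two coproducts on trees (a sketch using the conventions just described: contracted part and trunk on the left, cut branches and extracted subtrees on the right), write • for the single vertex and ℓ for the tree with two vertices:

```latex
% Connes-Kreimer coproduct (admissible cuts) on the two-vertex tree \ell:
\Delta(\ell) = \ell \otimes 1 + 1 \otimes \ell + \bullet \otimes \bullet
% Extraction-contraction coproduct (Calaque, Ebrahimi-Fard, Manchon):
% contract the chosen subtrees on the left, put the extracted subtrees on the right
\delta(\ell) = \ell \otimes (\bullet\,\bullet) \;+\; \bullet \otimes \ell
```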
11:04
I'm going to give another one based on graphs. So for this, the basis of my Hopf algebra of graphs is the set of graphs, and these are just simple graphs. So here you have all graphs, connected or not, with one, two, three, or four
11:22
vertices. There is a simple product on it, which is the disjoint union. The unit is the empty graph, which is here; so for example, this graph is the product of this one by itself, something like this. There is also a very simple coproduct, which is just given by separating the graph into two parts. Take your set
11:45
of vertices, put some of the vertices on the left with the edges between them, the other vertices on the right also with the edges between them, and you obtain a nice coproduct, which was first defined, I think, in a paper of Schmitt on incidence Hopf algebras.
12:04
This is an example of an incidence Hopf algebra based on a family of graphs, just graphs with a given set of vertices, with extraction of subgraphs. So this is a coproduct, it's coassociative, it's really not difficult to see,
12:23
and it's cocommutative. This is very different from the Connes-Kreimer coproduct: this one is cocommutative. There is a second one, which can also be found in the paper of Schmitt with another incidence Hopf algebra, and which was also described in a paper of Dominique in 2011,
12:46
with various examples on graphs, oriented graphs, acyclic oriented graphs, something like this. So this is the same idea as for trees. For trees, the first coproduct is just cutting the tree into two parts, and this is the same for graphs. The second coproduct was
13:06
given by extraction and contraction, and this is the same for graphs. Take a graph; you can take an equivalence relation on the set of vertices, which just means a partition of your vertices. On one side, you contract the equivalence classes,
13:28
so this means that you contract some subgraphs of your graph to vertices, and on the other side you just forget the edges between vertices
13:42
which are not equivalent. So I'm just going to give an example. For this one, you can contract everything, only a vertex remains, and on the right you obtain the whole graph. Or you can contract only one edge, for example this one. If you contract it, you
14:01
just obtain a graph with two vertices and one edge between them, this, and the extraction is given by this edge and the other vertex, so something like this. You have three possible ways to do this. Or you can just extract the vertices, which go on the right, and if you contract the vertices nothing happens, so the graph stays itself on the left. So this is also a second
14:27
coproduct, and this is also a bialgebra. It is not a Hopf algebra because there is a group-like element, the graph with one vertex, and it has no inverse, so you don't have any antipode. It's not a big problem, but it's not a Hopf algebra. And the counit is very simple.
14:46
In fact, this is because of this part: for each graph you obtain vertices tensor the graph, plus the graph tensor vertices, and more terms. So the counit is just given by
15:00
sending any graph to one if the graph is totally disconnected, with no edge, and to zero otherwise. And you can prove that this is really a double bialgebra: the second coproduct, this extraction-contraction coproduct, really acts on the first one.
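For concreteness, a small sketch on the graph e with two vertices and one edge, with the same left/right convention as described above (contracted graph on the left, extracted pieces on the right); • denotes the one-vertex graph:

```latex
% First (Schmitt) coproduct: split the vertex set into two parts
\Delta(e) = e \otimes 1 + 1 \otimes e + 2\,(\bullet \otimes \bullet)
% Second (extraction-contraction) coproduct: sum over partitions of the vertex set
\delta(e) = e \otimes (\bullet\,\bullet) \;+\; \bullet \otimes e
% Counit of \delta: 1 on graphs with no edge, 0 otherwise; here \varepsilon'(e) = 0.
```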
15:21
The last example is the Hopf algebra of quasi-symmetric functions. As an algebra, it's based on the set of compositions, and a composition is just a finite sequence of positive integers. With this product, which was used just before by Dominique, the quasi-shuffle product
15:42
on compositions. The first coproduct is given by deconcatenation: you just cut your word into two parts, between two letters. And there is a second one, which is given by extraction and contraction. So for example, for this word you cut it into several parts: one part, two parts, two parts,
16:08
then three parts, two parts, two parts, and one part. You then contract the parts. Contracting just means that you sum all the letters of each part, and because the letters are integers you can sum them. This gives the terms on the left, and on the right you just
16:26
quasi-shuffle your parts. So this is another coproduct. You can find it in the papers of Thibon, Novelli, and others on this subject.
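As a small worked sketch on the composition (1,2), following the description just given (the exact conventions may differ slightly from the slides):

```latex
% Quasi-shuffle product of two one-letter compositions:
(1) \cdot (2) = (1,2) + (2,1) + (3)
% First coproduct (deconcatenation):
\Delta(1,2) = (1,2) \otimes () + (1) \otimes (2) + () \otimes (1,2)
% Second coproduct (contract the parts on the left, quasi-shuffle them on the right):
\delta(1,2) = (3) \otimes (1,2) \;+\; (1,2) \otimes \big[(1,2) + (2,1) + (3)\big]
```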
16:40
I'm not totally sure they proved this is really a double bialgebra; I'm not sure they proved the cointeraction, but well, it's true. And they prove it with a trick, which is based on manipulation of alphabets. So it's not really obvious, but you can do it without too much combinatorics, just with algebraic tricks. So, well, there is a counit, which I forgot, which is also given by
17:10
this: you take a word, a composition; if its length is zero or one, the counit is one, and zero otherwise. And it turns out that this is a character of QSym, which appears in another
17:22
paper, by Aguiar, Bergeron, and Sottile. In this paper, they define the category of combinatorial Hopf algebras, which are graded connected Hopf algebras together with a character, taken as pairs. And they prove that in this category, QSym with this character ε′,
17:44
they don't mention that this is the counit of a certain coproduct, by the way, but this pair is a terminal object. So this means that if you take a combinatorial Hopf algebra, that is, a connected graded Hopf algebra with a character, you automatically obtain a
18:00
morphism to QSym compatible with this counit. Okay, so that's nice. These are nice objects, but the question is what you can do with this, what you can deduce from this construction, and what it will give on these examples of graphs and trees and things like this.
18:25
So first of all, take a double bialgebra A with one product and two coproducts, another bialgebra B, and look at morphisms of Hopf algebras or bialgebras from A to B. It turns out that the monoid of characters of A for the second coproduct acts
18:47
on the set of bialgebra morphisms, with the help of the coaction, that is, the second coproduct. So this means that if you obtain one bialgebra morphism from A to B, in fact you have a lot
19:03
more bialgebra morphisms from A to B. You can deform any bialgebra morphism with the help of characters of A. If you have this one, you will have all of them, just by using the action. So let's try to do this for forests. Forests form a double
19:26
bialgebra, so this means that there should be a unique morphism from forests to polynomials compatible with both coproducts. You can prove that you can compute it inductively. For example, let's start with
19:43
the first tree, the tree with only one vertex. It's primitive for trees, so its image should be primitive for polynomials, and the space of primitive elements of ℂ[x] is one-dimensional, generated by x. So this means that phi_1 of this tree should be a multiple of x.
20:05
Moreover, phi_1 is compatible with the second coproduct and with its counit ε′. ε′ of this tree is one, so ε′ of its image should be one. And ε′ of λx is λ, so λ should be one.
20:22
So you entirely determine phi_1 of this tree: it should be x. For the second one, we do the same. Let's first compute the coproduct of this tree. This is it; there is only one non-trivial admissible cut. Let us apply phi_1 to this. phi_1 is compatible with big delta,
20:44
so you should find something like this. phi_1 of this is this. So this means that phi_1 of this tree should be x²/2 plus a primitive element, so λx. This morphism phi_1 is compatible with the counit ε′,
21:03
so ε′ of this polynomial should be ε′ of this tree, so it should be equal to zero, and you obtain that λ is equal to minus one half, and you obtain phi_1 of this tree, which is exactly this. So what we are doing here is proving that this morphism is unique.
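Written out, the computation just described reads as follows (a sketch, with ℓ the tree with two vertices and ε′(P) = P(1) on polynomials):

```latex
\Delta(\ell) = \ell \otimes 1 + 1 \otimes \ell + \bullet \otimes \bullet
\;\Longrightarrow\;
\Delta(\phi_1(\ell)) = \phi_1(\ell) \otimes 1 + 1 \otimes \phi_1(\ell) + x \otimes x ,
% so \phi_1(\ell) = x^2/2 + \lambda x for some scalar \lambda,
% since \Delta(x^2/2) = x^2/2 \otimes 1 + x \otimes x + 1 \otimes x^2/2;
% the condition \varepsilon'(\phi_1(\ell)) = \varepsilon'(\ell) = 0 gives 1/2 + \lambda = 0, hence
\phi_1(\ell) = \frac{x^2}{2} - \frac{x}{2} = \binom{x}{2}.
```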
21:25
What is not clear is that it is really compatible with the second coproduct. I only use that it's compatible with the counit, but you obtain for free that it is in fact compatible with the second coproduct. So you can continue like this. For this tree, you obtain something like this,
21:44
which is a Hilbert polynomial, quite famous. For this tree, this is no longer a Hilbert polynomial but something like this. Maybe you recognize it: this polynomial counts the sum of squares. Evaluated at n, it is one squared plus two squared plus
22:04
etc., plus n squared. So these are quite special polynomials. You can do more, in fact. Well, this is a nice way to compute the invariant phi_1, but it's quite long. In fact, you can do better.
22:21
You can prove a formula like this. If you take an element of your double bialgebra A, you can compute phi_1 of it in this way. First, you compute all the iterated reduced coproducts. The reduced coproduct is just obtained by forgetting the primitive part, so this means that for a tree you just forget the trivial cut and the cut of everything;
22:45
you can compute it and iterate and iterate and iterate, something like this. And you know that at a certain point it will stop: the iterated reduced coproduct is zero after a certain point. So you compute all of them. You obtain tensors of trees or something like
23:01
this. You just apply the counit ε′ on each factor and then you multiply the terms by your Hilbert polynomials. And this means that your invariant really counts something: if you evaluate x at an integer, these are binomial coefficients. So phi_1 of an element really just counts something.
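In formulas, the recipe seems to be the following (a sketch in the notation of the talk, for a in the kernel of the counit; the binom(x,k) are the "Hilbert polynomials" just mentioned). On the two-vertex tree it gives back binom(x,2) = x(x-1)/2, as above:

```latex
\phi_1(a) \;=\; \sum_{k \ge 1}
   \big(\varepsilon' \otimes \cdots \otimes \varepsilon'\big)
   \circ \tilde{\Delta}^{(k-1)}(a)\;\binom{x}{k},
\qquad
\binom{x}{k} = \frac{x(x-1)\cdots(x-k+1)}{k!}.
% Evaluated at an integer n, \phi_1(a)(n) is a sum of binomial coefficients
% weighted by the scalars \varepsilon'^{\otimes k}(\tilde{\Delta}^{(k-1)}(a)).
```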
23:28
For forests, this counts something like the following. This is quite a well-known construction. If you take a forest with k vertices (well, just choose an indexation of the vertices of the forest,
23:42
it doesn't really matter), you can associate to it a polytope of dimension k. In fact, this polytope is defined by some inequalities: if you take one vertex which is under another one, if the vertex i is below the vertex j in your tree,
24:08
then you associate to it an inequality x_i ≤ x_j. So this defines a polytope. And you first want to dilate it by an integer, taking n minus one times the polytope, so just
24:23
apply a homothety. And you want to count the number of integral points inside. It's quite a famous result that this count is in fact given by a polynomial in n. So this defines a polynomial, which is called the Ehrhart polynomial.
24:42
The strict Ehrhart polynomial is the same, but counts the number of integral points strictly inside your polytope. It is quite well known that these define two polynomials which are related, as I will say later. Just to mention a problem: usually, in the literature on this,
25:03
the Ehrhart polynomial at n just counts the number of integral points of the dilation of the polytope by n. Here I have a problem: if I do this, it does not work, so I have to do some translation by one. So for example, for this one, you have three vertices
25:27
which are indexed by one, two, three, from bottom to top. One is smaller than two, two is smaller than three. So your polytope is defined by x smaller than y, smaller than z,
25:41
all between zero and one. So the polytope associated to F is just a simplex. So if you want to count the number of integral points in the dilation of this polytope, what you count is the number of points (x, y, z) with integer coordinates such that
26:01
0 ≤ x ≤ y ≤ z ≤ n − 1. And it's not very difficult to count them and to prove that this is n(n+1)(n+2)/6. So this is for the Ehrhart polynomial. For the strict Ehrhart polynomial, you count the points inside, so this means that you replace your "smaller or equal" by "strictly smaller",
26:24
and n minus one by n plus one. So this counts things like this, and it's not difficult to prove that this is this polynomial, which by the way is in fact phi_1 of F evaluated at n. You can do the same thing for this tree. Here you can try to
26:47
draw the polytope: this is a pyramid with a square base. You can count the number of points inside, which is this. And so this is the Ehrhart polynomial, the strict Ehrhart polynomial, which is again phi_1 of F evaluated at n.
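As a quick numerical sanity check of the counts just described for the three-vertex chain (a sketch; it simply brute-forces the lattice points and compares with the closed forms quoted in the talk):

```python
from math import comb

def weak_count(n):
    """Lattice points (x, y, z) with 0 <= x <= y <= z <= n - 1."""
    return sum(1 for x in range(n) for y in range(x, n) for z in range(y, n))

def strict_count(n):
    """Lattice points with 0 < x < y < z < n + 1, i.e. 1 <= x < y < z <= n."""
    return sum(1 for x in range(1, n + 1)
                 for y in range(x + 1, n + 1)
                 for z in range(y + 1, n + 1))

for n in range(1, 8):
    assert weak_count(n) == n * (n + 1) * (n + 2) // 6 == comb(n + 2, 3)
    assert strict_count(n) == n * (n - 1) * (n - 2) // 6 == comb(n, 3)
    # Reciprocity in this example: strict(n) = -weak(-n) (degree 3, so sign (-1)^3)
    assert strict_count(n) == -((-n) * (-n + 1) * (-n + 2)) // 6
print("counts match n(n+1)(n+2)/6 and n(n-1)(n-2)/6")
```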
27:05
And in fact this is exactly the general statement: the unique morphism compatible with the product and the two coproducts is in fact the strict Ehrhart polynomial. Okay. So if you look at this,
27:20
you can observe that the strict Ehrhart and the Ehrhart polynomial are very similar: more or less they have the same coefficients, and things like this. And in fact you can prove that there is another morphism, from the Connes-Kreimer Hopf algebra to polynomials, which is compatible with the product and the two coproducts. This is not the Ehrhart polynomial,
27:41
but more or less: you just replace x by minus x and you have to correct the sign by multiplying by a power of minus one. It's not very difficult to prove combinatorially that this is also compatible with the product and both coproducts. But there is only one morphism compatible with the product and the
28:04
two coproducts. So this means that this morphism is the same as phi_1, and what you obtain is an algebraic proof of the reciprocity principle for Ehrhart polynomials. In fact the strict Ehrhart polynomial and the Ehrhart polynomial are really closely related: you just evaluate x at minus one minus x and adjust
28:27
the sign. So here, as an application, you have a proof of the reciprocity principle for Ehrhart polynomials, more or less without any real combinatorial stuff.
28:43
The usual proof uses some Möbius inversion in some posets, so it is really combinatorial. Can I ask a quick question here? Sorry. So you exemplified this in the case of these polytopes you get from trees via this
29:02
poset order. Can you also use a similar kind of argument to get this Ehrhart reciprocity result for arbitrary polytopes? I did not manage to do it. In fact I would need a Hopf algebra structure on polytopes, a good coalgebra structure on polytopes. I have a product, which is just the usual product of
29:25
polytopes, but I cannot find the coproducts. In fact those coming from forests are very special; you see, they are defined by some inequalities which are nice. If you take an arbitrary polytope, I don't know how to cut it.
29:40
Yeah, thank you. So let's do it for graphs now. For graphs I can apply the formula for phi_1 which I gave before. What does this mean? I have to compute all the iterated coproducts of graphs, so this means that I have to cut the graph into any number of parts I want.
30:06
The iterated coproduct just means that I cut the graph into a lot of parts, and then on each part I apply the counit of the second coproduct. This counit is one if the part has no edge, and otherwise it's zero.
30:24
So this means that in phi_1, the only decompositions of your graph which appear are decompositions into parts with no edges. A decomposition of the graph is equivalent to a coloration of the graph. A coloration
30:43
just associates to any vertex of the graph a color, which usually is a number. So a partition of the graph is just the same as a coloration. And the colorations appearing in my phi_1 are just the colorations which are called valid. This means that
31:03
if two vertices have the same color, they should not be neighbors in the graph; there is no edge between them. So this means that in phi_1 I just take into account valid colorations. So this is a polynomial obtained by counting like this, and it is called
31:21
the chromatic polynomial. So in fact what I find for graphs is that the unique morphism compatible with the product and both coproducts is the chromatic polynomial, which is perhaps not a big surprise, because if you work with graphs you know that the chromatic polynomial is an essential tool for studying them. So this is perhaps an explanation of why it's so important.
31:46
In fact it's the unique polynomial invariant on graphs which is compatible with all these structures of extraction and contraction of subgraphs.
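A minimal sketch of this counting definition, on the triangle graph (hypothetical helper names; it just counts the valid colorations with n colors and compares with the known chromatic polynomial n(n-1)(n-2) of the triangle):

```python
from itertools import product

def chromatic_count(vertices, edges, n):
    """Number of valid (proper) colorations of the graph with n colors."""
    count = 0
    for coloring in product(range(n), repeat=len(vertices)):
        if all(coloring[i] != coloring[j] for (i, j) in edges):
            count += 1
    return count

triangle_vertices = [0, 1, 2]
triangle_edges = [(0, 1), (1, 2), (0, 2)]
for n in range(1, 7):
    assert chromatic_count(triangle_vertices, triangle_edges, n) == n * (n - 1) * (n - 2)
print("valid colorations of the triangle match n(n-1)(n-2)")
```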
32:01
Okay, so something else now, another application. I'm going back to a theoretical result. In fact, I'm looking for the antipode. In all my examples, for the big delta, this is a Hopf algebra, so it has an antipode. For the second coproduct, it's just a bialgebra, so no antipode.
32:25
In fact, I can prove that if I want to compute the antipode of A, I just have to compute the inverse of a special character, which is the counit of the second coproduct. The counit of the second coproduct is the unit for the convolution coming from the second coproduct,
32:42
but for the first coproduct it's just a character with no special property. Maybe it's invertible; if it is, well, you know that A is a Hopf algebra and you have a nice formula for your antipode: just apply the second coproduct and then the inverse character on
33:01
the first component. So for double bialgebras, if you want to compute the antipode, you just have to compute a special character. There is something more. In fact, maybe you observed that all my examples of double bialgebras are commutative. In fact, what you obtain here,
33:23
like this, is that S is a composition of algebra morphisms. So this means that in a double bialgebra, the antipode of A is an algebra morphism, whereas usually it's an anti-algebra morphism. So double bialgebras are special bialgebras such that the antipode is both an algebra
33:48
and an anti-algebra morphism. So this means that essentially, if you have a double bialgebra, it should be commutative. This is the reason why all my examples are commutative. In fact, you cannot obtain a double bialgebra which is not commutative,
34:03
because, and this is one of the reasons, the antipode should be an algebra morphism. You can do more. In fact, you have to compute the inverse of the character, which is not so obvious. You can do it inductively, but it's not so obvious.
34:22
But if you know how to compute phi_1, well, it's very easy to find the inverse of the character. Just take an element a of the algebra, compute phi_1 of a, which is a polynomial, and then evaluate it at minus one, and this is really the character
34:42
alpha applied to a. So for example, for rooted forests, if you want to compute the antipode, you need to compute the Ehrhart polynomial of any forest evaluated at minus one, and it's very easy: it's just a power of minus one. So alpha of a forest is just a power of minus one, and you obtain the formula for the antipode that was proved by Connes and Kreimer.
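In formulas, and following the description above (a sketch; the side on which the character acts is as I read the talk, so treat it as an assumption):

```latex
% If \alpha = \varepsilon'^{*-1} exists (inverse of \varepsilon' for the convolution * built from \Delta),
% then (A, m, \Delta) is a Hopf algebra, with antipode
S \;=\; (\alpha \otimes \mathrm{Id}) \circ \delta ,
\qquad
\alpha(a) \;=\; \phi_1(a)(-1).
% For a rooted forest F, \phi_1(F)(-1) is a power of -1
% (presumably (-1)^{\#\text{vertices of } F}).
```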
35:03
They did not prove it in this way at all; theirs is an inductive proof. Here you obtain it, something like this, with no induction. It's more interesting for graphs. For graphs, for a long time,
35:24
the antipode was not known, not really. You can compute it inductively, of course, but it was not so clear. Last year, in fact, a formula was proved by Benedetti, Bergeron, and Machacek (I'm not totally sure of the pronunciation), with a combinatorial method which was quite complicated. There is an inversion,
35:45
things like this, and curiously the number of acyclic orientations appears. And in fact, this is obtained with my method as follows. If you want to do this, you have to compute phi_1 of
36:00
the graph evaluated at minus one. phi_1 of a graph is the chromatic polynomial, and it's quite a famous result in graph theory that the chromatic polynomial evaluated at minus one counts the number of acyclic orientations, up to a sign. To prove this it's not so difficult; it is just a combinatorial proof by induction on the number of vertices.
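Continuing the sketch above, one can check this statement numerically on the triangle: its chromatic polynomial is n(n-1)(n-2), so evaluating at -1 gives -6, and indeed the triangle has 6 acyclic orientations (of its 2^3 = 8 orientations, only the two cyclic ones are excluded). A tiny brute-force check:

```python
from itertools import product as iproduct

def is_acyclic(oriented_edges, num_vertices):
    """Check acyclicity by repeatedly removing vertices with no incoming edge."""
    remaining = set(range(num_vertices))
    edges = set(oriented_edges)
    while remaining:
        sources = [v for v in remaining if all(t != v for (_, t) in edges)]
        if not sources:
            return False  # every remaining vertex has an incoming edge: there is a cycle
        for v in sources:
            remaining.discard(v)
            edges = {(s, t) for (s, t) in edges if s != v and t != v}
    return True

triangle_edges = [(0, 1), (1, 2), (0, 2)]
orientations = 0
for choice in iproduct([0, 1], repeat=len(triangle_edges)):
    oriented = [(a, b) if c == 0 else (b, a)
                for (a, b), c in zip(triangle_edges, choice)]
    if is_acyclic(oriented, 3):
        orientations += 1

chromatic_at_minus_one = (-1) * (-2) * (-3)   # P(-1) for the triangle
assert orientations == abs(chromatic_at_minus_one) == 6
print("acyclic orientations of the triangle:", orientations)
```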
36:25
So what you obtain is the formula by Benedetti, Bergeron, and Machacek, with no more combinatorics, more or less: you just use the chromatic polynomial evaluated at minus one. You can do better with the chromatic character. In fact, there is a very simple
36:44
Hopf algebra morphism from graphs to polynomials, which just sends a graph G to the monomial x to the power of the number of vertices of G. It is really easy to show that this is a Hopf algebra morphism. Of course, it's only compatible with the first
37:04
coproduct, not with the second one. The unique morphism compatible with the second one is phi_1, the chromatic polynomial. And I mentioned before that any Hopf algebra morphism from graphs to polynomials should be obtained from the chromatic polynomial
37:24
by the action of a character. So we can write that phi_0, this very simple invariant, should be obtained from the chromatic polynomial by the action of a character, which I denote by lambda, and which is very easy to compute: it just sends any graph to one.
37:42
So lambda is a very simple character, but what is more interesting is that it is invertible for the second convolution. So this means that you obtain the chromatic polynomial from this very simple morphism just by acting with a certain character, which is not so easy to find.
38:05
This means that you obtain a formula for the chromatic polynomial. In fact, the chromatic polynomial is a sum over all possible contractions of your graph of x to the power of the number of classes of the contraction, times a scalar, which can be
38:27
inductively computed. This chromatic character, you can compute its values just by induction, and you can observe on this example that it is never zero.
38:40
The chromatic character never takes the value zero, and its sign only depends on the number of vertices: with one vertex it's positive, with two vertices it's negative, three vertices positive, four vertices negative, and you can prove this just by induction. So, just by something like this (I don't have any more time, so I'll cut it a little bit).
39:09
So just by this, and with this formula, you can prove that the coefficients of the chromatic polynomial are alternating in sign, which is a result proved by Rota in the 70s, I think,
39:24
with complicated combinatorial methods. Here you obtain it just with a small combinatorial tool, which is the contraction and extraction of edges, and then this formula for the chromatic polynomial related to the chromatic character. Okay,
39:49
I think I've got, I still have five minutes. So far I talked about morphisms with values in polynomials. Now I'm going to talk
40:02
about morphisms with values in the algebra of quasi-symmetric functions. I mentioned before that, by Aguiar, Bergeron and Sottile, I know that there are a lot of morphisms to it, because it's a terminal object. If I want a morphism to QSym, I just have to choose a character and then I will obtain a
40:23
homogeneous morphism compatible with the product and the first coproduct. I've got a formula for this, which is similar to the formula for the polynomial invariant. If I want to construct a morphism from A to QSym,
40:44
I just take all the iterated coproducts and apply some projections on them, so I project on the homogeneous parts. Where before I applied the counit on all the parts, I now take into account the
41:02
degrees of the parts, as a composition, something like this. So what I can prove is that if I'm looking for a morphism from A to QSym compatible with both bialgebra structures, so compatible with the product and the two coproducts,
41:21
this is the only possibility. If I'm looking for something compatible with the product and the coproducts, this is the only one which could work. And unhappily, it does not always work. I need another condition, a condition on
41:40
the gradation. In fact, I need that the second coproduct, more or less, respects the gradation on its first component. If this technical condition is not satisfied, well, phi_1 is not compatible with the second coproduct, so this means that I won't have any morphism compatible with both structures. And this is what happens for forests.
42:07
In fact, my condition means that in the second coproduct I should obtain only terms with three vertices on the left. This is not the case: there are some trees in red,
42:20
they should have three vertices if I want compatibility with the second coproduct, and unhappily, well, they have only two vertices. So in this case, I won't have any morphism compatible with both structures from trees to QSym. So I have to cheat a little bit,
42:42
and I just replace forests by decorated forests. This means that on any vertex of my forest I add a decoration, which is an integer. So if I do some contraction, for example here, I'm contracting the edge between a and b; well, I don't forget a and b, I just replace
43:03
the decoration of the new vertex by a plus b. So now the coproduct is homogeneous: here the weight is a plus b plus c, and on the left, and also on the right, I only obtain trees of weight a plus b plus c. So I can now put them in black.
43:24
And now, with this trick, the technical condition on the second coproduct, which I changed, is satisfied. So this means that I obtain for free a Hopf algebra morphism from decorated forests to QSym, which is homogeneous for this gradation by the weight, and compatible with
43:42
all the structures. This is the generalization of the Ehrhart polynomial, which is called the Ehrhart quasi-symmetric function. It is something like this. And the same for trees, for graphs, sorry. And what we obtain is the generalization of the chromatic polynomial,
44:07
which is called the chromatic quasi-symmetric function. But in fact, it's not only quasi-symmetric, it's symmetric, for reasons of cocommutativity. And this is an object which was already known to combinatorialists working on graphs. They know it, but they didn't know that it was
44:27
compatible with all these structures on quasi-symmetric functions. And a last word: as I said before, double bialgebras do not go very well with non-commutative
44:44
bialgebras. In fact, you can do something non-commutative: you can replace trees by indexed trees or planar trees and things like this, and you can also define two coproducts.
45:00
Okay, there are two coproducts, they are no longer compatible as before, but they still exist, so why not? And you can generalize your chromatic series or Ehrhart series, all the morphisms I mentioned before; they still exist in a non-commutative way, but you can no longer use the formalism of double bialgebras. These are no longer double
45:26
bialgebras. So if you want to do this, if you want to explain it, you have to work in another category, which is in some sense bigger, which is the category of species. In fact, all my objects are traces, meaning that they are images by functors of something
45:45
which exists in the category of species. In the category of species there are two functors, which give on one hand the commutative objects, which are double bialgebras, and on the other hand non-commutative
46:02
objects, which are not double bialgebras, but which are traces, non-commutative traces, of double bialgebras in the category of species. So I will stop here, and thank you very much for your attention. Thank you very much, Loïc. Are there questions? Okay.
46:29
Yes, did you actually upgrade all your formalism to the species
46:42
setting? Yes, in fact in the species setting you can do the same. The algebra ℂ[x] is replaced by the analogous object, which is the species of compositions. Yeah, so double bialgebras and things like this, and it turns out that you have two functors,
47:04
the Fock functor and the full Fock functor; one sends you to commutative objects, and the other one to non-commutative objects. So in fact, the double bialgebras in the setting of species are commutative in the species sense, but it's not a
47:26
strong commutativity, so with the second functor they are no longer commutative in the usual sense. You can do all this in the setting of species, with more or less the same proofs, something like this. There are some
47:43
technicalities in some sense, but these are the same ideas. If I may, I'd like to make a comment or ask a question. Thank you, Loïc, for your nice talk. You were wondering about coproducts on polytopes. There is one on cones, and polytopes can be seen as intersections of
48:07
cones, and that's the way, formally, a coproduct on polytopes of the kind you were talking about can be derived. So I think it should be possible, maybe, going that way, following the path of
48:27
Barvinok, Berline, Vergne; I wonder whether it's possible. Okay, so I can say that I found some coproducts on polytopes, but more or less they are useless. They are stupid coproducts, and
48:46
you don't obtain the Ehrhart polynomials as invariants, you obtain stupid things, just by sending, for example, a polytope to its number of vertices, something like this, or its dimension. I just found stupid coproducts, not interesting ones.
49:05
Okay, the one I'm thinking of is very geometric, and it serves similar purposes. It's for counting integer points in cones, so it's made for that, and it's implicit in
49:20
people's work in toric geometry. So yes, it could be helpful, maybe. It's a geometric one. Okay, so it should be better than mine. I will look at it. There's a question from Yannick Vargas in the Q&A, and I have set him so that he can unmute, but he asks: is there a polytope associated to double posets the same way you define the
49:45
polytope associated to a forest? Yes, you can. In fact, everything I did for forests can be done for posets or topologies, and you also obtain some Ehrhart polynomials
50:02
with the same properties. I just restricted myself to forests because they are easier to describe than posets or topologies. David, do you want to ask your question? Yes, I wanted to make a comment because it's directly related to the title of this conference,
50:22
on algebraic structures in quantum field theory. Now look, you know very well Dirk's work on the Hopf algebra of renormalization, but I hope you're also aware of his recent work on the coaction associated with the monodromy of the functions we get from
50:41
Feynman diagrams, and that's associated with cutting lines. There's an interesting question as to how that relates to the coaction on multiple polylogarithms that we obtain, and functions beyond that. So there is, facing us at present in quantum field theory, a compatibility question. It might not be directly related to your talk, and that is: what about calculations in which we
51:02
both cut lines to discover analytic structure, but also have sub-divergences that we have to renormalize? Yes, that's precisely the right question, David, but I think there will be an answer pretty soon, and it has a lot to do with Loïc's cointeracting bialgebras.
51:24
Excellent, thank you. Okay, thank you. I think there's also a simpler connection: when we were looking at the general tree Feynman rules, then having the co-structure on trees, the cointeracting structure on trees, picks out a particular one,
51:45
a particular choice for the lower order terms. So that's a more trivial observation than what David and Dirk are getting at, but it's another connection to the quantum field theory situation.
52:03
I see Mishi has his hand up. Yes, thanks, and thanks, Loïc, for the nice talk. I also have a question regarding the coproducts on polytopes, and it's about
52:22
whether there's a relation, I mean, or if you're aware of this newer stuff by Ardila, together with Aguiar, on the Hopf monoid structure of generalized permutahedra, which are polytopes, and on them you can define this coproduct.
52:43
So is this too specific for your purposes? Because I think they also, at least, asked some questions about Ehrhart polynomials there on these polytopes. Yes, but the permutahedra are very specific polytopes. All right, but in this case, it's easier to define some
53:08
coproducts on this sort of objects, which have a strong combinatorial structure behind them. If you take arbitrary polytopes, it's not the case; that's the problem. So the problem is really the generalization then, so this only works for these specific polytopes.
53:27
Yes, I think that for a special family of polytopes with strong structure, you can define a coproduct which should give you the Ehrhart polynomial or something like this. All right. For arbitrary polytopes,
53:41
in fact this coproduct is quite... you can see it easily on the poset, you just cut, or something like this. On the polytope, it's not so clear for me. It is a section of faces or something like this, which is really something more complicated. Okay. I don't know exactly what it is geometrically.
54:02
Okay, I see. Thank you. The problem for me is that I can understand polytopes in three dimensions, but no more. And this is only two or three examples which I can manage, so it's not enough for me to understand what happens for bigger polytopes. So I don't know exactly what the coproduct
54:25
is for polytopes. All right. In the interest of time, why don't we thank Loïc again now?