Feynman integrals and hyperlogarithms
Formal Metadata

Title: Feynman integrals and hyperlogarithms
Part Number: 02
Number of Parts: 2
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/20231 (DOI)
Transcript: English (auto-generated)
00:15
Thank you very much. And thank you to all of you who stayed around in the afternoon for this last talk. I'm very grateful to the organizers for this opportunity to speak

00:23

here at the IHÉS. I will try to connect with Francis' talk in the morning. After Ruth showed you in the previous talk some other approaches and ideas on how to compute Feynman integrals, I want to go back to the parametric representation that Francis showed

00:40

you in the morning. Francis put forward at the time a very new approach to the computation of Feynman integrals, which uses hyperlogarithms throughout and is one attempt to understand the prevalence of multiple polylogarithms and MZVs in the calculation of many Feynman diagrams.

01:00

So I want to talk about Feynman integrals, but specifically about their connection with hyperlogarithms. The motivation for this particular approach, or rather its goal, is to understand why

01:34

so many Feynman integrals (abbreviated FI) are expressible in terms of multiple polylogarithms.
02:13
And Francis already showed you these functions in the morning; I just recall them briefly. These are functions which you can define by multiple series, which I write down now.

02:21

They are indexed by a tuple of integers n1 up to nd, and they depend on several complex variables. One way to define them is via a nested summation over integers k1 up to kd. The summand is just the monomial in the variables with the summation

02:45

indices as exponents, divided by the summation indices raised to certain powers, as dictated by the index of the multiple polylogarithm. We have already seen some

03:01

examples, also in Ruth's talk, where these functions show up in Feynman diagram calculations, and it is still somewhat mysterious today why this is the case so often. But remember, not all Feynman diagrams or Feynman integrals are of this form:

03:26

we know explicit counterexamples, and we have many conjectures of graphs where we are pretty sure that they are of a much more complicated form. And if you think of the whole world of Feynman diagrams, you might actually say that this is probably a set of measure 0, in whatever

03:42

kind of measure you put on the set of Feynman graphs. But the surprise is that for most things relevant for physical calculations you actually get very far with just these Feynman diagrams; still, understanding why they are of this form is very difficult.
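For reference, the nested-sum definition just recalled can be written out as follows (this is one common convention; orderings of the indices vary between authors):

```latex
\operatorname{Li}_{n_1,\ldots,n_d}(z_1,\ldots,z_d)
  \;=\; \sum_{0 < k_1 < k_2 < \cdots < k_d}
        \frac{z_1^{k_1}\cdots z_d^{k_d}}{k_1^{n_1}\cdots k_d^{n_d}} .
```

For d = 1 this reduces to the classical polylogarithm Li_n(z), the sum of z^k / k^n over k from 1 to infinity.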
04:00
And I will try to demonstrate to you one approach which, at least in some cases, makes it very clear why this is the case. I will try to be very explicit and down to earth, so I decided to take an explicit example that we have already seen and compute it from beginning to end. If anything is unclear in the meantime, please interrupt me, because it should all be first- or second-year calculus, I hope. But it's still interesting,

04:25

because you see what the important ingredients are that make this approach work. So the idea that we want to follow is: take the parametric representation and integrate
04:49
out one Schwinger parameter at a time. So recall that we had a parameter associated

05:13

to each edge e of the graph, called α_e, introduced by the Schwinger trick, and then we had the parametric representation. I will consider an explicit example: the wheel

05:27

with three spokes, which was the first non-trivial example that Francis mentioned in his outlook. So what is the wheel? It is this graph, a wheel with three spokes. Let's

05:48

call its edges 1, 2, 3, 4, 5, 6. You can think of it as a graph in φ⁴ theory with four external legs. But because it is logarithmically divergent
06:01
in four dimensions, it has a residue, which I will denote by I(G): it is just the integral of ω_G divided by the square of the graph polynomial ψ_G, the first Symanzik polynomial. And what does this mean? I will evaluate this integral

06:22

explicitly. We just have to integrate all the Schwinger parameters from zero to infinity. Well, not actually all of them, because remember that in this form it is a projective integral. So to make it well defined, we can restrict to an arbitrary hyperplane, and I will choose

06:41

the hyperplane where I just set α6 to 1. So I will integrate 1 over ψ_G squared, with α6 set to 1. This is the integral we want to compute. And recall that by definition this Symanzik

07:12

polynomial ψ_G is the sum over all spanning trees, where each spanning tree contributes a monomial which is the product of the variables of all edges not in the spanning tree. So in particular,
07:24
this is linear with respect to each individual variable. And this means that the first

07:45

integrations are essentially elementary. So how does this work? We want to compute

08:08

this amplitude. Let's just do the first integral, over the first variable: the integral from zero to infinity of dα1 over ψ_G squared. I just reminded you that this is a

08:28

linear polynomial in α1, so let's say I call ψ^1 the coefficient of α1 in this polynomial and ψ_1 the constant part. This integral is clearly trivial to do: we just take the primitive of the integrand and

08:51

evaluate it between 0 and infinity, and we get 1 over the product of ψ^1 and ψ_1. This was easy. At the next stage, we have to leave the rational

09:05

functions. So let us compute the next integral, over α2, of 1 over ψ^1 ψ_1. Now these are coefficients of the original polynomial, which was linear in each individual variable, so these polynomials are still linear in the remaining variables.
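The definition of ψ_G and the first integration just performed can be summarized in formulas (a sketch; ψ^1 denotes the coefficient of α1 and ψ_1 the constant part):

```latex
\psi_G \;=\; \sum_{T\ \text{spanning tree}} \;\prod_{e \notin T} \alpha_e ,
\qquad
\int_0^\infty \frac{d\alpha_1}{(\psi^1 \alpha_1 + \psi_1)^2}
  \;=\; \left[\frac{-1}{\psi^1\,(\psi^1\alpha_1+\psi_1)}\right]_0^\infty
  \;=\; \frac{1}{\psi^1\,\psi_1} .
```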
09:23
So we can do a similar decomposition with respect to dα2, and I just continue with this labeling scheme: I call ψ^{12} the coefficient of α2 in ψ^1, which is a polynomial depending on the other variables, and ψ^1_2

09:42

the constant part of ψ^1 with respect to α2. These are just shorthands for sums over particular spanning trees, those which contain edge 2 but not edge 1, or which contain neither edge 1 nor edge 2, and so on.

10:03

So I started with a linear polynomial and a complete square, and I just got the coefficient of α1 times the constant part. This is a quadratic polynomial, but it is factorized into two linear forms. These are polynomials here: ψ^1 and ψ_1 are the coefficients

10:21

with respect to α1, so they are still polynomials in the remaining variables. This is important. (Question: but when you expand the expression, isn't there a term quadratic in α2? No, because these are linear polynomials in each variable: I start with a polynomial which is linear in each variable, and I pick a coefficient,

10:42

so it is still linear in all the other variables. That's the important point.) So I only have linear terms here: ψ^{12} α2 plus the constant part ψ^1_2, and likewise ψ^2_1 α2 plus ψ_{12}. So how do I do this integral? Well, by a partial fraction decomposition. I get a pre-factor from the partial fraction decomposition,

11:05

which you can work out very easily, and then you're left with the integral over α2.
11:38
So I hope you can all believe this formula.
11:42
And now we can just compute this integral in terms of logarithms, evaluated between 0 and infinity. This thing here just becomes the logarithm of ψ^2_1 and ψ^1_2, and in the denominator we have the other

12:00

two coefficients, ψ^{12} and ψ_{12}. So what we've seen is that we can very easily integrate the first two variables, and at that stage the partial integral we have computed so far is the logarithm of some coefficients of the graph polynomial, with a pre-factor given by the polynomial from the partial fraction decomposition.
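Spelled out, the second integration is the elementary partial-fraction computation (a sketch consistent with the labels above):

```latex
\int_0^\infty \frac{d\alpha_2}{(\psi^{12}\alpha_2+\psi^1_2)(\psi^2_1\alpha_2+\psi_{12})}
  \;=\; \frac{\log\!\big(\psi^1_2\,\psi^2_1\big)-\log\!\big(\psi^{12}\,\psi_{12}\big)}
             {\psi^1_2\,\psi^2_1-\psi^{12}\,\psi_{12}} ,
```

since the partial fractions integrate to the logarithm of the ratio of the two linear factors, evaluated between 0 and infinity.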
12:21
Now the idea is to continue like this and integrate out one variable after the other. But in general we're now in a difficult situation, because the pre-factor is genuinely a quadratic polynomial. Before, it factorized because we multiplied two linear things. If we want to integrate again, we would like it to keep factorizing

12:40

into linear forms, as we saw here. So let us have a look at what it actually is in this case. What is ψ^1_2? I want to integrate α3 next, so let's decompose with respect to α3. First of all, this is the coefficient of α1 in ψ,

13:04

where α2 is set to 0. So it is the sum over all the spanning trees which do not contain edge 1 but which do contain edge 2. If you think about it, this means it is the ordinary graph polynomial of the graph where you delete edge 1 and contract edge 2; this is the so-called contraction-deletion formula. And if you take the graph,

13:25

delete edge 1, but contract edge 2, what happens is that edges 6 and 4 become parallel, and you still have 3 and 5 in the game. So this is actually
13:43
the polynomial of this graph. But I want to see how it depends on α3, just for the fun of it. I know it is linear in each variable, so what is the coefficient of α3? Well, again, it comes from the spanning trees which do not contain edge 3, so I can just delete edge 3. Then I'm essentially left with a one-loop graph, and for a one-loop graph

14:04

we just get the sum of the variables of all edges in the loop. Then we have a contribution where α3 is 0, so I contract edge 3. This gives me the graph polynomial of the sunrise, which we have seen, with the edges 4, 5, and 6. Just as a reminder, this is the symmetric

14:25

polynomial of degree two, which you have already seen in Francis' talk. (Question: you contract edge 3 at this step, is that it? Yes.) So I do the decomposition with respect to α3: I look at the coefficient of α3.
14:41
And what happens when I set α3 to 0? Then I contract edge 3, and everything becomes parallel. Now we can just play the same game the other way around: I delete edge 2 and contract edge 1; then 5 and 6 become parallel, and 3 and 4 remain. And again I can also delete edge 3; then I have only α5 and α6 remaining in the loop.

15:05

And when I contract 3, I still get the same sunrise polynomial; I don't bother writing the indices here. We also have ψ^{12}, which means that I delete both edge 1 and edge 2.

15:21

Now what happens is that we have this lonely edge 6 hanging around, with the edges 3, 4, and 5 remaining. This is just a one-loop graph, and I just told you that you have to sum all the variables in the loop to get its polynomial.

15:43

And then the last guy is ψ_{12}, where we set α1 and α2 to 0. So we contract both edges 1 and 2, which means that both endpoints of edge 3 become identified; this looks a little bit odd, since edge 3 turns into a self-loop at this vertex.

16:03

And then we have the sunrise of 4, 5, and 6 here. But what does the self-loop mean? We are summing over spanning trees, and no spanning tree can contain edge 3, because then it would have a loop, which a spanning tree is not allowed to have. So no spanning tree contains edge 3, which means that α3 actually multiplies this polynomial:
16:22
it is a divisor of the polynomial, and the thing that remains is just, again, this sunrise polynomial. You will soon see the reason why I wrote it this way. So what actually is this denominator? We want to understand this quadratic polynomial here.

16:44

What is it? Well, let's collect the terms quadratic in α3. We get terms quadratic in α3 from the product of these two minus the product of these two. From this product we get just (α4 + α6) times (α5 + α6),

17:08

and then we subtract the product of these two, which just gives the sunrise. Then we have the terms linear in α3. But here, the terms linear in α3

17:22

come from multiplying a sunrise factor with such a term, and also here, in the subtraction, the term linear in α3 is this times this. So actually all these terms are divisible by the sunrise graph polynomial, and we get (α4 + α5 + 2α6)

17:43

from these, and then we subtract (α4 + α5). Finally, we have a constant part, which is just the square of the sunrise polynomial. And now you see that these actually cancel. And what is this polynomial? I wrote it down here.
18:04
So as you multiply this out, the only thing that remains here is α6 squared. So what you find is... (Question: each time you write this sunrise polynomial, does it take different variables?

18:20

It's always the same, I was just lazy: it's always 4, 5, 6. It is a symmetric polynomial of the three variables α4, α5, α6, invariant under their permutations. Having left out 1, 2, 3, I have only four variables remaining, and I explicitly look at what happens with α3,

18:42

while 4, 5, 6 remain.) So we observe that a miracle happens here, namely that this thing is actually a complete square, which you can see over there: it is just (α3 α6 plus the sunrise polynomial) squared.

19:05

And the thing inside the square is again linear in each variable, because we know that ψ itself is linear in each variable. So this is a complete square, and of course there must be some explanation for this.
19:23
And this is an example of so-called Dodgson identities.
19:43
These identities follow from the fact that the ψ polynomial can actually be written as the determinant of a matrix, and there is a whole theory of identities between its minors, in terms of so-called Dodgson polynomials, which Francis and collaborators have worked out in great detail.

20:01

You might know that Mr. Dodgson is better known as Lewis Carroll, the author of Alice in Wonderland, if you want. The point is that these factorizations are extremely important for the fact that we see multiple polylogarithms in so many places.
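Because this factorization is the crux of the method, here is a small self-contained check, a sketch in which the edge labeling of the wheel is my own choice (rim triangle 1, 2, 3 and spokes 4, 5, 6), so the spoke appearing in the square is edge 5 rather than edge 6 as on the board. It builds ψ_G of the wheel with three spokes from its spanning trees and verifies the perfect-square identity at random points:

```python
from fractions import Fraction
from itertools import combinations
import random

# Wheel with three spokes (the complete graph K4). Labeling is an
# assumption: the rim triangle a-b-c carries edges 1, 2, 3 and the
# spokes from the hub h are edges 4, 5, 6.
EDGES = {1: ('a', 'b'), 2: ('b', 'c'), 3: ('c', 'a'),
         4: ('h', 'a'), 5: ('h', 'b'), 6: ('h', 'c')}
VERTICES = ('h', 'a', 'b', 'c')

def is_spanning_tree(tree):
    # Union-find: 3 acyclic edges on 4 vertices form a spanning tree.
    parent = {v: v for v in VERTICES}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in tree:
        ru, rv = find(EDGES[e][0]), find(EDGES[e][1])
        if ru == rv:
            return False  # would close a cycle
        parent[ru] = rv
    return True

TREES = [t for t in combinations(EDGES, 3) if is_spanning_tree(t)]
assert len(TREES) == 16  # Cayley: K4 has 4^(4-2) = 16 spanning trees

def psi(alpha):
    """First Symanzik polynomial: sum over spanning trees T of the
    product of alpha_e over the edges e NOT in T."""
    total = Fraction(0)
    for t in TREES:
        prod = Fraction(1)
        for e in EDGES:
            if e not in t:
                prod *= alpha[e]
        total += prod
    return total

def coeff(alpha, ones, zeros):
    """psi is linear in each variable, so the coefficient of the
    monomial over `ones`, with the variables in `zeros` set to 0,
    follows by finite differences (inclusion-exclusion over 0/1)."""
    res = Fraction(0)
    for k in range(len(ones) + 1):
        for sub in combinations(ones, k):
            a = dict(alpha)
            for e in zeros:
                a[e] = Fraction(0)
            for e in ones:
                a[e] = Fraction(1 if e in sub else 0)
            res += (-1) ** (len(ones) - k) * psi(a)
    return res

random.seed(1)
for _ in range(5):
    alpha = {e: Fraction(random.randint(1, 99)) for e in EDGES}
    sun = (alpha[4] * alpha[5] + alpha[4] * alpha[6]
           + alpha[5] * alpha[6])  # sunrise polynomial in 4, 5, 6
    lhs = (coeff(alpha, [1], [2]) * coeff(alpha, [2], [1])
           - coeff(alpha, [1, 2], []) * coeff(alpha, [], [1, 2]))
    # Dodgson identity: psi^1_2 psi^2_1 - psi^{12} psi_{12} is a
    # perfect square, linear in each remaining variable.
    assert lhs == (alpha[3] * alpha[5] + sun) ** 2
print("Dodgson factorization verified at 5 random points")
```

Since both sides are polynomials of bounded degree, agreement at random rational points is strong evidence; a computer-algebra system would of course verify the identity symbolically.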
20:21
Because if this did not happen, if this were just a generic quadratic polynomial, then in order to continue the integration we would have to take the roots of this polynomial and introduce algebraic functions already. But because we have a complete square, we can just integrate by parts.

20:48

So if you integrate by parts, we get one term where we just integrate the complete square and keep the logarithm, evaluated for α3 at 0 and infinity. And we get other terms where you still have to integrate, but the logarithm has been differentiated into a rational function. Then we again do a partial fraction decomposition, and we again get logarithms.

21:04

But the point is, we don't get a dilogarithm yet. In the next integration, because of this complete square, we remain in the world of logarithms and elementary functions. And the result of this is that for the period, or the residue, of this graph,

21:24

because of this integration by parts you actually get several terms, but you can exploit a symmetry and write it as three times the same integral: an integral in α4, with the sunrise polynomial, of the logarithm of

21:53

(α4 + α5)(α4 + α6) over the sunrise polynomial.

22:03

It is an easy exercise to do this calculation and obtain this expression. The point is that, because of the complete square, we did not get a dilogarithm at this stage. Now I want to write this out explicitly. Remember, we have three variables left, α4, α5, α6, but α6 is set to 1, so we are actually left with only a two-dimensional integral.
22:21
I will just rename the variables: α6 is 1, α5 I call y, and α4 I call z. And this is the integral from 0 to infinity of dy over y, and you can do the partial fraction decomposition here, which I already did for you in this expression.

22:45

This comes from the sunrise polynomial, if you do these substitutions and the partial fraction decomposition.

23:00

And here we just get what is now called z + y, z + 1, and y + 1, and z + y over y + 1. So we have boiled down the computation of this integral
23:22
to this two-dimensional integral over a logarithm. And we see again: there were already Dodgson identities at play in the first step that I showed in detail, but also in this integration-by-parts procedure we have to do new partial fraction decompositions, and there too you actually make use of Dodgson identities. So the observation here is that, again, everything is linear:

23:41

everything factorizes into linear functions of the next integration variables. If you now integrate z, everything is linear, then with respect to y, and the arguments here are linear in z as well. So everything looks very linear. But of course, at this stage we have to introduce more general special functions,

24:11

because we cannot compute this integral in terms of classical logarithms and rational functions anymore. We do need dilogarithms.

24:21

But the question is: how do we represent these polylogarithms? I just erased the sum representation, because we don't want to work with the sums here; we want to exploit the iterated integral representation of multiple polylogarithms.
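The simplest illustration of why logarithms no longer suffice: integrating a logarithm against a rational differential form already produces the dilogarithm, for example

```latex
-\int_0^z \log(1-t)\,\frac{dt}{t} \;=\; \operatorname{Li}_2(z) \;=\; \sum_{k\ge 1}\frac{z^k}{k^2} .
```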
25:01
The multiple polylogarithms are not only sums; you can also write them as iterated integrals. For this, let us take a set of points in the complex plane; it should be a finite set.

25:22

We call the elements of this set singularities, or letters, but I want to write them with a special symbol: I introduce symbols ω_σ for all these letters, which should indicate that each of them actually represents a differential form, namely dz/(z − σ).

25:52

So this is an abstract alphabet.

26:04

And then we can define special functions for each word in this alphabet. (Question: isn't Σ the domain of integration? That was the capital sigma upstairs, in the integral I(G);

26:21

this σ here is a different sigma, sorry.) Then we define the hyperlogarithm L_w(z) associated to a word w in A*,
27:01
so w is just a sequence of these letters, and we define L_w by the following rules. First of all, if the word is a string of n letters ω0, the hyperlogarithm should just be the n-th power of the logarithm of z, normalized by n factorial.

27:01

Then we also want that if you take a hyperlogarithm whose word begins with some letter, followed by some remaining letters, then this first letter tells you the differential behavior: the derivative should be 1 over (z − σ) times the hyperlogarithm associated to the

27:22

tail of the word. This is just a reverse way of writing down an iterated integral. But of course it does not fix the constant of integration, which we now require

27:45

such that the limit of these hyperlogarithms as z goes to zero is zero, unless the word is of the form ω0 to the n.
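Collected in formulas, the three defining properties just stated read:

```latex
L_{\omega_0^{\,n}}(z) = \frac{\log^n z}{n!} ,
\qquad
\frac{\partial}{\partial z}\, L_{\omega_\sigma w}(z) = \frac{1}{z-\sigma}\, L_w(z) ,
\qquad
\lim_{z \to 0} L_w(z) = 0 \quad \text{for } w \neq \omega_0^{\,n} .
```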
28:24
So for a string of zeros you just fix them to be the logarithms, which of course diverge quite mildly at zero. But all the other hyperlogarithms defined in this way you should think of as iterated integrals from zero to z. So in fact...
28:42
(Can we have an example? Yes, sure, that's exactly what's going to happen next.)

29:02

Well, what happens if you just take one letter? Then L_{ω_σ}(z) is just the integral from zero to z of dt over (t − σ), because

29:20

the integrand is given by the differential behavior, and since σ is assumed to be nonzero, the function has to vanish at z = 0. So it is this iterated integral, and it is just the logarithm of (z − σ) over (−σ).
29:41
Then there is L_{ω0 ω1}(z). What does this look like? We have to integrate from zero to z, with dt1 over (t1 − 0):

30:02

this is the last integration; when we differentiate with respect to z, we have to get this. Then we have the nested integration of dt2 over (t2 − 1). And if you recall, we mentioned for instance the iterated integral representation of

30:21

Li2, so this is actually minus Li2 of z. And quite generally, when you have a bunch of zeros... sorry, I'm changing orders now.

30:48

People familiar with MZVs will know this inside out. So I take a word, but I have to distinguish whether a letter is zero or not, because they are treated differently in this setup. Let's suppose I have a word which ends in a non-zero letter, then there comes a bunch of zeros,

31:01

then a non-zero letter and a bunch of zeros, and so on. So I have d non-zero letters, each of which may come with some zeros. And this is actually the same as (−1)^d times the multiple polylogarithm with these indices, evaluated at ratios of the non-zero letters.

31:32

So we get all the multiple polylogarithms, just in a particular representation; this is the point.
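These conventions can be checked numerically: since the letters prescribe the derivatives, one can integrate the defining differential equations for all suffixes of a word at once. The sketch below is my own illustration (the function name, the RK4 scheme and the small offset at t = 0 are ad-hoc choices, not part of the talk); it reproduces L_{ω_σ}(z) = log((z − σ)/(−σ)) and L_{ω0 ω1}(z) = −Li2(z):

```python
import math

def hyperlog(word, z, steps=20000):
    """Numerically evaluate the hyperlogarithm L_w(z) for a word of
    letters (singularities sigma_1 ... sigma_n, given as floats) by
    solving the defining equations
        d/dt L_{sigma w'}(t) = L_{w'}(t) / (t - sigma)
    for all suffixes w' of the word at once, with classical RK4.
    Assumes the path [0, z] stays away from the letters and the word
    does not end in the letter 0, so all boundary values vanish."""
    n = len(word)
    # vals[k] approximates L_{word[k:]}(t); the empty word gives 1.
    vals = [0.0] * n + [1.0]
    def deriv(t, v):
        return [v[k + 1] / (t - word[k]) for k in range(n)] + [0.0]
    eps = 1e-9  # start just off the (integrable) point t = 0
    h = (z - eps) / steps
    t = eps
    for _ in range(steps):
        k1 = deriv(t, vals)
        k2 = deriv(t + h / 2, [v + h / 2 * k for v, k in zip(vals, k1)])
        k3 = deriv(t + h / 2, [v + h / 2 * k for v, k in zip(vals, k2)])
        k4 = deriv(t + h, [v + h * k for v, k in zip(vals, k3)])
        vals = [v + h / 6 * (a + 2 * b + 2 * c + d)
                for v, a, b, c, d in zip(vals, k1, k2, k3, k4)]
        t += h
    return vals[0]

# One letter: L_{omega_sigma}(z) = log((z - sigma)/(-sigma)).
assert abs(hyperlog([-1.0], 0.5) - math.log(1.5)) < 1e-4
# Two letters: L_{omega_0 omega_1}(z) = -Li_2(z), checked at z = 1/2
# where Li_2(1/2) = pi^2/12 - log(2)^2/2.
assert abs(hyperlog([0.0, 1.0], 0.5)
           + (math.pi ** 2 / 12 - math.log(2) ** 2 / 2)) < 1e-4
```

The word [0.0, 1.0] corresponds to ω0 ω1; for longer words the same loop applies unchanged, which is essentially why the iterated-integral representation is so convenient algorithmically.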
31:42
Of course, the benefit of writing them in this way is that it is trivial to integrate in this representation, because we define them precisely as iterated integrals. So to continue our example over there, let's just rewrite this logarithm as an iterated integral of this form.

32:03

We just have to look at the arguments of the logarithm in this integral from zero to infinity in dz, with its singularities at z = −1 and at z = −y/(1 + y):

32:22

this logarithm is nothing but L_{ω_{−y}}, plus the logarithm

32:45

with its singularity at z = −1; and in the denominator I have the logarithm whose singularity with respect to z is at −y/(1 + y). Now I have made sure that I have the right differential behavior; I also have to check

33:02

whether I fixed the constants, which do not depend on z, correctly. But if I take the logarithm here and set z to zero, then I just get y in the numerator and also y in the denominator, so this vanishes at z = 0; and the others are also defined to vanish, so this is the correct expression. And if I now integrate, by definition

33:21

it just means that I prepend an ω0 and an ω_{−y/(1+y)} to get a primitive, and I evaluate at infinity. In this language, this just means the following. I introduce a shorter notation: I want to write linear combinations of words in the arguments, and I just define this by linear

33:47

extension. Now I just have to prepend this combination of letters.
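In formulas, the integration step just described is simply the defining recursion read backwards:

```latex
\int_0^z L_w(t)\,\frac{dt}{t-\sigma} \;=\; L_{\omega_\sigma w}(z) ,
```

extended linearly from words to linear combinations of words; evaluating at z = ∞ is then understood with appropriate regularization.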
34:10
So what I did now: I expressed the penultimate integration in terms of an iterated integral evaluated at infinity, but it still depends implicitly on the remaining integration variable

34:22

y, which I still have to integrate. So if we want to continue in this way, we first have to understand this function as an iterated integral in y. I hope this is clear: the idea is that at each stage of this process, we want to express

34:43

the integrand as an iterated integral, namely a hyperlogarithm, in the next integration variable. Unfortunately, I don't have enough time to do this in all glorious detail, so I have to take a little shortcut here. But it's actually quite simple how to do this.
35:06
So there is a little lemma which you can prove. Suppose we take a hyperlogarithm of some word at some argument, which might be infinity as in this case,

35:25

and we compute its total derivative, in the situation where the letters σ are considered as functions, not fixed constants, so that we really compute the full total derivative.

35:44

You can actually prove that there is a simple explicit formula for this: you have a sum over all the letters, in which you take the word with that letter deleted, times an explicit logarithmic

36:04

total derivative of consecutive differences of the letters. The proof is: differentiate

36:26

under the integral sign and integrate by parts. It is really a very simple exercise.

36:41

There are some boundary terms in this formula: there is a σ0 appearing, which is defined to be z, and a σ_{n+1}, which is defined to be 0; these correspond to the boundary terms of the integration. So there is actually a totally symmetric formula.
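Written out, the lemma takes the following form (my reconstruction from the description, with the stated boundary conventions):

```latex
d\, L_{\omega_{\sigma_1}\cdots\,\omega_{\sigma_n}}(z)
 \;=\; \sum_{i=1}^{n}
   L_{\omega_{\sigma_1}\cdots\,\widehat{\omega_{\sigma_i}}\cdots\,\omega_{\sigma_n}}(z)\;
   d\log\frac{\sigma_{i-1}-\sigma_i}{\sigma_{i+1}-\sigma_i} ,
\qquad \sigma_0 := z,\quad \sigma_{n+1} := 0 ,
```

where the hat means that the letter ω_{σ_i} is omitted from the word.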
37:02
Now with this formula, if you apply the total derivative to such an expression, you see the grading by weight, this recursive structure which is so special about multiple polylogarithms, coming into play. Because if you apply the total derivative, on the right-hand side you only have lower-weight multiple polylogarithms, which have one letter less, and explicit logarithmic derivatives. Now in our case, these letters here are

37:24

rational functions of y. So if you compute the logarithmic derivative of such a rational function, you just have to factor it into its zeros and poles with respect to y, and we get the differential forms defining a hyperlogarithm. So this looks good. And for

37:42

the piece which remains, in this case still a logarithm, we can just apply the machinery recursively. So I just give you an example here, but I won't work it out; I just tell you. This function here, if you do all this,

38:17

you can write as a hyperlogarithm with respect to y. So the point is that now

38:23

all the letters are independent of y. So you can convert this representation to that one, and this is algorithmic; there is no magic in this process. But now the final integration is

38:41

trivial, because we just have to integrate: we multiply this by 1 over y and integrate from 0 to infinity, which just means that we put another ω0 in front. And we finally arrive at the result that the residue of the wheel with three spokes graph
39:07
that we started with is three times this thing. So we now have an iterated integral of weight three, and as a hyperlogarithm it has this form, evaluated at infinity,

39:32

which might still look mysterious. But what is it? I mean, this is an iterated integral

39:52

with letters 0 and −1. So we immediately know that it is a multiple zeta value; you can also think of it as a period of M_{0,6}. And if you want to write this multiple
40:18
zeta value, you can just use Möbius transformations, for example,

40:23

or associators, which relate infinity to 1 in some way. So we can apply the Möbius

40:40

transformation which sends z to z/(z + 1), which means that the upper boundary at infinity is now mapped to 1, while 0 stays 0. And you can compute the pullback of the differential forms under this Möbius transformation, which is a simple exercise. And if you plug this all in,

41:10

you get that this is the same as the integral in ω0 and ω1. This is now a hyperlogarithm

41:40

evaluated at 1, which is essentially a multiple zeta value. There is still a little regularization to take care of, because the word starts with ω0, and you can use the shuffle product for that. Using this, you can show that it is the same as...

42:01

If you multiply this out and do this little step, which I don't have time to explain but which is really not that difficult, you can rewrite it in this form. And if you take the definition of how the multiple polylogarithms relate to these hyperlogarithms, namely this formula, you can see that this is essentially Li3 and this is Li_{1,2}. So this is 3 times the sum of Li3 of 1

42:29

plus Li_{1,2} of 1, which is 6 times ζ(3). OK, so I spent an awful amount of time explaining this first non-trivial example,
42:48
but I hope it was at least understandable to some degree. The amazing aspect of this calculation is that it gets you ridiculously far in practice. At least in some families,
43:03
for example, these single-scale massless 5 to the 4 integrals that Francis was mentioning. This is a case where we have, I would think, the best knowledge concerning the expansion, the loop number, how far can we get and still compute a lot of integrals. This is really
43:21
outstanding compared to other kinematic configurations, which can be much more complicated. So we know we already have two loops where we get elliptic things, these massive sunrise diagrams. But in these massless diagrams, following these lines, one can get quite far. And the basic idea is exactly the same. So let us recap what we did.
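As a sanity check on the example above, both steps can be verified mechanically. The following sketch is my own, not from the talk: it uses sympy to pull back the forms dz/z and dz/(z+1) under the inverse of the Moebius map w = z/(z+1), and a truncated double sum to check numerically Euler's identity zeta(2,1) = zeta(3), which is what lets one rewrite Li1,2 at 1 in terms of zeta(3).

```python
import sympy as sp

w = sp.symbols('w', positive=True)

# Inverse of the Moebius map w = z/(z+1), which sends infinity -> 1, 0 -> 0.
z = w / (1 - w)
dz = sp.diff(z, w)  # Jacobian dz/dw

# Pull back the two differential forms to the variable w.
pull_omega0 = sp.simplify(dz / z)         # from dz/z
pull_omegam1 = sp.simplify(dz / (z + 1))  # from dz/(z+1)

# dz/z  ->  dw/w + dw/(1-w)   and   dz/(z+1)  ->  dw/(1-w)
assert sp.simplify(pull_omega0 - (1 / w + 1 / (1 - w))) == 0
assert sp.simplify(pull_omegam1 - 1 / (1 - w)) == 0

# Euler: zeta(2,1) = sum over m > n >= 1 of 1/(m^2 * n), which equals zeta(3).
H, zeta21 = 0.0, 0.0
for m in range(2, 200001):
    H += 1.0 / (m - 1)        # running harmonic number H_{m-1}
    zeta21 += H / (m * m)     # adds H_{m-1} / m^2
assert abs(zeta21 - 1.2020569031595943) < 1e-3  # zeta(3) = 1.2020569...
```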
We computed partial Feynman integrals, which I call I index k, as Francis does; they still depend on the remaining variables from alpha k plus 1 up to the final variable, and we integrate the first k variables from 0 to infinity. Let's say we also take such a residue. So we compute these integrals one at a time: at each step, we just integrate the next variable. The prerequisite for doing this is that we can express these functions as hyperlogarithms, since I only told you how to manipulate hyperlogarithms. In many cases, the complicated ones, the Feynman integral is not expressible as a hyperlogarithm, at least not in this sense of iterated integrals in any simple form. But in the cases where this works, we can just apply the procedure I outlined. So the prerequisite for this to work is that all singularities of this function, which is a multivalued function of these variables (there is some variety which describes where it can pick up monodromy), are linear in the next variable, alpha k plus 1. Because if this is the case, then the algorithm I sketched, based on this lemma and another lemma which tells you how to get the boundary constants, can transform a representation given in such an implicit form into a representation which is explicit in y.
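A toy instance of this step, with names of my own choosing: when the singularities are linear in the integration variable, partial fractions make the primitive explicit as logarithms of those same linear polynomials, so the result stays in the hyperlogarithm class. A minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)       # the next integration variable
p, q = sp.symbols('p q', positive=True)  # stand-ins for the remaining variables

# An integrand whose singularities are linear in x.
f = 1 / ((x + p) * (x + q))

# Partial fractions give an explicit primitive: a sum of logarithms
# whose arguments are the linear singularities themselves.
F = sp.integrate(sp.apart(f, x), x)

# Integrate x from 0 to infinity, as in each step of the procedure.
val = sp.simplify(sp.limit(F, x, sp.oo) - F.subs(x, 0))

# The answer is a weight-one hyperlogarithm in the remaining variables.
assert sp.simplify(val - (sp.log(q) - sp.log(p)) / (q - p)) == 0
```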
46:21
And then we can just find a primitive by prepending letters or doing integration by parts and we can compute the Feynman integral. So the message is that all this will only work in very special situations, but if it works, the whole procedure is automated, it is implemented in computer programs, so we can try to focus our attention on the actual geometry which
46:45
sits behind this computation. So the actual integral itself, what happens here, what are the precise polylogarithms which appear, this is not so important. The only thing that is important is what kind of singularities are there and how do they depend on the variables.
47:03
So I just want to briefly sketch the idea behind this. So in the beginning,
47:21
and we didn't integrate any variable yet, so we just have one over psi squared, the only singularity is psi. It can only have a singularity when psi vanishes.
What happened after one integration? Let me see if I still have it... yes, almost. After one integration we got one over psi upper 1 times psi lower 1.
So apparently we now have two potentially different singularities, psi upper 1 and psi lower 1. And after two integrations we had I2, which was this combination: this denominator times a logarithm, with psi upper 1 lower 2, psi upper 2 lower 1, psi upper 12 and psi lower 12. So here we have singularities when any of these polynomials vanish. But the important point is that if you reduce this set of singularities, then this resultant was also a linear polynomial, as I discussed earlier: this Dodgson polynomial appeared squared. So also in this case the property is actually fulfilled that all these potential singularities are linear in the next integration variable.
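The starting point of this bookkeeping, the first Symanzik polynomial psi, can be built directly from the matrix-tree theorem: psi is the sum over spanning trees of the product of the Schwinger parameters of the edges not in the tree, and is therefore automatically linear in every parameter. A sketch of my own (graph encoding and names are mine) for the wheel with three spokes, which is the complete graph K4:

```python
import itertools
import sympy as sp

# Wheel with three spokes = K4: vertex 0 is the hub, 1, 2, 3 the rim.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (1, 3)]
alphas = sp.symbols('a1:7')  # one Schwinger parameter per edge

def is_spanning_tree(subset):
    """A 3-edge subset of a 4-vertex graph is a spanning tree iff it
    connects all vertices (3 edges + connected implies acyclic)."""
    parent = list(range(4))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in subset:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(4)}) == 1

# First Symanzik polynomial: sum over spanning trees T of the product
# of alpha_e over the edges e NOT in T.
trees = [T for T in itertools.combinations(range(6), 3)
         if is_spanning_tree([edges[i] for i in T])]
psi = sum(sp.prod(alphas[i] for i in range(6) if i not in T) for T in trees)

assert len(trees) == 16  # K4 has 4^(4-2) = 16 spanning trees (Cayley)
assert psi.subs({a: 1 for a in alphas}) == 16
# psi is linear in every Schwinger parameter separately:
assert all(sp.degree(psi, a) == 1 for a in alphas)
```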
Now if you want to understand a Feynman diagram and all the amplitudes that we might associate with it, it is enough to look at these polynomials and how the singularities develop as you integrate out more and more variables. There is a name for this, something that Francis introduced under the name polynomial reduction, which is very interesting and very important. But of course I only have five minutes left, so I will just tell you that there are algorithms, under the name of polynomial reduction, to compute upper bounds on these sets of singularities, which are called Landau varieties. The bottom line is that once one has understood this polynomial reduction for a particular graph, one knows that all the amplitudes one could associate with it will be, via this algorithm, expressible in terms of multiple polylogarithms of a particular type. So note that the actual integrand does not matter, by which I mean we can take any integrand which is compatible with the singularities we started with. If you compute a polynomial reduction for this graph, starting just with the first Symanzik polynomial, then we can also make statements about generalized integrals of this form: we have omega G, but we raise psi to some higher power a and put some polynomial in the numerator so that everything is homogeneous.
More generally, even though I stick to the one-scale case, we can also look at general Feynman amplitudes where we again have such a numerator polynomial, but now allow both graph polynomials, psi and the second Symanzik polynomial, raised to arbitrary integer powers, subject only to the condition that the projective integral is well defined. In this case we start with the variety containing both polynomials, which in general makes things more complicated. In particular, if you have a graph where every edge is massive, then the second Symanzik polynomial is quadratic and you do not get very far. But there are many applications, for example without masses or with just a few masses, where you can still play the same game and get valid statements. So I don't have time to explain this algorithm, but I want to explain to you what the outcome of these techniques is.
So we have reduced the study of the actual amplitudes and integrals to the algebraic or geometric task of understanding how these sets of polynomials, which describe the singularities, behave. We have the amplitudes, which of course come from the graph. But we take a detour and in some sense make things more complicated: in order to understand the amplitude, we study something much bigger, namely how these partial integrals depend on all the Schwinger parameters. So here we have a chain of partial integrals in the variables alpha k plus 1 up to alpha n, and when there are kinematics, they also depend on q and m. But we do not even need to know these explicit functions; we only need to know the places where they could have singularities. So we actually get information about them by looking at these sets of polynomials.
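This bookkeeping of polynomial sets can be sketched concretely. The following is a minimal, simplified version of one step of polynomial reduction (the simple reduction; the function name and example are mine, and refinements such as compatibility graphs are ignored): for polynomials f = a*x + b linear in x, the singularities after integrating out x lie among the irreducible factors of the a's, the b's, and the resultants a1*b2 - a2*b1.

```python
import itertools
import sympy as sp

def simple_reduction(polys, var):
    """One step of simple polynomial reduction: an upper bound on the
    singularities after integrating out `var`, valid when every
    polynomial in `polys` is linear in `var`."""
    lin = [sp.Poly(f, var) for f in polys]
    assert all(p.degree() <= 1 for p in lin), "not linear in var"
    coeffs = [(p.coeff_monomial(var), p.coeff_monomial(1)) for p in lin]
    candidates = []
    for a, b in coeffs:
        candidates += [a, b]
    for (a1, b1), (a2, b2) in itertools.combinations(coeffs, 2):
        candidates.append(sp.expand(a1 * b2 - a2 * b1))  # resultant in var
    # keep only the non-constant irreducible factors
    out = set()
    for c in candidates:
        for factor, _ in sp.factor_list(c)[1]:
            out.add(factor)
    return out

x, y, z = sp.symbols('x y z')
S = simple_reduction([x + y, z * x + y], x)
# the coefficients contribute y and z; the resultant y*(1 - z) adds one factor
assert y in S and z in S and len(S) == 3
```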
And if you want to understand this sequence of singularities, you have to take care of these resultants and the Dodgson identities which I mentioned earlier. The point is that they do not always appear; they depend on the combinatorial structure of the graph. So the question of whether such a denominator obtained from partial fraction decomposition is linear, or factorizes into linear factors, relates to combinatorial structures in the actual graph. So here the graph feeds in as well. And the bottom line is that you can get statements like the one Francis briefly mentioned. There is a theorem by him: let G have vertex width... or actually I will give another example, because I have only one minute left. Consider the family of ladder boxes. These are the integrals with four external legs, but you can make them arbitrarily long, and you can make two legs on one side massive. So you can have a triple box, this is what would be called a triple box, and arbitrarily long boxes with two massive legs. So we have an infinite family of Feynman graphs. Then if G is such a graph, the amplitude associated with it, as a function of the four momenta of these external legs, is a multiple polylogarithm.
And you can actually say what kind of multiple polylogarithms: you can write down an upper bound on the differential forms which make up this iterated integral. In other words, you can specify what the arguments of this multiple polylogarithm can be in the worst case. There is another class of graphs which Francis studied, the graphs of vertex width three. The idea is that those contain, for example, graphs with three external vertices like these, and again you can generalize this to three external momenta. Because these graphs have a very special combinatorial structure, they are very rigid, and you can use this structure to prove factorization identities for these Dodgson polynomials, which then tell you that the amplitude is a multiple polylogarithm of a particular type. In practice you can also use this to do explicit computations. So I want to end here, and thank you for your attention.
Any questions?
What happens if the dimension D is odd and I have square roots?
Yeah, I don't say anything about odd dimensions, I'm sorry for that.
About the varieties you have mentioned: in your example you use contraction of edges, to merge vertices, and deletion of edges, and then you also get these Landau varieties. That is exactly what we use as physicists when we try to factorize processes with two scales. For example, you have two scales and you want to investigate how a diagram can be factorized, in order to describe the confinement part and the hard part. That is explicitly what most people do every day who work not with exact computations but with factorized forms at high energy. So there is an expansion, to get asymptotics, even a power expansion, like what we call the twist expansion in QCD. Could all this machinery say something interesting about this?
Well, I think so. Whenever you do an asymptotic expansion like this, I mean, in physics there is this whole business of expansion by regions, for example, where you take an individual Feynman diagram and expand it when some momentum or some mass gets large. In this picture that has a unified interpretation in some way, because you look at this integral, which is built from the psi polynomial, and in this polynomial you have your different masses and momenta, and you look at what happens when one of them dominates over the others. So essentially you would hope that you can expand the integrand in this limit and then compute the resulting integrals. The only problem is that this expansion might introduce some divergences which you have to take care of; but if that is not the case, then you are right: you are using a factorization of this polynomial in this particular kinematic limit.
What I did here was look at a factorization of the psi polynomial which does not involve any kinematic invariants, but you also have factorizations of the second graph polynomial, and they are all very important in the motivic approaches that were mentioned today. So yes, it is certainly very much related. What you also realize immediately when you do this is that such an expansion simplifies the situation drastically, because you replace this complicated polynomial by a product of two much simpler polynomials, and then the polynomial reduction essentially also breaks down into two independent pieces, because the variables separate. So we do have cases where, for example, in such an expansion everything is linearly reducible and you can compute the coefficients in terms of multiple polylogarithms, whereas for the full function it is much more complicated and you do not know what happens. So this is also one approach to get closer to something very complicated which you do not understand. Another question?
Can I ask, in the same spirit: if you have a gauge theory, you have a sum of many diagrams, and each individual diagram produces some power of the hard scale or whatever, which may have nothing to do with what survives when you perform the summation. So do you have some clever way to treat numerators in the parametric representation, to combine diagrams somehow?
I must say no here. Of course the hope is that there should be something, and one would hope to find it, but so far, as far as I am aware, there have only been some attempts; it is not yet clear how to do it. The problem is that you have all these integration by parts relations. In an ideal world, you could take the parametric representations for the different diagrams with their numerators, put them all into one numerator, and look at this integral in one go, and then you would hope to see the cancellations of the leading degrees or whatever. The problem is that even though this is a sort of canonical representation for a diagram, it is not the unique one, because of all the integration by parts identities. So you can imagine situations where you take a sum of two terms and there are no visible cancellations at all, but when you do partial integration or write them in a different form, then they are manifest and it is easy. I know that some people have tried this in other applications, to combine different diagrams and see why certain cancellations happen in gauge theories, but as far as I am aware this is still very much work in progress, and we do not yet know the right approach. The problem is really that the representation is not unique, and you would need a way to find the right representation in order to see the cancellations easily. But probably finding that representation is as complicated as solving the problem.
Was what you have done here not a change of representation? I mean the steps of your computation, can you see them as changes of representation?
You mean changes of variables? Yes, I mean all the steps in which you transform your integrand.
Yeah, but I only look at one integral at a time. And I am changing, that's right, I am changing the representation of my integrand. Actually, I don't change the function, I just rewrite it in a different way which makes it essentially trivial to compute the integral in the new representation. That is the whole point. What I should at least briefly mention is that this is of course a very simple-minded approach, in the sense that I just take the Schwinger parameters, which are 50 or 60 years old at least, and try to do the integral in these variables. It is sort of a wonder that we actually get very far in many cases, but we also have counterexamples where we know that an integral is a multiple polylogarithm, but we do not see it this way because it is not linearly reducible. So you can have situations where something looks very complicated, but that is just a sign that these variables are the bad variables. Of course all physicists know that it is extremely important to find the right variables, also for tree-level amplitudes if you want to be efficient in writing them down. Here you have a similar situation in the integration process, and it is of course absolutely unclear what the perfect representation in each case would be. So this is a completely open field at the moment, I would say, and there are many different ideas one could try out and follow. I can just say that at the moment there are many examples where we have to twist this representation, go to another representation, do some changes of variables, and then it is clear that it is a polylogarithm or something related, but not in the original variables.
For the formula with the general polynomial p, I think in some cases that I know, integration by parts enables one to transform this into another form with different a and b, more or less like we have seen in the previous talk.
Yes, there are certainly tons of relations here. But the problem, I mean the question that came earlier, was: if you have different integrals and you want to combine them and see that there are some cancellations, then if you select some master integrals and write everything in terms of these independent master integrals, you should see it in that form, but it is not necessarily manifest in this kind of canonical representation.
Just a remark: if you manage to reduce to master integrals, it might be easier, and the main tool there is integration by parts, as we have seen. Yes, yes, and this was indeed an example of integration by parts.
A question here: you computed the residue in the example you gave, but can you compute the rest of the epsilon expansion, the constant term?
Well, this integral has been computed, I think two or three years ago, in the on-shell massless case. The problem is always: what are the kinematics? If you look at the wheel with three spokes as a three-point function, so you put one momentum to zero but keep the other three momenta arbitrary, then it is in this class of vertex-width-three graphs. I didn't have time to describe these, but that function you can compute explicitly in terms of multiple polylogarithms, to all orders in epsilon, close to any even dimension, with arbitrary powers on the propagators. As soon as you go to four off-shell external momenta, I don't even know what the right representation of the kinematics is. Even for the one-loop box with four off-shell momenta, it is only known in four dimensions; in four minus epsilon dimensions you don't really know what to do, because the kinematics is too complicated. So the only case I know where this has been computed with four external momenta is also cheating, because it puts all four external momenta on shell, so it is only a function of two variables. Then it was computed by Hennens-Milonov and shown to be a multiple polylogarithm. This is actually one of the examples where, when you do it with the Schwinger parameters, you do not get linear reducibility in the sense that I mentioned here: after one integration you have a quadratic polynomial. So it seems mysterious, but then there is a change of variables which makes this polynomial factorize, and then you can apply the algorithm and see that you get this result. It is one of the examples where I would like to have a better reason for why it is a polylogarithm or not. But yes, you can compute higher orders in the epsilon expansion. The reason is that epsilon sits here in these exponents, so if you do an expansion in epsilon, the only thing you introduce is logarithms of the polynomials, which are of course in the space of iterated integrals over the polynomials that I start with anyway. So this is the statement that the actual integrand does not really matter: I can allow arbitrary powers, but I can also expand in the powers and allow for logarithms. You can also have logarithms of individual Schwinger parameters in the game, and arbitrary powers of those. This does not play any role for the function theory; of course it makes the computation in practice more complicated and slower, but it does not change conceptually what happens to the integral.
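That last statement can be seen in a short sympy check (with a toy stand-in for a graph polynomial, not the actual psi of the example): expanding an epsilon-dependent power only produces logarithms of the same polynomial order by order, so the singularities, and hence the letters of the iterated integrals, do not change.

```python
import sympy as sp

eps, x, y = sp.symbols('epsilon x y', positive=True)
psi = x + y  # toy stand-in for a graph polynomial

# Expanding psi**(-2 + eps) in eps yields only powers of log(psi).
exp_series = sp.series(psi**(-2 + eps), eps, 0, 3).removeO()
expected = psi**(-2) * (1 + eps * sp.log(psi) + eps**2 * sp.log(psi)**2 / 2)
assert sp.simplify(exp_series - expected) == 0
```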
So if you have linear reducibility and you add these logs, you are saying that the property still holds?
Yeah, so I was actually quite sloppy with the way I defined linear reducibility; I didn't even mention the name, did I? At least I didn't dare to write it down. OK, so the idea is that it is really just a property of the polynomials. You take the polynomials and you see what is the worst case that could happen. You deliberately do not want to look at the actual integral: take whatever integral has only these singularities, do one integration, and ask what singularities you could have now. Then by quite general arguments, just by looking at the fibration of this variety, you can prove that these sets are upper bounds on the singularities. I hope I made somewhat clear how this actually arises in the computation, because what happens if I want to compute such an integral? Well, this is a logarithm, so it is clear how to write it in hyperlogarithm form, and the letter I get here is psi upper 1 times alpha 1 plus psi lower 1. So the letter will be something like L omega 0 evaluated at psi lower 1 divided by psi upper 1. After the integration I have, of course, singularities when these go to zero, and I will have to do the partial fraction decompositions, but these are all taken care of. So changing the powers or anything like that changes the actual representation at that point, but it does not change the fact that it is a hyperlogarithm with these polynomials as an upper bound for the letters and denominators which appear. That is the point. In a way you make it more complicated, because you have to look at things depending on all these Schwinger parameters, which are of course completely unphysical and only disappear in the very last step, when you do the last integration. But on the other hand, you abstract from the actual integral and you look at all amplitudes which you can assign to this graph at the same time, because this only depends on the geometry of the graph hypersurfaces.
More questions?