Random Loops and T-algebras
Formal Metadata
Title: Random Loops and T-algebras
Number of Parts: 28
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/51274
Language: English
Transcript: English (auto-generated)
00:16
Thanks a lot for the invitation. Well, too bad we can't be there in person,
00:22
but that's life, I guess, this year. Hope it's gonna be better next year. Yes, so, well, so as you maybe know, I'm really not an algebraist at all, but so I'm more of a probabilist slash analyst,
00:44
but in the course of my research, well, the type of analysis and probability that I do has led naturally to the type of algebraic structures that you actually see in perturbative quantum field theory, but the context is slightly different.
01:01
And so let me start. So the first half of my talk, I want to basically show you in what sort of context these type of structures show up in probability theory. And for this, I want to focus on one example. So here, one way you can link, if you want,
01:26
quantum field theory and probability theory is by this procedure of stochastic quantization. And the basic idea, which was originally introduced
01:42
by Parisi and Wu back in the eighties, is that, well, if you want to build a sort of Euclidean quantum field theory, that would formally be described by some kind of measure of this type,
02:01
where this D phi would be some sort of Lebesgue measure on the space of fields, which doesn't really exist. And this would be some kind of action functional. And the idea is, if everything were finite dimensional, so of course, your field configurations
02:20
don't belong to a finite dimensional space, but if you just suspend this belief for a second and pretend that they do, then you can actually write down a stochastic evolution equation, which is essentially a gradient flow. So if you just divide by DT on both sides here, this is just a standard gradient flow.
02:41
So phi, so you introduce a time, which has nothing to do with the time of your quantum field theory, if you want. So it's a purely algorithmic kind of time. And you take a gradient flow. So this guy simply tries to minimize that action. But then you add an additional noise term to this.
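As a reading aid, here is a hedged rendering of the finite-dimensional picture being described (the factor of sqrt(2) is one common convention, not from the talk):

\[
\mu(d\phi) \;\propto\; e^{-S(\phi)}\, D\phi,
\qquad
d\phi_t \;=\; -\nabla S(\phi_t)\, dt \;+\; \sqrt{2}\, dW_t,
\]

where the gradient and the covariance of the Brownian motion W are both taken with respect to the same metric, as explained below.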
03:03
Okay, so the dynamic here is going to try to minimize S, but then it keeps on being kicked around by this noise term. Okay, and so here, this W, you should think of it as a Brownian motion, which means that the DW by DT
03:22
that somehow formally shows up on the right hand side here should be thought of as white noise. And white noise, you just think of it as being kind of independent random variables at every instant of time. So it's kind of as random as possible in a way. And then, I mean, of course, if you have a gradient flow,
03:42
that means that you need to give yourself some kind of metric on the tangent space of your configuration space, because, well, the differential would take values in the cotangent space, and so you want to turn that into something in the tangent space to get an evolution. And so you need to fix a metric.
04:04
And the important thing here is that the metric that you use in order to define your gradient should be the same as the one that determines the covariance of that Brownian motion. Okay, so it turns out, if you think about it, well, the covariance, in terms of
04:23
its sort of tensorial behavior, it actually behaves like the inverse of a metric. Okay, and so you would take the inverse of your metric as being the covariance of this Brownian motion. And then in finite dimensions, it's a very simple kind of elementary theorem
04:41
that you learn in sort of introductory courses on stochastic analysis, that if you take this evolution here and you start with an initial condition that's distributed according to this measure, assuming that everything is finite dimensional and S kind of grows at infinity in such a way that you can normalize this
05:00
and everything is smooth enough, then the solution to this equation has the property that this measure is left invariant. So if you start with an initial condition that has that distribution, then at all subsequent times, the solution has the same distribution. And furthermore, if you start with a basically arbitrary initial condition
05:23
and you look at the solution after a long time, then the law of the solution actually converges to this measure. And so then one idea is to say, well, one way of building this measure is to kind of go backwards
05:40
and to say what you could do is to actually build the dynamic and then try to show that this dynamic has an invariant measure. And then that invariant measure would be the measure that you're actually after. And the reason why you want to do this
06:01
is that, well, there's this hope that these kind of divergences that show up in quantum field theory and sort of all the problems that you encounter when trying to go to pass to the limit for some kind of discrete approximations, this measure,
06:21
all these problems might actually be somewhat easier for the dynamic. The reason being that when you make sense of a dynamic, you actually do so sort of for a very short time, and then you try to sort of extend it for longer times.
06:41
And so you have automatically a small parameter, which is your small time parameter, without having to have a small parameter in here. So like, you don't need to do a perturbation in beta here, okay? So your small parameter is not beta. Your small parameter here would be like the time step for the dynamic, okay?
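A minimal finite-dimensional sketch of this in Python (the double-well action and all names here are illustrative assumptions, not from the talk):

```python
import numpy as np

def langevin_samples(grad_S, dim, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama discretization of  d(phi) = -grad S(phi) dt + sqrt(2) dW.

    For small dt and many steps, the law of phi approximates the measure
    proportional to exp(-S(phi)) d(phi); the time step dt plays the role of
    the small parameter that "comes for free" for the dynamic.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(dim)
    out = []
    for step in range(n_steps):
        phi = phi - grad_S(phi) * dt + np.sqrt(2 * dt) * rng.standard_normal(dim)
        if step % 100 == 0:
            out.append(phi.copy())
    return np.array(out)

# Example: S(phi) = (|phi|^2 - 1)^2 / 4, so grad S(phi) = (|phi|^2 - 1) phi.
samples = langevin_samples(lambda p: (p @ p - 1.0) * p, dim=2)
```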
07:01
So the idea is that it should be easier because there's kind of a small parameter that comes in sort of for free, even if there's no small parameter that shows up in this measure that you're trying to build, okay? Now, well, so Parisi and Wu sort of had that idea in the eighties, but it actually took quite some time
07:23
for this to bear some kind of fruit. And well, the reason being that essentially the theory of stochastic PDEs that you would need in order to build these types of dynamics,
07:41
well, wasn't sufficiently developed at the time, and actually took quite a long time to catch up. Now, the example, the specific example I want to focus on for today's lecture is that of the 1D sigma model
08:01
where your fields, if you want, are just loops with values in a Riemannian manifold. And so that's sort of interesting. It's an interesting example because, since the target space is not linear but a Riemannian manifold, there's no Gaussian reference measure,
08:22
if you want, right? And so in this case, the energy, if you want, or your action functional, is just the usual Dirichlet energy, right? So your field configurations are loops in a manifold. So they're just curves from the circle into some Riemannian manifold.
08:43
And the energy of a curve is just given by the usual Dirichlet energy, right? So you just parametrize your curve. The curve comes with a parametrization because I really view it as maps from the circle into the manifold. And you just run along the curve and you take the tangent vector at every point to your curve.
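In symbols, the Dirichlet energy being described would read (a hedged rendering, with g the metric on the target manifold M):

\[
S(u) \;=\; \frac{1}{2} \int_{S^1} g_{u(x)}\bigl(\partial_x u(x),\, \partial_x u(x)\bigr)\, dx,
\qquad u \colon S^1 \to M.
\]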
09:02
You stick it into the metric at that point and you integrate this along the curve, okay? And the minimizers for this are closed geodesics. Now you can actually just stick this into a computer. So you can discretize, you know, you can write down formally the corresponding sort of Langevin dynamic, if you want,
09:22
and you can just discretize it in some kind of brutal way and stick it into a computer and see what you get. So I can kind of show you a little movie of what this looks like. So this looks like something like this, okay? So here my target manifold is just a two-sphere.
09:41
And you see this curve that sort of wriggles around on this two-sphere. And well, okay, so this is sort of the type of dynamics that you're interested in constructing here. Go back to the talk.
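A brutally discretized sketch of such a simulation (a hedged Python illustration, not the code behind the movie): represent the loop by N points on the two-sphere, take a discrete Dirichlet gradient step plus tangential noise, and project back onto the sphere.

```python
import numpy as np

def loop_step(u, dt, rng):
    """One crude Langevin step for a loop u: an (N, 3) array of unit vectors.

    The discrete Dirichlet gradient is the second difference along the loop;
    drift and noise are projected to the tangent space at each point, and
    the result is renormalized back onto the sphere (an extrinsic stand-in
    for the intrinsic dynamic).
    """
    lap = np.roll(u, 1, axis=0) - 2.0 * u + np.roll(u, -1, axis=0)
    xi = rng.standard_normal(u.shape)
    for v in (lap, xi):  # project onto the tangent spaces
        v -= np.sum(v * u, axis=1, keepdims=True) * u
    u_new = u + dt * lap + np.sqrt(2.0 * dt) * xi
    return u_new / np.linalg.norm(u_new, axis=1, keepdims=True)

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
u = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
for _ in range(1000):
    u = loop_step(u, dt=1e-3, rng=rng)  # the loop wriggles around the equator
```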
10:00
Now, in this particular example, one actually knows how to build the measure in the sense that there's, at least there's a natural candidate for this measure. And so it's sort of known, if you want, that the Brownian loop measure,
10:20
so it's just you take the diffusion that has the Laplace-Beltrami operator as a generator on your Riemannian manifold, and you condition it on returning to its starting point after a fixed time. And so that gives you a measure on loops. And that measure on loops, at least in some sort of formal way,
10:43
it's been known for some time that, well, at least formally it can be written precisely as a kind of Gibbs measure like this, right? So when you have this Dirichlet energy showing up, except that you have an additional term, which involves the scalar curvature, sort of integrate the scalar curvature
11:00
of your Riemannian manifold integrated along the loop. And so here's sort of an interesting thing, is that if you go and look at the physics literature from sort of the late 70s, early 80s, where people derive these kinds of results, what you see is that actually,
11:20
depending on the papers you look at, you get different values for this constant C. So there's a whole bunch of different values that show up in the literature. And they essentially show up because there's an ambiguity of how you actually interpret this kind of Lebesgue measure here, which again, doesn't really exist.
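Written out (a hedged rendering; the point is precisely that the constant c below is ambiguous):

\[
\mu(du) \;\overset{\text{formally}}{\propto}\; \exp\Bigl(-\tfrac{1}{2}\int_{S^1} |\partial_x u|_g^2\, dx \;-\; c \int_{S^1} R\bigl(u(x)\bigr)\, dx\Bigr)\, Du,
\]

with R the scalar curvature of the target manifold.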
11:45
Now you can write down this gradient dynamic for this Gibbs measure. And if you do this, you get the following kind of equation. So you see some sort of a non-linear heat equation, right? So now U is a function.
12:01
So it's an evolution, time evolution with values in that loop space. And so it's a function of two variables. There's time and there's still space, the X, which is somehow the parameter of your loop, okay? So X here takes values in the circle and T is just the positive reals.
12:22
And here you get the covariant derivative of DXU in the direction of DXU. That's some type of heat equation. And then you have here this sort of gradient of the scalar curvature showing up, which comes from this term. And then you have a noise. And in front of the noise,
12:41
instead of having a constant, the natural thing to have here is the square root of the metric. The reason being that, well, the natural gradient with respect to which you get a nice expression like this is the intrinsic gradient in the tangent space of your manifold.
13:02
And so the natural metric is really the metric of your manifold at every point. You can write that in local coordinates. So you get some kind of horrible looking PDE. Details don't really matter. So you have the Christoffel symbols showing up here.
13:22
And here, the way you take the square root of the metric, one way of doing it is you take a bunch of vector fields, which I call sigma i, which generate the metric in this sense. So, if you want, the sum of sigma i tensor sigma i gives you the inverse metric tensor.
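In local coordinates, the resulting equation reads, schematically (signs and constants are hedged guesses; summation over repeated indices):

\[
\partial_t u^\alpha \;=\; \partial_x^2 u^\alpha \;+\; \Gamma^\alpha_{\beta\gamma}(u)\, \partial_x u^\beta\, \partial_x u^\gamma \;-\; c\, \nabla^\alpha R(u) \;+\; \sigma_i^\alpha(u)\, \xi_i,
\qquad
\sum_i \sigma_i^\alpha\, \sigma_i^\beta \;=\; g^{\alpha\beta},
\]

where the xi_i are independent space-time white noises.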
13:41
And so, well, you get this stochastic PDE. Now, one thing that you see is that here, even though my field consists of perfectly continuous kind of functions,
14:01
so these loops, they are continuous as a function of their parametrization. If you remember the sort of little movie that I showed you, they're continuous, but they're actually not very smooth at all. So actually what you can show is that
14:21
the typical regularity of x goes to u of x is Hölder alpha only for alpha less than a half. So it's basically Hölder continuous of order almost a half. So they're pretty irregular.
14:41
And that means that here, this stochastic PDE here doesn't a priori have an intrinsic meaning because you have these nonlinear terms here involving the derivative of the solution. And the solution is not differentiable. So even though U is a continuous function,
15:02
the space derivative of U is a distribution. And so here you have this product of distributions, which is then also multiplied by some quite irregular function. And so you have the same type of problems as typically show up in quantum field theory where you have this problem of not having
15:20
a canonical way of multiplying distributions. So in this case, so in this context for these types of stochastic PDEs, so not just for this particular equation but for a large class of equations of that type, under just some kind of power counting condition.
15:45
Essentially, the power counting condition says that you should look at equations which only have sort of finitely many elementary divergences, if you want. So it's some kind of subcriticality condition.
16:02
So there's now a sort of general result, which is sort of a combination of a number of works of myself with various collaborators where we give a kind of black box
16:22
showing that for these type of equations, you can regularize them in many different ways. So here you don't have sort of nice analytical expressions. So for example, things like dimensional regularization don't really make sense here. The natural regularization would be, for example,
16:43
replace this white noise by some kind of smoothed-out version of white noise. So, white noise formally has a covariance that's a delta function, and you replace your delta function by some kind of approximate delta function. So then you have a small parameter epsilon, and you're trying to send epsilon to zero, the usual thing.
17:03
So in this case, the theory tells you that you have a finite collection of symbols. So these symbols here, they are essentially the analog of Feynman diagrams.
17:20
Okay, you can kind of think of them as being sort of half Feynman diagrams where the actual Feynman diagrams would be obtained by sort of taking two of these trees and sort of gluing the leaves together or several of these trees and gluing the leaves together in various ways. So they are sort of like partial Feynman diagrams.
17:44
And on these, there is a sort of number of interesting algebraic structures that are very similar to, well, what we heard already in the previous lecture, for example, where if you look at sort of the space
18:00
of these kinds of symbols, they don't themselves form a Hopf algebra, but they naturally have a comodule structure, actually for two Hopf algebras in this context. And one of the two Hopf algebras encodes the renormalization, which you can kind of view here
18:23
as some form of re-centering in probability space and the other Hopf algebra sort of encodes like a re-centering in real space, actually, where you somehow, where you perform kind of local Taylor expansions in real space
18:42
in some sense. But for each of these symbols, you also have a valuation that goes with them. So each of these symbols, you can actually interpret them for this equation as a kind of a vector field.
19:02
So the way this valuation works is, well, you're given the Christoffel symbols, so you're given a connection and you're given this collection of vector fields, sigma i. And now what you do is, well, when you see, you see these symbols, they're basically trees
19:21
where you have different kinds of nodes. So there are nodes that are these kind of fat green nodes and then there are kind of small red nodes, and the fat green nodes, they come paired up. So here sort of either you've got just two of them and then they're paired up, or here you have four of them, and then I sort of drew them in two different colors to show that they kind of form two pairs.
19:43
And now every pair of green nodes like that, you should think of it as representing a sum over i of these sigmas. And you should kind of think of each of these nodes, since it's a tree, as having sort of outgoing edges
20:03
and these outgoing edges, you can think of them as representing the free indices here. So here you have two free indices, alpha and beta, they correspond to the two outgoing edges here. And then the red nodes,
20:21
so you have these red nodes that always have two red edges that go with them. So you should think of them as representing the Christoffel symbol that has three free indices, one upper index and two lower indices. And that represents the fact that, well, here you have two free incoming edges
20:43
that represent the lower indices. And then again, you should think of it as having here a free outgoing edge, which represents the upper index. So outgoing edges at the bottom represent upper indices here and incoming edges represent lower indices.
21:02
And then you can create new incoming edges by taking derivatives, right? So if you take a derivative of an expression like this, that creates you an extra free index, and it would be a lower index. So that will correspond to an incoming line above. And you can join lines by contracting indices, right?
21:23
So for example, if I take that valuation Upsilon^{Gamma,sigma}, if I apply this procedure to the simplest one of these trees, well, what does it mean here? Well, I have these two guys, so that represents a sigma i alpha sigma i beta,
21:42
but then one of them has an extra incoming line. That means that it has a derivative and that incoming line is contracted with the outgoing line of the other guy. That means that this derivative, the index of the derivative should be the same as the index of the second guy.
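In symbols, the valuation of this simplest tree would be (a hedged rendering):

\[
\Upsilon(\tau)^\alpha \;=\; \sum_i \sigma_i^\beta\, \partial_\beta \sigma_i^\alpha,
\]

the derivative index of one sigma contracted with the outgoing index of the other, with the single free upper index alpha left over.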
22:02
And then there's a free index here, which is the free index of this expression, right? So you have this correspondence that sort of allows you to turn each of these little pictures into a function that's built basically some kind of multilinear expression
22:24
of the sigmas and the Christoffel symbols and their derivatives, which sort of automatically satisfies Einstein Convention and has one free index left at the end. And then the general result. So again, as I was mentioning,
22:40
this is sort of really a whole sequence. It builds on a whole series of works with, well, Yvain Bruned, who's going to speak just after me, and Ajay Chandra, who's also at Imperial, Ilya Chevyrev, who's now in Edinburgh, and Lorenzo Zambotti in Paris.
23:02
And that black box sort of says that you can find renormalization constants. And so here, I view the renormalization as an element just of the free vector space generated by these guys. So there's just one constant per symbol.
23:26
So that if you take the, so you take some regularization of this equation, and then, well, you do the usual thing. So you add a counter term. So here, the counter term would essentially change
23:42
the value of this vector field H, right? So you add to H some linear combination of the expressions corresponding to, well, all 54 trees of that type that you can draw. Then there is a way of choosing these constants.
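Schematically (notation hedged), the renormalized equation replaces the vector field H by

\[
H^\alpha \;\longmapsto\; H^\alpha \;+\; \sum_{\tau} c_\varepsilon(\tau)\, \Upsilon^{\Gamma,\sigma}(\tau)^\alpha,
\]

with the sum running over the 54 trees and one constant c_epsilon(tau) per symbol.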
24:03
And that's, again, some type of Birkhoff decomposition type thing, which allows you to actually compute these constants, so that if you take the regularized solution of the modified equation, and then you send epsilon to zero,
24:21
you can get a limit, and the limit is independent of the approximation procedure. So here, the important thing is that you can really prove, so this is a purely, these are really analytical statements, right? So these are not sort of algebraic constructions. They are actually analytical statements.
24:43
So I'm not going to go into detail of the sort of topology in which this limit takes place, but these are really analytical objects here that converge to a limit. And the limiting guy, you can show that it's very stable under approximations in the sense that you can approximate this guy
25:00
in pretty much any way that you want, as long as it's stationary and has some kind of moment bounds. You'll always get the same limit. And the limit is also very stable as a function of the data here. Okay, so these guys, if you want, you can make them depend on epsilon as well.
25:21
You're actually going to get the same limit. Now, in this example, well, the problem that you get now is that you get a priori, well, a 54-dimensional space of possible limits. So it's not terribly canonical.
25:42
So you would like to exploit symmetries in order to kind of reduce your space of nice limits or admissible limits, right? So you would want to essentially say, well, I have this class of equations here. If that class of equations satisfies,
26:02
at a formal level, some kind of identity, then I would want the object that I build here to also satisfy this identity, right, for all choices of gamma, sigma, and H. And there are two such symmetries, actually.
26:22
So there's a sort of meta-theorem that one can prove. So I call it a meta-theorem because really these symmetries, there doesn't seem to be one sort of good formulation that really covers all possible cases you can imagine. But for all cases that we've encountered,
26:41
you can prove a theorem of that type, and it's just slightly different proof every time. But essentially it says that if you have a symmetry and you can approximate your equation in a way that your approximation preserves that symmetry, then there's a way of renormalizing it so that the renormalized limit
27:02
still satisfies the symmetry. The important thing here is that in general, if you cannot find an approximation which preserves the symmetry, then it may just not be true
27:23
that any of the renormalized limits satisfies all of your symmetries. Okay, and I'm going to show you an example later. So here in this case, there are two natural symmetries. So the first one is changes of coordinates in the target manifold, right? So if you perform a change of coordinates
27:42
in the target manifold, then, the way that I wrote things down in a coordinate system, you get a completely different equation, and what you would want is that the solution to that different equation would be the same
28:00
as the solution to the previous equation simply pushed forward under the diffeomorphism that gives you the coordinate change. And so in the case of usual stochastic differential equations, there's a solution theory which is called Stratonovich, which has precisely that property. And so in our case, one can, you know,
28:23
prove the corresponding theorem here. And what that tells you is that you can impose some restrictions on your renormalization procedure. So instead of having 54 degrees of freedom, you can kind of cut it down to 15
28:41
if you want to impose equivariance under coordinate changes. And there's another symmetry, which is that, well, remember, I chose my, in order to take my square root of the metric, what I did is I chose a bunch of vector fields
29:01
so that the sum of sigma i tensor sigma i is the inverse of the metric tensor. You know, of course, there are lots of possible choices for these sigma i's. So if I'm just given G, that does not at all determine the sigma i's. And well, at the formal level,
29:25
one can sort of convince oneself that actually the law of the solution shouldn't depend on the choice of square root of the metric here. And what one uses for this is something that's called Itô's isometry.
29:40
And again, for usual sort of stochastic differential equations, there's a solution theory which has the corresponding property, which is called Itô solutions. And if we, in our case, sort of prove the corresponding theorem, we can show that, you know,
30:00
you can reduce your 54 dimensional space of solution theories to something 19 dimensional. And so now you could say, well, you know, I have these two symmetries. Of course, you know, if you have two symmetries for something, you can always mash them together. It gives you one big symmetry.
30:21
Well, not always, but in this case, the two symmetries can be actually mashed together. So there's a kind of skew product of these two symmetry groups that kind of acts on the whole thing. But we don't know of a good approximation
30:40
that actually preserves that big symmetry. So we have an approximation that preserves this symmetry. We have one that preserves that one, but they're not the same type of approximation. And we don't know of any approximation that preserves both at the same time. And so, well, so there's a natural question is, can you have both?
31:01
And in the finite dimensional case, so if you don't talk about stochastic P's, but just stochastic differential equations, there is a completely analogous question. And the answer there is actually just no. So there is no solution theory for stochastic differential equations
31:20
that has both of these symmetries simultaneously. In our case, it turns out that you can actually have both at the same time. So that's this theorem we obtained with Yvonne from Gabriel and again, Lorenzo Zambotti,
31:42
which is that actually these, so you have this 15 dimensional affine space of theories that satisfy equivariance under changes of coordinates. You have this 19 dimensional space of theories satisfying Itô isometry. They are two sort of affine subspaces of a space of dimension 54.
32:02
Generically, since 15 plus 19 is 34, which is much less than 54, generically, you wouldn't expect these two affine subspaces to intersect. But what we can show is that they actually intersect and they have an intersection which is actually of dimension two.
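The dimension count behind that expectation: two generic affine subspaces A, B of a 54-dimensional space satisfy

\[
\dim(A \cap B) \;=\; \dim A + \dim B - 54 \;=\; 15 + 19 - 54 \;=\; -20 \;<\; 0,
\]

i.e. a generic intersection is empty; here the intersection is nonetheless two-dimensional.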
32:26
And well, it turns out that you can actually even, so now you have a natural two-parameter family of sort of notions of solutions that have all the symmetries that your class of equation satisfies. There's a sort of more analytical property
32:42
that I don't want to really go into that you might want to impose as well. And we can prove, we can show that you can actually impose that more analytical kind of property also simultaneously with these two symmetries. That kind of reduces things
33:00
by one more degree of freedom. So at the end of the day, you end up with a one-parameter family of sort of very natural solution theories that in some sense behave as nicely as you could possibly expect. And furthermore, in the case
33:22
that we're actually interested in, so in the case we're interested in, this gamma and these vector field sigma i, they are not unrelated, right? Because the gamma are the Christoffel symbols for the Levi-Civita connection that comes from your Riemannian metric and the sigma i's are some kind of square root
33:42
of that Riemannian metric. So the two are related and it turns out that the way in which they are related is such that in this particular case, this one-parameter family of solution theories, they actually all coincide. So at the end of the day,
34:00
you have a completely canonical kind of notion of solution. And well, still there was one, this last degree of freedom, which I eliminated, which I didn't spend much time on it, which was more of an analytic nature rather than a geometric nature. So there was no actual symmetry involved somehow.
34:24
That one, you can still ask yourself, what's the effect of changing this last parameter? And that turns out to actually just add a term to the right-hand side of the equation, which is proportional to the gradient of scalar curvature.
34:41
So it's kind of cute because in a way that gives you a different perspective on this fact that people had figured out back in the 70s, which is that if you formally try to write this Brownian loop measure, well, you always want to write it like this,
35:01
but you don't quite know what the constant C should be. And so here are sort of some of the different possible values of C that appear in the literature. And so here, the way this is interpreted is that this constant C is sort of the remaining degree of freedom in my solution theory for this stochastic PDE,
35:20
which is not fixed by purely symmetry considerations. So now the main step in the proof is to show, so we want to show that these two spaces sort of intersect.
35:45
So we have a space, so we have this big space S, which is essentially just the vector space that's generated by all these symbols. Then we have a subspace which corresponds to,
36:02
if you want, those linear combinations of symbols that can be written just in terms of G rather than in terms of this square root, in terms of these sigmas. So that's this subspace S_Itô. And then there's this sort of geometric subspace, which is the one that corresponds
36:21
to those linear combinations of symbols that actually give you a vector field, right? Because all of these symbols give you an expression that satisfies sort of Einstein convention and has one free upper index, but such a thing is not necessarily a vector field because the Christoffel symbols, they are not a tensor of type 2-1, right?
36:42
The Christoffel symbols, they determine a connection, but they are not a tensor of type 2-1. And so just contracting it with other tensors in a way that satisfies Einstein's convention doesn't guarantee that you actually get a tensor in the end. So here you have a subspace of this space,
37:02
which is those guys that actually give you a vector field in the end. And what we can show is that if you take one of these solution theories that satisfies e2 isometry and one of these solution theories that behaves correctly on the changes of coordinates,
37:21
then they differ by a counter term that belongs to the space of linear combinations, such that if I take two different square roots of my metric, and then I look at the difference between these evaluations corresponding to these two different square roots for the metric,
37:43
then what I obtain is a vector field for every choice of sigmas and gammas, okay? So now obviously this space contains both the geometric ones and the Itô terms, right? Because these terms are precisely the ones
38:03
so that this difference vanishes because they are those terms so that if I choose two different square roots for my metric, I actually get the same thing. So it depends only on the metric and not on the choice of square root. Whereas these guys are the ones so that each of these guys separately is a vector field. And therefore, in particular,
38:21
their difference would be a vector field as well. And so each of these guys certainly belongs to that space. So their sum belongs to that space. And the non-trivial fact is that this sum is actually equal to this space here. Okay, and that's not obvious.
38:41
So in the case of stochastic differential equations, the analogue, well, you could actually try to do exactly the same proof. Everything works up until this point. And then what you realize is that your space actually consists only of one single symbol.
39:01
And the evaluation of that symbol is, well, the expression that I already wrote down early on. And that expression is not a vector field. And it also really depends on the choice of sigma i. So it is not just a function of the metric. So it belongs neither to this space nor to that space. So these spaces are both zero.
39:22
But if I take two different choices of sigmas that give me the same metric, then it turns out that this difference is actually equal to this difference of covariant derivatives because the term involving the connection actually drops out.
39:45
And therefore that's a vector field. Okay, so these two spaces in this case are zero, but this space is non-zero, but one-dimensional. And so the proof fails and, well, the conclusion is actually known not to be true
40:01
in the case of stochastic differential equations. Now, in the case of PDEs, you see, if I just look at the trees that have two leaves, there was only two of them, then it turns out that this guy here, well, if I hit it with my evaluation,
40:21
you actually get essentially just a contraction of the Christoffel symbol with the sigmas. So that red guy represents the Christoffel symbols, and these two green guys represent the two instances of sigma,
40:41
and the fact that they're connected represents that contraction here. And then of course, this here is nothing but just G alpha beta. And therefore, this guy belongs to this S_Itô space. And similarly, you have this term here.
41:00
You can actually show that if you apply this evaluation to this term here, this actually gives you the covariant derivative of the sigma i in the direction of sigma i, and so that's a vector field. And it turns out that there's no other vector field you can build in this way. And so in this case,
41:21
well, these two spaces are both one-dimensional, and their sum is actually just everything, and well, and you can actually show that in this case, both of these elements actually have the property that they belong to this space. Okay, so in this case,
41:42
well, this part of the argument works. Once you know that this difference is of the form of a sum of an Itô counterterm and a geometric counterterm, that means that you actually know that these two affine spaces have to intersect, because you can actually, you can just move.
42:02
There's one space in which you can move with terms of this type and the other space in which you can move with terms of that type. And if the difference is of the form of a sum of these two terms, that means that you can actually move both of them in such a way that they meet. And the problem now is to actually prove this in general.
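For the two-leaf trees this can be written out (tau_1 and tau_2 are my labels for the two trees, with hedged notation):

\[
\Upsilon(\tau_1)^\alpha \;=\; \Gamma^\alpha_{\beta\gamma} \sum_i \sigma_i^\beta \sigma_i^\gamma \;=\; \Gamma^\alpha_{\beta\gamma}\, g^{\beta\gamma},
\qquad
\Upsilon(\tau_1 + \tau_2)^\alpha \;=\; \Gamma^\alpha_{\beta\gamma}\, g^{\beta\gamma} + \sum_i \sigma_i^\beta \partial_\beta \sigma_i^\alpha \;=\; \sum_i \bigl(\nabla_{\sigma_i}\sigma_i\bigr)^\alpha,
\]

the first depending only on g (Itô), the second a genuine vector field (geometric), and together they span the whole two-dimensional space.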
42:21
So in our case, for the trees that have two leaves, it's kind of easy to just check it by hand. And those with four leaves, well, there's 52 of them. And so you have to somehow figure out what these subspaces are. And well, it's not so easy to kind of figure out
42:42
what subspaces of a 52-dimensional space look like if you don't have something more systematic that you can do, right? So you cannot just turn it, here you could just about turn it into a sort of simple kind of linear algebra problem. But if you look at the sort of trees with four leaves,
43:03
that's not really doable anymore. So you want some sort of more systematic way of looking at it. And so here, it turns out that the natural way of abstracting, sort of the natural algebraic structure that actually shows up, of which these trees
43:23
with these various decorations that showed up are an example, is what we call a T-algebra. So, we didn't find it anywhere in the literature. But maybe, well, I think we don't really know this literature either. So maybe some of you have seen this already
43:43
somewhere in the literature, and then we'd be very happy to have a pointer. But we haven't been able to find this. So this is essentially an abstraction of the notion of functions with multiple upper and lower free indices.
44:01
And so what do we, so how do we define this? Well, so we define it as a vector space with a grading, but it's sort of a double grading. So it has two degrees. And these degrees here, you should think of it as being the number of free indices, right? So the U is the number of free upper indices
44:22
and L is the number of free lower indices. So a vector field would be something with one free upper index. So it would be an element of V one zero. And then you have three additional pieces of structure. The first one is you want on each of these VULs,
44:45
you want an action of the symmetric group. It's actually two copies of the symmetric group, one that acts on the sort of U upper indices and one that acts on the L lower indices. And you should think of it as corresponding to permutation of indices.
45:03
Then of course you have a product, but if you think of these as functions with a number of free indices, well, you can multiply two such functions and it's essentially a tensor product. And so the number of upper indices should add up, the number of lower indices should add up. So you should have a product which kind of preserves degrees in this sense.
45:23
And in terms of permuting indices, of course, now if you multiply A with B and B with A, well, it's not quite commutative, right? But it's sort of almost commutative in the same sense as the usual tensor product is kind of almost commutative.
45:42
So in this case, what you want to impose is that multiplying B with A is the same as multiplying A with B, but then sort of permuting. So this is the permutation which corresponds to taking a block of size U1 and U2 and kind of swapping the two blocks. And the same for the lower indices.
46:01
You have a block of size L1 and L2 and you kind of swap the two blocks, right? So that's somehow the natural property that you would expect, you would want to impose this product to have. And then of course it should, you know,
46:20
in some sense, commute with the action of that symmetric group, right? So in the sense that if you first flip indices around and then multiply the two guys, it's the same as first multiplying them and then kind of flipping the indices around in a natural way. And then the final operation that you want
46:40
is a sort of a partial trace. And the partial trace corresponds to contracting an upper index with a lower index. And so you would view it as an operation from say VU plus one L plus one into VUL.
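In symbols (notation hedged), these last two pieces of structure would read: for a in V_{u1,l1} and b in V_{u2,l2},

\[
b \cdot a \;=\; \bigl(\beta_{u_1,u_2},\, \beta_{l_1,l_2}\bigr)\cdot(a \cdot b),
\qquad
\mathrm{tr} \colon V_{u+1,\,l+1} \;\to\; V_{u,l},
\]

where \(\beta_{m,n} \in S_{m+n}\) is the block transposition swapping the first block of size m with the last block of size n, and tr contracts the last upper index with the last lower index.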
47:00
You've contracted an upper and a lower index so that both of them go down by one. And now you would want to sort of contract two arbitrary indices, but we can actually always reduce it to the case of contracting the two last ones, because we have this operation that permutes indices, right?
47:21
So we should think of that operation as actually just being the operation of contracting the last upper index with the last lower index. And then it should have, if you interpret it like this, then this property is of course very natural. And then you can again sort of think a little bit
47:42
about how this should sort of interact with the symmetric group. One important property is this one, which basically says that if you look at sort of the last two upper indices and the last two lower indices and you first contract,
48:03
and so you have say the last two upper indices, the last two lower indices and you apply this trace operation twice. So that means you first contract these two guys. So then they disappear. Then you contract these two. You say, well, that should be the same
48:22
as first sort of flipping these two indices around. Yeah, not trying to have a good way of drawing this here, but you just sort of first flip the last two upper, exchange the last two upper indices and you exchange the last two lower indices
48:41
and then you somehow contract both of them, right? So it just corresponds to sort of contracting them in the reverse order. And you want that to not make any difference. So one typical example would be to just take
49:01
as the UL space, for example, take some fixed vector space V and then take L copies of the dual and U copies of V itself. And so then you have a natural product and permutations and you have a natural tracing operation as well, right? Which sort of takes the last copy of V star
49:21
and contracts it with the last copy of V. And that has exactly all the properties that we just formalized. So I think I'm running out of time. So let me just sort of give you one sort of little result that we have in this direction, which is very useful.
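A toy numpy sketch of that canonical example (all names here are mine): an element of V_{u,l} is an array with u "upper" axes followed by l "lower" axes, the product is the tensor product with axes regrouped, and the partial trace contracts the last upper with the last lower axis.

```python
import numpy as np

def product(a, ua, la, b, ub, lb):
    """Product V_{ua,la} x V_{ub,lb} -> V_{ua+ub, la+lb}: tensor product,
    then reorder axes so all upper axes come first."""
    c = np.tensordot(a, b, axes=0)  # axes: ua upper, la lower, ub upper, lb lower
    perm = (list(range(ua))                                   # upper axes of a
            + list(range(ua + la, ua + la + ub))              # upper axes of b
            + list(range(ua, ua + la))                        # lower axes of a
            + list(range(ua + la + ub, ua + la + ub + lb)))   # lower axes of b
    return np.transpose(c, perm)

def partial_trace(a, u, l):
    """tr: V_{u,l} -> V_{u-1,l-1}, contracting the last upper axis
    (position u-1) with the last lower axis (position u+l-1)."""
    return np.trace(a, axis1=u - 1, axis2=u + l - 1)

d = 3
rng = np.random.default_rng(2)
v = rng.standard_normal((d, d))   # element of V_{1,1}
w = rng.standard_normal((d,))     # element of V_{1,0}, i.e. a "vector field"
vw = product(v, 1, 1, w, 1, 0)    # element of V_{2,1}
print(partial_trace(vw, 2, 1))    # back in V_{1,0}: equals v @ w
```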
49:41
So the point here is that we're using this algebraic structure sort of as a language, but at the end of the day, we want to use it to prove an analytical result. So we have to go back and forth between the algebra and the analysis.
50:03
And so in particular, we have to prove that at the analytic level, you don't somehow end up with kind of spurious identities that you don't see at the algebraic level, but that may just appear because somehow there's some degeneracy or something.
50:23
And so you want some kind of non-degeneracy result that tells you that generically, you don't actually have any sort of cancellations also at the analytical level that you don't already have at the algebraic level. And so here we have some kind of non-degeneracy result
50:42
of this type, which essentially says that for a large class of these kinds of T algebras that are all the ones that ever show up in the proofs that we care about, you can always, if you look at just the sort of finite dimensional subspace of them,
51:00
then you can take the, so if I go back to the sort of string in the manifold, you can take the, if you take the dimension of the manifold large enough and you choose these Christoffel symbols and these vector fields in a generic way, then you can always guarantee
51:21
that if the dimension is sufficiently large, then there are no kind of spurious cancellations that appear. So that's the sort of type of non-degeneracy results you can prove here. But I think I'm sort of out of time, so this is maybe a good place to stop and thank you very much for your attention.
51:42
Thank you very much, Martin, for your nice talk. Thank you. Before I ask questions, please, if anybody else wants to ask, go first, raise your hand or just start speaking. One thought that came to my mind when you talked about these two algebras, I'm not an expert on these algebraic things either,
52:03
but this thing with many inputs and many outputs, right, I mean, that's a properad. And then you... So this is an example, okay, it's an example of one of these universal algebras that you can associate to an operad. So if you have an operad, then there's always a universal algebra
52:20
that goes with it. So here, this would be like one specific example. Yeah. So there is indeed, so there are whole books on universal algebra, but then they tend to be sort of too general for our purpose because they sort of say,
52:40
oh, take an arbitrary operad, and then there's this algebra that goes with it, and then you have sort of general properties. But here, we don't care about an arbitrary operad. There's one very specific operad. Yeah, yeah. I'm sorry, now I have to ask, I don't think you have anything operadic. You have a sort of weird structure where you have half of the structure for a prop.
53:00
You're not gluing, you're only taking traces, right? Or are you gluing these things together? So you have an SN action and an SM action, so that's the first thing that would underlie a prop. And then you have what's called the horizontal gluing, which is we take the SN, you add them together, and then you have sort of these wheels, which give you these traces, but that's it, right?
53:22
You don't have anything else. You don't have something that you can put the inputs into the outputs, or do you have that? You do, I think. Yeah, but you can, you actually, right, because what I described here was not the operad, right?
53:40
What I described is the algebra. So this algebra, you can view it as coming from an operad. And then the operad would be the one where your objects are things of that type. You have like boxes. So you have a finite number of inputs,
54:03
a finite number of outputs, and you have boxes and each box also has a sort of number of inputs and number of outputs. And then they're connected, well, in the way that you should think, you know, they're sort of connected in this way.
54:22
So here, now I'm sort of running out of, so I can do this and then this, and well, there's nothing to connect it to these outputs. So there's stuff like that. But now you can, these type of objects, you can plug them into each other, right? Because I can get, I can take an object of,
54:40
I can take a guy with two inputs and one output and sort of stuff in the middle. And now I can take that guy here and I can sort of plug it into this box here. And I connect these two inputs to these two slots and that guy to that slot. All right, then that actually gives me an operad. And this T-algebra sort of comes from there.
55:04
Yeah, so technically speaking, what you have is a wheeled prop. Sorry, a what? It's called a wheeled prop. An operad technically only has one output. You have multiple, but that makes it a prop. Oh, I see, okay. Then since you have things going back, that's what makes it be called wheeled.
55:24
Okay. The prop, you have a directed graph, and then if you go back up, you get this. And so, your T-algebra, is it a free T-algebra or, I mean, sorry, a free algebra over this thing, or just a specific one? That's what you're asking. Right, so it doesn't have to be free, right? So the free ones, you could describe them
55:43
as basically just being sort of linear combinations of stuff of that type. But the ones that then show up in our context are not free. And those are the ones you care about. That's what you were saying, you don't need the prop language because some of the stuff you just sort of get for free.
56:00
Yeah, yeah, yeah. No, the ones that, yeah, I'll try. All right, thanks. All right, I suggest we prepare for Yvain's talk next. Thanks, Martin, thanks very much again. Thank you, Martin.