
1/5 Introduction to localization


Formal Metadata

Title
1/5 Introduction to localization
Series Title
Part
32
Number of Parts
32
Author
Contributors
License
CC Attribution 3.0 Unported:
You may use, adapt, copy, distribute and make the work or its content publicly available, in unchanged or adapted form, for any legal purpose, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Significant progress has been made in the study of gauge theories in the last decade. Thanks to the discovery of novel techniques, and especially supersymmetric localization, the field now possesses a plethora of exact results that previously seemed unreachable. Starting with the work of Nekrasov, who computed the instanton partition function for N=2 theories in four dimensions, Pestun computed the exact partition function on a four-sphere for theories with N=2 supersymmetry. Shortly after, partition functions as well as other observables were computed in various spacetime dimensions and on various compact manifolds. Our school aims at deepening the understanding of current results and at investigating which of our current methods are transferable to theories with less supersymmetry, as well as at extending the list of observables that are computable via localization. Each week will feature three or four speakers giving one lecture per day. During the first week, in addition to these three one-and-a-half-hour lectures, there will be discussion and homework sessions in the afternoon. During the second week, some of the lectures will be replaced by talks on more advanced topics.
Transcript: English (automatically generated)
OK, thank you very much. It's a great pleasure to be here. It's my first time at this institute.
And it's a great pleasure to lecture about this subject. So in these lectures, what I would like to do is to describe some aspects of these substantial developments that we have seen in the last one
or maybe two decades in our understanding of supersymmetric quantum field theories, and in particular about exact non-perturbative methods that allow us to perform exact non-perturbative computations of many quantities in supersymmetric theories, such as various types of correlators,
expectation values of operators, or countings of various types of protected quantities, both states and operators. And many of these developments are due to localization techniques. Now, these are very powerful techniques
that you will see over and over. In this very week, there is another set of lectures that discuss localization in four dimensions by Wolfgang. And these techniques are very powerful because they allow us to reduce an infinite dimensional integral,
which is the path integral, to something much simpler, to some finite dimensional integral, or to some counting problem, to some series, and so on. But in fact, localization has a very long history, both in the applications to field theory, so what I call supersymmetric localization,
but especially in the version in math that was originally applied to finite dimensional integrals. And so this dates back to theorems of Duistermaat and Heckman, Atiyah and Bott, and Berline and Vergne, that appeared at the beginning of the 80s.
So this is between 82 and 84. And in fact, as we will see, supersymmetric localization
is somehow an infinite dimensional version of these localization theorems. And so I think it would be a good thing, since we have a lot of time in this school, to start by briefly discussing these important results,
at least sketch how these are obtained. So this will be useful because you will see all the main ideas that will be applied and will appear in the more complicated setup of quantum field theory, but essentially, all the main ideas are here. So I think this will be useful.
So I will start discussing what we can call, with modern perspective, bosonic equivariant localization.
And so suppose that we want to compute some integral on some manifold M. And on this manifold,
we have some symmetry G, some isometry. So if we are in this situation, of course, a natural idea to perform this integral would be to first perform the integral on the orbits of G, and then integrate over the orbits.
So of course, it depends what type of function we are integrating. But in particular, we could try to reduce the problem to M mod G, by doing this integration in two steps. However, in general, M mod G is not a manifold.
In particular, if G has fixed points, M mod G is not a manifold, a smooth manifold. And so equivariant cohomology is in fact a generalization
of what would be the cohomology of M mod G, in the case in which this is not actually a smooth manifold. So if this is a smooth manifold, of course,
one can define the cohomology of this space. But when it is not, equivariant cohomology generalizes that concept to this situation. So just to discuss the main idea, so let me focus on the case in which G is just U1, so the simplest example.
You're assuming G is compact always. Yes, yes. Of course, everything can be generalized, but this is a compact manifold. And G is some, yes, so I'm taking the example
in which G is compact. This equivariant localization exists for G compact or not, also for non-compact groups. Yeah, I will discuss the simplest case in which my symmetry is U1, so it's compact.
Okay, so okay, we are in this situation. So we have our compact manifold M, and we take a metric on it, so this is some Riemannian manifold.
And in particular, we take the case in which the dimension is even, so this would be some 2L. And well, since you have some U1 symmetry on it,
there is a vector field that describes this symmetry. And so let's take V, that we can write in components as V mu d mu, as some Killing vector field.
And so in particular, the Lie derivative along V of the metric is zero, which is equivalent in components to saying that the symmetrization of the covariant derivative of the vector field is zero,
which is the Killing equation. And yeah, as it was stressed, so we are assuming that the symmetry G is really compact. So this is really U1, and so there is a common period for the orbits of this U1 on the manifold M.
Just a picture, this vector field V generates some U1 action on the manifold. Now, so we can consider forms on M, and in particular, it is useful to consider,
to consider a space of polyforms,
the space of polyforms that we can indicate as Hm.
So these are just objects in which, so essentially a polyform has many components, and each component is a form of a different degree. This is a formal sum over all possible degrees, and we have all possible components. Each of these is a standard form.
Now this is useful, because then on this space, we can define the V-equivariant differential dV,
and this dV is defined as d, the standard exterior differential, minus the contraction with the vector field V. So if you wish, d is the standard differential,
so this maps an n-form to an n plus one form, but the contraction with V maps an n-form to an n minus one form.
And so in particular, this object mixes forms of different degree, because if you start acting on some polyform, it always has one component of a single degree, you get two pieces of different degree, and so this is the reason why we need this space of polyforms to define it.
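In LaTeX notation, the definition just given can be summarized as follows (a sketch; the sign in front of the contraction follows the blackboard convention dV = d minus the contraction):

\[
d_V \;=\; d \;-\; \iota_V ,
\qquad
d:\ \Omega^k(M)\to \Omega^{k+1}(M),
\qquad
\iota_V:\ \Omega^k(M)\to \Omega^{k-1}(M),
\]

so that, acting on a polyform \(\alpha=\sum_k \alpha_k\), the operator \(d_V\) mixes components of different degrees.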
Now, okay, this might look a little bit inconvenient, the fact that, so we had a grading on the space of forms and then we are losing it, because this object mixes forms of different degree. Now this could be solved by introducing some parameter psi here,
and this parameter in general should take values in the Lie algebra of our group, g, and then we could assign to this parameter some degree, in particular degree two, in such a way that this differential actually preserves the degree, but in the case of U1, it's not gonna buy us much,
so we'll not do that, but in general, this is some useful thing to do. Okay, so we'll not do this, but cool. I'm not too familiar with the terminology. The contraction is just an integral over V. No, no, no, no, so I'm not sure we get the coefficients correct, but essentially, if you have a form,
so if it is just a one-form, the contraction, you just contract the vector field with the form in components, and if you have more indices, I'm not sure, okay, it depends on the convention which numerical factor you have to put here, but essentially, you contract the indices
with the vector field, and in particular, this operation does not require the metric, of course, because the vector field already has its index up, and the form already has its index down, so. Okay, any other questions so far?
So, the first index is contracted by? Yes, yes. I mean, of course, this is anti-symmetric object. Up to one minus sign. So, yeah. Okay, so what are the properties of this differential?
Well, we can take the square, and of course, the square of d is zero, and the square of the contraction with V is zero. This is obvious from this presentation in components, because if you contract another time, this object is a symmetric tensor, but it is contracted with an anti-symmetric tensor,
and so you get zero, but you get the anti-commutator of these two guys, and in fact, this anti-commutator is the Lie derivative along the vector field V,
and so, in particular, we can restrict now the space of polyforms to the space of v equivariant polyforms.
This is just the space of polyforms which are invariant under V. So, in particular, when you apply this Lie derivative, you get zero.
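Summarizing the last two steps in formulas (a sketch; the overall sign depends on the convention \(d_V = d - \iota_V\)):

\[
d_V^2=(d-\iota_V)^2=d^2+\iota_V^2-(d\,\iota_V+\iota_V\,d)=-\mathcal{L}_V ,
\]

so \(d_V\) squares to zero precisely on the space of \(V\)-invariant polyforms,

\[
\Omega_V(M)=\{\alpha \;:\; \mathcal{L}_V\,\alpha=0\}.
\]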
So, these are all the polyforms such that the Lie derivative is zero. So, why do we do that? Because if we do that, on this restricted space this operator is nilpotent, and so in particular,
then we can define a cohomology of this operator. So, let's call it,
let's label it with V, and as usual, this is just the quotient. So, if you want, closed forms modulo exact forms, so this is the quotient of the kernel of dV restricted to this space,
mod the image of dV restricted to this space. Now, in fact, the interesting thing, as I said at the beginning, is that if this G acts without fixed points on M,
then M mod G is a manifold, and in that case this cohomology, which is called the V-equivariant cohomology, precisely reproduces, is precisely equal to, the cohomology of the quotient space.
However, in the case in which m mod g is not smooth, and so in particular, when g has fixed points,
this is an interesting generalization of this cohomology. Now, okay, this was pretty obvious from here, but we extend the terminology that we use with forms to the equivariant case. So, we say that a form alpha is equivariantly closed
if it is closed under dV. Equivariantly closed means dV alpha is equal to zero, and it is equivariantly exact
if alpha is dv of beta for some equivariant form beta.
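In formulas, restricting to this invariant subspace, one can write (a sketch, writing \(H_V(M)\) for this equivariant cohomology):

\[
H_V(M)\;=\;\frac{\ker\, d_V\big|_{\Omega_V(M)}}{\operatorname{im}\, d_V\big|_{\Omega_V(M)}},
\qquad
\text{equivariantly closed: } d_V\alpha=0,
\qquad
\text{equivariantly exact: } \alpha=d_V\beta .
\]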
Notice that when we impose this condition on an equivariantly closed form, of course this condition mixes forms of different degree, because, so you have to take each degree of alpha,
apply dV, but this, as we saw, mixes the components. Here, so it mixes two components, and in particular, one can write this condition as a series of conditions that relate the various components. So, it looks like all components are mixed. However, still, if you look at components
of even degree and odd degree, they do not talk to each other, right? Because if you apply this condition to, let's say, the components of even degree, you get objects that only involve odd degree parts, and vice versa, if you apply it on the odd degree parts,
these conditions involve the even ones. So, there is no mixing between even and odds, although within the evens and within the odds, there is mixing. So, you're assuming there is no group action still here on these forms? There is not what? The group is not acting on the forms yet. Yeah, so I'm restricting to this space
in which they are invariant in the action of G. So, they are not equivariant with respect to G? Yes, yes. Yeah, I'm saying V equivariant, you could call it G equivariant. No, but then, so if it is V equivariant, but you have to impose the condition G equivariancy.
Otherwise, I don't find it sensible, because, so the condition imposes this. But this is not the G equivariant, G is the compact, the group you're taking. So, is it the G equivariant condition also? Well, the conditions that they are invariant under G. You mean that V exponentiates to some action of U1?
Yeah, yeah, so I said at the beginning, the condition is that this U1 has common periods. So, it's really U1 that acts on the manifold. So, then if U1 has, I don't know the notation, common periods, but if it has fixed points, then how do you understand this? Because then this cohomology may be defined,
the cohomology you defined, this one with the subscript V, but not the cohomology of the quotient space. So, if G has, if the group U1 has fixed points, then how do you define the cohomology there? Yeah, so on our space, our dV is nilpotent,
which is what is important, and we define its cohomology. It also comes with a G action. You know, dV just represents the G action, so it isn't very different, so. Yeah, I mean, G acts by the Killing vector V. So, you can say that this is G-equivariant.
I mean, you can say that under rotations the forms are invariant, and the infinitesimal version of that is that I act with the Killing vector V.
Okay, so now we want to define integration
of these polyforms,
and okay, so we simply define integration of polyform as the integral of the top form, which is the one that we can integrate.
And in particular, notice that if we try to integrate a form which is equivariantly exact, so let's integrate DV beta. Well, DV beta, if you look at the top component,
which is the one that we integrate, can only get contribution from the next to top component. So this will be the integral of D of the component to L minus one of beta, because the top degree is two L, so we cannot go at two L plus one, and they contract with V. And so in particular, well, this is zero on a compact M.
And so if you wish, Stokes theorem still applies to equivariant polyforms, okay?
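Spelled out, the argument just given reads (a sketch, for a compact M of dimension 2L without boundary):

\[
\int_M d_V\beta
\;=\;\int_M \big(d\beta\big)_{2L}-\int_M\big(\iota_V\beta\big)_{2L}
\;=\;\int_M d\,\beta_{2L-1}
\;=\;0 ,
\]

since \(\beta_{2L+1}=0\) on a \(2L\)-dimensional manifold and the remaining term is a total derivative. Hence \(\int_M(\alpha+d_V\beta)=\int_M\alpha\).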
Okay, so in particular, the integral of a polyform,
in particular of an equivariantly closed polyform, only depends on the cohomology class, not on the representative. So another way to put it is that if we integrate over M alpha plus dV beta, well, this is equal to the integral of just alpha. So only the cohomology class, the equivariant cohomology class matters,
and not the particular representative. So what we are interested in, so we're interested in computing now integrals on the manifold. In particular, we are interested in computing integrals of equivariantly closed forms,
and the equivariant localization theorems of Atiyah-Bott and Berline-Vergne,
that I mentioned in the beginning, in fact, tell us that these integrals only get contributions not from the whole of the manifold, but in fact, only from the fixed points of this action on the manifold. So essentially, if you have this picture that we have before,
and there is this U1 action on the manifold, and in general, this action can have fixed points where the action looks like the following. So only these points contribute to the integral of this form, and not the totality of the manifold.
And so of course, this is a great simplification, right? Because in general, there is a finite number of points, and if we have a finite number of points,
only we have to sum of these, some finite number of contribution, we don't really have to perform the full integration over the manifold. So only the neighborhoods of the fixed points, so let me call this space MV,
is the set of points such that the vector field vanishes. So only the neighborhood of this space really contributes.
So let's see why this is the case. So I will give you two arguments.
So let's see the first localization argument. Are there questions before going into that? I don't see how this definition of
the cohomology solves the problem with the fixed points. Well, so far, sorry, okay, I'm not sure what problem you have. So one of the interesting things is that, as I said, if G does not act with fixed points, M mod G is a smooth manifold,
and so in particular, we can define its cohomology, okay? But if this is not the case, if G has fixed points, M mod G is not a smooth manifold. So equivariant cohomology generalizes somehow the concept of the cohomology of the quotient space to the case in which the quotient space
is not a smooth manifold, because the equivariant cohomology is defined even when G has fixed points. Now, of course, this is just one aspect. The aspect that will be most important for us will be that, well, equivariant cohomology, which is a part of equivariant localization,
will allow us to simplify the computation of these integrals. So for us, this would be the problem. We want to compute these integrals, and equivariant localization simplifies the problem, the task. Sorry, go ahead. So about the definition of this equivariant cohomology, so you had to define this dV exterior differential
in some sense. Is it canonical, or could you somehow? Yeah, I wrote what it is. So you take the exterior differential, and you take the contraction with V. Of course, you need V. Okay, so there is no choice, and no other possible choice. Well, I mean, there are many things
that you can do in general, but this is, okay, this is what I will do. So of course, in general, there are infinite number of generalizations that you can do. To begin with, you can have an abelian group, which has, what I mentioned, bigger than one. You can have non-abelian group, and so on.
So there are an infinite number of generalizations. So I mean, for the cohomology theories you end up with, are they in some sense mapped to each other? No, no, no. I mean, I consider this particular case, which is the basic example, and well, essentially, we capture all the ideas, and the features that we'll need,
and we will find in the supersymmetric case. But of course, you can do much, much more. Is there a generalization for discrete groups? Sorry? For discrete groups, how do you define the contraction? For discrete groups? Well, there is no V. Of course, there is no vector field.
So of course, this does not apply to discrete groups. In fact, this applies to U1. I'm doing U1. But yeah, I'm sure you can do something with discrete group actions. Okay, no, it doesn't come to my mind what you can do, but I'm sure you can do something with that.
Where do you use it? That it's compact? Where do I use it? Well, first of all, really, what I want is that, so I want a U1 action on the manifold. Yes, but you don't need a metric, right? For the U1 action.
You need V to be real, right? Yes. An important thing is, you don't need V to be Killing to have nilpotency, because as I said, nilpotency only comes from, sorry, one question at a time, I cannot. Yeah, so nilpotency is not an issue,
because nilpotency, you see, if I act twice with V, I will get, up to factors, V mu one, V mu two, omega mu one mu two, and so on. And this is a symmetric tensor, this is anti-symmetric, so nilpotency will come out with no problem.
But I really want a U1 action because of what I'm gonna discuss here. You don't need a metric, no, to define the vector field, but it will come. So I mean, I could have started with, say, okay, there is no metric, and then at this point, I say, okay, at some point, I need the metric.
Okay, I started with the metric from the beginning. Okay. Any other questions? So I think it's misleading to say there is no metric. It's implicitly, metric is implicit there. You can't define it.
It's misleading to say that. No, where is, if I didn't define it, where was it? No, you started with the remaining manifold, and it's, you are defining a vector field with respect to that metric, and when you are defining this, whatever the structure, this forms and et cetera, metric is implicit there. Maybe it's not useful, maybe let's continue.
No, but I, yeah, okay. But in any case, I can define V without a metric. Okay, if you find it misleading, I'm sorry. You can find, I mean, there are a lot of reviews, maybe I understand better. But you don't need the metric to define the vector field, as you know.
Okay. Sorry, there was another question. Yeah, just a quick question, because I didn't really get your argument for saying that in such an integration or in the fixed points of the action contribute. Yes, so I'm gonna give you two arguments, and the first one is coming, the second one, we come afterwards.
Okay, any other question? Okay, so first localization argument. So this first argument uses a version of the Poincarellima that applies in this case.
And in particular, what you can prove is that if you have a V-equivariantly closed polyform on M,
then it turns out that this form, in fact, is exact, equivariantly exact, but not on the whole of M, of course,
but at least on M minus the fixed points of V. Okay, so if you remove these fixed points, in fact, the form is automatically exact. So how do we see this?
So how do we see this? Well, essentially by construction.
So first of all, we can construct a one-form, which is dual to the vector field,
and at this point, I do need the metric.
So this one-form, you just lower the index with the metric. So if you want, this form is, okay, you can define it using the metric. But in components it is just V mu, g mu nu, dx nu.
So what are the properties of this form? So first of all, this form is equivariant in the sense that the Lie derivative of this eta is zero.
And where does this come from? Well, when the Lie derivative acts on V, you would get the commutator of the two fields, but it's the very same field, and so the commutator of V with V is zero. And when the Lie derivative acts on g,
you use that, in fact, the vector field is an isometry of the metric. Now we can compute the equivariant differential of this eta. And so of course, this has two pieces.
So one piece is d, and the other piece is the contraction with V, but the contraction with V, well, gives you V squared, the norm squared of V. Now it turns out that this differential is invertible
on M mod V, sorry, not mod, M minus the fixed points. So on the manifold from which you remove
the fixed points. And essentially, so you take this V squared, you put it in the denominator, you factor it out, so you can write it like this, so you can see that this is formal,
but I will be more precise in a minute. So here, essentially, I just took minus V squared out. And then you can use the Taylor expansion of this object. But of course, since we are dealing with form, this Taylor expansion is not gonna be an infinite series,
but it's gonna stop, because when, so this is a two form, and when we reach the dimension of the manifold, it's gonna stop, and so we get a finite expression.
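Written out, the formal geometric series just described is (a sketch; the signs follow from \(d_V\eta = d\eta - |V|^2\)):

\[
(d_V\eta)^{-1}
\;=\;-\,\frac{1}{|V|^{2}}\sum_{k=0}^{L}\left(\frac{d\eta}{|V|^{2}}\right)^{k}
\qquad\text{on } M\setminus M_V ,
\]

which truncates because \((d\eta)^{k}\) is a \(2k\)-form and vanishes for \(k>L\). One can check directly that \((d_V\eta)^{-1}\wedge(d_V\eta)=1\).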
So here we have forms, this is a polyform, you go up, up to the degree two L, which is the maximum degree. And okay, so if you don't like these formal manipulations,
essentially, the meaning of this expression is just that, so of course, this is well-defined on M minus the fixed points, just because V is always different from zero, and then the meaning of this inverse is just that if you compute dV eta to the minus one,
taking this, if you wish, as a definition, and wedge it with dV eta, this gives you one. So it's a simple computation, you can just do it, and this is what you get. So this object here, that we can call the inverse,
it's well-defined on this excised part of the manifold. You can also check that it is equivariantly closed,
and in order to do that, so essentially you just act with dV on this expression, here you get zero when you act with dV, so here you have dV squared, which is zero, so you have this, and so what you get is that dV of this expression, wedge this, is equal to zero,
this is almost what you get, it's not yet because you are wedging with something, but again, you use that you have an inverse, so you can multiply by the inverse and remove this factor here, and you can get your equation. So it's just simple algebra. So now this fact allows us to define another polyform,
we can call theta V, and this has the nice property that dV theta V is equal to one. And so finally, if we go on M minus MV,
we can write alpha as dV theta V alpha. And so when dV acts on alpha, it gives you zero because this was closed, and when dV acts on theta V, it gives you one, so this gives you what you want.
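Concretely, this polyform can be taken to be (a sketch of the standard construction, with the conventions above):

\[
\theta_V\;=\;\eta\wedge (d_V\eta)^{-1},
\qquad
d_V\theta_V=(d_V\eta)\wedge(d_V\eta)^{-1}=1
\quad\text{on } M\setminus M_V ,
\]

using that \((d_V\eta)^{-1}\) is equivariantly closed, so that for any equivariantly closed \(\alpha\),

\[
\alpha=d_V\big(\theta_V\wedge\alpha\big)\qquad\text{on } M\setminus M_V .
\]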
And so this explicitly shows that alpha is dV of something which is well defined on this manifold. And so in particular, since alpha is exact, it means that when we integrate on this manifold, we can reduce to a boundary term, and so we don't get any contribution from the bulk of the manifold, we only get a contribution
from the boundary of this space. So if you want, the integral over M of alpha gets contributions from the boundary of M minus MV,
but in fact, this is precisely a neighborhood of the fixed points of V. So okay, so this is not yet showing us what is the result of this integration,
but at least it's showing us that in fact, this integral only gets contribution from special points on the manifold. Okay, we don't have to care about the whole integral, only about the special points. Is there any question on this? Why do we want the dimension of manifold to be even? Let's see.
Maybe for the inverse. I mean, it may be important for us later on because we want, so I will discuss, so this is just a simple example, as I said. This is not a complete theory of equivariant form, cohomology, equivariant integration,
and in particular, we'll show the simplest example in which there are just fixed points. And if you want to have just fixed points, you need the manifold to be even dimensional. Otherwise, in general, you have manifolds, I mean, sub-manifolds, which are the fixed points, which of course you can do, but then it's more complicated. So as I said, this is not the most general thing we can do.
I just want to present a simple example, okay? Any other questions? Okay, so this argument gives us this localization result, but it doesn't tell you yet what the integral is,
so now let's see a second argument, which will actually give us the result.
Usually, what we want is we have an integral, which we want, then, to localize it. Yes.
So is it clear that I can always write it in this point, that I can always find this form alpha? No, I mean, this is our starting point. So as I said, we are interested in computing integrals of equivariantly closed alpha.
I mean, you are given such an integral, and somebody asks you, can you compute for me this integral? So you are given the alpha, and then what we want to argue is, first of all, that this integral, this particular integral, it has to be closed. Otherwise, of course, this argument doesn't apply. Then you localize to fixed points. So first of all, it has to be equivariant,
so it must be invariant under the action. Otherwise, I mean, if the thing that you integrate is not invariant under the action, you don't expect that the action plays a big role. But is that the only requirement? This is what I'm under confused. So this alpha that you're integrating, all it has to be is equivariant equal.
Yes, so far, I give you an argument why only the neighborhood of fixed points should contribute. And what I used was that alpha is equivariantly closed. Because it looks like a very specific alpha that you found by, so it's not clear to me that it's sufficient,
that it's just equivariantly closed. Well, I mean, we can go through the steps. I mean, I showed you that it has to be so, right? I mean, of course, the condition that is equivariantly closed is a condition, is a constraint. But assuming that constraint, I just showed it.
Just one more quick question. When you write the dv eta and its inverse is one, what's the one on the right, and is it the top form, is it just the number one? Oh, it's a bottom form, it's a zero form. Zero form, just one. Yes, yes. I mean, it's very simple to check. Okay, I have another question, sorry.
In the difference, you write alpha as dv of theta v times alpha again? Like what? When you write alpha equal dv of theta v alpha? Yes. So alpha is again in its own. So alpha is equivariantly closed form.
And what I want to show is that if you restrict to M minus the fixed points of V, then it is also exact. It's the dV of something. How do I prove it? Well, I tell you what it is the dV of. And you can just take the dV, right? dV of theta V gives you one, so you get the alpha.
And dV of alpha is zero because we took it closed. So this is a proof by construction. Any other questions?
Okay, so let's go to the second localization argument. Yes, by the way, I think that next week, you will have a much more formal and maybe rigorous treatment of this subject by either Zabzine or Nikita.
Go much more in the details of that. Well, it will be probably more formal with less details, but more formal and rigorous. So combining the two, you should have a good picture. Okay, so second argument and evaluation.
So now we really want to do this integral.
So since we already discussed the fact that when we integrate, so once again, we have the equivariantly closed form alpha. So if you want dv alpha is equal to zero, this is our starting point. So we already discussed that when we compute these integrals, only the cohomology class matters.
So we can deform this alpha. We can take another representative of the same cohomology class, and the integral on the compact manifold is gonna be the same. And so in particular, let's deform it by some equivariantly exact piece. So let's start, instead of studying alpha, we study alpha t, where t is a parameter, is a number,
which is alpha wedge e to the t dv beta. So as I said, t is a number. This exponential, the meaning of this exponential is just to expand, you do the Taylor expansion, and since these are forms, this is gonna, well, actually these are polyforms, so this is probably gonna give you
an infinite number of terms, but okay, this is the definition. And, but we insist that beta is equivariant polyform, of course, because then,
well, because then this is also closed, right? If you act with dV on this, here we get dV squared, which is zero only if this condition is satisfied. Okay, so in particular, okay, this should be obvious from here.
So this is an equivariantly exact deformation, but if you're not convinced, we can just compute the derivative with respect to t of this alpha t. So the dependence is here, and so what we get, so let me suppress this wedge in the following. Okay, I'm not writing wedge, but it is implicit.
So we bring down this factor here, but since everything is closed, this is dv of something.
Okay, so this deformation by t is exact, and so integrals are not affected. Okay, so in particular, instead of integrating alpha, we can integrate alpha t, and we're gonna get the same result, and we can choose any t. This all is gonna give us the same number, the same result.
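As a short check of this t-independence (a sketch; signs coming from the form degrees are suppressed):

\[
\frac{d}{dt}\int_M \alpha\wedge e^{\,t\,d_V\beta}
=\int_M \alpha\wedge (d_V\beta)\wedge e^{\,t\,d_V\beta}
=\pm\int_M d_V\!\left(\alpha\wedge\beta\wedge e^{\,t\,d_V\beta}\right)
=0 ,
\]

using \(d_V\alpha=0\), \(\mathcal{L}_V\beta=0\) (so that \(d_V e^{\,t\,d_V\beta}=0\)), and the equivariant Stokes argument above.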
And so in particular, what will be useful to do, so okay, let me consider the extreme cases. So if we take t equal to zero, we have the original integral, but we can take any other t, and in particular, consider the limit in which t goes to infinity, plus or minus infinity.
Now, we will see this is useful. Okay, so we will evaluate the integral in this limit, because anyway, it does not depend on it. Okay, so what do we do? Well, in fact, we can choose a particular beta,
for which there is a natural choice, which is the eta that we defined above. So we can make any choice, but it's particularly useful to choose as our beta the very eta. And so the statement now is that the integral of alpha that we want to compute
is equal to the limit as t goes to plus infinity of the integral over M of alpha e to the t dV eta. And so now here we have dV eta. Now dV eta is made of two pieces. There is the differential,
and there is the contraction. So the differential gives us this, and the contraction gives us v squared, because now we are contracting v with the dual of v. So this is the statement.
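In formulas, the statement is (a sketch, with \(\eta\) the one-form dual to \(V\) as above):

\[
\int_M\alpha
\;=\;\lim_{t\to+\infty}\int_M \alpha\wedge e^{\,t\,d_V\eta}
\;=\;\lim_{t\to+\infty}\int_M \alpha\wedge e^{\,t\,d\eta}\;e^{-t\,|V|^{2}} ,
\]

where \(e^{\,t\,d\eta}\) truncates to a polynomial in \(t\) of degree at most \(L\), while \(e^{-t\,|V|^{2}}\) exponentially suppresses every point where \(V\neq 0\).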
And now, here we notice something interesting. So first of all, this is still a form. So in particular, we can expand, so these are now a standard, so this is just a one form, so this is just a two form, it's not a polyform, so we can expand the exponential. This time we truly find a finite number of terms in this expansion,
and in particular we get a polynomial. So this is a polynomial in t of degree l.
But on the other hand, so this v squared is just a function on the manifold, and so if we want a zero form, so this is a true exponential. And so in particular, when you take t to plus infinity, this gives us an exponential suppression of the integrand, and this exponential suppression cannot be compensated
by the fact that maybe this piece is becoming large, because this is just a polynomial. So this wins, so this is a true exponential.
And in particular, for any point where this v squared is not zero, you get in the limit an infinite exponential suppression, and so this point does not contribute to the integral, only the points where this is zero, and the neighborhood of them, where this is infinitesimally small, can contribute to this integral.
And so once again, we have found this localization argument that only the fixed points of v where this is zero can contribute to this integral. So once again, we find the result that integral localizes to fixed points of v.
Well, let's see. I mean, of course you have infinities, so it's always more difficult to deal with infinities, but the same argument applies, because, well, at least up to infinities, because you will still have an exponential suppression, if you want, by some bosonic term, but the same argument applies,
and this will still be, essentially, this will be fermions, so this will still be some polynomial. So this, in the quantum field theory case, we'll see this piece will be given by fermions. So in particular, it might have some number of fermion zero modes,
but it's a finite number of fermion zero modes. But, I mean, of course, in quantum field theory, there are infinities. I mean, at the very beginning, I mean, you start with the path integral, yes, you can go to Euclidean, still you have to deal with
an infinite dimensional integral. So, of course, one has to be careful, at least. Okay, so we find, again, are there other questions? Okay, so we rediscover these results,
but in fact, we can go on from this expression, and we can use it to actually evaluate the integral. And so let me assume, for simplicity, once again, I just want to start the simplest example, and we'll discuss the more general case in the context of quantum field theory. But let's assume that V has only isolated fixed points.
Okay, of course, this is not the most general case, because I say in general, V can have isolated,
well, so the set of fixed points can have some dimension. I mean, it doesn't have to be zero dimensional. If you want, the most trivial example is the one in which the U1 action does not act at all, is a trivial action, and then the vector field is zero. I mean, formally, everything that I said goes through,
but then the fixed points are the whole manifold. I mean, still what I said is correct. But of course, there are intermediate situations. Okay, so let me, for simplicity, consider this case here.
So now, since the integral localizes to the fixed points, we can zoom on the fixed points, and we can perform, essentially, we can consider the expression only in a neighborhood of the fixed points.
So if you zoom on a fixed point,
well, the metric is essentially flat. So all points are smooth. So a neighborhood of the point is essentially R to the 2L. So let's zoom in at P, some fixed point,
and then the metric, up to corrections, will be the flat metric, and okay, we are in even dimension, so we separate into copies of, if you want, copies of the complex plane, or R2, and in each of these R2, we take radial coordinates.
So let me consider, let me write the coordinates in this way. Okay, so each of these is a copy of R2 in radial coordinates, if you wish.
And then, and so the vector field takes, so and we choose these coordinates in such a way that the vector field takes a simplified form. So since we are assuming that there are only isolated fixed points, essentially, this V, what it does is a rotation
in each of these planes. And so these will be the eigenvalues of the rotation. So in particular, so I've chosen these coordinates in such a way that the vector field takes this simplified form
and then we can also write what the dual form is. This is also simplified.
And finally, we can compute the equivariant differential of this.
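In these local coordinates, the ingredients read (a sketch, assuming the flat metric \(\sum_i (dr_i^2+r_i^2\,d\varphi_i^2)\) and rotation eigenvalues \(\omega_i\) at the fixed point):

\[
V=\sum_{i=1}^{L}\omega_i\,\partial_{\varphi_i},
\qquad
\eta=\sum_{i=1}^{L}\omega_i\, r_i^{2}\,d\varphi_i ,
\qquad
d_V\eta=\sum_{i=1}^{L} 2\,\omega_i\, r_i\,dr_i\wedge d\varphi_i\;-\;\sum_{i=1}^{L}\omega_i^{2}\, r_i^{2} .
\]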
And so now that we have all the ingredients,
we take those ingredients and we plug them into the integral. Again, take into account that we are taking this limit, t goes to infinity. And so what we get, let's see.
So we get our integral only on the neighborhood of p. So this is the object, precisely the same object as there; here it is decomposed using dV eta. And so if we plug in the various pieces, what do we get?
So let me write it and then comment.
So what I've done here, so I've decomposed these into the two pieces. And we said before, the first piece where we have d eta,
if we expand the exponential, this is a polynomial in t. So then I'm only taking the leading contribution, the one which has the largest degree in t, which is degree l. And so these are the terms, essentially from this piece here.
While here I've done nothing. However, now notice that, so this leading piece in t, in fact, give you the maximum degree that you can have on the manifold. And so out of this alpha, the only term that contributes is alpha zero, is the bottom component, right?
Because here you have all the other terms. And so you have alpha zero. So this alpha zero is integrated over the neighborhood, but there is this Gaussian factor. So we are still taking the limit in which t goes to infinity. So this Gaussian factor is strongly peaked around the origin. And so essentially, around the origin, in the limit,
we can take this alpha to be constant, okay? Because this is a smooth form, and around the point where it is peaked, only the value at zero matters.
So we can take this out. And then, okay, once we take these, well, these are just numbers, alpha has been taken out, and then this is just a Gaussian integral,
so we can do it on each copy of R2. And what we get is this. And so what we see nicely is that, in fact, powers of t cancel out, okay? So this was, so we saw at the beginning, right,
that the integral does not depend on t, so we better find something that does not depend on t. Of course, we are in the limit where t is large, so here there are corrections, but these corrections go to zero, but we want to see that at least the leading term does not depend on t, and in fact, powers of t precisely cancel. Also notice that if we were going to keep
some of the subleading terms in the polynomial, those would have had less powers of t here, and so those contributions are indeed suppressed as you take t to be large. And so, okay, so this simplifies,
so this is alpha zero of p, and then two pi to the L divided by the product from i equals one to L of omega i, okay? So what you have computed is the contribution to this integral from a neighborhood of one of the fixed points,
and in fact, it's very simple. Now, so we have this product of these factors here, but in fact, these are the eigenvalues of the U1 action on the tangent space around the point, right? These are the, because around the point,
this U1 action is a rotation, these are the eigenvalues, and so we can give a more geometric or invariant definition of this object here as, well, this is a product of the eigenvalues, so this is essentially the determinant, well, not precisely, this is the Pfaffian, because it only contains L factors instead of 2L.
And so this is the Pfaffian of the rotation, well, of the U1 action on the tangent space at p, which is a rotation. So this is the U1 action on
T p M. And of course, on the manifold in general you have many fixed points, and so collecting the various results, we get a final nice formula,
which is in fact the content of this Atiyah-Bott-Berline-Vergne localization formula, which is that the integral of alpha, under our assumptions, is given by a sum over the fixed points of V, and what you collect are these local contributions,
so the value of the bottom component of alpha weighted by the Pfaffian of the U1 action at p.
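Putting the pieces together, the Gaussian integrals described above give, per fixed point and per two-plane (a sketch under the isolated-fixed-point assumption),

\[
\int_{\mathbb{R}^{2}} 2\,t\,\omega_i\, r_i\, e^{-t\,\omega_i^{2} r_i^{2}}\;dr_i\,d\varphi_i\;=\;\frac{2\pi}{\omega_i},
\]

so the powers of \(t\) cancel and one lands on the Atiyah-Bott-Berline-Vergne formula

\[
\int_M\alpha\;=\;(2\pi)^{L}\sum_{p\,\in\, M_V}\frac{\alpha_0(p)}{\prod_{i=1}^{L}\omega_i^{(p)}}
\;=\;(2\pi)^{L}\sum_{p\,\in\, M_V}\frac{\alpha_0(p)}{\operatorname{Pf}\big(V\big|_{T_pM}\big)} .
\]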
Okay, so as you see, this is quite a nice simplification of our original integral. Any questions? So at some point we kind of thought,
okay, this integral of alpha depends only on the top component, right, and now this bottom component pops up, so to me it seems a bit mysterious. Now, this is a good observation, because, well, it's not, yeah, so we wanted to integrate,
so the integral of alpha is really the integral of the top component. This is the integral we want to compute, and this is really the integral we are computing, and it is true that this integral localizes to the fixed points, however, what appears here is the bottom component, because as we said, these two objects, so if these two objects were unrelated, this formula could not be correct,
but these two objects are related, because one of the requirements for this to work is that alpha is equivariantly closed. So this has to be closed, and as you saw at the beginning, this condition mixes up the various components of alpha, because if you look at the top component
of this equation, this is telling you essentially that minus iota V of alpha 2L is equal to d of alpha 2L minus 2, or maybe this is not minus, but then there is another equation
that is telling you that iota V of alpha 2L minus 2 is equal to d of alpha 2L minus 4, and so on. And so, by this, so this equation is a constraint on the various components, and at the end of the day, it relates the bottom component to the top component. It has to be so.
So of course, if you just start from a standard form, and you're just interested in integrating a form, and you want to apply this form, there is some work that you have to do. You have to first of all make into an equivariant polyform, which is closed, and then you can use the theorem.
But still, this formula is just... I have a question which is probably related to this. So this procedure here is a very formal way of separating polar and angular components. This two pi to the L is just the integration over angular components.
And then you still have an integral over R, and somehow presumably this is going to be this alpha zero. It probably is related to the original integral, so let's say we can... Oh, you see it from here, right? So here what we did was... So essentially, the point was that in the formula that we obtained over there,
the crucial point was that there is this exponential factor. And if you imagine the manifold, so let's say that this is our manifold, this exponential factor, in the limit in which t becomes very large, is peaked. I understand, but I don't think this is related to what I'm asking, because the pedestrian way of doing this,
you will find a coordinate system in which your integral only depends on R. And therefore, angular integration will just give you... This is simple, this is trivial, because nothing depends on the angles, but that depends on finding that particular coordinate system.
And then you just have R integrals, which are somehow presumably related to this alpha zero at P, because these are still integrals that you have to do by hand. I mean, there's no way to simplify it. You still have an L-dimensional integral, which you have to do, which you cannot... Well, as we saw at the beginning,
sorry, the beginning of the first argument, which is here, essentially in the bulk of the manifold, the form is exact. So it reduces to a boundary term. So this integral over R that you're doing, you will find that what you integrate is just a total derivative. So you don't really have to do the integral, you just have to evaluate this object,
the one whose total derivative you are integrating, at the extremes, the points, which are the boundary of this space. Now in my assumption, which is that really I'm removing just points, you see the boundary of these are small spheres around these points. But because of smoothness on these spheres,
everything is, the form alpha is constant. Because when the sphere is very, very small, since I'm assuming that everything is smooth here, then alpha is constant. And this is why I can, instead, you see this alpha becomes just alpha zero. I mean, I can take this as just constant.
And this is probably a realization of what you're saying. Okay. And why can we assume that such coordinates can be found? Yeah, so this is just local around the point. So you have a point, and the U1 action does not,
I mean, there's only a fixed point. And so this U1 action is, if you want, given by a rotation matrix. This is, this is U1.
I think so, probably you don't need to use the metric. I mean, the metric simplifies your life because you can construct this eta and so on. But, well, in fact, what you need is to, I mean, take any, essentially here, what I need is to construct this eta. And so I need just to construct a metric, if you wish,
which is invariant under the U1 action, which probably can always be done, I'm not sure. Yeah, so this was a convenience because, yeah, I wanted to show things in an explicit way. A question? So the Pfaffian is always positive, is that correct?
The Pfaffian? Always positive, no. Okay, nevermind. I was gonna ask whether the only non-trivial signs could come from alpha zero. No, no, you do have signs, right? This is, I mean, you have to multiply the eigenvalues of this rotation. I mean, you have a rotation matrix.
This rotation matrix is just a rotation in each plane, and you can have positive or negative eigenvalues. Sorry, can I maybe make my point a little bit more crisp? So the way you presented it, it looks like you've completely done away with all integrals. But if somebody gives you a 2L dimensional real integral,
then you just have alpha 2L. And then you have to relate, you have to find all these alphas, in particular, alpha zero. But to do that, you have to do integrals, because you relate the lower rank alpha to the higher one by doing an integral. Well, you have to, yeah, you have to solve this. You don't really have to integrate over the whole manifold. You just, but you have to find,
to solve these integrals. So there's still an L dimensional integral, basically, that you have to do if somebody gave you. You have to solve these differential equations. Yeah, well, this is how it is. Okay, I just wanted to make a point that there's still some integrals you need to do, right? You don't have to integrate over the whole manifold. You just have to solve these equations, but yes.
But instead of a 2L dimensional integral, you have to do L integrations. There is still some work to do, but this simplifies a lot, the task, if you sit down and try to do one example. Okay, so in fact, so, okay, I can, if you want to familiarize with this,
I can just suggest one simple example that, I don't know, maybe might be, might look too easy for some of you, but maybe for some other ones, it's interesting, I don't know. So, as a simple example, you could try to,
Suppose that you want to compute the integral \(\int_{S^2} e^{\,i\,c\,\cos\theta}\,\mathrm{dvol}_{S^2}\), for some constant c, on a round S2.
Of course, this is an elementary integral, you can just do it directly, but if you wish to clarify your ideas, you can try to solve it with equivariant localization. So this will be the top component; then you need to construct the equivariantly closed polyform that has this as its top component, and then you can apply the localization theorem. Okay, it's a simple example; a sketch of how the check could go is given below.
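A minimal numerical sketch of this exercise. The conventions are assumptions on my part and may differ from the board by signs: unit-radius round metric, azimuthal rotation \(V = \partial_\phi\) with fixed points at the two poles, the \(D = d - \iota_V\) convention used above, for which the zero-form partner of the integrand is \(\alpha_0 = e^{\,i c \cos\theta}/(ic)\), and the two-dimensional fixed-point formula \(\int_M \alpha = 2\pi \sum_P \alpha_0(P)/\epsilon_P\), with \(\epsilon_P\) the rotation weight at P.

```python
import numpy as np
from scipy.integrate import quad

c = 2.7  # arbitrary nonzero constant in the integrand

# (1) Direct evaluation of \int_{S^2} e^{i c cos(theta)} dvol, dvol = sin(theta) dtheta dphi.
re_part = quad(lambda t: np.cos(c * np.cos(t)) * np.sin(t), 0.0, np.pi)[0]
im_part = quad(lambda t: np.sin(c * np.cos(t)) * np.sin(t), 0.0, np.pi)[0]
direct = 2.0 * np.pi * (re_part + 1j * im_part)   # the phi integral gives the factor 2*pi

# (2) Fixed-point (localization) evaluation: two poles, rotation weights +1 and -1,
#     alpha_0 = e^{i c cos(theta)} / (i c) evaluated at theta = 0 and theta = pi.
alpha0_north = np.exp(1j * c) / (1j * c)
alpha0_south = np.exp(-1j * c) / (1j * c)
localized = 2.0 * np.pi * (alpha0_north / (+1.0) + alpha0_south / (-1.0))

# (3) Closed form of the elementary integral, for reference.
closed_form = 4.0 * np.pi * np.sin(c) / c

print("direct     :", direct)
print("localized  :", localized)
print("closed form:", closed_form)
```

All three numbers agree (the imaginary part of the direct integral vanishes by symmetry). With the opposite sign convention for the equivariant differential, alpha zero changes sign and the sign in the fixed-point formula compensates, so the result is unchanged.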
Okay, so I'm going to change topic, so if you have more questions, this is a good time. Just on the derivation of the localization formula before: first you choose very specific coordinates locally near the point P, and then when you integrate, for each pair of coordinates you integrate over the entire real plane. Well, I can do that because I have this damping factor. So, let's see, where is it? I have this Gaussian factor. So this is going to be peaked at the point, and everything else is not going to contribute; in the limit, this is correct. You might say this is an approximation at finite t because, as you said, this really should be only in a neighborhood of the point, and there are corrections everywhere. The crucial point is that this formula holds for any value of t: for any value of t we get the same result, so we can take t as large as we want, and in particular we can make the corrections as small as we want. And because of that, this is the final result; it's not an approximation, it's the exact answer.
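A compact way to write this argument, using the objects from the derivation, the equivariant differential D, the invariant one-form eta, and the equivariantly closed alpha (the sign conventions here are an assumption):

\[
\frac{d}{dt}\int_M \alpha\, e^{-t\,D\eta}
\;=\; -\int_M \alpha\,(D\eta)\, e^{-t\,D\eta}
\;=\; -\int_M D\!\left(\alpha\,\eta\, e^{-t\,D\eta}\right)
\;=\; 0 ,
\]

using \(D\alpha = 0\), \(D(e^{-t D\eta}) = 0\) (because \(D^2\eta \propto \mathcal{L}_V \eta = 0\) for an invariant eta), and the fact that the integral of a D-exact form over a closed manifold vanishes. So the deformed integral is independent of t; it can be evaluated at t to infinity, where the Gaussian factor localizes it onto the fixed points, and at t equal to zero, where it is the original integral.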
And also, the question about the local coordinates: you set up the coordinate system like that, and V has the components omega P i; is omega P i constant in the entire neighborhood of the point, or only at the point? Well, I mean, if I go close to the point, this V is just a rotation around the point. It is not exactly equal to that, there are corrections, but the leading term, up to corrections, is just a rotation around the point.
And so, this is just the eigenvalue of the rotation. So that's how you can simplify the equation there: this omega P i, yeah, these are just numbers. Yeah, this notation, this is the eigenvalue of the U(1) action at P, the i-th eigenvalue. Yeah, but if you write it like that, it looks like it holds in the entire neighborhood. Yes, that's true; I mean, our neighborhood is infinitesimal, right? Yeah.
Okay, so I guess I have 11 minutes. Yeah, okay. So, we'll introduce the next topic. So, as I said, here we saw the basic ideas of localization. But really, what we want to do is to apply this to quantum field theories. So, let me just say some introductory words about Euclidean path integrals and exact results.
So, already Guido said a few words
about what is the general strategy or the general goal that one wants to achieve. And okay, in this school probably you will hear the same things in various lectures because of course it's a very focused set of lectures. And so, one of the ideas,
at least one of the ideas is the following. If you are given a quantum field theory, in principle, at least formally, all the information about this quantum field theory can be encoded in the Euclidean path integral, if you can compute this object. These are integrals over an infinite-dimensional space of field configurations, and these field configurations are weighted by the exponential of the classical action, which is a functional of the field configurations (these of course include bosons and fermions), divided by h-bar. So, in principle, the information is contained in these objects, up to the fact that we should introduce sources if we want to compute expectation values of operators, and then of course we have to do some Wick rotation back to Lorentzian signature. But formally, everything is contained here. But unfortunately, this object is too hard to compute.
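Schematically (the notation is the standard one, not necessarily the one used on the board):

\[
Z \;=\; \int \mathcal{D}\Phi \;\, e^{-S_E[\Phi]/\hbar},
\qquad
\langle \mathcal{O}_1 \cdots \mathcal{O}_n \rangle
\;=\; \frac{1}{Z}\int \mathcal{D}\Phi \;\,\mathcal{O}_1 \cdots \mathcal{O}_n\; e^{-S_E[\Phi]/\hbar},
\]

where \(\Phi\) collectively denotes the bosonic and fermionic fields and \(S_E\) is the Euclidean classical action; sources are included by shifting \(S_E\) by source terms coupled to the operators of interest.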
This is an integral over an infinite-dimensional space, and we don't know how to do that in general. Of course, we know the standard approximation scheme, the standard paradigm that we can try to apply: we expand this as a perturbative expansion and then we try to compute perturbative corrections. And of course, this works very well if you are at weak coupling, but it does not work if the couplings, if you wish, are order one, what we call strong coupling. And this is for a couple of reasons. First of all, there is an infinite number of corrections, so you would have to compute all of them. But even if you are able to compute all these corrections and resum them, at least in general we are still missing the non-perturbative contributions: in general you have some asymptotic series, and the non-perturbative contributions have to be supplied separately.
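A schematic picture of this, for a single coupling g (this is only an illustration, the precise form is theory-dependent):

\[
Z(g) \;\sim\; \sum_{n \ge 0} a_n\, g^{\,n}
\;+\; e^{-A/g}\,\sum_{n \ge 0} b_n\, g^{\,n}
\;+\; \cdots ,
\]

where the perturbative coefficients \(a_n\) typically grow factorially, so the series is only asymptotic, and the exponentially suppressed terms (instantons and other non-perturbative effects) are invisible at any finite order in perturbation theory.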
So, what do we do? One possible approach to address this problem is to study some special theories for which at least some of these path integrals can actually be computed. Okay, this is not the only strategy, but it is one possible strategy.
However, if you wish, before this localization revolution, we can call it that, which started with Nekrasov and then with Pestun, it was thought that only very specific, very special and maybe peculiar or exotic theories were amenable to such a treatment. In particular, some cohomological theories or some topological theories. But this big development started from the understanding that in fact this is not true: there is a very large class of quantum field theories and of observables that can be exactly computed
and addressed with various techniques, and in particular supersymmetric localization. Now, okay, this might be obvious, but let me just stress from the very beginning that even though we will be able to compute some path integrals with localization, we will not be able to compute every path integral. So, if I take a supersymmetric theory, I will not be able to compute every possible path integral in that theory; that would mean that I had solved the theory and could compute any observable. That would be awesome, but it's not the case, because quantum field theory is still very complicated. We will be able to compute some path integrals, but as we will see, this will still be a pretty large class of observables. In fact, we don't have a complete classification, so it's still an open set, if you want. And these observables that we can compute, even though they are not all of them, contain a lot of very interesting physical information. But of course, localization by no means
is the only non-perturbative tool that we have at our disposal. For other classes of theories, we have other tools. For instance, for conformal theories, we have the Conformal Bootstrap. For integrable theories, we have integrability and many other methods. So, of course, this is just one strategy, but it's very interesting.
Okay, so if you want, the objective of these lectures will be to compute Euclidean path integrals of this form. But more specifically, and this is a point that already Guido stressed or mentioned, we will be interested in Euclidean path integrals.
And moreover, we will put the theories on compact manifolds. So the space-time will be some compact Euclidean manifold M, and what we want to compute is the path integral of these theories. Let me set h-bar to one. So the action is a functional of the field configurations, but it's also a function of various parameters. Let me call them c, just not to create confusion with what t was before. So the partition function will not just be a number, it will be a function of these parameters. So what are these parameters?
Well, they can be couplings in the theory, of course, but they could also be parameters of the manifold where we put the theory. And moreover, they could also be parameters that control the background, the supersymmetric background that we use on these manifolds.
So this is precisely the thing that Guido will describe, how to preserve supersymmetry on curved manifolds, and he will show you that when you do that there is not just one way to do it: there are parameters, and in general these parameters might become parameters of the partition function.
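In formulas (schematic, with c collectively denoting all these parameters):

\[
Z_{M}(c) \;=\; \int \mathcal{D}\Phi \;\, e^{-S_{M}[\Phi;\,c]},
\]

where the action depends on the couplings, on the geometry of the compact Euclidean manifold M, and on the supersymmetric background fields turned on there.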
And so our objective will be to compute these objects, and we will discuss how to compute them with localization techniques. And, well, you might ask why we should do that. I mean, we are interested in the Lorentzian theory on flat space. Yeah, it's true that space-time is curved, but if you are in a lab it's essentially flat, up to the gravitational pull. So why should we care about studying a supersymmetric theory on a Euclidean, compact, very weird manifold? What's the point of that?
Besides the fact that it's theoretically interesting if we can do that, there are, if you want, more phenomenological reasons to do it. It turns out that it is a profitable exercise to study supersymmetric quantum field theories on compact, well, they don't have to be compact, but let me say compact, and as generic as supersymmetry allows, manifolds and backgrounds. And I just want to mention two reasons why this is a good exercise.
So the first reason is that, as we will see, localization will allow us to reduce this complicated problem to a simpler problem, but how much simpler the problem becomes depends on which particular manifold and which particular background we are on. On some manifolds and backgrounds, this becomes much, much simpler, and we can actually solve it in a very explicit way. In other cases it becomes simpler, but it might still involve some non-trivial mathematical problem. So, if you wish, some manifolds M and backgrounds
are easier than others, in the sense of computing partition functions. The second reason, which is more important, is that in fact, different manifolds and different backgrounds grant us access
to different sectors of observables in the theory. So different manifolds M and backgrounds give access to different observables, and in particular the set of observables that we can reach by varying the manifold and the background can be quite rich. So we can have access to correlation functions of chiral operators, and this has been known for a long time, but in fact we can have much more. For instance, we can have correlation functions of holomorphic with anti-holomorphic operators. We can have correlation functions of conserved currents, which are not holomorphic. We can have access to various counting problems, counting states or counting operators. So we really have access to much larger sets
of observables than was thought in the past. Okay, and I think that my time is over, so I will stop here.