
Arakelov geometry on degenerating curves


Formal Metadata

Title
Arakelov geometry on degenerating curves
Number of Parts
23
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
We investigate the asymptotics of Arakelov Green functions and metrics, and of the delta-invariant, as a smooth Riemann surface degenerates to a stable curve.
Transcript: English (auto-generated)
Thank you. So I'm supposed to say something about Gabber. He likes to do technical things, and he even likes to do almost mathematics, when nobody else wanted to touch it. So I want to talk about Arakelov geometry, which is an old topic. If you have a curve over the integers of a number field, then you want to do an intersection theory for divisors, and you need an intersection number at infinity, at the infinite places. And this is done as follows: if you have P, Q in the complex points, you want the intersection number to be minus log G(P,Q), where G(P,Q) is something like the distance between P and Q. So if P and Q are close, then the distance is small, minus log is big, and so they should have big intersection. And G(P,Q) is sort of the norm of the section 1 in a metric on O(Δ_C), where Δ_C is the diagonal.
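In symbols (my transcription of what is on the board; notation approximate):

```latex
% Intersection number at an infinite place, for distinct complex points P, Q:
\[
  \langle P,Q\rangle_\infty \;=\; -\log G(P,Q),
\]
% where G(P,Q) = ||1||(P,Q) is the norm of the canonical section 1 of
% O(\Delta_C) on C x C, in a metric to be fixed below; G is small when
% P and Q are close, so -log G is large.
```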
And of course, there are many such choices; I mean, there are many such metrics. But at least for such metrics we know something: the curvature is a (1,1)-form which represents the class of the diagonal, restricted appropriately. And Arakelov said: choose a measure μ, a (1,1)-form whose integral is 1 (well, now C has become a Riemann surface). Then we look at Hermitian line bundles (for the moment it doesn't have to be O(Δ)) with curvature deg(L) times μ. We know that the integral of the curvature gives the degree, so if it is a multiple of μ, it has to be this multiple. Line bundles with such metrics we call admissible, and such a metric is unique up to scaling. And the G(P,Q) should give a metric on O(Δ) or, if I restrict one factor, on O(Q). If one requires that these metrics are admissible, then there is a unique metric on O(Δ), unique up to scalar, with curvature:
μ ⊗ 1 plus 1 ⊗ μ minus a sum of terms in the α_j, restricted to the diagonal; I hope I got this right. Here the α_j are an orthonormal basis of the holomorphic differentials. This is true because these are the harmonic representatives for the diagonal. OK. And if we have such a metric, we get a metric on the differentials ω_C, by the fact that O(−Δ) restricted to the diagonal is ω_C.
And the curvature of this metric: I take the negative of the curvature above and restrict it to the diagonal. If we want this metric on ω_C to be admissible, so that its curvature is (2g − 2) times μ, then, I think I should put a factor i, yes, a factor i/2 for the inner product, this is admissible exactly if μ is the corresponding multiple of the sum of the α_j ∧ conjugate α_j. So μ is (i/2g) times the sum of α_j ∧ conjugate α_j. And this is called the Arakelov measure.
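A hedged reconstruction of the two displayed formulas, with α_1, …, α_g an orthonormal basis of holomorphic 1-forms:

```latex
% Curvature of the admissible metric on O(\Delta) on C x C:
\[
  c_1\bigl(\mathcal O(\Delta),\|\cdot\|\bigr)
  \;=\; \mu\otimes 1 \;+\; 1\otimes\mu
  \;-\; \sum_{j=1}^{g}\bigl(\alpha_j\otimes\overline{\alpha_j}
        + \overline{\alpha_j}\otimes\alpha_j\bigr),
\]
% and the Arakelov measure, the unique mu making the induced metric
% on omega_C admissible:
\[
  \mu_{\mathrm{Ar}} \;=\; \frac{i}{2g}\sum_{j=1}^{g}\alpha_j\wedge\overline{\alpha_j}.
\]
```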
And if one does this, one can do some things. For example, one can define volume forms on the determinant of cohomology of L, if L has an admissible metric: forms which satisfy the obvious scaling condition, and which are compatible with short exact sequences; this determines them up to scalar. And again, we can normalize the volume forms on ω: there it's sort of given by the square integral of differentials.
And then one knows that on L of degree g − 1, so if L has degree g − 1, this is independent of scaling, because we get a power of the Euler–Poincaré characteristic, which is 0. And then this determinant is sort of O(−Θ); the determinant is isomorphic to O(−Θ) on the Jacobian. (Can you say again what you said about independence of scaling?) If you have a line bundle of degree g − 1 with an admissible metric, then on the determinant of cohomology the metric is independent of scaling, because you get a power of the Euler characteristic. (To define the metric on the determinant of cohomology, did you use the correction by the analytic torsion, or what?) No; well, it can be written with the correction by analytic torsion, but I use a different one. If you have an admissible line bundle and a point,
you have this exact sequence with L(−P) and L, and the quotient, the fiber at P, has a metric. So the determinants of cohomology of the two differ by this fiber, and I require that the metrics are compatible, and similarly for the metric on L(−P): well, O(−P) also has a metric, given by Arakelov, and you want this to be compatible. This gives essentially a metric on the O(D)'s, where D is a divisor, and then you show that it factors over the isomorphism class. It's enough to do this for degree g − 1, and there you have no scaling problem. For this one, you compute the curvature of this metric and show that it's induced from the Jacobian. I mean, if you have O of a divisor, and the divisor is parametrized by C^N for some big N, then the curvature is a (1,1)-form on C^N; you can compute this curvature and see that it's induced from the Jacobian. Does this answer your question? (So this shows that your rule specifies a choice of metrics on the determinant?) Yes. (For all line bundles with an admissible metric?) Yes, and so there is a unique choice on the category of those things, up to a common scalar. (So you don't get rid of the common scalar?) No; but you have ω: the H^1 of ω is canonically trivial.
So you have this one. And then from the curvature you see that the log of the volume is proportional to minus log of the norm of the theta function. On the Jacobian of degree g − 1 you have the canonical theta function, and you can normalize it, for example so that its square integral is 1, or whatever. And then it's determined up to a constant, and this constant I define to be minus delta over 8. So delta is just a real number, which is an invariant of the Riemann surface, and I call it this strange way because of the following. What is the norm of theta? O(Θ) has a norm with invariant curvature, so the norm of theta is determined up to a scalar, and you can normalize it such that the square integral is 1; then you get a value for this one. (The square integral of which section?) Of theta, I mean of the norm of theta: if you multiply the norm by a scalar, then the square integral is also multiplied,
and then you normalize it. (The norm on O(Θ)?) Yes, on O(Θ), OK. And the measure on the Jacobian of degree g − 1 is normalized so that the volume is 1, for example. Then, for arithmetic surfaces, you get a Riemann–Roch theorem, and you get a sort of Noether formula: the arithmetic degree of the determinant of cohomology is (ω² + δ)/12, let's say for semi-stable curves, where ω is the relative differentials, the dualizing sheaf, with the admissible metric. You have this ω² by Arakelov theory. And δ is the sum over all places of the δ_v: at the finite places, maybe I'll write log q_v times δ_v, plus the infinite places. And δ_v is the number of singularities: for semi-stable curves, generically the curve is smooth, but there are finitely many singular fibers, and there you have double points. If you have a regular semi-stable model, you count the number of points; if you have a stable model, you would count with multiplicities. (And do you count the number of geometric points?) Well, you would have to look at the extension field.
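A hedged reconstruction of the arithmetic Noether formula as stated (conventions and additive constants vary between references; this is just the shape on the board):

```latex
% Arithmetic Noether formula for a semi-stable arithmetic surface pi: X -> Spec O_K:
\[
  12\,\widehat{\deg}\,\det R\pi_*\,\omega
  \;=\; \omega^2 \;+\; \delta,
  \qquad
  \delta \;=\; \sum_{v \ \text{finite}} \delta_v \log q_v
        \;+\; \sum_{\sigma \ \text{infinite}} \delta(X_\sigma),
\]
% where delta_v counts the double points of the fiber at v, and
% delta(X_sigma) is the real invariant of the Riemann surface X_sigma
% defined above via the normalized theta function.
```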
(And when you have real and complex places, does it influence the formula? That is, you take the sum over the embeddings and say the complex ones count twice?) Yes, then you get the sum; it's the sum of this over the infinite places. (Is the delta the same in both places?) Which delta? (The delta on the first line.) No, no: delta is an invariant of the Riemann surface. At the infinite places you have different Riemann surfaces, each with its invariant delta, and the sum occurs here. But a complex conjugate place gives the same delta, I think. Yes. And then the question was: this suggests that delta at the infinite places should somehow measure the singularities at infinity. So the question was whether delta is related to a metric on the following: I take the moduli stack of all semi-stable curves and the divisor which gives the singular curves, and ask whether there is a metric on it such that delta is something like the log of the norm of 1, or something. So this is the problem I wanted to solve.
And for this I studied degenerations. This means: if I have a Riemann surface and it approaches the boundary, I want to look at what happens. (Will you use the Deligne method for simplification?) Yes. So I have a family of Riemann surfaces, C over my S. (The g here is at least 2, or can it be 1?) I think g can be 1, but it shouldn't be 0. (And in the case g = 0, does what you wrote make sense?) No, it doesn't, because you have to divide by g for the Arakelov measure. (It seems related to the fact that you have minimal models when g is at least 1.) OK, and so you have π with special fiber C_0.
And you want to study the asymptotic behavior of the Arakelov data. And I should say: if I have a curve C_0, I get a graph Γ whose vertices V correspond to the irreducible components of C_0, and whose edges E correspond to the double points; an edge joins two irreducible components if the double point lies between them. I usually orient the edges, but I won't stress this here. Then, for small s, each fiber is a union of pieces C_{s,v}, which are curves with disks removed, and pieces A_{s,e}, which are annuli. So I draw a picture: you have several curves, and the annuli joining the different curves, and it may go on, and whatever. And the degeneration happens when these annuli are squeezed.
OK. So to do the asymptotics of the invariants, I consider them either on these components or on the annuli, which means I distinguish between them. The annuli have equation u·v = t_e, say, with |u|, |v| at most 1. The t_e are parameters which measure the singularity, and I assume they are small. I also take the invariant s_e = −log |t_e|, so that I don't have to write negative numbers too often. And then I look at the G(P,Q)'s. (Sorry, this is the picture I will need later.)
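The local model just described, in symbols (a sketch; one parameter t_e per double point):

```latex
% Standard local model of a degenerating annulus at a double point e:
\[
  A_e \;=\; \{(u,v) : u\cdot v = t_e,\ |u|\le 1,\ |v|\le 1\},
  \qquad
  s_e \;=\; -\log|t_e| \;\gg\; 0,
\]
% so in the u-coordinate the annulus is { |t_e| <= |u| <= 1 }, and it is
% squeezed as t_e -> 0, i.e. as s_e -> infinity.
```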
And I look at the G(P,Q)'s, say where P and Q are points either in the same or in different components, and which behave nicely. And then I look at the asymptotics, which means I want to study them on the curve up to a uniformly bounded function. This means I need a reference model for these G(P,Q)'s which is sort of standard: if P lies here and Q lies there, then I want it as a local equation times something, and so on. (You allow also a component which intersects itself in the special fiber, I suppose?) Yes, but then it has an edge from a vertex to itself. OK. So up to a bounded function, the result will be: they are sort of constant, constants depending on s and v, on the interiors of the curves, meaning, I guess, that the variation there is bounded. And on the annuli it is mostly linear interpolation, by which I mean it's a linear function in these coordinates; so I want linear interpolation.
So if this is always the case, then the interesting functions are given by the values on the components. But there's one exception. (When you say constant on C_{0,v}: do you mean when the two points lie in the same C_{0,v}, or one point in one and the other in another?) Well, both are true. (But constant up to a bounded function is the same as just uniformly bounded?) No, no: the constant depends on the parameter s, and the bounded function is bounded uniformly in s. So: constant, namely a function of s and v. (And when you have one point in C_{0,v} and the other in C_{0,w}, it depends on s and v and w?) Yes. So, one exception: if you have the two points in the same annulus.
If both points are in the same annulus, then it's not bilinear; there's a correction term. The correction term is usually a multiple of the minimum of p(1 − q) and q(1 − p), where these coordinates, the log |u|'s, are normalized to go from 0 to 1 on both of them. So I get a correction term which is a multiple of this, and which is non-linear in p and q. And this, I think, is related to the fact that if you consider the diagonal on the product of two annuli, it is not a Cartier divisor and you have to do a blow-up; this way you get this term. (What are p and q?) They are the coordinates: I normalize log |u| so that it runs from 0 to 1, and log |v| also, and I call these p and q. And then I get this multiple of this function.
(When you say linear interpolation before the exception, do you refer to the case where one point is in an annulus and one point is not?) No, here I refer to the fact that both points are in the same annulus; the linear dependence is also allowed when two points are in the same annulus, except that I have this correction. (You interpolate between the two ends of the annulus; and if the two points are in different annuli you also do this?) Yes, and there is this one correction term. (So if you have two points in an annulus between two components, there are four numbers, the values at the four corners, and you want a bilinear function, which you have to know at the four corners?) Yes, exactly.
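A hedged transcription of the interpolation statement, with c some constant depending on the Green function in question and p, q the normalized annulus coordinates of the two points:

```latex
% On a single annulus A_e, with p, q in [0,1] for the two points, the Green
% function is asymptotically bilinear plus a non-linear correction term:
\[
  G \;\sim\; \text{(bilinear in } p, q\text{)}
  \;+\; c\cdot\min\bigl(p\,(1-q),\; q\,(1-p)\bigr),
\]
% the correction reflecting that the diagonal in A_e x A_e is not a
% Cartier divisor and must be blown up.
```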
OK, so this is the result, and these values are given by the graph; they can be computed from the graph Γ. So how do you do this computation? How do you compute these Arakelov Green functions? This can be done as follows. If you have a Riemann surface and two points, you have a harmonic function H_{pq}(z) with logarithmic poles at p and q with residues plus and minus 1. This means that near these points, in a local coordinate, it is asymptotically like plus or minus log |z|. And this is unique up to a constant.
And how do you get this? You take α_{pq}, a differential with residues plus and minus 1 at p and q; this you know exists. It is unique up to adding holomorphic differentials, and you can require it to have purely imaginary periods, because with holomorphic differentials you can prescribe the real parts of the periods, and you can subtract this. So you have this one. Then you take H_{pq} to be the real part of the indefinite integral of α_{pq}; it is unique up to a constant, and the real part is independent of the path because the residues are as they are. And then you have this G, maybe G_q(z): you take the integral of H_{pq}(z) against μ in the variable p, plus a constant, and the constant is such that the μ-integral is 0.
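In symbols, a hedged reconstruction of the construction just described:

```latex
% alpha_{pq}: meromorphic differential of the third kind with residues
% +1 at p and -1 at q, normalized to have purely imaginary periods
% (possible by subtracting a suitable holomorphic differential). Then
\[
  H_{pq}(z) \;=\; \operatorname{Re}\int^{z}\alpha_{pq}
  \qquad\text{(well defined up to a constant)}
\]
% is harmonic with logarithmic singularities +- log|z| at p and q, and
\[
  g_q(z) \;=\; \int_C H_{pq}(z)\,d\mu(p) \;+\; \mathrm{const},
  \qquad \int_C g_q\,d\mu = 0,
\]
% recovers the Green function with respect to the measure mu.
```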
(The graph: are Γ there and G the same?) Yes, that's the same; the letters are not different. (So in G_q(z), q is fixed?) I integrate over one variable. (z is a variable?) Yes: H_{pq} depends on three variables, on p and q and on the variable z. The H_{pq} has curvature δ_p − δ_q, and if you integrate over p against μ, you get curvature μ − δ_q; so you have the right curvature for the G. And then you have to add a constant to make the μ-integral equal to 0. (The μ-integral relative to z?) Well, you could, for example, do the integral over z, or you could also do the integral over q, and the two conditions are equivalent. So I have this one, and this is how I get these. And now I have to talk about how I do this on a degenerating curve. First of all, I have to know something about the Arakelov measure. On C_0, on the singular fiber, I have the differentials.
I have the differentials which have no poles at the double points; this is the sum of the Γ(C_v, ω_{C_v}). And then I have the residues: I have C^E, the residues at the edges e. (When you write this, more rigorously the notion of a graph should be as in Serre's book: the set of edges comes with an involution, so it depends on orientation, and you may have to take this into account.) Well, I said I orient all edges, in some way; I don't know how it is in Serre's book. (Yes, a set with an involution that changes the direction.) Talking about the graph, I forgot one thing: if you look at different books, what they call a graph is slightly different depending on what you read. (An unoriented multigraph is, I think, the nicest thing to use.) But I say it's oriented, and it's a simple graph, not a multigraph.
For the graph I have the homology: there is the boundary map from Z^E to Z^V, edges to vertices, and the kernel I call X, which is H_1(Γ). (What did I say? I had this Γ.) And the cokernel: the residues of a differential lie in X ⊗ C, so the residue map goes to X ⊗ C, and then the sequence is exact. This is because the sum of the residues on each component is 0, and this means just that the image lies in the kernel of the boundary map; the boundary map sends an edge to the difference of its two endpoints. Because I had the edges oriented, you can figure out whether you want the beginning minus the end or the end minus the beginning. So I have this one, and the differentials form a nice family. And there's a similar picture on the singular fibers: you have this residue map, and you get a basis of differentials.
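The residue sequence, as I understand it from the board (a sketch):

```latex
% For the singular fiber C_0 with dual graph Gamma (vertices V = components,
% edges E = double points), the global sections of the dualizing sheaf fit in
\[
  0 \;\longrightarrow\; \bigoplus_{v\in V}\Gamma(C_v,\omega_{C_v})
    \;\longrightarrow\; \Gamma(C_0,\omega_{C_0})
    \;\xrightarrow{\ \mathrm{res}\ }\; X\otimes\mathbb{C}
    \;\longrightarrow\; 0,
\]
% where X = H_1(Gamma) = ker( \partial : \mathbb{Z}^E \to \mathbb{Z}^V ),
% \partial(e) = (\text{head of } e) - (\text{tail of } e); exactness in the
% middle uses that the residues of a differential sum to 0 on each component.
```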
You get differentials on the curves, and you get differentials which have non-trivial residues; the cokernel is isomorphic to X ⊗ C. And then if you take the square integral of a differential: if the differential is holomorphic and has no poles at the double points, then it's about the square integral on the components. But if it has poles, then the square integral is concentrated on the annuli, and it's about the sum of the s_e times the residue at e squared. If you integrate over an annulus, you get essentially this if you look at the residue. (By the way, the residue is defined also on the nearby fibers, by integrating once around a loop in the annulus.) (An integral of what?) Of a one-form. So if you want to integrate |du/u|² over this annulus, you take polar coordinates, and you get 2π times the integral of dr/r from |t_e| to 1, and this gives you 2π s_e.
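The annulus computation, written out (a sketch with the i/2 normalization used earlier):

```latex
% L^2-norm of du/u over the annulus A_e = { |t_e| <= |u| <= 1 }:
% in polar coordinates u = r e^{i theta},  (i/2) du ^ d(bar u) = r dr dtheta, so
\[
  \frac{i}{2}\int_{A_e}\frac{du\wedge d\bar u}{|u|^2}
  \;=\; \int_0^{2\pi}\!\!\int_{|t_e|}^{1}\frac{dr}{r}\,d\theta
  \;=\; 2\pi\, s_e ,
\]
% so a differential with residue res_e on the annulus contributes about
% 2 pi s_e |res_e|^2 to the square integral, which dominates as s_e -> infinity.
```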
In the differential, the du/u term is the one which dominates the integral, and you get this. (So you are doing the integral in a family over the base? You choose a lifting of a section of the dualizing sheaf on the special fiber and extend it?) Yes: on the nearby fibers I still have a residue map, by doing this integral, and I can do the square integral. I have a nice algebraic family over my base S because the ω form a vector bundle; I can take a basis near s, and then I get a basis of the differentials on each curve, and it has a residue map to this X. And the kernel I can also consider: for something in the kernel, the square integral is about the square integral over the components, and for something with residues, the square integral is dominated by this term, because I assume the s_e are big; I want the asymptotics close to the singular curve. (By the way, in your exact sequence you map to C^E; what do you say about X?) X is the space of loops in the graph. (But what is the relation?) X lies in C^E, because the kernel of the boundary map lies in there.
(You claim that the image of the residue map is this X?) Yes, the image of the residue map is X. OK, so I want to find an orthonormal basis. I take a basis of X which is orthonormal for the inner product on X, with the norms weighted by the s_e, and an orthonormal basis of the kernel, and then I get an orthonormal basis of the whole space by combining them. And I can lift the basis of X: if the s_e become big, then the lifted x_e become small, and I can lift them so that they are small on the components; the integrals are as above. So this gives me an orthonormal basis, and then the Arakelov measure.
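As an aside, the combinatorial ingredient here, the space X = H_1(Γ) of loops with an inner product weighted by the s_e, is easy to compute. A minimal sketch (the graph, weights, and function names are illustrative, not from the talk; the graph is assumed connected): build a basis of the cycle space of an oriented multigraph from a spanning tree, and evaluate a weighted inner product on it.

```python
# Cycle space X = H_1(Gamma) of an oriented (multi)graph, computed from a
# spanning tree: each non-tree edge determines one fundamental cycle.
# Edges are pairs (tail, head); a cycle is a coefficient vector over the edges.
def cycle_basis(num_vertices, edges):
    adj = {v: [] for v in range(num_vertices)}
    for i, (a, b) in enumerate(edges):
        adj[a].append((b, i, +1))   # traverse along the orientation
        adj[b].append((a, i, -1))   # traverse against the orientation
    parent = {0: None}              # BFS tree rooted at vertex 0
    order = [0]
    for v in order:
        for (w, i, s) in adj[v]:
            if w not in parent:
                parent[w] = (v, i, s)
                order.append(w)
    tree_edges = {i for (v, i, s) in
                  (parent[w] for w in parent if parent[w] is not None)}

    def path_to_root(v):
        # chain (edge-coefficient vector) of the tree path from v to the root
        vec = [0] * len(edges)
        while parent[v] is not None:
            (u, i, s) = parent[v]
            vec[i] -= s             # going v -> u reverses the BFS traversal sign
            v = u
        return vec

    basis = []
    for i, (a, b) in enumerate(edges):
        if i in tree_edges:
            continue
        # cycle = edge (a -> b), then the tree path from b back to a
        vec = [x - y for x, y in zip(path_to_root(b), path_to_root(a))]
        vec[i] += 1
        basis.append(vec)
    return basis

# Inner product on X in which, as in the talk, the edge e is weighted by s_e.
def weighted_inner(x, y, s):
    return sum(se * xe * ye for se, xe, ye in zip(s, x, y))
```

For a triangle on three vertices this returns the single cycle [1, 1, 1], whose boundary vanishes; the number of basis elements is always |E| − |V| + 1 for a connected graph, the first Betti number of Γ.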
OK, so this is one thing.
Then I have to define these h_pq's. So I have to find the h_pq.
One last comment: plus the Arakelov measure — you get the Arakelov measure once you know an orthonormal basis. So here it is an orthonormal basis, over the family, of the space of differentials. But what do you mean, you get the Arakelov measure on the fibre? On the fibre, I see, yes. So what is it? The Arakelov measure has sort of some part
which lies in these components, and some part which is concentrated on the annuli, and the rest is sort of small. And on the components I sort of get the Arakelov measures of these curves, except I have to multiply by the genus divided
by the total genus, because for the Arakelov measure I have this 1 over g. And what kind of stability or semi-stability do you have on the family? I mean, about genus 0 components — do you assume something or not? I mean, on the genus 0 components
the g_v is 0, so the Arakelov measure is sort of 0 on the component. And then I get an additional contribution which sort of lives on these annuli, and this is sort of controlled by this X
with its inner product. Yeah, but anyway, sometimes when people do degenerations of curves, they assume the stability condition, that each rational curve has at least 3 special points. Do you assume something like this, or do things
break down? I think certainly semi-stable is enough, but offhand I don't want to commit myself.
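In a formula, the limiting behaviour of the Arakelov measure just described would read roughly as follows (my paraphrase; g_v is the genus of the component C_v, g the total genus, and mu_{Ar,v} the Arakelov measure of C_v):

```latex
\mu_{\mathrm{Ar}} \;\longrightarrow\;
\sum_v \frac{g_v}{g}\,\mu_{\mathrm{Ar},v}
\;+\; \bigl(\text{contribution on the annuli, controlled by }
(X,\langle\,\cdot\,,\cdot\,\rangle_X)\bigr)
```

with the genus-0 components contributing 0, since g_v = 0 there.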
OK, so we have to find these alpha_pq's, and you sort of can find nice algebraic families. Let's say if you have two points p, q in there, you can find nice algebraic families, but you need the condition that the periods are
purely imaginary. So then, how to get that?
And the periods — I mean, there is a filtration; there are sort of three types. One is the loops around the annuli,
the second one is the loops inside the curves C_v, and the third one are the periods which correspond to loops in gamma.
So I think I call this gamma now, not G. And the integrals over the first type
are the residues. So I want that the residues should be real; that is, 2 pi i times the residue at e
should be real. Then you have these interior loops, and these you can correct
by adding differentials on the curves which have no poles, and they are sort of unique. So: interior loops in the C_v.
And then the most interesting ones are the loops in X — loops in the graph gamma
which correspond to X. So I said I have differentials with given residues, and I can lift them uniquely so that their interior periods are purely imaginary. So: lift X to differentials with purely imaginary interior loop periods.
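The filtration of periods into three types, as listed above, can be recorded schematically (my summary in the talk's notation):

```latex
% (1) loops around the annuli (edges e): periods are residues,
%     normalized so that  2\pi i \cdot \mathrm{res}_e(\alpha) \in \mathbb{R};
% (2) interior loops in the components C_v: made purely imaginary by
%     adding the (unique) holomorphic correction differentials on C_v;
% (3) loops in the dual graph \Gamma, corresponding to elements of X.
```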
And I also should say: the residues should lie in the real base extension of X. And then I want to add such differentials, and because their interior periods are purely imaginary,
I don't destroy what I've already achieved. So this is good. And the integral around a loop is a linear form on X.
So the integral around a loop is a linear form on X,
and it differs from the sum over e of s_e x_e y_e — so from a bilinear form on X —
only by a bounded bilinear form.
You have a bilinear form on X cross X, where one X describes a loop and the other X describes a differential, and then you take the integral of the differential around the loop.
This gives you the bilinear form. And the integral is dominated by the integrals around the interiors of the loops, the annuli, which give you these terms s_e x_e y_e. And then there is the part of the integral which goes from one annulus to the next,
but this is bounded. So this is a small correction. And then you want this form to be 0.
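So the pairing between a loop attached to x in X and the differential lifted from y in X has, if I transcribe the statement correctly, the shape:

```latex
\operatorname{Re}\int_{\gamma_x}\alpha_y
\;=\; \sum_e s_e\, x_e\, y_e \;+\; O(1)
```

where the main term comes from the integrals through the annuli and the O(1) from the bounded pieces of the loop joining them.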
So we have a linear form on X, and you have this non-degenerate inner form, and then you can — no, sorry, this I said wrong;
I was going too fast. So I had my candidate alpha_pq, which I fixed in the first step, so that on the interior loops it has purely imaginary periods.
And then I take these more complicated loops, and I take the real parts of these integrals, and they give a linear form.
So this is the linear form, and I want to subtract something to make it 0, because I want the real parts to be 0.
So are you working now over the complexification — is it X_C or X_R? No, I mean I take only real parts, because I want this condition, that around the loops at the edges the
integral is purely imaginary. So I want an x with real residues. Then I take my alpha, my candidate for the form, which I have to correct by adding some form parameterized by X. And the real part of the integral around loops in X
gives me a linear form on X. And this linear form comes from X: an element of X itself gives a linear form which is this one — the inner product — plus a bounded function. And so this means I can correct it by taking
the orthogonal projection: the linear form is determined by the inner product with an element of X, and then the correction is this element plus some
term bounded uniformly in x. So this gives me the — what? Five more minutes.
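A hedged way to write this correction step: since the inner product on X is non-degenerate, the linear form given by the real periods is represented by an element of X, and that element is what one subtracts:

```latex
\lambda(x) \;=\; \operatorname{Re}\int_{\gamma_x}\alpha
\;=\; \langle x,\xi\rangle_X
\quad\text{for a unique }\xi\in X
```

so correcting alpha by the differential parameterized by xi kills the real periods, up to a term bounded uniformly in the s_e.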
Yeah, sorry. I don't know whether answering questions counts as added time, like in soccer. Yeah. So this way you can construct your differentials h_pq,
and these orthogonal projections are given by the graph. And then you can compute the g_pq, and then you can compute all the invariants.
So since I'm supposed to finish, I should say the final result. The final result is that the interesting invariants —
like the values of these g's on the components, and also the delta function — are rational functions; homogeneous rational functions, homogeneous of
degree one in the s_e.
But unfortunately they are not linear forms; the delta is not a linear combination of the s_e.
And so this means there is no metric on O of the boundary divisor of the moduli space which
gives this delta function. So it doesn't work that way, but at least you can compute things here.
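A toy illustration (my own example, not from the talk) of a function that is homogeneous of degree one in the s_e without being linear — the behaviour just asserted for the limit values of the g's and of delta — is the effective resistance of two edges in parallel:

```latex
F(s_1,s_2)=\frac{s_1 s_2}{s_1+s_2},
\qquad
F(\lambda s_1,\lambda s_2)=\lambda\,F(s_1,s_2),
\quad\text{but } F \text{ is not linear in } (s_1,s_2)
```

Effective resistances of this kind are exactly the homogeneous rational functions that graph Green's functions typically produce.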
Or I should say, I have a student, Robert Wilms, who
was supposed to do this, but he did something else — which happens with some of my students — and it was also very good, so I had to do this myself.
Are there questions — perhaps from someone other than Ofer? I'm sure he has questions. Can you say, in this case where you have what you call the linear interpolation — so one of the points is on the annulus — where can the other point be to get a contribution?
Well, it could be anywhere. But is there some intuition behind that? Because even in the limit they seem kind of far apart, right? Well, I mean, if you have the linear interpolation, it's enough to have the points on different components. And for the components, for the graph, there is
something: I had this R^V, and a differential d, and I have an adjoint. And I can also scale the inner product — it is
the natural inner product, scaled by the s_e — and then you get a Laplace operator.
And there is a similar thing: you have an Arakelov measure here on the graph — you put g_v over g on each vertex v, and
on each edge you have a certain measure which you divide equally between the two ends, so you get a certain measure mu. And then you want to solve for a Green's
function: you want to solve the equation delta of g
equals mu minus delta_p, I should say. The mu has total measure 1, and if you subtract the point mass, then the right-hand side has total measure 0, so it is in the
image of the Laplace operator. And this gives you a Green's function on the graph. And the asymptotic of the Green's function on the surface is essentially given by this one; as I said, it is computed by the graph. We resume at 5 to 12, but let's thank the speaker again.
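The combinatorial Green's function described at the end can be sketched numerically. This is a minimal illustration under my own conventions (edge conductances 1/s_e, a toy triangle graph, uniform measure mu), not the speaker's normalization:

```python
import numpy as np

def graph_laplacian(n, edges, lengths):
    """Weighted graph Laplacian; each edge of length s_e contributes
    conductance 1/s_e (an illustrative convention)."""
    L = np.zeros((n, n))
    for (u, v), s in zip(edges, lengths):
        c = 1.0 / s
        L[u, u] += c
        L[v, v] += c
        L[u, v] -= c
        L[v, u] -= c
    return L

def green_function(L, mu, p):
    """Solve  Delta g = mu - delta_p  on the graph.  The right-hand
    side has total mass 0, hence lies in the image of the Laplacian;
    the pseudo-inverse picks the solution orthogonal to constants,
    and we normalize so that the mu-average of g is 0."""
    rhs = mu - np.eye(len(mu))[p]   # mu - delta_p, sums to 0
    g = np.linalg.pinv(L) @ rhs
    return g - mu @ g               # fix the additive constant

# Toy example: a triangle (one loop in the graph) with edge lengths s_e.
edges = [(0, 1), (1, 2), (2, 0)]
lengths = [1.0, 2.0, 3.0]
L = graph_laplacian(3, edges, lengths)
mu = np.full(3, 1.0 / 3.0)          # total measure 1
g = green_function(L, mu, p=0)
print(np.allclose(L @ g, mu - np.eye(3)[0]))
```

The solution is degree-one homogeneous in the lengths: rescaling all s_e by a factor rescales g by the same factor, matching the asymptotics stated in the talk.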