
3/4 Chaotic Properties of Area Preserving Flows


Formal Metadata

Title
3/4 Chaotic Properties of Area Preserving Flows
Series Title
Part
3
Number of Parts
4
Author
License
CC Attribution 3.0 Unported:
You may use, modify, and reproduce the work or its content in unchanged or modified form for any legal purpose, and distribute and make it publicly available, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Flows on surfaces are one of the fundamental examples of dynamical systems, studied since Poincaré; area preserving flows arise from many physical and mathematical examples, such as the Novikov model of electrons in a metal, unfoldings of billiards in polygons, and pseudo-periodic topology. In this course we will focus on smooth area-preserving, or locally Hamiltonian, flows and their ergodic properties. The course will be self-contained, so we will define basic ergodic theory notions as needed, and no prior background in the area will be assumed. The aim of the course is to explain some of the many developments of the last decade. These include the full classification of generic mixing properties (mixing, weak mixing, absence of mixing) motivated by a conjecture of Arnold, up to very recent rigidity and disjointness results, which are based on a breakthrough adaptation of ideas originating in Marina Ratner's work on unipotent flows to the context of flows with singularities. We will in particular highlight the role played by shearing as a key geometric mechanism which explains many of the chaotic properties in this setup. A key tool is provided by Diophantine conditions, which, in the context of higher genus surfaces, are imposed through a multi-dimensional continued fraction algorithm (Rauzy-Veech induction): we will explain how and why they appear and how they allow one to prove the quantitative shearing estimates needed to investigate chaotic properties.
Transcript: English (automatically generated)
So welcome back, welcome to everybody. So I'm continuing the course, so we had a week break.
So the first week I kind of explained a lot of background material and motivation and an outline. So let me remind you that we are studying smooth area preserving flows on surfaces. They are actually symplectic, or they are actually locally Hamiltonian flows. And I'm considering the generic case where the flow must have
singularities and I'm assuming that the singularities are simple saddles or centers. And we are interested in the ergodic properties of these flows.
And especially, so far I'm concentrating on the presence or absence of mixing. The genus one case of this question was considered by Arnold, who conjectured mixing in locally Hamiltonian flows with one saddle and one center on the torus, a conjecture
proven by Sinai and Khanin shortly after. And in the whole course we are trying, I'm trying to go into the proofs and the techniques to prove the full classification in higher genus. And in higher genus there is kind of a dichotomy.
You have an open set where no mixing is typical and an open set where on various minimal components the flow is mixing. And we are trying to describe this picture. And today I really want to go into the proofs. We'll probably do mostly mixing today, but I will prepare some things for
absence of mixing, which I'll finish on Thursday. So last class on the first day, I just gave an overview and motivated this question and the picture. Last lecture, we were more concrete and we said we are going to give a very concrete representation of these flows, which is useful for the proofs.
So let me recall one proposition which we essentially proved in almost all details last time. So I'm looking at a locally Hamiltonian, equivalently a smooth area preserving, flow with non-degenerate saddles.
And I look at the minimal component. So maybe I will make a digression before I comment on that. Let me say something which I didn't say the first time; maybe now is a good moment to say it.
So let me write the minimal component decomposition. And this is a result which is quite old, due independently to Maier and to Levitt; also Anton Zorich has a proof, maybe unpublished, I don't know.
So if you have such a flow, the surface can be decomposed into the following fundamental blocks: into elliptic components.
I'm going to give them this fancy name, but this is a result from quite old times, the 60s maybe even.
So I can check later, but elliptic components: these will be either islands which have a center surrounded by closed orbits — disks, essentially, disks filled by a center and closed orbits — or cylinders filled by closed orbits.
So these are disks plus a center, or a cylinder. And the boundary of these elliptic components is indeed a saddle loop homologous to zero.
So at the boundary of this disk, you will see this famous half figure eight. And similarly, in this case, at the boundary of your cylinder,
you will see some saddle connections. And typically you will see these figure eights. This was what Arnold had already remarked. And these may also disconnect your surface. So it could happen that your surface has one of these cylinders,
and this cylinder could disconnect your surface into more parts. And let's see, OK, maybe I'll put some extra genus here. And there are up to g minimal components, where g is the genus.
So the basic example we had last week was the Arnold flow, where you have one island. And in the typical case, a minimal component on the complement.
And so what we are describing now are these minimal components, OK? So a minimal component is essentially a subsurface, possibly with boundary, where orbits are dense. The flow dynamics is trivial in the elliptic parts, so we are interested in the ergodic properties of the minimal blocks.
And now we go back here. So for each of these minimal components, I can represent it as a special flow. So we showed that the Poincaré map on a codimension one section is an interval exchange transformation.
And I can represent the flow by this picture of a special flow, where points move vertically up, under some identification between the graph of f and the base. And the function which appears in the special flow is nothing else than the return time.
So: how long it takes a trajectory to come back to the section. So this return time explodes at the singularities. And by a calculation on Hamiltonian simple saddles, you can see that it blows up logarithmically. So it takes a logarithmic amount of time to come back —
logarithmic as a function of the distance. So this is the explicit form of the roof. And I take the chance to correct some typos from last time. So someone correctly asked me, what's the definition of x plus? I don't know who it was in the audience.
So indeed, this function wants to take the positive part. So if this argument is positive, this is the absolute value of the log. If the argument is 0 or negative, I want this to be 0. So I set the argument to 1, so that the log is 0.
Sorry, I think I wrote 0, which of course is silly — I don't want infinity there. And there are two crucially different cases. So this is kind of the right side of the discontinuity.
This is the left side of each discontinuity. And they each have a constant. And, OK, so there are two cases. The singularities are called asymmetric, and we write asymmetric log — I will use this notation today.
We say that the roof belongs to the class AsymLog(T). The dependence on T is only because the singularities coincide with a subset of the discontinuities of the interval exchange. So they vary with the IET.
And it's asymmetric if the constants to the left add up to something different than the constants to the right. And that is the case of the Arnold flow, where you have a constant on one side and twice the same constant on the other. And so every time you have saddle loops homologous to 0,
like in this case of Arnold, they produce some asymmetry between the right and the left side. And typically you will have — and maybe let's write phi_t in this set U, I don't remember —
So typically, you have to be in an open dense set where there cannot be further cancellations that compensate for the asymmetry. And in the other case, symmetric, the constants are the same from right and left. And this difference, I'm stressing it
because today it will be crucial. And let me recall the philosophy: we want to prove mixing for typical IETs when there is an asymmetry. And we want to prove absence of mixing for typical IETs
when there is a symmetry. So this will be our goal. And we will stay today in this language of suspension flows. And what I want to explain mostly today is what these conditions are that you need to put on the IET, on the interval exchange, which are some sort of Diophantine conditions for interval exchanges. And today, we will use Rauzy-Veech induction
to explain these conditions. OK, but I also spent some time last time describing heuristically what produces mixing, if there is mixing: the geometric phenomenon of shearing. So before we do Rauzy-Veech induction,
let me write formally a criterion for mixing, and a criterion for absence of mixing, for a special flow like this. OK, so I already spent some time justifying this phenomenon
that when there is an asymmetry, there is shearing. And shearing is quantified in terms of Birkhoff sums. But OK, maybe the criterion is actually more elementary than this. So you have a special flow.
So if for any rectangle B — this is my target rectangle — there are partial partitions...
So I want to find partitions; it's enough to do them for one slice, and I can do it for the base. Partial partitions of [0, 1] into intervals.
By partial partition, I mean a disjoint union of intervals that maybe doesn't fill fully. So I can remove some part of the space — some partial partitions. Let me call them P_t, such that the Lebesgue measure of P_t
goes to 1. So they are partial, but they fill as time grows. And the mesh, the largest interval goes to 0.
So if you can cut the base into as many small intervals, that fill more and more of the space, and size goes to 0. So the partition is tending to the trivial partition
into points. And each of these intervals separately equidistributes. And for every, let's call it J in the partition, so this would be a small interval.
Every of this J, if I look at the Lebesgue measure of the points in J intersected with phi minus T of B. So these are points in J that after time T enter B.
So this is the proportion of J which, when I flow, will intersect my target rectangle. The Lebesgue measure of the set of this, I can write it like this. Stands to J times the area.
This is Lebesgue measure of J times the area of B. So each of them equidistributes. Then phi T is mixing.
This is not a very deep lemma. So essentially, I will only write one word for the proof. And this word is Fubini. So you can just prove this by a Fubini argument.
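In display form, the criterion just stated reads as follows (a recap, with Leb denoting Lebesgue measure and B the target rectangle): if

\[ \mathrm{Leb}\Big(\bigcup_{J \in P_t} J\Big) \longrightarrow 1, \qquad \mathrm{mesh}(P_t) \longrightarrow 0, \]

and, uniformly over the intervals J in P_t as t goes to infinity,

\[ \mathrm{Leb}\big(J \cap \varphi_{-t}(B)\big) \;\sim\; \mathrm{Leb}(J)\,\mathrm{Area}(B), \]

then \varphi_t is mixing.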
And this is what I said at the end of the last lecture informally. If I have a target set A, I want to slice it into horizontal pieces. Once I have a partition into horizontals, you can flow it easily, and it's enough to prove it for the horizontals. I want to find many small intervals that independently equidistribute.
I think I need a new chalk. And basically, you can get mixing. So this is the key property: that each interval equidistributes.
So to prove the star, what we will show — what we want to show, we said last time — is that each of these intervals shears and shadows a long trajectory of the flow.
So the idea is that there is the shearing phenomenon when there is asymmetry. And horizontal intervals shear in the prevalent direction of the asymmetry. And become almost linear and almost vertical.
And once they are almost linear and almost vertical, they are almost a trajectory of the flow. And the flow, if the base is uniquely ergodic, which is typically the case, long trajectories equidistribute. So what you need, one needs the following.
So we need shearing. And shearing — as we saw last time from the explicit formula for the evolution under a suspension flow — we computed shearing in terms of Birkhoff sums,
if you remember. So one needs to show that S_r(f') divided by r tends to infinity. Let me recall the notation here: S_r(f) is the Birkhoff sum.
So this is my notation for Birkhoff sums of the function along the base transformation. And we proved that shearing is described by these Birkhoff sums of the derivative.
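In display form, the notation and the shearing condition (1) read:

\[ S_r(f)(x) \;=\; \sum_{i=0}^{r-1} f(T^i x), \qquad \frac{S_r(f')(x)}{r} \;\longrightarrow\; \infty \quad \text{as } r \to \infty. \]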
Of course, here there is a derivative. And the derivative has 1 over x type singularities, which are not integrable. So we are out of the standard range of the Birkhoff ergodic theorem. So what we will prove, indeed, is that the Birkhoff sums grow faster than expected when there
are these types of singularities. And then — this is what we will try to focus on today — you also need some kind of distortion bound. So you have some kind of quantity:
you also need some estimate on the second derivative. So, for example, if you look at the Birkhoff sums of the second derivative and take the sup over each partition interval J — maybe I should put r of t here — the sup of the second derivative
versus the first derivative has to go to 0, for J in P_t. And then you also need — maybe I'll finish here.
And then you need equidistribution. This is just about the flow. So if I look at a long trajectory of my flow —
you want to prove: if you have a trajectory of my special flow, phi_t(x) — so this is the special flow — I want to say how much time it spends in the rectangle B.
And I average the time spent in B. So this is the usual ergodic theorem: this tends to the area of B. So this is simply property 3, coming from unique ergodicity —
actually, we need some uniform, quantitative version of unique ergodicity — of the interval exchange. I know not everybody works in ergodic theory, so if you don't want to know what ergodicity is,
this is a fact: you have equidistribution of trajectories. Yes? Can you repeat, what was the M-E-S-H, mesh? What are you looking at here? Mesh — what is the mesh?
This is a partial partition — it's just a union of disjoint intervals. And for these intervals, you just take the largest diameter, just the size. So these intervals are getting smaller and smaller and filling more and more of my space.
So I don't understand — what do you mean when you write the arrow below? You mean as t goes to infinity? To be honest, I was a little bit sloppy. So maybe what I should write is like this: for every epsilon greater than 0, there exists a t_0 of epsilon,
such that for every t greater than t_0 — eventually; actually, let's put it like this, 1 minus epsilon — the Lebesgue measure of the proportion of each interval, which does whatever it should do.
OK, and so it's for J in P_t. For every J in P_t. So for every epsilon, there exists a t_0, such that for every J in P_t, this proportion is close to the limit. That's bad — you're right. Are you happy?
The ratio goes to whatever it should go to. You're right, the size is going to 0. Say it again — what is r_t then, on the right? What is where? Just below. r_t — OK. This is written, maybe let me make some space:
it is r_t(x), inside S_{r_t(x)}(f'')(x) and S_{r_t(x)}(f')(x). So r_t is a way to relate discrete time with continuous time. So r_t(x) is the number of discrete iterates that the point x undergoes when flowing for time t.
So I can write it as the max over r greater than 0 — I'm not sure how I defined it last week — such that S_r(f)(x)
is less than t. The max r, the largest, for which this is less than t. So this is the number of iterates that my point undergoes under my suspension flow. Here you could write everything with discrete time if you prefer, but I parameterize the partitions
by continuous time. So I need to link discrete and continuous time. So are you with me? So this criterion, I hope, is clear.
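In display form, the notation just discussed:

\[ r_t(x) \;=\; \max\{\, r \ge 0 \;:\; S_r(f)(x) \le t \,\}, \]

the number of discrete iterates the point x undergoes when flowing for time t.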
So we want to find small segments which equidistribute. How are we going to do it? Again, let me draw the picture. I'm going to show that each of these small segments J, when I flow it, will stretch and look like a line.
So essentially, I will not tell you anything more than this. I want to convince you heuristically that if I prove 1, 2, and 3, they give me the possibility of applying the criterion.
So I'll give you just a sketch. So essentially, the sketch that 1 plus 2 plus 3 implies mixing is that, basically, I just
say that 1 plus 2 implies that phi_t(J) asymptotically tends to somehow a vertical line —
a long, almost vertical line. So 1 is giving you the shear: it's giving you that it's stretching. And 2 is giving you some distortion control. So 2 is giving you that it's stretching and becoming
a line, not a parabola. And then this implies that, as we said last time, mixing reduces to equidistribution, i.e. 3.
So if these curves become vertical lines,
I just need to know how much time each of these trajectories spends in the target B. And this is given by ergodicity. It's a quick sketch, but I don't want to spend more time than this.
So what I will really try to prove — and I should say this criterion is, in some sense, quite standard. It has been used by many people, starting from Kochergin, Sinai and Khanin, and Bassam Fayad for analytic reparametrizations of flows
on tori — actually, he has a nice text with all the details. And by myself, by Davide, by Chaika and Wright, by anybody who has proven mixing for these flows. So this is very standard. So what I really want to try to explain
is how you get this type of estimate, and what you need on the interval exchange to prove them. So I want to understand, basically, Birkhoff sums of a function which has 1 over x singularities, and try to prove that there is stretching in the asymmetric case.
OK. And what is the other side? Sorry, I will go here. Maybe we go here. I also want to tell you the opposite.
So this is formalizing a little bit more what we did in the last class, heuristically: the mixing via shearing. I said at the very end of the Thursday two weeks ago that not only does shearing give you mixing, but for systems with enough rigidity,
this is the only way to get mixing. So no shearing, no mixing. And that goes in the other direction. So when we have symmetry, we will prove that there is no shearing, and we want to deduce that there is no mixing. So this is not true in general,
but let me then tell you what you need on the base for this to be true. So we have a criterion for mixing, and now let me tell you a criterion for absence of mixing, no mixing.
And this is also old. This goes back to Kochergin in the 70s. And maybe I should also say Katok.
OK. So I need one preliminary definition: the definition of partial rigidity — one form of partial rigidity. So I said shearing is necessary as long as the base,
so the IET, is rigid enough. What does rigid enough mean? So let's say E_n, r_n is a partial rigidity sequence.
So the E_n are subsets of [0, 1] — the sets where the partial rigidity will happen — such that the Lebesgue measure of E_n is bounded below by a positive constant.
So these are the sets of partial rigidity. And the r_n are times going to infinity: these are the partial rigidity times.
And basically, you want that T^{r_n} restricted to the set E_n converges to the identity. So if I look at E_n at the rigidity time,
this set is essentially fixed by my map. And you can think of this in the infinity norm. Actually, what we will prove is that there are partitions P_n, partitions
converging to the trivial partition, mesh going to 0, such that for every F in P_n — you can forget this,
you can just think of the previous version, but precisely: we prove that there are partitions so that when I intersect F with my E_n and apply T^{r_n}, this goes back inside F. This is a way to say that things are moved by very little, OK?
So, sorry, this is the definition. And here comes the criterion for absence of mixing in special flows. So phi_t is a special flow over T under f.
So T does not have to be an IET. It just has to be anything which is partially rigid. So if there exist E_n, r_n as above —
if there are partial rigidity sets and times for the base — and there exists a universal constant such that — and now I want to say there is no stretch on E_n,
and I say this by looking at the Birkhoff sums of f at time r_n at points of E_n. Let me write it: for every x in my E_n.
So if I look at two points in E_n, and I look at the difference of the Birkhoff sums at time r_n, the difference stays bounded, uniformly in n.
So this is no stretch. I'm writing it in a different color: this is no shearing. This tells me that two points on this set E_n are not sheared —
there's no discrepancy between the two. If there is no shearing on a partial rigidity set: no mixing. You can try to prove this as an exercise on special flows and mixing, if you want.
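In display form, the criterion just stated: if there are sets E_n and times r_n going to infinity with

\[ \mathrm{Leb}(E_n) \ge c > 0, \qquad T^{r_n}\big|_{E_n} \longrightarrow \mathrm{Id}, \]

and a constant C > 0 such that

\[ \big| S_{r_n}(f)(x) - S_{r_n}(f)(y) \big| \le C \qquad \text{for all } x, y \in E_n \text{ and all } n, \]

then the special flow \varphi_t is not mixing.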
So what's the idea? I hope the statement is clear. So the idea is that if I have a set which comes back close,
and at the same time, at the corresponding time, the Birkhoff sums don't stretch, somehow my set does not have hope to equidistribute. It cannot mix. It stays kind of in a subset of my space.
It's not a proof, but I want to say the idea: too much rigidity in the base and no stretch forces some sets to self-intersect too much, making mixing impossible. So you can try it if you want, or Katok or Kochergin would have a proof.
But again, from now on, I think I will not explain mixing directly anymore. So again, from now on, I will try to prove this type of estimate of stretching
in the asymmetric case. And I will really prove this assumption of the criterion in the absence of mixing case. So I will build for you these partial rigidity sets. And I will try to explain — probably we'll go into Thursday morning — but we will try to present how
you prove these cancellations and no stretching estimates. OK? Are you happy? So from now, we abandon the geometric picture of shearing and mixing. And we focus on estimates on Birkhoff sums. So this is an estimate on the derivative.
This will also be an estimate on the derivative. I will prove that the derivative is less than a constant times R on an interval of size 1 over R. So by the mean value theorem, we will have bounds. So they will all be estimates on derivatives of non-integrable functions.
And in the symmetric case, we will exploit that these functions, even though they are non-integrable, they have somehow principal value 0. So they are symmetric enough that, OK? Any questions?
So I hope you got a feeling of this shearing in action. So really, it's a good picture to remember in the parabolic world, in the world of flows with entropy 0.
So mixing, really, I don't know of an example where mixing does not happen because of, I mean, shearing seems to be really a key feature for mixing. So I think it's something good to remember.
So what I want to do now is to explain how to study Birkhoff sums over interval exchanges. And to do that, we need to start a new chapter. So we need to talk about renormalization.
Concretely, we are going to say something about Rauzy-Veech induction. So this is really a key tool to study interval exchanges, which has a long history. It goes back to Rauzy and Veech in the 80s.
And it's very much used in Teichmüller dynamics. But I don't want to do a course — this is not a course on Rauzy-Veech induction. I'm going to tell you only what I need from the perspective of the goal of this course. There are beautiful lecture notes
by Jean-Christophe Yoccoz, who taught several courses in Paris, and by Viana and others. So let me just tell you the idea. First of all, this is the replacement of continued fractions. So if you like to work with rotations
or Hamiltonian systems, you might like to impose Diophantine conditions on rotation numbers through continued fraction statistics. So when you have an interval exchange, we are going to impose Diophantine conditions on the interval exchange by using this tool.
OK? And we will describe the typical IETs for which the theorems hold through this. So what is Rauzy-Veech induction? What is renormalization? So first, let me make a trivial remark. Say that I have an IET, an interval exchange, right?
And I look at a subinterval J contained in I. I can induce. So T_J — maybe it's a definition;
the remark comes later. T_J from J to J is the induced map. So this is a standard construction in ergodic theory. It's a little bit like the section. So given a point in J, what is the induced map?
It's like a first return map, just in the same space. So a point in J will go out under T, and then at some point it will be back in J. So I just accelerate my map until I'm back in J. So x goes to T^{r_J(x)}(x).
So I have to use a power of T, an iterate of T, where r_J(x) is the first return time to J:
the minimum r greater than 0 such that T^r(x) is back in J. So this is standard. I can induce a map.
Remark: so far I haven't used that T is an IET. But now I will: if T is an IET, T_J is again an IET.
Maybe let me add something: if T has d intervals, d continuity intervals, the induced map has at most d plus 2 exchanged intervals.
So it's very similar to what we did for suspensions last week. How do you prove this? Well, you look at the discontinuities of T and the endpoints —
the plus 2 comes from the endpoints — and you look at the pre-images. You kind of pull back the discontinuities of T, and the first times that they enter J will create the discontinuities of the induced map. OK, it's one of the first properties
you can prove for IETs. So maybe I can just say: pull back to J the discontinuities of T, plus the endpoints of I.
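Since this construction is so basic, here is a minimal numerical sketch of it (an illustration, not from the lecture; the representation of an IET by lengths plus a permutation is one common convention, and the function names iet and first_return are made up):

```python
def iet(lengths, perm):
    """IET on [0, sum(lengths)): interval j (0-based, in domain order)
    is translated so that it occupies position perm[j] in the image."""
    d = len(lengths)
    left = [sum(lengths[:j]) for j in range(d)]        # left endpoints, domain
    image_order = sorted(range(d), key=lambda j: perm[j])
    new_left, acc = {}, 0.0
    for j in image_order:                              # left endpoints, image
        new_left[j] = acc
        acc += lengths[j]
    def T(x):
        for j in range(d):
            if left[j] <= x < left[j] + lengths[j]:
                return new_left[j] + (x - left[j])     # translate interval j
        raise ValueError("x outside the domain")
    return T

def first_return(T, a, b, x, max_iter=10**6):
    """Induced map: iterate T until the orbit of x re-enters [a, b).
    Returns (T_J(x), r_J(x)) with r_J(x) the first return time."""
    y, r = T(x), 1
    while not (a <= y < b):
        y, r = T(y), r + 1
        if r > max_iter:                               # guard: a.e. point returns
            raise RuntimeError("no return within max_iter iterates")
    return y, r

# Example: a 3-IET; induce on J = [0.0, 0.3).
T = iet([0.3, 0.5, 0.2], [2, 0, 1])   # interval 0 -> last, 1 -> first, 2 -> middle
print(first_return(T, 0.0, 0.3, 0.05))  # approximately (0.15, 3)
```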
And Rauzy-Veech induction gives you a procedure, an algorithm, to choose a sequence of inducing intervals. So we want to induce our IET on smaller and smaller scales.
And the algorithm is just a recipe for how to build a sequence of inducing intervals. So maybe I will just say: Rauzy-Veech induction
gives a sequence I_n, a sequence of nested intervals.
By nested, I mean that I_{n+1} is contained in I_n, shrinking, with 0 as left endpoint.
So I start from I_0, which is I. And the algorithm will give me a sequence I_n, I_{n+1}.
It will give me a sequence of intervals shrinking to 0 — smaller and smaller, with 0 as an endpoint — such that T^{(n)}, which will be my notation for the induced map, is the map induced on I_n.
So this is a notation: I'm calling T^{(n)} the induced map. If you want, T^{(n)} is
T_{I_n} — a short form for inducing T on I_n. And the intervals are chosen so that the induced map is, again, an IET of d intervals, not d plus 2 or d plus 1.
So Rauzy defined this algorithm. You can kind of do it in a way so as not to miss any chance:
you decrease your interval and capture all the moments where the induced map has exactly d, and not d plus 1 or d plus 2, intervals. And you will get this algorithm. To be honest, it's not so important what the algorithm is. But just to leave it not a mystery,
you choose I_{n+1} to be the following. This is I_n, and this is T^{(n)}.
There will be a last interval of the induced map, and there will be some interval which is moved by the IET to become the last. So what should you do?
You compare the last interval before the exchange with the last interval after the exchange, look which of the two is shorter, and cut that shorter one. What is left will be I_{n+1}.
So I_{n+1} will be I_n minus some J, where J is the shorter between the last interval of T^{(n)}
before and after the exchange. Let me write it like this. I do not want to introduce more notation; I think it's clearer in words.
This is the algorithm. So I should have said that you need a condition to guarantee
that this algorithm never stops. There is one case where you don't know what to do, which is when these two intervals have the same length, and you want to avoid this. But, for example, if you assume that your lengths are irrationally related, you are sure that this will never happen.
In general, it's enough to assume the so-called Keane condition on the IET, if you know what that is. Then your algorithm is defined forever; you never run into this equality case. Not to worry so much. What is really important for me is just that we are inducing the IET.
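Continuing the illustrative sketch from above (same hypothetical conventions), one step of the slow algorithm just described only shrinks the inducing interval; the cutting rule can be written as:

```python
def rauzy_new_endpoint(lengths, perm):
    """One slow Rauzy-Veech step: return the new right endpoint of the
    inducing interval, cutting off the shorter of the two 'last' intervals
    (before vs. after the exchange)."""
    d = len(lengths)
    last_before = lengths[d - 1]          # last interval of the domain
    k = perm.index(d - 1)                 # interval sent to the last position
    last_after = lengths[k]               # last interval of the image
    if last_before == last_after:
        raise ValueError("equal lengths: the algorithm stops (Keane rules this out)")
    return sum(lengths) - min(last_before, last_after)

# With lengths [0.3, 0.5, 0.2] and perm [2, 0, 1]: last_before = 0.2,
# last_after = 0.3, so the new inducing interval is [0, 0.8); one can then
# re-induce numerically on it with first_return from the sketch above.
```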
So we are looking at first return maps of the original IET on smaller and smaller intervals. And let me tell you another standard construction, which is important for us. Again, it's a basic construction in ergodic
theory: that of Rokhlin towers, or Kakutani skyscrapers. This is, again, something quite standard.
So if I have an induced map, I can try to represent the whole space as towers over the induced map. So maybe let me write it like this: one can represent T as acting on
towers over T^{(n)}. So it's a way to reconstruct the original transformation from the induced map, as follows.
So let me write it like this. I need some more notation. Say that I did my induction up to step n. Maybe I'll write it smaller. So this is my I_n:
it's a small interval. And T^{(n)} is my induced IET on this small interval. So let I_j^{(n)}, for j from 1 to d, be the exchanged intervals of T^{(n)}.
So I have the exchanged intervals for T^{(n)}.
And let r_j^{(n)} — again, I'm using this bracketed n for the n-th step; everything which has to do with the n-th step of my induction has a parenthesized n — be the return time of I_j^{(n)} to I_n.
So again, this is the minimum r greater than 0
such that T^r(I_j^{(n)}) is contained in I_n.
So I'll try to have two pictures on my board: one is the picture of the original space, and one is this induced picture. And I have space in between to draw towers. So you have a small interval exchanged by this IET — so this is somewhere in my big space.
And it will go around under T a certain number of times before making it back to the small interval. So you should think of this small interval traveling out of the inducing interval until it comes back. And it's standard to kind of plot
the iterates which are out of the small interval as a tower over the small interval.
So I'm going to plot the r_j^{(n)} iterates — you can plot them with distance 1 — as floors above, and similarly for any of the others. So each interval will have its return time.
And I will plot as many copies as the return time. So maybe this one is taller, and maybe this one is shorter. So I'm plotting towers. I will write just the formula. This is just a graphical representation
that will show you why it's convenient for visualizing the dynamics. So just by definition of the return time, we have that [0, 1) — this is I^{(0)} —
can be written as a union over j from 1 to d — these will be the d towers of the union — of the iterates T^i(I_j^{(n)}),
where each base is iterated for i from 0 to r_j^{(n)} minus 1. And this is a disjoint union.
So each of these small intervals travels for time r_j^{(n)} with disjoint copies, and then makes it back to I_n. (The minus 1 is needed: at r_j^{(n)} the union would already self-intersect.)
So all of this is what I call a tower. This is the j-th tower, which is what I plot like this: it has base I_j^{(n)} and height r_j^{(n)}.
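In display form, the decomposition just described (with Z_j^{(n)} just a label for the j-th tower):

\[ [0,1) \;=\; \bigsqcup_{j=1}^{d} Z_j^{(n)}, \qquad Z_j^{(n)} \;=\; \bigsqcup_{i=0}^{r_j^{(n)}-1} T^i\big(I_j^{(n)}\big). \]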
So this union is called a Rokhlin tower, and this union of towers is called the Kakutani skyscraper. And interval exchange transformations are an example of what is called a finite rank dynamical
system. So when you have this representation with Kakutani towers whose bases shrink to 0, with finitely many — in this case d — towers at every step, you say that your system has rank d. So an interval exchange actually has rank at most d —
right, rank at most d, by intervals. So you can build it with Kakutani skyscrapers with d towers made of intervals. In general, in this construction, the base could be a measurable set; it doesn't have to be an interval. We are doing it with intervals. So renormalization gives you the sequence of towers.
And soon we'll need a break, but I want to finish with a little bit more about these towers — sorry, Kakutani towers. So in this tower, in my picture, at floor k, what I'm drawing here is T^k(I_j^{(n)}).
So these are like a stacked-up version of the iterates. And in this picture, first of all, these towers — you could plot them here, I can plot them here. So basically, what I'm saying is that my original interval
is partitioned into d colors, one per tower. So there are blue, red floors, and there will be some yellow floors. So this is just a statement about partitions, just stating that I can partition my space into floors of towers.
But the reason why we plot it as towers is that I can see the dynamics of t in these towers well. So t, how does t act? So t acts on the towers, towers picture,
by moving up one floor until the top. So just by definition, if I take a point in the base,
where does it go? It goes straight up, because I'm stacking up the images. So the dynamics, you can see it as going up the tower. What do you do when you get to the top? This is return time minus 1. And the next time, I'm back in the base with the induced map.
So the top and the bottom are glued by the induced map. So: move up until the top, then use T^{(n)} to go back into I_n.
So that's why we plot it as towers. So the dynamics is moving up. And at the top, I come back to the base using the induced map. It should remind you of what we did with special flows. But we were using continuous time. This is like a discrete time version. For continuous time, we had the Poincare map. And then we build a special flow.
Here we have a discrete inducing. And we build a discrete special flow. This is some sort of discrete special flow. Good. I want to finish one definition before the break.
And yes, one definition before the break — so you can meditate on it, and if you have never seen it, you can ask me questions. The last definition I want to give is about matrices that arise from this algorithm. I want to define, through this picture, the Rauzy-Veech matrices.
So the induction produces a sequence of matrices that I will call A_n, for n in N: a sequence of d by d
non-negative integer matrices —
so matrices with non-negative, possibly zero, integer entries — as follows. So I will cheat: I will not define exactly the single matrices, but I will define their product, what it is. So let me set a notation.
I will call A^{(n)} the product A_n · · · A_1. And I will define the product instead of the single matrices, such that A^{(n)} is given by just
a counting. So A^{(n)}_{ij} — and I can write it in two ways. I have to do the following; I will first say it, then write it. I will take my interval I_j^{(n)} and iterate it
until it comes back. And meanwhile, I have my original intervals of the original IET: intervals I_i^{(0)} for i from 1 to d. So I fix one of these intervals of the big IET at the beginning, and I count how many times
my small interval enters the i-th original interval before coming back. OK? So let me write: A^{(n)}_{ij} is the cardinality of visits of I_j^{(n)} to I_i^{(0)}.
So I_j^{(n)} is the small interval, and I_i^{(0)} is the original interval: the cardinality of visits of I_j^{(n)} to I_i^{(0)} up to the return time.
Let me write it: you sum the characteristic function of this interval along the orbit up to the return time. Count how many visits, OK? Is it clear what I mean or not? Ask me in the break if not.
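In display form: for any x in I_j^{(n)},

\[ A^{(n)} \;=\; A_n \cdots A_1, \qquad A^{(n)}_{ij} \;=\; \#\big\{\, 0 \le r < r_j^{(n)} \;:\; T^r\big(I_j^{(n)}\big) \subset I_i^{(0)} \,\big\} \;=\; \sum_{r=0}^{r_j^{(n)}-1} \chi_{I_i^{(0)}}\big(T^r x\big). \]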
And maybe, since we are finished with this, I'll make a remark. This was the definition; and towers can also be obtained by so-called —
I don't know if you've seen it or not — by so-called cutting and stacking. This is an aside, not crucial; I'll just put it aside. OK? So if I have the towers — let me draw them as rectangles, just not to draw all the floors — if I have the towers at step n,
and now I induce, say, here, because, say, this is my last interval. So how do I get the next towers? I actually have to cut a piece or a full tower of level n
and stack it on top of one of the previous towers. So this kind of comes from the dynamics. So OK, so where do I go above this level? Above this level, I come back here, so I'm actually doing the next tower. So I can stack it, because that's where
the dynamics will tell me to go. So OK, there is a way to, you see, you have these towers. What you do, you chop pieces and stack them up and get thinner and longer towers. So you can also think that thin and long towers are made by floors of the original towers, if you want,
or that the thin and long towers are made of blocks of the original towers. And these matrices — OK — these matrices also tell you, for instance,
how many blue floors there are in the long tower. Or you could look at intermediate products. So let's write A^{(n,m)}, for n greater than m: this would be A_n down to A_{m+1}.
So the entries A^{(n,m)}_{ij} are the number of pieces of the i-th tower of step m
used to make the j-th tower at step n.
So the entries of these matrices also give you information on how the towers are built up. And now I have five minutes before we resume, in which I can tell you the Diophantine condition and some ideas of the Birkhoff sum estimates, OK?
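One immediate consequence of the definitions, worth recording: each floor of the j-th tower lies in exactly one of the original intervals, so summing the visits over all i gives back the return time — the column sums of A^{(n)} are the heights of the towers — and the intermediate products compose:

\[ r_j^{(n)} \;=\; \sum_{i=1}^{d} A^{(n)}_{ij}, \qquad A^{(n,m)} = A_n \cdots A_{m+1}, \quad A^{(n)} = A^{(n,m)} A^{(m)}. \]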
So now we have defined Rauzy-Veech induction — we defined this induction procedure, and towers and return times, Rokhlin towers, which I like. Sorry, can I ask you to discuss it later? So yeah, I'm starting. Yeah. So in reality, this is the slow algorithm by Rauzy.
It's not the best to use if you want to study many dynamical properties. So there are various accelerations. So you can skip some steps and go faster in your algorithm. And especially, it's not so nice
that these matrices, the single matrices, could have lots of zeros. So sometimes you might want to look at larger steps, so go many steps in one, so that your matrices have strictly positive entries and not zeros. This is sometimes called positive acceleration,
and it was used by Yoccoz, actually, I think, a lot — also by Veech originally. And I want to look at the so-called balanced and positive acceleration.
This is really a crucial tool for me in the estimates of Birkhoff sums, and it's my favorite acceleration of Rauzy-Veech induction. So let me say when a time n is balanced — actually nu-balanced, for some constant nu greater than 1.
So when I look at these towers, I want them to have more or less the same area, more or less the same base lengths, and more or less the same heights. So, let me write it better: say that an induction time n is nu-balanced
if, when I look at the return times r_i^{(n)}, r_j^{(n)},
the ratios are bounded above and below by nu and 1 over nu. So they are comparable up to nu, for every i, j. And so this is height balance, or return time balance:
the towers have roughly the same height. And they should also have roughly the same width. Let me write, actually,
lambda_i^{(n)} for the Lebesgue length of I_i^{(n)} — I'm using lambda as the standard notation for the lengths. And I want that the ratios of the lengths, lambda_i^{(n)} divided by lambda_j^{(n)}, are bounded above and below by nu.
So: roughly the same heights, roughly the same widths. And I want to look at balanced times.
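In display form, an induction time n is nu-balanced if

\[ \frac{1}{\nu} \;\le\; \frac{r_i^{(n)}}{r_j^{(n)}} \;\le\; \nu \qquad \text{and} \qquad \frac{1}{\nu} \;\le\; \frac{\lambda_i^{(n)}}{\lambda_j^{(n)}} \;\le\; \nu \qquad \text{for all } 1 \le i, j \le d, \]

where \lambda_i^{(n)} = \mathrm{Leb}\big(I_i^{(n)}\big).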
It's a fact — which I cannot prove for you, but it comes from the ergodicity of this renormalization map — that for almost every IET, there will be infinitely many times which are balanced, for any nu. So it's essentially just Poincaré recurrence,
or ergodicity. And so I will only look at IETs for which there are infinitely many balanced times. And I will just speed up my algorithm: I want to go from one balanced time and jump directly to the next balanced time.
And I want to avoid having sub-indices. So from now on, I will forget about the induction that I defined and use n as an index for balanced times only. So, notation: from now on, I consider T such
that there exist infinitely many balanced times n — you can fix a nu. OK, and then I will write T^{(n)} for
the map induced at the n-th balanced induction time: T^{(n)} from I_n to I_n. So I'm just renaming by n — is it clear?
I'm just changing the notation and indexing by n only the balanced times. And also — this is actually a little technical thing — I want to assume that all the matrices, each being the matrix to go from step n to step n plus 1, are strictly positive.
So a time could be balanced immediately after a balanced one; I want to space these balanced times enough so that the matrices from one to the next are positive. I can do that just by waiting some number of steps.
OK, so now basically I've changed the induction algorithm, and I'm using this positive balanced acceleration, where from one step to the next I go from balanced picture to balanced picture. And this positivity means that the entries of the matrix from one balanced step to the next are greater than 0 for every i, j.
This positivity tells me that when I have my towers at step n, I have to cut them and stack them to get the towers at step n plus 1. And all towers are cut, and all towers get stacked onto each other. So I do enough of these basic steps
so that every i-th tower gets stacked onto every j-th tower. That's the dynamical meaning. So these are my favorite times to impose conditions on the IET. Just as an aside — you might like to,
maybe I should have said that earlier — let me make another remark. So if you do d equal to 2 — so if you do rotations — the matrices that you get... OK, maybe I'm lying a little bit,
let me not do it. So OK: at some point in your product you will see matrices which look like this or this, where the a_n are the continued fraction entries,
if you know continued fractions — entries of alpha; now this is an aside — for maybe not the first one, but for the second one. So I want to say that these matrices, you should think of them as a way of generalizing
continued fractions. So if you do the algorithm in dimension 2, it will produce 2 by 2 matrices whose entries are related to continued fractions. And many people try to mimic — you should try to mimic — continued fraction properties through these matrices.
So the Diophantine conditions will be conditions on the growth of these matrices. But you should really not do it using plain Rauzy-Veech induction; you should use an acceleration. The balanced positive acceleration is even better, but the positive acceleration is the minimum you need
to have something meaningful. So somehow I leave this as a vague comment, but it's really crucial. So OK. So what is the condition on alpha? What? Irrational, here. Here, if alpha is irrational, I will be able to do Rauzy-Veech induction forever.
And the slow Rauzy-Veech induction will produce matrices of the form 1, 1; 0, 1, a certain number of times — actually a_0 times — and then it will produce 1, 0; 1, 1, a_1 times, and so on.
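In display form: for a rotation by an irrational alpha, viewed as a 2-IET, the slow induction produces the blocks

\[ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{a_0}, \quad \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{a_1}, \quad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{a_2}, \ \dots \]

where a_0, a_1, a_2, ... are the continued fraction entries of alpha.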
This is the slow induction. Sorry, are you asking this? Who asked me the question? Yes, sorry, what were you asking? I was asking, what are those irrational numbers? Any irrational will work, because you will have this type of matrices. And once you produce — no, you have to be a little careful:
basically, you don't want things which escape to infinity. So you don't want your entries to be diverging, yes. Let me think, yes. Let me think. Or, yeah, I think you don't want them to diverge,
because the positive matrices will have this product of two consecutive entries of the continued fraction. So I have to think — but OK, let's discuss it later. OK, so now, good.
Now, new step, special Birkhoff sums. OK, so I can use this induction to study behavior of Birkhoff sums. This is what I want to do next.
And the idea is that the positive times of the acceleration, of balanced times of the acceleration, will give me some particularly good Birkhoff sums that I can study quantitatively, and which I will use as building blocks to study Birkhoff sums. OK, so let me say, now I give myself
a function, which will be, for example, the derivative of the roof function, but it could be any function, defined on I^{(0)}. And I claim that I also have a sequence of induced
Birkhoff sums, or induced functions. So the algorithm produces a sequence of functions, which I will denote S_n(f).
So these will be functions from I_n to R — induced functions, you should think of them as. And be aware, N.B.: S_n(f) is not S_r(f). That was the Birkhoff sum;
this will be the special Birkhoff sum. And these are called special Birkhoff sums. I will define them. So I claim that, given my function,
I have a sequence of functions; I'm calling them S_n(f) — I think it's originally a notation of Marmi-Moussa-Yoccoz — defined by the following. So it's defined on this small interval,
and I will tell you what happens if x is in the j-th subinterval. If x is in I_j^{(n)}, this will be a sum of f. Basically, what you want to do is sum your function up to the return time.
So I will take my point x and look at the Birkhoff sum from 0 up to the return time minus 1. So this small interval has a return time, and I sum my function up to the return time.
So this, if you want, is S_{r_j^{(n)}}(f)(x). It's a special Birkhoff sum: a Birkhoff sum up to the return time. And this, let me write it, is the sum along the tower.
So the picture would be: I have a point x in the j-th tower, and the orbit of this point is moving up, up, up, up to the return time. I just sum the values of my function along the tower. It's the sum along the j-th tower at that point.
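To make the definition concrete, here is a minimal sketch in Python (the map and the inducing interval are toy stand-ins of mine, with a circle rotation in place of a general IET): the special Birkhoff sum at x is the plain Birkhoff sum run for exactly the first-return time of x to the inducing interval, that is, the sum along the tower above x.

    def special_birkhoff_sum(f, T, x, in_interval):
        """Sum f along the orbit of x until the first return to the inducing
        interval: f(x) + f(Tx) + ... + f(T^{r-1} x), where r is the return time."""
        total, y, r = f(x), T(x), 1
        while not in_interval(y):
            total += f(y)
            y = T(y)
            r += 1
        return total, r

    alpha = (5 ** 0.5 - 1) / 2                # golden rotation: a toy stand-in for an IET
    T = lambda y: (y + alpha) % 1.0
    inducing = lambda y: y < 0.2              # toy inducing interval I_n = [0, 0.2)
    print(special_birkhoff_sum(lambda y: 1.0, T, 0.05, inducing))
    # with f = 1 the special Birkhoff sum is just the return time r: here (5.0, 5)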
So special Birkhoff sums: you should think of them as your basic building blocks. We want to estimate Birkhoff sums for a function with certain singularities, but this step is always done, even if you want to estimate Birkhoff sums of piecewise constant functions or of smooth functions. It was done by everybody; again, it's in Marmi-Moussa-Yoccoz, I guess. OK, so now the general strategy. Let me not call the function f, but g, because g will be the derivative of f for us in a second.
but let me call it g, because g would be the derivative of f for us in a second. So if I want to estimate Birkhoff sums of a function g, you want to do two things. Step one, I want to estimate special Birkhoff sums of g.
And let me stress (I'm assuming it implicitly in my notation, but let me stress it): this is for n balanced. It's important for me that it's balanced. These Birkhoff sums, when the towers are balanced, will be very good, because there will be good equidistribution estimates: balanced towers will be well spaced in [0, 1], so these Birkhoff sums will have good estimates. And step two (this is a very broad outline): decompose a general Birkhoff sum S_r g(x) into special Birkhoff sums and interpolate the estimates you had. This is a really important philosophy. So I want to convince you that special Birkhoff sums will
be good for us to estimate. And then we will need to use them as building blocks to estimate other sums. So what do you need here?
So if I want any hope of interpolating between two special Birkhoff sums, I need the special Birkhoff sums to happen frequently enough: if it takes me a long time to go from one balanced time to the next balanced time, I will have less hope to interpolate. This is where the Diophantine conditions come into play. What I need are Diophantine conditions, and I'll tell you what I mean: Diophantine conditions on the IET, which means good frequency of occurrence of balanced times.
These will be estimates on the growth of these matrices: if the norm of such a matrix is not too large, it means that it doesn't take too many steps to go from one balanced time to the next. And for the norm you can take any norm you like, for example the sum of the entries. I will give you the condition needed for mixing in a second. OK? So this is the important strategy.
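As a tiny code sketch of what such a condition looks like (Python; the norm choice, the names, and the toy matrices are mine): with the norm taken as the sum of the entries, the growth condition on the balanced-acceleration matrices is just the following check.

    def mat_norm(A):
        """Norm of a nonnegative matrix: the sum of its entries, as in the lecture."""
        return sum(sum(row) for row in A)

    def polynomial_growth(mats, gamma, C):
        """Check a Diophantine-type condition ||A_n|| <= C * n^gamma for n = 1, 2, ...:
        consecutive balanced times never get too far apart."""
        return all(mat_norm(A) <= C * n ** gamma for n, A in enumerate(mats, start=1))

    mats = [[[1, 1], [1, 2]], [[2, 1], [1, 1]], [[3, 2], [1, 1]]]   # toy 'balanced' matrices
    print(polynomial_growth(mats, gamma=1.5, C=10))                 # True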
And let me show you this strategy in action for mixing. So, the mixing case: this is f with asymmetric logarithmic singularities. We want to prove mixing. So I will now tell you the Diophantine condition; maybe I will state it as a proposition, maybe.
I don't know if it's a definition or a proposition. OK, let me just write it plainly. For almost every IET (and we defined last week that almost every IET means almost every length vector, for irreducible combinatorics) the following holds. Sorry, maybe I should put the quantifier first: for every gamma with 1 < gamma < 2 (this is technical; it will come up later), the balanced acceleration is such that the norm of A_n, these again being the matrices of the process from the n-th to the (n+1)-th balanced time, and they are positive, satisfies ||A_n|| <= C n^gamma for some constant C. So these entries, these matrices, don't grow too fast: they grow at most polynomially. Maybe it looks a little bit out of the blue.
So, who likes continued fractions here? Anybody? Yes, good. OK, so compare with the following statement: for almost every rotation number, if we write alpha = 1/(a_0 + 1/(a_1 + ...)) in continued fraction, there exists a C such that a_n <= C n^gamma, with gamma as before between 1 and 2. So this is a true statement about rotation numbers, and it's the analogue of my statement for IETs. This is actually the condition which was used by Sinai and Khanin in their paper on mixing.
And maybe let me give it a name: this condition I'm going to call the MDC, the mixing Diophantine condition. It is a full measure condition, which I will use for mixing. It tells me that these balanced times are, in some sense, quite frequent: the distance between them grows slowly, since the norm grows at most polynomially.
So, do you want to know how to prove this statement? For continued fractions, I will give it to you as an exercise, with a two-step hint. First, let a_0(alpha) be the integer part of 1/alpha, the first entry. Hint: prove that, for every epsilon > 0, if I take a_0(alpha)^(1 - epsilon) and integrate it over [0, 1] with respect to the Gauss measure, the result is finite. I can give this as an exercise in the ergodic theory course when I teach ergodic theory to my students. You can actually prove that if I don't put the epsilon, the integral is infinite: the expectation of the entries is infinite. With a log instead of the power, finiteness is also easy to prove. But you can even put the power 1 - epsilon. This you can just check: it's an integral to compute with the Gauss measure. And then the other ingredient that you need
is Borel-Cantelli. And this is a very good exercise; try to do it. First prove the integral bound, and then use it plus Borel-Cantelli to prove that statement, OK?
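Here is the first hint as a numerical sketch (Python; my own numerics, not from the lecture). On the set {a_0 = k} = (1/(k+1), 1/k] the Gauss measure dx/((1+x) log 2) has mass of order 1/k^2, so the (1 - epsilon)-moment of a_0 converges while the first moment diverges; the two behaviors are visible in the partial sums:

    from math import log

    def gauss_mass(k):
        """Gauss measure of {alpha : a_0(alpha) = k}, the interval (1/(k+1), 1/k]."""
        return log((k + 1) ** 2 / (k * (k + 2))) / log(2)   # of order 1 / (k^2 log 2)

    def moment(power, kmax=10 ** 6):
        """Partial sum, up to kmax, of the Gauss expectation of a_0^power."""
        return sum(k ** power * gauss_mass(k) for k in range(1, kmax))

    print(moment(0.9))   # stabilizes at a finite value: the (1 - eps)-moment, eps = 0.1
    print(moment(1.0))   # partial sums grow like log(kmax): the expectation of a_0 is infinite

Combining the finite (1 - epsilon)-moment with the invariance of the Gauss measure, Markov's inequality plus Borel-Cantelli then give a_n <= C n^gamma for all large n, almost surely, whenever gamma (1 - epsilon) > 1, which is the statement above.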
And this is essentially the same plotline you have to follow to prove that the mixing Diophantine condition has full measure: it comes from a Borel-Cantelli argument. But the input for the Borel-Cantelli is not as easy as in the rotation case. So what do you need to know? The full measure of the mixing Diophantine condition uses a key result by Avila-Gouëzel-Yoccoz. They have, I think, a unique joint paper, on exponential mixing for the Teichmüller flow. And the main technical result, which is at the heart of the whole paper, is an estimate on Rauzy-Veech induction, which essentially tells you that the integral, over the space of IETs, of this norm of one positive balanced matrix, with respect to the invariant measure on the space of IETs (let me not be more precise), is finite. So it's something very similar; you will recognize the similarity. But this is not an integral you can just compute: it is a deep result, the heart of that whole technical paper of Avila-Gouëzel-Yoccoz. But if you believe it, you can do your exercise and prove that the mixing Diophantine condition has full measure.
Good. We are following this plot: I told you the Diophantine condition we need. Now I want to tell you the estimates on special Birkhoff sums and the interpolation, and try to give a hint of what's behind them.
So now I will assume that my IET T satisfies the mixing Diophantine condition: the balanced times are such that the matrix entries don't grow too fast.
And take, let me lie and do a toy model. We are interested in an asymmetric logarithm, but let me just do log x, one single singularity, and consider g equal to 1/x. This is what I need in order to study the derivative; sorry, its absolute value maybe. Maybe there is a minus sign, but I will work without the minus, so this is maybe minus the derivative.
If you want to treat the real derivative, with more singularities, you should read Davide's paper for the general case. But for today I'll stick to the one-singularity case, because it has all the ideas needed. And I will tell you what step one and step two become in this situation, under the standing assumption that I'm assuming the MDC. So, proposition one: this is step one. Let me stress that n is balanced, even if it's implicit in my notation.
And I want to study a special Birkhoff sum. So take x in the base of an interval, and take r, which is the return time (how did I call them? with r, yes, OK). So I take exactly my sum along a balanced tower, and I claim the following: for every epsilon there exists an n_0 = n_0(epsilon) such that for every n greater than n_0 I can estimate my Birkhoff sum S_r g(x). This is my special Birkhoff sum; it is also S_n g(x). I go up to the height of a tower. And I claim that I have to do the following.
I need to remove the closest visit. The point x could be, in principle, arbitrarily close to 0, and if x is very close to 0, my Birkhoff sum will be huge and I have no hope to control it. There are many cases in ergodic theory where you cannot study ergodic sums, but you can study trimmed ergodic sums: you remove the largest term, and then you can say something meaningful. This is one of those cases. I cannot say anything about the whole Birkhoff sum; I need to remove the largest term, which in this case is 1/x_min, the contribution of the closest visit x_min (which could be x itself), because that point could screw up whichever estimate I try to write. But if I remove it, everything else grows like r log r.
So it grows faster than r: everything else together grows like r log r. And this is at a special time.
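Here is a quick numerical illustration of the trimming phenomenon (Python; a toy experiment of mine, with a bounded-type rotation standing in for the IET): sum g = 1/x along an orbit, drop the single largest term, and compare with r log r.

    from math import log

    def trimmed_sum(alpha, x, r):
        """Birkhoff sum of g(y) = 1/y along the rotation by alpha, with the
        single largest term (the closest visit to 0) removed."""
        vals, y = [], x
        for _ in range(r):
            vals.append(1.0 / y)
            y = (y + alpha) % 1.0
        vals.sort()
        return sum(vals[:-1])      # drop 1/x_min, the out-of-control term

    alpha = (5 ** 0.5 - 1) / 2     # golden mean: a badly approximable rotation number
    for r in (10 ** 3, 10 ** 4, 10 ** 5):
        print(r, trimmed_sum(alpha, 0.1, r) / (r * log(r)))
        # ratios drift toward 1; the corrections are of lower order, roughly 1/log r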
I will try to say something about why later. But first let me tell you step two. Step two will be the interpolation. So, S_r g; sorry. There exists a sequence of bad sets B_n contained in [0, 1] such that (again, it's a little bit annoying if I write it precisely) for every epsilon there exists n_0(epsilon) such that for every n greater than n_0 the following holds.
So I want to interpolate. Let me write something like r_n for, say, the max: I have a sequence of special return times r_n^j, and I just pick one of them, say the maximum. So, for every r and x such that, OK: I want to say that if my r is in between r_n and r_{n+1}, if my time is in between the sizes of two good times, then I need to throw away some set B_n. So if I am in between two good times, I need to exclude some set B_n. Maybe I should have said here that the Lebesgue measure of the bad sets goes to 0. So there are some small sets of points
which I need to throw away, related to these closest visits. But in between two balanced times, if I'm out of this set, then I have a nice estimate: S_r g(x) lies between (1 - epsilon) r log r and (1 + epsilon) r log r. So out of some small-measure set, I have control of type r log r. And this follows by interpolation
from the previous proposition. And, because I had a discussion with some of you last week: an unfortunate fact is that the sum of the Lebesgue measures of these sets is actually infinite. So I cannot do a Borel-Cantelli argument in which I throw away something once at the beginning and the conclusion holds for every n; I need to throw away something different for every n. If I can, on the last day I want to say something about the Ratner property and some refinements, and you will see that this is an annoying point.
So you would like this series to converge, and if you want better estimates, you need to work around this. But it is sufficient for mixing. Because for mixing, you want to prove that most of your space is stretched for every large n. So for every large n, I'm allowed to throw away a set of small measure and prove that Birkhoff sums grow and stretch on most of the space. And the set that I throw away for each n, the set which is not mixed at a given time, will change with time. So there is a bad set which doesn't mix, but it goes to 0 in measure. Great.
So these are the concrete estimates: this is the real Diophantine condition, and these are the estimates. I'm left with little time, but I would like to finish the asymmetric case today, because then I would like to do the symmetric case, which is quite different, and some new developments and later results on Thursday. So let me give a little sketch of these proofs. Sketch for proposition 1.
What are we really looking at? We are looking at this log. So let me lie, and imagine that my points are the best equidistributed they can be. What is the best equidistributed sequence? An arithmetic progression of step 1/r. I want to explain to you where the log comes from. So say that x, T(x), ..., T^{r-1}(x) are equispaced. Then the closest point to 0 is of order 1/r.
It will be 1/r actually: the closest point will be 1/r, and since g is 1/x, the closest point gives a contribution of size r. And what I want to say is this: if I look at 1/r times the sum of g(T^i(x)), remember g is 1/x, so if I look at my Birkhoff sum and divide by r, then I see a Riemann sum for the function 1/x, and I can use it to compute the integral of 1/x. This is the meta-philosophy: I want to approximate it with a Riemann sum for the integral of 1/x dx.
This is like spacing times the value of the function, so it's a Riemann sum. And what's the integral of 1/x from 1/r to 1? Log r, right? So this is where the log r comes from; and the r log r, because here I divided by r. So ideally I want to say that my Birkhoff sum is close to r log r. This is the heuristic that I want to explain.
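Let me record the idealized computation behind this heuristic (a worked example under the equispacing assumption, not from the board):

    % idealized case: orbit points exactly at x_i = i/r, i = 1, ..., r, and g(x) = 1/x
    \[
      S_r g \;=\; \sum_{i=1}^{r} g\!\Big(\frac{i}{r}\Big)
            \;=\; \sum_{i=1}^{r} \frac{r}{i}
            \;=\; r\,H_r \;=\; r\big(\log r + \gamma + o(1)\big),
      \qquad
      \frac{1}{r}\,S_r g \;\approx\; \int_{1/r}^{1} \frac{dx}{x} \;=\; \log r .
    \]
    % Note: the closest point x_1 = 1/r alone contributes g(x_1) = r, which is of
    % lower order than r log r; in the non-ideal case this term must be trimmed.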
And I want to say that balanced times are good. What are they good for? They are good for uniform distribution: if you have balanced times, you can control very well how points are spaced in your space. So let me give you a concrete example.
Lemma, for example: say that I look at the matrix between steps n - N and n (sorry, this is written badly). So I am at step n and I go back a finite number N of steps. What do you expect its entries to be? You can prove that they are what you would like them to be. I'm looking at a Birkhoff sum of length r = r_n^j, that is, at an orbit of that length, and seeing how many times it visits an interval I_{n-N}^i of the coarser partition. The expected number of visits is r_n^j times the length of I_{n-N}^i. And the claim is that this entry is exactly that: if you have a fixed number of positive balanced acceleration times in between, it is what it should be, up to an error which is exponentially small in the number N of steps, uniformly in n. Somehow, I feel I'm a little bit lying.
But let me say one word here. Do you know how to prove the Perron-Frobenius theorem? If you have positive matrices, they contract the simplex. So if you have positive balanced matrices, that is, matrices with norm bounded above by a fixed constant, they contract the simplex uniformly. And really, just by a Perron-Frobenius argument, you can prove that, essentially, when I go finitely many steps into the past of my induction, my points have to distribute well in the partitions of the past, because of this Perron-Frobenius contraction. This is where balance is useful: balance gives you a quantitative estimate on equidistribution.
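A quick numerical illustration of this contraction (Python; a toy demo of mine, not a proof): multiplying positive matrices with entries between 1 and 3, the two column directions of the product approach each other geometrically, which is exactly the projective (simplex) contraction that the Perron-Frobenius argument uses.

    import random

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

    def column_gap(P):
        """Distance, inside the 1-simplex, between the directions of the two columns of P."""
        c0 = P[0][0] / (P[0][0] + P[1][0])
        c1 = P[0][1] / (P[0][1] + P[1][1])
        return abs(c0 - c1)

    random.seed(0)
    P = [[1, 0], [0, 1]]
    for step in range(1, 9):
        A = [[random.randint(1, 3) for _ in range(2)] for _ in range(2)]  # positive, bounded entries
        P = mul(P, A)
        print(step, column_gap(P))   # the gap shrinks geometrically: uniform contraction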
I feel I'm rushing too much, but I really think it would be horrible to start from here next time. So allow me to run maybe five minutes over time, because I'm happy to explain more details to the people for whom I'm going too fast. I want everybody to have a feeling for what is happening, and I don't want to interrupt in the middle.
I'd rather start Thursday with something disconnected, starting from zero and not from the middle. So, philosophy: special times, why are they good? At special times, I have good equidistribution estimates for my orbit. What does this estimate mean? I'm looking at an orbit up to induction time n, a long tower. And if I look at time n - N of my induction,
I see a partition into larger intervals, and in these larger intervals I see the expected number of visits of my orbit. So if I step back a little, to larger scales, my points are well distributed. Using this, you can make the heuristic precise. You can really approximate along your orbit: remove the first point, which is out of control, and be careful at the beginning, which is a little delicate.
But if you look at the intervals of some previous partition, you have a good number of points, and you can really approximate your sum with a Riemann sum: you count how many visits there are in these intervals and replace all of them with the mean value of the integral. So you really just do a Riemann approximation using uniform distribution. That's as much as I want to tell you. If you want the details, you can read, in this case, my paper on mixing; all the details in the general case are in Davide's paper. But really, the idea is that balanced times are good because they give you concrete equidistribution, equispacing of the points of your orbit, and the log comes from this integral. That's the idea. So the last five minutes are on the sketch of proposition 2.
I want to give you just a sense of where the bad sets come from: what these bad sets are and how you decompose. Proposition 2, sketch. So I want to study S_r g(x).
This is not a special sum; I need to decompose it into special sums. So what I can do is find n_r: say, the largest n such that the minimum of the heights h_n^j is still at most r, the largest induction step whose towers can fit fully inside my orbit.
Basically (if I start writing too much, I think I will get stuck in the notation and you will not get anything), let me just draw a schematic picture. I plot r: this is like a picture of my orbit of length r.
So what do you want to do? You want to decompose it into special Birkhoff sums. So what you want to do, essentially, is find, for example from here to here, the largest height of a tower that I can fully fit inside my orbit. My orbit may cross many towers of all steps of the induction; I find the largest step of the induction such that I can fit a full tower inside. OK? Sorry, let me draw it horizontally, with some more space. So this is a timeline from 0 to r,
the timeline of my orbit, and my orbit crosses several towers. Say that here I enter the base of some tower, which goes until here; these are balanced towers up to here, but maybe the next one sticks out. Beyond that I cannot fully fit a balanced tower of the largest scale. So this part is a special Birkhoff sum, and this part is a special Birkhoff sum. I'm left with a remainder, and in the remainder I'm going to put Birkhoff sums of the previous balanced times. And the same I will do here. So you can decompose each remainder into a certain number of Birkhoff
sums of the previous scale. It's, again, a quite standard procedure. In number theory, people would say that I'm doing something like the Ostrowski expansion of a number. I don't know if you've ever seen it: in number theory, you can write an integer n as a sum of b_k q_k, for k from 0 to some k_n, where the b_k are at most the entries a_k of the continued fraction and the q_k are denominators of convergents. You can decompose an integer into denominators. And here, similarly, it's like a dynamical Ostrowski expansion; I would call it a dynamical Ostrowski.
So you can write something like this (maybe I will write something after all): my Birkhoff sum decomposes into special Birkhoff sums of different levels, from n_0 up to n_r, where n_r is the largest level I can fit. For each level there's a bunch of them, indexed from 0 up to some b_n, and each term is a special Birkhoff sum of level n of g at some point x_i. So S_r g(x) is the sum over n from n_0 to n_r, and over i from 0 to b_n, of S_n g(x_i): I'm decomposing into different orders of the induction, with some number of special Birkhoff sums at each order. And what should I write here? The b_n will be at most the norm of these matrices, and the x_i will belong to I_n. So these are special Birkhoff sums of level n, and of each order there are at most as many as the entries of the matrix, and so on.
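For comparison, here is the number-theoretic prototype as code (Python; a greedy sketch of mine that ignores the finer admissibility conditions of the full Ostrowski expansion): decompose an integer r over the continued fraction denominators q_k, just as the Birkhoff sum above is decomposed over tower heights.

    def cf_denominators(entries):
        """Denominators q_k of the convergents: q_{-1} = 0, q_0 = 1, q_k = a_k q_{k-1} + q_{k-2}."""
        qs, q_prev, q = [1], 0, 1
        for a in entries:
            q_prev, q = q, a * q + q_prev
            qs.append(q)
        return qs

    def greedy_expansion(r, qs):
        """Greedy expansion r = sum_k b_k q_k over the scales qs, largest scale first,
        the analogue of cutting the orbit into full towers of decreasing level."""
        coeffs = [0] * len(qs)
        for k in range(len(qs) - 1, -1, -1):
            coeffs[k], r = divmod(r, qs[k])
        return coeffs              # remainder is 0 at the end, since q_0 = 1

    qs = cf_denominators([2, 2, 2, 2])    # [1, 2, 5, 12, 29]: Pell denominators
    print(qs, greedy_expansion(23, qs))   # 23 = 1*1 + 0*2 + 2*5 + 1*12 + 0*29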
And essentially, what you want to do is just compose proposition 1: for each piece, you have a good estimate. What is the danger? The danger is the closest point of each piece. The closest point, which in my picture will be the first point of each piece, could screw up your estimate. The main terms of the pieces will combine to give the main term r log r. But the closest points can be dangerous.
With the Diophantine condition, it turns out that the closest points of the past levels are not so dangerous; the really dangerous ones are the closest points of the last level, level n_r. It can really happen that there is not only one closest point, but many other points extremely close to it. This is something like a resonant term: this is what you get, say for a rotation, at a resonance time, when many closest points come together. This is somehow unavoidable, and that's where you need to throw away the bad set. So the bad set has to do with the accumulation of many closest visits at the largest level.
And I don't want to go into more detail than this, because I was already very technical. But I hope I gave you a glimpse, at least, of how you can use Rauzy-Veech induction to estimate Birkhoff sums, and a little bit of the flavor of what the Diophantine conditions and the tools are. So, you have this very precise algorithm.
You have rich information, thanks to the work of many others, like Avila-Gouëzel-Yoccoz, on the growth of the matrices in this algorithm. And you can use these partitions and this algorithm to get very detailed information on the equidistribution of orbits, and use it to estimate your Birkhoff sums, comparing them with Riemann sums and being careful with the closest visits. So, just a little bit of the flavor; I hope I gave you some ideas. OK, it was a dense lecture: we went all the way from Rauzy-Veech induction to the mixing estimates. Absence of mixing is a quite different mechanism, obviously. So what I want to show you Thursday is something very basic, well, classical: why interval exchanges are not mixing. I want to show you the partial rigidity sets in the towers. This is a classical result by Katok, essentially. So inside the towers that we built today,
I will show you where the partial rigidity sets are, and this explains why interval exchanges are not mixing. Then I will try to explain to you what happens in the symmetric case: I will give you some ideas of the cancellation phenomena in the symmetric case, so that we finish mixing and absence of mixing.
And then I hope I will be left with an hour to give you a little bit of the more recent results. Really, in the last few years there have been a lot of new results on finer ergodic properties, so I want to state some of these latest developments and try to give you the key ingredients. Why would you want even finer information than this, when even this was already technical? Because it gives you many more interesting results on ergodic properties. So I'll try to connect what happened today with what I tried to explain in the first lectures, and give you a glimpse of the new advances. OK, sorry for running over. I hope you got enough out of it. Thanks.