4/4 Sharp threshold phenomena in Statistical Physics
Formal Metadata
Title: Sharp threshold phenomena in Statistical Physics
Part: 4 of 4
License: CC Attribution 3.0 Unported: You may use, adapt, copy, distribute and transmit the work or its content, in unchanged or adapted form, for any legal purpose, provided the name of the author/rights holder is mentioned in the manner specified by them.
Identifiers: 10.5446/43927 (DOI)
Transcript: English (auto-generated)
00:25
Well, thank you for coming back, and let's finish today by giving you two other examples of applications of the same technique, this time to continuous percolation models.
00:48
And I will mostly present one of them, which is going to be Voronoi percolation, and then I will briefly discuss another one, just to give you an example that the technique is not bound to proving sharpness in the sense of proving exponential decay:
01:05
it can also be useful for proving weaker statements, even when there is in fact no exponential decay. So I just want to discuss briefly one example in that direction. So: continuous percolation models, and the first one is going to be Voronoi percolation.
01:44
So in Voronoi percolation there is no lattice anymore. There is a point process in the whole space, which I am going to decompose into two types of points: there will be black points and white
02:00
points, and there are going to be cells, meaning a division of the space into the regions of points that are closest to each one of your points. So there are going to be eta
02:21
b and eta w, two Poisson point processes, of intensities p and 1 minus p respectively. I am going to define
02:41
eta to be the union of the two. So another way of seeing it is: you take a Poisson point process of intensity 1, and for each of the points you choose whether it is black or white with probability p or 1 minus p. It is exactly the
03:03
same. And now I am going to define the cells. So, for x in eta, C(x) is simply the set of points y in R^d such that the distance between x and y is smaller than or equal to the
03:27
distance between z and y for every z in eta. So it is simply the set of points which are closer to x than to any other point of your Poisson point process. And what I am going to do is
03:48
something very natural: I am just going to say that the colour of a point of the space is simply the colour of the point of the Poisson point process in whose cell it lies. So we are going to say that y in R^d is black if there exists
04:14
x in eta b such that y belongs to C(x). And otherwise I will say that y is white. Just
04:39
one parenthesis there. It is a tiny thing, but black and white are not completely symmetric,
04:45
because here I am including the points on the boundary of my cells: I call them black. But if you think about it, they will never play an important role in what we do. So I will completely ignore that, but that is why I am not saying that you are white
05:06
if you lie in the cell of somebody white. Okay. So this is the definition of the model. Then I can say that x is connected to y if there exists a continuous path of black points
05:33
connecting x to y. And as before I can then
05:58
define theta n of p and theta of p. So theta n of p will be the probability (so,
06:11
from now on I will just keep the notation as before, this will be the probability measure) that 0 is connected to the boundary of the box of size n around 0,
06:26
which, since I am now in an environment which is rotationally invariant, I am just going to take to be the Euclidean ball.
07:02
So one of the most famous results on Voronoi percolation is the following. It is due to Bollobás and Riordan, and it states that p_c, which you define as usual,
07:31
in dimension 2 is equal to one half. So if I do this on R^2, the black and the white play symmetric roles at p equal to one half, and in fact you get that this is the critical point.
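As an aside, not from the lecture: the model just defined is easy to simulate, and a simulation makes the statement p_c = 1/2 plausible. Below is a minimal Python sketch; the helper names (`poisson`, `sample_model`, `color_at`) are mine, and the nearest-point rule implements exactly the cell membership from the definition above.

```python
import math
import random

def poisson(lam, rng):
    """Poisson(lam) sample by inversion (fine for moderate lam)."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k - 1

def sample_model(p, box=10.0, rng=None):
    """Colored Poisson process of intensity 1 on [0, box]^2: each point
    is black with probability p, white with 1 - p (the equivalent
    one-process description from the lecture)."""
    rng = rng or random.Random(0)
    n = poisson(box * box, rng)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]
    colors = ['black' if rng.random() < p else 'white' for _ in pts]
    return pts, colors

def color_at(y, pts, colors):
    """Colour of a location y: the colour of the Poisson point whose
    Voronoi cell contains y, i.e. the nearest point (ties, a
    measure-zero event, are broken arbitrarily here)."""
    i = min(range(len(pts)),
            key=lambda j: (pts[j][0] - y[0]) ** 2 + (pts[j][1] - y[1]) ** 2)
    return colors[i]

# At p = 1/2 black and white play symmetric roles, so the black area
# fraction of the window should hover around one half.
pts, colors = sample_model(0.5)
grid = [(0.5 + i, 0.5 + j) for i in range(10) for j in range(10)]
frac = sum(color_at(y, pts, colors) == 'black' for y in grid) / len(grid)
```

Pushing p above or below one half moves the sampled black fraction accordingly.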
07:47
And I already mentioned this result in the first class, but I hope I can convince you that, because you cross squares with probability one half, the only thing missing in the proof is
08:05
exactly sharpness of the phase transition. Once you have sharpness of the phase transition, once you have exponential decay in the subcritical regime, you can deduce that p_c is equal to one half exactly as you did on the square lattice. You just use that the
08:23
box is crossed with probability going to one or to zero. Okay, so my goal is exactly to give an alternative proof of this, and in fact to prove a stronger result, namely sharpness in any dimension. So there exists p_c such that,
09:00
well, we have the mean-field lower bound, theta(p) larger than c times (p minus p_c), for any p larger than p_c, and we have exponential decay for any p smaller than p_c. Okay, so the same result as usual. By the way,
09:39
I mean, there is an interesting feature, which is that this proof actually gives you
09:44
a mean-field lower bound in the supercritical regime. Usually in two dimensions things are much simpler and you can get many results by hand, but this is, I think, to the best of our knowledge, the first proof of a mean-field lower bound even in 2D. Yes? Yeah, yeah. In fact,
10:04
you can prove that; you would use some Peierls-type argument there, which is really of a different kind, so I did not want to discuss it, but yes indeed. Okay,
10:21
so the strategy is going to be roughly, actually exactly, the same as what we did up to now. What we want to prove is that theta_n prime is larger than c times n over
10:42
S_n times theta_n: always the same differential inequality. There is a small twist here, which is that we are not going to manage to prove it close to 1. You remember we could not prove it close to 1; in fact there was no chance, because theta_n tends to 1 there. So we were
11:05
always taking p away from 1, but it was fine, because anyway we are interested in the behaviour near p_c. Here there is going to be a second twist: we also cannot take p close to 0. Of course the inequality should be true there, but we are going to take p between a certain
11:24
delta and 1 minus delta. But delta can be taken as small as we want, so it is going to be fine. And of course the constant c here depends on delta, but not on n. Okay, so before diving into
11:45
the proof, you can feel that of course we are going to use the OSSS inequality again; but first let us recall a few general statements on Voronoi percolation. So the first
12:01
ingredient that we will need, as background, is the FKG inequality. So here maybe
12:25
I should tell you what it means for an event A to be increasing. We will say that A is increasing if, whenever I have a Poisson point
13:00
configuration (eta_b, eta_w) in A and I take a bigger one, in the sense that the set of blacks gets bigger and the set of whites gets smaller, then
13:09
automatically this new configuration is also in A. Okay, so this is the most straightforward generalization of being increasing on the lattice. So now that I have defined what an increasing
13:26
event is, the FKG inequality simply says that the probability of the intersection is larger than the product of the probabilities, for any increasing events A and B. There are many ways
13:51
of proving this; I do not want to bore you with it, but it will be important for us.
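As a sanity check, not from the lecture: the discrete analogue of this statement, Harris' inequality for product measures, can be verified by brute-force enumeration. A small sketch with two toy increasing events on three Bernoulli bits (the helper `prob` and the events are mine):

```python
from itertools import product

def prob(event, p, n):
    """Exact probability of `event` (a predicate on 0/1 tuples) under
    the product Bernoulli(p) measure on n bits."""
    total = 0.0
    for w in product((0, 1), repeat=n):
        weight = 1.0
        for b in w:
            weight *= p if b else 1 - p
        if event(w):
            total += weight
    return total

# Two increasing events on three bits, toy stand-ins for events that
# are increasing in the black points:
A = lambda w: w[0] == 1 and w[1] == 1
B = lambda w: w[1] == 1 or w[2] == 1

p = 0.6
lhs = prob(lambda w: A(w) and B(w), p, 3)  # P(A and B)
rhs = prob(A, p, 3) * prob(B, p, 3)        # P(A) * P(B)
# Harris/FKG: increasing events are positively correlated.
assert lhs >= rhs
```

The enumeration is exact, so the inequality is checked without Monte Carlo noise.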
14:02
The second property that is going to be important, the second ingredient... so, just a parenthesis on this FKG inequality first: what is a little bit funny is that it does not occur in the same place as in the proof for the random cluster model.
14:24
For the random cluster model, the FKG inequality was important in the proof of the OSSS inequality; that is where it was used. Here we are actually going to use just the OSSS inequality for i.i.d. random variables, so FKG is not involved there. FKG is going to be involved because,
14:44
after we apply the OSSS inequality, we will have a certain quantity for which we will not easily manage to bound the revealment by theta_n, which is what we were doing before: we were bounding the probability of being revealed by the probability of being
15:01
connected far away. Here it is going to be a little more tedious, and to relate it in the end to theta_n we will use the FKG inequality. So there is a small subtlety there. The second property, which we really want, is that we will have a sum of influences, and we want to compare the sum of influences to the derivative of the probability of an event. We have to do that in two steps.
15:24
First, let us state Russo's formula for Voronoi, and it is going to be the following. Define Piv(A) to be the set of x in eta such that the indicator function of A at (eta_b union x,
15:58
eta_w minus x), so, if you want, when x is declared black,
16:10
is different from the indicator function of A at (eta_b minus x, eta_w union x). Different meaning: if I change the
16:21
colour of x, I actually change the occurrence of the event. So it is exactly the definition of pivotality that you would naturally want to put in the continuum. And the first lemma is that the
16:46
derivative of the probability of A is the expectation of the size of Piv(A). Okay, so I am not certain I want to make this fully formal,
17:21
well, maybe I will prove it; but one thing that I want you to understand (and actually maybe I should have been careful: here, say, A depends on colours in B(0, R) only) is that, exactly as for Bernoulli percolation, you do not have Russo's formula
17:47
in infinite volume. So here you want to say: I just look at the colours in the ball, I have an event depending on that only, and then the derivative is the expectation of the size of the set of
18:02
pivotals. And the remark I want to make, because it is going to be important, is that the set of pivotals may actually intersect the outside of the ball as well. So even though your event depends only on what is inside the ball, changing the state of a cell out here, saying okay,
18:30
now it is black, may actually change the geometry of my cells even up to this point. With the definition there, it does not necessarily change the event,
18:45
but it definitely changes the cell itself, which may intersect the ball. So one small warning: careful, Piv(A) is not included in the ball of size R. But even though it is not included,
19:13
it almost is, in the sense that, if you think about it, the probability that Piv(A),
19:25
say, intersects the complement of the ball of size 4t, but you could do the same for any multiple of t:
19:40
well, if you want a point out there to be relevant for the colours inside, in particular there cannot be anybody in between; sorry, there cannot be anybody in a ball of size t. If there is even a single point of our Poisson point
20:07
process in that ball of size t, then this point will be closer to the points of the ball of size R than the faraway point is, so definitely the faraway cell will never intersect the ball.
20:20
So this is always smaller than or equal to the probability that eta does not intersect a ball of size t, and a very simple estimate on Poisson point processes tells you that this is of order exp(-c t^d). So it decays very rapidly. So
20:50
it is not so bad: think of this as a little bit of long-range dependence, this property that the cells can change even at long range, but with very, very rapidly
21:04
decaying correlations. A second thing, maybe: a priori it is not obvious that the size of the set of pivotals is integrable, because you may have infinitely many pivotals even for an event in a box; but this estimate tells you that you do not really pick up points far away from the
21:29
ball. And the second observation is that, for a Poisson point process, having more than t^{d+1},
21:40
say, points in a box of size t is exponentially unlikely. So if you combine the two things, you can very easily prove that the size of the set of pivotals is integrable, and in particular that the probability of A is
22:04
differentiable, which was not a priori that clear. Okay, so how do you prove this? You just integrate, over the environment, over the Poisson point process, the Russo formula for discrete
22:22
percolation, simply. So you write the probability at p + delta of A minus the probability at p of A, and you really write this as the expectation with respect to eta of
22:43
the probability at p + delta of A knowing eta, minus the probability at p of A knowing eta. And this difference is just the integral, between p and p + delta, of the
23:05
derivative; but now you are really looking at percolation on the discrete set of points, so you can use the standard Russo formula to get that this equals the integral over s in [p, p + delta] of the expectation of the size of Piv(A) knowing eta, ds. And then you just use
23:31
Fubini to conclude, exchanging the integral and the expectation. So nothing really mysterious
23:51
there. Okay, so this is not really what is going to be so important for us, what is going to
24:03
be important is how we use this to discretize our space. Because remember, the third ingredient is really the OSSS inequality, and remember that the OSSS inequality, even in the i.i.d. case,
24:26
is really about a discrete family of variables. They may take values in some continuous space, but we have a countable number of them. So let me just restate it in a slightly different way here, because now we do not have variables which are 0/1-valued. So
24:46
the OSSS inequality now will be the following: the variance of f is smaller than the sum, over a certain family of variables, of the revealment times the
25:06
influence; but here I need to tell you exactly what the influence is. The revealment will still be the probability that my algorithm reveals the coordinate i. But here, and I should have
25:21
said this, we are on a product space Omega^I with product measure pi^I, and I is countable. So the influence
25:48
is the expectation of a certain indicator; or maybe let us look at it only for an event, because anyway that is what we are going to use. The influence at i of an event A is the probability, for this product
26:10
space, that omega is in A but omega tilde is not, where omega tilde is
26:36
the configuration obtained by re-sampling the coordinate i. Let us write omega rather than eta here, because
26:57
eta would be confusing with other things. So we have, right, a configuration:
27:06
so we are on Omega^I, and a configuration is just a family of realizations of certain variables. They can take values in {0, 1}, but they could also take values in any other space Omega. And omega tilde is simply equal to omega_j for every j
27:33
not equal to i, while omega tilde at i is re-sampled independently of omega at i. So you just re-sample
27:43
the coordinate at i, and you look at the probability that the outcome of your event is different for omega and for omega tilde. Just notice that this really
28:05
corresponds to the covariance quantity that we were using in the case where omega is {0, 1}-valued. Okay, so this is the OSSS inequality in general, and the proof is exactly the same.
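To make this concrete, here is a toy example, not from the lecture, where both sides can be computed exactly: f is the OR of two Bernoulli(p) bits, and the decision tree queries bit 0 first, querying bit 1 only when bit 0 is 0. The influence below is the symmetric re-sampling version, the probability that f(omega) differs from f(omega tilde); conventions in the literature differ by a factor of 2, but the bound holds as stated for this version.

```python
from itertools import product

p = 0.3  # bias of each bit

def weight(bits):
    """Probability of a 0/1 configuration under product Bernoulli(p)."""
    w = 1.0
    for b in bits:
        w *= p if b else 1 - p
    return w

f = lambda w: 1 if (w[0] or w[1]) else 0  # f = OR of two bits

# Variance of f. Since f is 0/1-valued, E[f^2] = E[f].
Ef = sum(weight(w) * f(w) for w in product((0, 1), repeat=2))
var_f = Ef - Ef ** 2

def influence(i):
    """P(f(w) != f(w~)), where w~ re-samples coordinate i independently."""
    tot = 0.0
    for w in product((0, 1), repeat=2):
        for r in (0, 1):  # resampled value, drawn with the same bias
            w2 = list(w)
            w2[i] = r
            if f(w) != f(tuple(w2)):
                tot += weight(w) * (p if r else 1 - p)
    return tot

# Decision tree: always query bit 0; query bit 1 only if bit 0 came up 0.
# Revealments: bit 0 is always revealed, bit 1 with probability 1 - p.
reveal = [1.0, 1 - p]

rhs = sum(reveal[i] * influence(i) for i in range(2))
assert var_f <= rhs + 1e-12  # OSSS: Var(f) <= sum_i delta_i * Infl_i
```

Everything is enumerated exactly, so the check is deterministic.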
28:27
It is the same as in the i.i.d. case; we did not use, in fact, at any point, that we were taking 0/1-valued variables. The last bit of information is that here we have a continuous
28:42
space: Voronoi percolation lives in continuous space, while here we want to use something which is really about a countable family. So yes, one small thing: here the index set is countable, and the OSSS inequality I proved formally was only for finitely many variables, but you can just
29:01
check that it works also for countably many; it definitely needs to be countable, though. So we have a continuous space, so we need to do something, and that is what people working in continuum percolation know all too well: you need to discretize the space, which is usually a nightmare. But here we are going to do something which is fairly simple.
29:25
I think the discretization is simple. So let us call it the discretization of Voronoi percolation. What I am going to do is fix an epsilon,
30:01
and define S_x^epsilon to be simply x + [0, epsilon)^d. So you just cut your space into small boxes, and for an x this is going to be the box associated to
30:28
x. Notice that the upper faces are not included in the box, so that this is really a partition when I take the union over x in epsilon Z^d. Now, to each of these x,
30:50
I am going to associate just the intersection of the Poisson point process with this box. So define eta_x^b to be eta_b intersected with S_x^epsilon.
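In code, this bucketing is one floor division per point; a sketch, not from the lecture (the helper name `discretize` is mine), using half-open boxes so that the S_x really partition the space:

```python
import math
from collections import defaultdict

def discretize(points, eps):
    """Group the points of a point configuration by the half-open box
    S_x = x + [0, eps)^d, x in eps*Z^d, that contains them.

    Returns a dict mapping the grid index tuple of x to the list of
    points in that box. The half-open convention means each point lands
    in exactly one box, so the boxes form a true partition of R^d."""
    boxes = defaultdict(list)
    for pt in points:
        idx = tuple(math.floor(c / eps) for c in pt)
        boxes[idx].append(pt)
    return boxes

# Boundary points go to the box whose lower corner they touch,
# matching the [0, eps)^d convention.
pts = [(0.0, 0.0), (0.49, 0.2), (0.5, 0.2), (1.0, 1.0)]
b = discretize(pts, 0.5)
```

Splitting the configuration by colour first gives the pair (eta_x^b, eta_x^w) of the lecture.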
31:08
I am going to drop the dependency on epsilon here just to simplify; eta_x^w is the same with white, and eta_x is the pair (eta_x^b, eta_x^w). So now
31:34
I can see Voronoi percolation as just a product space, where
31:53
each coordinate is a possible realization of a Poisson point process in a box of size epsilon,
32:01
and pi is the law of this coordinate. Okay, so I did not do anything here, right? I just cut my space into boxes. I can look at the influence of an event, if I want, exactly as above. So I can look, for instance, at the influence, call it Infl_x^epsilon(A),
32:23
which will be the probability that the outcome for eta is different from what I get when I re-sample the coordinate x,
32:45
where in this case eta tilde_x^epsilon is a re-sampled copy. And what I would like, because at the end, when I apply the OSSS inequality, I will have the sum of influences,
33:04
I would like here to have something relating the sum of influences to the derivative. And this will be provided by the following lemma, which says that in fact the derivative
33:35
of the probability of A is larger than or equal to one half of the lim sup of the sum over the x's of the influences,
33:58
where the lim sup is in epsilon, of course. I do think that this is a
34:11
fairly neat way to proceed, because usually when you discretize things get a little bit messy, but here we will just use this inequality.
34:21
Okay, so how do we prove this inequality? The first thing I am going to do, and you will have to believe me on this one (it is just three lines to actually justify): I
34:41
told you before that influences of boxes far away are uniformly summable; we have a bound on the contribution of faraway influences which is completely uniform, even in the event A. So it is sufficient to prove the following.
35:01
We will fix M (here, of course, everything is for events A depending on colours in a certain box only), and I will prove the
35:26
same statement with the sum restricted to epsilon Z^d intersected with a ball of size M. If I can do that for every M, I can let M go to infinity, simply because I have uniform convergence.
35:41
So fix M positive and let us prove that. Okay, so it may look a little bit
36:27
messy, but in fact you are going to see that it is not that bad. Why is it not bad? You could think: okay,
36:49
now our state space is kind of complicated, because every variable is a collection of points, etc. But notice that, except with very small probability, there is going to be at most a single
37:05
point in each of the boxes S_x. So the first thing I would like to write is that the influence at x of an event A is, up to a factor 2, really going to be
37:30
bounded using the indicator that the outcomes for eta and eta tilde differ. That is just the definition; but what I am going to say
37:41
is that here I can assume that eta_x has exactly one point and eta tilde_x has none. The error I make in doing so only involves the event that twice I have a point
38:02
in this small box of size epsilon, which costs epsilon^d for each of them, so an error of order epsilon^{2d}. So, except for an event of this probability, I have only one point, either in eta_x or in eta tilde_x; by symmetry, up to a factor 2, I can
38:25
assume it is in eta_x. What I do not know is whether it is black or white. But now what I can observe is the following: so I have
38:46
a family of cells, and this one point in eta_x is either black or white. If this point is not pivotal
39:07
when I switch it from black to white, then definitely, if I just remove it, I am not going to change the occurrence of my event: if turning it white did not allow me to destroy the event, then just removing it
39:24
will not either. (Here, of course, this is for A increasing, since I am working with an increasing event.) So this is exactly what I do: I have a point here, I remove it, and I do not change anything. And it is
39:42
the same the other way: if I start from a black point and remove it, then if it is not pivotal I will not change the occurrence of my event; and if I start from a white point and remove it, I will also not change the status of my event if it is not pivotal. So all of this is to say that this point
40:01
must in addition be pivotal. So the influence is smaller than or equal to twice the probability of having at least one pivotal point in S_x^epsilon, plus the error; and notice
40:23
that, because I have only one point, instead of saying that I should have at least one pivotal point, I can just count the number of pivotal points in my box. So if I do that,
40:44
I obtain... or maybe let us write it like this: this point must be pivotal, so I need at least one pivotal point there; and by the way, having two or more is going to be difficult, because I have only
41:01
one point here. So I can write it like that. But now you can in fact just drop these conditions and get that this is smaller than twice
41:20
the expectation of the number of points of Piv(A) in S_x^epsilon, plus my error term. I just removed two conditions and used that the probability of a count being at least one is smaller than its expectation.
41:44
Very deep stuff. Here one may be a little bit worried: this error looked like an event of small probability, but the main term also has probability of order epsilon^d, right, since you already have the information that there should be
42:03
at least one point in the box, so this main term is of order epsilon^d. But now I am going to sum over of order epsilon^{-d} boxes, so when I sum these influences I get
42:28
something smaller than twice the sum of these expectations, which gives the expectation of the number of points of Piv(A) in the ball of size M; and I sum epsilon^{-d} times the error epsilon^{2d}, so I get a big O of epsilon^d.
42:48
When I let epsilon go to zero, I obtain my claim. So you see, it is really just six lines of discretization, and in these fields you are happy if you end up
43:03
only writing six lines for your discretization. Okay, so this is the end of the proof: you let epsilon tend to zero. So that was the
43:24
background that we needed; now we are going to be able to prove our result. Okay, so the first thing we do, as we said: we fix p in [delta, 1 minus delta], we fix n, and we are going
43:50
to apply the OSSS inequality, which I just restated. So we apply the OSSS inequality to the function f which is the indicator function that zero is connected
44:06
to the boundary of the ball of size n, and we apply it to a decision tree T_k,
44:22
which for now I do not tell you exactly how I define. What do we get? We get theta_n times (1 minus theta_n) smaller than or equal to the sum over x in epsilon Z^d of the influence Infl_x^epsilon of the
44:45
indicator function of zero connected to the boundary, times the revealment. So let us assume the
45:01
following lemma, which says that for any k there exists an algorithm T_k
45:26
deciding the indicator function, such that the revealment delta_x(T_k) is smaller than the probability
45:55
that x is connected to the boundary of the ball of size k, and this for any p larger than or equal to
46:08
delta. So let us assume we have that. I am assuming it because, as you will see, it is a little bit less obvious than for the discrete model: for discrete percolation we
46:23
were just exploring all the clusters intersecting the boundary of the box of size k; here you need to do a little bit more. But let us assume we have it; how do we conclude? And notice here that I really assumed p to be larger than or equal to delta,
46:53
so if we have that and we go back there, what we can say is that we have theta_n
47:02
smaller than... sorry, here there should be a constant c_0: theta_n times (1 minus theta_n) is smaller than c_0 times the average over k from 1 to n of the
47:20
sum over x in epsilon Z^d of the probability that x is connected to the boundary of the ball of size k, times the influence at x. By the way, here we only
48:00
need to think about x in the ball of size 2n, say; let us put that and see;
48:36
and otherwise it is zero. So we have that,
48:53
and now we are going to do the standard bound: this whole sum over k, we are going to bound it by
49:06
2 S_n times the sum over the x in epsilon Z^d of the influence
49:27
at x of this event. How do I get that (and maybe there is a factor two)? Well, it is a standard bound that we did several times: for any x in epsilon Z^d, when I take the sum over k from 1 to n of the probability
49:47
that x is connected to the boundary of the ball of size k, it is the same inequality as before: this is smaller than 2 S_n, where S_n is still the sum over k up to n of the theta_k.
50:12
Notice that here you may have points very far away; this is true also for x extremely far,
50:20
it is just that anyway, for such x, being connected to the faraway boundary is all the less likely; here you are always only bounding the distance by k. Okay, so you do that, and then you just let epsilon go to zero,
51:01
and maybe you will also use that 1 minus theta_n is bounded below by a certain constant c_1 here. By letting epsilon tend to zero, Lemma 4.3 implies that c_1 theta_n is smaller than c_0
51:26
times S_n over n times theta_n prime; and then you use the lemma from before (I do not remember its number) and conclude as before. Okay, so we are going to take a break,
51:59
and in the second part of the lecture I will explain how you get this
52:09
algorithm. It is based on the same idea as before, but there is a little twist. And then I will state the result for Boolean percolation (I do not think I will actually dive into the proof) and just give you orally an
52:28
idea of the problems that you encounter there. Okay, so let us take a break and start again shortly. Okay, maybe let us start again. So the goal now is to prove Lemma 4.4.
52:45
Um, so the difference is going to be that — I mean, we are going to still do the same,
53:03
the same geometry. So there is k, there is n, and we want to explore somehow the connected components of the boundary of the ball. But here there is a trick, which is that we are going to go small box S_x(epsilon) by small box
53:25
S_x(epsilon); but every single time we explore such a guy — so exactly as before, we say okay, this is black — then I explore; well, I want to know the whole cells of the points that are
53:43
inside my small box. So let's first define, for a box y, Discover(y) to be a sub-algorithm — a sub-decision-tree if you want — which is going to do the following:
54:06
it just explores all the boxes around the original box, and asks: if now you know only
54:20
the Poisson point process in all these boxes, do you know the cells of all the points in this box?
54:43
Okay — now that I know the Poisson point process in all these boxes, do I know the cells of all the points there? If not, I carry on; if yes, I stop. Okay, so this is Discover(y). Maybe I don't write it formally, because it's
55:02
not going to help much. Is it clear for everybody, the algorithm? So really, when I want to check whether a small box contains a point which is going to be connected to the boundary, I want to know in addition all the cells
55:23
in this box — I mean, the cells of all the points in this box. Therefore I explore around until I'm certain that I saw sufficiently many points that I know for sure the state of the cells. And then, once I have this Discover(y), I do exactly the same algorithm as before:
55:46
I explore box by box — before it was edge by edge, when I was doing the random-cluster model — but I explore box by box until I know for sure all the cluster of the boundary
56:03
of the box of size k. So it's going to look like that. But notice — and this is an important feature of the proof — let's say this point is in a certain box and is indeed connected to the boundary like that. Well, that's good for me; I mean, this
56:26
box is revealed. But notice that there are other guys that are revealed: in order to be certain of the shape of all the cells in this box — let's say there was a big cell around it — I had to explore around, and in particular I had to discover a priori points that may be far away.
56:52
So it is not true anymore that the probability of being revealed is the probability that there is a point in my box which is connected to the boundary of the box of size k. It is not
57:05
true anymore. But what is true is that y is revealed only if there exists x in epsilon Z^d such that y is in Discover(x),
57:31
and there is an x prime in the small square associated to x
57:40
which is connected to the boundary of the box of size k. You agree with that? If y was discovered at some point, it's that it was in the Discover of somebody — actually, maybe let's put three epsilon here, so
58:06
you need to be neighboring somebody who is discovered. Okay.
58:23
But notice that, in particular, if y is revealed,
58:42
then there exists z, this time in Z^d, such that — well, I have my y; you understood the two epsilon here, because when we wrote the thing I think we put epsilon, right, you need the neighbor, you should be certain — maybe it's okay with epsilon, it will be better
59:05
with epsilon, okay. So there is an x here, and this small box has an x prime in it which is connected. But for this y: first, this x is in a box associated to a certain z
59:27
which is in Z^d, so a much bigger box — epsilon is pretty tiny. But notice that, in addition to that, if y is in Discover(x), then in particular there cannot be anybody,
59:43
say, in this box — there cannot be anybody else in this box, because if there is anybody
01:00:00
here, this guy would be irrelevant for determining this cell, because this guy is anyway closer. Okay? So there must be a z such that the box of size one at z is connected
01:00:21
to the boundary, and, well, the ball of radius, say, |z - y| — and here you may have some tedious things; this has length square root of d, so let's put something like 3 square root of d or something like
01:00:43
that, to be certain, to take some margin — this has to intersect eta only in there. So in particular,
01:01:05
let's say only in this. But now, that means that the probability of being revealed — so the probability
01:01:39
of being revealed — is at most the sum over z of the probability of these two events there.
01:01:52
But notice, in particular — so here I'm kind of saying that the intersection, like this area — actually, I mean, it should even be empty, sorry, because otherwise this guy is not going
01:02:05
to determine any of the colors there. Ah, that was my problem, sorry; I said something a little bit wrong. I told you that Discover(y) was stopping when you discover the cells; let's say rather that it stops when you discover the colors of everybody in the box. So
01:02:24
of course, if you know the cells you also know the colors, that's obvious; but you could discover the colors before discovering the cells. Say you find a black point here and nobody around: well, you know everybody is black inside, even though you don't know the
01:02:45
cells yet. Okay, so this is going to be a little bit better; so let's say Discover(y) discovers the colors. So here, in particular, it really becomes: nobody in this box should be there, if you want y to be important for the colors of the guys in the small box,
01:03:03
but the point is: asking this is true, but you can also ask a weaker statement — you could ask that there is no white guy in the thing here, because if there is one white guy, in particular you will determine this; I mean, y would not be relevant for this.
01:03:23
So here — and this is a nice trick due to one of my co-authors — the trick was to say: put white here. Why is it a nice trick? Because this is an increasing event
01:03:42
and this is a decreasing event, so the FKG inequality tells you that this is smaller than the sum over z of the probability that S_1(z) is connected to the boundary of the box of size k, times the probability that the ball of radius |z - y| minus three square root of d,
01:04:10
intersected with eta white, is empty — sorry, that was black here; if I want something increasing and something decreasing, I want that. Okay.
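Putting the union bound and the FKG step together, the chain on the board should read (a reconstruction; η_white denotes the white points of the Poisson process, and the margin 3√d is the one stated orally):

```latex
% Reconstructed chain; the second inequality is FKG, since the first event
% is increasing and the second decreasing in the white points.
\mathbb{P}[\,y \text{ revealed}\,]
 \;\le\; \sum_{z}
   \mathbb{P}\Big[\{S_1(z) \leftrightarrow \partial\Lambda_k\}
     \cap \{B(y,\,|z-y| - 3\sqrt d\,) \cap \eta_{\mathrm{white}} = \emptyset\}\Big]
 \;\le\; \sum_{z}
   \mathbb{P}\big[S_1(z) \leftrightarrow \partial\Lambda_k\big]\,
   \mathbb{P}\big[B(y,\,|z-y| - 3\sqrt d\,) \cap \eta_{\mathrm{white}} = \emptyset\big].
```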
01:04:30
The probability that this is empty — you have a ball of size roughly |z - y| — is roughly exponential of minus c |z - y| to the d, so it's tiny, tiny, tiny. It is kind of telling you that
01:05:01
Discover(x) doesn't go far, right — that's having absolutely no points of the Poisson point process.
01:05:28
So this is easy to bound by this. Now, this is a priori larger than the probability that y is connected to the boundary — here it is z that is connected, and I want y connected, not this guy — but notice that they are not far from each other. Why? Because
01:05:28
imagine z is connected to the boundary, and imagine that this ball is black, this ball is black, this ball is black, etc., until this ball is black — imagine all these events occur at once.
01:05:42
So this is connected to the boundary, and I have these balls that are black. If this happens, then y is also connected to the boundary. But conditionally on z being connected to the boundary, having this open, this open, this
01:06:01
open — I just pay a constant cost for each one of the balls, if you prefer, just using the FKG inequality. In fact, this is smaller than or equal to the probability that y is connected to the boundary of the box, divided by the probability of, say, B_1 —
01:06:29
or I could even have said S_1(0) all black — to the power roughly
01:06:43
|z - y|, or maybe 2|z - y|. But you see, here there is a competition between something decaying like an exponential of the distance to the power d, and something growing exponentially fast in the distance (because it's one over something to a large power). But the sum over z of that times that is
01:07:06
just summable: when I sum over all z, I get just a constant times this quantity, which is what I wanted. So this is smaller than c_0 times the probability of y connected to the boundary of
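In symbols, the competition just described is (a reconstruction; c, C, C', c_0 > 0 are constants whose exact values are not pinned down orally):

```latex
% Reconstruction of the two competing factors.
\mathbb{P}\big[S_1(z) \leftrightarrow \partial\Lambda_k\big]
 \;\le\; \frac{\mathbb{P}\big[\,y \leftrightarrow \partial\Lambda_k\,\big]}
              {\mathbb{P}\big[S_1(0)\ \text{all black}\big]^{\,C\,|z-y|}},
\qquad
\sum_{z} e^{-c\,|z-y|^{d}}\; e^{\,C'\,|z-y|} \;<\; \infty .
```

Summing over z then gives P[y revealed] ≤ c_0 P[y ↔ ∂Λ_k]: the exponential-in-|z−y|^d cost of seeing no white point beats the exponential-in-|z−y| cost of the FKG comparison.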
01:07:22
the box of size k. So here the trick is that the dependencies, if you want — like having to
01:07:41
discover somebody very far — cost you much more than the probability of being connected; so this is much cheaper than this, and therefore on average you don't go far.
01:08:01
And notice that we used p larger than delta: we used p larger than delta to bound this from below — this is larger than roughly a constant times delta — and that's why we used it. Okay. Well, in the last half hour or less, I'm going to try to tell you a little bit about the second
01:08:26
model that I wanted to do today, which is Boolean percolation.
01:08:54
So this was a joint work with Aran Raoufi and Vincent Tassion. We kind of believe that
01:09:01
it should really apply to a bunch of continuum percolation models, because in the end, what do we use? Here we definitely use some strong decoupling of Voronoi, but I'm going to give you an example where we use much less. Otherwise, the features are: you use the
01:09:23
OSSS inequality by discretizing, and if you have this nice formula with influences for the derivative for your continuum percolation model, then the techniques should apply fairly well. I mean, the question then is whether you have such a formula, of course;
01:09:44
it's absolutely unclear in general. So here I'm going to give you another example where we have a differential formula, and therefore we can apply the same techniques. So, Boolean percolation. What is Boolean percolation? You have a Poisson point process eta, which is
01:10:09
something of intensity, say, lambda times Lebesgue in the plane. But now, at every
01:10:21
x in eta, what you do is you sample a ball centered on this x, of random radius taken according to some probability measure mu on R_+. So you take mu, a probability measure
01:10:45
on R_+, and you take O, the set of open points — say O
01:11:02
is the union over x in eta of the ball of radius r_x around x, where the r_x are i.i.d. random variables of law mu. Okay, so you pick your points and, boom, you get a ball,
01:11:33
boom, a ball, and this is my set O, and the set O is the set of black points. Okay, so we say that
01:11:46
y in R^d is black if and only if y belongs to O, and we are interested in the connectivity properties of this. So here, think of mu as being —
01:12:09
well, mu could first be a Dirac delta at r equal to r_0, so you just put balls of radius r_0;
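Before the other classical choices of mu, a concrete way to see the model is to simulate it. Here is a minimal sketch — an illustration with names, window, and parameters of my own choosing, not anything from the lecture:

```python
# Minimal simulation sketch of Poisson-Boolean percolation (an illustration,
# not the lecture's construction: the function names, the square window, and
# the union-find connectivity test are all my own choices).
import math
import random

def poisson(mean, rng):
    """Sample a Poisson(mean) variable (Knuth's method, fine for small means)."""
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_configuration(lam, half_width, radius_law, rng):
    """Poisson point process of intensity lam on [-half_width, half_width]^2,
    with an i.i.d. radius radius_law(rng) attached to every point."""
    n_points = poisson(lam * (2 * half_width) ** 2, rng)
    points = [(rng.uniform(-half_width, half_width),
               rng.uniform(-half_width, half_width)) for _ in range(n_points)]
    radii = [radius_law(rng) for _ in points]
    return points, radii

def crosses_annulus(points, radii, n):
    """Does the occupied set O (union of the balls) connect the ball B_n to
    the sphere of radius 2n?  Overlapping balls are merged via union-find;
    a crossing needs one cluster meeting both B_n and the sphere of radius 2n."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radii[i] + radii[j]:
                parent[find(i)] = find(j)  # the two balls overlap: merge
    inner, outer = set(), set()
    for i, ((px, py), r) in enumerate(zip(points, radii)):
        rho = math.hypot(px, py)
        if rho - r <= n:        # this ball meets B_n
            inner.add(find(i))
        if rho + r >= 2 * n:    # this ball meets the sphere of radius 2n
            outer.add(find(i))
    return bool(inner & outer)
```

For instance, three unit balls centered at (0,0), (1.5,0) and (3,0) overlap in a chain and connect B_2 to the sphere of radius 4, while two unit balls at distance 5 do not; Monte Carlo estimates of P[B_n ↔ ∂B_2n] for varying lambda then locate the threshold discussed below.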
01:12:20
you can take mu to be maybe one over r to some power, dr, or an exponential — these are classical choices. And up to now, I think the best that was known was for bounded radius. For bounded radius you had the following result: exponential
01:13:05
decay in the subcritical regime. When the radius is unbounded and the tail of mu is slow, you will never manage to get exponential decay. Why? Because you may have a small probability
01:13:28
of having a huge ball including both zero and x. So here the statement is going to be a little bit different. There is a lambda_c, which is the critical value, and I'm going to define lambda_c
01:13:43
tilde to be the sup of the lambda such that the limit, when n tends to infinity, of the
01:13:43
probability that the ball of size n is connected to the ball of size 2n — that this limit is zero. So it's a slightly weird definition.
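Written as a formula, the definition just given is (B_n denotes the ball of radius n):

```latex
\widetilde{\lambda}_c \;:=\; \sup\Big\{\, \lambda \ge 0 \;:\;
 \lim_{n\to\infty} \mathbb{P}_{\lambda}\big[\, B_n \leftrightarrow \partial B_{2n} \,\big] \;=\; 0 \Big\}.
```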
01:14:03
What I'm saying is that lambda_c tilde is the last guy: above it, the probability to connect the ball of size n and the boundary of the ball of size 2n doesn't go to zero; below this lambda_c tilde, well, it doesn't necessarily go to
01:14:45
zero, but at least its infimum is zero. And the point is that this is a very convenient condition: once you have that this limit is zero, you can actually start to do some
01:15:04
renormalization argument and really prove that correlations decay quickly. What does decaying quickly mean? Well — you cannot hope to get better than this — it means roughly that the probability that zero is connected to distance n is going to be related to the probability that
01:15:22
there is a big ball of radius n covering zero. It's not quite that, but I think at least the prediction would be that it should be roughly that. I think maybe what is known is that you lose n to the d — so you have, like, n to the d times, or maybe n to the
01:15:45
2d times, the probability of the radius being larger than or equal to n. But let's say the equivalent of exponential decay would be having this condition — in this framework this would be the right condition. And notice that in the bounded case, for instance, below lambda_c
01:16:03
it was known for a very long time that you have exponential decay — actually even with an exponential tail like that, it's known for a long time that once you have this, you have exponential decay. So the goal, the statement, is going to be to prove that lambda_c equals
01:16:03
lambda_c tilde. The theorem — again by the same authors as before — says: fix d larger than or equal
01:16:22
to two, and assume that the mu-moment of r to the — we are still not certain what the right power is here, but let's put 3d plus epsilon — assume that this is finite.
01:16:46
Then lambda_c equals lambda_c tilde. So this is a sharpness result in this context. The 3d plus epsilon is, okay, a little bit weird. So the first thing you
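In display form, the theorem as just stated reads (my transcription; the moment exponent 3d + ε is the one given orally):

```latex
% Sharpness for Poisson--Boolean percolation, as stated in the lecture.
\textbf{Theorem.}\quad
\text{Fix } d \ge 2 \text{ and assume }
\int_0^\infty r^{\,3d+\varepsilon}\,\mathrm{d}\mu(r) \;<\; \infty
\text{ for some } \varepsilon > 0.
\;\text{ Then } \lambda_c \;=\; \widetilde{\lambda}_c .
```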
01:17:15
could notice is that if you don't have d plus epsilon, then in fact you don't have a phase
01:17:21
transition, in the sense that at any lambda you are going to cover the whole plane — or, I mean, you will have an infinite component. So d plus epsilon is a barrier that you cannot hope to beat; the question is whether you can go down to d plus epsilon, and this we don't know.
01:17:45
So far we have a technique that allows us to get to 3d plus epsilon — so the third moment of the volume, say — but we don't manage to really get down to the smallest assumption. In two dimensions this is done: there are results proving almost the sharp result,
01:18:07
but they use Russo–Seymour–Welsh type ideas, exactly as I discussed in the first lecture, and this doesn't extend to higher dimensions — you have no hope of extending that to higher dimensions. Okay, so how do we prove that? I'm not going to give a full proof, because I think anyway
01:18:32
I should reward you for staying until the end, so we are going to stop a little bit earlier. So how do we prove that? You will believe me that you have an FKG inequality; you will believe
01:18:51
me that you have some kind of Russo formula, if you want. You can definitely use the OSSS inequality, but I want to tell you on what. So here, what we are going to do is the following.
01:19:06
The equivalent of eta_x^epsilon is going to be the following. First, notice that choosing first the points and then choosing the radii, you could have done it
01:19:23
differently: you could have said, okay, I throw a Poisson point process on R^d times R_+. So if you take your Poisson point process of points (x, r), of intensity lambda times Lebesgue times d mu(r), then I have the points (x, r)
01:19:56
which are in my Poisson point process eta, and I just take O to be the union over (x, r) in eta
01:20:06
of the ball of radius r around x. Right, it's exactly the same way of doing it, but the good thing with that is that the discretization is going to be a little bit clearer.
01:20:21
So for epsilon, what I'm going to do is cut the space: I'm going to define, for epsilon and a small r in Z, S_{x,r}(epsilon) to be simply (x, r) plus [0, epsilon)^d times
01:20:53
[0, 1). So I have my space here, I have my Poisson point process; if I
01:21:01
would look at the model in one dimension — which is not that interesting — I would have the x's like that and the r like that, and I basically cut into small pieces of width epsilon that way and height one that way. Okay, so you cut the space
01:21:22
into really small boxes, and you cut the r-direction into boxes of size one. This is fine. And now you define, as before, eta_{x,r} to be eta intersected with S_{x,r}(epsilon).
01:21:48
So now I have a family of random variables. Notice they are not i.i.d., but this played no role in the proof of OSSS; they are independent, and that's the
01:22:04
important thing. So what you end up with at the end, when you apply the OSSS inequality to our event — the indicator of zero connected to distance n — you obtain theta_n times one minus
01:22:21
theta_n, smaller than or equal to the sum over x and r of the revealment of (x, r) for the algorithm, times the influence of (x, r, epsilon). So here I need to tell you what the algorithm is going to be.
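In symbols, the OSSS inequality as applied here reads (my transcription; δ denotes the revealment of the decision tree T and Inf the influence):

```latex
\theta_n\,(1-\theta_n) \;\le\;
 \sum_{x,\,r} \delta_{x,r}(T)\;
 \mathrm{Inf}^{\varepsilon}_{x,r}\big[\,\mathbf{1}_{\,0\,\leftrightarrow\,\partial\Lambda_n}\,\big].
```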
01:22:55
So, the algorithm: what you are going to do is just explore as before, but in order to know the
01:23:05
state of an edge — when I want to know whether this guy is in O or not — you need to explore also whether there exists somebody in my box: is there an x in S_x(epsilon)? This is really the same
01:23:24
notation as before; I could actually have done that, it would have been exactly equivalent to the notation of the Voronoi case, I think. So what
01:23:41
I'm going to do: the Discover(y) that I had before, which had to explore the cells, here is going to have to explore the state of all the balls that, if they existed, would intersect this thing. So Discover(x) is going to explore the boxes (y, r)
01:24:17
with |y - x| smaller than or equal to r — or maybe here you should put r plus one, to be certain.
01:24:29
Okay. So notice, for instance, that there are balls that are going to be explored with revealment really one — like a huge ball; for instance, imagine the radius is unbounded:
01:24:40
a huge ball of size n covering everybody is going to be explored with revealment one, because I need to know whether it's there or not. So the revealments in this setting are not always going to be small. But notice that we are in a regime where we are assuming a
01:25:03
strong moment condition on our measure. So, okay, this ball is going to be revealed with probability one, but its influence is going to be tiny, because the probability that it's there at all is small. So now there is a competition: the revealment is not always small,
01:25:21
and it should not be, right? Because if you think about it, we cannot hope to get theta_n smaller than n over S_n times theta_n prime — why? Because otherwise we would get exponential decay, and we do not have exponential decay. Okay, so this is the first step of the proof. Now, what do we want to prove? We want to prove that for this algorithm, when we are in a certain
01:25:47
regime, the revealment is roughly of the order of the probability that zero is connected to that distance. So how do we do that? Well, the first trick — and this is really important — is that in Voronoi we had at some point to use
01:26:23
that p is larger than or equal to delta. Why? Because somehow we had geometric constraints: we had to discover a priori far away, at distance r.
01:26:43
But what we did is we said: yes, we have to go far, but anyway, having to discover far is much more expensive than just connecting to this distance — that was somehow the trick. And in order to say that connecting to this distance was not too expensive,
01:27:06
we used that p was larger than delta, because we only needed to beat exponential of minus distance to the power d, which decays super fast; so even if connecting to distance r is just exponential, that was sufficient. Here, Discover(x) goes and sees guys
01:27:26
that are far away, and it's basically not costing anything: the only cost of having to look far is that we will have to have a big ball, and it's the cost of this ball which is expensive. So here we want to use the equivalent of p larger than delta: we
01:27:50
want to use that connectivity for the percolation is going to be less expensive than looking far. And here, what we are going to do is place ourselves above lambda_c tilde. So fix lambda larger than lambda_c tilde; then there exists a constant
01:28:06
such that you know that this probability is always larger than c(lambda), for every n.
01:28:06
So what is this saying? Well, it's not that expensive to connect. I'm going to tell you later exactly what we do, but now we are going to state the lemma.
01:28:23
So I guess it's 4-point-something — this was maybe 4.5, so this is 4.6. The lemma is going to be: if lambda is strictly larger than lambda_c tilde, there exists a c_0 such that the revealment of the algorithm outlined here — which is, once I check a new guy, I
01:29:03
actually run Discover(x) — then for this thing we have... what did I want to do? No, sorry, sorry: if lambda is larger than lambda_c tilde, I'm going to use that there exists a c
01:29:23
zero such that the influence of (x, r, epsilon) on the event that zero is connected to the boundary of the box of size n
01:29:40
is smaller than or equal to one over r to the one plus c, times the influence of (x, 0, epsilon) on zero connected to the boundary of the box of size n.
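In display form, the lemma just stated reads (my transcription of the board):

```latex
\textbf{Lemma 4.6.}\quad \text{If } \lambda > \widetilde{\lambda}_c,
 \text{ there exists } c > 0 \text{ such that for all } r \ge 1,
\qquad
\mathrm{Inf}^{\varepsilon}_{x,r}\big[\,0 \leftrightarrow \partial\Lambda_n\,\big]
 \;\le\; \frac{1}{r^{\,1+c}}\;
\mathrm{Inf}^{\varepsilon}_{x,0}\big[\,0 \leftrightarrow \partial\Lambda_n\,\big].
```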
01:29:40
So here, instead of bounding the revealment, I'm going to bound the influences; I'm going to say that influences of big balls do not matter so much. So why is this lemma important? You can play the same game as before. Why? Because the revealment of the ball of size
01:30:16
r around x — so the revealment of this guy is what? It's going to be revealed if,
01:30:25
well, if it is in the Discover of somebody. All right, so this is the probability that it belongs to Discover(x) for some x which is connected to the boundary —
01:30:45
or let's say that S_x(epsilon) is connected to the boundary, right. But to be discovered
01:31:04
here — well, because I'm a ball of size r, I need to be at distance at most r from x — so the revealment is bounded by the probability that the ball around y of size r plus one is connected to the boundary
01:31:24
of the box of size k. So here you start to feel maybe the competition: here I'm losing something — the bigger the r, the bigger the revealment,
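In symbols, the bound on the revealment just derived is (a reconstruction):

```latex
\delta_{x,r}(T) \;\le\;
\mathbb{P}\big[\, B(x,\,r+1) \leftrightarrow \partial\Lambda_k \,\big],
```

so the revealment grows with r, while by the lemma the influence shrinks like r^{-(1+c)}; the rest of the proof balances these two effects.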
01:31:41
exactly like in Voronoi percolation you could be far away from the point which is connected; and if you are far away, then in fact the probability that this guy is connected may be bigger than the probability that you are connected. But in Voronoi we had that
01:32:03
anyway we were paying an exponential of minus r to the d for being far away. And here, what we are going to say is that we don't pay anything for being far away — we really gain in the revealment, but we lose in the influence;
01:32:21
we lose this here. So if I prove this lemma, then what can I do? I can rewrite — when lambda is larger than lambda_c tilde — that theta_n times one minus theta_n is smaller than,
01:32:47
or let's say n theta_n times one minus theta_n is smaller than, the sum over k, sum over x and r, of the revealment times the influence. Now I can plug in my two bounds: this is smaller than or equal to the sum
01:33:26
over k and (x, r) of the probability that the ball of size r plus one around x is connected to the boundary, times one over r to the one plus c, times the influence of this. I just replaced
01:33:58
by my two estimates. And really, here I think I'm doing a pretty bad job
01:34:07
of outlining that, but here I have again the competition between the term which is getting bigger and bigger when r goes to infinity, and the term which is getting smaller and smaller
01:34:21
when r goes to infinity. So how do I conclude now from that? Well, remember what the trick in the proof was: the trick was to say, well, I define lambda_c hat, say, which is the inf of the lambda
01:34:43
such that the lim sup of log S_n over log n is equal to one. Remember the proof of the lemma: you define this lambda_c hat, and you say that above this lambda_c hat you have exponential decay — you
01:35:01
have — sorry — above this lambda_c hat you have percolation, and below you have exponential decay. That was the idea, right. Well, here what we are going to say is that this lambda_c hat has to be equal to lambda_c and lambda_c tilde, because what we are going to prove is that below
01:35:23
this lambda_c hat you have exponential decay. So it's a proof by contradiction, because you never have exponential decay. But that tells you what? It tells you that, because this is only true for lambda larger than lambda_c tilde, there cannot be any space between lambda_c tilde
01:35:41
and lambda_c hat; otherwise there would be exponential decay, which is absurd. So it would just prove — it's a kind of weird way of thinking about it — but it would just prove that lambda_c tilde has to be equal to lambda_c hat, which has to be equal to lambda_c. Okay. So, just to illustrate the fact that we are basically done once we have
01:36:01
this thing — this big sum here. Now we start to be used to that, right: it's a sum over x, and then you have a sum over k and r, written like that. I mean, this thing is not depending on r anymore. But the point is, notice that here in particular you have r equals zero,
01:36:26
so this thing here is definitely larger than S_n. Okay, so this quantity — let's call it S_n tilde — S_n tilde is definitely larger than S_n; this is clear. But this tells you what? It tells you
01:36:43
that above lambda_c tilde, the proof that we did with this — the lemma on the differential inequality — since it was working with S_n, it's a fortiori working with S_n tilde. So it really gives you this differential inequality. Oh, sorry, and I'm also
01:37:03
using that the sum of influences of (x, 0, epsilon) is smaller than the derivative times a small constant; this you can just see from the fact that
01:37:22
increasing just the radius between zero and one is already sufficient to gain something. So here maybe I'm making a small assumption, that there is positive probability of having a radius between zero and one, but you can extend that trivially. So this plus this tells you
01:37:40
that above lambda_c hat, if you run the same proof as before, you get that there is an infinite cluster. So — this plus, let's call it (1) — this plus (1) implies, for lambda larger than lambda_c hat, theta(lambda) positive. So it remains just to
01:38:09
treat the other case, and there it's going to be this proof by contradiction. So assume
01:38:20
there exists a lambda in (lambda_c tilde, lambda_c hat). Then at this lambda, what do we have? We know we have — not exponential decay — decay of S_n: we know that there exists c_1 positive such that S_n is smaller
01:38:48
than n to the one minus c_1, right — this was the assumption defining lambda_c hat; remember, we said you are below this guy, therefore this lim sup is smaller than
01:39:06
one, and it implies this. But now, what you can do is the following: this big sum — I'm going to prove that in this context this big sum is also controlled by, bounded from above by, S_n. But if this is true, then you can run the same
01:39:25
proof as before. So in order to prove that, what do you do? You just take an alpha which is much smaller than c_1 — really take a tiny alpha — and observe that the probability of the ball —
01:40:06
so observe that the sum over k and r, sorry, of this thing — well, it's definitely smaller:
01:40:25
just truncate depending on whether r is larger than n to the alpha or r is smaller than n to the alpha,
01:40:43
and do these two sums. Here, what do we get? In this context we automatically have the one over r to the one plus c; when I sum over r, I'm going to get one over n to the c alpha. So this is smaller
01:41:00
than — I have n choices for k, and I have the sum over r larger than n to the alpha of one over r to the one plus c — so I get n to the minus c alpha, right.
01:41:21
So, sorry, I said something wrong: I said that it would be controlled by S_n — it's not controlled by S_n, but notice that this is polynomially small: it's n to the minus c alpha times n, which will simplify with this n. So this n here — if I put a one over n here,
01:41:43
I get that this first term is polynomially small. Excellent, that's usually what I want. So look at the second term now. Here, in the second term, I still have the one over r to the one plus c, and I have the probability; but here the probability — anyway, I can bound it by the
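For the large-r part of the truncation, the computation just performed is, in display form (a reconstruction; C denotes a constant):

```latex
\frac{1}{n}\,\sum_{k=1}^{n}\ \sum_{r > n^{\alpha}} \frac{1}{r^{\,1+c}}
 \;\le\; \frac{1}{n}\cdot n \cdot \frac{C}{n^{\,c\alpha}}
 \;=\; C\, n^{-c\alpha},
```

which is polynomially small, as claimed.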
01:41:43
probability that the ball of size n to the alpha is connected to the boundary of the ball of size k, right — because anyway r is smaller than n to the alpha. Okay, so this looks almost
01:42:06
good. So here I have n choices for k, I have n to the alpha choices for r — I mean, the one over r to the one plus c is summable — so I get n times the probability
01:42:21
that — okay, let me maybe do the step here. This guy here — I mean, the probability of this connected to this is bigger, right: it's a ball of size n to the alpha connected to the ball of size k, so it's bigger than the probability
01:42:41
that zero is connected to the ball of size k — but not by much, if you think about it. Why? Because exactly as before — for Voronoi we created, like, small guys connected like that, zero to the boundary; here that's going to be too costly — the fact that you are
01:43:01
above lambda_c tilde, the fact that you know that you cross annuli — a very simple argument is going to tell you that, in fact, the probability that zero is connected to x is decaying at most like one over the distance to the 2d; it's just a union bound, since otherwise you would not cross
01:43:25
the box at all. So the union bound is telling you that connecting one point to one point here costs you at most n to the alpha to the 2d. But that means that here you could put the ball
01:43:45
of size one, if you want, paying just a cost of n to the 2d alpha, and then you are back here. What do you have? Forget even this guy, if you want: you have the sum over k of the probabilities
01:44:01
that the ball of size one is connected to the ball of size k, and by the standard bound this is bounded by twice S_n. So the second part here is bounded by n to the 2d alpha times twice S_n; but S_n is bounded by n to the one minus c_1, so if alpha is indeed chosen much smaller than c_1,
01:44:25
you get that this decays polynomially fast to zero. I guess that's a good sign that I should stop. So this is the end of the proof — it's a little bit messy, but maybe it gives you an idea that
01:44:41
at least it keeps working even in the framework where you don't have exponential decay, and there was again this competition between the revealment getting bigger with the distance and the cost for the influences. I didn't explain to you how you bound that, but that's the only place where we use the assumption on the
01:45:04
moment of the distribution, and that's why we don't know where we are going to stop: yesterday the best bound was 4d, now it's 3d — hopefully it's going to keep going like that for two more days. But yeah, this is the only place; if you can prove that
01:45:22
as soon as you have a moment d plus epsilon, then you have basically the optimal result. Thank you for staying until the end, and see you next year, hopefully.