
Logarithmic geometry and resolution of singularities


Abstract
I will talk about recent developments in resolution of singularities achieved in a series of works with Abramovich and Włodarczyk: resolution of log varieties, resolution of morphisms, and a no-history (or dream) algorithm for resolution of varieties. I will especially emphasize the role of logarithmic geometry in these algorithms and in the quest for them.
Transcript (English, auto-generated)
It's an honor and a pleasure to give a talk at this conference, so thank you very much for inviting me. My talk will be about the logarithmic aspects of resolution of singularities, both de Jong's resolution and the classical one. Maybe I should also mention that I put a link to the slides on my webpage and sent a link in the chat, so that you can go back and forth and not be stuck with the pace of the talk; it might be convenient to view the slides separately during the talk.

Okay, so let's start. I was lucky in the sense that my first project where I seriously used the log structures of Fontaine and Illusie was actually a joint project with Luc Illusie, so I could study these things from Illusie himself. Our project was about Gabber's version of de Jong's resolution; I'll discuss it a bit later. The intuition for log geometry, and the confidence in it, which I acquired during that project was very helpful for the recent advances. The main part of the talk will be about recent advances in classical canonical resolution; the two may seem unrelated, but I'll try to explain the connection.
The recent advances are entirely a joint project with Dan Abramovich and Jarosław Włodarczyk. We extended the classical canonical resolution of Hironaka to morphisms, obtaining canonical semistable reduction type theorems, and we also obtained a much better and simpler algorithm for resolution of singularities, which we call the dream algorithm. The dream algorithm will be only tangential here: I'll mention it a bit, but the main part of the talk will be about the logarithmic aspects. Ironically, the dream algorithm does not use log geometry at all; it was discovered because of log geometry, but it does not use it. This is one of the reasons why we won't concentrate on it in this talk. It does have a log variant, developed by a student of Abramovich, so it can be carried out in a logarithmic setting as well.

Good, now the plan. We'll talk a bit about altered resolutions, and I'll also mention our joint project with Illusie on this topic. After that, the main body of the lecture will be about logarithmic resolution: first motivation and formulations, then a description of Hironaka's approach, and after that I'll explain the logarithmic twist
one has to apply to the classical approach.

Okay, let's start with altered resolutions. To be brief, I'll just formulate one more or less final result which generalizes many things about altered resolution. We need the notion of an alteration of a morphism. What does it mean? If we are given a dominant morphism f: Y -> X of integral log schemes, or of schemes, that is, schemes with trivial log structure, then by an alteration of f we mean a morphism f': Y' -> X' in which both Y and X have been altered: there is a compatible pair of maps Y' -> Y and X' -> X which are proper and generically finite, and the rank, or degree, of these alterations is not divisible by any prime l invertible on X. Ideally we would like the degree to be one, but the best we can currently do is degree prime to every l invertible on X.
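In symbols, the definition is a commutative square (same notation as above):

$$\begin{array}{ccc} Y' & \xrightarrow{\ f'\ } & X' \\ \downarrow & & \downarrow \\ Y & \xrightarrow{\ f\ } & X \end{array}$$

with the vertical maps proper and generically finite, of degree prime to every prime l invertible on X.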
And the theorem from '17, altered resolution of morphisms, says: if we are given a finite type morphism f: Y -> X between integral fs log schemes with generically trivial log structures, and if X is, so to speak, universally resolvable in the classical sense (for example, a point, or a curve, or even a surface, since classical resolution is available for surfaces), then there exists a log smooth alteration f': Y' -> X' of f. So given any such f, we can alter Y and X and get a log smooth morphism.
That is, we can resolve morphisms in the log category.

Now, a bit of history. Altered resolution was discovered by de Jong in 1995. He considered the case where the dimension of X is at most one, mainly a point or a trait: so resolution of varieties, and also semistable reduction over a trait. He also proved an equivariant version, with a group action. After that, Abramovich and de Jong in '96 proved this result in characteristic zero with X a point; so they actually resolved varieties in characteristic zero by a completely new approach, proving that de Jong's approach is also able to resolve varieties in characteristic zero. Gabber announced around 2005 that in positive characteristic one can also control the degree of the alterations, at least at a single prime l: one can get a prime-to-l alteration, still with the dimension of X at most one. In our project with Illusie in '14 we actually worked out Gabber's program; it was not very easy, but we managed. We proved, moreover, that one can take any X, not only X of dimension bounded by one. This required a slightly different deduction scheme, but we used many ingredients of Gabber's program. And in '17, a few more valuation-theoretic techniques were used to strengthen this method.
So, is there a question in Paris? Yes. Okay: when you write 'integral', it's slightly ambiguous, because you probably don't mean integral in the sense of log geometry, but integral in the sense of scheme theory. That's correct; moreover, all my log structures will be fs. But you're right: integral here just means integral on the level of varieties. And then, when you want to make it nicer using an alteration, are X' and Y' again supposed to be integral, or could there be several irreducible components, with only the sum of the degrees controlled? In this case we assume them to be integral. Okay, I understand.

Yeah, another question.
Is there a version of this theorem where you don't have log structures, but the altered morphism is literally semistable? Soon, okay: in a couple of slides.

Now the method. The proof of all these results, as found by de Jong, runs by direct induction on the dimension. A morphism of relative dimension d is split into d relative curves, which we resolve one by one. We start with X_0, which can be resolved, being of small dimension or by some inductive assumption; then we resolve f_1 and get X_1, which is log smooth; then we resolve f_2 and get X_2, which is log smooth; and a bunch of alterations is collected during this process. The idea is very simple: just resolve dimension by dimension, one by one. This requires resolving morphisms of relative dimension one, and here the role of log geometry is crystal clear: a relative curve can be resolved only in the log category.
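Schematically, the induction just described factors the morphism as a tower (a sketch; each stage may require further alterations of everything below it):

$$ Y = X_d \xrightarrow{\ f_d\ } X_{d-1} \xrightarrow{\ f_{d-1}\ } \cdots \xrightarrow{\ f_1\ } X_0 = X, $$

where each f_i is a relative curve, and f_1, f_2, and so on are made log smooth one after the other.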
You cannot make this morphism smooth by any alteration: only log smooth, or semistable in the best possible case. The proof of the resolution of such a morphism is more or less classical: it is based on the properness of the moduli spaces of stable curves, and on the semistable reduction theorem for curves, which is actually the first relative resolution result that was discovered. And the control of the rank is done by quotients: we resolve equivariantly and divide back, so that log smoothness is preserved. This works if the action
is so-called toroidal, or, in Gabber's terminology, very tame. An observation: the classical context worked with regular schemes and log structures given by SNC divisors, but everything works, even more easily, if we generalize to log smooth, or log regular, log schemes. Moreover, this generality is critical when we want to divide by a toroidal action, because making an action toroidal, the so-called torification theorems, works only in the general context of log regular log schemes. Torification was discovered by Abramovich and de Jong in their work in '96, and the word 'torification' is just a joke:
when it was discovered and they saw that it works, Abramovich wrote an email to de Jong saying 'that's torrific', a play on 'terrific'. Okay, good.

Now, what can we deduce from this? A sort of principle, which I think applies often: once log structures are used, there is no reason to be stuck with smooth schemes and SNC divisors; you should rather pass to the general context of log smooth or log regular schemes and morphisms. In a sense, from the point of view of log geometry, all fs monoids are equal, like all animals are equal; and if needed, you can afterwards improve the monoids combinatorially, by a separate routine.

And here is the theorem I was asked about, from the project with Illusie: semistable reduction for morphisms. In the altered resolution theorem
which I formulated two slides before, one can in addition achieve that Y' and X' are regular and the log structures are given by SNC divisors. So you can achieve more, and this is literally the best possible resolution of morphisms: locally, parameters on X' pull back to products of parameters on Y'. It is deduced from the theorem two slides before by hard combinatorial methods: all one has to do is improve the monoids by blowups and subdivisions, but this is really difficult combinatorics, a sort of relative version of the main combinatorial result of KKMS on lattice polytopes, which is also difficult.

Okay. That's all I wanted to say about altered resolution. We take one principle with us, and let's see how it applies to classical resolution.
The rest of the talk is about the joint project with Abramovich and Włodarczyk on resolution of singularities over a field k of characteristic zero. For simplicity we always work with varieties, of finite type over k; one can deal with larger generality, but for lecture purposes we stick with this. Our goal is to resolve morphisms and log varieties, and I'll also say a little about the dream algorithm.

References for the talk: logarithmic resolution is done in two papers. First, the resolution of logarithmic varieties in '17, which is already published; and there is now a submitted paper about the extension to morphisms. In addition, there are two papers on dream algorithms: a paper without log structures, and a paper with log structures by Quek, a student of Abramovich.

Okay. And now the motivation for this project.
The main motivation is as follows. We wanted to improve the result about resolution of morphisms which, in characteristic zero, is due to Abramovich and Karu. De Jong's method is not canonical, and even if I am given a morphism with a large smooth locus, we have no control over that locus: it can be destroyed, because we have to choose the fibrations. It is not canonical, and we have no control.

So, the goals of the project. First, resolve morphisms so that the log smooth locus is preserved. In particular, this proves semistable reduction over non-discrete valuation rings. Hironaka's theorem implies semistable reduction over a discrete valuation ring, but that is a sort of accident; for non-discrete ones, the only thing you can do is spread out, get a family over a higher-dimensional base, and try to resolve there, and then you want the generic fiber, which is log smooth, to be preserved. So one needs something new. Second, do this as functorially as possible: try to do it canonically, compatibly with base extensions. Hironaka-style semistable reduction is not compatible with extensions of the trait, while our method will be, by functoriality.
Third, clarify the role of log geometry in classical resolution; in a minute I'll explain what this means.

Now, the only hope was to use Hironaka's embedded resolution method. Why? Because this is the only canonical method we have. As I explained, there are essentially two methods for proving resolution in any dimension, de Jong's method and Hironaka's method, and de Jong's is certainly not canonical. So we hoped to use Hironaka's method, but for log smooth ambient varieties rather than smooth ambient varieties: to shift Hironaka's method completely into log geometry. And why did we hope this is possible? Not only because we had no other tool. We had some indication, or expectation: in Hironaka's approach there are signatures of log geometry (I'll point out where), and the hope was that, by the monoid democracy principle, if there is log geometry in Hironaka's method, it should work for general log smooth objects. This principle is what gave us the hope to start.

Okay. And now a couple of words about classical resolution.
Classical resolution takes an integral variety Z (this time just a variety, not a log variety, so no confusion is possible) and wants to find a modification Z_res -> Z with Z_res smooth. Hironaka proved in '64 that such a resolution exists, and got the Fields Medal for this. Then many people tried to understand what Hironaka did and to simplify it, and Hironaka himself also worked on this a lot. In the '70s, Hironaka and Giraud found the notion of maximal contact, which will be important later. Villamayor and Bierstone–Milman, independently, in the '80s and '90s, constructed an algorithm, not just an existence proof: an algorithm that resolves singularities canonically. Since then, essentially the only available algorithm has been this algorithm of Villamayor and Bierstone–Milman; many different proofs and constructions were given, but the algorithm is essentially the same. So our logarithmic algorithm was, in a sense, the first really new one. And Włodarczyk in 2005 proved that the algorithm in fact satisfies a stronger property: not only is it canonical, it is functorial for all smooth morphisms. If Z' -> Z is smooth, then the resolution of Z' is the pullback of the resolution of Z. This is a stronger claim, and it is easier to prove, as often happens with inductive arguments. It also implies equivariant resolution, so it is useful for applications.
Now, about our results. In '17 we constructed an analog of the classical algorithm in the logarithmic world. If we want to resolve morphisms, it is clear that one should go to the logarithmic world, and I gave a few more reasons to do so. Now, morphisms are complicated things, so if you want to do something logarithmic, start with varieties and develop the machinery there. Note that Hironaka's theory already resolves log varieties: just resolve the variety, then resolve the divisor defining the toroidal structure, and you get a resolution. But we constructed an algorithm which is not only logarithmic, it is functorial with respect to all log smooth morphisms. This functoriality is completely out of reach for Hironaka's algorithm; it is something new, and it is important: in the logarithmic world you must work logarithmically, and log smooth functoriality is a much stronger property. This was the main novelty we aimed at.

Then, in the next paper, the sequel, we proved that the algorithm developed in '17 actually works for morphisms: the very same algorithm constructs a modification of X such that X_res -> B is log smooth. But it may fail if the dimension of B is larger than one, and it fails for a good reason: when the dimension of B is larger than one, it can happen that one also has to modify B. So a new ingredient was to prove that there exists a modification of B such that, after this modification, the base change can already be resolved by the algorithm of '17. Once you modify B enough, you can resolve. Moreover, this is compatible with any further base change: the statement is independent of the base and compatible with base changes. So far, in the arXiv version, this modification of B is not canonical: the resolution is canonical only relatively, once you choose some B. But we are working on a canonical modification of B too; we are in the middle of this work, but it is clear that it will be done.
So these are the new features of the algorithm. I have given the motivation and the formulations; now I'll describe the classical algorithm, and at the end I'll explain how it can be twisted into a logarithmic version.

All canonical methods before our work constructed essentially the same algorithm. One can work locally, because one is building something canonical: if you do it locally, it glues automatically. The resolution is embedded: one locally embeds X into a manifold M (by a manifold I always mean a smooth variety in this talk) and then works with the pair. One looks for blowups of the ambient manifold such that M_res is smooth and a certain transform of X, namely the pullback minus a few copies of the exceptional divisor, is a resolution of X. Functorial embedded resolution implies functorial non-embedded resolution, because an embedding is essentially unique; I will not dwell on this, but the reduction from non-embedded to embedded is simple.
Main choices. It turns out that this classical algorithm makes a lot of choices which look so natural that people are simply not aware they are being made. The first choice, the most natural one, is that we only blow up smooth centers. Why? Because we want the ambient space M to stay smooth throughout the algorithm. So we construct a sequence of blowups: M_i is blown up at a smooth center V_i, and we get a smooth M_{i+1}. This will be the notation.
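In a formula, the notation is the tower

$$ M = M_0 \xleftarrow{\ \sigma_0\ } M_1 \xleftarrow{\ \sigma_1\ } \cdots \xleftarrow{\ \sigma_{n-1}\ } M_n, \qquad M_{i+1} = \mathrm{Bl}_{V_i}(M_i), $$

with each center V_i contained in M_i smooth, so that each M_{i+1} is again smooth.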
And by the way, I want to stress that already this is a real decision: in our logarithmic algorithm the centers will be different, and in the weighted dream algorithm the centers are different too. So one can play even with this choice; it is essential.

Transforms. In Hironaka's approach, you pull back X and subtract a multiple of the exceptional divisor, the most natural thing you can do. If you pull back completely, you definitely get something which cannot be smooth, because it has several components: it contains copies of the exceptional divisor. So at the very least you must remove some copies of the exceptional divisor.
Choice of centers. There is an invariant in the algorithm, which I'll describe a bit later; its main component is the order of the ideal defining X. I'll explain later what the order is, but it is as natural as you can imagine: a very crude primary invariant.

History. In addition, the usual algorithm would run into a loop if one used only this primary invariant; I'll give an example in a couple of minutes. Because of this it has to use history: it cannot work without history. The history is recorded by the exceptional SNC divisor E, and the number of its components at a point will be another primary invariant.

Finally, induction. The algorithm runs by induction, but not induction on dimension followed by induction along a fibration: it is induction on codimension, that is, induction on a hypersurface, then a hypersurface inside the hypersurface, and so on. In the ambient manifold we choose a maximal contact hypersurface, so that the problem can be restricted to it, and so on; this is the mechanism of the induction. The actual invariant will thus be (d_1, s_1), then (d_2, s_2), the invariants on the maximal contact, then the invariants on the next maximal contact, and so on: a sequence of 2n invariants.

Okay, good. And now, history. The classical algorithm, in addition to its subtle inductive structure,
must encode history: with the choices above, a no-history algorithm does not exist. Here is an example of no progress. Take the ambient manifold A^4 and the hypersurface given by the vanishing of x^2 - yzt. Its singular locus is the union of three coordinate lines, the y-, z- and t-axes, and there is an S_3 symmetry permuting y, z, t. Inside this singular locus, the only smooth S_3-equivariant center containing the origin is the origin itself; so if we want something canonical, we must blow up equivariant centers, and then we can only blow up the origin. If we blow up the origin and consider a chart of this blowup, the pullback looks like y'^2 times the same expression in new coordinates: the total pullback of x^2 - yzt consists of a strict transform which looks just like the original hypersurface, together with two copies of the exceptional divisor. So after removing the exceptional divisor we are stuck with the same equation; it does not improve, and with no memory we would do the same thing again and again and never stop. A similar computation shows that even for the Whitney umbrella, when you blow up a pinch point you again get a pinch point. So Hironaka's algorithm must use history.
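For concreteness, here is the chart computation behind this example. Blow up the origin of A^4 and look at the y-chart, where x = x'y', y = y', z = z'y', t = t'y':

$$ x^2 - yzt \;=\; (x'y')^2 - y'\,(z'y')\,(t'y') \;=\; y'^2\bigl(x'^2 - y'z't'\bigr). $$

The factor y'^2 is two copies of the exceptional divisor {y' = 0}, and the remaining factor x'^2 - y'z't' has exactly the original shape.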
But using weighted blowups, not just plain blowups, we constructed in '19 a dream algorithm, which is as simple as possible. It defines an invariant; it says which center to choose, namely the center with the maximal invariant; you blow it up, and the invariant drops. And there is no history. Because there is no history, one does not even have to consider the exceptional divisor in this algorithm: it works without those numbers.

Good. Now, about the boundary.
Why is history encoded in the boundary in Hironaka's approach? It's very simple. Once we blow up M and get some M', any point x on the exceptional divisor has a God-given coordinate t, unique up to a unit, which comes from the history of the resolution. If we want to make fewer choices and to remember the history, we should always use this coordinate in all our computations, and this is what Hironaka does. So, inductively, for a sequence of blowups along submanifolds, we define the total boundary to be the preimage of the i-th boundary together with the new exceptional divisor.
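In a formula, with sigma_i denoting the i-th blowup:

$$ E_{i+1} \;=\; \sigma_i^{-1}(E_i) \,\cup\, \operatorname{Exc}(\sigma_i), \qquad \sigma_i\colon M_{i+1} \to M_i. $$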
We call E_{i+1} the accumulated boundary of M_{i+1}. We always work with a coordinate system t_1, ..., t_n such that both the new center and the boundary at the current stage can be expressed in these coordinates. In this case one says that E_i and the center V_i have simple normal crossings: V_i lies in a few components of the exceptional divisor and is transversal to the union of the other components. We call the boundary coordinates exceptional, or monomial, and even denote them differently: m_1, ..., m_r. So our coordinate system has some usual coordinates, where we have choices, and some exceptional coordinates, which are God-given up to units. If one blows up only such V_i's, the boundary automatically remains a simple normal crossings divisor at every stage; if I blew up a smooth center which is not transversal in this sense, it could happen that I destroy the boundary, and the next boundary would not be SNC. So this is a must: if we want to use a boundary which is an SNC divisor,
we must blow up only such centers, and this restricts our choice of smooth centers.

Now, the role of the boundary. The good news is that once we use monomial coordinates we have fewer choices, which is what we wanted: we avoid loops. Also, the boundary can accumulate part of I: in the sequel we split I as I = I_mon * I_pure, where I_mon is the maximal invertible monomial ideal dividing I and I_pure cannot be divided by any monomial. This splitting will be essential in a minute.
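A minimal example of this splitting, with a hypothetical exceptional coordinate m and regular coordinate t:

$$ I \;=\; (m^2 t^3,\; m^3 t) \;=\; \underbrace{(m^2)}_{I_{\mathrm{mon}}} \cdot \underbrace{(t^3,\; m t)}_{I_{\mathrm{pure}}}, $$

where I_mon = (m^2) is the largest invertible monomial ideal dividing I, and I_pure is divisible by no nontrivial monomial.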
The price, in fact the other side of the same coin: we must treat E and the monomial coordinates with special care, and there are fewer possibilities for coordinates, so sometimes this is also a problem. Okay, good. Many technical complications of the classical algorithm are actually caused by the fact that it separates regular and exceptional coordinates badly; I'll point out where this happens. First of all, in the definition of the order: there are two classes of coordinates, but in Hironaka's approach they are mixed. In our approach they will be separated completely, as you'll see.

Good. Now, principalization. The idea of splitting I into a monomial part and a pure part is reflected as follows. By the principalization problem we mean the following.
All algorithms of embedded resolution do the following. Once we embed X into M, we replace it by its ideal I = I_X on M; from now on we ignore the geometry of X completely and work only with the geometry of M and an ideal on M. And we solve the following principalization problem: find a sequence of blowups of manifolds with boundary, as above, such that the pullback of I to M_n is invertible and monomial. So the ideal becomes just what I wrote as I_mon, and I_pure is completely killed: no pure part, which means the pullback is supported on E_n. It looks like a different problem, but it turns out to be equivalent to embedded resolution; in fact, it is stronger.
The magic is that the last non-empty strict transform of X, denote it X_L inside M_L, is actually a component of the center V_L. Because of this it must be smooth and transversal: X_L is smooth and has simple normal crossings with E_L. So if you can solve the principalization problem, you automatically solve the embedded resolution problem. From now on we therefore discuss the principalization problem: we have replaced a geometric problem by an algebraic problem about ideals. Moreover, principalization not only resolves X_L, it also takes care of the history divisor: since X_L and E_L have simple normal crossings, the restriction of E_L to X_L is SNC. So we wanted to solve one problem and we solved a stronger, logarithmic one. This gives a strong smell of log geometry, and it was one of the indications that log geometry is lurking behind Hironaka's approach. A great profit: working with ideals provides a lot of flexibility, as we will immediately see.

Okay, order reduction.
The main invariant of the algorithm, as I said, is the order: the order of the pure part, because the monomial part is, so to speak, our friend, and the pure part is our enemy; we want to decrease the pure part. The order of an ideal is defined as the minimal order of vanishing of its elements, and it is as natural as you can imagine: at the origin, the order of x^2 - y^2 is two, witnessed by these degree-two monomials, and in the second example on the slide the order is five, because of the degree-five monomial. Okay. And in addition,
one works not just with ideals but with so-called weighted, or marked, ideals (I, d), where d is a number. This number indicates what type of transform we want to perform: d says that we want to remove d copies of the exceptional divisor. So we only use blowups along centers contained in the locus where the order of I is at least d; we call this locus (I, d)-singular, the singular support of the marked ideal (I, d). If we blow up such a center, we can automatically update I by pulling it back and dividing by the d-th power of the exceptional divisor: blowing up inside the locus of order at least d guarantees at least d copies of the exceptional divisor in the pullback, so we can subtract them.
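Explicitly, with sigma the blowup and E its exceptional divisor, the controlled transform of the marked ideal is

$$ (I, d) \;\longmapsto\; \bigl(\,\mathcal{I}_E^{-d} \cdot \sigma^{-1}(I)\,\mathcal{O},\; d\,\bigr), $$

and this is an honest ideal precisely because the center was contained in the locus where the order of I is at least d.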
For example, we have already seen this: we blew up x^2 - yzt with d = 2 and removed two copies of the exceptional divisor.

Order reduction finds a sequence of blowups with boundaries (to save space I do not write the boundaries) which are (I, d)-admissible in the above sense, that is, blowing up only such centers, and such that the (I_n, d)-singular locus is empty. In other words, it produces an I_n whose order at every point is strictly less than d: we blow up the points where the order was at least d, and eventually drop below d. So we reduce the order of I below d.

In principle, the existence of such a procedure immediately implies principalization: just take d = 1, start with the ideal, and kill it completely, using such transforms and factoring out the monomial parts at each step.
A remark: the main case is actually not d = 1; the main case is d equal to the order of I_pure. This is the most natural choice: our invariant says that the worst problem happens where the order is maximal, so we first reduce the maximal order, then the next one, and so on. But for inductive reasons we also have to deal with the case when d is not the order of the pure part but something smaller; it is a sort of bad karma inherited by the maximal contact from the general problem.

Okay, good. Now we come to the concrete part, just one or two slides: maximal contact.
This is the miracle which enables the induction on dimension, and the miracle happens only in characteristic zero; we have no idea what to do in characteristic p. The phenomenon is that in the maximal order case, when d is exactly the order of I, the order reduction of (I, d) is equivalent to the order reduction of the so-called coefficient ideal C(I), restricted to a hypersurface H of maximal contact, with marking d factorial. Any blowup sequence which reduces the order of C(I) on H gives rise to a blowup sequence which reduces the order of (I, d): blow up a center inside H, then a center inside the strict transform of H, and so on; the same sequence induces a sequence of blowups of the ambient manifold.
Here C(I) is, as I said, the coefficient ideal, and H is a hypersurface of maximal contact. The main example of how this looks: assume I is generated by a single equation, a hypersurface. Then we can always choose coordinates t = t_1, t_2, ..., t_n such that, at least formally locally, the generator takes the form t^d + a_2 t^{d-2} + ... + a_d, where each a_i depends only on t_2, ..., t_n. In this case H is very simple: just the vanishing locus of t. And C(I) is also very simple: essentially the ideal generated by the coefficients, hence the name coefficient ideal, except that we assign them weights. We want a_2 to have weight two and a_d to have weight d, so we take the integral powers d!/i, which put all the coefficients in the same weighted degree d!.
A remark on why such a definition, why the coefficient ideal. If I simply restricted I to H, I would just keep a_d restricted to H, and this loses a lot of information: no way would it be equivalent to the original problem. I want to restrict all the coefficients to H; but when I kill t, I must somehow remember in which degree each coefficient sat, and it is clear that the weights should be exactly the ones I wrote. It is just a way to keep all the information about the equation when passing to H. And a_1 = 0: this is the place where we really use the characteristic zero assumption; otherwise it is not possible to kill the coefficient of t^{d-1} by a coordinate change. It will immediately be clear why this is so important.

Okay, good. The example above completely illustrates the main mechanism of the algorithm, but it involves choices, a lot of choices:
I just chose some coordinates. So the question is whether this can be done without choices, and yes, it is done by the use of derivations. My tool for a choice-free description is the derivation ideal of I, denoted D(I): it is generated by I and by all derivatives of its elements, and the iterated derivation ideal is denoted D^n(I). Note that derivation decreases the order of an ideal exactly by one: there is at least one partial derivative which decreases the order, which is obvious. Because of this, derivations provide a conceptual way to define all the basic ingredients. The order is just the minimal d such that D^d(I) is the trivial ideal (1), of order zero. Maximal contact: if I derive my ideal d - 1 times, its order becomes one, so there is an element of order one, and an element of order one defines a smooth hypersurface.
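A tiny example of this characterization of the order: for I = (x^2 - y^3) in k[x, y],

$$ \mathcal{D}(I) = (x^2 - y^3,\ 2x,\ 3y^2), \qquad \mathcal{D}^2(I) \ni \partial_x(2x) = 2, $$

so D^2(I) = (1), and the order of I at the origin is 2, matching the minimal vanishing order of x^2 - y^3.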
Any such smooth hypersurface is a maximal contact hypersurface. In our example, where there is no a_1: if I derive d - 1 times, I kill all the other terms and, up to a unit, I am left with t itself. So the maximal contact is also defined using the derivation ideal, and the coefficient ideal, again, is a weighted sum of derivation ideals; more or less the same as before.

A remark: the only serious difficulty in proving independence of the choices is the independence of the choice of this t. There may be several maximal contact hypersurfaces, and one must prove independence; this is the headache of the algorithm, its most subtle point. I won't discuss it in this talk, but there is something to be done there. Up to the choice of the maximal contact,
I have more or less described all the ingredients.

Okay, good. Now, the complications of the classical algorithm. It has two complications, and both are related to the use of usual derivations instead of logarithmic ones. The module of logarithmic derivations is spanned by the logarithmic derivations m_j d/dm_j and by d/dt_i for the regular coordinates t_i. These are precisely the derivations which preserve the ideal of the exceptional divisor, and they are preferable for almost all needs: easier, more conceptual, easier for computations.
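In coordinates, with exceptional coordinates m_1, ..., m_r and regular coordinates t_1, ..., t_l, the module is

$$ \mathcal{D}^{\log} \;=\; \bigl\langle\, \partial_{t_1}, \dots, \partial_{t_l},\; m_1\partial_{m_1}, \dots, m_r\partial_{m_r} \,\bigr\rangle, $$

and indeed each generator maps the ideal (m_1 ... m_r) of E into itself.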
Whatever you want to do, you would prefer to work with logarithmic derivations once E is kept in the picture. But we cannot compute the order using logarithmic derivations; this is the problem, and we must use all derivations. Because of this, Hironaka's approach runs into the following two complications.

First, the choice of H. The maximal contact is chosen using the derivation ideal, and the derivation ideal knows nothing about the exceptional divisor: no relation at all. Because of this it can happen that E is not transversal to H. In that case I cannot restrict E to H and get something SNC; I can restrict the log scheme, but it won't be log smooth. So we have no control over transversality to E,
and the algorithm we run on H will not be transversal to E, destroying the whole inductive scheme. How does one fix this? It turns out that once we start blowing up inside H, all new boundary components are automatically transversal to H; the problem is only with the old boundary. So the solution is to work with the stratification of H by the old boundary, by the number of old boundary components: one defines a secondary invariant, or a second primary invariant, s_old, the number of old components of the boundary at the point, and works first where s_old is maximal, then where it takes the next value, and so on. I won't go into details, because our algorithm gets rid of all this mess, but it exists and is a headache of the usual algorithm. This is also the reason why the initial invariant
is not just the order d: it is the order together with the number of components. At this stage E is our enemy, and we have to bypass this complication somehow.

The second complication is that it can happen that the order of I is at least d while the order of the pure part is smaller than d, because the monomial coordinates contribute to the order. In such a case we cannot proceed by looking only at the pure part; we cannot just say 'take the pure part and reduce it', because it is already reduced below d. We then have to take the order of I_mon into account and work with the stratification by the loci where the order of the monomial part is large enough. Again one has to stratify the picture and run something different. There is a solution, outlined here, which I will not discuss, because again it is not essential for our new algorithm. Let me only mention that even when I_pure is trivial, one still has, for inductive reasons, to get rid of the monomial part, and this is done by a purely combinatorial step. So again something must be done; and for this combinatorial step
we do have an analog in our new algorithm, but a much simpler one.

Okay, good. We are done with the classical algorithm, and we have about 10 to 15 minutes to discuss the logarithmic twist and the logarithmic algorithm.

So, what is the boundary? Before going further, let's really understand what the boundary is, because so far I have only hinted that in Hironaka's algorithm there are logarithmic ingredients: sometimes they help, sometimes they work against us, but they are there. So let's think about the boundary.
Typically (and this was my own view before I started, even though I was familiar with logarithmic geometry) one thinks of the boundary as a divisor. I now think it is wrong to view the boundary as a divisor. Unlike the embedded scheme X, you should not think of E as a subscheme, if only because there is no map of pairs (M', E') to (M, E): when you blow up, you enlarge the preimage of the boundary, so E' does not map to E; it may even happen that E is empty while the new boundary is non-empty. So it is not a map of pairs of schemes, and even just by functoriality E is not well viewed as a subscheme. But if you view this as a morphism of log schemes, everything makes perfect sense: we take the log structure associated to the SNC divisor. Moreover, this is an excellent kind of log scheme, a log smooth one. And the sheaf of monomials, the functions invertible outside of E, that is, this log structure, is precisely what we need from E: in Hironaka's algorithm we factor the ideal into a monomial part and a non-monomial part, and to factor out the monomial part we use exactly this sheaf of monomials. So in a sense Hironaka invented, in this particular case, the notion of a log scheme.
In a very particular case, yes. Okay: logarithmic parameters. We will work with log smooth log varieties; for shortness I'll just say toroidal varieties, since classically toroidal varieties are the same as log smooth varieties. Locally they are of the form Spec K[M][t_1, ..., t_l], where t_1, ..., t_l are regular parameters and M is a sharp fs monoid. We view the t_i as the regular coordinates, and the elements of M as the monomial coordinates. Note that now we do not have good monomials and bad monomials: this M can be complicated.
Next, logarithmic derivations and differentials. The module of logarithmic differentials of (t, M) is freely generated by the differentials dt_1, ..., dt_l and by dm_j/m_j, where m_1, ..., m_r can now be any basis of M^gp. I don't care whether it is a basis of M; M does not even have to be free, and any basis of M^gp is good for me. Please pay attention: I am in characteristic zero, and this is the reason I can take any basis; even a basis of M^gp tensor Q would do.
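In a formula, for the local model above:

$$ X = \operatorname{Spec} K[M][t_1, \dots, t_l], \qquad \Omega^{\log}_X \;=\; \bigoplus_{i=1}^{l} \mathcal{O}_X\, dt_i \;\oplus\; \bigoplus_{j=1}^{r} \mathcal{O}_X\, \frac{dm_j}{m_j}, $$

where m_1, ..., m_r is any basis of M^gp (or even of M^gp tensor Q, in characteristic zero).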
This fact I prefer to state as the principle of monomial democracy; we'll come back to it a bit later. From now on, M does not have to be free, there is no canonical basis of M^gp, and all monomials are equal for us: just as all fs monoids are equal, all monomials inside such a monoid are equal.

A remark: the most interesting feature of the new algorithm is functoriality with respect to Kummer log étale covers. I said that the algorithm is compatible with any log smooth morphism, but the Kummer log étale ones are probably the most surprising and the most interesting case, because in the usual setting they look like ramified covers; they are not smooth, so why would you expect any compatibility? For example, if we extract roots of monomial coordinates, our resolution is compatible with such an operation, while in the classical setting Hironaka's obviously is not. Or, in the case of semistable reduction, we can extract roots of a uniformizer of the base, or consider a ramified ground field extension, and still this is compatible with our algorithm. This is out of reach, and also unnatural, for the classical algorithm,
but it is very natural in the logarithmic world.

Now, the main results about logarithmic algorithms, ignoring the orbifold aspect, which I hinted at in the beginning and will discuss a bit at the end. Log principalization: given a toroidal variety T and an ideal I in O_T, we can find a sequence of admissible blowups of toroidal varieties T_n -> ... -> T (I'll say later what admissibility means this time) such that the pullback of I to T_n is monomial. So this is the direct generalization of principalization to the logarithmic setting, and the sequence is compatible with log smooth morphisms; again, log smooth functoriality is essential. As in the classical situation, this implies log resolution: given any integral log variety X, there exists a modification X_res -> X such that X_res is log smooth. This is functorial, again in the strong sense; that is the main novelty. Also, as I mentioned, both principalization and log resolution work in the relative situation, for morphisms.
Good. Now about the method, and please pay attention: we have something like seven minutes and just four slides, but after the work we have already done it will really be very simple. In brief, we want to log-adjust all parts of the classical algorithm; that is, we want to put 'log' in every place we can.

Sorry, I was confused about the log principalization: the toroidal variety is normal, and the pullback of the ideal should be monomial, and also invertible? Invertible, yes, I forgot to say: invertible and monomial. And the ideal itself: is it any coherent ideal, or is it related to the log structure? No, any ideal. Any ideal. But then,
you didn't explain what these blowups mean. You'll see, you'll see: I enlarge the log structure, as you can imagine.

So, in brief, we log-adjust everything. How do we do it? The log order of I is the minimal d such that D_log^d(I) is the trivial ideal: we just replaced D by D_log. A maximal contact is any hypersurface given by the vanishing of a regular coordinate t whose log order is one: in D_log^{d-1}(I) there are elements of log order one; take any of them, and it defines a maximal contact.
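In symbols, the two definitions just given:

$$ \operatorname{logord}(I) \;=\; \min\{\, d \;:\; \mathcal{D}_{\log}^{\,d}(I) = (1) \,\}, \qquad H = V(t), \quad t \in \mathcal{D}_{\log}^{\,d-1}(I) \ \text{a regular coordinate of log order one}. $$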
Such a maximal contact is automatically toroidal: if I took the vanishing locus of a monomial coordinate I would not get something toroidal, but the vanishing locus of such a regular coordinate is always toroidal. The coefficient ideal is, again, a weighted sum of logarithmic derivation ideals. The only genuinely new point is what it means to have an (I, d)-admissible blowup. This time we allow blowing up any center J such that, first, I is contained in the d-th power of J; this is the d-admissibility: if I is contained in J^d, then the pullback of I can be divided by the d-th power of the pullback of J, that is, I can remove d copies of the exceptional divisor. And second, J is generated by a few regular coordinates and a few monomials; and I do not care which monomials: it's democracy.
You can take any set of monomials; any monomial ideal can be part of the center. Obviously this destroys smoothness, but it preserves log smoothness, and in the log smooth context I am allowed to do such a thing: I have more possibilities for blowups. In fact, I blow up what we call submonomial ideals: a submonomial ideal is a monomial ideal on a logarithmic submanifold given by the vanishing of some regular coordinates t_1, ..., t_k. After blowing up such a center, I add its exceptional divisor to the monomial structure; I enlarge the monomial structure, just as in the classical algorithm.
Good. Now, infinite log order. A strange new thing is that while the log order of the t_i's is finite, the log order of a monomial is infinite by this definition: when I apply a log derivation to a monomial, I get back a multiple of the same monomial, so monomials are eigenfunctions of the logarithmic derivations.
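Indeed, writing m^a = m_1^{a_1} ... m_r^{a_r}:

$$ m_j\partial_{m_j}\bigl(m^{a}\bigr) \;=\; a_j\, m^{a} \quad\Longrightarrow\quad \mathcal{D}_{\log}(m^a) = (m^a), \qquad \operatorname{logord}(m^a) = \infty, $$

since the iterated logarithmic derivation ideals of (m^a) never reach (1).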
So monomials behave like zero: their log order is infinite. This is the main novelty, and it is exactly the novelty which allows functoriality with respect to extracting roots of monomials, that is, Kummer covers. On a Kummer cover my monomial, say m, becomes the square of something else, but its order must be the same if my algorithm is compatible with Kummer covers: all invariants must be compatible, and the only way to be compatible is to say that the order is infinite. Derivations are simply not able to treat monomials, and one should give up and not insist, as Hironaka's approach does. As a price, we have to do something special when the log order of I is infinite, but this something special is very simple, and in fact it was discovered by Kollár a few years before our work. It just says that one should consider the ideal I_mon,
the minimal monomial ideal which contains I. For example, if I is generated by an element of the form sum of m_i t^i with monomial coefficients m_i, we take the ideal generated by these monomial coefficients, blow it up, and divide the pullback of I by the pullback of I_mon. The effect is that one of these coefficients becomes invertible.
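A minimal illustration, with hypothetical generators: take I = (m_1 + m_2 t^2), so I_mon = (m_1, m_2). Blow up I_mon; in the chart where m_2 divides m_1, say m_1 = u m_2 with u a new monomial, we get

$$ m_1 + m_2 t^2 \;=\; m_2\,(u + t^2), $$

and dividing by the pullback m_2 of I_mon leaves u + t^2, whose log order is now finite (equal to 2, since logord(u) is infinite but logord(t^2) = 2).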
So on the pullback the log order becomes finite. It is a very simple, completely combinatorial blowup, a monomial blowup, which makes the order finite, and after that one proceeds as usual: take a maximal contact and run the induction on dimension.

Our algorithm completely avoids both complications I mentioned. The maximal contact is always given by a regular coordinate, so it is always transversal to the monomial structure, automatically toroidal. In a sense, we completely separate dealing with the regular coordinates, via the log order, from dealing with the monomial coordinates, which is handled by combinatorics, by toroidal, monomial blowups. The invariant is now also much simpler: it is just the string of log orders (d_1, ..., d_n), with the d_i natural numbers and the last one possibly zero or infinity.

Okay. And now, in what elementary way do we cheat? I admit there is a cheating, and it is this:
a drawback of monomial democracy is that the algorithm has no idea when one monomial is a power of another monomial, and sometimes, because of the weights, it insists on blowing up a fractional power of a monomial. We call such a thing a Kummer monomial: it is a monomial on a Kummer cover, but not on the variety itself. How can we blow it up? Well, we can try to work Kummer log étale locally: pass to the Galois cover where this root exists, blow up there, and then divide back by the Galois group. An excellent idea, and because of log functoriality we did not expect any complication here, but it turned out that after the blowup the action is no longer toroidal, so when we divide back we get something which is not log smooth. Because of this, we must divide back as a stack,
a so-called non-representable modification, which we call a Kummer blowup: the blowup of a Kummer ideal, that is, an ideal in the Kummer topology. Such an ideal can be made invertible, but only by a non-representable Kummer blowup. And this is fine for applications, because afterwards we can remove the stack structure by a torification algorithm, the same kind of algorithm as used in Gabber's approach and by Abramovich, de Jong and others. By torification, or destackification, we can actually remove the stack structure. But this last step is compatible only with smooth morphisms, so in order to be log smooth functorial we must also work with stacks, with non-representable modifications: the stage which is log smooth functorial really works only in the world of stacks.
So we must enlarge our context first to log smooth varieties and then also to stacks. And now the last slide: there is an example showing the difference between the classical and the logarithmic situations, and showing where a non-representable Kummer blowup is needed; I won't stop on it because I am out of time.

A last remark about these weighted blowups we discovered here: we blow up t_1, ..., t_n and a monomial m with weight d, and this can be done more generally. Once we had discovered weighted blowups in the stack-theoretic context, we asked what can be done for the classical problem. It turns out that the usual weighted blowup of coordinates t_1, ..., t_r with weights d_1, ..., d_r is in fact the coarse space of a non-representable modification which is smooth. If one works with weighted blowups, considers just the usual centers predicted by Hironaka, the maximal contact centers, takes the correct weights, and performs the correct blowup, one gets the dream algorithm I talked about. So it was actually always hidden in Hironaka's approach; people just did not know the correct tools to work with. One has to work with the correct weights, one has to work with stacks, and then it is possible to get the simplest algorithm one can imagine. I thank you for your attention.
Thank you. There is one question from the Q&A: which paper is cited as T17, if any? '17 is the paper in JEMS, Principalization of ideals on toroidal orbifolds. Just T17? Ah, no, it was ATW17.

So, are there other questions? I wanted to make some sort of comment. Sorry, just a minute, let me see.
Yeah, this question was asked already, at 10:35 a.m. Ah, okay.

May I ask a question? It concerns some comments
in your talk related to my work. You mentioned torification, which you said is used in my book. In fact, as far as I remember, you mentioned it in 2005 as a possible simplification. What I did was use the canonical desingularization of a certain quotient, and then you suggested to use torification, which actually works, but it was not done in the book. I don't know if this is what you meant about torification in relation to my work. Yes, I meant that there are a few algorithms for torification. Your algorithm indeed used resolution; the initial algorithm of Abramovich and de Jong used a certain trick instead. But in all arguments I know, you must pass to the log smooth setting: you cannot do it only with smooth schemes and SNC divisors. It would also have been possible in your approach to use the torification of Abramovich and de Jong, but okay, it wasn't. There is also work on so-called destackification, a generalization of this to stacks: a similar kind of algorithm, in several versions,
but all of them must somehow work with log smooth structures and not just smooth ones.

Okay, now concerning the classical desingularization: you have Villamayor, then Bierstone and Milman, and there is also another paper by Encinas and Villamayor. I think I read some time ago, in the Math Reviews of some of these, that the algorithms are not exactly the same; sometimes they differ, with a different order of steps. It's not exactly the same order of steps, okay, let's say so. In the talk I allowed myself to sweep some unimportant things under the carpet, to save time and to make it simpler for the listeners. But you are right: the combinatorics can be done in a naive way or more efficiently, and people play with this a bit.
You have some choices in the combinatorics, of course. Moreover, the difference between these algorithms is like compilers for a C program: there are more effective compilers and less effective ones, and a less effective one just tells the processor to stop and wait until it is sure it can do the next operation. The same happens with these algorithms: in some versions, when they are not sure they can proceed, they do many more combinatorial steps than needed, for example blowing up a divisor a few times as an idle operation. So there are nuances, but the main engine of the algorithms, say the choice of maximal contact and so on, is completely the same. Which is due to Hironaka; this is in Hironaka. Yes, but in Hironaka it was implicit,
and Hironaka himself worked on it for many years to make it simpler, and so on. There is also his theory of idealistic exponents: he introduced them in 1977. It doesn't give the algorithm, but one can actually build on it, and this is closely related. Correct: idealistic exponents are marked ideals. That was the idea of considering marked ideals; an idealistic exponent is precisely such a marked ideal. Yes, and there is a reduction step you mentioned which occurs in his work; he also wrote a paper about this later, in the early 2000s. Okay.
Just a very naive question: I'd like to come back to your result about making a morphism good by modifying the base. If I remember, you have an f, and then f' is deduced from f by some modification of the base, and it is good. Is it this slide, 'the new ingredient is that there exists...'? I'm not so sure; there was no 'log' in the statement at the beginning, maybe. Was it in altered resolution or in the classical part? Yes, altered resolution. I don't remember which assumptions you have on your X over Y; no, it's not so technical. So my question is: you have an f from X to Y which is not good, and you make a Y' to Y which is a modification; is goodness then achieved on the pullback, or do you also have to modify the source a little? I don't remember. Look, is it this theorem?
Maybe, yes: log smooth, yes. Yes. So it's a modification, or an alteration, of both schemes. And you start with fs log schemes; for the underlying scheme, is there any assumption, say over a field, or not? Well, I assume that X is of finite type over a qe surface. Over a quasi-excellent surface? Over a quasi-excellent surface, yes. So I was wondering whether you could use this sort of result, or a related result, to prove, for example, Fabrice's theorem about making the generalized R-psi of f good after a modification of the base. Of course, in Fabrice's theorem you have a sheaf, but take, say, a constant sheaf: you have this X to Y, and the R-psi of X over Y is not good, but by modifying Y you make it become good. And if the morphism becomes good after modification, then R-psi of course becomes good. So I was wondering. Okay, I'm not prepared to answer this on the spot. No; anyway, the approach
in Fabrice's paper uses relative dimension one, so it is closely related to what you are doing with the fibration in curves. What I'm saying is that, instead of the complicated induction with fibrations in curves there, one could perhaps prove the same thing by reducing to the case where you have fs log structures on both X and Y and a log smooth, maybe saturated, morphism. And of course the sheaf should be constructible with respect to some stratification compatible with the log structure, with some tameness condition on the sheaf; then one proves that for this class of sheaves you have a uniform statement: constructibility of R-psi, compatibly with the stratification. And with your second paper, the one on morphisms, maybe. Yes, it may be possible to set this up, but of course you have to develop a lot of log machinery just to state it, and then of course to actually check that R-psi is good you have to compute it in certain situations, and sometimes it is very useful to use such gadgets, to use already existing results. But I think in principle it is possible to do it this way, just by improving the morphism. Do you think that Michael's result might help here?
No, no: what I'm speaking about does not use as much as this; it just uses the de Jong approach, to alter, but without all the rest. Here you want control: I would be satisfied with a log smooth saturated morphism, whereas here he wants better, and he also wants control of the degree. Well, let's stop here, because it becomes too technical. Yeah, I know, it's true. But I also wondered about this theorem; that's why I asked the question about whether X' and Y' are integral. Now, if you have this, when you étale-localize,
of course being irreducible is not preserved, so it's a bit strange. Ah, okay, I see what you're saying; if the log structure is not Zariski, then yes, maybe you're right: maybe one has to consider irreducible components. You are right, yeah.

So I think that's all. Okay, I think we should stop here. All right.