
2/4 Mathematical Structures arising from Genetics and Molecular Biology


Formal Metadata

Title
2/4 Mathematical Structures arising from Genetics and Molecular Biology
Series Title
Number of Parts
18
Author
License
CC Attribution 3.0 Unported:
You may use, copy, distribute, transmit and adapt the work or content for any legal purpose, in unchanged or changed form, provided the author/rights holder is credited in the manner specified by them.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Transcript: English (auto-generated)
OK, so just let me remind you what I was describing informally last time; now I present the kind of algebra behind it, which is essentially elementary. So the key observation where it starts is biological, and quite simple. But as far as I could see, there was something rather remarkable behind it.
And that was that there are certain phenotypes, phenotypic features, which always appear in a very sharp and discrete form. For example, you may have two kinds of flowers: flowers may come in two different phenotypes, say red and white. And it never mixes; there is no mixture there. It is either red or white. And there were other features like that. And this was in sharp contradiction with the whole mentality of the time; this paper of Mendel was written about six years after Darwin's Origin of Species. And at that time the Origin of Species was extremely influential, and the main point made by Darwin was that all changes happen continuously. And that is exactly what is violated here: in this example the variation is very sharp.
And there is no in-between, right? Another example, which was considered more than a hundred years prior to that, was handled by Maupertuis, whom you know from mathematical physics. He was observing the distribution of people with six fingers, yeah, you know: sometimes there are families where some have five fingers and some have six, and they never have five and a half, so to speak; it is very discrete. And he was observing the statistics of that. And both Mendel and Maupertuis were very much impressed: first, these features are discrete.
And secondly, the statistics of it: when you start looking at different phenotypes, the proportions are roughly small integer numbers, yeah? So phenotypes may be distributed like one to two to three. And of course it is never exactly sharp, yeah? If it were 1.3 against 2.7 instead, that would be perfect from the biology point of view; yet it comes out close to integers, and this is incredible: even with a small deviation, the probability of this being random is just zero, no matter how small the error, because it appears many, many times; no matter how likely it is to happen at random once, for it to happen systematically would be impossible, right? And this was idealized by Maupertuis, who developed a kind of conjectural theory of heredity, different from the Mendelian one, and not quite correct. And then it was done by Mendel, and completely missed by all the biologists in between, including Darwin, who also sometimes observed these phenomena; it was just contradicting their philosophy, and so they couldn't make any sense of it.
As you know, actually, the first publication of Darwin's work was also quite amusing in many respects. When his Origin of Species first appeared, a major premise there was completely false, and it was pointed out to him that his model of heredity would never give the observable phenomena: exactly this continuity; whatever advantageous variation you have, it will be dissipated, and that's it. And Darwin conceded that he had missed it; and then, following a suggestion, he turned to what is now called Lamarckism. It was not actually Lamarck who suggested that new features acquired during life may be inherited, but this is now, incorrectly, called Lamarckism, and Darwin accepted it. So Darwin couldn't figure out what to do. And on the other hand, in the same first edition,
he analyzes one point, and this is an extremely good example of how the concept of natural selection fits, because usually the examples brought in books are completely irrelevant, completely wrong. They don't show anything, like the growing long neck of the giraffe or whatever; they can be explained, or rather interpreted, in billions of ways. But Darwin observed one particular phenomenon which is not so obvious, and kind of paradoxical, and this is the preservation of the sex ratio. Very many species have the ratio half to half, despite the fact that their mating habits are such that, as in particular for the sea elephants, for one male there are usually about ten females. And the question is: why do we still have the same ratio? It looks kind of absurd: most of the males are not functional, so why at birth are they in the same proportion? And Darwin gave a quite reasonable explanation. But then, in the following edition, this explanation disappeared, and Darwin was writing
that this was a mystery, how it could be. And then, about 70 years later, somewhere around 1920, Fisher, the great statistician and mathematical biologist, suggested an explanation. And indeed, if you look at it with very simple logic, it must be so: even if you need fewer males, each male will have more descendants, ten times more than a female, and therefore it equilibrates and gives you the one-half to one-half ratio. However, if you think about this carefully... actually, I remember the last time I spoke to Larry Guth about it, he immediately found a flaw in this argument. Yeah, it is superficially right, but Fisher never actually wrote it; it is usually just attributed to him, and apparently it is not totally clear how it works. And Darwin apparently saw that, and the explanation disappeared from his later editions. Usually people say that because he was not editing himself carefully there was a mistake, but I don't think so, yeah; there is some interesting, non-trivial logical point here: how could it be, what sustains this ratio being close to one half?
But anyway, there was this paper by Mendel, and the point, and I think this was an extremely pronounced point, is that the phenomenology is not what is being inherited. This again was a major misconception of people before Mendel, and it made all the evolutionary theorizing before him nonsensical. What is being inherited are certain features, namely units, which would be called genes twenty years later; he didn't have this terminology. Each gene, in a diploid organism, is composed of two alleles, and these are what is being inherited; the inheritance goes by the genes. So what we are observing: there are two variables, say A and B, and the phenotype is a function F of A and B; but what is inherited is A and B, not F.
And because of this, the result could be non-obvious; for example, when you cross two flowers, red flowers could have white descendants, right? We have two parents, both red, and the children may be white. The reason is the function: if you have red and white as values for A and B, and the color is the function, the typical function here would be that the flower is always red unless, out of the four combinations, you get the one, white-white, which gives white. So we can have parents red-white and red-white, and their descendants may be pure white, white-white. So the visible phenomenology is not inherited in any simple way; it sits at a completely different layer. I think this was a completely incredible idea, extremely profound, and for that reason it was not accepted for 30 years, exactly because it was well beyond the conceptual level of the biology of that time. And then, on the other hand, and maybe I'll come to this later, there was already a similar development in physics and physical chemistry; similar ideas were in the air.
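As a toy illustration (not from the lecture; the allele names and the dominance rule are the standard textbook assumption), here is the cross just described, written out in code:

```python
from itertools import product

# Phenotype as a function F(A, B) of the two alleles:
# red is dominant, so the flower is white only for the pair (white, white).
def phenotype(a, b):
    return "white" if (a, b) == ("white", "white") else "red"

# Cross two red-white heterozygotes: each parent passes one of its two alleles.
parents = ["red", "white"]
offspring = [phenotype(a, b) for a, b in product(parents, parents)]
print(offspring)                                               # ['red', 'red', 'red', 'white']
print(offspring.count("red"), ":", offspring.count("white"))   # 3 : 1
```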
Now, the simple preliminary observation is this: you take the idea that in each pure phenotypic feature there are two components; the feature, represented by a gene, is composed of two units (maybe more in polyploid organisms, but say two for many of them), together with a simple kind of function. And from that you can say: aha, for example, if you have pure white parents, all their descendants will be white. If they carry red and white, they may have everything; but if you select long enough for a particular one, they become all red or all white. And then, when you mix them, there are certain proportions in which they get mixed, right? For example, you have red-red and white-white, two pure lines, and you interbreed them; then you may have four combinations, but the white-white combination will of course be three times rarer than the combinations red-red, red-white and white-red; here there are three, here is one, so there is a one to three proportion. And this was in the head of Mendel, and this is mathematics, kind of. And again, I was saying this paradoxical thing: by many people at that time, and even later, what Mendel had done was perceived as mathematics. On the other hand, natural selection seemed a biological phenomenon,
but it is completely the opposite. The principle of natural selection has nothing to do with biology: you just cut off an exponential function, absolutely pure mathematics; and because it is so trivial, it is easily accepted. And here you have a genuinely biological phenomenon: you discover, conjecturally, you make a conjecture about structural biology in mathematical language. And exactly because it is by an order of magnitude more subtle and more sophisticated, people couldn't get it. Even now, yeah, people argue about Darwin, da-da-da; everybody understands it, agree or not. About Mendel nobody argues, because if you understand it, you don't argue; it just takes some effort to understand all the ramifications. This is, for me, actually a mystery: biologists are not that excited by Mendel, while they love Darwin, and I don't understand why. Darwin was certainly a great philosopher, but not a great scientist. And Mendel, on the contrary... well, we don't know what Mendel thought, because unfortunately all his documents were burned after his death, for some reason. He was an abbot, and when the new abbot came, he burned all his documents, for some political reason. And so we don't know what Mendel thought; he was working on other subjects too, by the way, and we don't know what he was doing, which is a pity. But the point is,
so far, if you just count the possibilities, it is both qualitative and quantitative: everything happens as randomly as it conceivably can. And the conclusion from that, which goes back to Mendel, is the Hardy-Weinberg principle, which I describe here in the second formula; but let me remind you again what it tells you. In biological terms it looks completely unfeasible. Again, you have a field of red flowers and another of white flowers, and they are separated at first by a mountain range, and then you erase it. In the first generation the proportions of red and white will be different; you may have more red and less white, yeah (these are their colors, not their positions). But after the second generation you have the same proportions: it stabilizes at the first stage. And this is very much against the intuition of people who believe in selection. They would say it shifted toward red because red was advantageous, and then it must keep moving there. It will not happen, right? People were always arguing about that and did not understand what was happening. So let me show you the formula in a second.
Yeah, so this is the formula. And it corresponds exactly to a certain map whose square is equal to the map itself. That is the important point, and this is actually the formula. It was incomprehensible when spoken in purely biological terms; and yet it is a trivial formula, in a way. But if you say it in words, it sounds difficult to absorb. And this is what Hardy did: he just proved this formula. As I said, he stated it somewhat differently, and I personally find what he wrote hard to follow; you can just go and read it. The point, of course, is not proving this or that formula; the point is that he realized that this biological discussion translates into very simple arithmetic.
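The slide formula itself is not in the transcript; as a hedged reconstruction from the description above, with $p_{ij}$ the frequency of the allele pair $(a_i, b_j)$, the next-generation map and its stabilization read:

```latex
% Reconstructed next-generation map under random mating (not verbatim from the slide):
r_i = \sum_j p_{ij}, \qquad c_j = \sum_i p_{ij}, \qquad s = \sum_{i,j} p_{ij},
\qquad (Mp)_{ij} = \frac{r_i \, c_j}{s}.
% Hardy-Weinberg stabilization: Mp has the same marginals r and c, hence
(M \circ M)(p) = M(p).
```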
But then there is a next level of this arithmetic, which he overlooked, and which I now want to explain. This is amusing in a way; it is extremely instructive to think how this kind of biology suggests what happens, and it generates mathematical objects which are quite simple but still not quite obvious. So let me remind you what it is. You have linear spaces, spaces of distributions, and they carry this kind of functional, a non-trivial linear function: when these are distributions of some entities, it is just their summation, yeah? You can normalize it to one, and if the entries are positive, it is a probability distribution. So all you have to know: this is just a vector space with a distinguished linear function. Now what, in these abstract terms, will be this random mating, yeah? So this is just the general notation. When you have the diploid case,
you have A and B, two spaces with this kind of functional, and you take the tensor product, and on this tensor product we write the self-mapping, right? So you have a mixture. The point of Mendel: when you have alleles a and b, you write them in this form, this kind of symbolic writing. A and B are just abstract symbols; however you think of them, there are independent variables, and you write them together, and these become quadratic polynomials, and they behave like quadratic polynomials. If you interpret the numbers as probabilities, this quadratic behavior is rather remarkable: these formal manipulations, which is what Mendel was doing, also have a statistical meaning, and you can check them by experiment, statistically. On the one hand statistics, on the other hand formal manipulation. So what will be this self-mapping, right?
In naive terms, which I find quite transparent but mathematically unsuitable, you think about this tensor product as a space of matrices, a matrix being a tensor product of a column by a row. You take any entry; you take the column and the row through it; you sum the elements in the column, you sum those in the row, and you multiply the two, yeah? So you take the column coming through some entry, some (i,j), then you take the row coming through (i,j); again, I hate this notation, yeah. Take the sum over this column, take the sum over this row, and multiply them. So: sum, sum, multiply. So you have a map of the space of matrices into itself. And the point is that the square of this map is equal to the map itself, yeah? It is the next-generation map, and it is its own square, and this is what happens there. Another point: the moment you give this description, you have a pretty high symmetry in the picture, which was not there before. At most you had the symmetry of permutations of your features, such as, for a diploid organism, the permutation of two elements. But immediately what enters here is the linear group preserving this co-vector. In fact, in a second you shall see it is a bigger group, the full linear group, operating there. So there is high symmetry, and this symmetry explains this and some other things as well.
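Here is a minimal numerical sketch of the "sum the column, sum the row, multiply" map, assuming the reading above; it checks that, once normalized, the square of the map equals the map itself:

```python
import numpy as np

def next_generation(P):
    """Random-mating map: entry (i, j) -> (row sum i) * (column sum j), normalized."""
    P = P / P.sum()                  # normalize to a probability distribution
    r = P.sum(axis=1)                # marginal over the first allele space A
    c = P.sum(axis=0)                # marginal over the second allele space B
    return np.outer(r, c)            # product of marginals, already sums to 1

rng = np.random.default_rng(0)
P = rng.random((3, 4))               # arbitrary nonnegative "generation zero"
P1 = next_generation(P)
P2 = next_generation(P1)
print(np.allclose(P1, P2))           # True: the map is idempotent (Hardy-Weinberg)
```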
So what comes next? This is the formula in these terms; again, I describe it in algebraic terms, and it is extremely easy to see. If you come back to the distribution picture: you have the distribution of gene pairs, and the corresponding distribution of the alleles will be a plus b. So it is a kind of bizarre map from the polynomial point of view: you take a polynomial and you replace each product by a sum; and then you square it. And then the separation into these two steps becomes important. This is an amusing thing, because, except for a scalar factor of course, it is not a polynomial map but a rational map: all the time you have to normalize by the total sum. And it is rather amazing when you have a polynomial map which, composed with itself, is equal to itself times a constant; if you divide by this constant, it becomes its own square. Typically you expect the degrees of polynomials to grow when you compose them, and that doesn't happen here. At the next level we shall see that this is exactly the phenomenon which brings in new polynomial groups, exactly of this kind: groups of polynomial transformations of a certain space where the degree does not grow.
And this is exactly where the important thing comes in. The phenomenon is already all there in this Mendelian description of genes. OK, so this is just a simple formula, simple algebra, which is how things are done. This is more or less as written in Mendel's paper, almost in his notation. Again, these formulas look a little strange; but if you turn back to the tensor product picture, it is extremely simple. So what is this mapping in these terms? When you have a tensor product of spaces with such a functional, you can define projections to A and to B. The fundamental thing about tensor products is that in general you cannot project them
to their factors, yeah? This is what makes quantum mechanics so tricky, yeah? You cannot see part of a system; things are entangled. If you mix two things by a tensor product, not a direct sum, then you cannot separate them; you cannot project back. However, if you have this functional, you can do it easily, right? For each monomial a times b, you just send it to a times the value of the functional on b, and this gives you a linear projection to A, and a similar linear projection to B, right? So whenever you have this little extra structure, this bracket functional (in a Hilbert space you have the bracket between two vectors, and this is a similar kind of operation), you can do it, and then you can multiply the two. Whenever you have this projection to A and this projection to B, you can take them back: you just multiply them, take their tensor product. And it is obvious what the square will be: when you square the map, you just multiply by the values of the functional. And if things are normalized, as for a probability space, these values are ones.
So the square becomes the map itself, right? So it is extremely simple. And this is the Hardy-Weinberg principle. Just an aside: I was making an estimate in Google of the hits for Hardy against Hardy-Weinberg, and what I did yesterday gave me the following. Actually, I remember doing it before, when I was writing an article five years ago, and this ratio was not one over eight but one over thirty, and I don't know why. And of course Google doesn't give you a fair estimate. Now, Hardy was extremely disrespectful of applied mathematics. And this he just wrote down, you know, playing golf with his friends, in a matter of seconds. And it was the most significant contribution he made to science: in science, nobody cares about anything else he has done, except for this kind of computation,
which is, in a way, not really relevant to true genetics, but it still gives you a frame for thinking, and it is always being referred to. Because the problem is that in a biological system this never really happens, yeah? You never have this pure mixing, this random mating, because there are extra factors. One, of course, is selection; there is a superposition of this picture with the environment: the environment kills somebody, and it is not that it kills the less advantaged rather than the more advantaged; somebody just gets selected, it just happens. That is, by the way, another misconception about selection: things are not always selected because they are better; sometimes it just so happened, yeah? The most common advantage is being lucky. But still, even then, for example, if you have flowers of different kinds and you prefer particular flowers and always choose seeds of flowers with a certain color, the map which you get will still be bilinear. So these maps on these spaces will still be quadratic, and you have a quadratic map on a product of simplices. And this is a kind of tricky dynamics: they never stabilize at the first step, but they stabilize asymptotically. They often converge to equilibrium points, and these equilibria are usually vertices of the simplex. There is significant mathematics behind it.
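As an illustration (not from the lecture; the fitness weights below are invented), here is a sketch of such a quadratic map on the simplex: random mating with a viability bias, iterated until it approaches a vertex:

```python
import numpy as np

# Allele frequencies x on the simplex; W[i, j] = viability of the pair (i, j).
# The one-generation map x -> x * (W x) / (x . W x) is quadratic before
# normalization; the weights here are arbitrary illustrative numbers.
W = np.array([[1.0, 0.9, 0.8],
              [0.9, 0.7, 0.6],
              [0.8, 0.6, 0.5]])

x = np.array([0.2, 0.5, 0.3])
for t in range(60):
    wx = W @ x
    x = x * wx / (x @ wx)        # normalize back onto the simplex

print(np.round(x, 4))            # close to the vertex (1, 0, 0) for these weights
```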
Another factor, of course, is that spatial geometry is involved, right? Your mating is not that random: it depends; if you are far away, you cannot mate. This is, by the way, another amusing point, and for me a mysterious one: the fact that if you have separated populations, they cannot mix. It seems no big deal. However, there is a biologist called Ernst Mayr, who is very, very famous essentially exactly for saying that, with great emphasis, right? This is related to the concept of species. And again, it is very confusing who first made this definition of species, yeah? Apparently it is not a definition; but there is a phenomenon, which was emphasized by Buffon quite a while ago, of course. The phenomenon is that you may have descendants
from two different species, right? A donkey and a horse can actually give you two types of animals: the mule, and I forget what the other one is called. But then it stops: these are not fertile, yeah? They have no grandchildren. And that was remarkable. Buffon realized it, and he suggested that this is what divides different species. It stayed that way for quite a while, and then came Ernst Mayr, who said: well, but even if they live on different continents, they also cannot have children, so you have to correct the definition. And he became very famous, as Ernst Mayr; actually everything Ernst Mayr was saying was on that same level, and biologists just loved it. It is unclear to me what the point is, yeah? Because the phenomenon emphasized by Buffon is extremely non-trivial, and it was understood only with the development of genetics: why does it happen in the second generation and not in the first? What is so special about the number two? This number two has something to do with DNA having two strands, with being diploid. These two show up at the molecular level; it is a reflection of something happening at the molecular level, not just of being on a different continent.
not just being a different continent. Okay, so this is the point. And so, so what we go next. Now, just again about this formulas, which I wrote before, just to see that there is a kind of algebra.
Just look at this. So it happened for diploidal organism, how the thing would behave. And I just, you just can read it, I don't want to repeat it. Yes, so little formulas in it, but in the second, they kind of rather tricky. These formulas are rather tricky, what happens, but in a second, I just want to show how we can see them in a very simple way
along the lines I indicated, and which leads to a class of very interesting dynamical system. This was point. This is what I was mentioning that, that space may interfere and non-trivial point, unlike what Mejia was saying was done by this equation of these people,
that you can write a differential equation, incorporate randomness, a stochastic differential equation, and incorporate space. And you get quite interesting differential equations; mathematically they are quite attractive, but whether they have biological significance is hard to say, yeah. In my experience, even the highly developed, sophisticated mathematics done in this kind of biology usually turns out to be wrong. One case was a person called May, I believe, yeah; probably you know him. He developed a mathematical theory for ecologists; it was very much hailed; he didn't receive the Nobel Prize, but it was close to that. And then everything turned out to be wrong when data were collected more carefully; it was all based on insufficient data. So all these equations are amusing; and actually I am speaking about these equations only because they are amusing, not because they really have biological weight.
But there is another point which I want to bring in: this Mendelian law is similar to the law of mass action in chemistry, and this is again a big mathematical world, usually unvisited by mathematicians, and I want to say a couple of words about it. When you have a chemical reaction, there are several ingredients, and the molecules must come together; the probability of them coming together is the product of the densities of each of them, with some weights (the more of them have to come together, the less likely it is). Which means that you immediately arrive at a class of polynomial equations: differential equations with polynomial right-hand sides. However, these equations, and the way to look at them, are absolutely not the way mathematicians usually speak about differential equations.
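As a concrete sketch of the mass-action recipe (the reaction scheme and rate constants are invented for illustration), "rate equals the product of the densities of what must meet" turns a scheme into polynomial ODEs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mass action for the toy scheme A + B -> C (rate k1), C -> A + B (rate k2):
# each rate is a product of the concentrations of the species that must meet.
k1, k2 = 2.0, 0.5

def rhs(t, y):
    a, b, c = y
    forward = k1 * a * b           # A and B must come together: product of densities
    backward = k2 * c
    return [-forward + backward, -forward + backward, forward - backward]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.8, 0.0], rtol=1e-8)
print(np.round(sol.y[:, -1], 4))   # concentrations settle at a simple equilibrium
```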
First, it should be noted, I think it is correct, somebody told me, I haven't checked myself, that two thirds, if not more, of all articles on differential equations are written by chemists, not by mathematicians: they have a huge number of equations they want to understand. And secondly, their philosophy has nothing to do with the classical theory you know nowadays, the stability of dynamical systems, because one of the major points in chemistry, at least in industrial chemistry, is that even in a simple process like the burning of hydrogen, the number of intermediate products goes into the hundreds. And so one of the phenomena is as follows. What I said
gives you immediately a differential equation. However, if you look deeper into the actual chemistry, you see it is not like that: there are lots and lots of intermediate products. Yet the equation behaves as if they were not there. So there is a remarkable, purely mathematical phenomenon: you have a process with lots and lots of ingredients, but they somehow get erased, and what you keep is a shadow, this naive equation. This equation, by the way, was discovered by Guldberg and Waage in Norway a few years before Mendel, but that paper went completely unnoticed and was rediscovered about 15 years later by somebody else, by Van't Hoff, and all this is also an interesting point. This is of course a major equation in chemistry; as in biology, it is ideal chemistry, not real chemistry, but mathematically it is an extremely interesting class of equations. And there are two points on which these equations are very different from what mathematicians do. One: the equation runs in a high-dimensional space, but this dimension is not just a number. It is a high-dimensional space where the coordinate set is a structured object, because there are this many different chemicals, with different properties, different inputs. And in many cases you can reduce this dynamics
to understanding the combinatorics of this set representing the dimension, right? There is one work which I know here, again done by applied mathematicians; actually I know two, both by applied mathematicians. One of them pertains to what is now called tropical geometry: there is a kind of tropical limit of these equations, because the parameters are spread over widely different scales. So there is a limit of many of these systems of differential equations in which they become combinatorial. This has been done in the model where the equations are linear. And again, it is quite amusing: you look at a system of linear equations with constant coefficients and you believe you know everything about it, it's just linear algebra. However, if you really think about this linear algebra: you have the space of matrices, and what happens to the equation depends on how the matrix degenerates. So it depends on the stratification of matrices by rank, and on the curve along which you approach a stratum. And the way it can happen is a highly, highly non-trivial phenomenon.
There are these limits, but for realistic equations it still has not been done. And this brings me to a point which I partly want to make with this lecture: the dynamical systems motivated, say, by biology or chemistry are infinitely far from what was done by mathematicians for the last 40 years. There is a well-developed theory of differential equations, partly supported by physics, but it completely misses the problems of chemistry and biology; it is oriented in a completely different way. One of the points is that dimension is not a number, never. Dimension is something whose structure you have to understand, and this is fundamental; you cannot change coordinates, the coordinates are not symmetric at all. Different coordinates have different weights, different symmetries, et cetera. Another point is that people love these chaotic equations, hyperbolic systems, et cetera, et cetera, which, however, practically never happen in biological systems. The moment your heart, or your gene networks, start working in the chaotic regime, you are dead, right? The whole point is that, amazingly, despite the fact that hyperbolic systems are in a sense typical, the systems chosen both by chemistry and by biology are the opposite: they usually tend to have simple equilibrium points. And there is a mathematical question here: under what conditions, for what classes of systems, do they behave like this? They are very complicated, very high-dimensional systems, and again, one of the points is that they are all very high-dimensional, right? So why do they behave in such a simple way? And this is an extremely controversial issue, because, for example, the following point is completely unsettled.
Say, in a cell, in the regulation of the functions of the cell, there is a variety of enzymes, and the question is: is it essential that the enzymes have certain parameters very finely tuned, or are they robust? That is unclear, and you can speak with people who hold completely opposite views on it. So that is one point. Secondly, the concept of stability: very simple examples do not fit the notion of stability standard in the classical and the more modern dynamical systems theory. Just think in simple terms and it becomes obvious: we are rather stable with respect to very many perturbations; the temperature can vary, you can eat various kinds of food, and you still function. But one tiny molecule of something poisonous in your blood, and that's it, right? So you are stable only in a particular range, and stability requires specifying stability with respect to what; and there is no mathematics for that. But again, this is an aside. So, this is the instance of Mendel, where things are quite nice. Here I am describing what I told you:
why the square property is important, yeah, what I wrote before. Yes, in tensorial terms: the formula here is a generalization of the formula of Hardy which I wrote before. Here is the formula which Hardy essentially wrote, and he is just saying: well, this is a multiplication-type table. However, if you write this formula properly, it reduces to what I wrote before, to what is written here, yeah? A times B equals C, or something, right? And notice there is no addition in this formula: it is a purely multiplicative formula, right? It is a multiplication table, but with only multiplication, and it is not about numbers; it is just the associativity of the product. And it is unclear to me why Hardy did it the way he did, because you don't have to write any formulas to see this; it is not a computational fact, it just comes from the symmetry. But then it becomes more interesting when you look at polyploid organisms. They are not very common, but mathematically, you see, here things become more amusing, and this brings in the next level
of mathematical description. So how do you think about that? Polyploidy means that now you may have, formally speaking, yeah, many parents: not two, but several different parents, and you borrow your DNA from them. And each of them contributes some number of alleles, which means that you have spaces A_i, and you take the tensorial product, with i running from one to g, right? So your object, the thing you mix now, is not two factors, like a, b, but maybe a, b, c and so on; I cannot use the letter g for them, because g is already the number of these terms, yeah. So that is the tensor product. And I want to think about these tensor products as polynomials in these variables, right? Because you can of course think of tensor products as just polynomials having degree one in each variable,
and I just embed them into the space of all polynomials. This immediately gives you a much more transparent way to think about everything. Now, what was this functional, yeah? It was the summation of entries. For a polynomial, the sum of all coefficients corresponds to evaluating your polynomial at the vector one, one, one, called one. However, polynomials live everywhere, so you can equally well take the point zero, and then this may be understood as just the free term of the polynomial. So yes: this functional means, take the value of the polynomial at zero. And immediately all the formulas disappear. All these formulas in genetics immediately disappear, because for the free term you don't have to make any summation; the space of polynomials is invariant under translations of the base space. So this is what you have.
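In concrete terms, a small sympy check (my own illustration, not from the lecture): the tensor entries become coefficients of a multilinear polynomial, and the distinguished functional is evaluation at the all-ones point, or, after a translation, the free term:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")

# A 2x2 tensor written as a polynomial of degree one in each variable.
P = sp.Rational(1, 10)*x1*y1 + sp.Rational(2, 10)*x1*y2 \
    + sp.Rational(3, 10)*x2*y1 + sp.Rational(4, 10)*x2*y2

# The distinguished functional: sum of all entries = value at the all-ones point.
print(P.subs({x1: 1, x2: 1, y1: 1, y2: 1}))      # 1

# After translating by one, the same number is the free term (value at zero).
Q = sp.expand(P.subs({v: v + 1 for v in (x1, x2, y1, y2)}, simultaneous=True))
print(Q.subs({x1: 0, x2: 0, y1: 0, y2: 0}))      # 1
```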
Now, how do we describe all these maps? It becomes quite a nice description in terms of polynomials. So I want to describe the following: I have a space of polynomials, in variables divided into g groups, corresponding to the different loci, all right. A priori the polynomials have degree one in each variable, but we don't have to bother about this anymore; we don't even have to say it, they are just polynomials. How can we map the space of polynomials into itself in this kind of style, yeah? So the maps are now as follows.
Let me describe this class of maps in general terms. I have this projection, remember: for A tensor B, I project to A and to B, provided of course I have this functional. So what does it mean in terms of polynomials? To each polynomial in many variables I want to assign a polynomial in fewer variables. Since the variables split, I just restrict my polynomial to a coordinate subspace; and having restricted it, I pull it back: I extend it to the whole space, constant in the complementary directions. So I have an endomorphism of the space of polynomials induced by the projection to the coordinate subspace, right? And then what I do with these projections: I just multiply them, and these are my maps. So I have endomorphisms of the ring of polynomials, each corresponding to a projection to a coordinate subspace, and then I multiply several of them. The moment I multiply them, they are not ring endomorphisms anymore; they are multiplicative endomorphisms, so to speak, right? In general such maps may be rather complicated, but we consider projections to mutually independent subspaces and multiply those. And to them this principle applies: the square of such a map equals the map itself times a scalar factor. Let me explain where the scalar factor comes from. So what will be the formula? You have these subspaces, and I just use the notation which you have here; maybe I already have it here, so I don't have to repeat the formula. Yeah, here is written what I was saying,
and now there is this formula. I have a collection of these coordinate subspaces, and I project; then another collection, and I project again. When I compose them, I consider all the pairwise intersections and multiply over them. It is a very simple formula. One thing is essential here, which I suppressed but which has to be said: even for the empty subspace the operation makes sense; it is just taking the free term, the constant term, of the polynomial, right? So empty coordinate sets, zero-dimensional ones, still give you something: they give you the free term of your polynomial, as when you evaluate a polynomial at zero and only the free term remains. So we have one collection given by the sets K_i, another given by the K'_j; what you do is take all their intersections and form the product, and a constant factor appears exactly when some intersections are empty. So there is this formula. Again, it is simple algebra, but it has this quasi-biological meaning: when you compose such a map with itself, you come up with the same map, but with a coefficient, which is the free, constant term raised to a degree counting how many of the intersections are empty, right? And this is the general Hardy-Weinberg formula. Actually, this is where I started: I was trying to read this in Wikipedia, and I could not recognize this formula there; there is just a huge formula describing it. And it is so huge because you evaluate the polynomial at the wrong point, and then all these binomial coefficients come in, a tremendous mess. But in this language, this is all that happens.
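Here is a small sympy sketch, my own reconstruction of the construction just described: restrict a polynomial to two disjoint coordinate subspaces, multiply the pullbacks, and check that the square of this map is the map itself times a power of the free term:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")
ALL = (x1, x2, y1, y2)
X, Y = (x1, x2), (y1, y2)

def restrict(P, keep):
    """Restrict P to a coordinate subspace: set all other variables to zero."""
    return P.subs({v: 0 for v in ALL if v not in keep})

def M(P):
    """Multiplicative endomorphism: product of the two pulled-back restrictions."""
    return sp.expand(restrict(P, X) * restrict(P, Y))

P = 2 + x1 + y2 + 3*x2*y1        # any polynomial; free term c = 2
c = restrict(P, ())              # the empty coordinate set gives the free term

# Composing M with itself returns M times c to the number of empty pairwise
# intersections of {X, Y} with itself (here X&Y and Y&X are empty, so c**2).
print(sp.expand(M(M(P)) - c**2 * M(P)) == 0)   # True
```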
Okay, now, these are not yet the real maps, but yes, let me mention another interesting feature of these kinds of maps. The point here is that you start with some linear spaces and simple maps, just linear projections on them, and out of them you construct maps on spaces of polynomials. And these maps have the remarkable feature that the degree doesn't grow; only the constant factor grows when you compose them. So they behave like transformations, endomorphisms, of a fixed finite-dimensional space of polynomials, right? This is an interesting feature of them. And then there is a next level. Another point is that these maps are non-linear maps.
They are rather complicated maps, if you think about them. As I was mentioning, just to be respectful of these maps, consider the simplest instance: you go from the space of linear forms to quadratic forms by taking x to x squared, right? A very innocuous operation: you take a linear form, the sum of c_i x_i, you just square it, and you get a quadratic polynomial. And this is the kind of thing you do here, yeah? But again, just to be respectful of this map, think about what it looks like. It has amazingly non-trivial geometry. If you apply it to the unit sphere in R^n, it goes to the unit sphere in the space of quadratic forms, up to a scaling constant, maybe one over square root of two, and it identifies opposite points, so you have a projective space embedded there. And this is the simplest instance of a very nice variety: a projective space lying in the sphere in an extremely symmetric, well-balanced way.
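A quick numerical check of this picture (my own illustration): the squaring map sends the coefficient vector x of a linear form to the quadratic form with Gram matrix x times x-transpose; antipodal points collapse, and the image of the unit sphere again lies on a sphere:

```python
import numpy as np

rng = np.random.default_rng(1)

def veronese(x):
    """Square the linear form <x, .>: its Gram matrix, flattened to a vector."""
    return np.outer(x, x).ravel()

x = rng.normal(size=5)
x /= np.linalg.norm(x)             # point on the unit sphere in R^5

v, v_minus = veronese(x), veronese(-x)
print(np.allclose(v, v_minus))     # True: x and -x have the same image
print(np.linalg.norm(v))           # 1.0: the image lies on a sphere (|x|^2 = 1)
```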
As I said before, it probably gives counterexamples to many conjectures you can imagine about subsets of the sphere; it has very unexpected properties. As I mentioned before, for example, it immediately gives counterexamples to some sorts of conjectures about partitions of sets in the sphere. People who make such conjectures have just never looked at this set, and it is one of the basic sets. Another feature: if you take its convex hull, especially if you do it in a canonical way, applying the map not only to the sphere, the convex hull you get is the cone of positive semidefinite forms. So you have the positive semidefinite forms, and the extremal points are exactly the image of this map. And this is, of course, the most significant cone in mathematics, yeah? The positive definite forms. The usual probability cone is just the positive coordinate orthant; but this is another cone, having maximal symmetry, where the permutation group is replaced by the orthogonal group. And this cone is fundamental for quantum mechanics: the cone of positive definite self-adjoint operators. Its geometry is extreme; just think about it: you have a convex set of dimension about n squared over two, and the extremal points make this perfectly symmetric thing.
So this variety, with all these segments, is by no means simple. But the genetics goes to the next level, no? And it is amusing what it suggests. Yeah, by the way, here I wrote something. So we have these maps; again, I repeat, they are very simple maps. You take a polynomial, you restrict it to a certain subspace and extend it as a constant in the perpendicular directions; in other words, you take the normal projections to these linear subspaces and the induced transformations of polynomials. Then you multiply several of them, with the subspaces disjoint, right? And I call them equilibrating maps. In the second part of my lecture, when I come to entropy, I explain in what sense they are equilibrating.
And then I wrote something, actually just when I was preparing this lecture. Today I had some problem with my timing, and I just couldn't figure it out. I wrote "obviously" about it, fine, but now I must say I don't quite see it. It must be obvious, but I couldn't reconstruct it while coming here, because there was some problem and my time was not so smooth, so I shall have to explain it later. I mean, I wrote it was obvious, and I remember it was quite obvious when I was writing it. But this, you know, is the usual situation: you write that a logical thing is obvious, and then it is not obvious. It is kind of linear algebra, simple algebra. But now when I look at the example, it doesn't look quite right to me, so I shall leave it at that. For the moment we leave it as it is.
But what is good about these maps: all these maps which I described commute with the full linear group operating on each of the components. So my linear space, my coordinates, divide into groups; we have a linear group acting on each of them, and the whole picture is invariant under the action of all these groups, which, by the way, is bigger than the original symmetry group. So the picture is extremely symmetric. In particular, you can scale polynomials: you can multiply each variable by a constant, and this brings things from afar and localizes them, which shows that these maps are actually linearizable. Of course, there is also another kind of linearization, which I mention now. So these maps all have very simple dynamics.
Another reason for that is as follows. These maps preserve degrees of polynomials: a polynomial whose degree in each variable is less than something keeps this property. For example, if you take linear forms and multiply them, the total degree grows, but the degree with respect to each variable doesn't grow. So you can think of them as transformations of the space of truncated polynomials, which is a ring, and in this ring there is an exponential map, and the exponential map is almost onto: any polynomial with positive free term, positive constant term, admits a logarithm. This again shows the linearization; and this is, by the way, present in much of the work on this kind of dynamics, where people rewrite and iterate such formulas. If you write the explicit formula for the exponential, it will certainly be an extremely complicated expression. But it is again quite remarkable that this highly non-linear transformation is nevertheless linearizable, if only via the exponential map. And there is another reason for linearization, somewhat independent of this and more general: these maps commute with a very large group, and the essential part of that group is just the scaling transformations. Because the maps are invariant with respect to scaling, any global phenomenon can be brought to one point; at that point everything is determined by the differential, and then you can go back to the large scale, right? So these maps behave on the large scale the same way as on the small scale, which means they are conjugate to linear maps. And that is a consequence of the scaling invariance.
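A small sympy sketch of the logarithm in the truncated ring (my own illustration): modulo squares of the variables the non-constant part is nilpotent, so the log series is a finite sum and the exponential inverts it exactly:

```python
import sympy as sp

x, y = sp.symbols("x y")

def truncate(P):
    """Reduce modulo (x**2, y**2): keep degree <= 1 in each variable."""
    poly = sp.Poly(sp.expand(P), x, y)
    return sp.expand(sum(c * x**m * y**n
                         for (m, n), c in zip(poly.monoms(), poly.coeffs())
                         if m <= 1 and n <= 1))

P = 2 + x + 3*y + x*y        # truncated polynomial with positive free term c
c = sp.Integer(2)
N = P - c                    # nilpotent part: truncate(N**3) == 0 here

# log(P) = log(c) + N/c - N^2/(2 c^2): a finite series because N is nilpotent.
M = truncate(N/c - N**2/(2*c**2))      # the nilpotent part of log(P)

# exp back: exp(log(c) + M) = c * (1 + M + M^2/2), again a finite series.
P_back = truncate(c * (1 + M + M**2/2))
print(sp.expand(P_back - P) == 0)      # True: exp and log invert each other
```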
Now I want to state a kind of essential theorem, which is again much more motivated by genetics. This will be the last step of the formal genetics; beyond it the formalism does not go, and from there one can move on. So far I was speaking about one locus. Genes sit at certain loci; each gene may have many copies, but they correspond to one locus, and everything I was saying concerned a single locus, as if the rest were not there. However, in reality you have a genome, with many, many parts, and there is a phenomenon called recombination: you get a mixture of these features here and there. So imagine you have two genes; something happens to one, and the same happens to the second. And again, assume things happen as independently as they can. Now independence, as I said before,
is again a very tricky assumption. Probability theory cannot exist without it, right? All of probability theory depends on something being independent, or nearly independent, or the like; otherwise nothing can be said. And here, if you think about what this means, it means exactly that there is a high symmetry acting on the system. That is why you can accept such an assumption, yeah? Because symmetry applies to objects prior to any kind of probability. In statistical mechanics this is especially clear, right? If you think about classical statistical mechanics, you see that symmetry is much more relevant than probability per se. You have a system of something like 10 to the 25 particles, yeah? Which is more or less the number of particles of air in this volume; of course it is slightly less, probably 10 to the 24. Then imagine each particle may be in only two states. So you have that many particles, and each particular state of the whole system has a probability of something like 2 to the minus 10 to the 25. So, you know, this is what it is, I mean.
Physically, of course, this makes no sense, yeah? These numbers... the smallest scale we can reach is, I think, 10 to the minus 46, yeah, conjecturally, the Planck scale; even that, of course, is unattainable, and beyond that it just makes no sense. Yet exactly such numbers are what we operate with, and probability theory appeals to them. However, the point is, you don't have to do that. You can accept this just because you have a system of fully symmetric particles: all particles are the same, and you can assume the permutation group acts. Then you can say: aha, this is a number. By itself it has no meaning, but the point is that the equality of two such numbers, right, this equality makes sense despite the fact that the two numbers separately don't make sense. You can say that two events have equal probabilities even though these numbers don't exist. And that is the point which is kind of underappreciated. So probability theory is very much a kind of representation theory. And actually this is becoming more and more apparent these days, yeah: new branches appear where the probability disappears, and just representation theory and symmetry enter. And the same, of course, applies to many other systems.
So this is a huge symmetry, and this is what is behind genetics. In the case of our genome, this hidden symmetry, when you have long genomes with crossing-over at the loci, is a big group which operates there, a very familiar group, the same one as for throwing a coin: the group Z divided by 2Z, raised to a very high power. So you have, again, an extremely small probability for any single event, but this symmetry acts there, assuming everything is independent.
You may ask what happens with evolution, so to speak, because here nothing changes unless there is a reorganization of the genetic material. The point is that in all these pictures there is no change in the genetic material, no individual gene changes; the content remains the same, right? There are basic units which are being inherited, and their relative proportions do not change. So nothing happens. What you see as visible change is phenotypes running along some process, but the fundamental genetic material in the population does not change. And exactly this was the principle of Mendel which was in such disagreement with Darwin. In fact it does change, because there are mutations, but that is a secondary effect, right? All this evolution happens on a much slower scale and is essentially invisible in nature; you see it very, very poorly. What you do see is this mixing, the rearrangement of alleles in populations, which for random mating is constant, right? Just nothing happens. However, when there are many loci,
it doesn't stabilize on the first step. However, it is exponentially converges to equilibrium. And this is this theorem of this guy which I mentioned. And this follows from the fact,
yeah, Robin's getting convergence property. It says that if you have this, now any kind of population and they exchange, they have random mating, random recombination, the exponentially fast conversion equilibrium situation where equilibrium means that your polynomial is product of polynomials with respect to each variable.
So we have, remember, this variable is divided into these blocks. And so the distribution of polynomials is product of polynomials. And this is where we come from. And the formula for that, the maps which are involved are just which one describe. They are not this original multiplicative endomorphisms,
but they are convex combinations. But however, because each of them are invariant under this big linear group, this, they behave as if as linear maps. So essentially, this convex combination of them behave like linear maps. So there is one attractive fixed point. And then immediately you may ask, what are kind of, so the formalism coming
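To make this concrete, here is a minimal sketch, entirely my own illustration and not from the lecture, of the simplest two-locus case; there, convergence to equilibrium is the classical exponential decay of linkage disequilibrium. The recombination fraction r = 0.3 and the initial table are made-up values.

```python
import numpy as np

# Two-locus recombination under random mating. p is a 2x2 table of
# gamete frequencies p(a, b); with probability r the pair of loci is
# resampled independently from the marginals.

def step(p, r=0.3):
    pA = p.sum(axis=1)            # marginal at locus A
    pB = p.sum(axis=0)            # marginal at locus B
    return (1 - r) * p + r * np.outer(pA, pB)

p = np.array([[0.4, 0.1],
              [0.1, 0.4]])        # initial gamete frequencies, total 1

for t in range(10):
    D = p[0, 0] * p[1, 1] - p[0, 1] * p[1, 0]   # linkage disequilibrium
    print(f"t={t}  D={D:.6f}")
    p = step(p)

# D shrinks by the exact factor (1 - r) each generation: exponential
# convergence to the equilibrium where p is the product of its marginals.
```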
The formalism you arrive at mathematically is the following class of dynamics. You have a commutative algebra, in this case an algebra of polynomials, or it may be an infinite-dimensional algebra, and you have some endomorphisms of this algebra, call them alpha_i. Because the algebra is commutative, each endomorphism corresponds to a self-map of the space of maximal ideals, so you may think of the elements as functions on a certain space, or on some quotient of a space of functions, and of the alpha_i as simple transformations of that space; they must be rather simple, otherwise nothing works. Then you consider products of some of them, and then you take convex combinations: you multiply some of them, and then you take a convex combination of the results.
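Written out, in notation I am imposing on the transcript rather than the lecturer's own, the class of maps is

\[
T \;=\; \sum_{j} c_j \prod_{i \in S_j} \alpha_i, \qquad c_j \ge 0, \quad \sum_j c_j = 1,
\]

where the \(\alpha_i\) are endomorphisms of the commutative algebra and the \(S_j\) are finite sets of indices.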
From this you often produce a simple, comprehensible, but still non-trivial dynamics. That is the upshot from a mathematical point of view: this is the kind of dynamics you want to understand. You start from this random breeding, arrive at this class of dynamical systems, and you want to understand them and ask whether there are other examples. One example is the one I described. The second example is even more classical; let me see if I have it written here, the one everybody knows.
Yes, it is here. What I was skipping is just a different variation of the theorem. Look at the algebra of L^1 functions on Euclidean space with respect to convolution, and consider the transformation which convolves a function with itself and rescales. Then the fundamental theorem, the central limit theorem, says that this map has a unique attractive fixed point, which is the Gaussian distribution. So the normal law is of exactly the same nature as the Mendelian stabilization of populations: you have an algebra and an endomorphism of exactly the same nature. It then immediately becomes clear that there may be many, many such normal laws, corresponding to different endomorphisms of this algebra. But I have not looked carefully at the literature, and this language I certainly could not extract from the probabilistic literature.
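Here is a minimal sketch of this fixed-point picture (my illustration; the transcript does not write the map out, so I take the standard normalization T(f)(x) = sqrt(2) (f*f)(sqrt(2) x), which preserves mean 0 and variance 1):

```python
import numpy as np

# Iterate the convolution-square-and-rescale map on a density sampled
# on a grid; the attractive fixed point is the standard Gaussian.

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

# start from the uniform density on (-sqrt(3), sqrt(3)): mean 0, variance 1
f = np.where(np.abs(x) < np.sqrt(3), 1 / (2 * np.sqrt(3)), 0.0)

for _ in range(6):
    conv = np.convolve(f, f, mode="same") * dx            # (f * f)(x)
    f = np.sqrt(2) * np.interp(np.sqrt(2) * x, x, conv)   # rescale to var 1

gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print("L1 distance to Gaussian:", np.sum(np.abs(f - gauss)) * dx)
```

Six iterations correspond to averaging 64 independent copies, and the printed L1 distance is already small.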
With this I want to finish what I was saying about Mendelian dynamics. I will say a few words about Sturtevant, then switch to entropy, and return to Sturtevant at the end of my lectures. All this is more or less mathematics we know, and there are many suggestions here which have not been pursued, but they still stay within traditional mathematics. But then there was a next, very simple step: biologically relatively simple, simple in its description though not in how it was achieved, quite ingenious, and it potentially brings a very different kind of arithmetic, one that has not been touched at all by mathematicians. I mentioned it last time, and I want to repeat it. It was done by Sturtevant, who worked in the lab of Morgan.
The same kind of logic, a similar Mendelian type of logic, allows you to reconstruct the geometry of the genome, before any molecular knowledge of genes, just from knowing that there are these units behind the pure phenotypes. If you breed for any particular feature, you eventually know it is a definite phenotype which may appear in certain forms, and you know it results from the decomposition of several units, usually two: the organism is diploid and there are only two alleles. So there are these hidden units of inheritance, which are not phenotypes; that is the whole point, they are hidden. But with this idea in mind one can say that the genes, of which the features are combinations, are organized geometrically on a line. Here we have this abstract polynomial picture, and the question is where this one-dimensional geometry comes from. Sturtevant's idea was extremely simple, and with it he carried out the reconstruction; I think for him it was quite clear that it must be linear. What he actually did was the following.
He positioned about a dozen genes on one of the chromosomes: which was located where. And what input did he have? The input was the collection of Drosophila flies in the lab of Morgan. They had bred flies with very pure traits, and then, by interbreeding them and looking at the statistics of appearance of different features (and because the lines were purely bred, they could tell how different genes were recombining), on the basis of this statistical data one could say that these features, representing genes, were actually linearly organized: there was a linear geometry there. At first this looks completely incredible. But it is very similar, in my view, to what Poincaré was suggesting: our brain reconstructs spatial geometry out of images.
The problem is this: you have your eyes, you move your head, and the image moves. This image and that image have nothing in common for the brain; they arrive at different neurons, in different places of the brain. In what sense, then, do you know they are the same image? Poincaré considered this and suggested a solution, an idea of a solution of course, which is, I think, very much in the spirit of what we know today from neuroscience. Of course, neuroscientists do not know about Poincaré, and Sturtevant did not know about Poincaré either. But mathematically it is a very similar phenomenon. In Sturtevant's case the idea is much easier to state, and it is as follows.
You can recognize genes by features: genes are just something representing certain phenotypes. And then you observe that certain phenotypes often go together; in humans, say, you might find black hair together with long hands more often than not. For flies these are basic features: the color of the eyes, the color of the body, the shape of the wings; they developed a whole language to describe these features precisely. They observed that some of these features go together more often than others. And then you ask: how does that happen? Recombination occurs when you cut the two chromosomes and switch one piece for another, and features which are close together are switched relatively rarely, while those which are far apart may be switched completely independently. In fact, Mendel's law of independence corresponds to features positioned on different chromosomes: on different chromosomes they are independent. And if features are highly dependent, usually positively correlated, then they are close. So from this matrix of statistics, knowing the correlation of appearance of the features corresponding to different genes, you can interpret correlation as distance: apply a suitable function so that it becomes a distance. And then you say: aha, it happens to be one-dimensional. Of course, it was not exactly like that; as I said before, it was the order relation rather than the metric. But that is the logic of it. And in the same way, again oversimplifying, one can speak about what Poincaré did.
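As a toy version of this (entirely mine; the gene letters echo Sturtevant's 1913 map of the X chromosome, but the positions are illustrative and the Haldane map function is an assumption used to keep the arithmetic honest):

```python
import numpy as np

# Given pairwise recombination fractions r between genes on one
# chromosome, invert a map function (Haldane's, which assumes
# independent crossovers) to additive distances, then order the genes.

true_pos = {"y": 0.0, "w": 1.5, "v": 33.0, "m": 36.1, "r": 54.5}  # cM, made up
genes = list(true_pos)

def recomb(d_cm):
    # Haldane map function: r = (1 - exp(-2d)) / 2, with d in Morgans
    return 0.5 * (1 - np.exp(-2 * d_cm / 100))

def map_dist(r):
    return -50 * np.log(1 - 2 * r)        # inverse, back in cM

# "observed" pairwise recombination fractions (noise-free for the sketch)
r_ab = {(a, b): recomb(abs(true_pos[a] - true_pos[b]))
        for a in genes for b in genes if a != b}

# place every gene by its map distance from the reference gene "y",
# then sort: the pairwise statistics alone recover the linear order
est = {g: (0.0 if g == "y" else map_dist(r_ab[("y", g)])) for g in genes}
print(sorted(est, key=est.get))           # -> ['y', 'w', 'v', 'm', 'r']
```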
So let me repeat this mathematical question, which is not very well posed, but goes as follows. How do you construct geometry? You have a set, and it carries a geometric structure. Here it is the set of genes; or it may be a screen of pixels, where the geometric structure is the one superimposed by images. The screen shows you three-space, or two space dimensions, with some symmetry. How is this reflected in the set? The point is that you have a measure on the set of its subsets. This is more transparent in the case of the screen: you look at picture after picture after picture, and in each one something is white and something is black, so you have a measure on the set of subsets, or on partitions into two sets. Of course you cannot truly observe this measure, the space is huge, but you have samples, what you believe are representative samples. And from these you want to conclude that the space has a certain geometry. Now, how can you do that?
Let me do the same for images. The way to imagine it: you have your pixels, and they are enumerated in some idiotic way; you do not know how they are enumerated, and how would your brain enumerate them, if it does so at all? Of course it does not, and the receptive cells in your retina certainly do not. So this is just a set with no structure. And from time to time you see images, some pixels black and some white, and this systematically repeats. Can you say: aha, this came from a world with orthogonal symmetry? Or maybe some other symmetry? So you have an abstract set with no structure, but you have many instances of subsets of it. You have a million, a trillion copies of the same set; it is the same set, and it is very important that you can identify its elements; and you see light or color on different subsets. How can you say that this came from our world and not from some universe with a geometry very foreign to us? Here, of course, it is the same story: you have particular manifestations, you have organisms in which only particular features are materialized, you see what is materialized, and you want to say what the geometry of the background space was. And the logic, again, is extremely simple.
Two genes which are close on the genome have a tendency to appear together often, because the recombination cuts happen with a certain frequency, and the closer two genes are, the smaller the probability that a cut will separate them. And the same is true of real images; this is where all the structure starts, and it is of course what you have to use in the real world. If you have an image, then at a point where you see brown, a nearby point will also be brown rather than red with very high probability, much higher than at random. This effect may seem weak: you pass a boundary and the color changes, so on a very small spot, a very small domain, the gain looks tiny. However, because it happens systematically, it does not matter how little you gain in probability: instead of one half it may be 0.4999, or something like that. That is a tiny gain, but it becomes an enormous difference when repeated many times, because the repetition goes into an exponent: whether you start with this number or with that number makes very little difference, as both become infinity or zero when you iterate. And because of that you can say: the distance between two points is determined by their mutual correlation, by whether they often carry the same color or not; you take a suitable function of it, and that is what makes the difference. If you do this systematically, you observe that this function of two variables has the symmetry of the orthogonal group. It may not be a distance, it may be some function of the distance, but it has the same symmetry. For Poincaré this part was more or less obvious; the real issue was whether you can devise a simple, realistic algorithm which the brain could follow in extracting this pattern.
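Here is a minimal sketch of this reconstruction problem (entirely my illustration: the interval model of "images" and the spectral-seriation step are assumptions, not anything stated in the lecture):

```python
import numpy as np

# Recovering hidden 1-D geometry from co-occurrence statistics alone.
# Pixels arrive in scrambled order; each "image" blackens a random
# interval of the line, so nearby pixels co-occur more often.

rng = np.random.default_rng(0)
n, n_images = 40, 50000
perm = rng.permutation(n)                  # hidden relabeling of pixels
pos = np.arange(n)[perm]                   # true position of scrambled pixel i

centers = rng.uniform(0, n, n_images)
widths = rng.uniform(3, 25, n_images)
images = np.abs(pos[None, :] - centers[:, None]) < widths[:, None]

sim = np.corrcoef(images.T.astype(float))  # pairwise co-occurrence structure
sim = sim - sim.min()                      # shift to nonnegative weights
lap = np.diag(sim.sum(axis=1)) - sim       # graph Laplacian of the similarity
vals, vecs = np.linalg.eigh(lap)
order = np.argsort(vecs[:, 1])             # sort by the Fiedler vector

print(pos[order])   # approximately monotone: the line, up to reversal/noise
```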
I will discuss a little later what can be expected in this direction and what Poincaré was saying about it. This, of course, we do not know; we still do not know, and we can only guess. But unless you understand this mathematically, I do not think you can make any progress neurophysiologically, because it is quite an elaborate process, one you cannot see in a microscope unless you think. OK, now I want to make a little switch and come to another point of what I was saying, and to a different kind of mathematics.
Let me summarize what the Mendelian dynamics brings us to, since some points remained a little unclear and I want to elaborate on them. First, it leads to the following class of dynamical systems. You have a commutative topological algebra and a family of endomorphisms of it. In the genetics case the algebra was an algebra of polynomials: you have a linear space decomposed into a product of subspaces, you have the projections onto these coordinate subspaces, and these give you endomorphisms of the algebra. Then you consider products of some of them (different subsets of indices give different products), which again multiplies the endomorphisms, and then you take convex combinations, as in the formula I wrote before. This is the dynamics you see in this idealized genetics, and it is very simple, I repeat: no selection, no geography, no nothing; it is pure probability theory. You come to this class of dynamics, and the basic theorem, which I explained very roughly, says that you converge to the equilibrium exponentially fast.
Now I want to say what equilibrium is and why we call it equilibrium; so far it was just dynamics, and quite an interesting class of dynamics. It is a class of dynamical systems which comes up on many occasions and is not fully understood. The point would be not just to study this in the general case, but to find conditions on the endomorphisms, especially when the algebra is an algebra of functions, so that they come from homomorphisms, from continuous self-maps of the space of maximal ideals, under which such maps behave in a nice way. A nice way means there is a fixed point which is attractive, or at worst hyperbolic: attractive except possibly in finitely many directions where it may be repelling. These are dynamics which are understandable and feasible, as simple as possible. So this is the opposite of the usual view of dynamics: usually you consider a simple space and look for the most complicated dynamical systems, while here we look at a very complicated space but want the simplest systems on it. And this corresponds more or less to biology and to chemistry: a huge, complicated space, but dynamics on it which is rather simple, rather robust, essentially fixed-point dynamics. How can this be? This is a model for that. So this is one thing.
The other thing about equilibrium relates to the concept of entropy. One can show... let me again look at the simplest example, where it is clearest. Look at this map, which I describe in terms of matrices. Of course you know what a matrix is; though it is amazing, mathematicians say the word "matrix", but what is a matrix? I always ask: what is a matrix, mathematically? You have abstract set theory, and then you say: I write a square table on the blackboard. It is not so obvious what a matrix is, and in different contexts it may be something different. Anyway, we stick with this, and we consider especially the case I want to emphasize: the entries are positive numbers and the total sum is one. Then there are the special matrices of the Veronese variety, the products of rows by columns, that is, the rank-one matrices. What is special about them? If you know probability theory, you say: they are the matrices of maximal entropy; they can be defined that way. So on the one hand they form this remarkable, very symmetric Veronese subvariety, or rather its picture in the projective space of matrices; on the other hand they solve a variational problem. And now I want to repeat, and say a little more, about entropy.
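A minimal check of this variational statement (my illustration): in the 2 by 2 case, the tables with fixed margins form a one-parameter family, and a grid search finds the entropy maximum exactly at the rank-one, product point.

```python
import numpy as np

# Among 2x2 probability tables with margins (p0, 1-p0) and (q0, 1-q0),
# entropy is maximized at the rank-one table with entries p_i * q_j.

p0, q0 = 0.6, 0.3

def entropy(m):
    m = m[m > 0]
    return -np.sum(m * np.log(m))

ts = np.linspace(max(0.0, p0 + q0 - 1) + 1e-6, min(p0, q0) - 1e-6, 10001)
best_t, best_h = None, -np.inf
for t in ts:                  # t = upper-left entry parametrizes the family
    m = np.array([[t, p0 - t], [q0 - t, 1 - p0 - q0 + t]])
    h = entropy(m)
    if h > best_h:
        best_t, best_h = t, h

print(best_t, p0 * q0)        # maximizer coincides with the product p0*q0
```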
This process of random mating is a very physical process: entropy goes up and converges to its maximal value. Maximal in what sense? Maximal among the measures with given projections. A measure here means: you have a square table, you put numbers in the cells, they are positive and sum to one; that is a discrete measure. So you put positive weights in all the squares such that these row sums and these column sums are given; the two projections of the measure are prescribed. Which of these measures has maximal entropy? And let me remind you of what is incorrectly called the definition of entropy: given the weights p_i, it is H = - sum_i p_i log p_i. It is not a definition, I want to say. It is what you find in textbooks, but of course we cannot take it as a definition. Why log? Why not x, why not sine, why not cosine?
As a definition it is absurd. However, accept it temporarily. This formula is usually attributed to Boltzmann, but interestingly it was first written in this form by Planck, and then reiterated many times, especially by Shannon in the discrete context, where it became a fundamental concept. But my understanding is that for Boltzmann it was not a definition at all; it was something proven: entropy was defined differently, and I want to repeat Boltzmann's definition in modern terms, with this formula appearing as an outcome. Now, the fundamental property here is what is called the Shannon inequality: if you consider all measures with given projections, the entropy of the measure is at most the sum of the entropies of the two projections, with equality if and only if the measure is the product of the two, that is, the rank-one matrix. This is not a difficult inequality, but it is significant. And why p_i log p_i? Let me now explain it in what I think are better terms.
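In symbols (this is the standard statement; the transcription is mine):

\[
H(\mu) \;\le\; H(\mu_1) + H(\mu_2), \qquad H(\mu) \;=\; -\sum_{i,j} p_{ij}\,\log p_{ij},
\]

where \(\mu_1, \mu_2\) are the two projections (the row and column sums of the table), with equality if and only if \(p_{ij} = p_i\, q_j\), the rank-one case.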
You can understand this without formulas up to a certain point; beyond that point the formula becomes really significant, but at a rather advanced level. What I want is to reconstruct the naive thinking of Boltzmann; this is a mathematical reading, not a physics lecture. Boltzmann was not, in a way, a physicist; he was considered an applied mathematician. He was extremely mathematically minded, and he made fundamental contributions to physics, but he was essentially a mathematician. What he was doing was not discovering new physical laws: he was discovering how to put them into a new mathematical framework. In modern language, I think what he was thinking (I read his book on gas theory a long time ago, and I certainly do not remember it exactly, but the feeling you get) is as follows. You have these objects, finite probability spaces. They are extremely simple-minded objects: just a bunch of stones with weights, normalized so that the total weight is one. Even this is, for me, an abuse of notation, because you use numbers, and there are no numbers in the object itself; the stones are not ordered in any way, they are just stones. And then you can bring some stones together; better to think of them not as stones but as drops of water: you can merge this one with this slightly bigger one, and together they become that one. Merging produces new probability spaces, and these mergings are morphisms between them. So it is a category.
In a second I will explain how, physically, this is actually what you have: you do not have the numbers, you really have the category. The objects themselves, the spaces, you do not see; what you see are the morphisms. And the point I want to make, I keep repeating it, is this. When you have such an arrow, it is as if you had an inequality: this space is bigger than that one, because all these maps are surjections. But there is a fundamental difference, conceptual and notational: when you speak about entropy, you will have entropy of morphisms, and you cannot write the entropy of an inequality sign; that is just meaningless. Having a morphism and writing a sign are logically different things. It looks like a very trivial difference, but it completely changes the perspective. So I want to think of this as an arrow. The moment I do, I must be a little careful: this is a topological category, because there is topology inside; the weights are real numbers, not abstract ones. (Of course, the construction makes sense with weights from any additive group, even a semigroup; but for me they are real numbers, so I secretly remember the topology.) Then I say: I have a category, and out of it I want to produce something simple. And there is a recipe for that: take the Grothendieck group of this category, without thinking. A priori it will not be a group (the group comes at the next level), so take the Grothendieck semigroup. This means: whenever you have morphisms with F composed with G equal to H, you declare F + G = H in the semigroup. As simple as that: you just make everything commutative.
Because the category is topological, you get a topological semigroup; if you ignored the topology, you would get something huge and unmanageable, which would be too bad, but with the topology there is essentially only one thing you can do. And the moment you say this, you may ask what the semigroup is: it is the semigroup of real numbers greater than or equal to one under multiplication. You can compute it, and this computation is the law of large numbers. Then to each morphism, and in particular to each object (an object being the morphism to the one-point space), you assign its element of the Grothendieck semigroup, which happens to be a number, and for some reason you take the log of it. Why you take the log is not obvious; naturally it is a multiplicative group, but you may take the log, and then you have entropy. And you cannot argue with this: the Grothendieck semigroup is not something ad hoc given to you; it sits inside the structure itself.
Now let me explain why this is so. There is nothing technically new in what I am saying, but conceptually, I think, it gives you a much better feeling for entropy; and this, I believe, is how Boltzmann thought about it, and I will explain why. The familiar formula then follows, and you can compute it easily once you know all this. But it is the law of large numbers that stands behind it; otherwise the formula would make no sense. The formula makes sense only because of the law of large numbers. And then many properties of entropy, which one usually proves by computation, follow from functoriality; they are natural, functorial statements. In particular the Shannon inequality which I stated: if you have a measure with its two projections, then the entropy of the whole thing is less than or equal to the sum of the two. The naive physical reasoning which I explained becomes a rigorous proof,
because it is functorial. So everything Boltzmann was blamed for, that he was not rigorous: the point, I think, is that mathematics was not ready; mathematics had no language to say what he was saying. And this in two respects. One was the functorial language he had in mind; and I think this has still not been fully transformed into modern language, take the Boltzmann equation for example. For the Boltzmann entropy you can now say: it is an element of the Grothendieck semigroup, and then you are happy; of course you still have to prove the law of large numbers, but you know beforehand what the entropy is. Now, what is the Boltzmann equation? People do not really know what it is. They write the equation, which is absurd in a way: if you think about it, you know what the equation expresses, but you cannot say it in words, so you write a formula, because you have no mathematical language to say exactly what it is. It too is a functor between certain categories, but one does not say what the categories are. This is how Boltzmann was thinking, and the objections of mathematicians arose precisely because he was saying it in the language of the mathematics of the 19th century. The other thing he was using all the time, implicitly, was non-standard analysis: he was thinking in those terms again and again, and the language was not ready; now we can say it.
In a way, entropy may be understood via the Grothendieck group of the non-standard completion of this category. That is the better way to think about it: it is unneeded at the beginning, but when you go deeper you see it becomes absolutely essential. You have to take the so-called non-standard completion, a non-standard model of this category, and look at spaces where all these quantities become infinite, because entropy appears only in the limit; and it is not surprising that what you see in physics involves rather big numbers. OK, now let me explain why this is so, why this Grothendieck group enters here.
So, what is the law of large numbers in this context? First, an essential point. What I am saying is surely well known in slightly different terms, but I could not find references; for some pieces I found the words people use in probability theory, but not for everything. So, what is the law of large numbers?
Again, mathematically this is very similar to what was happening in the Mendelian dynamics, just a slightly different aspect of it. You have a probability space P, a bunch of atoms with different weights. In this category of finite probability spaces (and this is essential) you have the Cartesian product, which in this example is a very simple thing: you multiply the underlying sets, and when you have an atom here and an atom there, the weight of the pair is the product of the two weights. Better to say: you have a segment here and a segment there, and you get the square. One should actually be careful in what sense this is a product; it is a product, but in the sense of the Cartesian product of measure spaces. Now, the law of large numbers, when it applies, applies to the high Cartesian power P^n as n goes to infinity. And when I say non-standard analysis: for much of what you do, this does not have to be a limit of a sequence of powers. It may be a single space, not a limit but just a finite probability space, where n is an infinitely large number: finitely many atoms, but "finite" understood in the non-standard way. And there is now good justification for thinking in these terms, because there is a highly non-trivial mathematical theory, due to Lewis Bowen, exploiting this idea: one proves really hard theorems with this way of thinking.
But the law of large numbers concerns exactly this: how does the space behave as n goes to infinity? Among all probability spaces there are those we may call homogeneous, namely those in which all atoms have equal weight. This is again a categorical notion; it makes sense purely categorically to say which objects are homogeneous. And the law of large numbers says that P^n, as n goes to infinity, is asymptotically homogeneous.
Now, why would this settle the matter of the Grothendieck group and everything else? Entropy, by functoriality, whatever comes out of the Grothendieck construction, must be multiplicative under products; after taking the log it becomes additive. (If you write the multiplicative group additively, it is additive.) So the entropy, whatever it is, of P times Q must be the entropy of P times, or plus, depending on the notation, the entropy of Q. That is a consequence of functoriality. Here I speak about entropy of objects rather than of morphisms; in this particular instance
that is sufficient, though of course at the next level it all becomes the same. On the other hand, for homogeneous spaces you are essentially in the category of finite sets, and there the Grothendieck invariant is, of course, just the cardinality of the set. So your entropy must be a generalization of cardinality. On homogeneous objects this fixes the normalization; in the Grothendieck group there is an ambiguity about what "one" is, you can multiply by a constant or use a different base of logarithm, and the normalization is: when all atoms have equal weight, the entropy equals the cardinality, if you work multiplicatively, or the log of the cardinality, if additively. It must be understood that a priori there are lots of functions with this property, an enormous number of them. For example, the sum of the squares of the weights, or the cubes, or whatever, is multiplicative under products. So there are lots and lots of them; thinking in terms of the Laplace transform, there is a whole world of such multiplicative invariants: essentially the space of all Laplace transforms of the weight distribution.
Essentially, of course, these are the sums of p_i to the power lambda and their combinations, a huge number of them, corresponding to Laplace transforms of the corresponding distribution functions. But the point is that the usual entropy has an extra property. Those quantities are multiplicative for all weights, while the entropy we construct is not: it is only marginally multiplicative, under the condition that the sum of the p_i equals one. In particular, you can check (it is a simple computation, but not a priori obvious) that this entropy, with the minus sign in place, is additive only using this condition, whereas something like the sum of p_i to the power lambda needs no condition at all: that norm is multiplicative for any lambda, but the entropy is additive only for normalized weights.
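A minimal numerical check of this point (my illustration):

```python
import numpy as np

# The "moments" sum_i p_i^lam are multiplicative under products for any
# weights; Shannon entropy is additive only for normalized weights.

def moment(p, lam):  return np.sum(p**lam)
def shannon(p):      return -np.sum(p * np.log(p))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.7, 0.3])
pq = np.outer(p, q).ravel()               # weights of the product space

lam = 1.7
print(moment(pq, lam), moment(p, lam) * moment(q, lam))  # always equal
print(shannon(pq), shannon(p) + shannon(q))              # equal: sums are 1

p2, q2 = 2 * p, 3 * q                     # unnormalized weights
print(shannon(np.outer(p2, q2).ravel()),
      shannon(p2) + shannon(q2))          # no longer equal
```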
Another feature is that this entropy is continuous; and when you say "topological category", you must say which topology you use. You have to use exactly the topology guaranteed by the law of large numbers. As I said, the law of large numbers says that the powers are asymptotically homogeneous: when you take a very high power, almost all atoms have almost the same weight. This is the law of large numbers applied to the log: think of log p as a function on the probability space; this function becomes almost constant, additively, which means all atoms become approximately equal. Approximately in what sense? In a rather weak sense, but exactly in the weak topology which you use for entropy. So let me explain what this topology is.
Well, let me just say two words about it; we can continue next time. I want to compare two different probability spaces and say what it means for them to be close; then I can say what it means to be close to a homogeneous space. So I need to introduce some kind of metric on the space of probability spaces. And here, immediately, I have to work with non-standard numbers: N will be a very, very large number. Closeness will not be a property of individual spaces; it only makes sense when the numbers are huge, and only that will be used. We do not care whether spaces are close as they stand; they may become close as N goes to infinity, or, better, think of N as a non-standard number: huge and unspecified. The definition is the following. There are two notions of being close, and they get combined.
One is additive and one is multiplicative. This is again quite significant, because in probability theory there are both structures: the additive one, with which you measure, and the multiplicative one, which comes from independence; and you have to use both. The additive one is very simple: if you have a measure space and throw away a subset of small measure, you agree that the result is close to the original. If N is infinitely large, you throw away an infinitely small piece of measure and they become equal; this is why non-standard analysis is so convenient, you do not have to specify the number. In the limit, as N goes to infinity, you throw away smaller and smaller parts, and what remains is close: they converge. And secondly, what do you mean multiplicatively? Multiplicatively it is trickier.
Note that in the additive notion the number N did not really enter; "small" makes sense regardless of the number. Multiplicatively, you take two spaces, with weights here and weights there, you look at the ratios of corresponding weights, and you want the ratio, raised to the power one over N, to be close to one. So the second operation that makes spaces close is: multiply the weights by factors such that the N-th root of each factor is close to one. We mix these two notions of closeness, and then we can say that two spaces, both depending on the parameter N, have distance going to zero when one is obtained from the other by these two operations: finitely many of them if you wish, but two will do. You may throw away sets of small measure, and you may multiply the weights by factors whose N-th roots are close to one.
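A minimal way to write this down (my formalization, not verbatim from the lecture): say that two finite probability spaces P and Q are epsilon-close at scale N if one may discard subsets of measure at most epsilon from each, after which there is a bijection between the remaining atoms with

\[
\left|\, \frac{1}{N}\, \log \frac{p_i}{q_i} \,\right| \;\le\; \varepsilon
\]

for all matched atoms; this combines the additive and the multiplicative notions in one condition.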
And the law of large numbers says that, in this sense, for the power P^N there is a sequence H_N of homogeneous spaces whose distance to P^N converges to zero: for sufficiently large N, every such power is approximated by homogeneous spaces. The moment you say this, you know everything about entropy, because entropy is multiplicative by definition, by functoriality, and now we also know it must be continuous with respect to this metric. So take P^N: because it becomes asymptotically homogeneous, its entropy equals the entropy of the homogeneous approximation; but because we took a power, the entropy was multiplied by N, so I must take the entropy of this creature and divide by N, inside the Grothendieck semigroup, by the law of large numbers. And because the approximating space is homogeneous, all weights equal, its entropy is the cardinality, so the answer is the log of the cardinality.
So everything you have to know about entropy you can read off from cardinality: you can forget about measures, it is the same as cardinality. This is exactly the mentality of Boltzmann. What is the entropy of a system? It is the number of states: entropy is the log of the number of states. There are no weights; the weights disappear when the system is large enough, because all the weights become equal. So everything can be reduced to the homogeneous case, and whatever you want to prove about entropy follows from what you know about sets.
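This is of course the content of the formula S = k log W, written in this form, as was said above, by Planck; and in the language of the previous paragraphs the dictionary reads

\[
H(P) \;=\; \lim_{n\to\infty} \frac{1}{n}\, \log\, |H_n|,
\]

where H_n is the homogeneous space approximating P^n and |H_n| is its cardinality, the number of states.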
Next time I will explain how to elaborate on this, continuing in the same functorial way, because so far this is entropy of finite measure spaces only. The big advance of entropy was in dynamical systems, in 1958, when Kolmogorov proved that entropy serves as an invariant of dynamical systems. He introduced this dynamical entropy, which was then polished by Sinai; this is the Kolmogorov-Sinai entropy theory. And you shall see that from this point of view it becomes almost apparent: both the formulation and the proofs become nearly automatic. It is essentially only a definition, because it is just an extension, a purely functorial, categorical extension of what I said: you extend this language, complete it in the categorical way, and you arrive at the whole theory. That is one line. Then, more recently, a couple of years ago, there was another line following exactly Boltzmann's kind of reasoning: you take what Boltzmann was saying, translate it into this language, and you get the Kolmogorov-Sinai entropy with all its definitions and theorems, just from the words; you simply reinterpret what Boltzmann says in a categorical way, and what he says is, by the way, just hand-waving, nothing mathematically hard-core in it. But a more subtle point was reached recently by pursuing the other line of thought, the one corresponding to non-standard analysis. This is more subtle; it was elaborated by Lewis Bowen, and it gives a further extension of entropy, much more sophisticated, where the questions are more difficult and bring us to what we do not know.
And next time I will start explaining how you can arrive at all of this by thinking in physical terms. It will be as remote from physics as what I was saying here was remote from biology: you start with what you think is physics and, as a mathematician, translate the naive statements into mathematical language. I have partly done it, and I will do it in a more elaborate way next time. OK, so for today we are finished.
Thank you.