
Fuzzy generics


Formal Metadata

Title
Fuzzy generics
Subtitle
Several months of using 1.18 features
Series title
Number of parts
287
Author
Contributors
License
CC Attribution 2.0 Belgium:
You may use, modify, and reproduce the work or its content in unchanged or changed form, and distribute it and make it publicly available for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of publication
Language

Content Metadata

Subject area
Genre
Abstract
Several months ago, a new project was started: FerretDB, an open-source alternative to MongoDB. It is a proxy between MongoDB clients/drivers and PostgreSQL, developed in Go. Since the very first commit, it has used a version of Go which will soon be released as Go 1.18. The two biggest reasons for that were first-class support for type parameters (generics) and fuzzing. In my talk, I will cover both of them: how they work in theory and in our practice, how useful they were for us (spoiler: very useful), and what their downsides and gotchas are.
Transcript: English (automatically generated)
Our next speaker will be talking about detecting the Log4Shell vulnerability using Go. Sorry, Nova, we're live.
What? He's sick? Oh, no. We've got an eraser. Yeah? What? What's he going to talk about? Okay. Yeah. Thank you.
Thank you. So first of all, I want to wish Leo a speedy recovery. Next up, we're going to talk about two hot topics in Go: we now have generics, and we also have fuzzing. What if we just combine them together into one hot new talk? Well, Alexey is going to do that. So let's talk about fuzzy generics.
Hello, FOSDEM. Thanks for having me. My name is Alexey Palazhchenko, and I will tell you about my experience of using the two biggest Go 1.18 features: generics and fuzzing. Several months ago, we started a new project. It's called FerretDB. FerretDB is a truly open-source alternative to MongoDB.
It is a proxy written in Go, of course, which speaks the MongoDB protocol but stores data in PostgreSQL. This way, you can have an open-source solution with a reliable database, and without all the problems of the SSPL. So from the very beginning, we started using Go 1.18, and you may wonder why, because that version is not released yet. Of course, you already know the answer: the two biggest features were generics and fuzzing. If I showed you the relative size of these features, it would be something like this. We currently use fuzzing much, much more than generics,
but let's talk about generics first; you're probably more interested in that. So originally, the very first version of FerretDB actually did use generics. But that was several months ago, when generics were not as reliable as they are now, and it also wasn't very clear whether they should be used there or not.
So they did not work very well for us, and we switched back to using interfaces, as we had used interfaces for many, many years before. And some time ago, I was preparing this talk about how generics work and why they didn't work for us. And I thought, okay, let me extract small examples, just for the talk, into a separate repository, so it would be easier to show people how they work and how they don't. FerretDB is quite a large code base already, so a small repository would be easier to understand. And it turned out that generics are not that bad.
When you extract them into a small, separate, self-contained problem, things become much clearer. We also now understand that we basically did not use them correctly. So now we know how to do that, and maybe we are going to use them in the future. To understand the problem, let me introduce some MongoDB concepts. You have probably heard about BSON. BSON stands for binary JSON; it is a binary serialization format used by MongoDB and also by FerretDB. A BSON document is essentially an ordered map of key-value pairs, where keys are strings and values are any other BSON values, including other documents, of course. It is the central data structure in FerretDB and also MongoDB. MongoDB commands, at a particular level, are BSON documents, and to know which command it is, we have to check the first field of the document. And of course, to have the notion of a first field, it must be an ordered map. So how do we implement that in Go? With interfaces, we would do something like this. You have probably seen this code many times and written it too: we just define an interface with an unexported method, and then define our own types, which wrap basic types and implement this method, which does nothing.
You may wonder why we have this method and why it's not exported. That's to prevent someone else from defining types that implement this interface in other packages. Sometimes that's desirable, sometimes not. In this case, we want all BSON types to be defined in the same package,
so that users of this package cannot define their own BSON types. That sounds reasonable. But the method itself does not do anything useful. There are also several interesting linters that can check that, when we do, for example, a type switch over the different BSON types, all types are actually covered by cases. And then the implementation of documents looks like this: we basically have a map of key-value pairs, where the value is any BSON type, that is, any value implementing this interface, and we have a slice of keys which preserves the order.
So the Set method behaves as you would expect: the key is a string and the value is a BSON type. We check whether this key is already present in the map; if not, we append it to the keys slice. So if you add a new key, it is added at the end, in order.
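The interface-based design described above can be sketched like this (a hypothetical reconstruction; the names and details differ from FerretDB's actual code):

```go
package main

import "fmt"

// bsonType is implemented only by types in this package,
// because the marker method is unexported.
type bsonType interface {
	bsontype() // marker method, does nothing
}

// Wrapper types around basic Go types (hypothetical names).
type String string
type Int32 int32

func (String) bsontype() {}
func (Int32) bsontype()  {}

// Document is an ordered map: a map for lookups plus a slice
// that remembers the insertion order of keys.
type Document struct {
	m    map[string]bsonType
	keys []string
}

func NewDocument() *Document {
	return &Document{m: map[string]bsonType{}}
}

// Set appends the key if it is new, or replaces the value in
// place if the key already exists, keeping the original order.
func (d *Document) Set(key String, value bsonType) {
	if _, ok := d.m[string(key)]; !ok {
		d.keys = append(d.keys, string(key))
	}
	d.m[string(key)] = value
}

func main() {
	d := NewDocument()
	d.Set("foo", Int32(42)) // the value must be wrapped explicitly
	d.Set("bar", String("x"))
	d.Set("foo", Int32(7)) // replaces the value, keeps key order
	fmt.Println(d.keys)    // [foo bar]
}
```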
If you set an existing key, it just replaces the value without changing the slice of keys. And we use it like this: we just call Set with a string key and a value. You can see that the key here is not wrapped. That's because it is an untyped string constant, and an untyped string constant can be automatically converted to string and to any string-like type, and our String type is very much like string. But for the value, we have to wrap it into the BSON type interface: a plain int does not implement this interface, so we have to wrap it explicitly.
The problem is exactly that we have to wrap it. It would be nice to be able to use basic types for both key and value, right? Currently, that's not possible with interfaces. Another problem is that we can also set a nil value. Sometimes that could be desirable: in BSON there actually is a null value, and it can be represented as nil. But we decided to use a separate value for now, because nil causes too many problems. So it would be nice to disallow that code at compile time, to prevent someone from accidentally passing nil to the function. So let's see how the same code looks with generics.
First of all, we define a constraint, which looks like an interface but has a list of types inside. You can see that this constraint is satisfied, you could say, by int, string, or pointer to Document. It looks kind of like a type union, but it's not, and that's important. And here is the document. You can see that the map value type here is any. Any is the empty interface in Go 1.18: instead of interface with empty curly brackets, you can just write any. But why don't we use BSON type here? Why did we introduce all those generics to get compile-time safety, only to use an interface which says even less than the BSON type in the previous example? The reason is that BSON type is a constraint, not just an interface. At any particular instantiation, it can be only one specific type. For example, I can create a document which contains only integers as values, or only strings as values. But if I want a document that contains both of them, I can't use the constraint there, so I have to use any. In fact, that's not that bad, because it's an implementation detail, and we can expose all the public functionality of the document type as functions with constraints. So inside, values will be wrapped into an interface, but the outside API will be type-checked at compile time. Okay, let's see how the method for setting a value would look. It looks almost exactly the same as the Set method in the previous example, but here we accept an additional type parameter of type BSON type, which we call T, and then we use it in the actual parameter list. So the key is a string, now the basic string type, not our own type, and the value is of type T. Other than that, it looks the same. All right, that code actually has one problem: it doesn't compile. That's because methods cannot have type parameters yet in Go, and the reason for that is quite interesting.
By definition, an interface is just a set of methods, and if a type wants to implement some interface, it can do that by just defining the same methods as the interface. But what if an interface contains methods with type parameters? How should a type implement such an interface: by using the same type parameters, or any compatible type parameters, or not at all? How should that work? Other than that, methods are mostly syntactic sugar for functions, and the Go team for now decided not to have generic methods at all, because it is not clear whether interfaces can be implemented by methods with type parameters or not.
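A sketch of the constraint and the function-based workaround just described (hypothetical names; the commented-out method form shows what does not compile):

```go
package main

import "fmt"

// BSONType here is a constraint (a type set), not an ordinary
// interface: when instantiated, T is exactly one of these types.
type BSONType interface {
	string | int32 | float64
}

// Document stores values as any internally; the constraint
// cannot be used as a map value type, because a single document
// may mix value types.
type Document struct {
	m    map[string]any
	keys []string
}

// Methods cannot have their own type parameters in Go 1.18, so
// the generic Set is a free function taking the document first.
// The method form would not compile:
//
//	func (d *Document) Set[T BSONType](key string, value T) { ... }
func Set[T BSONType](d *Document, key string, value T) {
	if _, ok := d.m[key]; !ok {
		d.keys = append(d.keys, key)
	}
	d.m[key] = value
}

func main() {
	d := &Document{m: map[string]any{}}
	Set(d, "foo", int32(42)) // basic key and value types, no wrapping
	// Set(d, "bar", nil)    // does not compile: nil satisfies no type in the constraint
	fmt.Println(d.keys, d.m["foo"]) // [foo] 42
}
```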
Should methods with type parameters even be part of interfaces? That question is open. So for now, they decided that functions are enough, and maybe in the future methods will be able to have type parameters. So we have to use a function. The function looks exactly the same as the previous example, but now we pass the document as the first argument instead of having it as the receiver. Other than that, we still declare the type parameter and we still use it. So what does the usage look like? We call the document Set function, passing the document; "foo" is a basic string and the value is a basic integer. And the second example does not compile, because nil is not part of our BSON type constraint. Awesome. So let's take a look at another function, one that creates a new document. It accepts a number of pairs; you have probably seen this API pattern somewhere. Every time it is called, this function has to check the number of parameters passed, so that it is not an incorrect number of arguments which do not form pairs; it has to check the types of the arguments, et cetera. And all of that happens at run time. It would be nice to check it at compile time, but that's not possible with interfaces.
So how do we use it? We call the function with a number of parameters. Again, all the values are wrapped, but we can't do better with interfaces. It is also possible to call this function with an invalid number of arguments, and with invalid types.
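The pairs-based constructor with its run-time checks might look like this (a simplified, hypothetical sketch; real code would likely return errors rather than panic):

```go
package main

import "fmt"

// MustMakeDocument builds a document from key/value pairs. With
// interfaces, every check happens at run time: the argument
// count must be even, keys must be strings, and wrong types are
// only caught when the program runs.
func MustMakeDocument(pairs ...any) map[string]any {
	if len(pairs)%2 != 0 {
		panic(fmt.Sprintf("invalid number of arguments: %d", len(pairs)))
	}
	doc := make(map[string]any, len(pairs)/2)
	for i := 0; i < len(pairs); i += 2 {
		key, ok := pairs[i].(string)
		if !ok {
			panic(fmt.Sprintf("key %v is not a string", pairs[i]))
		}
		doc[key] = pairs[i+1]
	}
	return doc
}

func main() {
	doc := MustMakeDocument("foo", int32(42), "bar", "baz")
	fmt.Println(len(doc)) // 2
	// MustMakeDocument("foo")         // compiles, but panics at run time
	// MustMakeDocument(int32(1), "v") // compiles, but panics at run time
}
```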
For example, the key of a document should be a string, but here it is an integer. Neither of those mistakes is caught at compile time. So how can we do better with generics? With generics, we can write a function which accepts a single type parameter, the type of the field's value, and one key-value pair, and returns a document. An error is not possible there, so we don't even return one: an invalid number of parameters cannot be passed, nil cannot be passed, and an invalid key type cannot be passed either. But what if we want to make a document with two pairs? Unfortunately, there is no support for variadic type parameters yet, so we have to write something like this: we declare two type parameters and then accept two key-value pairs. And if we want to create a document with three pairs, we need yet another function. Of course, that doesn't look great, but it's the best we can do right now, unfortunately. Okay, let's take a look at the Get method. For interfaces, it's a very basic method which just returns a value from the map, and if the value is not present, nil is returned; nil is the default zero value for all interfaces.
So here it is actually somewhat useful. But we also have to check that the value has the expected type, and we can't check against a basic int; we have to wrap our int into the interface first. So how does that look with generics? Let's write a function which returns a value.
Here we say that the user should pass not only the key but also the expected type of the field, right? So we convert the return value to the given type, and if the type is not correct, we return the zero value of that type, to indicate that the field is not present or its type is incorrect. We could also return an error, for example, but for simplicity, let's say we return the zero value. That may not be all that useful. Again, the problem here is that the caller has to know the type; and since here we don't assign the result to an existing variable, we have to specify the type explicitly. So I don't know how useful that is. In my practice, interfaces are better in some aspects and worse in others, and the same goes for generics: sometimes they solve the problem for us, sometimes they don't.
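A sketch of the generic Get just described (hypothetical names; the zero-value behavior is the simplification the speaker mentions):

```go
package main

import "fmt"

// BSONType as a constraint (hypothetical sketch).
type BSONType interface {
	string | int32 | float64
}

type Document struct {
	m map[string]any
}

// Get converts the stored value to the requested type T. If the
// key is missing or holds a different type, the zero value of T
// is returned; returning (T, bool) or an error would often be
// more useful in practice.
func Get[T BSONType](d *Document, key string) T {
	v, ok := d.m[key].(T)
	if !ok {
		var zero T
		return zero
	}
	return v
}

func main() {
	d := &Document{m: map[string]any{"n": int32(42)}}
	// There is no variable to infer from, so the type argument
	// must be written explicitly:
	fmt.Println(Get[int32](d, "n"))        // 42
	fmt.Println(Get[string](d, "n") == "") // true: wrong type yields the zero value
}
```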
So if you're looking for a rule of thumb, should you use generics or interfaces? Unfortunately, I don't have one. I would say it depends on your situation. But at least now you know the limitations: there are no generic methods, and there are restrictions on which types you can use, what already works and what doesn't.
Okay, let's look into fuzzing. Fuzzing might be much simpler and more interesting. So what is fuzzing? Fuzzing is essentially testing/quick on steroids; testing/quick is a standard Go package that allows you to generate random values. Fuzzing works a bit like that: it generates random values and passes them to your function, but then it tries to generate good values that increase the chance of making your code hang, crash, panic, enter an infinite loop, or use a lot of memory, something like that. And it is implemented in a way somewhat similar, in principle if not in implementation, to how test coverage works: it instruments the code, sees what code is executed with what input, and tries to generate better inputs to increase coverage. One important part of FerretDB is, of course, the MongoDB binary protocol parsing packages.
And that's the sweet spot for fuzzing, because binary parsing is quite complicated. There are a lot of edge cases, a lot of problems you just can't find with basic unit tests, because we developers tend to test the happy path, while fuzzing can generate inputs that make our parser crash.
Actually, FerretDB had fuzzing tests even before unit tests; they were that useful. That's because we started by dumping MongoDB protocol packets and writing code to parse them, and we were more interested in getting that right, as in not having any crashes. After that, once we started to understand the protocol, we began writing good unit tests, not just fuzzing tests. But before we talk about fuzzing, let's talk a bit about table-driven tests. You probably know the pattern: we define a test case with a name, some bytes, an expected value, and an expected error. And we check it like this: we unmarshal the bytes into a value and verify that we got the expected value. Then we marshal v, or v2, it doesn't matter because they are the same, into a new slice of bytes and verify that the bytes are the same.
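The round-trip table test described above can be sketched like this, using encoding/json as a stand-in for the real BSON parser (the helper and names are hypothetical):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"testing"
)

// msg stands in for a parsed protocol message; encoding/json
// substitutes for the real BSON unmarshal/marshal here.
type msg struct {
	Key string `json:"key"`
}

// roundTrip unmarshals b into a msg and marshals it back.
func roundTrip(b []byte) (msg, []byte, error) {
	var v msg
	if err := json.Unmarshal(b, &v); err != nil {
		return msg{}, nil, err
	}
	b2, err := json.Marshal(v)
	return v, b2, err
}

var testCases = []struct {
	name string
	b    []byte
	want msg
}{
	{name: "simple", b: []byte(`{"key":"v"}`), want: msg{Key: "v"}},
}

// TestRoundTrip would live in a _test.go file; each case runs as
// a named subtest, so failures are reported per case.
func TestRoundTrip(t *testing.T) {
	for _, tc := range testCases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			v, b2, err := roundTrip(tc.b)
			if err != nil {
				t.Fatal(err)
			}
			if v != tc.want {
				t.Fatalf("got %+v, want %+v", v, tc.want)
			}
			if !bytes.Equal(tc.b, b2) {
				t.Fatalf("bytes differ: %q vs %q", tc.b, b2)
			}
		})
	}
}

func main() {
	v, b2, err := roundTrip([]byte(`{"key":"v"}`))
	fmt.Println(v.Key, string(b2), err)
}
```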
That's a very basic test for parsing binary data, quite common. And it looks like this: we just iterate over the test cases and run a subtest for each, with a name. The name gives us nicer failure messages and also means we don't stop at the first failing test case; each failing test case fails individually. Fuzzing looks very similar. We write a FuzzDocument function, which accepts a testing.F instead of a testing.T, and we call f.Add with the bytes from the test cases. Then there is the Fuzz function, similar to the subtest function, whose callback accepts an additional parameter b, the generated bytes. The fuzzing engine then tries to generate interesting values based on the seed values already provided, and the seed values are whatever you pass to f.Add; in my case, the data from the table tests.
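The fuzz target built on the same round trip might look like this; encoding/json again stands in for the real parser, and all names are hypothetical:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"testing"
)

// checkRoundTrip is the property the fuzzer exercises: if the
// input parses, marshaling must reach a stable canonical form.
func checkRoundTrip(b []byte) error {
	var v map[string]any
	if err := json.Unmarshal(b, &v); err != nil {
		return nil // invalid input is fine; we only care about crashes and broken round trips
	}
	b2, err := json.Marshal(v)
	if err != nil {
		return fmt.Errorf("marshal: %w", err)
	}
	var v2 map[string]any
	if err := json.Unmarshal(b2, &v2); err != nil {
		return fmt.Errorf("re-unmarshal: %w", err)
	}
	b3, err := json.Marshal(v2)
	if err != nil {
		return fmt.Errorf("re-marshal: %w", err)
	}
	if !bytes.Equal(b2, b3) {
		return fmt.Errorf("unstable round trip: %q vs %q", b2, b3)
	}
	return nil
}

// FuzzDocument would live in a _test.go file.
func FuzzDocument(f *testing.F) {
	f.Add([]byte(`{"key":"v"}`)) // seed corpus: bytes from the table tests
	f.Fuzz(func(t *testing.T, b []byte) { // b is the generated input
		if err := checkRoundTrip(b); err != nil {
			t.Fatal(err)
		}
	})
}

func main() {
	fmt.Println(checkRoundTrip([]byte(`{"key":"v"}`)))
}
```

It would be run with `go test -fuzz=FuzzDocument`; the engine mutates the seeds added via f.Add.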
And that, I think, will be quite common in the future. Then we do the same: we unmarshal the bytes, compare, then marshal and compare. So what are the problems with that approach? First of all, we can unmarshal data, then marshal it back, and get different bytes.
How can that be? First, the input slice of bytes may be larger than needed. That happens when, for example, you unmarshal some value from a slice of bytes, but some of the bytes are not read during the unmarshaling process. Let's say you unmarshal a JSON document: the first byte you get is an opening curly bracket, and the second byte is a closing curly bracket. You have read just two bytes and you already have a full JSON document; the rest of the bytes were just generated filler. The solution is quite simple: the unread portion of b should be ignored, so instead of comparing b with b2, you compare b2 with the slice of bytes that was actually read. Another problem is that sometimes two different byte sequences produce the same result. Again, in the case of JSON, that could just be extra whitespace: for example, a document with a space between the colon and the field value. There is no difference from the logical perspective, but the bytes on the wire, or in memory, are different. Our solution for that is some form of canonicalization: for JSON, for example, you can just call json.Compact and get a canonical version of that representation.
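The canonicalization idea for JSON can be sketched with json.Compact:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// canonical reduces a JSON document to a canonical form, so that
// byte sequences differing only in insignificant whitespace
// compare equal. json.Compact strips that whitespace.
func canonical(b []byte) ([]byte, error) {
	var buf bytes.Buffer
	if err := json.Compact(&buf, b); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	a, _ := canonical([]byte(`{"key": "v"}`)) // space after the colon
	b, _ := canonical([]byte(`{"key":"v"}`))
	fmt.Println(bytes.Equal(a, b)) // true
}
```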
Okay, so what are the current issues with fuzzing? First of all, there is no support for subtests, unfortunately. You can't call f.Run, because it does not exist. That means we can't use the same function for both table-driven tests and fuzzing tests.
And they are very similar; in the previous example, you saw that they look very, very similar. It would be nice to be able to iterate over the test cases, test them right there, and then pass them to the fuzzing function, right? But there is no support for subtests, which means the first test case that fails actually fails the whole test. That's not really helpful. Hopefully, some form of subtests will be introduced in the future. The second problem is that seed values are unnamed. That means that once we add seed values to the corpus, and the fuzz function then fails on one of them, the failure message says something like: this function fails for seed value number two. And you have to check all the seed values. And if you load seed values from files on disk, for example, rather than from code, you have to figure out which seed value it was. That's fine if the files have distinct names like one, two, three, but if they are hashes, it can be tricky. I also encountered a strange problem which is not easily reproducible in general, although it's very easy for me to reproduce, but I wasn't able to reduce it to a small self-contained bug report.
One thing that fuzzing should detect is hangs in tests. But in my case, if I run fuzzing with GOMAXPROCS unset, so that by default it uses all the cores, just like compilation or normal tests, sometimes fuzzing fails with a message saying that for a particular test case the function hangs. It writes a file to disk and tells you: run go test with this flag to check this particular input. You run it, and it passes. As a workaround, I found that setting GOMAXPROCS manually to a value smaller than the number of cores helps. For example, on a 20-core machine, passing six works reliably; if I pass ten, sometimes it doesn't. So by default I set it to half the number of cores I have. I will try to reproduce the problem and report it; maybe it will be fixed before the release.
But overall, fuzzing is great. By using it, I found a few problems in go test itself, and most of them were fixed. And I found many, many bugs in FerretDB's parsing code; you can't imagine how wonderful that was. So definitely, definitely use it for parsing.
If you have any parsing in your code, like network protocols, then don't trust input from users; even if you can trust it, you should verify it, and use fuzzing for testing your parsers. And here are some links.
I created a separate repository with all the examples from this talk about generics and interfaces. There is another repository I want to give a shout-out to: it's an introduction to generics, going from the very, very beginning down to the internals and implementation details.
There are now two official tutorials on the website, one for fuzzing and one for generics, and there is a dedicated page for fuzzing specifically. And there are links about me and the FerretDB project. Check us out on GitHub, give us a star, and you can also follow me on GitHub and Twitter.
Thank you. Muntz asks: is it possible to run the fuzzer and the race detector together? Yeah, that was actually one of the issues I found while using fuzzing. It wasn't possible in one edge case, and I reported the problem. It's fixed now: with the latest version of Go, compiled from source, it is possible, yes. So now you can run the fuzzer with the race detector, and it is useful for finding races in fuzz functions. Thank you. A question from me: would you recommend fuzzing to everyone, or only in certain use cases? So there are cases where fuzzing works very, very well: anything with untrusted input, even trusted input, binary data, text protocols. There, fuzzing is a must. And I would say that, for me, its absence is a big red flag. If I go check some pull request on GitHub, say, just yesterday I found a new project, a virtual machine written in Go, but there are no fuzzing tests.
And that was a big red flag for me, and I'm looking forward to contributing fuzzing test support. There are other cases where fuzzing is not easy. For example, you can in theory use fuzzing to generate queries and send them to the database. Other approaches may be simpler, but if you have a really powerful fuzzing engine like the one in Go, you can also try to use it for that. Will asks: were there any challenges when upgrading FerretDB to Go 1.18 before the official release? So there were no challenges, because we started with that version from the very beginning. But I also knew from the start that by the time we go to production, Go 1.18 would already be released, so that wasn't a problem for us. And fuzzing, for me, was important enough to justify using that version.
How long, oh, the question has just moved on my screen, that's annoying. How long does it take to learn the Go fuzzing APIs so that you can be productive and write interesting tests? I would say the fuzzing API is the best example of how well fuzzing is integrated into Go.
Before that, there was a separate project that required separate tooling, separate run instructions, a separate workflow. Now it is very deeply integrated with go test: you just run go test with the -fuzz flag and the fuzz function name, and that's it. So getting started is very, very easy. Being productive and getting bugs from fuzzing also comes quickly. Over time, you start writing better and better fuzz functions: you can, for example, check two different implementations against each other and find bugs that way, and so on. So writing good fuzz functions takes some time, but starting is very easy. I highly recommend everyone to do it.
MFRW asks: from what I've read, fuzzing is a process that can go on until the trumpet blows. Where do we draw the line when fuzz testing, and how do we keep track of the time and resources that fuzzing actually takes?
Yeah, that's a good question. First of all, there are two kinds of issues fuzzing can find, right? One of them is plain bugs: panics, errors, and so on.
The second kind is when fuzzing finds something that takes a lot of memory or a lot of time. For example, you can fuzz something like a virtual machine, and a virtual machine can execute a valid program which just happens to contain an infinite loop. There is no easy answer for that.
In the virtual machine example, you can add some facility to cancel the execution, then invoke that facility and check that it actually works, so that your virtual machine cannot get completely stuck. And then there is another aspect: how often do you run fuzzing? Do you run it on CI?
Do you run it on a machine in your garage for hours at a time? Again, there is no easy answer. My take would be: run it on CI for a few minutes for each pull request you receive. So basically, while other jobs run integration tests, unit tests, linters, and so on, another job can run fuzzing. Then maybe have a nightly run for an hour or so. That's what we do, at least for now. I have a follow-up question. First of all, I want to say sorry: there were no technical delays, my AirPods just died on me in the middle of a question.
So I quickly switched to a backup. Zygmunt asks: how do you run fuzzing in CI, and more specifically, where do you store the results of fuzz tests, since they are not deterministic? Yeah, again, great question. Right now, fuzzing has two corpora.
One of them is called the seed corpus: it stores data in Go code (you add entries with the Add function) and also in the testdata directory with fuzzing inputs. The other is the generated corpus, and it is stored in the Go cache. In FerretDB, we tried an experimental approach: we basically combined the two corpora, stored them in a separate repository, seeded it on CI, and then pushed updates from CI back to that repository. You can find it on GitHub in the FerretDB organization; it's the fuzz corpus repository. That's an experiment; I don't know yet whether it will become a best practice for us.
Thank you. If there are still any questions, we have around three minutes left, so please feel free to post them in the chat. One more question: did you notice any performance impact when using generics in your code?
So we are quite far from that yet. We are working on basic functionality, on making it work, and we have not started performance testing yet. There are a lot of places where the performance of FerretDB can be improved: in some cases the code was optimized for simplicity, and in most cases for the speed of writing it. Basically, there is a lot of non-performant code, and once we start, we will investigate. But I also hope that by that time, generics will be faster. It's not a problem for us yet. Thank you.
And I'm actually getting more compliments than questions in the chat right now, so I think that answers all the questions about generics for today. Unless there is any quick last question, I really want to thank you again. Alexey jumped in just last week to produce this talk, because Leo Wasik couldn't make it. So thank you very much for doing this all last-minute, and thank you for joining us. I think he also deserves a round of applause. I wish you much luck with the podcast you have to go to, and the five other talks you still have to give today. Yeah. Thank you. Thank you very much.
Bye. Bye.