
Optimizing string usage in Go programs

Formal Metadata

Title: Optimizing string usage in Go programs
Number of Parts: 542
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Abstract
Strings can seem like one of the most innocuous data structures in Go. Yet they still play a significant role in the most ubiquitous types of programs, such as text processors, in-memory key-value stores, DNS resolvers, or codecs. Their burden on the performance of such programs becomes especially pronounced in distributed systems and cloud-native environments, where the number of strings within an instance of the software can reach the order of millions or more. This gives rise to performance issues and bottlenecks, especially with regard to memory consumption. To provide answers to these problems, the talk discusses several string-optimization techniques. To be accessible to both beginner and intermediate audiences, the talk first states the problem, briefly introduces strings as a data structure, and explains how strings look internally in Go. The core of the presentation is dedicated to operations on strings and how these can be optimized with various techniques, such as string interning and string concatenation. The author also shares his experience and practical examples of open-source programs where these techniques are applied.
Transcript: English (auto-generated)
Okay, our next speaker is going to talk about something we all use in Go, which is strings. If you've never used strings in Go, what are you doing here? So let's give a round of applause for Matej.
Thank you, everyone. Excited to be here, excited to see so many faces, and excited to speak at FOSDEM for the first time, which is also a bit intimidating. But hopefully I can show you a thing or two about string optimization in Go.

About me: my name is Matej Gera. I work as a software engineer at a company called Coralogix, where we're building an observability platform. Apart from that, I'm active in different open-source communities, mostly within the Cloud Native Computing Foundation, specifically in the observability area, and I work a lot with metrics. I'm a maintainer of the Thanos project, which I will also talk a bit about during my presentation, and I contribute to a couple of other projects, most notably OpenTelemetry. These are my handles; I'm not very active on social media, so the best way to reach me is directly through GitHub issues or PRs.
Let's get into it. If nothing else, I'd like you to take at least three things from this presentation today. First, I'd like you to understand how strings work behind the scenes in Go. This might be old news for people who are more experienced with Go, and new knowledge for newbies, but I want to set a common ground from which we can then talk about optimization. Secondly, I want to tell you about the use cases in whose context I have been thinking about string optimization, and where the presented strategies can be useful. And lastly, I want to tell you about the actual optimization strategies and show some examples of how and where they have been applied.

I won't be talking much today about stack versus heap, although a lot of this has to do with memory; for this presentation I assume we're talking about the heap and the long-term storage of strings in memory. I also won't be going into encoding or related types like runes, although it's all related; it's outside the scope for today. So let me first tell you what brought me to this topic.
What was the inspiration behind this talk? As I already said, I work primarily in the observability landscape, with metrics, and over the past almost two years I have been working a lot on the Thanos project, which I mentioned and which, for simplicity, you can imagine as a distributed database for storing time series. It's intended to store millions of time series, even up to or beyond a billion series; we have heard about deployments like that. As I was working with Thanos and learning about its various aspects and components, one particular issue that stood out to me was the amount of memory certain Thanos components need to operate. This is partly due to the fact that the time series data is stored in memory, in a time series database. So this is where I decided to focus my attention, and I started to explore possible avenues for optimizing performance.

A big role here was played by doing this in a data-driven way, so I started looking at different data points from Thanos: metrics, profiles, benchmarks. A small side note, because I consider data-driven performance optimization to be of utmost importance when you're improving the efficiency of your program: I don't want to diverge here, but I highly recommend you check out the talk by Bartek Plotka, who I think is in the room and is speaking a couple of slots after me, and who dedicates a lot of his time to this data-driven approach to efficiency in the Go ecosystem. I don't have it on the slide, but the presentation after mine, about squeezing Go functions, also seems interesting. So there are a lot of optimization talks today, which I love to see.

You might also ask: why strings specifically? What makes them so interesting, or so optimization-worthy?
Although I had been looking at Thanos for some time, something clicked after I saw a particular image in a different presentation, one by Bryan Boreham, which should also be available somewhere around FOSDEM. He works on a neighboring project called Prometheus, the time series database on which Thanos is built; Thanos is, in a sense, a distributed version of Prometheus, and we reuse a lot of code from Prometheus, including the actual time series database code. Based on a profile and the icicle graph you see here, he showed that labels take up most of the memory in Prometheus, around one third. When I thought about it, the result was rather surprising to me, because the labels of a time series can be thought of as a kind of metadata, contextual data about the actual data points, the samples as we call them. And these were taking up more space than the actual samples themselves. A lot of thought and work has been put into the optimization and compression of the samples, the actual time series data, but Bryan's finding indicated that more can be squeezed out of labels.

And what actually are labels? Labels are key-value pairs attached to a given time series to characterize it, so in principle they are nothing more than pairs of strings. This is what brought me, in the end, to strings, and it inspired me to bring this topic to a larger audience. I thought it might be useful to look at it from a more general perspective: even though we're dealing with this problem in the limited space of observability, I think some of the learnings can be applied in other types of programs as well.

So first, let's lay the foundations of this talk by taking a look at what a string actually is in Go.
Most of you are probably familiar with the various properties of strings: they are immutable, they can easily be converted to a slice of bytes, they can be concatenated, sliced, and so on. However, talking about the qualities of strings does not answer the question of what strings really are. If you look at the source code of Go, you'll see that strings are actually represented by the stringStruct struct. So strings are structs; shocking, right? You can also get the runtime representation of this from the reflect package, which contains the StringHeader type. Based on these two types, we see that a string consists of a pointer to the actual string data in memory and an integer giving the size of the string. When Go creates a string, it allocates storage corresponding to the provided string size and then sets the string content as a slice of bytes. The string data is stored as a contiguous slice of bytes in memory. The size of the string stays the same during its lifetime since, as I mentioned previously, the string is immutable; this also means that the size and the capacity of the backing slice of bytes stay the same.
This also means that the size and the capacity of the backing slice of bytes stays the same When you put this all together the total size of the string will consist of the overhead of the string header Which is equal to 16 bytes and I show in a bit why and the byte length of the string We can break this down on this small example of the string I created with FOSDEM space waving hand emoji
So this is just a snippet. I don't think it's a it's a it would compile this code but for brevity I decided to to show these three small lines and By calling the size method on the string type from the reflect reflect package You would see it return number 16 and don't be fooled the size
Method returns only the information of the size of the type not size of the whole string Therefore it correctly tells that it's 16 bytes 18 bytes due to pointer pointing to the string in memory and 8 bytes for keeping the string length information To get the size of the actual string data. We have to use the good old LAN method
This tells us it's 11 bytes since the string literal here is UTF-8 encoded We count one byte per each letter and space and we need actually four bytes to encode the waving hand emoji And this brings our total to 27 bytes Interestingly for such a short string the overhead of storing it is bigger than the string data itself
It's also important to realize what happens if we declare a new string variable that copies an existing string. In this case Go creates what we can consider a shallow copy, meaning the data the string refers to is shared between the variables. Let's break it down again on our FOSDEM example. We declare a string literal, "FOSDEM" plus the waving-hand emoji, and then create a new variable, newStr, and set it to the value of str. What happens behind the scenes? If you looked at the address of each of the string variables, you would see different addresses, making it obvious that these are, strictly speaking, two different strings. But looking at their headers, we would see identical information: the same pointer to the string data and the same length. Because they are two different strings, we need to be mindful of the fact that newStr comes with a brand-new string header.
So the bottom line is: when we do this copying, even though the data is shared, the 16-byte overhead of the header is still there.

I briefly talked about my inspiration for this talk, but I also want to expand a bit on the context of the problems where I think these string optimization strategies can be useful. In general, many programs with the characteristics of in-memory stores may face the performance issues I'll talk about on this slide. I've already mentioned such programs numerous times: time series databases, DNS resolvers, or any other kind of key-value store. The assumption is that these are long-running programs, and over the runtime of the program the number of strings keeps accumulating, so we can be talking about potentially billions of strings. There is also potential for repetition, since many of the stored values may repeat themselves. For example, if we associate each of our entries with a label denoting which cluster it belongs to, we are guaranteed to have repeated values, since we have a finite and often small number of clusters. So the string "cluster" will be stored as many times as there are entries in our database. There are also certain caveats when it comes to handling incoming data.
Data will often come in the form of requests, through HTTP or gRPC or some other protocol, and usually we handle it in our program by unmarshaling it into a struct. We might then want to store some information, some string from the struct, in memory for future use. However, the side effect of this is that the whole struct will be prevented from being garbage collected: as long as the string, or, as a matter of fact, any other field from the struct, is referenced by our in-memory database, garbage collection won't kick in, and this eventually leads to bloat in memory consumption.

The second, different type of program where string optimization can be useful is one-off data processing, as opposed to the long-running programs. We can take the example of handling
some large JSON file, perhaps a data set from a study, or health data, which are some good examples I've seen out in the wild. Such processing requires a larger amount of memory to decode the data, and even though we might be processing the same strings that repeat themselves over and over, such as the keys in the JSON document, we end up allocating those strings anew each time.

So now that we have a better understanding of the problem zones, let's look at the actual optimization strategies. The first strategy is related to the issue I mentioned a couple of slides back, where we waste memory by keeping whole structs in memory when we only need the part of the struct that is represented by the string. What we want here is a mechanism that allows us to, quote-unquote, detach the string from the struct, so that the rest of the struct can be garbage collected.
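Since Go 1.18, the standard library's `strings.Clone` provides exactly this detaching. A minimal sketch of the idea; the buffer layout and names here are my own illustration, not from the talk:

```go
package main

import (
	"fmt"
	"strings"
)

var index []string

// keep retains a small label extracted from a large decoded buffer.
// Without strings.Clone, the label's header would still point into
// buf's backing array and keep the entire buffer alive; Clone makes
// a fresh copy of just those 8 bytes, so buf can be collected.
func keep(buf string) {
	label := buf[:8] // shares buf's backing data
	index = append(index, strings.Clone(label))
}

func main() {
	big := strings.Repeat("label-42 ", 1<<17) // stand-in for a large payload
	keep(big)
	fmt.Println(index[0]) // label-42
}
```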
Previously this was possible to achieve with some unsafe manipulation of strings, but since Go 1.18 there is a new function called Clone in the strings standard library package, which makes it quite straightforward. Clone creates a fresh copy of the string; this decouples the string from the struct, meaning the struct can be garbage collected, and in the long term we retain only the new copy of the string. Remember, previously I showed that when we copy strings, we create shallow copies. Here we want to achieve the opposite: we want to truly copy the string and create a fresh copy of the underlying string data, so that the original string can be garbage collected together with the struct it's part of. This we can refer to as deep copying.

The next strategy, and I'd say one of the most widely used in software in general, is string interning. String interning is a technique that makes it possible to store only a single copy of each distinct string; subsequently, we keep referencing the same underlying string in memory.
This concept is somewhat more common in other languages, such as Java or Python, but it can be implemented effortlessly in Go as well, and there are even some ready-made solutions out in the open that you can use. In its simplest form, you could achieve this with a single map[string]string in which you keep the references to the strings; we can call it our interning map, or cache, or anything like that. The first complication is concurrency, right? We need a mechanism to prevent concurrent writes and reads to our interning map.
The obvious choice would be to use a mutex, which incurs a performance penalty, but so be it; or the concurrency-safe map version from the sync standard library package. The second complication, or noteworthy fact, is that with each new reference string we still incur the 16-byte header overhead I explained a couple of slides back. So even though we're saving on the actual string data, we're still paying the overhead, and with millions of strings, 16 bytes per string is a non-trivial amount.

The third complication comes from the unknown lifetime of a string in our interning map. At some point in the lifetime of the program there might be no more references to a particular string, so it could safely be dropped; but how do we know when that condition is met? Ideally, we don't want to keep unused strings, as in an extreme case this can become a denial-of-service vector leading to memory exhaustion if we allow the map to grow unbounded. One option is to periodically clear the map, or to give each entry a certain time-to-live, so that after a given period the map, or the given entries, are dropped. If a string reappears after such a deletion, we simply recreate the entry in the interning map, much like a cache. Naturally, this can lead to some unnecessary churn and unnecessary allocations, because we don't know exactly which strings are no longer needed or referenced, yet we might still be dropping them. The second, more elaborate way is to keep counting the references to the interned strings, which naturally requires a more complex implementation.
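The reference-counting idea can be sketched as follows (my illustration of the approach, not the actual Prometheus code):

```go
package main

import (
	"fmt"
	"sync"
)

type entry struct {
	s    string
	refs int
}

type refInterner struct {
	mu sync.Mutex
	m  map[string]*entry
}

func newRefInterner() *refInterner {
	return &refInterner{m: make(map[string]*entry)}
}

// Intern returns the canonical copy and bumps its reference count.
func (in *refInterner) Intern(s string) string {
	in.mu.Lock()
	defer in.mu.Unlock()
	if e, ok := in.m[s]; ok {
		e.refs++
		return e.s
	}
	in.m[s] = &entry{s: s, refs: 1}
	return s
}

// Release drops one reference; when the count reaches zero, the
// entry is deleted, so unused strings cannot accumulate forever.
func (in *refInterner) Release(s string) {
	in.mu.Lock()
	defer in.mu.Unlock()
	if e, ok := in.m[s]; ok {
		if e.refs--; e.refs <= 0 {
			delete(in.m, s)
		}
	}
}

func main() {
	in := newRefInterner()
	in.Intern("job")
	in.Intern("job")
	in.Release("job")
	in.Release("job")
	fmt.Println(len(in.m)) // 0: last reference released, entry dropped
}
```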
You can see I've linked work done in the Prometheus project, which is a good example of how this can be implemented by counting references. And we can take this even to the next level, as I recently learned.
There is an implementation of an interning library that is capable of automatically dropping unused references: the go4.org/intern library. It achieves this thanks to the somewhat controversial concept of finalizers in the Go runtime. Finalizers, put very plainly, make it possible to attach a function that will be called on a value once the garbage collector deems it ready for collection. At that point, this library checks a sentinel boolean on the referenced value, and if it finds this was the last reference to that value, it drops it from the map. The library also cleverly boxes the string header down to a single pointer, which brings the overhead down to 8 bytes instead of 16.

As fascinating as this implementation is to me, it makes use of some potentially unsafe behavior, hence the dark-arts reference in the slide title. However, the library is deemed stable and mature enough, and it has been created by some well-known names in the Go community.
So if you're interested, I encourage you to study the code. It's just one file, but it's quite interesting, and you're sure to learn a thing or two about some less-known parts of Go. As an example, I recently tried this library in the Thanos project, in the last bullet point; I've linked the PR with the implementation, which I think is rather straightforward. We ran some synthetic benchmarks on the version with interning turned on, and this was the result. On the left side you can see, probably not very clearly, unfortunately, a graph showing both the heap bytes reported by the Go runtime and the memory reported by the container itself. Looking at the differences between the green and yellow lines and the blue and red lines, it came out to roughly two to three gigabytes of improvement per instance, averaged across, I think, six or nine instances; so overall the improvement was around ten to twelve gigabytes. More interestingly, on the right side of the slide there is another graph, to confirm that the interning is doing something, that it's working. Following again a metric reported by the Go runtime, we look at the number of objects held in memory, and we can see that it drops almost by half on average.

Finally, there's string interning with a slightly different flavor, I would say, which I refer to as string interning with symbol tables. In this alternative, instead of keeping a reference string,
we replace it with another symbol, such as, for example, an integer, so that integer 1 corresponds to the string "apple", integer 2 corresponds to the string "banana", and so on. This can be beneficial in scenarios with a lot of duplicated strings, which again brings me to my home field and to time series databases, where there is generally a high probability of the labels, so also the strings, being repeated, especially when such strings are sent over the wire. Instead of sending all the duplicated strings, we can send a symbol table in their place and replace the strings with references into this table.

Where did this idea come from, or where did I get inspired? Again in Thanos, but this time by one of my fellow maintainers (you can look at that PR), who implemented this for series data sent over the network between Thanos components. Instead of sending all the long and duplicated label keys and values, we build a symbol table that we send together with the label data, which now contains only references instead of the strings. All we have to do on the other side, once we receive the data, is replace the references with the actual strings based on the symbol table.
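The scheme can be sketched as a toy encode/decode round trip (the function names are mine, not the Thanos implementation):

```go
package main

import "fmt"

// buildSymbols replaces repeated strings with integer references
// plus a symbol table, e.g. before sending data over the wire.
func buildSymbols(labels []string) (table []string, refs []int) {
	index := make(map[string]int)
	for _, l := range labels {
		id, ok := index[l]
		if !ok {
			id = len(table)
			index[l] = id
			table = append(table, l)
		}
		refs = append(refs, id)
	}
	return table, refs
}

// resolve restores the original strings on the receiving side.
func resolve(table []string, refs []int) []string {
	out := make([]string, len(refs))
	for i, id := range refs {
		out[i] = table[id]
	}
	return out
}

func main() {
	labels := []string{"apple", "banana", "apple", "apple", "banana"}
	table, refs := buildSymbols(labels)
	fmt.Println(table) // [apple banana]
	fmt.Println(refs)  // [0 1 0 0 1]
	fmt.Println(resolve(table, refs))
}
```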
This saves us, on one hand, the network cost, since the requests are smaller, and on the other hand, allocations once we're dealing with the data on the receiving side.

Lastly, you could try putting all of the strings into one big structure, into one big string. This can be useful to decrease the total overhead of the strings, as it eliminates the already-mentioned overhead of the string header: since every string costs 16 bytes of header plus the byte length of its data, by putting all the strings into one we can effectively amortize away the overhead of those string headers. Of course, this is not without added complexity, because now we have to deal with how to look up those substrings, all those smaller strings, within the bigger structure, so you need some mechanism, since you cannot simply look them up in a map or a symbol table.
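A sketch of the idea (my own illustration; a real implementation would also need to handle concurrency and a proper lookup mechanism):

```go
package main

import "fmt"

// packed stores many small strings inside one backing string,
// paying the 16-byte header once instead of once per string.
type packed struct {
	data    string
	offsets []int // offsets[i]..offsets[i+1] delimit string i
}

func pack(ss []string) packed {
	var data []byte
	offsets := []int{0}
	for _, s := range ss {
		data = append(data, s...)
		offsets = append(offsets, len(data))
	}
	return packed{data: string(data), offsets: offsets}
}

// At returns the i-th string as a slice of the big string;
// slicing allocates no new data, only a new header.
func (p packed) At(i int) string {
	return p.data[p.offsets[i]:p.offsets[i+1]]
}

func main() {
	p := pack([]string{"up", "job", "instance"})
	fmt.Println(p.At(1)) // job
	fmt.Println(p.At(2)) // instance
}
```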
And you still have to deal with other already-mentioned complications, such as concurrent access. A particularly interesting attempt at this is going on in the Prometheus project, again by Bryan Boreham, whom I mentioned earlier; if you're interested, feel free to check out the PR.

I will conclude with a few words of caution. I have shown you some optimization techniques that I found particularly interesting while doing my research.
But let's not be naive: these are not magic wands that will suddenly make your program work faster and with fewer resources. This is still a balancing exercise. Many of the presented techniques can save memory but will actually increase the time it takes to retrieve a string. So when I say optimization, I mostly mean situations where we want to decrease the expensive memory footprint of our application while sacrificing a bit more CPU, a trade-off that I believe is reasonable in such a setting. Note that I'm not making any concrete claims about the performance improvements of the various techniques, as you have seen, and I think this ties nicely into the introduction of my talk, where I spoke about the need for data-driven optimization. I believe more data points are still needed to show how well these techniques work in practice, how well they work in your specific use case, how they compare with each other when it comes to performance, and whether there are other real-world implications, or properties of Go, the compiler, or the runtime, that might render them not useful in practice or make the performance gain negligible. So, your mileage may vary, but I think these ideas are worth exploring and can be interesting.

And that is all from my side. Thank you for your attention. I've also included a couple more resources for those who are interested; you can find the slides in Pentabarf.