
Our Mad Journey of Building a Vector Database in Go


Formal Metadata

Title
Our Mad Journey of Building a Vector Database in Go
Subtitle
Building a Database in Go
Number of Parts
542
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
"We're going to build a new type of database in Go" – "Are you mad?!" This was the common reaction when back in 2019, we decided to build an open-source vector database in Go. Today, Weaviate's downloads have exceeded 1.5M (at the time of writing), and we're over the moon with how far we've come. But oh boy, they were right; it was crazy indeed: In this talk, I would like to take you on a journey of the less common and crazier parts of Go: You will learn about pure-assembly optimizations, obscure pitfalls, tricks of heap allocations, and memory management in general. To get the most out of this talk, you should have at least an intermediate experience level of Go. The talk touches on database internals, but no prior knowledge of inner-database mechanics is required.
Transcript: English (auto-generated)
It's four o'clock, so let's move on to our next talk. Now, I have been doing some mad things in Go myself,
but building a database, that I honestly have strong respect for. So next up is Etienne, who's gonna tell us everything about crazy journeys in Go. Thank you. Yeah, welcome to our mad journey of building a database in Go. And yeah, it's pretty mad to build a database at all.
It may be even worse, or even madder, to build a database in Go when most are built in... Closer? Okay, cool. Let me start over in case you didn't hear it. So hi, my name is Etienne.
Welcome to our mad journey of building a vector database in Go. So building a database at all could already be pretty mad. Doing it in Go, when most are built in C or C++, could be even madder, or even more exciting. And we definitely encountered a couple of unique problems that led us to create creative solutions. There are lots of shout-outs in here,
and also a couple of wish-list items for Go, which just released version 1.20. And of course, the occasional madness. So let's get one question out of the way right away: why does the world even need yet another database? There are so many out there already. But you've probably seen this thing called ChatGPT, because that was pretty much everywhere and it's kind of hard to hide from it.
And ChatGPT is a large language model and it's really good at putting text together that sounds really sophisticated and sounds nice and sometimes is completely wrong. And so in this case, we're asking it, is it mad to write a database in Go? I might disagree with that.
But either way, basically we're now in a situation where on the one hand, we have these machine learning models that can do all the cool stuff and do this sort of interactively and on the fly. And on the other side, we have traditional databases and those traditional databases, they have the facts because that's kind of what databases are for, right? So wouldn't it be cool if we could somehow combine those two? So for example, on the query side,
if I ask Wikipedia, why can airplanes fly? Then the kind of passage that I want that has the answer in it is titled the physics of flight. But that is difficult for a traditional search engine because if you look at keyword overlap, there's almost none in there. But a vector search engine can use machine learning models basically that can tell you these two things are the same
and searching through that at scale is a big problem. Then there's that sort of ChatGPT side where you don't just want to search through it, but maybe you also want to say, take those results, summarize them, and also translate them to German. So basically not just return exactly what's in the database, but do something with it
and basically generate more data from it. And that is exactly where Weaviate comes in. So Weaviate is a vector search engine, which basically helps us solve this kind of searching by meaning instead of keywords, without losing what we've done in 20-plus years of search engine research. And now, most recently, you can also interact with these models such as ChatGPT, GPT-3,
and of course also the open-source versions of it. So Weaviate is written in Go. Is that a good idea? Is that a bad idea? Or have we just gone plain mad? Well, we're not alone. That's good. So you probably recognize these things. They're all bigger brands at the moment than Weaviate,
but Weaviate is growing fast. And some of those vendors have really great blog posts where you see some of the optimization topics and some of the crazy stuff that they have to do. So if you've contributed to one of those, some of the things I'm gonna say might sound familiar. If not, then buckle up. It's gonna get mad.
So first stop on our mad journey: memory allocation. That also brings us to our friend, the garbage collector. For any high-performance Go application, sooner or later you're gonna talk about memory allocations, and definitely consider a database a high-performance application, or at least consider Weaviate a high-performance application. And if you think of what databases do,
like in essence, basically you have something on disk and you wanna serve it to the user. That's one of the most important user journeys in a database. And here this is represented by just a number, so we went for a uint32. That's just four bytes on disk, and basically you can see these four bytes. If you parse them into Go,
they would have the value of 16 in that uint32. And this is essentially something, very much simplified, that a database needs to do, and it needs to do it over and over again. So the standard library gives us the encoding/binary package, and there we have this binary.Read function, which I think looks really cool. To me it looks like idiomatic Go,
because it takes the io.Reader interface, everyone's favorite interface, and you can put all of that stuff in there, and if you run this code and there's no error, then basically you get exactly what you want. You could take those four bytes that were somewhere on disk and turn them into our in-memory representation of that uint32.
So is it a good idea to do it exactly like that? Well, if you do it once or maybe twice, it could be a good idea. If you do it a billion times, this is what happens. So for those of you who are new to CPU profiles in Go: this is madness, this is pretty bad. First of all, you see it in the center:
parsing those one billion numbers took 26 seconds, and 26 seconds is not the kind of time that we ever have in a database. But worse than that, if you look at that profile, we have stuff like runtime.mallocgc, runtime.memmove, runtime.madvise. All these things are related to memory allocations
or to garbage collection. And what they're not related to is parsing data, which is what we wanted to do, right? So how much time of those 26 seconds did we spend on what we wanted to do? We don't know, it doesn't even show up in the profile. To understand why that is the case, we need to quickly talk about the stack and the heap. So you can think of the stack
as basically your function stack. You call one function that calls another function, and then at some point you go back through the stack. This is very short-lived, and it's cheap and fast to allocate. And why is it cheap? Because you know exactly the lifecycle of your variables, so you don't even need to involve the garbage collector. No garbage collector, cheap and fast.
Then on the other side you have the heap, and the heap is basically this long-lived kind of memory, and that's expensive and slow to allocate, and also to deallocate. Why? Because it involves the garbage collector. So if the stack is so much cheaper, then we can just always allocate on the stack, right? Warning: this is not real Go, please do not do this.
This is a fictional example of allocating a buffer of size eight and then saying, yeah, please put this on the stack. That is not how it works, and most of you will probably say it's pretty good that it doesn't work that way, because why would you want to deal with that? But for me, just trying to build a database in Go, sometimes something like this may be good. Or maybe not.
So how does it actually work? Go does something that's called escape analysis. If you compile your code with -gcflags=-m, then Go annotates your code and tells you what's happening there. So here you can see in the second line that this num variable that we used was moved to the heap,
and then at the next point you see the bytes.Reader, which represents our io.Reader, escaped to the heap. So two times we see that something went to the heap. We don't exactly know what happened yet, but at least there's proof that we have this kind of allocation problem. So what can we do? Well, we can simplify a bit. It turns out that the encoding/binary package
also has another method that looks like this, which is just called Uint32 on binary.LittleEndian, and it kind of does the same thing. You just put in the buffer on the one side. So no reader this time, you just put in the raw buffer with the position offset, and on the other side you get the number out. And the crazy thing is, this one line
needs no memory allocations. So if we do that again, our one billion numbers that took 26 seconds before now take 600 milliseconds. Now we're starting to get into a range that is acceptable for databases. And more importantly, the profile is so much simpler now. There's basically just this one function there,
and that is, yeah, it's what we wanted to do. So admittedly, we're not doing much other than parsing the data at the moment but at least we got sort of rid of all the noise and you can see the speed up. Okay, so quickly to recap. If we say a database is nothing but reading data and sort of parsing it to serve it to the user,
then we do that over and over again, and we need to take care of memory allocations. The fix in this case was super simple: we changed two lines of code and reduced it from 26 seconds to 600 milliseconds. But why we had to do that wasn't very intuitive. It wasn't very obvious. In fact, I haven't even told you yet why this binary.Read call,
why that escaped to the heap. In this case, it's because we passed in a pointer and we passed in an interface, and that's kind of a hint that something might escape to the heap. So what I would wish for: yes, this is not a topic that you need every day you write Go, but if you do need this, it would be cool if there was better education.
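For reference, the allocation-free variant discussed above might look like this (again an illustrative sketch, not Weaviate's actual code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readUint32 parses four little-endian bytes directly from the raw
// buffer at the given offset. No io.Reader, no pointer argument:
// nothing escapes to the heap, so the call is allocation-free.
func readUint32(buf []byte, offset int) uint32 {
	return binary.LittleEndian.Uint32(buf[offset : offset+4])
}

func main() {
	buf := []byte{0x10, 0x00, 0x00, 0x00}
	fmt.Println(readUint32(buf, 0)) // 16
}
```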
Okay, so second stop: delayed decoding. This is the idea that we wouldn't want to do the same work twice, and we're sticking with our example of serving data from disk. But the number example was a bit too simple, so let's make it slightly more complex.
We have this nested array here, basically a slice of slices of uint64, and that's representative now of a more complex object in your database. Of course, in reality you'd have string props and other kinds of things, but this is just to show that there's more going on than a single number. And let's say we have 80 million of them,
so 10 million of the outer slice and then eight elements in each inner slice, and our task is just to sum those up. So these are 80 million numbers and we want to know what their sum is. That is actually a fairly realistic task for an OLAP kind of database. Now, we need to somehow represent that data on disk, and we're looking at two ways to do this.
The first one is a JSON representation, and the second one would be some sort of binary encoding, and then there'll be more. So JSON is basically just here for completeness' sake, and we can rule it out immediately. When you're building a database, you're probably not using JSON to store stuff on disk, unless it's a JSON database.
Why? Because it's space-inefficient. If you want to represent those numbers on disk, JSON basically uses strings for them, and then you have all these control characters, your curly braces and your quotes and your colons and everything, and that takes up space. So in our fictional example, that would take up 1.6 gigabytes, and you'll see soon that we can do better. But it's also slow, and part of why it's slow
is again because we have these memory allocations, but also the whole parsing just takes time. In our example, this took 14 seconds to sum up those 80 million numbers, and as I said before, you just don't have double-digit seconds in a database. So we can do something that's a bit smarter,
which is called length encoding. We're encoding this as binary, and we're spending one byte, in this case, so that's a uint8, and we're using that as a length indicator. When we're reading this from disk, that tells us what's coming up. So in this case, it says we have eight elements coming up,
and then we know that our elements in this example are uint32, so that's four bytes each. So basically the next 32 bytes that we're reading are gonna be our eight inner elements, and then we just continue: we read the next length indicator, and this way we can encode the whole thing in one contiguous buffer. Then of course we have to decode it somehow,
and we can do that, because we've learned from our previous example, right? So we're not gonna use binary.Read, we're doing this in an allocation-free way. You can see it in the length line basically, and our goal is to take that data and put it into our nested Go slice of slices of uint64.
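A rough sketch of this length encoding and its eager decoding; the helper names encode and decode are illustrative, not from the talk, and the on-disk element width is assumed to be uint32 as described above:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encode writes each inner slice as a one-byte length indicator
// followed by its elements as little-endian uint32s, all in one
// contiguous buffer.
func encode(data [][]uint64) []byte {
	var out []byte
	for _, inner := range data {
		out = append(out, byte(len(inner)))
		for _, v := range inner {
			var b [4]byte
			binary.LittleEndian.PutUint32(b[:], uint32(v))
			out = append(out, b[:]...)
		}
	}
	return out
}

// decode rebuilds the nested Go slices. Note that it still allocates
// one slice per inner array — this is the hidden cost discussed next.
func decode(buf []byte) [][]uint64 {
	var out [][]uint64
	for offset := 0; offset < len(buf); {
		length := int(buf[offset])
		offset++
		inner := make([]uint64, 0, length)
		for i := 0; i < length; i++ {
			inner = append(inner, uint64(binary.LittleEndian.Uint32(buf[offset:offset+4])))
			offset += 4
		}
		out = append(out, inner)
	}
	return out
}

func main() {
	data := [][]uint64{{1, 2, 3}, {4, 5}}
	fmt.Println(decode(encode(data))) // [[1 2 3] [4 5]]
}
```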
And the code here basically you see we're reading the length and then we're increasing our offset so we know where to read from and then we're basically repeating this for the inner slice which is just hinted at here by the decode inner function. So what happens when we do this? First of all, the good news, 660 megabytes, that's way less than our 1.6 gigabyte before
so basically just by using a more space-efficient way to represent data, we've done exactly that. We've reduced our size. Also it's much, much faster. So we were at 14 seconds before and now it's down to 260 milliseconds. But this is our mad journey of building a database
so we're not done here yet because there's some hidden madness. And the hidden madness is that we actually spent 250 milliseconds decoding while we spent 10 milliseconds summing up those 80 million numbers. So again, we're kind of in that situation where we're doing something that we never really set out to do like we wanted to do something else but we're spending our time on,
yeah, doing something that we didn't wanna do. So where does that come from? The first problem is basically that what we set out to do was flawed from the get-go, because we said we wanna decode. We were thinking the same way we were thinking with JSON: we said we wanna decode this entire thing
into this Go data structure. But that means we need to allocate this massive slice again, and it also means that for each inner slice, we need to allocate again. So we're basically allocating over and over again, where our task was not to allocate. Our task was to sum up numbers. So we can actually just simplify this a bit, and we can basically just not decode it.
Like while we're looping over that data anyway instead of storing it in an array, we can just do with it what we plan to do. And in this case, this would be summing up the data. So basically, getting rid of that decoding step helps us to make this way faster. So now we're at 46 milliseconds. Of course, our footprint of the data on disk
hasn't changed because it's the same data that we're reading. We're just reading it in a slightly more efficient way. But yeah, we don't have to allocate slices and also because we don't have these like nested slices, we don't have like slices that basically have pointers to other slices. So we have better memory locality. And now we're at 46 milliseconds. That is cool. So 46 milliseconds is basically the timeframe
that can be acceptable for a database. Okay, so quickly in recap, we immediately ruled out JSON because it just wasn't space efficient and we knew that we needed something more space efficient and also way faster. Binary encoding already made it much faster, which is great. But if we decode it upfront, then yeah, we still lost a lot of time.
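The delayed-decoding idea, summing directly while scanning the length-encoded buffer instead of materializing nested slices, might be sketched like this (illustrative, not the actual database code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// sumEncoded walks the length-encoded buffer and sums the elements
// in place — no intermediate nested slices are ever allocated, which
// also gives better memory locality.
func sumEncoded(buf []byte) uint64 {
	var sum uint64
	for offset := 0; offset < len(buf); {
		length := int(buf[offset])
		offset++
		for i := 0; i < length; i++ {
			sum += uint64(binary.LittleEndian.Uint32(buf[offset : offset+4]))
			offset += 4
		}
	}
	return sum
}

func main() {
	// Two length-prefixed inner arrays: [1 2 3] and [4 5].
	buf := []byte{
		3, 1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0,
		2, 4, 0, 0, 0, 5, 0, 0, 0,
	}
	fmt.Println(sumEncoded(buf)) // 15
}
```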
And it can be worth it in these kinds of high-performance situations to either delay the decoding as late as possible, until you really need it, or just not do it at all, or do it only in the small parts where you need it. No wish list here, but an honorary mention: Go 1.20. They've actually removed it from the release notes because it's so experimental,
but Go 1.20 has support for memory arenas. The idea of memory arenas is basically that you can bypass the garbage collector and manually free that data. So if you have data that you know shares the same life cycle, then you can say, okay, put it in the arena, and in the end free the entire arena, which bypasses the garbage collector.
So that could also be a solution in this case, if it ever makes it in. Right now it's super experimental and they basically tell you, we might just remove it, so don't use it. Third stop is something that, when I first heard of it, almost sounded too good to be true: something called SIMD. We'll get to what that is in a second. But first, a question to the audience.
Who here remembers this thing? Raise your hands. Okay, cool. So you're just as old as I am. This is the Intel Pentium II processor. It came out in the late 90s, I think 1997, and was sold for a couple of years. Back then I did not build databases, definitely not in Go, because that also didn't exist yet.
But what I would do was try to play 3D video games. And I would urge my parents to get one of those new computers with an Intel Pentium II processor. One of the arguments that I could have used in that discussion was: hey, it comes with MMX technology. And of course I had no idea what that was, and it probably took me 10 or so more years to find out what MMX is.
But it's the first in a long list of SIMD instruction sets. I haven't explained what SIMD is yet, but I will in a second. Some of those, especially the ones in the top line, aren't really used anymore these days. But the bottom line, like AVX2 and AVX-512, you may have heard of them. In fact, many open-source projects sometimes just slap that label on the readme,
like, yeah, it has AVX2 optimizations, and that signals, yeah, we care about speed, because it's low-level optimized. And Weaviate does the exact same thing, by the way. So to understand how we could make use of that, I quickly need to talk about vector embeddings, because I said before that Weaviate doesn't search through data by keywords,
but rather through its meaning. And it uses vector embeddings as a tool for that. So this is basically just a long list of numbers, in this case, floats. And then a machine learning model comes in and basically it says do something with my input and then you get this vector out. And if you do this on all the objects, then you can compare your vectors. So you basically can do a vector similarity comparison
and that tells you whether something is close to one another or not. So for example, the query and the object that we had before. Without any SIMD, we can use something called the dot product. The dot product is a simple calculation where basically you multiply each element of the first vector
with the corresponding element of the second vector, and then you sum up all of those products. And we can think of this multiplication and summing as two instructions. Shout-out here to the Compiler Explorer, a super cool tool to see what your Go code compiles to: we can see that this indeed turns into two instructions.
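A plain-Go dot product like the one described here might look like this (a sketch; the exact slide code isn't in the transcript):

```go
package main

import "fmt"

// dot computes the dot product of two equal-length float32 vectors:
// one multiply and one add per element.
func dot(a, b []float32) float32 {
	var sum float32
	for i := range a {
		sum += a[i] * b[i]
	}
	return sum
}

func main() {
	fmt.Println(dot([]float32{1, 2, 3}, []float32{4, 5, 6})) // 32
}
```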
So this is a bit of a lie because there's more stuff going on because it's in the loop, et cetera. But let's just pretend that indeed we have these two instructions to multiply it and to add it. So how could we possibly optimize this even further if we're already at such a low level? Well, we can because this is our mad journey. So all we have to do is introduce some madness.
And what we're doing now is a practice that's called unrolling. So the idea here is that instead of looping over one element at a time, we're now looping over eight elements at a time. But we've gotten, we've gained nothing. Like this is, we're still doing the same kind of work. Like we're doing 16 instructions now in a single loop and we're just doing fewer iterations.
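The unrolled variant being described could be sketched as follows (illustrative; it assumes, for brevity, that the vector length is a multiple of eight):

```go
package main

import "fmt"

// dotUnrolled processes eight elements per iteration. On its own this
// gains nothing — it's still 16 operations per loop — but it mirrors
// the shape that an 8-wide SIMD instruction set (e.g. AVX2 on float32)
// can execute in roughly two instructions per iteration.
func dotUnrolled(a, b []float32) float32 {
	var sum float32
	for i := 0; i < len(a); i += 8 {
		sum += a[i]*b[i] + a[i+1]*b[i+1] + a[i+2]*b[i+2] + a[i+3]*b[i+3] +
			a[i+4]*b[i+4] + a[i+5]*b[i+5] + a[i+6]*b[i+6] + a[i+7]*b[i+7]
	}
	return sum
}

func main() {
	a := []float32{1, 2, 3, 4, 5, 6, 7, 8}
	b := []float32{1, 1, 1, 1, 1, 1, 1, 1}
	fmt.Println(dotUnrolled(a, b)) // 36
}
```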
So by this point, nothing gained. But why would we do that? Well, here comes the part where I thought it was too good to be true: what if we could do those 16 operations for the cost of just two instructions? Sounds crazy, right? Well, no, because of SIMD, and I'm finally revealing what the acronym stands for.
It stands for single instruction, multiple data. And that is exactly what we're doing here. We wanna do the same thing over and over again, which is multiplications and then additions, and this is exactly what these SIMD instructions provide. In this case, we can multiply eight floats with another eight floats and then add them up. So all is perfect here? Maybe not,
because there's a catch, of course, it's our mad journey. How do you tell Go to use these AVX2 instructions? You don't. You write assembly code, because Go has no way to do that directly. The good part is that assembly code integrates really nicely into Go,
and in the standard library it's used over and over again, so it's kind of a standard practice. And there is tooling here. Shout-out to avo, a really cool tool that helps you: you're still writing assembly with avo, but you're writing it in Go, and then it generates the assembly. So you still need to know what you're doing, but it protects you a bit.
It definitely helped us a lot. So, SIMD recap. Using AVX instructions or other SIMD instructions, you can basically trick your CPU into doing more work for free, but you also need to sort of trick Go into using assembly. And with tooling such as avo, it can be better, but it would be even nicer
if the language had some sort of support for it. And you might be saying, okay, this is just this mad guy on stage that wants to build a database, but no one else needs that. But we have this issue here, which was opened a while back and unfortunately was closed again most recently because no consensus could be reached. Still, it comes back again and again: Go users are saying, hey, we want something in the language,
such as intrinsics. Intrinsics are basically the idea of having high-level language instructions that emit these AVX or SIMD instructions, and C and C++ have that, for example. And maybe you're wondering, okay, if you have such a performance hot path, why don't you just write that in C and use cgo, or write it in Rust or something like that?
Sounds good in theory, but the problem is that the call overhead of calling into C or C++ is so high that you actually have to outsource quite a bit of your code for that to pay off. So if you do that, you basically end up writing more and more in that language, and then you're not writing Go anymore.
So personally, I don't think that's always a great idea, although it can be in some cases. So, demo time. This was gonna be a live demo, and maybe it still is, because I prepared this running nicely in a Docker container, and then my Docker network just broke everything and it didn't work. But I just rebuilt it without Docker and I think it might work. If not, I have screenshots as a backup.
So, example query here. I'm a big wine nerd, so what I did is I put wine reviews into Weaviate, and I wanna search them now. One way to show you that you don't need a keyword match but can search by meaning is, for example, if I go for an affordable Italian wine,
let's see if the internet connection works. It does. So what we got back is this wine review that I wrote about a Barolo that I recently drank, and you can see it doesn't say Italy anywhere, it doesn't say affordable; what it says is, without breaking the bank.
So this is a vector search that happened in the background. We can take this one step further by using the generative side. This is basically the ChatGPT part. We can now ask the database: based on the review, which is what I wrote, when is this wine gonna be ready to drink? So let's see. You saw before there was a failed query
when the internet didn't work. Now it's actually working, so that's nice. In this case, this is using OpenAI, but you can plug in other tools, you can plug in open-source versions of it. This is using OpenAI because it's nicely hosted as a service, so I don't have to run the machine learning model on my laptop. And you can see it tells you: the wine is not ready to drink yet, we will need at least five more years,
which is sort of a good summary of this. And then you can see another wine is ready to drink right now, it's in the perfect drinking window. So for the final demo, let's combine those two: let's do a semantic search to identify something, and then do an AI generation on top. So in this case, we're saying: find me an aged classic Riesling,
the best wine in the world, Riesling. And: based on the review, would you consider this wine to be a fruit bomb? So let's get sort of an opinion from the machine learning model on it. And here we got one of my favorite wines, and the model says: no, I would not consider this a fruit bomb. While it does have some fruity notes, it is balanced by the minerality and acidity,
which keeps it from being overly sweet or fruity. And if you read the review text, this is nowhere in there, so it's kind of cool that the model was able to do this. Okay, so let's go back, that was the demo. By the way, I have a GitHub repo with this example, so you can run it and try it out yourself.
So this was our mad journey. Are we mad at Go? Were we mad to do this? Well, I would pretty much say no, because yes, there were a couple of parts where we had to get really creative and had to do some rather unique stuff. But that was also basically the highlight reel of building a database, and all the other parts,
like, I didn't even show the parts that went great, like concurrency handling and the powerful standard library. And of course, all of you, representing the Gopher community, which is super helpful. And yeah, this was my way to give back to all of you. So if you ever wanna build a database or run into other kinds of high-performance problems,
then maybe some of those.