
Tempesta FW


Formal Metadata

Title: Tempesta FW
Subtitle: Linux Application Delivery Controller
Number of Parts: 611
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year: 2017

Content Metadata

Abstract

Tempesta FW is a high performance, open source Linux application delivery controller (ADC). The project is built into the Linux TCP/IP stack to get maximum performance for normal Web content delivery and efficient traffic filtering for volumetric DDoS mitigation.

Application delivery controllers are typically hardware appliances that accelerate Web content delivery, intelligently balance load among upstream servers, employ QoS and traffic shaping to efficiently and elegantly mitigate DDoS on all network layers, and provide Web application firewalling and application performance monitoring. However, it seems there are no open source projects able to perform these tasks with comparable performance and accuracy.

In this presentation I'll describe Tempesta FW, a high performance, open source Linux application delivery controller. I'll start by considering a simple example of an installation of Nginx, Fail2Ban, and IPtables; alternative configurations built from other open source projects will be covered as well. I'll describe why such configurations usually do a poor job, and why we started Tempesta FW's development.

Next I'll describe how Tempesta FW services HTTP requests, and how the HTTP layer works with the low-layer filter logic. There are several HTTP load-balancing strategies, including flexible distribution of requests by almost any HTTP field and a predictive strategy that monitors application performance.

Several technologies at the basis of Tempesta FW's performance will also be covered:

* Linux TCP/IP stack optimizations for efficient HTTP proxying
* stateless HTTP parsing and use of the AVX2 instruction set to efficiently process HTTP strings
* a lightweight in-memory database, TempestaDB, based on a cache-conscious lock-free data structure used for servicing the web cache

I'll conclude with Tempesta FW performance benchmarks and show several installation and configuration examples.
Transcript: English (auto-generated)
OK, it's 3 PM, so I think we'll start, with Mr. Krizhanovsky.

Hello, my name is Alexander Krizhanovsky. I run a small company which does custom development in the area of high performance network servers and databases. We have worked on quite different things over the years: web application backends and the internals of several server products, among others. And right now we are developing Tempesta FW, our application delivery controller, which is the subject of this talk.
Right now we are at a pre-release version, with features coming out of our research work. So, the project started from an example: by request of one of our customers, a huge hosting provider, we had to protect a huge web publisher against very sophisticated DDoS attacks. And essentially, such attacks are among the most complicated tasks to detect and prevent, and they come back from time to time. So we surveyed the open source landscape, and we believed that modern open source web accelerators would be suitable for these tasks; however, they often didn't satisfy our needs.
Basically, we needed some kind of hybrid of a web accelerator and a firewall: something able to process HTTP as fast as possible and to run complex, application-aware filtering on top of that. So it must have a very fast HTTP proxy core, because it still needs to parse HTTP requests to be able to filter by HTTP fields and the like. It must have a very fast web cache, to be able to absorb attacks which the backend servers cannot survive on their own. Also, we need a very strong network stack, to be able to withstand attacks opening many kinds of connections, and so on. So we started building this hybrid of a web accelerator and a firewall, and we moved it into kernel space.
During this presentation, I will go into the details of the current problems with user-space web accelerators and explain why we moved to kernel space. Basically, the first prototype was just a web accelerator plus a firewall, and since web accelerators usually sit in front of huge server farms, we implemented several load-balancing schedulers. As our development moved forward, we implemented more capabilities: we introduced several modules to protect against web application attacks, and we also needed TLS, implemented in kernel space, because nowadays most applications run over HTTPS; data compression and connection management features are on the way as well. Basically, software and hardware products with this mix of functionality are called application delivery controllers. Until now you could find this mix of functionality only in boxes like F5 or Citrix; these are complicated and costly solutions, and we want to make an open source solution equal in performance to these complicated proprietary boxes. So if you think about more performance and protection, for example if nginx or other web accelerators do not give you enough, today you might buy a hardware box, for example from F5, to put in front of your servers to take the load and protect them. Such a box also handles things like TLS termination, session persistence, and so on,
covering a lot of use cases. So, we started by discovering what's wrong with today's web accelerator architecture. The first thing we did was to run some benchmarks against these servers. And, surprisingly, we found that nginx spends noticeably more time processing maliciously crafted requests than normal ones. Basically, we used a very small static file, and surely you would see a smaller difference with large dynamic content. But, anyway, it's really unacceptable to slow down on malicious requests, because the attacker controls exactly what the server receives. The problem with this is a very limited processing model. Basically, when you need to survive such an attack and block malicious clients, you face the same model: you must still send normal responses to legitimate clients while blocking the malicious ones which were just observed, and to do that you must run additional modules, which are constrained by the same limited model. So, to extend the limited core functionality, nginx modules introduce additional logic and additional complexity, and every check adds its microseconds to each request.
Now, about the tooling people usually put around it: nginx does not block clients by itself. Usually, people use log watchers like Fail2Ban to parse the nginx access log, detect misbehaving clients, and generate firewall rules; at that point you can block the attackers. However, an approach based on writing a text file and parsing it back looks fifty years old. It's not fast, and sometimes people even recommend switching off the access log for better performance, and then the whole scheme stops working. There are also modules which generate iptables rules on the fly; however, generating blocking rules by parsing log output is still slow. And actually, we have deeper issues with the web servers themselves. All of these web servers were designed to deliver content to normal clients. In most cases they deliver perfect performance for you, and "most cases" means your clients are innocent, well-behaved clients. However, under attack they give you little freedom, and, as we just saw, their parsing is not so fast; it's a tough thing to defend a web application with such building blocks. Also, all these servers were basically designed in the early 2000s, when the challenge was to handle ten thousand concurrent connections. Nowadays it's no problem for a DDoS botnet to open a hundred thousand or a million connections. And actually, DDoS is always about corner cases. It means that if your system has a bottleneck in connection setup, then an attacker can generate exactly that load against your handshake; if you're not fast at handling small packets, then an attacker can generate a flood of small packets. So DDoS is always about corner cases, and we started our project to be viable and stable exactly in those corner cases, so that you see stable performance regardless of the number of open connections, request rates,
and DDoS attacks. As for the usual answers to these problems: there are kernel-bypass stacks which move all network processing to user space and make packet I/O very cheap, and they are very nice for some workloads. However, I hope to show that if we think about a solution which can defend against DDoS and deliver stable performance for legitimate traffic, we still need the kernel's TCP/IP stack. One of the huge problems, which is being worked on right now by Facebook and Netflix, is TLS. Basically, if you need to send cached content over an encrypted connection, you must copy the file content to user space, encrypt it there, and only then transmit it to the socket. So these companies push the symmetric-encryption part of TLS into the kernel: with kernel TLS (kTLS) you can use the sendfile() interface again and send file content directly to the socket. However, the TLS handshake remains in user space, and the handshake is exactly the expensive part a DDoS attacker hits hardest. So, that's why we put the whole of TLS into the kernel.
Now, can we take one existing web server, optimize it, and get enough performance to defend against large DDoS attacks? We profiled nginx serving a small file over plain HTTP and looked at the top functions in the profile: there is TLS-related work, there is copying, there is network I/O, and overall the profile is essentially flat. It means there is no single hot spot which we could optimize to make the server, say, two or three times faster. If we need a solution several times faster, we have to touch all the code: we need to remove the copying, remove the syscall overhead, remove the locking, and so on. Another problem of these user-space servers is context switches. They should be cheap on today's hardware, but they are still not free. A typical request takes about nine switches into the kernel and back in nginx, and we can eliminate only one of them, by disabling the access log, and that's all. By the way, so far I'm talking about nginx; however, most of the things I'll say are also applicable to other web accelerators, this is just the example we started from.
The other problem is the web cache. A lot of web accelerators still use an old-fashioned filesystem-backed cache as a kind of database: they generate keys by hashing the URL and Host, the hash yields short directory names, and a two-level directory hierarchy is used to cope with large directories. The funny part begins when you look at what it takes to find a file among thousands of files distributed over such a hierarchy: every file open requires directory lookups and walks over dentries and inodes. nginx uses a cache of open file descriptors, and surely this cache improves things; however, it does not give you stable performance. You can see the corner case here: an attacker just iterates over all your rarely requested resources, so your descriptor cache is efficiently evicted, you effectively run without a cache, and you hit the filesystem for every query. That is a simple way the cache can be killed. So the more advanced proxies go further and implement some part of a database themselves to host the web cache, and we do the same. Also, the Vary HTTP header actually requires a secondary key, in database terms, on top of the cache. The problem is that a resource can exist in different versions, for example a mobile one and a desktop one, and you want to send different content to different devices. The content has the same URI and Host, so it is stored under the same primary key, and we need some kind of secondary key to differentiate mobile users from desktop users. So if the response says "Vary: User-Agent", then the User-Agent value acts as the secondary key, and you deliver different content per secondary key. All in all, we need a real database underneath to efficiently handle web content with stable performance.
The next thing is the HTTP parser. As we just saw, HTTP parsing sits squarely on the hot path. And surprisingly, most HTTP servers use the old-fashioned switch-driven approach to process HTTP. This is an example of a parser state machine: we have a loop with a switch statement and a state variable. Say we are in state 1 and the next character is 'b'. We start by checking the state variable and jump to the proper case; there, for character 'b', we assign the new value to the state variable and fall out to the end of the loop, back around to the switch statement again. Keep in mind that at this point we make two unpredictable jumps per character: the first when dispatching on the state variable at the top of the loop, the second when dispatching on the character inside the state. You can look at the generated code and try to understand what's happening: we just spin around this dispatch code. But being in state 1 with character 'b', there is nothing uncertain about where we need to go; in such cases we could simply fall through to the next instruction, with no jump at all, and the CPU's prefetcher would work perfectly. Only where the input genuinely decides the direction do we need a branch. So we actually don't need the dispatch loop and the state variable, and dropping them saves a lot of work. Our parser is generated this way: states are encoded directly in the code, and we also split large sets of states into pieces so the compiler does not emit slow, sparse switch jump tables. So we use direct jumps wherever possible to make control flow as predictable as possible.
HTTP is a text protocol, this is version 1.1, so you can easily get very long strings. It could be a cookie; we have seen them, with nginx, at around 64 kilobytes in size. It could be long URIs. So sometimes you need to process very long strings, and although x86 has had the AVX2 extension for years, the standard string routines still don't exploit it for these workloads. Moreover, HTTP strings are a special case; there are several points which make HTTP string processing special, and they are all important for optimizing it. The first one is that we actually don't need to treat the terminating '\0' in a special way: the data is length-delimited, it can contain binary zeros, and so we shouldn't special-case '\0' at all, the way the classic C string functions do. The second is the delimiters: CRLF makes real trouble for us, because if we read the stream in chunks, the CR can sit in one packet and the LF in another packet, and we have to keep the state machine's position across the split delimiter. Also, since a bare LF is widely accepted in practice, we have to handle that delimiter the same way. Next, in an HTTP parser you always have fixed strings: for example, the names of HTTP headers which you want to process in a special way, the Host header, the Referer header, the Cookie header, and so on. These comparisons matter, and header names are case-insensitive. So when you compare an input string against such a fixed string, you don't need to case-convert both sides: you need only one case conversion, of the input, compared against a pre-lowercased pattern, and that one comparison covers both cases. And if you need to validate an input string against an allowed alphabet, you might think about using a regular expression to accept or reject characters. This is a bad idea, because a regexp engine spends significant resources compiling and running its accept and reject sets. HTTP actually defines several distinct alphabets for different fields, and you can encode these sets directly in your HTTP parser. In some modern HTTP servers there are attempts to match HTTP header names using big finite state machines, which grow so large that the performance of such parsers degrades.
The last thing about modern web accelerators is network I/O. Consider the example on this slide: there are three packets, and each packet contains two HTTP messages. First, softirq receives the packets and places them in the receive queues of the sockets. After that, softirq wakes up the process owning the socket, and the process starts to copy the data from the kernel to user space, parses HTTP, and does some other work. So, by the time the process finishes with the first socket, the packets waiting on the second socket have already fallen out of the CPU cache. Actually, DDIO technology from Intel shines in this situation: it delivers network packets straight into a part of the CPU cache. However, if you process heavy network traffic, that slice of cache is simply not large enough to hold the data while the user-space process parses HTTP and so on. So, in Tempesta, we built HTTP processing into softirq: all packets are processed as soon as they arrive, while the data is still hot in the CPU cache.
So, all the issues I just mentioned led us to Tempesta FW, and this is why it is a hybrid of an HTTP accelerator and a firewall. It's called a firewall because it applies firewall-style rules, both IP-level and HTTP-level, in softirq itself. And we ship built-in features to fight against network-level and web application attacks. As I said, we employ a very fast HTTP parser. We have a specially designed in-memory database, which is NUMA-aware and heavily optimized for modern hardware; it hosts the web cache and the filtering tables, and I will cover these topics in the next slides.
I want to start with benchmarks. The benchmark results and their description, including how we ran them so that you can reproduce the results, can be found in our wiki. The slide shows that we reach 1.8 million requests per second on just four cores of a commodity CPU; for reference, that is several times faster than common user-space web accelerators on comparable hardware. We could probably be faster still, since the whole path is integrated, but honest measurement is difficult: it is very hard to generate and sustain such loads, and some scenarios we simply could not emulate in this environment. The next interesting result concerns the popular kernel-bypass user-space stacks; one example of an HTTP server on top of such a stack is Seastar's. Basically, there is nothing embarrassing for us in their performance: they report something like 1.3 million requests per second on comparable hardware, so such a stack may be a bit slower or a bit faster than we are. So basically, we are comparable in performance with a good kernel-bypass user-space server. However, we did not follow that approach, because Tempesta is actually the opposite: it's built into Linux. We can argue about which approach is better, but here is our reasoning. We are fully integrated with the native Linux interfaces. For example, you can write iptables rules, and the traffic destined for Tempesta is filtered by iptables first. If you need to shape the traffic served by Tempesta, you can use the standard traffic control (tc) machinery. With kernel bypass you have to rebuild such things yourself: you need a fast routing lookup of your own, or dedicated additional hardware. Also, kernel bypass is unfriendly to other applications which want to run on the same system; usually you have to dedicate a whole server to it. And we simply do not want to rewrite code which was already written and debugged by smart kernel developers. I also want to mention that when TCP remains implemented in user space, you have to appreciate how large that stack is, and a lot of user-space TCP stacks lack good extensions like selective acknowledgements, modern congestion control, and so on. The Linux TCP/IP stack is good and fast, so we employ it and make the most of it. Let's move on to how Tempesta processes requests. When packets arrive from the network, we check them against
our filter rules first. Then the HTTP parser runs, the classification modules analyze the HTTP request, and the HTTP-level rules are applied to it. If a module decides that the request is malicious, we generate a new blocking rule, which is pushed down to the filter, and we just drop such requests. If the request is good, then we pass it to an upstream server, or it is served from our cache. As I said, we use the normal kernel socket machinery; since we live in the Linux kernel, we do not use epoll, we do not copy data through socket receive queues into user space, and so on, so that overhead is gone. However, the reason user space is slow in practice, while you still see very good numbers from user-space HTTP servers in synthetic benchmarks, is that a real-world HTTP proxy must handle messier situations. One issue is the ordering of proxied requests. In this example we have two client sockets and persistent upstream connections, with several HTTP requests in flight; with persistent connections and requests from different clients, you may need to forward requests to the upstreams in a different order than they arrived, and that reordering is where user-space servers burn cycles on queues and locks. In Tempesta, sockets are pinned to CPUs: requests arriving on a client socket are processed on its CPU, and the CPU which handles the corresponding upstream socket does the actual transfer to the server. So there is no locking on the fast path; we get great performance on HTTP traffic and low CPU usage overall.
We did a performance measurement in 2015 against a similar setup. You can see that the blue line, the in-kernel processing, is much faster in comparison with the user-space servers, and the performance stays much more stable as the flood of small packets grows, which is exactly the DDoS corner case. I already talked about the HTTP parser; you can find the algorithms and the string-processing details in our wiki, so here are just a few numbers. We parse short HTTP requests about 1.6 to 1.8 times faster than nginx. However, if you go to longer strings, for example a cookie of about 1 kilobyte, which is not so large and very common nowadays, then our parser is several times, roughly three-fold or more, faster than the traditional user-space implementations. We also use a dedicated in-memory web cache, TempestaDB. It is basically
built on a NUMA-aware architecture: we can replicate the data across NUMA nodes to serve any request from the local node, or we can shard the data to save memory. And we use our own index, which is very important for fast lookups. Basically, it's based on modern research into cache-conscious data structures, and the core idea is to keep an array of key fingerprints, shortcuts to the records, inside one CPU cache line, so that a lookup touches very few cache lines even when many keys must be scanned. Writes go to data pages: we just append new records to the current data page, and when we find the page is full, we allocate a new one; only when an index node must be split do we take a lock, and that is the only place where we use locks. All other operations on the data structure are lock-free.
This kind of behavior is basically very hard to achieve in user space, because there you do not control the underlying memory and paging. Also, we have protection against several classes of web attacks. For example, Tempesta validates HTTP requests and responses against a set of checks. However, in the kernel we only want the very fast checks: if some complex processing is needed, for example something which inspects a whole response body, we leave that work to user space and hand the data over for analysis there. So the point is to run only the very fast checks inline. We also have rate limits; I would say most of them are familiar ones, for example countermeasures against slow, Slowloris-style requests play a role here. And we use a machine-learning-flavored technique in our predictive scheduler: it watches how quickly each server processes requests and balances the load against those measurements.
I'll skip over some of the configuration. We have sticky cookies to fight against bots which are unable to handle cookie challenges, and a sticky-sessions scheduler built on the same mechanism to pin clients to servers. And I just want to address some common questions and concerns about the project. Firstly, Tempesta is a very small piece of kernel code; it is much smaller than any modern user-space proxy. We do not try to move everything from user space into the kernel. Moreover, we are going to provide fast transports to user space for heavier HTTP processing; in particular, we delegate full web application firewalling, which is much more capable, to software running in user space. We use best practices in our development: you can find our pull requests, reviews, and the tooling we use on our GitHub. We will deliver packages, so you won't need to build the kernel and Tempesta by hand. As a result, Tempesta FW is a good fit if you need a lot of performance, when traditional proxies are not enough for you, and when you need to detect and efficiently mitigate many kinds of DDoS attacks. So, thank you.
Our site has been powered by Tempesta FW for nine months, and we haven't seen any downtime in this period. We are on GitHub, and new contributors are very welcome; you can find all the links and so on on our blog. So, thank you for listening. We have just a few minutes for questions.
Anyone from the audience? Question: Is this viable as a mainline kernel component? Is your code portable, compatible with the actual kernel versions? I see your site is powered by your own product, obviously. Answer: Actually, we have a patch for the kernel, you can download it from our GitHub, and we have the kernel module. So it's not in the mainline kernel so far; hopefully, we will be in mainline sometime. Did I answer your question? Yes, thank you. Some more questions? If not... So, thank you very much. Thank you. It was very interesting.