Tempesta FW
Formal Metadata
Title | Tempesta FW
Number of Parts | 611
License | CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers | 10.5446/42364 (DOI)
Production Year | 2017
Transcript: English(auto-generated)
00:12
OK, it's 3 PM. I think we'll start with Mr. Krizhanovsky. The floor is yours.
00:21
Hello, my name is Alexander Krizhanovsky. My company does custom software development, mostly around high-performance networking, and we have done many projects in this area —
00:40
and I have done a lot of this work in practice. A couple of examples would be a web application firewall and other network software, and that work is the background of this product, Tempesta FW.
01:02
So how did the project start? We started, for example,
01:24
with a request from one of our customers; it was a huge hosting provider. They actually wanted to protect a huge web site against very sophisticated DDoS attacks.
01:42
And essentially, application-layer DDoS attacks are among the most complicated attacks to detect and prevent. So from time to time we surveyed the available open-source solutions,
02:00
and we believe that modern open-source web servers are suitable for most tasks; however, they often didn't satisfy our needs.
02:20
Basically, we need some kind of hybrid of a web accelerator and a firewall, able to process HTTP as fast as possible and to run complex application-aware filtering. It must have a very fast HTTP parser,
02:43
because it still needs to process HTTP requests to fight against HTTP floods and attacks of that type. It must have a very fast web cache, to be able to mitigate the attacks which we cannot filter out.
03:07
Also, we need a very strong network stack, to be able to fight against attacks with many kinds of connections,
03:20
and so on. That is the set of requirements I started from. So we started to build this hybrid of an accelerator and a firewall, and we moved it to kernel space.
03:42
During this presentation, I will go through the details of the current problems with user-space web accelerators and explain in detail why we moved to kernel space. So basically, the first prototype was just a web accelerator plus a firewall,
04:03
and since web accelerators usually sit in front of huge server farms, we inherited several requirements from that environment. As our development moved forward,
04:21
we implemented more capabilities. We introduced several modules to protect against web application attacks, and we also need TLS implemented in kernel space,
04:40
because nowadays most applications work over HTTPS. Together with features like data compression and connection management, this is a growing feature set. Basically, software and hardware products of this class
05:01
are called application delivery controllers. You find this mix of functionality in boxes like F5 or Citrix; these are complicated and costly solutions, and we want to make an open-source solution which is equal in performance
05:22
to these complicated boxes, available as open source. The typical use cases would be the same as for such appliances. So, if you think about more performance
05:41
and protection — for example, if there is not enough performance from nginx or other web accelerators — then you might think about a hardware box, for example an F5, to put in front of your servers
06:01
to take the large part of the load and to protect your farm. Tempesta targets exactly this case. It can also be used by CDNs, hosting providers, shared hosting, and so on —
06:21
a lot of use cases. So, we started to discover what's wrong with the present-day web accelerator architecture. The first thing we did was to run some benchmarks and profiling against nginx.
06:44
And, surprisingly, we found that nginx spends a bit more time to process a maliciously crafted request than a normal one. Basically, we used a very small static file, and, surely, you will see a different picture
07:00
if you use large dynamic content. But, anyway, it's really unacceptable to spend more resources on a malicious request, because then the attacker wins the resource game. The next problem is
07:21
nginx's very limited filtering model. Basically, when you need to survive such an attack and block malicious clients, you still do almost the same work: you must still send a normal response to the malicious clients
07:41
which you have just observed. And to do more, you must run additional modules within this very limited model. So, within the limited functionality, nginx introduces additional logic and additional complexity, and every check costs microseconds.
08:01
So, how do people usually handle these issues? Administrators do not want to send normal responses to malicious clients. And sometimes, they use log parsers like fail2ban to process nginx access logs
08:21
and generate firewall rules; that way, you can block the attackers. However, the approach based on parsing a text log file seems like 50 years old. It's not fast, and sometimes people recommend to just switch off access logs
08:41
for better performance. So, it seems we have issues with logs as well. One more thing: there are modules which generate
09:00
iptables rules on the fly. However, before such a rule is generated, a normal response, like I said, has already been sent. And actually, we have a lot of issues with the web servers themselves. Actually, all of the current web servers
09:21
were designed to deliver content under normal conditions. So, in most cases, they deliver perfect performance for you. And "most of the cases" means your clients are innocent, good clients. However, attackers have much more freedom,
09:42
and we just saw that the filtering is not so fast. It's a tough thing to defend the application layer with such tools. And also, all the servers were basically designed in the early 2000s, when the issue was to handle 10,000 connections concurrently.
10:02
Now it's not a problem to generate 10,000 or 200,000 connections with a DDoS botnet. And actually, DDoS is always about corner cases. It means that if your system has a bottleneck in, say, accepting
10:21
a connection, then an attacker can generate exactly that load against your handshake. If you're not so fast to handle small packets, then an attacker can generate
10:41
a DDoS with a lot of small packets. So DDoS is always about corner cases. And the goal of our project is to be viable and stable exactly in those corner cases, so you see stable performance regardless of the number of connections, requests,
11:01
and DDoS attacks. Now, people traditionally avoid kernel programming because debugging is hard, there are no nice development tools, and so on. However, modern kernels, modern debugging facilities and modern tooling
11:21
are very nice, which is why we think those old objections are much weaker now. And here is the observation I hope to show: if we think about a solution
11:40
which can defend against DDoS and deliver stable performance for legitimate users, we still need kernel space. One of the huge problems, which is addressed now by Facebook and
12:00
Netflix, is TLS. Basically, if you need to send content over a TLS connection, you must copy the file content to user space, encrypt it there, and push it back to the kernel to transmit the content to the network.
12:21
So, these companies push the symmetric-encryption part of TLS into the kernel. With kernel TLS we can use the sendfile interface again and send file content directly. However, the TLS handshake remains in user space,
12:41
and the handshake is exactly the target of a TLS handshake flood. So, that's why we put the whole of TLS into the kernel. Next, let us think about whether we can take one web server
13:02
and optimize it to deliver more performance, so that it would be better at defending against large DDoS attacks. So we profiled nginx serving
13:21
small, simple HTTP requests, and in the profile we see the HTTP processing itself, we also see some copying and system-call IO, and finally, this profile
13:41
shows the usual suspects in the top 10, and we see the profile is actually flat. It means that there is no single piece of code which we can optimize
14:02
so that the server gets, for example, two times or three times faster. It means that if we need to make a solution, for example, three times faster, we need to redo all the code: we need to remove the system-call IO, we need to remove copying, we need to remove locking, and so on.
14:22
The problem with these user-space servers is also context switches. They are cheap on nowadays' hardware, but still not free. A typical request can take about nine context switches in nginx, and we can
14:42
eliminate only one of them, if we disable logging — that's all. By the way, so far I'm talking about nginx; however, most of the things which I'll say are also applicable to other web accelerators. This is just the example which we started from.
15:02
The other problem is the web cache. A lot of web accelerators still use old-fashioned file-system-based storage — some kind of poor man's database — where they generate the cache key using a hash of the
15:21
URL and host. The funny part begins when you see those short hash-named directories: there is a hierarchy of two levels, and it exists only to
15:41
cope with slow directory operations. Actually, to find a file among thousands of files in one directory is a slow operation. A file open also requires path resolution,
16:00
and it requires system calls. So, nginx uses a cache of open file descriptors, and surely this cache improves the picture. However, it is not about stable performance. An attacker can
16:20
just iterate over all your very old, rarely requested resources. Then your file-descriptor cache becomes inefficient, you effectively run without the cache, and you hit the file system for every query. That is a simple example of how such a cache can be degraded.
16:42
So, more advanced proxies go further, and they implement parts of a real database to host the web cache. We do the same. Also, the Vary HTTP header actually requires
17:01
a secondary key, in terms of databases, in addition to the primary key. The situation is that you have some content which exists in different versions — for example, a mobile version and a desktop version. And you want to send different content to
17:22
different devices. The content has the same URI and host, so it maps to the same primary key in the cache. And we need some kind of secondary key to differentiate mobile users from desktop users. So, if the response uses the secondary key
17:41
and declares "Vary: User-Agent", then the User-Agent value works as the secondary key, and we deliver different content per secondary key. So, all in all, we need a real database engine to efficiently handle web content with stable performance.
18:03
The next thing is the HTTP parser. As we just saw, HTTP parsing is one of the heavy parts of the processing. And surprisingly, most HTTP servers use an old-fashioned, switch-driven approach to
18:22
process HTTP. This is an example of such a parser state machine. So, we have a while loop with a switch statement, and we have a state variable. Say the state variable has value 1, so we are in
18:41
state 1, and the next character is 'b'. So, starting from checking the state variable, we move to the proper state, then to the code corresponding to the proper character, and we assign a new value to the state variable.
19:01
Then we move to the end of the loop and back to the switch statement. Keep in mind that at this point we have already taken two jumps for one character. The first one we hit at the beginning of the loop and the switch statement. The second one we hit at
19:20
processing character 'b', where we assign a new value to the state variable. Then the loop condition is evaluated again, and then comes the switch statement again. Now we move to the beginning of the loop, we check the state variable again, and finally we go
19:42
to state 2. You can look at the big picture and try to understand what's happening: we have just seen a lot of spinning around the real code. The thing is, being at state 1 and seeing
20:02
character 'b' is simple: we know precisely where we need to go next. In this case, the CPU could just fetch the next instruction — we don't need any jump at all, and the CPU prefetcher would work perfectly. But with the switch,
20:22
in this case we still need to jump, and the CPU doesn't know where to. We actually don't need the loop and the state variable: we can drop both, and we can save a lot of cycles.
20:42
Parsers like this are often produced by generators; we instead wrote our parser by hand, and we also split the big set of states into smaller pieces, so that instead of one slow switch the compiler can emit
21:00
jump tables, and we use direct jumps between the states wherever possible. Now, HTTP is actually a text protocol — this is version 1.1 — so you can
21:21
easily get very long strings. It could be a cookie: we saw, with nginx, cookies about 65 kilobytes in size. It could be long URIs. So sometimes you need to process very long strings. And although we have had
21:41
SIMD string extensions in CPUs for many years, nginx still doesn't use those extensions in its string operations. Moreover, HTTP is a very special case. There are several
22:01
points which make HTTP string processing special, and they are all important for optimizing string operations in HTTP. The first one is that we actually don't need to treat the '\0' in a special way:
22:23
we have several forbidden binary characters in the ASCII set, and '\0' is just one of them — we should not treat it separately. The second is that the delimiters are special, and CRLF makes real trouble for us. Because if an attacker performs a slow transmission,
22:41
the CR will be in one packet, the LF will be in another packet, and we have to keep the state machine's state between the two characters of the delimiter. Also, since a bare
23:01
LF is widely accepted in practice, we have to handle that delimiter the same way. Next, when you write your own HTTP parser, you definitely have some
23:20
fixed strings. For example, there are the names of the HTTP headers which you want to process in a special way — the Host header, the Referer header, the Cookie header and so on. These strings are known in advance, so you can exploit that.
23:40
So when you compare an input string against such a fixed string, you don't need to convert the case of the second string: the fixed pattern is already lower case. You actually need one case conversion, for the input string only, and you save half of the work — a generic case-insensitive compare
24:01
converts the case of both of the strings. Next, if you need to validate an input string against an allowed alphabet, you might think about using a regular expression to accept or reject the characters.
24:22
And this is a bad idea, because a regular expression takes significant resources to compile and evaluate the accept and reject sets. Basically, HTTP defines several
24:40
different allowed alphabets for different HTTP fields, and you can encode them as bit sets directly in your HTTP parser. In some modern HTTP servers there are attempts to improve on this — they match HTTP header names using
25:01
dedicated state machines — but those are verbose, and the performance of such parsers is not great at all. The last problem of modern web accelerators is network IO.
25:21
We can consider the example on this slide. There are three packets, and each packet contains two HTTP messages. So, first, softirq receives the packets, and the network stack places each packet in the receive queue of its socket.
25:42
After that, softirq wakes up the process owning the socket, and the process starts to copy the data from the kernel to user space, parses HTTP and does some other work. So, when the process finishes with
26:00
the first socket, it can find that the packets of the second socket have already been evicted from the CPU cache. Actually, the DDIO technology from Intel shines in this situation:
26:21
it delivers incoming network packets directly into the CPU cache. However, if you process heavy network traffic, the cache is simply not big enough to hold the packets while the user-space process
26:42
parses HTTP and so on. So, in Tempesta, we build HTTP processing into softirq, and all the packets are processed as soon as possible, while all the data is still in the CPU cache.
27:03
So, all the issues which I just mentioned are addressed in Tempesta FW, and this is a hybrid of an HTTP accelerator and a firewall. It's called a firewall because it uses firewall-style rules and does
27:20
IP-layer and HTTP-layer filtering in softirq itself. And we ship built-in features to fight against DDoS and web application attacks. As I told you,
27:40
we employ a fast HTTP parser. We have a specially designed in-memory database, which is NUMA-aware and mostly lock-free, optimized for modern hardware. That is where the web cache lives, and I will cover these topics in the next slides.
28:03
I want to start with benchmarks — the benchmark results and the setup description, so that you can reproduce the results; you can find the details in our wiki. The slide shows that we can reach
28:20
1.8 million requests per second on a cheap four-core CPU. If you are not aware, such numbers are several times faster than what web servers normally show on
28:41
such CPUs. We should be even faster: since we are integrated that deeply, we can in principle work much faster than this, but so far we could not show it. The reason is that it's very difficult
29:03
to produce — I mean, it's very difficult to generate enough load in our lab, for example for a bigger setup, and it's not possible to
29:21
emulate that in this environment.
29:21
emulate in this environment. The next interesting result is about popular user space experience and one of the examples of HPX 7 on top of new space
29:42
accelerators is CSUN. Basically, there's not something for me for them from their performance which whether they use 1.3 millions requests per second for 4.0 or 4.0
30:02
but it's a way CSUN will be slower or a bit faster than HPX 7. So basically, we are capable of performance for no more. HPX 7 is good user space experience. However, we do not
30:22
have development of such approaches because HPX is actually continuous. It's built into Linux probably we can argue about this approach but it's not so good. However, we
30:41
fully integrated with the existing Linux infrastructure. For example, you can write ordinary iptables rules, and the traffic which is destined for Tempesta will first be filtered by iptables. If you need to inspect the traffic
31:03
handled by Tempesta, you can use the standard tools to capture and review the traffic which passes through Tempesta, and so on. So basically, if you need to do such things with kernel bypass,
31:21
you need a separate filtering machine in front of the bypass stack, or you need to use some dedicated NICs with
31:41
additional hardware features. So our approach is: we know that kernel bypass is fast, but we chose not to bypass the kernel. Bypass is also not good for people who want other applications to use the network on the same system. If you go that route, then you need to
32:01
dedicate the machine to it — you basically need an individual server for it, and whatever. We just do not want that, and we reuse the kernel code, which has been written and reviewed by thousands of developers. Also, I want to mention that while a bypass stack
32:21
remains entirely in user space, you have to appreciate how large a TCP/IP stack is. And basically, a lot of user-space stacks don't have mature TCP extensions like selective acknowledgements, pluggable congestion control,
32:41
and so on. So actually the kernel's TCP/IP stack is good and fast, and we can employ it and make good use of it. Let's move to the next slide. So, if we have packets coming from the network, we check them against
33:00
our filtering rules first; next, the TCP/IP stack processes them; then our HTTP parser analyzes the HTTP request, and we apply the HTTP rules against it.
33:21
If the logic decides that the request is malicious, we can generate new rules, which are pushed down to our filter, and we just drop the request. If the request is good, then we pass it to the backend server, or it is answered from our cache. As I said, we use the common kernel sockets;
33:41
since we live in the Linux kernel, we do not use socket receive queues and wake-ups of user-space processes, and so on. So there are no copies. However, the reason why user-space servers are slow in real deployments, while we still see
34:00
very good benchmark numbers for user-space HTTP servers, is that a real-world HTTP server must handle many different issues. One issue is pipelined HTTP requests. In this example we have two client sockets and a separate server socket. We have two HTTP requests, and basically, if you have
34:22
a persistent connection to the backend which carries both requests, then the responses may become ready in a different order, and you must still deliver each response to the proper client in the proper order.
34:41
This ordering work is something a user-space HTTP server must do in software as well. In Tempesta, we bind connections to CPUs and process pipelined HTTP without extra queueing.
35:00
If you follow a request through the system, there is no cross-CPU locking, because the HTTP work is bound to the CPU which owns the connection.
35:20
So first we process the request on the CPU of the client socket; next, we queue it to the server connection, and the CPU which handles the server socket will do the
35:42
actual transfer to the server. So there is no locking. We get great performance on pipelined traffic and low CPU utilization overall.
36:00
We did a performance measurement back in 2015 of our synchronous kernel sockets. You can see that the blue line — the kernel-space sockets — is much faster in comparison with
36:22
user-space sockets and even other kernel-space sockets. We also show more stable and higher performance on small HTTP packets, which is exactly the capacity that matters for the attack corner cases.
36:42
I already said a few words about the HTTP parser. You can find the algorithms and the particular tricks in our code and write-ups. So now, just a few numbers: we are faster than nginx's parser
37:01
by about 1.6 to 1.8 times on short HTTP requests. However, if you go to longer strings — for example, the red or the yellow line, something about 1 kilobyte, which is not so large: these are the cookies
37:20
of typical web applications, very common nowadays — then the same processing can be three to six times faster than the routines in a user-space implementation. We also use a dedicated web cache. It's basically
37:42
built on top of our NUMA-aware database. With the NUMA awareness inside, we can replicate data across NUMA nodes, to be able to serve requests from the local NUMA node, or we can do sharding, which
38:01
trades that locality for memory. And we use our own algorithm for the cache index, which is very important for fast lookups. Basically, it's based on modern research
38:20
into cache-conscious data structures, and the core idea of the data structure is to keep an array of short key prefixes within one CPU cache line. So, to find some string, we scan whole cache lines
38:42
instead of chasing pointers, and we use the CPU caches efficiently. The index grows together with the data: when we find that a data page is full, we do our calculations, then we add
39:02
a new index node and we split the data page — and this is the only place where we use locks. All other operations on the data structure
39:21
are lock-free. So basically, the data structure also delivers stable behaviour under attacks against the cache itself: even if an attacker spends a lot of CPU attempting to find and hammer cold entries,
39:42
it's basically very hard to hurt us here, because lookups do not take locks. Also, we have protection against several web attacks. For example, we validate HTTP requests and responses.
40:01
Basically, we constantly run our set of built-in checks. However, we only want the very fast checks there. If we need to do some complex check — for example, one which inspects a domain name in a response, or something like that — we leave the check for user space
40:21
to do for us. So the point of the design is that the kernel runs only the very fast checks, on data it already has. We also have heuristic detections; I would say some of them are well known — for example, limits which a normal browser satisfies but a DDoS bot does not.
40:43
We also learn the normal traffic patterns — for example, how quickly a client sends requests — and rate-limit the clients which deviate from them.
41:01
I want to skip over the configuration details. We have sticky cookies to fight against DDoS bots which are unable to answer a cookie challenge, and the same sticky cookie gives us sticky sessions for load balancing.
41:20
All of this is fast because we use modern kernel features. And I just want to address some questions and concerns about the project. Firstly, Tempesta is a very small piece of code. It is much smaller than any modern user-space web server.
41:42
We do not try to move everything from user space directly into the kernel. Moreover, we are going to deliver a fast transport to user space for heavier HTTP processing. In particular, complex web-application logic belongs to a user-space helper, which is more capable
42:02
of that kind of processing than the kernel. We use best practices in our development: you can find our pull requests and the reviews we do on our GitHub. We will deliver packages, so you don't need to build the kernel and Tempesta
42:22
by hand. As a result, Tempesta FW is very good if you need a lot of performance, when traditional accelerators are not enough for you, and when you need to detect and defend efficiently against many kinds of DDoS attacks. So, thank you.
42:41
Our site has been powered by Tempesta FW for 9 months, and we didn't see any downtime in this time. We are on GitHub, and contributions are very welcome. You can find our links and so on
43:02
on our blog. So, thank you for listening. Just a few minutes for questions.
43:20
Questions from the audience? — Is your code compatible with the current kernel versions? I see your site is powered by
43:41
your own product, obviously. — Actually, we have a patch for the kernel; you can download it from our GitHub. And we have the kernel module. So, it's not in the mainline kernel so far. Hopefully, we will be
44:00
in mainline sometime soon. Did I answer your question? — Yes, thank you. — Any more questions? If not,
44:22
then thank you very much. — Thank you. — It was very interesting. Once more, thank you.