
High Performance Political Revolutions


Formal Metadata

Title: High Performance Political Revolutions
Part Number: 33
Number of Parts: 86
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt, copy, distribute and transmit the work or content, in adapted or unchanged form, for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
Bernie Sanders popularized crowdfunding in politics by raising $220 million in small donations. An example of the challenges with handling a high volume of donations is the 2016 New Hampshire primary night, when Sanders asked a national TV audience to donate $27. Traffic peaked at 300K requests/min and 42 credit card transactions/sec. ActBlue is the company behind the service used not only by Sanders, but also 16,600 other political organizations and charities for the past 12 years. This presentation is about the lessons we learned building a high performance fundraising platform in Rails.
Transcript: English (auto-generated)
Thank you for coming. My name is Braulio and I work for ActBlue Technical Services. I would like to start with a question.
How many of you have used ActBlue to make a donation? Wow, quite a few. How many of you tipped? Not so many, but for the ones who tipped, thank you.
I stole the title of this presentation from Bernie Sanders, who said we need a political revolution. Sanders made small-dollar donations popular, but he's only one of the more than 17,000 organizations that have been using the service for the last 13 years.
And it's not only political, we also provide the service for non-profits. In fact, ActBlue is a non-profit. In the first quarter of this year alone, we have 3,000 organizations using the platform,
and our Rails application is 12 years old. So how does it work? Let's say that Jason in the back, who works here, wants to run for city council,
and I'm sure he would make a good city councilor, but he needs money to promote his campaign, so he would like to get donations. Of course, he cannot process credit cards by himself, so what he will do is go to ActBlue and set up a page, and from that point,
we'll process the credit cards using that page. Once a week, he will get a check, and we also take care of the legal part, which is very complicated, and we also do the compliance. There are multiple reports that have to be sent when you're doing political fundraising, or fundraising for non-profits.
We also provide additional tools for the campaigns: statistics, A/B tests. Someone who donates can also save their card information on the website, and the next time they donate to the same organization or a different one,
they don't have to enter anything. It will be a single-click donation. We have 3.8 million ActBlue Express users. So far, we have raised $1.6 billion in 31 million contributions in these 13 years, and we like to see ourselves as empowering small-dollar donors.
How many of you don't know what Citizens United is? Oh, all of you know, okay. Well, in case someone who didn't raise their hand doesn't know: Citizens United is a ruling by the Supreme Court, and it allows an unlimited amount of money to be spent to promote a political candidate.
So a few people with lots of money can have a lot of power in the political process. We also do non-profits, but the political side is how we started. We would like to have lots of small dollars instead, which means a lot of people
with little money having the same power. This is how the contribution page looks. This one is for Jon Ossoff. The person here has never visited the website before, and it will be a multi-step process: getting the amount first, then the next step gets the name,
address, credit card, et cetera. This next one is a non-profit, by the way. In this case, the donor is an Express user. It says "Hi, Braulio," so it recognizes that it's me, and if I click any of those buttons, the donation will be processed right away.
I don't have to enter a card number. That's what we call the single click. In February of last year, Bernie won the primary in New Hampshire. That was the second state with primary elections,
and he won big. It was after Iowa, and he gave a victory speech that night and I'm going to show a little clip from there. Right here, right now, across America.
My request is please go to berniesanders.com and contribute. Right away, we felt the Bern.
So the first graph is requests per minute, about 330,000 at the peak, and the second one is contributions, that is, credit card payments, about 2,700 per minute, around 40 per second.
That's a lot, because a credit card payment, as you'll see, is expensive to do. The reason why I show these graphs is to stress the fact that improving performance is a continuous process. It's not something that happens from one day to the next.
We were able to handle this spike pretty well. Some donors didn't see the thank you page. They only saw the spinner, and after donating, they never got to the next page, but we never stopped receiving contributions. And you can see that there is no gap, so the service was never down.
And that's because we've been doing this for 13 years. Every time we have high traffic for any reason, we have been analyzing: why do we have that? Is there a bottleneck? Can we improve it? So this presentation is about all the experience we have gathered during all these years.
The first thing we have to do is define what we're going to optimize, and that will depend on the business. In every case, it will be different. For an e-commerce website, for example, it's very likely that it will be the response time on browsing the catalog. In our case, it's pretty simple, the contribution form, and we have to optimize two things. One is how we load it,
and the other one is how we process it. Loading the contribution form has no secret. It's what you think of when loading a form. It's very simple, but processing is a little different. In the center, I have our servers, and around them I have all the web service calls
that I have to make in order to execute a payment. We have a vault for the credit card numbers, and the vault is the only place where we have the numbers. Outside of that, it's all tokens, so the first thing we have to do is get a token. We have to tokenize the card. That's number one.
That's a POST and a response. Then I have a fraud score; an external service provides that for me. Then, with the bank, I actually have two calls. When credit cards are processed this way, you first make what is called
an authorization, which is another POST. The bank will respond whether it's approved. In that case, it will give me an authorization number, or it will say declined, but no money is transferred at that point. I have to do a second step, another POST, and send the number I got (if it was approved, of course),
and the bank will respond with a confirmation. Also, I have an email receipt I want to send, and most of the organizations want to know right away when they get a contribution, so they want to be informed as well.
The whole thing looks like this. Do you remember these? How many can you do in a minute? There are too many young people here who have never seen one; only the old people remember. Okay, so we are a high-volume shop.
Because we have so many donations, we have a scaling challenge. We have to be able to process this fast and efficiently. What I'm going to do now is I'm going to present one approach.
Actually, I will show several approaches. For each one, I will explain how it works and how it is implemented, maybe with some code, and there will always be a cost, and I'll tell you how to handle it. The first one: metrics.
Here is the part I was worried about: can you see the graphs, or not? I thought that was going to be the case. How about now? Well, we have dozens of these. I'm going to show only a few, the most important ones. This is contributions per minute.
On the x-axis is the time, on the y-axis the number. And something happened there. What actually happened there was a Bernie win in Indiana, and there was a spike. We call these Bernie moments, by the way. So that's one. Now, I have another one here, which is traffic.
It will be correlated. By the way, about correlating: when you have metrics, numbers are not enough. You need graphs like this. And if you have graphs, you can correlate, and between traffic and contributions there is a correlation. But this next one is the number of contributions
that are currently being processed. Because there are so many web services we have to touch, there will always be a certain number of contributions somewhere in that process, so I'm counting those. And if you look at these two, the contributions and the pending ones,
there is no correlation, which is great. That's how it should be. If for some reason pending were also going up the same way contributions are going up, it would mean the service is saturated: I cannot process them as fast as I receive them. In this case, it's wonderful.
That's how it should always be, and sometimes it wasn't. But that's the goal. The last one is latency. That is a time interval: how long it takes between
when I create a contribution and when I receive an authorization from the bank. That's also an important number. In this case, it's about two to three seconds.
This is how I do metrics in Ruby. There is a gem called statsd-ruby. I use the StatsD class: I create a new object and pass the host name. I will have multiple hosts, so I need to know where this is happening. The second instruction is a gauge, and the gauge will generate a data point,
which is an integer: in this case, how many pending authorizations I have. Then there is the timing method; both gauge and timing are StatsD methods. With timing I record a time interval which, as I mentioned before, is the distance between when the contribution was created and when it was approved.
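The pattern he describes can be sketched without the gem: StatsD speaks a tiny text protocol over UDP, so a minimal client looks roughly like this (the metric names are illustrative, not ActBlue's real ones):

```ruby
require "socket"

# Minimal sketch of the StatsD wire protocol that gems like statsd-ruby
# speak over UDP. Each metric is a one-line text packet.
class MiniStatsd
  def initialize(host = "127.0.0.1", port = 8125)
    @host = host
    @port = port
    @socket = UDPSocket.new
  end

  # A gauge records an instantaneous value, e.g. pending authorizations.
  def gauge(metric, value)
    send_packet("#{metric}:#{value}|g")
  end

  # A timing records an interval in milliseconds, e.g. created-to-approved.
  def timing(metric, ms)
    send_packet("#{metric}:#{ms}|ms")
  end

  private

  def send_packet(packet)
    @socket.send(packet, 0, @host, @port)
    packet
  end
end
```

A collector like statsd/Graphite listening on that port would aggregate these packets into the graphs shown on the slides.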
Very simple. But if you have lots of these, you will be able to build those graphs. The way you render the graphs is with something called Graphite, and there are all sorts of backends and options around it. You also want to measure CPU, memory, and disk, and there is something called collectd that has plugins to gather
that information easily. I also mentioned logs. They are not really metrics, but they are very important; don't forget them. Good, we covered the first one. Now: multiple servers.
Multiple servers: if you start with one host, which is normally the case, even the fastest computer in the world won't be able to handle all the load. You will have to add a second, a third, et cetera. This graph shows, on the right, three machines running.
Inside each machine, I have little circles representing threads, so I can have a web server running in each thread. That's multi-threading, which is simple. But the important part here is that I have different computers, and because of that, I need this piece in the middle, which is a load balancer.
The load balancer can be a piece of software or hardware. Well, in the end, everything is software, but don't get too technical with me. The DNS will resolve to the IP address of the load balancer, the request will get there, and the load balancer will pick one host and pass the request to it.
How you implement it will depend on the hosting company. What I have here is called the poor man's version of a load balancer, because it's free: it is done with nginx. I can configure nginx to do load balancing, and you see two blocks. Can you see it well?
Yeah. The first block defines the IP addresses of the three hosts. And the second block, the server block, says that nginx will listen on port 80 and all requests should be passed to the backend block. There is an algorithm that defines how hosts are picked; in this case it's the default, sequential round-robin.
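The two blocks he describes might look roughly like this (the IP addresses are placeholders):

```nginx
# Pool of application hosts; nginx round-robins across them by default.
upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
}

# Listen on port 80 and pass every request to the backend pool.
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```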
It doesn't matter; you can define a different one if you want. So, there are costs involved when you do this. The first one: if you have used Heroku, the first surprise when you start with Heroku is that there is no file system, and this is why.
In the browser, you upload a file, and it lands on one host. Later, a different request is handled by a different computer, and you will try to find the file on that computer, but it's not there. So you need to provide a mechanism to fix that. One way is to use Amazon Web Services S3,
which is basically remote disk storage. Another option is something called sticky sessions, where the load balancer always picks the same host for a given user and sends their requests there. But that's more for the second problem, which I will get to; here I'm talking about persistence.
You can also replicate the files; you can do that. The second problem is shared state: with multiple hosts I don't share memory. I could use the sticky sessions I was talking about, but Rails is very good here; you have Rails sessions out of the box. That's how you share state.
You can also use Redis if you want your own data store. The third problem you're going to have is that, because you have more servers running, all of them connected to the same database, you're going to start running out of connections. The database has a limit on how many clients can connect.
What we do is use Postgres, where you can easily define replication. That means there will be copies of the database; the data on those databases will be a little behind, but not too much. They're read-only, but I can still use them, and if there is a host that doesn't need read-write access, that one doesn't have to connect
to the main database. It will connect to the replica. The last point is that it doesn't matter if all the hosts are up, and it doesn't matter how many I have: if the load balancer is down, I have a problem. In our case, the solution is a combination
of our CDN (I will explain what a CDN is in a moment) and the load balancer provided by the hosting company. Good, we did two. The next one: caching.
Caching is the most popular one. Every time you hear about performance, you will hear: hey, you have to do caching. Caching is basically making a copy, keeping a copy somewhere, to save time. There is a cache in the browser, there is a cache in the web server, and there will be caches in between as well.
If you have money, you can hire a caching service and have something up and running very quickly; you can buy your way out of most of the effort. We use something called Fastly,
which is a content delivery network. There are several: Akamai is another very popular one, and Cloudflare. I will explain how it works in the next slide, but this is very good; it works very well, and the loading part of the form is the part that gets all the benefit of doing this.
We once had a distributed denial-of-service attack, and we handled it very well only because the CDN was there for us. We couldn't have handled it with our own servers. This is how it works. I have a browser in Boston,
and the boxes on the right are POPs, points of presence. They belong to the CDN, not to ActBlue. I have a POP in New England, so the browser in Boston, if you follow the numbers in sequence, will make a GET request,
because this is the first time it requests this document. The POP will make another GET to our own server, the ActBlue host, to get the document. We respond; that's number three. We're adding two headers there, which I'll explain, and the POP will, in turn,
respond to the browser with the document. There is another header in that last response. Now, later, there is another browser in Providence, a city near Boston, so that browser will also go to the same POP
and make a GET, but because the POP has the copy, it won't request the copy from us, so we are not going to see a second GET on our server. The POP will respond with the copy it has. There are other POPs distributed all over the world; for example, in this case, I'm putting one on the West Coast,
so if someone in LA is browsing ActBlue, they will go there, and that POP will have its own copy. This is a map where the red dots represent POPs. It is a dashboard for Fastly, and the size of each dot represents how many cache hits it has.
A hit means the POP already has the copy. The biggest dots are in the US, but there are also red dots in Europe, Asia, Australia. The gauge on the left indicates that there is a 97% hit rate,
and what that means is that only 3% of the requests get to my server. You have to keep in mind that all the requests go to the CDN first; 97% of them stay there and never touch my web server, which is great.
How do you control the cache? You need to control two things: how long the copy will live in the cache, and how you do the purge. Purge means you force a refresh. You specify how long the copy will live,
but sometimes you want to refresh it right away; you don't want to wait. You do this with headers, and I'm showing a few here. Cache-Control is the first one, and the most popular. Here it says max-age=400. (I will put the slides online, by the way, so don't worry about copying this down.)
400 seconds, any place: the copy can live 400 seconds in the browser, in anything in between, or in the CDN as well. Surrogate-Control is longer, 3600 seconds, and it's the same as Cache-Control,
but only for the CDN. There is a specification for how CDNs work, called the edge architecture specification, and that's where this header is defined. Vary is another one.
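As a sketch, the headers he lists could be built as a plain Rack-style header hash; the max-age values and the surrogate key below are illustrative, not ActBlue's real settings:

```ruby
# Builds the CDN caching headers described in the talk.
def cdn_caching_headers(surrogate_key)
  {
    # Any cache (browser, intermediary, CDN) may keep the copy 400 seconds.
    "Cache-Control"     => "max-age=400",
    # Honored only by the CDN: it may keep the copy for an hour.
    "Surrogate-Control" => "max-age=3600",
    # Tags the document so one purge call can refresh every page sharing it.
    "Surrogate-Key"     => surrogate_key,
    # Store separate cached variants per content encoding.
    "Vary"              => "Accept-Encoding",
  }
end
```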
Surrogate-Key: for each document, I can define something like a tag, which is great, because at some point I might say, hey, I want to force a refresh on all these pages. If all the pages have the tag key2, I can do it in a single call. There is another thing called
the Varnish Configuration Language, VCL, which is a script. The script has access to the whole request, including the URL. I want to point out something: the VCL script runs where the POP receives the request from the browser,
and it also runs where the POP receives the response from the host. This is an example of what I can do with VCL. I can check the URL, and in this case it starts with videos. In that case, I say I would like to use
a specific backend for videos, and that's the name of the backend, fvideo; it's a load balancer. This is what I'm trying to avoid: I don't want to mix video traffic with the contributions. I can also respond right away; I can say, hey, return a 400, for example, without touching the server.
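That kind of edge routing might be sketched in VCL like this (the backend name and URL prefixes here are illustrative):

```vcl
sub vcl_recv {
  # Route video traffic to its own backend so it never mixes with
  # contribution traffic.
  if (req.url ~ "^/videos") {
    set req.backend = F_video;
  }

  # Respond directly from the edge without ever touching the origin server.
  if (req.url ~ "^/forbidden") {
    error 400 "Bad request";
  }
}
```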
There is also an API. With the API, you can purge keys, one of them or all. Now, the costs. First, if you want fine control over how the copies are kept
and purged, it gets complicated. Second, if you are doing SSL, and in our case we always do SSL, it's complicated because all the POPs need to have a copy of the certificate and also the private keys.
I need to maintain that. The other thing: you remember slide five, the one that says "Hi, Braulio"? I can bet that checkout pages on regular websites
are not cached, because of this personalization. If you cache this and my neighbor sees "Hi, Braulio", it doesn't work. So I need to handle that, and in our case, we cannot follow the no-caching approach. We have to cache, because it's the most important form.
So we use JavaScript: we cache everything except those little personalized pieces, and they are filled in with JavaScript. Great, we have done part three, and we're doing fine on time. Separation of concerns, also known as SOA or microservices, is a very simple idea:
I have different applications handling different parts of the system. The first example here is the tokenizer, the vault, which is an application written in Node, completely separate. It's even in a different hosting company.
I can also have multiple copies of the database that way, and one of the advantages of separation of concerns is compliance. The fact that we have this vault means that if I don't have access to the card numbers, I don't need to comply, and I don't need to have an antivirus on my laptop.
That's why I have never seen a credit card number: because I don't want an antivirus on my laptop. The cost, as anyone who has done microservices or SOA knows, is that it's very difficult to implement and very difficult to test.
Great: one, two, three, four. We have covered four. We are now going to cover deferred tasks. In our case, we're going to talk about tasks that are slow. I don't want the web server to be doing something
slow, because it will be held up for a long time and that server won't be able to handle other requests. For example, this is slide number nine, and all of these things are slow. So let's say I'm talking with the bank.
That shouldn't be done by a regular web server; it will take several seconds. So what I do is save that job for later. If I want to do it later, I need to save it somewhere, so I need a queue. In our case,
almost everything is a deferred task. An extra benefit of doing this is fault isolation. If the bank is down, for example, and cannot process authorizations or settlements, it doesn't matter, because they are deferred tasks: the customers will still be able to donate. They won't know if the payment is approved or not,
but they will get the thank-you page, and they might even get an email saying thank you for the donation. We'll tell them later whether it was approved. The other advantage is increased reliability: if for some reason something failed in the authorization, for example, I have all the information saved and I can rerun it.
In our case, there is the deferred batch system. I put that there because it was a big gain for us. We decided that the contribution was going to be considered paid at the authorization point. Although we didn't have the money yet,
because we hadn't done the settlement, we were going to consider it paid right away. If we do that, we can do the settlement, which we always do deferred, in batches. Instead of having one settlement per POST, we can send one POST with 400 settlements.
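The batching idea can be sketched in a few lines of Ruby; `post_batch` here is a hypothetical stand-in for the bank's settlement endpoint, not a real API:

```ruby
# Hypothetical stand-in for the bank API call: settles a whole batch in
# one HTTP round trip and returns how many settlements it processed.
def post_batch(settlement_ids)
  settlement_ids.size
end

# Instead of one POST per settlement, group the pending settlements and
# send one POST per batch of up to 400.
def settle_in_batches(pending_ids, batch_size: 400)
  pending_ids.each_slice(batch_size).map { |batch| post_batch(batch) }
end
```

With 1,000 pending settlements, this makes three POSTs instead of a thousand.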
A big gain. This is how a queue system looks. On the right, I have the processes doing the work. They are called workers: authorizations, settlements, sending email. I have the queue in the middle, where I save the jobs. On the left, I have the web servers putting the jobs in the queue.
We use Sidekiq, and this is how. We have two blocks. In the first, you define a class for the worker; in this case, it's doing the settlement. You have to define a method called perform, and Settlement.find takes an ID
and gets the settlement model. The method that talks to the bank is settle, with an exclamation point. That first block is who does the job. The line in the middle is how to put the job in the queue:
I use the perform_async method, which is a Sidekiq method, and I give it the ID of the settlement record. It goes into the queue, and a worker will process it. Rails 4.2 has Active Job incorporated, and Action Mailer has it integrated out of the box.
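The two blocks he describes have roughly this shape. Since the real thing needs Redis and the Sidekiq gem, this is a self-contained toy with an in-memory queue; `Settlement` stands in for the ActiveRecord model, and only the method names mirror Sidekiq's API:

```ruby
# Thread-safe FIFO from Ruby's standard library, standing in for Redis.
QUEUE = Queue.new

# Toy stand-in for the Settlement ActiveRecord model.
Settlement = Struct.new(:id, :settled) do
  def settle!
    self.settled = true  # in the real app, this POSTs to the bank
  end
end

SETTLEMENTS = { 1 => Settlement.new(1, false) }

class SettlementWorker
  # Mirrors Sidekiq's convention: `perform` takes simple arguments.
  def perform(settlement_id)
    SETTLEMENTS.fetch(settlement_id).settle!
  end

  # Mirrors `perform_async`: enqueue only the record's ID, not the object.
  def self.perform_async(settlement_id)
    QUEUE << settlement_id
  end
end

# Web server side: put the job in the queue.
SettlementWorker.perform_async(1)

# Worker side: pop a job and run it.
SettlementWorker.new.perform(QUEUE.pop)
```

In real Sidekiq the worker class includes Sidekiq::Worker, the queue lives in Redis, and failed jobs are retried automatically.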
So if you want to send an email asynchronously, you say deliver_later; that's the line at the bottom. Very simple. If you're doing deferred tasks, you have some costs: queueing systems are unreliable. Except Sidekiq, of course.
That's because Mike, who wrote Sidekiq, is around; just in case. Well, it's not just that. A job can die. The computer can die. The bank can have a problem. The connection can drop.
All those things can happen, and with a deferred task, if the job didn't run, what are you going to do? Sidekiq does retries automatically, but maybe all the retries are used up and it never succeeded. What do you do then? If it's a settlement, say a $100 settlement, and you never settle, you're going to lose that money
because you never did the transfer. What else? Coordination: you cannot do an authorization after doing the settlement; that will fail. And it's difficult to debug. It's kind of crazy, because things happen at any time, and because we have multiple hosts, they happen anywhere.
You remember I told you about the logs? This is where they are important; they're the only way to know what happened. Great, we're doing great. We have covered everything but the last one: scalable architecture.
This is the most important one, and I put it at the end because I am a developer, and as developers we always overlook architecture. But it shouldn't be like that. The idea is that when I'm writing software, I have to be thinking that it has to be fast.
I'm sure all of you write fast software, but that's not enough. You also have to think about how you are going to scale it in the future. If you don't think this way, you might make a mistake that is going to be difficult to fix, because you'll have the whole system written that way.
I'm going to give you two examples. You saw the first contribution form, with these suggested amounts. I might say, hey, I would like a process that calculates on the fly the best amounts to show, depending on the organization and depending on the user. So I say, okay, let's start developing this,
and I'm going to use machine learning. Oh, great. And right away I say, you know what? If we have a central system to do this, it would be great, and things would be easy. Great, we do it that way. Well, that doesn't scale. At some point I will have so much load on this system that I will want two of them, and I can't,
because it's a central one; I cannot have two. The other example is the deferred, excuse me, the batches for the settlements. It was an architectural decision: we are going to consider the contribution
processed at the end of the authorization, and that is huge, because I don't need to do the other part first. Right away after that step, I can say, hey, we're done. And that's the list, with scalable architecture at the top,
and that's all I have. If you are interested in what we do, come talk to us; we also have stickers over there. Those are my colleagues, by the way. But we have time, yeah, seven or six minutes for questions.
Any questions? Okay, the question is: when you have multiple servers, how do you handle the logs? You will have many computers generating them. We use Papertrail, and it works very well. With Papertrail,
the systems generating the logs connect to their service, and through a web interface you see everything on a single page. You can filter if you want, of course. That's the way to go.
Okay, the next question was: how do we simulate load, or how do we prepare for a future record like that one? We could do that, but we don't do it on purpose, because we have something called recurring contributions.
You can define: I make this contribution, and I want to make it every month, or every week. We have lots of them, and every day at four in the morning we run them all together, and that's a lab in itself.
It's pretty close to reality, and we can analyze there how the system is handling the load. In fact, in some cases the bank cannot handle the load, because it all comes in at a single time, so we have to throttle it and spread it out a little.
Also, we have the end of quarter. At every end of quarter, the organizations have goals, and they all push until midnight. After midnight, all the traffic goes down. So we use that as well. Basically, we don't do our own simulation,
but we are very careful to study those cases. Another question? Great. Thank you so much.