
Autoscaling best practices


Formal Metadata

Title: Autoscaling best practices
Subtitle: How did we survive the peak
Number of Parts: 199
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract: This talk will cover the basics of autoscaling, different types of auto-scaling, and how you can use your metrics to make good auto-scaling decisions. Targeted at entry-level to mid-level auto-scaling users.
* What is autoscaling
* Different kinds of traffic peak scenarios
* Autoscaling reactive vs proactive
* Autoscaling with external tools - Rightscale, Autoscale API, Heat, Ceilometer
* Autoscaling with your metrics - Graphite, Provisioning, Configuration Management
Transcript: English (auto-generated)
Hello everyone.
Hello. All right, so we're going to talk this morning about autoscaling. This presentation is oriented towards beginners
and mid-level users of autoscaling, so just to give you some ideas of where to explore and what kind of tools you can use to autoscale your application. So first, presentations, of course. My name is Marc Cluet. I've been a sysadmin for more than 16 years now, from racking modems to setting up networks
to anything, you mention it, I've done it. I've been working at Canonical in the past, where I was one of the founding members of Juju and MAAS. I might apologize for some of that. I'm also right now a lead DevOps engineer at Rackspace, trying to get Rackspace into the DevOps world. And as I said, I like DevOps, I like programming
in my free time, and I like walks on the beach. So what is Rackspace? Rackspace is a hosting company, basically, but the good thing about Rackspace is that it's the home of fanatical support, so we are fanatical about everything. It's the second biggest public cloud provider in the world after Amazon.
We're a far away second, but still number two. And we were the co-founders of OpenStack with NASA, which is a pretty exciting thing to say. So what is autoscaling? If we look at your normal hosting environment, you have your physical servers,
and what you normally do is you calculate how much traffic you get to your website, and you put in as many servers as needed to cope with that biggest scenario. So what's the problem with that? You can handle the peaks very nicely, but you're basically wasting a lot of money. Because all the time that your servers are doing nothing,
you're paying for nothing at all. And it's not convenient; especially if you're a startup, that's not something you want to do. So the ideal scenario would be something more like this, where your platform grows and shrinks based on your traffic, with enough leeway so you can cope with small peaks.
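To make the idle waste concrete, here is a small back-of-the-envelope calculation; the hourly price and server counts below are invented for illustration, not figures from the talk:

```python
# Hypothetical numbers purely for illustration; measure your own platform.
hourly_price = 0.10          # price per server per hour
peak_servers = 20            # capacity you must own to survive the daily peak
avg_needed = 6               # servers actually needed on average across the day
hours_per_month = 730

fixed_cost = peak_servers * hourly_price * hours_per_month
autoscaled_cost = avg_needed * hourly_price * hours_per_month

print(f"fixed fleet: ${fixed_cost:,.0f}/month")
print(f"autoscaled:  ${autoscaled_cost:,.0f}/month")
print(f"idle waste:  ${fixed_cost - autoscaled_cost:,.0f}/month")
```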
So I took this from Wikipedia, and I modified it a little bit, sorry. So autoscaling: you would consider it to be any kind of resource that you pull in on demand to be able to cope with your service.
So in order to understand autoscaling better in your application, we need to look at the traffic patterns that you have. Traffic patterns will define exactly how you need to autoscale and what kind of things you can do and can't do in order to autoscale properly. These would be the most basic ones.
So you can see here there's on and off, fast growth, variable, and consistent traffic. So the on and off traffic is the typical application that you just turn on at night. Let's say you need to run your analytics, you turn on the whole infrastructure at midnight,
process all the logs of the day, and then maybe at 3 a.m., 4 a.m., you're finished with that and you shut it down. The problem, if you have physical servers in a physical hosting, is that those servers are doing nothing all the rest of the day unless you give them some other function. It's also the typical thing that banks do. So banks, they do all these calculations
on huge platforms that they have, in huge data centers in London, New York, and anywhere else, and they don't touch those servers during the rest of the day. I've gone to banks, talked with them, and they have all these halls full of machines and machines and racks. They say, yeah, that's just for the nightly calculation. They do nothing all the rest of the day.
Then we have the fast growth scenarios. That would be for events like concert tickets or conference tickets, the kind of scenario where you have a very high amount of traffic for a very short time. That would be one day, two days. It could also be that your business is awesome. You created the new toaster and everyone just wants to come to your website, so your traffic keeps growing immensely, which is a good thing for you, right? Or you've been mentioned on Slashdot, and we all know what that ends up with. Then you have variable scenarios. In this case, it's mostly news organizations,
media organizations, if you go to the webpage of CNN or The Guardian, you will see that when there's a very important news event, they will reduce the amount of objects in their website in order to be able to cope with the amount of traffic that they're getting, which is one of the remediation methods, right?
But normally, what you would like is that even if something big happens, every single user has a full view of your website, because that will drive more traffic to the rest of the website, will print more banners, make you more money. It's also the same for rapid-fire sales sites like eBay, like Woot.com.
They will have a very bursty amount of traffic during parts of the day, but the rest of the day, those servers will do nothing. And the last one is consistent traffic. This is the easiest traffic to scale because basically you almost have to do nothing to scale this. It's the typical traffic that you get from nine to five
for example, HR applications, accounting applications, they are just on when the user sits in front of the desktop and starts doing something with them. And it's pretty much the same with email. Even if at night there are not that many emails, normally you can forecast the pattern very easily and know when you'll have more or less demand for email.
So I'll talk a little bit about which auto-scaling methodologies are out there. So basically, what do you do with all these chickens in your car, right? They're just running rampant and you need to make sure that they're doing something useful.
So the main ones are time-based, reactive, and predictive auto-scaling. And I'll talk a little bit and give you examples about each one of those. So in time-based auto-scaling, let's say that we have a couple of servers behind the load balancer. And you know your traffic very well. You know that you'll have 2x the amount of traffic in the next hour because it has been the same every single day of your life. So it's something that is not difficult to forecast. So let's say that it's 9 a.m., or if it's traffic that happens over the month, it's November 1st, just when the Christmas buying spree happens, and then you start adding more servers to your platform.
And that's the easiest one. What is time-based auto-scaling good for? It's good for on-and-off applications and consistent applications. In these applications, you don't have to have those servers up all day. You can just turn them on, make them run whatever you need them to do, and then shut them off again, and it's all good.
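A minimal sketch of what such a schedule-based rule could look like; the schedule, the server counts, and the provision/teardown hooks are all hypothetical placeholders:

```python
from datetime import datetime

# Hypothetical schedule: desired server count per hour of day (UTC).
# The numbers and the provision/teardown callbacks are made up for illustration.
SCHEDULE = {range(0, 8): 2, range(8, 18): 6, range(18, 24): 3}

def desired_capacity(now: datetime) -> int:
    for hours, count in SCHEDULE.items():
        if now.hour in hours:
            return count
    return 2

def reconcile(current: int, provision, teardown) -> None:
    """Grow or shrink the fleet to match the schedule."""
    target = desired_capacity(datetime.utcnow())
    if target > current:
        provision(target - current)   # e.g. call your provider's API here
    elif target < current:
        teardown(current - target)
```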
Then there's reactive auto-scaling, in which we're actually doing something a bit smarter than that. We are measuring the amount of traffic that goes to the servers, and we get a number out of that. So for example, in this example, we have a couple of servers that are at 60% capacity already, but when they get more load and go to 80% capacity, that generates a high watermark event, the kind of event that will trigger the creation of another server. So when you create this new server, the load balancer will start sending traffic towards it, so the amount of traffic on the other two servers
will slowly go down to more tolerable levels. And the other good thing about this auto-scaling is that you can also scale down. So if the three servers now, after the peak traffic, go down to 30%, that would generate a low watermark event, and you would remove one of them
and spread the load across the other ones. So this kind of auto-scaling is very good for fast-growth applications, because auto-scaling up is fairly easy to do. And for variable applications as well, which is a bit more tricky, because you can end up flapping depending on the amount of traffic that you get, so you need to be careful with that.
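A rough sketch of the high/low watermark loop described above, with a cooldown to damp the flapping just mentioned; the thresholds, cooldown, limits, and the add/remove callbacks are assumptions, not any vendor's actual implementation:

```python
import time

# Illustrative thresholds; tune them for your own service.
HIGH_WATERMARK = 0.80   # average utilisation that triggers a scale-up
LOW_WATERMARK = 0.30    # average utilisation that triggers a scale-down
MIN_SERVERS, MAX_SERVERS = 2, 20
COOLDOWN = 300          # seconds to wait after any action, to avoid flapping

def autoscale_loop(get_avg_utilisation, add_server, remove_server, fleet_size):
    last_action = 0.0
    while True:
        util = get_avg_utilisation()          # e.g. mean CPU across the group
        if time.time() - last_action > COOLDOWN:
            if util > HIGH_WATERMARK and fleet_size < MAX_SERVERS:
                add_server()                  # high watermark event
                fleet_size += 1
                last_action = time.time()
            elif util < LOW_WATERMARK and fleet_size > MIN_SERVERS:
                remove_server()               # low watermark event
                fleet_size -= 1
                last_action = time.time()
        time.sleep(30)
```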
And the last one, and this is the fanciest one, is predictive auto-scaling. So in predictive auto-scaling, what you do is that you not only know what kind of traffic you're getting through all the metrics that you're collecting, but you're also feeding all of that through analytics
to an artificial intelligence engine that will predict the traffic for you. So in this case, the AI engine will say that the forecasted traffic is plus 30% in the next 30 minutes, and that maybe has a fidelity of 80%, so it's almost certain that this will happen.
So in this case, automatically boom, you add another server, and you're happy and coping with your growth. This kind of auto-scaling is incredibly good for variable traffic, because it's the kind that can actually tolerate unknown peaks, peaks that you can't forecast with your metrics, that are too fast for your metrics to capture.
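As a toy illustration of acting on a forecast only when its confidence is high enough; the forecaster itself, the confidence threshold, and the per-server capacity are invented placeholders:

```python
import math

# Toy decision rule on top of an external forecaster (the hard part).
# `forecast()` is assumed to return (expected_growth, confidence),
# e.g. (0.30, 0.80) meaning "+30% traffic expected, 80% confident".
CONFIDENCE_THRESHOLD = 0.75
CAPACITY_PER_SERVER = 1000.0   # requests/s one server can absorb (made-up figure)

def servers_to_add(current_rps: float, forecast) -> int:
    growth, confidence = forecast()
    if confidence < CONFIDENCE_THRESHOLD:
        return 0                # not confident enough; let reactive scaling handle it
    extra_rps = current_rps * growth
    return max(0, math.ceil(extra_rps / CAPACITY_PER_SERVER))
```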
And it's able to go up and down with your traffic very well. So now that you know the traffic patterns and the kinds of auto-scaling we have, I would like to talk a little bit about the kinds of tools that you can use. And this is heavily cloud-oriented, I'm afraid, but right now in the cloud we do auto-scaling a lot better
than we do on physical servers, until there are more tools that will allow you to add API integration to do IPMI and PXE boot and all that on real physical servers. So these would be your main players. Of course, Amazon, being the biggest cloud provider in the world, has a solution for that.
So there's RightScale; OpenStack has one as well, and so do we at Rackspace. And there's another one from Netflix called Scryer, which is quite interesting; I'll talk about this at the end. So the first one is Amazon CloudFormation. This was created by Amazon in order to be able
to rapidly deploy new servers and install the applications on them, so they are ready for production as fast as possible. And on top of that, they added auto-scaling groups. So in Amazon, what you do is that you create an auto-scaling group, you start feeding servers into it, and you connect that with a CloudFormation template.
So what it does is that this CloudFormation template will define what the new servers will look like in this auto-scaling group. The normal thing with this is that it's completely reactive, so it will react to high and low watermark events. So whenever there's a high watermark event,
Amazon will instantly start a new instance with this AMI image and will execute the commands that you define on your CloudFormation template on top of that. As soon as that happens, then that will be added automatically to the load balancer and that server will be ready for production. It also supports scheduled events. So there's a lot of companies that use these
for developer environments in which you know that the developers will show up at work at 8 a.m. and they will leave at 6 p.m. more or less. So you're able to shut down those machines during the rest of the day. And all of this is using base images. So AMIs could be an AMI created from a snapshot
or it could be a base OS image provided by Amazon or any other third party. So at Rackspace, we just launched as well another auto-scaling tool. This auto-scaling tool is also all about scaling groups because that's easy and that's very, I would say, very logical.
So you create a new server, you add it to the auto-scaling group, and what it will do, in our case, is that when you define the high and low watermark events, it will use a snapshot of that server to create new servers. And those snapshots can be a fixed snapshot at a point in time, or it can be the latest snapshot from your server. So you can recover from the latest point
of one of your servers in that auto-scaling group. The third one is RightScale. RightScale is a tool that was created in order to simplify deployments into the cloud. It first supported Amazon,
but now it supports everything from Amazon, Rackspace, Windows Azure, Ocean-something I think the other one is. So it supports any kind of scenario. It's also based on scaling groups because, again, that is logical, and on those high and low watermarks, but this is the differentiator in RightScale.
What it does is that when you define these high and low watermark events, every single server in the auto-scaling group votes in order to decide whether they want another server in the group or not. And that is done in order to avoid spikiness and flappiness. So when the majority of the servers vote that they need another one, the auto-scaling trigger will happen
and another server will be created. All these new servers are created using a base image that is provided by RightScale, because they add on top of that all the tools like StatsD and their monitoring agent. And on top of that they run the RightScale scripts that you attach to the template and to the function of those servers.
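The voting idea could be sketched roughly like this; this is only an illustration of majority voting to damp spikes, not RightScale's actual algorithm, and the thresholds are invented:

```python
def group_decision(per_server_utilisation, high=0.80, low=0.30):
    """Each server 'votes' based on its own utilisation; act only on a majority.

    Illustrative only; not RightScale's real implementation.
    """
    votes_up = sum(1 for u in per_server_utilisation if u > high)
    votes_down = sum(1 for u in per_server_utilisation if u < low)
    quorum = len(per_server_utilisation) // 2 + 1
    if votes_up >= quorum:
        return "scale_up"
    if votes_down >= quorum:
        return "scale_down"
    return "hold"
```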
The bad thing about RightScale: it is awesome, but it costs money. So if you're a company that is tight on resources and you need to put your money somewhere else, it might not be the solution for you. And one of the last ones is Heat. So Heat was created inside the OpenStack project as a clone of CloudFormation.
And then it evolved a bit further than that. From Rackspace we also provided some DSL compatibility with our internal project for that, which we didn't launch to the community in time, which was called Checkmate. So it's DSL-compatible with both CloudFormation and Checkmate, and it provides all the functions of CloudFormation and more.
So it also gives you high and low watermark events. It gives you scheduled events as well, and it uses Heat templates. These Heat templates are basically templates defined in JSON which are very similar to CloudFormation templates. So you can define the kind of image that you want, define the base or the image that you want to use,
and define all these triggers and events and anything that you need to do in order to get this server into production. And this is Scryer, which was created by Netflix and was just announced last month. This is very interesting because they are the first ones
to actually use AI in production. So what they did is that they used something that's called analytical regression in order to calculate the probability of needing a new server, and they added on top of that an artificial intelligence platform.
I think it's based on something like a Honda engine, but I'm not certain because they have not published anything yet. But what they do here is that they predict the fidelity of the traffic. It's kind of like meteorology: the further out in time you go, the more difficult it is to actually predict with accuracy what will happen,
but the shorter you are in time, so let's say if you are 30 minutes before the traffic happens, the fidelity of your model will go up, up to a certain point where it's almost certain. So you can say that there's a 70 or 80% chance that you will have 10% more traffic
in the next 30 minutes. So with this what you can do is to stay as close as possible to the traffic and only allocate the resources you need. And all of this is done and fed into the Amazon APIs. I don't know if Netflix will try to implement these for other APIs like OpenStack but as soon as they publish the code we'll see.
And also, of course, you can make your own, because there are all these different tools but they might not fit your business, they might not fit what you want to do. So what would be the best way to create that? In order to create that, the first thing would be to collect your metrics, be able to collect as much as you can,
get any kind of metrics that you can, as simply as possible, collect them through collectd, Diamond, or whatever collector is the tool of your choice, and have a good metrics database, something that can store metrics for a long time like RRD or like Whisper, which is what Graphite uses, which can store information with a lot of accuracy
and a lot of atomicity for a long time. And then of course you need to write your own autoscaling code. My recommendation is to use message queues, because this is the kind of software that you want to use message queues for; it scales very well in an asynchronous way.
And the other advantage that you have is that this is very close to your business. If you write your own code it's very close to what you actually need, it understands exactly what you want to do and what your business needs. So it's fairly easy to achieve the most with that.
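A minimal sketch of such a homegrown scaling worker consuming watermark events from a queue; the queue name, event format, and provisioning helpers are placeholders, and RabbitMQ via pika is just one possible choice of message queue, not the one prescribed by the talk:

```python
import json
import pika  # RabbitMQ client; one possible queue choice among many

def provision_server(group):      # hypothetical helper
    print(f"adding a server to {group}")

def decommission_server(group):   # hypothetical helper
    print(f"removing a server from {group}")

def handle_event(ch, method, properties, body):
    """React to watermark events published by your metrics pipeline."""
    event = json.loads(body)                 # e.g. {"type": "high_watermark", "group": "web"}
    if event["type"] == "high_watermark":
        provision_server(event["group"])     # placeholder: call your cloud API here
    elif event["type"] == "low_watermark":
        decommission_server(event["group"])  # placeholder: drain logs/sessions first

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="scaling-events")
channel.basic_consume(queue="scaling-events", on_message_callback=handle_event, auto_ack=True)
channel.start_consuming()
```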
But of course you need to invest time and money into developing this. So what do you do to make the most of autoscaling? Autoscaling was not invented to make your life easier; it was invented to make the most money. So the less money you spend on servers going idle,
the more money you can spend on something else, like awesome parties at the beach and pizzas, of course. But autoscaling is dangerous as well, it's a very dangerous thing. So beware of the Kraken, it's not a good thing. In order to avoid that, my recommendation is: whatever kind of autoscaling you do, please always have minimum and maximum allocation numbers,
because what you don't want is your autoscaling engine going all the way down to zero, because then you have no servers, but the autoscaling is happy, everything's good. And at the same time you don't want to have a million-buck bill from Amazon or from Rackspace.
So what you want to do is have a maximum allocation, and after that you want to have a human that goes: yep, actually we're having this amount of traffic, everything's going awesome, so yeah, we'll allocate more servers. But be careful with letting the autoscaling engine take those decisions for you, because we've had some customers already at Rackspace that came back to us with a huge bill
asking for help. So my other recommendation is to stay with the basics. Autoscaling can get very complex very fast. If you start throwing in all kinds of business metrics, all kinds of data, your model will deviate and will do crazy things for you.
They might be right, but most of the time they're wrong. So stick to the basics, stick to CPU, memory, and both disk and network I/O, because those are the ones that will help you autoscale the best. Then on top of that, maybe what you want to do is add your business metrics, but in a manual way,
so that you can review your autoscaling mechanism and see if that works for you or not. And as I said, it's very important: keep reviewing your autoscaling, because if you let it go, it will do awesome things for you, but it will also be a waste of money for you. So keep reviewing your autoscaling mechanisms, keep having meetings about it, with all the metrics; sit down, check them,
make sure that they fit your business, and that the gap between your real traffic and the amount of servers that you have allocated is as narrow as possible, because in the end, that's the game. It's trying to get that gap as narrow as possible in order to use all your resources for what they are supposed to be used for.
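One simple way to put a number on that gap, with invented figures:

```python
# Hypothetical figures to illustrate the gap between traffic and allocation.
allocated_capacity_rps = 12000    # what the fleet can serve right now
actual_traffic_rps = 7500         # what it is actually serving

headroom = allocated_capacity_rps - actual_traffic_rps
utilisation = actual_traffic_rps / allocated_capacity_rps
print(f"utilisation {utilisation:.0%}, headroom {headroom} req/s")
# Reviewing this regularly tells you whether your policy is too generous or too tight.
```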
And as I said, these are recommendations from Netflix, and I've seen the same kind of pattern with Rackspace customers. Scale up early; it's never dangerous to scale up, and to scale up too much, unless you're concerned about the bill of course, but it's never dangerous to do that. It always helps you, and the important thing there
is that the phasing-in time always plays a factor. And when you scale down, scale down slowly, because when your traffic picks up very rapidly, it's possible that it goes down very rapidly, but it can pick up again. So if you scale down slowly, you avoid having to shut down
and start servers again, because that always has a cost. And also, don't apply the same kind of strategy to all your applications. If you have five different applications, you need to review each one of them. Don't use the same for everything, because it's not one-size-fits-all. You need to make sure that your auto-scaling fits exactly your application's needs.
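The "scale up early, scale down slowly" advice could translate into an asymmetric policy along these lines; the step sizes, cooldowns, and thresholds are invented for illustration:

```python
# Illustrative asymmetric policy: grow aggressively, shrink cautiously.
SCALE_UP_STEP = 3          # add several servers at once when load rises
SCALE_DOWN_STEP = 1        # remove only one at a time
SCALE_UP_COOLDOWN = 120    # seconds; react quickly on the way up
SCALE_DOWN_COOLDOWN = 900  # seconds; give traffic a chance to come back

def next_action(utilisation, seconds_since_last_action, high=0.80, low=0.30):
    if utilisation > high and seconds_since_last_action > SCALE_UP_COOLDOWN:
        return ("add", SCALE_UP_STEP)
    if utilisation < low and seconds_since_last_action > SCALE_DOWN_COOLDOWN:
        return ("remove", SCALE_DOWN_STEP)
    return ("hold", 0)
```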
So phasing in and phasing out in auto-scaling is very important. And it's one of the most problematic things. Whenever you're phasing in, there's a certain amount of time that it takes from the moment you say, I need a new server, to the moment you have a new server in production. And that amount of time varies widely.
It's the amount of time that your provider or your platform takes in order to install the OS image and get the server up. It's the amount of time that it takes from that server being a vanilla OS image or a golden image to being in the same spot as all the rest of the servers in production. And it's the amount of time it takes for the load balancer to add traffic to that server.
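Adding those three components together gives the total phase-in time; the numbers below are invented, but they line up with the five-minute example mentioned next:

```python
# Invented numbers; measure your own provisioning pipeline.
boot_time = 180        # seconds for the provider to boot the base or golden image
config_time = 90       # seconds of configuration to reach production state
lb_warmup = 30         # seconds for the load balancer to start sending traffic

phase_in = boot_time + config_time + lb_warmup          # 300 s = 5 minutes
forecast_horizon = 30 * 60                               # how far ahead you can predict
margin = forecast_horizon - phase_in                     # 1500 s = 25 minutes
print(f"phase-in {phase_in}s, safety margin {margin // 60} minutes")
```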
So that time is very important and very crucial for you. That's why a predictive model works so well because if you know that it will take you five minutes to add a server in production and you can predict the fidelity of your traffic 30 minutes in advance, that gives you 25 minutes for any kind of problem you have. You also need to have in mind the decommission time.
Decommissioning a server is not always easy. You have lots of unique things on a server. You might have sessions. You certainly will have logs that pertain only to that server, be it traffic logs, be it debug logs, any kind of logs that you need; you need to take them out of the server before shutting it down. And you might also have other kinds of things that are unique to that server that need to be taken out before it's destroyed. So keep in mind also the decommission time, because that also plays a very important part. And it's actually what makes scaling down the most difficult thing. And sometimes golden images can help you get to the point where you want to be faster. The problem with golden images, though,
is that if you keep using a fixed snapshot, that fixed snapshot will, over time, drift from the current state of production. So that image will in time be slower than actually deploying a base OS image. But if you keep taking snapshots and use the latest snapshot, you might also incorporate corruption into it.
So it's a very tricky balance there. You need to make sure that whatever you do, it's something that will ensure that there's no corruption in your image and no corruption in your new servers. And if that means that you need to add another check before the server comes into production, so be it. It's always more important
to be able to serve traffic right than to be overwhelmed by traffic. And that's my presentation. So any kind of questions you have? Instances where it doesn't make sense
to have it in an auto-scaling group, but you need high availability for it. So like the NAT instance, for example, where you only need one of them. But if that instance has a problem, then the auto-scaling group configuration will bring up the new instance. Yeah. So the question is that Denise here
has seen auto-scaling being used sometimes to be able to do HA. So you have just one server in the auto-scaling group. As soon as that server dies, the auto-scaling group will create another one for you. So you always keep an HA with just one server which is kind of cheeky, but it works. But I know about that and I've done that in the past. It's a good way to save money
by letting Amazon or Rackspace do the hard work for you. But it's also not real auto-scaling. So it's really a HA that is disguised as auto-scaling in a certain way. And also if that happens, that means that you don't really need that server all the time, right? So if the server goes down and it takes some time to get another server up,
you're sacrificing those three, four, five minutes. Any other question? All right, that's it then. Thank you.