Optimizing Your CI Pipelines
Formal Metadata

Title | Optimizing Your CI Pipelines
Number of Parts | 141
License | CC Attribution - NonCommercial - ShareAlike 4.0 International: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers | 10.5446/68701 (DOI)
Language | English
Transcript: English (auto-generated)
00:05
Thank you very much, Chris. I'm afraid we won't have time for jokes, because I have 30 minutes and like 90 slides. So hi, yeah, welcome to my talk. Can you hear me? Yeah, you can. There is no crazy game here, but it's okay. So my name is Sebastián Witowski,
00:20
and today I want to talk with you about continuous integration, or to be more specific, about how you can optimize your CI pipelines. Setting up a CI pipeline is not the easiest thing to do, because unlike with local development, now you have to debug things running on a server that you don't necessarily control, and
00:40
on top of that there is this additional layer of complexity: you have to configure various services together using some kind of configuration format that your CI provider requires you to use. But usually, with some templates that we can find on the internet, we can glue together some reasonable setup, and that might work well when we start our project.
01:00
But as the complexity of the project grows, the complexity of the CI starts growing too. We start adding more tasks and more tools, we start building different versions of our release packages, we have more and more tests, and it starts to get frustrating to wait for the CI run to finish before we can merge our code, or for example to
01:24
wait for half an hour and see that your pipeline has failed because you have an unused variable and that made your linter unhappy. So in this talk I want to take a look at a few different ideas that you can consider when it's time to improve your CI setup. First we'll take a look at some improvements for Docker images. Then we'll talk about running things faster,
01:45
for example by configuring jobs to not wait for other unrelated stuff. Then we'll have a look at not running unnecessary things and stopping them earlier. And finally I will share some miscellaneous tips and tricks, depending on how much time we have left.
02:03
If we want to discuss continuous integration, we have to choose one of the existing implementations, because different CI providers like GitHub Actions, GitLab CI, CircleCI and whatnot all require you to use a different configuration setup. I mean, the general idea is the same: you write some kind of a config file,
02:23
but the way you write this config file differs between CI providers. So you cannot take, let's say, a GitHub configuration, move it to GitLab CI and expect it to work out of the box. For the purpose of this talk I chose GitLab, because this is the platform that I have been using most in my recent projects, and also, according to a Reddit survey,
02:44
it's still the most popular option. But I am in no way affiliated with GitLab. I was not paid by them to come here. I know that they have paid plans, but everything I will be showing here can be used with the free plan.
03:03
We also need some code that our CI will run on, so I created a simple project that you can find under the link at the top. I will have all the links to the slides and to this project at the end, so don't worry. This is a Django project with a simple to-do app. If you don't know Django, don't worry,
03:20
you don't have to know it to follow this talk. I use Django for the simple reason that a Django app by default is a bit more complex than a bare-bones Flask or FastAPI app, so we will have some migrations to apply, and we will also have more files lying around, so it feels a bit more real-world. But we just need this project to have something that we can run our CI on.
03:43
So I'm not even going to explain the code, that's not important. What is important is that I have, for example, a bunch of random dependencies that I'm installing to slow down the build process. I have some tests that are sleeping or performing some large mathematical operations, so they are also slow.
04:00
We also have a build process that uses Docker and Docker Compose. Docker Compose will set up two services, a Postgres database and a web container, and the web container is built from a pretty standard Dockerfile.
04:22
So we start from a basic image, we set some environment variables, we copy the requirements, run pip, and then we start the server. Pretty standard stuff. And this initial setup has a pipeline that takes around six minutes to finish. So here we can see the six minutes, and we have three jobs.
04:44
First we build a Docker image, then we have a test job that runs the migrations and runs the tests. And this stage is actually badly designed, because the first docker run command here will actually build the Docker image from scratch. So I will fix it as we go through the talk. And then finally we have a deploy stage
05:05
that takes around 54 seconds, and all it does is just print this command. So 54 seconds is the time it takes to just start the job container. Keep that number in mind. And of course this example project is simple for illustration purposes. It's not production grade.
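For reference, a "pretty standard" Dockerfile of the kind described above might look roughly like this. This is a sketch, not the speaker's actual file; the base image tag, paths, and the runserver command are assumptions:

```dockerfile
# Small Debian-based Python image (the exact tag is an assumption)
FROM python:3.11-slim-buster

# Typical environment settings for a containerized Python app
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# Copy the requirements first, so this expensive layer stays cached
# until requirements.txt actually changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project and start the Django dev server
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```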
05:26
So maybe don't use it in production. As I mentioned, my example project is using Docker. Docker, or containers in general, are now very well supported by most of the CI providers, so if you can use Docker, use Docker, because it will make your
05:43
development, CI and production setups much closer to each other. And even if you don't do development with Docker, it's quite easy to wrap a simple application in a simple Docker setup. So if you do use Docker, the first step to improve your CI is to actually take a look at your Dockerfile and make sure that you're not making
06:07
some obvious mistakes: make sure you're using layer caching properly during the build, and make sure you use tags, so you know which images you're actually using in your setup. And speaking of images, which one of those two images is better?
06:21
So we have slim-buster, which is a smaller Debian-based image, and we have Alpine, which is a pretty bare-bones Linux image. So who here thinks that Alpine is the better base image? Raise your hands. We have around 10 hands up. And who here thinks that slim-buster is the better image?
06:41
Okay, more people. The answer is: it depends. Using the Alpine image means that you have to install a lot of additional Linux libraries yourself. This will make your Dockerfile a bit more complex and the build process will be longer, but the final image will be smaller, because the Alpine image is very small.
07:02
So it will be faster to push this image back to the registry and pull it in all the other jobs. On the other hand, slim-buster is twice the size of Alpine, which is still not that bad, because if I was using buster, that would be like 15 times the size of Alpine. But if you use slim-buster, the chances are that it has all the Linux dependencies already installed,
07:25
so all you have to do is run pip install and you're ready to go. So yeah, it will be a larger image, and it will take longer to download it between all the jobs, but your build process will be much simpler. And in the end,
07:41
the choice should depend on whether your pipelines spend more time building the Docker image or actually pushing and pulling it between the registries. But slim-buster in general is a good choice, I would say. So one way we can speed up our build time is to not build the Docker image in each job, but to actually build it in the build step and pull it in all the consecutive jobs. And
08:05
here we can see that even though we are building the Docker image in the build stage, in the test stage, when we run this docker compose run command, we will be rebuilding the Docker image from scratch, because jobs are independent. So the test job doesn't know that the build job already built our Docker image.
08:26
So we can fix that by pushing our Docker image to the registry at the end of the build job and simply pulling it at the beginning of the other jobs.
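In GitLab CI, the push-then-pull pattern could be sketched like this. The job names are assumptions; `CI_REGISTRY*` and `CI_COMMIT_SHA` are variables that GitLab predefines:

```yaml
build:
  stage: build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    # Push the freshly built image so later jobs can reuse it
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Pull instead of rebuilding from scratch
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker run $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA pytest
```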
08:42
And here we can see that the build job is now slower, but the test job is faster, and we are talking about a few seconds of difference. So this might come from the fact that it just took longer to start the job container. But your mileage may vary. So if your build process takes long and you have many jobs,
09:04
then I would say it makes sense to pull the image from the registry. But if you have a build process that is simple and fast, then maybe it actually makes more sense to build the Docker image at the beginning of each job, because the saving you would get from pulling the image from the registry
09:21
won't be that great. So you have to test different setups. Another thing that you can do, if you end up building massive Docker images, is to use a multi-stage build. A multi-stage build means that you start the build in a separate image: you copy a bunch of files, you install a bunch of Linux dependencies,
09:42
you run the build process that creates some cache files, some temporary files, and the size of that image grows big. But you don't really care, because in the end you will just take the results of your build process and move them to a separate image that is much smaller. And a multi-stage build works much better in languages where the build step requires you to install a lot of additional Linux dependencies
10:08
but the result of your build is a single binary, like for example in Rust. In the Python world it doesn't matter that much. I mean, if I were starting with Alpine Linux and I was installing a lot of Linux dependencies,
10:22
then I could get some benefits, but I used slim-buster that had all the dependencies already installed. So in the case of my particular example app, there would be no difference. Let's leave Docker for now and let's talk about CI pipelines. So how can we make them faster?
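As an aside, a multi-stage Dockerfile of the kind just discussed might look roughly like this sketch (the stage names, image tags, and commands are illustrative assumptions):

```dockerfile
# Heavy builder stage: compilers and headers live only here
FROM python:3.11 AS builder
COPY requirements.txt .
# Build wheels so only the finished artifacts are carried forward
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt

# Small final stage: copy just the build results
FROM python:3.11-slim
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/* && rm -rf /wheels
COPY . /app
WORKDIR /app
```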
10:45
So as your pipeline starts to grow and your stages start to include more and more jobs, you'll realize that maybe some jobs are unnecessarily waiting for other jobs to finish before they can start. By the way, if you're curious, this is the CI setup of GitHub itself. So that's quite a big setup.
11:06
So you might want to take a look at the structure of your CI configuration and move things around. For example, instead of running your jobs in separate stages one by one, you can run some jobs in one stage in parallel. So this
11:20
setup, this pipeline, is ultimately going to finish faster. Except, what if one of the jobs in the stage fails? Now we are wasting computing resources running the other jobs in this stage, even though we no longer care about the results of static analysis or preparing the release, because we have to go and fix the tests first.
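In `.gitlab-ci.yml` terms, the stage layout being described might look like this sketch (job names and commands are assumptions):

```yaml
stages:
  - build
  - check

build:
  stage: build
  script:
    - docker build -t my-app .

# These three jobs all run in parallel within the "check" stage,
# as soon as the build stage finishes
test:
  stage: check
  script:
    - pytest

static-analysis:
  stage: check
  script:
    - mypy .

prepare-release:
  stage: check
  script:
    - python -m build
```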
11:41
And there's actually an open issue about cancelling pending jobs if one of them fails in a stage, but that issue has been open since 2018, so don't get your hopes very high that it's going to be solved anytime soon. So there are always some design considerations that you have to take into account when you're structuring your CI setup. But in general I would say that the time you waste
12:04
waiting for a pipeline to finish is much more precious than the cost of the build minutes, which cost you a couple of bucks. So I would say that if you can parallelize things, you should parallelize things. And
12:20
there are actually two ways you can move things between stages. One is called a directed acyclic graph, or DAG, and you might know this concept from other tools. So with a DAG we can configure one job to start after another job finishes, regardless of which stage they belong to. For example, let's say you have a Python package and you're testing it under different Python versions, and
12:46
for some reason you don't want to use a tool like tox or nox that would allow you to set up different Python environments. You just build a Docker image with Python 3.8, run tests there and run the release; you build a Docker image with Python 3.9, run the tests, and so on.
13:04
So here we have a build stage that has to finish, then we have a test stage that can start after the build is done, and then finally we have a release stage. In the ideal world, all the build jobs would run in parallel and finish in more or less the same amount of time. In the real world, some of the jobs will take longer to finish, and if you have a custom GitLab runner setup,
13:28
you might actually have a limit on how many jobs can run in parallel, so some of those might actually wait for their turn to start. And if the image for Python 3.8 is already prepared, what's the point of the tests for Python 3.8 waiting for the Python 3.10 image to be built? We can start right away.
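With GitLab's needs keyword this could be sketched as follows (the job and image names are assumptions):

```yaml
# Each build -> test -> release chain proceeds on its own,
# without waiting for the rest of its stage
build-3.8:
  stage: build
  script:
    - docker build -t my-app:3.8 --build-arg PYTHON_VERSION=3.8 .

test-3.8:
  stage: test
  needs: ["build-3.8"]   # start as soon as build-3.8 succeeds
  script:
    - docker run my-app:3.8 pytest

release-3.8:
  stage: release
  needs: ["test-3.8"]
  script:
    - echo "Releasing the 3.8 package"
```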
13:47
So this is where we can use the directed acyclic graph and connect some jobs with their dependencies, regardless of what stage they belong to. So here we can see that the corresponding test and release jobs are starting right after the previous job is
14:02
finished. And in terms of code, all we have to do is add the needs keyword to specify which jobs have to finish successfully before this job can start. Directed acyclic graphs are cool, yeah, but if you have a lot of jobs and you start connecting them throughout your whole configuration file,
14:23
it might be hard to follow what's going on. So instead of doing that, you can group some of your jobs together and create mini-pipelines that can run as a whole. So for example, let's say you have a project that uses a different tech stack on the backend and on the frontend.
14:42
That could be, like, Django REST framework with React on the frontend. Your backend code lives in the backend folder, your frontend lives in the frontend folder, and your backend tests probably don't depend on having your frontend up and running. Also, you probably don't need to run all the backend tests if you only changed something in the frontend.
15:02
So you might want to separate those two things. We can create two child pipelines, one called frontend and the other backend. They are pretty similar, so we'll focus on the frontend. Under the trigger we say that the configuration for this child pipeline lives in the frontend
15:22
GitLab CI file, and we also specify strategy: depend. If we don't specify the strategy and this child pipeline fails, then the parent pipeline will continue running. But if we say that the parent pipeline depends on this child pipeline, then if the child fails, the parent pipeline will also fail.
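Put together, the parent pipeline's trigger jobs might look like this sketch (the child config paths and glob patterns are assumptions):

```yaml
frontend:
  trigger:
    include: frontend/.gitlab-ci.yml
    strategy: depend        # parent pipeline fails if this child fails
  rules:
    - changes:
        - frontend/**/*     # only run when frontend files change

backend:
  trigger:
    include: backend/.gitlab-ci.yml
    strategy: depend
  rules:
    - changes:
        - backend/**/*
```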
15:40
We also have the rules key here, which says that we only want to run this job when something changes in the frontend folder. And same here: we only run this pipeline when something changes in the backend. What else can we parallelize? We can parallelize tests.
16:02
Most of you who have used pytest are probably familiar with the pytest-xdist plugin. It will distribute your tests across multiple CPUs, which can give you some good speed improvement if you're running your tests on a server that has a lot of CPUs. But if you don't, you can run your tests across multiple runners.
16:21
This is especially useful if you want to dynamically spawn new runners instead of keeping one large, expensive, multi-CPU server up and running all the time. Here each runner can run in a separate VM. To do that, we need to install the pytest-test-groups plugin and then specify the parallel option in your GitLab config.
16:46
You also need to provide pytest with two configuration variables: the test group count, which specifies how many groups we're going to have in total, and the test group, which specifies the index of the current group. So are we in group one, two, three, four or five out of five total groups?
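A sketch of such a job, assuming the pytest-test-groups plugin is installed; `CI_NODE_TOTAL` and `CI_NODE_INDEX` are variables GitLab sets for parallel jobs:

```yaml
test:
  stage: test
  parallel: 5   # spawn five copies of this job
  script:
    # Each copy runs only its own slice of the test suite
    - pytest --test-group-count $CI_NODE_TOTAL --test-group $CI_NODE_INDEX
```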
17:02
And this setup is nicely supported by GitLab CI, because we have environment variables for both of those things. So this is all the code that you need to run your tests across five different runners in parallel. And this is how it looks when we enable this feature. By the way, my repo has different branches that correspond to the different things that I'm talking about.
17:22
So if we go to the parallel-pytest-in-groups branch, we can see that we are now down to five minutes instead of six. But we basically used twice the amount of compute credits, because I think the previous time it was like six or seven compute credits, and now it's twelve. And here in the test stage we can see that we have five jobs running in parallel.
17:46
So yeah, it's faster, but it's more expensive, because as we saw at the beginning, just starting the job container takes around one minute, so starting five job containers costs us five compute credits.
18:01
Another thing that you should pay attention to is making sure that your jobs are interruptible. That is, if you have a pipeline running but you push some new code, you want your pipeline to restart and actually run on the new code. And there are actually two steps here, which is maybe not something that everyone is aware of. So first, make sure that in the settings of your project you select the auto-cancel redundant pipelines option, and this will
18:26
restart the pipeline when there is new code pushed to a given branch. This option is enabled by default, so you might not know it's there and take it for granted. But if you have a good reason, you might want to disable it. With this setting alone, though, your pipelines cannot be interrupted.
18:44
They cannot stop in the middle of a job. They have to finish the currently running job before they can be restarted, and that can be a pain if your jobs take a lot of time. Because let's say you have tests that are running for half an hour. You have to wait for this half an hour of running tests to finish, even though you actually
19:03
want to stop and start running the tests again. So you can mark your jobs with interruptible: true, and this will make them interruptible, so they can stop immediately when there is some new code. You probably want to have this option enabled, for example, for the build and test jobs, but you
19:21
don't want to have it enabled for the deployment job, because you don't want to end up with partially deployed code on your server. So in the case of deployment, you probably want to finish the current deployment and then start a new one. Another thing is to stop your job when it doesn't make sense to run it anymore.
19:41
For example, if your full test suite takes half an hour to run and already the first test failed, what's the point of running all the tests if you know that you have to run the tests locally and fix them? So you can run pytest with -x. This will stop the pytest run after the first failed test, and then the next job can start.
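Both ideas together might be sketched like this (the job names and the deploy script are assumptions):

```yaml
test:
  stage: test
  interruptible: true    # may be cancelled when newer code is pushed
  script:
    - pytest -x          # stop after the first failing test

deploy:
  stage: deploy
  interruptible: false   # never kill a half-finished deployment
  script:
    - ./deploy.sh
```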
20:03
You know what else can make your CI faster? Not running things in the CI. So one of the biggest revelations for me was that you don't have to run every possible check in every possible pipeline. If you have a bunch of slow integration tests, you can just run them on the main branches, for example.
20:22
In my current project we have some integration tests that take quite a bit of time, but they check that all our apps are working nicely together. So they are testing different integrations, but because of that our test suite takes around 45 minutes. So we marked all of those jobs as slow and we moved them to a separate pipeline
20:42
that is interruptible and that runs only on the staging and master branches. Now the pipeline for a merge request takes five minutes to finish, and that works fine. I mean, sure, we don't detect all the bugs right away, but the merge request pipeline finishes in five minutes and we eventually get the feedback from the full test run. So that's fine. And you can also run some particularly slow jobs manually or during the night.
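A sketch of such a slow job, restricted to the long-lived branches (the branch names and the slow test marker are assumptions):

```yaml
slow-integration-tests:
  stage: test
  interruptible: true
  rules:
    # Skip merge request pipelines; only run on staging and master
    - if: '$CI_COMMIT_BRANCH == "staging" || $CI_COMMIT_BRANCH == "master"'
  script:
    - pytest -m slow
```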
21:10
And very quickly, because I'm running out of time, and we might actually have time for questions. So, I think I'm speaking faster than when I was practicing. I hope you guys can understand me when I'm speaking.
21:22
My talk will be on YouTube, so you can play it at, like, 0.75 speed. So, you probably know that you can cache things. For example, you can cache the pip cache between jobs, and that can give you a bit of a speed improvement. But you can also specify the cache policy. By default, each job that uses a cache will pull things from the cache, run the steps that you define in your job, and then push things
21:45
back to the cache again. But maybe for some reason you don't want to do that, because let's say your job is doing something destructive to the cache. So there is a policy keyword that you can use to disable either pulling the cache at the beginning of the job or pushing the cache at the end of the job.
22:03
And speaking of caching, you can also select the fastzip compression method, and that will allow you to specify the compression level for your artifacts or for your cache. So here we can select, for example, the fastest method. That will run very fast, but the resulting cache object will also be larger. So it means that the caching will take less time, but downloading this cache object in consecutive jobs will take a bit longer.
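A sketch combining the cache policy and compression settings (job names and cache paths are assumptions; `FF_USE_FASTZIP` and `CACHE_COMPRESSION_LEVEL` are GitLab runner variables):

```yaml
variables:
  FF_USE_FASTZIP: "true"
  CACHE_COMPRESSION_LEVEL: "slowest"  # smaller cache, faster to pull later

# Build the cache once and upload it...
install-deps:
  stage: build
  cache:
    key: pip-cache
    paths:
      - .cache/pip
    policy: pull-push     # download at start, upload at the end
  script:
    - pip install -r requirements.txt

# ...and only download it in every other job
test:
  stage: test
  cache:
    key: pip-cache
    paths:
      - .cache/pip
    policy: pull          # never re-upload the cache
  script:
    - pytest
```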
22:28
That works really nicely with the cache policy, because let's say you have a pipeline setup where you build your cache only once and then you push it, but then you pull it in all the other jobs.
22:41
It actually makes sense then to use the slowest compression method, which will take a bit longer to build the cache, but the resulting object will be smaller, so it will be fast to pull in all those next jobs. If Docker is too slow to build your images, consider using a different build system. There is Buildah, there is Kaniko.
23:00
They have very similar commands, but they actually have somewhat different features, so maybe one of them is better for you. You can also use your own runners. This is a very vast topic that could take a separate talk to cover in detail, but using your own runners, first of all, will save you costs, because you're not paying for the compute credits for using
23:21
GitLab CI, and it also gives you much more flexibility. For example, in my current project we have to use our own runners, because we have some proprietary code that we cannot really push to GitLab. So instead of running tests on GitLab, we set up a runner on our server. The tests are running on our server and we only push the results back to GitLab, and then GitLab handles
23:43
all the displaying of whether the job failed, with all the logs and so on. And just for fun, I checked how it is to run a runner on my computer, and I actually got some interesting results. So the build job was faster and the deploy job was also faster. That's the build,
24:02
that's the deploy. So deploy is twice as fast as it was on the GitLab VM. But actually running the tests now took like 10 minutes, which is super weird, especially as those were the tests making a lot of mathematical operations. So I would expect that my MacBook Pro with 16 gigs of RAM and 16 CPU cores
24:23
would be more powerful than the small VM in the cloud that GitLab CI is using, but apparently Apple scammed me. Jokes aside, this drives the point home that CI is a complex beast, especially if you don't do DevOps on a daily basis, as I guess most of us don't, because this is a Python conference.
24:43
So adding a custom runner gives you flexibility, but it also adds another layer of complexity to your CI. Okay, let's wrap it up. Here are some key takeaways I want you to remember from this talk. Learn concepts, not tools. Even though I was showing you how to use GitLab CI here, I think whatever I showed you is universal.
25:04
If you know that you can do something, it's just a matter of figuring out how to do it in your CI setup. There are no silver bullets in terms of a perfect CI setup. Should you use the Alpine image and install Linux dependencies yourself, or should you use
25:21
the Debian image and deal with the fact that the image is larger? Should you pull your Docker image in all your jobs, or should you build your Docker image from scratch in all your jobs? I don't know, it depends on the setup of your project. Not every check has to run in every pipeline. Slow jobs can run manually or on the main branches. If there is new code available,
25:45
interrupt and restart the job. And try to make your merge request pipelines fast and your main branch pipelines thorough. Also, if you think that you set up your CI once at the beginning of your project and that's the last time you touch it, think again.
26:02
Not updating your CI is the same technical debt as any other technical debt in your code. It will make your code reviews slower, it will delay important feedback, and it will make your developers more and more annoyed. So do yourself a favor and check from time to time what can be improved there. But overall, a well-designed CI can be a great tool in your daily work. Thank you very much for listening.
26:34
I think we have time for a question or two. Anybody wants to ask a question? There are microphones in the room, and anybody can come and ask a question. If anybody has a question online,
26:45
I'll show it. And if you don't have questions now, you can always find me online. Here's the link to the repo, here is the link to the slides. Any questions on the Discord?
27:08
Okay, I knew I was talking too fast. Oh, there's a question. Can you maybe speak into the microphone, because I think it's recorded. In terms of the multi-stage Docker builds that you mentioned before,
27:24
do you find them useful, and in what scenarios? What problems are they solving for you specifically? That's a very good question. I never used them. I mentioned them because I know that they exist, and in some particular cases, where you have a huge build step,
27:41
they would make sense. As I said, I don't think they make that much sense in Python; more in languages like Java, Rust and things like that. But yeah, it's a viable option.
28:01
Hi. You mentioned the custom runner, so was it running locally on your MacBook? So is it possible that you built a Docker image for Intel or something, and then ran it on your MacBook, but it wasn't, you know, a multi-platform build,
28:22
because, what is it, ARM in the MacBook? Yeah, Apple Silicon. So is it a possibility it was so slow because it was emulated and not native? So that's a good question. I knew I should have debugged what the issue was. I think I was building the Docker image fully from scratch there, so I think it was using whatever architecture I have. But I know that the runners have a lot of options, whether you want to run things in shell, whether you want to run
28:45
it in, like, Docker-in-Docker and things like that. So yeah, I really don't know what the reason behind it was. I just checked it, I was surprised, and then I moved on with my life. Yeah, thanks. I think it might be the case, like, it's super slow when emulated: when you build it on, like, a Linux server and
29:02
then you pull this image and run it locally on a Mac, it's super slow and it's emulated. Okay, I have to check it. Thanks. I think that's it. Thank you very much for coming. Enjoy the rest of the conference.