
How to deploy Volto sites automatically in a no-Docker scenario


Formal Metadata

Title: How to deploy Volto sites automatically in a no-Docker scenario
Number of Parts: 44
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Place: Namur, Belgium

Content Metadata

Abstract: We are not in the Docker world yet, at least not for deploying sites. Last year we had to publish several Volto sites in production and needed some automated way to handle those deployments, because Volto's build process takes so much time. We have managed to build a CI/CD pipeline using GitLab CI and several other tools to release, build and publish Volto add-ons and sites in an automated way. Earlier this year we published a blog post explaining this process, and this talk is the extended version of that blog post.
Transcript: English (auto-generated)
Hello, everyone. Thank you for attending my talk today. I'm Mikel Larreategi. I work for CodeSyntax in the Basque Country. I'm going to show you how we are deploying our Volto sites in an automated way, but not using Docker.

OK, why are we not using Docker? It's a long story, but the point is that lately we have been migrating all our Plone 4.3 sites to Plone 5.2. At the moment we started, Plone 6 was not even an alpha, I think. So we started migrating them to Plone 5.2 and doing some Volto development to build the front end of those sites. We are not yet in the Docker world, at least for deployment. Except for a project we have at the EEA, in our other projects we are not in the Docker world. We use Docker for development things,
for spinning up MySQL databases, for testing, or memcached, or whatever, but not for deployment. We are quite used to deploying our Plone sites with buildout. We have more than 70 Plone sites on more than 70 servers at DigitalOcean, so we decided that in this case we were not going to change the way we deploy our sites just because of Volto. We wanted the same, complete experience for our developers and for the people who deploy the sites, so we wanted to do everything the same way.

So what was our first approach to deploying our Volto sites? As I said before, we wanted to work like we did with buildout:
why not copy the front-end package to the server, run yarn build, and start the server? That was our first idea, but it didn't work. The yarn build took at least half an hour. We had to double the server's capacity, and even after that the build took about 15 minutes. We had to upgrade the server's capacity twice just to be able to run yarn build. Yeah, it was insane, but that was the case.
So we saw that that was not the solution at all. We couldn't work like with buildout, where we just run buildout or upgrade a package and restart the server. At that time we were already working on a project at the EEA, using their deployment pipelines. Thank you all for all that work. We thought that was something we could replicate or copy to do our own deployments. So we set out to automate all that package building and publishing somehow.

What we wanted was to automate the Volto add-on package creation and building, and to tag the package and release it, or not, as needed.
We also wanted to automate the way the front end was built. As I said, our build was taking too much time. Our Volto add-ons and front ends were not hosted on public GitHub; they were client code living in private GitLab repositories. We use GitLab to host our client code, configuration, and so on. So we needed a way to publish those private packages to a kind of PyPI index, but for npm, for JavaScript packages. We already have a private PyPI repository. It's just an nginx-served folder with password protection, and it works quite fine with buildout: you just add the repository URL to find-links, and it works. So we thought that could be the way for npm too, but it looks like things don't work like that in the npm world, in the JavaScript world. We didn't find a way to make npm work with several different registries, using the public one and the private ones.
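For context, the buildout side of that setup can be sketched roughly like this; the URL is a hypothetical placeholder, not the actual CodeSyntax repository:

```ini
[buildout]
# Hypothetical private index served by nginx with basic auth;
# buildout will also look here when resolving eggs.
find-links = https://user:password@eggs.example.com/
```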
So we started looking for a private registry for npm packages, and we found Verdaccio. I don't know how it's pronounced; perhaps the Italians will tell me I'm saying it wrong, but whatever. As advertised, it's a private npm registry, and it works like a proxy. You can configure npm and yarn and all your JavaScript tooling to talk to Verdaccio, and it will serve the packages that it hosts, and it will also proxy to the public npm registry to download the packages that live outside. With this, we just have to configure our tooling to get the packages from there, and it handles everything. We password-protected Verdaccio. It's just a Docker image with some YAML-based configuration, so after googling a bit we managed to configure everything correctly. And we have published the configuration we used on GitHub, so if anyone wants to use it or deploy it themselves, it's quite easy to do. As I said, we password-protected Verdaccio and created the npm token needed to communicate with it.
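To give an idea of what that YAML looks like, here is a minimal Verdaccio configuration sketch; the scope name and paths are illustrative assumptions, not the published CodeSyntax configuration:

```yaml
# Minimal Verdaccio sketch: a private scope stays local, everything
# else is proxied to the public npm registry.
storage: /verdaccio/storage

auth:
  htpasswd:
    file: /verdaccio/conf/htpasswd
    max_users: -1          # disable self-registration; users are created manually

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':          # hypothetical private scope
    access: $authenticated
    publish: $authenticated
  '**':
    access: $all
    proxy: npmjs           # public packages come from npmjs
```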
And then came one of the hardest parts: configuring npm and yarn to use that registry, because when you execute yarn install, it uses some configuration files, and when you do npm publish, it uses some others. So we had to figure out what to touch in each configuration file. After that, it was quite easy to publish everything: just like we publish with zest.releaser to PyPI, we publish with release-it to Verdaccio. We had to create the .npmrc and .yarnrc files in each of our packages, saying that our packages would be hosted there. And then we started trying to automate the building of them.
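As a sketch, those two files can be as small as this, assuming a hypothetical registry URL and a token passed in through the environment:

```ini
# .npmrc -- read by `npm publish` (hypothetical registry URL)
registry=https://npm.example.com/
//npm.example.com/:_authToken=${NPM_TOKEN}

# .yarnrc -- read by `yarn install` (Yarn 1.x syntax)
registry "https://npm.example.com/"
```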
How did we do this? We were already using GitLab, and I guess you know that GitLab has CI/CD options. We were already using CI/CD for running some tests and also for deploying some Django sites; we do not only Plone but also Django. So we started configuring the .gitlab-ci.yml files to manage all that stuff and let GitLab CI do the hard work. We also have some GitLab runners on our premises, so we are not relying on the public GitLab CI runners. In case you want to run your code on private infrastructure, it's quite easy to spin up a new runner, and it does the work. Our process is the same one we were already using on GitHub with the EEA.
We do our development in a branch called develop, or in branches taken out from develop. When ready, we open a merge request (in GitHub they are called pull requests, in GitLab merge requests; I don't care) to the main branch. At that point GitLab CI runs, the package is built, and the merge request is approved, or at least green. When it's green, the merge request is merged, and at that point the CI builds the package and publishes it to the private npm registry. And since that takes some time, it also sends an email to our developer team, so they know: OK, this package is already built, I can go on and create the front-end package. For the front-end package, the process is quite similar, but it takes more time.
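A rough sketch of what such an add-on pipeline can look like; the job names, image, and rules are illustrative assumptions, not the exact published configuration:

```yaml
# Hypothetical .gitlab-ci.yml for a Volto add-on: build on merge
# requests, release and publish to Verdaccio when main is updated.
stages:
  - build
  - release

build:
  stage: build
  image: node:16
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - yarn install
    - yarn build

release:
  stage: release
  image: node:16
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - yarn install
    - npx release-it --ci    # tags, bumps the version, publishes to the registry
```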
So we update, in the develop branch, the version of the add-on that we have just released in the previous step, and we create the merge request for main. At that step, yarn build is executed so that we can guarantee that the build is OK. If the merge request is OK and the build is OK, we merge it, and another pipeline runs to upgrade the version and so on, copy all the resulting files to the server (the staging server, or whichever server), SFTP them, restart supervisor, and send an email to the developer team. This way the developers just take care of upgrading the version of the package and issuing a merge request, and they don't have to do anything else. The drawback is that when we were publishing buildout-based or Python code, you could sometimes do a quick fix, push the code to the repository, re-run buildout, and everything was working. Now it takes about 15 minutes to publish a modification of the front end. But it's a controlled process, a complex process, so we think that having it automated lets the developers do their work instead of staring at the screen to check whether the server has been updated or not.
I am now going to show some parts of the pipelines we are using. We have everything published on GitHub, so I will share the link with you afterwards if you want to check the pipelines, improve them, or tell us we are doing it all wrong, or whatever. This is the part of the pipeline we use to release a Volto add-on. It prepares the repository that GitLab checks out and creates the .npmrc and .yarnrc files; here you can see them. Then it just runs release-it to create a new package and publish it wherever it needs to go. And after the script has run, it increments the version, pushes back to the development branch, and sends an email to the development team. The front-end release is quite similar.
When the merge request event happens in GitLab, it executes yarn install to gather all the packages, build all the dependencies, and prepare everything. Then it executes yarn build, and this is what takes a lot of time, because it has to download everything and so on. When the merge request is accepted, there is another pipeline that releases it and rsyncs or SFTPs all the contents to the server.
GitLab CI has a thing called artifacts, and shared caches (I don't know the exact name), and you can pass the results of one step to the next. So you can do some tricks to share generated files from one pipeline to another. In the first pipeline, where we build the front-end package in the merge request, we create a lot of files: not only the build files, but also the node_modules folder. We save all of that for the next step; otherwise we would be running yarn build twice, and we needed to avoid that because it takes about 15 or 20 minutes, depending on the capacity of the runners in your GitLab configuration. As I said, artifacts can be shared and cached, so each run can reuse the files from the previous step. And we are using a ton of variables, so we can parameterize all our builds with different configurations, different repositories, or different servers.
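The artifact passing can be sketched like this; the paths, job names, and variables are illustrative assumptions:

```yaml
# Hypothetical front-end jobs: the build output and node_modules are
# saved as artifacts so the deploy job does not run yarn build again.
build-frontend:
  stage: build
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - build/
      - node_modules/
    expire_in: 1 day

deploy-frontend:
  stage: deploy
  dependencies:
    - build-frontend       # pull in the artifacts instead of rebuilding
  script:
    - rsync -az build/ "$DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH/"
    - ssh "$DEPLOY_USER@$DEPLOY_HOST" "supervisorctl restart frontend"
```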
We are at the beginning of this process. We have two Volto sites almost ready to be published;
they are waiting for the clients' final approval, and we are preparing another one, another two, sorry. We are constantly checking these pipelines and these ways of upgrading the sites, and constantly updating these configuration files to get them as close as possible to perfect for our needs. We try to keep all of them updated, and not to start copying and pasting the GitLab CI configuration files and the other configuration files across all those projects. Because, as I said, there are four projects, we have eight packages to update each time. So we just wrote a Python script that downloads the configuration files from GitHub and updates everything automatically. If we make some updates in one of our projects, we just run the Python update-templates script for the front end, and everything is downloaded and configured correctly again. You will find all of it in this link.
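The idea behind such a script is simple; this is a minimal sketch under assumed file names and a placeholder template-repository URL, not the actual CodeSyntax script:

```python
"""Download shared CI/config templates into a project (sketch)."""
import pathlib
import urllib.request

# Hypothetical raw-file base URL of the templates repository.
TEMPLATE_BASE = "https://raw.githubusercontent.com/example/ci-templates/main"
FILES = [".gitlab-ci.yml", ".npmrc", ".yarnrc"]

def update_templates(project_dir: str = ".") -> None:
    """Fetch the latest version of each shared file and overwrite the local copy."""
    for name in FILES:
        with urllib.request.urlopen(f"{TEMPLATE_BASE}/{name}") as resp:
            pathlib.Path(project_dir, name).write_bytes(resp.read())
        print(f"updated {name}")

if __name__ == "__main__":
    update_templates()
```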
You also have the link to this talk there if you want to watch it later, as well as our Verdaccio configuration, the GitLab CI files, and all the other configuration files. Everyone is taking a photo. So that's all. This is our team at CodeSyntax. They are not all Plone developers, but we do a lot of Python; we do Django and Plone. We also like chess. I play chess; the others say chess is nothing, but OK. I'm here if you want to ask anything, or whatever.
No, we have planned that, but we haven't done it yet. We want to do that too. Yeah, we are thinking of doing something like you are already doing at the EEA, but we haven't got there yet.

Yes, we have our own PyPI repository. Yeah, PyPI, yeah. But it's just an nginx folder with a password. I mean, that works in Python.

No more questions? Yes, yeah, OK. The first question was whether we are also automatically updating, in the front end, the version of the add-on that has just been created. We are not doing that, but we plan to implement it. And the second one was whether we already have a private PyPI repository, and yes, we have one, but it's just an nginx folder. OK, so if there are no more questions, thank you very much.