How to deploy Volto sites automatically in a no-Docker scenario
Formal Metadata

Title: How to deploy Volto sites automatically in a no-Docker scenario
Number of Parts: 44
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/60215 (DOI)
Production Place: Namur, Belgium
Plone Conference 2022, 39 / 44
Transcript: English(auto-generated)
00:03
Hello, everyone. Thank you for attending my talk today. I'm Mikel Larreategi. I work for CodeSyntax in the Basque Country. I'm going to show you how we are deploying our Volto sites in an automatic way,
00:25
but not using Docker. OK. What is the reason we are not using Docker? It's a long story, but the point is that lately we
00:42
have been migrating all our Plone 4.3 sites to Plone 5.2. At the moment we started, Plone 6 was not even an alpha, I think. So we started migrating them to Plone 5.2 and doing some Volto development to build
01:04
the front end of those sites. We are not yet in the Docker world, at least for deployment, except for a project we have at the EEA; in the other projects we are not in the Docker world. We use Docker for development things,
01:20
for spinning up MySQL databases, for testing, or memcached, or whatever, but not for deployment. We are quite used to deploying our Plone sites with buildout. We have more than 70 Plone sites on more than 70 servers
01:41
in DigitalOcean, so we thought that in this case we were not going to change the way we were deploying our sites just because of Volto. We wanted the complete experience for our developers, too, and for the people who deploy the sites,
02:01
so we wanted to do everything in the same way. So what was our first approach to deploying our Volto sites? As I said before, we wanted to work like we did with buildout,
02:23
so why not copy the front-end package to the server, run yarn build, and start the server? That was our first idea, but it didn't work.
02:41
The yarn build took at least half an hour. We had to double the server's capacity, and even after that, the build took about 15 minutes. We had to upgrade the server's capacity twice just to run yarn build. Yeah, it was insane, but that was the case.
03:04
So we saw that that was not the solution at all. We couldn't work like with buildout, running buildout or upgrading the package and just restarting the server. At that time, we were already working on a project at the EEA.
03:20
We were using their deployment pipelines. Thank you all for all that work. So we thought that was something we could replicate or copy to do our deployments. So we went to automate somehow all that package building
03:45
and publishing. What we wanted to do was to automate the Volto add-on package creation and building, to tag the package and release it, or not, or whatever.
04:00
We wanted to automate the way the front end was being built. As I said, our build was taking too much time. And since our Volto add-ons and front ends were not hosted on GitHub, on public GitHub, they were client code living in private GitLab
04:21
repositories. We are using GitLab to host our client code and configuration and stuff. So we needed a way to publish those private packages in a kind of PyPI index, but for npm, for JavaScript
04:40
packages. We already have a private PyPI repository. It's just an nginx folder with password protection, and it works with buildout quite fine. You just add the repository URL in the find-links,
05:01
and it works. So we thought that could also be the way for npm, but it looks like it doesn't work like that in the npm world, in the JavaScript world. We didn't find a way to make npm work with several different repositories, using the public one and the private ones.
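The private PyPI-style setup described here (an nginx folder behind basic auth, consumed via buildout's find-links) can be sketched roughly as follows; the hostnames, paths, and credentials are placeholders, not the real CodeSyntax configuration:

```nginx
# nginx: serve a folder of sdists/wheels behind a password
location /packages/ {
    alias /srv/private-eggs/;
    autoindex on;                 # directory listing doubles as the index
    auth_basic "Private packages";
    auth_basic_user_file /etc/nginx/htpasswd;
}
```

```ini
# buildout.cfg: point find-links at the protected URL
[buildout]
find-links = https://user:password@pypi.example.com/packages/
```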
05:22
So we started looking for some private repository for npm packages. We found Verdaccio. I don't know how it's pronounced. Perhaps the Italians will tell me that it's not Verdaccio, but Verdaccio, or whatever.
05:40
Sorry. It's, as advertised, a private npm repository, and it works like a proxy. So you can configure npm and Yarn and all your JavaScript tooling to talk to Verdaccio, and it
06:02
will handle the packages that it hosts, and will also proxy to the public npm registry to download the packages that live outside. So with this, we just have to configure our tooling to get the packages from there, and it will handle everything.
06:22
We password-protected Verdaccio. It's just a Docker image with some YAML-based configuration. So after googling a bit, we managed to configure everything correctly. And we have published the configuration
06:42
we used on GitHub, so if anyone wants to use it or deploy it themselves, it's quite easy to do. As I said, we password-protected Verdaccio and created the npm token needed to communicate with it.
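The published configuration is not reproduced here, but a minimal Verdaccio setup along these lines might look like this sketch; the storage path, htpasswd path, and the @mycompany scope are placeholder assumptions:

```yaml
# config.yaml: minimal Verdaccio sketch (paths and scope are placeholders)
storage: /verdaccio/storage

auth:
  htpasswd:
    file: /verdaccio/conf/htpasswd
    max_users: -1   # disable open self-registration; accounts are created manually

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  # private scope: only authenticated users may read or publish, never proxied
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
  # everything else is read from the public npm registry through the proxy
  '**':
    access: $all
    proxy: npmjs
```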
07:03
And then came one of the hardest parts: configuring npm and Yarn to use that repository, because when you execute yarn install, it uses some configuration files,
07:22
and when you do npm publish, it uses some others. So we had to figure out what to touch in each configuration file. And after that, it was quite easy to publish everything: just like we publish to PyPI with zest.releaser, we publish to Verdaccio with release-it.
07:44
We had to create, in each of our packages, the .npmrc and .yarnrc files saying that our packages will be hosted there. And then we started trying to automate building them.
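Those per-package files could look roughly like this; `npm.example.com` stands in for the real private registry URL, and the token is read from an environment variable:

```ini
# .npmrc: used by npm (e.g. npm publish)
registry=https://npm.example.com/
//npm.example.com/:_authToken=${NPM_TOKEN}
```

```ini
# .yarnrc: Yarn (classic) reads its own file with its own syntax
registry "https://npm.example.com/"
```

This split is exactly the pitfall the talk mentions: yarn install and npm publish consult different files, so both have to point at the private registry.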
08:07
How did we do this? We were already using GitLab, and I guess you know that GitLab has CI/CD options. We were already using CI/CD for running some tests
08:24
and also for deploying some Django sites. We do not only Plone, but also Django. So we were already using that to release and publish some Django sites. So we started configuring the GitLab CI YAML files to manage all that stuff,
08:43
and let GitLab CI do the hard work. We also have some GitLab runners on our premises, so we are not relying on public GitLab CI runners. In case you want to run the code on private infrastructure,
09:04
it's quite easy to spin up a new runner, and it does the work. Our process is the same one we were already using on GitHub and with the EEA.
09:21
So we do our development in a branch called develop, or in branches taken from develop. When ready, we open a merge request (in GitHub they are called pull requests, in GitLab merge requests; I don't care) against the main branch. At that point, GitLab CI runs, the package is built,
09:44
and the merge request is approved, or at least green. When it's green, the merge request is merged. And at that point, the CI builds the package and publishes it to the private npm repository.
10:03
And since that takes some time, it also sends an email to our developer team, so that they know: OK, this package is already built, I can go on and create the front-end package. For the front-end package, the process is quite similar, but it takes more time.
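Assuming the flow just described, the add-on's `.gitlab-ci.yml` might be sketched like this; the image, job names, registry URL, and notification script are illustrative assumptions, not the published pipeline:

```yaml
stages:
  - build
  - release

build:
  stage: build
  image: node:16
  rules:
    # build on every merge request so the MR can go green
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - yarn install --frozen-lockfile
    - yarn build

release:
  stage: release
  image: node:16
  rules:
    # publish only once the MR has been merged into main
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    # write registry credentials, then let release-it tag and publish
    - echo "//npm.example.com/:_authToken=${NPM_TOKEN}" > .npmrc
    - yarn install --frozen-lockfile
    - npx release-it --ci
    # hypothetical helper that emails the developer team
    - ./scripts/notify-team.sh "package published"
```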
10:23
So we update, in the develop branch, the version we just released in the previous step. We create the merge request for main. At that step, the yarn build is executed, so that we can guarantee that the build is OK.
10:42
And if the build is OK, the merge request is OK, and we merge it. Then another pipeline runs to upgrade the version and so on, and to copy all the resulting files to the server, to the staging server or whichever server,
11:04
SFTP them, restart supervisor, and send an email to the developer team. This way, the developers just take care of upgrading the version of the package and issuing a merge request, and they don't have to do anything else. The drawback of this is that when
11:24
we were publishing buildout or Python code, sometimes you could do quick fixes and just push the code to the repository, rerun buildout, and everything was working. Now, it takes about 15 minutes to publish a modification of the front end.
11:43
But it's a controlled process. It's a complex process. So we think that having it automated lets the developers do their work and not stare at the screen checking whether the server has been updated or not.
12:03
I am now going to show some parts of the pipelines we are using. I have a link with all the pipelines. We have everything published on GitHub, so I will share the link with you afterwards if you want to check them, improve them, or tell us
12:24
we are doing this wrong, or whatever. This is the part of the pipeline that we use to release a Volto add-on. It prepares the repository that comes in from GitLab,
12:43
creates the .npmrc and .yarnrc files; you can see them here. And then it just runs release-it to create a new package and publish it
13:02
wherever it needs to. And after the script is run, it just increments the version, pushes back to the development branch, and sends an email to the development team. The front-end releasing is quite similar.
13:21
When the merge request event happens in GitLab, it executes yarn install to gather all the packages, build all the dependencies, and prepare everything. Then it just executes yarn build. And this is what takes a lot of time,
13:43
because it has to download everything and so on. And when the merge request is accepted, there is another pipeline that just releases it and rsyncs or SFTPs all the contents to the server.
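The deploy step described here could be sketched as a GitLab CI job roughly like the following; the server name, paths, and supervisor program name are placeholders:

```yaml
deploy:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    # copy the build output produced in the previous stage to the server
    - rsync -az --delete build/ deploy@staging.example.com:/srv/volto/build/
    # restart the Node process managed by supervisor
    - ssh deploy@staging.example.com "supervisorctl restart volto"
```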
14:04
We are using GitLab. CI has a thing called artifacts (shared artifacts, I don't know the exact name), and you can pass the results of one step to the next.
14:25
So you can also do some tricks to share generated files from one pipeline to another. In the first pipeline, where we build the front-end package in the merge request, we create a lot of files: not only the build files, but also the node_modules folder.
14:41
So we save those for the next step. Otherwise, we would be running yarn build twice. And that was necessary, because it takes about 15 or 20 minutes, depending on the capacity of the runners in your GitLab configuration.
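Handing `build/` and `node_modules/` from one stage to the next can be sketched with GitLab CI artifacts; the paths and expiry are illustrative:

```yaml
build:
  stage: build
  script:
    - yarn install --frozen-lockfile
    - yarn build
  artifacts:
    # pass the generated files to the release/deploy jobs
    # so yarn build does not have to run twice
    paths:
      - build/
      - node_modules/
    expire_in: 1 day
```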
15:04
As I said, artifacts can be shared and cached, so each time you run the pipeline, you can reuse the previous step's files. And we are using a ton of variables, so we can parameterize all our builds,
15:21
using different configurations, different repositories, or different servers. We are at the beginning of this process. We have two Volto sites almost ready to be published.
15:43
They are waiting for the clients' final approval. And we are preparing another one (another two, sorry). And we are constantly checking these pipelines and these ways of upgrading the sites, so we are constantly updating these configuration files
16:04
to have them as close to perfect as possible for our needs. We try to keep all of them updated, and not start copying and pasting the GitLab CI
16:25
configuration files and some other configuration files across all those projects. Because, as I said, there are four projects, so we have eight packages to update each time. So we just wrote a Python script that downloads the configuration files from GitHub
16:41
and updates everything automatically. So if we make some updates in one of our projects, we just run the Python update-templates script for the front end, and everything is downloaded and configured correctly again. You will find it all at this link.
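A script like the one described could be sketched as below. This is not the real CodeSyntax script: the template names, the raw-URL base, and the injectable `fetch` parameter are all assumptions for illustration.

```python
"""Sketch of a template-updater like the one described in the talk."""
from pathlib import Path
from urllib.request import urlopen

# Hypothetical mapping of project kind -> config templates to refresh
TEMPLATES = {
    "frontend": [".gitlab-ci.yml", ".npmrc", ".yarnrc"],
}
# Placeholder repository holding the canonical copies
RAW_BASE = "https://raw.githubusercontent.com/example/ci-templates/main"


def update_templates(kind: str, dest: Path, fetch=None) -> list[Path]:
    """Download each template for *kind* and write it into *dest*.

    *fetch* is injectable so the download can be stubbed out in tests;
    by default it performs a plain HTTP GET.
    """
    if fetch is None:
        fetch = lambda url: urlopen(url).read()
    written = []
    for name in TEMPLATES[kind]:
        data = fetch(f"{RAW_BASE}/{kind}/{name}")
        target = dest / name
        target.write_bytes(data)
        written.append(target)
    return written
```

Running it for a project (`update_templates("frontend", Path("."))`) would refresh every shared config file in place, which matches the "run it once per project instead of copy-pasting" workflow the talk describes.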
17:04
You also have the link to this talk if you want to watch it later, and also our Verdaccio configuration, the GitLab CI files, and all the other packages,
17:20
all the configuration files there. Everyone is taking a photo. So that's all. This is our team at CodeSyntax. They are not all Plone developers,
17:42
but we do a lot of Python. We do Django and Plone. We also like chess. I play chess; the others say chess is nothing, but OK. I'm here if you want to ask anything, or whatever.
18:20
No, we have planned that, but we haven't done it yet. But we want to do that, too. Yeah, we are thinking of doing something like you are already doing at the EEA, but we haven't got there yet.
18:47
Yes, we have our own PyPI repository. Yeah, PyPI, yeah. But it's just an nginx folder with a password. I mean, that works for Python.
19:07
No more questions? Yes, yeah, OK. The first question was whether we are
19:21
updating the version of the add-on that is being created automatically in the front end, automatically as well. We are not doing that, but we plan to implement it. And the second one was whether we already have a PyPI repository, a private PyPI repository,
19:43
and yes, we have one. But it's just an nginx folder. OK, so if there are no more questions, thank you very much.