
University of Oxford - Plone 4 to Plone 6 - Upgrading the beast


Formal Metadata

Title: University of Oxford - Plone 4 to Plone 6 - Upgrading the beast
Number of Parts: 44
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Place: Namur, Belgium

Content Metadata

Abstract
Haiku is a software as a service CMS utilising Plone as the engine. Haiku is aimed at research and higher education institutes. With over 140 websites at Oxford University using Haiku, we will talk about some of its key features and the long anticipated upgrade from Plone 4 to Plone 6, covering the good, the bad, and the epic. We will cover the small fish in the logistical pond, as well as the whales under the water.
Transcript: English (auto-generated)
Okay, I present to you Tim Jones, J.L. Caddy, Lucas Zick and Philip Bauer to present Upgrading
the Beast at the University of Oxford.
Thank you very much. Thank you all for joining us for this talk today. As you can see from the title, it's about Upgrading the Beast, which is our implementation at the University of Oxford, more specifically in the Medical Sciences Division there. Now I'm sure a lot of you are here to hear about the finer details and the probably
more interesting stuff about the project from Philip and Lukasz, but Jay and I work more on the product management and project management side of it. So we thought it'd be useful to give you a bit of background about the project, where it came from and why, just to give it a bit of context as to where we are. So, as we've already been introduced: specifically for this part of
the project, we've been working with Philip Bauer, as a lot of you will know. We've also got one of our other colleagues here in the audience, Artur, who is also a developer with us; Lukasz, who you'll be hearing from shortly; Jay, who's the product manager; and myself, Tim, the head of service delivery at Fry.
So starting out with the University of Oxford: when we first engaged with them, it was about 11 years ago, which was actually when I started at Fry IT myself, so I've been around a little bit of time. At that point, the university's Medical Sciences Division had one lady creating hundreds of websites on different technologies, ranging from WordPress to Plone to
basic HTML, and all of this was very inconsistent and unstandardised. There were multiple different versions and technologies to be contended with. The university quickly realised that this was not going to be sustainable in the long term.
So when we first started working with the university, we were working on standalone website projects for individual research groups and departments, and we were also helping with some more generic technical help for the lady who was managing, like I say, hundreds of websites across different versions and things like that.
Collectively, along with the university and working with this lady, a lady called Anne, we very quickly realised that their current model wasn't going to be sustainable for them or something that we would be able to support as well as we would hope going forwards. And this was purely because the demands and expectations of the departments and units
far outweighed the resourcing, like I say, the one lady, and also the time available. It was clear then that a potential partnership with an external organisation like ourselves would provide more consistent resourcing, management and support of the websites within the division, and this would be a huge benefit to them.
This relationship would also help to provide better stability, support, training, and accountability, as well as evolution of a common platform for the division's websites. Initial discussions led us to think about how we might be able to support the current suite of between 200 and 300 websites, some legacy and some active.
We quickly realised, like I say, that it wouldn't be sustainable. We couldn't come up with a model that would even get us close to lining up all the different websites and platforms onto consistent versions, meaning that you could actually do something proactive rather than just constant maintenance patches and upgrades.
This was made even clearer with the different demands from the different divisions and units across the university, especially because everyone thought they were a little bit special. So, out of this conversation came the idea of a turnkey solution,
a solution that would enable departments and units to request new websites and have them started rapidly with a very straightforward set of features and functionality that anyone could get into quickly. One of the things that was also demanded of this was the ability to customise the websites quickly and easily so that we could still follow some of their brand guidelines.
We quickly got stuck in, working with three pilot departments within the Medical Sciences Division at Oxford to understand what version one of Haiku would look like. Haiku is what we've called the software-as-a-service platform that we deliver to the university; it was previously known, imaginatively, as the Oxford University MSD Turnkey Solution.
In getting to version one, we uncovered some of the departments' biggest pain points, such as integrations with publication systems, university-wide event systems, single sign-on, letting people update their own profiles but not ruin them, and all different bits and pieces like that, as well as what areas would be considered for a good pilot.
We agreed that in the first implementation, we'd focus on providing the essential content types, publications, integrations, and the granular permission schemes, hence leveraging the power of Plone, and a timeless, as far as possible, theme, and some flexibility within constraint.
We quickly got out an early beta version, and we worked quickly with the pilot departments to refine the system, its content types, layouts, and integrations. And within a year and a half, the first website had gone live, with a couple more following quickly afterwards. It's important now to note that at the University of Oxford,
no one can tell anyone what to do. Unlike some other institutions, where departments and units must go to central IT services to procure a website or system, Oxford lets you do what you want, when you want. If you want your friend's uncle's cousin's twice-removed friend from down the road, who did an HTML course last year in their summer holidays,
to build your website, go for it. This had its own issues, not only with maintenance, but consistency and brand awareness. One of the underlying themes within the Medical Sciences Division that they had in mind when looking to work with us on a more turnkey product was to take that opportunity to steer their departments and units
to have a more consistent branding guideline, so that if you went from the Department of Pediatrics to the Department of Psychiatry, you knew you were still within the same division and also the same university, which before using Haiku might have been a bit tougher. Once we had the initial three websites set up,
we then began what we thought would be the biggest challenge: selling people a product and service that they were used to getting for free, as it had been with the one internal person managing their web estate up until this point. I'm pleased to say that we were quickly proved wrong. Once we had launched, other departments and units came to us asking us to create websites for them
purely through word of mouth and from seeing what their peers were creating. All of a sudden, there was an understanding that by paying, they would get a service that suited them, and that they would be able to build and get the feeling of a community which they could all contribute to, leveraging each other's knowledge and skill sets. All of this drove towards creating one more coherent
Medical Sciences Division brand within the University of Oxford. As the beast grew, we realised that we needed more people at Fry to support and manage Haiku. This was why we brought in Jay, our now product manager, to work closely with the internal team and clients alike to continue the successes. One of the things I think it is also important to point out
is that alongside Jay, as you saw from the intro slide, we have two to three developers that actually work on this at any one time. Hence, we actually work with Philip at the moment as well for, like I say, upgrading the beast. So now I'm going to hand you over to Jay, who's going to continue a bit more about the website project. Thank you, Tim.
So with the continued rollout of new websites, the inevitable happened. With each new website and therefore new projects, requests for new functionality came in on a bespoke basis to fit the requirements of these different websites, these different projects. This feeling was compounded... Thanks.
This feeling was compounded by the fact that Haiku had now become a paid product as a service. So we quickly realized that we were not ready to take on some of the larger departments with more complex requirements and also understood the importance of not trying to run before we could walk. We were, of course, listening to these bigger departments,
guided by their ideas, even if we couldn't start to work on them at that time. So moving forward, by maintaining one code base and system, it was easy for us to roll out new sites which fitted the model, as well as to support and service them. This meant more manageable development and maintenance, which also gave us a bit more time and space
to focus on the product. So moving forward from there, we had the resource and time to start working on new functionality that would benefit all customers who are using the platform. And this process we called Haiku product evolution. So Haiku product evolution consists of a variety of activities, including holding regular user groups,
allowing our users to suggest and upvote new functionality ideas that they want to see on the platform, which can be done through the Haiku HQ customer platform that they have access to; monitoring and understanding market changes and trends; and, crucially, closely nurturing customer relationships
for a natural product development. So having conversations with the people who are using the platform is, in my opinion, the most important way of understanding and learning about which direction you should be heading in as a manager of this product and as a company. So carrying out these activities, we were able to take specific requests
and create general solutions for our growing and varied customer base. This helped to guide our development of Haiku over about 10 years, getting us to the point where we were pre-migration.
During this 10-year period, new content types and new functionality were developed, and we were even able to develop a content-sharing functionality, which allowed websites to share content between one another, allowing editors to clone and subscribe to content, keeping it in sync with the original on the cloned website,
in addition to simply taking a copy of some content, migrating it over to whatever website you wanted to and making some tweaks to it. So this is an example of some functionality that was extremely powerful for the Medical Sciences Division at Oxford, who found great value in the collaboration and republishing of content,
which benefits all of them, especially in this field of medical research and higher education. So to summarize where we've gotten to since those first three pilot websites that Tim mentioned, we've increased to over 140 websites running the same version of Haiku across multiple clusters. We've made available a well-planned suite of content types and functionality.
We've developed multiple integrations with university platforms, including Symplectic, which is a publications aggregator, university maps, and open talks aggregators, amongst others. We've also provided an impactful design system which allows non-technical site owners to champion style and design initiatives
in line with contemporary web design practices and in line with their own brand requirements, etc. We'll demo some key features later on. But first, we'll cover how we are getting that complex system, Haiku, onto Plone 6.
So you might wonder why it took us so long to carry out this upgrade. And that's a fair thought. Well, there are a few things to consider. The fear of the monster. It's the not knowing what you don't know. That can be a scary challenge. Philip used the phrase whales under the water, which I thought was quite poignant.
Working as part of a small team is great, but it can be challenging, especially when you're faced with such a monumental task and just general uncertainty about the approach to migration. As I'm sure you'll appreciate, managing something like this with such a small team is incredibly challenging. Just keeping up with documentation and user guides,
as well as having to update all of that for the upgraded product, is a real challenge and something that we still grapple with today. Moving forward: the migration plans for this date back a couple of years. However, upgrading Haiku's underlying engine is not something that we had undertaken before as a project.
The initial CMS solution we provided to Oxford ran on Plone 3, although Haiku began its life on Plone 4. So the Plone 6 migration plan kind of began to pick up pace with a series of meetings with Philip Bauer, whose exceptional experience in migrations has become an absolutely invaluable asset for us in helping to structure and undertake this migration
alongside our committed developers, Lukasz and Artur. So we're now going to hand over to Philip and Lukasz, who are going to talk through the technical aspects of this behemoth task, migrating Haiku. Yeah, thanks.
This is not going to be too technical, but a couple of details are worth mentioning maybe. So I was approached in spring last year with this project and the requirements were special compared to all other migrations that I've done before.
Mostly because there are about 140-plus sites with almost the same installation. By almost, I mean that the content structure is different. Obviously, the content is different. They have slightly different designs,
different configurations, like email server and maybe admin users, and also different installed add-ons, because the features provided by Haiku are encapsulated in add-ons. There's nothing very different from Kuei, for example.
There's an add-on, and enabling or installing it is basically some configuration that lives in the database that makes a feature available or not. But the biggest thing in this case, and I talked about that at length yesterday, is that nothing should change,
which is usually an approach to a migration where I would say: don't do that, because you can't sell it. But they decided to do it. And once I saw the old product, I decided that it's actually a good idea, because the old product actually looks really good and works really well.
So we did not approach it as a relaunch, not least because they don't have to sell it to the client in that way; it's a software-as-a-service thing, so different rules apply. It keeps all the features the same, so going to Volto is not an option; that's a very important thing that also came out, and I'm not going to go into it here.
So first I checked: I read everything and found out whether we can run this in Python 3. Everything content-wise was already Dexterity, so that was good.
So that saves a lot of trouble and time. There's a ton of code: 2,000 Python files (not lines of code; that would be easy) and 500 page templates. So there's a lot of stuff to look at and evaluate. But the quality overall is actually pretty good.
So kudos to the development team. Even though it was Plone 4 code, and I can always complain, there were very few reasons to complain about anything there. So really, really good work. The test coverage, though, was, not that surprisingly, really not good.
And that's a challenge when you do a migration, because when you do a migration, everything is broken. But if the tests are still green, then it seems something might be good. So you have very little indication if the test coverage is not that good. Also, and that's the big thing,
the add-ons and dependencies were interesting, because they used collective.cover. And they don't just have a cover page for every site; they have tens of thousands of cover pages. This is the main tool to build content. So obviously we thought: okay, what should we do?
Migrate that to Mosaic? Mosaic was already going to work in Plone 5.2 and in Python 3, whereas collective.cover was not really ready yet. But I found out there is a branch
that supported 5.2, though not yet Python 3 and not yet Plone 6. I looked at that and at the work that Kleber Santos and Wesley Barroso Lopez did, and I saw that it was really, really good work. There was not that much left for us to do to make it work. So I figured out that, yes, we can actually make that happen.
And we did that. First, we made a plan. Since there is no Archetypes-to-Dexterity migration needed, my first thinking was: okay, let's do the migration in place. Since then, I have changed my approach
and would always plan for an export/import migration; only if that, for some reason I can't think of right now, wouldn't work, then maybe migrate in place. But that was the thinking at that point.
And so there were a couple of steps that we needed to do to get there: an in-place upgrade to Plone 5.2, then the Python 3 support work, migrating the database from Python 2 to 3, which meant the code base had to support both versions.
This is basically exactly the same stuff that we did for Plone itself, without flying that much blind, which is good. And then the idea was to combine all these steps in a pipeline of automated upgrades and run these for all 150 websites.
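The pipeline idea here, every site going through the same ordered upgrade steps, can be sketched as follows; the step names and the `plan_upgrades` helper are hypothetical illustrations, not the actual Haiku tooling:

```python
# Hypothetical sketch of a per-site upgrade pipeline like the one
# described in the talk; the step names are invented for illustration.

UPGRADE_STEPS = [
    "upgrade_in_place_to_plone_5_2",  # still running on Python 2
    "port_codebase_to_python_3",      # code must support both 2 and 3
    "convert_database_py2_to_py3",    # e.g. with zodbupdate
    "upgrade_to_plone_6",
]

def plan_upgrades(site_ids):
    """Return the full ordered work list: one (site, step) pair per task."""
    return [(site, step) for site in site_ids for step in UPGRADE_STEPS]

if __name__ == "__main__":
    plan = plan_upgrades(["site-a", "site-b"])
    print(len(plan))   # 8: every site runs every step, in order
    print(plan[0])     # ('site-a', 'upgrade_in_place_to_plone_5_2')
```

Running the same deterministic step list for all ~150 sites is what would make the batch automatable; any per-site special case becomes another step rather than manual work.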
And then the plan was to migrate to Plone 6 as soon as possible. Later it turned out that that's not a really good idea. So, the initial tasks; and I stress those and don't talk about the migration itself so much,
for reasons that I'll explain later. A lot of work went into upgrading the application. And you remember: 2,000 Python files. I know there are a lot of automated helpers, and I wrote the documentation for those and know how to use them,
but still there's a lot of manual work to get the application to actually start up on Python 3. Basically, we followed the docs for migrating Plone 5.2 to Python 3. The good thing is I've written them, so I know how to use them: get it running first and worry about any data later, obviously.
And in October 2021, we actually were able to start it up in Python 3 without errors. We were able to create a site and install all Haiku packages, but everything was still broken. The instance didn't die and there were no tracebacks, but it looked horrible and nothing worked. Still, we had a working setup to actually start on the tests.
Obviously, these were also broken or not even written. And parallel to that, I started the first attempts to export data and import it into a brand new 5.2 site at that point in Python 3. And that looked promising. And in November, actually, we were able to switch to Alpha 1 of Plone 6.
We had updated all Haiku packages for Python 3, and that was an intense month, as you can guess. All tests were passing — everything was actually green. collective.cover was usable in Plone 6. I was able to export all relevant data from a Plone 4 site
and import all relevant data into a Plone 6 site. At that point we decided: okay, let's drop the original plan and use collective.exportimport instead. So the old plan switched from the one on the left — an abbreviated, simplified version — to the also abbreviated and simplified version on the right side: everything goes in one big step. So how did that work out? Basically, we had to write migration hooks instead of upgrade steps, which is good because they are much faster — they don't have to deal with the database, just with a bunch of JSON, mostly. And some very important pieces of code ended up in collective.exportimport itself, so the community got something out of it. I'm going to show some of that. But even more ended up as examples in the docs of
collective.exportimport, which is even better. The point here is that the default features of collective.exportimport — after some minor changes; okay, not minor, some changes — did 99% of the work of the migration. Only the remaining 1% was edge cases and crazy add-ons like collective.cover, and we don't want that in the core of collective.exportimport — it's an add-on itself, but we don't want that code in the add-on either. Adding code examples to the docs is what we did instead. So what can you steal?
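To give a taste of what those examples look like: collective.exportimport lets you subclass its import view and massage each item's JSON before it becomes an object, via methods following its documented `dict_hook_<portal_type>` naming convention. The field fix-ups below are invented for illustration — the point is that the transformation is plain dictionary work:

```python
# JSON-level migration hook, in the style of collective.exportimport's
# dict_hook_<portal_type> methods. The field renames are invented
# examples, not Haiku's actual schema.

def dict_hook_department(item):
    """Fix up one exported item (a plain dict) before import."""
    # Rename an old Archetypes-era accessor to its Dexterity field name.
    if "getRemoteUrl" in item:
        item["remoteUrl"] = item.pop("getRemoteUrl")
    # Drop a field that no longer exists in the new schema.
    item.pop("obsolete_field", None)
    # Rewrite the portal_type if the type was renamed.
    if item.get("@type") == "OldDepartment":
        item["@type"] = "department"
    return item

# In a real migration this method would live on a subclass of the
# @@import_content browser view; here we just call it on raw dicts.
```

Because the hook sees only JSON, it runs without touching the database at all — which is exactly why these hooks are so much faster than classic upgrade steps.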
I'm going to show you — I have a tab open. I showed this at the migration training we ran yesterday, so some of it you may have seen already. There are lots of code examples here; I need to go to the navigation at the top to find the things that are relevant for this project.
Hang on, where is it? Somewhere here it says collective.cover — there it is. Just one example: there is an example of how to export collective.cover content and import it again. And you can obviously steal that and many other things
that are in that documentation. So some tiny examples for how that works. I'm actually not going to show code examples for any of that because I did that for an audience of 25 people yesterday who actually will have to do migrations in the next couple of months.
So look at the training documentation — there is a chapter on migration best practices, I think it's called. If you're planning to do a migration, that's where you go to read all that stuff, so I don't have to read it to you. In our case, we have instance behaviors, we have annotations, and we have marker interfaces.
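For marker interfaces, the pattern in those docs is to serialize them as dotted names next to each item on export and re-apply them on import. A hedged, Plone-free sketch of the dictionary side — the key name and the interface names here are our own invented conventions; the real import code would resolve each dotted name and call `zope.interface.alsoProvides`:

```python
# Sketch: carry marker interfaces through an export/import round trip
# as dotted names. The key "exportimport.marker_interfaces" and the
# interface names are assumptions for illustration.

RELEVANT_MARKERS = {
    "haiku.interfaces.IResearchSection",
    "haiku.interfaces.ILandingPage",
}

def export_markers(item, provided_dotted_names):
    """Store the markers we care about on the serialized item."""
    markers = sorted(RELEVANT_MARKERS.intersection(provided_dotted_names))
    if markers:
        item["exportimport.marker_interfaces"] = markers
    return item

def import_markers(item):
    """Return the dotted names to re-apply. The real code resolves
    each name and calls alsoProvides(obj, iface) on the new object."""
    return item.get("exportimport.marker_interfaces", [])
```

Filtering against a known allow-list keeps accidental, environment-specific interfaces out of the export.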
We have cover — that was the code I showed you. And here is a nifty little thing I added a screenshot for: we export registry settings that are not the default value. That can be helpful because, remember, it's not one site where everything lives in an XML file because we're good developers who put all our configuration there. These are sites managed by editors: they get configured, the configuration ends up in the Plone registry, and it differs from site to site — and everything that differs from the default can be exported with that. There's a pull request against the collective.exportimport documentation that has all of this code for you to grab; nothing to do except copy and paste. Similar with settings, but that code is even simpler. On the import side we use something like import_deferred — whoever was at the training yesterday knows what I'm talking about.
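The "export everything that differs from the default" idea boils down to comparing each registry record's value with its field default. A hedged sketch of that comparison — in Plone you would iterate `getUtility(IRegistry).records` and read `record.value` against `record.field.default`; here the records mapping is modeled as plain `(value, default)` pairs:

```python
# Sketch: collect registry records whose value differs from the default.
# `records` stands in for plone.registry's IRegistry.records mapping,
# modeled here as {name: (value, default)} pairs.

import json

def export_changed_settings(records):
    """Return a JSON string of only the non-default settings."""
    changed = {
        name: value
        for name, (value, default) in records.items()
        if value != default
    }
    return json.dumps(changed, sort_keys=True)
```

Applied per site, this yields a small JSON file per portal instead of 140 full registry dumps.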
And the tiles for cover — not that interesting. The only thing I'll show here is the trick with the event handlers. When you migrate a site and import the data, imagine you have a content type like Department: when an editor adds a department, you automatically want to populate that area — there needs to be an image gallery, a person gallery, and whatever else. All of that needs to be auto-generated and auto-configured, but during a migration you obviously don't want that, and there's a very, very easy fix. At the beginning of the migration step, you put an "importing" marker — or whatever you want to call it — on the request, and in the event handler, which is the bottom part, you check: is the marker there? Then please don't do anything. It saves you a lot of time. And since collective.exportimport doesn't mess much with the event handlers of Plone core itself, there's no problem there — you only have to take care of your own event handlers. That's always good. There were a couple of challenges. Where are my challenges? One is that there are, as I said, 140-plus portals
with different settings. Another is that there were instance behaviors, and the hardest thing in Plone — as always in computer science — is cache invalidation: invalidating the caches of the schema, on objects, on the request. It lives everywhere; it's not that easy. If you want things to happen when a behavior is present, because you need the data to actually be deserialized — sorry, that was very technical — that was hard. As I said: cache invalidation. It was also hard to develop against the Plone 6 coredev, because there was a lot of time
between Plone 6 Alpha 1 and Alpha 2, and we obviously had to pin revisions because there were fixes that we really, really wanted. That was annoying — waiting for releases, annoying the maintainers with questions — so it was a bit hard. There was also a fork of plone.restapi, which was like the most horrible idea I've ever seen; we removed that. And collective.cover was also a challenge. But a couple of good lessons: collective.cover is actually awesome. It really works well and has a great user interface. Obviously I prefer Volto to Classic, but I know there's no chance to just move everything to Volto here — it doesn't compute. Lukasz is going to show you some of that. collective.exportimport — and this is more important — can solve even the toughest migrations; I don't think there is anything it can't do with some additional coding on your part.
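One example of that kind of additional coding is the request-marker trick for event handlers described a moment ago. A hedged, Plone-free sketch with invented names — in real code the marker is usually set on the Zope request and the handler is a normal `IObjectAddedEvent` subscriber:

```python
# Sketch of the "don't fire content-setup handlers during import" trick.
# MARKER, the request (a plain dict here), and the handler body are
# illustrative, not Haiku's real code.

MARKER = "haiku.migration.importing"

def begin_import(request):
    """Called once at the start of the migration step."""
    request[MARKER] = True

def on_department_added(department, request):
    """Subscriber for a newly added Department (sketch).
    Pre-generates the standard sub-content for editors,
    but does nothing while content is being imported."""
    if request.get(MARKER):
        return  # importing: the imported children arrive on their own
    department.setdefault("children", []).extend(
        ["image-gallery", "person-gallery"]
    )
```

Because the marker lives on the request, it vanishes automatically when the migration request ends — no cleanup step needed.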
It is faster by orders of magnitude — and by faster I don't only mean that it finishes fast, but that the turnaround is very rapid: if you export only one content type, or a part of the site, or tackle one problem at a time, you get very quick results and can develop iteratively and quickly. An in-place migration is much, much harder, because you always have to handle these huge databases — unless you strip your site down first, which also takes a very long time. So that is a huge, huge benefit, and it also allows you to move stuff around. That's the main takeaway here.
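That iterative loop is cheap precisely because the export is just a list of JSON items. A sketch of slicing an existing dump down to one content type or subtree, so you can re-import only the part you're currently fixing — the `@type`/`@id` keys follow the export format, while the helper names are our own:

```python
# Sketch: filter an exportimport-style JSON dump down to one
# portal_type or one subtree, for a quick import/fix/repeat loop.

import json

def filter_items(items, portal_type=None, path_prefix=None):
    """Keep only the items matching the given type and/or path."""
    out = []
    for item in items:
        if portal_type and item.get("@type") != portal_type:
            continue
        if path_prefix and not item.get("@id", "").startswith(path_prefix):
            continue
        out.append(item)
    return out

def slice_dump(src_json, **criteria):
    return json.dumps(filter_items(json.loads(src_json), **criteria))
```

Re-importing a few hundred filtered items takes seconds, where replaying an in-place upgrade over the full database takes hours.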
I never used to think about this, since I'm a developer and get paid by the hour, so I'm fine if things take a long time. But for Plone — the community, the ecosystem, the companies, and the users — migrations are a coffin nail. It was horrible because they are so expensive in time. And I'm not just talking about having to update your code from Python 2 to Python 3: the migrations themselves take so much time. Believe me, I've written most of the migration code, the hard parts at least, and even for me they take a lot of time. For my clients, they're really expensive. And as I said before, I focused a lot on the Python 2-to-3 update of the code and the application: of the time I billed to my client, only about 15% was the actual migration. Everything else was the update to Python 3, getting the tests running, and all that stuff. So the migration was a very small part of the project budget in this case. And that's a huge change for the Plone community, because now selling migrations is much easier because they cost less,
it's as simple as that. Yeah, that's me and Lukasz. Show them what we have. Okay, thank you. Hello. I'm going to tell you about some features we think might be interesting which helped us to keep the project growing for so many years.
And I would like to start with context behaviors. The idea started when we came to the university: they already managed a lot of sites, on different versions. We found out that this was mostly because one department wanted one Plone add-on for blogging and another department a different add-on for the same thing — and that's why they were holding different stacks on different versions of Plone. From that came the idea that we could handle a lot of this not by creating new content types, but by using the functionality of behaviors — without producing a large number of content types. We ended up with a solution for local behaviors, and we needed to expose it to editors. So we provided them with a simple form where, on most content types, they can select — activate, actually — various kinds of context behaviors.
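Conceptually, a context behavior is just an extra entry in the set of behaviors consulted for one object. In Plone this runs through a custom `IBehaviorAssignable` adapter plus marker interfaces and careful schema-cache invalidation; the pure selection logic looks roughly like this, with all names invented:

```python
# Sketch of resolving "effective" behaviors: the type's FTI-level
# behaviors plus any enabled locally on the object. In real Plone this
# is an IBehaviorAssignable adapter; here objects are plain dicts.

FTI_BEHAVIORS = {
    "department": ["plone.basic", "plone.namefromtitle"],  # assumed
}

EXCLUSIVE = {"haiku.landingpage.a", "haiku.landingpage.b"}  # assumed

def enable_local_behavior(obj, name):
    enabled = set(obj.setdefault("local_behaviors", []))
    if name in EXCLUSIVE:
        enabled -= EXCLUSIVE  # exclusive behaviors replace each other
    enabled.add(name)
    obj["local_behaviors"] = sorted(enabled)
    # Real code would also apply the behavior's marker interface and
    # invalidate the Dexterity SCHEMA_CACHE so the schema is picked up.

def effective_behaviors(obj):
    return FTI_BEHAVIORS.get(obj["portal_type"], []) + obj.get(
        "local_behaviors", []
    )
```

The "exclusive" rule mirrors the landing-page behaviors mentioned above: activating one deactivates the others.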
For example, there are behaviors which can only be activated exclusively, which is mostly used for landing pages and helps editors with the initial setup of a landing page. If we have a section which holds a lot of researcher profiles, it needs some constrained types set up, it needs a display view selected, and sometimes it needs sub-content pre-generated — editors can achieve all that by just enabling such a behavior, and it does the work for them behind the scenes. Additionally, a lot of functionality can be enabled just by marking the content. That can be done in the ZMI, but this way we can expose it to editors, so they can mark some content with marker interfaces, as well as extend content instances in a similar way to what we did in the past with the schema editor — so you can easily add additional fields to individual objects rather than to all instances of a content type. Some of the maybe more interesting functionality in these context behaviors is the geolocation-based redirection, which decides, if a visitor comes from a certain country, where to redirect them inside the website; or sharing titles across a whole section — which might sound weird, but there was a use case. Moving on to cover pages, which we adopted very early
and we decided on collective.cover because at that time Mosaic wasn't in a state where we thought we could use it, and collective.cover suited our needs better. But we needed to extend it, and so we decided to customize it — because it's a beast of its own, actually. We customized the layout in the compose view, which is used to manage the layout of the page and to create the individual tiles and their content, to provide a slightly different, more suitable user experience.
One of the key changes was that, in addition to settings for the individual tiles in the layout, there could be individual settings per layout row and layout column, which was widely used to make, for example, full-page-width content inside the cover page, and various background colors, text colors, spacing, etc. Next to that, we also brought in something called private tiles, which was used in combination with another flavor of Haiku — configured slightly differently, mostly via add-ons — that served as a separate intranet site. There they needed to publish some content that was also propagated to the public sites: they managed the covers as they normally do in the intranet, but marked particular tiles as private, so when we propagated such a cover page to the public site, those tiles were omitted. We also developed a bunch of custom tiles, because we quickly found out that the default tiles in collective.cover are great, but sooner or later they ended up for us as templates — a starting point, a place to look at how to do it — and most of them we had to subclass and extend, etc.
Moving on to what was most essential for the university: the specific research content. This was basically about presenting research and everything around the research profiles — something like a university Facebook inside the Medical Sciences Division, where each researcher manages their own page. Researchers are grouped into research groups, so each research group needs its own presentation, and each research group can appear on multiple websites under a different research theme. All of this was built on quite a large web of relationships, from each content type to other content types and back. One example: each research profile has some related research publications, and the publications of all researchers who are members of a research group's team are considered publications related to that group, etc. For the publications we use an external source which aggregates publications from multiple research sources. It's called Symplectic Elements and is the standard at the university, so we had to adapt to and integrate with it — as well as with the Altmetric service, which provides that small badge, a kind of score for the research content.
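The relationship web is broad, but an individual rule like "a group's publications are those of its members" is simple set logic once the relations are resolved — in Plone you would query the zc.relation catalog behind the relation fields. A sketch with invented data shapes standing in for those lookups:

```python
# Sketch: derive a research group's related publications from its
# members' publications. The dict shapes are invented stand-ins for
# the real relation-catalog lookups.

def group_publications(group, profiles):
    """Union of the publications of every member of the group,
    de-duplicated and kept in a stable first-seen order."""
    seen, result = set(), []
    for member_id in group["members"]:
        for pub in profiles[member_id]["publications"]:
            if pub not in seen:
                seen.add(pub)
                result.append(pub)
    return result
```

Keeping the derivation on the read side means membership changes never require re-stamping publications onto the group.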
Also, what we came up with for the research group pages is a combination of the cover layout — where you can manage a bunch of tiles — with part of the page still held statically in the page template.
For example, this is the static part of the research group's page, while underneath there are tiles and a proper cover page. Then we move on to content sharing, which we had to introduce at some point because customers find it very useful to share content. We came up with the idea of a centralized repository for content. In short, it works like this: using a subscriber in the Haiku site, we let the central repository know that there is new content, or an update of existing content, and the repository extracts the content and stores it. It's currently based on Pyramid with an SQL database, and it indexes some of the data in Elasticsearch. This way we were able to let users — based on some settings and restrictions — fire up a form, which is on this screenshot, and search other sites filtered by type and other criteria. They can pick a page from another website and clone it into their own website, using it as a content template so they can start fresh: if they find similar content, they don't have to start from scratch but can build on top of something existing. During this cloning they can optionally check that they would like to subscribe to updates, so when the original page gets updated, these pages get automatically updated from the central repository. It also let us introduce features like searching over multiple websites: related websites, if configured, use the portal catalog for the global search, but if a user goes to the advanced search, they can — next to the filters — select whether to search all of the related sites, or some of them. In that case the request is sent behind the scenes to the central repository, and the search results are taken from there. There's other small functionality tied to this, but I'm not going into detail because we have a lot of it. Five minutes. We also introduced media libraries
because we found, at some point, that users can make the site very messy with media, uploading files everywhere they want. So we came up with the idea of a special folder in the site called the images library, to which every image uploaded to the website is moved automatically — usually; if you upload through TinyMCE, for example, it goes straight into the images library. This enables webmasters to manage the images, see where they are used, and delete the ones that are not used, etc.
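The automatic move can be a subscriber on image creation. Here's a hedged sketch of just the path logic — where in the images library an upload should land, mirroring where it was uploaded; the folder name and mirroring rule are assumptions about the Haiku setup:

```python
# Sketch: compute the images-library target folder for an uploaded
# image, mirroring the upload location. "images-library" and the
# mirroring rule are assumptions, not Haiku's actual convention.

LIBRARY = "images-library"

def library_target(upload_path):
    """'/site/news/article/photo.jpg' ->
       '/site/images-library/news/article'."""
    parts = [p for p in upload_path.split("/") if p]
    site, middle = parts[0], parts[1:-1]
    return "/" + "/".join([site, LIBRARY] + middle)

# A real subscriber would then move the object, e.g. (lazy import
# keeps this sketch importable without Plone):
#   from plone import api
#   api.content.move(source=image, target=target_folder)
```

Mirroring the source path keeps the library browsable by site section instead of becoming one flat dump of images.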
Soon we had so much media that we came up with the idea of integrating with an external repository, Cloudinary.com, which provides the scaling functionality and other transforms, and lets us display the images from Cloudinary's CDN, which makes things much faster and more scalable. We still kept the default blob storage, which means the original image is always uploaded to the image field as usual, and behind the scenes it's uploaded to Cloudinary and displayed from there. But if, for whatever reason, somebody turns the Cloudinary integration off, or Cloudinary isn't available, the default Plone scaling system kicks in and works as normal — just minus some advanced features.
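That fallback design can be captured in one small helper: prefer the CDN URL when the integration is on and the asset is known, otherwise fall back to the standard Plone scale URL. Everything here — the URL shapes, the settings flag, the cloud name — is an illustrative assumption:

```python
# Sketch of the "Cloudinary first, Plone scales as fallback" rule.
# The URL formats, settings flag, and cloud name are assumptions.

def image_url(image_id, scale, settings, cloudinary_ids):
    """Return the CDN URL when possible, else the local scale URL."""
    if settings.get("cloudinary_enabled") and image_id in cloudinary_ids:
        public_id = cloudinary_ids[image_id]
        # Named transformation per scale on Cloudinary's side.
        return (
            "https://res.cloudinary.com/demo/image/upload/"
            f"t_{scale}/{public_id}"
        )
    # Default: Plone's own image scaling machinery.
    return f"/{image_id}/@@images/image/{scale}"
```

Because the decision is made per request, flipping the flag off degrades gracefully with no data migration — the blobs never left Plone.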
For news items, where you can upload a lead image, we replaced that with something we called the media widget. You can upload an image, but behind the scenes it's moved to a subsection of the images library based on where the news item is placed. We use it even for listings: we have different images for the lead image and the summary image used in listings, and you can also paste a link to a YouTube video or other services, and it automatically grabs the thumbnail from the service and displays it as the summary — as the listing image. Then, very soon, with a growing number of sites, we had to come up with some advanced logging. As you can imagine, each site — especially the large accounts — runs on its own virtual machine, so we ended up with several clusters of multiple small machines, and on each machine there are usually several ZEO clients, etc., and everything produces a lot of logs. We needed an effective way to analyze those logs, so we started to collect all of them into a centralized Logstash pipeline with an advanced user interface to analyze them. Most importantly, those messages are throttled, so if some very nasty error comes every second, it doesn't flood the channel — and the most critical errors are posted automatically to a dedicated Slack room, so a lot of people can be notified before the customer notices. Okay, and we also had a slightly extended theming system,
which was still based on Diazo themes — file system themes living in the resources directory of the buildout. We needed to transform the default templates, and at that time we decided we were not going to customize all the templates; instead, over the years we built a large library of Diazo transformations to turn the Plone 4 templates into Bootstrap 3, as it was at the time. We let users customize the pages slightly via customization fields on every context: if they want to — at the risk of losing our full support — they can put some of their own JavaScript and CSS into the page and break their site locally or globally. We kept the file system themes, which let the designer mostly work on the themes by himself and generate the fancy CSS for the themes, which gets loaded into the resources directory almost automatically. Currently we also have in progress something we internally call Chromosome Theming, an external one-page application which lets webmasters generate colors for their theme. It automatically generates shades and validates for color blindness and more advanced accessibility concerns, and its output is a set of Bootstrap 5 variables — thankfully now available in Plone 6. So webmasters will then not pick a theme in the Haiku site, but select one of a few layouts and paste in those variables from Chromosome, which change the colors, fonts, etc. Because currently, with the file system themes, we have more than 100 themes they can choose from, and there are always some customizations. Yeah, and quickly about the deployment, how we run this piece: we are using buildout to configure a mostly standard ZEO setup, but we needed to
use ZODB Replication Services, mostly because we needed to deploy for authenticated users first and then for anonymous users, to prevent downtime during deployment — which actually required some patching of the replication services and of some recipes for the buildout. We deploy to AWS with Ansible scripts and automate it using a Jenkins server. The current Haiku manages the sites in mountpoints, but thanks to choosing migration by exporting and importing, we can very easily avoid the mountpoints as part of the migration — they produced some weird errors in the past. Some of the other improvements we are currently working on include wrapping the single sign-on with OIDC and putting the Amazon server and CDN in place.
And that might be it, if you have any questions for some of us.
Thanks. Where are we right now? Are these sites online with this? Where are we with the migration? Ah, sorry. For the migration, the backend is currently done — all the stuff we talked about is ready. We are now finishing testing, because there is a lot of work we also did on TinyMCE, so that is being updated by the designer, and we also had some additional JavaScript and CSS in the themes which needs to be updated to work with the new resource registry. We are doing some final testing before we start rolling out the migration to a lot of small sites, before we start with the bigger ones. So if you go there online now, it's still the old version. Yeah. Any questions?
Kim: I just wanted to say that we're from the University of Leuven in Belgium, and we're doing sort of the same thing — so replace Oxford with Leuven and we'll give the same presentation next year. But we should talk later, because our university setups have a lot in common. It's been the same for all universities. Exactly. No more questions?
Okay, you can talk to the team during the whole conference, so especially those from universities, I'm pretty sure that they have interesting war stories to share, as usual. Only talk to me if it's about migration. No, that's just a joke.
Talk to me about anything. Thanks a lot. Thanks.