“Falcon” - supporting 300 Plone instances with 3 staff
Formal Metadata

Title: "Falcon" - supporting 300 Plone instances with 3 staff
Title of Series: Plone Conference 2014
Number of Parts: 45
Author: Matthew Vernon
License: CC Attribution - NonCommercial 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/48043 (DOI)
Production Place: Bristol, UK
Plone Conference 2014, 44 / 45
Transcript: English (auto-generated)
00:00
I guess I'll start if some people come in late. Well, they probably won't have missed too much. Good afternoon, I'm Matthew Vernon. I'm from the University of Cambridge. I work at Information Services, which is what was until recently called the computing service, but we've been rebranded as happens from time to time. I'm gonna talk about our system called Falcon.
00:22
In case you don't know, this is a Falcon. Helen talked this morning about sort of the user experience of Falcon and what it looks like from the user's point of view. So I'm not really gonna talk about that. So perhaps my picture should have been this instead. This is a radiograph of a Falcon. I'm gonna talk about the internals of Falcon.
00:42
From what I've heard and indeed seen so far, I think Falcon is a relatively unusual Plone setup. We have about 300 sites. They're all essentially the same from the Plone point of view. They're all basically identical build-outs
01:01
running the same versions of everything, provisioned as automatically as possible. And we charge people 100 pounds a year for a Falcon site, which is not a lot, but the sort of quid pro quo for that is that we don't expect to do much in the way of bespoke development. If people ask us for features that aren't in Falcon, either we'll decide to develop that
01:21
and roll that out across the entire suite of our sites, or it doesn't happen at all. So to give you some idea of some numbers, numbers are always good, it sounds like you know what you're talking about. There are three of us that work on Falcon, myself, Helen, who's here, and David, who isn't.
01:41
None of us, I think it's fair to say, would call ourselves Plone specialists. We all do other things. And in theory, at least, it should take about half of our time. So about one and a half full-time equivalents. In practice, things never quite go according to plan, but that's sort of what we're aiming for.
02:01
The service has been running for about four years. I've only been with the computing service for 18 months or so. So if you're gonna ask me questions about why did you design a system like that, I probably don't know, because I wasn't there when it was designed. I've come along relatively recently because we've been increasingly growing in numbers, and that's meant we need some more staff time
02:22
to support the infrastructure. I wrote these slides last week, so they're out of date already. I think we're now up to 275 sites. Still only 209 live, because we're all here in Bristol, rather than making sites live. I've got a couple to do when I get back on Monday, and another 60 or so in development.
02:42
We run those on four pairs of physical servers. All of our servers are paired, so we have what we call the live server that is presenting the sites to the world, and then we have, if you like, a warm spare. So we replicate all of the instance data, the instances and so on, on an hourly basis.
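A minimal sketch of what that hourly job could look like; the directory layout under /srv/falcon and the spare's host name are invented here, since the talk doesn't show the actual script:

```python
#!/usr/bin/env python3
"""Illustrative only: hourly replication from the live server to its spare."""
import subprocess
import sys

INSTANCE_ROOT = "/srv/falcon/instances/"   # hypothetical layout
SPARE = "falcon1-spare.internal"           # invented name for the warm spare

def replicate() -> int:
    # -a preserves ownership/permissions/times; --delete keeps the spare an
    # exact mirror, so a failover presents exactly what the live server had.
    return subprocess.run(
        ["rsync", "-a", "--delete",
         INSTANCE_ROOT, f"{SPARE}:{INSTANCE_ROOT}"]).returncode

if __name__ == "__main__":
    sys.exit(replicate())
```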
03:02
That means we can failover, which we use most often during routine maintenance. If we're gonna patch the underlying operating system, we'll patch the spare first, migrate the service over, and then patch the other server. We don't have any kind of high availability set up, so if there is a problem,
03:20
we have to do that failover by hand. There's no automated way of doing that, and I'll allude to the reasons why that's the case a bit later. We also have one course server. Helen runs a course once a month for people who want to use the service. It gives them a sort of basic introduction to creating content in Falcon, managing users, that sort of thing,
03:41
and we have a server that's dedicated to that. We also use that if people want to do intensive development on a site that's already live: we clone the instance onto the course server, and then they can work on that to their heart's content, and then we can either move that back to be the live service or discard it later.
04:01
We did that because we found if we tried to rename the instance, Plone typically got rather unhappy, so we found it better to take a copy of the entire instance, move it somewhere else, let the user develop that, and then they can pick which they want to go live on. We currently have one pair of virtual servers. Our initial physical hardware
04:21
is reaching the end of its maintenance life, and we're currently planning to move physical machines into virtual machines as they reach the end of their natural lives. One of the things I'd be interested to hear about is if anyone's got any sort of stories or ways in which moving from physical to virtual hardware has worked or indeed hasn't worked,
04:41
if there are things we should be aware of, or if we should abandon all this crazy virtualization stuff and go back to old-fashioned spinning rust. This is an example of the sort of page you get from a Falcon site. This is the Faculty of Music, which I picked almost at random. You've got sort of news and events up here along the top in the way that you'd expect from a Plone site,
05:02
a calendar, RSS feeds, all that sort of thing. What you can't see from here, because it's a static page, is that this is the university's relatively new web house style, and it's responsive, which I'm going to endeavor to demonstrate. This might not work, but we'll see how we go. So you can see as the window narrows,
05:22
you get now what's rather more like the mobile view, and now if I click on here, you get the navigation menu as things you can click on, which is quite good. It's good for site visitors. They can browse it on their mobile device, but the downside of that is that menu that dropped down
05:41
contained essentially the entire navigation tree of the site. So all of our pages on all of our sites have to know about the entire navigation tree, and that does bring with it some drawbacks. This is the Faculty Staff Directory. Everyone knows what this is, I'm sure. Eric, who wrote it, said in the keynote yesterday
06:01
that he wants it to die, which got wide applause, so that's a little bit worrying for us. So if you're not familiar with it, this lets people create entries for members of staff. On my screen I can read all of this, but the resolution's not so great. You can categorize staff
06:21
by other academic staff, contract staff, assistant staff. People can put themselves into departments, collaborations. So we have some extensions to the FSD, which lets people link to other members of staff and say, this is someone I work with, and either people can edit their own profile,
06:40
or they can get their secretary, or usually some overworked postdoc to do it for them. So this is a big part of Falcon, and it's something that our users are very fond of, and it's also something that we worry about from time to time. So the stack, we have hardware. This lot is now probably looking a bit small.
07:02
You're probably thinking, I have more than that on my laptop. When we commissioned the project four years ago, this was quite a lot of disk. There were four or five 250-gigabyte disks. The initial round of servers had five in, one of which was a hot spare. Experience suggested that the hot spare didn't gain us very much, so the second round of hardware just has four drives in.
07:23
They're on Linux software RAID, so MD RAID, which means that if a drive fails, we can replace it, in theory without taking the service down. In practice, we always do a failover to the other machine before we start removing disks. Again, 24 gigabytes of RAM, you'd probably put that on your phone these days,
07:41
but again, when it was set up, that was quite a lot, and it seemed to work for us, and then as we move on to virtual machines, those are on an existing VMware infrastructure. Software-wise, if you've spoken to me already this weekend, you'll know I'm a Debian person, so I wouldn't have chosen SLES,
08:00
and indeed, SLES 11 is beginning to show its age. The advantage at the time was we could set up software RAID and logical volume management from an automated installation, so that's really why we went with SLES rather than anything else. Subversion, we use for version control. All of our add-ons and suchlike are in version control, as you'd expect,
08:21
but our build-out configuration and so on is as well, and in practice, when we build a site, it's like doing a subversion checkout and then build-out. Rsync: a venerable bit of Unix infrastructure. As I said earlier, we have pairs of servers, a live one and a replica, and we use an hourly rsync job
08:41
to keep those two up to date. The rest of this software stack is, I'm sure, familiar. We have Apache with mod_rewrite at the front rather than nginx. We use Varnish, which is, again, standard. We're still on Plone 4.2.6. The plan is to migrate to 4.3
09:01
and then Plone 5 in due course, but migrations cause a certain amount of pain, so we've not yet had the manpower to do that migration. Shibboleth, either you'll never have heard of it or you know exactly what I mean, I suspect. This is federated access management for the web. It's quite common in the higher education sector, and what this means is that,
09:22
while most sites only let Cambridge users log in, it does mean that if our users are collaborating with people from other higher education institutions, they can basically put the email address of someone they want to collaborate with into the site's user setup, and then that person can log in using their own details at home.
09:41
The great advantage of this means we don't have to worry about usernames or passwords. Everyone already has their login credentials and it all just works, mostly. I had an email from Janet today, they're proposing to change how they deal with the SAML metadata in a way that may be incompatible with our Shibboleth setup, but we'll worry about that in due course.
10:01
So that's the stack we built Falcon on top of. As I said earlier, I'm not a Plone specialist. In fact, I didn't know what Plone was before I started this job. My background's general development and system administration, so with that in mind, what do I want to do on Falcon? Nothing.
10:22
We have 300 sites, and if a job takes two or three minutes to do on one site, that's two of my days gone to do it on all of our sites. So I'm quite lazy; all good system administrators are lazy. Sadly, so are all the bad ones. I'm not sure how you tell the difference. Thankfully, neither does my line manager.
10:40
So I want to do as little as possible on Falcon, because anything that takes time doesn't scale. We don't have a lot of time. We don't charge enough to spend a lot of time on it. So maybe more specifically, what don't I want to do? It might seem funny, working in content management for the web,
11:01
but I don't want to use a web browser. If I'm doing something through the web, we then have to worry about how does that end up on the file system? If we need to restore from a backup in the future, will we have lost that change? And perhaps more importantly, doing things through the web is slow and error prone. I can't automate using a web browser, and I make mistakes all the time.
11:21
So I don't want to have to use a web browser. We do from time to time. Sometimes we go into the ZMI and use the undo log to fix users' mistakes. And if someone reports a problem, naturally I have to use a web browser to see what they're talking about. But essentially, the less I have to get out a web browser, the happier I am. Relatedly, I don't want to do anything more than once.
11:40
In the world of Falcon, there are two numbers, one and 300. If I do something, I don't do it once, I do it 300 times. So basically, if I'm doing anything more than once, I should have written a program to do it. Likewise, I don't want to have to think, because if I'm having to go to a Falcon site and think about what I'm doing, and it's a complicated operation, A, it's taken too long,
12:01
B, I'll probably make a mistake. So anything difficult, I shouldn't be doing. I should replace myself with a shell script. And that's the sort of philosophy I've brought to Falcon. I want to try and take myself out of the equation as much as possible, because I'm slow and error prone, and computers are fast and do exactly what you tell them.
12:21
If what you tell them to do is wrong, well, that's another story. So with that in mind, this picture shows both the flow of data from the user's web browser to Plone and back again, and the flow of configuration. Falcon is actually subordinate to another system,
12:43
which is called Jackdaw. We like to name all our systems after birds so nobody has any clue what we're talking about. But essentially that maintains the master list of all the sites in Falcon, including their host name, who their administrators are, and all that kind of stuff. And every night, every one of our Falcon servers
13:02
talks to this other service and gets a list of all the sites. And from that, we construct the entire stack. So this bottom line is probably pretty familiar. I've seen it in a number of talks today. So the web browser talks to Apache. Apache uses mod_rewrite and then mod_proxy to take that request, either for an external host name
13:23
for a live site or a sub-URL of www.falcon for a site in development, rewrites it into an instance name on our internal network, and then mod_proxy sends that off across the internal network. So our site list knows about every single site,
13:41
and we use that, with a bit of Perl, to produce a vhost configuration. And that gives you a very long set of rewrite rules, so that Apache then knows how to turn this request from the browser into, eventually, a Plone instance. We have an internal network within Falcon.
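That Perl step might look something like this sketch (rendered here in Python for illustration; the JSON field names, the Varnish port, and the file paths are all assumptions, not the actual setup):

```python
#!/usr/bin/env python3
"""Illustrative only: build Apache rewrite/proxy rules from the site list."""
import json
import re

SITE_LIST = "/var/lib/falcon/sitelist.json"        # hypothetical path
OUTPUT = "/etc/apache2/conf.d/falcon-vhosts.conf"  # hypothetical path

def vhost_fragment(site: dict) -> str:
    host = site["hostname"]      # public name the browser asked for
    instance = site["instance"]  # name resolvable on the internal network
    plone_id = site["plone_id"]  # id of the Plone site inside the instance
    # Classic Plone virtual hosting: rewrite onto VirtualHostBase so links
    # come back with the public host name, then proxy ([P]) towards the
    # Varnish in front of this instance (port assumed).
    return (
        f"RewriteCond %{{HTTP_HOST}} ^{re.escape(host)}$ [NC]\n"
        f"RewriteRule ^/(.*)$ http://{instance}:6081/VirtualHostBase/"
        f"http/{host}:80/{plone_id}/VirtualHostRoot/$1 [P,L]\n"
    )

with open(SITE_LIST) as f:
    sites = json.load(f)
with open(OUTPUT, "w") as out:
    out.write("\n".join(vhost_fragment(s) for s in sites))
```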
14:02
It's a private bit of network. It's not exposed to the rest of the university. And it doesn't have any DNS; it's just a series of host names in RFC 1918 private address space. And again, what's on that network is determined entirely by the site list, because each site has its own instance.
14:22
And so it gets its own IP address on the internal network. So the /etc/hosts file, which essentially determines the topology of that internal network, is produced every night from the site list.
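That hosts-file generation could be sketched like this; the sequential RFC 1918 allocation shown is invented for illustration, and a real allocator would persist assignments so addresses never move:

```python
#!/usr/bin/env python3
"""Illustrative only: derive the internal /etc/hosts from the site list."""
import ipaddress
import json

SITE_LIST = "/var/lib/falcon/sitelist.json"      # hypothetical path
BASE = ipaddress.ip_address("10.42.0.10")        # invented RFC 1918 range

def hosts_file(sites: list) -> str:
    lines = ["127.0.0.1\tlocalhost"]
    # Sorting keeps the output stable between nightly runs; a real
    # allocator would record assignments rather than recompute them.
    for offset, site in enumerate(sorted(sites, key=lambda s: s["instance"])):
        lines.append(f"{BASE + offset}\t{site['instance']}")
    return "\n".join(lines) + "\n"

with open(SITE_LIST) as f:
    print(hosts_file(json.load(f)), end="")
```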
14:42
So Apache redirects from the browser to Varnish. Varnish is what we use for performance. And these Varnish caches, we have one per server. So each Varnish cache has to know about the 50 or 60 or so instances that are behind it, because they're each a different backend. And that configuration file, again, we know exactly what's got to go into it from the site list, so it's templated. We template that out to our Varnish sites.
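That templating might be sketched as follows, assuming Varnish 3-era VCL (the talk predates Varnish 4, whose syntax differs) and invented names and ports:

```python
#!/usr/bin/env python3
"""Illustrative only: template a Varnish (3.x-style) VCL from the site list."""
import json
import re

BACKEND = """backend {name} {{
    .host = "{instance}";   # resolved via the generated /etc/hosts
    .port = "8080";         # assumed Zope client port
}}
"""

def render(sites: list) -> str:
    out = []
    recv = ["sub vcl_recv {"]
    for s in sites:
        name = "b_" + re.sub(r"[^A-Za-z0-9]", "_", s["instance"])
        out.append(BACKEND.format(name=name, instance=s["instance"]))
        recv.append(f'    if (req.http.host == "{s["hostname"]}")'
                    f' {{ set req.backend = {name}; }}')
    recv.append("}")
    return "\n".join(out + recv)

with open("/var/lib/falcon/sitelist.json") as f:   # hypothetical path
    print(render(json.load(f)))
```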
15:02
And then finally, that ends up on Plone. As I said earlier, each site has its own Plone instance. We found that means if there's a problem with the site, it's only one site that goes down. You know, if they manage to mess up some bit of the catalog or what have you, it doesn't matter on any other site.
15:21
It's just the one site that's affected. And that also means if we're migrating sites between servers, which we do sometimes to try and balance the load, it's one site at a time. So that just makes life easier. Also, what that means, we have a one-to-one mapping between these sites and which Plone instance we should have. And so we can commission and decommission sites on the basis of the site list.
15:41
And again, that happens overnight. If a server that was providing a Plone site is told that site no longer exists, it stops the instance, stops the ZEO server, parcels up the instance directory and shoves it aside. Actually, we could throw them away, but we keep them indefinitely, because disk is relatively cheap. Likewise, for new sites,
16:02
the overnight job runs a subversion checkout, runs buildout in offline mode, starts the ZEO server, starts the instance, and the following morning the site's ready to go.
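Put together, the overnight reconciliation described here amounts to something like this sketch; the paths, repository URL, and bootstrap step are assumptions, not Falcon's actual job:

```python
#!/usr/bin/env python3
"""Illustrative only: nightly commission/decommission from the site list."""
import json
import pathlib
import subprocess

INSTANCES = pathlib.Path("/srv/falcon/instances")  # hypothetical layout
RETIRED = pathlib.Path("/srv/falcon/retired")      # kept forever: disk is cheap
SVN_BASE = "https://svn.example.org/falcon"        # invented repository URL

def reconcile(sites: list) -> None:
    RETIRED.mkdir(parents=True, exist_ok=True)
    wanted = {s["instance"] for s in sites}
    present = {p.name for p in INSTANCES.iterdir() if p.is_dir()}

    for name in sorted(present - wanted):          # decommission
        inst = INSTANCES / name
        subprocess.run([inst / "bin" / "instance", "stop"])
        subprocess.run([inst / "bin" / "zeoserver", "stop"])
        subprocess.run(["tar", "czf", RETIRED / f"{name}.tar.gz",
                        "-C", INSTANCES, name], check=True)

    for name in sorted(wanted - present):          # commission
        inst = INSTANCES / name
        subprocess.run(["svn", "checkout", f"{SVN_BASE}/{name}", inst],
                       check=True)
        # Assumes the checkout carries a buildout bootstrap; buildout then
        # runs in offline mode (-o), as described in the talk.
        subprocess.run(["python", "bootstrap.py"], cwd=inst, check=True)
        subprocess.run(["bin/buildout", "-o"], cwd=inst, check=True)
        subprocess.run([inst / "bin" / "zeoserver", "start"], check=True)
        subprocess.run([inst / "bin" / "instance", "start"], check=True)

with open("/var/lib/falcon/sitelist.json") as f:   # hypothetical path
    reconcile(json.load(f))
```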
16:22
Which is particularly good, because when someone requests a new site, Helen tells user admin; user admin updates the site list; and overnight, the site list causes a new instance to be created, and I have to do absolutely nothing. It's great. Similarly, the list of live sites is maintained in there, and that updates the DNS. So for a site going live, the only thing I have to do is get a new SSL certificate. And sadly, that process really can't be automated. You have to cut and paste the certificate
16:40
signing request into a form. You get an email back from TERENA, which you have to unpack, and that's a bit tiresome. And it's become more complex now, because Janet have moved from SHA-1 certificates to SHA-2, so we have two different certificate authority chains.
17:02
But again, thankfully, that bit we can do automatically, because the script that produces the Apache configuration looks at the certificate, sees who the issuer is, and then knows which authority chain to use. So from that point of view, it all works quite nicely and I don't have to do too much.
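That issuer check can be a thin wrapper around openssl; a sketch, with invented chain-file paths (the issuer names are of the kind involved in the TERENA SHA-1/SHA-2 split):

```python
#!/usr/bin/env python3
"""Illustrative only: choose a CA chain file by certificate issuer."""
import subprocess

CHAINS = {  # issuer CN fragment -> chain file (paths invented)
    "TERENA SSL CA 3": "/etc/ssl/chains/terena-sha2.pem",
    "TERENA SSL CA": "/etc/ssl/chains/terena-sha1.pem",
}

def chain_for(cert_path: str) -> str:
    issuer = subprocess.run(
        ["openssl", "x509", "-noout", "-issuer", "-in", cert_path],
        capture_output=True, text=True, check=True).stdout
    # Longest fragment first, so "TERENA SSL CA" can't shadow "...CA 3".
    for cn in sorted(CHAINS, key=len, reverse=True):
        if cn in issuer:
            return CHAINS[cn]
    raise ValueError(f"unrecognised issuer: {issuer.strip()}")
```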
17:22
We've already had a talk on why Plone will die. I'm not gonna be that vigorous, but we do have some problems, and these are some of the current issues we have with how we use Plone. And I'm sort of hoping you're gonna wave your hands and tell me I'm doing it wrong, because then we can fix things and that'll be great. Plone has too many interfaces. When I started, I read Martin Aspeli's great book, Professional Plone 4 Development. And it describes how you use Dexterity
17:41
and you use Grok, and it's all great. Then I came to look at Falcon, and we're all using Archetypes. And we still have some of these; we have browser views and page templates. And there's three different ways to do everything in Plone. And as well as meaning you're always wrestling with cruft, it means your learning curve is pretty steep.
18:02
I like to kid myself I'm pretty smart and I'm still trying to get the hang of how many different ways there are to do everything in Plone. Particularly when something goes wrong, the debugging process is quite lengthy because there are so many ways it could have gone wrong, trying to work out which it is, is sort of time consuming.
18:21
So that's difficult from my point of view because I had a lot to learn. It also means if we want to get more staff in, there's a long lead time between hiring someone and them actually being able to be of a great deal of use in terms of dealing with problems that arise because there's so much to learn. So I would like Plone to be simpler and maybe in Plone 5 that will be the case.
18:42
Maybe we'll throw away some of the cruft, and the result will be a system that's simpler to understand. Acquisition: acquisition is very clever. Our users like to mess it up. We'll find they'll rename something, often in the middle of the Faculty Staff Directory, and everything will explode in their face. I think often these are down to acquisition, and maybe that's that our add-ons are lazily written,
19:02
but suffice to say acquisition is often not my friend. Performance. If you talk to people about Falcon, the one thing they'll never say is it's blisteringly fast. And we've put a load of metrics on our servers. You know, we have disk IO, memory usage, swap usage, CPU usage, system load.
19:22
And there's not an obvious story there of what I need to do, how I could improve the spec of our servers, what we should be doing with our virtual machines to try and make Falcon run quicker. So this is something I would like to improve and I don't really at the moment feel I've got a good tool to say,
19:41
well, this is the problem with our performance. This is how we could fix it. So maybe you'll tell me what I should do later and that'd be great. We use varnish for performance, but what that does mean is users will edit a page and then email us and say, well, edit this page and it's still showing me the old stuff.
20:00
I think probably we ought to be able to configure Plone to empty the cache more frequently than it does. And it does eventually do so. And I think that's probably an easy fix; we just haven't found it. And that's something that does confuse our users. This next one is the thing that annoys me most. Maybe that's not true. A lot of things annoy me. I'm the ranty one.
20:22
When we migrate a server, we start the ZEO server up, we start the instance, the scripts return zero, and we think, hooray, and go on to the next one. Only then the instance didn't start up, because some egg was missing, and we don't find out until someone emails us to say the site was broken. This is really annoying.
20:43
Particularly because, as I said, I have 300 sites. If you've got one site, or you've got a few customers each with a few sites, you can do this. You can apply a change, start your instance, and go look at the site. I can't do that for 300 sites. I want to be able to automate it, and these scripts lie to me
21:01
and then, so I can't rely on that. I now have some ridiculous thing where I run this, I wait five seconds, and then I run wget on the page, and if I get 200 back, I assume the site started. And this is ridiculous. This is vile hacks. This is not kind of production infrastructure. So I would really like it if there was some option I could pass to these scripts to only return zero, to return success, if the instance has actually started.
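Written out, that wait-and-wget hack is roughly the following; the URL is just whatever the instance serves, and it papers over the underlying problem that the start script's exit status can't be trusted:

```python
#!/usr/bin/env python3
"""The wait-and-wget check described above, as a small polling loop."""
import sys
import time
import urllib.error
import urllib.request

def wait_for_site(url: str, deadline: float = 60.0) -> bool:
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:   # site is up and answering
                    return True
        except (urllib.error.URLError, OSError):
            pass                         # not up yet; keep polling
        time.sleep(5)
    return False                         # assume the start script lied

if __name__ == "__main__":
    sys.exit(0 if wait_for_site(sys.argv[1]) else 1)
```

(As the audience points out just below, supervisord with the httpok event listener from the superlance package handles this more cleanly.)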
21:21
Yeah. Do you use supervisord? Yes. That's what it's there for. Okay. Great, excellent. There are some more answers in one of your questions. Okay, cool. I've got another slide
21:42
and then you can tell me all these answers and I'll write them down and be very happy. But yeah, cool. If there's an answer to this, that would be really great because it's kind of embarrassing. I move sites from one server to another and then they don't work and I didn't notice and then I feel foolish and everyone shouts at me. Upgrade pain. So some of our sites started off in Plone 3
22:01
and have been upgraded through Plone 4.0, 4.1, 4.2. And I always worry that every time we upgrade to a new version of Plone, something won't work and that that failure may not be immediately apparent. And because we have so many sites, again, what we actually do with an upgrade
22:21
is we know the sites that are likely to trip us up, and we'll try those first. But upgrades are not very robust, and it's difficult with such a huge fleet of sites to know how to make that better. So that's something I continue to worry about.
22:42
So, future issues. Plone 5 is coming up sometime soon, hopefully. It's going to be great, I'm sure. Faculty Staff Directory is going to die. Indeed, Eric said so yesterday. He wrote it and he wants it to die, which is fine. Unfortunately, we make considerable use of it and our users love it.
23:03
So I'm really worried about where Faculty Staff Directory is going in Plone 5, and what we can give our users that gives them the same sort of experience as having a faculty staff directory in Plone 5.
23:20
Yes. Likewise, we have some local content types. We have some extensions to Faculty Staff Directory. They're all in Archetypes. They need migrating to Dexterity. We could probably do this in-house, but we don't have a lot of time. What we'd like to do, probably, is pay someone else to do it, possibly this as well. And related to that is, as I've just said,
23:40
upgrades are a bit of a problem. How are we going to migrate from where we are on 4.2.6 to Plone 5? I saw the talk earlier about plone.app.contenttypes, which promises what looks like almost automatic migration from Archetypes to Dexterity. So that's great. That sounds like a big piece of that puzzle, but still, this is something we're worrying about
24:02
and management is worrying about: how much time is it going to take? Where are we going to get that staff resource from? Relatedly, we've found it difficult to get contractors. Falcon is not cool. I mean, our users like it, but it's not a cool new development. It's dealing with that kind of old cruft and tidying it up.
24:21
Lots of people don't seem to like that sort of thing. They want to build something new, not take something that mostly works, does what we want most of the time, and fix it up. Particularly, we would like contractors who come in on site a bit, so we see what they're doing and they can talk to us, because then we learn from what they're doing, and we ask them fewer stupid questions afterwards.
24:40
And it's proven difficult. We have money to spend and no one wants our money. So if you think some of this might be stuff or even some of this might be stuff you want to help us with, and it's more than just telling me which manual page to read, I won't pay you for that, but I would like to know which manual page to read, then come and talk to me or Helen later and maybe we can sort something out.
25:01
And I talked quickly because I always do, but there we go. That's all I was going to say, so I won't waffle anymore. Now you can tell me what I was doing wrong and I shall make notes. Cool, question? Well, more answers. Cool, that's what I like. Yeah, first of all, for the starting: supervisord,
25:22
it's basically one, to get an HTTP OK. Yeah, httpok, use it in combination, it works perfectly, yeah. Supervisord, did you say? Yeah, supervisord. httpok. httpok. Yeah, it's part of the supervisor. Cool. For the rest, performance: I was worried
25:43
that you were using software RAID. What RAID are you using? RAID 10 or five? Five is fine. I can't remember. I think it's RAID 10, I think. Okay, that's better. But also you should really look at what logging
26:01
takes place and what file system you're using. Yeah. Because usually you have, like, quadruple logging. You have Zope logs, then Apache logs, then if you have ext4, it writes to a journal, and they all try to do that at the same time, and they fight over who gets to sync.
26:23
Then they all wait, because they're all polite processes and want the other to go first, and they wait for each other. So the biggest win is sort of: keep one of those logging, and the rest is sort of, I don't care, I don't need double logs. So it's usually a really big improvement.
26:41
Another thing is that you should look into the setup of your Varnish and the invalidation of the cache, because in our case, when a user edits a page, the page gets invalidated and it gets purged. There are some cases, like in folder listings and in other listings, where stuff gets stale a bit.
27:01
Yeah, I don't know whether it's because, so you edit some, because we have these portlets, like the events portlet, that appear all over the place. So I don't know whether on some pages that's- That's a listing, so. Still, single pages should auto-invalidate. If not, then something is stopping Plone
27:20
from talking to Varnish and invalidating. And for your end users, maybe there was another talk where they had a product where you can just, as an editor, invalidate your page in the cache. Okay, that might be useful.
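A hedged sketch of the sort of thing such a product does: send an explicit PURGE for one page to Varnish. This only works if the VCL is set up to accept PURGE from the calling host; the host names and port here are placeholders:

```python
#!/usr/bin/env python3
"""Illustrative only: ask Varnish to drop one cached page."""
import http.client

def purge(varnish_host: str, site_host: str, path: str) -> int:
    conn = http.client.HTTPConnection(varnish_host, 6081, timeout=5)
    # Varnish keys objects on Host + URL, so both must match the cached entry.
    conn.request("PURGE", path, headers={"Host": site_host})
    status = conn.getresponse().status
    conn.close()
    return status

# e.g. purge("localhost", "www.music.example.ac.uk", "/news")
```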
27:45
Yeah, go for it, sorry. On the content quality, yeah. One of the things is, there are people who know all the answers, and they're all on the list, and they're all responding all the time.
28:03
So be patient and spell out exactly what your problem is, but it's an incredible resource. And it's really quiet these days for some reason. In terms of switching to Dexterity, how many custom content types are you using, you know?
28:22
Maybe a dozen, that's the order, not a huge number. And we've got some extensions to our Faculty Staff Directory as well. Because, yeah, with basic custom content types, I'm trying it on a new site, and it's been largely simple and straightforward. Okay. So I wouldn't worry about it too much. Yeah, it depends on the area it happens to be.
28:42
It is something you can try and see anyway, isn't it? Yeah. If you've sort of used schemaextender a lot, you'll be fine, hopefully. Yeah, I mean, obviously I've not looked at these custom content types in detail, because someone else wrote them, and I think they kind of work at the moment.
29:01
I think at some point, with the Dexterity-based Faculty Staff Directory, we'd be rethinking the directory a bit as well. It's not a small feature, but it could be interesting. I think it's a starting place, at least, to go a bit further, if you want to take a look. In terms of making sure that everything is still working,
29:22
you might look at- Yeah, no, I was in that talk earlier. I guess, yeah, I can see that might work. It seems a bit heavyweight for just, has this thing started up?
29:40
Yeah, even starting up isn't necessarily enough to see if there's a problem; so much of what happens only happens once you start loading pages, because that's when you actually start to instantiate objects. A question about these 300 sites you have: how big is each of them? I mean, as I look at it, for me it looks like very micro sites, without-
30:02
it looks like very micro sites, without- So a lot of the sites are really- How much difference is between them? Why you make them, why did I use the sub-sites in Plone like that? Did you consider that we just arrived there, you said there were already 200? Because that happened to me when I also joined my organization,
30:21
I arrived there, there were like 20 Plone sites, and then I just merged them all into one, because it was a nightmare to keep 20 sites upgraded. The benefit of, ah, if I need to shut down one, didn't weigh enough against all the upgrades. When I do an upgrade of Plone, I do one. Yeah, so I can't speak to why we originally decided
30:44
to do it the way we did, because I didn't make that decision. These sites are all run by lots of different people. They vary enormously in size, so we have some little research groups that probably- You provide to each of them? Yeah, and you know, so some things like the color scheme is quite,
31:02
that's sort of a per-site thing. I would say just the design, because it's a corporate design. Yeah, so we have the- And just allow, like, the logo, a few things. Yeah, so you saw the example site here, that's the green color scheme, and then my institution, that's a blue color scheme,
31:22
there are like a dozen color schemes, so. They can all have their own, so that could be an option. I've tried both; it can be annoying, because, yeah, if one has a problem, then they all have a problem, so you need to be quite sure that they don't have a problem. But you can use, Varnish is very good at saint mode
31:43
to do restarts and stuff like that, but yeah, you need to do rigorous testing, but then it does go faster. Yeah, and so some of our sites are quite big, like the Music Faculty's is quite large, History's too, and then we've got some really small sites and some quite big sites, because, yeah, sure.
32:09
We'd have to replumb the SSL setup as well. It's one instance per site.
32:20
One ZEO client and one ZEO server. You could, well, run two clients on the same machine. Yes, potentially. Like I said, that's just how it currently works. I'm not, you know, I'm not saying this is definitely the right way to do it; that's, you know, that's how we do it. We used it also in our university
32:54
for some years, but it was hard to develop a lot further, and we had some drawbacks and stuff,
33:01
so we developed a Dexterity-based version of Faculty Staff Directory; it's called collective personal roster, and it does just about the same as Faculty Staff Directory does, and we have used it for one year now. Right, when were you gonna tell the world? Yeah.
33:21
That's the stuff we're talking about. I waited for a good moment. It's because you gave it another name. I don't, don't start on names here; we had names yesterday. Two issues with the personal roster: it doesn't have the LDAP integration that Faculty Staff Directory does,
33:42
where, when the name of the page is the same as your user ID, you get permissions there; so that's missing. And the other one is translation: it doesn't have any translation mechanism; it's just a copy, if you want to make one. Yeah, no, I don't think anyone uses
34:01
the translation features, though. They're all lazy and use English, you know. Collective personal roster. Yeah, I'm sure Google will find it for me. I'll make sure it's findable. What's the reason for killing the Faculty Staff Directory? I didn't get that. Because it's based on Archetypes,
34:21
it's inherently always annoying, because the right way to do things is usually like one tool for one job, and this has like the kitchen sink, plus the garage, plus the shed, plus the outhouse thrown in, and trying to do it all at once,
34:41
and it's sort of like- In fairness, the user experience of it is pretty good, you know, and everyone loves to use it. So you're doing nice pictures of people, and you're also doing authentication and stuff, and there's lots of magic going on in the background, and Membrane has had a rocky road,
35:03
and Remember has had, well, over the years, some issues with sort of upgrades that went horribly wrong, and it's just, it's big. Difficult to maintain, you know. It's big and fragile and difficult to maintain. Yeah, I mean, it's the thing that people break
35:22
most often, I think, isn't it? We had a conversation about FSD at lunchtime, where there's probably three American universities who would probably be interested in redeveloping it in Dexterity, and taking away some of the horrible bits at the same time. So, hopefully, that will,
35:41
if three of us put in some money and did it, then it could get done fairly quickly. How much of it do you use? FSD? Yeah. We use quite a lot of it. We've got a few extensions to it. There's some bits that we don't use,
36:00
but almost all of it is used. One more thing: we also have migration steps from the Faculty Staff Directory to this new personal roster, so that might be helpful, too. Right, yes, they did mention yours. Please talk to us, and we'll make sure that the code is somehow released to you also.
36:22
Yeah, good, that'd be great. Cool, any more questions? Yes, one at the back. Just a short question. Your site list code, is it just a plain text file or a database? It's a database, but it comes over to us as JSON. The Jackdaw database is all singing, all dancing.
36:40
It does all sorts of things, not just providing Falcon with a site list. And it's a custom in-house thing. It drives the DNS, it runs Falcon, it runs this, it runs that, it runs the other. But from our point of view, we get a JSON list of all the sites, and it has the public IP address, the site name, the administrators.
37:01
Any more? No? Well, thank you for listening. And thank you for answering all my questions.